The Next Digital Decade



THE NEXT DIGITAL DECADE_ CHIN-CHEN CHAUNG


CONTENTS

INTRODUCTION

0 ABOUT
0.1 What is a computer?
0.2 Brief history
0.3 Here is the beginning

1 APPEARANCE
1.1 Size
1.2 Design
1.3 Shape
1.4 Materials

2 FUNCTIONALITY
2.1 Application software
2.2 Communication
2.3 Natural user interface

3 HUMANITY
3.1 Transform
3.2 New language, new values
3.3 Digital narcissism
3.4 Brain change

INTRODUCTION

If you ask me why I chose this topic, I will say that it was not really my decision. In the beginning, I had no idea what the future of the computer might be. Because of this project, I started to research the development of technology, and I was surprised to find that it is not as boring as I had imagined. This book is about the future of the computer. I will talk about the development of the computer from the past until now, and then I will focus on the human side of the computer's future. The audience is people who are interested in computers, and who have the ability to make that future happen.


0 ABOUT


0.1

WHAT IS A COMPUTER?


[Timeline of computing milestones, 1703–1981: binary code (1703); Charles Babbage (1822); Herman Hollerith (1890); Konrad Zuse's Z3 (1941); the Atanasoff–Berry computer (1942); Colossus and Alan Turing (1944); John von Neumann (1945); the first transistor, William Shockley (1947); UNIVAC (1951); the integrated circuit, Jack Kilby and Robert Noyce (1958); Douglas Engelbart (1964); Ted Hoff (1968); the Xerox Alto (1973); the Altair 8800 (1975); Apple Computer founded by Steve Jobs and Steve Wozniak (1976); the Apple II (1977); the first IBM PC, the first laptop and the DOS operating system (1981).]

0.2


Brief History

Webster’s Dictionary defines “computer” as any programmable electronic device that can store, retrieve, and process data. The basic idea of computing develops in the 1200s, when a Muslim cleric proposes solving problems with a series of written procedures. As early as the 1640s, mechanical calculators are manufactured for sale. Records exist of earlier machines, but Blaise Pascal invents the first commercial calculator, a hand-powered adding machine. Although attempts to multiply mechanically were made by Gottfried Leibniz in the 1670s, the first true multiplying calculator appears in Germany shortly before the American Revolution. In 1801 a Frenchman, Joseph-Marie Jacquard, builds a loom that weaves by reading punched holes stored on small sheets of hardwood. These plates are then inserted into the loom, which reads (retrieves) the pattern and creates (processes) the weave. Powered by water, this “machine” came 140 years before the development of the modern computer. Shortly after the first mass-produced calculator (1820), Charles Babbage begins his lifelong quest for a programmable machine. Although Babbage was a poor communicator and record-keeper, his difference

engine is sufficiently developed by 1842 that Ada Lovelace uses it to mechanically translate a short written work. She is generally regarded as the first programmer. Twelve years later George Boole, while professor of mathematics at Cork University, writes An Investigation of the Laws of Thought (1854), and is generally recognized as the father of computer science. The 1890 census is tabulated on punch cards similar to the ones used 90 years earlier to create weaves. Developed by Herman Hollerith of MIT, the system uses electric power (non-mechanical). The Hollerith Tabulating Company is a forerunner of today’s IBM. Just prior to the introduction of Hollerith’s machine, the first printing calculator is introduced. In 1892 William Burroughs, a sickly ex-teller, introduces a commercially successful printing calculator. Although hand-powered, Burroughs quickly introduces an electric model. In 1925, unaware of the work of Charles Babbage, Vannevar Bush of MIT builds a machine he calls the differential analyzer. Using a set of gears and shafts, much like Babbage, the machine can handle simple calculus problems, but accuracy is a problem.


[Timeline continued: the first Macintosh computer (1984); the Windows operating system (1985); Apple vs Microsoft (1988).]


The period from 1935 through 1952 gets murky with claims and counterclaims of who invents what and when. Part of the problem lies in the international situation that makes much of the research secret. Other problems include poor record-keeping, deception and lack of definition. In 1935, Konrad Zuse, a German construction engineer, builds a mechanical calculator to handle the math involved in his profession. Shortly after completion, Zuse starts on a programmable electronic device, which he completes in 1938. John Vincent Atanasoff begins work on a digital computer in 1936 in the basement of the Physics building on the campus of Iowa State. A graduate student, Clifford (John) Berry, assists. The “ABC” is designed to solve linear equations common in physics. It displays some early features of later computers, including electronic calculations. He shows it to others in 1939 and leaves the patent application with attorneys for the school when he leaves for a job in Washington during World War II. Unimpressed, the school never files the application, and the ABC is cannibalized by students. The Enigma, a complex mechanical encoder, is used by the Germans, who believe it to be unbreakable. Several people involved, most notably

Alan Turing, conceive machines to handle the problem, but none are technically feasible. Turing proposes a “Universal Machine” capable of “computing” any algorithm in 1937. That same year George Stibitz creates his Model K(itchen), a conglomeration of otherwise useless and leftover material, to solve complex calculations. He improves the design while working at Bell Labs, and on September 11, 1940, Stibitz uses a teletype machine at Dartmouth College in New Hampshire to transmit a problem to his Complex Number Calculator in New York and receives the results. It is the first example of a network. First in Poland, and later in Great Britain and the United States, the Enigma code is broken. Information gained this way shortens the war. To break the code, the British, led by Turing, build the Colossus Mark I. The existence of this machine is a closely guarded secret of the British Government until 1970. The United States Navy, aided to some extent by the British, builds a machine capable of breaking not only the German code but the Japanese code as well.


“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.” —Albert Einstein

In 1943 development begins in earnest on the Electronic Numerical Integrator And Computer (ENIAC) at the Moore School of the University of Pennsylvania. Designed by John Mauchly and J. Presper Eckert, they get help from John von Neumann and others. In 1944, the Harvard Mark I is introduced. Based on a series of proposals from Howard Aiken in the late 1930s, the Mark I computes complex tables for the U.S. Navy. It uses a paper tape to store instructions, and Aiken hires Grace Hopper (“Amazing Grace”) as one of three programmers working on the machine. Thomas J. Watson Sr. plays a pivotal role involving his company, IBM, in the machine’s development.

Early in 1945, with the Mark I stopped for repairs, Hopper notices a moth in one of the relays, possibly causing the problem. From this day on, Hopper refers to fixing the system as “debugging”. The same year von Neumann proposes the concept of a “stored program” in a paper that is never officially published. Work completes on ENIAC in 1946. Although only three years old, the machine is woefully behind on technology, but the inventors opt to continue while working on a more modern machine, the EDVAC. Programming ENIAC requires it to be rewired; a later version eliminates this problem. To make the machine appear more impressive to reporters during its unveiling, a team member (possibly Eckert) puts translucent spheres (halved ping-pong balls) over the lights. The US patent office will later recognize this as the first computer. The next year scientists employed by Bell Labs complete work on the transistor (John Bardeen, Walter Brattain and William Shockley receive the Nobel Prize in Physics in 1956), and by 1948 teams around the world work on a “stored program” machine. The first, nicknamed “Baby”, is a prototype of a much larger machine under construction in Britain and is shown in June 1948. The impetus over the next 5 years for advances in computers is mostly the government and military. UNIVAC, delivered in 1951 to the Census Bureau, results in a tremendous financial loss to its manufacturer, Remington Rand. The next year Grace Hopper, now an employee of that company, proposes “reusable software,” code segments that could be extracted and assembled according to instructions in a “higher level language.” The concept of compiling is born. Hopper would revise this concept over the next twenty years, and her ideas would become an integral part of all modern computers. CBS uses one of the 46 UNIVAC computers produced to predict the outcome of the 1952

Presidential Election. They do not air the prediction for 3 hours because they do not trust the machine. IBM introduces the 701 the following year. It is the first commercially successful computer. In 1956 FORTRAN is introduced (proposed in 1954, it takes nearly 3 years to develop the compiler). Two additional languages, LISP and COBOL, are added in 1957 and 1958. Other early languages include ALGOL and BASIC. Although never widely used, ALGOL is the basis for many of today’s languages. With the introduction of Control Data’s CDC 1604 in 1958, the first transistor-powered computer, a new age dawns. Brilliant scientist Seymour Cray heads the development team. This year integrated circuits are introduced by two men, Jack Kilby and Robert Noyce, working independently. The second network is developed at MIT. Over the next three years computers begin affecting the day-to-day lives of most Americans. The addition of MICR characters at the bottom of checks is common.




More Recent Advances

In 1961 Fairchild Semiconductor introduces the integrated circuit. Within ten years all computers use these instead of the transistor. Formerly building-sized computers are now room-sized, and are considerably more powerful. The following year the Atlas becomes operational, displaying many of the features that make today’s systems so powerful, including virtual memory, pipelined instruction execution and paging. Designed at the University of Manchester, some of the people who developed Colossus thirty years earlier make contributions. On April 7, 1964, IBM introduces the System/360. While a technical marvel, the main feature of this machine is business oriented: IBM guarantees the “upward compatibility” of the system, reducing the risk that a business would invest in outdated technology. Dartmouth College, where the first network was demonstrated 25 years earlier, moves to the forefront of the “computer age” with the introduction of TSS (Time Share System), a crude (by today’s standards) networking system. It is the first Wide Area Network. In three years Randy Golden, President and Founder of Golden Ink, would begin working on this network. Within a year MIT returns to the top of the intellectual computer community with the introduction of a greatly refined network that features shared

resources and uses the first minicomputer (DEC’s PDP-8) to manage telephone lines. Bell Labs and GE play major roles in its design. In 1969 Bell Labs, unhappy with the direction of the MIT project, leaves and develops its own operating system, UNIX. One of the many precursors to today’s Internet, ARPANet, is quietly launched. Alan Kay, who will later become a designer for Apple, proposes the “personal computer.” Also in 1969, unhappy with Fairchild Semiconductor, a group of technicians begin discussing forming their own company. This company, formed the next year, would be known as Intel. The movie Colossus: The Forbin Project has a supercomputer as the villain. The next year, The Computer Wore Tennis Shoes is the first feature-length movie with the word computer in the title. In 1971, Texas Instruments introduces the first “pocket calculator.” It weighs 2.5 pounds. With the country embroiled in a crisis of confidence known as Watergate, in 1973 a little-publicized judicial decision takes the patent for the computer away from Mauchly and Eckert and awards it to Atanasoff. Xerox introduces the mouse. Proposals are made for the first local area networks.


During the next few years the personal computer explodes on the American scene. Microsoft, Apple and many smaller PC-related companies form (and some die). By 1977 stores begin to sell PCs. Continuing today, companies strive to reduce the size and price of PCs while increasing capacity. Entering the fray, IBM introduces its PC in 1981 (it’s actually IBM’s second attempt, but the first failed miserably). Time selects the computer as its Man of the Year in 1982. Tron, a computer-generated special-effects extravaganza, is released the same year.


In 1975 the first personal computer is marketed in kit form. The Altair features 256 bytes of memory. Bill Gates, with others, writes a BASIC interpreter for the machine. The next year Apple begins to market PCs, also in kit form. It includes a monitor and keyboard. The earliest RISC platforms become stable. In 1976, Queen Elizabeth goes on-line with the first royal email message.


0.3

Here is the beginning

“That’s what’s cool about working with computers. They don’t argue, they remember everything, and they don’t drink all your beer.” —Paul Leary

It is Monday morning, and someone is jogging on the road. His name is Gary Winston, and he lives in the year 2020. Suddenly he answers a call by voice on his watch-phone, and the music playing from the watch stops automatically.

“Boss, I need you to be at home right now! The video meeting starts in 20 minutes.”

“O.K., I got it. See you then. Goodbye!”

When he arrives, the lights and all the electronics switch on automatically. Then he speaks to the television and asks it to connect him to the meeting with his employees. Next, he puts his keychain on the screen to download the meeting documents. After the meeting, he decides to go to the office. He doesn’t need to touch anything; everything is controlled by his voice. The only thing he needs to do is talk to his car...


1 APPEARANCE


1.1

LESS is MORE

The electronics industry underwent a fundamental change with the invention of the transistor, paving the way for the microelectronics revolution. Devices are now constructed in complex arrays using lithographic methods derived from classical photography. However, as the size of the circuit shrinks, the resolution of the features that can be fabricated by this so-called “top-down” methodology reaches a limit - currently about 100 nanometres.

If computers are to become even smaller - and hence faster - the size of the circuit must be reduced towards the “nanoscale” domain. One prospect is the so-called “bottom-up” approach, in which smaller electronic components are constructed from even smaller building blocks, preferably by a process of chemical self-assembly.

Each spacer molecule included a central chemical group, related to the herbicide Paraquat. The group can easily accept one or two electrons, thereby changing its electronic state. This chemistry allowed a group of spacer molecules to behave as controllable molecular wires, connecting the gold nanoparticle to the base contact. The number of electrons per nanoparticle required to achieve each of these effects is probably less than 30. Our report shows that it is possible to control electrical conduction through these structures by the electronic state of the spacer molecules.

These research efforts represent a step towards decreasing the size of electronic components using chemical means. The challenges for further development are twofold. The design and creation of chemical structures that are able to organise themselves will permit the construction of more elaborate nanoscopic devices by self-assembly. There is also a need for the development of methods by which these elements may be connected together and interfaced to the macroscopic world using the engineering nano-fabrication techniques that are becoming available.

The future will probably involve the conjunction of the two approaches to nanotechnology: bottom-up, which has its roots in the synthetic and physical chemistry of self-assembled nanostructures, and top-down, representing the engineering approach to the construction of nanoscopic objects.


1.2

Conceptual DESIGN

The Compenion concept notebook, designed by Felix Schmidberger, eschews the familiar clamshell design in favor of two superbright organic LED panels that slide into place next to each other, making the notebook just three-quarters of an inch thick. The Canova concept notebook from V12 Design features two touch-sensitive displays. It can be oriented as a traditional laptop for typing or writing, laid flat as a sketch pad or turned on its side as an e-reader. The Siafu concept notebook, designed for the blind by Jonathan Lucas, omits a display altogether. Images from applications and Web sites are converted into corresponding 3-D shapes on Siafu’s surface. It can be used for reading a Braille newspaper, feeling the shape of someone’s face or going over a tactile representation of a blueprint, for example.

Laptop design is set to change significantly. These concept notebooks point the way. The Solar Laptop Concept, designed by Nikola Knezevic, has an extra hinged lid covered with solar cells that can be adjusted to get the most out of the sun.


The Cario concept notebook from Anna Lopez can be carried around by its handle, positioned like an easel or placed on a car’s steering wheel. When the car’s not in motion, the Cario can project maps, video conferences and more onto the windshield.


1.3

SHAPE ADVANCE

The future of the laptop and the tablet PC has been rolled up into one. German designer Evgeny Orkin has developed a concept design for The Rolltop. The Rolltop is constructed of a flexible OLED display that wraps around the removable power-supply stand. Tucked into the power stand are a webcam, a speaker sound bar, USB ports, and the power supply and power cord. The screen can either be formed into a 13-inch conventional laptop or rolled flat for a 17-inch tablet with stylus. Another great feature is that you can stand it upright with a fold-out support leg and watch videos as on a flat-screen TV. And with multi-touch technology you don’t need a separate mouse and keyboard, because everything is right on the screen. It might be hard to type on a digital QWERTY keyboard, but it might feel like a large iPhone. I know it’s just a concept, but it would really be cool to see one in person and maybe use one some day. To really appreciate the capabilities and possibilities of The Rolltop, check out the video from Orkin Design.





1.4

MATERIAL

With the environment and sustainability firmly in mind, the Dell Froot concept saves the planet courtesy of two projectors: one for the virtual keyboard, and another for the monitor. Designed by Pauline Carlos as part of a sustainability contest sponsored by Dell, the Froot also uses a colorful case that’s constructed out of a biodegradable starch-based polymer. As it’s a futuristic concept, the lack of a mouse is understandable—we’ll no doubt be using our brains by then.

“Simplicity, carried to the extreme, becomes elegance.” —Jon Franklin

More seriously, pico projectors are *almost* there, but not quite; otherwise I’d be asking why this is still just a concept.


2 FUNCTIONALITY


2.1

APPLICATION SOFTWARE

In computer science, an application is a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as word processing; others, called integrated software, offer somewhat less power but include several applications. User-written software tailors systems to meet the user’s specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft antitrust trial was whether Microsoft’s Internet Explorer

web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an application: see Application Portfolio Management.
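To make the idea of user-written software concrete, here is a minimal sketch in Python of the kind of email filter mentioned above. The folder names, keywords and trusted domain are invented for illustration and are not taken from the text; the point is only that such software is written by the user, for the user, on top of the system and application software that someone else provides.

# A minimal sketch of "user-written software": a personal email filter.
# Folder names, keywords and the trusted domain are illustrative assumptions.

KEYWORDS = {"invoice", "receipt"}  # subject words this particular user cares about

def route_message(subject: str, sender: str) -> str:
    """Decide which folder an incoming message should be filed into."""
    words = set(subject.lower().split())
    if sender.lower().endswith("@example.com"):  # assumed trusted domain
        return "Work"
    if KEYWORDS & words:                         # any keyword appears in the subject
        return "Finance"
    return "Inbox"

print(route_message("Your invoice for March", "billing@shop.example"))  # prints: Finance

Such a script is neither an operating system nor a packaged application; it is a small piece of glue that one user writes and maintains, which is exactly why it is so easily overlooked.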


2.2

COMMUNICATION



CLOUD COMPUTING

Cloud computing is a paradigm shift following the shift from mainframe to client-server computing that preceded it in the early ’80s. Details are abstracted from the users, who no longer have need of, expertise in, or control over the technology infrastructure “in the cloud” that supports them. Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet. It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet. The term cloud is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored on servers.

The majority of cloud computing infrastructure consists of reliable services delivered through data centers and built on servers. Clouds often appear as single points of access for all consumers’ computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs. In general, cloud computing customers do not own the physical infrastructure, instead avoiding capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, whereas others bill on a subscription basis. Sharing “perishable and intangible” computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side-effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits.
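The utility analogy above can be made concrete with a small back-of-the-envelope sketch in Python. Every number in it (hourly rate, average and peak load) is an invented assumption, not a figure from the text; it only illustrates why paying per server-hour used can cost far less than provisioning for peak load around the clock.

# Rough comparison of the pay-per-use (utility) model versus owning peak capacity.
# All figures below are made-up assumptions for illustration.

HOURS_PER_MONTH = 730

def owned_cost(peak_servers, rate_per_server_hour):
    """Fixed provisioning: pay for peak capacity every hour of the month."""
    return peak_servers * rate_per_server_hour * HOURS_PER_MONTH

def utility_cost(hourly_load, rate_per_server_hour):
    """Utility model: pay only for the servers actually running each hour."""
    return sum(hourly_load) * rate_per_server_hour

# Assume an average load of 20 servers, spiking to 100 servers for 10 hours a month.
load = [20] * (HOURS_PER_MONTH - 10) + [100] * 10
rate = 0.10  # assumed cost in dollars per server-hour

print("provision for peak: $%.0f per month" % owned_cost(100, rate))     # $7300
print("pay per use:        $%.0f per month" % utility_cost(load, rate))  # $1540

The gap between the two figures is the idle capacity the owner would otherwise pay for, which is exactly the sharing of “perishable and intangible” computing power described above.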


Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on demand, like a public utility.


The Current State of Twitter

There is no doubt that Twitter has been a runaway success. Add to their rapid growth the recently announced @anywhere platform, and plans for further international expansion, and it comes as no surprise that the company is not looking to sell — at least within the next 2 years. While the site’s growth has certainly been impressive and it has reached the point of non-displacement, there are some interesting hidden truths about Twitter and its users. The following graphic takes a look at Twitter’s path to 10 billion tweets, what we have learned about its users and what they’ve been talking about along the way.




2.3

NATURAL USER INTERFACE

Even four decades ago, Buxton could picture a future enhanced by technology. Eventually he came to dream about humans and computers having close interaction – being able to operate a computer by gesturing at it or by touching it, or having a computer recognize your voice and face. Bill Buxton, a principal researcher for Microsoft, talks about working on new systems that allow people to work more naturally with computers. “I’m excited more now than I have been since I’ve been in the business because I can taste it now,” says Buxton, a principal researcher at Microsoft since 2005. “Stuff I’ve been working towards and thinking about and dreaming about for 20 or 30 years is now at the threshold of general usage.” Touch, face- and voice-recognition, movement sensors – all are part of an emerging field of computing often called natural user interface, or NUI. Interacting with technology in these humanistic ways is no longer limited to high-tech secret agents and Star Trek. Buxton says everyone can enjoy using technology in ways that are more adaptive to the person, location, task, social context and mood.

He sees a bright future in which entire “ecosystems” of devices and appliances co-exist with humans in a way that makes life better for people. Microsoft, with researchers like Buxton, is a leader in developing these new, more natural ways of interacting with computers. The company will showcase some of this technology at the 2010 International Consumer Electronics Show (CES) in Las Vegas this week. Project Natal, which turns game players into human controllers, is among the most high profile examples of the coming shift in technology, Buxton said. Microsoft announced at CES this week that the Xbox gaming device will be available in stores this coming holiday season. Project Natal is the code name for an Xbox 360 add-on that incorporates face, voice, gesture, and object recognition technology to give users a variety of ways to interact with the console, all without needing a controller. It’s a “delightful” new way to spend time with friends and family playing games, watching TV shows and movies, and listening to music, says Robbie Bach, president of Microsoft’s Entertainment and Devices Division.


“For me, when people talk about touch and voice technologies, or anything related to Natural User Interface, it all comes back to what’s most natural for the users,” Bach says. “That’s why you’ll see a variety of user interfaces that are considered natural, because each one is tuned to the environment in which it operates.” The holiday 2010 release of Project Natal will come exactly one decade after the first Xbox console hit the shelves in the holiday season of 2000. “Natal is a next-generation experience that we’re actually delivering this generation,” says Aaron Greenberg, director of product management for Xbox 360. “And they don’t even need to buy a new console and controllers.” The goal of natural interfaces is not to make the keyboard and mouse obsolete, says August de los Reyes, principal director of user experience for Microsoft Surface.

Instead, NUI is meant to remove mental and physical barriers to technology, to make computing feel more intuitive, and to expand the palette of ways users can experience technology. Whether it’s a receptionist and patient at a doctor’s office separated by a large computer monitor, or a family in a living room sitting together in silence, parents immersed in laptops and kids texting away on cell phones, technology is increasingly creating situations de los Reyes calls “connected but alone.” “Technology today isolates people,” de los Reyes says. NUI and Microsoft Surface are “almost anti-technology solutions.” A member of the Advanced Studies Program at Harvard University Graduate School of Design, and a former visiting associate at Oxford, de los Reyes has become a leader in the field of finding new and intuitive ways to interact with computers. He says natural interfaces are just the latest in a long line of evolving human-computer interaction.

In the last few decades, as computers became widely used, early on there was the command line interface (CLI), with its flashing cursor calling on users to type in commands. Then came the graphical user interface (GUI), with its point-and-click mouse and a desktop of icons and windows.


Bach says Project Natal and other NUI-related products will offer more natural ways to interact with video games, computers and other technology.


Both interfaces were revolutionary in their time, and natural interfaces are the next step, de los Reyes said. Microsoft is “absolutely leading” in the category of natural interfaces, de los Reyes says. Microsoft has released, and is continually developing, a number of products that incorporate touch, gestures, speech, and more to make user-computer interaction more natural – more like the way humans interact with each other. It’s a video game that a grandmother can play with her grandson, using intuitive body movements to compete rather than having to learn to use a controller, as with Xbox’s Project Natal. Or an in-car communications and entertainment system such as Microsoft Auto’s Ford SYNC that responds to a driver’s voice commands, playing favorite songs or answering text messages so drivers can keep their eyes on the road, hands on the wheel, and mind on their driving. Kia Motors also has announced this week its new UVO in-car entertainment system, which the car company developed with Microsoft. Or a table that acts as a collaborative, massive multi-touch-screen computer, such as Microsoft Surface, or a voice-enabled Windows phone device, or a Windows 7 laptop that lets users navigate files or the Web using their fingers or a pen tool.





Though the human-computer relationship is becoming more personalized, it’s also becoming more personally contained. The Pixar movie “Wall-E” darkly portrayed one possible version of the future in which technology usurped even the most basic human interactions, with humans moving about in high-tech chairs that meet their every need, including communication. Even if the person they are communicating with is sitting right next to them. Though the movie’s message was a warning about technology surpassing humanism, it’s a future that’s not necessarily out of the question yet, says Buxton. “Without informed design, [technology] is far more likely to go bad than good,” Buxton says. “It’s too important and plays too large a part of our lives to leave these things to chance. The only way it’s going to come out right is if we really work hard on understanding that … it’s about people, it’s not about technology.”

Technology for the sake of technology doesn’t interest Buxton. What interests him is when technology takes a more subordinate position – the supporting actor to humankind’s starring role. Buxton has won a number of awards and honors for his work, which advocates innovation, design, and “the appropriate consideration of human values and culture in the conception, implementation, and use of new technology.” He also frequently teaches and speaks on the subject, and his writings include a book and a regular column for BusinessWeek magazine. “It’s not about interface design, it’s about ‘out of your face’ design,” Buxton says. “How do I get the technology out of my face so I can focus on the stuff that’s of interest to me – the material I’m reading, the film I’m viewing, the person I’m talking to, the problem I’m trying to solve – and doing so in a way that brings unexpected delight.” But creating a more natural relationship between user and technology is not merely a matter of simply removing mice, keyboards, buttons, and knobs, or adding new input methods such as speech, touch, and in-air gestures. “The days are over where a one-size-fits-all interface is appropriate, or even acceptable,” Buxton says.

Some technology, although advanced, is not appropriate or natural in certain situations. For example, text messaging is widely used, but driving or walking while texting is difficult – even dangerous. Speech-recognition technology works well for driving or walking, but works poorly on an airplane where privacy is important, or in a noisy, crowded restaurant. “The trick of elegant design is making sure you do the right thing the right way for the right person at the right time. What’s right here may be wrong there,” Buxton said. Microsoft leaders say no other company is as well situated to create new user interfaces across a range of devices and contexts. “These are all pieces of a larger puzzle that we are methodically trying to solve in this emerging field,” Buxton says. Best of all, Buxton notes, after so many years of dreaming about the true potential of computers, “I’m still around to touch and play with it.”



3 HUMANITY


3.1

TRANSFORM

Imagine a world where your phone is smart enough to order and pay for your morning coffee. No more giving orders, handing over your payment or waiting in lines. No more face-to-face chit-chat or human interaction.

For many, this might seem like a blessing. Who likes to wait in line? But on a grand scale, might this kind of automated world dramatically change — perhaps even eliminate — how we communicate and connect with one another? Could it change something about us as individuals, or as a whole society? “My short answer is yes. It’s absolutely changing society and the way people are,” says Melissa Cefkin, an ethnographer at IBM. “But there’s nothing new in that. We’ve always had the introduction of new technologies that transform and move society in new ways. It changes our interactions, our sense of the world and each other.”

But if primitive hand tools changed us from gatherers to hunters, and the invention of the printing press propagated literacy while downgrading the importance of the oral tradition, what individual and cultural transformations do new computer technologies portend?

Researchers and technologists alike say they’re already seeing technology-wrought changes in how we operate as individuals and as a society. To be clear, they’re not finding evidence of evolutionary transformations -- those show up over thousands of years, not merely decades. But there have been shifts in individual and societal capabilities, habits and values. And just how these all will play out remains to be seen. “We’re in a big social experiment. Where it ends up, I don’t know,” says Dan Siewiorek, a professor of computer science and electrical and computer engineering and director of the Human-Computer Interaction Institute at Carnegie Mellon University. Like other researchers, Siewiorek has identified a number of areas in which individuals and societies have changed in response to technology during the past two decades. One of the most obvious, he says, is the shift in how we view privacy. Having grown up in the McCarthy era, Siewiorek remembers how guarded people were with their phone conversations, fearful that they would be overheard or, worse, recorded.


“Now you can sit in any airport and hear all the grisly details about a divorce or something like that. I don’t know if the convenience overrides privacy or people don’t care about the other people around them, but certainly what we let hang out there has changed,” he says.

Any doubts? Just look at the deeply personal details that people post on YouTube, MySpace and Facebook. At the same time, people have used this willingness to share via technology to forge new definitions of community. “There are certainly different versions of community emerging, and that’s facilitated by innovative uses of technology,” says Jennifer Earl, associate professor of sociology and director of the Center for Information Technology and Society at the University of California, Santa Barbara. A hundred years ago, neighbors would come together for a barn raising, willing to put in hard labor because they might need similar help someday. Today, Earl says, technology -- whether it’s Twitter or e-mails or a viral video appeal -- can spur people across the world to the same type of communal action, even if they have no personal connection to the individuals helped or the tasks involved.

“Today, with technology, we can enable people to act collectively across boundaries. And one of the things that is different today isn’t that we can just act collectively very quickly, but we act across heterogeneous groups,” Earl says.

She points to the collective actions taken to help the victims of Hurricane Katrina as an example. Citizens throughout the U.S. posted their spare items, from clothes to extra rooms, that displaced Louisiana residents could then use in their time of need.

And it doesn’t take an emergency for new and different types of communities to emerge. “Technology changes the whole idea of who we belong with,” says anthropologist Ken Anderson, a senior researcher at Intel Corp. In the past, community members had some sense of a shared history and shared goals and objectives. Today, an online community can have more specific, tailored interests than would likely be found in a physical neighborhood or town, whether it’s a rare disease, a passion for running or an interest in a celebrity.



3.2

New language, new values

Our ability to reach across time and space and build connections via technology with anyone, anywhere and at any time is changing more than our sense of community; it’s changing how we communicate, too. “There is a new language being produced, although it’s not replacing our existing language,” says anthropologist Patricia Sachs Chess, founder and president of Social Solutions Inc., a consulting firm in Tempe, Ariz. Chess and others point to the use of slang and jargon (both pre-existing and newly developed for today’s instant communication tools), phonics, abbreviations and colloquial syntax as the evolving standards for electronic discourse. And this new vernacular is spilling over into traditional writing and oral exchanges. “The first thing that comes to mind is the term bandwidth,” Chess says. “It is a technology term and has become incorporated in the language in ways such as, ‘Do you have enough bandwidth to take on that project?’ There’s also ‘I’ll IM you’ and ‘Just text me.’” While we aren’t seeing those yet in formal writing, she says, they are common in casual writing such as emails and in everyday conversation.

This emerging language could presage even deeper changes in what we value, which skills we possess and, ultimately, what we’re capable of. For example, Gregory S. Smith, vice president and CIO at World Wildlife Fund, a Washington-based nonprofit, says he has seen the quality of writing among younger generations of workers decline in the past decade or so, corresponding with the rise in instant messaging, texting and Twitter. “The advent of tools that allow for these short types of content are flooding the Internet with writing that doesn’t matter, and they’re lowering the grammatical and writing skills of our up-and-coming professionals,” says Smith, who also teaches at Johns Hopkins University.



3.3

Digital narcissism

Others voice deeper concerns about this evolving digital community. Go back to that example of the smartphone ordering and paying for your morning coffee. Yes, it might eliminate waiting in long lines, but ultimately it could also affect our capacity to interact meaningfully with one another. Evan Selinger, an assistant professor in the philosophy department at the Rochester Institute of Technology, explains (ironically enough) via e-mail: “The problems posed by automation are not new, and that scenario would not present any distinctive problems, were it an isolated convenience. However, the scenario does pose a deep moral challenge because it can be understood as part of a growing trend in digital narcissism.”

Digital narcissism, Selinger explains, “is a term that some use to describe the self-indulgent practices that typify all-too-much user behavior on blogs and social networking sites. People often use these mediums as tools to tune out much of the external world, while reinforcing and further rationalizing overblown esteem for their own mundane opinions, tastes and lifestyle choices.”

Others point out that technology isn’t just changing our connections and how we communicate with one another. It’s also affecting our cognitive skills and abilities -- even what it means to be intelligent. Researchers say the constant stimulation and ongoing demands created by technology, as we jump from texting to videoconferences to a phone call to listening to our MP3 players, seem to affect how we organize our thoughts. Christopher R. Barber, senior vice president and CIO at Western Corporate Federal Credit Union in San Dimas, Calif., says he has noticed that some of his workers, notably the younger ones, are skilled at multitasking using various technologies. “And the results show that the work is getting done, and it’s getting done well,” he says. IBM’s Cefkin confirms that we have become better able to multitask as we’ve gotten used to these technologies. “But the question is, can you then fully participate in something? Scientific studies have come out on both sides,” she says.


Some question whether the constant barrage of technology has reduced our ability to engage in deep thinking. Winslow Burleson, an assistant professor of human-computer interaction at Arizona State University, says studies have found that the amount of attention many of us can devote to a single specific task is about three minutes -- 15 at the most.

Burleson doesn’t put the blame on technology alone, noting that there are multiple factors in modern life that could be contributing to this. But, he says, technology is indeed a factor. It has enabled so much multitasking that many people simply lack experience in focusing on one task for an extended period of time. “So some people are concerned about the ability to do thinking at a deep level,” Burleson says.

Siewiorek says he has seen the effect of this lack of deep thinking in how people gather information. The Internet and the ease with which people share information via technology allow them to gather data -- whether accurate or not -- and use it without necessarily understanding its context.

“I don’t see the deep thinking. I see superficial connecting of dots rather than logical thinking,” Siewiorek says. “I see people who are not going to the source anymore. They just forward things. There’s no in-depth research going on. It seems that people have lost history. [They don’t ask] ‘Where did these things come from? Who said it first?’”


3.4

Brain changes

There does seem to be something going on inside the brain these days, says Intel’s Anderson. Researchers are finding differences in the brains of those who grew up wired, with tests showing that the neurons in the brains of younger people fire differently than in those of older generations. “I don’t know what that means; I don’t think anybody knows,” Anderson says. But some question whether we even need the same thinking skills as we had in the past. After all, why should we memorize facts and figures when search engines, databases and increasingly powerful handheld computing devices make them instantly available? Years ago, intelligence was measured in part by memory capacity, says Brad Allenby, a professor of civil, environmental and sustainable engineering at Arizona State University who studies emerging technologies and transhumanism. “Now Google makes memory a network function,” he says. “Does that make memory an obsolete brain function? No, but it changes it. But how it will be changed isn’t clear. We won’t know what will happen until it happens.” Maybe so, but Allenby is already predicting how we’ll measure intelligence in a world flooded with electronic technology.


For society, the computer is just the latest in a string of new technologies requiring adaptation, anthropologists say. In the past, we’ve embraced other technologies, including farming tools and the printing press, and we’ve managed to remain human, even if some of our skills and values have evolved. For example, Cefkin says she used to pride herself on how many phone numbers she could remember. “Now I can’t even remember my own,” she says. “We can all point to things we used to do that we can no longer do because of technology, but we don’t talk about the things we now have. I remember where computer files are today. That’s not a different realm of memory or experience” than memorizing telephone numbers.

Besides, Cefkin adds, society often finds ways to fix the glitches that technologies introduce. Too many phone numbers to memorize? Cell phones with memory, or a smart card containing your important data, can replace a brain full of memorized numbers -- and do much more to boot, she adds. In the end, Cefkin and others point out, it’s still humans who are in control of the technology and will determine whether the “advancements” technology enables will make a positive or negative impact on our world. “Technology is another artifact in human existence that shifts us, but it does not replace thinking, it does not replace figuring out. It does not do that job for people; it can only enhance it,” says Social Solutions’ Chess. “Because human beings can do what technology will never be able to do -- and that’s form judgments.”


“Once we get seriously into [augmented cognition] and virtual reality, the one who has the advantage isn’t the one who is brilliant but the one who can sit in front of the computer screen and respond best,” he says. “The one who will be best is the one best integrated with the technology.”


FUTURE of computer: Artificial Intelligence


