Chapter 23: Wearable Computing
by Steve Mann

Wearable computing is the study or practice of inventing, designing, building, or using miniature body-borne computational and sensory devices.
Wearable computers may be worn under, over, or in clothing, or may also themselves be clothes (i.e. “Smart Clothing” (Mann, 1996a)).
23.1 Bearable Computing

The field of wearable computing, however, extends beyond “Smart Clothing”. The author often uses the term “Body-Borne Computing” or “Bearable Computing” as a substitute for “Wearable Computing” so as to include all manner of technology that is on or in the body, e.g. implantable devices as well as portable devices like smartphones. In fact the word “portable” comes from the French word “porter” which means “to wear”.
23.2 Practical Applications

Applications of body-borne computing include seeing aids for the blind or visually impaired, as well as memory aids to help persons with special needs. The MindMesh, an EEG (ElectroEncephaloGram) based “thinking cap”, for example, allows the user to plug various devices into their brain. A blind person can plug in a camera and use it as an “eye”.

Moreover, body-borne computing in the inclusive sense is for everyone, in the form of such applications as wayfinding and Personal Safety Devices (PSDs). Body-borne computing is already a part of many people’s lives, in the form of a smartphone that helps them find their way if they get lost, or helps protect them from danger (e.g. by emergency notification). The next generation of smartphones will be borne by the body in a way that makes them always attentive (e.g. the camera can always “see” one’s environment), so that if a person gets lost, the device will help the user “remember” where they are. Additionally, it will function like the “black box” flight recorder on an aircraft and, in the event of danger, will be able to automatically notify others of the user’s physiological state as well as what happened in the environment.

Consider, for example, a simple heart monitor that continuously records an ECG (ElectroCardioGram) along with video of the environment. This may help physicians correlate heart arrhythmia, or other irregularities, with possible environmental causes of stress: a physician may be able to see what was happening to the patient at the time a problem was first detected.
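The “black box” idea can be made concrete with a small sketch. The following Python fragment is a minimal illustration, not Mann’s actual system; the beat times, the 25% tolerance, and the 30 fps frame rate are invented assumptions. It flags irregular heartbeats from a list of detected beat times and maps each one to the frame of the continuously recorded environmental video that a physician could review:

```python
# Sketch: correlate ECG anomalies with body-worn camera footage.
# All values and names here are illustrative assumptions.
import numpy as np

def rr_intervals(beat_times):
    """Time between successive heartbeats, in seconds."""
    return np.diff(beat_times)

def anomaly_times(beat_times, tolerance=0.25):
    """Flag beats whose RR interval deviates >25% from the median interval."""
    rr = rr_intervals(beat_times)
    median = np.median(rr)
    flags = np.abs(rr - median) > tolerance * median
    return [beat_times[i + 1] for i in np.flatnonzero(flags)]

def video_frame_index(t, fps=30.0):
    """Map an event time (seconds from start) to a video frame number."""
    return int(round(t * fps))

beats = [0.0, 0.8, 1.6, 2.4, 2.7, 3.5, 4.3]  # e.g. detected R-peaks
for t in anomaly_times(beats):
    print(f"irregular beat at t={t:.1f}s -> review frame {video_frame_index(t)}")
```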
23.3 Wearable computing as a reciprocal relationship between man and machine

An important distinction between wearable computers and portable computers (handheld and laptop computers, for example) is that the goal of wearable computing is to position or contextualize the computer in such a way that the human and computer are inextricably intertwined, so as to achieve Humanistic Intelligence – i.e. intelligence that arises by having the human being in the feedback loop of the computational process, e.g. Mann 1998.

An example of Humanistic Intelligence is the wearable face recognizer (Mann 1996) in which the computer takes the form of electric eyeglasses that “see” everything the wearer sees, and therefore the computer can interact serendipitously. A handheld or laptop computer would not provide the same serendipitous or unexpected interaction, whereas the wearable computer can pop up virtual nametags if it ever “sees” someone its owner knows or ought to know.

In this sense, wearable computing can be defined as an embodiment of, or an attempt to embody, Humanistic Intelligence. This definition also allows for the possibility of some or all of the technology to be implanted inside the body, thus broadening from “wearable computing” to “bearable computing” (i.e. body-borne computing).

One of the main features of Humanistic Intelligence is constancy of interaction: the human and computer are inextricably intertwined, and there is no need to turn the device on prior to engaging it (thus, serendipity). Another feature of Humanistic Intelligence is the ability to multi-task. It is not necessary for a person to stop what they are doing to use a wearable computer, because it is always running in the background, so as to augment or mediate the human’s interactions. Wearable computers can be incorporated by the user to act like a prosthesis, thus forming a true extension of the user’s mind and body.

It is common in the field of Human-Computer Interaction (HCI) to think of the human and computer as separate entities. The term “Human-Computer Interaction” emphasizes this separateness by treating the human and computer as different entities that interact. However, Humanistic Intelligence theory thinks of the wearer and the computer, with its associated input and output facilities, not as separate entities, but regards the computer as a second brain and its sensory modalities as additional senses, in which synthetic synesthesia merges with the wearer’s senses. In this context, wearable computing has been referred to as a “Sixth Sense” (Mann and Niedzviecki 2001, Mann 2001, and Geary 2002). When a wearable computer functions as a successful embodiment of Humanistic Intelligence, the computer uses the human’s mind and body as one of its peripherals, just as the human uses the computer as a peripheral. This reciprocal relationship is at the heart of Humanistic Intelligence (Mann 2001, Mann 1998, and Knight 2000).
23.4 Concrete examples of wearable computing

23.4.1 Example 1: Augmented Reality

Augmented Reality means to superimpose an extra layer on a real-world environment, thereby augmenting it. An “augmented reality” is thus a view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. One example is the Wikitude application for the iPhone, which lets you point your iPhone’s camera at something, which is then “augmented” with information from Wikipedia (strictly speaking this is a mediated reality, because the iPhone actually modifies vision in some ways, even if nothing more than the fact that we are seeing with a camera).
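The essential operation, drawing a computer-generated layer over a live camera view, can be sketched in a few lines. This is a toy illustration, not the Wikitude implementation; it assumes OpenCV (cv2) is installed, a camera is available at index 0, and the label text is invented:

```python
# Minimal augmented-reality compositing sketch (assumes OpenCV).
import cv2

cap = cv2.VideoCapture(0)  # default camera; a body-worn camera in practice
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # "Augment" the live view: draw an informational layer over reality.
    cv2.rectangle(frame, (20, 20), (360, 70), (0, 0, 0), thickness=-1)
    cv2.putText(frame, "Wikitude-style annotation", (30, 55),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```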
Figure 23.1: Augmented Reality prototype. Courtesy of Leonard Low. Copyright: CC-Att-SA-2 (Creative Commons Attribution-ShareAlike 2.0 Unported).
Figure 23.2:  Photograph of the Head-Up Display taken by a pilot on a McDonnell Douglas F/A-18 Hornet. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).
Figure 23.3: The glogger.mobi application: Augmented reality ‘lined up’ with reality. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.4: The Wikitude iPhone application. Courtesy of Mr3641. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
A concrete example of wearable computing used for augmented reality is Mann’s pendant-based camera and projector system, shown below, which Mann completed in 1998:
Figure 23.5:  Neckworn self-gesturing webcam and projector system designed and built by Steve Mann in 1998. Courtesy of Steve Mann. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
Figure 23.6: Closeup of dome pendant showing the laser-based infinite depth-of-focus projector, called an “aremac” (Mann 2001). The laser-based aremac was developed to project onto any 3D surface and does not require any focusing adjustments. Courtesy of Steve Mann. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
Figure 23.7: Early breadboard prototype of the aremac that Mann developed for the neckworn webcam+projector. Courtesy of Steve Mann. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
In Figure 23.5 the wearer is shopping for milk, but this could also have been a more significant purchase like a new car or a house. The wearer’s wife, at a remote location, is looking through the camera by way of a projection screen in her living room in another country. She points a laser pointer at the screen; a vision system in the projector tracks the laser dot and remotely operates the aremac in the wearer’s necklace. Thus he sees whatever she draws or scribbles on her screen, and this scribbling or drawing directly annotates the “reality” he is experiencing. In another application, the wearer can use hand gestures to control the wearable computer. The author referred to this system as “Synthetic Synesthesia of the Sixth Sense”, and it is often called “SixthSense” for short.
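The remote-annotation loop described above hinges on tracking the laser pointer’s dot on the projection screen. A minimal bright-spot tracker can be sketched as follows, assuming OpenCV; the blur size and brightness threshold are illustrative assumptions, not values from Mann’s vision system:

```python
# Sketch: locate a bright laser-pointer dot in a camera frame so its
# position can be relayed to the remote wearer's display.
import cv2

def laser_dot_position(frame_bgr, min_brightness=240):
    """Return (x, y) of the brightest pixel if bright enough, else None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)  # suppress pixel noise
    _, max_val, _, max_loc = cv2.minMaxLoc(gray)
    return max_loc if max_val >= min_brightness else None
```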
This wearable computer system was used as a teaching example at the University of Toronto, where hundreds of students were taught how to build the system, including the vector-graphics laser-based infinite depth-of-field projector, using surplus components obtained at low cost. The system cost approximately $75 for each student to build (not including the computer). The software and circuit board design for this system were distributed to students under an Open Source licence, and the circuit board itself was designed using Open Source computer programs (PCB, KiCad, etc.); see Mann 2001b.
23.4.2 Example 2: Diminished Reality

While the goal of Augmented Reality is to augment reality, an Augmented Reality system often accomplishes quite the opposite: it adds to the confusion of an already confusing existence, extra clutter to an already cluttered world. There seems to be a fine line between Augmented Reality and information overload. Sometimes there are situations where it is appropriate to remove or diminish clutter. For example, the electric eyeglasses (www.eyetap.org) can assist the visually impaired by simplifying rather than complexifying visual input. To do this, visual reality can be re-drawn as a high-contrast cartoon-like world where lines and edges are made more bold, crisp, and clear, and thus visible to a person with limited vision.

Another situation in which diminished reality makes sense is dealing with advertising. Our world is increasingly cluttered with advertising and visual detritus. The electric eyeglasses can filter out unwanted advertising and reclaim that visual space for useful information. Unwanted advertising, seen once, is inserted into a killfile (i.e. a file of particular ads that are to be reclaimed). For example, if the user is a non-smoker, he or she may decide to put certain cigarette ads into the killfile, so that when subsequently seen, they are removed. That space can then be overwritten with useful data. The following videos show examples:
Video 23.1: Diminished Reality concept video by Steve Mann and James Fung from 2008. Implementation was done on an eyetap device. Courtesy of Steve Mann. Copyright: CC-Att-ND (Creative Commons Attribution-NoDerivs 3.0 Unported).
Video 23.2: Diminished Reality video by Jan Herling and Wolfgang Broll, 2010. Copyright © Jan Herling and Wolfgang Broll. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Courtesy of US Senate. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).
Courtesy of US Senate (derivative work). Copyright: pd (Public Domain (information that is common property and contains no original authorship)).
Figure 23.8 A-B: Simplifying rather than complexifying visual input. Such “diminished reality” may help the visually impaired.
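The “bold, crisp edges” simplification illustrated in Figure 23.8 is, at its core, edge detection with fine detail suppressed. Here is a minimal sketch of that idea, assuming OpenCV; the filter and threshold parameters are invented and would need tuning for a real seeing aid:

```python
# Sketch of a "high-contrast cartoon" simplification of a camera frame:
# keep bold edges, flatten everything else. Parameter values are assumptions.
import cv2

def simplify_view(frame_bgr):
    """Render a camera frame as a bold-edged, low-clutter image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)                 # remove fine texture
    edges = cv2.Canny(gray, 50, 150)               # detect strong edges
    edges = cv2.dilate(edges, None, iterations=2)  # make edges bold
    return cv2.bitwise_not(edges)                  # dark lines on white
```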
23.4.3 Example 3: Mediated Reality

Another concrete example is Mediated Reality. Whereas the Augmented Reality system shown above can only add to “reality”, Mediated Reality systems can augment, deliberately diminish, or otherwise enhance or modify visual reality beyond what is possible with Augmented Reality. Thus mediated reality is a proper superset of augmented reality. Mediated Reality refers to a general framework for artificial modification of human perception by way of devices for augmenting, deliberately diminishing, and, more generally, otherwise altering sensory input.

A simple example is electric eyeglasses (www.eyetap.org) in which the eyeglass prescription is downloaded wirelessly and can be updated continuously in a way that is subject-matter specific or task-specific. These electric eyeglasses also allow the wearers to reconfigure their vision into different spectral bands. For example, infrared eyeglasses allow us to see where people have recently stood on the ground (where the ground is still warm) or which cars in a parking lot arrived recently (because the engine is still warm). One can see how well the insulation in a building is performing by observing where heat is leaking out of the building. A roofer can see where a roof membrane may be problematic. Moreover, during roof repair, one can see the molten asphalt and get a good sense of whether or not it is at the right temperature. The electric eyeglasses allow us to see in different spectral bands while actually repairing a roof, thus forming a closed feedback loop, as an example of Humanistic Intelligence. See Figure 23.9 A-B.
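The spectral-band remapping described above amounts to scaling an invisible quantity (temperature) into a visible one (color). A minimal false-color sketch, assuming OpenCV and NumPy, follows; the 0-500 degree range is an illustrative assumption suggested by the molten-asphalt example, not a parameter of the actual EyeTap:

```python
# Sketch: remap a thermal (infrared) intensity image into visible false
# color, as an electric-eyeglass display might. Values are assumptions.
import cv2
import numpy as np

def thermal_to_visible(temps_celsius, t_min=0.0, t_max=500.0):
    """Scale a 2D temperature array to 0-255 and apply a heat colormap."""
    scaled = np.clip((temps_celsius - t_min) / (t_max - t_min), 0.0, 1.0)
    gray = (scaled * 255).astype(np.uint8)
    return cv2.applyColorMap(gray, cv2.COLORMAP_JET)
```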
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.9 A-B: A (at left): Author (looking down at the mop he is holding) wearing a thermal EyeTap wearable computer system for seeing heat. This device modified the author’s visual perception of the world, and also allowed others to communicate with the author by modifying his visual perception. A bucket of 500 degree asphalt is present in the foreground. B (at right): Thermal EyeTap principle of operation: Rays of thermal energy that would otherwise pass through the center of projection of the eye (EYE) are diverted by a specially made 45 degree “hot mirror” (DIVERTER) that reflects heat, into a heat sensor. This effectively locates the heat sensor at the center of projection of the eye (EYETAP POINT). A computer controlled light synthesizer (AREMAC) is controlled by a wearable computer to reconstruct rays of heat as rays of visible light that are each collinear with the corresponding ray of heat. The principal point on the diverter is equidistant to the center of the iris of the eye and the center of projection of the sensor (HEAT SENSOR). (This distance, denoted “d”, is called the eyetap distance.) The light synthesizer (AREMAC) is also used to draw on the wearer’s retina, under computer program control, to facilitate communication with (including annotation by) a remote roofing expert.
23.5 History of Wearable Computing

Depending on how broadly wearable computing is defined, the first wearable computer might have been an abacus hung around the neck on a string for convenience, or worn on the finger. Or it might have been the pocket watches of the early 1500s, or the wristwatches that replaced them, since a timepiece is a computer of sorts (i.e. a device that computes or keeps time). See Figure 23.10 A-B.
Courtesy of Pirkheimer. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
Courtesy of Wallstonekraft. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).
Figure 23.10 A-B: Leftmost, one of the first pocket watches, called “The Nuremberg Egg”, made around 1510. Rightmost, an early digital wristwatch from the 1920s.
More recently, electronic calculators (which could be carried in a pocket or worn on the wrist) emerged, as did electronic timepieces. Other task-specific electronic circuits included a timing device concealed in a shoe to help the wearer cheat at a game of roulette (Bass 1985). A common understanding of the term “computer”, however, is that a computer is something that is programmable by the user while it is being used, or that is of a relatively general-purpose nature (e.g. the user can change programs and run various applications).
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.11 A-B: A timing device designed to be concealed in a shoe for use in roulette, invented by Ed Thorp and Claude Shannon in 1961 but first mentioned in Thorp 1966. Although it used electronic circuits, it could not be programmed by the wearer, and ran only one application: a program that computed time.

The devices described above are predecessors to what is commonly meant by the term “wearable computer”.
Figure 23.12: Here is a “computer” (an abacus) and since it is a piece of jewelry (a ring), it is wearable. Such devices have existed for centuries, but do not successfully embody Humanistic Intelligence. In particular, because the abacus is task-specific, it does not give rise to what we generally mean by “wearable computer”. For example, its functions and purpose (algorithms, applications, etc.) can’t be reconfigured (programmed) by the end user while wearing it. In short, “wearable computer” means more than the sum of its parts, i.e. more than just “wearable” and “computer”. The beaded silver ring abacus pictured, 1.2 centimeters long and 0.7 centimeters wide, dates back to the Chinese Qing Dynasty (1616-1911). Copyright © Xinhua Photo and The People’s Government of Anhui Province. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Thus a task-specific device like an abacus or wristwatch or timer hidden in a shoe is not generally what we think of when we think of “computer”. Indeed, what made the computer revolution so profound was that the computer is a software re-programmable device capable of being used for a wide variety of complex algorithms and applications.

In the 1970s and early 1980s Steve Mann designed and built a number of general-purpose wearable computer systems, including various kinds of sensing, biofeedback, and multimedia computers such as wearable musical instruments, audio-based computers, and seeing aids for the blind. In 1981 Mann designed and built a backpack-based general-purpose multimedia wearable computer system with a head-mounted display visible to one eye. The system provided text, graphics, audio, and video capability, and included a handheld chording keyer (for one-handed input). Because of its generality, this system fit the description of what most people would call a “computer” by today’s standards. The system allowed various computer applications to be run while walking around doing other things. The computer could even be programmed (i.e. new applications could be written) while walking around. Among the applications written for this wearable computer system was an application for photographically mediated reality and “lightvector painting” (“lightvectoring”) used extensively throughout the 1980s. A variety of different systems were designed and built by Mann in the 1980s, and this marked a steady evolution in wearable computing toward something resembling ordinary eyeglasses by the late 1990s (Mann 2001b).
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.13 A-B: The WearComp wearable computer by the late 1970s and early 1980s: a backpack-based system.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.14 A-B: The WearComp wearable computer by the mid 1980s.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright © Webb Chappell. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Figure 23.15 A-B: The WearComp wearable computer anno 1990 (leftmost) and by the mid 1990s (rightmost).
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.16 A-B: The WearComp wearable computer by the late 1990s, resembling ordinary eyeglasses.
By 1994 Mann was streaming live video from his wearable computer to and from the World Wide Web, such that visitors to his web site could see what he was seeing, as well as annotate what he was seeing (i.e. “scribble on his retina”, so to speak). This “Wearable Wireless Webcam” was the first embodiment of live webcasting from a wireless device. Because there were no wireless service providers at this time (much of this technology had not been invented yet), it all had to be built by hand. See Figure 23.17 A-B-C.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.17 A-B-C: Early 1990s wireless communications system invented, designed, and built by Mann. Left: home-made wireless network. Middle: 19-inch relay rack with various equipment, including a microwave link to and from the roof of the tallest building in the city. Right: Steve Mann wearing the computer system (electric eyeglasses) while servicing the antenna on the roof of the tallest building in the city. This, along with a network of other antennas, was set up to obtain wireless connectivity. Mann applied for and obtained a 100 kHz spectral allocation at 445.225 MHz through the New England Spectrum Management Council, for a community of “cyborgs”. In many ways amateur radio (ham radio) was the predecessor of the modern Internet, where radio operators would actively communicate from their homes (base stations), vehicles (mobile units), or bodies (portable units) with other radio operators around the world. Mann was an active ham radio operator, with callsign N1NLF.
Another ham radio operator, Steven K. Roberts, callsign N4RVE, designed and built Winnebiko-II, a recumbent bicycle with on-board computer and chording keyer. Roberts referred to his efforts as “nomadness”, which he defined as “nomadic computing”. For example, he could type while riding the bicycle (Roberts 1988).
Figure 23.18: The Winnebiko II system, which integrated a wide range of computer and communication systems in such a way that they could effectively be used while riding, including a chord keyboard in the handlebars. Copyright © Steven K. Roberts, Nomadic Research Labs. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
In 1989 a “pushbroom” type display using a linear array of 280 red LEDs became available from a company called Reflection Technology. The product was referred to as the Private Eye. Because of the lack of adequate grayscale on the 280 LEDs, and also due to the use of red light (which makes image display difficult to see clearly), the Private Eye was aimed mainly at text display, rather than the multimedia computing typical of the earlier wearable computing efforts. However, despite its limitations, the Private Eye display brought wearable computing to the mainstream, making it easy for hobbyists to put together a wearable computer from commercial off-the-shelf devices. Among these hobbyists were Gerald “Chip” Maguire from Columbia University and Doug Platt, who built a system he called the “Hip-PC” (Bade et al 1990), and later, Thad Starner at MIT. In 1993, Starner built a system based on Platt’s design, using a Private Eye display and a Handykey Twiddler keyer. That same year, Steve Feiner et al at Columbia University created an augmented reality system based on the Private Eye (Feiner et al 1993).

In 1990 Xybernaut Corporation was founded, originally called Computer Products & Services Incorporated (CPSI); the name was changed to Xybernaut in 1996. Xybernaut marketed wearable computing in vertical market segments such as telephone repair technicians, soldiers, and the like. Around this time, another company, ViA Inc., produced a flexible wearable computer that could be worn like a belt, although there were some problems with the “rubber dockey” product that connected it to the outside world.

In 1998 Steve Mann made a working prototype of a wristwatch computer running GNU Linux. The wristwatch included video-conferencing capability and was demonstrated at the ISSCC 2000 conference in February. In July 2000, Mann’s Linux wristwatch was featured on the cover of Linux Journal, Issue 75, along with an article about it. See Figure 23.19.
Figure 23.19: A wristwatch computer with videoconferencing capability, running the videoconferencing application underneath a transparent o’clock, running XF86 under the GNUX (GNU+Linux) operating system. The computer, being general-purpose in nature rather than task-specific (e.g. beyond merely keeping time), made this device fit what we typically mean by “wearable computer” (i.e. something that the wearer can reconfigure, program, etc., while wearing it, as well as something that implements Humanistic Intelligence). The project was completed in 1998. The SECRET function, when selected, conceals the videoconferencing window by turning off the transparency of the o’clock, so that the watch then looks like an ordinary watch (just showing the clock filling the entire 640x480 pixel screen). The OPEN function cancels the SECRET function and opens the videoconferencing session up again. The system streamed live video at 7 fps, 640x480, 24-bit color. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
In 2001 IBM publicly unveiled a prototype for a wristwatch computer running Linux, but it has yet to be commercially released. The vision of wearable computing has yet to be fulfilled commercially, but the proliferation of portable devices such as smart phones suggests an evolution in that direction. Most notably, with the appropriate input and output devices, a smart phone can form a good central processor upon which to realize an embodiment of Humanistic Intelligence.
23.6 Wearable Computing Input/Output Devices

For a wearable computer to achieve a full implementation of Humanistic Intelligence there needs to be a constancy of user interaction, or at least a low threshold for interaction to begin (much of the serendipity is lost if the computer must be taken out of a purse or pocket and started up). Therefore a wearable computer typically has an output device, such as a display that the user can sense, and an input device with which to communicate explicitly with the computer.

Starting with the input device, the first wearable computers used a keying device called a “keyer”. The keyer is inspired by the telegraph key of ham radio (i.e. a Morse code input device), which evolved from the single key, to iambic (or what the author calls “biambic”), then to triambic, and more generally, multiambic. The term “iambic” existed previously to describe two-key Morse code devices (i.e. Morse code comprised of iambs, a concept of rhythm borrowed from poetry, where the meter of verse is comprised of iambs). Mann, upon hearing the word “iambic” in childhood, misunderstood it as meaning “biambic”, and from this mistake generalized the concept to “triambic” (3 buttons), and so on.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.20 A-B: Early (1978) wearable computing keyer prototypes invented, designed, and built by the author. These keyers were built into the hand grip of another device. Leftmost is the author’s PoV (Persistence of Vision) pushbroom text generator: text keyed into the keyer was displayed on a linear array of lights waved through the air like a pushbroom, visible in a dimly lit space either by persistence of human vision or by long-exposure photographs. Rightmost, a “painting with lightvectors” invention that allows various layers to be built up in a multidimensional “lightspace”. Note the keyer combined with the pointing device, which was connected to a multimedia wearable computer.
Figure 23.21: Input device for wearable computing: This EyeTap uses a framepiece to conceal the laser light source, which runs along an image fiber optic element, and the camera which runs along another image fiber optic element, the two fibers running in opposite directions, one along the left earpiece, and the other along the right earpiece. This fully functioning prototype of the EyeTap technology has a physical appearance that approximates that of ordinary eyeglasses. The result is a more sleek and slender design. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.22: Original wearable computer input devices were inspired by the telegraph key. This particular telegraph key is a J38 World War II-era U.S. military model. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).
Morse code was an early form of serial communication, which in modern times is usually automated. In a completely automated teleprinter system, the sender presses keys to send an ASCII data stream to a receiver, and computation alleviates the need for timing to be done by the human operator. In this way, much higher typing speeds are possible. In simple terms, a keyer is like a keyboard but without the board. Instead of keys fixed to a board like one might find on a desktop, the keys are held in the hand, so that a person can press keys in mid air without having to sit at a desk or the like.
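The chord-to-character idea behind a keyer can be captured in a small lookup table: a combination of keys pressed together selects one character. The following sketch is an invented illustration of how a multiambic keyer might decode chords; it is not an actual keyer layout used by Mann:

```python
# Sketch: decoding chords on a multiambic keyer. The mapping is invented.
CHORDS = {
    frozenset(["K1"]): "e",
    frozenset(["K2"]): "t",
    frozenset(["K1", "K2"]): "a",        # two keys at once ("biambic")
    frozenset(["K1", "K2", "K3"]): " ",  # three keys ("triambic")
}

def decode_chord(pressed_keys):
    """Return the character for a released chord, or '?' if unmapped."""
    return CHORDS.get(frozenset(pressed_keys), "?")

# Order of key presses within a chord does not matter.
print(decode_chord(["K2", "K1"]))  # -> "a"
# Note: a 5-key keyer offers 2**5 - 1 = 31 non-empty chords per hand.
```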
An important application of wearable computing is mediated reality, for which the input+output devices are the sensors and effectors which (a) capture the sensory experiences the wearer experiences or would experience; and (b) stimulate those senses. An example is the EyeTap device, which causes the eye itself, in effect, to function as if it were both a camera and a display. See Figure 23.23. An EyeTap with the physical appearance of ordinary eyeglasses was also designed and built by the author, with materials and assistance provided by Rapp Optical; see Figure 23.21.
Figure 23.23: Mann’s ‘GlassEye™’ invention, also known as an EyeTap device, is an input+output device that can connect to a smart phone or other body-borne computer, wirelessly, or by a connection to the AudioVisual ports. A person wearing an EyeTap has the appearance of having a glass eye, or an appearance as if the camera were inside the eye, because of the diverter which diverts eyeward-bound rays of light into the camera system to resynthesize them, typically in laser light. The wearer of the eyetap sees visual reality as re-synthesized from the laser light (computer-controlled laser light source). Pictured here is designer Chris Aimone who collaborated with Mann on this design. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
23.7 Lifeglogging

From 1994 to 1996, Steve Mann conducted a Wearable Wireless Webcam experiment in which he streamed live video from his wearable computer to and from the World Wide Web, on an essentially 24-hour-a-day basis. For the most part, the wearable computer streamed continuously, although the computer itself was not waterproof, so it needed to be set aside during showering or bathing. As a personal data capture, Wearable Wireless Webcam raised some new and interesting issues in the capture and archival of a person’s entire life from their own perspective. It also opened up some new ideas, such as the roving reporter, where day-to-day living can result in serendipitous capture of newsworthy events. See Figure 23.24 and Joi Ito’s chronology of moblogging/lifelogging at the end of this chapter. In one incident, the author was the victim of a hit-and-run; the visual memory of the incident resulted in the arrest and prosecution of the perpetrator.

Wearable Wireless Webcam was at the nexus of art, science, and technology; i.e., it followed a tradition commonly used in contemporary endurance art. For example, it was akin to the living art and endurance art of Linda Montano and Tehching Hsieh, who tied themselves to opposite ends of a rope and remained that way 24 hours a day for a whole year, without touching one another.
Figure 23.24: Screenshot from Steve Mann’s Wearable Wireless Webcam experiment from 1994-1996. Real-time webcast of everyday life resulted in the serendipitous capture of a newsworthy incident. Interestingly, the traditional media like newspapers had no pictures of the incident, so this is the only photographic record of it. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
But it also served a scientific purpose (in-lab experiments are more controlled, but make a trade-off between external validity and internal validity), and an engineering or technical purpose (inventing new technologies, etc.). One interesting by-product of Wearable Wireless Webcam was the concept of lifeglogging, also known as cyborGLOGGING, glogging, lifelogging, lifecasting, or sousveillance.

The word “surveillance” derives from the French “sur”, meaning “from above”, and “veiller”, meaning “to watch”. Surveillance therefore means “watching from above”, or “overwatching”, or “oversight”. While much has been written about surveillance and the relative balance between privacy and security (i.e. some people arguing for more surveillance and others arguing for countersurveillance), this argument is one-dimensional in that it functions like a one degree-of-freedom “slider” to choose more or less surveillance. But sousveillance (“sous” is French for “from below”, so the English word would be “undersight”) has recently emerged as an alternative. Sousveillance refers to the recording of an activity by a participant in the activity, typically by way of small wearable or portable personal technologies (Mann et al 2003, Mann 2004, Dennis 2008, Bakir 2010, Deirdre 2009, Thompson 2011, Brin 2011).
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.25 A-B: Leftmost, a surveillance dome camera atop a lamp post serves as an “eye-in-the-sky” watching down on a parking lot. Rightmost, a surveillance dome worn as a necklace, with a fisheye lens and various physiological sensors. The sensor camera was designed, built, and photographed by Steve Mann in 1998. Mann presented this invention to Microsoft Corporation as the opening keynote at ACM Multimedia’s CARPE in 2004 (http://wearcam.org/carpe/).
Copyright © Microsoft Research, Cambridge. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Copyright © Microsoft Research, Cambridge. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Figure 23.26 A-B: Around 2005, Microsoft built and researched their SenseCam prototype, a version of the neckworn camera. It was commercialized in 2009 (licensed to Vicon) and is now available as a product called the Vicon Revue.
While surveillance and sousveillance both generally refer to visual monitoring (i.e. “veiller” being “to watch”), the terms also denote other forms of monitoring, such as audio surveillance or sousveillance. In the audio sense (e.g. the recording of one’s own phone conversations), sousveillance is referred to as “one party consent”.
23.8 Future directions and underlying themes

23.8.1 Cyborgs, Humanistic Intelligence and the reciprocal relationship between man and machine

The wearable computer can provide many benefits, such as assistive technologies to help people see better, remember better, and function better, e.g. for the elderly to age gracefully, or for those with Alzheimer’s disease to be able to remember and recognize names and faces. One project, the author’s Mindmesh, enables the blind to see, and people with visual memory impairment to remember and recall visual subject matter. The Mindmesh comprises a permanently attached skull cap with a combination of implantable and surface electrodes, as well as a mesh-based computing architecture in which individual processors are each responsible for eight electrodes. The Mindmesh, still in its early development stages, is evolving toward an apparatus that allows the user to plug various sensory devices “into their brain”, in a sense. So a blind person will be able to plug a camera into their brain, and an Alzheimer’s patient will be able to attain a form of autoassociative memory. See Figure 23.27 A-B.
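As a purely speculative illustration of the architecture just described (one processor per eight electrodes, nodes exchanging data with neighbours), a data structure might look like the following; none of these names come from the actual Mindmesh design:

```python
# Speculative sketch of a mesh node: one processor, eight electrodes,
# links to neighbouring nodes. Illustrative only; not the Mindmesh design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MeshNode:
    node_id: int
    electrode_samples: List[float] = field(default_factory=lambda: [0.0] * 8)
    neighbours: List["MeshNode"] = field(default_factory=list)

    def read_electrodes(self, samples: List[float]) -> None:
        """Accept one sample per electrode (eight per node)."""
        assert len(samples) == 8, "each processor handles exactly 8 electrodes"
        self.electrode_samples = list(samples)

    def broadcast(self) -> None:
        """Share this node's latest samples with adjacent nodes."""
        for n in self.neighbours:
            n.receive(self.node_id, self.electrode_samples)

    def receive(self, sender_id: int, samples: List[float]) -> None:
        pass  # e.g. fuse neighbouring data for higher-level processing

# Wire two nodes together and push one reading through the mesh.
a, b = MeshNode(0), MeshNode(1)
a.neighbours.append(b)
a.read_electrodes([0.1] * 8)
a.broadcast()
```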
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.27 A-B: The Mindmesh is a mesh-based computing architecture currently under development, to allow various sensors and related devices to be “plugged into the brain”. Some variations of the Mindmesh can be permanently attached, and are ruggedized to withstand the rigours of life, e.g. running through fountains or jumping into the ocean, etc. The author wishes to thank Olivier Mayrand, InteraXon, and the OCE (Ontario Centres of Excellence) for assistance with this work.
The Visual Memory Prosthetic (VMP) is thus combined with the new computational seeing aid, which can capture a cyborglog of a person’s entire life and, hopefully, in the future be able to index into it. This is part of the author’s “Silicon Brain” project, in which the Mindmesh indexes into an autoassociative memory to assist persons with sensory integration disorder, or the like. As we replace more of our mind with external memory, these memories become part of us, and of our own personhood.

Technologies like the Mindmesh must, essential to their functioning as memory aids, capture, process, and retain data, and so may be interpreted as making recordings. The “Silicon Brain” of the Mindmesh thus asks the question: “is remembering recording?” As more people embrace prosthetic minds, this distinction will disappear. Businesses and other organizations have a legal obligation not to discriminate against persons with special needs, or to treat persons differently depending on such technologies, and will therefore not be able to prevent individuals from seeing and remembering, whether by natural biological or computational means.

But we don’t even have to wait for future widespread adoption of the Mindmesh to observe culture in contention. As mentioned earlier, smartphones are the precursor to full-on wearable computing, and their proliferation has already brought forth this very issue.
23.8.2 Privacy, surveillance, and sousveillance

Surveillance is an established practice, and while controversial, most of its controversies have been worked out and understood. Sousveillance, being a newer practice, remains in many ways yet to be worked out. The proliferation of camera phones has resulted in numerous cases in which police and security guards have been caught in wrongdoing, and there have also been numerous cases where police and security guards have tried to destroy evidence captured by ordinary citizens. In one case, a man named Simon Glik was arrested for recording the actions of police officers on video. However, the United States Court of Appeals ruled in favor of Glik, and after finding no wrongdoing on his part, the court found that the officers had violated Glik’s first and fourth amendment rights.[1]

These controversies will not go away, though, as the line between remembering and recording, and between the eye and the camera, blurs. Many department stores and other establishments have signs up prohibiting photography and prohibiting cameras (see Figure 23.28 A-B-C), but they also have 2-dimensional barcodes designed to be read by patrons using their smartphones (see Figure 23.29 A-B). Thus they simultaneously encourage patrons to take photographs and prohibit patrons from taking photographs.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

[1] See the full details at http://www.citmedialaw.org/sites/citmedialaw.org/files/10-1764P-01A.pdf
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.28 A-B-C: Signs that say “No video or photo taking” and “NO CELL PHONE IN STORE PLEASE!” are commonplace, yet people are relying more and more on cameras and cellphones as seeing aids (hand-held magnifiers to help them read the very signs that prohibit their use, for example), and to access additional information that will help them make a purchase decision.
Courtesy of Steve Mann.
Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.29 A-B: “SCAN ME... Use your smartphone to scan this QR code...” says the box in a store where cellphones and cameras are forbidden.
Figure 23.30: The irony of treating cameras and cellphones as contraband in semi-public places is that this trend has arisen at the same time as the proliferation of CCTV surveillance cameras. In the future, when a security guard demands that a patron remove their electric eyeglasses, the guard may be liable when the patron trips and falls. The authority of the guard does not extend to mandating the eyeglass prescriptions of customers. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
23.8.3 Copyright and ownership: who do memories, information and data belong to?

This potential controversy extends to content. For example, the trend toward licensed software leads to a situation where licenses expire and the computer stops working when the license fee is not paid. When a computer program is helping someone see, a site license becomes a sight license. As our bodies and computing increasingly intersect and intertwine, as in the case of wearable computing, we must ask ourselves if we really want to live in a society where “your pacemaker firmware license is about to expire, please insert your credit card to continue living” (Mann 2003). This theme was the topic of an art installation at the San Francisco Art Institute (SFAI), created in response to their request for an exhibit on wearable computing. See Figure 23.31.
Figure 23.31: Wearable computing exhibit at the San Francisco Art Institute, February 7, 2001. The exhibit comprised a chair with spikes that retract for a certain time period when a credit card is inserted to purchase a seating license. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.32: Poster for the wearable computing exhibit at San Francisco Art Institute 2001. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
By its very nature, wearable computing evokes a visceral response, and will likely fundamentally change the way in which people live and interact. In the future, devices that capture our lifelong memories and share them in real-time will be commonplace and worn continuously, and perhaps even permanently implanted. As an example, the author has invented and filed a patent for an artificial eye for people with vision in only one eye: an implantable eye that provides stereo vision using a cross-eyetap. Various filmmakers have approached the author requesting help embodying this invention. See for example Figure 23.33.
Figure 23.33: Ocular implant artificial eye camera invented by Steve Mann (Canadian Pat. 2313693), and built by Rob Spence and others in collaboration with Mann. The artificial eye has a camera built into it for persons with vision in only one eye; the eye may thus function as a wearable wireless webcam and cyborglogging device, and hopefully soon as a vision replacement. With the computer and camera implanted fully inside the body, some people are able to stream live video without having to wear anything. The apparatus has the appearance of a normal eye, yet provides sousveillance and cyborglogging today, and may provide vision replacement tomorrow. Courtesy of Steve Mann. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
23.9 Where to learn more

Mann, Steve (1997): Wearable Computing: A First Step Toward Personal Imaging. In IEEE Computer, 30 (2), pp. 25-32

Mann, Steve (2001): Intelligent Image Processing. Wiley-IEEE Press

Mann, Steve (1996): Smart Clothing: The Shift to Wearable Computing. In Communications of the ACM, 39 (8), pp. 23-24

Mann, Steve and Niedzviecki, Hal (2001): Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer. Doubleday of Canada

- Marketing Wearable Computers to Consumers
- IEEE Special Issue on Wearable Computing and Humanistic Intelligence
- Editor’s Blog: Sousveillance: Wearable Computing and Citizen “Undersight”
- Bootstrap Knowledge Web
- Barfield, Woodrow and Caudell, Thomas (eds.) (2001): Fundamentals of Wearable Computers and Augmented Reality. CRC Press
23.9.1 Wearable Computing Conferences

The first wearable computing conferences were:

- The International Conference on Wearable Computing (ICWC)
- The International Symposium on Wearable Computing (ISWC)
23.9.2 Historical chronology of moblogging, also known as cyborglogging, lifeglogging, lifelogging, lifecasting, and the like

The following is a “chronology of articles, events and resources, about moblogging”, written by Joi Ito:
- February 1995 - wearcam.org as roving reporter Steve Mann
- January 4, 2001 4:16p - Stuart Woodward first posts from his cellphone on Stuart Woodward’s LiveJournal using J-Phone, Python and Qmail
- January 6, 2001 15:09 - First reported posting from under the sea
- Thursday, March 1, 2001 - First post by SMS to David Davies’ SMSblog and his announcement on the Radio Userland Support List
- January 11, 2002 - Radio Userland released with mail-to-post feature
- January 13, 2002 - A text post from a Docomo P503i to Al’s Radio Weblog
- February 18, 2002 - Hiptop Photo Gallery started by Michael Morrissey (using procmail and a PERL script later improved by Dave Bort, who shared it with people including Mike Popovic, who started Hiptop Nation)
- February 2002 - Justin Hall posts pictures and text to Jiqoo.com using a J-Phone and Brian Hooper’s code. (Link is now dead)
- Fisher, Scott S., “An Authoring Tool Kit for Mixed Reality Experiences”, International Workshop on Entertainment Computing (IWEC2002): Special Session on Mixed Reality Entertainment, May 14-17, 2002, Tokyo, Japan
- Summer 2002 - Kuku Nipernaadide - Estonian moblog started by Peeter Marvet
- 2001 - Howard Rheingold coins the word “Smartmobs”
- November 5, 2002 - Adam Greenfield coins the term “Moblogging”
- October 1, 2002 - T-Mobile Sidekick launch moblog (Danger internal): 149 pictures in 24 hours
- Friday, October 4, 2002 9:15a - Hiptop Nation created by mikepop
- October 31, 2002 - Hiptop Nation Halloween Scavenger Hunt
- November 21, 2002 - The Feature article “From Weblog to Moblog” by Justin Hall
- Tuesday, November 26, 2002 - Stuart Woodward’s first image posted from a cell phone
- November 27, 2002 10:41p - Joi Ito’s Moblog (using attached image mail to MT)
- December 4, 2002 - Milano::Monolog blogs text from mobile phones (Japanese)
- December 12, 2002 - Guardian article “Weblogs get upward mobility”
- December 31, 2002 - New Year’s Eve Moblog Blog-Misoka by JBA
- January 8, 2003 - electricnews.net: “Start-up marries blogs and camera phones”
- January 8, 2003 - The Register: “Start-up marries blogs and camera phones”
- January 9, 2003 - Robert announces PhoneBlogger
23.10 Commentary by Katina Michael and M. G. Michael
Katina Michael

© Katina Michael

Katina Michael is the IEEE Technology and Society Magazine editor-in-chief. She is the author of Innovative Automatic Identification and Location-Based Services: from Bar Codes to Chip Implants (2009) and has hosted six workshops on the Social Implications of National Security. Michael (MIEEE’04, SMIEEE’06) holds a Doctor of Philosophy in Information and Communication Technology (ICT)...
M. G. Michael

© M. G. Michael

M.G. Michael coined the term “überveillance” in 2006 to denote omnipresent electronic surveillance embedded beneath the skin. The term was entered into the official dictionary of Australia, the Macquarie Dictionary, in 2008. Michael received a PhD from the School of Theology at the Australian Catholic University in 2003 in Brisbane, Queensland, and a Master of Arts Honors from the Sch...
23.10.1 About Steve Mann

In Professor Steve Mann - inventor, physicist, engineer, mathematician, scientist, designer, developer, project director, filmmaker, artist, instrumentalist, author, photographer, actor, activist - we see so much of the paradigmatic classical Greek philosopher. I recall asking Steve if technology shaped society or society shaped technology. He replied along the lines that the question was superfluous. Steve instead pointed to praxis, from which all theory, lessons or skills stem, are practiced, embodied and realized. Steve has always been preoccupied with the application of his ideas into form. In this way too, he can be considered a modern-day Leonardo Da Vinci. It is not surprising that Professor Mann was awarded the 2004 Leonardo Award for Excellence (Leonardo, 2004). In his winning article he presented “Existential Technology” as a new category of in(ter)ventions and as a new theoretical framework for understanding privacy and identity (Mann, 2003). At the time, Mann had already written more than 200 research publications, and was the keynote speaker at numerous industry symposia and conferences. His work had also been shown in museums globally, including the Museum of Modern Art in New York, the Stedelijk Museum in Amsterdam and the Triennale di Milano (Quillian, 2004).

I embarked on my PhD in 1997, the same year in which Steve graduated with a PhD in Media Arts and Sciences from MIT under the supervision of Professor Rosalind Picard and MIT Media Lab creator and director Professor Nicholas Negroponte. I remember being amazed by the research Steve was engaged in, particularly his insights into wearable computing, and thinking all at once what incredible uses the technologies he was developing could have, but also what they might mean in terms of the social implications. At that time I was working for Nortel Networks as a network planner, strategically positioning big pipes throughout the world in anticipation of the big data that was coming through IP-based applications. Few, however, could possibly have imagined that people would be willingly creating lifelogs (or cyborg logs) through the act of glogging (PCMag, 2012), another Mann discovery, and uploading them in real-time through wireless technology, every minute of every hour of every day (Mann, 1995). 4G will make glogging even easier. Presently, there are over 165,000 gloggers at http://glogger.mobi.
23.10.2 Corresponding with Professor Mann about Sousveillance in the Educational Context

At the beginning of 2009, my close collaborator Dr MG Michael and I decided to explore the idea of glogging, inspired by correspondence with Steve on his notion of existential education (ExistEd), Figure 23.1, first officially demonstrated in 1998 (Mann, 1998). We asked our class of 163 undergraduate and postgraduate students in a compulsory computer ethics course to take part in some personal field work through the use of glogger.mobi. Examples from this class can be found in Table 23.1. We wanted to see sousveillance acted out before us, and what its limits were, if any, and we wanted to attempt it without having sought prior Human Research Ethics Committee approval.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Figure 23.1 A-B-C: Professor Steve Mann’s Existential Learning = “Learn by being [a photoborg]”
Storyboard Title | Creator | Date Uploaded
Computer Ethics Sock Puppet Theatre | kshuntley | 20 April 2009
Nanny Cams | watguy | 20 April 2009
In the ‘hood: Identity theft from your home | randomisations | 20 April 2009
Photography and privacy | Jiang | 20 April 2009
Cameras and Privacy | ls013 | 23 April 2009
Invasion of Location Privacy | akarin | 23 April 2009
Nanotech Future | Jeesper | 23 April 2009
Mobile phone privacy | ml733 | 19 April 2009
Identity Fraud | Hooka69 | 20 April 2009
RFID Issues | tlc91 | 20 April 2009
DNA | seven3one | 20 April 2009
Health Insurance | cjal073 | 20 April 2009
Bluetooth - The phone is mightier than the handgun | rt902 | 4 May 2009
Australian Government ‘Cleanfeed’ Internet Filtering Scheme | ha766 | 24 April 2009
Life inside a camera | skt999 | 20 April 2009
2 guys one camera | ads32 | 19 April 2009
Table 23.1: Storyboards Created by University of Wollongong Students during the Course IT and Citizen Rights, Session 1, 2009.
It was interesting to observe that in our class of 163 there were:
• a handful of students who claimed their parents would not allow them to upload their own image, or images of others, online;
• about fifteen students who refused to take any photographs themselves using any camera device and instead downloaded images from the public domain (many of them copyrighted);
• a handful of students who did not wish to participate in a public glog without any privacy controls but who would otherwise have considered participation;
• dozens of students who thought it was inappropriate to film others without asking their permission in a public setting, even if those others were in their point of view or fellow participants in events they were engaged in;
• about a dozen students who thought it would be better to use cartoon characters or puppets instead of humans in their storyboards;
• a handful of students who did not wish to disclose their identity and so wore a hood or hat or high collar;
• about a dozen students (mostly internationals) who removed their storyboards the same day assessment results were returned to them; and
• a handful of students who possibly went too far and filmed or photographed sensitive data, such as Automatic Teller Machines (ATMs) at point blank, or other very personal activities they were engaged in.
We cannot claim that in all cases our students were 'learning by being' (a step beyond 'learning by doing'), but some did become true photoborgs, whilst others took on the persona of a photoborg, even if only for a few short weeks. It takes nerve for someone to actually wear a camera, not just carry it, to admit that it is recording when questioned, and to cope with the responses that kind of activity might provoke in a setting like a regional centre in Australia. But as Steve plainly emphasizes, "[w]hat really matters, much more than whether the technology is implanted, worn, carried, or non-existent, is the degree to which the educational paradigm embodies an epistemology of personal choice, and the metaphysics of personal freedom, growth, and development" (Mann, 2006). Furthermore, Mann writes about deconstructionist learning: "As a "cyborg" in the sense of long-term adaptation, body-borne technologies, etc., one encounters a new kind of existential self-determination and mastery over one's own destiny, that can be learned, in the postmodern (posthumanism) context one might think of as the "cyborg age" in which many of us now live."
Increasingly, photoborgs are now everywhere, and as they increase in numbers over the next decade, comfort levels with the photoborg presence will also likely increase as it becomes commonplace. However, there are still laws that are in direct conflict with photoborgology. See, for example, the Surveillance Devices Act in the state of Western Australia (WA, 1998):
SURVEILLANCE DEVICES ACT 1998 - SECT 6
6. Regulation of use, installation and maintenance of optical surveillance devices
(1) Subject to subsections (2) and (3), a person shall not install, use, or maintain, or cause to be installed, used, or maintained, an optical surveillance device (a) to record visually or observe a private activity to which that person is not a party; or (b) to record visually a private activity to which that person is a party.
Penalty: (a) for an individual: $5 000 or imprisonment for 12 months, or both; (b) for a body corporate: $50 000.
As Roger Clarke (2012b) has pointed out, this act "seems to mean that, although you can audio-record your own conversations with other people, you can't video-record them... That has serious implications for sousveillance, i.e. the use of surveillance by the less powerful, when dealing with the more powerful".
It was in preparation for our class that we discussed the ultimate trajectory of wearables and implantables with Steve, finding his work on the existentiality axis (Figure 23.2) to be critical (Mann, 2001; Mann, 1998). It just so happened that at the time we were corresponding, Steve was working on a project with the Eyeborg Man, Rob Spence (Spence, 2008), and we were in full preparation to host the International Symposium on Technology and Society (ISTAS10) at the University of Wollongong, which had as one of its major themes microchip implants for humans (UOW, 2010; Michael, 2011). Researcher Mark Gasson and Mr Amal Graafstra, both bearers of chip implants, spoke on their experience of being radio-frequency identification (RFID) implantees at the symposium.
Figure 23.2: Wearability/Portability versus Existentiality. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
It was during this time that the interface between sousveillance and überveillance began to emerge. On the one hand, you had camera technologies that people wore to conduct surveillance "from below"; on the other, we had proposed in 2006 that embedded systems, such as implantables, would one day do the surveilling "from within". It was in the Eyeborg's 'implantable' camera that sousveillance came face-to-face with überveillance (Michael and Michael, 2009). In Figure 23.3, the various veillances are depicted in a triquetra by Mr Alexander Hayes.
Figure 23.3: The Überveillance Triquetra by Mr Alexander Hayes. Copyright © Alexander Hayes. All Rights Reserved. Reproduced with permission. See section "Exceptions" in the copyright terms.
23.10.3 Sousveillance Outside the Context of Existential Education
Taken away from the context of learning and reflection, a wearable (or implantable) camera worn by any citizen carries significant and deep personal and societal implications. The photoborg may feel entirely free, a master of his or her own destiny; they may even feel safe in knowing that their point of view is being noted for re-use, if needed, at a later time. Indeed, the power the photoborg assumes when they put on the camera or bear the implant can be considered even greater than that of the traditional CCTV camera gazing overhead in an unrestricted manner. But no matter how one looks at it, others will inevitably be in the field of view of the wearer or bearer of the technology, and unless these fellow citizens also become photoborgs themselves, there will always be inequality. Professor Mann's sousveillance carries with it huge political, educational, environmental and spiritual overtones. Due to the proliferation of new media, the narrative which informs sousveillance is more relevant today than ever before. But while sousveillance grants the citizen the ability to combat the powerful using their own strategic game, it also grants other citizens the ability to put on the guise of the powerful. Sousveillance is thus in the eye of the beholder, the one wearing the camera. In the end it comes down to lifeworld, context and stakeholder types. What we all agree on, however, is the pervasiveness of the camera that sees everything and hears everything, yet comes endowed with obvious limitations, such as the potential for the impairment of data through data loss, data manipulation, or misrepresentation. MG and Steve had similar conceptions of where the surveillance capability of the powerful is going, expressed so eloquently during the Singularity Summit in San Francisco where Steve described his Ladder Theory (Mann, 2010). MG Michael likewise referred to the idea of the "axis of access" in 2010, which Steve noted would be more correct if written "axes of access". It was unsurprising to us, in conducting research for this article on Steve's wearable computing history, that we stumbled across the following Wired article written in 2003 by Shachtman:
"The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person's life, index all the information and make it searchable... The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read... All of this -- and more -- would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual's health... This gigantic amalgamation of personal information could then be used to "trace the 'threads' of an individual's life."
It simply goes to show how any discovery can be tailored toward any ends. Glogging was meant to sustain the power of the individual, to enable growth, maturity and development in the person. Here, it has been hijacked by the very same stakeholder it was created to gain protection from. Many would ask whether we are playing into the hands of such initiatives as DARPA's LifeLog program by researching sousveillance and überveillance. The answer is not difficult: the natural trajectory of these emerging technologies would have propelled us there regardless. Arthur (2009, p. 15) speaks of an evolution of technology which is "the process by which all objects of some class are related by ties of common descent from the collection of earlier objects." If it were not Steve Mann, then someone else would at some point in time have discovered the true capabilities of sousveillance -- better for it to have been Mann, who embraces genuine discussion on issues related to privacy, identity, and human rights.
23.10.4 International Workshop in Recognition of Steve Mann's Sousveillance Research
Mid-way through 2011, MG and I decided we would host an international workshop on Sousveillance and Point of View (POV) Technologies in Law Enforcement as our sixth workshop in the Social Implications of National Security series (Michael and Michael, 2012b), which started as an initiative under the Australian Research Council's (ARC) Research Network for a Secure Australia (RNSA). The workshop was held on 22 February 2012, exactly 17 years after Mann uploaded his wearable webcam images of MIT's east campus fire as a "roving reporter." But rather than just focusing on sousveillance, the workshop also emphasised the rise of crowd-sourced sousveillance, citizen rights, body-worn technologies and just-in-time policing. The workshop investigated the use of sousveillance by law enforcement for evidence-based gathering, as well as its use against law enforcement by everyday citizens. The proliferation of overt and covert surveillance technologies that we have witnessed has set the stage for the re-evaluation of existing laws and practices.
Figure 23.4: The International Workshop on Sousveillance and Point of View (POV) Technologies in Law Enforcement was held exactly 17 years after Mann uploaded his wearable webcam images of MIT’s east campus fire as a “roving reporter.” Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
Over 50 delegates attended (Figure 23.5), including former Privacy Commissioners of Australia, prosecutors and barristers of the high court, members of the Queensland and Victorian police, private investigators, spy equipment vendors (Figure 23.6), National Information and Communications Technologies Australia (NICTA) representatives, educational technologists (Figure 23.7), the Commissioner for Law Enforcement Data Security, members of the Australian Privacy Foundation, artists (Figure 23.8) and academics from across the country. A highlight of the workshop was the attendance of the well-known Canadian sociologists Professors Kevin Haggerty and David Lyon, who gave a keynote address and an invited paper respectively (Figure 23.9). Professor Haggerty spoke on the 'Monitoring of Police by Police' and Professor Lyon on the concept of the 'Omniscient Gaze' (Bradwell, 2012).
Figure 23.5: Delegates at the International Workshop on Sousveillance in Australia. Copyright © Alexander Hayes. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Figure 23.6: DUSS Pty Ltd demonstrating the eWitness Wearable Camera. Copyright © Alexander Hayes. All Rights Reserved. Reproduced with permission. See section "Exceptions" in the copyright terms.
Figure 23.7: Mr Alexander Hayes speaking about Professor Steve Mann and sousveillance during his presentation on POV and Education. Copyright © Katina Michael. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
Figure 23.8: Mr Tim Burns, Western Australian artist, speaking at the workshop. Copyright © Alexander Hayes. All Rights Reserved. Reproduced with permission. See section "Exceptions" in the copyright terms.
Figure 23.9: Professor Kevin Haggerty (keynote), Professor David Lyon (invited speaker), and Mr Mark Lyell (plenary speaker) at the International Workshop on Sousveillance. Copyright © Mark Lyell. All Rights Reserved. Reproduced with permission. See section “Exceptions” in the copyright terms.
In addressing the audience with opening remarks on the workshop's conception, I began by defining sousveillance and then went on to demonstrate its use. I could think of no better example of sousveillance at work than to show a short five-minute clip taken by Mann himself in downtown Toronto (Figure 23.10). In this clip you will note that Steve is exercising his civil rights and pointing out to the police officer on duty that there is a risk of someone getting electrocuted because cables are exposed to pedestrians on the sidewalk. The officer on duty rejects being a subject of Mann's visual recording. He stops Steve as he is nearing him and exclaims: "Sir, you cannot take a picture!" To this Steve asks: "Oh. Why not?" Again, the officer exhorts Steve to stop recording. To this Steve replies: "Ok, I photograph my whole life, I always have..." To this the officer says: "I don't want to be a part of your life through a photograph. Can you erase that photo please?" Steve does not have a chance to reply at this point and again the officer interjects, growing in impatience: "Did you take a picture of me?" Steve replies: "I record my life." Again the officer demands: "Did you take a picture of me?" To this Steve makes a correction: "I'm recording video." The officer interjects several times: "It's a simple question, did you take a picture of me? Answer the question, yes or no." Steve admits to taking footage and the officer replies: "Okay, I need you to erase that." Steve provocatively then says: "Okay, I'll need to call my lawyer then..." The officer is disgruntled at this point and tells Steve to call his lawyer and to give him his number. The officer continues by insisting: "Do you understand the ramifications of what is going to happen here? Don't you realise what can happen here?" Steve tells the officer to fill out an incident report about what happened.
Figure 23.10: Code violation and physical assault. Full video at http://wearcam.org/password-66-450.htm. Copyright status: Unknown (pending investigation). See section "Exceptions" in the copyright terms.
A struggle over the camera occurs for a good thirty seconds, with Steve refusing to let go of it... Steve reassures the officer that he is just recording his whole life. To this the officer replies: "I don't care about your entire life." At this point Steve wishes to seek assistance from another officer, but he does not wish to do so by leaving his camera behind. Some thirty seconds later, after trying to reason with Steve, the officer says: "You don't get it do you... you just don't get it... What you have on this thing here is me, do you understand that?... Doing this job, I don't want other people to know what I look like and what I do. Now, I don't know where these pictures are going. They can get on the Internet, they can get somewhere like that and someone can recognise me. And I've got a family, and so then my family is in jeopardy. That's what I'm telling you." At this point of the exchange, Steve points to the film set camera to try to explain why he is recording. The officer again retorts: "I don't care about that camera, I care about this thing pointed at my face." Steve continues to try to convince the officer about the danger of the film set wiring and that he needs the recording for evidence. He suggests going to the police station with the officer, but still this does not appease the officer, who by this stage is bewildered: "You just don't get it do you, you don't, because if you did there wouldn't be a problem." To this Steve replies, "I get it... I've been recording my life for 20 years." Again, the officer is adamant: "I'm not a part of your life." Steve replies, "Well everything that I pass is a part of my life." The struggle continues and Steve is asked if he has been in trouble with the police before and whether or not he is on any medication. There is no resolution. Steve is hurt in the incident. One week prior to this he had been electrocuted by an above-ground cable exposure in a similar context while a crew was filming at a nearby locale.
This is not the first time that Mann has been the subject of investigation. On a return flight from the United States to Canada, Mann was required by security guards to turn his machine on and off and put it through the X-ray machine while they tugged on his wires and electrodes. The New York Times reported: "the guards took him to a private room for a strip-search in which, he said, the electrodes were torn from his skin, causing bleeding, and several pieces of equipment were strewn about the room" (Guernsey, 2002). Mann was quoted as saying: "We have to make sure we don't go into a police state where travel becomes impossible for certain individuals." At the time, Mann described suffering as a result of the sudden detachment of technology he had worn for decades.
This encounter, and others like it, cuts to the core of the implications of sousveillance, but also of its everyday role. When we asked Steve what we should do if participants in our glogger assessment were questioned about why they were recording others without their permission, Steve pointed us to his Request for Deletion (RFD) page (Mann, 2009). This is admittedly only a part of the solution. In the future, off-the-shelf products might exist to blank out the images of people and car number plates in everyday films, just as happens in the realm of Google StreetView, but for now this is a real issue, as seen in the encounter with the Toronto police officer (a sketch of what such automated blanking might look like follows at the end of this passage). No matter how one looks at it, however, the increasing use of in-car video recording cameras by law enforcement, business and civilian vehicles, helmet-mounted cameras used by motorbike riders, cyclists and extreme sportsmen, roads and traffic authority cameras, embedded cameras in apparatus (e.g. cricket stumps, taser guns) and the like, means that through the adoption of sousveillance techniques the average citizen can reclaim at least some of the asymmetry they have lost. Steve's RFD approach acknowledges, however, that an opt-out approach is much more realistic than an opt-in approach. Expecting everyone in my field of view (FOV) to sign a consent form allowing me to film them, because I decide to walk the streets wearing a camera, is just impossible. But deleting an image or film based on an individual request can be satisfied, although it may not always be practical. We are certain that as social media platforms like glogger proliferate, many will ask:
• to be let alone;
• in what context the footage being taken will be used;
• how the footage taken will be validated and stored; and
• to whom the footage will belong.
These questions are particularly pertinent in the insurance industry, as in-car video recording is now widely commercialised (Figure 23.11). Wearables that record, like helmet cams, are becoming plentiful, widely used in the military, extreme sports, the mining sector, and the film industry. And we now even have taser cams, perhaps predated by the stump cam in the game of cricket over a decade ago. Some of these devices, e.g. audio listening spy glasses, are even marketed to minors through school book clubs (Figure 23.12). This raises some interesting questions about how devices used for sousveillance might be misused contrary to the law in a given jurisdiction. Offences may in fact be committed under the current law, but the law is not yet being enforced to curb activities related to sousveillance. On the other hand, point of view technologies more broadly may even be misused in a stalking capacity or other voyeuristic manner. See, for instance, online games marketed to minors that require a webcam to be switched on for play (Figure 23.13). There are also purported borderline cases where a camera is worn by an individual who decides to take footage in a store owned by another person. While it is not a private setting per se, the store owner may not wish for his or her goods and services to be filmed. What are the rights of individuals in public spaces when it comes to private activities? How do we go about a framework for the analysis of any type of surveillance (Clarke, 2012a)?
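Such blanking is technically plausible today. The sketch below is a minimal illustration, assuming OpenCV's stock Haar-cascade face detector and a file name invented for the example; it is not Mann's RFD mechanism nor any particular commercial product, and number plates would need a separate detector.

```python
# Minimal illustration of automated face blanking of the kind such
# off-the-shelf tools might offer (not Mann's RFD process; file names
# are hypothetical). Requires the opencv-python package.
import cv2

def blank_faces(frame):
    """Pixelate every detected frontal face in a BGR image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        small = cv2.resize(roi, (8, 8))  # collapse detail to hide identity
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame

if __name__ == "__main__":
    image = cv2.imread("street_scene.jpg")  # hypothetical input file
    cv2.imwrite("street_scene_blanked.jpg", blank_faces(image))
```

Run frame-by-frame over a video, this is essentially the opt-out-by-default behaviour the text speculates about, with RFD-style requests handled as exceptions.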
Figure 23.11: Autovision Mobile Media Van in New South Wales, specialising in in-car vehicle tracking, navigation and recording solutions. Copyright © Katina Michael. All Rights Reserved. Reproduced with permission. See section "Exceptions" in the copyright terms.
Figure 23.12: Spy glasses plus book marketed to 8+ year-olds. Listen to conversations up to 4 metres away. Copyright status: Unknown (pending investigation). See section "Exceptions" in the copyright terms.
Figure 23.13: "Angelina Ballerina" online dancing game that requires the use of a webcam. Copyright status: Unknown (pending investigation). See section "Exceptions" in the copyright terms.
23.10.5 Sousveillance and Point of View Technologies in Law Enforcement
The reality is that those supplying point of view (POV) equipment, some specialising in spy equipment, have seen a massive uptake in demand. This surge is witnessed by the number of organisations now dealing in this kind of covert and overt new media, and by the franchising of some of these companies. The statistics indicate that many citizens are now taking matters into their own hands, most probably at the expense of existing legislation to do with Surveillance Devices and Listening Devices Acts. In addition, members of the police force are acquiring their own technology for safety-related reasons and for incident/complaint handling, in an attempt to reduce the on-the-job stress they undergo on a daily basis. In Australia, there were accounts some seven years ago of police officers purchasing video camera units and using Velcro to place these cameras in police wagons that had not come equipped with high-tech gadgetry to film roadside incidents. But today we are talking about new camera kits that are used not just for in-car recording but also for body-worn recording by the police. Police today may wear helmet cams, ear cams, chest cams with audio capability, GPS locators, taser cams, etc. But how much evidence gathering is too much?
In the last 12 months we have seen several riots take place -- e.g. the Vancouver riots and the London riots. For the first time, crowdsourced surveillance played second fiddle to crowdsourced sousveillance. The police called for footage to be submitted for use in convicting rioters for crimes committed. So many thousands of minutes were presented to the police -- above and beyond footage they had taken. One could consider a cross-correlation of sorts taking place. It is predicted, however, that with time crowdsourced surveillance will overwhelm the limited resources employed by the police to look at such video evidence -- in many cases potentially thousands of hours' worth. Professor Andrew Goldsmith of the Centre for Transnational Crime Prevention at the University of Wollongong has written about this new visibility with respect to the Tomlinson case. It is the commentators' opinion, as recently recorded in a special guest-edited issue of the Journal of Location Based Services (Michael and Michael, 2011), that the police will be moving away from intelligence-led policing and toward IT-led policing in near real-time, if not real-time. The police will be able to say, if you are currently in a zone of public disturbance or riot, that access to real-time engineering information will be used to denote your location. Additional smartphone modalities will then be harnessed to ascertain whether or not you are a potential perpetrator: for instance, accelerometer information that can denote whether you are going up or down stairs, or jolting around smashing windows. To borrow from Roger Clarke, this is a form of dataveillance "on the move" (Clarke, 2009).
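To make the accelerometer example concrete, here is a toy sketch of windowed activity inference. The window size, thresholds and labels are invented for illustration only and are not drawn from any deployed policing system.

```python
# Toy sketch: windowed activity inference from accelerometer magnitudes,
# illustrating the kind of signal the text says might be harnessed.
# All thresholds and labels are invented for illustration only.
import statistics

def classify_window(magnitudes):
    """Label one window of accelerometer magnitudes (in g) by variance."""
    var = statistics.pvariance(magnitudes)
    if var < 0.01:
        return "stationary"
    elif var < 0.5:
        return "walking / climbing stairs"
    else:
        return "violent jolting"  # e.g. running, striking, smashing

def classify_stream(samples, window=50):
    """Slide a fixed-size window over a sample stream, labelling each."""
    return [classify_window(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, window)]

# Example: a calm segment followed by an agitated one.
calm = [1.0 + 0.01 * (i % 3) for i in range(50)]
agitated = [1.0 + (1.5 if i % 2 else -1.5) for i in range(50)]
print(classify_stream(calm + agitated))  # ['stationary', 'violent jolting']
```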
No doubt this kind of scenario will mean that momentarily people will decide to live off the grid: leave their mobiles behind, or use mobiles on secure and secret platforms like the BlackBerry device. But it is exactly this type of scenario that may herald the age of überveillance: a tiny onboard implant that is injected into the translucent layer of the skin and records everything as it sees it... implants cannot be left behind... implants are always with you... and implants allegedly do not allow for tampering... The iplant, as we have termed it, is that 'shock and awe' instrument we have been waiting for to be commercialised in all its spectre. It will supposedly be the answer to all of our electronic health record problems, our social security and tax file numbers, our real-name Internet identity, and secure mobile payments (Michael and Michael, 2012a).
The pitfall with POV, no matter how many cameras are recording, and no matter from how many perspectives and for how many stakeholders, is that visual evidence has limitations. What is a whole incident? How can we denote past provocation or historical data not available during a given scene? How can we ensure that data on mobile transmission has not been intercepted? How can we ensure data validation? We might well be on another road similar to that of DNA as admissible evidence in a court of law in terms of "eyewitness" recording of events. The key question to ask here is whether or not we can ever achieve "omniscience" through the use of seemingly "omnipresent" new media.
23.10.6 References
Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. New York: Free Press.
Bradwell, J. (2012). Security workshop bring 'sousveillance' under the lens. University of Wollongong: News and Media. Available Online: http://media.uow.edu.au/news/UOW120478.html?referer=http%3A%2F%2Fworks.bepress.com%2Fkmichael%2F256%2F (6 March 2012).
Clarke, R. (2009). Roger Clarke's Dataveillance and Information Privacy Home-Page. Rogerclarke.com. Available Online: http://www.rogerclarke.com/DV/ (15 February 2009).
Clarke, R. (2012a). Point-of-View Surveillance. Rogerclarke.com. Available Online: http://www.rogerclarke.com/DV/PoVS.html (19 March 2012).
Clarke, R. (2012b). Sousveillance banned in WA. Privacy-List, privacy@lists.efa.org.au (18 March 2012).
Guernsey, L. (2002). At Airport Gate, a Cyborg Unplugged. The New York Times. Available Online: http://www.nytimes.com/2002/03/14/technology/at-airport-gate-a-cyborg-unplugged.html (14 March 2002).
Leonardo (2009). The Leonardo Awards Program: Award Recipients. Leonardo. Available Online: http://leonardo.info/isast/awardwinners.html (26 August 2009).
Mann, S. (1995). Wearcam.org as roving reporter; (c) Steve Mann, Feb. 1995. Wearcam.org. Available Online: http://wearcam.org/eastcampusfire.html (17 March 2009).
Mann, S. (1998). Personal Cybernetics and Intelligent Imaging Systems. Wearcam.org. Available Online: http://wearcam.org/ece516/ (18 March 2012).
Mann, S. (1998). Humanistic Intelligence: WearComp as a new framework for Intelligent Signal Processing. Proceedings of the IEEE, 86 (11), 2123-2151. Available Online: http://hi.eecg.toronto.edu/hi/index.html (8 January 1998).
Mann, S. (2001). Can Humans Being Clerks make Clerks be Human? Exploring the Fundamental Difference between UbiComp and WearComp. Informationstechnik und Technische Informatik, 43 (2), 97-106. Available Online: http://wearcam.org/itti/itti.htm (2012).
Mann, S. (2003). Existential Technology. Leonardo, 36 (1), 19-25. Available Online: http://leonardo.info/isast/articles/Leo36-1_mann.pdf (2003).
Mann, S. (2006). Learning By Being: Thirty Years of Cyborg Existemology. The International Handbook of Virtual Learning Environments, 1571-1592. Available Online: http://wearcam.org/existed/existed.pdf (18 March 2012).
Mann, S. (2009). Form 698 - Request for Deletion (RFD). Wearcam.org. Available Online: http://wearcam.org/rfd.htm (19 March 2012).
Mann, S. (2010). Steve Mann On Cyborg Living and Virtual Reality. Singularity Summit: San Francisco. Available Online: http://www.youtube.com/watch?v=jY2m5Ru1Fm4 (31 August 2010).
Michael, K. and Michael, M.G. (2009). Innovative Automatic Identification and Location-Based Services: from bar codes to chip implants. New York: Information Science Reference.
Michael, K. (2011). Event Report: The IEEE Symposium on Technology and Society 2010 (7-10 June): The Social Implications of Emerging Technologies. Journal of Cases on Information Technology, 13 (2), 80-87. Available Online: http://www.igi-global.com/Files/Ancillary/10.4018_jcit.2011040106.pdf (2011).
Michael, K. and Michael, M.G. (2011). The social and behavioural implications of location-based services. Journal of Location Based Services, 5 (3-4), 121-137. Available Online: http://www.tandfonline.com/doi/pdf/10.1080/17489725.2011.642820 (18 March 2012).
Michael, K. and Michael, M.G. (2012a). Implementing Namebers Using Microchip Implants: The Black Box Beneath The Skin. In J. Pitt (Ed.), This Pervasive Day: The Potential and Perils of Pervasive Computing. London, United Kingdom: Imperial College Press, 101-142.
Michael, K. and Michael, M.G. (2012b). Program: Sousveillance and the Social Implications of Point of View Technologies in the Law Enforcement Sector. Sixth Workshop on the Social Implications of National Security. Sydney, NSW, Australia. Available Online: http://works.bepress.com/cgi/viewcontent.cgi?article=1290&context=kmichael (22 May 2012).
PCMag (2009). Definition of Glogger. PCMag.com. Available Online: http://www.pcmag.com/encyclopedia_term/0,2542,t=glogger&i=59866,00.asp (2012).
Quillian, K. (2004). 2004 Leonardo Award for Excellence Given to Steve Mann. Rhizome. Available Online: http://rhizome.org/discuss/view/15557/ (7 December 2004).
Shachtman, N. (2003). A Spy Machine of DARPA's Dreams. Wired: Tech Biz | Media. Available Online: http://www.wired.com/techbiz/media/news/2003/05/58909?currentPage=all (20 May 2003).
Spence, R. (2008). Tiny Camera oh my! Eyeborg. Available Online: http://eyeborg.blogspot.com.au/2008/11/tiny-camera-oh-my.html (25 November 2008).
UOW (2010). ISTAS 2010. University of Wollongong. Available Online: http://www.uow.edu.au/conferences/2010/ISTAS/index.html (20 January 2011).
WA (1998). Surveillance Devices Act 1998 - Sect 6. Available Online: http://www.austlii.edu.au/au/legis/wa/consol_act/sda1998210/s6.html (18 March 2012).
23.11 Commentary by Douglas L. Baldwin
Douglas L. Baldwin
In December 2012, I created a blog that I call "Bugs, Blindness and the Pursuit of Happiness." The blog was set up so that I could interview presenters at the IEEE conference, as well as begin a dialogue concerning wearable computing devices for children in special education.
Steve Mann provided the vision for a new kind of digital professional: a practitioner capable of prescribing alternative perception. Like any doctor, these specialists would use diagnostic tools to arrive at a diagnosis, from which tailored prescriptions would arise.
After working for over thirty years in special education, as the founder of a vision clinic for children with special needs, and as the founder of a non-profit institute that brought sophisticated navigational technologies to blind and visually impaired kids, it is clear to me that digital perception is a revolution waiting to happen in rehabilitation and special education. In 2010, I was approached by the X-Prize Foundation to contribute to a proposal concerning portable devices that would benefit the blind, incorporating high technologies into iPads, cell phones and handheld systems. I knew that these handheld tools were already on the market, or soon to be. I also knew that they would fall short of the vision that Steve Mann laid down years ago. There would be no concept of humanistic intelligence, no Eyetap putting sensory input where the brain was expecting to receive it (on the face at eye level), and no attention would be focused toward special education, where tailored solutions were the only answer to the needs of unique children.
The list of potential remediations available to the digital perception specialist is extraordinary. I will list five that were outlined for the X-Prize Foundation report. Digital perception specialists will work with the whole body, but my focus is on the potential of what Steve calls Electric Glasses: Eyetap technology that can place computer-altered images on the retina in real time.
1. The brain can be directly impacted by electromagnetic variables that alter perception and consciousness. Hemi-synchronization (a sound wave technology) combined with light wave therapies (blue spectrum for wakefulness, and to counter seasonal affective disorder), for example, has the potential to alter mood, affect energy, and assist the evolution of consciousness for individuals.
2. Disabilities, like autism and visual impairment, could be affected (theoretically) by placing laser input on the retina that is augmented, diminished, and/or mediated. For example, stimulating central while inhibiting peripheral vision (and the opposite), or emphasizing left field stimulation while inhibiting right field (or the reverse), could significantly alter behavior patterns (a minimal sketch of this kind of masking appears after this list). Caching input and then slowing it down or speeding it up is a method for determining how altering frame rate affects individual processing and memory storage (i.e. behavior). Enriching or reducing input directly to visual quadrants, hemi-fields, or the whole retina could benefit many people with processing and sensory disabilities.
3. The entire field of optics will eventually give way to digital image capture and real-time alteration of images reaching the retina. Steve recognizes this when he speaks of downloading visual prescriptions. This switch to digital diagnosis and remediation is a revolution for eye doctors, and eventually portends the demise of the optical industry. Prescriptions will be altered to fit environmental demands in real time, as the environment changes.
4. The visually impaired and blind populations are handicapped not only by sensory loss, but also by a stark, silent environment. Daniel Kish, CEO of World Access for the Blind, says that blind individuals could navigate the environment as fluidly as the sighted if there were adequate signage available; the world, Kish says, is designed to help the sighted navigate. If the environment were smarter (as it could be) and if this smart environment were networked with Steve Mann's Smart Eyeglasses, the blind could navigate without assistance.
5. Facebook is the beginning of hive brain. Eventually, with molecular implants, hive brain (social networking) will become a reality. In the meantime, Steve Mann's Electric Glasses are an intermediate step, where social networking is at face level. You could look out the eyes of your friends; in a way, become them as their sounds and sights are beamed directly into your perception.
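As flagged in item 2, the following is a minimal sketch, in NumPy, of what stimulating central vision while inhibiting the periphery could look like on a single grayscale frame. The radius and gain values are arbitrary assumptions, and this is a thought experiment rather than the actual Eyetap pipeline.

```python
# Minimal sketch of item 2 above: emphasize central vision while inhibiting
# the periphery by attenuating pixels outside a central disc. A thought
# experiment with assumed parameters, not the actual Eyetap processing.
import numpy as np

def emphasize_center(frame, radius_frac=0.3, peripheral_gain=0.2):
    """Keep the central disc of a grayscale frame; dim everything else."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2.0, xx - w / 2.0)
    mask = dist <= radius_frac * min(h, w)
    out = frame.astype(float) * peripheral_gain  # inhibited periphery
    out[mask] = frame[mask]                      # unaltered centre
    return out.astype(frame.dtype)

# Swapping the two assignments would instead inhibit central vision and
# emphasize the periphery, the reverse condition the list item mentions.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
mediated = emphasize_center(frame)
```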
These five examples are entirely possible now. The revolution is overdue. Google's internet glasses may be the door that brings this one step closer to reality. The military has long been working on land warrior goggles. Both Google and the military (and others) have been secretive for a long time as these technologies evolved. The lion is about to be let out of the cage. My concern remains: who will transform these technologies into tools that will revolutionize special education and rehabilitation?
23.12 Commentary by Woodrow Barfield and Jessica Barfield
Woodrow Barfield
Woodrow Barfield served as Professor and Director of the Sensory Engineering Laboratory at the University of Washington. He has degrees in engineering and intellectual property law and has served on the editorial board of Presence and the Virtual Reality Journal.
Jessica Barfield
Jessica Barfield is a student and ceramics artist and will be attending Dartmouth College in the fall of 2012. She can normally be found on a field hockey field or in front of a computer.
Steve Mann has written a comprehensive and informative chapter on the general topic of wearable computing (which Steve describes as miniature body-borne computational and sensory devices). We use the phrase "general topic" because Steve expands his discussion of wearable computing to include the more expansive term "bearable" computing (essentially wearable computing technology that is on or in the body). In the chapter, Steve also discusses how wearable computers may be used to augment, mediate, or diminish reality. As background for this commentary, I first met Steve many years ago when I attended a meeting at MIT concerning the first conference to be held on wearable computers; Steve was then a PhD student at the MIT Media Laboratory. (At the conference I made the statement: "Are we wearing the computers, or are they wearing us!") As the faculty gathered to discuss the aims and direction of the conference, I thought then that Steve had done more to develop the field of wearable computers than the faculty that had gathered to organize the conference. Since my first meeting with Steve, he has continued his innovative work on wearable computing, and he has published extensively on the subject. I particularly enjoyed reading Steve's anecdotes concerning his experiences as a "cyborg" in a book Steve wrote for the general public, "Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer," 2001. While much of Steve's current chapter is historical in content, he also discusses many of the wearable computing applications he has created, often with Steve's insight as to the rationale behind his inventions.
When we think of the different types of computing technology that may be worn on or in the body, we can envision a continuum that starts with the most basic of wearable computing technology (Steve mentions a wearable abacus) and ends with wearable computing that is actually connected to a person's central nervous system. In fact, as humans become more and more equipped with wearable (and bearable) computing technology, the distinction as to what is thought of as a "prosthesis" is becoming blurred as we integrate more computing into human anatomy and physiology. On this very topic, I co-authored a chapter about the use of computing technology to control feedback systems in human physiology ("Computing Under the Skin," published in Barfield and Caudell, "Fundamentals of Wearable Computing and Augmented Reality," 2001). I agree with Steve that the extension of computing integrated into a person's brain could radically enhance human sensory and cognitive capabilities and alter the direction of human evolution; in fact, in my view, we are just now at the cusp of this development, and experimental systems (computing technology integrated into a person's brain) are in the field now, helping those with severe physical disabilities. For example, consider people with debilitating diseases such that they are essentially "locked in" their own body. With the appropriate wearable computing technology
consisting of a microchip that is implanted onto the surface of the brain (where it monitors electronic 'thought' pulses), such people may use a computer by thought alone, allowing them to communicate with their family, caregivers, and, through the internet, the world at large. Sadly, in the U.S. alone about 5,000 people yearly are diagnosed with just such a disease that ultimately shuts down the motor control capabilities of the body: amyotrophic lateral sclerosis, sometimes called Lou Gehrig's disease. This disease is a rapidly progressive, invariably fatal neurological disease that attacks the nerve cells responsible for controlling voluntary muscles. Much of the work on control theory and supervisory control of remote robots, along with digital technology, is applicable to the design and use of wearable computing for such individuals.
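As a purely illustrative aside, the simplest decision such implant-driven communication rests on is deciding whether a deliberate 'thought pulse' stands out from baseline neural activity. The toy sketch below does this with a z-score threshold; real implanted systems use far richer decoding, and every number here is an assumption.

```python
# Toy sketch only: the simplest decision underlying "communication by
# thought alone" -- did a deliberate pulse occur in a noisy neural trace?
# Real implanted BCIs use far richer decoding than this z-score test.
import statistics

def pulse_detected(baseline, trace, z_threshold=4.0):
    """True if any sample in `trace` deviates strongly from baseline."""
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
    return any(abs(x - mu) / sigma > z_threshold for x in trace)

# A locked-in user could map detections to binary choices, e.g. yes/no.
baseline = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.1]
answer = "yes" if pulse_detected(baseline, [0.0, 2.5, 0.1]) else "no"
print(answer)  # -> "yes"
```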
In our view, anyone at the cutting edge of their discipline is not only pushing their field further but, by the nature of their work, is also at the forefront of other academic disciplines as well. For example, particle physicists in search of the ultimate building blocks of the Universe often find themselves debating those who hold a nonsecular view of the origins and structure of the Universe. Similarly, Steve's work, albeit in a less dramatic fashion, has raised many important issues of public policy and law. For example, Steve presents the idea that wearable computers can be used to film newsworthy events as they happen, or people of authority as they perform their duties. This brings up the question of whether a person has a legal right to film other people in public (answer: generally they do). In the chapter, Steve refers to an interesting case on just this topic decided by the U.S. First Circuit Court of Appeals. In the case, Simon Glik was arrested for using his cell phone's digital video camera (a wearable computer) to film several police officers arresting a young man on the Boston Common. The charges against Glik, which included violation of Massachusetts's wiretap statute and two other state-law offenses, were subsequently judged baseless and were dismissed. Glik then brought suit under a U.S. Federal Statute (42 U.S.C. § 1983), claiming that his very arrest for filming the officers constituted a violation of his rights under the First (free speech) and Fourth (unlawful arrest) Amendments to the U.S. Constitution. The court held that, based on the facts alleged, Glik was exercising clearly-established First Amendment rights in filming the officers in a public space, and that his clearly-established Fourth Amendment rights were violated by his arrest without probable cause. However, readers of this comment should know: in the U.S. the right to film is not without limitations. It may be subject to reasonable time, place, and manner restrictions, a topic on which much case law has been decided.
Steve also discusses privacy issues that may occur when an individual wearing a computer/camera films and records people in public places. While Steve emphasizes the example where state actors, or people generally in positions of authority, are filmed, we worry about the potential to abuse people's privacy using the technology of wearable computing. For example, video voyeurism, the act of filming or disseminating images of a person's "private areas" under circumstances in which the person had a reasonable expectation of privacy, regardless of whether the person is in a private or public location, is possible using the technology of wearable computers. In the U.S. such conduct is prohibited under State and Federal law (see, for example, the Video Voyeurism Prevention Act of 2004, 18 U.S.C.A. § 1801). And what about the privacy issues associated with other wearable computing technology, such as the ability to recognize a person's face, then search the internet for personal information about the individual (e.g., police record, or credit report), and "tack" that information on the person as they move through the environment? Could digital "scarlet letters" be far off?
Steve's concept of "diminished reality", in which a wearable computer can be used to replace or remove clutter, say for example an unwanted advertisement on the side of a building, is also of interest to those in law and public policy. On this topic, I published an article in the UCLA Entertainment Law Review, 2006, titled Commercial Speech, Intellectual Property Rights, and Advertising Using Virtual Images Inserted in TV, Film, and the Real World. In the article, I discussed the
legal ramifications of placing ads consisting of virtual images projected in the real world. We can think of virtual advertising as a form of digital technology that allows advertisers to insert computer-generated brand names, logos, or animated images into television programs or movies; or, with Steve's wearable computer technology, into the real world. In the case of TV, a reported benefit of virtual advertising is that it allows the action on the screen to continue while displaying an ad viewable only by the home audience. What may be worrisome about the use of virtual images to replace portions of the real world is that corporations and government officials may be able to alter what people see based on political or economic considerations; an altered reality may then become the accepted norm, the consequences of which bring to mind the dystopian society described in Huxley's "Brave New World."
As a final comment, one often hears people discuss the need for "theory" to provide an intellectual framework for the work done in virtual and augmented reality. When I was on the faculty at the University of Washington, my students and I built a head-tracked augmented reality system in which, as one looked around the space of the laboratory, one saw a corresponding computer-generated image rendered such that it occluded real objects in that space. We noticed that some attributes of the virtual images allowed the person to more easily view the virtual object and the real world in a seamless manner. Later, I became interested in the topic of how people performed cognitive operations on computer-generated images. With Jim Foley, now at Georgia Tech, I performed experiments to determine how people mentally rotated images rendered with different lighting models. This led to thinking about how virtual images could be seamlessly integrated into the real world. I asked the question of whether there was any theory to explain how different characteristics of virtual images combined to form a "seamless whole" with the environment they were projected into, or whether virtual images projected in the real world appeared separate from the surrounding space (floating and disembodied from the real world scene). I recalled a paper I had read while in college by
Garner and Felfoldy, published in Cognitive Psychology, 1970, on the integrality of stimulus dimensions in various types of information processing. The authors of the paper noted that "separable" dimensions remain psychologically distinct when in combination; an example being forms varying in shape and color. A vast amount of converging evidence suggests that people are highly efficient at selectively attending to separable dimensions. By contrast, "integral" dimensions combine into relatively unanalyzable, unitary wholes; an example being colors varying in hue, brightness and saturation. Although people can selectively attend to integral dimensions to some degree, the process is far less efficient than occurs for separable-dimension stimuli (see also Shepard, R. N., Attention and the metric structure of the stimulus space, Journal of Mathematical Psychology, 1964). I think that much can be done to develop a theory of augmented, mediated, or diminished reality using the approach discussed by Garner and Felfoldy, and Shepard, and I encourage readers of this comment to do so. Such research would have to expand the past work, which was done on single images, to virtual images projected into the real world.
Returning to Steve's chapter, it is an excellent source for those interested in learning about the historical context of wearable computing, and about the numerous applications Steve has developed to design a world in which the human's signal processing capabilities and the wearable computing system's functions form a feedback loop; the thought being, two brains are better than one! We also see Steve's work evolving in the not too distant future to the point where humans and wearable computing technology "live" in a mutually symbiotic manner, which implies, of course, that the primary thinker, the wearable computer, is in some way benefiting from having a human in the loop. So, returning to what I said at the first conference held on wearable computers: are we wearing them, or are they wearing us?
23.13 Commentary by Hiroshi Ishii
Hiroshi Ishii
Hiroshi Ishii is Jerome B. Wiesner Professor of Media Arts and Sciences at the MIT Media Lab, where he is head of the Tangible Media group and co-director of the Things That Think (TTT) consortium. Ishii's research focuses upon the design of seamless interfaces between humans, digital information, and the physical environment.
Although most people now see Steve Mann as a father of Wearable Computing, he was already far beyond "wearable" some 20 years ago when I first met this cyborg in the MIT Media Lab. His series of inventions was not just about the "wearable," but about a radical form of symbiosis and co-evolution of machine and human being, which he has been experimenting with for more than two decades, living in symbiosis with computation on his skin, eyes, ears, and in his soul. His vision of "Mediated Reality" has the same significance as the "Collective Intelligence" vision of Douglas Engelbart. For the same reason that thinking of Doug as merely the inventor of the mouse is inappropriate and disrespectful, seeing Steve as merely an inventor of the "wearable" is not right. Please read his visionary papers, and enjoy Steve Mann's deep universe.
23.14 References
Bade, J. Peter, Maguire Jr., Gerald Quentin and Bantz, David F. (1990). The IBM/Columbia Student Electronic Notebook Project. IBM, T. J. Watson Research Lab, Yorktown Heights, NY, USA.
Barfield, Woodrow and Caudell, Thomas (eds.) (2001): Fundamentals of Wearable Computers and Augmented Reality. CRC Press.
Bass, Thomas A. (2000): The Eudaemonic Pie. iUniverse.
Brin, David (2011). Sousveillance: A New Era for Police Accountability. Retrieved 25 December 2011 from http://davidbrin.blogspot.com/2011/06/sousveillance-new-era-for-police.html
Dennis, Kingsley (2008): Keeping a Close Watch - The Rise of Self-Surveillance & the Threat of Digital Exposure. In Sociological Review, 56.
Feiner, Steven K., MacIntyre, Blair and Seligmann, Dorée D. (1993): Knowledge-Based Augmented Reality. In Communications of the ACM, 36 (7), pp. 53-62.
Geary, James (2002): The Body Electric: An Anatomy of the New Bionic Senses. Rutgers University Press.
Herling, Jan and Broll, Wolfgang (2011). Video: Page title suppressed - page not yet published. Retrieved 6 June 2013 from [URL suppressed - page not yet published].
Knight, Brooke A. (2000): Watch Me! Webcams and the Public Exposure of Private Lives. In Art Journal, 59 (4).
Mann, Steve (2011). Video: Page title suppressed - page not yet published. Retrieved 6 June 2013 from [URL suppressed - page not yet published].
Mann, Steve (2003): Existential Technology: Wearable Computing is not the real issue. In Leonardo, 36 (1), pp. 19-25.
Mann, Steve (2004): "Sousveillance": inverse surveillance in multimedia imaging. In: Schulzrinne, Henning, Dimitrova, Nevenka, Sasse, Martina Angela, Moon, Sue B. and Lienhart, Rainer (eds.), Proceedings of the 12th ACM International Conference on Multimedia, October 10-16, 2004, New York, NY, USA, pp. 620-627.
Mann, Steven (1998): Humanistic computing: "WearComp" as a new framework and application for intelligent signal processing. In Proceedings of the IEEE, 86 (11), pp. 2123-2151.
Mann, Steve (1996b). Wearable, tetherless computer-mediated reality: WearCam as a wearable face-recognizer, and other applications for the disabled. MIT Tech Report number 361, Cambridge, Massachusetts. MIT. http://wearcam.org/vmp.htm
Mann, Steve (1996a): Smart Clothing: The Shift to Wearable Computing. In Communications of the ACM, 39 (8), pp. 23-24.
Mann, Steve (2001b): Intelligent Image Processing. Wiley-IEEE Press.
Mann, Steve (2001a): Guest Editor's Introduction: Wearable Computing - Toward Humanistic Intelligence. In IEEE Intelligent Systems, 16 (3), pp. 10-15.
Mann, Steve and Niedzviecki, Hal (2001): Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer. Doubleday of Canada.
Mann, Steve, Nolan, Jason and Wellman, Barry (2003): Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments. In Surveillance & Society, 1 (3).
Mulligan, Deirdre K. (2009). University Course on Sousveillance: Surveillance, Sousveillance, Coveillance, and Dataveillance. Retrieved 1 December 2011 from Berkeley iSchool.
Roberts, Steven K. (1988): Computing Across America: The Bicycle Odyssey of a High-Tech Nomad. Information Today Inc.
Thompson, Clive (2011). Clive Thompson on Establishing Rules in the Videocam Age. Retrieved 1 December 2011 from Wired Magazine: http://www.wired.com/magazine/2011/06/st_thompson_videomonitoaring/
Thorp, Edward O. (1966): Beat the Dealer: A Winning Strategy for the Game of Twenty-One. Vintage.
About the author
Steve Mann
Personal Homepage: http://www.eecg.utoronto.ca/~mann/
Steven Mann is a tenured professor in the Department of Electrical and Computer Engineering at the University of Toronto. Mann holds degrees from the Massachusetts Institute of Technology (PhD in Media Arts and Sciences '97) and McMaster University, where he was also inducted into the McMaster University Alumni Hall of Fame, Alumni Gallery, 2004, in recognition of his career as an inventor and teacher. While at MIT he was one of the founding members of the Wearable Computers group in the Media Lab. In 2004 he was named the recipient of the 2004 Leonardo Award for Excellence for his article "Existential Technology," published in Leonardo 36:1.