CH24 - The Encyclopedia of Human-Computer Interaction


Chapter 24
Socio-Technical System Design
by Brian Whitworth with Adnan Ahmad

A socio-technical system (STS) is a social system operating on a technical base, e.g. email, chat, bulletin boards, blogs, Wikipedia, E-Bay, Twitter, Facebook and YouTube. Hundreds of millions of people use them every day, but how do they work? More importantly, can they be designed? If socio-technical systems are social and technical, how is computing both at once? This chapter may be used as part of an STS design course, hence each part has a set of interesting discussion questions that students can investigate and report back to the class. Anyone wishing to set up a course in the design of social technologies is welcome to use this resource.



24.1  Part 1: The evolution of computing

"Evolution is systems evolving higher levels"

24.1.1  A short history

The first computer was conceived as a machine of cogs and gears (Figure 24.1). It became operational in the 1950s and '60s with the invention of semiconductors. In the 1970s, a hardware company called IBM1 was a computing leader. In the 1980s software became more important, so by the 1990s a software company called Microsoft2 took the computing lead, giving ordinary people tools like word processing. During the 1990s, computing became more personal, as the World Wide Web turned Internet URLs into web site names that people could read3. Then a company called Google4 offered the ultimate personal service, free access to the vast public library we call the Internet, and everyone's gateway to the web became the new computing leader. In the 2000s, computing evolved yet again, to become a social medium as well as a personal tool. So now Facebook challenges Google, as Google challenged Microsoft, as Microsoft challenged IBM.

1. IBM stands for International Business Machines.
2. Microsoft stands for microcomputer software.
3. IP addresses like 208.80.154.225 became Uniform Resource Locator (URL) names like http://en.wikipedia.org/
4. Google stands for a 1 followed by 100 zeros, i.e. a very large number.



Figure 24.1:  Charles Babbage (1791-1871) designed the first automatic computing engines. He invented computers but failed to build them. The first complete Babbage Engine was finished in London in 2002, 153 years after it was designed. Difference Engine No. 2, built faithfully to the original drawings, consists of 8,000 parts, weighs five tons, and measures 11 feet in length. The one pictured above is Serial Number 2 and is located in Silicon Valley at the Computer History Museum in Mountain View, California. Courtesy of Jitze Couperus. Copyright: CC-Att-SA-2 (Creative Commons Attribution-ShareAlike 2.0 Unported).



Figure:  Details from Babbage's difference engine. Courtesy of Jitze Couperus. Copyright: CC-Att-SA-2 (Creative Commons Attribution-ShareAlike 2.0 Unported).

Computing has re-invented itself every decade or so (Figure 24.2). What began as just hardware became about software, then people, and now communities. A physical machine exchanging electricity became software exchanging information, then people exchanging meaning, and now communities exchanging memes5. The World Wide Web was initially an information web (Web 1.0), then an active web (Web 2.0), now a semantic web (Web 3.0), and is becoming a social web (Web 4.0). Each evolutionary step built on the previous, as social computing needs personal computing, personal computing needs software, and software needs hardware.

5. A meme is an idea, behavior or style communicated within a culture.



The corresponding evolution of computing design culminates in socio-technical design. When the software era arrived, hardware continued to evolve, but hardware leaders like IBM no longer dominated computing as before. The evolution of computing changed business fortunes by changing what computing is. Selling software makes more money than selling hardware because it changes more often. Web queries are even more volatile, but Google gave a service away for free and then sold advertising around it — it sold its services to those who sold theirs. The business model changed, because selling knowledge is not like selling software. Facebook's business model is still evolving. It now challenges Google because we relate to family and friends more than we query knowledge: social exchanges have more trade potential than knowledge exchanges.

Figure 24.2:  The computing evolution. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.



Yet friends are just social dyads. It is naive to think that social computing will stop at a unit of two. Beyond friends are tribes, cities, city-states, nations and meta-nations like the European Union. A community isn't like a friend, as one has a friend but belongs to a community. With a world population of seven billion and growing, Facebook's 900 million active accounts are just the beginning. The future is computer support for acting groups, families, tribes, nations and eventually a global community, e.g. a group browser for people to tour the Internet together, commenting to each other as they go. Each could take turns to pick the next site or follow an expert host. If socio-technology is just beginning, we need to understand how it works.

24.1.2  Computing levels

The basis of socio-technical design is general systems theory (Bertalanffy, 1968). It describes what the disciplines of science have in common: sociologists see social systems, psychologists cognitive systems, computer scientists information systems and engineers hardware systems. All refer to systems. In general systems theory, no discipline has a monopoly on science and all are valid. Discipline isomorphies6 arise from common system properties, e.g. a social agreement measure that matched a biological diversity measure (Whitworth, 2006). Mechanical, logical, psychological and social systems are studied by engineers, computer scientists, psychologists and sociologists respectively. These perspectives in computing give levels (Table 24.1). Computing then began at the mechanical level, evolved an information level, then acquired human and community levels.

6. A discipline isomorphy is when different fields with different forms present the same equation or law.



Level          Examples                                           Discipline
Community      Norms, culture, laws, zeitgeist, sanctions, roles  Sociology
Personal       Semantics, attitudes, beliefs, feelings, ideas     Psychology
Informational  Programs, data, bandwidth, memory                  Computer science
Mechanical     Hardware, motherboard, telephone, fax              Engineering

Table 24.1:  Computing levels as discipline perspectives.

Levels also help clarify terminology. In Figure 24.3, a technology is any tool people build to use, e.g. a spear is a technology7. So a hardware device alone is a technology, but information technology (IT) is both hardware and software. Likewise, computer science (CS)8 is a hybrid of mathematics and engineering, not either alone. So information technology is not a sub-set of technology, nor is computer science a sub-set of engineering. Human computer interaction (HCI) is then a person plus an IT system, with physical, informational and psychological levels. Just as IT isn't hardware, so HCI isn't IT, but the child of IT and psychology. HCI links CS to psychology as CS linked engineering to mathematics. HCI introduces human requirements to computing, and HCI systems turn information into meaning. Finally, people can form an online community with hardware, software, personal and community levels. If the first two levels are technical and the last two social, the result is a socio-technical system (STS).

7. Anything we use physically is technology, e.g. a table is technology.
8. The study of information processing.



If technology design is computing built to hardware and software requirements, then socio-technical design is computing built to personal and community requirements as well. In socio-technical systems, the new "user" of computing is the community (Whitworth, 2009b). Currently, many terms refer to human factors in computing: engineers extend the term IT to refer to applications built to user requirements; business calls people and organizations using computing "information systems" (IS); education prefers information communication technology (ICT) to describe computer communication; health chose the term informatics. Whether your preferred term is IT, IS, ICT or informatics doesn't change the basic idea, that people are now part of computing. This chapter uses the term HCI for consistency9. In this pan-discipline view, all of Figure 24.3 is computing, whose complexity arises from its discipline promiscuity. Socio-technology then designs a computer product as a social and technical system. Limiting computing to hardware (engineering) or software (computer science) denies its obvious evolution. Levels in computing are ways to view it, not ways to partition it, e.g. a pilot in a plane is one system with different levels, not a mechanical part (the plane) plus a human part (the pilot). The physical level includes not just the plane body but also the pilot's body, as both have weight, volume etc. The information level isn't just the onboard computer, but also the neuronal processing in the pilot's brain that generates the qualia10 of human experience.

9. An alternative term is CHI, or computer-human interaction.
10. A quale is a basic subjective experience, e.g. the pain of a headache.



Figure 24.3:  Computer system levels. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

The human level is just the pilot, who sees the plane as an extension of his or her body, like extra hands or feet, and computer data like extra eyes or ears. On this level, the pilot is the actor and the plane is just a tool. The information level covers all processing, not just by the onboard computers but also by the brain. The physical level is not just the body of the plane but also that of the pilot. In an aerial conflict, the tactics of a piloted plane will differ from those of a computer drone. Finally, a plane in a squadron may do things it would not do alone, e.g. expose itself as a decoy so others can attack the enemy.



24.1.3  The reductionist dream

The reductionist dream, based on logical positivism11, is that only the physical level is "real", so everything else must reduce to it. Yet when Shannon and Weaver defined information as a choice between physical options, the options were physical but the choosing wasn't (Shannon and Weaver, 1949). A message physically fixed in one way has by this definition zero information, because the other ways it could have been don't exist physically12. It is strange but logically true that hieroglyphics one can't read have in themselves no information at all. It is reader choices that generate information, which until deciphered is unknown. If this were not so, data compression couldn't put the same data in a physically smaller signal, which it can. Information is defined by the encoding, not the physical message. If the encoding is unknown, the information is undefined, e.g. an electronic pulse sent down a wire could be a bit, or a byte (an ASCII "1"), or, as the first word of a dictionary, say Aardvark, many bytes. The information a message conveys depends on the decoding process, e.g. reading every 10th letter of a text gives a new message.
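To make this concrete, consider a minimal sketch in Python; the byte value, the two-word dictionary and the decodings are invented for illustration. The same physical signal yields different information under different decodings, and compression preserves the information while shrinking the physical message:

    import zlib

    # One physical signal, three decodings: the information read out
    # depends on the decoding process, not on the signal alone.
    signal = b"\x01"                  # a single 'on' pulse

    as_bit = signal[0] & 1            # decoded as one bit -> 1
    as_byte = signal[0]               # decoded as an 8-bit number -> 1
    words = ["Aardvark", "Abacus"]    # an invented two-word dictionary
    as_word = words[signal[0] - 1]    # decoded as a dictionary index -> 'Aardvark'
    print(as_bit, as_byte, as_word)

    # Compression: the same information in a physically smaller signal.
    text = b"abc" * 1000              # 3,000 bytes of redundant message
    packed = zlib.compress(text)      # a few dozen bytes
    assert zlib.decompress(packed) == text
    print(len(text), len(packed))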



Information doesn't exist physically, as it can't be touched or seen. Physicality is necessary for it, but not sufficient. That mathematical laws are real even though they aren't concrete is mathematical realism (Penrose, 2005). Mathematics is a science because its constructs are logical, not because they are physical. They are real because we conceive them, not because they physically exist. That they are later physically useful is another matter. Cognitive realism is the case that cognitions are also real because we experience them. Mathematical or cognitive constructs defined in physical terms become empirical13, and so the feedback loop of science still works, e.g. fear measured by heart rate is a cognitive construct measured in physical terms. Yet fear isn't just heart rate, as it can also be measured by pupil dilation, blood pressure, etc. Even terms like "red" aren't physical facts, as the light spectrum is continuous, with no red frequency section. The physical level alone is what it is. It has no choices, so has no information, i.e. reductionism denies information science. In physics, it gave a clockwork universe, where each state perfectly defined the next. Quantum theory flatly denied this, as quantum events are by definition random, i.e. explained by no physical history. Either quantum theory is wrong, which it has never been, or reductionism, that only the physical is real, is a naive nineteenth century assumption that has had its day. If all science were physical, all science would be physics, which it is not. Physics today has a quantum level, i.e. a primordial non-physical14 reality below physical reality (Whitworth, 2011). Yet long ago, the great 18th century German philosopher Kant argued that reality is just a view, that we don't see things as they are in themselves (Kant, 1999)15. Levels return the observer to science, as quantum theory's measurement paradoxes demand. In philosophy, psychology, mathematics, computing and quantum physics, levels apply16.

11. Logical positivism is a nineteenth century metaphysical position stating that all science involves only physical observables. In psychology, it led to Behaviorism (Skinner, 1948), which is now largely discredited (Chomsky, 2006). Science is not a way to prove facts, but a way to use world feedback to make best guesses. See researchroadmap.org for more details.
12. An on/off line voltage choice is one bit, but a physical 'on' signal alone is no information.
13. Empirical means derived from the physical world. Mental constructs with no physical referent, like love, are outside it.

14. For example, quantum collapse ignores the speed of light limit and quantum waves travel many paths at once.
15. He called a 'thing in itself' the noumenon, as opposed to the phenomenon, or the view we see. A bat or a bee would see the world differently from us. It is egocentrism to assume the world is only as we see it.
16. With a non-physical quantum reality below the physical.



24.1.4  Science as a world view

Figure 24.4:  Computing levels as abstract views. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

A level is a world view, a way of seeing that is complete and consistent in itself. In the mechanical view, a computer is all hardware, but in the informational view it is all data. One can't point to a program on a motherboard nor to a device in a data structure. A mobile phone doesn't have hardware and software parts, but is hardware or software in toto. Hardware and software are ways to look at it, not ways to divide it up. Hardware becomes software when we view computing in a different way. The switch is like swapping glasses to see the same object close-up. The disciplines of science are world views, like walking around an object to see it from different perspectives. Levels are a fact of science, e.g. to describe World War II as a "history" of atomic events would be ridiculous. A political summary is more useful.



Yet levels emerge from each other, as higher abstractions form from lower ones (Figure 24.4). Information needs hardware choices, cognitions need information flows, and communities need common cognitions. Conversely, without physical choices there is no information, without information there are no cognitions, and without cognitions there is no community17. A world view has properties, like being:

1. Essential. One cannot view a world without first having a point of view.
2. Empirical. Based on world interaction, e.g. information is empirical.
3. Complete. A world view consistently describes a whole world.
4. Subjective. One chooses a view before viewing, explicitly or implicitly.
5. Exclusive. One can't view two ways at once, as one can't sit in two places at once18.
6. Emergent. One world view can emerge from another.

Levels as views must be chosen before viewing, i.e. pick a level, then view. Yet how we see the world affects how we act, e.g. if we saw ultra-violet light, as bees do, previously dull flowers would become bright. Every flower shop would have to change its stock. Levels as higher ways to view a system are also new ways to operate and design it, e.g. new software protocols like Ethernet can improve network performance as much as new cables. New ways to view computing affect how we build it, and how social levels affect technology design is socio-technical design. Level requirements cumulate, so socio-technical design includes hardware, software and HCI requirements (Figure 24.5).

17. A community is a set of people who see themselves as a social unit.
18. Or as one can't lever from two fulcrums at once. One can, of course, view from one perspective then another.



What appears as just hardware now has requirements outside itself, e.g. smart-phone buttons mustn't be too small for people's fingers. Levels are why computer design has evolved from hardware engineering to socio-technology.

Figure 24.5:  Computing applications and levels. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

For a village beside a factory, community needs come second to factory productivity, with ethics an after-thought, but for socio-technology, the community and the technology are one. If social needs are not met there is no community, and if there is no community the technology fails to perform as expected. Socio-technical design is the application of community requirements to people, software and hardware. The following sections derive each computing level from the previous one.

24.1.5  From hardware to software

Hardware is any physical computer part, e.g. mouse, screen or case. It doesn't "cause" software, nor is software a hardware output, as physical systems have physical outputs. We create software by seeing choice in physicality. Software needs hardware but it isn't hardware, as the same code can run on a PC, Mac or mobile phone.



An entity relationship diagram can work for any physical storage, whether disk, CD or USB, as data entities aren't disk sectors. Software assumes some hardware, but no specific one. If any part of a device acquires software, the whole system gets an information level, e.g. a computer is information technology even though its case is just hardware. We describe a system by its highest level, so if the operating system "hangs"19 we say "the computer" crashed, even though the computer hardware is working fine. Rebooting fixes the software problem with no hardware change, so a software system can fail while the hardware still works perfectly. Conversely, a computer can fail as hardware but not software, if a chip overheats. Replace the hardware part and the computer works with no software change needed. Software can fail without hardware failing and hardware can fail without software failing. New hardware needn't change software and new software needn't change hardware. Each level has its own performance requirements: if software fails we call a programmer, but if hardware fails we call an engineer. Software requirements can be met by hardware operations, e.g. reading a logical file takes longer if the file is fragmented, as the drive head must jump between physically distant disk sectors. Defragmenting a disk improves software access by putting files in adjacent physical sectors. File access improves, but the physical drive read rate hasn't changed, i.e. hardware actions can meet software goals, e.g. database and network requirements gave new hardware chip commands. The software goal, of better information throughput, also becomes the hardware goal, e.g. physical chip design today is as much about caching and co-processing as it is about cycle rate.

19. If the software gets in an infinite loop, we say it 'hangs'.
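As a toy illustration of the fragmentation point, the sketch below models head travel as the distance jumped between consecutive sector reads; the sector numbers are invented and real drives are more complex, but the contrast makes the idea plain:

    # Toy model: total head travel to read a file laid out in
    # contiguous vs scattered disk sectors (sector numbers invented).
    def head_travel(sectors):
        """Sum the distances the head jumps between consecutive reads."""
        return sum(abs(b - a) for a, b in zip(sectors, sectors[1:]))

    contiguous = [100, 101, 102, 103, 104, 105, 106, 107]
    fragmented = [100, 5210, 101, 7, 9880, 102, 4400, 103]

    print(head_travel(contiguous))  # 7: adjacent sectors, minimal travel
    print(head_travel(fragmented))  # 38559: same data, vastly more travel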

24.1.6  From software to HCI

HCI began with the personal computing era. Adding people to the computing equation meant that getting technology to work was only half the problem; the other half



was getting people to use it. Web users who didn't like a site just clicked on. Web sites that got more hits succeeded because, given equal functionality, users chose the more usable product (Davis, 1989), e.g. Word replaced WordPerfect because it was more usable: users who took a week to learn WordPerfect picked up Word in a day. As computing previously gained a software level, it now gained a human level. Human computer interaction (HCI) is a person using IT, as IT is software using hardware. As computer science merges mathematics and engineering, but is neither, so HCI merges psychology and computer science, but is neither. Psychology is the study of people, and computer science the study of software, but the study of people using software, or HCI, is new. It is another computing discipline that cuts across other disciplines. HCI applies psychology to computing design, e.g. Miller's paper on cognitive span suggests limiting computer menu choices to seven (Miller, 1956). That multi-media computing engages our many senses is another example of a human requirement defining computing.

24.1.7  From HCI to STS

Social structures, roles and rights add a fourth level to computing. Socio-technical design uses the social sciences in computing design as HCI uses psychology. STS is not part of HCI, nor is sociology part of psychology, because a society is more than the people in it, e.g. East and West Germany, with similar people, performed differently as communities, as is true for North and South Korea today. To say "the Jews" survived but "the Romans" didn't is to say that one society continued and the other didn't, not the people, as no Roman-era people are alive today. A society is not just the people in it. People who gather to view a spectacle or customers coming to shop for bargains are not a community. A community is here an agreed form of social interaction that persists (Whitworth and de Moor, 2003). Social interactions can have a physical or a technical base, e.g. a socio-physical system is people connecting by physical means. Face-to-face friendships cross



seamlessly to Facebook because the social level persists across physical and electronic architecture bases. Whether electronically or physically mediated, a social system is always people interacting with people. Electronic communication may be “virtual”, but the people involved are real.

Figure 24.6:  The computing requirements hierarchy. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

A community works through people using technology as people work through software using hardware, so social requirements are now part of computing design (Sanders and McCormick, 1993).



While sociology studies the social level alone, socio-technical design studies how personal and social requirements can be met by IT system design. Certainly this raises the cost of development, but then systems like social networks have far more performance potential.

24.1.8  The computing requirements hierarchy

The evolution of computing implies a requirements hierarchy (Figure 24.6). If the hardware works, software becomes the priority; if the software works, user needs arise; and when user needs are met, social requirements follow. As one level's issues are met, those of the next appear, as climbing one hill reveals another. As hardware over-heating problems are solved, software data locking problems arise. As software response times improve, user response times become the issue. Companies like Google and E-bay still seek customer satisfaction, but customers in crowds have social needs like fairness and synergy. As computing evolves, higher levels come to drive success. In general, the highest level of a system defines its success, e.g. social networks need a community to succeed. If no community forms, it doesn't matter how easy to use, fast or reliable the software is. Lower levels are essential to avoid failure, but higher levels are essential to success.

Level          Requirements                                 Errors
Community      Reduce community overload and clashes;       Unfairness, slavery, selfishness,
               increase productivity, synergy, fairness,    apathy, corruption, lack of
               freedom, privacy, transparency.              privacy.
Personal       Reduce cognitive overload and clashes;       User misunderstands, gives up,
               increase meaning transfer efficiency.        is distracted, or enters wrong
                                                            data.
Informational  Reduce information overload and clashes;     Processing hangs, data storage
               increase data processing, storage, or        full, network overload, data
               transfer efficiency.                         conflicts.
Mechanical     Reduce physical heat or force overload;      Overheating, mechanical fractures
               increase heat or force efficiency.           or breaks, heat leakage, jams.

Table 24.2:  Computing errors by system level.

Conversely, any level can cause failure, e.g. it doesn't matter how high community morale is if the hardware fails, the software crashes or the interface is unusable. An STS fails if its hardware fails, if its program crashes or if users can't figure it out. Hardware, software, personal and community failures are all computing errors (Table 24.2). The one thing they have in common is that the system fails to perform, and in evolution, what doesn't perform doesn't survive. When computing was just technology, it only failed for technical reasons, but now that it is socio-technology, it can also fail for social reasons. Technology is hard, but society is soft. That the soft should direct the hard seems counter-intuitive, but trees grow at their soft tips, not their hard base. As a tree trunk doesn't direct its expanding canopy, so today's social computing was undreamt of by its technical base.



24.1.9  Design combinations

Figure 24.7.A:  Remote controls for Apple products are good examples of HCI design. Courtesy of Ocrho. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).



Figure 24.7.B:  Remote controls for televisions are not. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.



Design fields combine different requirements and design levels, as in Table 24.3:

1. Ergonomics is the design of safe and comfortable machines for people. Designing technology to human body needs like posture and eye-strain merges biology and engineering.

2. Object design, as defined by Norman, applies psychological needs to mechanical design (Norman, 1990), e.g. a door's design affects whether it is pushed or pulled. An affordance is a physical object design that cues its use, as a button cues pressing. Physical systems designed to human requirements work better. In World War II, planes crashed until engineers designed cockpit controls to the cognitive needs of pilots as follows (with computing examples):

   1. Put the control by the thing controlled, e.g. a handle on a door (context menus).
   2. Let the control "cue" the required action, e.g. a joystick (a 3D screen button).
   3. Make the action/result link intuitive, e.g. press a joystick forward to go down (press a button down to turn on).
   4. Provide continuous feedback, e.g. an altimeter (a web site breadcrumbs line).
   5. Reduce mode channels, e.g. altimeter readings (avoid edit and zoom mode confusions).
   6. Use alternate sensory channels, e.g. warning sounds (error beeps).
   7. Let pilots "play", e.g. flight simulators (a system sandbox).

3. Human computer interaction applies psychological requirements to software design. Usable interfaces respect cognitive principles, e.g. by the nature of human attention, users don't usually read the entire screen. HCI turns psychological needs into IT designs as architecture turns buyer needs into house designs. Compare Steve Jobs' iPod to a television remote (Figure 24.7). Both do the same job20 but one is a cool tool and the other a mass of buttons. One was designed to engineering requirements and the other to human needs. Which then performs better?

4. Fashion is the social need to look good applied to object design. In computing, a mobile phone can be a fashion accessory, just like a hat or handbag. Its role is to impress, not just to function. Aesthetic criteria apply when people buy mobile phones to be trendy or fashionable, so color is as important as battery life in mobile phone selection.

5. Socio-technology, the social design of information technology, applies social requirements to software design. Anyone online can see the power of socio-technology, but most see it as an aspect of their specialty. Sociologists study society as if it were apart from physicality, which it is not. Technologists study technology as if it were apart from community, which it is not. Only socio-technology studies how the social links to the technical, as a new discipline.

Field       Target      Requirements    Example
STS         IT          Community ...   Wikipedia, YouTube, E-bay
Fashion     Accessory   Community ...   Mobile phone as an accessory
HCI         IT          Personal ...    Framing, border contrast, richness
Design      Technology  Personal ...    Keyboard, mouse
Ergonomics  Technology  Biological ...  Adjustable height screen

Table 24.3:  Design fields by target and requirement levels.

20. In fact the iPod does more.



In Figure 24.8, higher level requirements filter down to affect lower level operation and design. This higher-affects-lower principle is that higher levels directing lower ones improves system performance. Any level requirement can translate down, e.g. communities require agreement to act, which at the citizen level gives norms, at the informational level laws, and at the physical level cultural events. The same applies online, e.g. online communities make demands of netizens21 as well as of hardware. STS design then is about having it all: reliable devices, efficient code, intuitive interfaces and sustainable communities.

Figure 24.8:  Computing requirements cumulate. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

In physical society, over thousands of years, families formed tribes, tribes formed city-states, city-states formed nation states, and nations formed nations of nations, each with more complex social structures (Diamond, 1998). The social level in Figure 24.8 isn't just one step, as social units can form bigger social units22 and so get new requirements (Whitworth and Whitworth, 2010).

21. For example, online 'netiquette', see: http://www.kent.edu/dl/technology/etiquette.cfm
22. A social unit of analysis can be a person, a dyadic friendship, a group, a tribe, etc.



24.1.10  The flower of computing

The evolution of computing involves four main specialties (Figure 24.9), but pure engineers see only mechanics, pure computer scientists only information, pure psychologists only cognitions and pure sociologists only social structures. So computing as a whole isn't pure, yet this hybrid is the future, because performance isn't about purity, as practitioners understand (Raymond, 1999).

Figure 24.9:  The four stages of computing. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

The kingdom of computing is a realm divided, as academics specialize to get publications, grants and promotions (Whitworth and Friedman, 2009). Specialties guard their knowledge in journal castles with jargon walls, like medieval fiefdoms, but in doing so hold hostage knowledge that by its nature should be free. This division also disguises and limits the growth of computing. Every day more people



use more computers to do more things in more ways, but computing staff rarely reach critical mass, because engineering, computer science, health23, business, psychology, mathematics and education all compete for the computing crown24. A realm divided is weak, and will get weaker if music, art, journalism, architecture etc. also set up outposts. Computing faculty scatter over the academic landscape like the tribes of Israel, some in engineering, some in computer science, some in health, etc. Yet we are one. Mathematics split up like this would be equally dilute. The flower of computing is born of many disciplines but belongs to none. It is a new discipline in itself (Figure 24.10). For it to bear research fruit, its academic parents must set it free. Let us trade knowledge, not dominate it. Using different terms, models and theories for the same subject invites confusion. Universities that split computing research into small groups, isolated by discipline boundaries, distance themselves from its multi-disciplinary future. Until computing research becomes one, computing theory will remain as it is now: decades behind computing practice.

23. Health even created its own computing field of informatics, with separate journals, conferences and courses, to meet its non-engineering and non-business computing needs.
24. Computing is the Afghanistan of academia, often invaded but never conquered. It should be the Singapore, a knowledge trade centre.



Figure 24.10:  The flower of computing. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.



24.1.11  Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

1. How has computing evolved since it began? Is it just faster machines and better software? What is the role of hardware companies like IBM and Intel in modern computing?

2. How has the computing business model changed as it evolved? Why does selling software make more money than selling hardware? Can selling knowledge make even more money? What about selling friendships? Can one sell communities?

3. Is a kitchen table a technology? Is a law a technology? Is an equation a technology? Is a computer program a technology? Is an information technology (IT) system a technology? Is a person an information technology? Is an HCI system (person plus computer) an information technology? What, exactly, isn't a technology?

4. Is any set of people a community? How do people form a community? Is a socio-technical system (an online community) any set of HCI systems? How do HCI systems form an online community?

5. Is computer science part of engineering or of mathematics? Is human computer interaction (HCI) part of engineering, computer science or psychology? Is socio-technology part of engineering, computer science, psychology or one of the social sciences?25

6. In an aircraft, is the pilot a person, a processor, or a physical object? Can one consistently divide the aircraft into human, computer and mechanical parts? How can one see it?

7. What is the reductionist dream? How did it work out in physics? Does it recognize computer science? How did it challenge psychology? Has it worked out in any discipline?

8. How much information does a physical book, which is fixed in one way, have by definition? If we say a book "contains" information, what is assumed? How is a book's information generated? Can the same physical book "contain" different information for different people? Give an example.

9. If information is physical, how can data compression put the same information in a physically smaller signal? If information is not physical, how does data compression work? Can one encode more than one semantic stream into one physical message? Give an example.

10. Is a bit a physical "thing"? Can you see or touch a bit? If a signal wire sends a physical "on" value, is that always a bit? If a bit isn't physical, can it exist without physicality? How can a bit require physicality but not itself be physical? What creates information, if it is not the mechanical signal?

11. Is information concrete? If we can't see information physically, is the study of information a science? Explain. Are cognitions concrete? If we can't see cognitions physically, is the study of cognitions (psychology) a science? Explain. What separates science from imagination if it isn't physicality?

12. Give three examples of other animal species that sense the world differently from us. If we saw the world as they do, would it change what we do? Explain how seeing a system differently can change how it is designed. Give examples from computing.

13. If a $1 CD with a $1,000 software application on it is insured, what do you get if it is destroyed? Can you insure something that is not physical? Give current examples.

14. Is a "mouse error" a hardware, software or HCI problem? Can a mouse's hardware affect its software performance? Can it affect its HCI performance? Can mouse software affect HCI performance? Give examples in each case. If a wireless mouse costs more and is less reliable, how is it better?

15. Give three examples of a human requirement giving an IT design heuristic. This is HCI. Give three examples of a community requirement giving an IT design heuristic. This is STS.

16. Explain the difference between a hardware error, a software error, a user error and a community error, with examples. What is the common factor here?

17. What is an application sandbox? What human requirement does it satisfy? Show an online example.

18. Distinguish between a personal requirement and a community requirement in computing. Relate this to how STS and HCI differ, and how socio-technology and sociology differ. Why can't sociologists or HCI experts design socio-technical systems?

19. What in general do people do if their needs aren't met by a physical situation? What do users do if their needs aren't met online? What is the difference? What do citizens of a physical community do if it doesn't meet their needs? What about an online community? Again, what is the difference? Give specific examples to illustrate.

20. According to Norman, what is ergonomics? What is the difference between ergonomics and HCI? What is the difference between HCI and STS?

21. Give examples of: Hardware meeting engineering requirements. Hardware meeting computer science requirements. Software meeting CS requirements. Hardware meeting psychology requirements. Software meeting psychology requirements. People meeting psychology requirements. Hardware meeting community requirements. Software meeting community requirements. People meeting community requirements. Communities meeting their requirements. Which are computing design?

22. Why is an iPod so different from TV or video controls? Which is better and why? Why has TV remote design changed so little in decades? If TV and the Internet compete for the hearts and minds of viewers, who will win?

23. How does an online friend differ from a physical friend? Can friendships transcend physical and electronic interaction architectures? Give examples. How is this possible?

24. Why do universities spread computing researchers across many disciplines? What is a cross-discipline? What past cross-disciplines became disciplines? Why is computing a cross-discipline?

25. Like sociology, history, political science, anthropology, ancient history, etc.

24.2  Part 2: Design spaces

"All my cuts are the best" (said by a butcher to a housewife who asked him for the best cuts)

The previous section reviewed computing system levels; this one reviews constituent parts.

24.2.1  The elephant in the room

The beast of computing has regularly defied pundit predictions. Key advances like the cell-phone (Smith et al, 2002) and open-source development (Campbell-Kelly, 2008) weren't predicted by the experts of the day, though the signs were there for all to see. As experts pushed media-rich systems, lean text chat, blogs, texting



and wikis took off. Even today, people with rich video-phones still text. Google's simple white screen, not Yahoo's multi-media graphics, scooped the search engine field. In gaming, the innovation was social gaming, not virtual reality helmets. Investors in Internet bandwidth lost money when the future wasn't all video. In computing, that practice leads but theory bleeds has a long history. Over thirty years ago, paper was declared "dead" by the electronic paperless office (Toffler, 1980). Yet today, paper is used more than ever before. James Martin saw program generators replacing programmers, but today we still have a programmer shortage. A "leisure society" was supposed to arise as machines took over our work, but today we are less leisured than we ever were (Golden and Figart, 2000). The list goes on: email was supposed to be for routine tasks, the Internet was supposed to collapse without central control, video was supposed to replace text, teleconferencing was supposed to replace air travel, AI smart-help was supposed to replace help-desks, and so on. We get it wrong time and again, because computing is the elephant in our living room. We can't see it because it is too big. In the story of the blind men and the elephant, one grabbed its tail and found it like a rope and bendy, another took a leg and declared it fixed like a pillar, a third felt an ear and thought it like a rug and floppy, while the last seized the trunk, and found it like a pipe but very strong (Sanai, 1968). Each saw a part but none saw the whole. How can one see an elephant by analyzing its toenails?26

26. Yet one can see its genome in any cell, because in nature, each cell has the information to regenerate the elephant.

24.2.2  Design requirements

To design a system is to find problems early, e.g. a misplaced wall on an architect's plan can be moved by the stroke of a pen, but design needs performance requirements, like efficiency. Requirements engineering analyzes stakeholder needs, to



specify what a system must do for them to sign off on the end product. It is basic to system design:

“The primary measure of success of a software system is the degree to which it meets the purpose for which it was intended. Broadly speaking, software systems requirements engineering (RE) is the process of discovering that purpose...” -- Nuseibeh and Easterbrook, 2000: p. 1

A requirement can be a particular value (e.g. uses SSL), a range of values (e.g. less than $100), or a criterion scale (e.g. is secure). Given a system's requirements, designers can build it, but for computing, the literature can't agree on what they are. One text has usability, repairability, security and reliability (Sommerville, 2004, p. 24), but the ISO 9126-1 quality model has functionality, usability, reliability, efficiency, maintainability and portability (Losavio et al, 2004). Berners-Lee made scalability a World Wide Web criterion (Berners-Lee, 2000), while others stress open standards between systems (Gargaro et al, 1993). Business criteria are cost, quality, reliability, responsiveness and conformance to standards (Alter, 1999), but software architects prefer portability, modifiability and extendibility (de Simone and Kazman, 1995). Others espouse flexibility (Knoll and Jarvenpaa, 1994) and privacy (Regan, 1995). On the issue of what computer systems need to succeed, the literature is at best confused. This gives what developers call the requirements mess (Lindquist, 2005), which has ruined many a software project. It is the problem that agile methods address. In current theories, each specialty sees only itself. Security specialists see security as availability, confidentiality and integrity (OECD, 1996), so to them,



reliability is part of security. Reliability specialists see dependability as reliability, safety, security and availability (Laprie and Costes, 1982), so to them security is part of a general reliability concept. Yet both can't be generally true. Similarly, a usability review finds functionality and error tolerance part of usability (Gediga et al, 1999), while a flexibility review finds scalability, robustness and connectivity aspects of flexibility (Knoll and Jarvenpaa, 1994). In academia, each specialty expands to fill the theory space around it. Yet there is recognition that no specialty is the be-all and end-all:

"The face of security is changing. In the past, systems were often grouped into two broad categories: those that placed security above all other requirements, and those for which security was not a significant concern. But ... pressures ... have forced even the builders of the most security-critical systems to consider security as only one of the many goals that they must achieve." -- Kienzle and Wulf, 1998: p. 5

Analyzing performance goals in isolation is giving diminishing returns.

24.2.3  Design spaces

Architect Christopher Alexander observed that vacuum cleaners with powerful engines and more suction were also heavier, noisier and cost more (Alexander, 1964). One performance criterion has a best point, but two criteria, like power and cost, give a best line. The efficient frontier of two performance criteria is the maximum of one for each value of the other (Keeney and Raiffa, 1976).



A system design is then the choice of a point in a multi-dimensional design space of many value combinations. So there are many "best" points, e.g. a cheap, heavy but powerful vacuum cleaner, or a light, expensive and powerful one (Figure 24.11). The efficient frontier of a design space is a surface of "best" combinations27. Advanced system performance is not a one-dimensional ladder to excellence, but a station with many trains to many destinations.

27. Not all possible criterion combinations may be achievable, e.g. a light, cheap and powerful vacuum cleaner.

Figure 24.11:  A vacuum cleaner design space. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
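To see why a design space has an efficient frontier rather than a single best point, one can filter a set of candidate designs, keeping only those that no other design beats on every criterion. This is a minimal Pareto-frontier sketch in Python; the designs and their scores are invented for illustration:

    # A minimal sketch of an efficient (Pareto) frontier over candidate
    # designs; the design names and scores are invented.
    designs = {
        "cheap-heavy":   {"power": 7, "cost": 3, "weight": 9},
        "light-premium": {"power": 7, "cost": 9, "weight": 2},
        "budget-weak":   {"power": 2, "cost": 1, "weight": 4},
        "poor-value":    {"power": 2, "cost": 9, "weight": 9},
    }

    def dominates(a, b):
        """True if design a is at least as good as b on every criterion
        (more power, less cost, less weight) and strictly better on one."""
        no_worse = (a["power"] >= b["power"] and a["cost"] <= b["cost"]
                    and a["weight"] <= b["weight"])
        better = (a["power"] > b["power"] or a["cost"] < b["cost"]
                  or a["weight"] < b["weight"])
        return no_worse and better

    frontier = [name for name, d in designs.items()
                if not any(dominates(other, d) for other in designs.values())]
    print(frontier)  # three different 'best' designs; only 'poor-value' drops out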

Designing in a multi-dimensional space gives many "best" points, so nature has no best animal. Successful life includes flexible viruses, reliable plants, social insects and powerful tigers, with the last endangered. In evolution, not just



the strong are fit, and over-specialization can even lead to extinction. Likewise, computing has no "best". If computer performance were just about processing we would all want supercomputers, but laptops with less power perform better for some (David et al, 2003). Blindly adding software functions gives bloatware28, applications full of features that no-one uses. Design is then the art of reconciling many requirements in a system form, e.g. a quiet, reliable, cheap and powerful vacuum cleaner. It is the innovative synthesis of a performance form in a requirements space (Alexander, 1964). It isn't one-dimensional, e.g. Berners-Lee chose HTML for the World Wide Web for its flexibility (across platforms), reliability and usability (easy to learn). An academic conference rejected his WWW proposal because HTML was inferior to SGML (Standard Generalized Markup Language). Specialists saw their specialty, not system performance. Even after the World Wide Web's phenomenal success, their blindness remained:

28. Also called featuritis or scope creep.

"Despite the Web's rise, the SGML community was still criticising HTML as an inferior subset ... of SGML" -- Berners-Lee, 2000: p. 96

What has changed since academia found the World Wide Web "inferior"? Not a lot. If it is any consolation, an equally myopic Microsoft also found it "unprofitable". In system design, a focus on any one criterion gives diminishing returns, whether it is functionality, security (OECD, 1996), extendibility (de Simone and Kazman, 1995), privacy (Regan, 1995), usability (Gediga et al., 1999) or flexibility



(Knoll and Jarvenpaa, 1994). Improving one aspect alone can even reduce performance, i.e. "bite back" (Tenner, 1997), e.g. a network so secure that no-one uses it. Advanced system performance does not result from one-dimensional design.

24.2.4  Non-functional requirements

In traditional requirements engineering, criteria like usability are quality requirements that affect functional goals but can't stand alone (Chung et al, 1999). For decades, these non-functional requirements (NFRs), or "-ilities", were considered second-class requirements. They defied categorization, except to be non-functional. How exactly they differed from functional goals was never made clear (Rosa et al, 2001), yet most modern systems have more lines of interface, error and network code than functional code, and increasingly fail for "unexpected" non-functional reasons29 (Cysneiros and Leite, 2002, p. 699). The logic is that NFRs like reliability can't exist without functionality, so are subordinate to it. Yet by the same logic, functionality can't exist without reliability, e.g. a car that won't start has no speed function, nor does a car that is stolen or can't be driven. NFRs don't just modify performance; they define it. In nature, functionality isn't the only key to success, e.g. viruses hijack the functionality of other systems. Functionality differs from other system requirements only in being more obvious to us. It is really just one of many requirements. The distinction between functional and non-functional requirements is our bias, like seeing the sun going round the earth because we are on the earth.

29. Hardly surprising if we define NFRs to be less important.

24.2.5  Constituent parts

In general systems theory, any system consists of:

1. Parts, and
2. Interactions.



But are software parts lines of code, variables or sub-programs? Let a system's elemental parts be those not formed of other parts. A mechanic stripping a car stops at the bolt element, as to decompose it further gives atoms, which are no longer mechanical. Each level has a different elemental part: physics has quantum strings, information has bits, psychology has qualia, and society has citizens (Table 24.4). Elemental parts then form complex parts as bits form bytes.

Level          Elemental part    Other parts
Community      Citizen           Friendships, groups, organizations, societies.
Personal       Qualia            Cognitions, attitudes, beliefs, feelings, theories.
Informational  Bit               Bytes, records, files, commands, databases.
Physical       Quantum strings?  Quarks, electrons, nucleons, atoms, molecules.

Table 24.4:  System parts by level.

Let a system's constituent parts be those that interact to form the system but are not part of other parts (Esfeld, 1998). So disconnecting a car entirely gives elemental parts, not constituent parts, e.g. a bolt on a wheel isn't a constituent because it is part of the wheel. To say a body is composed of cells ignores its structure: how elemental parts form constituent parts. Only in system heaps, like a pile of sand, are elemental parts also constituent parts. The body's constituent parts are the digestive system, the



respiratory system, etc., not its cells. Just sticking together arbitrary physical parts, like head, arms and legs, gives the Frankenstein effect30 (Tenner, 1997).

30. Dr Frankenstein made a human being by putting together the best of each individual body part he could find in the graveyard. The result was a monster.
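As a toy illustration of these two definitions, take an invented part hierarchy: the constituent parts are the top-level subsystems not contained in any other part, while the elemental parts are the leaves that decompose no further:

    # Invented part hierarchy: sub-dicts are sub-parts.
    car = {
        "engine": {"block": {}, "piston": {}},
        "wheel": {"rim": {}, "bolt": {}},
    }

    def elemental(tree):
        """Yield leaf parts, i.e. parts not formed of other parts."""
        for name, sub in tree.items():
            if sub:
                yield from elemental(sub)
            else:
                yield name

    constituent = list(car)          # top-level parts: ['engine', 'wheel']
    print(constituent)
    print(list(elemental(car)))      # ['block', 'piston', 'rim', 'bolt']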

24.2.6  Holism and specialization

The performance of a system of parts that interact isn't defined by decomposition alone. Even simple parts, like air molecules, can interact strongly to form a chaotic system like the weather (Lorenz, 1963). Gestalt psychologists called the whole being more than its parts holism, as a curve is just a curve but in a face becomes a "smile". Holism is how system parts change by interacting with others. Holistic systems are individualistic, because changing one part, through its interactions, can cascade to change the whole system drastically. People rarely look the same because one gene change can change everything. The brain is also holistic - one thought can change everything you know. Yet a system's parts needn't be simple. The body began as one cell, a zygote, that divided into all the cells of the body, including liver, skin, bone and brain cells31. Likewise, in early societies most people did most things, but today we have millions of specialist jobs. A system's specialization32 is the degree to which its parts differ in form and action, especially its constituent parts. Holism (complex interactions) and specialization (complex parts) are hallmarks of evolved systems, giving both levels and constituent specializations.

31. Deciphering the human genome gave the pieces of the genetic puzzle, not how they connect.
32. Specialization is also called differentiation.

24.2.7  General performance requirements

Requirements engineering aims to define a system's purposes. If levels and constituent specializations change those purposes, how can requirements engineering succeed?



The answer proposed here is to take the view of the system itself, specifying requirements for different levels and constituent specializations. How these are reconciled is then the art of system design. A system interacts with its environment to perform, i.e. to gain value and avoid loss, in order to survive. In Darwinian terms, what doesn't survive fails and what does succeeds. So a system needs a boundary, to exist apart from the world, and an internal structure, to support and manage that existence. It needs effectors to act upon the environment around it and receptors to monitor the world for risks and opportunities.

Constituent  Requirement    Definition
Boundary     Security       To deny unauthorized entry, misuse or takeover by other entities.
             Extendibility  To attach to or use outside elements as system extensions.
Structure    Flexibility    To adapt system operation to new environments.
             Reliability    To continue operating despite system part failure.
Effector     Functionality  To produce a desired change on the environment.
             Usability      To minimize the resource costs of action.
Receptor     Connectivity   To open and use communication channels.
             Privacy        To limit the release of self information by any channel.

Table 24.5:  System performance requirements by constituent specialty.



So as cells evolved they first got a boundary membrane, then organelle and nuclear structures for support and control; then eukaryotic cells evolved flagella to move, and protozoa got photo-receptors (Alberts et al, 1994). We also have a skin boundary, metabolic and brain structures, muscle effectors and sense receptors, like the eye. Computers also have a case boundary, a motherboard internal structure, printer or screen effectors and keyboard or mouse receptors. Four constituent specializations by two risk and opportunity goal options give eight performance requirements (Table 24.5). The details are as follows:

1. Boundary constituents manage the system boundary. They can be designed to deny outside things entry (security) or to use them (extendibility). In computing, virus protection is security and system add-ons are extendibility (Figure 24.12). In people, the immune system gives biological security and tool-use illustrates extendibility.

2. Structure constituents manage internal operations. They can be designed to limit internal change to reduce faults (reliability), or to allow internal change to adapt to outside changes (flexibility). In computing, reliability reduces and recovers from error, while flexibility is the system preferences that allow customization. In people, reliability is the body fixing a cell "error" that might cause cancer, while the brain learning illustrates flexibility.
3. Effector constituents manage environment actions, so can be designed to maximize effects (functionality) or minimize resource use (usability). In computing, functionality is the menu functions, while usability is how easy they are to use. In people, functionality gives muscle effectiveness and usability is metabolic efficiency.
4. Receptor constituents manage signals to and from the environment, so can be designed to open communication channels (connectivity) or close them (privacy). Connected computing can download updates or chat online, while privacy is the power to disconnect or log off. In people, connectivity is conversing and privacy is the legal right to be left alone. In nature, privacy is camouflage; the military calls it stealth.
Every system is somehow created, which takes effort, whether for applications that are built or organisms that are born. A system's ability to reproduce is important but outside the current scope, as apart from virus programs few computer systems do this. These general system criteria map well to current terms (Table 24.6). They apply at any level, but as what is exchanged changes, so do the names used:

1. Hardware systems exchange energy. So "functionality" is power, i.e. hardware with high CPU cycle or disk read-write rates. "Usable" hardware uses less power for the same result, e.g. mobile phones that last longer. Reliable hardware is rugged enough to work if you drop it, and flexible hardware is mobile, still working if you move around, i.e. change environments. Secure hardware blocks physical theft, e.g. by laptop cable locks, and extendible hardware has ports for peripherals to be attached. Connected hardware has wired or wireless links, and private hardware is Tempest-proof, i.e. it doesn't physically leak energy.

2. Software systems exchange information. Functional software has many ways to process information, while "usable" software uses less CPU processing ("lite" apps). Reliable software avoids errors or recovers from them quickly. Flexible software is operating-system (platform) independent. Secure software can't be corrupted or overwritten. Extendible software can access OS program library calls. Connected software has protocol "handshakes" to open read/write channels. Private software can encrypt information so others can't see it.




3. HCI systems exchange meaning, including ideas, feelings and intents. In functional HCI the human-computer pair is effectual, i.e. meets the task goal. Usable HCI requires less intellectual, affective or conative33 effort, i.e. is intuitive. Reliable HCI avoids or recovers from unintended user errors by checks or undo choices - the web Back button is an HCI invention. Flexible HCI lets users change language, font size or privacy preferences, as each person is a new environment to the software. Secure HCI avoids identity theft, e.g. by user passwords. Extendible HCI lets users use what others create, e.g. mash-ups and third-party add-ons. Connected HCI communicates with others, while private HCI includes not getting spammed or being located on a mobile device. Each level applies the same ideas to a different system view. The community level is covered later.

33.  Conative refers to the will, affective to the emotions and intellectual to thoughts. All are cognitions that form from perceptions.

GSR Criterion | Related Criteria
Functionality | Effectualness, capability, usefulness, effectiveness, power, utility.
Usability | Ease of use, simplicity, user friendliness, efficiency, accessibility.
Extendibility | Openness, interoperability, permeability, compatibility, standards.
Security | Defense, protection, safety, threat resistance, integrity, inviolability.
Flexibility | Adaptability, portability, customizability, plasticity, agility, modifiability.
Reliability | Stability, dependability, robustness, ruggedness, durability, availability.
Connectivity | Networkability, communicability, interactivity, sociability.
Privacy | Tempest proofing, confidentiality, secrecy, camouflage, stealth, encryption.

Table 24.6:  Related performance criteria.

Figure 24.12:  Mozilla/Firefox add-ons. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.




24.2.8  A general system design space
The above gives the general system design space of Figure 24.13, where for a particular system:
- The area is the overall performance requirements met, i.e. performance in general.
- The shape is the requirement weights, defined by the environment.
- The lines are design requirement "tensions" (see below).

Figure 24.13:  A general system performance design space. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.
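The "area" of the design space can be made concrete: plot the eight requirement scores as spokes of a radar chart and take the polygon's area. A minimal sketch, with invented scores and equal weights (both are assumptions, not the chapter's data):

```python
import math

# Hypothetical 0..1 scores on the eight general requirements.
scores = [0.8, 0.6, 0.5, 0.7, 0.4, 0.9, 0.6, 0.5]

def radar_area(scores):
    """Area of the radar-chart polygon: adjacent spokes are 2*pi/n apart,
    so each neighbouring pair adds a triangle 0.5 * r1 * r2 * sin(2*pi/n)."""
    n = len(scores)
    angle = 2 * math.pi / n
    return 0.5 * math.sin(angle) * sum(
        scores[i] * scores[(i + 1) % n] for i in range(n))

print(f"Overall performance 'area': {radar_area(scores):.3f}")
```

The "shape" is then the score vector itself: two systems can have the same area but very different shapes, each suited to a different environment.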




This space has active requirements that enhance opportunities34 and passive ones that reduce risks35, where taking opportunities is as important to performance as reducing risk (Pinto, 2002). Criteria weights vary by environment, so security matters more in threat environments and extendibility pays off more in opportunity environments (Whitworth et al, 2008). These performance criteria are general because they have no inherent contradictions, e.g. a bullet-proof plexiglass room can be secure but not private, while encrypted files can be private but not secure. Reliability provides services but security denies them (Jonsson, 1998), so a system can be reliable but insecure, unreliable but secure, unreliable and insecure, or reliable and secure. Functionality needn't deny usability (Borenstein and Thyberg, 1991), nor connectivity privacy. Cross-cutting requirements (Moreira et al, 2002) can be reconciled by innovative design if they are logically modular, so one can get both.

24.2.9  Design tensions and innovations
A design tension is when satisfying one design requirement denies another. Applying different requirements to the same constituent gives a design tension, e.g. castle walls that protect against attack but need a gate to receive supplies, or computers impenetrable to virus attack that need plug-in software hooks. These contrasts are not anomalies, but built into the nature of systems. Design tensions begin slack for new systems, but increase as performance improves. Eventually, like stretched rubber bands, the system becomes so "tight" that advancing one requirement can easily pull back another, or more than one. In the version 2 paradox, development effort spent improving a successful product can decrease its performance! To expand a performance web, one can't just pull one corner, e.g. in 1992, Apple CEO Sculley introduced the handheld Newton, claiming that portable computing was the future. We now know he was right, yet in 1998 Apple dropped the line due to poor sales. The Newton's small screen made data entry hard, i.e. the portability gain was nullified by a usability loss. Only when Palm's Graffiti language improved handwriting recognition did the personal digital assistant market revive. Sculley's innovation was only half the answer - the other half was resolving the usability problems created by increasing flexibility. Innovative design must meet specialist requirements and resolve design tensions.

34.  Functionality, flexibility, extendibility and connectivity.
35.  Security, reliability, privacy and usability.

24.2.10  Project development

Constituent | Code | Requirement | Analysis | Testing
Actions | Application | Functionality | Task | Business
Actions | Interface | Usability | Usability | User (Beta)
Interactions | Authorization | Security | Threat | Penetration
Interactions | Plug-ins | Extendibility | Standards | Compatibility
Changes | Error recovery | Reliability | Stress | Load
Changes | Preferences | Flexibility | Contingency | Situation
Interchanges | Network | Connectivity | Channel | Communication
Interchanges | Rights | Privacy | Legitimacy36 | Community

Table 24.7:  Project development specializations by constituent.

36.  See the next section. Legitimacy analysis specifies social interaction requirements.




The days when programmers could list a system's functions and then just code them are gone, if they ever existed. Today, design involves not only many specialties but also their interactions. A system development could involve up to eight specialists, with distinct requirements, analysis and testing (Table 24.7). Smaller systems might have four (actions, interactions, changes and interchanges), two (opportunities and risks) or just one (performance). Design tensions are reduced by agile methods, where specialists talk to each other and to stakeholders, but system development also needs innovators: people who cut across specialist boundaries to resolve cross-cutting design tensions.

24.2.11  Discussion questions
Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.
1. What three widespread computing expectations didn't happen? Why not? What three unexpected computing outcomes did happen? Why?

2. What is a system requirement? How does it relate to system design? How do system requirements relate to performance? Or to system evaluation criteria? How can one specify or measure system performance if there are many factors?
3. What is the basic idea of general systems theory? Why is it useful? Can a cell, your body and the earth all be considered systems? Describe Lovelock's Gaia Hypothesis. How does it link to both general systems theory and the film Avatar? Is every system contained within another system (environment)?
4. Does nature have a best species? If nature has no better or worse, how can species evolve to be better? Or if it has a better and worse, why is current life so varied instead of just the "best"?37 Does computing have a best system? If it has no better or worse, how can it evolve? If it has a better and worse, why is current computing so varied? Which animal actually is "the best"?
5. Why did the electronic office increase paper use? Give two good reasons to print an email in an organization. How often do you print an email? When will the use of paper stop increasing?
6. Why wasn't social gaming predicted? Why are MMORPG human opponents better than computer ones? What condition must an online game satisfy for a community to "mod" it (add scenarios)?
7. In what way is computing an "elephant"? Why can't it be put into an academic "pigeon-hole"?38 How can science handle cross-discipline topics?
8. What is the first step of system design? What are those who define what a system should do called? Why can't designers satisfy every need? Give examples from house design.
9. Is reliability an aspect of security, or is security an aspect of reliability? Can both be true? What are reliability and security both aspects of? What decides which is more important?
10. What is a design space? What is the efficient frontier of a design space? What is a design innovation? Give examples (not a vacuum cleaner).
11. Why did the SGML academic community find Tim Berners-Lee's WWW proposal of low quality? Why didn't they see its performance potential? Why did Microsoft also find it "of no business value"? How did the WWW eventually become a success? Given that business and academia now use it extensively, why did they reject it initially? What have they learned from this lesson?
12. Are NFRs like security different from functional requirements? By what logic are they less important? By what logic are they equally critical to performance?
13. In general systems theory (GST), every system has what two aspects? Why doesn't decomposing a system into simple parts fully explain it? What is left out? Define holism. Why are highly holistic systems also individualistic? What is the Frankenstein effect? Show a "Frankenstein" web site. What is the opposite effect? Why can't "good" system components just be stuck together?
14. What are the elemental parts of a system? What are its constituent parts? Can elemental parts be constituent parts? What connects elemental and constituent parts? Give examples.
15. Why are constituent part specializations important in advanced systems? Why do we specialize as left-handers or right-handers? What about the ambidextrous?
16. If a car is a system, what are its boundary, structure, effector and receptor constituents? Explain its general system requirements, with examples. When might a vehicle's "privacy" be a critical success factor? What about its connectivity?
17. Give the general system requirements for a browser application. How did its designers meet them? Give three examples of browser requirement tensions. How are they met?
18. How do mobile phones meet the general system requirements, first as hardware and then as software?
19. Give examples of usability requirements for hardware, software and HCI. Why does the requirement change by level? What is "usability" on a community level?

37.  Success in evolutionary terms is what survives. Over 99% of all species that ever existed are now extinct, so every species existing today is a great success. Bacteria and viruses are as evolved as us in evolutionary terms.
38.  A pigeon-hole is a small space used to roost pigeons.




20. Are reliability and security really distinct? Can a system be reliable but insecure, unreliable but secure, unreliable and insecure, or reliable and secure? Give examples. Can a system be functional but not usable, not functional but usable, neither functional nor usable, or both functional and usable? Give examples.
21. Performance is taking opportunities and avoiding risks. Yet while mistakes and successes are evident, missed opportunities and mistakes avoided aren't. Explain how a business can fail by missing an opportunity, with WordPerfect vs. Word as an example. Explain how a business can succeed by avoiding risks, with air travel as an example. What happens if you only maximize opportunity? What if you only reduce risks? Give examples. How does nature both take opportunities and avoid risks? How should designers manage this?
22. Describe the opportunity-enhancing general system performance requirements, with an IT example of each. When would you give them priority? Describe the risk-reducing performance requirements, with an IT example of each. When would you give them priority?
23. What is the version 2 paradox? Give an example, from your experience, of software that got worse on an update. You can use a game example. Why does this happen? How can designers avoid it?
24. Define extendibility for any system. Give examples for a desktop computer, a laptop computer and a mobile device. Give examples of software extendibility for email, word processing and game applications. What is personal extendibility? Or community extendibility?
25. Why is innovation so hard for advanced systems? What stops a system being both secure and open? Or powerful and usable? Or reliable and flexible? Or connected and private? How can such diverse requirements ever be reconciled?
26. Give two good reasons to have specialists in a large computer project team. What happens if they disagree? Why are cross-disciplinary integrators also needed?




24.3  Part 3: Socio-technical design
"Let the social define the technical"
Social ideas like freedom seem far removed from computer code, but computing today is social. That technology designers aren't ready, have no precedent or don't recognize social needs is irrelevant. Like a baby being born, online society is pushing forward, ready or not. And like new parents, socio-technical designers are causing it, whether they want to or not. As the World Wide Web's creator observes:

“... technologists cannot simply leave the social and ethical questions to other people, because the technology directly affects these matters ” -- Berners-Lee, 2000: p124

The online reality is that how people interact in socio-technical systems depends entirely on the software.

24.3.1  Designing work management
The term socio-technical was first introduced by the Tavistock Institute39 in the late 1950s to oppose Taylorism - reducing jobs to efficient elements on assembly lines in mills and factories. Community level needs gave work-place management ideas like these (Porra and Hirschheim, 2007):
1. Congruence. A process must match its objective - democratic results need democratic means.

39.  see http://www.strategosinc.com/socio-technical.htm




2. Minimize control. Give employees clear goals, but let them decide how to achieve them.
3. Local control. Let those experiencing a problem change the system, not absent managers.
4. Flexibility. Without "extra" skills to handle change, specialization will precede extinction.
5. Boundary innovation. Innovate at the boundaries, where work passes between groups.
6. Transparency. Give information first to those it affects, e.g. give work rates to workers.
7. Evolution. Work system development is an iterative process that never stops.
8. Lead by example. Chinese saying: "If the General takes an egg, his soldiers will loot a village."40
9. Support human needs. Work that lets people learn, choose, feel and belong gives loyal staff.
In computing, this became a call for the ethical use of technology. Yet social needs apply to technology design as well as to work management: technology that mediates social interactions must also satisfy social needs. In the industrial revolution, "dark satanic mills" enslaved people, so technology was the enemy. Yet people ran those factories. It was the rich oppressing the poor, as always, with machines just letting them do it better. Technology is an effect magnifier, i.e. it isn't in itself good or evil. The people of nineteenth-century Britain rejected slavery41 but embraced car and phone technologies. In today's information revolution we "love" technology. It is on the other side of the class war, as Twitter, Facebook and YouTube support the Arab Spring. Yet the core socio-technical principle is the same:

“To participate in a market economy, to be willing to ship goods to distant destinations and to invest in projects that will come to fruition or pay dividends only in the future, requires confidence, the confidence that ownership is secure and payment dependable. ... knowing that if the other reneges, the state will step in...” -- Mandelbaum, 2002: p272

In the middle ages, democracies weren't just unthinkable, they were also unworkable. Freeing people who aren't ready just gives anarchy and a return to autocracy, as the French revolution gave the Terror and then the Emperor Napoleon42. Yet America and England somehow got democracy, and now it is unclear why our predecessors ever settled for less. Democracies out-produce autocracies, as free people do more, and online is no different (Beer and Burrows, 2007). Communities perform by improving social interactions, which happens when citizens do what they should - not what they can.

40.  While Steve Jobs worked for $1 per year, other CEOs take all they can get - simply because they can.
41.  The industrial revolution brought the feudalism myth to a head, as socialism and communism fought class slavery. Last century's technology brought the myth of world domination to a head, as a worldwide peace movement fought war. The myth today's information revolution challenges is perpetual profit, the fantasy of getting something for nothing that drove Enron, Worldcom and the sub-prime meltdown. We laugh at myths of perpetual youth or perpetual motion, yet today seek perpetual profit, which is equally impossible.
42.  A community can be governed in various ways. Autocracy is control by one person, aristocracy is control by an elite, plutocracy is control by the rich, democracy is control by all the citizens and anarchy is no-one in charge.

24.3.2  Social requirements
One can't design socio-technology in a social vacuum. Fortunately, while virtual society is new, people have been socializing for thousands of years. We know that fair communities prosper but corrupt ones don't (Eigen, 2003). Social inventions like laws, fairness, freedom, credit and contracts were bought with blood and tears (Mandelbaum, 2002), so why start anew online? Why reinvent the social wheel in cyber-space (Ridley, 2010)? Why re-learn electronically what we already know physically, if the social level in both cases is the same?
When nuclear technology magnified the physical power of war, humanity had a choice: to destroy itself physically by nuclear holocaust, or not. We didn't destroy ourselves by choice, not by technology, which just upped the ante. As the new bottle of information technology fills with the old wine of society, the stakes are raised again. Today's information revolution vastly increases the power to gather, store and distribute information, for good or ill (Johnson, 2001). We can be hunter-gatherers of the information age or an online civilization (Meyrowitz, 1985). Yet a stone-age society with space-age technology isn't a good mix.
In general, we are "environment blind". We don't see social environments, not because they are too far away but because they are too close. As a fish is the last to see water, or a bird the air, so we can't see social environments. Yet if technology is to support civilization, it must specify civilization's requirements. Computing can't implement what it can't specify. We live in social environments every day, but struggle to specify them43, e.g. a shop-keeper swipes a credit card with a reading device designed not to store data like the credit card number or PIN. It is designed to the social requirement that shopkeepers don't steal from customers, even if they can. Without this, credit would collapse, and a social failure, or depression, can be worse than a natural disaster. In sum, credit card readers support social trust by design. Likewise, if online systems take and sell customer data like home address and phone number for advantage, users will lose trust, and either refuse to register at all or register with fake data, like "123 MyStreet, MyTown, NJ" (Foreman and Whitworth, 2005). The key to online privacy is not storing data. To say it will never be revealed isn't good enough, as companies can be forced by governments or bribed by cash to reveal data. One can't be forced or bribed to give data one doesn't have. The best way to guarantee online trust is not to store unneeded information in the first place44.

43.  In general, entities are environment blind, e.g. fish don't see the sea or birds the air.
44.  Trying to gather all the information you can is information greediness.
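The "don't store what you don't need" principle translates directly into code. A minimal sketch of a card-reader record that keeps only what a receipt requires (illustrative only; real payment systems follow standards such as PCI DSS, which mandates masking the card number):

```python
def receipt_record(card_number: str, amount: float) -> dict:
    """Keep only the masked card tail needed for the receipt; the full
    number and the PIN are never stored, so they can't be revealed."""
    return {
        "card": "**** **** **** " + card_number[-4:],  # last 4 digits only
        "amount": amount,
    }

print(receipt_record("4111111111111111", 19.95))
# {'card': '**** **** **** 1111', 'amount': 19.95}
```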

24.3.3  The socio-technical gap

Figure 24.14:  The socio-technical gap. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Socio-technical design is the application of community requirements to people, software and hardware. Purely technical design gives a socio-technical gap (Figure 24.14) between what technology supports and what people want (Ackerman, 2000), e.g. designing email to let anyone message anyone without permission gave the spam problem. Filters help on a personal level, but transmitted spam as a system problem has never stopped growing. While inbox spam is constant, due to filters, transmitted spam grew from 20% to 40% of all email in 2002-2003 (Weiss, 2003), to 60-70% in 2004 (Boutin, 2004), to 86.2-86.7% of the 342 billion emails sent in 2006 (MAAWG, 2006; MessageLabs, 2006), to 87.7% in 2009 and 89.1% in 2010 (MessageLabs, 2010). A 2004 prediction that within a decade over 95% of all emails transmitted by the Internet would be spam is coming true (Whitworth and Whitworth, 2004). Filters address spam as a user problem, but it is really a community problem. Transmitted spam uses Internet processing, bandwidth and storage whether users behind their filter walls see it or not. Only socio-technology can resolve social problems like spam, because in the "spam wars" technology helps both sides, e.g. image spam can bypass text filters, AI can solve CAPTCHAs45, botnets can harvest web site emails, and zombie sources can send emails. So spam isn't going away any time soon (Whitworth and Liu, 2009a). Aliens visiting our planet might suppose our email system was built for machines, as most of the messages it transmits go from one computer (spammer) to another computer (filter), untouched by human eye. This result is not just bad luck. A communication technology isn't a Pandora's box, unknown until opened, because we built it. Spam happens when we build technologies instead of socio-technologies.

45.  CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart.
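The community cost is easy to estimate from the figures above. A back-of-envelope sketch using the cited 2006 numbers (the per-message size is our assumption):

```python
# Cited above: ~342 billion emails sent in 2006, ~86.7% of them spam.
total_2006 = 342e9
spam_share = 0.867

spam_msgs = total_2006 * spam_share
print(f"Transmitted spam, 2006: ~{spam_msgs / 1e9:.0f} billion messages")

# Assuming, say, 10 KB per message (illustrative), the carried load is:
print(f"Bandwidth/storage order: ~{spam_msgs * 10e3 / 1e15:.1f} petabytes")
```

All of that processing, bandwidth and storage is spent whether or not any human ever sees the messages.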

24.3.4  Legitimacy analysis
In politics, a legitimate government is seen as rightful by its citizens, i.e. accepted. In contrast, illegitimate governments need force of arms and propaganda to stay in power. By extension, a legitimate interaction is one accepted by the parties involved, who freely repeat it, e.g. fair trade. Legitimacy has been specified as fairness plus public good (Whitworth and de Moor, 2003). Physical and online citizens prefer legitimate communities because they perform better socially.




In physical society, legitimacy is maintained by laws, police and prisons that punish criminals. Legitimacy is the human concept by which judges create new laws and juries decide never-before-seen cases. The higher-affects-lower principle applies here: communities engender human ideas like fairness, which generate informational laws, which are used to govern physical interactions. Communities affect people to create rules that direct acts to benefit the community, i.e. higher level goals drive lower level operations to improve system performance. Doing this online, applying social principles to technical systems, is socio-technical design. Conversely, over time laws get a "life of their own" and the tail wags the dog, e.g. copyright laws designed to encourage innovators are now just a tool to perpetuate corporate profit (Lessig, 1999)46. Unless continuously "re-invented" at the human level, laws inevitably decay. Today's online society is a social evolution as well as a technical one. The social Internet is a move to community goals like service and freedom, so to reduce it to a hawker marketplace would be its devolution. So let the old ways of business, politics and academia be changed by the Internet, not the other way around. One can't just "stretch" physical laws into cyberspace (Samuelson, 2003) because they often:
1. Don't transfer (Burk, 2001), e.g. what is online "trespass"?

2. Don't apply, e.g. what law applies to online "cookies" (Samuelson, 2003)?
3. Change too slowly, e.g. laws change in years but code changes in months.
4. Depend on code (Mitchell, 1995), e.g. anonymity means actors can't be identified.
5. Have no jurisdiction, e.g. U.S. law applies to U.S. soil but cyber-space isn't "in" America.

46.  Disney copyrighted public domain stories like Snow White that they didn't create, then stopped others using them.




Figure 24.15:  Legitimacy analysis. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

There are no shortcuts here, as to repeat the past isn't progress. To get legitimacy online we must build it in again, because online, code is law (Lessig, 1999). The software that mediates online interaction has control, e.g. any application could upload any hard drive file on your computer to any server. In itself, code could create a perfect online police state, where everyone is monitored, all "wrong" acts are punished and all "undesirables" are excluded, i.e. a tyranny of code. Yet code is also an opportunity to be better than the law, based on legitimacy analysis (Figure 24.15). Physical justice, by its nature, operates after the fact, i.e. a person must commit a crime to be punished. With appeals, this can take years, and justice delayed is justice denied. In contrast, code, as the online environment itself, acts right away. It can be designed to enable social acts, not just deny antisocial ones. Socio-technical systems that are legitimate by design perform better socially (Whitworth and de Moor, 2003). Saying that technology supporting social requirements, like fairness, improves system performance is the radical core of socio-technical design. So is every STS designer an application law-giver, like Moses coming down from the mountain with tablets of code instead of stone? Not quite, as STS directives apply to software, not people. Telling people to choose rightly is the job of ethics. The job of right code, like right laws, is to allow what is legitimate, not to enforce choices on people. Socio-technical design is socializing technology, not technologizing society: the higher directing the lower, not the reverse.




To achieve online what laws do offline, STS developers must re-invoke legitimacy for each application. It seems hard but every citizen on jury service already interprets the “spirit of the law” for complex physical cases. STS design is the same but for application cases. That the result isn’t perfect doesn’t matter. Cultures have different laws and ethics but all have some laws and ethics, because some social requirements are always better than none. Yet to build a society as one does a house is wrong. Social engineering by coercion, propaganda or indoctrination is a few enforcing their will on the many. Yet a community by definition is many people working together, so an elite few enslaving the rest isn’t a community. To socially engineer a community is to treat people like bricks in a wall. It denies freedom and accountability, which are social requirements. Communities can’t be “built” because their parts are actors. They just emerge as people interact.

24.3.5  The web of social performance requirements
Communities interact with others, using spies as "eyes", diplomats to communicate, engineers to effect, soldiers to defend, intellectuals to adapt and traders to extend, but a community can also interact with itself, to communicate or synergize, as follows:
1. Productivity. Previously, functionality was what a system can do. What communities do is produce bridges, dams, art, science, etc. This productivity is based on citizen competence, which education systems increase. Help and FAQ systems do the same online.

2. Synergy. Previously, usability was less effort per result. Communities do this by synergy, by citizens giving to others47. Public goods like roads and hospitals are specialists giving what they do well to all. If everyone in a community specializes, and offers their services to others, all get more for less. Wikipedia is synergy, as many give a little knowledge so all get a lot.
Productivity and synergy are in tension, as one invokes competition and the other cooperation48. One improves what citizen "parts" do, the other how they interact. Service by free-good citizens reconciles them, as free citizens raise productivity and good citizens increase synergy. Free-goodness combines the invisible hand of the market and the visible hand of public good (Whitworth and Whitworth, 2010).
1. Freedom. Previously, flexibility was changing a system to fit the environment. A community gains flexibility by giving citizens freedom, i.e. the right to not be a slave49. It allows local resource control to increase performance, as do decentralized network protocols like Ethernet.
2. Order. Previously, reliability was a system's ability to survive internal part failure or error. A community gets reliability by order: that citizens, by rank, role or job, know and do their duty. Some cultures set up warrior or merchant castes to achieve this. Online order is also by roles, e.g. Sysop or Editor.
Freedom and order are in tension, as freedom has no class but order does. Democracy merges freedom and order, as free citizens select an order hierarchy, not just of President or Prime Minister, but for all positions. Democracy is rare online, but Slashdot uses it.
1. Ownership. Previously, security was a system's defense against outside takeover. A community is secure internally by ownership, e.g. to "own" a house guarantees that if another takes it, the community will step in50. Online, ownership works by access authorization.

47.  In synergy, everyone gives so everyone gets, while if everyone takes, everyone is taken from.
48.  The problem with competition is that if you give peanuts you get monkeys, but if you give honey you get wasps; the problem with cooperation is that helping others doesn't help them help themselves.
49.  Physical slavery is tyranny, informational slavery is propaganda and psychological slavery is political correctness, where the few tell the many how to think. Wanting to live others' lives is a sign of emptiness.
50.  Note: state ownership, as in communism, is still ownership. Real ownership can be given away. When a state gives ownership away, one gets freedom.




2. Openness. Previously, extendibility was a system using what is outside itself. A community doing this was America's invitation to the world: "Give me your tired, your poor, your huddled masses yearning to breathe free." A society is open internally if any citizen can achieve any role by merit, as Abraham Lincoln, born in a log cabin, became US president. The opposite is nepotism or cronyism, giving jobs to family or friends. If community advancement is by who you know, not what you know, performance reduces. Open source systems like SourceForge let people advance by merit.
Ownership and openness are in tension, as the right to keep out denies the right to go in. Fairness can reconcile public access and private control. Offline, fairness is based on justice systems; online, it is supported by code.
1. Connectivity. Previously, connectivity was the ability to open communication channels. Communities connect internally by media like TV, newspapers, radio, and now the Internet. A centrally controlled press is propaganda, while a free press lets everyone put a point of view.

2. Privacy. Previously, privacy was a citizen's right to control information about themselves. It is the ownership of self-data, not secrecy, so it includes the right to make personal data public.
Connectivity and privacy are in tension, as opening a channel to connect can reveal personal data. Transparency illustrates a combination, as public officials are entitled to privacy except when acting for the community. Transparency is a citizen's right to see governance done on their behalf, including money spent and privileges given.
In summary, a community must increase citizen competence to be productive, increase trust and deny crime to get synergy, give freedoms to adapt and innovate, establish order to define responsibilities, allocate ownership to prevent property conflicts, be open to talent outside and inside51, be connected to generate agreement, and grant privacy to relieve citizens from the pressure of social interaction. All these increase social performance and prosperity.

51.  Sexism and racism are community level losses. If women can't work, half the population can't add to its productivity. If a race, like black people, is excluded, so are its contributions.

24.3.6  Synergy
Social synergy arises when people work to create each other's outcomes. It isn't just people adding efforts, say to lift a heavy log together. Positive synergy is the majority adding value to others; negative synergy is reducing it, e.g. war. Trade is mutual synergy, when my acts give your benefits, e.g. a fisherman who trades fish for a farmer's grain turns excess into value. Each gives an extra they don't really need for a deficit they lack. Modern prosperity arises when specialists share, and specialists produce nearly everything we use52. Synergy is even greater for information, as one can give information to others without losing it oneself.
As connected communities grow and work at higher levels, they produce more but synergize much more. Productivity adds with size but synergy multiplies, because it depends on the number of interactions, not the number of citizens. Synergy is the key to prosperity in large connected societies (Wright, 2001) because it "expands the pie", making every slice larger. In contrast, zero-sum gains like war expand one slice at another's expense. Communities that generate synergy are "civilized".
Game theory, the formal calculation of personal gain and loss in social interactions, points out the fly in this social ointment. If my acts make your gain and yours make mine, what if I take from you and give nothing back? In fact, on the personal level, it always pays to defect, e.g. for a seller to give shoddy goods or for a buyer's check to bounce. But if the cheated "sucker" doesn't repeat the interaction, both lose their synergy gains, so cheaters destroy their own success. Synergy is destroyed by anti-social defections, or crime.
Social dilemmas are common in society, e.g. social loafing, the volunteer dilemma and the tragedy of the commons53. The predicted equilibrium is that all parties defect (Poundstone, 1992), i.e. that synergy is unstable. The mystery isn't why people don't trust but why they do. The answer proposed here is that people evolve a community sense when it doesn't pay to defect, e.g. a community overgrazing its commons loses a valuable resource forever54. Social dilemmas can't be solved at the personal level, as an honest person among cheats is just a sucker. Only community level action changes the social unit and the gain-loss equation, as explained in detail elsewhere (Whitworth and Whitworth, 2010).
As people, we find it hard to see social acts on a community level. A theft that is "good" for a robber is "bad" for the victim, but for a community, theft is always bad. Why spend thousands of dollars in police, court and prison costs to prosecute a hundred dollar theft? For a community, it is a good deal, as crimes that succeed create copycats. The main reason people cheat is that "everyone is doing it" (Callahan, 2004), so one defection can snowball into a social collapse, i.e. no synergy55. Giuliani's clean-up of crime in New York56 cost millions, but the community synergy gain was billions.

52.  How many of the objects you use each day do you make? How many are even produced in your country?
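The arithmetic behind "productivity adds but synergy multiplies" is just link counting: n citizens have n(n-1)/2 possible pairwise interactions. A minimal sketch (the per-citizen output and per-link gain are invented for illustration):

```python
def productivity(n, output_per_citizen=1.0):
    """Productivity grows linearly with community size."""
    return n * output_per_citizen

def synergy(n, gain_per_link=0.01):
    """Synergy grows with the number of interactions, n*(n-1)/2."""
    return n * (n - 1) / 2 * gain_per_link

for n in (10, 100, 1000):
    print(n, productivity(n), synergy(n))
# 10x more citizens gives 10x the productivity but ~100x the synergy
```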

53.  In social loafing people let the rest of the group do the work, e.g. people pull a rope less when with others than when alone. In the volunteer dilemma, people don't volunteer to help a group because someone else will. In the tragedy of the commons, farmers overgraze and destroy a commons because if they don't, others will.
54.  The tragedy of the commons exemplifies whaling, forest and wildlife conservation issues. A community destroying a common resource is like a farmer killing all his cows to eat. It is stupidity, not profit.
55.  For example, if a fast-food restaurant is kept clean people drop less rubbish. If it is messy, they drop more.
56.  By Wilson's "Broken Windows Theory".



Purpose | Examples | Synergy | Defection
Communicate | Email, Chat, ListServ, IM | Shared communication: People send more useful messages. | Spam: Spammers waste others' time, giving spam filters.
Learn | Moodle, Blackboard | Shared learning: Students help others learn, reduce bottlenecks. | Plagiarism: Student copying gives systems like Turnitin.com.
Know | Wikipedia, Tiddlywiki | Shared knowledge: Taps group knowledge, not just a few experts. | Trolls: Wikipedia's monitors fight knowledge "trolls".
Friend | Facebook, Myspace | Relationships: People keep in touch with friends and family. | Predation: Social networks report and banish predators.
Keep current | Digg, Del.icio.us | Shared bookmarks: Social bookmarks let people see trends. | Advocates: Who "digg" a web site they own.
Play | Second Life, Sims | Shared play: Avatars experience things impossible in reality. | Bullies/Thieves: Newbies robbed by veterans need "safe" areas.
Trade | E-Bay, Craig's List, Amazon | Item trading: People from anywhere exchange more goods. | Scams: Scams are reduced by online reputation systems.
Work | Monster | Work trading: People find and offer work more easily. | Faking: Padded CVs and fake job offers need reputation systems.
Download | Webdonkey, BitTorrent | Shared downloading: Groups share processing downloads. | Piracy: Prosecutions by society's copyright laws.
Publish | Flickr, YouTube | Shared experience: People share photos and videos. | Offensiveness: Editors remove items that offend.
Advice | Help boards, AnandTech | Technical advice: People who have solved problems help others. | Confusers: People who ask questions before checking old ones are scolded.
Discuss | Slashdot, Boing Boing | Shared views: People comment and read others' opinions easily. | Caviling: Karma systems deselect those who just "peck" new ideas.
Follow | Twitter | Forms a group view by linking leaders and followers. | Identity theft: A leader's online persona can be hijacked.

Table 24.8:  Socio-technical synergies and defections.




Socio-technical systems not only deny defections but also enable synergies (Table 24.8). Forums like AnandTech illustrate this: if anyone in a group solves a problem, everyone can get the answer. The larger the group, the more likely someone can solve in seconds a problem you have struggled with for days. "Same again" functions let Amazon readers use the experiences of others, finding books bought by those who bought the book they are looking at now. Wikipedia users correct errors of fact and supply references and examples to everyone. Synergy reduces when citizens work to personal requirements like "Take what you can and give nothing back". Synergy increases when citizens follow community ethics like "Give unto others as you would have them give unto you". Personal ethics is community pragmatics, because without the former there is no social synergy, and without synergy there is no community prosperity. If synergy gains return to the people who generate them, the society will be stable. Previously, only heroes of art, science, music, politics or other fields gave to society. Today, socio-technology lets us all be "small heroes", giving back to a community that gives to us. The miracle of socio-technology is that people will help others for no personal gain whatsoever57.
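"Same again" functions can be sketched as simple co-purchase counting: recommend the items most often bought by people who bought this one. A toy illustration (the data and names are invented; this is not Amazon's actual algorithm):

```python
from collections import Counter

# Toy purchase histories (invented data).
baskets = [
    {"bookA", "bookB"},
    {"bookA", "bookB", "bookC"},
    {"bookA", "bookC"},
    {"bookB", "bookD"},
]

def bought_also(item, baskets, top=2):
    """Count co-purchases of `item` and return its most common partners."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(top)]

print(bought_also("bookA", baskets))  # ['bookB', 'bookC']
```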

57.  Social evolution requires personal evolution. Social health is the percentage of citizens who hand in a lost wallet. Without social health, self-service supermarkets would fail, and without self-service we would queue for hours for goods. Online communities, selected by the digital divide, have more social health, and so predict our social future.




24.3.7  Communication performance

Figure 24.16:  Linkage types (S = Sender, R = Receiver). Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.




Communication media transmit meaning between senders and receivers. Meaning is any change in a person's thoughts, feelings or intents. Communication performance is then the total meaning exchanged by a transmission, i.e. its sum human impact.
Richness. Part of communication performance is richness, the amount of meaning a message conveys. To see video as automatically richer than text confuses meaning richness with information bandwidth. Meaning is the impact on a person, so texting "I'm safe" can have more meaning58 than a multi-media marketing video. Hence video phones didn't immediately replace audio phones, and lean texting is still used. Media richness can thus be classified by the symbols that generate meaning, as follows:
1. Position. A single, static symbol, e.g. to raise one's hand.

2. Document. Many static symbols that form a pattern with meaning, as words form a sentence by syntax or as pixels form an object by gestalt principles. Documents are text or pictures.
3. Dynamic-media (Audio). A dynamic channel with multiple semantic streams, e.g. speech has tone of voice and content59. Music has melody, rhythm and timbre.
4. Multi-media (Video). Many dynamic channels, e.g. video is audio and visual channels. Face-to-face communication uses many sensory channels.
One expects richer media to have the potential to transfer more meaning.
Linkage. The meaning exchanged also depends on the number of senders and receivers, i.e. on linkage (Figure 24.16), which can be:
1. Interpersonal (one-to-one, two-way): Both parties can send and receive, usually signed.

58.  One may be overwhelmed to hear a loved one is safe, but completely ignore a sales video.
59.  A semantic stream is the meaning produced by human processing. A physical channel processed differently can have many semantic streams, e.g. if tone of voice and message content are different streams.

2. Broadcast (one-to-many, one-way): From one sender to many receivers, can be unsigned.
3. Matrix (many-to-many, two-way): Many senders to many receivers, usually unsigned.
As people have interpersonal communication, so communities communicate group-to-group by matrix communication. This most powerful linkage is when many send and many receive in one transmit operation. It combines one-to-many (broadcast) and many-to-one (merging) communication (Figure 24.16). Addressing an audience is one-to-many communication, applauding a speaker is many-to-one, and an audience applauding to itself is matrix communication. In the latter case, the group producing the clapping message also receives it. Matrix communication allows normative influence, so audiences can start and stop clapping together. A choir singing is matrix communication, so when choirs go off key, they usually do so together. Face-to-face groups use matrix communication, as body language and facial expressions convey everyone's position on an issue. A valence index, calculated from member position indicators, can predict a group discussion outcome as well as the words do (Hoffman and Maier, 1961). So online electronic groups can form social agreement using only anonymous, lean, many-to-many signals, with no rich information exchange or discussion (Whitworth et al, 2001). Community voting, as in an election, is a physically slow matrix communication that computers can speed up. Tag cloud, reputation system and social bookmark technologies all illustrate online support for matrix communication. If communication performance is richness and linkage, a tyranny bombarding citizens 24/7 with TV video propaganda is low linkage (one-to-many), while people talking freely via text blogs is high linkage (many-to-many), i.e. the latter may communicate more.
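A valence index of this kind reduces to the net balance of member position signals. A minimal sketch (the signal coding and decision threshold are our assumptions; see Hoffman and Maier, 1961 for the original measure):

```python
def valence_index(signals):
    """Net group position from lean many-to-many signals:
    +1 = for, -1 = against, 0 = undecided."""
    return sum(signals) / len(signals)

# An anonymous, lean, many-to-many poll of nine members:
signals = [+1, +1, 0, +1, -1, +1, 0, +1, -1]
v = valence_index(signals)
print(f"Valence {v:+.2f}:", "likely adopted" if v > 0.15 else "unresolved")
```

No rich discussion is exchanged, yet the group's position is visible to all, which is exactly the normative power of matrix communication.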

24.3.8  Communication media
Table 24.9 shows various communication media by richness and linkage, with electronic forms included, e.g. a phone call is interpersonal audio but a letter is interpersonal text. A book is a broadcast document, but radio is broadcast audio and TV is broadcast video. The Internet can broadcast documents (web sites), audio (podcasts) or videos (YouTube). Email allows two-way interpersonal text messages, while Skype adds two-way audio and video. Chat is few-to-few matrix text communication, as is instant messaging but with known people. Blogs are text broadcasts that also allow comment feedback. Online voting is matrix communication, as many communicate with many in one operation. Computers allow "anytime60, anywhere" communication for less effort, e.g. an email is easier to send than a posted letter. Lowering the message threshold means that more messages are sent (Reid et al, 1996). Email stores a message until the receiver can view it61, but a face-to-face message is ephemeral: it disappears if you aren't there to get it. Yet being unable to edit the message sent makes sender-state streams like tone of voice more genuine.

60.  Asynchronous communication like email lets senders ignore distance and time but synchronous communication like Skype doesn't. One can't call someone who is asleep. So while the world is flat (Friedman), the day is still round.
61.  For a physical network, asynchrony depends on the buffer capacity of its nodes.




Richness \ Linkage | Broadcast | Interpersonal | Matrix
Position | Footprint, Flare, Scoreboard, Scream, Smiley | Posture, Gesture, Acknowledgement, Salute | Show of hands, Applause, An election, Web counter, Karma system, Tag cloud, Online vote, Reputation systems, Social bookmarks
Document | Poster, Book, Web site, Blog, Online photo, News feed, Online review, Instagram, Twitter1 | Letter, Note, Email, Texting, Instant message, Social network2 | Chat, Twitter1, Wiki, E-market, Bulletin board, Comment system, Advice board, Social network2
Dynamic-media (Audio) | Radio, Loud-speaker, Record or CD, Podcast, Online music | Telephone, Answer-phone, Cell phone, Skype audio | Choir, Radio talk-back, Conference call, Skype conference call
Multi-media (Video) | Speech, Show, Television, Movie, DVD, YouTube video | Face-to-face conversation, Chatroulette, Video-phone, Skype video | Face-to-face meeting, Cocktail party, Video-conference, MMORPG, Simulated world

1.  Combines broadcast (text) and matrix (follow).
2.  Combines interpersonal and matrix.

Table 24.9:  Communication media by richness and linkage.

Electronic communication was expected simply to become richer, with video the anointed heir, but E-Bay's reputations, Amazon's book ratings, Slashdot's karma, tag clouds, social bookmarks and Twitter aren't rich at all. Table 24.9 shows that computer communication evolved by linkage as well as richness. Computer chat, blogs, messaging, tags, karma, reputations and wikis are all high linkage but low richness. Communication that combines richness and linkage is interface expensive, e.g. a face-to-face meeting has rich channels plus matrix communication, giving sender-state information and resolving real-time contentions, like people talking at once, by showing where others are looking. To do this online requires many video streams on a screen, but who then controls the interface? Does each person control their own, and ignore the rest, or does one person set a common interface? In audio-based tagging, a person speaking automatically makes their video central (Figure 24.17). The interface is common but it is group-directed, i.e. democratic. Gaze-based tagging is the same, except that when people look at a person their window expands, just as a link that many people use gets bigger. It is in effect a group-directed bifocal display (Spence and Apperley, 2012). Only when matrix communication is combined with media richness will online meetings start to match face-to-face ones.
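Audio-based tagging of the kind in Figure 24.17 reduces to a simple group-directed rule: whoever's microphone level is highest gets the central tile. A minimal sketch (the tile sizes and silence threshold are invented):

```python
def layout(audio_levels, min_level=0.1):
    """Make the loudest participant central, everyone else small.
    audio_levels: participant -> current microphone level (0..1)."""
    speaker = max(audio_levels, key=audio_levels.get)
    if audio_levels[speaker] < min_level:   # nobody speaking: equal tiles
        return {p: "medium" for p in audio_levels}
    return {p: ("central" if p == speaker else "small") for p in audio_levels}

print(layout({"Ann": 0.7, "Bob": 0.2, "Cem": 0.05}))
# {'Ann': 'central', 'Bob': 'small', 'Cem': 'small'}
```

Gaze-based tagging would swap the microphone level for a count of how many participants are looking at each window: the interface is still common, but the group, not any one owner, directs it.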




Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Figure 24.17 A-B:  Audio based video tagging.




As video-phones are now easily available, why isn't video-phoning the norm? Perhaps it has disadvantages, like having to dress up for a call or check the background before calling Mum. Some may prefer text to video precisely because it is less rich, if they don't want to communicate. Computer communication isn't just about richness, because communication isn't just about the message: there is the sender and receiver too.

24.3.9  Semantic streams
Communication goals can be classified by level as follows (Whitworth et al, 2000):
1. Informational. The goal is to analyze information about the world and decide a best choice. This logical process is surprisingly fragile (Whitworth et al, 2000).
2. Personal. The goal is to form relationships, which are more reliable. Relating involves a turn-taking, mutual-approach process, to manage the emotional arousal evoked by the presence of others (Short et al, 1976)62.
3. Community. The goal is to stay "within" the group, as belonging to a community means being part of it, and so protected by it. Communities outlast friends.

62.  The Haka communicates between two Maori warbands or tribes. It conveys an intent as well as a state, see here: http://www.youtube.com/watch?v=c-lrE2JcO44




Goal | Influence | Linkage | Questions
Analyze (task information) | Informational influence, of the facts | Broadcast | What is right? What is best?
Relate (to other people) | Personal influence, of other people | Interpersonal | Who do I like? Who do I trust?
Belong (to a community) | Normative influence, of the community | Matrix | What is everyone doing? Am I "in" the group?

Table 24.10:  Human goals by influence and linkage.

Table 24.10 shows how each goal maps to influence and linkage. Whether online or off, we analyze information, relate to others and belong to communities, so we are subject to informational, personal and normative influence. The latter is based neither on logic nor friendship, e.g. patriotism is "my country right or wrong", friendships or not. An individual may be influenced by task information, friend recommendations or community norms via different semantic streams. Semantic streams are people processing a physical signal in different ways to generate different meanings, where one physical message can at the same time convey:
1. Message content. Symbolic statements about the literal world, e.g. a sentence.

2. Sender state. Sender psychological state, e.g. an agitated tone of voice. 3. Group position. Sender intent over many is a group intent, e.g. an election. Human communication is subtle becaus77e one message can have multiple meanings and people respond to many semantic streams at once, e.g. a person leaving a party may say “I had a good time”, but by tone imply the opposite. One can say


1688

Encyclopedia of Human-Computer Interaction

One can say "I AM NOT ANGRY!" in an angry voice.63 What is less obvious is that a message can also indicate a position, or intent to act, e.g. saying "I had a good time" in a certain tone or with certain body language can indicate an intention to leave a party. When a community acts, its citizens follow. In the general model (Figure 24.18), physical level signals generate many semantic streams and influences. While face-to-face interactions allow multi-stream communication, computing tends to pick one type, e.g. email text gives content but not sender state, and online voting gives position but not comments. Technologies that operate at the community level use matrix or group-to-group communication, such as:

a. The reputation ratings of Amazon and E-Bay are community-based product quality control, and Slashdot does the same for content, letting readers rate comments so viewers can filter out low quality ones.

b. Social bookmarks, like Digg and StumbleUpon, let users share link favorites, to see what the community is looking at.

c. Tag clouds increase the font size of links according to their frequency of use. As people walking in forests follow the paths trodden by others, so we can follow the "web-tracks" of others on a browser screen.

d. Twitter's follow function lets people see the leaders they like, and lets leaders broadcast ideas to followers.

The power of the computer is to allow matrix communication by millions and billions. What might a global referendum on current issues reveal? The Internet could tell us. As for the future, in an Internet dominated by personal "apps", multi-user apps are an obvious next step, as are applications supporting many semantic streams, like Facebook friend voting. Given recent advances in connectivity, we can expect a "bite-back" in privacy demands, i.e. more small groups or "tight" communities that are harder to get into.

63.  Some people may not process the sender state semantic stream, e.g. those with autism.

Figure 24.18:  Cognitive processes in communication. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

The World Wide Web is a system evolving. Its first level, an information library accessed by search tools, is well in place. The second, a medium for personal relations, is also well underway. The third, a civilized social environment, is the current and future challenge. Even a cursory study of Robert's Rules of Order will dispel any illusion that social dealings are simple (Robert, 1993). Socio-technology allows hundreds of millions of people to act together, but we are still figuring out what "Here Comes Everybody"64 means (Shirky, 2008). None of us is an island, as we link to mothers, fathers, brothers, sons, daughters, aunts, sisters, wives, grandmothers, uncles, grandfathers, husbands and friends, so when social others talk, even hardened dictators listen. This is a good thing.

64.  Question: ‘Where does an 800lb Gorilla sit when it comes to dinner?’ Answer: Anywhere it wants to. Communities are like this, but to act they must agree, which can take months, years or decades.



24.3.10  Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

1. Why can't technologists leave the social and ethical questions to non-technologists? Give examples of IT both helping and hurting humanity. What will decide, in the end, whether IT helps or hurts us overall?

2. Compare central vs. distributed networks (Ethernet vs. polling). Compare the advantages and disadvantages of centralizing vs. distributing control. Is central control ever better? Now consider social systems. Of the traditional socio-technical principles listed, which ones distribute work-place control? Compare the advantages and disadvantages of centralizing vs. distributing control in a social system. Compare governance by a tyrant dictator, a benevolent dictator and a democracy. Which type are most online communities? How might that change?

3. Originally, socio-technical ideas applied social requirements to workplace management. How has this evolved today? Why is it important to apply social requirements to IT design? Give examples.

4. Illustrate system designs that apply: mechanical requirements to hardware65; informational requirements to hardware66; informational requirements to software; personal requirements to hardware67; personal requirements to software; personal requirements to people; community requirements to hardware; community requirements to software; community requirements to people; community requirements to communities. Give an example in each case. Why not design software to mechanical requirements?

65.  As specified by engineering.
66.  As specified by computer science.
67.  As specified by psychology.



5. Is technology the sole basis of modern prosperity? If people suddenly stopped trusting each other, would wealth continue? Use the 2009 credit meltdown to illustrate your answer. Can technology solve social problems like mistrust? How can social problems be solved? Can technology help?

6. Should an online system gather all the data it can during registration? Give two good reasons not to gather or store non-essential personal data. Evaluate three online registration examples.

7. Spam demonstrates a socio-technical gap, between what people want and what technology does. How do users respond to it? In the "spam wars", who wins? Who loses? Give three other examples of a socio-technical gap. Of the twenty most popular third-party software downloads, which relate to a socio-technical gap?

8. What is a legitimate government? What is a legitimate interaction? How do people react to an illegitimate government or interaction? How are legitimacy requirements met in physical society? Why won't this work online? What will work?

9. What is the problem with "social engineering"? How about "mental engineering" (brainwashing)? Why do these terms have negative connotations? Is education brainwashing? Why not? Explain the implications of all this for STS design.

10. For a well known STS, explain how it supports, or not, the eight proposed aspects of community performance, with screenshot examples. If it doesn't support an aspect, suggest why. How could it?

11. Can one own something but still let others use it? Can a community be both free and ordered? Can people compete and cooperate at the same time? Give physical and online examples. How are such tensions resolved? How does democracy reconcile freedom and order? Give examples in politics, business and online.

12. What is community openness for a nation? For an organization? For a club or group? Online? Why are organizations that promote based on merit more open? Illustrate technology support for merit-based promotion in an online community.

13. Is a person sending money to a personal friend online entitled to keep it private? What if the sender is a public servant? What if it is public money? Is a person receiving money from a personal friend online entitled to keep it private? What if they are a public servant?

14. What is social synergy? What destroys it? How do communities encourage synergy? How do they prevent its destruction? How do trust and synergy relate? Give physical and electronic examples.

15. Give five examples of defections in ordinary life. What happens if everyone defects? Give five online examples, and for two specify how technology lowers defections.

16. Would you prefer to be a middle class citizen now or a lord three hundred years ago? Consider factors like diet, health, clothes, leisure, travel, etc. Where did the lord's wealth mainly come from? Where does the power of your salary to buy many things come from today? What is the principle and how does it apply online?

17. What is a social dilemma? Give three physical examples from your experience. Why can't individuals solve them? How are they solved? Give three online social dilemmas. How are they to be solved? Relate this to socio-technical design.



18. What happens if no-one in a group suggests anything? What happens if you suggest things in a group? How can groups manage this? Answer the same questions for volunteering. Give examples from your experience. What percentage of online users are "lurkers", who look but don't post? Review a popular board you haven't used before. What stops you contributing? Add something anyway. How could the board increase participation?

19. Is ethics idealism or pragmatism? Explain the statement: personal ethics is community pragmatics. Consider a thief who steals a wallet and isn't caught. List the thief's gains and the victim's losses. What is the net community result? What happens if everyone in a community steals, i.e. takes but does not give? Generalize to online cases. How then does STS design relate to ethics?

20. Why is synergy more important for larger communities? Why is it especially important for socio-technical systems? How can technology help increase synergy? Report the current estimated sizes of popular socio-technical systems. Clarify what is exchanged, who interacts and the synergy.

21. What is communication? What is meaning? What is communication performance? How can media richness be classified? Is a message itself rich? Does video always convey more meaning than text? Can rich media deliver more communication performance? Give online and offline examples.

22. What affects communication performance besides richness? How is it classified? Is it a message property? How does it communicate more? Give online/offline examples.

23. If media richness and linkage both increase communication power, why not have both? Describe a physical world situation that does this. What is the main restriction? Can online media do this? What is, currently, the main contribution of computing to communication power? Give examples.



24. What communication media type best suits these goals: telling everyone about your new product; relating to friends; getting group agreement? Give online and offline examples. For each goal, what media richness, linkage and anonymity do you recommend? You lead an agile programming team spread across the world: what communication technology would you use?

25. State differences between the following media pairs: email and chat; instant messaging and texting; telephone and email; chat and face-to-face conversation; podcast and video; DVD and TV movie; wiki and bulletin board. Do another pair of your choice.

26. How can a physical message convey content, state and position semantic streams? Give examples of communications that convey: content and state; content and position; state and position; and content, state and position. Give examples of people trying to add an ignored semantic stream to technical communication, e.g. people introducing sender state data into lean text media like email.

27. Can a physical message generate many information streams? Can an information stream generate many semantic streams? Give examples. Does the same apply online? Use how astronomical or earthquake data is shared online to illustrate your answer.

28. You want to buy a new cell-phone, and an expert web review suggests model A based on factors like cost and performance. Your friend recommends B, uses it every day, and finds it great. On an online customer feedback site, some people report problems with A and B, but most users of C like it. What are the pluses and minuses of each influence? Which advice would you probably follow? Ask three friends what they would do.

29. What is the best linkage to send a message to many others online? What is the best linkage to make or keep friends online? What is the best linkage to keep up with community trends online? List the advantages and disadvantages of each style. How can technology support each of the above?

30. Explain why reputation ratings, social bookmarks and tagging are all matrix communication. In each case, describe the senders, the message, and the receivers. What is the social goal of matrix communication? How exactly does technology support it?

31. Give three online leaders searched by Google or followed on Twitter. Why do people follow leaders? How can leaders get people to follow them? How does technology help? If the people are already following a set of leaders, how can new leaders arise? If people are currently following a set of ideas, how can new ideas arise? Describe the innovation adoption model. Explain how it applies to "viral" videos.

24.4  Part 4: An example: Online rights

"A right is a community permission to act"

Legitimacy analysis specifies community requirements for technology design. Previous examples are polite computing (Whitworth and Liu, 2008) and channel email (Whitworth and Liu, 2009). This section proposes an access control model based on these social requirements:

A. Ownership. To reduce object conflicts.

B. Freedom. To own oneself, to not be a slave.

C. Fairness. That social consequences reflect action contributions (Rawls, 2001).68

D. Privacy. To control the release of personal information to others.

E. Transparency. A democratic citizen's right to know how they are governed.

If the new user of computing is society, we must specify its requirements.

68.  It is unfair to B if A's acts cause B's loss only, and unfair to A if A's acts cause B's gain only.

24.4.1  Access control

In computing, decision support systems recommend decisions, access control systems permit them and control systems carry them out. Access control began with multi-user computing, as users sharing the same system came into conflict (Karp et al., 2009). Traditional access control systems (ACSs) use a subject by object access permission matrix to allocate rights (Lampson, 1969). As computing evolved, ACS logic offered local access control for distributed systems and roles for many-person systems. With these variants, the matrix approach has worked for military (Department of Defense, 1985), commercial (Clark and Wilson, 1987), organizational (Ferraiolo and Kuhn, 2004), distributed (Freudenthal et al, 2002), peer-to-peer (Cohen, 2003) and grid environment (Thompson et al, 1999) applications.

Today, access control in social networks (SNs) is more about access than control. The permission matrix for friend interactions increases geometrically, not linearly, with group size, so for hundreds of millions of people the possible connections are astronomical. Each account also adds hundreds or thousands of photos or comments a year. Finally, each person wants the sort of domain control previously reserved only for system administrators. Social networkers want local access control, not just to read, write and execute files (Ahmad and Whitworth, 2011), but to control their own social structure, without asking a central authority for permission (Sanders and McCormick, 1993), e.g. to restrict a photo to family or friends. Social networks vastly increase ACS complexity, as millions of users want all rights to billions of resources, plus rights to re-allocate rights. They are the perfect storm for the traditional ship of access control.
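To make the traditional model concrete, the following minimal sketch, ours and not any real system's code, stores a Lampson-style subject-by-object permission matrix and checks requests against it. All names are illustrative.

# A minimal sketch of a traditional subject-by-object access matrix
# (Lampson, 1969). Illustrative only; names are not from any real ACS.

class AccessMatrix:
    def __init__(self):
        # matrix[(subject, object)] = set of permitted operations
        self.matrix = {}

    def grant(self, subject, obj, operation):
        self.matrix.setdefault((subject, obj), set()).add(operation)

    def check(self, subject, obj, operation):
        return operation in self.matrix.get((subject, obj), set())

acs = AccessMatrix()
acs.grant("alice", "report.doc", "read")
assert acs.check("alice", "report.doc", "read")
assert not acs.check("bob", "report.doc", "read")
# For N friends the pairwise cells grow as N*(N-1), which is why one
# central matrix strains at social network scale, as argued above.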



The current rules of social network interaction are based on designer intuitions rather than formal models, so they vary between systems and over time, with public outrage the only check. There is no agreed scheme for allocating permissions to create, edit, delete or view object entities, let alone manage roles. The aim here is to fill that gap, to develop a socio-technical access control model that is legitimate, efficient, consistent and understandable.

24.4.2  Rights

Communities, by norms, laws or culture, grant citizens rights, or social permissions to act. Rights reduce physical conflict, as parties who agree on rights don't have to fight. This moves the conflict from the physical level to the informational or legal level69. Physical society expresses rights in terms of ownership (Freeden, 1991), so specifying who owns what online can specify rights in a way that designers can support and users can understand (Rose, 2000). This doesn't mechanize online interaction, as rights are choices not obligations, e.g. the right to sue doesn't force one to sue. Legitimate access control defines what online actors can do, not what they must do.

Traditional design refers to software "users", as if they were on a drug, but Facebook's users aren't part of the software. Socio-technology talks of actors who switch software, not of passive users. As shops can see "a sale" or "a customer", so IT designers can see a user or an actor. An actor is a system able to act independently of outer conditions, i.e. to act not react. Actors can initiate acts, which implies some internal choice or autonomy70. A program that always responds the same way to the same input has no autonomy, so can't itself be an actor71.

69.  Personal acts between people are the level after that, when people drop rules and believe in each other.
70.  From the Greek autos 'self' and nomos 'law', i.e. a system that can make its own laws. It is not all or none, e.g. poke a ball with a stick and it moves, poke a dog and it runs away or bites you, poke a man and he might do any of the above, or take the stick off you.
71.  It can however be an agent.



A person is an actor with an ego-self, and a citizen is a person who can be held to account72. To hold to account, to link consequences to people, is fundamental to all social interaction73. By accountability, a community rewards those who benefit it and punishes those who harm it74. While philosophers argue over free will, all communities consider citizens accountable and govern accordingly. Those deemed not so, the criminal or insane, are in the care of those who are. A community holds citizens75 to account for the effects of their acts not on themselves but on others. Accountability is the over-arching social requirement, without which communities fail. It only applies to people, e.g. in car accidents the driver is held to account not the car, as the car has no personal self to be accountable76.

Rights arise when social requirements manifest as personal cognitions, which manifest as informational rules, which manifest as action directives. In physical communities, police and courts direct citizens to follow laws, written by judges who understand justice. Online, the same applies, but in this architecture code is the law, police, judge, jury and prison guard. To not be corrupt, systems must be legitimate by design. The following derives informational rights from community requirements stated on the personal level. In information terms, a right is an actor (A) applying an operation (O) to an entity (E):

Right = (Actor, Entity, Operation) = (A, E, O)

72.  Accountability only assumes some choice at some point, e.g. a drunk with no control can be fined if he earlier chose to drink too much. Did a drug addict who can't stop now, but once could, choose that path? To argue no denies accountability, so a community can take control of their life anyway.
73.  Community justice and law began with revenge, where people personally held others to account.
74.  For example, laws punish those who steal, and copyright rewards those who create.
75.  A citizen is a person who is in a community. A foreign visitor is not a citizen but is still a person. A person is anyone who is accountable. A criminal who is not accountable is locked up in jail.
76.  A company, as an informational entity, can't be accountable as it has no ego-self. To punish a cheating company by declaring it bankrupt lets its owners start another company to do the same thing again. Treating companies as people in the law was a great ethical and legal error of the last century. It underlies most of the scams of the wealthy.



Rights can be stored as (Actor, Entity, Operation) triplets, where an actor is an accountable entity or their agent, an entity is any object, actor77 or right, and an operation is any one available to the entity. A right transmitted or stored is often called a permission.
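In code, such triplets can be kept as a simple set keyed on actor, entity and operation. The sketch below is our illustration, not the chapter's formal model; names like grant and permits are assumptions.

# A minimal sketch of rights stored as (Actor, Entity, Operation)
# triplets. Illustrative only; names are not from the formal model.

rights = set()  # each element is an (actor, entity, operation) triple

def grant(actor, entity, operation):
    rights.add((actor, entity, operation))

def permits(actor, entity, operation):
    # A stored triple is a permission: the actor may, but need not, act.
    return (actor, entity, operation) in rights

grant("alice_persona", "photo42", "view")
assert permits("alice_persona", "photo42", "view")
assert not permits("bob_persona", "photo42", "edit")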

24.4.3  Specification

Socio-technical systems can be modeled as entities and operations:

1. Entities. Stored as static information, with properties.
   a. Actor. An entity that can participate in a social interaction78.
      i. Persona. Represents an accountable offline person or group.
      ii. Group. A set of personae acting as one79.
      iii. Agent. An actor that represents another actor.
   b. Object. Conveys information and meaning.
      i. Item. A simple object with no dependents, e.g. a bulletin board post.
      ii. Space. A complex object with dependents, e.g. a bulletin board thread.
   c. Right. A system permission for an actor to operate on an entity.
      i. Simple rights. Rights to act on object or actor entities.
      ii. Meta-rights. Rights to act on right entities, e.g. delegate.
      iii. Role. A variable right (a set of rights).

77.  An actor, being an entity, can act on itself. A persona can even delete itself, as a person can commit suicide.
78.  A social actor need not be a person, e.g. a program can be an agent.
79.  Online and offline are different worlds by their base architecture - the online world has an information base. An offline group is physical people who act as one. Groups can also form groups, e.g. the stock market is a group of groups (companies). An offline group can have one online persona, e.g. a company registered on Facebook. An online group is a set of personae that act as one, so the access control system must define how it does this; see Section 24.4.13.



2. Operations. Stored as a program or method that processes entities.
   a. Null operations don't change the target entity, e.g. view80, enter.
   b. Use operations change the target in some way, e.g. edit, create.
   c. Communication operations transfer data from sender(s) to receiver(s), e.g. send.
   d. Social operations change a right or role, e.g. delegate.

Link operations are discussed elsewhere (Whitworth and Bieber, 2002).
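The entity and operation taxonomy above maps naturally onto simple type definitions. The sketch below is our illustration under that reading; the chapter specifies the model, not this code, and all identifiers are ours.

# A sketch of the entity/operation taxonomy as Python types.
# Illustrative only; the chapter defines the model, not this code.

from dataclasses import dataclass, field
from enum import Enum, auto

class OpKind(Enum):
    NULL = auto()           # view, enter: don't change the target
    USE = auto()            # edit, create: change the target
    COMMUNICATION = auto()  # send: transfer data between parties
    SOCIAL = auto()         # delegate: change a right or role

@dataclass
class Entity:
    name: str

@dataclass
class Actor(Entity):        # persona, group or agent
    pass

@dataclass
class Item(Entity):         # simple object, no dependents
    pass

@dataclass
class Space(Entity):        # complex object with dependents
    children: list = field(default_factory=list)

@dataclass
class Right(Entity):        # a permission is itself an entity
    actor: Actor = None
    target: Entity = None
    operation: str = ""

wall = Space("wall")
wall.children.append(Item("post42"))  # a space contains items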

24.4.4  The system itself

The information system itself is the first entity, owned by the system administrator (SA), who is the first user. A tyrant SA might alter posts or votes at whim, but a benevolent dictator, Plato's best form of rule, gives citizens rights. As even benevolent dictators die, humanity invented democracy, to reduce dynasty transfer battles81. Yet no online system we know of votes for its system administrator, e.g. even Wikipedia isn't a democracy.

An ACS controls at the informational level. If it is not to be in charge, it must allocate all use rights to people who are accountable, giving the ACS operational principle:

P1. All non-null entity rights should be allocated to actors at all times.

So every entity should be owned, ultimately, by a person. If this were not true, an access control system would at some point respond to an access request from itself; yet as an information system, it has no self to act socially. Hence rights aren't added or deleted, but allocated and re-allocated.

80.  View is null at the informational level but not at the psychological level, see later. 81.  Compare the peaceful power transitions of democracies to the violence of dictatorial change.



24.4.5  Persona

An online persona represents an offline party, e.g. an avatar, profile, mail account, wall or channel can represent an offline person, group or organization. An online persona is activated by a logon operation, which equates it to the offline party. An online computer agent can act for a group, like installation software for a company, but social acts must ultimately trace back to people, and online is no different82. If an installation misleads, we sue company directors, not software83.

Who owns a persona? Open systems let people self-register, to create their personae. If freedom applies online, one should own one's online self, but some systems don't permit this. Can you delete a Wikipedia or Wordpress profile?84 The freedom requirement gives the ACS principle:

P2. A persona should be owned by itself.

Some complexities are that a persona can be:

Abandoned. HotMail accounts inactive for over 90 days are permanently deleted, i.e. if not used they "starve and die".

Transferred. One can permanently pass a persona to another, along with its reputation85.

Delegated. One can ask an agent to act on one's behalf, e.g. a proxy vote.

Orphaned. If the person behind a persona dies, their will is physically respected, but online programs act as if death doesn't exist, e.g. one can get an eerie Facebook message from a person the day after going to his funeral. As in a few decades Facebook will represent millions of obituaries, we need online wills.

82.  Registering by a nickname online instead of one's 'real' name denies accountability offline but not online, e.g. a banned EBay seller name loses its online reputation.
83.  A person who acts as an agent can still be held accountable, e.g. if told to shoot someone and does so.
84.  See how to permanently delete your account on popular web sites here: http://www.smashingmagazine.com/2010/06/11/how-to-permanently-delete-your-account-on-popular-websites/
85.  In the movie The Princess Bride, the Dread Pirate Roberts persona was passed on, so the idea is not new.
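These persona states suggest a small lifecycle model. The sketch below is a hypothetical illustration: the 90-day expiry figure echoes the HotMail example above, and all other names and fields are ours.

# A sketch of the persona lifecycle states described above.
# Hypothetical illustration; state and field names are ours.

from datetime import datetime, timedelta
from enum import Enum, auto

class PersonaState(Enum):
    ACTIVE = auto()
    ABANDONED = auto()    # unused too long, may be deleted
    TRANSFERRED = auto()  # passed to a new owner, reputation included
    ORPHANED = auto()     # offline party has died; an online will applies

class Persona:
    EXPIRY = timedelta(days=90)  # e.g. HotMail's inactivity limit

    def __init__(self, name):
        self.name = name
        self.owner = self          # P2: a persona is owned by itself
        self.last_logon = datetime.now()
        self.state = PersonaState.ACTIVE

    def logon(self):
        if datetime.now() - self.last_logon > self.EXPIRY:
            self.state = PersonaState.ABANDONED  # it "starved"
        else:
            self.last_logon = datetime.now()

    def transfer(self, new_owner):
        self.owner = new_owner
        self.state = PersonaState.TRANSFERRED

p = Persona("alice")
p.logon()
assert p.state is PersonaState.ACTIVE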



Table 24.11 below shows a summary of persona access rights.

Persona     View    Delete    Edit    Ban    Create
SA                                     √
Owner        √        √        √              √1

Table 24.11:  Persona access rights. 1 Delegated by the SA.

24.4.6  Object entities

Object entities convey meaning by evoking cognitive processing, e.g. a family photo.

Items. A simple object with no dependents, e.g. a board post. It can be deleted, edited or viewed. In the object hierarchy tree, items are like leaves. An item can be a:

1. Comment. An item whose meaning depends on another, e.g. "I agree" makes no sense alone.

2. Message. An item with sender(s) and receiver(s), e.g. an email.

3. Vote. An item that conveys a position, a choice from a response set.

Spaces. As leaves need branches, so items need spaces, e.g. an online wall that accepts photos is an information space, a complex object with dependents. It can be deleted, edited or viewed like an item, but can also contain objects, e.g. a bulletin board. Spaces within spaces give object hierarchies, with the system itself the first space. A space is a parent to the child entities it contains, which depend on it to exist, so deleting a space deletes its contents, e.g. deleting a board deletes its posts. The move operation changes the parent space of an object. The enter space operation shows the objects on display in it. As every entity is in the system space:

P3: Every entity has a parent space, up to the system space.

If every entity has a parent space86, its ancestors are the set of all spaces that contain it, up to the system itself, the first ancestor. The offspring of a space are any child objects it contains, their children, etc. So all entities have owners and ancestors, and any space can have offspring.
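P3 makes the object hierarchy a tree rooted at the system space, so an entity's ancestors can be found by walking parent links. A minimal sketch, ours, with assumed field names:

# A sketch of P3: every entity has a parent space, up to the system
# space, so ancestors are found by walking parent links. Names are ours.

class Entity:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # None only for the system space
        if parent is not None:
            parent.children.append(self)

class Space(Entity):
    def __init__(self, name, parent=None):
        self.children = []
        super().__init__(name, parent)

system = Space("system")              # the first space, owned by the SA
board = Space("board", parent=system)
post = Entity("post", parent=board)

def ancestors(entity):
    """All spaces containing an entity, up to the system space."""
    result = []
    space = entity.parent
    while space is not None:
        result.append(space)
        space = space.parent
    return result

assert [s.name for s in ancestors(post)] == ["board", "system"]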

24.4.7  Operations

Entity Type          Operations
Any entity           View
1. Social entity     ..., Delete, Edit
   a. Persona        ..., Logon
   b. Agent          ..., Delegate
   c. Group          ..., Join
2. Object entity     ..., Delete, Edit, Move
   a. Item           ..., ConvertToSpace
   b. Space          ..., Create, Enter
3. Right entity      ..., Allocate, Re-allocate
   a. Role           ..., Friend, Ban

Table 24.12:  Operation sets by entity type.

86.  Except, of course, for the system itself.



Operations are actor-initiated methods on information entities, subject to access control.

Operation sets. Operations can be clustered for access control purposes, e.g. delete flags an entity for destruction, undelete reverses that, and destroy kills it permanently. An ACS that can manage one can manage all. Likewise, edit alters entity values, append extends them, version edits with backup, and Wikipedia's revert is the inverse. Again, variants of a set present the same ACS issues, so to resolve one is to resolve all.

Create. While edit changes existing entity values, create adds a new entity, e.g. creating a Wikipedia stub for others to edit. Duplicate is a variant of create. Table 24.12 shows the operation sets for various entity types, where create is an act on a space - see Section 24.4.10.

View. Operations like view are null acts that don't change their informational level target, but viewing another person is a personal level act. In social facilitation, knowing one is being looked at energizes the viewed party (Geen and Gange, 1983). Viewing someone affects them because success in a social group depends very much on how others see you. Privacy, to control information about ourselves, is important for the same reason. The act of viewing can have great effect on the community level, e.g. a "viral" online video makes others want to view it too. The right to use an entity implies accountability, but as one can't use what one can't see, use rights imply view rights, giving the ACS operational principle:

P4: Any right to use an object implies a right to view it.

Communication. In a simple communicative act, a sender creates a message that a receiver views. It is by definition a joint act where both parties have choice. Hence communication should be by mutual consent. Privacy is the right to remain silent, to not communicate and to not receive messages. In the physical world, people say "Can I talk to you?" because communication is by permission. Some online systems however, like email, don't recognize this. They give anyone the right to send a message to anyone, whether they will or no, and so invite spam. In contrast, in Facebook, chat, Skype and Twitter, one needs prior permission to message someone. The details of legitimate communication, where a channel is opened by mutual consent before messages are sent, are given in (Whitworth and Liu, 2009). The resulting ACS operational principle is:

P5: Any communication act should have prior mutual consent.

The evolution of telephony illustrates a communication evolution. At first, phones just transmitted information - the phone rang and one answered, not knowing who was calling. This allowed telemarketing, the forerunner of spam. Now cell phones show caller id by default, so one can choose to respond, i.e. it is more mutual. Yet we still have to personally type in contact list names, while social networks synergize - we each type in our own name, then let others add it to their contact lists. Cell phone companies could use this synergy but, like the makers of TV remotes, are locked into a one-level mind-set87.

87.  Users would have the option to show a name instead of a number when they call. It could be their real name or a nickname, just as they now can choose to show 'Anonymous' instead of a caller-id number. The social system would self-adjust, if receivers chose not to reply to anonymous senders. If people chose to show their real name to the friends they call, that is their choice, so no privacy is lost. Privacy is not secrecy.
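P4 and P5 are easy to enforce mechanically. The sketch below is a toy check of ours, not a real protocol: it treats view as implied by any use right, and refuses to deliver a message unless a mutually consented channel was opened first.

# A sketch of P4 (use implies view) and P5 (communication needs prior
# mutual consent). Illustrative names; not from any real system.

rights = {("alice", "doc1", "edit")}    # granted (actor, entity, op) triples
channels = set()                        # mutually consented sender/receiver pairs

def permits(actor, entity, op):
    if (actor, entity, op) in rights:
        return True
    # P4: any use right on an entity implies the right to view it
    if op == "view":
        return any(a == actor and e == entity for (a, e, _) in rights)
    return False

def open_channel(sender, receiver, receiver_accepts):
    if receiver_accepts:                # P5: both parties must consent
        channels.add((sender, receiver))

def send(sender, receiver, message):
    if (sender, receiver) not in channels:
        return "refused: no consented channel"  # unsolicited mail is spam
    return "delivered: " + message

assert permits("alice", "doc1", "view")          # implied by edit (P4)
open_channel("bob", "alice", receiver_accepts=True)
assert send("bob", "alice", "hi").startswith("delivered")
assert send("eve", "alice", "buy now!").startswith("refused")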

24.4.8  Roles

Roles, like parent, friend or boss, simplify rights management by covering many cases, yet remain understandable, so people can review, evaluate and accept them. They are equally useful online, e.g. Wikipedia citizens can aspire to steward, bureaucrat or sysop roles by good acts. Slashdot's automated rating system (Benkler, 2002) offers the moderator role to readers who are registered (not anonymous), regular users (for a time), with positive "karma" (how others rate their comments). Every registered reader has five influence points to spend on others as desired over a three day period (or they expire). In this role democracy, high-rated commenters get more karma points, and so more say on who is seen. The technology lets a community democratically direct its governance.



In information terms, a role is a variable rights statement, e.g. a friend role is a set of people with extra permissions. Roles are generic rights, giving the ACS operational principle:

P6: A role is a right expressed in general terms, as a pointer or set.

Roles are the variables of social logic:

Role = (Actor, Entity, Operation)

The bolding indicates a variable, e.g. the owner role can be generally defined as any party who has all rights to an entity:

RoleOwner = (Owner, Entityi, OperationAll)

Making a person the owner just allocates the Owner pointer to their persona. Roles are flexible, e.g. the friend role lets one change who can see photos posted on a wall:

RoleFriend = (Friend, EntityWall, OperationView)

where Friend is a persona set. To "friend" another is to add them to this role set, and to unfriend is to remove them. As a variable can be undefined, so a role can be empty, i.e. a null friend set. To "friend" is spoken of as an act on a person, but it doesn't change the persona entity, so it is really an act upon a local role. You decide your friends, so don't need permission to friend anyone. Equally, to ban a person adds them to the denied entry role for your space. If banning were an act on another's persona, it would need their consent. That it is an act on my role gives the ACS principle:

P7. A space owner can ban or give entry to a persona without its owner's permission.

Re-allocating actors isn't the only way to alter a role. By definition, one can change a role's:

1. Actor. The role actor set.

2. Entity. The entities it applies to.

3. Operation. The operations it allows.

For example, a friend role could limit the objects it applies to, with some photos for family only. It could also allow adding comments to photos, or not. Few current systems fully use the power of local roles, e.g. social networks could let actors define an acquaintance role, with fewer rights than a friend but more than the public, or an extended family role.
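As the formulas suggest, a role is just a rights triple whose actor slot points at a mutable set. A small sketch, with our names; the "acquaintance" role here is the hypothetical extension suggested above, not an existing feature:

# A sketch of roles as variable rights: the actor slot points at a set,
# so friending or banning edits the local role, not the other persona (P7).

class Role:
    def __init__(self, entity, operations):
        self.actors = set()        # the variable "pointer" of the role
        self.entity = entity
        self.operations = set(operations)

    def permits(self, actor, entity, op):
        return (actor in self.actors
                and entity == self.entity
                and op in self.operations)

wall = "alice_wall"
friend = Role(wall, {"view", "comment"})
acquaintance = Role(wall, {"view"})      # hypothetical weaker role
banned = Role(wall, set())               # denied-entry role: no operations

friend.actors.add("bob")                 # friending needs no consent (P7)
banned.actors.add("eve")                 # nor does banning

assert friend.permits("bob", wall, "comment")
assert not acquaintance.permits("carol", wall, "view")  # empty until added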

24.4.9  Meta-rights

Owning an object is the right to use it:

RightUser = R (User, Entityi, OperationUse)

but a right, as an entity, can also be acted on, i.e. re-allocated. A meta-right is the right to re-allocate a right. In formal terms:

RightMetaRight = R (Owner, RightOwn, OperationAllocate)

where the entity acted on is a right. An owner with all rights to an entity also has its meta-rights, i.e. the right to change its rights. Paradoxically, fully owning an entity implies the right to give it away entirely. Reachability88 requires meta-rights to be absolute, i.e. there are no meta-meta-rights. This gives the ACS operational principle:

P8. A meta-right is the right to allocate any entity right, including itself.

Previously, to own an entity was to have all rights, but giving away use rights while keeping meta-rights is still ownership, e.g. renting an apartment gives a tenant use rights, but the landlord still owns it, as they keep the meta-rights. The tenant can use it, but the owner says who can use it.

88.  Reachability, or halting, is the property that a program's logic finishes and doesn't run endlessly.



24.4.10  The act of creation

To create an object from nothing is as impossible in an information space as it is in a physical one. Creation cannot be an act upon the object created, which by definition doesn't exist before it is created. An actor can't request ACS permission to create an object that doesn't exist. To create an information object, its data structure must be known, i.e. exist within the system. So creation is an act upon the system, or in general, an act on the space immediately containing the created object, giving the ACS operational principle:

P9. Creation is always an act on a space, up to the system space.

This rule is well defined if the system itself is the first space. Creating is an act upon a space because it changes the space that contains the created object. The right to create in a space initially belongs to the space owner:

RightCreate = R (SpaceOwneri, Spacei, OperationCreate)

who can delegate it to others. The logic generalizes well, e.g. to add a board post, YouTube video or blog comment requires the board, video, or blog owner's permission. One can only create in a space if its owner permits. Now an ACS can be simply initialized as a system administrator owning the system space with all rights, including create rights. The SA must then give rights away for a community to evolve. If the SA only delegates rights, they can always be taken back.

Creator ownership. Object creation is a simple technical act, but a complex social one, e.g. how are newly created entity rights allocated? The 17th century British philosopher Locke argued that creators owning what they create is fair and increases prosperity, whether a farmer's crop, a painter's painting or a hunter's catch (Locke, 1963). If the creator of something chooses to sell or give it away, that is another matter. A community that grants producers the right to their products encourages creativity. Conversely, why produce for others to own? This gives the ACS operational principle:

P10. The creator of a new entity should immediately gain all rights to it.

Creator ownership conveniently resolves the issue of how to allocate new object rights: they go to its creator, including meta-rights. This isn't what must happen, as a program can act any way it likes, e.g. it could give all created object ownership to the system administrator. Creator ownership is a social requirement not a technical one, i.e. a condition of social success not a logical necessity. Such conditions can however be socio-technical axioms.

Creation conditions are when a space owner partially delegates creation, limiting (see the sketch after this list):

1. Object type. The object type created, e.g. the right to create a conference paper isn't the right to create a mini-track space.

2. Operations. The operations allowed on created objects, e.g. blog comments aren't usually editable once added, but ArXiv lets authors edit publications as new versions.

3. Access. Who can access created objects, e.g. YouTube gives contributors exclusive edit rights, but Wikipedia lets anyone edit any creation.

4. Viewing. Who can view created objects, e.g. bulletin boards let you see what others submit, but conferences in the paper review phase don't.

5. Editing. The field values of a created object, e.g. date added may be non-editable. The space owner may also set field default values.

A space owner can delegate creation rights as needed, e.g. to set vote results to show only to people who have voted, to avoid bias.
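A sketch of P9 and P10, in our own assumed names: create is checked as an operation on the containing space, and the new object's rights, including meta-rights, go straight to its creator.

# A sketch of P9 (creation is an act on a space) and P10 (the creator
# immediately owns the creation). Names are illustrative.

rights = set()  # (actor, entity, operation) triples

def grant(actor, entity, op):
    rights.add((actor, entity, op))

def permits(actor, entity, op):
    return (actor, entity, op) in rights

objects = {"system": {"owner": "sa", "parent": None}}

def create(actor, space, name):
    # P9: the permission checked is on the space, not the new object
    if not permits(actor, space, "create"):
        raise PermissionError(actor + " may not create in " + space)
    objects[name] = {"owner": actor, "parent": space}
    # P10: creator gets all rights, including the meta-right to allocate
    for op in ("view", "edit", "delete", "allocate"):
        grant(actor, name, op)

grant("sa", "system", "create")       # the SA owns the first space
create("sa", "system", "board")
grant("alice", "board", "create")     # SA delegates creation in the board
create("alice", "board", "post1")
assert permits("alice", "post1", "allocate")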



Transparency. Yet fairness dictates a creator's right to know creation conditions in advance. In general, transparency is the right to view rights that affect you, so those who create in a space should know the creation rules in advance. The ACS principle is:

P11. A person can view in advance any rights that could apply to them.

Successful socio-technical systems like Facebook, YouTube and Wikipedia do this. In sum, a space owner can delegate the right to create in whole or in part, but must disclose creation conditions up front, so potential creators can decide if creation is worth it.

24.4.11  Role allocations

When an entity is created in a space, the system can assign the following roles:

- Owner. Has meta-rights to the entity.
- Parent. The containing space owner.
- Ancestor. Ancestor space owners, with the SA the first ancestor.
- Offspring. The owners of any entities contained in a space.
- Local public (space only). Actors who are permitted to enter the space.

A space owner owns its local public role, and can define what others can do or see in the space:

RoleLocalPublic = (LocalPublic, Spacei, OperationAny)

It can be set manually, as friends are allocated, or point to a GlobalPublic list.

Ancestor role. A conference paper's ancestors are its mini-track, track and conference chairs. An entity, being part of the space it exists in, must be visible to the owner of that space. Privacy doesn't contradict this, as it refers to the display of personal information, not created object information. Generalizing, the ACS principle is:

P12. A space owner should have the right to view any offspring.

So the ancestor role for any entity is given view rights to it:

RoleAncestor = (Ancestors, Entityi, View)

For example, a paper posted on a conference mini-track should be visible to track and conference chairs, but not necessarily to other track or mini-track chairs. Ancestors can be notified of new offspring, as an owner can be notified of new ancestors.

Offspring role. An entity created in a parent space was by definition created by an actor with the right to enter that space. If a space bans the owner of an object in it, the object is disowned, contradicting P1. A child object's owner must enter its space to act on it, even if they can't do anything else. By extension, they can also enter any ancestor space. This doesn't imply any other rights. The ACS principle is:

P13. An entity owner should be able to enter any ancestor space.

For example, adding a mini-track paper should let one enter the track and conference spaces. Any space should allow its offspring owners to enter it:

RoleOffspring = (Offspring, Space, Enter)



Table 24.13 summarizes the basic access rights for entities and spaces.

Entity          View    Delete    Edit    Display    Allocate
Ancestor         √
Parent           √                          √1
Owner            √        √        √        √2          √
LocalPublic      √1,2

Space (also)    Enter   Create
Ancestor         √
Owner            √        √
LocalPublic      √1       √1

Table 24.13:  Entity and space access rights. 1 As allocated by the owner. 2 As allocated by the parent.

24.4.12  The act of display

To display an object is to let others view it. The right to display isn't the right to view, e.g. viewing a video online doesn't let you display it on your web site89. Display is the meta-right to view, i.e. the right to give the right to view an object to others, e.g. privacy is the meta-right to display the persona object. As people can have private numbers in a phone book, so Facebook or LinkedIn personae are displayed to the public by owner consent. The phone company that owns a phone book list can also choose not to display a listing, giving the ACS principle:

P14. Displaying an entity in a space requires both persona and space owner consent.

Displaying an item in a space is its owner giving display rights to the space owner. For example, to put a physical notice on a shopkeeper's notice board involves these steps:

1. Creation. Create a notice. You own it, and can still change it or even rip it up.

2. Permission. Ask the board owner if it can be posted on the notice board.

3. Post. The board owner either vets notices in advance or lets people post themselves.

4. Removal. As the notice is displayed by mutual consent, either can remove it. A poster can also ask that it be removed.

The shopkeeper's right to take a notice down isn't the right to destroy it, because he or she doesn't own it. Nor can he or she alter (deface) notices on the board. The same social logic applies online. Creating a video on YouTube gives you view rights to it, but it isn't yet displayed to the public, as this right belongs to the space owner. Giving YouTube the right to display a video is like giving a notice to a shopkeeper to post on their board. The item owner gives the space owner the right to display it in their space. In general, to display any video, photo or text in any online space requires mutual consent, as one party gives another the right to display, giving the ACS principle:

P15. An entity owner must give view meta-rights to a space owner to display in that space.

89.  A more complex example: if going out in public implicitly gives others the right to view you, anyone can take a photo of you without your consent, but they can't display that photo on a magazine cover without your consent.
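The notice-board steps are a mutual-consent protocol that code can enforce directly. A sketch of ours; the submit/accept/withdraw names echo the open display interaction of Table 24.14 below.

# A sketch of display as a mutual-consent transaction (P14, P15):
# an item is shown in a space only while both owners consent.

class Item:
    def __init__(self, owner):
        self.owner = owner

class NoticeBoard:
    def __init__(self, owner):
        self.owner = owner
        self.displayed = set()

    def submit(self, item, accept):
        # P15: the item owner offers view meta-rights; the space owner
        # accepts or rejects (the row/column logic of Table 24.14)
        if accept:
            self.displayed.add(item)

    def remove(self, item, by):
        # Either party can end display, but removal isn't destruction:
        # the shopkeeper takes the notice down, the poster still owns it
        if by in (item.owner, self.owner):
            self.displayed.discard(item)

board = NoticeBoard(owner="shopkeeper")
notice = Item(owner="poster")
board.submit(notice, accept=True)     # displayed by mutual consent
board.remove(notice, by="shopkeeper")
assert notice not in board.displayed  # withdrawn, but not destroyed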



Display result         Space owner
Object owner        Accept      Reject
Submit               YES          NO
Withdraw             NO           NO

Table 24.14:  An open display interaction.

Display as a rights transaction is the basis of all publishing. Table 24.14 shows how the mutual interaction between authors and publishers, or object owners and space owners, operates. A space can delegate display rights, to let creators display as desired, e.g. YouTube. Or it may vet items before display and reject some, e.g. ArXiv, which also lets authors withdraw submissions. Bulletin boards let anyone submit but not withdraw, and reserve the right to moderate postings, i.e. reject later. Authors who publish must give all rights to the publisher. An author can't "unpublish" a paper, but then again, neither can the publisher90. Usually the right to publish a work is given once only, but some publishers contract the right to do so many times, e.g. publishing one IGI book chapter led to its re-publication in other collections without author permission91 (Whitworth and Liu, 2008).

Entity creation. Technically, creating an entity is simple - the program just creates it - but socially, adding into another's space isn't a one-step act. Adding a YouTube video involves:

1. Registration. Create a YouTube persona.

2. Entry. Enter YouTube (not banned).

3. Creation. Create and upload a video.

4. Edit. Edit video title, notes and properties.

5. Submit. Request YouTube to display the video to their public.

6. Display. The public sees it and can vote or comment.

YouTube lets anyone registered in the public role (1) enter their space (2) and create a video, by uploading or recording, which they own (3). They can view it in private and edit details (4). At this point, the video is visible to them and administrators, but not to the public. They can still delete it. It is then submitted to YouTube for display to its public (5). This occurs quickly, as display rights are delegated (6). To create, edit and display a video are distinct steps. YouTube can still reject videos that fail its copyright or decency rules. This isn't a delete, as the owner can still view, edit and resubmit it. In contrast, a technology-based design that lets space owners delete videos at will discourages participation, because people could waste their effort.

Consistency. For the above logic to be consistent, it should also apply when the video is itself a space for dependent comments and votes. Indeed it does, as video owners have the choice to allow comments or votes, just as YouTube had the right to accept their video (Figure 24.19). That YouTube gives the same rights to others as it takes for itself is a key part of its success, and a basic principle of socio-technical design.

90.  A journal can 'retract' a publication, to deny it, but can't 'un-publish' it.
91.  Namely, in 'Selected Readings on the Human Side of Information Technology' and 'Human Computer Interaction: Concepts, Methodologies, Tools, and Applications'.



Figure 24.19:  YouTube video rights. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.



24.4.13  Re-allocating rights

The right to re-allocate rights is part of social interaction. It allows socio-technical systems to evolve from an initial state of one administrator with all rights, to a community sharing rights. Use and meta-rights can be re-allocated as follows (see the sketch after Table 24.15):

1. Transfer. Re-allocate all rights, including meta-rights. Rights are irrevocably given to the new owner, e.g. after selling a house, the old owner has no rights to it.

2. Delegate. Re-allocate use rights but not meta-rights. It can be reversed, e.g. renting.

3. Divide. A right divided among an actor set requires all to agree to permit an act, and any party can stop it, e.g. couples who jointly own a house.

4. Multiply. A right multiplied across an actor set lets them all exercise it as if they owned it exclusively, e.g. couples who severally share a bank account.

                 Allocated by               Allocated to
                 Meta-rights   Use rights   Meta-rights   Use rights
Transfer                                         √             √
Delegate             √                                         √
Divide use           √            ½√                           ½√
Divide all           ½√           ½√            ½√             ½√
Multiply use         √            √                            √
Multiply all         √            √              √             √

Table 24.15:  Results of use and meta-rights re-allocations.
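The divide/multiply distinction is essentially an AND versus an OR over the holder set. A sketch of ours, including the P16 rule that a delegatee cannot delegate onward; all names are assumptions.

# A sketch of right re-allocation: divided rights need ALL holders to
# agree, multiplied rights let ANY holder act alone. Names are ours.

class SharedRight:
    def __init__(self, holders, divided):
        self.holders = set(holders)
        self.divided = divided   # True: joint (AND); False: several (OR)

    def permits(self, approvers):
        if self.divided:
            # e.g. joint house owners: both must sign the sale deed
            return self.holders <= set(approvers)
        # e.g. a several bank account: any one holder suffices
        return bool(self.holders & set(approvers))

class Delegation:
    def __init__(self, owner, delegatee):
        self.owner = owner          # keeps the meta-rights
        self.delegatee = delegatee  # gets use rights only

    def delegate_onward(self, third_party):
        # P16: delegating doesn't give the right to delegate
        raise PermissionError("a delegatee cannot pass rights on")

sale = SharedRight({"ann", "bob"}, divided=True)
assert not sale.permits({"ann"})    # one joint owner can't sell alone
assert sale.permits({"ann", "bob"})

account = SharedRight({"ann", "bob"}, divided=False)
assert account.permits({"bob"})     # either can draw the money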



Dividing a right means that all must agree to exercise it, while multiplying one means that any party alone can activate it. This isn't just splitting hairs: if a couple owns a house jointly, both must sign the sale deed to sell it, but if they own it severally, either party can sell it and take all the money. Re-allocating rights applies to many social situations, e.g. submitting a paper online can transfer all rights to a primary author, or also let them delegate rights to others, or divide rights so all authors must confirm changes, or multiply rights to all authors. Table 24.15 shows the resultant states. Each has different consequences, e.g. multiplying the edit right is risky but invites participation, while dividing it is safe but reduces contributions.

Delegation. Delegation, by definition, doesn't give meta-rights, so a delegatee can't pass rights on. Renting an apartment gives no right to sub-let, and lending a book doesn't give the right to on-lend it. It isn't hard to show that if delegatees delegate, accountability is diluted: if one loans a book to one who loans it to another who loses it, who is accountable? This gives the operational principle:

P16. Delegating doesn't give the right to delegate.

Allocating use rights to an existing object makes the target person accountable for it, so it requires consent, e.g. one can't add a paper co-author without agreement. The principle is:

P17. Allocating existing object use rights to a person requires their consent.

An ACS might ask: "Bob offers you edit rights to 'The 2012 Company Plan', do you accept?" In contrast, rights to null acts, like view or enter, or to acts like create, can be allocated without consent, because they imply no accountability:

P18. Allocating null rights to existing objects, or the right to create, requires no consent.

So space owners can freely delegate entry, view and create rights to anyone.

Social networks. Social networks currently send messages like:



"X wants to be friends with you"

This is a tit-for-tat social trade: X offers to make you a friend if you make them one. Yet by P7, one can befriend another without their permission92. If the software allowed it, we might get messages like:

"X considers you a friend"

This is giving friendship, not trading it. As one can love a child unconditionally, even if they don't return the favor, so friendship needn't be a commercial transaction. For a social network to consider the friends of my friends also my friends contradicts P16. As liking someone doesn't guarantee that one will like their friends, so making a friend shouldn't reset my friend list. This illustrates a technical option that failed because it had no social basis.

92.  So giving another the right to view your wall doesn't let them spam you with change notices from theirs.

24.4.14  Implementation

Traditional access control enforcement is done by a security kernel mechanism. A security kernel is a trusted software module that intercepts every access request submitted to a system and decides if it should be granted or denied, based on some specified access policy model. Usually a centralized approach is used, so one policy decision point handles all resource requests. The user sees either an executed action result or a permission denied message.

Social networks have millions of users, so centralized or semi-decentralized certificates are a bottleneck. This, plus the social need for local ownership by content contributors, suggests a strategy of distributed certificates to implement the ACS policy model outlined here. Allowing local policy decision points to handle resource requests also ensures local user control over resources. If distributed certificates are stored in the stakeholder's namespace, only he or she can access and modify them (Figure 24.20).
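A sketch of the distributed idea, ours and a toy rather than a real certificate scheme: each stakeholder's namespace holds their own permission records, and a request is decided by the local policy decision point for the resource's owner rather than by one central kernel.

# A sketch of distributed policy decision points: each stakeholder's
# namespace stores and decides their own permissions. Illustrative only.

class LocalDecisionPoint:
    """Holds one stakeholder's permission records in their namespace."""
    def __init__(self, owner):
        self.owner = owner
        self.records = set()   # (actor, resource, operation) records

    def grant(self, actor, resource, op):
        self.records.add((actor, resource, op))

    def decide(self, actor, resource, op):
        return (actor, resource, op) in self.records

namespaces = {}   # stakeholder -> their local decision point

def request(actor, owner, resource, op):
    # The request is routed to the resource owner's local decision
    # point, not to a central security kernel.
    pdp = namespaces.get(owner)
    return pdp is not None and pdp.decide(actor, resource, op)

namespaces["alice"] = LocalDecisionPoint("alice")
namespaces["alice"].grant("bob", "alice_wall", "view")
assert request("bob", "alice", "alice_wall", "view")
assert not request("eve", "alice", "alice_wall", "view")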



Figure 24.20:  Distributed access control model architecture. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

24.4.15  Summary

A legitimate ACS model can manage rights by assigning owner, parent, ancestor, offspring and local public roles to objects and spaces. The ACS axioms deduced are:

1. All non-null entity rights should be allocated to actors.

2. A persona should be owned by itself.

3. Every entity has a parent space, up to the system space.

4. Any right to use an object implies a right to view it.

5. Any communication act should have prior mutual consent.

6. A role is a right expressed in general terms, as a pointer or set.

7. A space owner can ban or give entry to a persona without its owner's permission.

8. A meta-right is the right to allocate any entity right, including itself.

9. Creation is always an act on a space, up to the system space.

10. The creator of a new entity should immediately gain all rights to it.

11. A person can view in advance any rights that could apply to them.

12. A space owner should have the right to view any offspring.

13. An entity owner should be able to enter any ancestor space.

14. Displaying an entity in a space requires both persona and space owner consent.

15. To display an entity in a space, the entity owner gives view meta-rights to the space owner.

16. Delegating doesn't give the right to delegate.

17. Allocating existing object use rights to a person requires their consent.

18. Allocating null rights to existing objects, or the right to create, requires no consent.

The above are social requirements, not technical necessities, aiming at social sustainability. We are in the process of formalizing this model as a social interaction standard for any socio-technical system.



24.4.16  Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

1. What is access control? What types of computer systems use it? Which don't? How does it traditionally work? How do social networks challenge this? How has access control responded?

2. What is a right in human terms? Is it a directive? How are rights represented as information? Give examples. What is a transmitted right called? Give examples.

3. What is the difference between a user and an actor? Contrast user goals and actor goals. Why are actors necessary for online community evolution?

4. Is a person always a citizen? How do communities hold citizens to account? If a car runs over a dog, is the car accountable? Why then is the driver accountable? If online software cheats a user, is the software accountable? If not, who is? Give an example. If automated bidding programs crash the stock market and millions lose their jobs, who is accountable? Can we blame technology for this?

5. Contrast an entity and an operation. What is a social entity? Is an online persona a person? How is a persona activated? Is this "possessing" an online body? Is a persona "really" you? If a program activates a persona, is it an online zombie? What online programs greet you by name? Do you like that? If an online banking web site welcomes you by name each time, does it build up a relationship? Who are you relating to?



6. Estimate how many hours a day you interact with technology. Be honest. Of those, how many are with online programs vs. people? Which do you prefer? Are any online programs your friend? Try out mobile phone help you can converse with, like Siri. Ask it to be your personal friend and report the conversation. If AI improved, would you like a personal AI friend?

7. Must all rights be allocated? What rights must be? Why? What manages online rights? Are AI programs accountable for rights allocated to them? In the USS Vincennes tragedy, was the computer program that shot down the Iranian civilian airliner held to account? Why not? What caused the error? What changed afterwards?

8. Who should own a persona and why? For three STSs, create a new persona, use it to connect, try to edit it, then try to delete it. Compare what properties you can and can't change. If you delete it entirely, what remains? Can you resurrect it? Describe two ways to join an online community. Which is easier? More secure?

9. Describe, with examples, current technical responses to the social problems of persona abandonment, transfer, delegation and orphaning. What do you recommend in each case?

10. Why is choice over displaying oneself to others important for social beings? What is the right to control this called? Who has the right to display your name in a telephone listing? Who has the right to remove it? Does the same apply to an online registry listing? Investigate three online cases and report what they do.

11. How do information entities differ from objects? How do spaces differ from items? What is the object hierarchy and how does it arise? What is the first space? What operations apply to spaces but not items? What operations apply to items but not spaces? Can an item become a space? Can a space become an item? Give examples.

12. How do comments differ from messages? Define the right to comment as an AEO triad. If a comment becomes a space, what is it called? Demonstrate with three commenting STSs. For systems that allow "deep" commenting (comments on comments on comments, etc.), what is going on? (Look at who adds.) Would a chat-type conversation function be simpler than so many indents?

13. For each operation set below, explain the differences, give examples, and give another variant:
   - Delete: delete, undelete, destroy.
   - Edit: edit, append, version, revert.
   - Create: create. What is the difference between create and edit?
   Define a fourth operation set.

14. Is viewing an object an act upon it? Is viewing a person an act upon them? How is viewing a social act? Can viewing an online object be a social act? Why is viewing necessary for social accountability?

15. What is communication? Is an information transfer a communication, e.g. a download? Why should communication require mutual consent? What happens if it isn't mutual? How does opening a channel differ from sending a message? Can a sender be anonymous to a receiver? Can a receiver be anonymous to a sender? Can senders or receivers be anonymous to the transmission system? Describe online systems that enable channel control.



16. Answer the following for a landline phone, mobile phone and Skype: How does the communication request manifest? What information does a receiver get and what choices do they have? What happens to anonymous senders? How does one create an address list? What else is different?

17. What is a role? Can it be empty or null? How is a role like a maths variable or computing pointer? Give role examples from three popular STSs. For each, give the ACS triad, stating what values vary. What other values could vary? Use this to suggest new useful roles.

18. How can roles, by definition, vary? For three different STSs, describe how each role variation type might work. Give three different examples of implemented roles and suggest three future developments.

19. If you unfriend a person, should they be informed? Test and report what actually happens on three common SNs. Must a banned bulletin board “flamer” be notified? What about someone kicked out of a chat room? What is the general principle here?

20. What is a meta-right? Give physical and online examples. How does it differ from other rights? Is it still a right? Can an ACS act on meta-rights? Are there ACS meta-meta-rights? If not, why not? What then does it mean to “own” an entity?

21. Why can’t an ACS creating an item be an act on that item? Why can’t it be an act on nothing? What then is it an act upon? Illustrate with online examples.

22. Who owns a newly created information entity? By what social principle? Must this always be so? Find online cases where you create a thing online but don’t fully own it.

23. In a space, who, initially, has the right to create in it? How then can others create in that space? What are creation conditions? What is the justification? Illustrate object, operation, access, visibility and edit conditions. How does transparency apply?

24. Give three examples of creating an entity in a space. For each, specify the owner, parent, ancestors, offspring and local public. Which role(s) can the owner change?

25. For five different STS genres, give examples of online creation conditions. Create something in each. Was the result transparent? Find two examples of non-transparent creations.

26. For the following, explain why or why not. Suppose you are the chair of a computer conference with several tracks. Should a track chair be able to exclude you, or hide a paper from your view? Should you be able to delete a paper from their track? What about their seeing papers in other tracks? Should a track chair be able to move a paper submitted to their track by error to another track? Investigate and report comments you find on online systems that manage academic conferences.

27. An online community has put an issue to a member vote. Evaluate these STS options:
a. Voters can see how others voted, by name, before they vote.
b. Voters can see the vote average before they vote.
c. Voters can only see the vote average after they vote, but before all voting is over.
d. Voters can only see the vote average after all the voting is over.
Find online votes to illustrate. Do the same for these voting options:
a. Voters aren’t registered, so one person can vote many times.



b. Voters are registered, but can change their one vote at any time.
c. Voters are registered, and can only vote once, with no edits.
Can the person calling the vote legitimately define these vote conditions? What if they set conditions like all votes must be signed and will be made public?

28. Is posting a video online like posting a notice in a local shop window? Explain, covering permission to post, to display, to withdraw and to delete. Can a post be deleted? Can it be rejected? Explain the difference. Give online examples.

29. Give physical and online examples of rights re-allocations. Specify rights and meta-rights. If four authors publish a paper online, list the ownership options. Discuss how each might work out in practice. Which would you prefer and why?

30. Should delegating give the right to delegate? Explain, with physical and online examples. What happens to ownership and accountability if delegatees can delegate? Discuss a worst case scenario.

31. If a property is left to you in a will, can you refuse to own it, or is it automatically yours? What rights can’t be allocated without consent? What can? Which of these rights can be freely allocated: Paper author. Paper co-author. Track chair. Being friended. Being banned. Bulletin board member. Logon ID. Bulletin board moderator. Online Christmas card access? Which require receiver consent?

32. Investigate how SN connections multiply (a back-of-envelope sketch follows this question list). For you and four friends, list the number of friends and the average. Based on this, estimate the total possible friends of friends in general. By looking at your friends’ friend lists, give, in your case, the actual friends of friends. Estimate how many messages or notifications you get from all your friends per week. From that, estimate the average messages per friend per day. So if you friended all your friends’ friends, potentially how many messages could you expect per day? What if you friended your friends’ friends’ friends too? Why is the number so large? Discuss the film Six Degrees of Separation.

33. Demonstrate how to “unfriend” a person in three social networks. Are they notified? Is unfriending “breaking up”? That an “anti-friend” is an enemy suggests “anti-Facebook” sites. Investigate technology support for people you hate, e.g. celebrities or a relationship ex. Try anti-organization sites, like sickfacebook.com. What purpose could technology support for anti-friendship serve?
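For question 32, here is a back-of-envelope sketch of how message load grows with friending degree. The friend count and message rate are assumed illustrative figures, and the sketch ignores overlap between friend circles, so real numbers would be lower.

```python
# Rough growth of daily message load if you friended everyone within
# `degree` hops. Both constants are assumed figures, not measured data,
# and overlapping friend circles are ignored for simplicity.
FRIENDS = 130                    # assumed average friend count
MSGS_PER_FRIEND_PER_DAY = 0.5    # assumed notifications per friend per day

def message_load(degree: int) -> float:
    """Estimated daily messages from all contacts within `degree` hops."""
    return (FRIENDS ** degree) * MSGS_PER_FRIEND_PER_DAY

for degree in (1, 2, 3):
    print(f"degree {degree}: ~{message_load(degree):,.0f} messages/day")
# degree 1: ~65; degree 2: ~8,450; degree 3: ~1,098,500
```

The geometric growth, not the exact constants, is the point: each extra degree multiplies the load by the average friend count.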

24.5  Part 5: The future

“The future isn’t technical or social but both”

24.5.1  Technology utopianism

Technology utopianism is the belief that technology alone creates the future. It is popular in fiction, e.g. Rosie in The Jetsons, C-3PO in Star Wars and Data in Star Trek are robots that read, talk, walk, converse, think and feel. As we do these things easily, how hard could it be? In films, robots learn (Short Circuit), reproduce (Stargate’s replicators), think (The Hitchhiker’s Guide’s Marvin), become self-aware (I, Robot) and eventually replace us (The Terminator, The Matrix). In this view, computers are an unstoppable evolutionary juggernaut (Figure 24.21), but right now they couldn’t conquer a planet of cockroaches.



Figure 24.21:  Technological utopianism. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Despite cartoons and science fiction, today’s housework robot is the Roomba (Figure 24.22). It picks up dirt, but user problems include turning in circles, paralysis, thread and hair clogs and getting stuck. Even washing it in water kills it. Make no mistake, this is a clever robot, yet it needs help to survive in a house with flat terrain, no enemies and free food.93 Dropped outside or left alone, it can’t survive. Even insects do better.

93.  The Roomba ‘feeds’ from electricity sockets in the house.



Courtesy of Fleischer Brothers production. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).



Courtesy of Brayon Michael Pieske. Copyright: pd (Public Domain (information that is common property and contains no original authorship)).



Courtesy of Larry D. Moore. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).

Figure 24.22 A-B-C:  Robot help in:  a. Superman, b. Star Wars, c. Reality (the first iRobot Roomba version).

Conversation is the same. People chat easily about many things, but the computers next to us are strangely silent. The technology exists already, so why don’t cars talk to us like KITT in Knight Rider? Is it because computer conversations are embarrassing?94

94.  Try a computer conversation at http://cleverbot.com/. Here is a sample, run through twice: If you become me, am I you? I believe in you babe, or, What if I don’t believe they are? If you are not you, who are you? I am not a vampire, or, I’m you. How do you know? Because I’m a girl, or, Because you know I know I know you know me. But girls can be vampires. Yes they can, or, No they can’t. So could you be a vampire girl? No. I am not, or, A girl. Could you become a vampire girl? Why? or, No, could you become a girl? If you were bitten by a vampire would you be a vampire girl? Who have you told that I am Jesus? or, I’d buy an X-wing, because they’re awesome. So you could become a vampire girl? I doubt it. But I know it could be true, or, I could be a lot of things. That contradicts what you said before. I don’t think it does, or, You are contradicting yourself. You could be not you and a vampire. No, I’m just a rabbit, or, Girl vampire. Could you be not you as a vampire girl? No, or, I could probably be a giraffe, if science were able to make me so. You just said you could be a vampire girl which is not you. I never said I was a boy, or I never said I was a girl. This painful conversation has no end.



Spatial coordination is the same. Compare the Robot World Cup and the human World Cup. Artificial Intelligence (AI) advocates say robots will surpass people in forty years, but they said that forty years ago. Like a mirage, AI breakthroughs are always forty years hence — or is it forty thousand years? For vision, computers still struggle with retinal let alone cortical activity:

“Computers are no real competition for the human brain in areas such as vision, hearing, pattern recognition and learning. ... And when it comes to operational efficiency there is no contest at all. A typical room-size supercomputer weighs roughly 1,000 times more, occupies 10,000 times more space and consumes a millionfold more power ...” -- Boahen, 2005

The point isn’t what computers can’t do, but that tasks like talking, walking and thinking aren’t as easy as they might seem. Technology utopianism predicts a “singularity” based on Moore’s law, that computer processing power doubles every eighteen months95. It says that shortly, super-intelligent computers will replace people (Kurzweil, 1999). This “big lie”96 fantasy sees the future as just more of the same processing that computers already have. Yet evolution is never more of the same, and the brain isn’t just a big computer.
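Footnote 95’s doubling claim is easy to quantify. The sketch below computes what an eighteen-month doubling period implies over the 2000-2045 span of Figure 24.24; it checks the extrapolation’s arithmetic, not its plausibility.

```python
# What "doubling every eighteen months" implies over 45 years (2000-2045),
# the span of Figure 24.24. This checks the extrapolation's arithmetic only.
def doublings(years: float, period_months: float = 18.0) -> float:
    """Number of doubling periods in the given number of years."""
    return years * 12 / period_months

growth = 2 ** doublings(45)   # 2**30
print(f"{growth:.2e}")        # ~1.07e+09: a roughly billion-fold increase
```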

Figure 24.23:  Letraset page for letter ‘A’. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Computers calculate better than us, as cars travel faster and cranes lift more, but calculating isn’t all the brain does. Simple processing97 works for simple cases, but real tasks like vision, hearing, thinking and conversing are productive, i.e. their information increases geometrically with size98. The productivity of language is that five year olds can speak more sentences than they could learn in a lifetime at a sentence per second (Chomsky, 2006). Children easily see that a Letraset page (Figure 24.23) is all ‘A’s, but computers struggle with such productive variation. Using pixel level processing for pattern recognition is “like trying to understand bird flight by studying only feathers. It just cannot be done.” (Norman, 1990). AI experts who saw beyond the hype knew decades ago that productive tasks like language wouldn’t be solved anytime soon (Copeland, 1993).

95.  See for example http://karlnordstrom.ca/ideas/?p=6
96.  A ‘big lie’ is a statement so ludicrous it is assumed to be true, e.g. the statement that all people are equal, when diversity is an obvious principle of natural selection. Of course, all should have equal rights.
97.  Simple processing works only at the informational level, e.g. a literal number or word recall, number calculations or a ‘photographic’ memory of a scene.
98.  Even on an 8x8 chess board, the number of possible chess games is 10^120 - more than the atoms in the universe. In an Indian tale, the inventor of chess was offered a boon by the king. He asked for a grain of wheat on the first chessboard square, two grains on the second, four on the third, eight on the fourth, and so on, doubling the grains each time. It seemed a modest request, so the king agreed, but the result was over 18 billion, billion grains, more weight than all life on Earth. This is the productivity problem.
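The grain arithmetic in footnote 98 can be checked directly; the per-grain weight used below for the mass comparison is an assumed figure.

```python
# Verifying footnote 98: one grain on square 1, doubling across 64 squares.
grains = sum(2 ** square for square in range(64))   # equals 2**64 - 1
print(f"{grains:,}")   # 18,446,744,073,709,551,615 - over 18 billion billion

# At an assumed ~0.05 g per wheat grain, that is roughly 9e14 kg of wheat.
print(f"{grains * 0.05 / 1000:.1e} kg")
```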

Figure 24.24:  The exponential growth of simple process power. Courtesy of Ray Kurzweil and Kurzweil Technologies, Inc. Copyright: CC-Att-SA-1 (Creative Commons Attribution-ShareAlike 1.0 Unported).



The bottom line for simple processing is the 99% barrier, e.g. 99% accurate computer voice recognition makes one error per 100 words, but an error per minute is well below conversation standards. For computer auto-drive cars, 99% accuracy is an accident a day! In the 2005 DARPA Grand Challenge, five of 23 autonomous vehicles finished a simple course (Miller et al, 2006). In 2007, six of eleven better-funded vehicles finished an urban track with a top average speed of 14mph. Yet skilled people drive for decades on harder roads, in worse weather, in heavier traffic, and faster, with no accidents99. The brain didn’t cross the 99% performance barrier just by increasing simple processing power.
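To put the 99% barrier in numbers, here is a small sketch of error frequency at a given word-level accuracy; the conversational speaking rate is an assumed typical figure.

```python
# Error frequency at a given word-level accuracy, assuming a conversational
# rate of ~130 words per minute (an assumed typical figure).
def errors_per_minute(accuracy: float, words_per_minute: float = 130.0) -> float:
    """Expected recognition errors per minute of speech."""
    return (1.0 - accuracy) * words_per_minute

print(errors_per_minute(0.99))    # 1.3 errors/min - about one per 46 seconds
print(errors_per_minute(0.9999))  # 0.013 errors/min - about one per 77 minutes
```

The sketch shows why the last 1% matters: closing the gap from 99% to 99.99% cuts the error rate a hundredfold, yet is far harder than the first 99%.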

Courtesy of Dmadeo. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).

99.  A good MTBA (Mean Time Between Accidents) is twenty years, see http://ridingsafely.com/ridingsafely3.html



Copyright © MGM. All Rights Reserved. Used without permission under the Fair Use Doctrine (as permission could not be obtained). See the “Exceptions” section (and subsection “allRightsReserved-UsedWithoutPermission”) on the page copyright notice.

Figure 24.25 A-B:  Leftmost:  Kim Peek was the inspiration for the film, Rain Man. Rightmost:  Dustin Hoffman in the role of Rain Man.

How can a brain handle “incalculable” tasks? It is an information processor. Its trillion (10^12) neurons are biological on/off devices powered by electricity that allow logic gates (McCulloch and Pitts, 1943), i.e. in principle no different from transistors. If processing power really depends on neuron/transistor numbers, computers should be at the brain’s potential soon. Figure 24.24 suggests that computers processed as an insect does in 2000 and as a mouse does in 2010, and will process as a human does by 2020 and beyond all humans by 2045. Of course this is nonsense, as right now computers can’t even do what ants do with a neuron sliver. Or bees, or cockroaches, or flying beetles. How will they then jump to conversation, pattern recognition and learning in a few decades?

The reason is that calculating power wasn’t the answer to incalculable tasks, as our brain discovered in its evolution. In savant syndrome, people who can calculate 20-digit prime numbers in their head need full-time care to live in society, e.g. Kim Peek, who inspired the movie Rain Man, could recall every word on every page of over 9,000 books, including all Shakespeare and the Bible, but had to be cared for by his father (Figure 24.25). He was neurologically disabled, as later parts of his brain didn’t develop. Savants then are the brain working without its more recent sub-systems. That they calculate better suggests that the brain tried simple processing power and evolved past it. In contrast, technology utopians still don’t see that more of the same isn’t evolution. Computers are electronic savants, calculation wizards that need minders to survive in the real world. If computers excel at the sort of processing the brain outgrew a million years ago, how are they the future? If super-computers built from PC video cards running in parallel are the future of computing, then bigger oxen are the future of farming! How can AI surpass HI (Human Intelligence) if it isn’t even going in the same direction?

A system’s performance isn’t just its parts but also how they connect. Computers today follow von Neumann’s architecture, but the brain didn’t, e.g. it has no CPU (Sperry and Gazzaniga, 1967). It crossed the 99% performance barrier by taking design risks von Neumann avoided (Whitworth, 2009c). The processing of processing is avoided by computer science because it gives infinite loops, yet it allows symbolism - linking one brain neural assembly (a symbol) to another (a perception). This is the basis of meaning and language. Processing changes information, so assumes a context100. Only by the processing of processing can we modify contexts, i.e. learn. Denying computers this option denied them meaning.

100.  The context of information is the option set chosen from.



Rather than an inferior biological version of today’s computers, the brain is a different kind of processor altogether. It processes its own processing to give language, mathematics and philosophy. The answer to the productivity problem wasn’t more processing but the processing of processing. By this risky step, the brain perceives a “self”, “others”, “friends” and “community”, the same constructs that human and computer savants struggle with. If today’s super-computers aren’t even in the same processing league as the brain101, technology utopians are computing traditionalists posing as futurists.

101.  This is not to say they cannot be, just that to do this would require a basic change in their architecture.

24.5.2  The socio-technical vision

Figure 24.26:  Mr. Clippy takes charge. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.



The question facing computing isn’t when it will replace people but when people will see it for what it is, e.g. Mr. Clippy, Office 97’s paper clip assistant (Figure 24.26):

“It wouldn’t go away when you wanted it to. It interrupted rudely and broke your train of thought.” -- Pratley, 2004

Responses included “Die, Clippy, Die!” (Gauze, 2006), but its Microsoft designer still wondered: “If you think the Assistant idea was bad, why exactly?” The specific answer is: because it thought it was in charge. In Windows XP, Mr. Clippy was replaced by tags smart enough to know their place. Software that tries to be “smart” by itself quickly ends up like the sorcerer’s apprentice. Why tie up twenty-million-dollar super-computers to try to do what brains already do, with millions of years of real life beta-testing? Even if we redesign computers to work like the brain, say as neural nets, who is to say they won’t inherit the same weaknesses? If the brain has solved the productivity problem as well as can be expected, let’s change the goals of computing, from human mimicry to human assistance.

This is already happening. Driverless cars are still a dream, but reactive cruise control, range sensing and assisted parallel parking already exist (Miller et al, 2006). Computer surgery struggles, but computer supported remote surgery and computer assisted surgery are here today. Robots run clumsily, but people with robotic limbs are more than able. Computer piloted drones are a liability, but remotely piloted drones are an asset. Computer generated animations are great, but state-of-the-art animations like Avatar combine human actors and computers. Chess players advised by computers perform better than either alone102. In killer applications of the last decade, from email to Facebook, people do what they do best and technology does what it does best, e.g. email transmits information and people create meaning. So “horses for courses” is letting computers process information and people process meaning. That meaning is a level above information implies that people should “mind” computers and computers shouldn’t control people.

Figure 24.27:  The socio-technical vision. Copyright status: Unknown (pending investigation). See section “Exceptions” in the copyright terms.

Socio-technology is about technology and people, with the latter the “elder” system (Figure 24.27). If people direct technology it may go wrong, but if technology directs people it will go wrong. Higher levels directing lower ones is evolution, but lower ones directing higher ones is devolution. To focus on lower levels, because they are easier, isn’t progress103. To see the Internet only in technical terms is to underestimate it, again! Let computers be background not foreground, as pervasive and ubiquitous computing theories propose. Technology should merge with people, not the other way around. Technology without a human context isn’t even useless - it is pointless. If “technology is the future”, something mindless and heartless is in charge of us. So the future is socio-technology, not technology.

Some say the Internet is making us stupid104 but a mirror just reflects. Online media showing human brutality, corruption or stupidity just reveal what is. The Internet, as a microscope and telescope on humanity, is showing us to us. It isn’t physical, but thoughts cause words and deeds as guns fire bullets. Humanity’s thoughts are now online for us to choose. We, the human race, are choosing what we think, and what we think is now online, with web-counters keeping the score. What the Internet electronic mirror shows isn’t always pretty, but it is real, and to change oneself one must first see oneself. The evolution of computing is a part of human evolution, of a social experiment that has been ongoing for thousands of years. Only by personal evolution, by seeing beyond ourselves, do we help it succeed.

102.  See Kasparov’s ‘The Chess Master and the Computer’, 2010
103.  A man was looking for his lost keys at night under a well lit lamp post. When asked where he lost them, he replied: ‘Over there in the bushes - but the light is better here.’ Seeking intelligence in information is the same.
104.  For example: ‘The internet is full of idiots writing rubbish for other idiots to read.’; ‘The internet is full of idiots and one of them might just be you.’; and ‘Do not feed the trolls’ (DNFTT).

24.5.3  Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

1. What is technology utopianism? Give examples from movies. What is the technology singularity? In this view, why must computers take over from people? What is the false assumption here?

2. What technology advances did the last century expect by the year 2000? Which ones are we still awaiting? What do people expect robots to be doing by 2050? What is realistic? How do robot achievements like the Sony dog rank? How might socio-technical design improve the Sony dog? In the socio-technical paradigm, how will robots evolve? Give examples.

3. If super-computers achieve the processing power of one human brain, then many brains, are many people together more intelligent than one? Review the “Madness of Crowds” theory, that people are less intelligent together. Give examples. Why doesn’t adding more programmers to a project always finish it quicker? What, in general, affects whether parts perform better together? Is a super computer, with as many transistors as the brain has neurons, its processing equal? Explain.

4. How do today’s super computers increase processing power? List the processor cores of the top ten. Which use NVidia PC graphic board cores? How is this power utilized in real computing tasks? How do processing cores operating in sequence or parallel affect performance? How is that decided in practice? (CS students only).

5. Review the current state-of-the-art for automated vehicles, whether car, plane, train, etc. Are any fully “pilotless” vehicles currently in use? What about remotely piloted vehicles? When does full computer control work? When doesn’t it? (hint: consider active help systems). When might full computer control of a vehicle be useful? Suggest how computer control of vehicles will evolve, with examples.

6. What is the 99% barrier? Why is the last 1% of accuracy a problem for productive tasks? Give examples from language, logic, art, music, poetry, driving and another. How common are such tasks in the world? How does the brain handle them?

7. What is a human savant? Give examples past and present. What tasks do savants do easily? Can they compete with modern computers? What tasks do savants find hard? What is the difference? Why do savants need support? If computers are like savants, what support do they need?

8. Find three examples of software that, like Mr. Clippy, thinks it knows best. Give examples of: 1. Acts without asking, 2. Nags, 3. Changes secretly, 4. Makes you work.

9. Think of a personal conflict you would like advice on. Keep it simple and clear. Now try these three options. In each case explain and ask the question the same way:
a. Go to your bedroom alone, put a photo of a family member you like on a pillow. Explain and ask the question out loud, then imagine their response.
b. Go to an online computer like http://cleverbot.com/ and do the same.
c. Ring an anonymous help line and do the same.
Compare and contrast the results. Which was the most helpful?

10. A rational way to decide is to list all the options, assess each one and pick the best. How many options are there for these contests: 1. Checkers, 2. Chess, 3. Civilization (a strategy game), 4. An MMORPG, 5. A debate. Which ones are computers good at? What do people do if they can’t calculate all the options? Can a program do this? How do online gamers rate human and AI opponents? Why? Will this always be so?

11. Mr. Clippy was based on Bayesian logic. What data drove his decisions? What was left out? Why did users find him rude? Why couldn’t he recognize rejection? Which users liked Mr. Clippy? Turn on the auto-correct in Word and try writing the equation: i = 1. Why does Word get it wrong? How can you fix it without turning off auto-correct? Give online examples of recommending and taking charge.

12. What is the difference between syntax and semantics in language? What are programs good at? Look at online text-to-speech systems and translators. How successful are they? Are computers doing what people do? At what level is the translating occurring? Are they semantic level transformations? Discuss John Searle’s Chinese room thought experiment.

24.6  Acknowledgements

Thanks to the first author’s wife for helpful advice, and to the students of 158729 (STS Design) at Massey University for trying out the questions. Also thanks to Yijing Qian for Figure 24.2 and Figure 24.9.

24.8  References

Ackerman, Mark S. (2000): The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. In Human-Computer Interaction, 15 (2) pp. 181-203
Ahmad, Adnan and Whitworth, Brian (2011): Distributed access control for social networks. In 2011 7th International Conference on Information Assurance and Security (IAS), pp. 68-73
Alberts, Bruce, Bray, Dennis, Lewis, Julian, Raff, Martin, Roberts, Keith and Watson, James D. (1994): Molecular Biology of the Cell 3E. Garland Science
Alexander, Christopher (1964): Notes on the Synthesis of Form. Harvard University Press
Alter, Steven (1999): A general, yet useful theory of information systems. In Communications of the AIS, 1 (3)
Beer, David and Burrows, Roger (2007): Sociology and, of and in Web 2.0: Some Initial Considerations. In Sociological Research Online, 12 (5) pp. 1-15
Benkler, Yochai (2002): Coase’s Penguin, or, Linux and “The Nature of the Firm”. In Yale Law Journal, 112 (3) pp. 369-446



Berners-Lee, Tim (2000): Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. HarperBusiness
Bertalanffy, Ludwig Von (1968): General System Theory: Foundations, Development, Applications (Revised Edition). George Braziller Inc
Boahen, Kwabena (2005): Neuromorphic Microchips. In Scientific American, 292 (5) pp. 56-63
Borenstein, Nathaniel S. and Thyberg, Chris A. (1991): Power, Ease of Use and Cooperative Work in a Practical Multimedia Message System. In International Journal of Man-Machine Studies, 34 (2) pp. 229-259
Boutin, Paul (2004): Can e-mail be saved. In Infoworld, 14
Burk, Dan L. (2001): Copyrightable functions and patentable speech. In Communications of the ACM, 44 (2) pp. 69-75
Callahan, David (2004): The Cheating Culture: Why More Americans Are Doing Wrong to Get Ahead. Mariner Books
Campbell-Kelly, Martin (2008): Historical reflections: Will the future of software be open source? In Communications of the ACM, 51 (10) pp. 21-23
Chomsky, Noam (2006): Language and Mind. Cambridge University Press
Chung, Lawrence, Nixon, Brian A., Yu, Eric and Mylopoulos, John (1999): Non-Functional Requirements in Software Engineering (The Kluwer International Series in Software Engineering, Volume 5). Springer
Clark, D. D. and Wilson, D. R. (1987): A Comparison of Commercial and Military Computer Security Policies. In: IEEE Symposium on Security and Privacy 1987. pp. 184-195
Cohen, Bram (2003): Incentives Build Robustness in BitTorrent. In Workshop on Economics of Peer-to-Peer Systems, 6 (22)
Copeland, Jack (1993): Artificial Intelligence: A Philosophical Introduction. Wiley-Blackwell
Cysneiros, L. M. and Leite, Julio Cesar Sampaio do Prado (2002): Non-functional requirements: from elicitation to modeling languages. In Computer, 35 (3) pp. 8-9
David, Julie Smith, McCarthy, William E. and Sommer, Brian S. (2003): Agility: the key to survival of the fittest in the software market. In Communications of the ACM, 46 (5) pp. 65-69
Davis, Fred D. (1989): Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. In MIS Quarterly, 13 (3) pp. 319-340



Department of Defense (1985). TCSEC - Trusted Computer Security Evaluation Criteria (TCSEC), DOD 5200.28-STD. Retrieved 19 May 2012 from Department of Defense:
Diamond, Jared M. (1998): Guns, Germs, and Steel: The Fates of Human Societies. W. W. Norton and Company
Esfeld, Michael (1998): Holism and analytic philosophy. In Mind, 107 (426) pp. 365-380
Ferraiolo, David F. and Kuhn, D. Richard (2004): Role Based Access Control. In Review Literature And Arts Of The Americas, 14 (5) pp. 554-563
Figart, Deborah M. and Golden, Lonnie (eds.) (2000): Working Time: International Trends, Theory and Policy Perspectives (Routledge Advances in Social Economics). Routledge
Forman, Bruce Jay and Whitworth, Brian (2007): Information Disclosure and the Online Customer Relationship. In: Quality, Values and Choice Workshop, Computer Human Interaction 2007, Portland, Oregon, USA. pp. 1-7
Freeden, Michael (1991): Rights (Concepts in Social Thought).
Freudenthal, Eric, Pesin, Tracy, Port, Lawrence, Keenan, Edward and Karamcheti, Vijay (2002): dRBAC: distributed role-based access control for dynamic coalition environments. In Proceedings 22nd International Conference on Distributed Computing Systems, pp. 411-420
Geen, R. G. and Gange, J. J. (1983): Social facilitation: Drive theory and beyond. In: Blumberg, Herbert H. (ed.). “Small Groups and Social Interaction: v. 2 (Small Groups & Social Interactions)”. John Wiley and Sons Ltd, pp. 141-153
Gediga, Gunther, Hamborg, Kai-Christoph and Duntsch, Ivo (1999): The IsoMetrics Usability Inventory: An Operationalization Of ISO 9241-10 supporting summative and formative evaluation of software systems. In Behaviour and Information Technology, 18 (3) pp. 151-164
Hoffman, L. R. and Maier, N. R. F. (1961): Quality and acceptance of problem solutions by members of homogenous and heterogenous groups. In Journal of Abnormal and Social Psychology, 62 pp. 401-407
Johnson, Deborah G. (2001): Computer Ethics (3rd Edition). Prentice Hall
Jonsson, Erland (1998): An integrated Framework for Security and Dependability. In Information Security, pp. 22-29
Kant, Immanuel (1999): Critique of Pure Reason (The Cambridge Edition of the Works of Immanuel Kant). Cambridge University Press
Karp, Alan H., Haury, Harry and Davis, Michael H. (2009): From ABAC to ZBAC: The Evolution of Access Control Models. In Control, (0) pp. 22-30
Keeney, Ralph L. and Raiffa, Howard (1976): Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press
Kelly, Erin (ed.) (2001): Justice as Fairness: A Restatement. Belknap Press of Harvard University Press
Kienzle, Darrell M. and Wulf, William A. (1998): A practical approach to security assessment. In: Proceedings of the 1997 workshop on New security paradigms. pp. 5-16
Knoll, Kathleen and Jarvenpaa, Sirkka L. (1994): Information technology alignment in highly turbulent environments: the concept of flexibility. In: Proceedings of the 1994 computer personnel research conference on Reinventing IS: managing information technology in changing organizations. pp. 1-14
Kurzweil, Ray (1999): The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult
Lampson, B. W. (1969): Dynamic protection structures. In: Proceedings of the November 18-20, 1969, fall joint computer conference. pp. 27-38
Lessig, Lawrence (1999): Code and Other Laws of Cyberspace. Basic Books
Lindquist, Christopher (2005): Fixing the requirements mess. In CIO, 0
Locke, John (1963): An essay concerning the true original extent and end of civil government: Second of “Two Treatises on Government” (1690). In: Somerville, John and Santoni, Ronald (eds.). “Social and Political Philosophy: Readings From Plato to Gandhi”. Anchor, pp. 169-204
Lorenz, E. N. (1963): Deterministic Nonperiodic Flow. In Journal of the Atmospheric Sciences, 20 (2) pp. 130-141
Losavio, Francisca, Chirinos, Ledis, Matteo, Alfredo, Levy, Nicole and Ramdane-Cherif, Amar (2004): Designing Quality Architecture: Incorporating ISO Standards into the Unified Process. In IS Management, 21 (1) pp. 27-44
Mandelbaum, Michael (2002): The Ideas that Conquered the World: Peace, Democracy, and Free Markets in the Twenty-first Century. PublicAffairs
McCulloch, Warren S. and Pitts, Walter H. (1943): A logical calculus of the ideas immanent in nervous activity. In Bulletin of Mathematical Biophysics, 5 (4) pp. 115-133



MessageLabs (2006). The year spam raised its game; 2007 predictions. Retrieved 19 May 2012 from MessageLabs:
MessageLabs (2010). Intelligence Annual Security Report, 2010. Retrieved 19 May 2012 from MessageLabs:
Meyrowitz, Joshua (1985): No Sense of Place: The Impact of Electronic Media on Social Behavior. Oxford University Press, USA
Miller, George A. (1956): The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. In Psychological Review, 63 pp. 81-97
Miller, Isaac, Garcia, Ephrahim and Campbell, Mark (2006): To Drive Is Human. In IEEE Computer, 39 (12) pp. 52-56
Mitchell, William J. (1995): City of Bits: Space, Place, and the Infobahn (On Architecture). The MIT Press
Moreira, Ana, Araújo, João and Brito, Isabel (2002): Crosscutting quality attributes for requirements engineering. In Proceedings of the 14th international conference on Software engineering and knowledge engineering SEKE 02, (0)
Norman, Donald A. (1990): The Design of Everyday Things. New York, Doubleday
Nuseibeh, Bashar and Easterbrook, Steve (2000): Requirements engineering: a roadmap. In: Proceedings of the Conference on The Future of Software Engineering 2000. pp. 35-46
OECD (1996). Guidelines for the Security of Information Systems. Retrieved 19 May 2012 from OECD:
Penrose, Roger (2005): The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf
Pinto, Jeffrey K. (2002): Project Management 2002. In Research Technology Management, 45 (2) pp. 22-37
Porra, Jaana and Hirschheim, Rudy (2007): A Lifetime of Theory and Action on the Ethical Use of Computers: A Dialogue with Enid Mumford. In Journal of the Association for Information Systems, 8 (9) pp. 467-478
Poundstone, William (1992): Prisoner’s Dilemma. Anchor
Raymond, Eric S. (1999): The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. O’Reilly Media



Regan, Priscilla M. (1995): Legislating Privacy: Technology, Social Values, and Public Policy. University of North Carolina Press
Reid, Fraser J. M., Malinek, Vlastimil, Stott, Clifford J. T. and Evans, Jonathan ST. B. T. (1996): The messaging threshold in computer-mediated communication. In Ergonomics, 39 (8) pp. 1017-1037
Ridley, Matt (2010): The Rational Optimist: How Prosperity Evolves. Harper
Rosa, Nelson S., Justo, George R. R. and Cunha, Paulo R. F. (2001): A framework for building non-functional software architectures. In Parallel Computing, pp. 141-147
Rose, E. (2000): Balancing internet marketing needs with consumer concerns: a property rights framework. In ACM SIGCAS Computers and Society, 30 (2) pp. 20-24
Samuelson, Pamela (2003): Unsolicited communications as trespass? In Communications of the ACM, 46 (10) pp. 15-20
Sanai, Hakim Abu L Majd Madud (1968): The Enclosed Garden of Truth. Theophania Publishing
Sanders, Mark S. and McCormick, Ernest J. (1993): Human Factors In Engineering and Design. McGraw-Hill Science
Seabold, Daniel E., Honemann, Daniel H. and Balch, Thomas J. (eds.) (1993): Robert’s Rules of Order Newly Revised, 11th edition. Da Capo Press
Shannon, Claude E. and Weaver, Warren (1949): The Mathematical Theory of Communication. University of Illinois Press
Shannon, Claude E. and Weaver, Warren (1971): The Mathematical Theory of Communication. University of Illinois Press
Shirky, Clay (2008): Here Comes Everybody: The Power of Organizing Without Organizations. Penguin Press
Short, John, Williams, Ederyn and Christie, Bruce (1976): Visual communication and social interaction - The role of ‘medium’ in the communication process. In The Social Psychology of Telecommunications, pp. 43-60
Simone, Mauricio De and Kazman, Rick (1995): Software architectural analysis: an experience report. In CASCON 95 Proceedings of the 1995 conference of the Centre for Advanced Studies on Collaborative research



Skinner, Burrhus F. (1948): ‘Superstition’ in the pigeon. In Journal of Experimental Psychology, 38 (2) pp. 168-172
Smith, Heather A., Kulatilaka, Nalin and Venkatramen, N. (2002): Developments in IS practice III: Riding the wave: extracting value from mobile technology. In Communications of the Association for Information Systems, 8 (0) pp. 467-481
Sommerville, Ian (2004): Software Engineering (9th Edition). Addison Wesley
Spence, Robert and Apperley, Mark (2011). Bifocal Display. Retrieved 6 June 2013 from [URL to be defined - in press]
Sperry, R. W. and Gazzaniga, M. S. (1967): Language following surgical disconnexion of the hemispheres. In: Millikan, Darley (ed.). “Brain Mechanism Underlying Speech and Language”. Grune and Stratton
Tenner, Edward (1997): Why Things Bite Back: Technology and the Revenge of Unintended Consequences (Vintage). Vintage
Thompson, Mary, Johnston, William, Mudumbai, Srilekha, Hoo, Gary, Jackson, Keith and Essiari, Abdelilah (1999): Certificate-based Access Control for Widely Distributed Resources. In Proceedings of 8th USENIX Security Symposium, pp. 215-228
Toffler, Alvin (1980): The Third Wave. Bantam
Weiss, Aaron (2003): Ending spam’s free ride. In netWorker, 7 (2) pp. 18-24
Whitworth, Brian (2009b): The social requirements of technical systems. In: Whitworth, Brian and Moor, Aldo de (eds.). “Handbook of Research on Socio-Technical Design and Social Networking Systems (2 Volumes)”. Information Science Reference
Whitworth, Brian (2011): The Virtual Reality Conjecture. In Prespacetime Journal, 2 (9) pp. 1404-1433
Whitworth, Brian (2009a): A Comparison of Human and Computer Information Processing. In: Pagani, Margherita (ed.). “Encyclopedia of Multimedia Technology and Networking (2 Volume Set)”. Idea Group Publishing, pp. 230-239
Whitworth, Brian (2006): Measuring disagreement. In: Reynolds, Rodney A., Woods, Robert and Baker, Jason D. (eds.). “Handbook of Research on Electronic Surveys and Measurements”.
Whitworth, Brian and Bieber, Michael (2002): Legitimate Navigation Links. In: ACM Hypertext 2002, Demonstrations and Posters 2002, Maryland, USA. pp. 26-27



Whitworth, Brian and Friedman, Robert S. (2009): Reinventing Academic Publishing Online. Part I: Rigor, Relevance and Practice. In First Monday, 14 (8)
Whitworth, Brian and Liu, Tong (2008): Politeness as a Social Computing Requirement. In: Luppicini, Rocci (ed.). “Handbook of Conversation Design for Instructional Applications (Premier Reference Source)”. Information Science Reference, pp. 419-436
Whitworth, Brian and Liu, Tong (2009): Channel E-mail: A Sociotechnical Response to Spam. In IEEE Computer, 42 (7) pp. 63-72
Whitworth, Brian and Moor, Aldo de (2002): Legitimate by Design: Towards Trusted Virtual Community Environments. In: HICSS 2002. p. 213
Whitworth, Brian and Whitworth, Alex P. (2010): The social environment model: Small heroes and the evolution of human society. In First Monday, 15 (11)
Whitworth, Brian and Whitworth, Elizabeth (2004): Spam and the Social-Technical Gap. In Computer, 37 (10) pp. 38-45
Whitworth, Brian, Bañuls, Victor, Sylla, Cheickna and Mahinda, Edward (2008): Expanding the Criteria for Evaluating Socio-Technical Software. In IEEE Transactions on Systems, Man, and Cybernetics, 38 (4) pp. 777-790
Whitworth, Brian, Gallupe, Brent and McQueen, Robert (2000): A cognitive three-process model of computer-mediated group interaction. In Group Decision and Negotiation, 9 (5) pp. 431-456
Whitworth, Brian, Gallupe, Brent and McQueen, Robert (2001): Generating agreement in computer-mediated groups. In Small Group Research, 32 (5) pp. 625-665
Wright, Robert (2001): Nonzero: The Logic of Human Destiny. Vintage



About the authors

Brian Whitworth

Personal Homepage: http://brianwhitworth.com/

Born in England and brought up in New Zealand, Brian Whitworth currently works at Massey University in Auckland, New Zealand. After doing a mathematics degree, and a Master’s thesis on split-brain neuropsychology, Brian joined the New Zealand Army, where he was the first specialist to complete regular army officer cadet training. He worked as an army psychologist, and then in computer operational simulations (wargames), while simultaneously raising four wonderful children, until he retired in 1989 as a Major. Brian then completed his doctorate on online groups, and students at his university used the social voting system he built until the World Wide Web arrived. In 1999, he worked in the USA as a professor, and published in journals like Small Group Research, Group Decision and Negotiation, Communications of the AIS, IEEE Computer, Behavior and Information Technology, and Communications of the ACM. More recently, he was the senior editor of the Handbook of Research on Socio-Technical Design and Social Networking Systems, written by over a hundred leading experts worldwide. His interests include computing, psychology, quantum theory and motor-cycle riding.

Adnan Ahmad

Has also published under the names “Adnan Ahmad”, “Adnan Ahmed”, and “A. Ahmad”.

Adnan Ahmad was born in Lahore, Pakistan. He received his Bachelor of Science (Hons.) from Govt. College University (GCU, 2005), with a major in software engineering, and a Master of Science from Lahore University of Management Sciences (LUMS, 2009), with a major in distributed systems. He worked for five years in industry, getting hands-on experience with cutting edge software and hardware technologies. His PhD in Information Technology at Massey University, New Zealand, was on a formal model of distributed rights allocation in online social interaction. He has published in well-known conferences like Worldcomp, IFIP SEC, IAS, IAIT and Trustcom. His current research applies socio-technical design principles to computer security, and his other interests include crowd sourcing, distributed systems, spam and image processing.


