[Image: "Prisoner's dilemma." Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons]
"Programming" Social Collective Intelligence

Daniele Miorandi and Lorenzo Maggi
Digital Object Identifier: 10.1109/MTS.2014.2345206
Date of publication: 17 September 2014
Worldwide, societies are seeing rapid change in modes of social interaction and organization. These new interaction modes are predicated on emerging forms of information infrastructure together with rapidly evolving devices, systems, and applications that are ever more deeply interwoven with our social fabric [1]. Computers and humans complement each other, as there are activities that are difficult or impossible for one and easy for the other. We argue that the key to activating the potential of the new generation of high-impact technologies and services aimed at enhancing human problem-solving capabilities lies in realizing a tight symbiosis between humans and computers. In this symbiosis, information and communications technology (ICT) becomes an unobtrusive, pervasive extension of individuals and groups, empowering and enabling them to achieve ambitious objectives. We believe that a new form of intelligence can be devised to tackle complex and challenging problems, by exploiting, in a coordinated and holistic fashion, the complementary strengths of humans and machines through the integration of computers, the Internet, individuals, and society. We term this integration "social collective intelligence" (SCI).
Social Collective Intelligence

The power of social collective intelligence resides in the combination of contributions coming from both humans and computers. Humans bring in their competences, knowledge, and skills, together with networks of social relationships and an understanding of social structures. ICT, on the other hand, serves the purpose of searching for, delivering, and storing relevant information that will then be employed by individuals and collectives in their contexts to achieve their goals and, eventually, to improve
the overall environment in which they live. This combination allows societal-scale coordination of human capacities, and opens ways of reasoning and problem solving far beyond the capacity of any individual. Social collective intelligence results, therefore, from the purposeful integration of computers, digital communication media, individuals, and society, with the objective of managing and solving challenging global problems. This approach has the potential to greatly enhance the problem-solving capabilities of individuals and groups by combining the power of ICT with the knowledge and competencies of billions of people worldwide. By leveraging human problem-solving capability, SCI is particularly appealing when applied to large-scale "wicked problems" [2], or problems that are "relatively weakly specified" (in a computer science sense) and whose solution is not unique, but depends on social context in terms of culture, norms, and regulations. Yet, while the notion and features of SCI look appealing at an abstract level, the engineering problem of how to effectively "build" or "program" such systems is daunting. The reason lies in the fact that we are talking about complex sociotechnical systems, characterized by a deep entanglement between technological artefacts and social constructs, where the scale of the system and the level of interaction among its components make it difficult – if at all possible – to understand the behaviors emerging at the system-wide level by analyzing the behaviors and interactions of individual components. In addition, of course, we need to account for the unpredictable nature of human beings and social dynamics, which hardly fit the traditional "programming" logic employed by computer scientists and engineers. We are therefore left with the following question: "How can we program
individuals, collectives, and machines to build an SCI system that will be able to achieve a desired goal?" Note that the verb "to program" is used here in a provocative sense: in this setting we are far from a system that exhibits deterministic or controllable behavior, nor are we interested in a "big brother" vision in which people lose their freedom, as in the movie "The Matrix." We are, rather, interested in systems made "by the people for the people," where social collective action, enabled and empowered by advanced information and communications technologies, can help solve societal problems.
Examples

SCI is not blue-sky research. Society already provides a rich and rapidly developing picture of SCI-like systems. The emergence of mechanisms like crowdsourcing, crowdfunding, and games with a purpose all point to the potential for much more systematic and large-scale interventions that greatly increase the capacity of society to develop social computations [3] that unleash the power of our "cognitive surplus" [4]. We now provide a number of examples of SCI systems (in the broad sense), highlighting both distinctive and common features. The first example refers to the usage of social networking and, more generally, social media. These are technologies that, while relying on some social dynamics for their functioning, impact social structures in sometimes unexpected ways (think, e.g., of friending/unfriending on Facebook [5], [6]). Other examples include crowdsourcing and crowdsensing systems, in which computational tasks are "delivered" to groups and the "wisdom of the crowd" [7] is leveraged to solve them. Another class of systems goes under the umbrella term of "human computation," where human skills are exploited to carry out tasks at which computers are (comparatively) poor [8].
A very appealing example is the reCAPTCHA system [9]. Captchas are programs used to determine whether the user of an ICT service or application is a machine or a human, typically employed to prevent access by bots. In most cases this is accomplished by showing a distorted text, which humans can correctly interpret but machines cannot. While interpreting captchas, humans are actually performing a computational task, and this potential can be exploited to support the digitization of book content. Indeed, many projects around the world are scanning old books, making them available in digital format for future generations. Typically, after the scanning phase, the content is passed through optical character recognition (OCR) software in order to transform the scanned images into text. OCR software, however, is not perfect and – in particular when dealing with old books that may be physically corrupted – can fail to correctly read some words. By using such distorted images as captchas, it is possible to leverage the human ability to decipher distorted text to turn the scanned version of a given word into the corresponding text. This clearly requires additional mechanisms for checking the correctness of the answer provided to the captcha. Typically, two methods are used: either the image is shown to a number of different users and a majority voting scheme decides on the correct reading, or the distorted image is presented together with one for which the correct answer is already known.
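To make the two verification methods just described concrete, here is a minimal sketch in Python. It assumes each challenge pairs a control word (whose answer is known) with an unknown scanned word, and accepts the unknown word's transcription once a simple majority quorum agrees; all names and parameters are our own illustration, not the actual reCAPTCHA implementation:

```python
from collections import Counter
from typing import Optional

def verify_response(control_answer: str, control_truth: str,
                    unknown_answer: str, votes: Counter,
                    quorum: int = 3) -> Optional[str]:
    """Accept a transcription of the unknown word only from users who
    first proved trustworthy by solving the control word correctly."""
    if control_answer.strip().lower() != control_truth.strip().lower():
        return None  # user (or bot) failed the known captcha: discard
    votes[unknown_answer.strip().lower()] += 1
    word, count = votes.most_common(1)[0]
    # majority voting: commit the transcription once enough users agree
    return word if count >= quorum else None

votes: Counter = Counter()
result = None
for answer in ["tavern", "tavern", "tavem", "tavern"]:
    result = verify_response("overlook", "overlook", answer, votes) or result
print(result)  # -> 'tavern', once three independent users agree
```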
Another example is chess. Chess has long been considered a benchmark task for measuring the power of artificial intelligence systems. After the seminal victory of Deep Blue over the then World Champion Garry Kasparov in 1997, it became clear that – for such tasks – machines could compete with humans. Yet, how many resources are consumed in doing so? A human brain typically consumes around 25 W of power, while Deep Blue's power consumption was orders of magnitude higher (unconfirmed estimates are in the 150–200 kW range, roughly four orders of magnitude more). Similar reasoning applies to IBM Watson and the Jeopardy! victory [10]. In this case too, intelligent software agents, powered by a massive hardware architecture, were able to compete with (and actually outperform) humans at a complex and wicked task. Yet, in spite of the recent attention to "greening" ICT, this came at an extremely high cost in terms of resources. So the question is not just whether humans or computers are better at carrying out a given task, but also at what cost. In terms of energy efficiency, humans still largely outperform machines at playing chess or Jeopardy!.

Some other interesting examples are emerging in the field of citizen science. A well-known one is Foldit (https://fold.it/), an online video game about protein folding. By playing the game, users have actually helped discover structural configurations of relevant proteins. More than 240 000 users are currently registered. The new protein foldings discovered by users led to a publication in Nature in which more than 57 000 users ("Foldit players") are cited as co-authors [11].
In all the previous examples, no social construct was explicitly present. One interesting case in this sense is the Network Challenge, launched in 2009 by DARPA. The challenge was organized as follows: on a given day, ten red weather balloons would be released at unknown locations in the U.S., and the first team to correctly locate all the balloons would win. The winning team, based at M.I.T., was able to correctly identify the location of all balloons in less than nine hours. What was most interesting was the strategy underpinning the M.I.T. team's solution [12]. They set up a public web site and fostered viral recruitment of participants through a recursive financial incentive scheme (based on sharing the overall prize offered by DARPA). In this way social dynamics, coupled with appropriate incentive structures, enabled the team to mobilize a sufficiently large number of motivated users providing accurate data.

In 2012 a revised version of the same challenge was proposed as the Tag Challenge (http://www.tag-challenge.com/). The Tag Challenge was also a distributed search problem, but this time it was about locating and photographing, within twelve hours, five persons in five different cities (Washington, DC, New York, London, Stockholm, and Bratislava). The "suspects" wore T-shirts with a special logo, and a booking-style photograph of each was posted online on the day of the competition. The main difference from the Red Balloon challenge was the spatial distribution of the locations involved (four countries, two continents), which required the ability to build on a geographically dispersed social network of users. In addition, the fact that the "targets" were moving made data curation (in particular, the identification of false positives) much more challenging. The winning team used a strategy similar to the one that won the Red Balloon challenge, but managed to identify only three of the five targets.
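The recursive incentive scheme reported in [12] paid $2000 to a balloon's finder, $1000 to the finder's recruiter, $500 to that person's recruiter, and so on, halving at each level up the recruitment chain. A minimal sketch of that payout rule (the function and the example chain are our own illustration):

```python
def payouts(chain: list[str], prize: float = 2000.0) -> dict[str, float]:
    """Pay the finder `prize`, then halve the reward at each step up
    the recruitment chain (finder comes first in `chain`)."""
    rewards: dict[str, float] = {}
    amount = prize
    for person in chain:
        rewards[person] = rewards.get(person, 0.0) + amount
        amount /= 2.0
    # geometric halving keeps total payout per balloon below 2 * prize,
    # so the scheme stays within the overall DARPA prize budget
    return rewards

# Alice found a balloon; Bob recruited Alice; Carol recruited Bob.
print(payouts(["alice", "bob", "carol"]))
# -> {'alice': 2000.0, 'bob': 1000.0, 'carol': 500.0}
```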
Limitations and Challenges

After the non-exhaustive list of examples described above, one may wonder whether there are still fundamental challenges in the design and operation of this type of system. The answer is: yes, indeed there are. An in-depth analysis carried out by the authors highlighted the fact that current systems employing humans as "computational units" are rather primitive along five dimensions:

■ Social structure and dynamics: most of the aforementioned examples deal with single individuals. Only a few (e.g., the DARPA Red Balloon winning solution) actually leverage social dynamics (e.g., peer pressure) from a computational perspective. This leaves plenty of room for building more effective systems, able to efficiently leverage their deep embedding in the social fabric.
■ Compositionality: in the examples above, it is decided at design time which tasks shall be carried out by humans and which ones by computers. It remains an interesting philosophical and technological question whether we can design systems where humans and machines carry out computational tasks transparently, if not interchangeably.
■ Collectiveness: most existing solutions fail to leverage the power of collectives (e.g., teams of people working together on solving a given problem). Human collectives are "super-additive" examples of systems where the whole (in terms of ability to carry out a task or solve a problem) is more than the sum of its constituents, and collectives can successfully complete activities that cannot be divided and assigned to single individuals.
■ Workflows: the workflow underpinning the computation carried out by an SCI system is, in most cases, a rather basic (and usually automated) aggregation of inputs provided by humans. This is in line with the little attention paid to the usage of SCI for fostering collective action.
■ Generality: all the solutions presented above have been built in an ad hoc fashion, in many cases around domain-specific features that make their designs poorly extensible and scalable. We still lack a principled approach to designing, managing, and controlling SCI systems.
A Computational Perspective

Let us now take a step back and look at SCI systems from a computational perspective. Since social computation is, at some level, still computation, we still have to worry about conventional computational properties when shifting to a social context. These properties, however, change in nature as we make the shift.
Correctness requires social measures: whether or not the algorithm gives the correct result is determined by the aggregated experience and capabilities of the people engaging in the exercise. Completeness requires social judgment: since social algorithms begin as incomplete problem statements and change via interaction with the population, there is not necessarily a specific point at which we can claim that an execution is complete. Scalability requires social engagement, which in turn requires (at least within a sufficiently constrained context) the existence of some self-reinforcing system of feedback, such that the incentive for an individual to engage with the algorithm increases as more individuals participate.

In purely algorithmic terms, the key challenge clearly arises from the human factor. Humans present a high level of diversity, are very sensitive to context, and express a multidimensional value system, which makes it extremely difficult to predict their behavior a priori. This is the key issue to tackle in order to overcome the limitations and challenges outlined above. So how can we "program" SCI? We believe the question should be reframed, moving from the idea of "programming" to the concept of "incentives": the focus shifts from obtaining deterministic or predictable behavior to designing and deploying appropriate strategies for obtaining "good enough" behavior. An incentive is, roughly speaking, something that motivates people to perform a given action. Incentive design is a well-established field in the social sciences, but a relatively new one in computing [13].
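To illustrate the self-reinforcing feedback loop mentioned above, consider a toy engagement model (all parameters hypothetical, not drawn from any real system): each individual participates when the incentive, which grows with current participation, exceeds a personal threshold. A feedback strong enough to cross the critical mass drives near-universal engagement; a weaker one lets participation collapse:

```python
import random

random.seed(7)
N = 1000
thresholds = [random.random() for _ in range(N)]  # personal engagement costs

def run(alpha: float, seed_users: int, steps: int = 15) -> int:
    """Engagement dynamics: an individual participates when the incentive,
    proportional to current participation, exceeds their threshold."""
    p = seed_users
    for _ in range(steps):
        incentive = alpha * p / N       # incentive grows with participation
        p = sum(1 for t in thresholds if t < incentive)
    return p

print(run(alpha=1.5, seed_users=50))  # strong feedback: nearly all N engage
print(run(alpha=0.8, seed_users=50))  # weak feedback: participation collapses
```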
Roughly speaking, incentives fall into two categories: monetary, where a financial reward is provided, and non-monetary, which leverage social dynamics (e.g., peer pressure, a sense of belonging or community), personal beliefs (activism), expected returns (shared goods), fairness, and so on. In addition, a general incentive framework should account for phenomena such as reflexivity [14] and performativity [15], which can deeply impact the dynamics of sociotechnical systems. Non-monetary incentives have been extensively studied within the field of behavioral economics; evidence has been found that they can play a major role in decision making on themes such as energy and climate change [30]. If we pursue the idea that understanding and designing incentives is the key to engineering SCI systems, we need a framework for exploring choices in the design space. We claim that game theory provides a reasonable framework for addressing this problem, and we now provide a short tour of some (non-conventional) game-theoretic models and tools to substantiate this claim.
Towards a Game Theory-Based Swiss Army Knife for SCI

Game theory is a broad field of scientific investigation which, generally speaking, models situations of interaction among agents in which the behavior of one agent affects the condition of the others. Hence, game theory can be conceived of as a mathematical tool capable of predicting and, to some extent, controlling the behavior of interacting agents. (While game theory started as a modeling/descriptive tool, in the past 15 years it has been extensively used for the engineering, control, and optimization of networked systems, in particular in telecommunications and computer science; we do not discuss the link between game theory and system engineering further here.) In 1947, the groundbreaking work of John von Neumann, the father of modern computers, and Oskar Morgenstern [16] laid the foundations of modern game theory. Research in game theory was then spurred by John Nash's celebrated work in 1950 [17], [18],
which proved the existence of an equilibrium (since then dubbed the "Nash equilibrium") in which no agent can be better off by unilaterally changing its strategy. In its early days, the pillars that mainstream game theory was built upon were the following: 1) static games: the interaction among agents is one-shot, i.e., not repeated over time; 2) rationality: agents have unbounded computational power and are able to perfectly balance the costs and benefits of their actions; 3) complete information: information on the agents' available strategies and payoffs is available to all agents; 4) description, not prescription: early game theory described the outcome of an interaction, but did not prescribe how to design the game so that the expected outcome complies with some agreeable properties. Nowadays, many debates about the effectiveness of game theory in modeling realistic interaction scenarios are still fueled by the misconception that limitations 1)–4) cannot be overcome. Actually, over the last few decades researchers have expanded the boundaries of applicability of game theory in several directions, including some that do the trick for purposeful SCI analysis and design.
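For concreteness, the Nash condition can be checked mechanically in any finite two-player game. A minimal sketch over a 2×2 coordination game (the payoff numbers are hypothetical):

```python
import itertools

# payoff[i][j] = (row player's payoff, column player's payoff)
# A two-strategy coordination game as an example.
payoff = [[(2, 2), (0, 0)],
          [(0, 0), (1, 1)]]

def is_nash(i: int, j: int) -> bool:
    """True if no player gains by unilaterally deviating from (i, j)."""
    row_ok = all(payoff[k][j][0] <= payoff[i][j][0] for k in range(2))
    col_ok = all(payoff[i][k][1] <= payoff[i][j][1] for k in range(2))
    return row_ok and col_ok

print([c for c in itertools.product(range(2), repeat=2) if is_nash(*c)])
# -> [(0, 0), (1, 1)]: both coordination outcomes are pure Nash equilibria
```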
Dynamics and Reputation

Most social interactions are not one-shot; they are repeated and span a certain period of time. In such situations, people usually react to the past behavior of the other agents, often in an imitative fashion: cooperative or antagonistic behavior by one agent frequently triggers similar reactions in the others. Furthermore, during the interaction process the agents build a reputation for themselves, based on their own past actions.
Dynamic game theory analyzes situations in which interactions among agents are repeated and take place in a possibly dynamically changing environment. Interestingly, it has been shown [19] that repeated interactions enforce cooperative behavior, primarily because agents can pose a retaliation threat in case other agents do not act in harmony with the society. The famous prisoner's dilemma (http://en.wikipedia.org/wiki/Prisoner's_dilemma) is an elucidating example of how such threats can lead to a Pareto improvement in the payoffs of all agents. There have also been successful attempts to incorporate formalizations of reputation systems into more standard game-theoretic models [20]; the aim is to study how agents can build a reputation for themselves, i.e., how an agent can instill in the others the belief that in the future it will behave consistently with its past.
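The role of retaliation threats can be seen in a minimal simulation of the repeated prisoner's dilemma. The payoff matrix uses the textbook values; tit-for-tat retaliates by copying the opponent's previous move (the code itself is our illustrative sketch):

```python
# one-shot prisoner's dilemma payoffs: (row payoff, column payoff)
# C = cooperate, D = defect; D strictly dominates C in the one-shot game
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):      # retaliate: copy their last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds=100):
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = s1(h2), s2(h1)           # each reacts to the other's history
        u, v = PAYOFF[(a, b)]
        p1, p2 = p1 + u, p2 + v
        h1.append(a)
        h2.append(b)
    return p1, p2

print(play(tit_for_tat, tit_for_tat))    # (300, 300): cooperation sustained
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains little
```

Mutual cooperation under the retaliation threat yields far higher payoffs for both players than exploiting a retaliator, which is exactly the Pareto improvement mentioned above.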
Evolutionary games are a peculiar form of dynamic games, initially conceived by Maynard Smith to explain the evolution of species in a biological context [21]. Basically, evolutionary games describe the interaction among agents adopting different behaviors, and study how the acceptance of each behavior evolves within a society. In the case of SCI, evolutionary games may be utilized, e.g., to assess the extent of acceptance of some software by a group of users in an environment where people are subject to peer pressure.
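As an illustration of this evolutionary-game view of software adoption under peer pressure, here is a minimal replicator-dynamics sketch (all payoff values are hypothetical): the fitness of adopting grows with the adopters' share, producing a critical-mass effect.

```python
# replicator dynamics for two behaviors: adopt or reject a piece of software;
# the fitness of adopting grows with the adopters' share x (peer pressure)
def step(x: float, dt: float = 0.1) -> float:
    f_adopt = 1.0 + 2.0 * x                  # hypothetical payoffs
    f_reject = 1.5
    f_mean = x * f_adopt + (1 - x) * f_reject
    return x + dt * x * (f_adopt - f_mean)   # replicator equation

for x0 in (0.4, 0.15):                       # above / below critical mass
    x = x0
    for _ in range(100):
        x = step(x)
    print(x0, "->", round(x, 3))
# starting at 0.4, adoption takes over (share approaches 1.0);
# starting at 0.15, below the critical mass, adoption dies out
```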
Bounded Rationality

As a matter of fact, humans are not completely rational. Even when our objective function is perfectly specified, we are very often not capable of devising the most profitable strategy for ourselves. Moreover, different individuals are not equally capable of analyzing a situation, even when equipped with the same amount of information. A clear example is the game of chess.
According to classical game theory with perfectly rational players [22], in chess both players possess a strategy leading to the "equilibrium" and guaranteeing each of them an outcome at least as good as that equilibrium. Nevertheless, chess is still a very popular game, since applying such a strategy would require formidable computational abilities of the players – and indeed of modern computers such as IBM's Deep Blue (see above). An interesting and challenging research thread in game theory deals with modeling competing agents with bounded rationality, especially in repeated games [23]. The (limited) complexity of the strategy implemented by a generic player is formalized via the notion of a finite automaton: the complexity of a strategy is defined as the minimum number of states an automaton requires in order to implement it. Over the years, researchers have tried to understand the extent of the actual benefit, for an individual, of having at their disposal more powerful machinery than the opponent [24].

Learning

Beyond having limited reasoning capabilities, players may also lack complete information on the opponents' behavior, as well as on the reward associated with each of their available actions. In this case, agents face a typical exploration-exploitation trade-off: it is not clear to what extent an agent should spend time trying out all the actions in order to refine the statistics of the associated rewards, or should instead promptly exploit the action that appears best up to that point. With the aid of learning models with expert advice, a family that includes multi-armed bandit problems, it has been shown [25] that even under such a limited information regime the agents' behavior can converge to an equilibrium by following a simple rule for updating the strategy played at each step.
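The exploration-exploitation trade-off can be made concrete with an ε-greedy rule on a three-action bandit, a simpler relative of the expert-advice models surveyed in [25] (the reward probabilities are hypothetical):

```python
import random

random.seed(0)
true_reward = [0.3, 0.5, 0.7]            # unknown success probability per action
counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]
eps = 0.1                                # exploration rate

for t in range(10_000):
    if random.random() < eps:            # explore: try a random action
        a = random.randrange(3)
    else:                                # exploit: best estimate so far
        a = max(range(3), key=lambda i: estimates[i])
    r = 1.0 if random.random() < true_reward[a] else 0.0
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]   # running-mean update

print([round(e, 2) for e in estimates], counts.index(max(counts)))
# -> estimates near [0.3, 0.5, 0.7]; action 2 is chosen most often
```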
Mechanism Design

Perhaps the most successful story in game theory is that of mechanism design [26], [27], which can be seen as a reverse-engineering application of much of the descriptive game theory developed over the last decades. In practice, mechanism design studies how to a) build the rules of the interaction among agents and b) allocate incentives to the agents, such that the resulting equilibrium satisfies properties that are "agreeable" from a global perspective. A certain, socially desirable behavior is said to be "incentive compatible" whenever it can be elicited from the agents via suitable incentives, often interpreted as monetary ones. Typically, in mechanism design, such behavior is considered properly elicited whenever it is a Nash equilibrium. Remarkably, a stronger form of mechanism design instead requires a refinement of the Nash equilibrium: the prescribed strategies must be dominant for all agents, i.e., optimal regardless of the opponents' behavior. Hence there is no apparent reason why an agent should deviate from such a strategy, which rules out any philosophical consideration of whether a Nash equilibrium will actually be played when agents have full information about the game. Prominent applications of mechanism design can be found in online advertisement auctions, the regulation of monopolies and oligopolies, the auctioning of radio frequencies to mobile phone companies, and the design of social welfare systems [28].

We posit that, put in the proper perspective, mechanism design can play a fundamental role in SCI systems, since it provides precious help in designing a) the rules of interaction between humans and machines and b) the incentives that make people feel enticed to adequately carry out their tasks. In its classic form, mechanism design theory studies situations in which the planner of rules and incentives has incomplete information about the agents in the system. In SCI systems, the incomplete information about the agents may concern, for example, their skills in carrying out a specific task, or their sensitivity to monetary rather than other kinds of incentives. Mechanism design theory is a sufficiently general framework to also allow the study of incentives that are not merely myopic or financial, even though monetary incentives are more easily modeled and embedded in a game-theoretic model. In SCI, incentives may have a different, long-term nature, and are commonly related to the reputation that individuals build for themselves. In current crowdsourcing systems, for example, tasks are assigned according to the ranking of the users; seriously engaging in carrying out a task successfully is thus clearly a long-term incentive for a participating agent to build a good reputation and, as a direct consequence, to increase its future revenue. On the other hand, designing incentives only for those agents might not be enough to build a
well-functioning and trustworthy reputation system. In fact, crowdsourcing SCI systems typically have a bipartite structure: parties offering a service need to be matched with agents willing to supply it. If there are multiple parties of the first type, they are typically in competition with one another, and it may not be in their best interest to provide reliable feedback that could give a competitive advantage to the other parties. Mechanism design theory has tackled this additional problem as well [29], by designing mechanisms that elicit trustworthy feedback from the parties offering a crowdsourcing service.
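As a concrete taste of incentive compatibility, consider the second-price (Vickrey) auction, a staple of mechanism design: truthful bidding is a dominant strategy. A minimal sketch (the valuation and bid numbers are hypothetical):

```python
def second_price(bids: dict[str, float]) -> tuple[str, float]:
    """The highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

def utility(my_value: float, my_bid: float, rival_bid: float) -> float:
    winner, price = second_price({"me": my_bid, "rival": rival_bid})
    return my_value - price if winner == "me" else 0.0

value, rival = 10.0, 7.0
for bid in [5.0, 10.0, 15.0]:        # under-bidding, truthful, over-bidding
    print(bid, utility(value, bid, rival))
# truthful bidding (10.0) earns utility 3.0; underbidding (5.0) loses the
# item for nothing, and overbidding can never do better (it only risks
# paying above one's own value), so truth-telling is dominant
```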
Use of Game Theory Tools Can Lead to Advances

While there is no definitive answer to the question of how to "program" a sociotechnical ensemble, or social collective intelligence system, we are convinced that this represents a highly relevant and challenging field for researchers and innovators from different disciplines. In this article, we have proposed game theory as a grounding for engineering such systems. We acknowledge that the (mechanistic) perspective of people as "utility maximizers" can still puzzle social scientists and give rise to controversy. But the approach is not to build a mathematical model of human beings; rather, it is to have some (abstract) model for understanding the impact of their decision making. We believe that using the full spectrum of game-theoretic tools can lead to advances (to be validated empirically, needless to say) in how to realize purposeful sociotechnical systems. A key and deeply challenging aspect for future research – not covered in this article due to space limitations – is the governance of such social collective intelligence systems, where ethical aspects should be considered a prerequisite for any engineering attempt [31], [32].
Author Information

Daniele Miorandi is lead scientist in the Smart ICT for Socio-Technical Systems (iNSPIRE) Area at CREATE-NET, Italy, and Executive VP for R&D at U-Hopper srl, Italy. E-mail: daniele.miorandi@create-net.org.

Lorenzo Maggi is a researcher in the iNSPIRE Area at CREATE-NET, Italy. E-mail: lorenzo.maggi@create-net.org.
Acknowledgment

The work reported here is partially supported by the EU FP7 projects Social-IST (317681), CONGAS (317672), and Smart Society (600854). D. Miorandi is grateful to Stuart Anderson for insightful and inspirational discussions.
References
[1] M. Castells, The Rise of the Network Society. Oxford, U.K.: Blackwell, 1996.
[2] C.W. Churchman, "Guest editorial: Wicked problems," Management Science, vol. 14, no. 4, Dec. 1967.
[3] M. Kearns, "Experiments in social computation," Commun. ACM, vol. 55, no. 10, pp. 56–67, 2012.
[4] C. Shirky, Cognitive Surplus: Creativity and Generosity in a Connected Age. London, U.K.: Penguin, 2010.
[5] J. Lewis and A. West, "'Friending': London-based undergraduates' experience of Facebook," New Media & Society, vol. 11, no. 7, pp. 1209–1229, 2009.
[6] J.L. Bevan, J. Pfyl, and B. Barclay, "Negative emotional and cognitive responses to being unfriended on Facebook: An exploratory study," Computers in Human Behavior, vol. 28, no. 4, pp. 1458–1464, 2012.
[7] J. Surowiecki, The Wisdom of Crowds. New York, NY: Random House, 2005.
[8] L. von Ahn, "Human computation," presented at the 46th ACM/IEEE Design Automation Conf. (DAC'09), 2009.
[9] L. von Ahn et al., "reCAPTCHA: Human-based character recognition via web security measures," Science, vol. 321, no. 5895, pp. 1465–1468, 2008.
[10] D. Ferrucci, "Build Watson: An overview of DeepQA for the Jeopardy! challenge," in Proc. ACM PACT '10, New York, NY, 2010, pp. 1–2.
[11] S. Cooper et al., "Predicting protein structures with a multiplayer online game," Nature, vol. 466, no. 7307, pp. 756–760, 2010.
[12] J.C. Tang et al., "Reflecting on the DARPA Red Balloon Challenge," Commun. ACM, vol. 54, no. 4, pp. 78–85, Apr. 2011.
[13] O. Scekic, H. Truong, and S. Dustdar, "Incentives and rewarding in social computing," Commun. ACM, vol. 56, no. 6, pp. 72–82, 2013.
[14] R.A.W. Rhodes, Understanding Governance: Policy Networks, Governance, Reflexivity and Accountability. Buckingham, U.K.: Open Univ. Press, 1997.
[15] D. MacKenzie, F. Muniesa, and L. Siu, Eds., Do Economists Make Markets? On the Performativity of Economics. Princeton, NJ: Princeton Univ. Press, 2010.
[16] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior. Princeton, NJ: Princeton Univ. Press, 1947.
[17] J.F. Nash, "Non-cooperative games," Annals of Mathematics, vol. 54, no. 2, pp. 286–295, 1951.
[18] J.F. Nash, "Equilibrium points in n-person games," Proc. National Acad. Sciences, vol. 36, no. 1, pp. 48–49, 1950.
[19] R.B. Myerson, Game Theory: Analysis of Conflict. Cambridge, MA: Harvard Univ. Press, 1991.
[20] G.J. Mailath and L. Samuelson, Repeated Games and Reputations: Long-Run Relationships. Oxford, U.K.: Oxford Univ. Press, 2006.
[21] J.W. Weibull, Evolutionary Game Theory. Cambridge, MA: M.I.T. Press, 1997.
[22] E. Zermelo, "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels," in Proc. Fifth Int. Congress of Mathematicians, vol. 2. Cambridge, U.K.: Cambridge Univ. Press, 1913.
[23] A. Rubinstein, Modeling Bounded Rationality. Cambridge, MA: M.I.T. Press, 1998.
[24] E. Ben-Porath, "Repeated games with finite automata," J. Economic Theory, vol. 59, no. 1, pp. 17–32, 1993.
[25] N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games. Cambridge, U.K.: Cambridge Univ. Press, 2006.
[26] Y. Narahari et al., Game Theoretic Problems in Network Economics and Mechanism Design Solutions. Berlin, Germany: Springer, 2009.
[27] N. Nisan, T. Roughgarden, E. Tardos, and V.V. Vazirani, Eds., Algorithmic Game Theory. Cambridge, U.K.: Cambridge Univ. Press, 2007.
[28] Prize Committee of the Royal Swedish Academy of Sciences, "Mechanism design theory," scientific background on the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, 2007.
[29] M. Heitz, S. König, and T. Eymann, "Reputation in multi-agent systems and the incentives to provide feedback," in Multiagent System Technologies. Berlin/Heidelberg, Germany: Springer, 2010, pp. 40–51.
[30] M.G. Pollitt and I. Shaorshadze, "The role of behavioural economics in energy and climate policy," EPRG Working Paper 1130 / Cambridge Working Paper in Economics 1165, 2011.
[31] M. Hartswood, B. Grimpe, M. Jirotka, and S. Anderson, "Towards the ethical governance of smart society," in Social Collective Intelligence: Combining the Powers of Humans and Machines to Build a Smarter Society. Springer, to be published.
[32] J. Pitt and A. Diaconescu, "The algorithmic governance of common-pool resources," in From Bitcoin to Burning Man and Beyond: The Quest for Identity and Autonomy in a Digital Society, J. Clippinger and D. Bollier, Eds. ID3/Off the Common Books, 2014, pp. 119–128.