Social Aspects of Joint Actions: from analysis to design of social actions

Virginia Dignum¹, Frank Dignum², and Catholijn M. Jonker³

¹ Delft University of Technology, The Netherlands, M.V.Dignum@tudelft.nl
² Utrecht University
³ Delft University of Technology
Abstract. Social interaction has essential effects on social reality, which can even outweigh the importance of the physical effects, but which are neither directly nor objectively observable. Understanding the social contexts in which actions and interactions take place is thus of utmost importance for planning one's goals and activities. We claim that joint action should be considered from the social and the physical perspective jointly. These aspects are interdependent and influence each other continuously. In order to support the inclusion of social aspects into state descriptions for joint actions, we propose a methodological approach for the social analysis of joint action that enables the identification and representation of the social characteristics of a joint-action setting. Through the use of social practices it is possible to combine the physical and social aspects of joint actions. We show how these social practices can be used to design agents and robots that take into account both the social and the physical context and goals.
1 Introduction
Coordination is essential for the success of joint action. This success is not just a consequence of the interleaving of physical actions by the participants, but depends to a large extent on common knowledge amongst the actors that allows them to anticipate each other's behaviour, thus increasing the effectiveness of the joint activity. Moreover, social interaction has essential effects on social reality, which can even outweigh the importance of the physical effects, but which are neither directly nor objectively observable. This is an essential aspect of joint action when designing human-robot joint action. Understanding the social contexts in which actions and interactions take place is of utmost importance for planning one's goals and activities. Whereas people are pre-eminently able to understand context, robots and other computer systems are notorious for their inability to do so in general.

Joint action in human settings has been a topic of intense research in the social sciences, cognitive psychology and philosophy (see e.g. [6, 14, 19, 15]). In these areas the emphasis is often on the exact nature of what constitutes a joint action. E.g. are the passengers of a bus participating in a joint action? Are the drivers of the cars in a traffic jam performing a joint action? All of them have at least a partly
shared goal related to the activity they all perform at the same time. Although these are important and interesting questions, in this paper we will avoid this discussion and assume it is clear that the robot and human are performing a joint action. There is also a substantial amount of work on the mental conditions that should hold for performing a joint action. This relates to having joint intentions, joint goals, common beliefs, etc. (see e.g. [1, 7]). The main issue here is that if too many joint mental attitudes are required there will never be joint actions, while if the requirements are slackened, behaviour might result that would never be expected in joint actions. Again, in this paper we will not seek to answer these questions, but rather assume some shared beliefs and attitudes when necessary and thus create a very pragmatic framework.

In the area of Multi-Agent Systems (MAS) the formal aspects of joint actions, coordination, cooperation and task dependencies have been extensively studied (e.g. [18, 10]), but mainly from the point of view of objectively observable effects. In this paper we will complement this work by also taking into consideration the social aspects that are not directly observable. We claim that joint action should be considered from the social and the physical perspective jointly. These aspects are interdependent and influence each other continuously. In order to support the inclusion of social aspects into state descriptions for joint actions, we propose a methodological approach for the social analysis of joint action that enables the identification and representation of the social characteristics of a joint-action setting. The aim of this social analysis is to capture the essential elements from the social context that are needed to model the joint action. For the case of joint action between people, this has been extensively researched in Sociology and Psychology [11]: "Just as the picking up of information about our environment is fundamental to the performance of action [...], so too, we argue, is an individual in a social context fundamentally constrained by the picking up of information about others' incidental movements and intentional actions. Recognizing that our movements and actions in the world are as constrained by others as, within our body, implies (a) a universality of dynamical principles that unify components in a system - they are true for social as well as for non-social action, (b) that linkages are not simply mechanical, but can be informational - that what we take in with a look (or other modality) can affect our behaviour as strongly as a mechanical force, and (c) how others 'moor' us in space and time defines the frames of reference for our past, present, and future behaviour."

In order to illustrate our proposal, we take as working example a blockworld scenario where a person and a robot have to jointly build a tower of four cubes with a triangle on top. Both have two cubes and a triangle to contribute. Even though this description is given purely in physical terms, in the real world joint action is not just about the functional end result (in this case, the completed tower), but also about what that result means to the participants, i.e. the social and emotional results of the action. That is, in order to make joint action possible, one needs to describe not only the physical context as in the example, but also the social, mental and behavioural conditions under which joint action emerges
for socially embedded individuals, and what kinds of patterns of behaviour are possible [11]. In the case of the example, consider that the description would contain something such as: both the person and the robot should feel satisfied (or happy) after the tower has been built. Or: the robot and person have never met before and thus do not know what the other is capable of or wants to contribute to the joint action. These additions to the state descriptions will probably change what we see as acceptable ways to perform the joint action. These aspects become even more prominent when the actors each have four cubes and could thus, in principle, build the whole tower alone. Or if they have only three cubes together and will never manage to build the tower according to the specification. What would we expect the actors to do in these cases? The answer will depend on the social context and the assumptions we make about the actors and their relation.

The paper is organised as follows. In the next section, we look at the social analysis of joint actions and discuss the essential elements that should be taken into account. In Section 3, we present a preliminary method for social analysis of a situation, by providing a check-list of issues to consider. In Section 4, we show how the concept of social practices can be used to combine the social and physical parts of a situation into concrete plans of action and mutual expectations about the joint action. In Section 5 we discuss how this can be used for the design and deliberation of robots that interact with humans. We finish the paper with some preliminary conclusions and future work to explore this area.
2 Social Analysis of Joint Actions
In sociology, social analysis is the systematic exploration of social issues related to the general quality of life, social services and social justice of a society. We define social analysis of joint action as the evaluation of issues related to the quality and results of joint action. It includes the identification of the social characteristics, requirements and expected results of an interaction, as well as the social meaning of the physical elements of that interaction. In this section, we look at aspects of social analysis as they apply to joint action. That is, we give arguments why some aspects are relevant for joint action between humans and thus might also influence joint actions between humans and robots. In order to focus the discussion and give a concrete example of a joint action, we use the blockworld scenario described in the Introduction. It is clear that this description is not sufficient to guarantee that the joint interaction between human and robot will succeed. For instance, there is no mention of the social relations and status between the two actors, past experiences, assumed and actual capabilities, the mental images each may have of the situation, etc. The example also does not consider the consequences of success or failure. What is at stake? How can failure be remedied if needed?
A social analysis of the situation looks at three different aspects of the social situation: the (general) social context, the personal social relations between the actors, and the social interpretation of the physical situation.

Firstly, we explore the social context in which the scenario takes place, identifying the relationships between the human and the robot and also the wider social setting in which the joint action takes place. E.g. does it take place at school, at home, in a research lab, at a demonstration of robots, etc.? This wider setting gives a common background and expectations about the joint action. Is it something they rarely perform together, so that the actors have little experience and knowledge of what to expect from each other? Or is it an action performed as a demonstration and well rehearsed? Within each setting, actors take up social roles. E.g. in the setting of a research lab it may be natural for the human to tell the robot what to do at each step, whereas in a school we might expect the robot to take the initiative, teaching a child to build a tower. As can be seen from the example, the most important aspect of the social context is the set of social roles and their (contextual) definition. Social roles describe aspects of a setting that lead to standard expectations for most people involved. A teacher has as objective to teach students some skills or knowledge. The teacher also follows norms such as being patient when mistakes are made and being supportive in the learning (answering questions, giving examples, etc.).

Next, the personal social relations should be analysed. These include the expectations of each actor concerning the result of the joint action. E.g. it may be important that all actors are happy at the end of the interaction. This is particularly important in case the joint action is part of a longer interaction or relation. So, we need to know about the nature of and expectations about the personal relation between the actors. Do they know each other? What is their joint experience? Did they perform this type of action together already? Do they expect to build up (or continue) a social relation? The latter leads to a willingness to "invest" in a joint action with the expectation that it might lead to benefits later. If there already is a common history, the relative social status is taken into account to determine how much to "invest" in the current joint action.

Finally, we need to analyse the (possible) social interpretation of the physical situation, actions, objects and actors. In the example scenario there might not be many such interpretations, because it is relatively simple, but we will give a few examples of each aspect.

Physical situation. The initial position of the blocks on the table suggests that both the human and the robot "own" two cubes and a triangle and thus can decide what to do with them. Another example is the time given for the task, or the time of day at which it takes place, which can strongly influence the interaction. E.g. if little time is available, in general the more experienced actor will take more initiative in order to finish in time. Also, the relative position and size of the actors and objects is relevant for the social situation. E.g.
imagine a huge robot and a small table with tiny blocks, or a situation in which the person is too far from the table to be able to handle the blocks.

Actions. The type and order of actions also have a social connotation. E.g. "taking turns" can be seen as cooperative behaviour. Or, being the one putting the last block in place can be interpreted as asserting social status, i.e. the one finishing the job is the most important. Attaching social interpretations to the actions allows reasoning about the social effects of the actions that are performed in the joint action.

Objects. The physical attributes of the objects and the (expected) capabilities of the actors can lead to specific expectations about the joint action. E.g. if the robot is small and can hardly reach the top of the tower, it seems logical to expect the human to finish it (if the human is taller). Similar things can be stated about the size, weight, colour, etc. of the cubes. If the robot cannot distinguish the colours while the cubes should be stacked based on colour, it is expected that the human should assist. Note that the actors are not always aware of the capabilities (or limitations) of the other actors, and thus the joint action should include the possibility to exchange information about these aspects.

In the next section we describe a first version of a check-list of social aspects that should always be considered when performing a social analysis and that can be used to design the robot for particular types of joint actions.
3 Check-list for social analysis
Based on the discussion in the previous section we now give a more concrete check-list for the social analysis. The use of such a check-list is twofold: it should be considered at the start of the joint action and also during the joint action. At the start of the joint action it gives insight into the aspects that should be considered when a joint action is performed. Not all aspects will be relevant in every joint action. The more limited the interaction possibilities, the less relevant the social aspects are. E.g. in an English auction most aspects are pre-determined and can be taken for granted by the actors; thus they hardly influence the way the interaction is performed or its outcome. When the relevant aspects are determined, it should be established whether they are known to the participants. If aspects are unknown, the actor can assume default values for them or make it a point to acquire the information before or during the joint action. E.g. if the human does not know whether the robot is capable of reaching the top of the tower, the human can assume it can (because the robot would otherwise not start the joint action) or can ask the robot whether it thinks it can reach that high. During the joint action the check-list is used again whenever unexpected events occur that necessitate adjustment of the joint action (plan). In this case it functions as background knowledge that guides repair actions (often involving information exchange about expectations). In order to give some structure we divide the social characteristics into three parts: the situation, the actors and the expected/desired results.
1. Situation
   (a) Physical characteristics that impact the social situation
   (b) History (past encounters): have the robot and the person interacted before? How was that experience? What are the safety measures in the environment?
   (c) Resources: do all actors have access to all resources? Who owns which resources?
   (d) Understanding of the social situation in which the action is to take place
       i. Individual goals & motives of the actors: do the actors have their own goals for the joint action? What are their motives for those goals?
       ii. Joint goals & motives
       iii. Shared awareness: are the actors aware of each other's goals and motives? Can they reason about each other?
2. Actors
   (a) Capabilities: can each of the actors take all the necessary actions or are they dependent on each other?
   (b) Physical appearance (size, shape, ...): relative size of robot, blocks and human (huge robot, small human, small blocks, vs. tiny robot and blocks)
   (c) Emotional state: does the human feel intimidated, dominating, superior, etc. towards the robot?
   (d) Trust: do the actors have grounds to trust or mistrust each other?
   (e) Social network, relationships: are there relationships between the actors? Is there a team of several actors or just a pair? What is the human-robot balance in the team?
   (f) Social status (power, authority, ...)
3. Results
   (a) Consequences of success (for each individual and for the group)
   (b) Consequences of failure (for each individual and for the group)
   (c) Risk assessment measures
   (d) Risk mitigation (alternative plans)

This check-list is based on the well-accepted idea that the success of a joint action depends on the actors performing it, the physical possibilities to perform the joint action, and whether the goal of the joint action is achieved for all actors. However, the check-list now includes the social aspects that are involved in each of these elements of the joint action. It is clear that including the social aspects considerably lengthens the check-list, which shows the importance of the social aspects for the success of joint actions. We do not claim that the above check-list is complete, but in our experience the listed aspects always play a role in joint actions and thus should be part of any minimal check-list. In the next section, we provide a framework to represent joint actions in a way that combines physical, personal and social aspects, using the theory of social practices. The results of this check-list are used as input for the instantiation of joint actions as social practices; a possible data representation of these results is sketched below.
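To make the check-list concrete for a designer, its outcome can be captured as a simple data structure that the robot (or its designer) fills in before the joint action and revisits during it. The following Python sketch is only an illustration of this idea; all class and field names are our own choice and not part of any existing framework.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Situation:
    physical_characteristics: List[str] = field(default_factory=list)
    history: Optional[str] = None                       # past encounters, if known
    resources: Dict[str, str] = field(default_factory=dict)   # resource -> owner
    individual_goals: Dict[str, List[str]] = field(default_factory=dict)
    joint_goals: List[str] = field(default_factory=list)
    shared_awareness: bool = False

@dataclass
class Actor:
    name: str
    capabilities: List[str] = field(default_factory=list)
    appearance: Optional[str] = None                    # e.g. relative size
    emotional_state: Optional[str] = None
    trust: Dict[str, float] = field(default_factory=dict)     # trust in other actors
    relationships: List[str] = field(default_factory=list)
    social_status: Optional[str] = None

@dataclass
class ExpectedResults:
    consequences_of_success: List[str] = field(default_factory=list)
    consequences_of_failure: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

@dataclass
class SocialAnalysis:
    situation: Situation
    actors: List[Actor]
    results: ExpectedResults

    def unknown_aspects(self) -> List[str]:
        """Aspects still unknown; for these the robot assumes defaults or asks."""
        unknown = []
        if self.situation.history is None:
            unknown.append("history")
        for actor in self.actors:
            if not actor.capabilities:
                unknown.append(f"capabilities of {actor.name}")
        return unknown

The unknown_aspects method corresponds to the point made above that, for aspects that are not known, the actor either assumes default values or gathers the information before or during the joint action.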
4 Social Practices
Social practice theory seeks to determine the link between practice and context within social situations [17]. Social practices refer to everyday practices and the way these are typically and habitually performed in (much of) a society. Such practices as "going to work", "meeting", or "greeting" are routinely performed and integrate different types of elements, such as bodily and mental activities, material artefacts, knowledge, emotions, skills, and so on [13]. In fact, social practices can be seen as recurrent joint actions performed for shared social reasons [19]. Social practices are similar for groups of individuals at different points of time and location. As such, they can be seen as ways to act in context: once a suitable practice is identified, people will use it as a 'short cut' to determine an action, which does not require elaborate reasoning about the plan to follow. However, social practices are not mere scripts in the sense of [12]. They support, rather than restrict, deliberation about behaviour. E.g. the social practice of "going to work" incorporates the means of transport that can be used, timing constraints, weather and traffic conditions, etc. Normally you may take the car to work, but if the weather is exceptionally bad you can deliberate about a new plan for this situation and take a bus or train (or even stay home). That is, different situations give rise to other ways of enacting a social practice. Although we will not go into the theory of how social practices arise and are maintained (see [16] for more on this aspect), a very important aspect is that they are a kind of shared way of acting in context. Thus people also use a social practice to form expectations about the roles and actions of the other participants in the social practice. E.g. when a person extends her hand to greet another person, she will expect that the other person will grasp the hand and that they will start shaking hands. Thus, by their very nature, social practices are used to standardize interactions and provide a context in which expectations are set and social interpretations are given for the actions taking place.

Based on these ideas, we developed a model to represent social practices that can be used in social deliberation by intelligent systems [3]. The components of this representation model are as follows:

– Purpose refers to the expected physical and social results of the practice.
– Physical Context describes elements from the physical environment that can be sensed:
  • Resources are objects that play a role in the practice, such as the cubes and triangles in the scenario.
  • Places indicate where all objects and actors are located relative to each other, in space or time.
  • Actors are all people and autonomous systems involved that have the capability to reason and (inter)act.
– Activities indicate the normal activities that are expected within the practice. Not all activities need to be performed! They are meant as potential courses of action.
– Social Context contains:
  • Social Interpretation determines the social context in which the practice is used.
  • Roles describe the competencies and expectations about a certain type of actors.
  • Norms describe the rules of (expected) behaviour within the practice.
– Plan Patterns describe usual patterns of actions, defined by the landmarks that are expected to occur.
– Meaning refers to the social meaning of the activities that are (or can be) performed in the practice. Thus it indicates the social effects of actions.
– Competences indicate the type of capabilities the agent should have to perform the activities within this practice. They include both physical and mental capabilities.

Although we included the purpose component in the social practice, this does not mean that the reason for performing the social practice is that purpose. It could be that the reason for the joint action of stacking blocks together is that the robot and human have a good time or explore each other's capabilities. The physical result might thus be accidental to the activity. The reverse might also be true, and the physical result might be the only important goal, which happens to be satisfied (easily) through this social practice. Thus the purpose of a social practice should be seen more as a potential result that could match a goal of the participants of the social practice. However, in the more extreme case the participants perform the social practice as a goal in itself. As we can see in Figure 1, the meaning of the social practice of stacking blocks by instruction can be the fulfilment of the power motive of the person. Thus the social practice can serve as an activity performed for a motive, while the actual results of the social practice do not play a role in initiating it. The latter is closer to the theory of social practices from social science, where it is often assumed that the reason to perform a social practice is exactly the performance of it. However, this would assume that actors are mindlessly performing social practices and have no goals of their own, which seems contradictory to our experience. Thus we take a less extreme stance in which we assume that social practices can be used to (partially) fulfil goals of the actors, but can also be initiated based on their meaning and thus be performed directly for the satisfaction of their performance ("without thinking about them").

Figure 1 shows how the above framework can be used to describe the social and physical aspects of a joint action to stack blocks together. The two columns show different choices of social practices that could be employed to stack blocks together. The components in the table are filled in based on the results of the social analysis described in Section 3. As said before, in Sociology social practice theory is mostly used to inquire into how social activity occurs and to identify its main causes and outcomes. Our recent work [3] proposes to use social practices also as a model for action in (social) context. Thus we try to combine the goal-directed deliberation of agents with a standard/normative performance of social practices.
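Before instantiating these components for the block scenario in Figure 1, we indicate how they could be represented computationally. The Python sketch below mirrors the representation model of [3], but the concrete structure and field names are our own simplification, not a normative implementation.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PhysicalContext:
    resources: List[str]          # objects that play a role, e.g. cubes, triangles
    places: Dict[str, str]        # relative locations of objects and actors
    actors: List[str]             # people and autonomous systems involved

@dataclass
class SocialContext:
    social_interpretation: List[str]   # e.g. "turn-taking counts as cooperative"
    roles: Dict[str, str]              # role name -> competencies and expectations
    norms: List[str]                   # expected behaviour within the practice

@dataclass
class SocialPractice:
    purpose: Dict[str, str]            # expected physical and social results
    physical_context: PhysicalContext
    activities: List[str]              # potential, not mandatory, courses of action
    social_context: SocialContext
    plan_patterns: List[str]           # landmark-based patterns of action
    meaning: List[str]                 # social effects of the activities
    competences: Dict[str, List[str]]  # capabilities required per actor or role

    def matches(self, observed: PhysicalContext) -> bool:
        """Very rough applicability test: required resources and actors present."""
        return (set(self.physical_context.resources) <= set(observed.resources)
                and set(self.physical_context.actors) <= set(observed.actors))

The matches method is only a placeholder for the richer matching of social and physical parameters discussed later in this section.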
Social practice: Block stacking by instruction | Block stacking by teamwork

Purpose
  Physical (both): stack of 4 cubes with pyramid on top, placed on top of the table
  Social (both): robot and person satisfied

Physical Context
  Resources (both): blocks: cubes and pyramids; table
  Places (both): blocks are initially on the table; person and robot at opposite ends of the table
  Actors (both): person, robot

Activities (both): pick up blocks, stack blocks, communicate

Social Context
  Social interpretation
    By instruction: person feels superior to robot; follow-orders as cooperative behaviour
    By teamwork: person is not intimidated by robot; turn-taking as cooperative behaviour
  Roles
    By instruction: person is the leader, robot the subordinate
    By teamwork: teammates
  Norms
    By instruction: person decides order of blocks; robot follows person's orders; in case of problems, person is responsible for alternative plan
    By teamwork: communication expected to determine who starts; in case of problems, each can suggest action or alternative

Plan patterns
  By instruction: person decides stacking plan; person requests robot to place block; if robot unable, then person will stack blocks; if block falls, person decides on alternative
  By teamwork: person and robot agree on who starts; taking turns, each stacks a block; if block falls, they will agree on alternative; the one appointed takes the alternative action

Meaning
  By instruction: fulfilment of power motive for person
  By teamwork: cooperative behaviour

Competences
  Physical (both): robot has limited range, cannot reach tower top; person can reach top of tower; person can pick blocks from the ground
  Choice
    By instruction: robot follows all orders from person; person knows how to make building plan
    By teamwork: person prefers to use robot's blocks first; person and robot know building plan

Fig. 1. Joint action as Social Practice: different interaction styles
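Using the classes from the earlier sketch, the 'Block stacking by teamwork' column of Figure 1 could be instantiated roughly as follows; this is again only an illustration of how the results of the social analysis feed the practice description.

teamwork_stacking = SocialPractice(
    purpose={"physical": "stack of 4 cubes with pyramid on top, on the table",
             "social": "robot and person satisfied"},
    physical_context=PhysicalContext(
        resources=["cubes", "pyramid", "table"],
        places={"blocks": "initially on the table",
                "person": "one end of the table",
                "robot": "opposite end of the table"},
        actors=["person", "robot"]),
    activities=["pick up block", "stack block", "communicate"],
    social_context=SocialContext(
        social_interpretation=["person is not intimidated by robot",
                               "turn-taking counts as cooperative behaviour"],
        roles={"person": "teammate", "robot": "teammate"},
        norms=["communicate to determine who starts",
               "in case of problems, each can suggest an alternative"]),
    plan_patterns=["agree on who starts",
                   "take turns stacking a block",
                   "if a block falls, agree on an alternative"],
    meaning=["cooperative behaviour"],
    competences={"robot": ["limited range, cannot reach tower top",
                           "knows building plan"],
                 "person": ["can reach top of tower",
                            "can pick blocks from the ground",
                            "prefers to use robot's blocks first",
                            "knows building plan"]})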
In order to make this combination, we have to define the connection and use of social practices with the deliberation of agents. The general process for joint action using social practices would include the following steps (a sketch of this loop is given after the list):

– Perform a social analysis of the context
– Perform a physical analysis of the context
– Match a social practice that fits both the social and physical parameters
– Within this social practice, determine a course of action
– Monitor the environment for the expected interactions during the joint action
– Adjust to any deviations based on the information of the social practice
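A designer could organise these steps as a simple control loop. The following fragment is only indicative; all callables passed in (analyse_social, analyse_physical, plan_within, and so on) are hypothetical hooks to be supplied by the designer, not functions defined by our framework.

def run_joint_action(context, practices, analyse_social, analyse_physical,
                     plan_within, execute_and_observe, deviates, repair):
    """One pass through the joint-action process using social practices."""
    social = analyse_social(context)          # step 1: social analysis
    physical = analyse_physical(context)      # step 2: physical analysis

    # step 3: match a practice that fits both the social and physical parameters
    candidates = [p for p in practices if p.matches(physical)]
    if not candidates:
        return None                           # no known practice applies
    practice = candidates[0]

    plan = plan_within(practice, social, physical)   # step 4: course of action
    for step in plan:                                # steps 5-6: monitor and adjust
        observation = execute_and_observe(step)
        if deviates(observation, practice.plan_patterns):
            plan = repair(practice, observation, plan)
    return plan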
For the purpose of human-robot interaction, the above can be used in two ways. In the ideal case, robots would be equipped to perform all the above steps autonomously. In order to get to that point they would have to use a cognitive architecture that we have coined the 'social agent' in [5]. Although we cannot assume current robots to perform according to this vision yet, in the next section we discuss some first steps towards designing a robot that uses social practices to perform joint actions. However, the above process can also be used by the designer of
the robot to guide the building of the robot's capabilities. Due to their high level of abstraction in describing activities, social practices can facilitate the specification of flexible behaviours, e.g. by describing 'counts-as' relations between low-level robot functions and high-level social behaviours. For example, programming a robot to wait for instructions before picking up a block is a way to represent the social behaviour of a 'subordinate' as described in the column 'Block stacking by instruction' in Figure 1. Thus the social practice description leads to the specification of specific capabilities for the robot so that it can fulfil its role in the social practice.

Another important aspect that is directly supported by the social practice description is the analysis of three important factors in human-robot interaction: Observability, Predictability and Directability (OPD) [8]. That is, we need to know what is observable by the participants, which actions of the participants are predictable, and which actions can be delegated to each participant and how this can be done. Observability analysis is supported by the description of the physical context in the social practice. From this description one might deduce which objects and actions are or will be visible to the participants, and possible remedies to keep crucial actions observable for the relevant actors. Predictability analysis is supported by several elements of the social practice. First of all, the practice indicates the activities and plan patterns that determine it; these, of course, directly lead to expectations of behaviour. Besides these elements, the roles also lead to expectations of responsibilities, rights and norms for the actors that fulfil those roles. Directability analysis is supported by the roles, which indicate which actor is taking the lead at each point, and by the plan patterns, which indicate which dependencies exist between the actors in the social practice and thus at which points one actor should be able to direct the other. One can also check whether the actors have the capabilities to realize this directability. E.g. if a command should be given, there should be a common language in which the command can be given, and actuators and sensors to send and receive the command (in time).

Finally, the description of social practices for the joint action supports the interdependence analysis of the joint action. Awareness of the interdependence relationships between the actors is important for designing robots, because the activities performed by one actor may influence the activities of the others. To analyse the interdependences in a joint activity, [8] presents a tool called the Interdependence Analysis (IA) table, which uses a colour scheme to help identify the interdependence relationships and describe how one agent supports the other in teamwork. We suggest using a tool such as the IA table to refine the activities and identify the interdependency relations. The designer of the joint activity and/or robot can take into account the known limitations and capabilities of the robot, and design communication and interaction protocols to support teamwork.
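Returning to the 'counts-as' relations mentioned above, a designer could make them explicit as a table that maps the social behaviour required by a role to the low-level robot function that realises it. The sketch below is purely illustrative; the behaviour and function names are invented for the block-stacking example and do not correspond to any real robot API.

# Hypothetical low-level robot functions (placeholders for real controller calls).
def wait_for_instruction():
    print("robot: waiting for the person's request")

def monitor_stack_and_place():
    print("robot: watching the stack and placing a block when it is my turn")

# 'Counts-as' table: which low-level function realises which social behaviour,
# depending on the role the robot plays in the social practice.
COUNTS_AS = {
    ("subordinate", "cooperative behaviour"): wait_for_instruction,
    ("teammate", "cooperative behaviour"): monitor_stack_and_place,
}

def enact(role: str, social_behaviour: str) -> None:
    """Run the low-level function that counts as the requested social behaviour."""
    COUNTS_AS[(role, social_behaviour)]()

# Example: in 'Block stacking by instruction' the robot is the subordinate,
# so waiting for instructions counts as cooperative behaviour.
enact("subordinate", "cooperative behaviour")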
Fig. 2. Social reasoning process
Depending on the circumstances, the designer can also equip the robot with hardware (sensors, communication devices, grippers, etc.) to support the joint activity and the teamwork needed for that activity. For example, the turn-taking capability requires some way for the robot to know when it has to put a block on the stack. Such a capability can be autonomous, e.g. achieved by increasing the sensory capabilities, or it can depend on the other actor. In the latter case, the robot and the other actor need communication capabilities and a language to exchange the necessary information for turn taking.
5 Social Practice Directed Behaviour
In order for the robot to be able to use social practices in its deliberation for joint actions, it needs an architecture that allows both internal motivations (or goals) to drive behaviour and external situations and practices to suggest behaviour. In [3], we proposed a deliberation architecture for agents based on social practices, depicted in Figure 2. In this architecture, plan generation is driven by the beliefs and goals as well as by the applicable social practices. When the interpretation of the sensed environment in terms of the existing social practices results in only one possible action, the agent will perform that action directly. However, in cases where there are several possible courses of action, the agent will take into account its motives in order to determine possible goals that are applicable to the sensed environment, and generate a plan accordingly. This distinction resembles the fast and slow reasoning tracks described by Kahneman [9]. However, due to the context of the social practice, the robot can limit the set of applicable rules and beliefs, and can also use specialized rules for planning (patterns) that are (only) applicable within the social practice and the social identity it has when playing a certain role within that social practice.
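In outline, the plan-generation step thus branches on how constraining the active social practice is. The fragment below is only a sketch of that branching; generate_plan and the goal_directed_planner hook are our own illustrative names, not part of the architecture in [3].

def generate_plan(practice, beliefs, goals, motives, possible_actions,
                  goal_directed_planner):
    """Sketch of the fast/slow branching in plan generation.

    possible_actions are the actions left open by the active social practice
    in the sensed context; goal_directed_planner is a hypothetical hook for
    the agent's deliberation component (e.g. a BDI-style planner).
    """
    if len(possible_actions) == 1:
        # 'Fast' track: the practice and context leave only one sensible action.
        return [possible_actions[0]]
    # 'Slow' track: deliberate, but only over the goals, rules and plan
    # patterns that are applicable within the social practice.
    applicable_goals = [g for g in goals if g in practice.purpose.values()] or motives
    return goal_directed_planner(applicable_goals, beliefs, practice.plan_patterns)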
The deliberation component itself can be based on BDI or extensions thereof, such as the FAtiMA [2] or BRIDGE [4] deliberation models containing emotions, goals, intentions, beliefs, roles, identity, etc.

This architecture assumes that sensing is not a passive but an active process. Active social practices direct the agent's sensing to find objects, actors and events that are expected within that social practice. This leads to reactive but focused sensing based on the current situation. The drive to search for patterns (and thus the sensing process) can be steered by the agent's identities. E.g., a robot with a Follower identity will wait patiently for requests by the person, while a robot with a TeamPlayer identity will actively check for changes of the stack of blocks in order to know whether it can already put another block. I.e. the parallel tracks of pro-active and reactive behaviour already start with the sensing behaviour. The architecture also shows that there are feedback loops between the evaluation of actions and both the social practices and the robot's mental attitudes. When a social practice was very successful, its use is reinforced and its salience in similar circumstances will be higher. If the social practice did not work very well, this can lead to an adjustment of the social practice or to giving it a lower priority. In a similar way the identity of the robot and some of its mental attitudes will be reinforced.

For the use of social practices it is important for the robot to determine its role in the social practice. This should be matched with the social identity of the robot and with the possible roles in the practice. In the block stacking example the robot might assume any role, but if the robot has a servant identity with respect to the human, it might not be able to match with the leader role in the block stacking. In general the robot can follow the (simplified) Algorithm 1 below to determine which role it should play in this particular social practice and which role the human will play. In the algorithm we use expected(P, Leader) to denote a function that determines whether the human P can be expected to take the leader role. This can be based on past experiences, the current social identity of the human, but ultimately also on the human giving a first order right away. A second point to note is that the robot has a compliant nature, in that it will only take the leader role if the human does not want it. Of course, more subtle versions of the algorithm could be made that make this dependent on the robot's previous experiences. Once the robot has established its role in the social practice, and therewith also the role of the human, it can start planning its actions according to the plan patterns that are assumed in the social practice.
Algorithm: social-identity(P)
  if known(P) then
    if expected(P, Leader) then
      enact(Follower)
    else
      enact(TeamPlayer)
    end
  else
    if askPreference(P, Leader) then
      enact(Follower)
    else if askPreference(P, Team) then
      enact(TeamPlayer)
    else
      enact(Leader)
    end
  end

Algorithm 1: Determination of Social Identity
Important in this respect is that the social practice already indicates what can be expected of the other role(s), and the plan can use the expected interaction points for efficient planning. It also indicates exactly which aspects of the environment should be monitored for expected actions and communication, and thus when failures can be detected. In Figures 3 and 4 we show the combined plans of the robot and the human for each of the two block stacking social practices that the robot could choose from. Notice that the activities in these diagrams can still be very abstract, and detailed plans can be made in several ways. Given the social practice of teamwork stacking, the pattern given for the robot, where it will stack the next block after the stack has changed (due to the human stacking a block), has the inherent social effect of being cooperative. Thus the robot does not have to reason separately about this social effect of the plan; it is packaged with the social practice. Similarly, the passive waiting in the other social practice is not interpreted as being disinterested, but in this context shows cooperative behaviour.

Fig. 3. Stacking blocks as teamplayer

Fig. 4. Stacking blocks as follower
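As an indication of how such a plan pattern could drive the robot's behaviour, the fragment below sketches one monitoring cycle of the teamplayer pattern of Figure 3 (monitor the tower, stack when it is the robot's turn, ask for help when the tower crashes or the top is out of reach). All sensing and actuation calls are hypothetical placeholders.

def teamplayer_turn(robot, tower):
    """One monitoring cycle of the teamplayer plan pattern (cf. Figure 3).

    robot and tower are hypothetical interfaces to actuation and sensing.
    """
    if tower.crashed():
        robot.request("help")                 # agree on an alternative with the person
    elif tower.changed():                     # the person has just stacked a block
        if tower.needs_top() and not robot.can_reach_top():
            robot.request("place the top")    # delegate the final piece
        elif robot.can_stack():
            robot.stack_next_block()          # take our turn
    # otherwise keep monitoring; waiting here still counts as cooperative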
6 Conclusions
In this paper, we discuss the effect of social issues on joint action. We argue that for successful joint action it is necessary to describe not only the physical aspects of the domain, but also, and primarily, its social characteristics. We introduce an initial check-list for social analysis that can guide the designers of joint actions through the identification of the social characteristics of the domain. The methodological check-list proposed in this paper supports the identification of the aims and steps of the joint action, but not of how to achieve them. Social practices, as discussed in Section 4, describe the social meaning of physical action and can therefore be used to describe how to act physically in ways
that are socially acceptable and that enhance the social relationships. We propose to use the social practice architecture proposed in [3] to specify human-agent-robot interaction in ways that are computationally interpretable and can therefore be used to design artificial agents and robots that interact with people. We concluded the paper by providing an initial description of how social analysis and social practices can support the deliberation design of the robot so that it can participate in human-robot interactions. This paper presents initial work on this topic. Immediate essential steps will be the validation of the social analysis check-list, extending and modifying it by application to several case studies. Future work will focus on the further operationalization of social practices for the design of robots capable of social joint action.
References

1. P. R. Cohen and H. J. Levesque. Teamwork. Technical report, 1991.
2. J. Dias, S. Mascarenhas, and A. Paiva. FAtiMA modular: Towards an agent architecture with a generic appraisal framework. In Proceedings of the International Workshop on Standards for Emotion Modeling, 2011.
3. F. Dignum and V. Dignum. Contextualized planning using social practices. In Proceedings of COIN 2014, 2014.
4. F. Dignum, V. Dignum, and C.M. Jonker. Towards agents for policy making. In MABS IX, pages 141–153. Springer, 2009.
5. F. Dignum, R. Prada, and G.J. Hofstede. From autistic to social agents. In AAMAS 2014, May 2014.
6. Margaret Gilbert. On Social Facts. Routledge, 1989.
7. B. Grosz and S. Kraus. Collaborative plans for complex group actions. Artificial Intelligence, (86):269–358, 1996.
8. M. Johnson, J.M. Bradshaw, P.J. Feltovich, C.M. Jonker, B.M. van Riemsdijk, and M. Sierhuis. Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1):43–69, 2014.
9. D. Kahneman. Thinking, Fast and Slow. Farrar, Straus & Giroux, 2011.
10. V. Lesser, K. Decker, T. Wagner, N. Carver, A. Garvey, B. Horling, D. Neiman, R. Podorozhny, M. Nagendra Prasad, A. Raja, R. Vincent, P. Xuan, and X.Q. Zhang. Evolution of the GPGP/TÆMS domain-independent coordination framework. Autonomous Agents and Multi-Agent Systems, 9(1-2):87–143, 2004.
11. Kerry L. Marsh, Michael J. Richardson, and R. C. Schmidt. Social connection through joint action and interpersonal coordination. Topics in Cognitive Science, 1(2):320–339, 2009.
12. Marvin Minsky. A framework for representing knowledge. In Allan Collins and Edward E. Smith, editors, Readings in Cognitive Science, pages 156–189. Morgan Kaufmann, 1988.
13. A. Reckwitz. Toward a theory of social practices. European Journal of Social Theory, 5(2):243–263, 2002.
14. Michael Schmitz, Beatrice Kobow, and Hans Bernhard Schmid. The Background of Social Reality. Springer, 2013.
15. Natalie Sebanz, Harold Bekkering, and Günther Knoblich. Joint action: bodies and minds moving together. Trends in Cognitive Sciences, pages 70–76, 2006.
16. E. Shove, M. Pantzar, and M. Watson. The Dynamics of Social Practice. Sage, 2012.
17. A. L. B. Smolka. Social practice and social change: activity theory in perspective. Human Development, 44(6):362–367, 2001.
18. Milind Tambe. Towards flexible teamwork. Journal of Artificial Intelligence Research, pages 83–124, 1997.
19. Raimo Tuomela. Philosophy of Social Practices: A Collective Acceptance View. Cambridge University Press, 2002.