YOC: Tangible User Interface for cooperative play


YOC: Tangible User Interaction for Social Skills Development in Children

Final Work
Master of Science in Product Design, a.a. 2018-2019
Student: Simone Cherchi (cherchisimone@icloud.com)
Supervisor: Lorenzo Imbesi



Master's Degree Course in Product Design, a.a. 2018-2019, Dipartimento PDTA
Title: YOC, a tangible tabletop interface for the development of social skills in children aged from 5 to 8 years old
Author: Simone Cherchi
Supervisor: Lorenzo Imbesi
Co-supervisor: Andrea Vitaletti



INDEX

PART ONE: Research & Previous Works

Prologue
01 A Brief Introduction to HCI
02 Physical and Virtual
  02.1 The Process of Dematerialisation
  02.2 Relations Between Physical and Virtual
  02.3 The Mixed Characteristics of Modules
03 The Introduction of TUIs
  03.1 Merging Physical and Virtual Environments
  03.2 Tangible User Interfaces
04 Lowering of Cognitive Commitment
  04.1 Feedback & Feedforward
  04.2 Accessibility for Everyone
  04.3 Usability and Playability
05 The Educational Value of TUI
06 YOC, a Platform for Collaboration
  06.1 Usability and Playability
  06.2 The Modules
  06.3 YOC as a TUI
  06.4 The Educational Properties of YOC
  06.5 Use of Projection in SDG
  06.6 YOC's Technology
  06.7 Applications
07 The Prototype
  07.1 Hardware
  07.2 Software
08 The Code
  08.1 Arduino
  08.2 Unity

PART TWO: Project Presentation

Acknowledgements
Technical drawings
Bibliography



PART ONE Research and Previous Work



PROLOGUE

Abstract: With the introduction of electronics, and later of digital devices, the 20th century changed the face of traditional design, detaching physical actions from their direct outcomes. This evolution culminated in the first personal computers and smartphones, tools able to condense a great number of features into a small space. This work, based on the user-centred design approach and taking inspiration from the experimental world of tangible user interfaces, starts from the educational field to explore the capabilities of an interaction based on traditional physical features. In particular, it discusses the use of tangible interfaces in the development of social skills, such as confrontation, observation and communication with others, for children in the middle of their cognitive development, aged 5 to 8 years old. To the paradigms of dematerialisation and the generalisation of function, this work opposes a decomposition of features among several specialised tools, for the sake of a consistent interaction and the promotion of collaboration within a conceptual model that is easy to understand at diverse levels of skill.

INTRODUCTION

In this work I will analyse the main evolutions in the field of HCI (Human-Computer Interaction); I will then discuss the pros and cons of digitalisation and the possibilities offered by the state of the art of interaction design. Finally, my own interface for tangible interaction will be presented and contextualised within the described landscape. The text is organised as follows:

The FIRST chapter is dedicated to the introduction of the science of HCI and its development up to the establishment of GUIs as a standard.

The SECOND chapter focuses on the pros and cons of dematerialisation as one of the protagonists of the digital revolution, highlighting the need for greater attention to new design paradigms;
the chapter then discusses the characteristics distinguishing the physical and virtual environments, and analyses the role of modules in this relation.

The THIRD chapter explores the theoretical research on the merging of physical and virtual environments in order to define the paradigms for a balanced integration; the field of TUIs (Tangible User Interfaces) is then introduced and its main features discussed.

The FOURTH chapter discusses how to enhance the accessibility of interfaces in order to lower the cognitive commitment their use requires and include less skilful children in the interaction; the video game industry, on both the software and hardware sides, is analysed in comparison to GUIs and TUIs as a field of strong experimentation on both topics.

The FIFTH chapter discusses the educational properties of TUIs, especially regarding the development of collaborative and social skills.

The SIXTH chapter introduces the TUI I realised and discusses various topics among those analysed in the previous chapters; the design process is also shown from the hardware and software perspectives.

The SEVENTH chapter shows the prototypes and the logic behind their design and software.

Finally, the EIGHTH chapter collects the code at the base of the working of my platform and of the communication between Arduino and Unity.

CHAPTER 1

A Brief introduction to HCI

1. The intuitive knowledge of physical laws related to everyday experience.
2. The feature of a device allowing a kind of interaction.

The 20th century brought great technological revolutions to the design landscape: new technical capabilities were used to merge different fields through serendipitous combinations. Designers found new technologies and new ways to shape the world of materials, taking as inspiration the inexhaustible possibilities of geometry and of new synthetic materials. However, it was the embedding of electronics into products that resulted in the most radical shift in both design possibilities and people's relationships with objects. For the first time, the potential behaviour and functionality of a product was disconnected from its physical form [Chang K. & King S., 2015]. While traditional design could easily be associated by the user with its specific function, thanks to the commonly known principles of naïve physics1 [Robert J.K. et al., 2008] and the intuitiveness of its affordances2 [Donald Norman, 2013], an electronic device offers no physical evidence about the process that makes a radio work, nor does it make explicit a clear connection between an affordance and its purpose. Several interaction designers found their own ways to answer the new design paradigms: the American Henry Dreyfuss tried, with his publication of "An Authoritative Guide to International Graphic Symbols", to introduce a standard for graphical guidance, while the German Dieter Rams established the concept of "less but better" through a simplification of his interfaces that would intuitively communicate the role of each control thanks to colours or the use of information graphics [Chang K. & King S., 2015]. The interface organisation of new devices has become more generic as, with the miniaturisation of components, a great number of features have been concentrated together, an effect called feature creep. This phenomenon was pulled by consumers, who often demand the maximum number of features without being aware of the consequent difficulty of use [Kim D. S. & Yoon W. C., 2009]. An uncontrolled development of features can indeed lead to severe inconsistencies, where the interface controls and their corresponding results become difficult to recognise [Donald Norman, 2008]. Today the maximum expression of



this concentration of features has been the personal computer, introduced in the second half of the 20th century. These were initially used exclusively by enthusiasts and professionals; the way the user could interact with the machine was based on standards of specific typed input commands. Only with the introduction by Xerox PARC of the Xerox Alto in 1973, and then with the commercial debut of the Xerox Star in 1981, could the user finally rely on a friendlier user interface, defined WIMP (Window, Icon, Menu, Pointing device) [Smith D. C. et al., 1982]. In his famous demonstration, known as The Mother of All Demos [Engelbart D., 1968], Douglas Engelbart showed the potential of the mouse and of the new GUI system. In this moment several important HCI design principles were set, such as "seeing and pointing vs remembering and typing" [fig. 1 and 2] [Ishii H. & Ullmer B., 1997]. The success of the GUI is not only due to the intuitiveness of the direct manipulation system; it was also based on the principle of skeuomorphism: using a visual metaphor from the real environment to suggest a specific kind of behaviour [Angelica Valentine, 2018]. In the Xerox system, files and their containers were represented respectively as papers and folders, and could be dragged and dropped in different locations as happens on a physical desktop [Smith D. C., 1982]. This technique bridged the gap between casual users and the arcane mechanics of the computer. Katherine Hayles, in her book How We Became Posthuman, describes skeuomorphs as "threshold devices, smoothing the transition between one conceptual constellation and another" [Hayles K., 1999]. Some may object that the introduction of GUIs has not been positive for all users: enthusiasts and technicians, willing to explore the underlying system of the computer, had to adapt reluctantly to the new system. Hutchins, Hollan, and Norman commented: "It is too early to tell how GUIs would fare.
GUIs could well prove useful for novices [...] we would not be surprised if experts are slower with Direct Manipulation systems than with command language systems" [Grudin J., 2012]. On the other hand, researchers such as Jacob explain how, for simple actions, the most efficient choice is not necessarily the one with the highest performance [Robert J.K., 2008 (20)]. For several years the question of whether it is more important to optimise an interface for skilled use or for initial use has been discussed at length in the field of HCI [Grudin J., 2012]. This same topic will be discussed further on, regarding TUIs as, in some cases, less efficient than devices integrating GUIs, but with a higher quality of experience and usability. In 2007, when smartphones still had clumsy keys covering most of their surface, Apple introduced the iPhone, imposing touchscreen technology as the new standard in the sector. From that moment, mouse and keyboard together with the touchscreen represented the established technologies of GUIs, respectively for the personal computer and the smartphone, leaving little room for new forms of affordance in digital devices, but spreading the trend of digital screens even to machines that were not used to having one.

CHAPTER 2

Physical and Virtual

2.1 The Process of Dematerialisation

Even though skeuomorphism has been a visual means for a smoother transition from the physical to the virtual environment, the interaction with our devices still lacks a physical representation of this kind of mediation. The fields of research studying different levels of integration of physical and virtual elements are disparate; in particular we will focus on TUIs, the field of research using physical affordances to manipulate information. Hiroshi Ishii and Brygg Ullmer have been among the pioneers of this subject, and in the introduction of their work they explain the source of inspiration for such an approach: "We were inspired by the aesthetics and rich affordances of these historical scientific instruments, most of which have disappeared from schools, laboratories, and design studios and have been replaced with the most general of appliances: personal computers. Through grasping and manipulating these instruments, users of the past must have developed rich languages and cultures which valued haptic interaction with real physical objects. Alas, much of this richness has been lost to the rapid flood of digital technologies." The researchers then define the duality of the interaction of our century and of the one about to start: "We live between two realms: our physical environment and cyberspace. Despite our dual citizenship, the absence of seamless couplings between these parallel existences leaves a great divide between the worlds of bits and atoms." [Ishii H. & Ullmer B., 1997]. Ishii and Ullmer consider the physical properties of specialised affordances an opportunity to generate unique experiences. In an article introducing two innovative design methods for tangible interaction, Buur, Jensen, and Djajadiningrat declare: "Currently, the actions required by electronic products are limited to pushing, sliding and rotating.
Yet humans are capable of far more complex actions: Human dexterity is highly refined. This focus on actions requires a reconsideration of the design process." [Buur, J., Jensen, M.V. and Djajadiningrat, T.]. In 2011 Bret Victor wrote an article commenting on a video produced by Microsoft about the company's vision of the future of interaction [fig. 3] [Microsoft, 2011].

1. The CLI requires the user to remember commands and type them on the keyboard to communicate with the computer.
2. The GUI of the Xerox Star uses graphical metaphors for a communication based on real-world behaviours.

The video shows future technologies augmenting reality through the overlay of data on the flat surfaces of phones, blackboards and business cards; holograms and displays are integrated into every activity, and users interact with them through direct manipulation with their fingertips. The interaction designer highlights that even though this seems to be the direct consequence of the technology standards we use today, it does not mean it is the propitious future we should wish for: our hands are capable of a wide number of physical interactions that should not be limited to the 2D surface of displays, a technology he calls "pictures under glass". According to Victor, future devices should exalt the numerous abilities of the user's hands, so that the device adapts to them and not vice versa [fig. 4] [Bret, 2011]. Anna Vallgårda and Tomas Sokoler explain how humans are not able to perceive the information stored in digital devices, and that in order to make it available to our senses, information has to be combined with another material which has a humanly perceivable form [Vallgårda A. and Sokoler T., 2010]. Lukas Van Campenhout et al., equally concerned with the impoverishment of interaction, focus their attention on the phenomenon of digital dematerialisation; they give a problematic definition of it, in the attempt to describe the main drawback of digital technology: "Dematerialisation occurs when digital content becomes disengaged from its carrier, and flows freely through networks and devices, while the carrier disappears" [Van Campenhout L. et al., 2013]. Examples proposed are CDs, music albums, money, books, photos, and all those contents that are slowly finding competitors in their digital counterparts integrated into smartphones and computers. Campenhout even proposes the use of design techniques for a "guided dematerialisation", referring to a balance in the integration of physical and virtual elements [ibid].
Most of these assumptions claim a change in the approach with which design is managing the evolution of digital devices and, more generally, an awareness of the change of paradigm that must be taken into consideration with the advance of new informational systems.

2.2 Relations Between Physical and Virtual

"How can we apply dematerialization in digital products with respect for their digital character? And how can we do it without closing the gate to the physical world?" [Van Campenhout L. et al., 2013]. To define the forms of mediation between the physical and virtual environments, the following part studies their characteristics and relations. Discussing this point provides the knowledge that will be used to define the boundaries of the domain of each element in the specific approach of my product. Physical and virtual elements are embedded in every digital interface as its hardware and software; they can be identified by three groups of opposite characteristics: tangible vs intangible, static vs dynamic, and persistent vs transient [Van Campenhout L. et al., 2013]. In the first dualism, the term tangible refers to the physical nature of an object: it has a weight, a temperature, a texture, etc.; information is instead intangible, because it cannot be perceived except through a mediation. The opposition between static and dynamic highlights how physical elements have a limited capacity to reorganise their structure; this makes a physical object unable to adapt to other kinds of functions but, at the same time, makes it very easy for the user to identify its affordances and understand its purpose. Information, on the other hand, is dynamic and allows a greater flexibility, together with the possibility of accomplishing diverse tasks.
Finally, physical elements are persistent, while digital information is transient: the former have a passive presence in the environment and their representation is tied to their affordances; this characteristic can make a physical interface negatively cumbersome or positively ready to use, according to the needs of the user. Digital information, in the opposite way, disappears the moment the system hosting it is no longer active. This detachment of information from the spatial dimension allows a virtually infinite space for visual storage and for movement within the virtual environment [Van Campenhout L. et al., 2013][Eisenberg M., 2002]. To summarise, both the physical and the digital natures can be associated with positive and negative acceptions: elements of the real world are familiar and easier to comprehend, while virtual elements are more devious but provide a greater number of informative tools in a short amount of time.



3. A frame from Microsoft's Future Vision video.
4. The array of gestures our hands are capable of, from Bret Victor's article.



2.3 The Mixed Characteristics of Modules

Among physical tools, modularity can be an exception to the physical characteristics discussed here. Thanks to the dynamic nature of modules, groups of physical elements, considered as one system, have the ability to integrate qualities exclusive to the virtual world: the tangible, static and persistent characteristics of the physical environment merge with the ability of the system to be dynamic and somehow transient through the decomposition of its elements [Ullmer B. & Ishii, 2001]. Examples of modules in this sense are the work Più e Meno by Bruno Munari and the Chinese puzzle Tangram [fig. 5 & 6]. The taxonomy by Michael Eisenberg et al. on modular toolkits for children gives categories for different kinds of flexibility in modularity, such as specificity of construction and domain specificity. The first concerns the specificity of the kind of construction the user is supposed to be able to create with the modular components, in opposition to non-specific, free-form structures. The traditional Lego toolkit represents a modular system with low specificity of construction, while the company's new business model seems to go in the direction of specialised kits for a more guided construction process. Domain specificity represents the constraints on modules defining the area of application to which they can be applied. Modules for molecular modelling such as Zometool can have a low specificity of construction but are unlikely to be used for creating a car, while traditional Legos, with their flexibility, can move between different contexts of representation [Eisenberg M., 2002].

CHAPTER 3

The Introduction of TUIs

3.1 Merging Physical and Virtual Environments

The study by Theo Mahut et al. defines a method for an appealing combination of the two dimensions. The researchers analyse the relationships between three different elements of an interactive product: the environment, the target (the product) and the source (the perceived reference). Each of these three components of the experience is considered both as part of the physical environment and of the virtual one. The nine possible combinations were then associated with examples drawn from videos of the science fiction genre and of experimental interfaces. Through a survey with a group of volunteers about their preferences among the collected videos, a markedly higher number of appreciations was recorded for products presenting a mix of physical and virtual elements in the combination of target and source, with no particular emphasis on the specific nature of the environment. In other words, it was assumed that when a tangible affordance (smart gloves, a remote) allows the direct manipulation of something intangible (data, sounds), or vice versa, the interaction becomes appealing to the user. The research does not give particular heuristics to follow for the exploitation of this knowledge, but lets the designer use the described characteristics for a deeper awareness in the design process [Mahut T., 2017]. Robert J.K. Jacob et al., in their research on RBI (Reality-Based Interaction), pose as their main focus the tradeoffs to take into consideration when balancing real-world and virtual elements. Introducing several avant-garde styles of mixed interaction, the paper exposes its idea of RBI through the classification of four themes comprehended by this phenomenon: Naïve Physics, the perception of physics in its most shared and intuitive rules, which every person has embedded in their experience of the world;
Body Awareness and Skills, the perception and coordination of the individual's body in space; Environment Awareness and Skills, the sense people have of their surroundings and the capability to manage manipulation within that space; Social Awareness and Skills, the perception people generally have of others and the ability to communicate with them [fig. 7] [Robert J.K., 2008]. These themes, describing people's capabilities in the experience of real-world situations, are used by Jacob to be compared with the experience of an interface and to define the level of involvement. Furthermore, each of the themes is compared with six tradeoffs representing the capabilities gained from the introduction of virtual features in a product: Expressive Power, Efficiency, Versatility, Ergonomics, Accessibility and Practicality [fig. 8]. It can be useful to specify how expressive power and versatility represent two distinct attributes, where the first concerns the capability to accomplish several kinds of operations within an



5. The modular game Più e Meno by Bruno Munari.
6. The Chinese modular game Tangram.
7. The four themes of RBI.
8. Tradeoff relations between reality and virtual augmentation.



application context, and the second the capability to accomplish tasks related to different applications and contexts [ibid]. We can notice how these two tradeoffs have been discussed in a very similar relation in the field of physical modules, respectively as specificity of construction and domain specificity [Eisenberg M., 2002]. Given these tools, which will be used further on, the research offers a key for their interpretation: "We propose a view that identifies some fraction of a user interface as based on the RBI themes plus some other fraction that provides computer-only functionality that is not realistic. As a design approach or metric, the goal would be to make the first category as large as possible and use the second only as necessary, highlighting the tradeoff explicitly" [Robert J.K., 2008]. The breakdown between physical and digital elements on which Mahut and Jacob base their research resembles the MVC model that Ullmer and Ishii use to define the different approaches of traditional GUIs and of the new field of TUIs in the separation of physical and digital. It is possible to observe how, according to the metaphor of water as the virtual environment and of what lies above it as the physical environment (with all the characteristics we assigned to them), GUIs have a minimal part of the system facing the real environment, while even the spatial manipulation of the hardware (the mouse) is highly dependent on the virtual representation of space. On the other side, the ideal model proposed by the researchers, which will be shown in the next paragraph, is aligned with Jacob's approach, bringing, where possible, the submerged components of the interaction into the tangible environment [Ullmer B. & Ishii, 2001].

3.2 Tangible User Interfaces

Several fields of research involving the combination of real and virtual environments have been widely discussed in the last 20 years; here I will focus on the field of TUIs, especially on its property of not only merging elements of the physical and virtual environments, as we see happen in Digital Desk through AR (augmented reality) technology, but of allowing the control of information through the manipulation of physical representations [Ishii H. & Ullmer B., 1997]. It could be objected that even GUIs, thanks to the mediation of the mouse, could then be considered TUIs. To better define the distinction between the two typologies of interface we can analyse the difference proposed by George W. Fitzmaurice, who recognises in TUIs a selection based on space, in which the user interacts directly with the object of interest (space-multiplexed), and in GUIs a selection based on the acquisition of the target followed by interaction through the hardware affordance (the mouse), which in this way has no specific digital association but an unlimited number of them (time-multiplexed). The general purpose of a mouse does not allow it to play a real role as a TUI physical representation, since it does not represent any specific digital information other than the movement of the pointer [Fitzmaurice G.W. & Buxton W., 1997]. Still, TUIs have a wide range of types; to define the term in accordance with the Fitzmaurice model presented above, I will use in my work the interpretation by Van Campenhout et al., according to which tangible interfaces are interfaces able to "augment the real physical world by coupling digital information to everyday physical objects and environments" [Van Campenhout L. et al., 2013]. Here I will describe the meaning of some of the most recurrent terms used to describe the dynamics of a TUI: the token, the reference frame and the container.
Tokens are physical manipulable objects with no particular virtual feature; their shape is considered by the system as embedded physical information expressing (often univocally) its digital counterpart. Each token has a specific function, its shape being designed as an iconic physical interface. Examples of tokens are the building models in the URP interface or the lenses used on the metaDESK [Robert J.K., 2008][Ullmer B. & Ishii, 2001][Ishii H. & Ullmer B., 1997]. A reference frame is the physical interaction space within which the tokens can be manipulated; again in URP, the table where information gets projected is an example [Ullmer B. & Ishii, 2001]. A container is a physical tool hosting dynamic virtual information such as images, videos, sounds etc., similarly to what a floppy disk or pendrive would do. The physical support confers on the embedded digital information the physical qualities of tangibility and persistence. Containers can interact actively with other devices, collecting and transferring files, while their shape does not need to resemble their content: the dynamic information they carry has no specific attachment to its physical support.

9. URP, a tangible user interface used for the collaborative development of urban projects.

Examples of containers are the blocks in the mediaBlocks interface or the spheres collecting vocal messages in the Marble Answering Machine [Ullmer B. & Ishii, 2001][Ishii H., Ullmer B. & Robert J. K. Jacob, 2005]. Taking again as an example the metaphor presented through the MVC model by Ullmer and Ishii, we can see how TUIs bring the control interface and the virtual elements out of the digital environment, materialising them as physical information through the use of tokens and containers [Ullmer B. & Ishii, 2001]. To contextualise the use of these TUI tools, I show here four of the models described by Ullmer and Ishii that we will be using in the following paragraphs. In TUIs we can recognise platforms based on the spatial system, the relational system, the constructional system and the mixed constructive/relational system [Ullmer B. & Ishii, 2001]. In the spatial system, "the spatial configuration of physical tokens within one or more physical reference frames is directly interpreted and augmented by the underlying system" [Ullmer B. & Ishii, 2001], according to a set of relative spatial rules, called constraints, embedded in the reference frame [Ishii H., Ullmer B. & Robert J. K. Jacob, 2005]. Thanks to the interaction with the constraints, the token becomes both a physical representation of the medium (its shape perceived by the system as information) and the control used to manipulate it. In the relational system, differently from the spatial system where tokens interact with the reference frame, tokens and containers communicate among themselves through logical relations for more abstract computational results. The containers in the Marble Answering Machine, for example, interact when inserted in a slot to play back the recorded vocal message, and mediaBlocks can print an image coded in their memory through insertion into a printer via a special slot.
It is highlighted how this particular system allows a rich communication between physical and digital languages, and how the right balance between these two forces represents one of the greatest challenges in TUI design. A third approach regards the constructional system, a set of computationally enhanced modules able to be physically assembled through diverse technologies. Examples of this model are the Alphabet Blocks or ActiveCube [fig. 13]; both of these platforms allow the creation of physical constructions between modules and their visualisation on the computer once connected [Eisenberg M., 2002][Hiroyasu Ichida et al., 2014]. Finally, the mixed constructive/relational system represents a union of the relational and constructional systems, in which mechanically jointed components react to each other according to their embedded computational semantics; an example is AlgoBlock, a tool for children to learn coding paradigms through the composition and computational communication of its components [fig. 12] [Ullmer B. & Ishii, 2001]. The mixed constructive/relational system is mentioned in the research as a promising combination between fields; whether other systems, or combinations of systems, are possible is not specified. In the sixth chapter it will be discussed how YOC could be the manifestation of a kind of system not discussed here. The constructional and the mixed constructive/relational systems present a different kind of constraint with respect to the token+constraint coupling [Ishii H., Ullmer B. & Robert J. K. Jacob, 2005] of spatial systems: where in the latter the underlying system interprets the position of tokens within the reference plane, platforms based on the constructive system generally interpret spatial locations in relation to their modules. Such a spatial awareness requirement is the main purpose of the integration of computational elements in toolkit modules [Eisenberg M. et al., 2002].
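The distinction just drawn, between a system that reads token positions against the reference frame and one that reads them relative to the other modules, can be made concrete with a small sketch. The following Python fragment is illustrative only (it is not part of any of the cited platforms), and all names in it are hypothetical: a spatial system resolves meaning from a token's absolute position inside the frame, while a constructional system derives meaning from which modules are adjacent, wherever the assembly sits.

```python
from dataclasses import dataclass

@dataclass
class Token:
    shape_id: str   # the physical shape, read by the system as information
    x: float
    y: float

def spatial_interpret(tokens, frame_w):
    """Spatial system: the reference frame carries the constraints,
    so each token is interpreted by its absolute position in the frame.
    Here the constraint is simply which half of the frame the token is in."""
    zones = {}
    for t in tokens:
        zone = "left" if t.x < frame_w / 2 else "right"
        zones.setdefault(zone, []).append(t.shape_id)
    return zones

def constructional_interpret(tokens, reach=1.0):
    """Constructional system: meaning comes from which modules are
    physically adjacent, regardless of where the assembly is placed."""
    links = []
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:]:
            if abs(a.x - b.x) + abs(a.y - b.y) <= reach:
                links.append((a.shape_id, b.shape_id))
    return links
```

Note the design consequence: translating the whole assembly across the table changes the output of `spatial_interpret` but leaves `constructional_interpret` untouched, which is exactly why constructive platforms must embed spatial awareness in the modules themselves rather than in the table.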
The following paragraphs will present some key study for a more clear comprehension of the paradigms described about the different systems of TUIs; these will be used even in the sixth chapter for the definition of the attributes that is similar of combined ways have generated my interface. Key study 1: Reactable is a TUI based on the use of tokens and constrains so that it represent a spatial system [fig.11]. The tokens have information graphics and colours as physical representation of specific virtual elements for the contribution in the generation of music (pitch, rhythm etc.) so that each token can not serve for a different purpose from the one assigned. As described above even here the tokens represent both digital properties (as the pitch) and the relative control affordance; the feedback resulting from the manipulation process of tokens is managed by the constraint that set specific rules for different kind of manipulation. Being the tokens passive physical elements their usage have a limited level


10. Monogram, a modular interface for graphics and video editing controls. 11. Reactable, a collaborative platform for the control of abstract music values.



of flexibility [Reactable, 2007]. Case study 2: Monogram represents a particular kind of constructional system in which the modules are joined by magnetic force and communicate through a USB cable with a computer [fig. 10]. Each module presents a conventional kind of affordance on its top, so that simple inputs can be translated from the user to the virtual model. Even though the pairing with a computer may suggest a general association of the affordances with the computer controls, this is not entirely true: in this platform it is indeed the user who manually connects each module to its function in the several compatible programs; at the same time, the spatial composition of the elements has no relation with the controls in the software, so that we can consider this interface neither time-multiplexed nor space-multiplexed [Monogram][Fitzmaurice G.W. & Buxton W., 1997]. Following Jacob's model of tradeoffs [Robert J.K., 2008] we can find further oppositions between case studies 1 and 2: while Monogram trades the efficiency factor for a higher expressive power and versatility, Reactable, vice versa, trades versatility for efficiency, keeping the platform in the field of specialised applications. Case study 3: AlgoBlock is a mixed constructive/relational system, where modular elements can be joined together to generate abstract computational relations. The game is meant to help children think spatially to solve programming tasks and develop a logical problem-solving attitude. What makes this product constructive and relational at once is the modular nature of the structure and the abstract interpretation resulting from the coupling of its components [Ullmer B. & Ishii, 2001].
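The user-assigned binding that characterises Monogram can be contrasted with a fixed mapping in a short sketch. The following C++ fragment is an illustrative model only (the `Binding` type and its API are my assumptions, not Monogram's actual software): the association between a module and its function lives in a table the user fills in, and the module's physical position plays no role at all.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <utility>

// Illustrative sketch of a Monogram-style, user-configured mapping:
// neither time-multiplexed nor space-multiplexed, because the pairing of
// module and function is arbitrary and position-independent.
struct Binding {
    std::map<int, std::function<void(double)>> byModule;

    void bind(int moduleId, std::function<void(double)> fn) {
        byModule[moduleId] = std::move(fn);           // the user picks the pairing
    }
    void onInput(int moduleId, double value) {
        auto it = byModule.find(moduleId);
        if (it != byModule.end()) it->second(value);  // unbound modules are ignored
    }
};
```

In a space-multiplexed TUI this table would be fixed by the physical design of each token; here it is software state, which is exactly why the mapping is not "natural" in Norman's sense.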

CHAPTER 4 Lowering Of Cognitive Commitment

4.1 Feedback & Feedforward In order to discuss the usability properties of TUIs, it is necessary to define the meaning attributed to the term cognitive commitment and the several classifications of feedforward and feedback involved in the processes of user-centred design. In this work, cognitive commitment will refer to the quantification of the mental effort required to accomplish a task, and is therefore inversely proportional to the feeling of directness in the interaction [Hutchins E. L., Hollan J. D. & Norman D. A., 1985]. The terms feedforward and feedback refer respectively to the anticipation that an affordance gives about its function, and to the information resulting from its use. Both feedforward and feedback can then be coupled with three further attributes, inherent, augmented and functional, to represent the ways in which an interactive product communicates with the user [Wensveen S. A. G., Djajadiningrat J. P. & Overbeeke C. J., 2004][Kuenen S., 2012]. Inherent means the source of information is physically embedded in the affordance; augmented represents an explicit notification designed to give further information to the user; finally, functional refers to the functional purpose of the product: Inherent The inherent feedforward is the information about the modality of use communicated through the physical characteristics the affordance presents, such as its material and shape. The inherent feedback is the physical perception received while using the affordance, such as feeling the rubber of a button under the finger; it gives information about the progress of the operation.
Augmented The augmented feedforward has the task of informing, in an explicit way and before the interaction, about the kind of result related to the activation of an affordance; it can be the colour of the affordance, its information graphics, or a label. The augmented feedback serves as confirmation that the action has been registered by the system; it can be any kind of perception, such as a sound or the activation of an LED [ibid]. Functional The functional feedforward goes beyond the action possibilities and concerns the general messages that an object gives about its function, while the functional feedback represents the result of the interaction with the product, i.e. the information proving that the product is carrying out its function.
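The six categories above form a 2×3 grid: timing (feedforward anticipates the action, feedback follows it) crossed with source (inherent, augmented, functional). The following C++ fragment is my own encoding of that grid, not something from the cited papers, offered only to make the structure of the taxonomy explicit:

```cpp
#include <cassert>
#include <string>

// Illustrative encoding of Wensveen's taxonomy: timing x source.
enum class Timing { Feedforward, Feedback };
enum class Source { Inherent, Augmented, Functional };

std::string category(Timing t, Source s) {
    std::string name = (s == Source::Inherent)  ? "inherent "
                     : (s == Source::Augmented) ? "augmented " : "functional ";
    return name + (t == Timing::Feedforward ? "feedforward" : "feedback");
}

// Examples drawn from the text:
//   category(Timing::Feedforward, Source::Inherent)   -> shape/material of a button
//   category(Timing::Feedback,    Source::Augmented)  -> LED or sound confirming input
//   category(Timing::Feedback,    Source::Functional) -> the product doing its job
```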


12. AlgoBlock modules in the early development stage. 13. Active Cubes divided into sensors and actuators.



In describing these declinations of feedforward and feedback, Wensveen specifies how the spatial coupling between the action and the functional feedback can enhance the intuitiveness of interactive interfaces, in a way similar to the one discussed in the next paragraph through Norman's analysis of mapping [Wensveen S. A. G., Djajadiningrat J. P. & Overbeeke C. J., 2004].

4.2 Accessibility For Everyone We saw how GUIs' direct manipulation and skeuomorphism work as metaphors of physical manipulation, allowing untrained users to approach the virtual environment of computers. These two models are both metaphors that capitalise on common knowledge of the physical world, reducing where possible the cognitive commitment required in the interaction [Valentine A., 2018]. When it comes to the use of interactive computational systems by children, the lowering of cognitive commitment gains critical relevance: children can possess diverse levels of knowledge, while a more widely shared metaphor, such as that of physical manipulation, can lead to a more inclusive experience for children at diverse levels of skill. Africano et al., in their research on interaction for children aged 6 and 7 years old, report differences in reading, writing and mathematical abilities influencing the performance of interaction [Africano D., et al. 2004]. In their analysis of children aged 5 to 7 years old, Sluis et al. notice the difficulty these children have in simultaneously managing the control of the mouse and the movement of the pointer on the screen [Sluis R.J.W., et al. 2004]. Different levels of knowledge among users mean that metaphors are almost never absolute; they do not work in every context and need to refer to elements familiar to the user's conceptual model [Donald Norman, 2013]. TUIs use several kinds of metaphors, from the acquisition of some of the GUI paradigms (such as drag and drop in mediaBlocks) [Ullmer B. & Ishii, 2001] to the original idea of physical manipulation of information, which is based on the concept of space-multiplexing introduced above (chapter 3.2). The idea of interacting with digital information through spatial relations with the physical world was introduced under the term natural mapping by Donald Norman in the book The Design of Everyday Things [Norman D., 2013].
Norman explains how natural mapping takes advantage of spatial analogies in order to make more intuitive the connection between the affordance and its direct outcome. The spatial analogies described by Norman concern the relative position between affordances resembling the relative position of the functional feedbacks, or, as we experience in TUIs, the direct coupling of affordance and functional feedback in a unique shared position. Norman claims that natural mapping represents an approach allowing immediate understanding of the interface, making this metaphor extremely appealing when it comes to making children interact with virtual environments [Donald Norman, 2013 (4)]. Norman gives us a second approach for the design of interfaces, called modularisation, which suggests dividing digital products into modules when they reach a critical level of creeping featurism. This division allows an increased usability of the tool thanks to a logical spatial definition of the physical affordances assigned to each digital task. Consider a printer: it will have recognisable modules, an opening on the upper part for the scan function and a lower one for setting up and starting the printing process. With this point Norman wants to claim that the answer to a good user experience is not the simplification of our digital tools, but a better organisation of their components and the related affordances [Donald Norman, 2008]. Dimension also influences the perception of feedforward and feedback: as Ullmer and Ishii explain, different sizes afford different kinds of action; an object too big to be grabbed with one hand must be taken with two, a cube of 10 cm is usually meant to be used with a certain force, while 5 cm allows a more precise manipulation [Ullmer B. & Ishii, 2001]. Considering that the open hand of a 6-year-old child is no smaller than 11.5 cm, we can use similar, slightly scaled measures for the children in our specific target [Malina R. M., Hamill P. V. V., & Lemeshow S., 1973].
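Norman's natural mapping can be reduced to a simple rule: pair each control with the output whose relative position it mirrors. The following C++ sketch is a hedged illustration of that rule (the `mappedOutput` helper and the normalised layout coordinates are my assumptions), in the spirit of Norman's well-known stove-burner example:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch: each control is associated with the output whose
// relative position in the shared layout it mirrors, so the association can
// be read directly from the arrangement instead of being memorised.
struct P { double x, y; };

int mappedOutput(const P& control, const std::vector<P>& outputs) {
    int best = 0;
    double bestDist = 1e18;  // distances in normalised layout coordinates
    for (std::size_t i = 0; i < outputs.size(); ++i) {
        double d = std::hypot(control.x - outputs[i].x, control.y - outputs[i].y);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return best;
}
```

In a TUI the rule degenerates further: affordance and functional feedback share one position, so the "lookup" disappears entirely.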



14. Example of poor and good mapping. 15. The chart by Donald Norman showing the relation between capability and usability of a product [axes: Number of Features vs. Desirability, Capability, Usability].



4.3 Usability and Playability One of the fields that has invested most in the development of intuitive interfaces for immersive interaction is the video game industry. Video games were separated at birth from the HCI standards established for computer interfaces; this separation, and the possibility to explore a much less constrained field, allowed the medium to explore interaction technologies and create a great number of categories radically different from GUIs; furthermore, the exaltation of the fun factor and the pursuit of techniques of engagement made these interfaces highly usable [Dyck J., 2003]. The association of usability and playability with attributes such as engaging and entertaining is widely accepted in the context of interaction, to the point that several studies on the evaluation of video game design, such as the work by Federoff or Pinelle, have been carried out on the basis of the established HCI heuristics in the book Usability Inspection Methods edited by Nielsen [Federoff M. A., 2002][Pinelle D., Wong N. & Stach T., 2008]. Therefore, references to attributes related to engagement and immersive interaction are considered in this work directly connected to an improvement of usability factors, which then extend to the educational capability of physical interfaces. Analysing the evolution of video game interfaces, we can recognise an obvious distinction between the standards of the strategy and action genres. Strategy video games are meant to be played using GUI hardware such as mouse and keyboard. They capitalise on the knowledge of established computer controls, such as selection through the drag and drop gesture, and the direct manipulation of events of diverse nature, e.g. multiple units or buttons representing abstract commands [Dalmau D. S. C., 1999].
On the other hand, most console video games present game mechanics focused on the control of a single main character through the use of a controller, where each actuator* is connected to a specific action. We can summarise that strategy games require the use of abstract strategic skills, while controllers represent a more specialised tool for the control of a single physical character at a time. Even though controllers differ from GUIs through the physical representation of specific virtual events, most of them are still not compatible with the TUI process of mapping. The established standards of console controllers are not vertically specialised for specific kinds of interaction, leaving the game designer the possibility to associate affordances and virtual events3 according to his own logic. If we consider, for example, the four buttons commonly positioned on the right of standard controllers, we can notice how most of the time they trigger actions such as jumping, activating objects, or crouching to hide, yet all these actions happen through pressure on identical buttons. Considering the lack of natural mapping between traditional controllers and the on-screen action, we can find important differences with Fitzmaurice's space-multiplex paradigm. It must be conceded, however, that the analog sticks usually associated with the directional control of the player's body and camera view allow controlling, with a good mapping, the direction of movement (left stick) and the camera rotation (right stick). These complex actions can be decomposed as the combination of rotation and acceleration, favouring a more immersive experience over time efficiency. Other examples of mapping in controllers are the shoulder triggers usually positioned on the back of conventional console controllers: they were initially designed as a specific metaphor of a gun trigger, but then got integrated in all kinds of games with diverse functions [Lu W., 2003].
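The decomposition of an analog-stick deflection into rotation and acceleration can be sketched in a few lines. The following C++ fragment is an illustrative model (type names and normalised ranges are my assumptions, not taken from any console SDK):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative sketch: one stick sample decomposed into the two quantities
// mentioned in the text - a rotation (heading angle) and an acceleration
// (deflection magnitude).
struct StickSample { double x, y; };   // each axis normalised to [-1, 1]

void decompose(const StickSample& s, double& headingRad, double& magnitude) {
    headingRad = std::atan2(s.y, s.x);                 // rotation component
    magnitude  = std::min(1.0, std::hypot(s.x, s.y));  // clamp diagonal deflection
}
```

The graded magnitude is what distinguishes a stick from the identical digital buttons discussed above: the stick's physical deflection maps continuously onto the virtual movement, which is as close to natural mapping as conventional controllers get.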
The application of the principles of natural mapping would result, according to the model by Jacob et al., in a tradeoff of expressive power and versatility in favour of realistic interaction, and therefore accessibility for untrained users, on the basis of common naïve physics knowledge. Natural mapping would indeed limit the hardware (versatility) and the software (expressive power) available to game designers; a cumbersome example of this can be found among early consoles in the Coleco Telstar Arcade (1977), an interface comprising a laser gun, a steering wheel and dials that could be used according to the needs of the software [Cornish D., 2015].

3. In game design, a virtual event is any change in the virtual environment, usually triggered by an action of the player.

Situations in which natural mappings are applied to video games are usually justified by the immersive and intuitive character of the interaction. Kim claims the importance of using consistent interfaces in order to create complex interactions that still keep a good usability [Kim D. S. & Yoon W. C., 2009]. Lee and Im raise the consistency theme as well, and emphasise the properties of an interactive interface such as the ease of learning and remembering the controls, the reduction of performance time and the reduction of errors [Lee J. & Im C., 2009]. Sánchez et al., in discussing the methods to enhance the usability of video games, claim that "the more realistic a video game […], the greater the Immersion of the



player. Realism helps to focus the player on the game's challenges, rules and objectives by making the virtual world as believable as the real world." [Sánchez J. L. G., Zea N. P., and Gutiérrez F. L., 2009]. Cases in which video games pursue this kind of immersion through the consistency of the input hardware are few: examples are hardware sets for car games or Nintendo's original adaptable hardware (e.g. Wii accessories and LABO). Games able to combine in a good balance the natural mapping of the controls and the efficiency of interaction are often related to the indie category, where the game mechanics rely on a few simple naïve physics rules, still limited to the two degrees of freedom of the screen. The game designer Bennett Foddy works, in his 2D creations, on a process of reverse mapping, where his games are designed to fit the already established standards of GUI controls [Foddy B., website]. A characteristic of consistency present in most console controllers is the vertical constraint of the option button. A vertical constraint is an external constraint recognised by multiple platforms and applications, and therefore by the user's conceptual model. For consistency it maintains the same function whatever application the system is running, so that this convention is particularly useful for accessing at any moment the settings of the program or the shift between applications [Kim D. S. & Yoon W. C., 2009].

CHAPTER 5 The Educational value of TUIs

The relevance of TUIs for children's education was pointed out even before the emergence of the term TUI: the research about the benefits of physical interaction was indeed developed through experimentation on the interaction of children with robots, documented in the book Mindstorms by Seymour Papert. In the discussion about the entertainment value of tangible interfaces, Zaman et al. note that the boundary is blurred between a child's toy featuring some electronics and interactivity and what the research community recognises as a TUI designed to support educational children's play [Zaman B., 2012]. The affinity between physical interfaces and playful educational interaction is also discussed by Eisenberg et al., who claim that kit-toys designed for the assembly of physical models have a venerable place in the history of education [Eisenberg M., 2002]. These modular elements, as we saw previously, have the possibility to break some intrinsic limitations associated with physical products. It has been suggested that toolkits could be enhanced through computational augmentation for a consequent improvement of their educational potential. Eisenberg et al. explain: "We see computational media as a source of techniques by which the traditional educational value of construction kits can be both preserved and strengthened. The advantages of working with physical materials—the acquisition of wisdom 'through one's hands'—need not be seen as opposed to the use of computation; while at the same time, both the expressive range and pedagogical (or more broadly, communicative) capabilities of construction kits can be vastly improved." This, for example, is what happens in the proposed Alphabet Blocks, in which each component communicates its letter to a core element, which eventually gives as feedback the composed word and its pronunciation [Eisenberg M. et al., 2002].
One element in particular making computational interfaces appealing for children is the capability of creating immersive augmented experiences in the real environment, with a consequent phenomenon of engagement. Ullmer and Ishii, in claiming the educational properties of TUIs, suggest that the entertaining element can contribute to the involvement of children in the activity and thus enhance the learning possibilities [Ullmer B. & Ishii, 2001]. In the creation of an environment for a playful interactive activity for children, Price et al. observed how the use of physical elements allows children to experience immersive interactions when virtual phenomena manifest in the real environment [Price S., 2003]. Immersive experiences enhanced by RBIs not only have the capability to create immersive contexts, but are also tied to the phenomenon of social interaction among users in a shared environment. Zaman explains how several studies indicated that the "physicality and visibility of tangible interactions foster social interaction and collaboration, which



in turn provides fun. Social interaction or 'social fun' so far seems to be the most important benefit of TUIs in many empirical evaluations" [Zaman B., 2012]. In the development of Camelot, a TUI game for children aged 7 to 10 years old, Verhaegh defines the development of social skills as one of the fundamental factors in interactive play [Verhaegh J., 2006]. The circumstances in which TUIs can lead to the development of social skills have been discussed mainly on the theme of hardware affordances: Bekker et al., in the analysis of games with computational properties such as Battle Bots, connect the absence of the screen medium to a richer social interaction between children [Bekker T., Sturm J. & Barakova E., 2009]. Beside this statement, other studies have moved their focus onto the controls that mediate with the virtual environment on the screen: Verhaegh et al. specify how Sony's approach in the production of multimedia interactive technologies (e.g. Eye Toy) sees the television as a more social medium compared to the computer [Verhaegh J., 2006]. Africano et al., in their study of children's experience with virtual interfaces, find that interaction with the computer has the property of fostering social support and interaction, and that children prefer working in groups around a single station rather than alone. At the same time, it has also been noticed how GUIs, having the mouse and keyboard as interaction medium, leave little room for cooperation in the use of interfaces [Africano D., et al. 2004][Sluis R.J.W., et al. 2004]. Broadhead has created a methodology called the Social Play Continuum, in which social play behaviour is measured by the level of reciprocity in language and action; as will be discussed further on, different structures of interaction can influence the level of reciprocity between children [Broadhead P., 2003].
It can be assumed that the limitation of social interaction is not entirely attributable to the medium of the screen, but mainly to the limitations of GUI hardware, which is designed for one-to-one usage. Stewart, Bederson and Druin introduce in their work the concept of the SDG (Single Display Groupware) paradigm, which shares a lot with TUIs. It is based on the use of a single screen for the interaction of multiple users, and it has been claimed by the researchers to enhance the collaborative possibilities of digital devices; a single screen can indeed allow everybody to have an overview of the ongoing activity [Stewart J., Bederson B., & Druin A. 1999]. Sluis, in the presentation of an interface for collaborative interaction for children, explains how children can benefit greatly from tabletop interfaces, and specifies that they show a greater level of engagement when working together on a platform designed for multiple users than when they are forced to work together on conventional interfaces [Sluis R.J.W., et al. 2004]. Here I present some field studies that have experimented with the ways social collaboration can be promoted through the computational augmentation of interfaces. Ely the Explorer is an SDG interactive game where the screen is divided into three areas able to host the interaction of three children simultaneously. The platform is based on a relational system in which cards and puppets are used as tokens to trigger specific feedbacks in the activity. The research describes how the manipulation of the physical supports and the setting of a common task promoted playful exchange and discussion [Africano D., et al. 2004]. The Hunting of the Snark is an interactive game taking place in the real-world environment, enhanced through the use of computational elements, with the shared goal of finding the fictional creature called the Snark.
This research highlights how collaborative discovery can help children develop specific social skills: it helps to "appreciate other's perspectives, and encourages negotiation, tolerance and the ability to listen to others […] the visibility of others actions also enables children to be aware of the effect of theirs and others actions, encouraging further exploration" [Price S., 2003]. Considering the barriers posed by diverse conceptual models, shared environments for social interaction should respond to common rules understandable by all the participants. On this point, Hornecker and Buur claim that lightweight interaction (i.e. an interaction with simple and direct functional feedback) in a shared physical environment is one of the determining factors in generating shared participation among users, stimulating conversation and co-interaction [Hornecker E. & Buur J., 2006]. This statement aligns with the video game heuristics described in the previous chapter, in which a consistent interface is useful for reducing the time needed to learn the game mechanics.



PART TWO Project Presentation



CHAPTER 6 YOC a Platform for Collaboration

6.1 Usability and Playability YOC is an SDG tangible platform designed to stimulate the development of manipulation and social skills in children aged 5 to 8 years old. The platform is augmented through the projection of virtual information over physical computational modules. Each module, once connected to the core module, gets augmented by the projection with the overlay of its custom virtual representation, which can change according to the active application. The position of the modules in the reference frame is naturally mapped onto its virtual augmented counterpart and is detected through the computational interaction between the modules and the core component. The platform can also be used with a computer screen as support for the virtual representation, even though this would lack the natural mapping feature. The interaction with YOC is divided into two phases: in the first phase children can choose among a number of available components to assemble on the tabletop in a planar free-form construction. At the end of this phase all the components have been recognised by the underlying system and receive augmented graphical properties according to the application. The second phase consists in the use of the affordances on each module, in order to control the movement of the physical composition in the virtual space and its interaction with the virtual elements of the scenario. Each physical element corresponds to a specific action in the virtual environment, and it keeps a vertical functional consistency in every available application. 6.2 The Modules This paragraph will describe 5 modules; 3 are working prototypes, while the grabber and the blower have only been theorised; the number of modules is anyway meant to be expanded with the development of the platform.
Each element presents physical and augmented characteristics: the physical shape can be considered a computationally enhanced token and works as inherent feedforward for the user to distinguish the modality of interaction; it also gives specific inherent feedback to the user through the different physical responses of the modules, appreciable especially in the comparison between digital and analog affordances [fig. 8]. The virtual augmentation over the modules resembles the physical attributes of each of them, adapting colour and texture according to the context of the active application; furthermore, it represents an augmented and functional feedback giving value to the interaction with the affordances on the modules. While augmented and functional feedforward may be integrated in the platform software for a faster comprehension of the game mechanics, the shortening of the interface learning process is already managed by the highly consistent interaction, where every module maintains the same function in every context. We can then consider each interaction a vertical constraint, similar to the option button in video game controllers. In YOC the option button for the shift among applications is represented by the core module; it presents a radial menu, controllable through a dial, a kind of menu that has been proved to allow a relatively fast selection among many commands [Sanchez-Crespo Dalmau, Daniel, 1999]. In this specific presentation of the interface, the virtual context used will be that of a spaceship (hardware) in space, with planets, asteroids and other elements that can be interacted with through naïve physics (software). It has to be specified that all the spatial commands over the spaceship will not be applied to the interface or to its virtual representation on the reference frame, but to the background and the other virtual elements through an optical effect, in order to capitalise on the spatial properties of the virtual environment.
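The radial menu driven by the endless dial can be sketched as the folding of accumulated rotation into menu sectors. The following C++ fragment is a hypothetical illustration (tick size, class name and sector logic are mine, not the prototype's code):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the core module's radial menu: the endless dial
// emits signed ticks, and the accumulated angle is folded into one of N
// equal sectors, one per available application.
class RadialMenu {
    int items;
    double angleDeg = 0.0;                       // accumulated rotation
    static constexpr double DEG_PER_TICK = 15.0; // assumed encoder resolution
public:
    explicit RadialMenu(int n) : items(n) {}
    void turn(int ticks) { angleDeg += ticks * DEG_PER_TICK; }
    int selected() const {
        double a = std::fmod(std::fmod(angleDeg, 360.0) + 360.0, 360.0);
        return static_cast<int>(a / (360.0 / items)) % items;
    }
};
```

Because the dial rotates endlessly in both directions, the double `fmod` is what keeps the selection well defined for counterclockwise turns as well.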
Here the elements of YOC's interface will be described; each module will be assigned a simplified coded name for a more fluent reading. RT rotator (analog): The RT is the interface's core component; it collects all the computational input from the interface and communicates it to the underlying system. Its actuator is an analog dial with endless rotation clockwise and counterclockwise. The choice of the rotator as core component is due to several geometric reasons: its good balance between speed and accuracy has proved valuable for the process of multiple selection [Africano D., et al. 2004]; furthermore, the platform being based on the control of a single object with a single degree of freedom, the multiple use of rotational components is unlikely; finally, being the only affordance extending above the module height, its positioning at the centre of the reference frame reduces problems due to the projection of shadows on the hardware. MV mover (analog): Together with the RT, the mover is the only component influencing the whole modular construction; according to the four directions in which it can be oriented, it will


1. YOC in its five components. 2. Storyboard.



push (or pull, according to its position relative to the other modules) all the components in the direction it is facing. The force applied on the MV affordance will define the speed of movement, and each rotation applied to the spaceship through the RT will then result in a new direction for the MV. SHT shooter (digital): The SHT is a component used to shoot different kinds of elements depending on the active application; it can be interpreted by different softwares as a bullet to damage scenario elements (e.g. Asteroids), to allocate objects (e.g. Puzzle Dragon) and so on. Differently from the MV, it is meant to be used for a single instant, and its physical shape resembles the metaphor of the mouse click through the use of an incision for the flexibility of the structure. GRB grabber (digital): The grabber is a directional tool designed as a switch; it uses the GUI metaphor whereby, as long as the switch is active, the object under it is dragged along with all the movements of the spaceship, to be dropped only when the switch is deactivated. Differently from the shooter, this component is characterised by two statuses, on/off, to communicate the modality it is in at any moment. BL blower (analog): The blower has the property of converting a blow on its microphone into a pushing force in the direction the module is facing, in order to interact at a distance with other virtual elements. 6.3 YOC as a TUI Looking for the place of YOC within the classification by Ullmer and Ishii, we can easily recognise a constructional system in the first phase of use, in which a number of modules get connected in a free-form composition [fig. N and N]. The second phase presents a different typology of interaction, in which the modules can be activated to perform specific augmented actions such as movement, rotation or the shooting of bullets. We can support the analysis by comparing YOC with the three case studies (chapter 3).
YOC resembles the module-and-affordance technology of Monogram, but adds the feature of natural mapping as a fundamental component of TUIs, introducing at the same time spatial relationships similar to those found in Reactable. The elements not only get mechanically connected: their distance from the RT (on which the speed during rotation depends) and the orientation of the components within the reference frame (the direction in which the modules apply their capabilities) are relevant information in the interaction experience. Analysing a mixed constructional/relational system such as AlgoBlock, on the other hand, we can recognise functional feedbacks related to an "abstract computational semantic" not directly related to spatial properties [Ullmer B. & Ishii, 2001]. These considerations are at the base of my recognising YOC, following the nomenclature logic of Ullmer and Ishii, as an interface based on a mixed constructive/spatial system [ibid]. Analysing the tradeoffs between reality and augmented capabilities discussed in the RBI method, we can primarily recognise the relation of naïve physics with expressive power and versatility. The use of tokens as physical supports of the information limits the expressive power, as the actions that can be accomplished are tied to the number and function of the modules; at the same time the combination of several modules, e.g. RT+MV to draw a curve, or GRB+SHT to take an object and launch it, allows creating new actions, expanding the possibilities within the virtual environment. The versatility of the system is limited by the constraint of the modules, which do not allow navigating efficiently in GUIs or supporting diverse forms of play such as competitive play. Reality is then traded with versatility to allow the modules to represent diverse kinds of information according to the augmentation of the projection.
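The RT+MV combination mentioned above (rotation plus push along the facing direction, tracing a curve) can be sketched in a few lines. This is an illustrative kinematic model, not the prototype's Unity code; names and units are assumed:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: per frame, the RT rotates the whole construction's
// heading while the MV pushes it along that heading; applying both at once
// traces the curve that neither module could produce alone.
struct Ship { double x = 0.0, y = 0.0, headingRad = 0.0; };

void step(Ship& s, double rtTurnRadPerSec, double mvForce, double dt) {
    s.headingRad += rtTurnRadPerSec * dt;          // RT contribution
    s.x += std::cos(s.headingRad) * mvForce * dt;  // MV contribution
    s.y += std::sin(s.headingRad) * mvForce * dt;
}
```

In the actual platform the same transform would be applied inversely to the background, as described in 6.2, but the compositional logic is the same: two children operating two modules jointly produce one motion.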
6.4 The Educational Properties of YOC
In the previous chapter we explored the promotion of social interaction as one of the most effective qualities of TUIs. YOC has been designed to exploit these social capabilities in order to be used by children aged 5 to 8 years old; this target has been chosen in light of many other researches using TUIs for similar purposes. We saw how children at this age seem not to possess the capability to associate physical and virtual elements, and do not share the same conceptual model. YOC tries to go beyond these limits using a physical language through which children can experiment by means of functional feedback. It is expected that the high consistency of the platform will allow children to shorten their learning curve and, especially, to preserve this knowledge in the shift among several applications,

3. YOC top view 4. Perspective view


which will modify the virtual context and the augmentation of the components while mainly keeping their physical metaphor.
According to the method of the Social Play Continuum by Broadhead, when children have a common goal, such as beating the computer or simply experiencing the virtual environment, a shared interface in the space creates a higher level of reciprocity in language and actions, producing a situation of cooperative play. The level of reciprocity of this approach is claimed to be higher than when every player has his own input interface, a situation defined as Multiple Individual Players vs Game [fig.] [Broadhead P., 2003]. In YOC, the opportunity created by the interaction with a common interface has been combined with the decomposition of the controls. The first phase of the game offers more possibilities for social interaction: the positioning of the modules in different slots, their different orientations and the lightness of their functions are expected to prompt confrontation. The action represented by every module, once experienced, is easily interpretable because of the consistency of its function in every context, so that children should soon develop similar conceptual models of the platform.
Each square-shaped component measures 7.5 cm per side (2.5 cm more than the working prototype) and is 2.5 cm tall (1.5 cm less than the prototype); this should represent a good balance with the space available on the reference frame under the projector. Furthermore, a module can be comfortably manipulated with one hand and leaves enough space for the use of its affordance when many hands converge on the interface. In the second phase, children still have to cooperate to control the spaceship in order to move and interact successfully in the virtual environment. The use of the SDG method should push children to share the commands: as we saw, when children are in a condition to cooperate on the same interface, they will likely work together.
6.5 Use of Projection in SDG
In TUIs, the technologies for the representation of the virtual environment are usually computers, smart screens and projection. Even though YOC can be displayed through a computer or a custom virtual screen, several properties distinguish the possibilities of projection:
- It allows the visualisation of augmented information directly over the controls. This can be seen as an opportunity to reduce the domain specificity of the interface, allowing elements from several contexts to be represented, or graphical personalisation to be applied to groups of modules, in a way similar to Eisenberg's description of augmented toolkits [Eisenberg M., 2002 (40)]. An example of flexibility through the projection of textures is the URP interface, in which different materials could be visualised on the structures [Robert J.K., 2008 (20)][Ullmer B. & Ishii, 2001].
- The arms converging on the modules for their activation do not completely obstruct the view of the action in the virtual environment, as the projection temporarily overlays the child's arm; this allows everyone to keep track of the surroundings of the spaceship.
- Sluis specifies that, for greater flexibility and to open the way for designers willing to create new applications on their TUI system, the user interface software should be separated from the hardware [Sluis R.J.W., et al. 2004 (47)]. As in Sluis's work, YOC keeps the same hardware components both for representation as projection and on a screen. This approach is meant to move TUIs from experimental, occasional platforms to more flexible interfaces.
6.6 YOC's Technology
The connections of YOC resemble the characteristics of other toolkits for joining components: while the current prototype presents mechanical joints, in the future a system of magnetic sliders, as in Monogram or ActiveCube, could be implemented for a faster and more intuitive creation of compositions.
Other technologies for the communication between modules have been considered to enhance the dynamic nature of the platform, such as NFC sensors or the use of a smart camera for spatial detection, but this approach would be detrimental when many modules are used, as it would require including a battery in each module and charging them one by one. Since the modules are not directly connected to the computer, the core module works as the power supply for the rest of the components; in any case, the simplicity of the communication among modules should not impose a relevant limit on the number of modules working together.

5. Social Play Continuum 6. Comparison between a controller and YOC's interface


6.7 Applications
The adaptability of YOC's interface allows different approaches to the design of games. Recalling the idea of specificity of construction by Eisenberg, we can define mainly three design approaches:
- The first application type refers to the spaceship game mentioned as an example above, and is similar to the concept of the sandbox in gaming, where tools (the modules) are given to the players in an environment they can shape (the virtual environment). This type is represented by the spaceship example above [fig. 17 and 18]: in the game it will be possible, e.g., to push and grab asteroids, or to move around planets and shoot at them. The goal in this typology of game can be chosen by the children, and will be mainly based on the exploration of the environment.
- A second application type can offer the player different configuration possibilities between modules. In this case the number of results is limited by the decisions of the game designer, and can be used for semantically more abstract forms of communication with the interface. A specific configuration of modules, e.g. with MVs on the sides of the RT and only one SHO, could make the image of a tank appear on the overlaid projection, while another shape might represent a plane. Furthermore, the organisation of the modules in this type of game would more likely be part of the in-game mechanics and be updated in real time during the action. An example of this approach is ActiveCube, which allows a plane to be represented virtually by assembling a stylised shape with the modules.
- Finally, the highest level of specificity of the composition is required by games with stricter rules and goals, which base the interaction mainly on children's capabilities (such as recognising words and colours) and reflexes.
For this last configuration another example of videogame has been designed. It requires a cross-shaped composition of modules in which the RT is surrounded by 4 SHOs; the cross is surrounded by coloured spheres arranged in a circle, and the required task is to make groups of 3 spheres of the same colour in order to make them implode and score points [fig. 19]. The coloured spheres are generated at the centre of the modular interface and can be launched by the SHOs in any of the 4 directions to accomplish the right coupling of colours. Multiple children can share the projected environment and focus on a specific direction while communicating their information to the other players. While in the free-form construction system the only spatial requirement is the positioning of the RT at the central point, in games with a high specificity of construction all the components must be positioned and oriented according to the software's needs. In both cases, the projector may display on the surface the positions of the iconic token symbols in order to suggest the position of the RT and of all the modules around it.

7, 8. Respectively, the virtual environment and its mixing with the real environment through projection 9. The puzzle game has a high specificity toward the position of its actuators.


CHAPTER 7 The Prototype

7.1 Hardware
YOC's prototype has been realised by coupling the open-source electronics platform Arduino with the Unity game engine. The modules are connected using Arduino-compatible female (RT) and male (MV, SHO, GRB, BL) pins as the means of mechanical junction, while the whole platform is connected via USB to the computer running the software.
The Core Module
Each side of the RT presents 4 slots for the pins; facing one of the module's sides, from left to right, the first is for the 5V line and the second for the GND connection; the third and the fourth are respectively for digital and analog input, and are used by the connecting modules according to their nature. This method has also been used to recognise, through the Arduino code, which modules were connected and which were not. For the realisation of the RT a rotary encoder has been used; this component differs from a potentiometer in that it senses rotation steps rather than the absolute angle, and for this reason it also allows limitless rotation in the same direction. The sensor can also be used as a button by pushing it from the top; this could be integrated in a future implementation of the radial menu. To contain all the components, the RT module is positioned over a slight slope hiding the Arduino wiring and showing only the female pins on three sides [fig.].
Other Modules
To allow the modules to be connected to the RT, the pins have been arranged in a mirrored way, leaving empty the slot for the digital or the analog line according to the module's input nature. The MV has been realised using an analog force sensor; the range of about 1000 values it can transmit to Arduino has been divided into three speed levels (0, 1 and 2), according to which the speed of the spaceship is set.
The physical feedback allows the level of travel of the button to be perceived while calibrating the speed: a stronger pressure corresponds to a plastic bending and an acceleration. Following a physical logic, the use of multiple MVs in the same direction results in the sum of the strengths applied, while opposite MVs cancel each other out. The SHO is created using a digital pushbutton and an incision in the plastic over it, allowing the transmission of the force through the surface. The incision solution is inspired by the technology of the mouse and gives immediate feedback: the "click" heard when using the SHO communicates the activation of a digital command, so that the child can raise the hand after use. The incision shape was initially a cross but, when the project required each button to be directional, the Y shape was chosen to offer a better feedforward about the orientation of the shot [fig.]. The GRB and BL working modules have not yet been created and tested; the first can be made using a simple switch and the second by integrating a vibration sensor; both shapes keep the directional characteristic of the previous models. It is still not clear whether the BL may represent a relevant issue regarding the exchange of pathogens; in that case it may be substituted with another metaphor related to the pushing action, such as a proximity sensor.
7.2 Software
Here the approach used in the development of YOC's demo will be discussed. For a better understanding, the meanings of the technical terms are explained first.
Script: a file containing the code that regulates the events in the virtual environment; it needs to be associated with a game object to be used in the game.
Game manager: a script managing the relationships among the other scripts.
Game object: the basic element, which can represent characters, props and scenery.
Active/not active: when a game object is active, it is operational in the game; when it is set as not active, its virtual representation and its functions are ignored by the software.
Father/child (parent/child in Unity's terminology): a father game object "forces" its children to keep their coordinates relative to it during movement; on the other hand, the movement of a child does not influence the spatial coordinates of the father or of the other children.
Camera: the device that captures the virtual environment and displays its representation to the player.



The only active object at the start of the game is the RT; it is associated with two scripts, one being the game manager and the other specialised for the rotation function of the core module. Once the game is started, the game manager takes 5 seconds to recognise which modules are connected (and where), using the difference in input between the two states (connected/not connected) of the modules. Once the configuration is finished, the connected modules are activated and it is possible to start playing. Each game object is associated with its own C# script, which determines its reaction according to the input signals collected by the game manager. The decentralisation of the scripts allows the software to ignore the non-active game objects and to manage each script singly. All the modules are considered children of the RT, which drags all of them during the rotation; when a mover is activated, it moves through its script the RT and then all its children, including the MV itself. The mover, as anticipated, has 3 speed states, which can be cumulated if all the movers are active and oriented in one direction. The camera too is a child of the RT: this allows the virtual spaceship to always stay at the centre of the reference frame, making the rest of the virtual environment move relative to the spaceship. The reason for this approach is obviously the static nature of the physical controls, onto which the augmented representation of the spaceship is projected, and the advantage of the virtually infinite space at disposal. The SHO is the only working prototype that does not influence the position of the other modules: it only allows indirect interaction with other virtual elements through the shooting function. Each SHO is the father of a spawn point for bullets, which are generated on the activation of the module's button; the bullets have a delay of half a second between one shot and the next, and keeping the button pressed does not generate further shots.

CHAPTER 8 The Code

8.1 Arduino
Here is the code uploaded to Arduino; it allows the difference in input between connected and disconnected sensors to be recognised. After the setup process, Arduino starts reading the data from its sensors and transmits them to Unity through the USB connection with the computer.

//Pin assignment
const int pressurePin = A0;
const int pressurePin1 = A1;
const int pushButton3 = 3;
const int pinA = 4;
const int pinB = 5;
const int pushButton6 = 6;
const int pushButton7 = 7;

//Initialisation of the variables collecting values from the sensors
int pressureReader;
int pressureReader1;
boolean button3;
boolean A;
boolean B;
boolean preA = 1; //set to 1 to avoid an output in the first loop
boolean button6;
boolean button7;

//Initialisation of other variables
int a = 0;
int arrayy[20];
int j = 0;
int preAverage = 0;
int average = 0;
int summ = 0;

//Delays for the communication of values to Unity
int const delayButton = 20;
int const delayRotator = 1;

//Each sensor is considered connected at the start
boolean check0 = true;
boolean check1 = true;
boolean check3 = true;
boolean check6 = true;
boolean check7 = true;

void setup() {
  //Setting the sensors as inputs
  pinMode(pressurePin, INPUT);
  pinMode(pressurePin1, INPUT);
  pinMode(pushButton3, INPUT_PULLUP);
  pinMode(pushButton6, INPUT_PULLUP);
  pinMode(pushButton7, INPUT_PULLUP);
  pinMode(pinA, INPUT_PULLUP);
  pinMode(pinB, INPUT_PULLUP);

  //Set the speed of communication with Unity
  Serial.begin(9600);

  //Read the values from the sensors
  pressureReader = analogRead(pressurePin);
  pressureReader1 = analogRead(pressurePin1);
  button3 = digitalRead(pushButton3);
  button6 = digitalRead(pushButton6);
  button7 = digitalRead(pushButton7);

  //Check which sensors are connected
  if(button3 == 1) { check3 = false; }
  if(button6 == 1) { check6 = false; }
  if(button7 == 1) { check7 = false; }
  if(pressureReader > pressureReader1) { check0 = false; check1 = true; }
  if(pressureReader1 > pressureReader) { check0 = true; check1 = false; }

  //Communicate the disconnected buttons to Unity
  if(check0 == false) { for(int i = 0; i != 20; i++) { Serial.print(0); } }
  if(check1 == false) { for(int i = 0; i != 20; i++) { Serial.print(1); } }
  if(check3 == false) { for(int i = 0; i != 20; i++) { Serial.print(3); } }
  if(check6 == false) { for(int i = 0; i != 20; i++) { Serial.print(6); } }
  if(check7 == false) { for(int i = 0; i != 20; i++) { Serial.print(7); } }
}

void loop() {
  //The inputs are read only if recognised previously
  if(check3 == true) { button3 = digitalRead(pushButton3); }
  if(check6 == true) { button6 = digitalRead(pushButton6); }
  if(check7 == true) { button7 = digitalRead(pushButton7); }
  if(check0 == true) {
    a++;
    if(a == 10) { pressureReader = analogRead(pressurePin); }
    if(a == 10) { a = 0; }
  }
  if(check1 == true) {
    a++;
    if(a == 10) { pressureReader = analogRead(pressurePin1); }
    if(a == 10) { a = 0; }
  }

  //The rotation value is always read (RT)
  A = digitalRead(pinA);
  B = digitalRead(pinB);
  //When the value of A changes, a rotation is happening
  if(A != preA) {
    //Define whether the rotation is clockwise or counterclockwise and communicate it to Unity
    if(A == B) { Serial.print(4); Serial.flush(); delay(delayRotator); }
    if(A != B) { Serial.print(5); Serial.flush(); delay(delayRotator); }
  }
  //Set the new reference value of A
  preA = A;

  //Send the SHO inputs to Unity when detected
  if((check3 == true) && (button3 == 1)) { Serial.print(3); Serial.flush(); delay(delayButton); }
  if((check7 == true) && (button7 == 1)) { Serial.print(7); Serial.flush(); delay(delayButton); }
  if((check6 == true) && (button6 == 1)) { Serial.print(6); Serial.flush(); delay(delayButton); }

  //Stabilise the pressure sensor input through an average of the values (MV)
  arrayy[j] = pressureReader/300;
  j++;
  summ = 0;
  for(int i = 0; i != 20; i++) { summ = summ + arrayy[i]; }
  average = summ/20;
  //Check whether the average has changed
  if (average != preAverage) {
    //Simplify the output into three speed levels
    switch(average) {
      case 0: average = 0; break;
      case 1: average = 1; break;
      case 2: average = 2; break;
    }
    //Send the current speed to Unity
    for(int i = 0; i < 5; i++) { Serial.print(average); Serial.flush(); delay(1); }
  }
  //Store the current speed
  preAverage = average;
  if(j == 19) { j = 0; }
}



8.2 Unity
Here only the most important scripts will be shown. The first one, the game manager, gets the data from Arduino about the configuration of the modules and, according to this, sets the game objects on and off.

GameManager

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO.Ports;

public class ControlsManager : MonoBehaviour
{
    int managerValue = 0;

    //To recognise and control the other elements in the scene
    public GameObject buttonMove0;
    public GameObject buttonMove1;
    public GameObject buttonShoot3;
    public GameObject buttonShoot6;
    public GameObject buttonShoot7;

    //The script prepares all the elements to be activated
    bool buttonMove0Verify = true;
    bool buttonMove1Verify = true;
    bool buttonShoot3Verify = true;
    bool buttonShoot6Verify = true;
    bool buttonShoot7Verify = true;

    //Setting of the connection with Arduino
    SerialPort sp = new SerialPort("/dev/cu.usbmodem14201", 9600);

    void Start()
    {
        sp.ReadTimeout = 25;
        sp.Open();
    }

    void Update()
    {
        //Receive the communication
        if (sp.IsOpen)
        {
            try
            {
                //Read the value from Arduino
                managerValue = sp.ReadByte();
                print(managerValue);
            }
            catch (System.Exception) { }

            //Mark the sensors sending data in the first 5 seconds as disconnected
            //(the bytes arrive as ASCII digits: 48 = '0', 49 = '1', 51 = '3', 54 = '6', 55 = '7')
            if (Time.time < 5)
            {
                if (managerValue == 48) { buttonMove0Verify = false; }
                if (managerValue == 49) { buttonMove1Verify = false; }
                if (managerValue == 51) { buttonShoot3Verify = false; }
                if (managerValue == 54) { buttonShoot6Verify = false; }
                if (managerValue == 55) { buttonShoot7Verify = false; }
            }

            //Activate only the buttons still resulting connected
            if (Time.time > 5 && Time.time < 6)
            {
                if (buttonMove0Verify == true) { buttonMove0.SetActive(true); }
                if (buttonMove1Verify == true) { buttonMove1.SetActive(true); }
                if (buttonShoot3Verify == true) { buttonShoot3.SetActive(true); }
                if (buttonShoot6Verify == true) { buttonShoot6.SetActive(true); }
                if (buttonShoot7Verify == true) { buttonShoot7.SetActive(true); }
            }
        }
    }
}

Script for the RT module:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO.Ports;

public class rotation : MonoBehaviour
{
    public Animator animator;
    SerialPort sp = new SerialPort("/dev/cu.usbmodem14201", 9600);
    int rotationSpeed = 250;
    int rotatorValue = 0;

    void Start()
    {
        sp.Open();
        sp.ReadTimeout = 25;
    }

    void Update()
    {
        if (sp.IsOpen)
        {
            try
            {
                //Recognise the two Arduino inputs for the two rotation directions and apply them to the RT
                rotatorValue = sp.ReadByte();
                if (rotatorValue == 53)
                {
                    transform.Rotate(new Vector3(0, 0, Time.deltaTime * rotationSpeed));
                    animator.SetTrigger("Rotation");
                }
                else if (rotatorValue == 52)
                {
                    //Opposite sign, so the two inputs rotate in opposite directions
                    transform.Rotate(new Vector3(0, 0, -Time.deltaTime * rotationSpeed));
                    animator.SetTrigger("Rotation");
                }
            }
            catch (System.Exception) { }
        }
    }
}

Script for the MV module:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO.Ports;

public class buttonMove1 : MonoBehaviour
{
    //Initialisation of the sound component
    public AudioSource soundR;

    //Values for data elaboration
    private float amountToMove;
    float speed = 0;

    //To control respectively the movement of the RT with all the components and to make the flame appear
    public GameObject rotator;
    public GameObject flame;

    SerialPort sp = new SerialPort("/dev/cu.usbmodem14201", 9600);

    void Start()
    {
        //Set the sound to 0
        soundR.Play();
        soundR.volume = 0;
        sp.Open();
        sp.ReadTimeout = 25;
    }

    //Update is called once per frame
    void Update()
    {
        if (sp.IsOpen)
        {
            try
            {
                //Give different speeds and sounds according to the quantity of pressure detected by Arduino
                //(the byte is read once per frame, since each ReadByte() consumes a byte from the serial buffer)
                int value = sp.ReadByte();
                if (value == 48) { speed = 0; flame.SetActive(false); soundR.volume = 0; }
                if (value == 49) { speed = 40; flame.SetActive(true); soundR.volume = 1; }
                if (value == 50) { speed = 20; }
            }
            catch (System.Exception) { }
        }
        //Move the spaceship faster or slower according to the speed set above
        amountToMove = speed * Time.deltaTime;
        GameObject.Find("rotator").transform.position += transform.right * Time.deltaTime * -amountToMove;
    }
}

Script for the SHO module:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.IO.Ports;

public class button3F : MonoBehaviour
{
    public Animator animator;
    SerialPort sp = new SerialPort("/dev/cu.usbmodem14201", 9600);

    //Set the bullet and the physical place where it will appear in the scene
    public GameObject shot;
    public Transform shotSpawn3;

    //Does not allow shooting too often
    public float fireRate = 0.5F;
    private float nextFire = 0.0F;

    void Start()
    {
        sp.Open();
        sp.ReadTimeout = 25;
    }

    //Update is called once per frame
    void Update()
    {
        if (sp.IsOpen)
        {
            try
            {
                //When the shot is available and the button is pressed, the shot goes
                if (sp.ReadByte() == 51 && Time.time > nextFire)
                {
                    //Record the last shooting time and shoot a bullet through the spawn
                    nextFire = Time.time + fireRate;
                    Instantiate(shot, shotSpawn3.position, shotSpawn3.rotation);
                    animator.SetTrigger("shooting");
                }
            }
            catch (System.Exception) { }
        }
    }
}



Acknowledgments I would like to thank my extended family for supporting me during this final period of study, even if we could not spend much time together; speaking with them helped me find the proper mood for the study. Then professor Lorenzo Imbesi, who helped me in the most sensitive moments in the development of my personal education, and his assistants Gianni Denaro and Luca D'Elia, who pushed me to work deeply on the critical aspects of my project; I am willing to continue the work reported here, on the basis of their suggestions, even after the conclusion of this master. I thank Mario Baioli for supporting the project on the technical side, and for always being available for suggestions. I want to thank my friends for giving me moments in which to discuss my project and receive helpful feedback; in particular Federico Gentile, Giorgio Soverchia, Puyan Hassanzadeh, Hussain Othman and all the teachers and students I had the opportunity to know in these five years at Sapienza. Finally, I want to say thank you to my girlfriend Haifei, who gave me the strength to work hard from the concept to the realisation of my project.



YOC Technical Drawings
[Dimensioned technical drawings of the modules, scale 1:1; the dimension annotations could not be preserved in this text version.]



BIBLIOGRAPHY

Africano D., Berg S., Lindbergh K., Lundholm P., Nilbrink F., Persson A., Designing Tangible Interfaces for Children's Collaboration, In: Extended Abstracts of the 2004 Conference on Human Factors in Computing Systems, CHI 2004, p.853-868, Vienna, Austria, 2004.
Bekker T., Sturm J. & Barakova E., Design for social interaction through physical play in diverse contexts of use, In: Personal and Ubiquitous Computing, 14, p.381-383, Department of Industrial Design, Eindhoven University of Technology, P.O. Box 513, 2009.
Bekker T., Sturm J. & Barakova E., Designing playful interactions for social interaction and physical play, In: Personal and Ubiquitous Computing, p.385-396, 2009.
Buur J., Jensen M.V. and Djajadiningrat T., Hands-only scenarios and video action walls: novel methods for tangible user interaction design, In: Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, p.1-9, Mads Clausen Institute for Product Innovation, University of Southern Denmark, Grundtvigsalle 150, DK-6400 Sønderborg, 2004.
Broadhead P., Early Years Play and Learning: Developing Social Skills and Cooperation, RoutledgeFalmer, London, 2003.
Chaboki B., Oorschot R., Torguet R., Wu Y. & Yao J., Interaction Design Feedback and Feed Forward Framework: Making the Interaction Frogger Tangible, In: Semantic Scholar, p.1-4, Mads Clausen Institute, SPIRE, Göteborg, Sweden, 2012.
Chang K., King S., Understanding Industrial Design: Principles for UX and Interaction Design, chapter 1, "A Brief History of Industrial and Interaction Design", Sebastopol, 2015.
Dyck J., Pinelle D., Brown B. & Gutwin C., Learning from Games: HCI Design Innovations in Entertainment Software, In: Proceedings of Graphics Interface, p.1-8, HCI Lab, Department of Computer Science, University of Saskatchewan, Halifax, Nova Scotia, Canada, 2003.
Eisenberg M., Eisenberg A., Gross M., Kaowthumrong K., Lee N., & Lovett W., Computationally-Enhanced Construction Kits for Children: Prototype and Principles, In: Proceedings of the International Conference of the Learning Sciences, Lawrence Erlbaum Associates, p.79-85, Dept. of Computer Science, Campus Box 430, U. of Colorado, Boulder CO, USA, 2002.
Federoff M. A., Heuristics and usability guidelines for the creation and evaluation of fun in video games, p.1-52, Department of Telecommunications, Indiana University, 2002.
Fitzmaurice G.W. & Buxton W., An Empirical Evaluation of Graspable User Interfaces: towards specialized, space-multiplexed input, In: ACM Proceedings of CHI '96, p.43-50, Dynamic Graphics Project, CSRI, University of Toronto, Toronto, Ontario, Canada M5S 1A4, 1997.
Grudin J., A Moving Target—The Evolution of Human-Computer Interaction, In: Taylor & Francis Group, p.1-41, Microsoft Corporation, USA, 2012.
Hayles N.K., How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, University of Chicago Press, Chicago, 1999.
Ichida H., Itoh Y., Kitamura Y., Kishino F., ActiveCube and its 3D Applications, In: IEEE VR 2004 Workshop "Beyond Wand and Glove Based Interaction", p.1-4, Graduate School of Information Science and Technology, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan, 2004.
Hornecker E. & Buur J., Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction, In: CHI 2006, p.1-10, Montréal, Québec, Canada, 2006.
Hutchins E. L., Hollan J. D. & Norman D. A., Direct manipulation interfaces, In: Human-Computer Interaction, Volume 1, p.311-338, University of California, San Diego, 1985.
Ishii H., Tangible Bits: Beyond Pixels, In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, p.15-25, New York, 2008.
Ishii H. & Ullmer B., Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms, In: Proceedings of CHI '97, p.1-8, MIT Media Laboratory Tangible Media Group, Atlanta GA, USA, 1997.
Ullmer B., Ishii H. & Jacob R. J. K., Token+constraint systems for tangible interaction with digital information, In: Transactions on Computer-Human


Interaction 12(1), p.81-118, MIT Media Laboratory, 2005.
Jansen M. & Bekker T., Swinxsbee: A Shared Interactive Play Object to Stimulate Children's Social Play Behaviour and Physical Exercise, In: INTETAIN 2009: Intelligent Technologies for Interactive Entertainment, p.90-101, International Conference on Intelligent Technologies for Interactive Entertainment, 2009.
Kim D. S. & Yoon W. C., A Method for Consistent Design of User Interaction with Multifunction Devices, In: Human Centered Design, First International Conference HCD 2009, Held as Part of HCI International 2009, p.202-211, San Diego, CA, USA, 2009.
Klemmer S. R., Hartmann B. & Takayama L., How Bodies Matter: Five Themes for Interaction Design, In: DIS '06: Proceedings of the 6th ACM Conference on Designing Interactive Systems, p.140-149, Computer Science Department, Stanford, CA 94305-9035, USA, 2006.
Kuenen S., Wensveen S., Alonso M.B., Stienstra J. & Alonso M.A., How to Design for Transformation of Behavior through Interactive Materiality, In: Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense, p.21-30, SPIRE, Mads Clausen Institute, University of Southern Denmark, Copenhagen, Denmark, 2012.
Lee J. & Im C., A Study on User Centered Game Evaluation Guideline Based on the MIPA Framework, In: Human Centered Design, First International Conference HCD 2009, Held as Part of HCI International 2009, p.84-93, San Diego, CA, USA, 2009.
Lu W., Evolution of Video Game Controllers: How Simple Switches Led to the Development of the Joystick and the Directional Pad, p.1-20, Stanford University, 2003.
Mahut T., Bouchard C., Omhover J. F., Favart C., Esquivel D., Interaction Design and Metaphor through a Physical and Digital Taxonomy, In: International Journal on Interactive Design and Manufacturing (IJIDeM) 12, p.629-649, LCPI Arts et Métiers ParisTech, 151 boulevard de l'Hôpital, 75013 Paris, France, 2017.
Malina R. M., Hamill P. V.
V., & Lemeshow S., Selected Body Measurements of Children 6-11 Years, In: Vital and Health Statistics, Series 11, No. 123, "Hand Length and Breadth", p.8-9, Health Services and Mental Health Administration, National Center for Health Statistics, Rockville, Md, 1973.
Nielsen J., Usability Inspection Methods, John Wiley & Sons, New York, 1994.
Norman D., The Design of Everyday Things, revised & expanded edition, chapter 1, "Fundamental Principles of Interaction", p.10-31, New York, 2013.

dings of CHI 1999 , p.286-293, Computer Science Dept. University of New Mexico Albuquerque, NM 87106, 1999. Ullmer B. & Ishii, Emerging Frameworks for Tangible User Interfaces, In: HumanComputer Interaction in the New Millenium, p.579-601, MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts, 2001.

Pinelle D, Wong N. & Stach T., Heuristic Evaluation for Games: Usability Principles for Video Game Design, In: CHI 2008 Proceedings Game Zone, p.1453-1462, Florence, Italy 2008.

Vallgårda A. and Sokoler T. , A Material Strategy: Exploring Material Properties of Computers, in: International Journal of Design , p.1-14, Swedish School of Textiles, University of Borås, Borås, Sweden, 2010.

Price S., Rogers Y.,Scaife M., Stanton D. & Neale H., Using ‘Tangibles’ to Promote Novel Forms of Playful Learning, in: Interacting with computers, Volume 15 (2), p.169–185, School of Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, 2003.

Van Campenhout L., Frens J., Overbeeke K., Standaert A. & Peremans H., Physical interaction in a dematerialized world, In. International Journal of Design, p.1-18, Product Development, Faculty of Design Sciences, University of Antwerp, Antwerp, Belgium, 2013.

Jacob R. J.K., Girouard A., Hirshfield L. M., Horn M. S., Shaer O, Solovey E. T. & Zigelbaum J., Reality-Based Interaction: A Framework for Post-WIMP Interfaces, In: Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (CHI '08), p.1-10, Tufts University Department of Computer Science 161 College Ave. Medford, MA 02155 USA, 2008.

Verhaegh J., Soute I., Kessels A. & Markopoulos P., On the design of Camelot, an outdoor game for children, In: IDC '06 Proceedings of the 2006 conference on Interaction design and children, p.9-16, Eindhoven University of Technology Den Dolech 2, 5612 AZ Eindhoven, 2006.

Sánchez J. L. G., Zea N. P., and Gutiérrez F. L., From Usability to Playability: Introduction to Player- Centred Video Game Development Process, In: Human Centered Design, First International Conference HCD 2009 Held as Part of HCI INternational 2009, p.65-74, San Diego, CA, USA, 2009. Sluis R.J.W., Weevers I., van Schijndel C.H.G.J., Kolos-Mazuryk L., Fitrianie S. & Martens J.B.O.S., Read-It Five-to-sevenyear-old children learn to read in a tabletop environment, In: IDC '04 Proceedings of the 2004 conference on Interaction design and children: building a community, p.73-80, University of Eindhoven, Design School of User System Interaction, Den Dolech 2, 5600 MB Eindhoven, The Netherlands, 2004. Smith D. C., Irby C.H., Kimball R.B., Verplank W.H. & Harslem E.F., Designing the Xerox “Star” User Interface, Byte 7(4), p.242-282, United States, 1982. Stewart J., Bederson B., & Druin A. , Single Display Groupware: A Model for Copresent Collaboration, In: ACM procee-

Zaman B., Abeele V. V., Markopoulos P. & Marshall P., Editorial the evolving field of tangible interaction for children, In: Personal and Ubiquitous Computing 16(4), p.367–378, London, England, 2012.

LINKOGRAPHY

Victor B., A Brief Rant on the Future of Interaction Design, 2011. http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/

Cornish D., History of the Video Game Controller, 2015. https://www.shortlist.com/news/history-of-the-video-game-controller

Dalmau D. S. C., Learn Faster to Play Better: How to Shorten the Learning Cycle, 1999. https://www.gamasutra.com/view/feature/131799/learn_faster_to_play_better_how_.php

Engelbart D., The Mother of All Demos, 1968. https://www.youtube.com/watch?v=yJDv-zdhzMY

Microsoft, Future Vision, 2011. https://www.youtube.com/watch?v=a6cNdhOKwi0

Norman D., Simplicity Is Not the Answer, 2008. https://jnd.org/simplicity_is_not_the_answer/

Foddy B., Games by Bennett Foddy. www.foddy.net

Valentine A., Is Skeuomorphism Really Dead? And Should it Be?, 2018. https://blog.proto.io/skeuomorphismreally-dead/

