D3.3 - Theories, methods and technologies for human/machine symbiosis


SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society

Grant Agreement No. 600854

Deliverable D3.3
Work Package: [WP3]
Title: Theories, methods and technologies for human/machine symbiosis
Dissemination Level (Confidentiality) [1]: PU
Delivery Date in Annex I: 30/6/2015
Actual Delivery Date: July 10, 2015
Status [2]: F
Total Number of pages: 24 (excluding pages before the Table of Contents, annexes and references)
Keywords: Collective Adaptive Systems, Human-Machine Communication, Bridging the Semantic Gap


© SmartSociety Consortium 2013-2017

[1] PU: Public; RE: Restricted to Group; PP: Restricted to Programme; CO: Consortium Confidential as specified in the Grant Agreement
[2] F: Final; D: Draft; RD: Revised Draft



Disclaimer

This document contains material which is the copyright of SmartSociety Consortium parties, and no copying or distributing, in any form or by any means, is allowed without the prior written agreement of the owner of the property rights. The commercial use of any information contained in this document may require a license from the proprietor of that information. Neither the SmartSociety Consortium as a whole, nor any individual party of the SmartSociety Consortium, warrants that the information contained in this document is suitable for use, nor that the use of the information is free from risk, and they accept no liability for loss or damage suffered by any person using this information. This document reflects only the authors' view. The European Community is not liable for any use that may be made of the information contained herein.

Full project title: SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society
Project Acronym: SmartSociety
Grant Agreement Number: 600854
Number and title of work package: [WP3], Human/Machine Symbiosis
Document title: Theories, methods and technologies for human/machine symbiosis
Work-package leader: Paul Lukowicz, DFKI
Deliverable owner: Agnes Gruenerbl, DFKI
Quality Assessor: Kobi Gal, BGU



List of Contributors

Partner Acronym   Contributors
UNITN             Enrico Bignotti, Mattia Zeni, Ronald Chenu-Abente, Fausto Giunchiglia
DFKI              Agnes Grünerbl, Paul Lukowicz, George Kampis



Executive Summary

This document reports on the effort of SmartSociety work package 3 during the period M20 to M30, in which the main focus lay on theories, methods and technologies for human/machine symbiosis. The deliverable reports the outcomes of T3.2 and the first iteration of T3.3. The core aim of this deliverable is to develop and introduce methods that allow machines to learn from the solutions and approaches taken by humans, while also allowing humans to be aided and guided by machines in certain situations.

The deliverable presents the work using a top-down approach. It first refines the work previously presented on models for human/machine symbiosis, advancing the 3-layer approach that closes the semantic gap between low-level machine evaluation of data and high-level human interpretation (Theory Part I). It then introduces methods for human-machine composition in which the machine guides and assists the human (Theory Part II), for example by helping humans learn on the fly how to perform CPR (cardiopulmonary resuscitation) with the assistance of a smart-watch.

These theory sections and the related ongoing work are applied to practical scenarios in the subsequent sections. Within the health care scenario, the Mainkofen data-set and the Semantic Nurse were used to develop methods for bridging the semantic gap and for helping humans learn from machines. In addition, the i-Log application incorporates the 3-layer approach to bridge the semantic gap in an application directly aimed at supporting the RideShare use case. The final section details the architecture, design and implementation considerations of the Execution Monitor. This component will be integrated into the SmartSociety platform and will offer an interface to the WP3 human/machine symbiosis services, such as context/intention interpretation and the bridging of the semantic gap via sensor fusion.


Table of Contents

1 Introduction  9
2 Theory Part I - Semantic Gap  9
  2.1 Advancements in the 3-layer approach  9
  2.2 Experiential and Representational attributes  12
3 Theory Part II - Humans learning from Machines  12
  3.1 The Smartwatch Life Saver  12
    3.1.1 With the help of machines: Humans learn on the fly and enhance their performance  14
    3.1.2 Participants' feedback - machines are welcome  15
    3.1.3 Conclusion and Future Work  15
  3.2 Smart Glass Teaching  16
    3.2.1 WP3 Contributions on Teaching  17
    3.2.2 Results  17
4 Mainkofen  18
  4.1 Updates on modelling  18
  4.2 Processing nurses' logs  18
    4.2.1 Pre-importing phase  18
    4.2.2 Parsing Tool System  20
  4.3 Future experiments  21
5 Semantic Nurse  22
  5.1 The Semantic Nurse Scenario  23
  5.2 Collective Adaptive Aspects  23
6 Applying i-Log: The RideShare use case  24
  6.1 i-Log in the RideShare use case  25
  6.2 Future Work  27
7 Execution Monitor  27
  7.1 Requirements  27
  7.2 Architecture  28
  7.3 Data Flows  29
  7.4 Integration with other WPs  32
    7.4.1 WP4 Peer Manager  32
    7.4.2 WP6 Orchestration Manager  32
    7.4.3 Implementation and Future Work  33
A Details on: Smart-Watch Life Saver  39
  A.1 Introduction  39
    A.1.1 Contribution of WP3  39
  A.2 Status Quo  40
    A.2.1 CPR Devices  40
    A.2.2 CPR support on Smartphones  40
    A.2.3 CPR support on a Smart-watch  40
  A.3 The Smart-Watch Life Saver Concept  41
    A.3.1 The Situation - Bystander CPR  41
    A.3.2 The Correct Way - CPR Suggestions and Effects  41
    A.3.3 The Solution - CPR Watch  42
  A.4 Evaluation  43
    A.4.1 Study Design  43
  A.5 Results  45
    A.5.1 Ideal Range  47
    A.5.2 Deviations and Indications for a Learning Curve  50
  A.6 Participants' Feedback  51
  A.7 Conclusion and Outlook  54
  A.8 German Study Description and Informed Consent  54
B Details on: Smart-Glass Teaching  58
  B.1 Status Quo  58
  B.2 Use Case: The Water Glass Frequency Experiment  59
    B.2.1 Google Glass Based Water Glass Experiment  59
  B.3 The gPhysics App  59
  B.4 The gPhysics System Implementation  61
    B.4.1 Automatic Fill Level Detection From Images  61
    B.4.2 Detection of the fluid color component  62
    B.4.3 Detection of the colored labels and estimation of the fill level  62
  B.5 Frequency Measurement  63
  B.6 Results  64
C Details on: Mainkofen Hospital Experiment  67
  C.1 Modelling Mainkofen Etypes  67
    C.1.1 Person  68
    C.1.2 Location  68
    C.1.3 Event  69
  C.2 Files for Mainkofen  71
    C.2.1 Menues.json  71
    C.2.2 Person.json  72
    C.2.3 Roomhotspots.json  72
  C.3 Model Details  73


1 Introduction

Between M20 and M30 the main effort of work package 3 focused on theories, methods and technologies for human/machine symbiosis. This deliverable reports the outcomes of this work, specifically highlighting the final work of T3.2 and the first iteration of T3.3. The core aspect of T3.2 was to allow machines to learn from the solutions and approaches taken by humans in certain situations. In its first iteration, T3.3 goes in the opposite direction, i.e., human-machine composition, which should enable machines to teach humans, or humans to learn from machines. This deliverable introduces theories and new methods, highlights practical applications of these methods and algorithms, and also defines the interface of WP3 to the SmartSociety system.

The remainder of this document is structured as follows. Section 2 presents the refinements of the model used for bridging the semantic gap between humans and computers by means of sensor information fusion; Section 3 shows how machines can become a guide and a help that allows humans to complete certain tasks better; Section 4, Section 5 and Section 6 show the progress made in applying these techniques within two of the project's use cases (health care and ride sharing); and finally Section 7 details the component that will provide these human/machine symbiosis services within the SmartSociety platform.

2 Theory Part I - Semantic Gap

In this section we recall and extend the theoretical contributions made to WP3 by UNITN. We recall the 3-layer approach and its advancements in Section 2.1, and then illustrate a new division among entity attributes in Section 2.2.

2.1 Advancements in the 3-layer approach

A fundamental notion for human/machine symbiosis is the so-called 3-layer approach. Its main objective is to account for, and attempt to bridge, the representational distance between humans and machines, i.e., the semantic gap. As a quick recap, this approach consists of three levels handling different types of information, as shown in Figure 1:

Figure 1: Previous understanding of the 3-layer approach (Level 1: low-level sensor data; Level 2: mid-level semantic-driven sensor fusion; Level 3: high-level semantic knowledge)

Level 1 is the low-level layer, containing only sensor data collected with users' sensing devices, e.g., smartphones and smartwatches. This level has no semantic meaning in itself. Level 2 contains streams with an additional level of abstraction over Level 1. Computed at the server side, these data are derived using a semantic-driven sensor fusion approach, which uses the semantic model to create meaningful information through algorithms. Level 3 contains only semantic information. It can be regarded as a stream of the contents of the Knowledge Base that can be queried at different moments in time; the Knowledge Base system itself, by contrast, contains only information referring to the present time.

The information at Level 3 is generated thanks to a link between sensor data and the personal Knowledge Base of the user, allowing for the increase in abstraction level. The main advancement over this previous understanding of the 3-layer approach is illustrated in Figure 2.

Figure 2: Updated understanding of the 3-layer approach


The major change concerns the role and purpose of Level 2 within the approach. It no longer acts as a mere abstraction of the sensor data of Level 1; rather, it is exactly where the semantic gap is addressed. While maintaining the semantic-driven sensor fusion aspect, the new Level 2 is radically different from the other two levels: it is neither simply a database, as Level 3 is, nor a stream of information, as Level 1 is, but a middle layer that actively fuses the information coming from these two levels. Therefore, this new perspective produces the following contributions:

• At a theoretical level, the distinction between representational and experiential attributes. The former are those attributes qualifying an entity, i.e., allowing for its identification, while the latter are those quantifying an entity, i.e., allowing for a measurement of an aspect of that entity. For instance, in the case of peers, examples of representational attributes are identifiers, while experiential attributes are activities and roles. This overarching distinction is strongly related to the technical contributions and use cases in WP3, and will be further explained in Section 2.2. Moreover, this distinction will also require the development of an ontology of these attributes, driven especially by the needs of the SmartSociety use cases.

• At a technical level, there are two further contributions. The first is modularity, whereby sensor information is composed to provide information for specific attributes, e.g., managing how Wi-Fi and GPS compose the information for indoor or outdoor positioning (a minimal sketch of such a composition is given below). The second is the link between the attributes and the modules' output values, i.e., a placeholder. This link is the main step towards bridging the semantic gap: it allows the system to resolve the values of entity attributes while keeping all the streaming information, at all levels of abstraction, stored in a dedicated server separated from the user Knowledge Base, called the Stream Base (STB). The motivation behind this division is both technical and theoretical. From a theoretical point of view, the Knowledge Base acts like a snapshot of the users' view of the world: it must represent the stable understanding of the environment surrounding users and of how they see themselves with respect to it; hence, it is inherently static. From a technical point of view, the Knowledge Base cannot handle values changing at stream speed, i.e., seconds or even less; therefore the streams must be kept separate and linked through placeholders. Both technical contributions will be discussed in more detail in Section 6 and Section 7.
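To make the modularity contribution more concrete, the following sketch (in plain Java; the interface and class names are our own illustrative assumptions, not part of the SmartSociety codebase) shows how a positioning module could be composed from a Wi-Fi based and a GPS based sub-module, preferring the indoor (Wi-Fi) estimate when one is available:

// Illustrative sketch only; names are hypothetical, not the SmartSociety API.
import java.util.Optional;

interface SensorModule<T> {
    // Resolves the current value of one attribute, if the underlying sensors can provide it.
    Optional<T> resolve();
}

class Position {
    final double lat, lon;
    final String source; // e.g. "wifi" (indoor) or "gps" (outdoor)
    Position(double lat, double lon, String source) { this.lat = lat; this.lon = lon; this.source = source; }
}

// Composition of two lower-level modules: Wi-Fi positioning indoors, GPS outdoors.
class PositioningModule implements SensorModule<Position> {
    private final SensorModule<Position> wifi;
    private final SensorModule<Position> gps;
    PositioningModule(SensorModule<Position> wifi, SensorModule<Position> gps) {
        this.wifi = wifi;
        this.gps = gps;
    }
    public Optional<Position> resolve() {
        Optional<Position> indoor = wifi.resolve();          // prefer Wi-Fi when a fix is available
        return indoor.isPresent() ? indoor : gps.resolve();  // otherwise fall back to GPS
    }
}

The output of such a module is what a placeholder (see Sections 6 and 7) would point to when the corresponding attribute is queried.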


Since this section is dedicated to the advancement from the theoretical point of view, we now turn to providing more details about it.

2.2 Experiential and Representational attributes

As stated above, the theoretical contribution is the distinction between experiential attributes and representational attributes. The main difference between these two kinds of attributes is the data generating their values: representational attributes are based on static data, e.g., identifiers, while experiential attributes are based on dynamic data, e.g., sensor data. However, the difference between static and dynamic data is not limited to time variance. Intuitively, static data encompass all identifiers, e.g., IDs and SSNs, but there are also static data that do not serve identity purposes, e.g., natural features like hair color, or languages known, which require many years of practice. As for dynamic data, not all of them represent aspects of users quantifiable by sensors. For instance, clothing, interests and mental states are clearly dynamic; however, they cannot be inferred from sensors. These issues cannot be attributed to the data per se, but are rather driven by technological limitations. This theoretical distinction is important because it embodies, at the entity level, the difference between the static nature of ontologies and the dynamicity of sensor data. While a complete and exhaustive categorization of these two kinds of attributes may not be possible at the current stage, based on our preliminary interaction with the use cases we can foresee that the main experiential attributes, from the users' point of view, will be: position, preferences, interests, moods, and activities. In addition, the challenge in representing experiential attributes is to establish how the sensors can provide their values, i.e., how to compose the sensors in a general way, adaptable to different devices, so as to provide the values for said attributes. Developing smart composition of sensors will also be important since it will allow for smart sensing strategies. Overall, we will use the use cases addressed in SmartSociety as a driver for understanding and structuring both the ontology of attributes for the relevant entities and the way they interact with the technologies providing their values.
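The distinction can be illustrated with a minimal Java sketch (attribute names are examples only; the actual attribute ontology is still to be developed, as noted above): representational attributes are ordinary stored fields, while experiential attributes are not stored at all but resolved from sensor-backed suppliers when queried.

// Illustrative sketch: class and attribute names are assumptions, not the SmartSociety model.
import java.util.function.Supplier;

class Peer {
    // Representational attributes: static data qualifying/identifying the entity.
    final String id;               // identifier, e.g. a platform-wide ID
    final String nativeLanguage;   // static, but not an identifier

    // Experiential attributes: quantified from dynamic (sensor) data, resolved on demand.
    private final Supplier<String> currentActivity;    // e.g. backed by accelerometer fusion
    private final Supplier<double[]> currentPosition;  // e.g. backed by Wi-Fi/GPS fusion

    Peer(String id, String nativeLanguage,
         Supplier<String> currentActivity, Supplier<double[]> currentPosition) {
        this.id = id;
        this.nativeLanguage = nativeLanguage;
        this.currentActivity = currentActivity;
        this.currentPosition = currentPosition;
    }

    // Values are never cached in the entity itself; they are pulled when queried.
    String activityNow()   { return currentActivity.get(); }
    double[] positionNow() { return currentPosition.get(); }
}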

3 Theory Part II - Humans learning from Machines

3.1 The Smartwatch Life Saver

Observing the training lessons at the University of Southampton has revealed an important task for nurses to be skilled at: CPR (Cardiopulmonary Resuscitation).


CPR has to be performed in a specific way in order to be effective, and a performer therefore needs to be skilled in order to save lives. The regulations about CPR techniques have changed periodically over the decades as scientific evidence has accumulated. Today, based on scientific evaluations, it is suggested to perform at least 100 (100-120) compressions per minute, with a compression depth of 5-6 cm [1].

However, it is not only nurses who have to be able to perform CPR correctly. Out-of-Hospital Cardiac Arrest (OHCA) is one of the leading causes of death in the western world. In the US alone more than 350,000 people die due to OHCA every year (one death every 90 seconds), and in Europe it causes approximately 40% of deaths in adults younger than 75 every year. Over 95% of those experiencing OHCA die because CPR is not commenced quickly enough [2]. Chances of survival decrease by 7-10% for every minute without Cardiopulmonary Resuscitation (CPR). Nevertheless, the sad story told by the Red Cross and ADAC is that in the German-speaking countries only 15-20% of people would actually dare to perform CPR in an emergency. The main reason people do not help is that they are unsure about what to do and therefore afraid of causing harm. In terms of a smart society this is a crucial topic in need of smart support, as devices, ideally commonly available ones, could be used to support both nurses and untrained bystanders so that they feel safe enough to perform CPR effectively. Our work aims at improving bystander engagement and performance by producing a Smart-Watch based, interactive Live-Feedback System.

WP3 Contribution to save Lives. In line with the Semantic Nurse Scenario, WP3 contributed to improving bystander CPR engagement and performance as follows:

1. An easy-to-use CPR feedback application for a Smart-Watch was developed, designed to allow untrained people to perform CPR correctly in emergencies. Since watches are worn most of the time by their owners, this application is always at hand, without the requirement for additional and expensive equipment.

2. This system was evaluated with 41 untrained testers in three modalities.

3. Using the two main quantitative indicators, i.e., frequency and compression depth, the results clearly demonstrated CPR improvement when using the CPR Watch. For example, with the CPR Watch around half of the subjects managed to stay within the recommended range for both parameters (correct frequency and compression depth) for at least 50% of the time they performed CPR, and approximately 70% of the participants managed to stay in the recommended range for both parameters for more than 30% of the time. Without the assistance of the watch (even after receiving an oral reminder about the CPR procedure), just 20% managed to perform CPR correctly for about 50% of the time, while only 30% performed CPR correctly (frequency and depth) for 30% of the time.
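The guideline values quoted above translate directly into the target-range check that such a feedback application has to perform. The sketch below shows only this check (how the watch app actually estimates frequency and depth from its accelerometer is not shown, and the class is our own illustration, not the implemented app):

// Sketch of the CPR target-range check; thresholds follow the quoted guideline values
// (100-120 compressions per minute, 5-6 cm compression depth).
class CprFeedback {
    enum Hint { OK, PUSH_FASTER, PUSH_SLOWER, PUSH_HARDER, PUSH_SOFTER }

    static Hint rateFrequency(double compressionsPerMinute) {
        if (compressionsPerMinute < 100) return Hint.PUSH_FASTER;
        if (compressionsPerMinute > 120) return Hint.PUSH_SLOWER;
        return Hint.OK;
    }

    static Hint rateDepth(double depthCm) {
        if (depthCm < 5.0) return Hint.PUSH_HARDER;
        if (depthCm > 6.0) return Hint.PUSH_SOFTER;
        return Hint.OK;
    }

    // A moment of CPR counts as "in the ideal range" only when both parameters are correct.
    static boolean inIdealRange(double compressionsPerMinute, double depthCm) {
        return rateFrequency(compressionsPerMinute) == Hint.OK && rateDepth(depthCm) == Hint.OK;
    }
}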

3.1.1 With the help of machines: Humans learn on the fly and enhance their performance

Analysis of the CPR data recorded by the CPR-meter indicates that the assistance of the Smart-Watch has a pronounced positive effect on the quality of the performed CPR. Table 1 provides the details. In the non-intervention group, without briefing or app assistance, the participants on average only managed to keep the ideal frequency for 19.78% of the time (SD 33.7) and the ideal depth for 48.7% of the time (SD 25.8). Using the Smart-Watch for assistance, the time spent at the ideal frequency increased by more than 200% (to 61.31%) and at the ideal compression depth by more than 30% (to 65.01%). Analysis of the number of participants who managed to stay in the ideal range (depth and frequency) reveals that without app assistance or briefing only 57.5% were able to do so, and they only achieved this for about 20% of the time. In contrast, with watch assistance, 95% of the participants maintained the ideal range for more than 50% of the time. For most participants the third run, without the watch but with extensive prior information about correct CPR, slightly enhances the result in comparison with the first run without any additional information: on average, the participants stay at the ideal depth for 45% of the time (SD 41.9) and at the ideal frequency for 44.7% of the time (SD 30.2). It can be clearly seen that even extensive prior information and two previous sessions (one with and one without the watch) provide less improvement than the interactive feedback of the Smart-Watch system. More details can be found in Table 1 (last 3 columns) in the Annex.

Doing it right! Upon further perusal, some further interesting details are revealed. Without any help, more than 70% of all test subjects were only able to reach the ideal range (i.e., both ideal compression depth and ideal frequency at the same time, see Figure 14) for less than 10% of the time (48% did not even manage to find the ideal range at all), and only 5% of the test subjects were able to stay in the ideal range for more than 50% of the time!


After receiving an introduction on how CPR has to be performed, this situation improved slightly. However, almost 50% of the test subjects still only managed to stay in the ideal range for less than 10% of the time (and 30% still did not manage at all). Only 14% were able to stay in the ideal range for almost 100% of the time. With the assistance of the CPR Watch, performance improved significantly. Only 15% (6 out of 41) of the test subjects failed completely to reach the ideal range. More than 50% managed to stay in the ideal range over 50% of the time (29% of the test subjects even achieved the ideal range for over 75% of the time). This amounts to an improvement of more than 45 percentage points (pp)!

3.1.2 Participants' feedback - machines are welcome

After the interventions, a feedback questionnaire was sent to the thirty participants who were accessible. 28 completed the survey, giving a return rate of 93% and representing 68% of the initial 41 participants. 100% of the respondents stated that the topic of the study was very important or important. Furthermore, 93% were positive that a Live-Feedback System like the CPR Watch could help to save more lives (only 7% were neutral), and 83% believed that such a system could remove the fear of doing harm whilst performing CPR. When asked how secure they felt in their understanding of CPR before participating in our study, 35% replied "secure", 25% were "neutral" and 40% were "insecure". Regarding our study and its outcome, the following answers are the most interesting: 89% of all study participants stated that they felt much safer performing CPR with the assistance of the watch than without, and 92% were sure that they performed better with the assistance of the watch. 75% would immediately install such an app on their Smart-Watch if they owned one. More details are listed in Table 2 in the Annex.

3.1.3 Conclusion and Future Work

Figure 14 in particular emphasizes that smart-watch based CPR assistance has tremendous potential for improving bystander CPR and could become an app that truly saves lives. Although a simple system, it demonstrates the potential of SmartSociety-like systems not only to ease people's lives but also to make them safer. This would clearly be a SmartSociety application worth bringing to the real world. Still, clarification of medical-device and potential liability issues is needed before this application can move from a research to a development state.


In terms of future work, the key question is how far the impact of the system could be strengthened through more elaborate and possibly personalized feedback mechanisms. At the due date of this deliverable a study with nursing students is ongoing. The hypothesis of this study is that students using the CPR Watch while training internalize CPR faster than students training without CPR Watch assistance. Furthermore, with more experience in teaching humans with Smart-Glasses (see Section 3.2), this application will be ported to work on a Smart-Glass, or possibly a combination of Smart-Watch, smartphone and Smart-Glass could enhance the functionality (e.g., if CPR is detected an automatic emergency call could be set off, or First Aid information could be displayed for the bystander on the Smart-Glass screen). More details about the Smart-Watch, the set-up and the evaluation can be found in Annex A.

3.2 Smart Glass Teaching

Experiments are the key to teaching and understanding science and to acquiring abilities in various fields. This is true not only for nursing students (as in the Semantic Nurse Scenario) but for every other field as well. To a degree, teaching basic experimental skills and methodologies (the "mechanics" of experimenting) is part of science education. Another important motivation is the well-known fact that competent handling of multiple representations is a key aspect in learning and solving problems, especially in science education [3]. Furthermore, researchers have found that integrating multiple representations (especially visual ones) can afford a better conceptual learning environment for many students [4, 5]. Of particular interest is the ability to easily put theoretical representations into relation with the actual physical phenomena, preferably while being able to see them.

Smart Glasses are a new class of wearable systems that extend the original vision of head-mounted displays (HMD) towards a broader concept of head-centered interaction and sensing. Devices like Google Glass are full-blown mobile computers that combine an HMD, a headphone, a multitouch touchpad, head motion sensing, eye blink sensing, a microphone, a first-person camera, a significant amount of flash storage, and various communication capabilities. As a result, they enable users to seamlessly blend their interactions in the physical and in the digital world, fusing and manipulating information from both worlds with minimal interference with other activities.

In this section, we demonstrate how this capability can be used to support high school science education. As a concrete case study, we have developed and evaluated gPhysics: a Google Glass based app to support students in conducting a specific physics experiment in the area of acoustics. The vision is to utilise the Google Glass device to (1) reduce the "technical" effort involved in conducting the experiments (measuring, generating plots, etc.) and to (2) allow the students to interactively see and/or manipulate the theoretical representation of the relevant phenomena while at the same time interacting with them in the real world.

3.2.1 WP3 Contributions on Teaching

We have developed a concept for supporting a specific experiment common to many high school curricula (i.e., determining the relationship between the water fill level of a glass and the frequency of the tone that can be generated by hitting the glass). The teaching concept was implemented on the Google Glass platform and was evaluated with 36 twelfth-grade students in a between-subjects design.

3.2.2 Results

The results show a statistically significant reduction in the experiment execution time. Standard questionnaires used in education research also reveal a statistically significant improvement in the level of curiosity generated by the experiment and in the cognitive load. In fact, the theoretical representation is able to evolve while the student physically interacts with the experiment. The capabilities of Smart Glasses are well suited to supporting the above considerations: while the HMD can be used to present various representations while observing the physical phenomena, interaction modes such as head motions, blinking, or speech allow for switching and manipulation in a hands-free manner with minimal cognitive distraction. Measurements can be made either with the built-in sensors (the microphone and the camera can capture many relevant phenomena) or through Bluetooth-connected sensors. A key assumption behind our work is that reducing the experimental effort, together with a new way of relating the experiments to theoretical representations, will help awaken and foster curiosity, which is known to be highly relevant to success in science education [6]. More details about the teaching concept, the evaluation and the results can be found in Annex B.
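As an illustration of the kind of measurement involved, the sketch below estimates the dominant tone frequency from a short mono audio buffer by counting zero crossings. This is a simplification we provide for clarity; the actual gPhysics measurement pipeline is described in Annex B and may use a different method (e.g. an FFT-based one).

// Naive zero-crossing estimate of the dominant tone frequency from a mono PCM buffer.
class ToneFrequency {
    static double estimateHz(short[] pcm, int sampleRateHz) {
        int crossings = 0;
        for (int i = 1; i < pcm.length; i++) {
            if ((pcm[i - 1] < 0) != (pcm[i] < 0)) crossings++;  // sign change marks half a period
        }
        double seconds = (double) pcm.length / sampleRateHz;
        return (crossings / 2.0) / seconds;                     // two crossings per full cycle
    }
}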

4 Mainkofen

This section documents the progress on the Mainkofen collaboration between UNITN and DFKI on improving the activity recognition of nurses, thus attempting to bridge the semantic gap between sensor data and semantics. Our main focus in the collaboration was finalizing the model, uploading it and populating it with the information needed before comparing it to the real-life data of the nurses. The structure of this section is as follows. Firstly, we illustrate the updated model in Section 4.1. Secondly, we provide details on the phase of importing the data from the nurses' logs to populate the model in Section 4.2. Finally, we indicate the next steps for the experiments in Section 4.3.

4.1 Updates on modelling

Compared to the model presented in D3.2, the etypes have been updated. Much like in the previous model, we established Person, Event and Location as the core etypes; the main reason for the update was that the previous model could not properly represent the sequence of events and activities. Moreover, after initially attempting to build one, we did not create a whole new activity ontology from the available logs, since the gain, both from a practical and from an overall research point of view, would have been outweighed by the increased (and unnecessary) complexity when interacting back with DFKI's labels. Finally, we reconsidered the attributes from the point of view of the research on experiential and representational attributes; however, we could not entirely address this aspect since we were dealing with a pre-existing experiment. Figure 3 presents the final version of the model at a glance. We leave the full details of the modelling of each etype to Annex C.1, and move on to the model population phase of the experiment.
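A minimal sketch of the three core etypes, written as plain Java classes, is given below. The attribute names shown are illustrative placeholders only; the authoritative etype definitions are those in Annex C.1.

// Illustrative sketch of the core etypes; attribute names are placeholders, see Annex C.1.
import java.util.List;

class Location {
    String name;              // a ward room, e.g. "G1"
    List<String> hotspots;    // hotspots inside the room, e.g. "Z2-closet"
}

class Person {
    String label;             // a nurse, or a patient label such as "Z1-T"
    String role;              // nurse or patient
}

class Event {
    long start;               // Unix timestamps, as in the test logs
    long end;
    String activity;          // e.g. "instruct" within the "5Hygiene" procedure
    Person subject;
    Location where;
    Event previous;           // sequence links allow next/previous activity probabilities to be computed
}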

4.2 Processing nurses' logs

Once the model was defined, it had to be imported into our system in order to extract the high-level information needed for the success of this experiment.

4.2.1 Pre-importing phase

As agreed, the information to build the model was taken from two data sources:


Figure 3: Final model of the Mainkofen etypes

1. Testlogs from the tests performed at the Mainkofen Hospital in 2011. They were taken during the daily routines of the nurses, and follow a legend created ad hoc, representing:

   a) Locations and hotspots: rooms of the ward, e.g., G1 and SB, and the hotspots in a room, e.g., Z2-closet.

   b) Patients: patients were labeled with their room number and the position of their bed, e.g., Z1-T and Z1-F mean that one patient is close to the door, while the other is close to the window.

   c) Activities and objects: activities always start with a number, which specifies the kind of task, followed by "-" and a specific activity or the object that is used for this task. Objects always start with a capital letter, activities always start with a lower-case letter. For instance, 1Activities - put down (common activity, i.e., put down (something)).

   Overall, the testlogs cover the activities performed by the nurses in the 4 areas of the ward on the 2nd, 3rd, 5th, 9th, 16th, 18th, and 19th of May 2011.



2. The .json files, i.e., menues, roomhotspots, and person, providing information about activity and object classes, locations and people, which can be found in Appendix C.2.

4.2.2 Parsing Tool System

Once we had imported the structure of the model, we needed to populate it; this required an ad hoc program "translating" testlog lines into entities. As shown in Figure 4, it consists of two components:

Figure 4: Translation program architecture (nurse logs are parsed into ad hoc Java classes, which are then converted into entities for the semantic engine)

1. The first component interprets the testlog files line by line and creates ad hoc Java classes, which mirror the classes of the model. In a first phase, the program imports as a cache the hospital locations and their structure. It then handles rooms, patient rooms, hallways and bathrooms, specifying for each one the adjacent rooms and those contained, i.e., bathrooms in the case of patient rooms; these objects will be used by the other classes. Then the program captures the log labels line by line to save them in the corresponding classes. For this step, we created four files, called rooms.txt, rooms detail.txt, nurses.txt and patients.txt, to assist the transition from logs to classes.

2. The second component converts the Java classes created by the first component into dedicated classes to be imported into the semantic engine.

However, we could not make the translation process from the raw log form to a parsable text file fully automatic. The main reason is that, while the logs did have a predefined set of activities, people and locations, their flow is not always consistent. For instance, patients in the logs are recognized, and all the activities that follow are attributed to them until a new patient is recognized. Moreover, this may happen within a short time, as in Listing 1, or can take several minutes.

Listing 1: Switching from one patient to another

1304398904.962  1Activities-talk
1304398904.962  Z8-M
1304398929.120  Z8-M
1304398946.926  note-calm down
1304398996.388  Patient G4
1304399005.475  Z9
1304399016.295  Z9-T
1304399018.390  1Activities-talks

Therefore, we had to manually translate a set of nurse logs based on the parsing schema produced by the program. For instance, the activity of instructing patient Z7-T while the shower is on, shown in Listing 2,

Listing 2: Original nurse log example

1304397289.762  Z7-T
1304397389.561  5Hygiene-SH +
1304397451.591  5Hygiene-instruct

would then be translated as shown in Listing 3.

Listing 3: Translation of nurse logs example

{"timestamp": 1304397451,
 "activity": {"name": "instruct", "end": 1304397460, "patient": "Z7-T", "experiential": "shower"},
 "procedure": "5Hygiene"}

This allowed us to inject semantics into the nurse logs to ease the population of the model. Furthermore, it allowed us to better compute the probability of sequences of activities. Unfortunately, this process has to be done manually, since the logs are not consistent enough in terms of notation, even for the same nurse, making automation of the translation infeasible.
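For illustration, the sketch below shows the kind of classification performed by the first component of the parsing tool on a single raw log line ("timestamp label"). The classification rules are a simplification of the legend in Section 4.2.1, and the real tool additionally consults rooms.txt, rooms detail.txt, nurses.txt and patients.txt; the class shown here is ours, not the actual tool.

// Simplified sketch of the first parsing step: raw log line -> typed record.
class LogLineParser {
    enum Kind { TASK_ACTIVITY, OBJECT_OR_PLACE, NOTE }

    static final class Record {
        final double timestamp;
        final Kind kind;
        final String label;
        Record(double timestamp, Kind kind, String label) {
            this.timestamp = timestamp; this.kind = kind; this.label = label;
        }
    }

    static Record parse(String line) {
        String[] parts = line.trim().split("\\s+", 2);     // e.g. "1304397451.591 5Hygiene-instruct"
        double ts = Double.parseDouble(parts[0]);
        String label = parts.length > 1 ? parts[1] : "";
        Kind kind;
        if (!label.isEmpty() && Character.isDigit(label.charAt(0)))
            kind = Kind.TASK_ACTIVITY;       // task labels start with the task number, e.g. "5Hygiene-instruct"
        else if (!label.isEmpty() && Character.isUpperCase(label.charAt(0)))
            kind = Kind.OBJECT_OR_PLACE;     // objects, locations and patients start with a capital letter
        else
            kind = Kind.NOTE;                // free-text notes, e.g. "note-calm down"
        return new Record(ts, kind, label);
    }
}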

4.3 Future experiments

Currently, our model has been populated with 2514 entities, consisting for the most part of individual tasks taken from 6 routines of the first two days of the Mainkofen experiment, two per nurse. The full view of these data can be found in Appendix C.3. Overall, the table represents the whole currently populated model, while also tracking the computed probability of the next and previous activities.

This is therefore the starting point for the future experiment, which will consist in fusing the model with the raw sensor data from the Mainkofen experiment. As stated above, the attempt to bridge the semantic gap will be evaluated as an increase in overall accuracy. Therefore, we will follow these steps:

1. We will load the entirety of the available logs.

2. We will refine the queries both for retrieving the events and for calculating the probabilities.

3. We will fuse the data from DFKI by using the ontologies to refine and reduce the computational space of the most probable events (see the sketch below). The actual exchange of information will happen through an ad hoc middleware bridging the sensor information with the model in the KB/EB. Note that DFKI will provide a stream of probabilities for activities and locations, coupled in the model as events; hence, the model will help in reducing the number of probable events to rank. Furthermore, this probability concerns only events and is different from the one used in the structured attribute Link, since that one is computed from the logs.
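As a sketch of step 3 (our illustration only, since the fusion algorithm is still to be agreed), the sensor-derived activity probabilities provided by DFKI could be re-weighted by the next-activity probabilities computed from the populated model, pruning activities the model considers impossible after the previous one:

// Illustrative fusion step: sensor probabilities re-weighted by model transition probabilities.
// Multiplying the two scores is an assumption made for this sketch, not the agreed method.
import java.util.Map;

class EventFusion {
    static String mostProbableActivity(Map<String, Double> sensorProb,    // activity -> P(activity | sensors)
                                       Map<String, Double> modelNextProb  // activity -> P(activity | previous activity)
    ) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Double> e : sensorProb.entrySet()) {
            double prior = modelNextProb.getOrDefault(e.getKey(), 0.0);   // 0 prunes impossible successors
            double score = e.getValue() * prior;
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }
}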

5 Semantic Nurse

The Semantic Nurse will be a continuation of the data-set and of the work on nurse activities in the Mainkofen Scenario. While the Mainkofen data, with its specific features and availability, was used to develop a solution to the issue of the semantic gap on a single-nurse, single-patient scale, the Semantic Nurse should now aim for a larger scale.

The Health Science Department of the University of Southampton uses a skill-based learning environment to train nursing students and has equipped a room like a real-life patient's room of a hospital, including "SimMan" patients (a highly sophisticated medical training dummy, looking like a person of average height, that can be programmed to have a heartbeat, to breathe and to simulate organ failures). From a nearby control room the situation during a training session can be influenced (e.g., SimMan's heart stops, a crisis occurs, etc.) for learning purposes and in order to see how different trainees react to a crisis; thus training becomes "real". Afterwards the training sessions are analyzed and the competence of the trainees is evaluated by educators. In order to answer further research questions, e.g., how training can be enhanced, how trainees learn, how educators determine competence and, last but not least, to learn/understand with machines how experts rate behaviour and situations, the learning environment has further been augmented, mainly with microphones and cameras. Nevertheless, analysing a training session afterwards is limited, as it is, e.g., hard to determine who is being referred to when a mentor is explaining something. Therefore, the University of Southampton, DFKI and UOXF agreed to co-operate in order to leverage the know-how of DFKI in activity recognition with different and unobtrusive sensor systems (e.g., a smartphone in a pocket, Google Glass, sensor watches, etc.) to enhance the possibilities of recording and analysing the training sessions.

5.1 The Semantic Nurse Scenario

Depending on the degree of skill (years of being taught), a common teaching lesson in the skill-center looks as follows:

* The skill-room is equipped like a standard hospital room, including lines for oxygen and nitrous oxide, a screen for monitoring the patient, a telephone to call specific clinical departments, an emergency trolley, a sink and an alarm system.

* In this room 3-4 nursing students have to attend to a patient (SimMan). No roles are assigned to the nurses; as a matter of monitoring the trainees, one of the aspects is precisely to understand whether and how roles come to them naturally.

* A trainer is also present in the room, playing the role of an assistant.

* The scenario, i.e., the health of the patient (SimMan), is controlled by a health care professional in the next-door control room.

* When an emergency is added, the situation is driven by the way the trainees deal with it.

5.2 Collective Adaptive Aspects

In terms of the semantic gap and of WP3, this collaboration will provide us with further possibilities to analyse the semantic gap and develop strategies to close it. Furthermore, one of the main research questions, understanding how the complex process of rating a situation or behaviour works, naturally suggests itself to SmartSociety WP3. This scenario provides a number of interesting aspects for SmartSociety to analyse:


* How do people interact with each other (often without talking)?

* How do people interact with machines/devices?

* How are roles naturally distributed?

* Who looks in which direction: who looks to the areas where things happen (is concerned), who looks away (tries to be unconcerned), who pretends to be busy (in order not to have to be engaged), etc.? All these questions require the machines to understand a situation!

* How can a machine influence the situation (e.g., give contextualized advice, check regulations and inform about them, detect falls and warn, detect if a nurse needs help and inform the optimal helper; for an example see Figure 5)?

* How do people react to the machine's attempt to influence them?

* Could the help/influence of machines help trainees to learn faster?

Figure 5: A possible human/machine interaction

6 Applying i-Log: The RideShare use case

In this section we show the advances made in extending and integrating i-Log (introduced in D3.2 and [7]) within the context of SmartSociety. We illustrate how i-Log fits within the more established RideShare use case, motivating its development, its interactions and the advantages that it brings to the table.


6.1 i-Log in the RideShare use case

As a progression from what was introduced in D3.2 and demonstrated in the second year review, consider the case of two peers meeting for a ride (e.g., measuring the distance from the starting point of the ride to anticipate delays), getting in the car at the designated starting point (e.g., measuring "togetherness" between the peers), taking the ride (e.g., tracking the progress of the ride), and arriving at their destination (e.g., rating the ride based on both sensor data and directly inputted user ratings). Overall, we can see that i-Log is important for providing values not only for the peers' attributes, but also for the event of the ride as a whole, although indirectly through the peers' sensing devices. Indeed, it is not the objective of i-Log to contribute to the mechanisms of orchestration before and after the ride, but rather to serve as the main way to track the actual start, progress and end of the ride in real time, together with the peers and their experience.

The main technical issue that we have to tackle in order to give the system an understanding of the situation (in which the peers are involved, and thus of the ride itself) is to find a way to update the values of the experiential attributes in the entity/knowledge bases starting from sensor data. What we want to do is to fuse two very heterogeneous systems and let them work in symbiosis: a combination of Entity and Knowledge Bases, which by definition deal with semi-static structured data, and a Stream Base that deals with very fast-varying and totally unstructured information. The way we define how they work together is in fact what defines the bridging of the semantic gap between low-level sensor data and high-level human knowledge.

Technically speaking, we cannot simply push values from the sensors into the Entity/Knowledge Bases as they arrive, since we would face thousands of updates per second per sensor. Moreover, this would render the structured information in the EB/KB itself useless. On the other hand, we cannot simply downsample the update frequency to a constant value, since it is not possible or advisable to define a single appropriate update rate; it depends on the situation or context in which users are involved. For instance, while walking, it can be sufficient to have updates about the user position every minute, since the difference between two consecutive samples will be in the order of 100 m, making no real difference. On the other hand, if the user is on a bus and the system wants to alert him to get off at the right bus stop, 100 m can make quite a difference, and the values will have to be updated every 10 seconds. Because of this, we came up with a pull-based solution that solves these issues, which we call Dynamic Attribute Resolution through placeholders.


Placeholders are stored as attribute values in the Entity/Knowledge Bases and link to the corresponding stream in the Stream Base. A placeholder is basically a link to a specific software component in the Stream Base APIs that resolves the desired sensor values at runtime. In this way, we defined a novel approach to make Ontologies and Knowledge Bases work with very fast-varying streaming data without altering their original behaviour. These values never need to be written into the Knowledge Base; they are always and only stored in the Stream Base and are accessed on demand.

Each placeholder, as well as each stream, has a unique identifier (UUID) that allows the information to be retrieved and the two to be connected to each other.
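A minimal sketch of this mechanism is shown below. The StreamBaseClient interface and its latestValue() call are stand-ins we introduce for illustration; the real Stream Base API is not shown in this deliverable.

// Sketch of Dynamic Attribute Resolution through placeholders (illustrative names only).
import java.util.UUID;

interface StreamBaseClient {
    // Returns the most recent value of the stream identified by the given UUID.
    Object latestValue(UUID streamId);
}

// The value stored in the Entity/Knowledge Base is not the sensor reading itself,
// but a link (placeholder) that is resolved only when the attribute is queried.
class Placeholder {
    final UUID placeholderId = UUID.randomUUID();
    final UUID streamId;                      // unique identifier of the linked stream in the STB
    Placeholder(UUID streamId) { this.streamId = streamId; }

    Object resolve(StreamBaseClient stb) {
        return stb.latestValue(streamId);     // pull-based: no sensor value is ever written into the KB
    }
}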

In order to explain this concept, consider the following concrete example: an application in the SmartSociety platform wants to know whether a specific ride is on time. The concepts of ride and on time are high-level human concepts that are present only in the Entity/Knowledge Bases and not in the raw data collected from the users' devices. This query therefore has to be redirected to the Entity/Knowledge Base, which decomposes it into its main building blocks: a ride is a group of people, moving together, from point A to point B, following a path. Now that we have these elements (attributes), we can link each of them with one or more sensors (sensor fusion) from one or more peers, and check the results. This is done through the Dynamic Attribute Resolution procedure: we define a set of placeholders pointing to the specific software components in the STB that resolve the values at runtime (query time) and return them to the Knowledge Base. In the example above, a group of people is recognized, among other things, by checking the proximity of the users' mobile devices registered to the SmartSociety platform and running i-Log, through the Bluetooth sensor. The moving together feature is recognized by cross-checking the speed values from the users' GPS sensors. Finally, both from point A to point B and following a path are checked using the GPS sensors of the peers. The component that replies to the query for the specific ride queries all the users' personal Knowledge Bases for the specific attribute values needed and merges the responses. If all of them check out, it means that the ride is currently happening. At this point, to check whether the on time feature is also satisfied, the system involves the Task Execution Manager component (see Section 7 for details), which matches the real-time information provided by the Context Manager with the expected information from the Orchestration Manager.
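The decomposition described above can be summarised, purely as an illustration, by the boolean check sketched below; thresholds and method names are our assumptions, and in the platform each predicate would be resolved by a Context Manager module over the peers' personal Knowledge Bases.

// Illustrative check that a ride is currently happening, composed from per-peer attribute values.
import java.util.List;

class RideCheck {
    // "Group of people": the peers' devices are in Bluetooth proximity of each other.
    static boolean groupTogether(List<Double> pairwiseDistancesMetres) {
        return pairwiseDistancesMetres.stream().allMatch(d -> d < 5.0);
    }

    // "Moving together": all peers report comparable, non-zero GPS speeds.
    static boolean movingTogether(List<Double> gpsSpeedsKmh) {
        double min = gpsSpeedsKmh.stream().min(Double::compare).orElse(0.0);
        double max = gpsSpeedsKmh.stream().max(Double::compare).orElse(0.0);
        return min > 5.0 && (max - min) < 10.0;
    }

    // "From point A to point B, following a path" is summarised by a flag computed
    // elsewhere from the peers' GPS traces.
    static boolean rideHappening(List<Double> distances, List<Double> speeds, boolean onExpectedPath) {
        return groupTogether(distances) && movingTogether(speeds) && onExpectedPath;
    }
}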


6.2 Future Work

As we will show in Section 7, i-Log is a component of a larger architecture designed to manage the execution of tasks involving peers across SmartSociety, i.e., the Task Execution Manager. Therefore, the majority of the future work will focus less on the design point of view and more on consistently implementing the placeholder technology for attribute values. This will also be linked to the development and extension of those experiential attributes, whose values are based on sensors, that are needed for this particular use case.

7 Execution Monitor

In the current state of SmartSociety, many of the tasks to be handled by peers and collectives happen in the real (physical) world. However, there is a lack of appropriate means for monitoring what happens in the real world. For instance, even if a ride is agreed upon in SmartShare, it will be unknown whether the agreed ride is actually taking place. To overcome this issue, a "monitoring" component will be designed, implemented and integrated into the SmartSociety platform. This work should account for the use cases, the model and the bridging of the semantic gap, its conceptual design, and its implementation. Hence, it will address issues such as: definition of the input/output, recognition, matching against an agreed plan, detection of deviations, and simulations. We call this component the Execution Monitor, and it will have two main purposes. Firstly, it will monitor the execution of tasks, especially those involving "offline" actions by peers and collectives, although online actions are included as well. Secondly, it will perform the sensor and knowledge fusion described in the previous sections and offer high-level information about the execution progress of tasks to the rest of the SmartSociety platform. Based on this information the platform may decide to introduce corrective actions in the tasks, to trigger re-planning or even to cancel the task.

7.1 Requirements

The general requirements for the Execution Monitor were decided in close collaboration between WP8, WP6, WP4 and, of course, WP3, throughout several dedicated meetings. The agreed functional requirements for the Execution Monitor include:

• For each task to be monitored, the Execution Monitor will get as input from the Orchestration Manager (OM):
  – a description of the task whose execution has to be monitored; and
  – the IDs of the peers involved in the execution.

• At the same time, the Execution Monitor will also have access to:
  – sensory data from the client (normally from an app running on a cellphone) related to the peers involved in the execution; and
  – mid- and high-level data derived by sensor fusion over time and from user input.

• Using these two sources, the Execution Monitor should produce an output that informs the Orchestration Manager of:
  – the progress of a given task; and
  – possible deviations that occur during the execution of this task; big deviations should be expressed in a way that allows the Orchestration Manager to react in a timely manner.

With this information the Orchestration Manager is able to keep track of the progress of each task sent to the Execution Monitor and to be informed about possible deviations that may occur (e.g., by having the platform trigger re-planning or cancel a task). In this sense the Execution Monitor replaces the feedback loops that happen during regular human-mediated task execution (e.g., a person giving directions to another person parking a car), by providing information to the platform about the current status of the task and allowing it to decide which corrective measures to apply (such as "you are too close to another car on the left" or "you still need to go further" in the human-mediated parking example).
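These requirements imply a rather small contract between the OM and the Execution Monitor, which can be sketched as follows (the interface and method names are ours, not the agreed platform API):

// Sketch of the Execution Monitor contract implied by the requirements above.
import java.util.List;

interface ExecutionMonitorApi {
    // Input from the Orchestration Manager: the task to monitor and the peers involved.
    String startMonitoring(String taskDescription, List<String> peerIds);  // returns a monitoring handle

    // Output back to the Orchestration Manager.
    double progress(String monitoringHandle);          // e.g. fraction of the task completed
    List<String> deviations(String monitoringHandle);  // deviations big enough to require a timely reaction
}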

7.2 Architecture

Figure 6 provides a general overview of the Execution Monitor architecture. Notice that we will focus only on the Execution Monitor component and assume that the Orchestration Manager (henceforth OM) and the Peer Manager (henceforth PM) are already known, since they are the work of WP6 and WP4, respectively.


Figure 6: Execution Monitor architecture (the OM passes a task description, including the IDs of the peers involved, to the Execution Monitor, which comprises the Task Execution Manager and the Context Manager; the Context Manager receives sensor data from the i-Log client and exchanges high-level real-time attributes with the Peer Manager; the Execution Monitor reports whether the task is proceeding according to plan, e.g., its percentage of completion, or not, e.g., whether planning should be re-triggered)

Let us describe the three main architectural elements of the Execution Monitor:

Execution Monitor: the Execution Monitor is the overarching component that includes the functionalities offered by both the Task Execution Manager and the Context Manager.

Task Execution Manager (TEM): this component interacts with the PM and the OM. The main function of the TEM is to act as an overarching monitoring component that integrates the real-time evolution of a given task, through the PM, and proactively adapts it and matches it to the requirements defined by the OM.

Context Manager (CTX): this component is the software/hardware embodiment of the 3-layer approach. In fact, it consists of three different subcomponents:

1. the sensing devices;

2. the modules for aggregating and merging attribute values from sensor data;

3. the high-level models stored for each peer and connected via dedicated APIs.

7.3

Data Flows

To better illustrate how the Execution Monitor works within the SmartSociety platform, we present the following sequence of steps, representative of its normal functioning, in Figure 7. However, before delving into the actual data flow, we must note that the first three steps, i.e., those within the CM and linked to the PM, are ongoing processes, not immediately linked to the TEM and/or the OM. In fact, the update of attribute values can be constant, regardless of application-specific requirements. Let us now illustrate each step.

Step 1: Device data. Data from devices are first stored by a Cassandra database system, which stores them in the form of streams. Each user has more than one stream, typically one per sensor. In future implementations we foresee additional streams referring to users coming from further devices, e.g., devices belonging to and developed within the Internet of Things (IoT) paradigm. This step is the first one given the low (if not absent) level of abstraction of the data, since they are the direct output of the sensor devices. In the context of Rideshare, these data would be all the output from physical sensors such as accelerometers, GPS and similar sensors commonly available on most smartphones.

Step 2: Modules. To bridge the gap between the high-level information and the sensor data coming from the users' personal devices, we foresee a "modular" aggregation and fusion of data, in the sense that it is driven by the attributes of the entities represented in the high-level model. This aggregation is then used to provide attribute values for entities in the model via a placeholder pointing to the corresponding stream in the STB. Placeholders are resolved at run time whenever an attribute of a given entity is queried by another system component. Each module in the architecture of the Context Manager is used to resolve one attribute value, and multiple modules can be combined. This process, driven by mathematical algorithms in combination with semantic information, is what we defined as bridging the semantic gap. Each module can produce its output continuously or on a per-request basis. Notice also that we can foresee modules whose input is the output of one or more other modules combined together; the compositionality here would still be semantically driven. In the context of Rideshare, as we discussed in Section 6, we can have a module aggregating sensors to provide values for attributes such as "is moving" and "being near"; these values then update those stored in the peer database via their respective placeholders. A minimal sketch of such a module is given below.
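As an illustration of the modular, placeholder-driven resolution described in Step 2, the following sketch shows how a single module could compute the value of an "is moving" attribute from a speed stream; the names, the stream layout and the threshold are assumptions, not the actual Context Manager implementation.

```python
# Illustrative sketch (names and interfaces are assumptions, not the actual Context
# Manager code): a module that resolves the "is moving" attribute of a peer from a
# speed stream stored in the stream base (STB) whenever the placeholder is queried.
from statistics import mean
from typing import Callable, List

class IsMovingModule:
    """Aggregates recent speed samples into a boolean attribute value."""

    def __init__(self, read_stream: Callable[[str, str], List[float]],
                 speed_threshold_ms: float = 1.0):
        self.read_stream = read_stream          # e.g. a thin wrapper around Cassandra
        self.speed_threshold_ms = speed_threshold_ms

    def resolve(self, peer_id: str) -> bool:
        # The placeholder stored in the peer profile is resolved at query time by
        # fetching the latest samples of the corresponding stream and fusing them.
        samples = self.read_stream(peer_id, "speed")
        return bool(samples) and mean(samples[-10:]) > self.speed_threshold_ms

# Example usage with a stubbed stream reader:
if __name__ == "__main__":
    fake_stb = {("peer-42", "speed"): [0.2, 1.8, 2.1, 2.4]}
    module = IsMovingModule(lambda peer, name: fake_stb.get((peer, name), []))
    print(module.resolve("peer-42"))   # -> True
```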

Step 3: Updating in the Peer Manager. Here the attributes of the entities stored in the peer database, i.e., the combination of one or more entity bases (EB) and the knowledge base (KB) belonging to each peer, have their values updated following the procedure explained in the previous step. These attributes are experiential attributes, i.e., attributes whose value is based on dynamic data, e.g., sensor data such as the ones provided by the devices at Step 1. In fact, the experiential attributes act as a "semantic drive" for the collection of data, upon which we can build models describing not only the static elements of a user, but also the context and the sequence of events in the everyday life of the user. This can also be achieved by adopting mathematical methods, e.g., Timed Automata, which provide the expressiveness and a well-understood mechanism for representing sequences in a user's everyday life.

As for the TEM, these are the operational steps that happen during the monitoring:

Step 4a: TEM requesting. At the beginning of the monitoring phase, all the data are requested by the TEM via APIs for outputting metrics, evaluations and predictions.

Step 4b: TEM matching and integrating. Once they are all collected, the data from the PM (via the CM) are matched against the information about the task taken from the OM. The TEM can then proactively inform the OM about the status of the ride and its corresponding expectations. A sketch of this matching logic is given below.
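The following sketch illustrates the kind of matching performed in Step 4b; the data structures and the decision rules are illustrative assumptions rather than the actual TEM logic.

```python
# Illustrative sketch (an assumption about the logic, not the actual TEM implementation):
# the TEM compares the context derived by the CM/PM with the task description from the OM
# and decides whether the ride is proceeding according to plan.
from dataclasses import dataclass

@dataclass
class RideTask:               # what the OM registers: who should be in the ride and where
    task_id: str
    expected_peers: set
    destination: tuple        # (lat, lon)
    deadline_ts: float        # unix timestamp by which the ride should arrive

@dataclass
class ObservedContext:        # what the CM derives from sensor fusion for the peers
    peers_together: set
    is_moving: bool
    position: tuple
    timestamp: float

def assess(task: RideTask, ctx: ObservedContext, radius_deg: float = 0.01) -> str:
    """Return a coarse status string the TEM could report back to the OM."""
    if not task.expected_peers.issubset(ctx.peers_together):
        return "deviation: not all expected peers are in the ride"
    close = (abs(ctx.position[0] - task.destination[0]) < radius_deg and
             abs(ctx.position[1] - task.destination[1]) < radius_deg)
    if close:
        return "completed"
    if ctx.timestamp > task.deadline_ts:
        return "deviation: deadline passed, consider replanning"
    return "in progress" if ctx.is_moving else "deviation: ride not moving"
```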

Figure 7: The flow of information within the Task Execution Manager. The TEM compares the context derived from the PM (via the CM), e.g. "the user is moving and is with other people, he is in a ride, the ride is in this status", with the requirements from the OM, e.g. "these people must be in the ride, which should be at this location within a certain time interval", and decides whether the task is happening as planned.


7.4 Integration with other WPs

The implementation of the integrated Execution Monitor is currently under way; early demos of this technology were presented in previous years under the name of the iLog/Move app. During the third year, significant effort was dedicated to integrating this work with the existing SmartSociety platform and to complying with the specific requirements needed to do so. As shown in Figure 6, the Execution Monitor has two main points of interaction with the rest of the SmartSociety platform: the Peer Manager and the Orchestration Manager.

7.4.1 WP4 Peer Manager

The WP3 Execution Monitoring component, and in particular the Context Manager contained in it, integrates with the Peer Manager (WP4) within the SmartSociety platform through the Peer Manager web APIs. In particular, the Context Manager writes the crystallized attributes that result from sensor fusion into the profile of the peer to which the corresponding sensors belong. It does this by using the Knowledge Base elements of the Peer Manager's dynamic resolution of entity attributes through placeholders. A sketch of what such a write could look like is given below.
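As an illustration only, writing a crystallized attribute into a peer profile could look as follows; the endpoint, payload and attribute names are hypothetical and do not correspond to the documented Peer Manager web API.

```python
# Hypothetical sketch only: the endpoint, payload shape and attribute names are
# assumptions for illustration and do not reflect the actual Peer Manager web API.
import requests

PM_BASE_URL = "http://peermanager.example.org/api"   # placeholder URL, not the real service

def write_crystallized_attribute(peer_id: str, attribute: str, value, token: str):
    """Write a sensor-fusion result (e.g. isMoving=True) into a peer profile."""
    payload = {"attribute": attribute, "value": value, "source": "WP3-ContextManager"}
    resp = requests.put(
        f"{PM_BASE_URL}/peers/{peer_id}/attributes/{attribute}",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```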

7.4.2 WP6 Orchestration Manager

The Orchestration Manager will contact the Execution Monitor, and in particular the Task Execution Manager contained in it, through a dedicated web API specifically created for this purpose. The Task Execution Manager will provide the Orchestration Manager with calls for:
• Registering a task in the Task Execution Manager: this call sends a description of the task whose execution has to be monitored and the IDs of the peers involved in the execution to the Task Execution Manager. This information tells the Task Execution Manager to keep track of that task and its related peers and to report back to the Orchestration Manager (according to the parameters of this call) with their status information.
• Removing a task from the Task Execution Manager: this call cancels the monitoring of a task and its involved peers; no further notifications will be sent back to the Orchestration Manager about the removed task.


• Polling the status of a given task: this call can be used to get the current information about the task and its related peers from the Task Execution Manager (without waiting for the next notification).
For reporting the status of all monitored tasks, the Task Execution Manager will use a specifically prepared call offered by the Orchestration Manager. Upon receiving these notifications, the Orchestration Manager is able either to mark a task as completed (when the reported information matches a final state for the task) or to perform corrective measures (when the reported information is unexpected or corresponds to deviations from the plan). A sketch of these calls is given below.
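The following sketch illustrates how the Orchestration Manager could use these three calls; the endpoints and payload fields are hypothetical placeholders, not the actual web API.

```python
# Hypothetical sketch: endpoint paths and payload fields are illustrative assumptions,
# not the actual Task Execution Manager web API exposed to the Orchestration Manager.
import requests

TEM_BASE_URL = "http://executionmonitor.example.org/api"   # placeholder URL

def register_task(task_id: str, description: dict, peer_ids: list, callback_url: str):
    """Ask the TEM to monitor a task and report status changes to the OM callback."""
    return requests.post(f"{TEM_BASE_URL}/tasks", json={
        "taskId": task_id, "description": description,
        "peers": peer_ids, "notifyUrl": callback_url}, timeout=5).json()

def poll_task(task_id: str):
    """Fetch the current status of a monitored task without waiting for a notification."""
    return requests.get(f"{TEM_BASE_URL}/tasks/{task_id}", timeout=5).json()

def remove_task(task_id: str):
    """Cancel monitoring; no further notifications will be sent for this task."""
    requests.delete(f"{TEM_BASE_URL}/tasks/{task_id}", timeout=5)
```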

7.4.3 Implementation and Future Work

The Execution Monitor is currently being implemented through a joint effort of UNITN (Context Manager) and DFKI (Task Execution Manager). With the support of WP8 for the technical and platform-wide details, the integration efforts with WP4 and WP6 are already under way. We expect to be able to present a first version of the integrated Execution Monitor in time for the third project review.


References

[1] R. A. Berg, R. Hemphill, B. S. Abella, T. P. Aufderheide, D. M. Cave, M. F. Hazinski, E. B. Lerner, T. D. Rea, M. R. Sayre, and R. A. Swor, "Part 5: Adult basic life support: 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care," Circulation, vol. 122, no. 18 suppl 3, pp. S685–S705, 2010.
[2] Heart Rhythm Society, "Sudden cardiac arrest (SCA)," http://www.hrsonline.org/, April 14, 2015.
[3] S. Ainsworth, "The functions of multiple representations," Computers & Education, vol. 33, no. 2, pp. 131–152, 1999.
[4] Y. J. Dori and J. Belcher, "Learning electromagnetism with visualizations and active learning," in Visualization in Science Education. Springer, 2005, pp. 187–216.
[5] J. K. Gilbert and D. F. Treagust, Multiple Representations in Chemical Education. Springer, 2009.
[6] S. Von Stumm, B. Hell, and T. Chamorro-Premuzic, "The hungry mind: Intellectual curiosity is the third pillar of academic performance," Perspectives on Psychological Science, vol. 6, no. 6, pp. 574–588, 2011.
[7] M. Zeni, I. Zaihrayeu, and F. Giunchiglia, "Multi-device activity logging," in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, ser. UbiComp '14. New York, NY, USA: ACM, 2014, pp. 299–302. [Online]. Available: http://doi.acm.org/10.1145/2638728.2638756
[8] C. Buléon, J.-J. Parienti, L. Halbout, X. Arrot, H. D. F. Régent, D. Chelarescu, J.-L. Fellahi, J.-L. Gérard, and J.-L. Hanouz, "Improvement in chest compression quality using a feedback device (CPRmeter): A simulation randomized crossover study," The American Journal of Emergency Medicine, vol. 31, no. 10, pp. 1457–1461, 2013.
[9] D. M. González-Otero, J. Ruiz, S. Ruiz de Gauna, U. Irusta, U. Ayala, and E. Alonso, "A new method for feedback on the quality of chest compressions during cardiopulmonary resuscitation," BioMed Research International, vol. 2014, 2014.
[10] J. Yeung, R. Meeks, D. Edelson, F. Gao, J. Soar, and G. D. Perkins, "The use of CPR feedback/prompt devices during training and CPR performance: A systematic review," Resuscitation, vol. 80, no. 7, pp. 743–751, 2009.
[11] Y. Song, J. Oh, and Y. Chee, "A new chest compression depth feedback algorithm for high-quality CPR based on smartphone," Telemedicine and e-Health, 2014.
[12] T. Chan, K. Wan, J. Chan, H. Lam, Y. Wong, P. Kan et al., "New era of CPR: Application of i-technology in resuscitation," Hong Kong Journal of Emergency Medicine, vol. 19, no. 5, p. 305, 2012.
[13] L. Herzler, "Medical emergency app takes top prize at weekend hackathon," Philadelphia Business Journal, April 14, 2015. [Online]. Available: http://www.bizjournals.com/
[14] M. Wissenberg, F. K. Lippert, F. Folke, P. Weeke, C. M. Hansen, E. F. Christensen, H. Jans, P. A. Hansen, T. Lang-Jensen, J. B. Olesen et al., "Association of national initiatives to improve cardiac arrest management with rates of bystander intervention and patient survival after out-of-hospital cardiac arrest," JAMA, vol. 310, no. 13, pp. 1377–1384, 2013.
[15] J.-T. Gräsner, J. Wnent, I. Gräsner, S. Seewald, M. Fischer, and T. Jantzen, "Einfluss der Basisreanimationsmaßnahmen durch Laien auf das Überleben nach plötzlichem Herztod," Notfall + Rettungsmedizin, vol. 15, no. 7, pp. 593–599, 2012.
[16] M. Sasaki, T. Iwami, T. Kitamura, S. Nomoto, C. Nishiyama, T. Sakai, K. Tanigawa, K. Kajino, T. Irisawa, T. Nishiuchi et al., "Incidence and outcome of out-of-hospital cardiac arrest with public-access defibrillation: A descriptive epidemiological study in a large urban community," Circulation Journal, vol. 75, no. 12, pp. 2821–2826, 2011.
[17] J. P. Nolan, J. Soar, D. A. Zideman, D. Biarent, L. L. Bossaert, C. Deakin, R. W. Koster, J. Wyllie, and B. Böttiger, "European Resuscitation Council guidelines for resuscitation 2010, section 1: Executive summary," Resuscitation, vol. 81, no. 10, pp. 1219–1276, 2010.
[18] A. H. Idris, "The sweet spot: Chest compressions between 100–120/minute optimize successful resuscitation from cardiac arrest," JEMS: A Journal of Emergency Medical Services, vol. 37, no. 9, p. 4, 2012.
[19] L. Wik, J. Kramer-Johansen, H. Myklebust, H. Sørebø, L. Svensson, B. Fellows, and P. A. Steen, "Quality of cardiopulmonary resuscitation during out-of-hospital cardiac arrest," JAMA, vol. 293, no. 3, pp. 299–304, 2005.
[20] D. P. Edelson, B. S. Abella, J. Kramer-Johansen, L. Wik, H. Myklebust, A. M. Barry, R. M. Merchant, T. L. V. Hoek, P. A. Steen, and L. B. Becker, "Effects of compression depth and pre-shock pauses predict defibrillation failure during cardiac arrest," Resuscitation, vol. 71, no. 2, pp. 137–145, 2006.
[21] R. W. Koster, M. A. Baubin, L. L. Bossaert, A. Caballero, P. Cassan, M. Castrén, C. Granja, A. J. Handley, K. G. Monsieurs, G. D. Perkins et al., "European Resuscitation Council guidelines for resuscitation," Resuscitation, vol. 81, no. 10, pp. 1277–1292, 2010.
[22] T. Starner, "Project Glass: An extension of the self," IEEE Pervasive Computing, vol. 12, no. 2, pp. 14–16, 2013.
[23] J. Siegel and M. Bauer, "A field usability evaluation of a wearable system," in Wearable Computers, 1997. Digest of Papers, First International Symposium on. IEEE, 1997, pp. 18–22.
[24] T. G. Holzman, "Computer-human interface solutions for emergency medical care," Interactions, vol. 6, no. 3, pp. 13–24, 1999.
[25] P. Lukowicz, A. Timm-Giel, M. Lawo, and O. Herzog, "WearIT@work: Toward real-world industrial wearable computing," IEEE Pervasive Computing, vol. 6, no. 4, pp. 8–13, 2007.
[26] T. Nicolai, T. Sindt, H. Kenn, and H. Witt, "Case study of wearable computing for aircraft maintenance," in IFAWC: International Forum on Applied Wearable Computing. VDE VERLAG GmbH, 2005.
[27] M. Billinghurst, "Augmented reality in education," New Horizons for Learning, vol. 12, 2002.
[28] H. Kaufmann and D. Schmalstieg, "Mathematics and geometry education with collaborative augmented reality," Computers & Graphics, vol. 27, no. 3, pp. 339–345, 2003.
[29] K. Lee, "Augmented reality in education and training," TechTrends, vol. 56, no. 2, pp. 13–21, 2012.
[30] U.-V. Albrecht, U. von Jan, J. Kuebler, C. Zoeller, M. Lacher, O. J. Muensterer, M. Ettinger, M. Klintschar, and L. Hagemeier, "Google Glass for documentation of medical findings: Evaluation in forensic medicine," Journal of Medical Internet Research, vol. 16, no. 2, 2014.
[31] O. J. Muensterer, M. Lacher, C. Zoeller, M. Bronstein, and J. Kübler, "Google Glass in pediatric surgery: An exploratory study," International Journal of Surgery, vol. 12, no. 4, pp. 281–289, 2014.
[32] G. R. Parslow, "Commentary: Google Glass: A head-up display to facilitate teaching and learning," Biochemistry and Molecular Biology Education, vol. 42, no. 1, pp. 91–92, 2014.
[33] G. Ngai, S. C. Chan, J. C. Cheung, and W. W. Lau, "Deploying a wearable computing platform for computing education," IEEE Transactions on Learning Technologies, vol. 3, no. 1, pp. 45–55, 2010.
[34] J. Weppner, P. Lukowicz, M. Hirth, and J. Kuhn, "Physics education with Google Glass gPhysics experiment app," in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication. ACM, 2014, pp. 279–282.
[35] M. Hirth, J. Kuhn, and A. Müller, "Measurement of sound velocity made easy using harmonic resonant frequencies with everyday mobile technology," The Physics Teacher, vol. 53, no. 2, pp. 120–121, 2015.
[36] J. Kuhn and P. Vogt, "Applications and examples of experiments with mobile phones and smartphones in physics lessons," Frontiers in Sensors, vol. 1, pp. 67–73, 2013.
[37] B. Vogel, D. Spikol, A. Kurti, and M. Milrad, "Integrating mobile, web and sensory technologies to support inquiry-based science learning," in Wireless, Mobile and Ubiquitous Technologies in Education (WMUTE), 2010 6th IEEE International Conference on. IEEE, 2010, pp. 65–72.
[38] J. A. Litman and C. D. Spielberger, "Measuring epistemic curiosity and its diversive and specific components," Journal of Personality Assessment, vol. 80, no. 1, pp. 75–86, 2003.
[39] P. Chandler and J. Sweller, "Cognitive load theory and the format of instruction," Cognition and Instruction, vol. 8, no. 4, pp. 293–332, 1991.
[40] D. Riboni and C. Bettini, "COSAR: Hybrid reasoning for context-aware activity recognition," Personal and Ubiquitous Computing, vol. 15, no. 3, pp. 271–289, 2011.


A Details on: Smart-Watch Live Saver

A.1 Introduction

Out of Hospital Cardiac Arrest (OHCA) is one of the leading causes of death in the western world. In the US alone more than 350,000 people die due to OHCA every year (one death every 90 seconds), and it causes approximately 40% of deaths in adults younger than 75 every year in Europe. Over 95% of those experiencing OHCA die because CPR is not commenced quickly enough [2]. Chances of survival decrease by 7-10% for every minute in the absence of Cardiopulmonary Resuscitation (CPR). Unfortunately, despite relatively high numbers of people being trained (due to obligatory First Aid courses as part of driving license preparation in many European countries), the reported incidence of lay bystander participation in CPR remains low. This may be due to a failure to recall the procedure or to fear of causing harm. Our work aims to improve lay bystander engagement and performance by producing a Smart-Watch based, interactive live-feedback system.

A.1.1 Contribution of WP3

Our contribution to help improve bystander CPR engagement and performance is described in this work:
• We developed an easy-to-use CPR feedback application for a Smart-Watch, designed to allow untrained people to perform CPR correctly in emergencies. As watches are worn most of the time by their owners, this application is always at hand, without the requirement for additional and expensive equipment.
• We evaluated the CPR watch application with 41 untrained testers in three modalities.
• Using the two main quantitative indicators, frequency and compression depth, the results clearly demonstrated CPR improvement using the CPR watch. For example, with the CPR watch around half of the subjects managed to stay within the recommended range for both parameters (correct frequency and compression depth) for at least 50% of the time they performed CPR. Approximately 70% of the participants managed to stay in the recommended range for both parameters for more than 30% of the time. On the other hand, without the assistance of the watch (even after receiving an oral reminder about the CPR procedure), just 20% managed to perform CPR correctly for about 50% of the time, while only 30% performed CPR correctly (frequency and depth) for 30% of the time.

A.2 Status Quo

A.2.1 CPR Devices

Using CPR feedback devices can help to enhance the quality of CPR. A number of such devices are commercially available. One of the most common devices is the CPR-meter (available from Laerdal and Philips); the Laerdal CPR-meter was used in this evaluation. Studies such as Buléon et al. [8] show significant improvements in CPR when using such devices. Other studies, like González-Otero et al. [9], introduce alternative devices such as photoelectric distance sensors, which also improve performance. In addition, Yeung et al. [10] conducted a systematic review of the literature. Their findings support the use of CPR feedback/prompt devices for improving skills during training. These devices, however, are expensive (hundreds of Euros) and are meant for laboratory and medical training environments. They are not intended for use by the general public.

A.2.2 CPR support on Smartphones

There does not appear to be any iPhone app for CPR assistance in the Apple App Store. The Google Play Store lists several apps that are intended to assist in CPR, but these mainly give information about First Aid and instructions for the performance of CPR. Only CPR Metronome provides live instruction by emitting sound at the proper frequency, yet it does not provide live performance feedback. A number of research studies have reported using smartphone apps for CPR measurement. For example, Song et al. [11] used the trajectories derived by double integration of the acceleration of a smartphone for measuring compression depth. Their evaluation of the system showed only a very small error range of 1.43 mm with a standard deviation of 1 mm. Chan et al. [12] evaluated a CPR feedback application for iPhones in a controlled study (control group without iPhone app). The iPhone group reached a better compression depth than the control group.

A.2.3 CPR support on a Smart-watch

The idea of using a Smart-watch for assisting in CPR seems not to be entirely new. A thorough literature search revealed an article in the Philadelphia Business Journal, which reported on "Lifesaver", a Smartwatch app that was developed during the PennApps weekend hackathon [13] in January 2015. To the best of our knowledge, however, this app has not yet been published, nor have any studies been performed with it.

A.3 The Smart Watch Live Saver Concept

A.3.1 The Situation - Bystander CPR

In Europe, the share of people willing to actually perform First Aid and CPR differs by country; the average lies at 66%. Wissenberg et al. [14] studied the rate of Out of Hospital Cardiac Arrest (OHCA) in Denmark over 10 years (2001-2010): lay bystander resuscitation was attempted in a total of 19,468 patients. The rate of bystander CPR increased over the study period from 21.1% in 2001 to 44.9% in 2010. According to the Red Cross and the ADAC, in the German-speaking countries only 15-20% of people would actually perform CPR. The main reason for people not to help is that they are insecure about what to do and therefore afraid of causing harm. Gräsner et al. [15] report on the adult OHCA incidence between 2004 and 2011 in Germany (n=11,788). Bystander CPR was initiated most often in patients between 18 and 20 years (25%), and least often in those over 80 years (12%). It was also noted that bystander CPR of a witnessed OHCA was performed significantly less often in private homes than in public areas. These are interesting observations, from which one may surmise that being in public places makes bystanders more willing to act, either because they feel it is expected by others or because they are encouraged by their support. Sasaki et al. [16] recorded the incidence of OHCA in Osaka, Japan. As the availability of Automated External Defibrillators in public places increased, their rate of usage by lay rescuers climbed from 0% in 2004 to 11% in 2008. This demonstrates a willingness of the lay bystander to utilize technological equipment as a means of support.

A.3.2 The Correct Way - CPR Suggestions and Effects

CPR (Cardiopulmonary Resuscitation) was first introduced approximately 50 years ago. Ever since, scientific evidence has led to periodic changes in CPR techniques. In 2010, the European Resuscitation Council (ERC) and the American Heart Association (AHA) published the currently valid evidence-based guidelines for resuscitation: they suggest performing compressions at a rate of at least 100/min, with a compression depth of at least 2 inches/5 cm [1]. The ERC specifies the guidelines more precisely and suggests a compression depth of at least 5 cm (but not exceeding 6 cm) at a rate of at least 100/min (but not exceeding 120/min) [17]. Both agencies still agree on recommending chest compressions and rescue breaths in a ratio of 30:2. Since then, further research has corroborated these values. It was shown [18] that CPR is in fact most effective at a frequency of 100-120 cpm (compressions per minute), while the effectiveness declines when the frequency exceeds 125 cpm. Other research indicates that a compression depth of 40 mm or less results in lower victim survival than a compression depth of 50 mm and more [19, 20]. Even though there appears to be no evidence that a greater compression depth causes damage, the ERC recommends that a compression depth of 60 mm should not be exceeded, even in large adults [21].

A.3.3 The Solution - CPR Watch

It was apparent that any tool that helped the rescuer to maintain the optimal frequency and compression depth would be very beneficial. Thus, we developed an easy-to-understand CPR feedback application for the LG G Watch R Smart-Watch with Android Wear OS. This application has three main functionalities:
• Frequency: When the app is started, the watch begins to vibrate and blink (black/blue, see Figure 8 A/B) at 110 cpm (the average of the ideal compression rate of 100-120). Due to the lack of a loudspeaker, audio feedback is not possible.
• Depth: The feedback for the compression depth is given by color (Figure 8 C-E). The center of the display stays green as long as the compression depth is within the ideal range of approx. 50-60 mm, turns yellow if the compression depth goes beyond 60 mm, and turns red if the minimum compression depth is not reached.
• Counting: Following the ERC and AHA recommendation of a 30/2 compression/rescue-breath ratio, the watch counts the compressions backwards from 30 and stops vibrating/blinking after 30 effective compressions. In case the minimal compression depth is not reached (red display), the watch stops counting backwards until a sufficient depth is reached again.

Figure 8: The Smart-Watch indicates a frequency of 110 cpm by a blue/black (A, B) blinking ring and the vibration motor. The center displays the number of compressions necessary to complete 30 compressions and indicates the quality of the detected compression by color: green (C): compression is within the suggested interval (50-60 mm); yellow (D): compression too strong (beyond 60 mm, but still supports the life-saving effect); red (E): compression is too weak (no life-saving effect), push harder.

The CPR compressions are recognized using the accelerometer of the Smart-Watch (see Figure 9). The magnitude of the acceleration vector allows us to estimate both the CPR frequency and the compression depth. By using a peak detector on the signal, we retrieve its local minima and maxima. The time differences between the peaks are used to estimate the frequency; the amplitudes (the y-axis distance between max and min peaks) are used to assess the compression quality. The derived amplitudes have been compared to the CPR-meter and adjusted accordingly.
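As an illustration of the peak-based estimation described above, the following is a minimal sketch assuming the acceleration magnitude is available as a uniformly sampled array; the sampling rate and thresholds are assumptions, whereas the actual app was calibrated against the CPR-meter.

```python
# Minimal sketch of peak-based frequency/amplitude estimation from the acceleration
# magnitude (illustrative thresholds; the actual app was calibrated against a CPR-meter).
import numpy as np
from scipy.signal import find_peaks

def analyse_compressions(acc_magnitude: np.ndarray, fs: float = 50.0):
    """Return estimated compression rate (cpm) and peak-to-trough amplitudes."""
    # Detect compression peaks; distance enforces at most ~150 cpm, prominence rejects noise.
    peaks, _ = find_peaks(acc_magnitude, distance=int(fs * 60 / 150), prominence=2.0)
    troughs, _ = find_peaks(-acc_magnitude, distance=int(fs * 60 / 150), prominence=2.0)
    if len(peaks) < 2:
        return 0.0, np.array([])
    rate_cpm = 60.0 / (np.mean(np.diff(peaks)) / fs)          # from inter-peak intervals
    n = min(len(peaks), len(troughs))
    amplitudes = acc_magnitude[peaks[:n]] - acc_magnitude[troughs[:n]]  # proxy for depth
    return rate_cpm, amplitudes
```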

A.4 Evaluation

A.4.1 Study Design

In order to test the effectiveness of our Smart-watch app, we asked study participants to perform CPR on a standard CPR training manikin, "Little Anne" (Laerdal, www.laerdal.com). To establish a baseline, we used a CPR-meter (Laerdal QCPR). This device is able to record all important aspects of CPR (compression depth, frequency, ideal zones of contact). During CPR, the CPR-meter can provide ongoing user feedback for the key CPR elements; this feedback was blinded or disabled for the purpose of this study. The parameters of this device were set to the standards currently valid in Europe (compression depth 50-60 mm and frequency 100-120 per minute).


Figure 9: Amplitudes of two compression cycles, with acceleration intervals for "compression too low" (red), "compression ok" (green) and "compression too strong" (yellow); the y-axis shows the acceleration in m/s², the x-axis the time in s.

Study Implementation

Overall, every study participant was asked to perform CPR in three different modalities:
1. Without any additional information: first, every test-person was asked to perform CPR the way they remembered it from their last First Aid course.
2. With assistance of the watch: for the second run we explained the current CPR regulations and the functionality of the watch to the participants. Afterwards the participants performed CPR again, but now with the assistance of the watch.
3. With prior explanations/briefing: after a first analysis of the data it was clear that there were distinct effects and improvements visible with the assistance of the watch. As these effects might have been caused by the explanation of how CPR works, we decided to repeat the measurements of CPR without the watch two weeks after the initial data recordings. So, in the third run, we asked the study participants once again to perform CPR without the watch, but with a reminder of the current CPR regulations.

Study Group The group of participants was defined as any lay member of the public (for whom the app is actually designed). The only exclusion criterion was a medical/resuscitation background, as for nurses, police, and people who have undertaken frequent first aid training that included CPR. In total 41 people participated, 24 male and 17 female, aged between 24 and 70 (average age 37, SD 13). Each study participant provided the date of their last first aid course. The most common reply was "during the course for the driver's license", 5-35 years ago (average 16 years, SD 10). Only 5 out of 41 had refreshed their first aid course at least once since, and this had been between 2 and 25 years previously.

Data Set In order to record a sufficient amount of data per person and for each of the three modalities, every test-subject was asked to perform the current standard of CPR, 30/2 (30 compressions, 2 rescue breaths; the test-persons were not asked to actually perform rescue breaths on the manikin but only to take a break of 2-3 seconds), 5 times in a row. On the one hand, this could provide insights into a potential learning curve (specifically for the data-sets using the watch). On the other hand, it also allowed analysis of effects resulting from participants getting tired (specifically for the data-sets without assistance of the watch). Therefore, in total we have:
• approx. 190 recordings (5700 compressions) of CPR being performed as 30/2 the way people would perform CPR without any additional information,
• approx. 190 recordings of CPR being performed as 30/2 with the assistance of the watch,
• and approx. 180 recordings (5400 compressions) of CPR being performed as 30/2 after prior refreshing of the information on how CPR should be performed according to the current standards and regulations.
Figure 10 shows a test-subject trying to recall how CPR works without any assistance, and Figure 11 shows the CPR watch in action.

A.5 Results

Analysis of the CPR data recorded by the CPR-meter indicates that the assistance of the Smart-watch has a pronounced positive effect on the quality of the performed CPR. Table 1 provides the details: in the non-intervention group, without briefing or app assistance, on average the participants only managed to keep the ideal frequency for 19.78% of the time (SD 33.7) and the ideal depth for 48.7% of the time (SD 25.8). Using the Smart-Watch for assistance, the time spent at the ideal frequency increased by more than 200% (to 61.31%) and at the ideal compression depth by more than 30% (to 65.01%).

Figure 10: Study participant trying to recall how CPR is done correctly. The CPR-meter (grey device beneath the test-subject's hands) measures the correctness.

Figure 11: CPR watch in action on a CPR training manikin and with the CPR-meter (grey device beneath the test-subject's hands) for analysis.


Analysis of the number of participants who managed to stay in the ideal range (depth and frequency) reveals that only 57.5% were able to do so at all, and they only achieved this for about 20% of the time without app assistance or briefing. In contrast, with watch assistance, 95% of the participants reached the ideal range, maintaining it on average for more than 50% of the time. For most participants the third run, without the watch but with extensive prior information about correct CPR, slightly enhances the result in comparison with the first run without any additional information. On average the participants stay at the ideal depth 45% of the time (SD 41.9) and at the ideal frequency 44.7% of the time (SD 30.2). It can be clearly seen that even extensive prior information and two previous sessions (one with and one without the watch) provide less improvement than the interactive feedback from the Smart-Watch system. Table 1 (last three columns) shows details of the improvement of the results from "no information" to "watch", from "no information" to "prior explanation", and from "prior information" to "watch".

A.5.1 Ideal Range

The last row of Table 1 indicates that without help or additional information (as would be the case during a sudden incident, where getting explanations cannot be expected), only 57% of all test-subjects managed to perform CPR in the ideal range at least for a short time, and those who did were only able to stay there for an average of 20% of the time! On the other hand, with assistance by the watch, only 5% of the test-subjects were not able to find the ideal range; all others reached the ideal range for an average of 50% of the time. The following subsection provides a more detailed analysis of this data.

Doing it right! Upon further perusal, some interesting further details are revealed. These are displayed in Figures 12, 13 and 14: without any help, more than 70% of all test-subjects were only able to reach the ideal range (= both ideal compression depth and ideal frequency at the same time; see Figure 14) for less than 10% of the time (48% did not even manage to find the ideal range at all), and only 5% of the test-subjects were able to stay in the ideal range for more than 50% of the time! After getting an introduction on how CPR has to be performed, this situation improved slightly. However, almost 50% of the test-subjects still only managed to stay in the ideal range for less than 10% of the time (and 30% still did not manage at all). Only 14% were able to stay in the ideal range for almost 100% of the time.

                                      w/o info         prior expl.      watch            Improvement        Improvement        Improvement
                                      (N = 40)         (N = 35)         (N = 41)         w/o info to watch  pri. expl. to watch  w/o info to pri. expl.
  av. depth in mm (SD)                60.49 (9.77)     61.66 (8.83)     59.76 (7.04)
  ideal depth 50-60 mm (SD)           48.31% (26.46)   45.15% (29.81)   65.01% (23.87)   34.56%             43.98%             -6.54%
  too shallow < 50 mm (SD)            21.99% (30.23)   12.46% (29.67)   17.16% (25.68)   28.15%             -27.41%            -43.35%
  too deep > 60 mm (SD)               32.56% (27.80)   42.74% (34.59)   20.32% (21.34)   60.22%             110.34%            31.28%
  av. frequency in cpm (SD)           102.12 (28.10)   107.05 (18.84)   104.40 (10.41)
  ideal freq. 100-120 cpm (SD)        19.78% (35.33)   43.91% (41.81)   61.31% (29.79)   209.96%            39.63%             121.98%
  too slow < 100 cpm (SD)             51.70% (47.72)   31.62% (41.81)   32.13% (31.54)   60.91%             -1.58%             -38.83%
  too fast > 120 cpm (SD)             29.92% (40.56)   26.10% (33.66)   9.32% (20.28)    221.03%            180.08%            -12.76%
  ideal depth + freq. 50-60/100-120 (SD)  18.14% (24.73)   25.7% (28.71)    52.14% (23.86)   160.4%             80.8%              44.0%
  persons reaching ideal (% of total)     57.5%            80%              95.1%            65.4%              18.9%              39.1%

Table 1: Total time in percent that participants were (in)correctly performing CPR, both for compression depth and frequency individually and combined (actually doing it "right"), and how many participants were able to achieve this (columns 2-4); and the improvement in the total time of (in)correct CPR between the three modalities "w/o any information", "with prior explanation how to do it correctly" and "using the watch", for compression depth and frequency individually and combined (columns 5-7).

With the assistance of the CPR watch, performance improved significantly. Only 15% (6 out of 41) of the test-subjects failed completely in reaching the ideal range. More than 50% managed to stay in the ideal range over 50% of the time (29% of the test-persons even achieved the ideal range for over 75% of the time). This amounts to an improvement of more than 45 percentage points (pp)!

Ideal Depth or Ideal Frequency The analysis of the two important aspects, compression depth and frequency, demonstrates that the correct depth is easier to achieve; see Figure 13. Even without help more than 50% of the participants are able to reach a depth of 50-60 mm for more than 50% of the time (only 10% stay below 50 mm for 10% of the time, with 10% not reaching 50-60 mm at all). Nevertheless, usage of the watch improves performance. More than 70% of the test-subjects stay in the ideal depth for more than 50% of the time (with 50% of the test-subjects staying for more than 75% of the time, an improvement of 24 pp). Regarding the compression frequency, most test-subjects are either too slow or too fast when they do not receive additional help. 75% of them do not even manage to stay at the ideal frequency of 100-120 cpm for more than 10% of the time. This changes significantly with the assistance of the watch. In this case, approximately 80% of the test-persons can keep the rhythm and stay at the ideal frequency for more than 50% of the time, an improvement of 60 pp. 50% even manage to stay at the ideal frequency for more than 75% of the time (a 36 pp improvement). See Figure 12.

Figure 12: The percentage of test-persons managing to spend how much time in the ideal compression frequency (100-120 cpm) for the three modalities "w/o any information" - blue, "with prior explanation how to do it correctly" - red, and "using the watch" - green.

The analysis of the ideal ranges shows that the most significant benefit of the watch is helping people to find the ideal compression rhythm (60 pp improvement). Looking at Figure 15, an interesting observation related to rhythm is that participants either performed very well or very poorly (irrespective of whether they had a briefing/information session or not). What the watch appears to do is to help people keep the frequency, especially those who are originally weak in this area. Thus, most of those who scored poorly without help (only around 10% of the time at the ideal frequency) improve dramatically to more than 50% of the time at the ideal frequency.
Figure 13: The percentage of test-persons managing to spend how much time in the ideal compression depth (50-60 mm) for the three modalities "w/o any information" - blue, "with prior explanation how to do it correctly" - red, and "using the watch" - green.

Figure 14: The percentage of test-persons managing to spend how much time doing correct CPR (ideal frequency and ideal compression depth) for the three modalities "w/o any information" - blue, "with prior explanation how to do it correctly" - red, and "using the watch" - green.

A.5.2 Deviations and Indications for a Learning Curve

One other positive effect of using the watch is that significantly fewer people deviate from the ideal ranges and, moreover, they do so with smaller deviations. With regard to compression frequency, on average 27 persons (65%) deviated from the suggested frequency (average deviation 20 cpm, SD 12) without the watch. With assistance, only 8 persons (20%) deviated (average deviation 8 cpm, SD 6). Regarding compression depth, while without help an average of 21 (50%) people deviated from the ideal range (average deviation 4 mm, SD 4), only 15 (36%) people deviated when helped by the watch (average deviation 4 mm, SD 4). Furthermore, the comparison of the 5 cycles (one cycle is 30 compressions and 2 breaths) of each participant per modality indicated trends towards a learning curve when using the watch. For both depth and frequency, during the first cycle more people deviate from the ideal range (8 frequency / 16 depth) than in the last cycle (6 frequency / 11 depth, with an increase at the second/third cycle). For the other modalities, the decrease of deviations is either less steep or nonexistent; see Figure 16. This could be an indication that people using the watch start to learn how to use the watch and therefore more quickly learn to perform CPR correctly! Nevertheless, the number of cycles is not large enough to actually confirm the details of a learning curve. This topic is part of an ongoing study with nurse-students.

Figure 15: The percentage of test-persons managing to spend how much time in the ideal compression frequency (100-120 cpm) for the three modalities "w/o any information" - blue, "with prior explanation how to do it correctly" - red, and "using the watch" - green.

Figure 16: Number of persons deviating from the ideal depth (left) and the ideal frequency (right) over the course of the 5 runs.

A.6 Participants' Feedback

After the interventions, a feedback questionnaire was sent to the thirty participants who were accessible. 28 completed the survey, giving a return rate of 93% and representing 68% of the initial 41 participants. 100% of the respondents stated that the topic of the study was very important or important. Furthermore, 93% were positive that a live-feedback system like the CPR watch could help to save more lives (only 7% were neutral), and 83% believed that such a system could remove the fear of doing damage whilst performing CPR. Asked how secure they felt in their understanding of CPR before participating in our study, 35% replied "secure", 25% were "neutral" and 40% were "insecure". Regarding our study and its outcome, the following questions were more interesting: 89% of all study participants stated that they felt much safer while performing CPR with the assistance of the watch than without, and 92% are sure that they performed better with the assistance of the watch. 75% would immediately install such an app on their Smart-Watch if they owned one. More details are listed in Table 2.

                                                                         absolutely   yes     neutral   no      not at all
  Is the topic of the study (bystander CPR) relevant for you personally?   25.0       0.0     0.0       0.0     75.0
  Could a Live-Feedback System help saving lives?                          32.1       60.7    7.1       0.0     0.0
  Could such a system help to reduce fear of doing damage in CPR?          53.6       3.6     7.1       0.0     35.7
                                                                         very secure  secure  neutral   insecure  very insecure
  How secure were you about how CPR works before the study?                3.6        32.1    25.0      35.7    3.6
                                                                         absolutely   yes     neutral   no      not at all
  Did the watch help you to feel more secure?                              35.7       53.6    7.1       3.6     0.0
  Did the watch help you to perform CPR better?                            46.4       46.4    7.1       0.0     0.0
  Did the watch irritate you while performing CPR?                         3.6        7.1     39.3      50.0    0.0
  Would you install this App on your Smart-Watch (if you had one)?         35.7       39.3    10.7      14.3    0.0

Table 2: Participants' feedback (values in percent). The relevance of the topic is clear to all, and for most questions the replies are quite in unison, as most favor using the watch for assistance.

A.7 Conclusion and Outlook

In particular, Figure 14 emphasizes that smart-watch based CPR assistance has tremendous potential for improving bystander CPR and that this is an app that could truly save lives. Pending clarification of potential liability issues, we thus intend to make it available through the app store. In terms of future work, the key question is how far the impact of the system could be strengthened through more elaborate and possibly personalized feedback mechanisms. We are also currently experimenting with using a combination of Smart-Watch and Smart-Glasses. As already mentioned above, we currently have an ongoing study with nurse-students. The hypothesis of this study is that students using the CPR watch while training internalize CPR faster than students training without CPR watch assistance.

A.8 German Study Description and Informed Consent

The following study description and informed consent form was handed to the participants and signed by every one of them.

Figure 17: CPR watch study description page 1 in German (motivation, study procedure and risks of participation, as handed to the participants).

Figure 18: CPR watch study description page 2 in German (benefits, costs and compensation, confidentiality, contact person).

Figure 19: CPR watch study informed consent form in German.

B Details on: Smart-Glass Teaching

Smart Glasses are a new class of wearable systems that extend the original vision of head-mounted displays (HMDs) towards a broader concept of head-centered interaction and sensing. Devices like Google Glass [22] are full-blown mobile computers that combine an HMD, a headphone, a multi-touch touchpad, head motion sensing, eye blink sensing, a microphone, a first-person camera, a significant amount of flash storage, and various communication capabilities. As a result, they enable users to seamlessly blend their interactions in the physical and in the digital world, fusing and manipulating information from both worlds with minimal interference with other activities. Here, we demonstrate how this capability can be used to support high school science education. As a concrete case study, we have developed and evaluated gPhysics: a Google Glass based app to support students in conducting a specific physics experiment in the area of acoustics. The vision is to utilise the Google Glass device to (1) reduce the "technical" effort involved in conducting the experiments (measuring, generating plots etc.) and (2) allow the students to interactively see/manipulate the theoretical representation of the relevant phenomena while at the same time interacting with them in the real world.

B.1 Status Quo

The use of HMDs combined with various types of sensing and multimodal input systems has been suggested and demonstrated in a variety of domains such as, among others, maintenance, production, and emergency response (see e.g. [23, 24, 25, 26]). HMD-based augmented/virtual reality has also been discussed in many application areas, including education [27, 28, 29]. Smart Glasses, being much less obtrusive than the traditional wearable systems on which much previous work was based, have been investigated in domains such as medical documentation [30] and surgery [31]. Initial ideas for educational use have also been discussed (see e.g. [32, 33, 34]). In terms of using mobile devices for science education and experimentation, there has recently been much interest in leveraging built-in sensors or using them as a seamless interface to networked sensors [35, 36, 37]. Our work builds on this trend and the insights it has produced, taking it a step further with the use of a wearable Smart Glass device rather than a mere mobile platform.

B.2 Use Case: The Water Glass Frequency Experiment

To explore and validate the above vision we have chosen a concrete experiment that is part of many high school physics curricula. The basic idea is studying a well-known everyday phenomena: When water is filled in the water glass, the frequency of the tone that sounds when hitting the glass with e.g. a peg gets lower. This happens because as water is added, more mass is added to the water glass. More mass results in a smaller/lower vibrating frequency, and less mass produces a faster/higher vibrating frequency of the wall of the glass. But instead of the supposed obvious relationship that the more water would be in the glass the higher is the tone, this everyday phenomena is much more complicated. To make the real relationship obvious is the basic intention of this experiment. The core idea is to allow the students to fill the glass and test the frequency while having the Google Glass incrementally generate the graph showing the relationship between fill level and frequency. Noticeably, the phenomenon to be detected by students is that the pitch does not correlate linearly with the fill level although this would be assumed due to everyday experiences. Until the water glass is nearly filled half the pitch will change less when a fixed amount of water is added compared to when water glass is nearly full. B.2.1

B.2.1 Google Glass Based Water Glass Experiment

The core idea is to have the Google Glass device measure both the water fill level and the sound frequency and incrementally build the fill level/frequency graph in the head mounted display. The water level measurement is done using the built-in camera, while the sound is recorded with the built-in microphone. For the confirmation and correction of measurements we provide a choice of a hands-free head motion/eye blink interface or the Google Glass touchpad. Thus, the student can view the results on the display as the experiment evolves while adding water to or removing water from the glass.

B.3 The gPhysics App

We have developed a Google Glass App (gPhysics) that implements the above idea through the steps shown in figure 20. Having started the gPhysics App, the first activity is to adjust the filling level of the glass. We chose a computer vision approach to estimate the fill level of the glass automatically. The live stream of the camera is shown in the display of Google Glass. To initiate the estimation of the filling level, the user first needs to adjust the camera with his/her head so that it focuses on the glass.

Figure 20: Workflow of the gPhysics App: a) fill the glass with the desired amount of water and adjust the head to focus the glass with the camera; b) take a picture with the camera of Google Glass to measure the fill level; c) hit the glass with the peg and measure the frequency three times; adjustment in case of a wrongly recognized fill level (also through the touchpad); confirm or correct the filling level and frequency; observe the results in the graph view. Setup of the glass used for visual fill level estimation during the experiments: five orange stripes are fixed to the back of the glass; the fluid is bright green. (Images: https://support.google.com/glass/answer/3064184)

Double blinking with the right eye or a tap on the touchpad starts the actual estimation by taking a picture with the camera and starting the processing. Having estimated the filling level of the glass, the user is forwarded to the frequency measuring activity. He/she can start hitting the glass with the peg at any time. For detecting the water glass frequency we use the built-in microphone of the Google Glass. Next, the user is forwarded to a confirmation screen which displays the estimated filling level and the measured frequency and asks for confirmation. If the two values are correct, a double blink or a tap gesture on the touchpad of the device leads the user on to the graph view. In case one or both of the values are not correct (e.g., when the filling level was estimated incorrectly or the frequency is wrong due to background noise during the measurement), the user can correct them separately or may alternatively discard both and go back to step 1. The three options can be accessed by scrolling through a horizontal three-item menu, either by forward/backward swiping on the touchpad or by moving the head left or right, respectively. The correction of the filling level is done by adjusting a vertical slider to the correct
value. Once again, the selection can be done using the head gesture (up/down head movement and double blinking for confirmation) or the touchpad (left/right scrolling and tapping for confirmation) interface. At this point, we don’t go back to visual estimation as this alternative method of selection is not as error-prone as the visual detection. In case the frequency was not measured correctly, the user is directed to the frequency measuring activity. Finally, measurements (filling level, frequency) are visualized in a graph view plotting the filling level along the x-axis and the frequency along the y-axis. Through a horizontal three-item menu the user has the possibility to delete existing entries in the reverse order they were created including the latest point, reset the whole graph, or accept the new entry and go back to step 1 to record a new measurement. As described before, the navigation through the menu can be either done using the head gesture interface or the touchpad of the Google Glass device.

B.4 The gPhysics System Implementation

We implemented the Google Glass application with the Glass Development Kit (GDK), an add-on to the Android SDK that allows us to build Glassware running directly on the Google Glass device (as opposed to the Google Glass Mirror API, which does not allow full hardware access and interaction). The visualization and input (including eye blink and head motion detection) build on the provided routines and require no further explanation. Thus the two core components of the gPhysics App are the visual recognition of the glass fill level and the tone frequency measurement (which had to take into account filtering out higher harmonics).

B.4.1 Automatic Fill Level Detection From Images

First it has to be noted that the purpose of this work was not to develop a novel video processing method that works under complex real-life conditions. Instead, we build on the fact that the experiments are performed in a controlled lab environment where colored water and a glass with clear markings can be used and the background can be kept largely free of clutter. An example of the glass and fluid can be seen in figure 20 (top middle). We used a bright green fluid created with green food coloring and five orange stripes with their upper edges aligned with the 100 %, 75 %, 50 %, 25 %, and 0 % level markings, respectively. In all further descriptions below we will refer to this setup. The actual image processing is implemented with the OpenCV computer vision libraries and essentially consists of two stages: first, the detection of the fluid color
ponent and second, the detection of the colored labels and the estimation of the filling level. Originally we implemented the entire detection to run on the Glass device. However, together with the sound processing it made the system overheat and we had to transmit the images to an external computer and do the processing there. B.4.2

B.4.2 Detection of the fluid color component

Initially, the input image from the camera is converted from RGB to HSV color space, which is commonly used for colour segmentation purposes. Next, the HSV image is thresholded by applying upper and lower bounds on the hue (H), saturation (S), and intensity (V) values of the pixels to create a binary mask image in which green pixels are marked as ones and non-green pixels as zeros. To clean the mask of insignificantly small green areas, it is post-processed with morphological operators. To localize the fluid color component, we compute the contours of connected components in the mask image and filter out those which are insignificantly small. From the remaining significant ones we assume the biggest contour to correspond to the fluid component and compute its bounding box. If there are several significant components, we merge them with the biggest one if they are closer than a certain threshold with respect to the distance between their respective bounding rectangles. The distance between two rectangles is thereby given by first determining the relative position of the two rectangles (top, top right, right, ...) and then using the minimum edge or corner distance, respectively. If the rectangles overlap, the distance is 0. In case no appropriate component exists, we assume that the filling level is 0 %.
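To make the processing chain concrete, the following minimal Python/OpenCV sketch illustrates the kind of fluid-colour detection described above (HSV thresholding, morphological clean-up and contour selection). The colour bounds, kernel size and minimum area are illustrative assumptions, not the values used in the actual gPhysics implementation, and the merging of nearby significant components is omitted for brevity.

import cv2
import numpy as np

# Illustrative HSV bounds for the bright green fluid (not the project's calibrated values)
GREEN_LOW = np.array([40, 80, 80])
GREEN_HIGH = np.array([80, 255, 255])

def fluid_bounding_box(image_bgr, min_area=500):
    """Return the bounding box (x, y, w, h) of the fluid colour component, or None."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)            # binary mask of green pixels
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove insignificantly small specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes in the mask
    # OpenCV 4.x signature: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None                                            # no fluid component: fill level 0 %
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)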

B.4.3 Detection of the colored labels and estimation of the fill level

For the remaining part of the algorithm, all further image processing is restricted to sub-images covering only the area above the bounding box (region of interest) of the (merged) fluid color component. First, the HSV image is thresholded again to create another binary mask, this time containing the orange pixels of the color labels. Next, cleaning up, contour extraction, and component filtering are done in a similar way as described above. In case no orange contour is found at all, the algorithm assumes that the glass is full and returns a fill level of 100 %. Otherwise, the mean points of the remaining contours are computed, projected onto the vertical line going through the center of the region of interest, and the components are merged together if their projected mean points are closer to each other than a certain threshold. This merging step is necessary in case the contour of a color label is not detected as a whole, but is fragmented into two or more color components. From
the merged contours, we compute the mean distance d_mean between two adjacent ones, filtering out those which are either too far away from each other or too close, by comparing the distance between the upper edges of their bounding rectangles. From the resulting set of extracted color labels, the fill level is determined by one of the following two cases. Let n be the overall number of color labels which are fixed to the glass and m be the number of extracted (visible) color labels.

1. If m = 1, the fill level is estimated as f = (h_f / h_total) · 100%, where h_f is the height of the bounding rectangle of the fluid color component in pixels and h_total is the vertical distance in pixels between the upper edge of the only color label bounding box and the lower edge of the fluid component bounding rectangle.

2. If m > 1, the fill level is f = (1 − (m−1)/(n−1) − d_fluid/(d_mean · (n−1))) · 100%, where d_mean is the mean distance in pixels between the extracted color labels as explained before and d_fluid is the distance between the upper edge of the bounding box of the color label and the upper edge of the fluid bounding rectangle.

Using the number of visible color labels as additional information, the more complex formula yielded more stable results.
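As a worked illustration of the two cases, the sketch below computes the fill level from the measured pixel distances. The function and the example numbers are ours, and the second case follows our reading of the formula above, in which the lowest visible label sits at (n−m)/(n−1) of the glass height and the fluid surface lies d_fluid below it.

def fill_level_percent(m, n, h_f=None, h_total=None, d_fluid=None, d_mean=None):
    """Estimate the fill level in % from the extracted colour labels.

    m: number of visible colour labels, n: total number of labels on the glass.
    Case m == 1 uses the fluid height h_f relative to h_total; case m > 1 uses
    the mean label spacing d_mean and the distance d_fluid between the lowest
    visible label and the top of the fluid component.
    """
    if m == 1:
        return (h_f / h_total) * 100.0
    # One label spacing d_mean corresponds to 1/(n - 1) of the glass height.
    return (1.0 - (m - 1) / (n - 1) - d_fluid / (d_mean * (n - 1))) * 100.0

# Example: 5 labels on the glass, 3 still visible, fluid surface 10 px below the
# lowest visible label, labels 40 px apart: 50 % - 6.25 % = 43.75 %
print(fill_level_percent(m=3, n=5, d_fluid=10, d_mean=40))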

B.5 Frequency Measurement

For detecting the water glass frequency we use the built-in microphone of the Google Glass. The main requirement for real-time water glass frequency detection is a robust, resource-saving and accurate algorithm. The resonant frequency detection algorithm is a multi-step pipeline, each stage forwarding its provisional result to the next, following this series of steps:

1. Reading the audio buffer
2. Applying the Fast Fourier Transform
3. Filtering frequencies between 650 Hz and 2000 Hz
4. Detecting the frequency with the highest magnitude based on the power spectrum
5. Validating the detected frequency with a magnitude threshold (0.5)
6. Detecting sequential ascending, resonant and descending values in the window sequence
7. If the sequence is valid: computing the resulting frequency value
8. If the sequence is invalid: searching for a new valid sequence

The audio buffer is read continuously (every few milliseconds). Each audio buffer window is transformed by the Fast Fourier Transform (FFT) into the frequency domain, and the frequency with the highest magnitude in the power spectrum is retrieved. Harmonic frequencies are not detected as they are quieter than the fundamental oscillation. Only frequencies between 650 Hz and 2000 Hz are analyzed, which is the known frequency range of the given water glass. A validation step is attached to ignore sound input with a broad frequency spectrum within the allowed frequency range: if the magnitude of the detected frequency is lower than an empirically determined threshold (0.5), it is rejected. While the water glass is tapped, the algorithm detects ascending frequencies, lingering peaks with similar frequencies (while the water glass is in resonance) and descending frequencies (while the resonance fades out). If ascending and descending frequencies enclosing a run of very similar in-between frequencies are detected, the average value of the in-between frequencies is returned. If the frequencies in between are distinct (i.e., speech is recognized), the recognition is reset and the search for ascending frequencies is restarted.
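A minimal sketch of steps 1-5 of this pipeline is shown below, using NumPy's FFT. The sampling rate, windowing and magnitude normalisation are assumptions made for illustration, and the sequence tracking of steps 6-8 is omitted.

import numpy as np

SAMPLE_RATE = 44100           # assumed sampling rate of the audio buffer
F_MIN, F_MAX = 650.0, 2000.0  # known frequency range of the given water glass
MAGNITUDE_THRESHOLD = 0.5     # empirically determined validation threshold

def dominant_frequency(buffer):
    """Return the dominant frequency in [F_MIN, F_MAX] Hz, or None if the peak is too weak."""
    window = buffer * np.hanning(len(buffer))        # step 2: FFT of the windowed buffer
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / SAMPLE_RATE)
    band = (freqs >= F_MIN) & (freqs <= F_MAX)       # step 3: keep 650-2000 Hz only
    if not np.any(band):
        return None
    peak = int(np.argmax(spectrum[band]))            # step 4: highest magnitude in the band
    if spectrum[band][peak] < MAGNITUDE_THRESHOLD:   # step 5: reject weak or broadband input
        return None
    return float(freqs[band][peak])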

B.6 Results

Figure 21 shows that the times for conducting the experiments differ between the two groups. In both experiments the students in the TG need less time for experiment execution compared to the students in the CG. The nonparametric Mann-Whitney test shows that these effects are significant with a medium Cohen's d effect size (execution time 1: p < 0.05, d = 0.65; execution time 2: p < 0.01, d = 0.75). As usual in (physics) education research, we measured two further dependent variables, curiosity state (according to [38]) and cognitive load (according to [39]), after finishing the experimental procedure (studying the phenomenon with two different glasses), using well-established paper-and-pencil questionnaires. The curiosity questionnaire consisted of six questions and is shown in table 3. The cognitive load questionnaire distinguished between mental effort due to mobile device handling (six questions, see table 4) and mental effort due to experimental demand (ten questions, see table 5). The students indicated their degree of acceptance of each item by choosing one of six options: they fully agree with the statement of the item wholeheartedly,

1. I would like to conduct further experiments with mobile devices in school.
2. I would like to conduct further experiments with mobile devices in my free time.
3. I want to know more about the experimental possibilities of mobile devices.
4. I am interested in conducting experiments with mobile devices outside my activities in school.
5. I am interested in using mobile devices as experimental tools for other topics in physics.
6. Using the mobile device as an experimental tool in school encouraged me to use it as a measuring tool in my free time.

Table 3: Questionnaire about the curiosity of students when performing experiments with Google Glass or Tablet PC, respectively.

1. I had no issue with the App of the mobile device.
2. I had no issue with the mobile device.
3. It was easy to use the mobile device for experimentation.
4. Conducting the experiment with the mobile device was trouble-free.
5. I was overextended due to the functions of the App.
6. I was overextended due to the functions of the mobile device.

Table 4: Questionnaire about mental effort due to mobile device handling.

1. The experiment was difficult.
2. Solving the problem by conducting the experiment was easy.
3. I had no issue with the understanding of the physical principles relating to the experiment.
4. It was difficult to find the relevant information within the recorded experimental data for solving the problem.
5. I exactly knew what to do during the whole experimentation time.
6. I would have needed more help for experimentation.
7. I would have needed more time for experimentation.
8. I had no issue with the interpretation of the experimental data.
9. I had to struggle understanding the instructions of the experiment.
10. I had to struggle solving the problem.

Table 5: Questionnaire about mental effort due to experimental demand.


Figure 21: Mean and standard deviation of the experiment execution time of the two groups (in min).

they agree with the statement of the item, they rather agree with the statement of the item, they rather decline the statement of the item, they decline the statement of the item, or they fully decline the statement of the item wholeheartedly. Each rating corresponds to a value ranging from 0 (fully decline) up to 5 (fully agree), and the values can be interpreted as metric data (Likert scale). The score of a student is calculated as the percentage of the rated item values relative to the total possible score (in %). Students who experimented with Google Glass have a higher degree of curiosity state than students in the CG (see figure 22). Using the nonparametric Mann-Whitney test for independent random samples we show that these differences are significant, with a large Cohen's effect size (p < 0.001; d = 1.3). The Mann-Whitney test also shows significant differences with medium to large effect sizes concerning the perceived cognitive load in favour of the TG (concerning experimental demand and mobile device handling; p_experiment < 0.01, d = 1.1; p_device = 0.01, d = 0.7). Last but not least, figure 23 shows a predominantly positive pattern concerning the Google Glass usage. However, it also shows that the visual recognition of the fill level needs to be improved.


Figure 22: Mean and standard deviation of curiosity state and cognitive load of the two groups after experimentation with Google Glass and Tablet PC, respectively (parameter value in %; high %-value indicates large parameter value).

Figure 23: Mean and standard deviation of usability items for Google Glass use (parameter value in %; high %-value indicates large parameter value).

C Details on: Mainkofen Hospital Experiment

C.1 Modelling Mainkofen Etypes

We dedicate a subsection to each core Mainkofen etype, and its related subtypes, providing a detailed overview and motivation for its modelling.

C.1.1 Person

Listing 4 shows our model for the Person etype.

Listing 4: Person etype
E: PERSON is-a ENTITY
C: General
S: Role <Concept>

There are two possible roles for Person: nurse and patient. The latter, since it does not have a dedicated sensor, can only be recognized indirectly through nurses and thus cannot justify additional attributes. Furthermore, since patients are only known through IDs based on the position in their room, we cannot tell whether the patient is the same in every routine, whereas the nurses' IDs are unique. The scarcity of attributes is mainly due to the fact that, for privacy reasons, neither nurses nor patients had additional information apart from their name and role.

C.1.2 Location

Listings 5, 6, and 7 show our proposed model for the Location, Room and Patient Room etypes, respectively.

Listing 5: Location etype
E: LOCATION
C: General
S: Position <Position>;
S: POSITION
A: Longitude <Float>;
A: Latitude <Float>;
A: Elevation <Float>;

Listing 6: Room etype
E: ROOM is-a LOCATION
R: Contains <Room>[];
A: Furniture <Concept>;
R: Adjacent Room <Room>[];
S: Localization <Location Information>;
S: LOCALIZATION INFORMATION
A: Ward <NLString>;
A: Hospital <NLString>;
A: City <NLString>;

Listing 7: Patient room etype
E: PATIENT ROOM is-a ROOM
R: Nurse <Person>;
R: Patient <Person>[];

The purpose of the structured attribute Location Information is related to the methodological aspect of the bridging between the representational and experiential levels. In fact, while it would make sense to represent Ward, Hospital and City as etypes, since they can be uniquely referred to via names, the real-world data neither supports nor motivates their presence. Methodologically, even though it is not the case in the experiment, we create this structured attribute to act as a "reminder" that these types of entities exist and may become full-fledged etypes if more data were to become available or needed. In our case, the main etypes are those representing the locations where activities and events take place, i.e., Room and Patient Room. The attribute Contains accounts for those rooms included in other rooms (mostly bathrooms). The attribute Furniture represents
the furniture that can be found in the room. Finally, the attribute Adjacent Room represents the set of rooms that share at least one wall with the room. Patient rooms are relevant sub-types since that is where, together with bathrooms, the majority of the activities are performed. The number of patients depends on the room, while in the majority of cases only one nurse is assigned to each patient room.
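As an illustration of how these etypes translate into concrete data structures, the sketch below instantiates the Room and Patient Room etypes of Listings 5-7 as plain Python classes. The field names follow the listings; the class layout and the example assignment of a nurse to room Z1 are ours.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LocalizationInformation:
    ward: str
    hospital: str
    city: str

@dataclass
class Room:
    name: str
    contains: List["Room"] = field(default_factory=list)        # rooms included in this room (mostly bathrooms)
    furniture: List[str] = field(default_factory=list)          # Furniture <Concept>
    adjacent_rooms: List["Room"] = field(default_factory=list)  # rooms sharing at least one wall
    localization: Optional[LocalizationInformation] = None

@dataclass
class PatientRoom(Room):
    nurse: Optional[str] = None                         # Nurse <Person>
    patients: List[str] = field(default_factory=list)   # Patient <Person>[]

# Example instance built from the Mainkofen data; the nurse assignment is illustrative.
z1 = PatientRoom(name="Z1",
                 furniture=["Bed T", "Bed F", "sink", "closet", "table", "chair"],
                 nurse="B1-1",
                 patients=["Z1-T", "Z1-F"])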

C.1.3 Event

This is our proposed etype Event.

Listing 8: Event etype
E: EVENT
C: General
R: Participant <Person>[];
R: Location <Location>;

We propose three possible events:

Tasks: localized activities during the daily routine
Procedures: the set of activities that are part of the daily routine and are usually performed for each patient
Routine: the complete sequence of activities performed in a day by a nurse

Consider Listing 9, showing an excerpt from a nurse log:

Listing 9: Excerpt from a nurse log
1304397687.039 4Examin-cuffON
1304397696.604 4Examin-inflate
1304397704.559 4Examin-MEASURE
1304397713.088 4Examin-cuffOFF
1304397716.696 4Examin-search pulse
1304397719.605 4Examin-MEASURE
1304397750.344 2P-Activities-take P to Z10-bathroom
1304397754.067 5Hygiene-undress
1304397771.246 2P-Activities-sit on toilet
1304397780.845

Here we can see that the uppercase string beginning with a number, e.g., 4Examin, is a procedure, while the lowercase string after it, e.g., cuffON, is a task, and finally the whole log file is the routine, for that day, of a nurse. Tasks are the "semantic" step forward with respect to the previous experiment since they couple spatial and temporal information, an approach already
considered in works on ontology-based activity recognition (especially in smart environments), e.g., [40]. The advantage is that it allows for filtering out impossible or unlikely activities, e.g., showering a patient in a corridor, since the model has the representational power to associate this information and ignore impossible or very unlikely pairs of activities and locations. Indeed, while locations provide spatial information, activities provide the temporal information of a person, and they cannot be represented as entities since they lack identity. For instance, brushing one's teeth in one's own bathroom or in a hotel is not different per se, but changes depending on where it is performed and who is performing it, thus becoming a discrete part of reality, i.e., an event; hence, activities are treated as structured attributes. Notice that activities clearly belong to the experiential level, since we need the sensor data and the logs to justify and represent them in the model. We characterized activities by adding Patient, Bodypart and Object to account for the additional information about the activity in accordance with the logs, while Experiential handles those hints provided in the logs on whether the activity was associated with a specific sound, e.g., water running or a hair dryer blowing. The Link structured attribute represents a Finite State Machine-like structure allowing us to navigate through the sequence of activities and hence tasks. The link between two activities has a certain probability value, which is not directly taken from sensors but can be computed from the notes. Therefore, we can track the sequence of individual events without moving to more abstract entities, e.g., procedures. In fact, procedures can often be interrupted by individual tasks or procedures, e.g., a nurse forgetting a stethoscope while examining a patient and going back to pick it up.

Listing 10: Task etype
E: TASK is-a EVENT
S: Activity <Activity>

Listing 11: Activity structured attribute
S: ACTIVITY
A: Start <Float>
A: End <Float>
A: Duration <Float>
A: Class <Concept>
A: Object <Concept>
A: Patient <Person>
A: Body part <Concept>
A: Experiential <Concept>
S: Previous activity <Link>
S: Next activity <Link>

Listing 12: Link structured attribute
S: LINK
A: Activity <Activity>
A: Probability <Float>
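A minimal sketch of how the probability attached to a Link could be estimated from a log such as Listing 9 is given below: each activity's successors are counted and normalised per predecessor. The parsing splits an entry at its first dash, which is sufficient for the 4Examin entries used in the example; entries such as 2P-Activities-... would need the class definitions of Menues.json to be split correctly.

from collections import Counter, defaultdict

def transition_probabilities(log_lines):
    """Estimate next-activity probabilities from log lines of the form '<timestamp> <Procedure>-<task>'."""
    counts = defaultdict(Counter)
    previous = None
    for line in log_lines:
        _timestamp, entry = line.split(maxsplit=1)
        _procedure, _, task = entry.partition("-")   # simplified split at the first dash
        if previous is not None:
            counts[previous][task] += 1              # count the observed transition
        previous = task
    return {prev: {nxt: c / sum(counter.values()) for nxt, c in counter.items()}
            for prev, counter in counts.items()}

log = ["1304397687.039 4Examin-cuffON",
       "1304397696.604 4Examin-inflate",
       "1304397704.559 4Examin-MEASURE",
       "1304397713.088 4Examin-cuffOFF"]
print(transition_probabilities(log))
# {'cuffON': {'inflate': 1.0}, 'inflate': {'MEASURE': 1.0}, 'MEASURE': {'cuffOFF': 1.0}}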

Procedures are therefore modelled as sequences of events, and they are important recurring events because they tend not to be repeated for the same patient and/or in the same room within the same day, i.e., within the daily routine. Nonetheless, they tend to change
according to nurses and the available patients, although some medical procedures tend to follow a more rigid sequence. Finally, routines in turn act as sequences of procedures, thus raising the abstraction further.

Listing 13: Procedure etype
E: PROCEDURE is-a EVENT
R: HasTask <Task>[]

Listing 14: Routine etype
E: ROUTINE is-a EVENT
R: HasProcedure <Procedure>[]

C.2 Files for Mainkofen

Here are the three .json files used to populate the model.

C.2.1 Menues.json

This file contains all the activities, body parts and objects with their corresponding classes, which we called procedures in the case of activities. Listing 15: Menues.json file "1Activities":["change","check on","clean","disinfect","empty","fetch/search"," fill","flush","open","pick up","prepare", "put down","put in", "put on", " store","take off", "take out","use", "W+","W-","wash H","dry H", "talk"," furniture"], "2P-Activities":["wake up", "sit up", "stand up", "lead", "sit down","sit on toilet", "sit into wCh", "lie down","take P to", "prep wChair", "rearrange" "","removeSrail", "applySrail", "removeSbelt", "fastenSbelt","raise bed", " lower bed", ""], "3Things":["BSampleKit","BPMD","Steto", "Record","Clothes","Diaper", "Towel"," Toilet","WashCloth","", "Bowl", "Comb", "Deo","Dentures","Hygiene Box", " Lotion/Salve","Toothbrush", "Shaver","Soap", "Gloves", "Glasses","WheelCh"," Rollator"], "4Examin":["cuffOPEN","cuffON","inflate","cuffOFF","MEASURE","deflate","search pulse", "note","read", "write", "BPMD", "Steto", "Record", "Thermometer", " Temperature"], "5Hygiene":["apply","brush teeth","comb","dry","dry hair +", "dry hair -", " instruct", "lather","SH +", "SH -", "shave +","shave -","undress","wash","wet ", "Arms","Back", "Chest","Face","Feet","Hair", "Legs", "Neck","Pubic Area"], "6WASHBed": ["apply","cover","dry","instruct","turnP","wet", "wash", "uncover"," removeD", "arrangeD", "closeD","Arms","Back", "Chest","Face","Feet","Hair", " Legs", "Neck","Pubic Area"],


"7Dress":["re/arrange","undress","pick up", "hand to P", "instruct", "help", "put on","pull up","Diaper","Underpants","Pants","ProtecPa","Undershirt","Bra"," Shirt","Pull/Jack", "Socks", "Shoes","Scarf", "Nightgown"], "8Clean":["clean","close","collect","lock Clos","make Bed","open", "pick up"," store", "take out","pack up","Bag","Bedding","Laundry","LaundryBag","Sheets ","Trash","TrashBag"], "9Medics":["BLOODSAM:","cuffON", "search vein","open","setButterfly","witdhraw Bl ","cuffOFF","removeButt","lable","store","BloodSample","Pack","Syringe", ""," BANDAGE:","bandage","cut","fixate","open","put on","remove","Bandage","Btrolley","Compress","Pack", "Patch", "Salve" "Sissors","INSULINE:","prepare", "go to Patient", "inject", "dispose", "InsulinePen","MEDICATION","connect"," give","prepare", "set", "Antbiotika", "Infusion","Intravenous", "Medicaments ", "Venous puncture"]

C.2.2 Person.json

This file contains all the information about the people and locations in the Mainkofen hospital. Listing 16: Person.json file "patients":["Z1-T","Z1-F","Z2-T","Z2-F","Z3-T","Z3-F","Z4-F","Z5-RT","Z5-RM","Z5RF","Z5-LT","Z5-LF","Z6-F","Z7-T","Z7-F","Z8-T","Z8-M","Z8-F","Z9-T","Z9-F"," Z10-R","Z10-L","other1", "other2"], "sisters":["B1-1","B4-1","B2-1","B3-1","StvO","ScUe1","ScUe2"], "roomindex1":["Z1","Z2","Z3","Z4","Z5","Z6","Z7","Z8","Z9","Z10","SB","WC","W","V "], "roomindex2":["G4","G3","G2","G1","E","SR","A","K","AZ","PS","PR","RR"]

C.2.3 Roomhotspots.json

This file contains all the information about rooms and their furniture. Listing 17: Person.json file "Z1":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z2":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z3":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z4":["Bed", "bathroom","sink","shower","toilet","closet","table","chair", "room ","trash container","window","door"], "Z5":["Bed TR","Bed FR","Bed TL","Bed FL","trash container","bathroom","sink"," shower","toilet","closet","table","chair","window","door", "room"], "Z6":["Bed", "bathroom","sink","shower","toilet","closet","table","chair", "room ","trash container","window","door"], 72 of 93


"Z7":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z8":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z9":["Bed T","Bed F","bathroom","sink","shower","toilet","closet","table","chair ", "room","trash container","window","door"], "Z10":["Bed R","Bed L","bathroom","sink","shower","toilet","closet","table"," chair", "room","trash container","window","door"], "G1":["trash container","bench"], "G2":["trash container"], "G3":["trash container"], "G4":["trash container","bench"], "E":[""], "SR":["FR","rf","rm","lf","lm","lt"], "SB":["chair","closet","shower"], "WC":[""], "W":[""], "V":[""], "PS":["cabinet","PC","record trolley","phone","sink", "fridge"], "A":["big table","small table","couch","TV","armchair"], "K":[""], "AZ":[""], "RR":[""], "PR":["table","sink","closet"]

C.3 Model Details

The following table lists, for each observed task (an activity with its object, performed at a given location), the preceding and following activities together with their objects and the corresponding transition probabilities.

Activity pick up store

Tasks Object record record

pick up

sphygmo.

pick up

steto

wake up

talk

Prev Activity Location Activity Object G2 PS pick up record store record pick up steto PS read record write put down hyg. b. PS disinfect pick up sphygmo. pick up steto Z1 talk pick up gloves wake up talk wear gloves pick up sphygmo. stand up prep wchair Z1 take p to

c SmartSociety Consortium 2013-2017

% 100.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 33.3% 33.3% 33.3% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1%

Next Activity Activity Object store record pick up sphygmo. talk pick up steto put down sphygmo. pick up gloves wake up talk pick up sphygmo. talk cuffon raise bed fetch wake up pick up laundry open read record talk cuffon

% 100.0% 100.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 33.3% 33.3% 33.3% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3%

73 of 93


c SmartSociety Consortium 2013-2017

cuffon

Z1

inflate

Z1

measure

Z1

search pulse

Z1

measure

Z1

write

Z1

talk

wake up

Z1

Z1

cuffon

Z1

inflate

Z1

74 of 93

pick up put in sit down fetch write put in wake up measure talk raise bed cuffon search pulse search pulse inflate

Deliverable D3.3

steto trashbag shirt laundry

measure cuffon cuffoff search pulse inflate

measure put down wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in pick up talk pick up wake up measure talk raise bed cuffon

sphygmo.

gloves sphygmo.

steto trashbag shirt laundry steto gloves

7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 20.0% 20.0% 40.0% 20.0% 80.0% 20.0% 50.0% 50.0%

sit down put in trash comb sit on toilet toilet take out washcloth

14.3% 7.1% 7.1% 7.1% 7.1%

search pulse inflate

20.0% 80.0%

measure

100.0%

dummy cuffon search pulse put down sphygmo. write cuffoff 50.0% measure 16.6% inflate 33.3% 50.0% dummy 50.0% cuffon search pulse put down sphygmo. write cuffoff 50.0% talk 50.0% pick up sphygmo. 7.1% fetch 7.1% wake up 7.1% pick up laundry 7.1% open 7.1% read record 7.1% talk 7.1% cuffon 7.1% sit down 7.1% put in trash 14.3% comb 7.1% sit on toilet toilet 7.1% take out washcloth 7.1% 33.3% talk 33.3% cuffon 33.3% raise bed 20.0% search pulse 20.0% inflate 40.0% 20.0% 80.0% measure

10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 83.3% 16.6% 10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 50.0% 50.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1% 33.3% 33.3% 33.3% 20.0% 80.0%

100.0%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

search pulse search pulse inflate measure

Z1

cuffoff

Z1

search pulse

Z1

measure

put down

Z1

sphygmo.

write

Z1 Z1

pick up

sphygmo.

Z1

put down

sphygmo.

Z1

open

take out

measure measure cuffon cuffoff search pulse inflate

Z1

washcloth

Z1

put down

washcloth

Z1-bath.

wear

gloves

Z1-bath.

take out

diaper

Z1-bath.

take out

clothes

Z1-bath.

measure pick up measure put down store pick up read write measure pick up talk put down take out open talk brush teeth take out sit on toilet stand up take out fetch put down write put down wear take out take p to open take out wear

c SmartSociety Consortium 2013-2017

sphygmo.

sphygmo. record steto record

sphygmo.

sphygmo. washcloth

towel toilet washcloth socks steto washcloth gloves clothes

diaper gloves

20.0% 50.0% dummy 50.0% cuffon search pulse put down write cuffoff 100.0% search pulse 50.0% measure 16.6% inflate 33.3% 50.0% dummy 50.0% cuffon search pulse put down write cuffoff 66.6% open 33.3% write put down 50.0% talk 50.0% pick up 25.0% talk 25.0% pick up 25.0% put down 25.0% pick up 66.6% open 33.3% write put down 33.3% take out 33.3% take out 33.3% take out 20.0% open 20.0% put down 20.0% put down 20.0% stand up 20.0% help 50.0% wear 50.0% sit down 25.0% talk 25.0% take out 25.0% take out 25.0% take out 50.0% put down 50.0% take out 20.0% take p to 20.0% pick up 20.0% take out 20.0% fetch

sphygmo.

sphygmo.

steto sphygmo. steto sphygmo. gloves

steto towel clothes washcloth socks washcloth

gloves

diaper towel clothes diaper clothes hyg. b. diaper shirt

10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 100.0% 83.3% 16.6% 10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 33.3% 33.3% 33.3% 50.0% 50.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 33.3% 33.3% 33.3% 20.0% 20.0% 20.0% 20.0% 20.0% 50.0% 50.0% 25.0% 25.0% 25.0% 25.0% 50.0% 50.0% 20.0% 20.0% 20.0% 20.0%

75 of 93


c SmartSociety Consortium 2013-2017

take p to

Z1-bath.

take out

clothes

Z1-bath.

fetch

shirt

Z1-bath.

talk

Z1-bath.

sit on toilet toilet

Z1-bath.

take out

Z1-bath.

washcloth

stand up

put down

sit down

76 of 93

Z1-bath.

washcloth

Z1-bath.

Z1-bath.

take out disinfect pick up take out stand up take p to open take out wear take out take out wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in talk open talk brush teeth take out sit on toilet sit up wear pick up put in sit down wear wear shower wash take out dry stand up take out pull up take p to wear dry talk put down

Deliverable D3.3

towel unknown clothes

diaper gloves towel clothes

gloves sphygmo.

steto trashbag shirt laundry

towel toilet diaper washcloth trashbag pants shoes

washcloth thorax washcloth

diaper

washcloth

20.0% 16.6% 16.6% 16.6% 50.0% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3% 16.6% 8.3% 8.3% 8.3% 8.3% 50.0% 50.0% 16.6% 16.6% 8.3% 8.3% 16.6% 8.3%

put in talk sit down undress take off take out take p to pick up take out fetch put in talk fetch wake up pick up open read talk cuffon sit down put in comb sit on toilet take out

clothes

take out open put down put down stand up help pull up instruct take p to wash dry talk sit down put down put in

washcloth

wear sit down flush talk wear disinfect wear wear

gloves

gloves clothes hyg. b. diaper shirt clothes

laundry record

trash toilet washcloth

socks washcloth

back

washcloth clothes

toilet patch pants pullover

20.0% 16.6% 33.3% 16.6% 16.6% 16.6% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1% 100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 16.6% 8.3% 25.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3%

50.0% 50.0% 8.3% 16.6% 8.3% 8.3% 16.6% 16.6%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

instruct wash

Z1-bath. Z1-bath.

instruct

Z1-bath.

wash

face

Z1-bath.

undress

nightgown Z1-bath.

pick up

washcloth

Z1-bath.

hand to p dry

washcloth face

Z1-bath. Z1-bath.

help

Z1-bath.

wash

Z1-bath.

stand up

wash dry

Z1-bath.

back back

sit down

Z1-bath. Z1-bath.

Z1-bath.

wear

pullover

Z1-bath.

wear

diaper

Z1-bath.

dry wear stand up sit down instruct undress wash stand up comb instruct

back shoes

wash undress put down pick up hand to p dry put in wash pick up take out help sit up wear pick up put in sit down wear wear shower wash take out dry wash stand up put in wash

face nightgown towel washcloth washcloth face trashbag pubic area socks washcloth

pull up take p to wear dry talk put down dry wear stand up sit down wear

c SmartSociety Consortium 2013-2017

diaper washcloth trashbag pants shoes

washcloth thorax face clothes back

diaper

washcloth back shoes

patch

8.3% 8.3% 8.3% 100.0% 100.0% 25.0% 25.0% 25.0% 25.0% 100.0%

instruct stand up comb wash instruct wash face brush teeth wash pubic area

8.3% 8.3% 8.3% 100.0% 100.0% 50.0% 25.0% 25.0%

undress wash pick up hand to p stand up dry help pick up brush wear wash wear stand up pull up instruct take p to wash dry talk sit down put down put in

washcloth clothes

50.0% 50.0% 100.0% 50.0% 50.0% 100.0% 100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 16.6% 8.3% 25.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3%

back socks

66.6% 33.3%

100.0% 50.0% 50.0% 100.0% 100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3% 16.6% 8.3% 8.3% 8.3% 8.3% 33.3% dry 33.3% pick up 33.3% 100.0% sit down wash 16.6% flush 16.6% talk 8.3% wear 8.3% disinfect 16.6% wear 8.3% wear 8.3% instruct 8.3% stand up 8.3% comb 100.0% wear comb 50.0% sit down

nightgown back washcloth washcloth face laundry tooth clothes socks

back

thorax toilet patch pants pullover

diaper

50.0% 50.0% 8.3% 16.6% 8.3% 8.3% 16.6% 16.6% 8.3% 8.3% 8.3% 50.0% 50.0% 50.0%

77 of 93


c SmartSociety Consortium 2013-2017

sit down

Z1-bath.

wear

pants

Z1-bath.

pull up

pants

Z1-bath.

pick up

shoes

Z1-bath.

wear

shoes

Z1-bath.

sit down

Z1-bath.

comb

put in

put in

put in

talk

78 of 93

Z1-bath.

trash

laundry

laundry

G1

G1

G3

G2

Deliverable D3.3

wear pull up take p to wear dry talk put down dry wear stand up sit down

pullover

wear wear pull up sit up pick up pull up take p to wear dry talk put down dry wear stand up talk sit down wear talk comb wear put in put in put in

pants clothes pants

put in put in put in

trash laundry clothes

wake up talk wear pick up stand up prep wchair take p to pick up put in

diaper

washcloth back shoes

shoes

diaper

washcloth back shoes

pullover

socks trash laundry clothes

gloves sphygmo.

steto trashbag

50.0% 16.6% 16.6% 8.3% 8.3% 16.6% 8.3% 8.3% 8.3% 8.3% 100.0%

stand up flush talk wear disinfect wear wear instruct stand up comb pull up stand up 100.0% pick up 50.0% wear 50.0% 33.3% sit down 66.6% stand up 16.6% flush 16.6% talk 8.3% wear 8.3% disinfect 16.6% wear 8.3% wear 8.3% instruct 8.3% stand up 8.3% comb 33.3% instruct 33.3% put in 33.3% take off 33.3% take off 33.3% put in 33.3% 50.0% talk 25.0% take off 25.0% put in put in 50.0% talk 25.0% take off 25.0% put in put in 7.1% fetch 7.1% wake up 7.1% pick up 7.1% open 7.1% read 7.1% talk 7.1% cuffon 7.1% sit down 7.1% put in

toilet patch pants pullover

pants shoes shoes

toilet patch pants pullover

trash gloves gloves laundry

gloves laundry clothes gloves laundry clothes

laundry record

trash

50.0% 8.3% 16.6% 8.3% 8.3% 16.6% 16.6% 8.3% 8.3% 8.3% 50.0% 50.0% 100.0% 100.0% 33.3% 66.6% 8.3% 16.6% 8.3% 8.3% 16.6% 16.6% 8.3% 8.3% 8.3% 33.3% 33.3% 33.3% 33.3% 66.6% 25.0% 25.0% 25.0% 25.0% 25.0% 25.0% 25.0% 25.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

comb

Z1-bath.

instruct

Z1-bath.

brush teeth

Z1-bath.

take out

washcloth

help

brush pick up

Z1-bath.

Z1-bath.

tooth unknown

take p to

Z1-bath. Z1-bath.

Z1-bath.

take off

gloves

G1

put in

trashbag

G1

talk

G1

sit down fetch write put in talk sit down wear undress wash stand up comb instruct open talk brush teeth take out sit on toilet dry put in wash pick up take out help brush disinfect pick up take out stand up take p to put in put in comb take off

wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in wake up

c SmartSociety Consortium 2013-2017

14.3% 7.1% 7.1% laundry 7.1% 33.3% 33.3% pullover 33.3% 25.0% 25.0% 25.0% 25.0% 100.0% 20.0% 20.0% 20.0% towel 20.0% toilet 20.0% face 20.0% trashbag 20.0% pubic area 20.0% socks 20.0% washcloth 20.0% 100.0% tooth 100.0% 16.6% unknown 16.6% clothes 16.6% 50.0% shirt

trash laundry gloves

gloves sphygmo.

steto trashbag shirt laundry

comb sit on toilet toilet take out washcloth instruct put in take off wash brush teeth wash

take out open put down put down stand up help pick up brush wear wash wear pick up take p to talk sit down undress take off take out 25.0% put in 25.0% disinfect 25.0% 25.0% 100.0% talk stand up help 7.1% fetch 7.1% wake up 7.1% pick up 7.1% open 7.1% read 7.1% talk 7.1% cuffon 7.1% sit down 7.1% put in 14.3% comb 7.1% sit on toilet 7.1% take out 7.1% 7.1% fetch

7.1% 7.1% 7.1%

33.3% 33.3% 33.3% 50.0% 25.0% pubic area 25.0% trash gloves face

washcloth socks washcloth

laundry tooth clothes socks unknown

gloves clothes trashbag

laundry record

trash toilet washcloth

100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 100.0% 16.6% 33.3% 16.6% 16.6% 16.6% 75.0% 25.0%

33.3% 33.3% 33.3% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1% 7.1%

79 of 93


c SmartSociety Consortium 2013-2017

sit down

A

talk

A

fetch

SB

wear

gloves

Z1

take out

towel

Z1

take out

clothes

Z1

take out

diaper

Z1

80 of 93

talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in pull up take p to wear dry talk put down dry wear stand up wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in talk take out fetch put down write put down open wear pick up take p to open take out wear take out wear

Deliverable D3.3

gloves sphygmo.

steto trashbag shirt laundry

diaper

washcloth back shoes

gloves sphygmo.

steto trashbag shirt laundry towel socks steto washcloth gloves clothes

diaper gloves towel gloves

7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 16.6% 16.6% 8.3% 8.3% 16.6% 8.3% 8.3% 8.3% 8.3% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 50.0% 50.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 20.0% 20.0% 20.0% 20.0% 20.0% 50.0%

wake up pick up open read talk cuffon sit down put in comb sit on toilet take out

laundry record

trash toilet washcloth

flush talk wear disinfect wear wear instruct stand up comb fetch wake up pick up open read talk cuffon sit down put in comb sit on toilet take out

toilet

wear put down talk take out take out take out fetch take out take out take p to pick up take out fetch put in put down

gloves clothes

patch pants pullover

laundry record

trash toilet washcloth

diaper towel clothes clothes washcloth hyg. b. diaper shirt clothes diaper

7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1% 8.3% 16.6% 8.3% 8.3% 16.6% 16.6% 8.3% 8.3% 8.3% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1% 50.0% 50.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 20.0% 20.0% 20.0% 20.0% 20.0% 50.0%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

put down

diaper

Z1-bath.

put down

clothes

Z1-bath.

put down

hyg. b.

Z1-bath.

pick up

hyg. b.

Z1-bath.

put down

hyg. b.

sit up

Z1-bath.

stand up

Z1-bath.

take p to

Z1-bath.

sit down

flush

Z1-bath.

Z1-bath.

toilet

Z1-bath.

undress

Z1-bath.

instruct

Z1-bath.

wash

face

Z1-bath.

take out take out fetch pick up put down pick up pick up put down

clothes diaper

put down disinfect take out pick up put down

hyg. b.

put down removesbelt sit up wear pick up put in sit down wear wear shower wash take out dry disinfect pick up take out stand up

hyg. b.

pull up take p to wear dry talk put down dry wear stand up sit down flush take p to undress wash stand up comb instruct

c SmartSociety Consortium 2013-2017

hyg. b. diaper clothes hyg. b. clothes

clothes hyg. b. clothes

diaper washcloth trashbag pants shoes

washcloth thorax unknown clothes

diaper

washcloth back shoes

toilet

50.0% 100.0% 25.0% 25.0% 25.0% 25.0% 66.6% 33.3%

take out put down pick up put down put down removesrail sit up pick up pick up 33.3% put down 33.3% put down 33.3% 66.6% sit up 33.3% pick up pick up 50.0% wear 50.0% stand up 8.3% pull up 8.3% instruct 8.3% take p to 8.3% wash 8.3% dry 8.3% talk 16.6% sit down 8.3% put down 8.3% put in 8.3% 8.3% 16.6% talk 16.6% sit down 16.6% undress 50.0% take off take out 16.6% flush 16.6% talk 8.3% wear 8.3% disinfect 16.6% wear 8.3% wear 8.3% instruct 8.3% stand up 8.3% comb 100.0% undress 50.0% instruct 50.0% collect 25.0% wash 25.0% brush teeth 25.0% wash 25.0% 100.0% undress

clothes clothes prot pad hyg. b. towel

steto hyg. b. hyg. b. clothes

steto hyg. b. shoes

back

washcloth clothes

50.0% 100.0% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 66.6% 33.3% 33.3% 33.3% 33.3% 50.0% 50.0% 16.6% 8.3% 25.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3%

16.6% 33.3% 16.6% gloves 16.6% clothes 16.6% toilet 8.3% 16.6% patch 8.3% 8.3% pants 16.6% pullover 16.6% 8.3% 8.3% 8.3% 100.0% 50.0% trash 50.0% face 50.0% 25.0% pubic area 25.0% nightgown 50.0% 81 of 93


c SmartSociety Consortium 2013-2017

wash

back

Z1-bath.

dry

back

Z1-bath.

wash dry

thorax thorax

Z1-bath. Z1-bath.

stand up

Z1-bath.

instruct wash

Z1-bath. pubic area Z1-bath.

help

Z1-bath.

wear

clothes

Z1-bath.

pick up

shoes

Z1-bath.

wear

shoes

Z1-bath.

stand up

talk 82 of 93

Z1-bath.

Z5

wash stand up put in wash dry wash sit up wear pick up put in sit down wear wear shower wash take out dry undress wash stand up comb instruct dry put in wash pick up take out help wear pull up sit up pick up sit up wear pick up put in sit down wear wear shower wash take out dry wake up talk wear pick up

Deliverable D3.3

wash face 33.3% dry 33.3% pick up clothes 33.3% back 100.0% sit down wash back 100.0% dry thorax 100.0% stand up 8.3% pull up diaper 8.3% instruct washcloth 8.3% take p to trashbag 8.3% wash 8.3% dry pants 8.3% talk shoes 16.6% sit down 8.3% put down 8.3% put in washcloth 8.3% thorax 8.3% 25.0% wash 25.0% brush teeth 25.0% wash 25.0% 100.0% help face 20.0% pick up trashbag 20.0% brush pubic area 20.0% wear socks 20.0% wash washcloth 20.0% wear 100.0% pick up clothes 50.0% wear pants 50.0% 33.3% sit down shoes 66.6% stand up 8.3% pull up diaper 8.3% instruct washcloth 8.3% take p to trashbag 8.3% wash 8.3% dry pants 8.3% talk shoes 16.6% sit down 8.3% put down 8.3% put in washcloth 8.3% thorax 8.3% 7.1% fetch 7.1% wake up gloves 7.1% pick up sphygmo. 7.1% open

back back socks

thorax thorax

back

washcloth clothes

50.0% 66.6% 33.3% 50.0% 50.0% 100.0% 100.0% 16.6% 8.3% 25.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3%

face

50.0% 25.0% pubic area 25.0%

laundry tooth clothes socks shoes shoes

back

washcloth clothes

laundry

100.0% 20.0% 20.0% 20.0% 20.0% 20.0% 100.0% 100.0% 33.3% 66.6% 16.6% 8.3% 25.0% 8.3% 8.3% 8.3% 8.3% 8.3% 8.3%

7.1% 7.1% 7.1% 7.1%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

put in

trash

G4

take off

gloves

G3

put in

trashbag

G1

help

Z1-bath.

pick up

laundry

Z1-bath.

pick up put down put down

trash laundry trash

Z1-bath. Z1-bath. Z1-bath.

disinfect

Z1-bath.

take p to

Z1-bath.

talk

G1

stand up prep wchair take p to pick up put in sit down fetch write put in talk comb wear take p to put in put in comb take off

dry put in wash pick up take out talk help pick up pick up put down put down sit down take off put in disinfect pick up take out stand up wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write

c SmartSociety Consortium 2013-2017

steto trashbag shirt laundry

socks trash laundry gloves

face trashbag pubic area socks washcloth

laundry trash laundry trash gloves clothes unknown clothes

gloves sphygmo.

steto trashbag shirt

7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 33.3% 33.3% 33.3% 25.0% 25.0% 25.0% 25.0% 100.0%

20.0% 20.0% 20.0% 20.0% 20.0% 50.0% 50.0% 100.0% 100.0% 100.0% 25.0% 25.0% 25.0% 25.0% 16.6% 16.6% 16.6% 50.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1%

read talk cuffon sit down put in comb sit on toilet take out

record

toilet washcloth

7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1%

take off put in

gloves laundry

33.3% 66.6%

put in disinfect

trashbag

75.0% 25.0%

talk stand up help pick up brush wear wash wear put in pick up put down put down disinfect prep wchair take p to pick up pick up talk sit down undress take off take out fetch wake up pick up open read talk cuffon sit down put in comb sit on toilet take out

33.3% 33.3% 33.3% laundry 20.0% tooth 20.0% clothes 20.0% 20.0% socks 20.0% laundrybag 50.0% trash 50.0% laundry 100.0% trash 100.0% 100.0% 25.0% 25.0% steto 25.0% hyg. b. 25.0% 16.6% 33.3% 16.6% gloves 16.6% clothes 16.6% 7.1% 7.1% laundry 7.1% 7.1% record 7.1% 7.1% 14.3% 14.3% trash 7.1% 7.1% toilet 7.1% washcloth 7.1%

trash

83 of 93


c SmartSociety Consortium 2013-2017

sit down

A

disinfect

pick up

Z1

hyg. b.

Z1-bath.

put down

hyg. b.

Z1-bath.

pick up

steto

Z1-bath.

pick up

talk

sphygmo.

Z1-bath.

Z2

cuffon

Z2

search pulse

Z2

inflate

Z2

84 of 93

Deliverable D3.3

put in pull up take p to wear dry talk put down dry wear stand up put down sit down take off put in put down disinfect take out pick up put down

laundry

put down disinfect pick up store pick up read write wake up talk wear pick up stand up prep wchair take p to pick up put in sit down fetch write put in wake up measure talk raise bed measure cuffon cuffoff cuffon search pulse

hyg. b.

diaper

washcloth back shoes trash gloves clothes hyg. b. clothes hyg. b. clothes

sphygmo. record steto record

gloves sphygmo.

steto trashbag shirt laundry

7.1% 16.6% 16.6% 8.3% 8.3% 16.6% 8.3% 8.3% 8.3% 8.3% 25.0% 25.0% 25.0% 25.0% 33.3% 33.3% 33.3% 66.6% 33.3% 33.3% 33.3% 33.3% 25.0% 25.0% 25.0% 25.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 7.1% 7.1% 7.1% 20.0% 20.0% 40.0% 20.0% 50.0% 16.6% 33.3% 80.0% 20.0%

flush talk wear disinfect wear wear instruct stand up comb prep wchair take p to pick up pick up put down put down sit up pick up pick up wake up talk pick up talk pick up put down pick up fetch wake up pick up open read talk cuffon sit down put in comb sit on toilet take out

toilet patch pants pullover

steto hyg. b. hyg. b. clothes

steto hyg. b.

sphygmo. steto sphygmo. gloves

laundry record

trash toilet washcloth

8.3% 16.6% 8.3% 8.3% 16.6% 16.6% 8.3% 8.3% 8.3% 25.0% 25.0% 25.0% 25.0% 66.6% 33.3% 33.3% 33.3% 33.3% 33.3% 33.3% 33.3% 25.0% 25.0% 25.0% 25.0% 7.1% 7.1% 7.1% 7.1% 7.1% 7.1% 14.3% 14.3% 7.1% 7.1% 7.1% 7.1%

search pulse inflate

20.0% 80.0%

measure inflate

83.3% 16.6%

measure

100.0%

http://www.smart-society-project.eu


c SmartSociety Consortium 2013-2017

Deliverable D3.3

search pulse inflate measure

Z2

search pulse

Z2

measure

measure cuffon cuffoff search pulse inflate

Z2

cuffon

Z2

inflate

Z2

measure

Z2

search pulse

Z2

measure

wake up measure talk raise bed cuffon search pulse search pulse inflate

measure cuffon cuffoff search pulse inflate

Z2

put down

sphygmo.

Z2

put down write

steto steto

Z2 Z2

wear

gloves

Z2

measure pick up put down put down fetch put down write put down wake up talk wear pick up

talk Z2 2013-2017 c SmartSociety Consortium

sphygmo. sphygmo. steto socks steto washcloth

gloves sphygmo.

50.0% dummy 50.0% cuffon search pulse put down write cuffoff 50.0% measure 16.6% inflate 33.3% 50.0% dummy 50.0% cuffon search pulse put down write cuffoff 20.0% search pulse 20.0% inflate 40.0% 20.0% 80.0% measure 20.0% 50.0% dummy 50.0% cuffon search pulse put down write cuffoff 50.0% measure 16.6% inflate 33.3% 50.0% dummy 50.0% cuffon search pulse put down write cuffoff 66.6% open 33.3% write put down 100.0% write 100.0% wear 25.0% talk 25.0% take out 25.0% take out 25.0% take out 7.1% fetch 7.1% wake up 7.1% pick up 7.1% open

sphygmo.

sphygmo.

10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 83.3% 16.6% 10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 20.0% 80.0%

100.0%

sphygmo.

sphygmo.

steto steto gloves diaper towel clothes

laundry

10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 83.3% 16.6% 10.0% 10.0% 30.0% 20.0% 10.0% 20.0% 33.3% 33.3% 33.3% 100.0% 100.0% 25.0% 25.0% 25.0% 25.0% 7.1% 7.1% 7.1% 7.1%

85 of 93


c SmartSociety Consortium 2013-2017

open

Z2

take out

towel

Z2

take out

washcloth

Z2

open

Z2

[Appendix table: activity-transition statistics derived from the annotated nursing data-set (Mainkofen). Each entry consists of an observed action (e.g., take out, pick up, put down, wash, disinfect, cuff on, measure), the object it acts on (e.g., clothes, washcloth, towel, hygiene bag, sphygmomanometer, stethoscope), the zone in which it was recorded (e.g., Z2, Z2-bath., Z4, G1, PS), and the surrounding actions with their objects and relative frequencies in percent; for example, the entry "take out / clothes / Z2" lists the actions observed around taking clothes out in zone Z2.]
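The relative frequencies in this table can, in principle, be obtained by counting which annotated event follows each other event in the chronological activity stream. The following is a minimal sketch of such a computation, assuming the annotations are available as a list of (action, object, location) tuples; the variable names and the sample events are illustrative and not part of the data-set.

    from collections import Counter, defaultdict

    # Illustrative only: each event is an (action, object, location) tuple
    # taken from the chronological annotation stream.
    events = [
        ("take out", "clothes", "Z2"),
        ("pick up", "hyg. b.", "Z2"),
        ("put down", "clothes", "Z2-bath."),
        ("pick up", "washcloth", "Z2-bath."),
    ]

    # Count, for every observed event, which event follows it.
    successors = defaultdict(Counter)
    for current, following in zip(events, events[1:]):
        successors[current][following] += 1

    # Convert the counts into relative frequencies (the percentages in the table).
    for current, counts in successors.items():
        total = sum(counts.values())
        for following, n in counts.items():
            print(f"{current} -> {following}: {100.0 * n / total:.1f}%")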
