D3.4 - Final Methods and Technologies for Human Machine Symbiosis


SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society

Grant Agreement No. 600854

Deliverable D3.4, Work Package WP 3

Final Methods and Technologies for Human Machine Symbiosis

Dissemination Level (Confidentiality)¹: PU
Delivery Date in Annex I: 31/6/2016
Actual Delivery Date: December 22, 2016
Status²: F
Total Number of pages: 36
Keywords: template, LaTeX

¹ PU: Public; RE: Restricted to Group; PP: Restricted to Programme; CO: Consortium Confidential as specified in the Grant Agreement
² F: Final; D: Draft; RD: Revised Draft



Disclaimer

This document contains material which is the copyright of SmartSociety Consortium parties, and no copying or distributing, in any form or by any means, is allowed without the prior written agreement of the owner of the property rights. The commercial use of any information contained in this document may require a license from the proprietor of that information. Neither the SmartSociety Consortium as a whole, nor any individual party of the SmartSociety Consortium, warrants that the information contained in this document is suitable for use, or that the use of the information is free from risk, and they accept no liability for loss or damage suffered by any person using this information. This document reflects only the authors' view. The European Community is not liable for any use that may be made of the information contained herein.

Full project title: SmartSociety: Hybrid and Diversity-Aware Collective Adaptive Systems: When People Meet Machines to Build a Smarter Society
Project Acronym: SmartSociety
Grant Agreement Number: 600854
Number and title of workpackage: WP 3
Document title: Final Methods and Technologies for Human Machine Symbiosis
Work-package leader: Paul Lukowicz, DFKI
Deliverable owner: Agnes Grünerbl, DFKI
Quality Assessor: Kobi Gal, BGU



List of Contributors

Partner Acronym    Contributors
DFKI               Agnes Grünerbl, Paul Lukowicz
UNITN              Ronald Chenu-Abente, Mattia Zeni, Enrico Bigniotti, Fausto Giunchiglia



Executive Summary

The present deliverable marks the end point of Work Package 3 (WP3) of the SmartSociety project. Its purpose is to conclude and round up the work of all three previous deliverables and to present the final methods and technologies for human/machine symbiosis. It mainly includes the outcomes of the second iteration of T3.3 (Human-Machine Composition) and T3.4 (Interaction Patterns and Persuasive Technology).

Part I of this deliverable concludes the work on bridging the semantic gap with personal context modelling and annotation. It represents the culmination of the context modelling and representation efforts that started with the analysis of the Mainkofen hospital nurse routines, progressively shifting from that controlled environment to the everyday environment and mobile device-based sensors, as shown in the i-Log application. Furthermore, it introduces the final version of the Semantic Nurse persuasive support technology, the A2E Assistant, and details the effort to validate its usage. The A2E Assistant is an Android OS based tablet application which allows nurses to get direct feedback on a patient's current vital signs and general health values, to quickly check regulations, and to receive suggestions on how to proceed based on the information at hand.

Part II, finally, adds the last component to the SmartSociety platform, the Execution Monitor, which is embedded between the Orchestration Manager (OM, WP6) and the Peer Manager (PM, WP4). The Execution Monitor is the overarching component that includes the functionalities offered by both the

• Task Execution Manager (which acts as an overarching monitoring component, integrating the real-time evolution of a given task through the PM and proactively adapting and matching it to the requirements defined by the OM), and the
• Context Manager (the component actually embodying the software/hardware of the 3-layer approach).


Table of Contents

1 Introduction
2 Part I - Persuasive Technologies
  2.1 Personal context modelling and annotation
  2.2 A-to-E Assistant
3 Part II - Execution Manager
  3.1 Architecture
  3.2 Context Manager
    3.2.1 Data model
    3.2.2 Usage
    3.2.3 Simulation
  3.3 Task Execution Manager
    3.3.1 API and Data Flow Model
  3.4 Integration
    3.4.1 WP4 Peer Manager
    3.4.2 WP6 Orchestration Manager
A TEM
B Publication


1 Introduction

Work Package 3 deals with how humans and machines can operate together in new ways to collaboratively achieve goals that are not achievable by either alone, or that are currently achievable only with great effort and difficulty. As a brief recap: D3.1 reported on T3.1, Models of Human-Machine Symbiosis. D3.2 continued T3.1 and introduced T3.2, Context and Intention Interpretation. D3.3 then reported the outcomes of T3.2, the first iteration of T3.3, Human-Machine Composition, and very initial work on T3.4, Interaction Patterns and Persuasive Technology. The present deliverable, D3.4, now concludes WP3. It presents the final methods and technologies for human/machine symbiosis in Part I and describes the final part of the SmartSociety platform, the Execution Monitor, in detail in Part II.

2 Part I - Persuasive Technologies

2.1 Personal context modelling and annotation

Humans can only ever have a limited and partial view of the world in their everyday life. This is what context is, i.e., "a theory of the world which encodes an individual's subjective perspective about it" [1]; the corresponding paper is attached in Appendix B. Hence, context modelling must account for this relation between the user and the context inferred from the environment. Currently, most works focus on controlled environments, e.g., smart homes [2]. The main limitation of these approaches is that they focus on a priori defined environments, which are limited in terms of complexity and known in advance. In other words, they focus on a closed domain, whereas human experience essentially takes place in open domains. In open domains, unlike closed domains, it is impossible to predict, and hence model, how the world will present itself [3]. This requires managing, at run time, unexpected obstacles and changes of the environment [4], and also deciding what is relevant to the state of affairs the user is in at that time [5]. We propose a model of context based on [1], organized according to the different dimensions of the environment. This work represents the culmination of the context modelling and representation efforts that started with the analysis of the Mainkofen hospital nurse routines, progressively shifting from that controlled environment to the everyday environment and mobile device-based sensors, as shown in the i-Log application. Although we hope to continue to collect more information over the following months,


we can already show some preliminary results obtained from a subset of 21 users over a reduced time window of 5 days, shown in Table 1. From the first row it is possible to see that 1130 annotations were generated in total, 74.77% of which were answered, while the remaining 285 were left unanswered. For the answered ones, we calculated the correctness between the annotations and the actual locations, e.g., "Home" versus "Via Roma 1" or GPS coordinates, since these are the easiest to check directly with the users; the accuracy rate is 95.65%. Finally, rows 3 and 4 provide an analysis of the users' answering behaviour in terms of timing. In more detail, row 3 shows the time needed to complete an annotation, i.e., ∆A, from the moment the user selects a question until all three questions are answered. 60.4% of the annotations were filled in in less than 10 seconds, while none of them took more than 30 seconds. Since users did not receive incentives, a 30-second response time can be considered more than acceptable for a question interval of 30 minutes. Row 4, on the other hand, shows the time delay between the question notification and the user's reply, i.e., ∆Q→A. Here, 50.9% of the annotations were filled in within 30 minutes, meaning that the reply came before a new question appeared; we consider this close to real-time answering. Moreover, 72.6% of the annotations were provided within 60 minutes, i.e., while fewer than two questions were pending. Notice that answering all the questions at once, just before the earliest one expires, was discouraged by the sociology experts, since it may lead to more errors in reporting; in fact, only 4.3% of the annotations were answered this way. Overall, we plan to run a full-fledged experiment with 136 students from different departments, i.e., sociology and engineering. The full publication further detailing this theory and the obtained results can be found in Appendix B; architectural and implementation details of the i-Log application are described below.

No. of annotations:   Answered 845 (74.77%)   Not answered 285 (25.23%)
Location accuracy:    95.65%

∆A (in sec):    0-5: 16.8%    6-10: 43.6%    11-15: 17.6%    16-20: 9.0%    21-25: 4.6%    26-30: 8.1%
∆Q→A (in min):  0-16: 33.2%   17-33: 17.7%   34-66: 21.7%    67-100: 13.0%  101-133: 9.8%  134-166: 4.3%

Table 1: Preliminary results from a subset of users participating in the project.
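The two timing metrics of Table 1 can be reproduced with a few lines of code. The sketch below bins per-annotation timestamps into the same ∆A and ∆Q→A buckets; it is a minimal illustration, and the record field names (notified_at, opened_at, completed_at) are assumed for the example rather than taken from the actual i-Log schema.

from collections import Counter

# One record per answered annotation; timestamps in seconds since epoch.
# Field names are illustrative, not the actual i-Log schema.
annotations = [
    {"notified_at": 0, "opened_at": 300, "completed_at": 308},
    {"notified_at": 1800, "opened_at": 5200, "completed_at": 5225},
]

def bucket(value, edges, labels):
    """Return the label of the first bucket whose upper edge covers value."""
    for edge, label in zip(edges, labels):
        if value <= edge:
            return label
    return labels[-1]

delta_a = Counter()   # time to complete an annotation (seconds)
delta_qa = Counter()  # delay between notification and reply (minutes)

for a in annotations:
    secs = a["completed_at"] - a["opened_at"]
    mins = (a["completed_at"] - a["notified_at"]) / 60.0
    delta_a[bucket(secs, [5, 10, 15, 20, 25, 30],
                   ["0-5", "6-10", "11-15", "16-20", "21-25", "26-30"])] += 1
    delta_qa[bucket(mins, [16, 33, 66, 100, 133, 166],
                    ["0-16", "17-33", "34-66", "67-100", "101-133", "134-166"])] += 1

total = len(annotations)
for label, count in sorted(delta_a.items()):
    print(f"dA {label} s: {100.0 * count / total:.1f}%")
for label, count in sorted(delta_qa.items()):
    print(f"dQ->A {label} min: {100.0 * count / total:.1f}%")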

2.2 A-to-E Assistant

The A-to-E Assistant is an Android OS based tablet application which allows nurses to get direct feedback on a patient's current vital signs and general health values, to quickly check regulations, and to receive suggestions on how to proceed based on the information at hand. The A to E algorithm (A to E stands for Airways, Breathing, Circulation, Disability, and Exposure) includes general instructions on which bodily functions to assess during the examination, in which order, and what actions to take (treatments to apply) for each particular dysfunction, as well as in the case of emergency. It is mainly defined for single actors (a single nurse has to be able to examine a patient, and in an emergency a single nurse has to be able to "keep a patient alive" until help arrives). Nevertheless, in reality the examination is commonly performed by a set of 3-5 nurses. These are not only very likely randomly put together, but also have no pre-defined roles or assigned tasks. It is commonly understood that the A to E algorithm should be followed, but despite thorough training there is no guarantee that the activities performed by the nurses strictly follow this protocol. Even though the algorithm is designed to help the nurse (agent) reach a goal (figure out what is wrong with a patient, treat any illnesses, and handle emergencies), as in goal recognition, an agent might use experience or intuition to reach the goal in different ways. For example, nurses might take shortcuts: when the patient speaks clearly and normally and is conscious, there is no need to assess the Airways or Disability. Moreover, agents might mix plans, perform plans in parallel, interrupt plans, skip steps, or repeat parts of a plan. If this scenario is carried out during a nurse-training session, it might go even further.

The A-to-E Assistant (A2E) is designed and implemented for Android based tablet devices. It supports the following functions:

• Document vital signs: the A2E offers an interface for quickly noting all relevant vital signs (oxygen saturation, breathing rate, blood pressure, pulse, glucose level, temperature).
• Get instant feedback about the patient's condition: on input, the A2E instantly provides colour feedback (traffic-light based) on the condition indicated by specific vital signs, e.g., yellow for a temperature of 38°C, or red for 39°C or above (a sketch of this threshold logic follows the list).
• Receive hints and suggestions from the system: via the Doctor-App, the A2E can present various hints, tips, and suggestions directly on the nurse's interface.
• Get quick looks at specific regulation information: in case the nurse wants to be certain about a specific aspect, the A2E offers the possibility to check regulations.
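The following sketch illustrates the traffic-light feedback logic described in the second item above. Only the temperature thresholds (yellow at 38°C, red at 39°C or above) come from this section; the oxygen-saturation thresholds and all function names are illustrative assumptions, not the actual A2E implementation.

GREEN, YELLOW, RED = "green", "yellow", "red"

def classify_temperature(celsius):
    # Thresholds from the text: yellow at 38 C, red at 39 C or above.
    if celsius >= 39.0:
        return RED
    if celsius >= 38.0:
        return YELLOW
    return GREEN

def classify_oxygen_saturation(spo2_percent):
    # Illustrative thresholds only -- not taken from the A2E specification.
    if spo2_percent < 90:
        return RED
    if spo2_percent < 94:
        return YELLOW
    return GREEN

def vital_signs_feedback(vitals):
    """Map a dict of vital signs to per-sign traffic-light colours."""
    classifiers = {
        "temperature": classify_temperature,
        "oxygen_saturation": classify_oxygen_saturation,
    }
    return {name: classifiers[name](value)
            for name, value in vitals.items() if name in classifiers}

print(vital_signs_feedback({"temperature": 38.4, "oxygen_saturation": 96}))
# -> {'temperature': 'yellow', 'oxygen_saturation': 'green'}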

Figure 1: The A2E Assistant is a tablet-based system supporting nurses in keeping track of the patient's state by implementing the emergency A-to-E algorithm.

The A2E Assistant will be evaluated in a test study once the approval of the ethics committee arrives. A maximum of 25 student nurses will be invited to participate in the study. The students will participate in groups of 3 persons. There will be a minimum of 5 groups and a maximum of 10 groups. Each group will be required to attend an approximately 1-hour session on two different days (one day using the A2E Assistant, one day without it, to provide comparable data). Detailed information about this study can be found in the appendix (A2E Assistant Nurse Training Study).

3 Part II - Execution Manager

The Execution Monitor (EM) is a functionality of the SmartSociety platform embedded between the Orchestration Manager and the Peer Manager.

3.1 Architecture

Figure 2 provides a general overview of the EM's architecture. The Orchestration Manager (OM) and the Peer Manager (PM) are part of the work of WP6 (OM) and WP4 (PM); detailed information on both can be found in the respective deliverables. The Execution Monitor is the overarching component that includes the functionalities offered by both the Task Execution Manager and the Context Manager:


Task Execution Manager (TEM): this component interacts with the PM and the OM. The main function of the TEM is to act as an overarching monitoring component, integrating the real-time evolution of a given task through the PM and proactively adapting and matching it to the requirements defined by the OM.

Context Manager (CM): this component is the software/hardware embodiment of the 3-layer approach (see D3.2 and D3.3).

To better illustrate how the Execution Monitor works within the SmartSociety platform, we present the following sequence of steps, representative of its normal functioning, in Figure 2. Before delving into the actual data flow, we must note that the first three steps, i.e., those within the CM and linked to the PM, are ongoing processes not immediately linked to the TEM and/or OM. In fact, the update of attribute values can run constantly, regardless of application-specific requirements. Let us now illustrate each step (numbering of steps according to Figure 2); a sketch of Steps 2 and 3 follows the list.

Step 1: Device data. Data from devices are first stored by a Cassandra database system in the form of streams.

Step 2: Modules. To bridge the gap between high-level information and the sensor data coming from users' personal devices, we foresee a "modular" aggregation and fusion of data, in the sense that it is driven by the attributes of the entities represented in the high-level model.

Step 3: Updating in the Peer Manager. The attributes of the entities stored in the peer database have their values updated following the procedure explained in the previous step.

Step 4a: TEM requesting. At the beginning of the monitoring phase, all the data are requested by the TEM via APIs for outputting metrics, evaluations, and predictions.

Step 4b: TEM matching and integrating. Once they are all collected, the data from the PM (via the CM) are matched with the information about the task taken from the OM. The TEM can then proactively inform the OM about the status of the task and its corresponding expectations.
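As announced above, here is a minimal sketch of Steps 2 and 3: a data analytics module fuses a window of raw speed samples into a crystallized isMoving attribute and pushes it to the Peer Manager. The endpoint URL, payload layout, and speed threshold are assumptions made for illustration; the actual Peer Manager web API is specified in the WP4 deliverables.

import json
import statistics
import urllib.request

SPEED_THRESHOLD_MS = 0.5  # illustrative threshold, not taken from the specification
# Hypothetical PM endpoint for writing a crystallized attribute of a peer.
PM_URL = "http://pm.example.org/peers/{peer_id}/attributes/isMoving"

def aggregate_is_moving(speed_samples):
    """Step 2: fuse a window of raw speed readings into a high-level attribute."""
    return statistics.median(speed_samples) > SPEED_THRESHOLD_MS

def update_peer_attribute(peer_id, is_moving):
    """Step 3: push the crystallized attribute value into the Peer Manager."""
    body = json.dumps({"hasValue": is_moving}).encode("utf-8")
    req = urllib.request.Request(PM_URL.format(peer_id=peer_id), data=body,
                                 method="PUT",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: a window of GPS-derived speeds (m/s) read from the Cassandra stream base.
window = [1.2, 1.4, 0.9, 1.6, 1.3]
update_peer_attribute("peer-42", aggregate_is_moving(window))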


[Figure: block diagram of the Execution Monitor. The TEM compares the ride requirements and goals received from the OM with the user status derived by the CM (data analytics modules over the Cassandra stream base fed by i-Log) and written to the PM, and decides whether the task is happening as planned.]

Figure 2: The flow of information within the Task Execution Manager.

3.2 Context Manager

This section gives an overview of the APIs and data model implemented for the Context Manager, matching the implementation of the Task Execution Manager. At the moment the Context Manager can monitor events of type Travel, and in particular the following variables: start, end, execute. The fail functionality is still pending in the implementation, since little information has been provided about it and about what it means. Every task can be monitored using a monitor process that has to be registered with the Context Manager and unregistered when no new updates are needed anymore. When the status of a travel updates because new sensor data arrive at the servers, the Context Manager pushes the corresponding information to the Task Execution Manager as specified in its APIs, using the PUT method /updateMonitor/:rideRequestID (so far no endpoint URL has been provided for the Task Execution Manager, so the PUT does not work). At the same time, the Context Manager provides a pull functionality according to which the component that registered a monitor can ask for its status at any time by calling a specific endpoint.

A specific component of the Context Manager receives data from the users' smartphones in real time and stores them in a Cassandra NoSQL database. We are currently running a stress test on our small cluster of 5 Cassandra nodes to test the stability of the system under high load. We are inserting data simultaneously from 5 servers into this cluster, at an average of 80,000 values per second, which corresponds to 2 GB per hour. More information can be found in Figure 3.

Figure 3: Status of the cluster with a total of 1648.87 GB of data, without performance degradation.

The low-level data stored in this cluster are analyzed by the Context Manager and pushed to the attributes of the corresponding agents in the PM. For example, the user location is updated when needed, as are the high-level attributes isMoving or nearbyPeople. When the Task Execution Manager requests to monitor a specific event, the Context Manager analyzes these attributes of the people involved in that event in the PM and generates higher-level status information. In the current implementation we are able to track whether a Travel has started, has ended, or is in execution. The precision in the recognition of these states is linked to the radius used to determine whether a person is at the specified location. With a radius of 1 m we would be very precise, but most likely we would not get the status right, since in the best case GPS accuracy is 3 m outdoors. On the other hand, a radius of 100 m would likely match all the conditions for the statuses above, but with very poor accuracy. This can be an element of discussion.
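The location test behind these statuses amounts to a geofence check: is the great-circle distance between the user's GPS fix and the target coordinates smaller than the configured radius? The sketch below shows such a check (the coordinates reuse those of the Appendix A examples); it is an illustration, not the Context Manager's actual code.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_at_location(fix, target, radius_m):
    """True if the GPS fix falls within radius_m of the target coordinates."""
    return haversine_m(fix["lat"], fix["lon"], target["lat"], target["lon"]) <= radius_m

destination = {"lat": 55.9483678, "lon": -3.158850799999982}
print(is_at_location({"lat": 55.9484, "lon": -3.1589}, destination, radius_m=100))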

3.2.1 Data model

The data model for the monitorTask POST endpoint has been chosen to be the same as that of the Task Execution Manager, for a seamless integration of the two systems; for this reason we refer to that documentation.


• /monitorTask POST - The fields that are important at this stage of the development for tracking the travel status are the following:
  – commuters: the list of commuters taking part in the ride
  – destination: the coordinates of the destination location
  – departure: the coordinates of the departure location
  – id: the id of the travel
  The other fields are not considered in the analysis and can be left blank, except for comments and route, which are used for the UI.
• /monitorTask/:rideRequestID GET - The data model is the same as that of the POST method, but two additional fields are added by the Context Manager to report the status of the travel's monitors: monitoringOn and monitors, as in the Task Execution Manager.
• /monitorTask GET - The same structure as the previous point, but in this case an array of elements is returned.

The APIs for the Context Manager are listed below in Table 2.

Table 2: Main API calls for the Context Manager.
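For illustration, the sketch below drives the three endpoints described above from Python. The base URL reuses the Y3 demo host mentioned in Section 3.2.3, but whether these endpoints are rooted there, the travel id format, and the HTTP verb for monitor termination are all assumptions of this sketch.

import json
import urllib.request

BASE = "http://elog.disi.unitn.it:8081"  # assumed deployment host (see Section 3.2.3)

def call(method, path, payload=None):
    """Small helper around urllib for the Context Manager's JSON endpoints."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or "null")

# Register a monitor for a new Travel (only the fields used for tracking are filled;
# the id value is hypothetical).
travel = {
    "id": "ride-001",
    "commuters": ["agent2"],
    "departure": {"lat": 55.9483678, "lon": -3.158850799999982, "radius": 1.9},
    "destination": {"lat": 55.9483678, "lon": -3.158850799999982, "radius": 3.2},
}
call("POST", "/monitorTask", travel)

# Pull the current monitor status at any time ...
status = call("GET", "/monitorTask/ride-001")
print(status.get("monitors"))

# ... and unregister the monitor when updates are no longer needed
# (the DELETE verb is an assumption; the document only names the path).
call("DELETE", "/terminateTaskMonitor/ride-001")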

3.2.2 Usage

Suppose a new Travel has to be monitored. The Task Execution Manager calls the /monitorTask POST method, asking the Context Manager to monitor its status. When a new update of one of the monitors is available, the information is pushed from the Context Manager to the Task Execution Manager using the provided PUT method. If, on the other hand, the Task Execution Manager wants to check the status at any time, it can call the /monitorTask/:rideRequestID GET method. When there is no need to monitor a travel anymore, the monitor attached to it can be unregistered using /terminateTaskMonitor/:rideRequestID.

3.2.3 Simulation

As explained above, the Context Manager uses real-time streaming data collected by the users' smartphones to compute and generate the abstractions necessary for updating the monitors of a Travel's status. At the moment, to test the system it is useful to have a simulation environment that shows what the Context Manager is doing, instead of executing the whole pipeline, which requires a smartphone and a user travelling around the city. All the methods presented in the table above work in this implementation, but currently it is not possible to get updated values, since no real-time data are pushed to the backend server for the involved users. You can register monitor requests, inspect them, and unregister them, but the monitor statuses will not be updated. For this reason, we modified the Y3 demo of WP3, integrating it with the new version of the Context Manager as described in this document. The test process is the following:

1. Open a browser (tested on Chrome and Safari) and go to http://elog.disi.unitn.it:8081, which shows the user interface of the Y3 demo.
2. Call the GET endpoint http://elog.disi.unitn.it:8081/kos/smartsociety-stb/startcmsimulation (only one user at a time).
3. The function called by that endpoint does the following:
   (a) Creates 10 Travels as follows:
       i. Daniele going to the Magritte Museum
       ii. Marco going to the Magritte Museum
       iii. Michael going to the Magritte Museum
       iv. Michael wanders off
       v. Daniele travels to the Less Filles Restaurant
       vi. Marco wanders off
       vii. Ronald travels to the Less Filles Restaurant
       viii. Heather travels to the Less Filles Restaurant
       ix. Daniele and Heather wander off together
       x. Ronald wanders off
   (b) Registers 10 monitors with the Context Manager using the /monitorTask POST method.
   (c) The widget on the right of the user interface, showing all the Travels as a timeline, uses the information taken from /monitorTask/:rideRequestID GET to populate its labels (the two events Visit at the Museum and Dinner are skipped, since they are general events rather than travels and are not updated). You can call /monitorTask/:rideRequestID GET or /monitorTask GET at any time from a separate window to see that the system is currently computing these values.
   (d) Unregisters the 10 monitors created before, using /terminateTaskMonitor/:rideRequestID, when the whole process is finished.

3.3 Task Execution Manager

This section gives an overview of the APIs and data model for the Task Execution Manager. The TEM is the module of the SmartSociety framework which acts as a monitoring platform between the Orchestration Manager and the Context Manager/Peer Manager. All it does is add a separate monitoring functionality to the tasks accepted in the OM and attach the respective monitors to each task. Those monitors are then constantly updated by the CM/PM, and the subsequent updates are sent to the OM. Hence, there is no business logic implemented on the TEM side other than acting as the mediating agent between the different modules. The functionality implemented by the TEM APIs consists of the following steps:

• Task monitors (1. getting the accepted tasks from the orchestration, 2. adding the monitors to each task)
• Update task monitors (updating the currently monitored tasks)
• Terminate the monitoring of the tasks

3.3.1 API and Data Flow Model

In the context of the SmartSociety architecture, the following interactions take place among the different components, i.e., the Orchestration Manager (OM), Execution Manager (EM), Context Manager (CM), and Peer Manager (PM). For a detailed diagram of the data model please refer to Appendix A. The APIs for the Task Execution Monitor are listed below in Table 3.

Table 3: Main API calls for the Task Execution Monitor.

Execution model initialisation: When transitioning from negotiation to execution, the OM extends a task by adding start/exec/end/fail conditions to each action, using the vocabulary of conditions provided by the CM that can actually be checked at execution time (technically, in the current architecture these become task records once dispatched for execution), and passes this on to the EM.

Monitor initialisation: The EM creates, at the relevant points in time, individual monitors that it requests the CM to track. In principle, this can be done for each action when it becomes active, by creating monitors for its start, exec, fail, and end conditions (and, optionally, preconditions and effects, if they are to be tracked); but monitors for subsequent actions can also be created early if, for example, one wants to identify that a future action is already becoming impossible although it is not relevant yet. Each monitor is parametrised with a termination condition, so that the CM knows when it can stop tracking its status. The task monitoring request adds the monitors to the accepted task, and the entire task structure is passed on to the TEM from the OM with a similar request. To this request, monitors are added which are monitored until the task is terminated.

Context update: For each active monitor, the CM extracts higher-level values for the tracked condition from real-time data streams and updates the relevant entries of the peer profiles in the PM. The monitor update is a PUT request with the changed monitor flags in the body of the request.

Execution update: The PM notifies the EM of any value change in the monitored conditions, and the EM forwards this update to the OM (or, alternatively, the OM may periodically ping the EM for updates). The EM does not need access to the values itself; these can be retrieved by the OM, subject to the applicable privacy policies, directly from the PM. However, if this strategy is pursued, the OM would have to perform the interpretation from individual conditions into the more general status values, unless these are also stored in the PM (which is not straightforward, as tasks, and thus their constituent actions and conditions, are not entities in the PM; this probably needs to be discussed further).

Terminating monitoring: The OM may advise the EM to stop monitoring a task, either because it is considered failed, because it has been cancelled late in the process, or because real-time information is no longer needed to determine termination of the task. This terminates the monitoring of that particular task and removes it from the ride collection. Terminated tasks have monitoring switched off and are added to a different collection, which can be inspected at any time with a GET API.
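As a concrete illustration of the monitor flags exchanged in these updates (the same start/execute/end/fail booleans that appear in the JSON examples of Appendix A), the sketch below folds a monitor record into one coarse task status. The precedence order is an assumption for illustration; as noted above, the exact interpretation of conditions into status values is still to be discussed between the OM and EM.

def task_status(monitors):
    """Fold the start/execute/end/fail monitor flags into one coarse status.

    The flags mirror the "monitors" object of the Appendix A examples; the
    precedence order chosen here (fail > end > execute > start) is an
    illustrative assumption, not the specified semantics.
    """
    if monitors.get("fail"):
        return "failed"
    if monitors.get("end"):
        return "completed"
    if monitors.get("execute"):
        return "in_execution"
    if monitors.get("start"):
        return "started"
    return "pending"

print(task_status({"start": True, "execute": True, "end": False, "fail": False}))
# -> "in_execution"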

3.4 Integration

The implementation of the integrated Execution Monitor is currently under way; early demos of this technology were presented in previous years under the name of the iLog/Move app. During the third year, significant effort was dedicated to integrating this work with the existing SmartSociety platform and to complying with the specific requirements this entails. As shown in Figure 2, the Execution Monitor has two main points of interaction with the rest of the SmartSociety platform: the Peer Manager and the Orchestration Manager.

3.4.1 WP4 Peer Manager

The WP3 Execution Monitor, and in particular the Context Manager contained in it, integrates with the Peer Manager (WP4) within the SmartSociety platform through the Peer Manager web APIs. In particular, the Context Manager writes the crystallized attributes that result from sensor fusion into the profile of the peer to which the sensors belong. It does this by using the Knowledge Base elements of the Peer Manager's dynamic resolution of entity attributes through placeholders.

3.4.2 WP6 Orchestration Manager

The Orchestration Manager will contact the Execution Monitor, and in particular the Task Execution Manager contained in it, through a dedicated web API specifically created for this purpose. The Task Execution Manager will provide the Orchestration Manager with calls for:

• Registering a task in the Task Execution Manager: this call sends a description of the task whose execution has to be monitored, together with the IDs of the peers involved in the execution, to the Task Execution Manager. This information tells the Task Execution Manager to keep track of that task and its related peers and to report back to the Orchestration Manager (according to the parameters of this call) with their status information.
• Removing a task from the Task Execution Manager: this call cancels the monitoring of a task and its involved peers; no further notifications will be sent back to the Orchestration Manager about the removed task.
• Polling the status of a given task: this call can be used to get the current information on the task and its related peers from the Task Execution Manager (without waiting for the next notification).

For reporting the status of all monitored tasks, the Task Execution Manager will use a specifically prepared call from the Orchestration Manager. Upon receiving these notifications, the Orchestration Manager will be able either to mark a task as completed (when the reported information matches a final state for the task) or to perform corrective measures (when the reported information is either unexpected or corresponds to deviations from the plan).


References

[1] F. Giunchiglia, “Contextual reasoning,” Epistemologia, special issue on I Linguaggi e le Macchine, vol. 16, pp. 345–364, 1993.

[2] N. D. Rodríguez, M. P. Cuéllar, J. Lilius, and M. D. Calvo-Flores, “A survey on ontologies for human behavior recognition,” ACM Computing Surveys (CSUR), vol. 46, no. 4, p. 43, 2014.

[3] F. Giunchiglia, “Managing diversity in knowledge,” in IEA/AIE, 2006, p. 1.

[4] F. Giunchiglia, E. Giunchiglia, T. Costello, and P. Bouquet, “Dealing with expected and unexpected obstacles,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 8, no. 2, pp. 173–190, 1996.

[5] P. Bouquet and F. Giunchiglia, “Reasoning about theory adequacy. A new solution to the qualification problem,” Fundamenta Informaticae, vol. 23, no. 2–4, pp. 247–262, 1995.


A2E Assistant Nurse Training Study

The purpose of the first part of this study is to use the laboratory suite equipment and have the students perform an approximately 15-20 minute emergency scenario, in which the A-to-E assistant should support them. Additionally, sensors (eye tracker, accelerometer, and location) will be used to assess the performance of the students. This session (in an equivalent scenario) will be repeated without the assistance of the A-to-E assistant, in order to measure the impact of the assistance system.

Study Design

The project aims are to:

• Collect video/audio and multiple sensor data associated with student performance while using the assistance system in interactive clinical simulations, to identify whether and how such systems could have a positive impact on training.
• Analyse the data to evaluate the usability of the assistance system.
• Establish the feasibility of using these technologies for educational research and evaluation purposes, particularly the extent to which they may interfere with usual behaviours.
• Identify areas for further research and development.

The project aims will be achieved by:

• Observing, videotaping, and recording student interactions and behaviours during their engagement with the scenarios and automated mannequins.
• Recording data from a variety of sensors (Google Glass, eye tracker, proximity sensors, smartphones, location sensors) worn by the students during the scenarios.

The duration of the study will be approximately one week: 2-3 days for using the A-to-E assistance system with 5-8 groups of nurse students (3-4 students per group), with 2-3 sessions of 15 minutes per group, and 1-2 days for post-processing the sensor and video data.


Methods: A maximum of 25 student nurses will be invited to participate in the study. The students will participate in groups of 3 persons. There will be a minimum of 5 groups and a maximum of 10 groups. Each group will be required to attend an approximately 1-hour session on two different days (one day using the A-to-E assistant, one day without it, to provide comparable data). Note: an alternative would be to recruit 14-16 groups of 2 nurses and have each group either use the assistant or not; the setup depends on what is easier to handle.

Recruitment: The participants will be recruited from the year 2 cohort of Post Graduate Diploma Adult Field Nursing students (n = approx. 70). If insufficient numbers are recruited, the Year 2 Adult Field BN students will be approached (n = 300). Permission to approach the students has been obtained from the Programme Leads. The researcher will personally meet the student cohort at a time identified by the Programme Lead, provide a very brief overview of the project, and ask any interested student to take an Introductory Letter, Participant Information Sheet, and Consent Form. The Introductory Letter will provide instructions about the dates of the study and what students need to do if they wish to participate. Participants will be asked to attend for a maximum of 2 hours in any one day and to undertake 2 simulation sessions and the associated debriefs within this timeframe on two different days.

Informed Consent: The Participant Information Sheet will provide a detailed description of the study and the activities that the student will be engaging in, and will also include a self-screening section. The students will be required to wear or carry sensors whilst engaged in the simulation; this may include the wearing of Google Glass or portable eye trackers, and all of the sensor data will be recorded. The information sheet will provide a list of medical conditions which preclude the use of eye-tracking technologies, and students will be asked NOT to volunteer should they fall into any of the excluded categories. The medical conditions which are known to adversely affect the success of eye-tracking technologies are: the presence of ocular pathologies (cataracts, peripheral field deficits, glaucoma, uveitis, nystagmus, unusual corrective lenses, cranial nerve palsies (III, IV or VI)); previous history of brain, eye, or facial tumours; major head injury or stroke; photosensitivity or uncontrolled epilepsy; personal or family history of schizophrenia; previous electroconvulsive therapy; alcohol or other substance misuse within the previous six months; pregnancy. This self-screening approach was successfully used in a previous study in the Faculty (FoHS-ETHICS-2010-035) and ensures that potential participants do not need to disclose potentially sensitive conditions to the research team.


Study Execution: When the participants arrive for the session, the researcher will go over the Participant Information Sheet and Consent Form with each potential participant in private and answer any questions. If the participants are happy with the study arrangements, in particular that the data will be shared electronically with researchers in other European locations and that it will not be possible to withdraw individuals' data once the data collection is complete, the participants will be asked to sign the consent form. The research team will provide the participants with the sensors and provide instructions where required. The students will then participate in a simulation exercise similar to those which they have undertaken previously in their programme. As well as collecting sensor data, the simulation session and the debrief will both be video recorded. The data will be collected, stored, and handled in accordance with the PREVIP protocol. All of the data will be stored digitally on a password-protected computer with access restricted to the researchers from the University of Southampton. The researchers from the other sites will access the data that they require for their part of the study, but will not require access to the entire data set.

Ethical Issues and Governance: Volunteer students will be sought for the project. Previous students have demonstrated that they are willing to be involved in additional clinically focused learning opportunities. A member of staff not associated with the project will act as a student guardian and resource should any students have concerns about the simulation scenarios or their engagement in the project generally. The use of electronic devices as sensors will generate some (non-ionising) electromagnetic radiation. This will be made clear to the participants, but it is not considered to be a significant risk above what is currently considered normal exposure. It will be made clear that participant anonymity and confidentiality will be respected. It is the intention of the research team to work with these data in the UK, Italy, and Germany, and this will require secure electronic transmission of data. There is therefore a theoretical risk to data security, although all researchers are bound by the same European data protection laws and will sign a consent form stating that they understand their responsibilities and will comply with the PREVIP protocol, which is attached as a separate document. The Participant Information Sheet and consent form will explicitly state that, whilst all efforts will be made to protect the data, electronic data transfer will take place between the partner institutions. It is also the team's intention to use images and video of the activities in the dissemination of the research findings; this may include internet publication, and consent will be explicitly sought for this.


A TEM

tp_Task
  TaskID INT(11)
  TaskType VARCHAR(45)
  TaskStatusID VARCHAR(45)
  TaskPlanID VARCHAR(45)

tp_TaskPlan
  TaskPlanID INT(11)
  TaskID INT(11)
  ActionPlanID INT(11)
  Role VARCHAR(45)
  StatusID INT(11)
  StateID INT(11)
  ConstraintID INT(11)
  TaskRequest VARCHAR(45)

tp_TaskActionPlan
  TaskActionPlanID INT(11)
  TaskPlanID INT(11)
  TaskActionID INT(11)

tp_TaskAction
  TaskActionID INT(11)
  TaskActionPlanID VARCHAR(45)
  TaskActionDesc VARCHAR(1000)
  StatusID INT(11)

tp_Constraint
  ConstraintID INT(11)
  tp_Constraintcol VARCHAR(45)

tp_Status
  StatusID INT(11)
  StatusName VARCHAR(45)

tp_State
  StateID INT(11)
  StateName VARCHAR(45)


Figure 4: Detailed data model of the Task Execution Monitor.

Code Examples

Monitoring request

The task monitoring request will add the monitors to the accepted task, and the entire task structure will be passed on to the TEM from the OM with a similar request.

{
  "_id": ObjectId("54be40e7efab5f75af62773c"),
  "comments": [
    "Comments for agent with username agent1",
    "Comments for agent with username agent2"
  ],
  "route": [
    "Route description for agent with username agent1",
    "Route description for agent with username agent2"
  ],
  "priceRange": ["9", "11"],
  "desDateTimeWindow": {
    "desDateTimeHigh": "1421844540000",
    "desDateTimeLow": "1421840940000"
  },
  "depDateTimeWindow": {
    "depDateTimeHigh": "1421758140000",
    "depDateTimeLow": "1421754540000"
  },
  "destination": {
    "radius": 3.2,
    "lat": 55.9483678,
    "lon": -3.158850799999982,
    "location": "Edinburgh"  // added in version 0.9.6
  },
  "departure": {
    "radius": 1.9,
    "lat": 55.9483678,
    "lon": -3.158850799999982,
    "location": "Edinburgh"  // added in version 0.9.6
  },
  "smoking": "No",
  "pets": "Yes",
  "currency": "Euro",
  "rejectedCommuters": [],
  "agreedCommuters": [],
  "potentiallyAgreedCommuters": ["agent2"],
  "potentialCommuters": [],
  "rejectedDriver": "",
  "agreedDriver": "agent1",
  "potentiallyAgreedDriver": "",
  "potentialDriver": "",
  "commuterOpinions": [
    "http://localhost:3000/users/agent2/opinionsCommuterDrivers/agent1"
  ],
  "driverOpinions": [
    "http://localhost:3000/users/agent1/opinionsDriverCommuters/agent2"
  ],
  "commuters": ["agent2"],
  "driver": "agent1",
  "_revision": 1,
  "index": 0,
  "__v": 0
}

To this request, monitors will be added, which will be monitored until the task is terminated:

{
  "_id": "574d6df23194568814bf9261",
  "__v": 0,
  "rideQualityThreshold": "5",
  "monitoringOn": true,
  "monitors": {
    "fail": false,
    "end": false,
    "execute": false,
    "start": true
  },
  "managedBy": null,
  "comments": "Comments for agent with username agent3",
  "route": "Route description for agent with username agent3",
  "priceBound": "11",
  "desDateTimeWindow": {
    "desDateTimeHigh": "1421844600000",
    "desDateTimeLow": "1421841000000"
  },
  "depDateTimeWindow": {
    "depDateTimeHigh": "1421758200000",
    "depDateTimeLow": "1421754600000"
  },
  "destination": {
    "location": "Edinburgh",
    "radius": 3.2,
    "lat": 55.9483678,
    "lon": -3.158850799999982
  },
  "departure": {
    "location": "Edinburgh",
    "radius": 1.9,
    "lat": 55.9483678,
    "lon": -3.158850799999982
  },
  "capacity": "1",
  "smoking": "No",
  "pets": "Yes",
  "currency": "Euro",
  "mode": "commuter",
  "invalidRidePlans": [
    "http://localhost:3000/ridePlans/5",
    "http://localhost:3000/ridePlans/6",
    "http://localhost:3000/ridePlans/9",
    "http://localhost:3000/ridePlans/10",
    "http://localhost:3000/ridePlans/1",
    "http://localhost:3000/ridePlans/2"
  ],
  "agreedRidePlan": "",
  "driverAgreedRidePlans": [],
  "potentiallyAgreedRidePlans": [],
  "potentialRidePlans": [],
  "user": "agent3",
  "_revision": 7,
  "index": 6
}

Update monitors request

The monitor update is a PUT request with the changed monitor flags in the body of the request.

{
  "monitors": {
    "fail": false,
    "end": true,
    "execute": true,
    "start": true
  }
}

Terminate monitoring request

This will terminate the monitoring of a particular task and remove the task from the ride collection. Terminated tasks have monitoring switched off and are added to a different collection, which can be inspected at any time with a GET API.

{
  "rideQualityThreshold": "5",
  "monitoringOn": false,
  "monitors": {
    "fail": false,
    "end": false,
    "execute": false,
    "start": true
  },
  "managedBy": null,
  "comments": "Comments for agent with username agent3",
  "route": "Route description for agent with username agent3",
  "priceBound": "11",
  "desDateTimeWindow": {
    "desDateTimeHigh": "1421844600000",
    "desDateTimeLow": "1421841000000"
  },
  "depDateTimeWindow": {
    "depDateTimeHigh": "1421758200000",
    "depDateTimeLow": "1421754600000"
  },
  "destination": {
    "location": "Edinburgh",
    "radius": 3.2,
    "lat": 55.9483678,
    "lon": -3.158850799999982
  },
  "departure": {
    "location": "Edinburgh",
    "radius": 1.9,
    "lat": 55.9483678,
    "lon": -3.158850799999982
  },
  "capacity": "1",
  "smoking": "No",
  "pets": "Yes",
  "currency": "Euro",
  "mode": "commuter",
  "invalidRidePlans": [
    "http://localhost:3000/ridePlans/5",
    "http://localhost:3000/ridePlans/6",
    "http://localhost:3000/ridePlans/9",
    "http://localhost:3000/ridePlans/10",
    "http://localhost:3000/ridePlans/1",
    "http://localhost:3000/ridePlans/2"
  ],
  "agreedRidePlan": "",
  "driverAgreedRidePlans": [],
  "potentiallyAgreedRidePlans": [],
  "potentialRidePlans": [],
  "user": "agent3",
  "_revision": 7,
  "index": 3,
  "id": "57478a254c55b82025b83c54"
}

B Publication


