
VISUALIZATION AND PROTOTYPING

GROUP THREE

2016 @ Politecnico di Milano


Objective & Target User_

AURORA is a healthcare treatment OS that helps people living with paralysis (from amyotrophic lateral sclerosis, brain-stem stroke or high spinal cord injury) become independent in work again and realize the value of their lives.

e.g. Stephen Hawking


Scenario

Before: The paralyzed patient lies in bed. He can do nothing but lie there. He feels hopeless.

After: He tries AURORA, the healthcare treatment OS, and enters the VR community.

He can take part in normal work and socialize with others.


“Aurora” implies that this system is the hope for those who otherwise “cannot” realize the value of their lives.


Mind uploading_


Mind uploading is the hypothetical process of scanning the mental state of a particular brain substrate and copying it to a computational device, such as a digital, analog, quantum-based or software-based artificial neural network. The computational device could then run a simulation model of the brain's information processing, such that it responds in essentially the same way as the original brain and experiences having a conscious mind.

Mind uploading might be accomplished by either of two methods: copy-and-transfer, or gradual replacement of neurons. With the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, then copying, transferring and storing that information state in a computer system or another computational device. The simulated mind could live within a virtual reality or simulated world, supported by an anatomical 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside a robot or a biological body.


Hardware Architecture_

(Architecture diagram)

User: paralyzed people
Terminal: electrode-studded cap
Input: the brainwaves' electrical signals (visual, auditory, tactile)
Central processing unit
VR interface
Output: virtual working space, virtual entertainment space, virtual community, computer-mediated communication
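The sketch below traces this flow in Python, purely as an illustration: the names (BrainwaveSample, central_processing_unit, vr_interface) are assumptions invented for the example, not part of any real system, and the decoding step is a placeholder.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BrainwaveSample:
    """Raw electrical signal captured by the electrode-studded cap."""
    channel: str            # e.g. "prefrontal", "temporal", "occipital"
    microvolts: List[float]

def central_processing_unit(sample: BrainwaveSample) -> str:
    """Decode a raw sample into a high-level intent (placeholder logic)."""
    # A real decoder would do signal processing and classification here.
    return "enter_virtual_working_space"

def vr_interface(intent: str) -> str:
    """Route a decoded intent to one of the output spaces."""
    outputs = {
        "enter_virtual_working_space": "Virtual Working Space",
        "enter_virtual_entertainment_space": "Virtual Entertainment Space",
        "open_cmc": "Computer-mediated Communication",
    }
    return outputs.get(intent, "Virtual Community")

sample = BrainwaveSample(channel="prefrontal", microvolts=[12.3, 11.8, 13.1])
print(vr_interface(central_processing_unit(sample)))  # Virtual Working Space
```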


Senses involved_

Visual

Auditory

Touch


Sense of vision - How can we see_

VR Glasses: By wearing the VR glasses, the user can see everything happening in the virtual world from a first-person view.


Sense of touch & hearing - How can we touch and hear_ Emotiv EPOC: The EPOC is a headset that picks up your brain waves. These brain waves are translated into computer language and sent to the system, which analyses them and creates signals. It uses 5 EEG (brain-wave) detectors and 2 standard sensors touching the head, capturing brain waves from three areas: the prefrontal cortex (responsible for executing actions), the upper temporal bone (responsible for hearing and coordination) and the occipital region (responsible for vision).


How can we act in the virtual world_ Emotiv Insight: Emotiv Insight's detection algorithms enable the brainwear to interpret measured signals as mental commands, facial expressions or brain performance metrics. By translating brain waves into computer instructions, the Emotiv brainwear can tell the system what you want to do in your virtual reality. In other words, you think "lift," and a virtual rock actually levitates on the screen.
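As a rough illustration of that last step, here is a minimal Python sketch that maps an already-classified mental command onto a virtual-world action. The command names, the mapping table and the VirtualRock class are assumptions made for this example; this is not the Emotiv SDK.

```python
class VirtualRock:
    def __init__(self) -> None:
        self.height = 0.0   # metres above the virtual ground

    def levitate(self, delta: float = 0.5) -> None:
        self.height += delta

def apply_mental_command(command: str, rock: VirtualRock) -> None:
    """Translate a recognised mental command into a virtual-world action."""
    actions = {
        "lift": rock.levitate,                          # think "lift" -> the rock rises
        "drop": lambda: setattr(rock, "height", 0.0),   # think "drop" -> the rock falls
    }
    action = actions.get(command)
    if action:
        action()

rock = VirtualRock()
apply_mental_command("lift", rock)    # the user thinks "lift"
print(rock.height)                    # 0.5
```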


How do they work_ An example: creating a scene in which Jemmie sees himself being slapped by Alice in the virtual world.

1. Alice thinks “I want to slap him”; her Emotiv headset captures the intention and passes it to the OS.
2. The OS renders the slap in the virtual scene and creates the feeling in Jemmie's hand through his Emotiv headset.
3. Jemmie's brain generates the signal of being hurt, and he reacts: “That hurts.”
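A minimal Python sketch of this round trip, with hypothetical function names standing in for the headset, the OS and the stimulated user:

```python
def capture_intention(user: str, thought: str) -> dict:
    """Headset side: package a detected intention as an event for the OS."""
    return {"actor": user, "action": thought}

def os_render_event(event: dict, target: str) -> dict:
    """OS side: render the event in the virtual scene, then create a stimulus."""
    print(f'{event["actor"]} performs "{event["action"]}" on {target}')
    return {"target": target, "stimulus": "feeling in the hand"}

def deliver_stimulus(stimulus: dict) -> str:
    """Target side: the stimulus triggers a brain signal and a reaction."""
    return f'{stimulus["target"]}: "That hurts."'

event = capture_intention("Alice", "slap")
stimulus = os_render_event(event, target="Jemmie")
print(deliver_stimulus(stimulus))  # Jemmie: "That hurts."
```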


Software & Applications_


PART1: The Voice-making System. For users who have lost their language abilities (e.g. stroke patients / Stephen Hawking), this function identifies the information they want to convey by analysing the electrical signals of their brainwaves: the system translates their brain waves into words and sentences, and the machine talks for them. That “machine voice” is not their own real voice, but they can choose from a “voice library” either a favourite voice or the one that sounds closest to their true voice. PS: Users are not allowed to choose a female voice if they are male, and vice versa.
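A minimal sketch, assuming a hypothetical voice library and function names, of how the same-gender rule could be enforced when a decoded sentence is spoken:

```python
# Hypothetical voice library; the entries and function names are assumptions.
VOICE_LIBRARY = {
    "male": ["baritone_01", "tenor_02", "warm_03"],
    "female": ["alto_01", "soprano_02", "soft_03"],
}

def choose_voice(user_gender: str, requested_voice: str) -> str:
    """Return the requested voice if it matches the user's gender,
    otherwise fall back to the first same-gender voice."""
    allowed = VOICE_LIBRARY[user_gender]
    if requested_voice in allowed:
        return requested_voice
    return allowed[0]   # cross-gender choices are not allowed

def speak(decoded_text: str, voice: str) -> str:
    """Placeholder for the text-to-speech step driven by decoded brainwaves."""
    return f"[{voice}] {decoded_text}"

print(speak("Good morning", choose_voice("male", "soprano_02")))
# -> [baritone_01] Good morning
```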


Software & Applications_

PART2: Computer-mediated Communication (CMC) System. This system handles communication among users. The first part is voice chatting, a real-time dialogue. The second part is sending messages to one another (text, audio or video). Through this function, communication between the virtual world and the real world is established via different software applications.
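A minimal Python sketch of the two channels, with illustrative class names (Message, CMCSystem) that are assumptions rather than an existing API:

```python
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class Message:
    sender: str
    recipient: str
    kind: Literal["text", "audio", "video"]
    payload: str            # text body, or a path/URL to the audio/video

@dataclass
class CMCSystem:
    inbox: List[Message] = field(default_factory=list)

    def voice_chat(self, a: str, b: str) -> str:
        """First part: real-time dialogue between two connected users."""
        return f"live voice channel opened between {a} and {b}"

    def send_message(self, msg: Message) -> None:
        """Second part: asynchronous message bridging virtual and real world."""
        self.inbox.append(msg)

cmc = CMCSystem()
print(cmc.voice_chat("Jemmie", "Alice"))
cmc.send_message(Message("Alice", "Jemmie", "text", "See you in the work space"))
print(len(cmc.inbox))  # 1
```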


Software & Applications_

PART3: Inner-environment Selection. Before entering the virtual world, users select one of three modes: work, entertainment or simple exploration. For each mode they are given a range of settings to choose from; for example, in entertainment mode you can choose the scene of a football pitch, a tennis court or a movie theatre.
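A minimal sketch of this selection step; the mode names and the entertainment scenes come from the text above, while the other scene lists are assumptions:

```python
# Mode names come from the text above; the 'work' and 'exploration' scene
# lists are illustrative assumptions.
SCENES = {
    "work": ["office", "meeting room"],
    "entertainment": ["football pitch", "tennis court", "movie theatre"],
    "exploration": ["open world"],
}

def select_environment(mode: str, scene: str) -> dict:
    """Validate the chosen mode and scene before entering the virtual world."""
    if mode not in SCENES:
        raise ValueError(f"unknown mode: {mode}")
    if scene not in SCENES[mode]:
        raise ValueError(f"scene '{scene}' is not available in mode '{mode}'")
    return {"mode": mode, "scene": scene}

print(select_environment("entertainment", "tennis court"))
```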


Software & Applications_

PART4: The Virtual Community. When the user enters the virtual community, he is actually controlling an avatar that represents him. All the trees, flowers and buildings are virtual; they are programs pre-set in everyone's helmet. The people you see in this virtual community all exist in the real world and are using the helmet (which means their status is “connected”). When you enter the community, you can choose the location where you appear, e.g. under the Eiffel Tower in the virtual world.
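A minimal Python sketch of entering the community, with hypothetical class names (Avatar, VirtualCommunity); it only illustrates the two rules stated above, namely that visible people are connected real users and that the newcomer chooses a spawn location:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Avatar:
    owner: str
    location: str
    status: str = "connected"

class VirtualCommunity:
    def __init__(self) -> None:
        self.avatars: List[Avatar] = []

    def enter(self, owner: str, location: str = "Eiffel Tower") -> Avatar:
        """Spawn an avatar for a connected user at the chosen location."""
        avatar = Avatar(owner=owner, location=location)
        self.avatars.append(avatar)
        return avatar

    def visible_people(self) -> List[str]:
        """Everyone you see corresponds to a real, connected user."""
        return [a.owner for a in self.avatars if a.status == "connected"]

community = VirtualCommunity()
community.enter("Jemmie", location="Eiffel Tower")
community.enter("Alice", location="Eiffel Tower")
print(community.visible_people())  # ['Jemmie', 'Alice']
```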


THANK YOU

YINGHUA ZHU, ZIYU ZHOU, HANYU TANG, DONGYING HU, JIASHENG XIE, RAN XU, KELLY JIANG, HANSHU CHEN, YUHANG REN, WEIRUN GUO

