Imperial Journal of Interdisciplinary Research (IJIR) Vol-3, Issue-2, 2017 ISSN: 2454-1362, http://www.onlinejournal.in
Survey on Human–Computer Interface for Persons with Tetraplegia

Blessy B John, Department of Computer Science, RIT, Kottayam

Abstract: Human–computer interfaces for persons with tetraplegia aim to ease the difficulty these users have with a personal computer's standard input devices, the keyboard and the mouse. Tetraplegia is paralysis, caused by illness or injury, that results in the partial or total loss of use of all four limbs. Several assistive technologies have been developed to improve the lives of people with tetraplegia by allowing them to control computers, smartphones, power wheelchairs and other computerized appliances. In this survey, we review the major assistive-technology approaches used for human–computer interfaces.

Keywords: Tongue Drive System (TDS), Lip Control System (LCS), Brain–Computer Interfaces (BCIs), Randomized Decision Tree, Human–Computer Interface.
1. Introduction

Tetraplegia, also known as quadriplegia, is paralysis caused by illness or injury that results in the partial or total loss of use of all four limbs and the torso. The loss is usually both sensory and motor, meaning that both sensation and control are lost. Tetraparesis or quadriparesis, on the other hand, denotes muscle weakness affecting all four limbs. Personal computers play a very important role in modern life, yet for persons with tetraplegia caused by traumatic brain injury, cerebral palsy, neurological injury or stroke, it is very difficult to use a personal computer's standard input devices, such as the keyboard and the mouse, during rehabilitation and everyday activities. Assistive technology is a generic term covering adaptive, assistive and rehabilitative devices that provide a computer interface for people with disabilities. Control of computers and other electronic devices is a genuine worldwide concern because it offers people with disabilities the ability to improve their quality of life. The standard way of operating a personal computer, however, requires the reliable use of hands and arms, since it involves a keyboard and a mouse, and is therefore unsuitable for a large number of people with disabilities. Developing alternative user interfaces that do not require manual input is thus of great importance.

This paper presents a survey of human–computer interfaces for persons with tetraplegia; we discuss five different methods. The first method, the Tongue Drive System (TDS), is a wireless, wearable assistive technology: a tongue-operated computer input device that enables individuals with severe motor impairments to access computers and control their environments using tongue motion. Subjects wear a titanium tongue ring in the form of a barbell, with a small rare-earth magnetic tracer hermetically sealed inside the upper ball.
The second method, the Lip Control System (LCS), is an innovative human–computer interface specially designed for people with tetraplegia. It consists of a headset and a joystick positioned in front of the lower lip. The lip muscles are controlled by the facial nerve, which is directly connected to the brain. The system presents itself as a standard Bluetooth mouse with a human interface device (HID) profile. The third method tests an inertial sensor as a hands-free human–computer interface designed around a commercially available inertial sensor pack. It is primarily intended to provide computer access for people with little or no upper-limb functionality, but it can also be used by able-bodied subjects in certain application scenarios. The fourth method, brain–computer interfaces (BCIs), allows users to communicate or control external devices using brain signals. A BCI replaces muscles and nerves, and the movements they produce, with software and hardware that measure brain signals and translate those signals into actions.
Restoring basic communication capabilities for these people could significantly improve their quality of life as well as that of their caregivers, reduce social isolation, increase independence and potentially reduce the cost of care. The last method is a human–computer interface for persons with tetraplegia, the Facial position and expression Mouse system (FM), based on a monocular infrared depth camera. The nose position, along with the mouth status (open or closed), is detected by an algorithm and used to control and navigate the cursor as computer input. The algorithm is based on an improved Randomized Decision Tree, which identifies the facial information efficiently and accurately. A more comfortable user experience is achieved by mapping the nose motion to the cursor motion via a nonlinear function. The infrared depth camera makes the system independent of illumination and color changes, both in the background and on the human face, which is a critical advantage over RGB camera-based options.
2. Human–Computer Interface Techniques

Here we discuss five methods used for human–computer interfaces for persons with tetraplegia: the Tongue Drive System (TDS), the Lip Control System (LCS), a tested inertial sensor, Brain–Computer Interfaces (BCIs) and the Facial position and expression Mouse system (FM).

2.1. Tongue Drive System (TDS)

The Tongue Drive System (TDS) [2] is a wireless, wearable assistive technology. It enables individuals with severe motor impairments to access computers, drive wheelchairs and control their environments using tongue motion. It consists of a small permanent magnetic tracer fixed on the tongue with adhesive; a headset with an array of 3-axial magnetic sensors that detect the changes in the magnetic field generated by the tracer; a wireless link between the headset and a transceiver on a computer or smartphone to transfer the magnetic sensor data; and a sensor signal processing (SSP) algorithm, which identifies the position of the magnetic tracer and, hence, the position of the tongue.

Figure 1: Tongue drive system

The SSP algorithm runs on the personal computer and uses a K-nearest neighbour (KNN) classifier to classify incoming samples based on their features, which are extracted via principal components analysis from data collected during a training stage. The classifier distinguishes the tongue positions for the six TDS commands plus the tongue resting position, which is treated as the NEUTRAL command.
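As a rough illustration of this classification stage, the following Python sketch trains a PCA-plus-KNN pipeline on synthetic sensor vectors. The number of sensor channels, the command names, the number of principal components and the value of k are illustrative assumptions, not values from the TDS papers (scikit-learn and NumPy are assumed available).

# Hypothetical sketch of the SSP stage: PCA features + KNN classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic training data: 7 classes (6 TDS commands + NEUTRAL), each sample
# a flattened vector of assumed 3-axis readings from 4 magnetic sensors.
n_per_class, n_features = 20, 12
labels = ["LEFT", "RIGHT", "UP", "DOWN", "LEFT_CLICK", "RIGHT_CLICK", "NEUTRAL"]
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_features))
               for i in range(len(labels))])
y = np.repeat(labels, n_per_class)

pca = PCA(n_components=4).fit(X)            # feature extraction (training stage)
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X), y)

sample = rng.normal(loc=2, scale=0.5, size=(1, n_features))  # incoming frame
print(knn.predict(pca.transform(sample))[0])                 # e.g. "UP"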
It should be noted that the cursor movement in each direction was unlatched, i.e., the subjects had to keep issuing a directional command to maintain the cursor motion in that direction. The cursor speed increased linearly from zero at a rate of 500 pixels/s² until it reached a saturation level of 350 pixels/s. The acceleration rate and saturation level were chosen based on the comfort level of an experienced TDS user. Reaching targets in oblique directions with only four directional commands requires moving in horizontal and vertical segments. Since the cursor speed ramped up from zero every time a new command was issued, the most efficient way of reaching such targets was to minimize the number of segments; even so, reaching oblique targets always took longer than reaching vertical or horizontal targets at a similar distance. To prevent subjects from repeatedly issuing wrong commands or from adjusting the cursor position with a series of small moves, a time limit was set for reaching a target, beyond which the trial was terminated and a new target appeared.
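The ramp-and-saturate behaviour can be captured in a few lines. The 500 pixels/s² acceleration and the 350 pixels/s cap come from the text above; the 20 ms update period is an assumed value for the sketch.

# Minimal sketch of the unlatched cursor-speed ramp described above.
ACCEL = 500.0      # px/s^2, ramp rate while a directional command is held
V_MAX = 350.0      # px/s, saturation speed
DT = 0.02          # s, assumed cursor-update period

def cursor_step(held_time: float) -> float:
    """Pixels to move this update, given how long the command has been held."""
    v = min(ACCEL * held_time, V_MAX)   # linear ramp, then saturation
    return v * DT

# Speed resets to zero whenever the command is released (unlatched control),
# so each new command segment ramps up again from rest.
t, x = 0.0, 0.0
while t < 1.0:                  # simulate holding one directional command for 1 s
    x += cursor_step(t)
    t += DT
print(f"cursor travelled {x:.0f} px in 1 s")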
2.2. Lip Control System (LCS)

The Lip Control System (LCS) [4] is a human–computer interface made of a headset and a joystick
positioned in front of the lower lip. The lip control must be head mounted in order to follow the movements of the lower-lip muscles. The joystick was chosen as the interaction method because it is easy to use, is compatible with lip movement, provides intuitive control and is widely used in assistive technologies. Because the LCS is controlled by the lower lip, an external body part, it raises fewer hygiene issues. It allows smooth, free movement in any direction, as it is based on a joystick. It is a personal system that can stay with the user in the wheelchair, a chair, bed, etc., and because it is head mounted it avoids false commands caused by wheelchair vibration or body spasms. An efficient human–computer interface is very helpful in improving the autonomy of people with tetraplegia, allowing the control of power wheelchairs, smartphones, computers and other computerized appliances. The LCS hardware consists of an Arduino Mega ADK development board, a Bluetooth module (Roving RN42-HID) and a thumb joystick, Fig. 2(a).
Figure 2: (a) LCS architecture. (b) Head support. (c) Joystick support. (d) Calibration holes.
The system was designed as a standard Bluetooth mouse with a human interface device (HID) profile, so all communication happens as with a standard Bluetooth mouse. The LCS was developed specifically to be controlled by the lower lip. The head support, Fig. 2(b), evolved to provide the necessary stability during operation; the joystick support, Fig. 2(c), evolved to be double and to allow calibration of length and angle so that the joystick can be set in the correct operating position (just touching the skin).
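A minimal Python sketch of the joystick-to-mouse-delta logic such firmware might implement is given below (the actual device runs on an Arduino). The 10-bit ADC range, dead-zone width and gain are invented for illustration; the LCS description does not specify these values.

# Hedged sketch: turn a centred thumb-joystick reading into signed
# HID-style mouse deltas.
ADC_CENTER = 512      # midpoint of an assumed 10-bit ADC (0..1023)
DEAD_ZONE = 40        # counts ignored around centre to suppress drift/spasms
GAIN = 0.05           # counts -> cursor pixels per report

def axis_to_delta(raw: int) -> int:
    """Map one raw ADC axis reading to a signed mouse delta."""
    offset = raw - ADC_CENTER
    if abs(offset) < DEAD_ZONE:        # inside dead zone: no motion
        return 0
    return int(offset * GAIN)          # proportional control outside it

# One HID mouse report would then carry (buttons, dx, dy); for example:
dx, dy = axis_to_delta(700), axis_to_delta(420)
print(dx, dy)   # -> 9 -4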
2.3. Testing an Inertial Sensor

This hands-free computer interface [5] was designed for people with certain types of disabilities (e.g. tetraplegia) who are unable to use computers. The performance of the device was evaluated on twelve healthy subjects performing a multi-directional point-and-select task, with throughput as the main performance parameter. The sensor incorporated a triad of accelerometers, gyroscopes and magnetometers, and as output provided 3D orientation with an angular resolution of 0.05 degrees, a static accuracy better than 1 degree and a dynamic accuracy (depending on movement) of 2 degrees RMS. The sensor was relatively small, measuring 38 x 53 x 21 mm (W x L x H) and weighing 30 g. The system used the commercially available Xsens MTx sensor and XBus Master. The XBus Master is a digital data bus with data-processing capabilities based on Kalman filtering, which can operate in one of two modes: wire mode (used in the measurements) or wireless mode via Bluetooth (enabling high portability). The sensor, placed on top of the subject's head, measured absolute head orientation (roll, pitch and yaw angles) and was secured in place by an elastic harness which ensured a snug fit and prevented the sensor from moving during measurements. The system was tested in four pointing-mode/selection-mode configurations: the two pointing modes were Joystick and Pointer, while the two selection modes were keyboard and time trigger. The head-joystick device showed its best performance in Pointer mode with the time-trigger mechanism, measuring a throughput of 1.927 bits/s. The keyboard selection technique required the subject to press the spacebar to complete a selection, while in the time-trigger technique the user had to place the pointer inside the target circle and keep it there for 400 ms to complete the selection task. The system is a simple, highly portable, accurate, intuitive and user-friendly device which could be used both by healthy subjects and by impaired subjects who retain some motor functionality. Its most significant drawback is its high price.
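Throughput in such multi-directional point-and-select tasks (in the style of ISO 9241 [7]) is commonly computed from Fitts' index of difficulty. The sketch below shows one standard formulation; the target distance, width and movement time are made-up example numbers, not the study's measurements.

# Sketch of the Fitts-style throughput metric used in point-and-select tasks.
import math

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits/s: ID = log2(D/W + 1), TP = ID / MT."""
    index_of_difficulty = math.log2(distance / width + 1.0)   # bits
    return index_of_difficulty / movement_time                # bits per second

# Example: a 400 px reach to a 40 px target selected in 1.8 s
print(f"{throughput(400, 40, 1.8):.3f} bits/s")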
Figure 3: Measurement setup

2.4. Brain–Computer Interfaces

Brain–computer interface (BCI) [3] technology provides a new output channel for brain signals to communicate with or control external devices without using the normal output pathways of peripheral nerves and muscles. A BCI identifies the intent of the user through electrophysiological or other brain signals. Electrophysiological signals may be recorded over the scalp, underneath the scalp, or within the brain; other types of physiological signals may be recorded by magnetic sensors or other means. In real time, the brain signals are translated into output commands that accomplish the user's intent. A brain–computer interface, also referred to as a brain–machine interface, is a communication or control system that allows interaction between the human brain and external devices. A BCI user's intent, as reflected by brain signals, is translated by the BCI system into a desired output: computer-based communication or control of an external device.

Brain signals that carry the intent of the user are first acquired by electrodes placed on the scalp (EEG), beneath the skull and over the cortical surface (ECoG), or within brain tissue (intracortical). These brain signals are digitized, and specific signal features are extracted. The extracted features are then translated into device commands that activate and control assistive technology used for communication, movement control, environmental control, locomotion or neurorehabilitation.
Figure 4: Brain–Computer Interfaces
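As a toy illustration of this acquire/extract/translate pipeline, the sketch below thresholds the band power of a synthetic EEG trace into a binary command. The sampling rate, frequency band and decision threshold are assumptions made for the sketch, not parameters from the cited work.

# Illustrative BCI translation step: band power of a synthetic EEG channel
# is thresholded into a binary device command.
import numpy as np

FS = 256                     # Hz, assumed sampling rate
BAND = (8.0, 12.0)           # Hz, mu/alpha band often used in motor-imagery BCIs

def band_power(signal: np.ndarray, fs: int, band: tuple) -> float:
    """Mean spectral power of `signal` inside `band`, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

rng = np.random.default_rng(1)
t = np.arange(FS) / FS                                        # one second of data
eeg = rng.normal(size=FS) + 3 * np.sin(2 * np.pi * 10 * t)    # strong 10 Hz rhythm

power = band_power(eeg, FS, BAND)
command = "MOVE" if power > 100.0 else "IDLE"   # assumed decision threshold
print(command)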
The operating protocol determines the interactive functioning of the BCI system. It defines the onset/offset control, the details and sequence of the steps in the operation of the BCI, and the timing of BCI operation. It also provides the feedback parameters and settings, and possibly any switching between different device outputs. An effective operating protocol allows a BCI system to be flexible, serving the specific needs of an individual user.
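The kinds of settings such a protocol fixes can be pictured as a configuration object; the field names and default values below are hypothetical, chosen only to make the paragraph concrete, and do not describe any specific BCI system.

# Hedged sketch of operating-protocol settings as a configuration object.
from dataclasses import dataclass

@dataclass
class BCIProtocol:
    onset_cue: str = "auditory"      # how trials are started (onset control)
    trial_timeout_s: float = 8.0     # max time allowed per selection
    feedback: str = "cursor"         # feedback modality shown to the user
    output_device: str = "speller"   # active output; protocol may switch this
    inter_trial_gap_s: float = 2.0   # pacing between trials

protocol = BCIProtocol(output_device="wheelchair", trial_timeout_s=10.0)
print(protocol)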
2.5. Facial Position and Expression

The human–computer interface called the Facial position and expression Mouse system (FM) [1] uses a monocular commercial depth camera, such as SoftKinetic [10] or Kinect, and is based on facial position and expression. Because it relies on an infrared depth camera, FM is independent of color and illumination influences; its main interference source is infrared light such as direct sunlight. Moreover, the depth image simplifies the extraction of facial features, which makes the method more robust than RGB camera-based methods. A fast and robust Randomized Decision Tree (RDT) is used to automatically detect the position and the expression from a single image, which avoids the "feature drift" problem that affects most tracking algorithms. In the training phase of the algorithm, the information about both the position and the expression is embedded in the labels. There are five labels for the RDT algorithm: nose, mouth-open, mouth-close, head, and body. A mean-shift algorithm and a voting algorithm are used to extract the positions and the mouth status, respectively. The closed and open statuses of the mouth are used to enable and disable the motion of the cursor, respectively. The runtime of the detection algorithm can be sped up by pyramid processing.
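A simplified sketch of this post-processing follows: a mean-shift iteration localizes the nose from the pixels the RDT labelled "nose", and a vote over mouth-open versus mouth-close pixels sets the cursor-enable flag. The pixel data, bandwidth and vote counts are synthetic, and this is a simplification of the paper's algorithm, not a reimplementation.

# Simplified mean-shift localization plus majority voting for mouth status.
import numpy as np

def mean_shift_mode(points: np.ndarray, bandwidth: float, iters: int = 20):
    """Shift a centroid toward the densest cluster of `points` (N x 2)."""
    mode = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - mode, axis=1)
        near = points[d < bandwidth]
        if len(near) == 0:
            break
        mode = near.mean(axis=0)
    return mode

rng = np.random.default_rng(2)
nose_px = rng.normal(loc=(120, 80), scale=3, size=(200, 2))  # pixels labelled "nose"
print(mean_shift_mode(nose_px, bandwidth=10.0))              # ~ (120, 80)

# Mouth status by voting over pixels labelled mouth-open vs mouth-close;
# per the text, a closed mouth enables cursor motion, an open mouth disables it.
open_votes, close_votes = 340, 120
cursor_enabled = close_votes > open_votes
print("cursor enabled:", cursor_enabled)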
To address the small range of head motion imposed by the camera's low resolution, the human–computer interface combines the advantages of both facial expression-based interfaces and head motion-based interfaces. An efficient user experience is achieved by a nonlinear function mapping the motion of the nose to the motion of the cursor. The mouth status enables the user to conveniently adjust the pose of the head and to efficiently issue commands. Compared with other interface devices for people with difficulty accessing computers, the strengths of this system are its comfort (no guardian, no accessory on the body, no calibration, and no initialization) and its robustness (insensitivity to illumination and color). It enables persons with tetraplegia to use computers efficiently and enrich their lives.
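The paper does not give the form of the nonlinear mapping, so the sketch below assumes a power-law transfer function purely for illustration: small nose motions yield fine cursor control while larger motions accelerate the cursor. The gain and exponent are invented constants.

# Assumed power-law nose-to-cursor transfer function (illustrative only).
def cursor_delta(nose_delta_px: float, gain: float = 8.0, power: float = 1.6) -> float:
    """Small head motions give fine control; larger ones accelerate the cursor."""
    sign = 1.0 if nose_delta_px >= 0 else -1.0
    return sign * gain * abs(nose_delta_px) ** power

for d in (0.5, 2.0, 5.0):
    print(d, round(cursor_delta(d), 1))
# 0.5 px of nose motion -> ~2.6 px (precise); 5 px -> ~105 px (fast traversal)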
Figure 5: Overview of FM
3. Conclusion

In this paper we have discussed several assistive technologies used for human–computer interfaces. These ATs have been developed to help persons with tetraplegia use their limited voluntary signals and motions to control computers. For a given situation one approach may be better than all the rest, and each of the presented approaches is the most suitable in some particular case. While the latest research is still mainly focused on technical issues, the user and his or her daily requirements remain to be fully addressed. It is therefore still hard to choose an assistive technology that maximizes the user's capabilities, fulfils his or her requirements and enriches his or her life.
References

[1] Zhen-Peng Bian, Lap-Pui Chau and Nadia Magnenat-Thalmann, "Facial position and expression-based human–computer interface for persons with tetraplegia," IEEE J. Biomed. Health Informat., vol. 20, no. 3, May 2016.
[2] B. Yousefi, X. Huo, E. Veledar, and M. Ghovanloo, "Quantitative and comparative assessment of learning in a tongue-operated computer input device," IEEE Trans. Inform. Technol. Biomed., vol. 15, no. 5, pp. 747–757, Sep. 2011.
[3] J. Mak and J. Wolpaw, "Clinical applications of brain-computer interfaces: Current state and future prospects," IEEE Rev. Biomed. Eng., vol. 2, pp. 187–199, 2009, DOI 10.1109/RBME.2009.2035356.
[4] M. Jose and R. de Deus Lopes, "Human-computer interface controlled by the lip," IEEE J. Biomed. Health Informat., vol. 19, no. 1, pp. 302–308, Jan. 2015.
[5] J. Music, M. Cecic, and M. Bonkovic, "Testing inertial sensor performance as hands-free human-computer interface," WSEAS Trans. Comput., vol. 8, pp. 715–724, Apr. 2009.
[6] Z.-P. Bian, J. Hou, L.-P. Chau, and N. Magnenat-Thalmann, "Human computer interface for quadriplegic people based on face position/gesture detection," in Proc. ACM Int. Conf. Multimedia, 2014, pp. 1221–1224.
[7] Ergonomics of Human-System Interaction—Part 411: Evaluation Methods for the Design of Physical Input Devices, ISO/TS 9241-411:2012, May 2012.
[8] C. Lau and S. O'Leary, "Comparison of computer interface devices for persons with severe physical disabilities," Amer. J. Occupat. Therapy, vol. 47, no. 11, pp. 1022–1030, Nov. 1993.
[9] P. McCool, G. Fraser, A. Chan, L. Petropoulakis, and J. Soraghan, "Identification of contaminant type in surface electromyography (EMG) signals," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 4, pp. 774–783, Jul. 2014.
[10] SoftKinetic. (2014). [Online]. Available: http://www.softkinetic.com/
[11] C. M. Bishop, Pattern Recognition and Machine Learning. New York, NY, USA: Springer, 2006.
[12] Camera Mouse. (2014). [Online]. Available: http://www.cameramouse.org/