
Short Paper Proc. of Int. Conf. on Advances in Computer Engineering 2012

Dynamical Switching between EEG and ECG for Emotion Recognition in Living Space

Kanlaya Rattanyu1 and Makoto Mizukawa2

1 Shibaura Institute of Technology, Graduate School of Functional Control System Engineering, Tokyo, Japan. Email: m709502@shibaura-it.ac.jp
2 Shibaura Institute of Technology, Department of Electrical Engineering, Tokyo, Japan. Email: mizukawa@sic.shibaura-it.ac.jp

Abstract—This paper presents our approach to emotion recognition based on wireless, wearable, multichannel Electroencephalogram (EEG) and Electrocardiogram (ECG) sensors, chosen for mobility and convenience in users' daily life. We take advantage of combining the two signals: EEG gives a more precise recognition rate, while ECG is more stable and less affected by noise. In the ECG module, we propose to use the ECG's inter-beat features together with within-beat features. To reduce the feature space, post hoc tests in the Analysis of Variance (ANOVA) were employed to select the eleven most significant features. The designed system applies EEG power spectral density and fractal dimension (FD) features in the normal situation and switches to ECG features when the EEG signal degrades. We conducted experiments on 18 subjects, using the Mirror Neuron System (MNS) theory to elicit emotion. For simultaneous classification of six emotional states (anger, fear, disgust, sadness, neutral, and joy), the Correct Classification Ratio (CCR) was 74.1% for the EEG module and 60.6% for the ECG module.

Index Terms—emotion recognition, EEG, ECG, ANOVA

I. INTRODUCTION

Although EEG and ECG are commonly used for emotion recognition, some issues remain. For EEG, the quality of the data depends on user activity and sensor setup. For example, artifacts arise from muscle activity when the user moves, and the signal may weaken for short periods when the user changes head position and the EEG sensor loosens. To overcome this issue, this paper proposes dynamic switching between sensor/data sources when the EEG signal degrades. The ECG signal has an advantage over the EEG signal in that its amplitude is much larger: ECG amplitude is measured in mV, whereas for a typical adult the EEG signal measured from the scalp is only about 10-100 µV. However, the main limitation of emotion recognition using only the ECG signal is the number of emotional categories it can distinguish. Facial expressions can be categorized into many emotion classes, while most successful ECG-based studies classify only a few categories, such as positive/negative feeling [1-3], stressed/relaxed states [4, 5], or fear/neutrality [6]. Some studies (e.g., [7-10]) overcome this limitation by combining ECG with other physiological signals related to organs affected by the Autonomic Nervous System (ANS). Among these studies, some correlations between emotion and ECG can be identified: an increase of heart rate associated with fear (e.g., [8]) and anger (e.g., [7]), and an increase of heart rate variability associated with stress (e.g., [5]). However, some results are controversial: sadness has been found sometimes to increase heart rate (e.g., [11]) and sometimes to decrease it (e.g., [8]).

II. EQUIPMENT

Many biological signals are related to emotion. The sensors were selected based on three criteria. First, the signal had to be strongly related to human emotion. Second, the sensor had to adhere to human skin without discomfort. Third, the sensor had to be wearable and convenient for use in normal daily life.

A. Wireless ECG Sensor (RF-ECG)

The RF-ECG sensor was used to measure the electrocardiogram (ECG) signal generated by the electrical activity of the heart muscle. The sensor is lightweight (12 g) and small (40 mm × 35 mm × 7.2 mm). It records ECG signals at 204 Hz and transmits them wirelessly to the server; the RF transmitter has an open-area range of up to 15 m.

B. EEG Emotiv EPOC Headset

The Emotiv EPOC headset was selected to measure electroencephalogram (EEG) signals, which carry information about brain activity and global information about mental activities and emotional states. The neuroheadset consists of 14 electrodes placed according to the American Electroencephalographic Society standard. It also integrates two internal gyroscopes that provide information about the user's head position.

III. METHODOLOGY

To obtain EEG signals without artifacts: (1) the user must avoid moving during EEG signal acquisition; and (2) filters or other signal-processing algorithms can be applied to remove artifacts from the acquired EEG signals. Although filters remove some artifacts, removing them entirely is impossible with existing data-acquisition processes, so it is better to avoid them. In most successful studies [11-16], participants were asked to move as little as possible while EEG was measured. For this reason, our system was designed to switch dynamically between sensors when the EEG signal degrades.


We employ emotion recognition using ECG instead of EEG when the EEG contact quality is bad or the user is moving, as described in Figure 1.

Figure 1. Dynamical switching between EEG and ECG for emotion recognition

A. Checking contact quality of EEG

The headset provides contact-quality flag variables associated with each EEG channel. The color of each sensor circle represents its contact quality, as shown in Figure 2. In the EEG emotion recognition module, we accepted only excellent (green) signals; to achieve the best possible contact quality, all of the sensors should show green. The other colors indicate: yellow, fair signal; orange, poor signal; red, very poor signal; and black, no signal.

Figure 2. Checking contact quality of EEG

B. Checking user's movement

We detected the user's head movement via the two internal gyroscopes in the headset device. To avoid using noisy data, the system rejected the EEG signal whenever the user did not keep still.

Figure 3. Detecting user's movement
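Taken together, subsections A and B define the switching rule of Figure 1. The sketch below is a minimal illustration of that rule; the flag encoding, function names, and the gyroscope threshold are our assumptions, since the paper does not specify the exact SDK calls or limits.

```python
# Hypothetical sketch of the EEG/ECG switching rule; the flag encoding and the
# movement threshold are illustrative assumptions, not the Emotiv SDK API.
from typing import Sequence

EXCELLENT = "green"      # assumed encoding of the per-channel contact-quality flag
GYRO_LIMIT = 15.0        # assumed bound on head movement (gyroscope units)

def select_signal_source(contact_flags: Sequence[str],
                         gyro_x: Sequence[float],
                         gyro_y: Sequence[float]) -> str:
    """Return 'EEG' when every channel is excellent and the head is still;
    otherwise fall back to the ECG module."""
    all_excellent = all(flag == EXCELLENT for flag in contact_flags)
    # Peak-to-peak deviation of the gyroscope readings within the window.
    movement = max(max(gyro_x) - min(gyro_x), max(gyro_y) - min(gyro_y))
    return "EEG" if (all_excellent and movement < GYRO_LIMIT) else "ECG"
```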

C. EEG Processing

The Emotiv headset acquires the EEG signal with 14 sensors placed on the user's scalp. The signals are recorded at a sampling rate of 2048 Hz through a C-R high-pass hardware filter with a 0.16 Hz cutoff, pre-amplified, low-pass filtered with an 83 Hz cutoff, and preprocessed with two notch filters at 50 Hz and 60 Hz. The signal is then down-sampled to 128 Hz.
• Filters: A high-pass filter with a 1 Hz cutoff frequency was first applied to remove linear trends in the raw EEG signals, and a low-pass filter with a 41 Hz cutoff frequency was employed to remove high-frequency noise.
• Signal enhancement: No timestamp data are available from the Emotiv headset, so the amount of data loss was calculated from the time period and the sampling frequency. The lost samples were compensated by simple scale interpolation.
• Feature extraction: EEG signals are complex and noise-like, so gathering information directly from the time domain is difficult. The compensated signals were therefore converted from the time domain to the frequency domain and split into frequency ranges. In this work we use a combination of the power spectral density (PSD) of the different EEG frequency bands and the fractal dimension (FD) at each electrode location as our set of EEG features (a sketch follows equation (1)). A fast Fourier transform was used to estimate the PSD for each electrode in the δ (1-4 Hz), θ (4-8 Hz), α (8-14 Hz), β (14-26 Hz), and γ (26-41 Hz) bands. For each data point, the normalized PSD of sub-band s at electrode location i was calculated as in equation (1):

$$\overline{PSD}_{si} = \frac{PSD_{si}}{\sum_{s=1}^{5} PSD_{si}} \qquad (1)$$
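The following sketch shows how the 1-41 Hz filtering and the normalized band powers of equation (1) could be computed for one channel. It is a minimal sketch under stated assumptions: the filter orders, the Welch estimator (an FFT-based PSD estimate), and all names are ours; only the cutoffs, bands, and equation (1) come from the text.

```python
# Minimal sketch of the EEG preprocessing and normalized band-power features.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 128  # Hz, sampling rate after down-sampling
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 26), "gamma": (26, 41)}

def preprocess(channel: np.ndarray) -> np.ndarray:
    """1 Hz high-pass to remove linear trends, 41 Hz low-pass against noise."""
    b_hp, a_hp = butter(4, 1.0 / (FS / 2), btype="highpass")
    b_lp, a_lp = butter(4, 41.0 / (FS / 2), btype="lowpass")
    return filtfilt(b_lp, a_lp, filtfilt(b_hp, a_hp, channel))

def normalized_band_powers(channel: np.ndarray) -> np.ndarray:
    """Equation (1): PSD of each sub-band divided by the sum over all bands."""
    freqs, psd = welch(preprocess(channel), fs=FS, nperseg=2 * FS)
    powers = np.array([psd[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in BANDS.values()])
    return powers / powers.sum()
```

Applied to a 6 s segment (768 samples per channel), this yields the five normalized PSD features per electrode that are combined with the FD feature described next.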


The fractal dimension (FD) is a quantity that conveys information about the space filling and self-similarity of an object. To calculate the FD we used the method of a recent study [20], which reports better performance than the Higuchi algorithm (the traditional method) and computes the FD directly from the waveform. In the first step, the time series t of length N is normalized with respect to time and amplitude:

$$n_{new} = \frac{n}{N} \qquad (2)$$

$$t_{new} = \frac{t - t_{min}}{t_{max} - t_{min}} \qquad (3)$$

where t_min and t_max are the minimum and maximum of the signal amplitude, respectively. The FD of the waveform can then be approximated as

$$FD = 1 + \frac{\ln(L)}{\ln(2(N-1))} \qquad (4)$$

where L is the length of the normalized curve,

$$L = \sum_{n=2}^{N} \sqrt{(t_{new}(n) - t_{new}(n-1))^{2} + (n_{new}(n) - n_{new}(n-1))^{2}} \qquad (5)$$

This produces 5+1 features at each electrode location, giving a total of 84 features for each emotional data sample.
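A compact sketch of equations (2)-(5) for one channel is given below; the function name and array handling are our assumptions.

```python
# Sketch of the waveform fractal dimension of equations (2)-(5), after Sevcik [20].
import numpy as np

def fractal_dimension(signal: np.ndarray) -> float:
    """Normalize the waveform in time and amplitude, then apply equation (4)."""
    n = len(signal)
    n_new = np.arange(n) / n                                          # eq. (2)
    t_new = (signal - signal.min()) / (signal.max() - signal.min())   # eq. (3)
    # Length of the normalized curve, eq. (5).
    curve_len = np.sum(np.sqrt(np.diff(t_new) ** 2 + np.diff(n_new) ** 2))
    return 1.0 + np.log(curve_len) / np.log(2 * (n - 1))              # eq. (4)
```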

D. ECG Processing

The ECG signal was sampled at 204 Hz and the digital signal was transmitted wirelessly to the server.
• Annotation of the ECG: The Continuous Wavelet Transform (CWT) and Fast Wavelet Transform (FWT) were used for automatic annotation of the ECG cardio cycle [8]. The annotation method consists of two phases: QRS detection followed by P and T wave location.
• QRS detection: To amplify the QRS complex and separate the low-frequency content (P and T waves) from high-frequency noise, the CWT was applied at 12 Hz with an inverse wavelet. The resulting CWT spectrum was further filtered with the FWT using an interpolation filter to remove frequency content below 30 Hz, and the rest of the spectrum was denoised with a hard threshold using a MINIMAX estimate. After denoising, the reconstructed ECG signal contains only spikes with non-zero values at the locations of the QRS complexes.
• P and T wave detection: After the QRS complexes were detected, the intervals between them were processed to detect the P and T waves.
• Feature extraction: After determining the location of each wave on the ECG, several parameters indicating each part of the heart's activity were calculated. In this process we calculated not only the inter-beat information of the ECG (RR interval, or HR) but also the within-beat information (PR, QRS, ST, and QT intervals, and PR and ST segments).
• Statistical data: We calculated six types of statistics (maximum, minimum, median, mean, SD, and RMSSD) for each parameter over the 6-second period. In total there are seven parameters (RR, PR, QRS, ST, and QT intervals, and PR and ST segments) and six types of statistics, so the corresponding Cartesian product has 42 (7×6) elements, i.e., 42 candidate features (see the sketch after this list).
• Post hoc tests in the Analysis of Variance (ANOVA): The Least Significant Difference (LSD) test was used to explore all possible pair-wise comparisons of means within the emotion factor using the equivalent of multiple t-tests. We used this post hoc test to identify the significance of each feature. Of the 42 features, we selected the 11 (HR, SDNN, RMSSD, QT, SDQT, PR, SDPR, QRS, SDQRS, ST, and SDST) whose level of confidence was greater than 85%, and the remaining non-significant features were not used in the classification process.
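The sketch below illustrates how the inter-beat and within-beat parameters and their per-window statistics could be computed once the wavelet-based annotator has produced fiducial points for each beat. The dictionary keys, the simplified interval definitions (a subset of the seven parameters), and the helper names are our assumptions.

```python
# Sketch of inter-beat / within-beat ECG features from pre-annotated beats.
# Fiducial keys and interval definitions are simplified illustrative assumptions.
import numpy as np

FS_ECG = 204  # Hz

def beat_intervals(beats: list) -> dict:
    """beats: one dict per beat with fiducial sample indices
    ('P_on', 'Q', 'S', 'T_off', 'R'). Returns interval series in ms."""
    to_ms = 1000.0 / FS_ECG
    r = np.array([b["R"] for b in beats])
    return {
        "RR":  np.diff(r) * to_ms,
        "PR":  np.array([(b["Q"] - b["P_on"]) * to_ms for b in beats]),
        "QRS": np.array([(b["S"] - b["Q"]) * to_ms for b in beats]),
        "QT":  np.array([(b["T_off"] - b["Q"]) * to_ms for b in beats]),
        "ST":  np.array([(b["T_off"] - b["S"]) * to_ms for b in beats]),
    }

def window_stats(x: np.ndarray) -> dict:
    """The six statistics computed per parameter over a 6 s window."""
    rmssd = float(np.sqrt(np.mean(np.diff(x) ** 2))) if len(x) > 1 else 0.0
    return {"max": float(x.max()), "min": float(x.min()),
            "median": float(np.median(x)), "mean": float(x.mean()),
            "sd": float(x.std()), "rmssd": rmssd}
```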

E. Normalization

Each parameter was normalized by subtracting its mean value in the neutral emotion.

F. Classification

Linear Discriminant Analysis (LDA) was used to classify emotion into six categories (anger, fear, disgust, sadness, neutral, and joy). We ran the cross-validation 10 times using a 20% holdout method to obtain a better estimate of the classifier performance.
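To make the selection and classification steps concrete, the sketch below screens the candidate features and then evaluates LDA with a repeated 20% holdout. It is a sketch under stated assumptions: an omnibus one-way ANOVA stands in for the LSD post hoc comparisons (with the same 85% confidence level), and the use of SciPy/scikit-learn and all names are ours, since the paper does not name a toolkit.

```python
# Sketch only: omnibus ANOVA replaces the LSD post hoc tests for brevity, and
# scikit-learn's LDA is assumed; the paper does not specify an implementation.
import numpy as np
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def screen_features(X: np.ndarray, y: np.ndarray, alpha: float = 0.15) -> list:
    """Keep feature columns whose ANOVA p-value across the six emotions is
    below alpha (alpha = 0.15 corresponds to the 85% confidence level)."""
    keep = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in np.unique(y)]
        _, p = f_oneway(*groups)
        if p < alpha:
            keep.append(j)
    return keep

def mean_ccr(X: np.ndarray, y: np.ndarray, repeats: int = 10) -> float:
    """Repeated 20% holdout evaluation of LDA on the screened features."""
    cols = screen_features(X, y)
    scores = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[:, cols], y, test_size=0.2, stratify=y, random_state=seed)
        clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        scores.append(accuracy_score(y_te, clf.predict(X_te)))
    return float(np.mean(scores))  # mean Correct Classification Ratio (CCR)
```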

IV. EXPERIMENT

According to the Mirror Neuron System (MNS) theory [17], a subject's biological signals are supposed to reflect the same activity as when the subject is actually overcome by the observed emotion. Thus, exploiting the MNS during the emotion elicitation process should help to gather more representative biological signals. To elicit the basic emotions defined previously, Pictures of Facial Affect (POFA) [19] showing people expressing the six aforementioned emotions were presented one after another, separated by black and countdown (5, 4, ..., 1) frames before the projection of the next picture. The five-second black period served as a relaxation phase to reset the emotion. More specifically, a black screen was projected for five seconds, followed by a five-second countdown, then a one-second fixation cross to focus the user, and finally a randomly selected picture projected for six seconds. This seventeen-second procedure was repeated for each of the sixty pictures. Each trial began with a preparation step to familiarize the subjects with the experiment. We started measuring the biological signals at the same time the first picture was presented, so we were able to separate the recording into sixty 6-second emotional segments per subject.
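As a small illustration of this protocol, the sketch below slices a continuous recording that starts at the first picture onset into the sixty 6-second segments implied by the 17 s trial structure; the constants and names are assumptions for illustration only.

```python
# Sketch of segmenting the recording into 6 s emotion windows (one per picture).
import numpy as np

TRIAL_LEN_S = 17     # 5 s black + 5 s countdown + 1 s cross + 6 s picture
PICTURE_LEN_S = 6
N_PICTURES = 60

def segment_trials(recording: np.ndarray, fs: int) -> list:
    """recording: (channels, samples) array with t = 0 at the first picture onset.
    Returns one (channels, 6 s) slice per picture."""
    segments = []
    for k in range(N_PICTURES):
        start = k * TRIAL_LEN_S * fs
        segments.append(recording[:, start:start + PICTURE_LEN_S * fs])
    return segments
```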


V. RESULTS AND DISCUSSION

The experiment was conducted with 18 subjects (mean ± SD age = 27.5 ± 5.1 years). Both ECG and EEG were recorded for each subject, with 60 emotional data samples per subject. In the EEG module, samples whose contact quality was very poor were rejected, leaving a total of 912 selected samples. Previous studies extracted only the inter-beat information of the ECG, such as the RR-interval or heart rate (HR) time series, and some recorded it with statistical measures (min, max, average, Standard Deviation of Normal-to-Normal R-R intervals (SDNN), and Root Mean Square of Successive Differences of RR intervals (RMSSD)). In order to maximize the efficiency of the ECG, we propose to use the ECG's inter-beat features together with within-beat features in our recognition system. Table I shows the final emotion recognition accuracy based on the Mirror Neuron System. The experimental results show that our 11-feature ECG approach performed 21.4% better than the conventional 3-feature ECG approach, and that the EEG module gave a 13.5% more precise recognition rate than the ECG module when the subjects were asked to move as little as possible while the signals were measured.

TABLE I. EMOTION RECOGNITION ACCURACY

VI. CONCLUSION AND FUTURE WORK

In this work we focused on emotion recognition in the living space. Wearable EEG and ECG sensors with wireless connections were selected; they give the user mobility and convenience in normal daily life without audiovisual limitations. The experimental results showed that the primary (EEG) module gives a more precise recognition rate than the secondary (ECG) module. However, the EEG amplitude is very sensitive to artifacts, the quality of the data depends on user activity, and removing artifacts entirely is impossible with existing data-acquisition processes. For this reason, we proposed dynamic switching between sensors when the EEG signal degrades, taking advantage of the combination in which EEG gives a more precise recognition rate while ECG is more stable against noise.

REFERENCES

[1] C.-H. Yang, J.-L. Wang, K.-L. Lin, Y.-H. Kuo, and K.-S. Cheng, "Negative Emotion Detection Using the Heart Rate Recovery and Time for Twelve-Beats Heart Rate Decay After Exercise Stress Test," in Int. Joint Conf. on Neural Networks (IJCNN), IEEE, 2010, pp. 1-6.
[2] J. Thayer and G. Siegle, "Neurovisceral integration in cardiac and emotional regulation," IEEE Engineering in Medicine and Biology Magazine, vol. 21, no. 4, pp. 24-29, 2002.
[3] W. Wu and J. Lee, "Improvement of HRV Methodology for Positive/Negative Emotion Assessment," in 5th Int. Conf. on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2009), IEEE, 2009, pp. 1-6.
[4] C. Lee and S. Yoo, "ECG-based Biofeedback Chair for Self-emotion Management at Home," in Int. Conf. on Consumer Electronics (ICCE 2008), Digest of Technical Papers, IEEE, 2008, pp. 1-2.
[5] J. D. Rodriguez and L. Santos, "Comparative Analysis Using the 80-Lead Body Surface Map and 12-Lead ECG With Exercise Stress Echocardiograms," J. of Diagnostic Medical Sonography, vol. 22, no. 5, 2006, pp. 308-316.
[6] S. R. Vrana, B. N. Cuthbert, and P. J. Lang, "Fear Imagery and Text Processing," Psychophysiology, vol. 23, no. 3, 1986, pp. 247-253.
[7] B. L. Fredrickson, R. A. Mancuso, C. Branigan, and M. M. Tugade, "The Undoing Effect of Positive Emotion," Motivation and Emotion, vol. 24, no. 4, 2000, pp. 237-258.
[8] I. C. Christie, "Multivariate Discrimination of Emotion-Specific Autonomic Nervous System Activity," Master's thesis, Virginia Polytechnic Institute and State University, 2002.
[9] G. Chanel, K. Ansari-Asl, and T. Pun, "Using Neural Network to Recognize Human Emotions from Heart Rate Variability and Skin Resistance," in Int. Conf. of the IEEE Engineering in Medicine and Biology Society (IEEE-EMBS), IEEE, 2005, pp. 5523-5525.
[10] J. Kim and E. Andre, "Emotion Recognition Based on Physiological Changes in Music Listening," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 30, no. 12, 2008, pp. 2067-2083.
[11] S. Sun and C. Zhang, "Adaptive Feature Extraction for EEG Signal Classification," Medical and Biological Engineering and Computing, vol. 44, no. 10, 2006, pp. 931-935.
[12] K. Takahashi, "Remarks on Emotion Recognition from Bio-Potential Signals," IEEE Transactions on Autonomous Robots and Agents, 2004, pp. 186-191.
[13] K. Takahashi and A. Tsukaguchi, "Remarks on Emotion Recognition from Multi-Modal Bio-Potential Signals," IEEE Transactions on Industrial Technology, vol. 3, 2003, pp. 1654-1659.
[14] T. Iizuka and M. Nakawa, "Emotion Analysis with Fractal Dimension of EEG Signals," IEIC Technical Report, vol. 102, no. 534, 2005, pp. 13-18.
[15] G. Chanel, J. Kronegg, D. Granjean, and T. Pun, "Emotion Assessment: Arousal Evaluation Using EEG's and Peripheral Physiological Signals," Lecture Notes in Computer Science, Springer, 2006, pp. 530-537.
[16] A. Savran, K. Ciftci, G. Chanel, J. Cruz Mota, L. Hong Viet, B. Sankur, L. Akarun, A. Caplier, and M. Rombaut, "Emotion Detection in the Loop from Brain Signals and Facial Images," Final Project Report, eNTERFACE'06, 2006.
[17] G. Rizzolatti and L. Craighero, "The mirror-neuron system," Annual Review of Neuroscience, vol. 27, 2004, pp. 169-192.
[18] E. Oztop, M. Kawato, and M. Arbib, "Mirror neurons and imitation: A computationally guided review," Neural Networks, vol. 19, 2006, pp. 254-271.
[19] P. Ekman and W. V. Friesen, "Pictures of Facial Affect," Human Interaction Laboratory, Univ. of California Medical Center, San Francisco, CA, 1976.
[20] C. Sevcik, "On fractal dimension of waveforms," Chaos, Solitons & Fractals, vol. 28, pp. 579-580, 2006.

