Effectiveness of a Multimodal Head-Up Display for a Night Driving Assistance System

Mauro D'Alessandro
Università degli Studi di Siena, Communication Science Dept., Comp. S. Niccolò, via Roma 56, 53100 Siena (Italy)
dalessandro9@unisi.it

Michele Mariani
Università degli Studi di Modena e Reggio Emilia, Social, Cognitive and Quantitative Science Dept., Via A. Allegri 9, 42100 Reggio Emilia (Italy)
mariani.michele@unimore.it
ABSTRACT
The effectiveness of a Multimodal Head-Up Display (HUD)1 for a night driving assistance system was tested in a multiple-task, computer-based experiment. The test consisted of a car-following simulation in which 40 students from Siena University were asked to perform one of four experimental conditions, using one of the following HMI configurations: baseline (no HUD); simple HUD (without warnings); visual warning HUD; and multimodal HUD. Afterwards, subjects were asked to fill in an evaluation questionnaire. Reaction times for stimulus detection and recognition, and mistakes on the different tasks, were used as dependent variables. Furthermore, the distraction induced by each HMI was tested by asking subjects to follow a car trajectory during the video. Results clearly showed that by involving more perceptual channels the safety impact of the automotive HMI becomes more effective and less distracting. The PC-based apparatus limited the possibility of simulating driving situations more accurately, and the narrow age range of the subjects limited the possibility of exploring the experimental hypotheses with other classes of drivers.

Driver assistance systems are often designed without a clear knowledge of drivers' attentional efforts. This research contributes to the need for an accurate analysis of the driving task to improve the design of automotive technologies.

Keywords
Multimodal Head-Up Display, Night Driving, Warnings, Driver's Distraction.

INTRODUCTION
Head-Up Displays (HUD) are widely used in the aviation and military domains, and are starting to be implemented on a large scale within the automotive sector. HUDs hold the promise of facilitating the uptake of secondary driving information (e.g. warnings), consistently reducing driver distraction. However, HUDs could overload the visual channel and enhance the potential for cognitive tunnelling (Tufano, 1997; Ward & Parkes, 1994; James et al., 1995; Steinfield & Green, 1998). According to Multiple Resource Theory (Wickens, 1988), we tested the hypothesis that a multimodal HUD could reduce HUDs' potential drawbacks, resulting in enhanced night driving safety.

EXPERIMENTAL APPARATUS
Subjects interacted through a computer-based console (fig. 1a) running a QuickTime video simulation of a night driving scenario, engaging in a car-following task. The video was projected onto a 15.4'' PC display at a distance of about 90 cm. A laser light mounted on the steering wheel served to simulate the car-following task. The simulated HUD (a lightened black & white video of the same road scene) was displayed on the bottom-left corner of the screen2.

Fig. 1a: Frame from the experimental video. On the bottom-left corner the simulated HUD is reproducing and enhancing the view of the driving scene.
1 Head-Up Displays (HUD) project symbols and images into the field of view of a driver. A HUD's main function is to provide information about vehicle position, navigation, and potential obstacles in the driver's forward field of view (FFOV), minimizing information overload and distraction.
2 This study reproduces the experimental apparatus presented in Bossi, L., et al. (1997). The Effect of Vision Enhancement Systems on Driver Peripheral Visual Performance.
EXPERIMENTAL DESIGN AND PROCEDURE
Forty licensed students (20 males and 20 females), aged 19-32, were randomly divided into four groups, according to four experimental conditions in a between-subjects design: baseline (no HUD); no warning HUD; visual warning HUD; multimodal (visual and auditory3) HUD.
Each test session was divided into three phases:
• training and familiarization;
• driving simulation;
• questionnaire filling and debriefing.
The primary task required subjects to follow a car shown in the video by pointing a laser light mounted on the steering wheel. Furthermore, they were asked to detect the stop light of the foregoing car (activated at predetermined critical moments during the trial) by pressing pedal 1, simulating braking. The secondary task required subjects to detect both oncoming cars and road signs as quickly as possible by pressing pedal 2, and to recognize them, telling the experimenter which type of information (road sign or oncoming car) they were warned about. In the last two experimental conditions this task was aided by the display of a visual warning (tab. 1) in the HUD.

Tab. 1: Visual warnings. The first warning category is less critical: it refers to visible road signs and is associated with a yellow triangle containing an exclamation mark. The second category refers to oncoming cars and is associated with a red circle containing an exclamation mark.

3 Two 'beeps', differing in pitch and frequency, were associated with the potential obstacles and road signs to be detected along the driving session.

Three main performance measures were taken to explore the different HUDs' effectiveness and impact on driving safety:
• reaction times for stop-light and target detection (road signs; oncoming cars);
• missed stop-light detections and missed visual target detections;
• mistakes in foregoing car tracking (car-following errors).
Finally, subjective evaluations of the system's usability and perceived safety were collected through seven-point Likert scale questions administered in a paper-and-pencil questionnaire.

Fig. 1b: Experimental Apparatus.
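The random, gender-balanced assignment described above can be sketched in code; this is an illustrative reconstruction rather than the authors' actual procedure, and all names and identifiers are hypothetical:

```python
import random

# Illustrative sketch (not the authors' code): randomly assign 40 subjects
# (20 male, 20 female) to the four HMI conditions of the between-subjects
# design, keeping gender balanced so each group has 5 males and 5 females.
CONDITIONS = ["baseline", "no_warning_hud", "visual_warning_hud", "multimodal_hud"]

def assign_groups(males, females, seed=None):
    rng = random.Random(seed)
    males, females = males[:], females[:]  # avoid mutating caller's lists
    rng.shuffle(males)
    rng.shuffle(females)
    return {
        cond: males[i * 5:(i + 1) * 5] + females[i * 5:(i + 1) * 5]
        for i, cond in enumerate(CONDITIONS)
    }

groups = assign_groups([f"M{i}" for i in range(20)],
                       [f"F{i}" for i in range(20)], seed=1)
print({cond: len(subjects) for cond, subjects in groups.items()})
```

Seeding the generator makes the assignment reproducible, which is useful when the grouping must be documented alongside the results.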
RESULTS
Reaction times to the foregoing car's stop-light show a significant advantage (ANOVA) for subjects driving in the 'multimodal HUD' condition as compared to the 'visual warning' condition (t(3,306) = 9.170; p < .001) and to the 'no warning' condition (t(3,306) = 9.945; p < .001) (fig. 2a). Subjects in the 'multimodal HUD' condition made no stop-light detection errors and significantly fewer mistakes in foregoing car tracking (fig. 2b).
Fig.2a: Reaction times for stop-lights detection
Fig.2b: Missed stop-lights detections and mistakes in foregoing car tracking
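Pairwise reaction-time comparisons of the kind reported in this section can be run with a standard two-sample t-test; the sketch below uses invented reaction times (not the experimental data) and assumes SciPy is available:

```python
# Illustrative sketch: compare mean reaction times (ms) between two HMI
# conditions with Welch's two-sample t-test. The numbers are invented
# for demonstration; they are NOT the data collected in this study.
from scipy import stats

visual_warning_rt = [712, 698, 735, 720, 705, 741, 690, 715, 728, 702]
multimodal_rt = [554, 561, 548, 570, 543, 559, 566, 552, 549, 557]

# equal_var=False selects Welch's t-test, which does not assume the two
# groups share the same variance.
t_value, p_value = stats.ttest_ind(visual_warning_rt, multimodal_rt,
                                   equal_var=False)
print(f"t = {t_value:.3f}, p = {p_value:.3g}")
```

A positive t with a small p here would indicate that the (hypothetical) multimodal group reacted reliably faster than the visual-warning group.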
Reaction times for oncoming car detection decrease steadily across conditions (fig. 3a), starting from the control condition ('no HUD'). The multimodal HUD significantly improves drivers' performance compared to the baseline (t(3,306) = 6.76; p < .001) and to the 'no warning' condition (t(3,306) = 4.318; p < .001).

Fig. 3a: Reaction times for oncoming cars detection

For road sign detection (fig. 3b), the main effect comes from the introduction of the HUD itself, given the significant difference between the 'no HUD' and 'no warning HUD' conditions (t(1,683) = 2.505; p < .05). Results also show shorter reaction times in the 'multimodal HUD' condition as compared to the 'no warning HUD' (t(2,704) = 3.561; p < .005) and the 'visual warning HUD' (t(1,683) = 2.171; p < .05).

Fig. 3b: Reaction times for road signs detection

Recognition results (fig. 4) show that with the multimodal HUD subjects made no confusions between oncoming cars and road signs, a clear advantage in terms of safety benefits during night driving.

Fig. 4: Means of mistakes in task 1 performance: missed car recognitions and missed road sign recognitions

Finally, concerning subjective measures, the multimodal HUD was evaluated as more usable (fig. 5a) than both the 'no warning' (p < .05) and the 'visual warning' HUDs (p < .05). Safety perception was also more positive (fig. 5b) for the multimodal HUD as compared to the 'no warning HUD' (t(1,684) = 2.530; p < .05) and the 'visual warning HUD' (t(1,684) = 2.029; p = .05).

Fig. 5: Subjective measures of Usability and Safety perception

CONCLUSIONS
The present experiment brings evidence to the possibility of exploiting drivers' parallel processing of information during driving. Processing information in the acoustic channel didn't interfere with information processing in the visual channel, which is often overloaded in driving. Results show that a simple HUD without warnings can be considered a good support for target detection, but with the major drawback of increasing subjects' distraction. The 'visual warning' HUD improves target detection too, but not enough to compensate for worsened car-following behavior and slower reactions to sudden road events (i.e. stop-light detection). Subjective data confirm the objective results, showing better safety perception and superior usability evaluations for the multimodal HUD. It is thus recommended that future night vision head-up displays should always be accompanied by a multimodal (visual and auditory) warning function.
REFERENCES
Aasman, J. (1995). Modelling driver behaviour in SOAR. Doctoral dissertation, University of Groningen, The Netherlands.
Bainbridge, L. (1997). The change in concepts needed to account for human behavior in complex dynamic tasks. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 27(3), 351-359.
Bossi, L., et al. (1997). The Effect of Vision Enhancement Systems on Driver Peripheral Visual Performance. In "Ergonomics and Safety of Intelligent Driver Interfaces". Lawrence Erlbaum Associates.
Cnossen, F. (2000). Adaptive strategies and goal management in car drivers.
Edworthy, J.; Adams, A. (1996). Warning Design. A Research Prospective. Taylor & Francis Ltd, London.
ETSI EG 202 048 V1.1.1 (2002-08): "Human Factors (HF); Guidelines on the multimodality of icons, symbols and pictograms".
Hellier, E.; Edworthy, J.; et al. (1993). Improving auditory warning design: Quantifying and predicting the effects of different warning parameters on perceived urgency. Human Factors.
Hollnagel, E., et al. (2003). They drive at night - Can Visual Enhancement Systems Keep The Driver In Control? University of Linköping, SE-581 83, Linköping, Sweden.
Näätänen, R.; Summala, H. (1974). A Model for the Role of Motivational Factors in Driver's Decision Making. In "Accident Analysis & Prevention".
Näätänen, R.; Summala, H. (1976). Road Users Behaviour and Traffic Accidents. North Holland Publishing Company, New York.
Nilsson, R. (2001). Safety Margins in the Driver. In "Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Science". Acta Univ. Ups.
Prinzel, L. J. (2004). Head-Up Displays and Attention Capture. NASA/TM-2004-213000. Langley Research Center, Hampton, Virginia.
Stanton, N. (1994). Alarm Initiated Activities. In "Human Factors in Alarm Design", Taylor & Francis Ltd, London.
Stanton, N.; Pinto, M. (2000). Behavioural Compensation by Drivers of a Simulator when using a Vision Enhancement System. In "Ergonomics", 2000, vol. 43, n. 9, 1359-1370.
Ververs, P. M.; Wickens, C. D. (1998). Conformal Flight Path Symbology for Head-Up Displays: Defining the Distribution of Visual Attention in Three-Dimensional Space. Final Technical Report ARL-98-5/NASA-98-1. NASA Ames Research Center, Moffett Field, CA.
Wickens, C. D. (1989). Models of Multitask Situations. In "Application of Human Performance Models to System Design", Plenum Press in Association with NATO Defence Research Group.
Wickens, C. D.; Liu, Y. (1988). Codes and Modalities in Multiple Resources: A Success and a Qualification. Human Factors, 30(5), 599-616.
Young, K., et al. (2003). Driver Distraction: A Review of the Literature. Monash University Accident Research Centre. Report No. 206.