Virtual Reality And The Senses


VIRTUAL REALITY AND THE SENSES
STEFANIA SERAFIN · NIELS C. NILSSON · CUMHUR ERKUT · ROLF NORDAHL
AUGUST 21, 2017



COLOPHON
VIRTUAL REALITY AND THE SENSES
ISBN: 978-87-643-1317-8

PUBLISHER
Danish Sound Innovation Network
Technical University of Denmark
Richard Petersens Plads, Building 324
2800 Kgs. Lyngby
Tel +45 45253411
Web www.danishsound.org
September 2017

ABOUT THE PUBLICATION
This publication and possible comments and discussions can be downloaded from www.danishsound.org. The content of this publication reflects the authors’ point of view and not necessarily the view of the Danish Sound Innovation Network as such. Copyright of the publication belongs to the authors. Possible agreements between the authors might regulate the copyright in detail.

ABOUT THE AUTHORS
Department of Architecture, Design and Media Technology, Aalborg University Copenhagen
• Stefania Serafin, Professor with Specific Responsibilities
• Cumhur Erkut, Associate Professor
• Niels C. Nilsson, Assistant Professor
• Rolf Nordahl, Associate Professor

CONTRIBUTORS
• Egil Sandfeld, AWE
• Hannes Becker, Absolute Zero

PREFACE
VIRTUAL REALITY DEVELOPMENT

Virtual reality has seen rapid development in the last couple of years, thanks to improvements in hardware and software technologies. Virtual reality research and development has mostly focused on visual feedback. However, virtual reality should be considered a rich multimodal experience, in which the representation of, and interaction between, the different sensory modalities play an important role.

The goal of this white paper is to review the state of the art and future directions of the field of virtual reality. We examine the role of multisensory (not merely visual) feedback, and present research from interface design, cognitive psychology, and multimodal perception and cognition that can benefit virtual reality.

THE WHITE PAPER has been written by researchers from the Multisensory Experience Lab at Aalborg University Copenhagen (melcph.create.aau.dk), with the support of the Danish Sound Innovation Network.

WE WOULD LIKE TO THANK the companies involved in the discussions that shaped this white paper: Jabra · Oticon · Absolute Zero · Kanda · Molamil · Liquidmedia · Awe. Special thanks go to Egil Sandfeld from Awe and Hannes Becker from Absolute Zero.

DANISH SOUND INNOVATION NETWORK
Danish Sound Innovation Network is an innovation network and part of the Danish national innovation infrastructure, supported by a grant from the Danish Agency for Institutions and Educational Grants under the Ministry of Higher Education and Science. The Network is hosted by the Technical University of Denmark and is headed by Director and Professor Jan Larsen. Danish Sound facilitates the national ecosystem for SOUND, creating value for all parts of the value chain and contributing to growth and wealth in Denmark. Network membership is free of charge and open to all. Registration at www.danishsound.org.


”THE GOAL OF THIS WHITE PAPER IS TO REVIEW THE STATE OF THE ART AND FUTURE DIRECTIONS OF THE FIELD OF VIRTUAL REALITY”


CONTENTS

1 INTRODUCTION

2 MULTIMODAL FEEDBACK
2.1 The building blocks of a virtual reality system
2.2 Visual feedback
2.3 Commercial possibilities
2.4 Auditory feedback
    2.4.1 Binaural sound
    2.4.2 Principles of 3D sound
    2.4.3 The rebirth of 3D sound
    2.4.4 Current challenges in 3D sound rendering
2.5 Capturing 3D sound
2.6 Commercial possibilities
2.7 Touch and haptic feedback
2.8 Commercial possibilities
2.9 A note on other modalities

3 MULTISENSORY INTERACTIONS
3.1 Audio-visual Interactions
3.2 Audio-tactile Interactions
3.3 Visual-tactile Interactions
3.4 Conclusions

4 INTERACTING IN VIRTUAL REALITY
4.1 Background
4.2 Design Principles and Interaction
4.3 Evaluation
4.4 Industrial possibilities

5 WALKING IN VIRTUAL REALITY
5.1 Walking in place
5.2 Redirected walking
5.3 Mechanical repositioning
5.4 Auditory Rendering
    5.4.1 Walking Sounds Reproduction
5.5 Haptic and Multimodal Rendering
5.6 Commercial possibilities

6 PERCEPTUAL ILLUSIONS AND DISTORTIONS IN VIRTUAL REALITY
6.1 Presence in Virtual Reality
6.2 Body Ownership Illusions
6.3 Distortions of Virtual Space
6.4 Illusory and Distorted Self-Motion
6.5 Leveraging Perceptual Illusion for Commercial Applications

7 APPLICATIONS
7.1 Entertainment and News
7.2 Learning, Education and Training
7.3 Healthcare
7.4 Product Development and Marketing
7.5 Travel and Shared Experiences
7.6 Cultural heritage
7.7 Art

8 CONCLUSIONS
1. Design sound, visual, touch and proprioception in tandem, and consider the mappings
2. Reduce latency
3. Prevent or limit cybersickness or VR sickness
4. Do not copy but leverage expert techniques
5. Consider both natural and magical interactions
6. Consider the ergonomics of the display
7. Consider the body of the player
8. Create a sense of presence
9. Make the experience social

BIBLIOGRAPHY


CHAPTER 1

INTRODUCTION


ALTHOUGH VIRTUAL REALITY (VR) technologies have received significant attention since the introduction of low-cost head-mounted displays (HMDs), and several specialists called 2016 the year of virtual reality, such technologies have existed for more than half a century. In 1965, Sutherland described the ultimate display, envisioned as a multisensory virtual experience able to recreate the Wonderland where Alice walked [162].

The VR systems available today are not only affordable but also offer higher resolution, a better field of view, lower latency, lower weight, and better ergonomics than their predecessors. Moreover, widely available and easy-to-use software tools contribute to what is called the democratization of VR. This democratization opens up several possibilities for the near future, since the availability of hardware technologies and software tools to a large audience inevitably extends the possibilities offered by virtual reality and related immersive technologies.

MORE THAN 50 YEARS LATER, such an ultimate display is closer to reality. In the past couple of years, the rapid development of low-cost virtual reality displays such as the Oculus Rift (1), HTC Vive (2), and Samsung Gear VR (3) has boosted interest in immersive virtual reality technologies.

WE DEFINE VR AS immersive artificial environments experienced through sensory stimuli provided by technology and in which one’s actions partially determine what happens in the environment. We restrict our focus to VR technologies where the visual feedback is based on head mounted displays.

Hardware devices once exclusive to high-end laboratories are now available to consumers and media technology developers. It is only now that consumer VR technology is surpassing professional VR/HMD systems.

1 https://www.oculus.com/en-us/rift/
2 http://www.htcvive.com/us/
3 http://www.samsung.com/global/galaxy/wearables/gear-vr/

THE MAIN QUESTIONS THIS WHITE PAPER AIMS TO ANSWER ARE:

1. WHAT DID WE LEARN FROM 50 YEARS OF VIRTUAL REALITY RESEARCH AND MULTIMODAL PERCEPTION THAT CAN BE USED TO IMPROVE THE DESIGN OF VR EXPERIENCES?

2. WHAT ARE THE DIFFERENT ELEMENTS THAT NEED TO BE TAKEN INTO CONSIDERATION FOR COMPELLING VR EXPERIENCES, BESIDES THE QUALITY OF THE HEAD-MOUNTED DISPLAY USED?

3. WHAT ARE THE DIFFERENT APPLICATIONS OF VR, BESIDES THE ENTERTAINMENT INDUSTRY?

4. WHAT ARE THE OPEN CHALLENGES IN VR RESEARCH AND DEVELOPMENT, BOTH FROM THE TECHNOLOGICAL POINT OF VIEW AND FROM THE SOCIETAL POINT OF VIEW?

5. WHAT ARE THE POTENTIALS FOR THE DANISH AND NORDIC VR INDUSTRIES?

6. IS THERE AN AREA OF VR WHERE DANISH INDUSTRIES CAN MAKE A WORLDWIDE IMPACT?


CHAPTER 2

MULTIMODAL FEEDBACK


2.1 THE BUILDING BLOCKS OF A VIRTUAL REALITY SYSTEM

Figure 2.1 illustrates the different elements needed to design a single-user virtual reality experience. The actions of the user are tracked, for example by a motion capture system, and trigger reactions in the software application, which are experienced through visual, auditory, and haptic feedback. To limit the scope of this white paper to low-cost VR hardware, we focus on visual feedback delivered through head-mounted displays, auditory feedback delivered through headphones, and haptic feedback delivered through the hardware devices described in Chapter 4.

Compelling VR experiences require the simulation and integration of different modalities. This chapter reviews theories and applications of visual, auditory, and haptic feedback, as well as other types of feedback that could be explored in virtual reality applications, such as taste and smell.

FIGURE 2.1
A general structure of the input and output modalities required to design a virtual reality experience: the user's actions are captured by tracking, processed by the system (computation), and returned through displays, which the user perceives and interprets (perception and cognition).

2.2 VISUAL FEEDBACK

The rapid increase of interest in VR is primarily due to the availability of low-cost, high-fidelity head-mounted displays from companies such as Samsung, Oculus, and HTC. In this white paper we do not report on the state of the art of HMDs, since it is a rapidly evolving field and the technologies quickly become outdated. However, we highlight the issues that manufacturers of HMDs are addressing:

• Field of view (FOV): the maximum number of degrees of visual angle that can be seen instantaneously on a display. The human visual field covers roughly 200 degrees [46], so VR technologies should aim for similar characteristics; a larger field of view enhances technological immersion.
• Field of regard: the amount of physical space surrounding the user in which visual images are displayed.
• Spatial resolution: the number of pixels on the screen.
• Temporal resolution: e.g., the refresh rate of the HMD.
• Screen geometry: the shape of the screen also plays an important role.
• Comfort and ergonomics: decreasing the size and weight of HMDs ultimately also affects the experience.

2.3 COMMERCIAL POSSIBILITIES

The market for HMDs is rapidly expanding, and devices constantly aim to improve the characteristics above. Currently (spring 2017) the main devices are the HTC Vive and Oculus Rift, with cheaper competitors such as the Samsung Gear VR and Google Daydream.

2.4 AUDITORY FEEDBACK

The importance of audio in VR experiences is well acknowledged, and VR research and development is placing considerable focus on improving interactive auditory feedback. The virtual reality industry is well aware that a truly immersive experience requires a high-fidelity three-dimensional auditory display; this is why 3D sound is said to be finally finding a large application area in the field of virtual reality. In this white paper we focus on 3D sound delivered through headphones and do not consider loudspeaker-based systems.

Besides availability and affordability compared to a high-quality loudspeaker setup, the main advantages of headphone-based sound delivery include the ability to isolate the user from the outside world and the fact that multiple tracked users can hear 3D sounds simultaneously. A challenge of headphone-based systems is that sounds may be perceived as originating from inside the head of the listener.

2.4.1 BINAURAL SOUND

For virtual reality applications it is important that sound is displayed in 3D, in such a way that the location of sound sources corresponds to the location of visual images.


Immersion is greatly enhanced when sound is displayed in 3D. This is also known as binaural sound, meaning that each of the two ears receives a different, ear-specific signal providing a sense of space, as opposed to conventional stereo, whose two channels are not tailored to how sound actually reaches each ear. Stereo recordings therefore do not provide a proper sense of space, and the sound is perceived as coming from inside the head. Binaural recordings, or binaural rendering obtained using signal processing techniques, allow developers to convey a sense of space, including direction (azimuth) and elevation.

2.4.2 PRINCIPLES OF 3D SOUND

For decades the audio community has researched and developed head-related transfer functions (HRTFs). A head-related transfer function describes how each ear receives sound from a point in space. Assuming that the emitting source is a single point, the sound propagates from this point to both ears of the listener. The distance between the source and the two ears is generally not the same, unless the source lies directly in front of the listener. This difference gives rise to a cue called the interaural time difference (ITD), i.e., the difference in the time it takes the sound to reach each ear. A related cue is the interaural intensity difference (IID): the sound reaching the far ear is weaker, both because it has travelled slightly further and because the head shadows it.

This means that if a sound source is placed to the left of the listener, the sound not only takes longer to reach the right ear than the left ear, but also arrives at the right ear with lower intensity. These two cues alone already create a sense of direction.
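To make these two cues concrete, the sketch below computes an approximate ITD with Woodworth's spherical-head formula and a deliberately crude, frequency-independent level difference for a source at a given azimuth. It is a minimal illustration of the geometry described above, not a production spatializer; the head radius and the maximum level difference are assumed values.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Approximate ITD (seconds) for a far-field source using Woodworth's
    spherical-head model: ITD = a/c * (theta + sin(theta)), where theta is
    the azimuth (0 = straight ahead, 90 = directly to the side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def interaural_level_difference(azimuth_deg: float, max_ild_db: float = 20.0) -> float:
    """Very rough ILD (dB): head shadowing grows with azimuth and peaks when
    the source is to the side. Real ILDs are strongly frequency dependent;
    a constant maximum is an assumption made purely for illustration."""
    return max_ild_db * abs(math.sin(math.radians(azimuth_deg)))

if __name__ == "__main__":
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg: "
              f"ITD = {interaural_time_difference(az) * 1e6:6.0f} us, "
              f"ILD = {interaural_level_difference(az):4.1f} dB")
```

At 90 degrees the formula yields an ITD of roughly 650 microseconds, which is the order of magnitude a full HRTF also encodes, together with the spectral filtering discussed next.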

Another important set of cues comes from the ears and the body of the listener. The outer ears act like filters, and their shape and size determine the characteristics of the sound reaching the eardrum; the head and torso of the listener filter the sound as well. Moreover, if a person has a very large head, the distance between the two ears is larger, and so are the corresponding ITD and IID.

Researchers in 3D sound have used artificial heads to record sounds in 3D, like the one shown in Figure 2.2. An artificial head is a dummy head with a microphone in each ear. It allows static 3D sound recordings that can be used by a person with an average head. This means that the resulting so-called generalised head-related transfer functions work for some individuals, mostly those whose heads are close to the average, but not for all.

It is well known that some HRTFs fit some individuals better than others. Moreover, the chosen auditory feedback affects the performance of the system: for example, broadband impulsive sounds are easier to localize than low-frequency sounds [7]. Visual and cognitive cues also affect localization with HRTFs; as an example, the sound of a cigarette being lit may help avoid front-back confusion.

2.4.3 THE REBIRTH OF 3D SOUND

Until now, 3D sound has not received much attention in the interactive media community; virtual reality might be the ideal environment for 3D sound to shine. It is extremely important for virtual reality applications to display sound and images at matching locations. Even with a high-quality visual display, the sense of presence is reduced when a low-fidelity auditory display is used. There are also studies showing how a low-quality visual display can be perceived as being of higher quality when coupled with a high-quality auditory display [158]. This is particularly relevant for virtual reality, since the quality of current visual displays still leaves large margins for improvement.

2.4.4 CURRENT CHALLENGES IN 3D SOUND RENDERING

HRTF measurements are hard to produce and costly in terms of memory requirements. Some of the challenges encountered are the elimination of front-back confusion, the elimination of intracranially heard sound, and the minimization of localization error. It is also important to reduce the amount of data necessary to reproduce the most salient features of HRTF measurements [7]. One of the still open challenges is how to choose the particular set of HRTFs used in a system.

Researchers are now working on personalised HRTFs in order to create user-specific 3D sound experiences. Personalised HRTFs provide a better sense of space but are more time-consuming to produce. One way to obtain personalised HRTFs is to place two microphones inside the ears and record sounds coming from different directions. These sounds need to be recorded in a so-called anechoic room, i.e., a room without echo, to avoid adding any sense of space or reverberation to the recordings.

Researchers are also looking into using image-based 3D scanning techniques to obtain the geometry of different heads and ears, and using these geometries to compute the corresponding HRTFs. Oculus Research is also investing heavily in 3D sound, and in December 2015 it released a 3D sound SDK. At the 2016 edition of the IEEE Virtual Reality Conference, researchers from Oculus presented an efficient HRTF-based spatial audio method for area and volumetric sources [130]. This method can simulate not only point sources but also sound sources that occupy a significant portion of space, such as waterfalls.

Novel, efficient sound processing algorithms will allow sound designers to manipulate sound for virtual reality while limiting the need to perform binaural recordings.


3D sonic interaction design, where the sound follows the action of the user, is essential for immersive VR applications. In [97] a technique is presented for generating binaural sound corresponding to the user's viewpoint when watching omnidirectional videos through HMDs. This is achieved by implementing a source enhancement method that divides the acoustic field evenly into several areas.
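As a minimal illustration of head-tracked binaural rendering in general (not the source-enhancement method of [97]), the sketch below convolves a mono source with a left/right head-related impulse response (HRIR) pair selected for the source's direction relative to the current head orientation. The HRIR dictionary, its angular resolution, and the function names are assumptions made for the example.

```python
import numpy as np

def render_binaural(mono: np.ndarray,
                    source_azimuth_deg: float,
                    head_yaw_deg: float,
                    hrir_db: dict) -> np.ndarray:
    """Convolve a mono signal with the HRIR pair for the source direction
    relative to the listener's head. `hrir_db` is assumed to map azimuths
    in degrees (e.g. every 5 degrees) to (left_hrir, right_hrir) arrays."""
    # Direction of the source relative to where the head is pointing.
    relative_az = (source_azimuth_deg - head_yaw_deg) % 360
    # Pick the nearest measured azimuth in the (assumed) database,
    # using wrapped angular distance.
    nearest = min(hrir_db.keys(),
                  key=lambda a: abs(((a - relative_az) + 180) % 360 - 180))
    hrir_left, hrir_right = hrir_db[nearest]
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) binaural output
```

In an interactive application this selection and convolution runs per audio block, with the head yaw updated from the HMD tracker, so that sources remain stable in the world as the listener turns.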

2.5 CAPTURING 3D SOUND

In order to capture the direction and distance of sound sources, a dummy head can be used. However, this is not ideal for interactive applications such as VR, where the head of the user needs to be tracked so that the location and motion of sound sources correspond to the user's actions. If this is not achieved in software using signal processing algorithms, it can be achieved by capturing sounds with an ambisonic microphone, for example the recently introduced Ambeo microphone by Sennheiser. Ambisonics is a surround sound technique that is independent of any particular loudspeaker setup. What is then needed is a decoder that converts an Ambisonic recording to a specific loudspeaker setup, or to binaural audio for headphones.
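A practical appeal of Ambisonics for VR is that a first-order (B-format) recording can be rotated to follow head tracking before decoding. The sketch below shows the idea with a simple yaw rotation of the horizontal components; it is a minimal illustration, and the axis and sign conventions (here X = front, Y = left) vary between ambisonic formats, so the exact scaling should be treated as an assumption.

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, head_yaw_deg):
    """Counter-rotate a first-order (B-format) recording by the listener's
    head yaw, so that after decoding the sources stay fixed in the world as
    the head turns. W (omnidirectional) and Z (vertical) are unaffected by a
    pure yaw rotation; only the horizontal components X and Y mix."""
    yaw = np.radians(-head_yaw_deg)          # compensate the head rotation
    x_rot = np.cos(yaw) * x - np.sin(yaw) * y
    y_rot = np.sin(yaw) * x + np.cos(yaw) * y
    return w, x_rot, y_rot, z
```

After rotation, a decoder renders the four channels either to a given loudspeaker layout or, via virtual loudspeakers convolved with HRIRs, to binaural audio for headphones.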

2.6 COMMERCIAL POSSIBILITIES

Major companies either have a 3D audio spatialisation framework or are working on one. Experimental multichannel recording formats from decades ago are being repurposed for VR sound, and the BBC is experimenting with binaural drama again.

2.7 TOUCH AND HAPTIC FEEDBACK

In this section we provide an overview of the sense of touch and of haptic feedback in virtual reality. The surface of the human body is a large collection of mechanoreceptors, which are not equally spaced. These mechanoreceptors allow us to physically interact with the external world. The motor homunculus (see Figure 2.3) is an illustration of the sensory and motor representation of the body within the brain; as the image shows, the hands and tongue occupy a large portion of the brain's area dedicated to touch.

The paper [72] provides useful guidelines on how to use knowledge from cognitive psychology to design multimodal interfaces with a strong focus on touch. It is based on the observation, established through careful experimental design, that we use similar so-called exploratory procedures when interacting with objects with our hands: to estimate the weight of an object we use unsupported holding, to assess its texture we rub it back and forth, and so on. The authors suggest that such exploratory procedures should be used as design guidelines for building novel touch-based interfaces.

Current VR technologies provide rather realistic and compelling audio and visual experiences. The same cannot be said for the simulation of the sense of touch: it is still not possible to interact naturally with virtual objects and have a compelling sensation of touching them. Haptic interfaces generate mechanical signals that stimulate the human kinesthetic and touch channels, and they also provide humans with the means to act on their environment. We can therefore define haptic interfaces as being concerned with the association of gesture, touch, and kinesthesia to provide communication between humans and machines. The word haptics refers to the capability to sense a natural or synthetic mechanical environment through touch. Haptics also includes kinesthesia (or proprioception), the ability to perceive one's body position, movement, and weight. It has become common to speak of the haptic channel to collectively designate the sensory and motor components of haptics, because certain anatomical parts (in particular the hand) are unitary organs in which perceiving the world and acting upon it take place together. For example, grasping an unfamiliar object also involves exploring it actively with our hands. The tactile and kinesthetic channels work together to provide humans with the means to perceive and act on their environment. An overview of haptic interfaces is presented in [54]. Most haptic interfaces produced for virtual reality are based on gloves augmented with sensors and actuators.

FIGURE 2.2
A head and torso simulator from Brüel & Kjær.


FIGURE 2.3
The motor homunculus is an illustration of the sensory and motor representation of the body within the brain.

2.8 COMMERCIAL POSSIBILITIES

One particular aspect of haptic device design is cost. The main types of consumer-market devices include gamepads with vibrotactile feedback (rumble) or even true force feedback, tactile mice, force-feedback trackballs, and force-feedback joysticks. Massie and Salisbury developed the PHANToM, which has a 3-DOF pantograph [82]. A thimble with a gimbal is connected to the end of the pantograph, which can then apply a 3-DOF force to the fingertip. The PHANToM became one of the most popular commercially available haptic interfaces. The recent development of low-cost head-mounted displays has also boosted the interest in tactile and haptic interfaces: both Valve/HTC and Oculus are working on controllers that allow users to touch virtually generated environments. These controllers also embed low-cost vibrotactile actuators similar to the ones found in mobile phones.

2.9 A NOTE ON OTHER MODALITIES

Although vision, audition, and touch are the most commonly simulated modalities in VR, some technologies have also been developed that aim to simulate other senses such as taste and smell. A fundamental difficulty with taste and smell interfaces is that olfaction and gustation are difficult to measure quantitatively (e.g., there is no equivalent of frequency sensing). Smell and taste are also known as chemical senses, since they rely on chemical transduction [106].

As an example of interfaces addressing smell and taste, [114] presents a digital lollipop whose goal is to allow people to share taste and smell sensations digitally with a remote person through existing networking technologies. The system is based on electrical and thermal stimulation of the tongue.



CHAPTER 3

MULTISENSORY INTERACTIONS



THIS CHAPTER EXAMINES HOW THE SENSES INTERACT AND HOW THIS KNOWLEDGE CAN BE USED IN VIRTUAL REALITY. THE FOCUS IS ON THE INTERACTION BETWEEN AUDITION AND VISION AND BETWEEN AUDITION AND TOUCH.

In simulating realistic multimodal environments, several elements, including synchronization, need to be taken into consideration. However, technology imposes some limitations, especially when the ultimate goal is to simulate systems that react in real time. [107] describes a trade-off between accuracy and responsiveness, which represents a crucial difference between models for science and models for interaction. Computations about the physical world are always approximations. In general, it is possible to improve accuracy by constructing more detailed models and performing more precise measurements, but this comes at the cost of latency, that is, the elapsed time before an answer is obtained. For multisensory models it is also essential to ensure synchronization between the different sensory modalities. [107] groups all of these temporal considerations, such as latency and synchronization, into a single category called responsiveness. The question then becomes how to balance accuracy and responsiveness. The choice also depends on the final goal of the multimodal system design: scientists are generally more concerned with accuracy, so responsiveness is only a soft constraint based on available resources, whereas for interaction designers responsiveness is an essential requirement that must be satisfied.

There are different ways in which the senses can interact. Cross-modal mapping represents the situation where one or more dimensions of a sound are mapped to visual or tactile feedback [102]; an example is a beeping sound combined with a flashing light. Intersensory biases represent the situation where audition and another modality provide conflicting cues. When examining specific multimodal examples in the following sections, several examples of intersensory biases will be provided. In most of these situations, the user tries to perceptually integrate the conflicting information, and the conflict may lead to a bias towards the stronger modality. One classic example is the ventriloquist effect [55], which illustrates the dominance of visual over auditory information: spatially discrepant audio and visual cues are experienced as co-localized with the visual cue. This effect is commonly exploited in cinemas and home theatres where, although the sound physically originates at the loudspeakers, it appears to come from the moving image on screen, for example a person speaking or walking. The ventriloquism effect occurs because visual estimates of location are typically more accurate than auditory estimates, and therefore the overall perception of location is largely determined by vision. This phenomenon is also known as visual capture [176].

Cross-modal enhancement refers to the situation where stimuli from one sensory channel enhance or alter the perceptual interpretation of stimulation from another sensory channel. As an example, three studies presented in [158] show how high-quality auditory displays coupled with high-quality visual displays increase the perceived quality of the visual displays relative to the evaluation of the visual display alone. Moreover, low-quality auditory displays coupled with high-quality visual displays decrease the perceived quality of the auditory displays relative to the evaluation of the auditory display alone. These studies were performed by manipulating the pixel resolution and Gaussian white-noise level of the visual display, and the sampling frequency and Gaussian white-noise level of the auditory display. Subjects were asked to rate the quality of the visual image of a radio presented at different pixel qualities, coupled with auditory feedback resembling sounds coming from a radio. These findings strongly suggest that the perceived realism of an audiovisual display is a function of both the auditory and the visual display fidelity. The findings have obvious applications in virtual reality, where the quality of the visual display is still limited and the auditory display can help.

Cross-modal enhancements can occur even when the extra-modal input does not provide information directly meaningful for the task. A primary example was reported by [152]: subjects rated the intensity of a visual light higher when it was accompanied by a brief, broadband auditory stimulus than when it was presented alone.

Cross-modal transfers, or illusions, are situations where stimulation in one sensory channel leads to the illusion of stimulation in another sensory channel. An example is synesthesia, which in the audiovisual domain is expressed, for example, as seeing a colour while hearing a sound.

When considering intersensory discrepancies, [176] propose a modality-appropriateness hypothesis. Their model suggests that the various sensory modalities are each well suited to the perception of different events, and that the dominance of a particular modality is relative to its appropriateness to the situation. Generally, vision is more appropriate than audition for the perception of spatial location, with touch somewhere in between. Audition is most appropriate for the perception of temporally structured events. Touch is more appropriate than audition for the perception of texture, whereas vision and touch may be about equally appropriate for the perception of texture. The appropriateness is a consequence of the different temporal and spatial resolutions of the auditory, tactile, and visual systems.



3.1 AUDIO-VISUAL INTERACTIONS

Research into multimodal interaction between audition and other modalities has primarily focused on the interaction between audition and vision. This is natural, since audition and vision are the most dominant modalities in the human perceptual system [66]. A well-known multimodal phenomenon is the McGurk effect [83], an example of how vision alters speech perception: for instance, the sound "ba" is perceived as "da" when viewed with the lip movements for "ga". Notice that in this case the percept differs from both the visual and the auditory stimulus, so this is an example of intersensory bias, as described in the previous section.

The experiments described so far show a dominance of vision over audition when conflicting cues are provided. However, this is not always the case. As an example, [136, 137] describe a visual illusion induced by sound: when a single visual flash is accompanied by multiple auditory beeps, the single flash is perceived as multiple flashes. These results were obtained by flashing a uniform white disk a variable number of times, 50 milliseconds apart, on a black background. Flashes were accompanied by a variable number of beeps, each spaced 57 milliseconds apart. Observers were asked to judge how many visual flashes were presented on each trial. The trials were randomized, and each stimulus combination was run five times on eight naive observers. Surprisingly, observers consistently and incorrectly reported seeing multiple flashes whenever a single flash was accompanied by more than one beep [136]. This experiment is known as the sound-induced flash illusion. A follow-up experiment investigated whether the illusory flashes could be perceived independently at different spatial locations [60]: two bars were displayed at two locations, creating apparent motion, and all subjects reported that an illusory bar was perceived with the second beep at a location between the real bars. This is analogous to the cutaneous rabbit illusion, where trains of successive cutaneous pulses delivered at a few widely separated locations produce sensations at many in-between points [43]. As a matter of fact, the perception of time, for which auditory estimates are typically more accurate, is dominated by hearing.

Another experiment showed that judging whether two objects bounce off each other or simply cross is influenced by hearing a beep at the moment the objects could be in contact. A desktop computer displayed two identical objects moving towards each other; the display was ambiguous and allowed two different interpretations after the objects met: they could either bounce off each other or cross. Since collisions usually produce a characteristic impact sound, introducing such a sound when the objects met promoted the perception of bouncing over crossing. This experiment is usually known as the motion-bounce illusion [133]. In a subsequent study, Sekuler and Sekuler found that any transient event temporally aligned with the would-be collision increased the likelihood of a bounce percept [132], including a pause, a flash of light on the screen, or a sudden disappearance of the discs.

More recent investigations have examined the role of ecological auditory feedback in the multimodal perception of visual content. As an example, a study presented in [31] investigated the combined perceptual effect of visual and auditory information on the perception of a moving object's trajectory. Inspired by the experimental paradigm presented in [62], the visual stimuli consisted of a perspective rendering of a ball moving in a three-dimensional box. Each video was paired with one of three sound conditions: silence, the sound of a ball rolling, or the sound of a ball hitting the ground. It was found that the sound condition influenced whether observers were more likely to perceive the ball as rolling back in depth on the floor of the box or jumping in the frontal plane.

Another interesting study on the role of auditory cues in the perception of visual stimuli is presented in [165]. Two psychophysical studies were conducted to test whether visual sensitivity to point-light depictions of human gait reflects the action-specific co-occurrence of visual and auditory cues typically produced by walking people. Visual walking patterns were captured using a motion capture system, and a between-subjects experimental procedure was adopted: subjects were randomly exposed to one of three conditions: no sound, footstep sounds, or a pure tone at 1000 Hz, which served as a control. Visual sensitivity to coherent human gait was greatest in the presence of temporally coincident and action-consistent sounds, in this case the sound of footsteps. Visual sensitivity to human gait with coincident sounds that were not action-consistent, in this case the pure tone, was significantly lower and did not differ significantly from visual sensitivity to gaits presented without sound.

As an additional interaction between audition and vision, sound can help the user search for an object within a cluttered, continuously changing environment. It has been shown that a simple auditory pip drastically decreases search times for a synchronized visual object that is otherwise very difficult to find. This is known as the pip-and-pop effect [171].

Visual feedback can also affect several aspects of a musical performance, although affective and emotional aspects of musical performance are not considered in this chapter. As an example, Schutz and Lipscomb report an audio-visual illusion in which an expert musician's gestures affect the perceived duration of a note without changing its acoustic length [131]. To demonstrate this, they recorded a world-renowned marimba player performing single notes using long and short gestures. They paired both types of sounds with both types of gestures, resulting in a combination of natural (i.e., congruent gesture-note pairs) and hybrid (i.e., incongruent gesture-note pairs) stimuli. They informed participants that some auditory and visual components had been mismatched and asked them to judge tone duration based on the auditory component alone. Despite these instructions, the participants' duration ratings were strongly influenced by the visual gesture information.

More recent investigations examined the role of ecological auditory feedback in affecting multimodal perception of visual content. As an example, in a study presented in [31] the combined perceptual effect of visual and auditory information on the perception of a moving objects trajectory was investigated. Inspired by the experimental paradigm presented in [62], the visual stimuli consisted of a perspective rendering of a ball moving in a three-dimensional box. Each video was paired with one of three sound conditions: 1. Silence 2. The sound of a ball rolling 3. Or the sound of a ball hitting the ground. It was found that the sound condition influenced whether observers were more likely to perceive the ball as rolling back in depth on the floor of the box or jumping in the frontal plane. Another interesting study related to the role of auditory cues in the perception of visual stimuli is the one presented in [165]. Two psychophysical studies were conducted to test whether visual sensitivity to point-light depictions of human gait reflects the action specific co-occurrence of visual and auditory cues typically produced by walking people. To perform the experiment, visual walking patterns were captured using a motion capture system, and a between-subject experimental procedure was adopted. Specifically, subjects were randomly exposed to one of the three experimental conditions: No sound, footstep sounds, or a pure tone at 1000 Hz, which represented a control case. Visual sensitivity to coherent human gait was greatest in the presence of temporally coincident and action-consistent sounds, in this case the sound of footsteps. Visual sensitivity to human gait with coincident sounds that were not actionconsistent, in this case the pure tone, was significantly lower and did not significantly differ from visual sensitivity to gaits presented without sound. As an additional interaction between audition and vision, sound can help the user search for an object within a cluttered, continuously changing environment. It has been shown that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very diffcult to find. This is known as the pip and pop effect [171]. Visual feedback can also affect several aspects of a musical performance, although in this chapter affective and emotional aspects of a musical performance are not considered. As an example, Schutz and Lipscomb report an audio-visual illusion in which an expert musician’s gestures affect the perceived duration of a note without changing its acoustic length [131]. To demonstrate this, they recorded a world-renowned marimba player performing single notes on a marimba using long and short gestures. They paired both types of sounds with both types of gestures, resulting in a combination of natural (i.e., congruent gesture-note pairs) and hybrid (i.e., incongruent gesture-note pairs) stimuli. They informed participants that some auditory and visual components had been mismatched, and asked them to judge tone duration based on the auditory component alone. Despite these instructions, the participants duration ratings were strongly influenced by visual gesture information. 14


As a matter of fact, notes were rated as longer when paired with long gestures than when paired with short gestures. These results are somewhat puzzling, since they contradict the view that judgments of tone duration are relatively immune from visual influence [176], that is, that in temporal tasks the visual influence on audition is negligible. However, the results are driven not by information quality but by perceived causality: visual influence in this paradigm depends on the presence of an ecologically plausible audiovisual relationship.

Indeed, it is also possible to consider the characteristics of vision and audition to predict which modality will prevail when conflicting information is provided. In this direction, [67] introduced the notion of auditory and visual objects. They describe the different characteristics of audition and vision, claiming that a primary source of information for vision is a surface, while a secondary source of information is the location and colour of sources. For audition, by contrast, a primary source of information is a source and a secondary source of information is a surface.

In [33] a theory is suggested of how our brain merges the different sources of information coming from the different modalities, specifically audition, vision, and touch. The first strategy is called sensory combination, meaning the maximization of information delivered by the different sensory modalities. The second strategy is called sensory integration, meaning the reduction of variance in the sensory estimate in order to increase its reliability (in practice, each cue is weighted by its reliability, i.e., the inverse of its variance). Sensory combination describes interactions between sensory signals that are not redundant; by contrast, sensory integration describes interactions between redundant signals. Ernst and coworkers [33] describe the integration of sensory information as a bottom-up process.

The modality precision (also called modality appropriateness) hypothesis of [176] is often cited when trying to explain which modality dominates under what circumstances. This hypothesis states that discrepancies are always resolved in favour of the more precise or more appropriate modality. In spatial tasks, for example, the visual modality usually dominates, because it is the most precise at determining spatial information. However, according to [33], this terminology is misleading, because it is not the modality itself or the stimulus that dominates; rather, the dominance is determined by the estimate and by how reliably it can be derived within a specific modality from a given stimulus.

The experiments described until now assume a passive observer, in the sense that a subject is exposed to a fixed sequence of audiovisual stimuli and is asked to report on the resulting perceptual experience. When a subject interacts with the stimuli provided, a tight sensorimotor coupling is enabled, which is an important characteristic of embodied perception. According to embodiment theory, a person and the environment form a pair in which the two parts are coupled and determine each other. The term "embodied" highlights two points: first, cognition depends upon the kinds of experience that are generated from specific sensorimotor capacities; second, these individual sensorimotor capacities are themselves embedded in a biological, psychological, and cultural context [29].

The notion of embodied interaction is based on the view that meanings are present in the actions that people engage in while interacting with objects, with other people, and with the environment in general. Embodied interfaces try to exploit the phenomenological attitude of looking at the direct experience, and let the meanings and structures emerge as experienced phenomena. Embodiment is not a property of artifacts but rather a property of how actions are performed with or through the artifacts. Audio-tactile interactions, described in the following section, require a continuous action-feedback loop between a person and the environment, an important characteristic of embodied perception.

3.2 AUDIO-TACTILE INTERACTIONS

Although the investigation of audio-tactile interactions has not received as much attention as audiovisual interactions, it is certainly an interesting field of research, especially considering the tight connections between the sense of touch and audition. As a matter of fact, both audition and touch are sensitive to the very same kind of physical property, namely mechanical pressure in the form of oscillations. The tight correlation between the information content (oscillatory patterns) conveyed by the two senses can potentially support interactions of an integrative nature at a variety of levels along the sensory pathways. Auditory cues are normally elicited when one touches everyday objects, and these sounds often convey useful information regarding the nature of the objects [1, 42].

The feeling of skin dryness or moistness that arises when we rub our hands against each other is subjectively attributed to the friction forces at the epidermis. Yet it has been demonstrated that acoustic information also participates in this bodily sensation, because altering the sound arising from the hand-rubbing action changes our sensation of dryness or moistness at the skin. This phenomenon is known as the parchment-skin illusion [58], and it is an example of how interactive auditory feedback can affect subjects' tactile sensations. Specifically, in the experiment demonstrating the illusion, subjects were asked to sit with a microphone close to their hands and to rub their hands against each other. The sound of the hands rubbing was captured by the microphone, manipulated in real time, and played back through headphones. The sound was modified by attenuating the overall amplitude and by amplifying the high frequencies. Subjects were asked to rate the tactile sensation in their palms as a function of the different auditory cues provided, on a scale ranging from very moist to very dry. The results show that the auditory feedback significantly affected the perception of the skin's dryness. This study was extended in [48] using a more rigorous psychophysical testing procedure; the results reported a similar shift along the smooth-dry scale correlated with changes in auditory feedback, but not in the roughness judgments per se.



However, both studies provide convincing empirical evidence demonstrating the modulatory effect of auditory cues on people's tactile perception of a variety of different surfaces. A similar experiment was performed combining auditory cues with tactile cues at the tongue. Specifically, subjects were asked to chew on potato chips, and the sound produced was again captured and manipulated in real time. Results show that the perception of potato chips' crispness was affected by the auditory feedback provided [150].

Lately, artificial cues are appearing in audio-haptic interfaces, allowing researchers to carefully control the variations in the provided feedback and the resulting perceived effects on exposed subjects [28, 100, 170]. Artificial auditory cues have also been used in the context of sensory substitution, to provide artificial sensibility at the hands using hearing as a replacement for lost sensation [77]. In this particular study, microphones placed at the fingertips captured and amplified the friction sound obtained when rubbing hard surfaces. In [65] an investigation of the interaction between auditory and tactile cues in near space is presented. The authors describe an illusion in which sounds delivered through headphones and presented near the head induce a tactile experience: the left ear of a dummy head was stroked with a paintbrush and the sound recorded; when the recording was played back to participants, they felt a tickling sensation when the sound was presented near the head, but not when it was presented far from the head.

3.3 VISUAL-TACTILE INTERACTIONS

One of the first and most cited examples of interaction between vision and touch is the so-called rubber hand illusion [12]. In this experiment subjects see a fake plastic hand while their real hand is hidden. If the real hand and the plastic one are brushed simultaneously, subjects report the fake hand to feel like their own. The experiment works because our sense of vision is stronger than our sense of proprioception, so when conflicting cues are provided, vision dominates. A similar experiment was performed in a virtual reality environment [143], demonstrating that a virtual limb can be made to feel part of one's body if appropriate multisensory correlations are provided.

It is also possible to use visual feedback to create so-called pseudo-haptic sensations. As described in [71], modifying the motion of the cursor on the computer screen, i.e., the Control/Display ratio, makes it possible to create tactile sensations. Assuming that the image displayed on the screen corresponds to a top view of a texture, an acceleration (or deceleration) of the cursor indicates a negative (or positive) slope of the texture. Experimental evaluations showed that participants could successfully identify macroscopic textures such as bumps and holes simply from the variations in the motion of the cursor. These illusions are particularly useful in situations where haptic feedback is missing; in such cases it can be replaced with carefully designed visual feedback.
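A minimal sketch of this Control/Display-ratio idea is given below, assuming the slope of the virtual texture under the cursor is already known; the gain constant and the function name are illustrative assumptions, not the implementation of [71].

```python
def pseudo_haptic_cursor_step(device_dx: float, slope: float, gain: float = 5.0) -> float:
    """Scale a mouse movement by the local slope of a virtual texture under
    the cursor. Moving uphill (slope > 0) slows the cursor, moving downhill
    (slope < 0) speeds it up; users tend to read this as bumps and holes.
    `gain` sets how strongly the slope modulates the Control/Display ratio."""
    cd_factor = 1.0 / (1.0 + gain * slope) if slope > 0 else 1.0 - gain * slope
    return device_dx * cd_factor

# Example: a small bump under the cursor (climb, then descend).
# pseudo_haptic_cursor_step(10.0,  0.2)  -> cursor slowed while climbing
# pseudo_haptic_cursor_step(10.0, -0.2)  -> cursor sped up while descending
```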

3.4 CONCLUSIONS

This chapter has introduced how the different senses interact, with a focus on audition, vision, and touch. A better understanding of the human senses is important for building technologies that simulate these modalities and their interactions.
CHAPTER 4

INTERACTING IN VIRTUAL REALITY



4.1 BACKGROUND

As the first generation of VR devices enters the consumer market, it is straightforward to predict that human-centered interaction will be a focal point in industry and research. Currently, the leading VR companies are continuously developing handheld controller options at low price points. Such controllers are perhaps the easiest way to interact with the majority of fully immersive VR experiences.

Quality interactions enhance user understanding of what has just occurred, what is happening, what can be done, and how to do it. In the best case in VR, not only will goals and needs be efficiently achieved, but the experiences will be engaging and enjoyable [57].

Manufacturers and developers create their own interaction design best practices (1) and guidelines (2) for satisfactory and acceptable interaction fidelity [57]. In Chapter 8 we present our own design guidelines, collected over years of designing virtual reality experiences. Similarly, Jerald has categorized over a hundred VR-based interaction themes into interaction patterns [57]. The resulting 16 patterns are organized into five overarching groups: Selection, Manipulation, Viewpoint Control, Indirect Control, and Compound Patterns. These groups roughly correspond to the categories derived from an extensive task analysis [56]. Design-space approaches, such as the one presented in [11], also relate to one or several of these groups. An important difference between task- or design-space-based approaches and the pattern approach, however, is that the patterns are derived from the user's point of view. Jerald motivates his approach by revisiting general interaction models, such as Don Norman's principles and stages of interaction, and VR-specific concepts, such as interaction fidelity, proprioceptive and egocentric interaction, reference frames, and ergonomic concerns such as cybersickness and fatigue. The patterns also relate to the design guidelines we present in Chapter 8, and they can be further broken down into more specific interaction techniques. For example, the Walking pattern, a form of viewpoint control, consists of real-world walking, redirected walking, walking in place, treadmill walking, etc. We consider walking specifically in the next chapter.

Based on these concepts and patterns, this chapter analyzes how our design and evaluation guidelines apply to the current theory and practice of interaction in virtual reality. We start by briefly explaining important concepts of general and VR-specific interaction design.

1 https://developer.oculus.com/design/latest/concepts/bp_intro/
2 https://ion.com/vr-best-practices
3 http://www.uxofvr.com

Whether a VR interface attempts to be realistic or not, it should be intuitive. An intuitive interface is an interface that can be quickly understood, accurately predicted, and easily used. Intuitiveness is in the mind of the user, but the designer can help form it by conveying, through the world and the interface itself, concepts that support the creation of a mental model. A key concept here is the interaction metaphor, which exploits specific knowledge that users already have of other domains [78]. In the following, we explain concepts and models such as Don Norman's principles and stages of interaction. We specifically revisit Figure 2.1 and consider the possible breakdowns between action and perception, or, in Norman's terms, the gulfs of execution and evaluation. We also discuss general mappings and constraints, and VR-specific concepts such as proprioceptive and egocentric interaction, reference frames, and ergonomic concerns such as cybersickness and fatigue.

4.2 DESIGN PRINCIPLES AND INTERACTION The design principle 1 (DP1 in the following) directly calls for a multi-sensory interaction design. When choosing or designing multimodal interactions, it can be helpful to consider different ways of integrating the modalities together. Input can be categorized into six types of combinations: 1. 2. 3. 4. 5. 6.

Specialized Equivalence Redundancy Concurrency Complementary Transfer [57].

All of these combination types except the specialized one are multimodal. 3D sound is especially relevant [8] to DP1, since the location and motion of visual virtual objects need to match those of the auditory objects. Other modalities should also be included in the design, and the physicality of the performer's actions should be preserved. Redundant input modalities can be used to reduce noise and disambiguate signals.



Concurrent input modalities could be used to improve efficiency by enabling the user to perform two interactions simultaneously, but the real benefit of multimodal interaction is achieved when complementary input modalities are used. Finally, transfer between input modalities can be used when one modality's device is unreliable, so that users do not have to start over if there is a failure.

All interactions should be smooth, with minimum latency (DP2). The guidelines in [57] on latency focus mostly on hardware and displays; however, the use of prediction to compensate for latencies of up to 30 ms is a direct call for human-centered machine learning [44], a new trend in human-computer interaction (HCI). The prediction error can then be corrected with a post-rendering technique (e.g., 2D image warping) [57]; a minimal sketch of this predict-then-correct idea is shown after the DP5 discussion below.

An extra issue with virtual reality technologies is that wrong mappings, for example between vision and proprioception, can create a type of motion sickness known in the VR context as cybersickness (DP3). The causes of cybersickness are extensively studied (see [57] for a review), and guidelines have been proposed. For instance, when motion sickness is a primary concern, e.g., for users new to VR or for a general audience, interaction designers are advised to use one-to-one mapping of real head motion, or teleportation. Meanwhile, the industry reports and constantly improves its own best practices (see https://developer.oculus.com/design/latest/concepts/bp_app_simulator_sickness/).

Virtual reality is a different medium from our physical world. Replicating existing interfaces in virtual reality may not bring useful results unless there is a way to leverage common or expert interaction techniques (DP4). This is where metaphors become a useful tool to help users quickly develop a mental model of how an interface works. Specific interaction design guidance includes consistent affordances and unambiguous signifiers, constraints to guide actions and ease interpretation, obvious and understandable mappings, and immediate and useful feedback. For instance, constraints can be used to add realism (e.g., users should not travel through walls). If appropriate mappings are not available, commonly accepted metaphors can be used (e.g., up is more, down is less). When appropriate, interactions with two hands should be supported; however, expressing expert use can be difficult if two-handed interfaces are designed inappropriately.

In cases where such natural interactions are not feasible, magical interactions can be used (DP5). On the continuum of interaction fidelity, realistic interactions should be considered for mission-critical application domains (training applications, simulations, surgical applications, therapy, and human-factors evaluations). Non-realistic interactions can be used to increase performance and minimize fatigue, whereas magical interactions can be used to enhance the user experience, circumvent the limitations of the real world, and teach abstract concepts. Unless realistic interaction is a primary goal, intuitive and useful magical techniques should be preferred.
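Returning to the latency discussion under DP2, the sketch below illustrates simple predict-then-correct latency compensation: head yaw is extrapolated a few tens of milliseconds ahead from recent tracker samples. The constant-velocity model, the 30 ms look-ahead, and the function names are illustrative assumptions, not the specific methods of [44] or [57].

```python
def predict_yaw(samples, lookahead_s=0.03):
    """Extrapolate head yaw `lookahead_s` seconds ahead (e.g. 30 ms) from the
    two most recent tracker samples, assuming constant angular velocity.
    `samples` is a list of (timestamp_s, yaw_deg) tuples, oldest first."""
    (t0, yaw0), (t1, yaw1) = samples[-2], samples[-1]
    angular_velocity = (yaw1 - yaw0) / (t1 - t0)  # degrees per second
    return yaw1 + angular_velocity * lookahead_s

# The frame is rendered with the predicted pose; just before display, the
# residual error between the predicted and the newly measured pose can be
# corrected with a cheap post-rendering step such as 2D image warping.
```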

Ergonomics of the display is essential for user comfort (DP6). When interaction is considered, the reach and possible fatigue of arms and hands also come into play. Special attention should be paid to interactions involving both hands: most people have a dominant hand, i.e., the hand preferred for performing fine motor skills. The non-dominant hand provides the reference frame, giving the ergonomic benefit of placing the object being worked upon in a way that is comfortable (often subconsciously) for the dominant hand and that does not force the dominant hand to work in a single locked position. The non-dominant hand also typically initiates manipulation of the task and performs gross movements of the object being manipulated, in order to allow convenient, efficient, and precise manipulation by the dominant hand. The most commonly given example is writing: the non-dominant hand controls the orientation of the paper for writing by the dominant hand. Interactions in VR may feel awkward when the non-dominant hand does not control the reference frame for the dominant hand.

While presence (DP7) is very important for interactions in VR, designers may sometimes need to trade off immersion, presence, realism, and even cybersickness. For example, adding non-realistic, real-world-stabilized cues to the environment as one flies forward in order to reduce sickness can reduce presence, especially if the rest of the environment is realistic. Such trade-offs aside, a sense of presence should be considered when designing VR experiences. Bare-hand systems, gloves, and/or haptic devices that correspond to virtual objects should be used when a high sense of presence is important.

The human body, a real object every user has, should be exploited maximally in interaction design (DP8). It can be considered as a plane of reference: commonly used tools can be placed relative to the body to take advantage of muscle memory. In addition, the fact that task performance characteristics and behaviour change with the representation of the body [63] can be used as a potential resource in designing interactions.

Tapping into everyday social skills (DP9), researchers from the University of Southern California Institute for Creative Technologies have developed human-like entities complete with interactive social skills and connection/rapport-building capability. The system works by sensing the user's emotional state and responding appropriately for psychotherapy applications. Those who believed they were interacting with a computer-controlled virtual human rather than a human-controlled virtual human reported a lower fear of self-disclosure, reported lower impression management (only disclosing positive information), displayed sadness more intensely, and were rated by observers as more willing to disclose [75]. Besides designing for this kind of dyadic hyper-realist social interaction, social presence or co-presence [105] can be a useful resource for small-group social interaction.



4.3 EVALUATION

Evaluation of interaction in VR presents challenges in terms of usability and engagement. Additional challenges are introduced by the immersive visualization component, such as the sense of presence and the need for a virtual body representation to create agency. These challenges can be structured with a three-layer evaluation scheme:

1. Interaction modalities
2. A VR-specific middle layer
3. Higher-level goals, practice, and experience.

The nine design principles (DPs) outlined in the previous section can be aligned with this scheme. The first layer concerns the modalities of interaction [32]. The alignment between each input and output modality would provide a good start in this layer (DP1). After considering each modality in turn, we can then proceed with perceptual integration and mapping (DP1), based on the perceptual, sensorimotor, and cognitive abilities and capacities of users.

Such a structured approach has previously been used by [32] to evaluate a rhythmic interaction system with a virtual tutor [59], and to redesign the auditory and visual displays of the interactive system. Besides specific issues in each modality, the approach identified the effects of latency (DP2), social factors (DP9), and, to some degree, the ergonomics of the display in terms of fatigue (DP6).

The second layer is VR-specific, and considers evaluation of cybersickness (DP3), virtual body representation and ownership (DP8), and presence (DP7). VR-specific evaluation methods should be followed in this layer; see the related work under the corresponding design principles. Latency (DP2), for instance, which was considered in the previous layer for each interaction modality, should be revisited at the VR-specific level as an integrated system property. It is clear that advances in VR display and tracking technologies constantly reduce latency. Note, however, that there is a perceptual and motor tolerance to latency that can positively impact the evaluation [80]. The concept of presence (DP7) also needs special attention at this level. A recent definition of presence concerns the combination of two orthogonal components: place illusion and plausibility [139]. Place illusion corresponds to "being there", the quality of having a sensation of being in a real place. Plausibility illusion refers to the illusion that the scenario being depicted is actually occurring (see Chapter 6 for more details on the place and plausibility illusions).

The final layer focuses on the quality and goals of interaction and the specific mechanisms involved, such as natural and magical interactions (DP5), and leveraging expert techniques (DP4) for musical expression. This layer focuses on higher design goals; evaluators should avoid breaks in presence, and therefore methods to assess the user experience without interruptions are desirable. One widely adopted way to assess user experience in virtual reality is by using post-experimental questionnaires, sometimes coupled with physiological measurements captured during the experiment [84]. Extensions towards practice- and experience-based frameworks are also possible at this layer.

4.4 INDUSTRIAL POSSIBILITIES

Interaction technologies in VR concentrate mainly on fast and high-resolution displays and accurate hand trackers. Companies can make an impact by developing alternative I/O devices, especially auditory, haptic, and whole-body motion interfaces. User experience is expected to be a paramount factor in designing upcoming VR systems; interaction designers may find a range of opportunities in designing interactions, not interfaces [6], also in the virtual domain.



CHAPTER 5

WALKING IN VIRTUAL REALITY


THIS CHAPTER REVIEWS THE STATE OF THE ART OF WALKING INTERFACES FOR VIRTUAL REALITY, FOCUSING FIRST ON ACADEMIC RESEARCH AND THEN ON COMMERCIAL POSSIBILITIES. IT PRESENTS AN OVERVIEW OF AUDIO AND HAPTIC WALKING DISPLAYS, MODALITIES LESS INVESTIGATED WHEN CONSIDERING WALKING INTERACTIONS.

One of the unresolved challenges in virtual reality applications is the locomotion problem. Navigation is a fundamental interaction task in virtual environments. Most VR applications give users the possibility to walk or move in the virtual world. One constraint that often comes along with VR setups is the limited workspace in which users are physically walking, also known as the problem of incompatible spaces [89]. Ideally a virtual reality world could be infinite, whereas the physical space where a person is interacting is finite. The motion of a person is indeed bounded by either the walls of the simulation room or the range of the tracking system. Virtual navigation techniques must therefore cope with such physical restrictions. In this chapter we present several approaches to virtual navigation which cope with the problem of incompatible spaces using different strategies, from walking in place to redirected walking to hardware-based interfaces. The concepts behind these approaches afford progressively more effective physical walking, while gradually shifting the design focus from software to hardware issues.

5.1 WALKING IN PLACE

There are several known VR locomotion metaphors in which the user is not required to walk [15], and which therefore do not need to deal with workspace restrictions. Examples of these metaphors include teleportation, i.e., an instantaneous switch to a new location. Worlds In Miniature (WIM) [109] is a metaphor in which users hold a copy of the virtual world in their hands; from that copy, they can point to a location and be brought anywhere in the virtual world. Probably the most common navigation technique is the Flying Vehicle, where the environment is not manipulated; the illusion is that the user can move through the world, either by using a mock-up, a wand, or another device. Walking in place (WIP) [141] simulates the physical act of walking without forward motion of the body; a virtual forward motion is introduced instead. When relying on WIP techniques for virtual locomotion, the user performs stepping-like movements. These steps in place serve as a proxy for real steps and enable the user to move through the virtual world while remaining stationary with respect to the physical environment.

The optical flow, which should match the proprioceptive information generated during the physical act of walking, is instead coupled to virtual proprioceptive cues. The sense of presence is greatly increased compared to static navigation techniques [167], though other sensory cues of walking (mainly vestibular) are missing. WIP techniques provide one possible solution to the problem emerging when an immersive virtual environment (IVE) offers a larger freedom of movement than the physical environment where the interaction is taking place. Such techniques are particularly useful when the spatial constraints are very prominent. WIP techniques also constitute an inexpensive, convenient, and relatively natural alternative to these approaches. The advantages of WIP techniques include, but need not be limited to, convenience and cost-effectiveness [35], good performance on simple spatial orienting tasks [178], and generation of proprioceptive feedback similar, albeit not identical, to that resulting from real walking [140]. Moreover, virtual locomotion accomplished via such stepping motions has been shown to elicit a more natural walking experience and a stronger sensation of presence compared to interaction via more traditional peripherals [167]. Combined, these potential advantages suggest the need for finding the best possible WIP technique. Arguably, the challenge of doing so is twofold. First, it includes the technical challenge of enabling users to control their virtual velocity in a manner that is both responsive and smooth [154]. Second, it is necessary to investigate how to increase the perceived naturalness of WIP locomotion; that is, how to create an experience of walking in place through virtual environments as similar as possible to the experience of real walking. A part of the second challenge is to ensure that there is a natural correspondence between the gestures being performed and the resulting virtual velocity. The literature on human biomechanics tells us how to derive realistic walking speeds from gait properties, such as the step frequency [177]. However, realistic virtual walking speeds need not always be perceived as natural. Wendt, Whitton and Brooks [177] proposed a WIP technique informed by human gait principles which is able to produce walking speeds that correspond better with those of real walking.



A state machine based on the human gait cycle makes it possible to estimate the user's step frequency multiple times during each step, and walking velocities are estimated based on the relationship between height and step frequency known from research on human biomechanics.
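As a rough illustration of how such a biomechanics-based mapping can work, the sketch below converts an estimated step frequency and the user's height into a virtual walking speed. This is our simplification, not the GUD-WIP algorithm from [177]; the 0.41 × height step-length rule is only a commonly used approximation.

```python
# A minimal sketch, not the algorithm from [177]: it only illustrates turning
# an estimated step frequency and the user's height into a virtual walking
# speed. The step-length rule below is an assumed rough approximation.
def wip_speed(step_frequency_hz, body_height_m):
    """Virtual walking speed in m/s from step frequency and body height."""
    step_length_m = 0.41 * body_height_m      # approximate step length
    return step_length_m * step_frequency_hz  # speed = step length * step frequency

# Example: a 1.8 m tall user stepping in place at 1.9 steps per second.
print(wip_speed(1.9, 1.8))  # ~1.4 m/s, a typical comfortable walking speed
```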

It is possible to divide existing WIP techniques into two categories based on the technology used to register the user's input: some techniques are based on a physical interface which the user manipulates in order to generate virtual movement, while others rely on various forms of motion tracking.

The physical interfaces commonly detect discrete events, i.e., contact between the feet and the ground. One example is the Walking Pad [14], which detects the user's steps through 60 iron switch sensors embedded in a 45 cm x 45 cm plexiglass surface. Interestingly, Nintendo's Wii Balance Board (www.nintendo.com) has also been used to facilitate WIP locomotion [178]. Unlike the discrete events detected by the physical interfaces, motion tracking enables continuous detection of the position or velocity of body parts. If necessary, the discrete events of the gait cycle (e.g., impact of the feet) can be extrapolated from the continuous motion information. Slater et al. [140] proposed the so-called Virtual Treadmill, which may be the first implementation of a WIP technique. Interestingly, this technique did not rely on tracking of the feet. Instead, a neural network recognizing patterns in the head movement was able to detect whether the user was stepping in place or not. Zielinski, McMahan, and Brady [181] describe a technique that generates virtual movement by tracking the shadows cast by users' feet onto the floor of an under-floor projection. Feasel, Whitton, and Wendt [35] have proposed the Low-Latency, Continuous-Motion Walking-in-Place (LLCM-WIP) technique, which controls the virtual velocity based on the speed of the user's vertical heel movement. Notably, WIP locomotion has also been achieved using commercially available motion tracking systems; for example, the Microsoft Kinect can be used for WIP locomotion in combination with the Flexible Action and Articulated Skeleton Toolkit (FAAST) [160]. Finally, Wendt, Whitton, and Brooks [177] proposed the Gait-Understanding-Driven Walking-In-Place (GUD WIP) technique. This WIP technique relies on a biomechanics-inspired state machine that can estimate the current step frequency multiple times during each step. Based on knowledge of human biomechanics, the algorithm is able to translate this information into realistic walking speeds. While GUD-WIP's ability to reproduce natural walking speeds is notable, it seems reasonable to question whether faithful reproduction of real walking speeds is always desirable, since individuals tend to misperceive speeds during virtual walking. For an exhaustive overview of WIP techniques, the reader is referred to [90].
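To make the tracking-based approaches above more concrete, the following sketch shows a velocity controller in the spirit of LLCM-WIP [35], driving forward speed continuously from vertical heel velocity. The class, gain, and smoothing values are illustrative assumptions rather than the published implementation.

```python
# A minimal sketch in the spirit of LLCM-WIP [35] (not the published
# implementation): virtual forward velocity follows the magnitude of the
# tracked vertical heel velocity, smoothed so the avatar stops shortly
# after the user stops stepping.
class HeelSpeedWIP:
    def __init__(self, gain=0.6, smoothing=0.9):
        self.gain = gain            # maps heel speed (m/s) to forward speed
        self.smoothing = smoothing  # simple low-pass filter coefficient
        self.speed = 0.0

    def update(self, left_heel_vz, right_heel_vz):
        """Feed tracked vertical heel velocities (m/s); returns forward speed in m/s."""
        raw = self.gain * (abs(left_heel_vz) + abs(right_heel_vz))
        self.speed = self.smoothing * self.speed + (1.0 - self.smoothing) * raw
        return self.speed

# Example: one tracker frame in which the left heel moves upward at 0.8 m/s.
wip = HeelSpeedWIP()
print(wip.update(0.8, 0.0))
```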

5.2 REDIRECTED WALKING

Redirected walking is a collection of techniques which make it possible to discretely or continuously reorient or reposition the user through subtle or drastic manipulation of the stimuli used to represent the virtual world [115, 177, 161]. Redirection techniques constitute a promising solution to the problem of incompatible spaces, since they preserve the vestibular motion information accompanying real walking. However, work by Steinicke et al. [153] suggests that a very large tracking space (40 m x 40 m) is necessary in order to enable unlimited walking along a straight line in the virtual environment while being redirected along a circular arc in the real world.
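As an illustration of the continuous variant, the sketch below injects a small additional world rotation proportional to the distance walked, so that a straight virtual path corresponds to a large circular arc in the tracked space. The radius value is only indicative of the magnitudes discussed in the literature; detection thresholds vary across the cited studies.

```python
# A minimal sketch (illustrative, not taken from [115] or [153]) of
# curvature-based redirection: while the user walks, the virtual world is
# rotated slightly around them, so a straight virtual path maps onto a
# circular arc in the physical tracking space.
import math

def redirection_yaw_offset(walked_distance_m, arc_radius_m=22.0):
    """Extra world rotation (degrees) injected after walking a given distance.

    The arc radius here is an assumed value in the range often discussed as
    hard to notice; smaller radii require less space but become perceptible.
    """
    return math.degrees(walked_distance_m / arc_radius_m)

# Example: after 5 m of real walking, the world has been rotated by ~13 degrees.
print(redirection_yaw_offset(5.0))
```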

5.3 MECHANICAL REPOSITIONING

An alternative to the techniques presented above is the use of elaborate mechanical setups that facilitate relatively natural walking without changing the user's position relative to the physical environment [7, 11, 12, 13, 18]. In the next section we will examine other modalities which play an important role while walking, e.g., audition and touch.

5.4 AUDITORY RENDERING

The development of efficient yet accurate simulation algorithms, together with improvements in hardware technology, has boosted research on auditory display and physically based sound models for virtual environments [169, 113, 24]. The importance of auditory cues in enhancing the sense of immersion and presence is a recognized fact in virtual environment research and development. Most prior work in this area has focused on sound delivery methods [158, 128], sound quantity and quality of auditory versus visual information [21], and 3D sound [38, 172]. Relatively recent studies have investigated the role of auditory cues in enhancing self-motion and presence in VEs [68]. Self-generated sounds have often been used as enhancements to VEs and first-person 3D computer games, particularly in the form of footstep sounds accompanying self-motion or the presence of other virtual humans. A smaller number of examples, such as the recent work of Nordahl [98] and Law et al. [70], have even aimed to provide multimodal cues linked to footstep events in such environments. Footstep sounds have always represented an important element in movies, and more recently also in computer games and virtual environments.




Usually such sounds are taken from sound libraries or recorded by so-called Foley artists, who put shoes on their hands and interact with different materials to simulate the act of walking. Recently, several physics-based algorithms have been proposed to simulate the sounds of walking. One of the pioneers in this field is Perry Cook, who proposed a collection of physically informed stochastic models (PhiSM) simulating several everyday sonic events [23]. Among such algorithms, the sounds of people walking on different surfaces were simulated [22]. A similar algorithm was also proposed in [37], where physically informed models reproduced several stochastic surfaces. The Natural Interactive Walking EU project, active until fall 2012, put major emphasis on the audio-tactile augmentation of otherwise neutral floors through the use of active tiles as well as instrumented shoes. Both interfaces were designed based on the fundamental hypothesis that a credible, yet informative, augmentation of a flat, solid floor could be realized via the superposition of virtual audio-tactile cues. Effective audio-tactile simulations of aggregate and resonant ground categories have been obtained through physically-based sound synthesizers.
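The sketch below conveys the flavour of such physically informed stochastic synthesis: a footstep on an aggregate surface is approximated as a burst of randomly timed, decaying micro-impacts. It is our simplified illustration, not the PhiSM models from [22, 23] nor the synthesizers used in the project.

```python
# A minimal sketch (not the PhiSM implementation from [23]) of a stochastic
# footstep-like sound: a burst of micro-impacts with random timing, amplitude
# and resonant frequency, in the spirit of physically informed stochastic models.
import numpy as np

SR = 44100  # sample rate in Hz

def micro_impact(freq_hz, decay_s, amp, sr=SR):
    """One decaying sinusoid representing a single grain/impact."""
    t = np.arange(int(decay_s * sr)) / sr
    return amp * np.exp(-t / decay_s * 5.0) * np.sin(2 * np.pi * freq_hz * t)

def footstep(duration_s=0.25, grain_rate_hz=400, sr=SR, seed=0):
    """Sum randomly timed micro-impacts; the grain rate loosely controls
    how 'aggregate' (gravel-like) the simulated surface sounds."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(duration_s * sr))
    n_grains = rng.poisson(grain_rate_hz * duration_s)
    for _ in range(n_grains):
        start = int(rng.integers(0, len(out)))
        g = micro_impact(freq_hz=rng.uniform(800, 4000),
                         decay_s=rng.uniform(0.005, 0.02),
                         amp=rng.uniform(0.1, 1.0))
        end = min(len(out), start + len(g))
        out[start:end] += g[:end - start]
    # Envelope the burst so energy is concentrated at the heel strike.
    out *= np.exp(-np.arange(len(out)) / (0.08 * sr))
    return out / (np.max(np.abs(out)) + 1e-9)
```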

5.4.1 WALKING SOUNDS REPRODUCTION

"Spaces speak, are you listening?" asks the title of a book by Blesser and Salter, which explores the topic of aural architecture from the disciplines of audio engineering, anthropology, and cognitive psychology [10]. Indeed, listening to a soundscape can provide useful information regarding the size of the space, the location, and the events. Obviously the sounds of a place can also evoke memories. Moreover, when exploring a place by walking, at least two categories of sounds can be identified: the person's own footsteps and the surrounding soundscape. Studies on soundscape originated with the work of Murray Schafer [129]. Among other ideas, Schafer proposed soundwalks as empirical methods for identifying the soundscape of a specific location. During a soundwalk it is important to pay attention to the surrounding environment from an auditory perspective.

Schafer claimed that each place has a soundmark, i.e., sounds with which one identifies a place. Reproducing soundscapes in a laboratory setting presents several challenges, both from the designer's point of view and from the technologist's point of view. From the designer's point of view, the challenge is how to select the different sonic events that together produce the soundscape. From this perspective the scientific literature does not provide much input, and the approach usually adopted in the media industry is mostly based on the artistic skills and intuitions of the sound designer. However, an exception is the approach proposed a decade ago by Chueng, who suggested designing soundscapes based on users' expectations [20]. Her methodology consists of asking people which sounds they associate with specific places, and then using their answers as a starting point to create soundscapes. Chueng also proposes discrimination as an important parameter in soundscape design. Discrimination is the ability of a soundscape to present few, easily identifiable sound marks. In her approach, this is also called minimal ecological sound design. Moreover, studies have recently shown how the addition of auditory cues can lead to measurable enhancements in the feeling of presence. Results are available on sound delivery methods and sound quality [157, 127]. The role of self-produced sounds in enhancing the sense of presence in virtual environments has also been investigated. By combining different kinds of auditory feedback, consisting of interactive footstep sounds created by ego-motion together with static soundscapes, it was shown that motion in virtual reality is significantly enhanced when moving sound sources and ego-motion are rendered [99].

AVAILABLE RESOURCES

A specific treatment of the use of the above models for foot-floor interaction purposes is presented in [134], along with pointers to sources of software, sound, and other documented material. Implementing such models is not straightforward, but real-time software modules realizing impacts and frictions are available that are open and flexible enough for inclusion in more general architectures for the synthesis of footstep sounds. In particular, the Sound Design Toolkit (SDT)1 [27] contains a set of physically consistent tools for designing, synthesizing and manipulating ecological sounds [41] in real time. SDT consists of a collection of visual programs (or patches) and dynamic libraries (or externals) for the software Puredata, which is publicly available, and Max/MSP, which is easier to work with although commercial. SDT also provides examples, allowing users to launch these patches and see them at work in both such visual environments.

Public software is also available which implements footstep sound synthesis models that are ready for use. Farnell accompanied his work with a patch and an external for Puredata, both referenced in the related paper [34]. Fontana's crumpling model for Puredata has been integrated in SDT; examples of this model at work can be found, among others, on the Natural Interactive Walking project website2. The same website collects sound examples resulting from alternative instantiations of the physically-based approach, based on a sound synthesis engine that has not been made available in the public domain [166]. Furthermore, it contains footstep sounds that have been generated using the aforementioned hybrid model descending from Cook's synthesis technique.

1 http://www.soundobject.org/SDT/
2 http://niw.soundobject.org

Concerning the delivery of footstep sounds, they can be conveyed to the walker by means of different devices, such as headphones, loudspeakers, or bone conduction. Obviously the choice of delivery method depends on several factors, for example whether the soundscape has to be part of a mobile or augmented reality installation, or whether it is part of a virtual reality laboratory setting. An ecologically valid solution consists of placing loudspeakers at shoe level, since this faithfully reproduces the equivalent situation in real life.



As an alternative, sounds can be conveyed by means of a system of multichannel loudspeakers. In this case a problem arises regarding how footstep sounds can be rendered in a 3D space, how many loudspeakers should be used, and where they should be placed. Sound rendering for virtual environments has reached a level of sophistication such that it is possible to render in real time most of the phenomena which appear in the real world [39]. When delivering through multichannel speakers, the choice of rendering algorithms is obviously essential. As a matter of fact, various typologies of soundscapes can be classified:

• Static soundscapes
• Dynamic soundscapes
• Interactive soundscapes.

Static soundscapes are those composed without rendering the appropriate spatial position of the sound sources. In static soundscapes the same content is delivered to every channel of the surround sound system. The main advantage of this approach is that the user exploring the virtual environment does not need to be tracked, since the same content is displayed to every speaker. The main disadvantage is obviously that the simulation does not represent a real-life scenario, where we receive different sonic cues from different spatial locations. Dynamic soundscapes are those where the spatial position of each sound source is taken into account, as well as their eventual movements along three-dimensional trajectories. Finally, interactive soundscapes are based on the dynamic ones, where in addition the user can interact with the simulated environment, generating auditory feedback as a result of actions. This last situation ideally represents the scenario with augmented footstep sounds, where each step of the user must be tracked and rendered while the user is walking in the virtual environment, without any perceivable latency, in order to recreate the illusion of walking on a surface different from the one the user is actually stepping upon. Sound delivery using headphones can also be performed using two approaches: simple mono or stereo delivery, and a solution based on binaural synthesis. One of the main issues in combining footstep sounds and soundscape design is finding the right amplitude balance between the two. One approach can be empirical: subjects are asked to walk freely while interactively producing the simulated footstep sounds and hearing the reproduced soundscape through multichannel speakers. Subjects are then able to adjust the volume of the footstep sounds until they find a level which they consider satisfactory.
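To indicate what distinguishes a dynamic soundscape computationally, the sketch below derives per-channel gains for a single moving source from its position relative to the listener, using inverse-distance attenuation and constant-power panning between two loudspeakers. The function and constants are our illustration, not an algorithm from the cited works.

```python
# A minimal sketch (our illustration, not from the cited works) of the gain
# computation behind a "dynamic soundscape": each source is attenuated with
# distance and panned between two loudspeakers with a constant-power law.
import math

def source_gains(src_xy, listener_xy, ref_dist=1.0):
    """Return (left_gain, right_gain) for one sound source."""
    dx = src_xy[0] - listener_xy[0]
    dy = src_xy[1] - listener_xy[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    att = ref_dist / dist                     # inverse-distance attenuation
    azimuth = math.atan2(dx, dy)              # 0 rad = straight ahead (+y)
    # Map azimuths in front of the listener (-pi/2..pi/2) to a pan position 0..1.
    pan = min(max((azimuth / math.pi) + 0.5, 0.0), 1.0)
    left = att * math.cos(pan * math.pi / 2)  # constant-power panning
    right = att * math.sin(pan * math.pi / 2)
    return left, right

# Example: a footstep sound source two metres ahead and one metre to the right.
print(source_gains((1.0, 2.0), (0.0, 0.0)))
```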

5.5 HAPTIC AND MULTIMODAL RENDERING

Haptic feedback, which aims to reproduce forces, movements, and other cutaneous sensations felt via the sense of touch, is rarely incorporated, especially in those VR applications where users are enabled to walk. A heightened sense of presence can be achieved in a VR simulation via the addition of even low-fidelity tactile feedback to an existing visual and auditory environment, and the potential gains can, in some cases, be larger than those obtained by improving feedback received from a single existing modality, such as the visual display [151]. High-frequency information in mechanical signals often closely links the haptic and auditory modalities, since both types of stimuli have their origin in the same physical contact interactions. Thus, during walking, individuals can be said to be performing simultaneous auditory and haptic probing of the ground surface and environment. As demonstrated in recent literature, walkers are capable of perceptually distinguishing ground surfaces using either discriminative touch via the feet or audition [45]. Thus, on one hand, the approaches to haptic and auditory rendering reviewed in this chapter share common features, while, on the other hand, the two types of display can be said to be partially interchangeable. An important component of haptic sensation is movement. Walking is arguably the most intuitive means of self-motion within a real or virtual environment. In most research on virtual environments, users are constrained to remain seated or to stand in place, which can have a negative impact on the sense of immersion [148]. Consequently, there has been much recent interest in enabling users of such environments to navigate by walking. One feasible, but potentially cumbersome and costly, solution to this problem is to develop motorized interfaces that allow the use of normal walking movements to change position within a virtual world. Motorized treadmills have been extensively used to enable movement in one dimension, and this paradigm has been extended to allow for omnidirectional locomotion through an array of treadmills revolving around a larger one [53]. Another configuration consists of a pair of robotic platforms beneath the feet that are controlled so as to provide support during virtual foot-ground contact while keeping the user in place, while yet another consists of a spherical cage that rotates as a user walks inside of it [51]. The range of motion, forces, and speeds that are required to simulate omnidirectional motion make these devices intrinsically large, challenging to engineer, and costly to produce. In addition, while they are able to simulate the support and traction supplied by the ground, they cannot reproduce the feeling of walking on different materials.



Lower-cost methods for walking in virtual environments have been widely pursued in the VR research community. Passive sensing interfaces have been used to allow for the control of position via locomotion-like movements without force feedback [163].


Research on the use of vibrotactile displays for simulating virtual walking experiences via instrumented shoes [135] or floor surfaces [174] is still in its infancy. Although tactile displays have, to date, been integrated in very few foot-based interfaces for human-computer interaction, several researchers have investigated the use of simple forms of tactile feedback for passive information conveyance to the feet. Actuated shoe soles have been used to provide tactile indicators related to meaningful computing events [124, 173], and rhythmic cues supplied to the feet via a stair climber have been found to be effective at maintaining a user's activity level when exercising.

5.6 COMMERCIAL POSSIBILITIES

The Wizdish (see Figure 5.1) is a novel walking interface built with no moving parts; it works using a low-friction polymer surface. The accompanying shoes also have low-friction discs under the soles. This combination makes the sliding motion achievable: the user walks by sliding the feet back and forth simultaneously, which arguably provides proprioceptive feedback reminiscent of that generated during real walking. To ensure that the user stays in the middle of the disc, the disc is slightly curved to reposition the user in case of drifting; moreover, the user should keep the body in the middle of the disc to maintain a good balance [87]. The system itself does not have any trackers [164], so it needs to be used in combination with a motion tracking system. The Wizdish is approximately 0.9 meters in diameter.

FIGURE 5.1: The Wizdish and the shoes to be used with it

Relatively affordable repositioning systems based on friction-free platforms have also been developed (e.g., Cyberith's Virtualizer and the Virtuix Omni). However, while less expensive than previous omnidirectional treadmills, these solutions are not cheap, and they do require the user to allocate space for a relatively large platform. WIP techniques are an inexpensive and practical alternative that is already achievable using commercial hardware, such as Microsoft's Kinect and Nintendo's Wii Balance Board.

The 3d rudder (www.3drudder.com/) is presented as a revolutionary feet-powered VR and 3D navigation and motion controller.

It is similar to the Wobble Board [91], proposed as a tool to motivate individuals in need of ankle rehabilitation to exercise. While the Wobble Board is used while standing, this device is used while seated, for obvious safety reasons when combined with an HMD. All one has to do is rest the feet on it. To move forward, one tilts the device forward; to move to the right, one tilts the device to the right, and so on. Rotating the device produces a corresponding rotation in the software. Up and down movements are managed by applying pressure to the right or the left of the platform.
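The sketch below illustrates the kind of mapping such a tilt-based controller implies: device pitch and roll become forward and lateral velocities, rotation becomes turning, with a small dead zone so that simply resting the feet does not cause drift. The function, thresholds, and speeds are our illustrative assumptions, not the 3dRudder SDK.

```python
# A minimal sketch (our illustration, not the product's SDK) of a tilt-to-motion
# mapping for a seated, feet-operated controller.
def tilt_to_motion(pitch_deg, roll_deg, yaw_deg,
                   dead_zone_deg=3.0, max_tilt_deg=15.0, max_speed=2.0):
    """Return (forward_speed, side_speed, turn_rate) in m/s, m/s, and deg/s."""
    def shape(angle):
        sign = 1.0 if angle >= 0 else -1.0
        mag = max(abs(angle) - dead_zone_deg, 0.0)          # ignore small tilts
        return sign * min(mag / (max_tilt_deg - dead_zone_deg), 1.0)
    forward = shape(pitch_deg) * max_speed   # tilt forward -> move forward
    side = shape(roll_deg) * max_speed       # tilt right -> move right
    turn = shape(yaw_deg) * 90.0             # rotate device -> turn, deg/s
    return forward, side, turn

# Example: a gentle forward tilt of 8 degrees produces a moderate forward speed.
print(tilt_to_motion(8.0, 0.0, 0.0))
```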



CHAPTER 6

PERCEPTUAL ILLUSIONS AND DISTORTIONS IN VIRTUAL REALITY


Through a lifetime of interactions with their surroundings, people become experts at deciphering the information relayed to them via their senses in response to their actions and events in the environment; e.g., turning your head will result in lateral movement of the pattern of light hitting the retina (laminar optic flow), and objects that gradually occupy more of the retinal image are likely to be approaching, rather than receding (expanding optic flow). VR leverages the fact that most healthy individuals have similar expectations about the sensory stimuli produced in response to motor commands and external events. Particularly, the defining feature of VR systems is that they are able to facilitate natural perception and action by means of high-fidelity tracking and displays; i.e., VR systems support a sensorimotor loop similar to that of the real world, thus enabling users to interact and perceive as they would during unmediated experiences. While the senses provide us with reasonably reliable information, perception is imperfect, leaving us vulnerable to illusions; i.e., instances of erroneous or misinterpreted perceptions of sensory information [156]. VR systems are by their very nature designed to manipulate the sensory information delivered to the user, and thus they are capable of producing a range of perceptual illusions. This chapter details a series of perceptual illusions and distortions known to occur in VR and discusses how these can be used to create more compelling VR experiences. The chapter is structured as follows: Initially, section 6.1 introduces the illusion of presence, which arguably is particularly prominent in relation to VR. Section 6.2 presents body ownership illusions that can be elicited in VR. Sections 6.3 and 6.4 deal with illusions and distortions experienced by virtually stationary and moving users, respectively. Finally, section 6.5 introduces examples of how perceptual illusions can be used in order to create more compelling VR experiences for commercial applications.

6.1 PRESENCE IN VIRTUAL REALITY

When discussing perceptual illusions in virtual reality, it is necessary to acknowledge that the experience of "being there" inside the virtual world is itself an illusion. This illusory sensation of "being there" in an artificial or remote environment is sometimes referred to as telepresence [85], virtual presence [138], physical presence [52], spatial presence [179], or just presence [142]. While many different, conflicting, and complementary theories of presence exist (e.g., [9, 73, 123, 175]), one lends itself particularly well to describing experiences elicited by VR; namely the view of presence advocated by Slater and colleagues [126, 139, 146]. This view stipulates that presence amounts to more than just the subjective sensation of "being there" in the virtual environment (VE). The degree of presence corresponds to the degree to which the user responds realistically to the VE; i.e., if perfect presence in the VE is achieved, then the user will respond exactly as if exposed to an equivalent real-world environment.

This presence response occurs on multiple levels, ranging from unconscious and automatic physiological and behavioural responses to higher-level processes involving deliberation and thought. Thus, the presence response includes, but cannot be reduced to, the subjective sensation of "being there" in the VE [126]. In fact, Slater [146] argues that the presence response happens as a function of two illusions: the place illusion (PI) and the plausibility illusion (Psi). PI corresponds to the illusion of "being there" in the VE, and it occurs as a function of the range of normal sensorimotor contingencies supported by the system; i.e., the range of normal actions the user can perform in order to perceive and affect the VE. For example, turning one's head or kneeling down should result in changes to the displayed image, and reaching out and grasping an object should cause that object to move [139]. Particularly, Slater [146] has summarized the illusion as follows: "When a person perceives by carrying out actions that result in changes in (multi-sensory) perception much as in physical reality, then the simplest hypothesis for the brain to adopt is that what is being perceived is actually there". Psi refers to the illusion that the events happening virtually are indeed happening [146], and Psi is believed to depend on the VE meeting at least three conditions:

1. The user's actions have to produce correlated reactions within the VE (e.g., a virtual character might avoid eye contact and step aside if the user stares and exhibits aggressive body language).
2. The VE should respond directly to the user, even when the user does not perform an instigating action (e.g., a virtual character might react to the presence of the user without the user initially approaching or addressing said character).
3. The VE should be credible; i.e., it should conform to the user's knowledge and expectations accrued through a lifetime of non-mediated interactions [125].

Finally, the virtual body is viewed as crucial since it represents the fusion of PI and Psi [139].

6.2 BODY OWNERSHIP ILLUSIONS

As suggested in the introductory paragraph of this chapter, people make sense of sensory stimuli based on tacit expectations as to how these stimuli relate to actions and events in the environment. Importantly, people do not just expect to see, hear, and feel external events, since their own bodies are routinely seen, felt, and heard during interaction with the environment. When looking down you expect to see your own body, and when reaching for an object you expect to see your extended arm. However, when entering a virtual world you can no longer take for granted that looking down will reveal a view of yourself, since this requires your real body to be tracked and a virtual representation to be mapped to your physical movements.



If appropriate tracking and mapping are performed, then the user may experience an illusion of virtual body ownership; i.e., the experience that an artificial body is in fact one's own physical body [81]. Interestingly, illusions of ownership of artificial body parts were first documented without the use of VR technology. Botvinick and Cohen [12] demonstrated that it is possible for individuals to get a sense of ownership of a rubber hand when tactile stimuli are synchronously applied to the rubber hand and their real hand. This illusion was aptly named the rubber hand illusion. Since then a number of studies have explored the factors influencing the rubber hand illusion (see [81] and references therein), and it has been demonstrated that the illusion can be replicated in VR [143]. Recent work by Maselli and Slater [81] empirically studied the perceptual cues forming the basis for full body ownership illusions in VR and, amongst other things, found that:

1. A first-person perspective is central to the experience of ownership of the virtual body.
2. Visual proprioceptive cues can elicit the illusion when the user views a virtual body that spatially overlaps with the real body from a first-person perspective.
3. When the spatial overlap or the realism of the virtual body is limited, multisensory or sensorimotor cues are necessary in order to elicit the illusion.

A particularly interesting property of virtual ownership illusions is that they can occur even when the user is exposed to a body that differs from their own. For example, it has been demonstrated that illusions of body ownership can be induced when the virtual body differs in terms of age [4], size [103], gender [147], and race [79]. Ownership may even be experienced over non-human virtual bodies (e.g., humanoid robots and cartoon-like avatars [76]) and over additional limbs (e.g., a virtual tail [155]). Moreover, ownership of a body that differs from one's own may even influence one's perception, attitudes, and actions. For example, when embodying a child's body, individuals have been shown to be more likely to overestimate the size of virtual objects [4]; embodying a body with a different skin color may reduce implicit racial bias [110]; and the appropriateness of the appearance of the virtual body may introduce different movement patterns while playing virtual drums [64]. The last example has interesting implications, since it suggests that VR can help to train and adapt the movements and gestures of musicians.

6.3 DISTORTIONS OF VIRTUAL SPACE

Adult users inhabiting the body of a virtual child may, as suggested, overestimate the size of virtual objects. However, distorted perception of virtual spaces is not a phenomenon that only occurs when users embody a foreign avatar. Particularly, a large body of work suggests that egocentric distances are perceived as shorter inside VEs; i.e., the distance between the observer and an object is subjectively perceived as shorter in VR (see [116] and references therein). Renner, Velichkovsky, and Helmert [116] highlight that such misperceptions pose little or no problem in relation to certain VR applications. However, veridical spatial perception will often be a fundamental requirement (e.g., correct perception of distance and scale is crucial in relation to visualization of architecture and certain training scenarios). Based on a review of the literature from fields such as computer graphics and psychology, Renner, Velichkovsky, and Helmert [116] organize the factors influencing distance perception in VEs into four groups:

1. Measurement methods
2. Technical factors
3. Compositional factors
4. Human factors.

In regards to the first group of factors, measurement methods, three categories of measures have been used to assess distance misperception in VR: blind walking, timed imagined walking, and verbal estimates. The employed measures quite consistently indicate that individuals underestimate egocentric distances in VR [116]. The technical factors correspond to the properties of the technology used to display the VE. The properties of HMDs are believed to account for a certain amount of the distance underestimation reported when VEs are displayed using such hardware; for example, based on their review of the literature, Renner, Velichkovsky, and Helmert [116] conclude that, while the restricted field of view (FOV) introduced by most HMDs cannot in isolation account for the underestimation, the combination of a restricted FOV, the mass and moment of inertia of such displays, and the feeling of wearing the HMD may account for some amount of the underestimation. Compositional factors refer to features presented in the VE (e.g., the presence or absence of textures and a virtual body). The literature is fairly consistent with respect to the effects of pictorial depth cues. Particularly, it suggests that adding complexity to the VE may positively influence distance estimates, and presentation of a correct avatar is believed to improve estimates [116]. Human factors essentially refer to the psychological characteristics of the users. According to Renner, Velichkovsky, and Helmert [116], feedback on performance and practice may improve distance estimations in VEs. However, this adaptation to virtual distances may reduce transfer of skills from VR to real life. Gender does not appear to influence distance estimations in VEs, and neither do variations in age amongst adults. Even though we have yet to learn all the causes of underestimation of egocentric distances in VEs, Renner, Velichkovsky, and Helmert generally recommend that "to facilitate distance perception as well as possible, it is important to provide binocular disparity, use a high quality of graphics, carefully adjust the virtual camera settings, display a rich virtual environment containing a regularly structured ground texture, and enhance the user's sense of presence" [116].

29


6.4 ILLUSORY AND DISTORTED SELF-MOTION

Readers who travel by train on a regular basis are likely to have experienced the following scenario: You are sitting on a motionless train, looking out the window at another train which is waiting for departure on an adjacent track. As this second train departs, you experience a fleeting, yet compelling, illusion that you are in fact on the train which is moving. This experience is a naturally occurring instance of visually induced illusory self-motion, also referred to as vection [74]. The fact that people are susceptible to such illusions can at least in part be attributed to the misleading nature of visual motion stimuli [47]. Particularly, visual motion stimuli are open to not one, but two perceptual interpretations [17]:

1. The optokinetic stimuli may lead to exocentric motion perception (e.g., the passenger who (correctly) perceives himself as being stationary while the second train is moving).
2. The optokinetic stimuli may lead to egocentric motion perception (e.g., the train passenger who (falsely) perceives the surroundings as being stationary while he is moving).

Vection illusions are affected both by the properties of the physical stimuli indicating self-motion (bottom-up factors) and by the perceiver's expectations of, and interpretation of, said stimuli [120], and such illusions can be induced using visual, auditory, vibrotactile, and biomechanical stimulation [101, 119, 168, 121]. The bottom-up factors influencing visually induced vection include, but are not limited to, the movement speed of the stimulus, the area of the visual field occupied by the display, and the perceived depth structure of the visual stimulus [119]. In regards to audition, the three primary cues for discrimination of motion are binaural cues, the Doppler effect, and sound intensity [168]. Recent work has demonstrated that self-motion illusions may also be induced by applying vibrotactile stimulation to the main supporting areas of the feet. Specifically, it was demonstrated that such stimulation can be used to create the illusion of travelling in a virtual elevator [101]. Moreover, it has been shown that this type of vibrotactile stimulation may produce illusions of forward, backward, and swaying movement, if the visuals indicate that this motion is possible [88]. Since the visuals in these cases were devoid of explicit motion cues (i.e., optic flow), their contribution to the self-motion illusion happens due to the perceiver's expectations of, and interpretation of, said stimuli (i.e., a top-down factor). Other instances of top-down factors influencing self-motion illusions include the belief that physical movement is indeed possible [120, 118, 180], and explicitly being asked to attend to either the sensation of self-motion or object motion [108]. While VR can be used to elicit compelling illusions of self-motion in virtual environments, it is interesting to note that perception of virtual self-motion is prone to distortions, particularly in relation to virtual walking.

Intuitively, one would expect that if a user is walking on a treadmill, then the visual speed displayed using an HMD should match the speed of the treadmill. However, it has been demonstrated that individuals relying on linear treadmills for virtual locomotion tend to underestimate visually presented speeds. That is to say, if an accurate visual speed matching the speed of the treadmill is presented, then the walker is likely to find it too slow [5, 61, 96, 112]. The exact causes of this perceptual distortion have yet to be uncovered, but studies have yielded the following findings:

1. The distortion may be eliminated if walkers direct their gaze downwards or to the side [5].
2. Image jitter does not appear to be responsible for the distortion [5].
3. No effect of increased HMD weight or varying peripheral occlusion has been found [94, 95].
4. The degree of underestimation appears to be inversely proportional to the size of the display FOV and the geometric FOV [92, 95].
5. The degree of identified underestimation may vary depending on study methods [95].
6. High step frequencies may lead to a larger degree of underestimation, but the evidence is somewhat equivocal with respect to this effect [93, 30, 61].
7. Finally, the degree of underestimation may vary slightly depending on whether the user is walking on a treadmill or walking in place [96].
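In practice, developers sometimes compensate for this distortion by rendering optic flow slightly faster than the physical belt speed. The sketch below shows the idea of such a visual gain; the gain value is illustrative only, since the degree of underestimation varies across the cited studies.

```python
# A minimal sketch (illustrative; gain values differ across the cited studies)
# of compensating for underestimated visual walking speed by applying a
# visual gain to the physically measured treadmill speed.
def visual_speed(treadmill_speed_mps, visual_gain=1.2):
    """Speed used for the virtual camera translation, in m/s.

    A gain above 1.0 moves the virtual viewpoint faster than the treadmill,
    which several studies report as being judged more natural than a 1:1 mapping.
    """
    return treadmill_speed_mps * visual_gain

# Example: a 1.0 m/s treadmill belt rendered with 20% faster optic flow.
print(visual_speed(1.0))
```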

6.5 LEVERAGING PERCEPTUAL ILLUSIONS FOR COMMERCIAL APPLICATIONS

One of the features that sets VR apart from traditional audiovisual media is arguably the ability to elicit compelling illusions of presence inside computer-generated, recorded, or streamed environments. The illusion of "being there" is largely the product of exposure to immersive technologies that allow users to perceive and interact with the virtual world in the same way as they would during unmediated experiences of physical reality. Bowman and McMahan [16] describe that one primary reason for the success of many previous VR applications is exactly the ability to elicit experiences that greatly resemble the ones faced during real-world interactions. For one, virtual exposure therapy may be used to treat phobias because the experience of a virtual scenario may elicit a genuine fear response. Similarly, VR is of value in relation to military and medical training because the similarity between the virtual and real scenarios makes it possible to transfer knowledge and skills from one domain to the other. Finally, Bowman and McMahan [16] describe that to some extent VR also derives its potential as a source of entertainment from the ability to provide users with realistic experiences of places and events that are unlikely or even impossible in real life. In other words, VR has the capacity to transport users into the fictional (and factual) worlds of games and films, rather than having them view these from a distance on a screen.



Moreover, besides being central to presence, illusions of body ownership could allow users to inhabit bodies that differ from their own, which in turn may affect the attitudes and behavior of the user. Thus, applications providing users with a compelling illusion of virtual body ownership have potential within a range of different domains. Particularly, it would seem that such illusions could potentially inspire empathy towards others, but also be used as an intervention in relation to individuals suffering from disturbances of their body self-perception. Self-motion illusions are central to the creation of convincing vehicle simulations. An advantage of relying on virtual vehicles, rather than body-centric modes of locomotion (e.g., walking or running), is that such approaches circumvent the problem of incompatible spaces (see chapter 5) by enabling the user to travel vast distances inside VEs without having to physically move. However, developers should exhibit caution when introducing virtual self-motion on behalf of stationary users, since the visuo-vestibular conflict may introduce cybersickness [25] or VR sickness [36]; i.e., users may experience nausea, sweating, and similar symptoms in response to the conflict between external sensory information (visual and auditory stimuli) indicative of self-motion and vestibular information suggesting to the user that he or she is stationary. Thus, the factors that produce more compelling illusions of self-motion may also cause some users to become ill. The fact that visually perceived walking speeds are susceptible to distortions may entail that realistic walking speeds are not always applicable. However, the malleability of motion perception may also be considered an advantage from the point of view of developers. Indeed, this is what makes some forms of redirected walking possible (see chapter 5); i.e., users' walking paths can be influenced through subtle manipulation of their virtual rotation and translation [153]. Notably, other perceptual illusions and distortions have also been used to make redirection possible: inspired by change blindness illusions, Suma et al. [159] devised an approach that redirects the user through subtle manipulation of the virtual architecture. In conclusion, VR software and technology enable developers to manipulate perception unlike existing media. Perceptual illusions and distortions provide powerful tools for solving practical problems (e.g., the problem of incompatible spaces), and allow for the creation of more compelling experiences (e.g., the illusion of self-motion during vehicular travel). Most notably, it is the ability to elicit compelling illusions of being in virtual environments where events appear to be really happening (i.e., the illusions of place and plausibility) that sets VR apart from existing audiovisual media.



CHAPTER 7

APPLICATIONS



THE DECADES OF RESEARCH ON VR SOFTWARE AND TECHNOLOGY HAVE REVEALED SEVERAL POTENTIAL APPLICATION AREAS, AND THE RECENT COMMERCIALIZATION OF VR HAS ONLY ADDED TO THIS LIST.

In this chapter we discuss a number of the areas where VR already has been applied or is likely to be applied in the future. Specifically, we discuss applications of VR within the contexts of entertainment and news; learning, education, and training; healthcare; product development and marketing; travel and shared experiences; and cultural heritage and art.

7.1 ENTERTAINMENT AND NEWS

Slater and Sanchez-Vives [145] describe games as one of the obvious applications of VR and one of the driving forces of the industry. Several large game development platforms, such as Unity (www.unity3d.com) and the Unreal Engine (www.unrealengine.com), already support VR and have helped democratize the development of VR games. Thus, a plethora of VR games have been produced by indie developers, and the larger studios are working on both brand new VR titles and adaptations of existing titles. The obstacles facing developers of VR games include, but are by no means limited to, the problem of facilitating natural and nausea-free virtual travel (see chapter 5), the challenge of creating compelling interactions in VR (see chapter 4), and the issue of adapting existing game mechanics to this (relatively) novel medium. Recent technological developments have entailed an increase in quality and a decrease in price of 360 video cameras and microphones, and in turn this has spurred an increased interest in immersive video. Current 360 video only allows the user to view the displayed environment from a fixed position. This generally prohibits the user from exploring and interacting with the environment and the characters inhabiting it. Moreover, the inability to move implies that the range of sensorimotor contingencies is limited in comparison to content generated using, for example, a game engine. This limitation may be detrimental to the place illusion (see chapter 6). Despite these limitations, 360 video offers a lot of promise with respect to entertainment, and the format has already been embraced by industry and content creators (e.g., a 13-minute narrative experience was created as a companion to the HBO series Mr. Robot). The challenges facing filmmakers aspiring to create cinematic VR include the need to rethink parts of the production process (e.g., the production crew cannot be present on set or else they will have to be removed from the frame during postproduction), the question of whether traditional cinematic techniques can be used to the same dramatic effect (e.g., cinematography and editing), and the need for finding new ways of guiding the attention of an audience who is in control of what is currently in view [86]. VR and 360 video have also captured the attention of journalists and news organizations, and since the first immersive journalism piece was created in 2010 [26] a number of productions have been made. One of the powers of immersive journalism is the ability to inspire empathy for people living in distant places and under foreign conditions.

When discussing the differences between "traditional" news and immersive journalism, Slater and Sanchez-Vives [145] describe that the goal is not so much the presentation of what happened but to give people experiential, non-analytic insight into the events, to give them the illusion of being present in them. That presence may lead to another understanding of the events, perhaps an understanding that cannot be well expressed verbally or even in pictures.

7.2 LEARNING, EDUCATION AND TRAINING

Some of the most promising and extensively explored avenues for VR are learning, education, and training. Slater and Sanchez-Vives [145] outline four reasons why VR offers great promise in regard to these domains. First, VR has the ability to present abstract concepts in a more tangible manner. The authors highlight mathematics as one topic where VR might be used to ease understanding (e.g., for the learning of geometric and vector algebra). Secondly, the fact that virtual worlds do not need to adhere to the laws of physics, or any natural laws for that matter, enables learning through exploration of the impossible. Thus a user may learn about the real world by visiting places governed by different rules or by controlling these rules (e.g., gravity may be slightly different, time may flow at a different rate, the opaque may become transparent, and the thoughts of others may become audible or visible). The third advantage is somewhat related, in that VR has the potential to replace existing practices that, despite being desirable, are unfeasible or even impossible in reality. Slater and Sanchez-Vives [145] use virtual field trips as an example, which would allow students to visit two or more geographically distant locations without the need for expensive and time-consuming travel; the students might even visit places that are difficult or impossible to access (e.g., the bottom of the ocean or the moon). Even though it arguably will take some time before the average student will be able to get a strong sensation of visiting distant places, projects such as Google Expeditions (https://edu.google.com/expeditions) are a step in the right direction. Finally, Slater and Sanchez-Vives [145] highlight that VR supports "doing" as opposed to just observing. This is particularly pertinent in relation to the acquisition of practical skills. The fact that VR allows users to perform practical tasks anywhere makes it ideally suited for training skills that otherwise would be too dangerous, expensive, or even impossible to train in real life. This is part of the reason why the use of VR in relation to surgical training has been researched extensively [145].

7.3 HEALTHCARE

VR does, as suggested in the previous section, have great potential as a tool for training medical staff. However, VR may also offer benefits for the physical and mental well-being of patients and other individuals.


Besides training of staff, there are at least three potential areas of application for VR within healthcare.

FIRST, several years of academic work have explored the use of VR for the treatment of mental illnesses and disorders, e.g., acrophobia, aviophobia, arachnophobia, body image disturbances, and post-traumatic stress (see [122] and references therein), and recent work has also sought to apply VR within the domain of positive psychology (e.g., virtual meditation [2]). Notably, there exist commercial VR applications designed with exposure therapy and meditation in mind, such as Face Your Fears (www.mimerse.com) and Guided Meditation VR (www.guidedmeditationvr.com). SECONDLY, VR has potential as a tool for improving physical exercise and rehabilitation. For example, VR has been applied as a motivational aid for retirement home residents [19]; in particular, this work used VR to enable the residents to bike through scenic virtual environments while they were exercising. FINALLY, it is possible to support existing practices at hospitals using virtual reality. For example, VR may be used as an alternative source of nonpharmacologic pain relief [49] or while patients are subjected to fMRI scans [50].

7.4 PRODUCT DEVELOPMENT AND MARKETING

VR can be leveraged during several stages of the product development cycle. Because VR allows users to inspect and manipulate digital versions of real products of all sizes, the technology provides a relatively inexpensive approach to generating prototypes, which can serve as a tool for experimentation and exploration by single designers or by teams of designers working collaboratively. In fact, researchers have for decades experimented with the use of VR for product design and design reviews [18]. For example, the automotive and aerospace industries have adopted VR as a development tool [182]. Recently, VR has also been adopted for marketing purposes. Again, the automotive industry lends itself well as an example: using VR, Volvo enables prospective customers to get a sense of what it is like to be behind the wheel of the Volvo XC90. Moreover, Oticon has used VR to give people an experience of the difference between hearing aids based on dated and on cutting-edge technology.

7.5 TRAVEL AND SHARED EXPERIENCES

One of the hallmarks of VR is, as described, its ability to transport users to familiar, foreign, and fantastic virtual (and real) places and events. As a consequence, virtual travel has been considered a topic of interest since the early '90s [145]. However, in its current form VR does not serve as a meaningful alternative to vacationing, and it is entirely possible that it never will. Nevertheless, the ease with which users can get a sense of being in a place that resembles their dream destination suggests that VR in time could provide a valid form of travel on its own terms, rather than a substitute for real travel [145]. Slater and Sanchez-Vives [145] also highlight VR's potential as a means of bringing people together, thus allowing them to

share virtual environments or collaborate remotely. Moreover, the authors describe that, in its ideal form, VR constitutes an improvement over traditional videoconferencing, since VR is superior in its ability to display spatial relationships, which are important for the correct representation of features such as eye contact. However, Slater and Sanchez-Vives [145] stress that such a system is not currently feasible, since it would require real-time full facial capture, eye tracking, real-time rendering of subtle emotional changes such as blushing and sweating, subtle facial muscle movements such as almost imperceptible eyebrow raising, the possibility of physical contact such as the ability to shake hands, embrace, or even push, and so on. Nevertheless, research on shared experiences has been ongoing for close to two decades, and as a result there exists a large body of work detailing the factors that positively influence the sensation of being together virtually.

7.6 CULTURAL HERITAGE

The fact that VR can elicit compelling place illusions (i.e., the sensation of "being there" in the virtual environment) also means that it has obvious applications in regard to cultural heritage. While the ideal way of preserving cultural heritage is through physical protection and restoration, a great deal of work has also sought to digitally capture and present cultural artifacts [145]. Slater and Sanchez-Vives [145] outline four ways in which VR can be used for the preservation of cultural heritage:
1. Virtual travel and tourism will allow users to visit distant sites of cultural importance.
2. VR will allow future generations to visit digital versions of historical sites that could not be preserved physically.
3. Users will be able to visit alternate versions of historical sites that appear as they did at the time of their creation.
4. VR enables us to provide users with a glimpse of how historical and current locations will look in the future (e.g., after exposure to regular weathering or under different climate change scenarios).

7.7 ART

VR has also shown potential as a medium for artistic expression. An application that has been widely employed by professional and amateur artists is Tilt Brush by Google (https://www.tiltbrush.com/), which enables users to paint in three-dimensional space. VR has also enabled novel experiences of music, for example through interactive and immersive music videos. As an example, the Bohemian Rhapsody Experience is an interactive VR app created in a collaboration between Google and the VR startup Enosis VR. By combining motion capture, 3D animation and spatialized sound, the experience allows users to be immersed in Queen's song. In 2017, the Venice Biennale also had a focus on VR art, through a collaboration between the Faurschou Foundation and the company Khora, which resulted in a new company called Khora Contemporary. Major commissions by German artist Christian Lemmerz and LA-based Paul McCarthy marked Khora Contemporary's launch.



CHAPTER 8

CONCLUSIONS


1

DESIGN SOUND, VISUAL, TOUCH AND PROPRIOCEPTION IN TANDEM, AND CONSIDER THE MAPPINGS

For virtual reality experiences, it is not only relevant to design sound and visuals in tandem and consider the mapping; touch and motion should also enter the equation, as well as full body interaction. In terms of auditory feedback, virtual reality experiences have been delivered through loudspeaker or headphone-based systems. For virtual reality, 3D sound is an especially relevant issue [8], since the location and motion of visual virtual objects need to match the location and motion of the corresponding auditory objects.
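As a minimal illustration of keeping auditory and visual objects in register, the sketch below derives an azimuth and a distance-based gain for a sound source from the same tracked pose that drives the visuals. It is not tied to any particular audio engine (a real system would typically use HRTF-based rendering); the poses, the yaw convention, and the attenuation curve are illustrative assumptions.

```python
import math

def spatialize(source_pos, listener_pos, listener_yaw_deg):
    """Return (azimuth in degrees, gain) for a sound source relative to the listener.

    The key point: the audio position is derived from the same per-frame pose
    that places the visual object, so the two modalities cannot drift apart.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dz * dz) or 1e-6
    # Angle of the source relative to the direction the listener is facing.
    azimuth = math.degrees(math.atan2(dx, dz)) - listener_yaw_deg
    # Simple inverse-distance attenuation, clamped so nearby sources do not clip.
    gain = min(1.0, 1.0 / distance)
    return azimuth, gain

# Per-frame update: the visual object and the audio source share one position.
head_pos, head_yaw = (0.0, 1.7, 0.0), 15.0   # tracked head pose (illustrative)
object_pos = (1.0, 1.2, 2.0)                  # position of the visible object
print(spatialize(object_pos, head_pos, head_yaw))
```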

Haptic feedback can be delivered in mid-air, as in the case of [149], or using a tangible interface. Most hand-based input devices for virtual reality still take the form of gloves, remote controllers or joysticks. It might be beneficial for virtual reality researchers to consider tangible interfaces in order to extend the possibilities provided by traditional virtual reality input devices.

Lately, the Leap Motion controller has become a popular option in the consumer virtual reality community, mainly for its ability to track and represent both hands in the 3D world, without the need for the user to wear any additional interface such as gloves or joysticks.
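To illustrate how tracked hands can stand in for held controllers, here is a minimal sketch that derives a pinch "click" from the distance between the thumb and index fingertips. It is not based on any particular tracking SDK; the fingertip coordinates and the 2 cm threshold are illustrative assumptions that would need tuning for a real tracker.

```python
import math

PINCH_THRESHOLD = 0.02  # metres; illustrative value, tune for the tracker in use

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD):
    """True when the thumb and index fingertips are close enough to count as a pinch."""
    return distance(thumb_tip, index_tip) < threshold

# One frame of (hypothetical) tracked fingertip positions, in metres.
thumb = (0.10, 1.20, 0.30)
index = (0.11, 1.21, 0.30)
print(is_pinching(thumb, index))  # True: the fingertips are roughly 1.4 cm apart
```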

2

REDUCE LATENCY

All interactions should be smooth and exhibit minimal latency. Since VR experiences are inherently multisensory, reduction of latency with respect to both sound and visuals is paramount. Particularly, synchronicity between the arrival of stimuli in different modalities is known to influence the perceptual binding happening in response to an event producing stimulation of more than one modality [66].

Thus, it is crucial that VR experiences present timely and synchronized audiovisual feedback in response to the user's actions. More generally, it is crucial to reduce system latency, since high latency is believed to increase cybersickness [69].
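The sketch below illustrates one way to reason about audiovisual synchrony: given estimates of the display and audio output latencies, audio playback for an event is scheduled so that sound and image reach the user at roughly the same time. The latency figures and the 30 ms tolerance are illustrative assumptions, not measured values; on real hardware these numbers must be measured for the specific display and audio path.

```python
import time

DISPLAY_LATENCY = 0.011      # assumed motion-to-photon latency in seconds
AUDIO_OUT_LATENCY = 0.020    # assumed audio output latency in seconds
SYNC_TOLERANCE = 0.030       # illustrative audiovisual offset budget in seconds

def schedule_audio_for_event(event_time):
    """Return (audio start time, visual arrival time) so that the sound and the
    rendered frame for the same event reach the user approximately together."""
    visual_arrival = event_time + DISPLAY_LATENCY
    audio_start = visual_arrival - AUDIO_OUT_LATENCY
    return max(event_time, audio_start), visual_arrival

now = time.monotonic()
audio_start, visual_arrival = schedule_audio_for_event(now)
offset = abs((audio_start + AUDIO_OUT_LATENCY) - visual_arrival)
if offset > SYNC_TOLERANCE:
    print("warning: audiovisual offset of %.0f ms exceeds the budget" % (offset * 1e3))
```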

3

PREVENT OR LIMIT CYBERSICKNESS OR VR SICKNESS

Cybersickness may involve a range of different symptoms including, but not limited to, disorientation, headaches, sweating, eye strain and nausea [25]. While alternative explanations of the causes of cybersickness exist [117, 13], the most popular is the so-called sensory conflict theory. This theory stipulates that cybersickness arises as a consequence of conflicting information from the visual and vestibular senses; e.g., a user driving a virtual vehicle will be physically stationary while the visual stimuli suggest that movement is occurring.

System factors believed to influence cybersickness include latency, display flicker, calibration, and ergonomics [25]. Aside from optimizing factors such as efficient tracking and high update and frame rates, developers should be mindful of how the user's movement around the virtual environment is made possible. Particularly, it is advisable to employ a one-to-one mapping between virtual and real translations and rotations, and if the user has to move virtually while being physically stationary, accelerations and decelerations should be minimized, as the vestibular system is sensitive to such motion.
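As a minimal sketch of the last recommendation (not taken from the cited work), the code below keeps tracked head motion mapped one-to-one while any artificial travel changes velocity only within a small acceleration limit. Positions are simplified to a single axis, and the 1 m/s^2 limit is an illustrative value to be tuned per application.

```python
MAX_ACCEL = 1.0  # m/s^2; illustrative limit on artificial (non-tracked) motion

def step_velocity(current_vel, target_vel, dt, max_accel=MAX_ACCEL):
    """Move the artificial-travel velocity toward the target without exceeding max_accel."""
    max_dv = max_accel * dt
    dv = max(-max_dv, min(max_dv, target_vel - current_vel))
    return current_vel + dv

def update_viewpoint(tracked_head_x, travel_offset, vel, target_vel, dt):
    """Tracked motion is applied 1:1; only the artificial travel is acceleration-limited."""
    vel = step_velocity(vel, target_vel, dt)
    travel_offset += vel * dt
    return tracked_head_x + travel_offset, travel_offset, vel

# Example: ramping up towards 2 m/s of artificial travel over a few 90 Hz frames.
x, offset, vel = 0.0, 0.0, 0.0
for _ in range(5):
    x, offset, vel = update_viewpoint(tracked_head_x=0.0, travel_offset=offset,
                                      vel=vel, target_vel=2.0, dt=1 / 90)
print(round(vel, 3))  # the velocity builds up gradually instead of jumping to 2.0
```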

4

DO NOT COPY BUT LEVERAGE EXPERT TECHNIQUES

This principle is related to the notion that it might be interesting not merely to copy reality, but to be inspired by it and create novel possibilities. VR has the potential to become truly interesting when it provides immersive experiences that are not possible in the real world.

The role of virtual reality, e.g., immersive visualisation hardware and software, is to enhance the overall multimodal experience. Virtual reality is a different medium compared to our physical world. Simply replicating real-world interfaces in virtual reality may not bring about useful results. We should discover the kinds of interfaces that are best suited for the VR medium. At the same time, using metaphors derived from interactions existing in the real world offers interesting possibilities.



5

CONSIDER BOTH NATURAL AND MAGICAL INTERACTIONS

In continuation of the previous principle and in line with Bowman et al.'s [15] general recommendations for 3D user interfaces, developers should consider the use of both natural and magical interactions.

Using virtual reality musical instruments (VRMIs) as an example, an interaction or instrument qualifies as magical if it is not limited by real-world constraints, such as those imposed by the laws of physics, biological evolution, or the current state of technological development. Conversely, interactions and instruments qualify as natural if they conform to real-world constraints.

Notably, natural interactions can be combined with magical instruments and vice versa (e.g., the user may manipulate a physically plausible object located at a great distance using a nonisomorphic approach such as the Go-Go interaction technique [111], or the user may use natural gestures to manipulate an experience that defies the laws of physics).

The advantage of natural techniques is that the familiarity of such approaches may increase usability; however, magical techniques allow the user to overcome the real-world limitations normally imposed on the user and the object [15].
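To make the nonisomorphic mapping mentioned above concrete, the sketch below follows the spirit of the Go-Go technique [111]: within a threshold distance the virtual hand tracks the real hand one-to-one, and beyond it the virtual reach grows non-linearly so distant objects can be grasped. The threshold of 0.5 m and the gain of 10 are illustrative values, not the parameters from the original paper.

```python
def gogo_reach(real_dist, threshold=0.5, k=10.0):
    """Map the real hand-to-body distance (metres) to a virtual reach.

    Below `threshold` the mapping is 1:1 (natural); beyond it a quadratic term
    extends the virtual arm (magical), in the spirit of the Go-Go technique.
    """
    if real_dist < threshold:
        return real_dist
    return real_dist + k * (real_dist - threshold) ** 2

for d in (0.3, 0.5, 0.7, 0.9):
    print(f"real reach {d:.1f} m -> virtual reach {gogo_reach(d):.2f} m")
```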

6

CONSIDER THE ERGONOMICS OF THE DISPLAY

All the devices necessary to create virtual reality experiences introduce the additional challenge of how to hide the technology so that the user can focus on the experience. The recently introduced head-mounted displays are also improving from an ergonomic perspective.

However, while wearing HMDs such as the HTC Vive and Oculus Rift, the user remains tethered and the weight of the two displays is noticeable. Thus, when creating VR, developers should be mindful of the potential strain and discomfort introduced by wearing the HMD and the issues currently introduced by wires (e.g., a 360 degree turn will leave the user entangled).

7

CONSIDER THE BODY OF THE PLAYER

One of the challenges of virtual reality is the fact that a person cannot see her own body represented in the virtual world unless the real body is tracked and mapped to a virtual representation. The resulting sensation is called virtual body ownership, and several studies have shown that it is possible to generate perceptual illusions of ownership in virtual reality over a virtual body seen from a first-person perspective. In other words, the user may feel ownership over a body that visually substitutes her real body [40]. For a more elaborate discussion of virtual body ownership see Chapter 6.

A study presented in [64] shows how differences between the real and virtual body have temporary consequences for participants' attitudes and behaviour under the illusory experience of body ownership. Particularly, differences in body movement patterns while drumming were found depending on whether the perceived body representation fulfilled expectations of what appearance was and was not appropriate for the context. This study has interesting implications since it shows that virtual reality can help to train and adapt movements and gestures of musicians.

The presence of a virtual body representation is also important to provide the necessary visual feedback to the user. Notably, a study by Argelaguet et al. [3] explored the effects of varying degrees of visual hand realism and found that abstract and iconic representations of the hand produced a greater sense of agency than the realistic representation. One potential explanation is that the realistic representation made the participants expect more natural interaction than the one offered by the Leap Motion used for the study.

Thus, while a virtual representation of the user is important, it may also be important that the appearance of this representation matches the fidelity of the possible interaction.



8

CREATE A SENSE OF PRESENCE

The experience of "being there" in a computer-generated environment is often referred to as the sensation of presence or the place illusion [139]. The illusion is believed to be a response to the degree of technological immersion offered by the system. The degree of technological immersion offered by a given system can be characterized by the range of normal sensorimotor contingencies (SCs) supported by the system.

Particularly, Slater [139] describes SCs as the set of actions a user will know how to perform in order to perceive (e.g., moving one's head and eyes to change gaze direction, kneeling in order to get a closer look at something on the ground, or turning one's head in order to localize the position of an invisible sound source).

Thus, it is advisable for developers to construct virtual experiences with the limitations of the system in mind, so as to avoid encouraging the user to rely on SCs that are not fully supported by the system. Besides depending on the range of normal SCs, presence is also believed to be influenced by the illusion of body ownership. That is, seeing a body that one feels ownership over inside the virtual environment contributes to the sensation that one is inside said environment. For a more detailed description of presence see Chapter 6.

9

MAKE THE EXPERIENCE SOCIAL

One of the fundamental aspects of life is shared social experiences. Virtual reality, on the other hand, has mostly been an individual activity in which one person is immersed in a virtual world. This is largely due to the occlusive properties of head-mounted displays, which block any visual communication with the outside world. Recent developments in virtual reality technologies, however, point towards the desire to create shared virtual reality experiences.

It is interesting to use other channels when the visual modality is blocked; in this case the auditory and tactile channels offer interesting possibilities that have not yet been explored. This can create novel social experiences in virtual reality.

An alternative to facilitating social interaction between individuals in the same physical space is for the users to share experiences virtually and potentially experience so-called social presence or co-presence [104]. Previous work has documented that individuals asked to present in front of an audience of virtual characters tend to respond in a manner similar to how they would if they found themselves in a similar real-world situation [144].

Thus, it seems possible that virtual reality can provide people with a surrogate for actually being on stage while interacting in VR. This could be useful for training people to play or speak in public.



BIBLIOGRAPHY



[1] T Ananthapadmanaban and V Radhakrishnan. An investigation of the role of surface irregularities in the noise spectrum of rolling and sliding contacts. Wear, 83(2):399-409, 1982.

[15] Doug Bowman, Ernst Kruijff, Joseph J LaViola Jr, and Ivan Poupyrev. 3D User Interfaces: Theory and Practice, CourseSmart eTextbook. Addison-Wesley, 2004.

[2] Thea Andersen, Gintare Anisimovaite, Anders Christiansen, Mohamed Hussein, Carol Lund, Thomas Nielsen, Eoin Rafferty, Niels C Nilsson, Rolf Nordahl, and Stefania Serafin. A preliminary study of users' experiences of meditation in virtual reality. In Virtual Reality (VR), 2017 IEEE, pages 343-344. IEEE, 2017.

[16] Doug A Bowman and Ryan P McMahan. Virtual reality: how much immersion is enough? Computer, 40(7):36-43, 2007.

[3] Ferran Argelaguet, Ludovic Hoyet, Michael Trico, and Anatole Lécuyer. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In 2016 IEEE Virtual Reality (VR). IEEE, 2016.
[4] Domna Banakou, Raphaela Groten, and Mel Slater. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proceedings of the National Academy of Sciences, 110(31):12846-12851, 2013.
[5] Tom Banton, Jeanine Stefanucci, Frank Durgin, Adam Fass, and Dennis Proffitt. The perception of walking speed in a virtual environment. Presence: Teleoperators & Virtual Environments, 14(4):394-406, 2005.
[6] Michel Beaudouin-Lafon. Designing interaction, not interfaces. In Proc. Conf. Advanced Visual Interfaces, pages 15-22, Gallipoli (LE), Italy, May 2004.
[7] Durand R Begault. Challenges to the successful implementation of 3-d sound. Journal of the Audio Engineering Society, 39(11):864-870, 1991.
[8] Durand R Begault et al. 3-D sound for virtual reality and multimedia, volume 955. Citeseer, 1994.
[9] Frank Biocca. Can we resolve the book, the physical reality, and the dream state problems? from the two-pole to a three-pole model of shifts in presence. In EU Future and Emerging Technologies, Presence Initiative Meeting. Citeseer, 2003.
[10] B. Blesser. An interdisciplinary synthesis of reverberation viewpoints. J. of the Audio Engineering Society, 49(10), 2001.
[11] Kristopher J Blom and Steffi Beckhaus. The design space of dynamic interactive virtual environments. Virtual Reality, 18(2):101-116, September 2013.
[12] Matthew Botvinick and Jonathan Cohen. Rubber hands 'feel' touch that eyes see. Nature, 391(6669):756, 1998.
[13] Stéphane Bouchard, Geneviève Robillard, Patrice Renaud, and François Bernier. Exploring new dimensions in the assessment of virtual reality induced side effects. Journal of computer and information technology, 1(3):20-32, 2011.

[17] Th Brandt, Jo Dichgans, and E Koenig. Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Experimental Brain Research, 16(5):476-491, 1973.
[18] Frederick P Brooks. What's real about virtual reality? IEEE Computer Graphics and Applications, 19(6):16-27, 1999.
[19] Jon Ram Bruun-Pedersen, Stefania Serafin, and Lise Busk Kofoed. Going outside while staying inside? exercise motivation with immersive vs. non-immersive recreational virtual environment augmentation for older adult nursing home residents. In Healthcare Informatics (ICHI), 2016 IEEE International Conference on, pages 216-226. IEEE, 2016.
[20] P. Chueng. Designing sound canvas: The role of expectation and discrimination. In CHI'02 extended abstracts on Human factors in computing systems, pages 848-849. ACM, 2002.
[21] P. Chueng and P. Marsden. Designing Auditory Spaces to Support Sense of Place: The Role of Expectation. In CSCW Workshop: The Role of Place in Shaping Virtual Community. Citeseer, 2002.
[22] P. Cook. Modeling Bill's Gait: Analysis and Parametric Synthesis of Walking Sounds. Proceedings of the AES 22nd International Conference on Virtual, Synthetic, and Entertainment Audio, pages 73-78, 2002.
[23] P.R. Cook. Physically Informed Sonic Modeling (PhISM): Synthesis of Percussive Sounds. Computer Music Journal, 21(3):38-49, 1997.
[24] P.R. Cook. Real sound synthesis for interactive applications. AK Peters, Ltd., 2002.
[25] Simon Davis, Keith Nesbitt, and Eugene Nalivaiko. A systematic review of cybersickness. In Proceedings of the 2014 Conference on Interactive Entertainment, pages 1-9. ACM, 2014.
[26] Nonny de la Peña, Peggy Weil, Joan Llobera, Elias Giannopoulos, Ausias Pomes, Bernhard Spanlang, Doron Friedman, Maria V Sanchez-Vives, and Mel Slater. Immersive journalism: immersive virtual reality for the first-person experience of news. Presence: Teleoperators and Virtual Environments, 19(4):291-301, 2010.
[27] S. Delle Monache, P. Polotti, and D. Rocchesso. A toolkit for explorations in sonic interaction design. In Proc. of the 5th Audio Mostly Conf., pages 1-7. ACM, 2010.

[14] L. Bouguila, F. Evequoz, M. Courant, and B. Hirsbrunner. Walking-pad: a step-in-place locomotion interface for virtual environments. In Proceedings of the 6th international conference on Multimodal interfaces, pages 77-81. ACM, 2004.


[28] Derek DiFilippo and Dinesh K Pai. The ahi: An audio and haptic interface for contact interactions. In Proceedings of the 13th annual ACM symposium on User interface software and technology, pages 149-158. ACM, 2000.
[29] Paul Dourish. Where the action is: the foundations of embodied interaction. MIT press, 2004.
[30] Frank H Durgin, Catherine Reed, and Cara Tigue. Step frequency and perceived self-motion. ACM Transactions on Applied Perception (TAP), 4(1):5, 2007.
[31] Adam J Ecker and Laurie M Heller. Auditory-visual interactions in the perception of a ball's path. Perception, 34(1):59-75, 2005.
[32] Cumhur Erkut, Antti Jylhä, and Reha Discioglu. A Structured Design and Evaluation Model with Application to Rhythmic Interaction Displays. In Proceedings of the 2011 conference on New interfaces for musical expression, pages 477-480, Oslo, Norway, 2011.
[33] Marc O Ernst and Heinrich H Bülthoff. Merging the senses into a robust percept. Trends in cognitive sciences, 8(4):162-169, 2004.

[42] W.W. Gaver. What in the world do we hear?: An ecological approach to auditory event perception. Ecological Psychology, 5(1):1-29, 1993. [43] Frank A Geldard and Carl E Sherrick. The cutaneous" rabbit": A perceptual illusion. Science, 178(4057):178-179, 1972. [44] Marco Gillies, Rebecca Fiebrink, Atau Tanaka, Jeremie Garcia, Frederic Bevilacqua, Alexis Heloir, Fabrizio Nunnari, Wendy Mackay, Saleema Amershi, Bongshin Lee, Nicolas d’Alessandro, Joelle Tilmanne, Todd Kulesza, and Baptiste Caramiaux. Human-Centred Machine Learning. In Proc. CHI EA. ACM, May 2016. [45] B. L. Giordano, S. McAdams, Y. Visell, J. Cooperstock, H.-Y. Yao, and V. Hayward. Non-visual identification of walking grounds. Journal of the Acoustical Society of America, 123(5):3412, 2008. [46] E Goldstein and James Brockmole. Sensation and perception. Nelson Education, 2016. [47] E.B. Goldstein. Sensation and perception. Wadsworth Pub Co, 2009.

[34] A. J. Farnell. Marching onwards procedural synthetic footsteps for video games and animation. In pd Convention, 2007.

[48] Steve Guest, Caroline Catmur, Donna Lloyd, and Charles Spence. Audiotactile interactions in roughness perception. Experimental Brain Research, 146(2):161-171, 2002.

[35] J. Feasel, M.C. Whitton, and J.D. Wendt. Llcm-wip: Low-latency, continuous-motion walking-in-place. In Proceedings of the 2008 IEEE Symposium on 3D User Interfaces, pages 97-104. IEEE, 2008.

[49] Hunter G Hoffman, David R Patterson, and Gretchen J Carrougher. Use of virtual reality for adjunctive treatment of adult burn pain during physical therapy: a controlled study. The Clinical journal of pain, 16(3):244-250, 2000.

[36] Ajoy S Fernandes and Steven K Feiner. Combating vr sickness through subtle dynamic field-of-view modification. In 2016 IEEE Symposium on 3D User Interfaces (3DUI), pages 201-210. IEEE, 2016.

[50] Hunter G Hoffman, Todd Richards, Barbara Coda, Anne Richards, and Sam R Sharar. The illusion of presence in immersive virtual reality during an fmri brain scan. CyberPsychology & Behavior, 6(2):127-131, 2003.

[37] F. Fontana and R. Bresin. Physics-based sound synthesis and control: crushing, walking and running by crumpling sounds. Proc. Colloquium on Musical Informatics, pages 109-114, 2003.

[51] J. Hollerbach. Locomotion interfaces and rendering. In M. Lin and M. Otaduy, editors, Haptic Rendering: Foundations, Algorithms and Applications. A K Peters, Ltd, 2008.

[38] J. Freeman and J. Lessiter. Hear there & everywhere: the effects of multi-channel audio on presence. Proceedings of ICAD 2001, pages 231-234, 2001.

[52] Wijnand A IJsselsteijn. Presence: concept, determinants, and measurement. Proceedings of SPIE, 31(0):520-529, 2000.

[39] T. Funkhouser, N. Tsingos, and J.M. Jot. Survey of methods for modeling sound propagation in interactive virtual environment systems. Presence, 2003.
[40] Shaun Gallagher. Philosophical conceptions of the self: implications for cognitive science. Trends in cognitive sciences, 4(1):14-21, 2000.
[41] W. W. Gaver. What in the world do we hear?: An ecological approach to auditory event perception. Ecological Psychology, 5(1):1-29, 1993.

[53] H. Iwata. Haptic interface. In A. Sears and J. A. Jacko, editors, The Human-Computer Interaction Handbook. Lawrence Erlbaum Assoc., New York, 2nd edition, 2008.
[54] Hiroo Iwata. Haptic interface. Human-Computer Interaction, page 205, 2009.
[55] Charles E Jack and Willard R Thurlow. Effects of degree of visual association and angle of displacement on the "ventriloquism" effect. Perceptual and motor skills, 1973.
[56] J Jankowski and M Hachet. Advances in interaction with 3D environments. Computer Graphics Forum, 34(1):152-190, 2015.


[57] Jason Jerald. The VR Book: Human-Centered Design for Virtual Reality. Association for Computing Machinery and Morgan & Claypool Publishers, October 2016.
[58] Veikko Jousmäki and Riitta Hari. Parchment-skin illusion: sound-biased touch. Current Biology, 8(6):R190-R191, 1998.
[59] Antti Jylhä, I Ekman, Cumhur Erkut, and Koray Tahiroglu. Design and Evaluation of Rhythmic Interaction with an Interactive Tutoring System. Computer Music Journal, 35(2):36-48, July 2011.
[60] Yukiyasu Kamitani and Shinsuke Shimojo. Sound-induced visual rabbit. Journal of Vision, 1(3):478-478, 2001.
[61] Laura Kassler, Jeff Feasel, Michael D Lewek, Frederick P Brooks Jr, and Mary C Whitton. Matching actual treadmill walking speed and visually perceived walking speed in a projection virtual environment. In Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization. ACM, 2010.
[62] Daniel Kersten, Pascal Mamassian, and David C Knill. Moving cast shadows induce apparent motion in depth. Perception, 26(2):171-192, 1997.
[63] K Kilteni, Ilias Bergstrom, and Mel Slater. Drumming in immersive virtual reality: The body shapes the way we play. In Virtual Reality (VR), 2013 IEEE. IEEE, 2013.
[64] Konstantina Kilteni, Ilias Bergstrom, and Mel Slater. Drumming in immersive virtual reality: the body shapes the way we play. Visualization and Computer Graphics, IEEE Transactions on, 19(4):597-605, 2013.
[65] Norimichi Kitagawa, Massimiliano Zampini, and Charles Spence. Audiotactile interactions in near and far space. Experimental Brain Research, 166(3-4):528-537, 2005.
[66] Armin Kohlrausch and Steven van de Par. Auditory-visual interaction: from fundamental research in cognitive psychology to (possible) applications. In Electronic Imaging '99, pages 34-44. International Society for Optics and Photonics, 1999.
[67] Michael Kubovy and David Van Valkenburg. Auditory and visual objects. Cognition, 80(1):97-126, 2001.
[68] P. Larsson, D. Västfjäll, and M. Kleiner. Perception of self-motion and presence in auditory virtual environments. In Proceedings of the seventh annual workshop on presence, pages 252-258, 2004.
[69] Joseph J LaViola Jr. A discussion of cybersickness in virtual environments. ACM SIGCHI Bulletin, 32(1):47-56, 2000.
[70] A.W. Law, B.V. Peck, Y. Visell, P.G. Kry, and J.R. Cooperstock. A Multi-modal Floor-space for Experiencing Material Deformation Underfoot in Virtual Reality. In IEEE International Workshop on Haptic Audio Visual Environments and Games, 2008. HAVE 2008, pages 126-131, 2008.

[71] Anatole Lécuyer, Jean-Marie Burkhardt, and Laurent Etienne. Feeling bumps and holes without a haptic interface: the perception of pseudo-haptic textures. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 239-246. ACM, 2004.
[72] Susan J Lederman and Roberta L Klatzky. Designing haptic and multimodal interfaces: a cognitive scientist's perspective. In Proc. of the Workshop on Advances in Interactive Multimodal Telepresence Systems, pages 71-80, 2001.
[73] Matthew Lombard and Theresa Ditton. At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2), 1997.
[74] K. Lowther and C. Ware. Vection with large screen 3d imagery. In Conference Companion on Human Factors in Computing Systems, pages 233-234. ACM, 1996.
[75] Gale M Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37:94-100, August 2014.
[76] Jean-Luc Lugrin, Johanna Latt, and Marc Erich Latoschik. Avatar anthropomorphism and illusion of body ownership in vr. In 2015 IEEE Virtual Reality (VR), pages 229-230. IEEE, 2015.
[77] Göran Lundborg, Birgitta Rosen, and Styrbjörn Lindberg. Hearing as substitution for sensation: a new principle for artificial sensibility. The Journal of Hand Surgery, 24(2):219-224, 1999.
[78] Kim Halskov Madsen. A guide to metaphorical design. Communications of the ACM, 37(12):57-62, December 1994.
[79] Lara Maister, Natalie Sebanz, Günther Knoblich, and Manos Tsakiris. Experiencing ownership over a dark-skinned body reduces implicit racial bias. Cognition, 128(2):170-178, 2013.
[80] Teemu Mäki-Patola and Perttu Hämäläinen. Effect of latency on playing accuracy of two gesture controlled continuous sound instruments without tactile feedback. In Proc. Intl. Conf. Digital Audio Effects (DAFx), pages 11-16, Naples, Italy, 2004.
[81] Antonella Maselli and Mel Slater. The building blocks of the full body ownership illusion. Frontiers in Human Neuroscience, 7:83, 2013.
[82] Thomas H Massie and J Kenneth Salisbury. The phantom haptic interface: A device for probing virtual objects. In Proceedings of the ASME winter annual meeting, symposium on haptic interfaces for virtual environment and teleoperator systems, volume 55, pages 295-300. Chicago, IL, 1994.
[83] Harry McGurk and John MacDonald. Hearing lips and seeing voices. Nature, 264:746-748, 1976.



[84] Michael Meehan. Physiological reaction as an objective measure of presence in virtual environments. PhD thesis, University of North Carolina at Chapel Hill, 2001.
[85] Marvin Minsky. Telepresence. Omni, pages 45-51, 1980.
[86] Lasse T Nielsen, Matias B Møller, Sune D Hartmeyer, Troels Ljung, Niels C Nilsson, Rolf Nordahl, and Stefania Serafin. Missing the point: an exploration of how to guide users' attention during cinematic virtual reality. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, pages 229-232. ACM, 2016.
[87] Mikkel Nielsen, Christian Toft, Niels C Nilsson, Rolf Nordahl, and Stefania Serafin. Evaluating two alternative walking in place interfaces for virtual reality gaming. In Virtual Reality (VR), 2016 IEEE, pages 299-300. IEEE, 2016.
[88] N.C. Nilsson, R. Nordahl, E. Sikström, L. Turchet, and S. Serafin. Haptically induced illusory self-motion and the influence of context of motion. In Proceedings of the 2012 international conference on Haptics: perception, devices, mobility, and communication - Volume Part I, pages 349-360. Springer-Verlag, 2012.
[89] N.C. Nilsson, S. Serafin, M. H. Laursen, K. S. Pedersen, E. Sikström, and R. Nordahl. Tapping-in-place: Increasing the naturalness of immersive walking-in-place locomotion through novel gestural input. In Proceedings of the 2013 IEEE Symposium on 3D User Interfaces. IEEE, 2013.
[90] Niels C. Nilsson. Walking without moving. 2015.
[91] Niels Christian Nilsson, Stefania Serafin, and Rolf Nordahl. Gameplay as a source of intrinsic motivation for individuals in need of ankle training or rehabilitation. Presence: Teleoperators and Virtual Environments, 21(1):69-84, 2012.
[92] Niels Christian Nilsson, Stefania Serafin, and Rolf Nordahl. Establishing the range of perceptually natural visual walking speeds for virtual walking-in-place locomotion. Visualization and Computer Graphics, IEEE Transactions on, 20(4):569-578, 2014.
[93] Niels Christian Nilsson, Stefania Serafin, and Rolf Nordahl. The influence of step frequency on the range of perceptually natural visual walking speeds during walking-in-place and treadmill locomotion. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, pages 187-190. ACM, 2014.
[94] Niels Christian Nilsson, Stefania Serafin, and Rolf Nordahl. The effect of head mounted display weight and locomotion method on the perceived naturalness of virtual walking speeds. In 2015 IEEE Virtual Reality Conference (VR), 2015.
[95] Niels Christian Nilsson, Stefania Serafin, and Rolf Nordahl. The effect of visual display properties and gain presentation mode on the perceived naturalness of virtual walking speeds. In 2015 IEEE Virtual Reality (VR), pages 81-88. IEEE, 2015.

[96] Niels Christian Nilsson, Stefania Serafin and Rolf Nordahl. The perceived naturalness of virtual walking speeds during wip locomotion: Summary and meta-analyses. PsychNology Journal, 14(1):7-39, 2016. [97] K Niwa, Y Koizumi, K Kobayashi, and H Uematsu. Binaural sound generation corresponding to omnidirectional video view using angular region-wise source enhancement. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2852-2856. IEEE, 2016. [98] R. Nordahl. Increasing the motion of users in photorealistic virtual environments by utilizing auditory rendering of the environment and ego-motion. Proceedings of Presence, pages 57-62, 2006. [99] R. Nordahl. Increasing the motion of users in photorealistic virtual environments by utilizing auditory rendering of the environment and ego-motion. Proc. of Presence, pages 57-62, 2006. [100] Rolf Nordahl, Amir Berrezag, Smilen Dimitrov, Luca Turchet, Vincent Hayward, and Stefania Serafin. Preliminary experiment combining virtual reality haptic shoes and audio synthesis. In International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, pages 123-129. Springer, 2010. [101] Rolf Nordahl, Niels C. Nilsson, Luca Turchet, and Stefania Serafin. Vertical illusory self-motion through haptic stimulation of the feet. In Proceedings of the 2012 IEEE VR Workshop on Perceptual Illusions in Virtual Environments, 2012. [102] Donald A Norman. The design of everyday things: Revised and expanded edition. Basic books, 2013. [103] Jean-Marie Normand, Elias Giannopoulos, Bernhard Spanlang, and Mel Slater. Multisensory stimulation can induce an illusion of larger belly size in immersive virtual reality. PloS one, 6(1):e16128, 2011. [104] Kristine L Nowak and Frank Biocca. The effect of the Agency and Anthropomorphism on Users' sense of Telepresence, copresence, and social presence in virtual environments. Presence, 12(5):481-494, 2003. [105] Kristine L Nowak and Frank Biocca. The Effect of the Agency and Anthropomorphism on Users' Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence: Teleoperators and Virtual Environments, 12(5): 481-494, October 2003. [106] Marianna Obrist, Carlos Velasco, Chi Thanh Vi, Nimesha Ranasinghe, Ali Israr, Adrian D Cheok, Charles Spence, and Ponnampalam Gopalakrishnakone. Touch, taste, & smell user interfaces: The future of multisensory hci. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pages 3285-3292. ACM, 2016. [107] Dinesh K Pai. Multisensory interaction: Real and virtual. In Robotics Research. The Eleventh International Symposium, pages 489-498. Springer, 2005. 43


[108] S. Palmisano and A.Y.C. Chan. Jitter and size effects on vection are immune to experimental instructions and demands. Perception, 33:987-1000, 2004.

[120] BE Riecke, D. Vastfjall, P. Larsson, and J. Schulte-Pelkum. Top-down and multi-modal influences on self-motion perception in virtual reality. In HCI international, 2005.

[109] Randy Pausch, Tommy Burnette, Dan Brockway, and Michael E Weiblen. Navigation and locomotion in virtual worlds via flight into hand-held miniatures. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 399-400. ACM, 1995.

[121] Bernhard E Riecke and Jörg Schulte-Pelkum. Perceptual and cognitive factors for self-motion simulation in virtual environments: how can self-motion illusions ("vection") be utilized? In Human walking in virtual environments, pages 27-54. Springer, 2013.

[110] Tabitha C Peck, Sofia Seinfeld, Salvatore M Aglioti, and Mel Slater. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and cognition, 22(3): 779-787, 2013.

[122] Giuseppe Riva. Virtual reality in psychotherapy: review. Cyberpsychology & behavior, 8(3):220-230, 2005.

[111] Ivan Poupyrev, Mark Billinghurst, Suzanne Weghorst, and Tadao Ichikawa. The go-go interaction technique: Non-linear mapping for direct manipulation in VR. In Proceedings of the 9th annual ACM symposium on User interface software and technology, pages 79-80. ACM, 1996.
[112] W. Powell, S. Stevens, B. Hand, and M. Simmonds. Blurring the boundaries: The perception of visual gain in treadmill-mediated virtual environments. In Proceedings of the 3rd IEEE VR Workshop on Perceptual Illusions in Virtual Environments, pages 4-8. IEEE, 2011.
[113] N. Raghuvanshi and M.C. Lin. Interactive sound synthesis for large scale environments. In Proceedings of the 2006 symposium on Interactive 3D graphics and games, pages 101-108. ACM New York, NY, USA, 2006.
[114] Nimesha Ranasinghe, Kasun Karunanayaka, Adrian David Cheok, Owen Noel Newton Fernando, Hideaki Nii, and Ponnampalam Gopalakrishnakone. Digital taste and smell communication. In Proceedings of the 6th International Conference on Body Area Networks, pages 78-84. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2011.
[115] Sharif Razzaque, Zachariah Kohn, and Mary C Whitton. Redirected walking. In Proceedings of EUROGRAPHICS, volume 9, pages 105-106. Citeseer, 2001.

[123] Giuseppe Riva, John A Waterworth, Eva L Waterworth, and Fabrizia Mantovani. From intention to action: The role of presence. New Ideas in Psychology, 29(1):24-37, 2011. [124] A. F. Rovers and H. van Essen. Guidelines for haptic interpersonal communication appli-cations. Virtual Reality, 9:177-191, 2006. [125] Aitor Rovira, David Swapp, Bernhard Spanlang, and Mel Slater. The use of virtual reality in the study of people's responses to violent incidents. Frontiers in Behavioral Neuroscience, 3:59, 2009. [126] Maria V Sanchez-Vives and Mel Slater. From presence to consciousness through virtual reality. Nature Reviews Neuroscience, 6(4):332-339, 2005. [127] R.D. Sanders and M.A. Scorgie. The Effect of Sound Delivery Methods on a User's Sense of Presence in a Virtual Environment, 2002. [128] R.D. Sanders Jr. The effect of sound delivery methods on a users sense of presence in a virtual environment. PhD thesis, NAVAL POSTGRADUATE SCHOOL, 2002. [129] R. M. Schafer. The tuning of the world. Random House Inc., 1977.

[116] Rebekka S Renner, Boris M Velichkovsky, and Jens R Helmert. The perception of egocentric distances in virtual environments - a review. ACM Computing Surveys (CSUR), 46(2):23, 2013.

[130] Carl Schissler, Aaron Nicholls, and Ravish Mehra. Efficient hrtf-based spatial audio for area and volumetric sources. 2016.

[117] Gary E Riccio and Thomas A Stoffregen. An ecological theory of motion sickness and postural instability. Ecological Psychology, 3(3):195-240, 1991.

[131] Michael Schutz and Scott Lipscomb. Hearing gestures, seeing music: Vision influences perceived tone duration. Perception, 36(6):888-897, 2007.

[118] B.E. Riecke, D. Feuereissen, and J.J. Rieser. Auditory self-motion illusions (circular vection) can be facilitated by vibrations and the potential for actual motion. In Proceedings of the 5th symposium on Applied perception in graphics and visualization, pages 147-154. ACM, 2008.

[132] Allison B Sekuler and Robert Sekuler. Collisions between moving visual targets: what controls alternative ways of seeing an ambiguous display? Perception, 28(4):415-432, 1999.

[119] B.E. Riecke, J. Schulte-Pelkum, M.N. Avraamides, M. von der Heyde, and H.H. Bülthoff. Scene consistency and spatial presence increase the sensation of self-motion in virtual reality. In Proceedings of the 2nd symposium on Applied Perception in Graphics and Visualization, pages 111-118. ACM, 2005.

[133] Robert Sekuler, Allison B Sekuler, and Renee Lau. Sound alters visual motion perception. Nature, 385(6614):308, 1997.
[134] S. Serafin, F. Fontana, L. Turchet, and S. Papetti. Auditory rendering and display of interactive floor cues. In F. Fontana and Y. Visell, editors, Walking with the Senses, chapter 7, pages 123-152. Logos Verlag, Berlin, Germany, March 2012.


[135] S. Serafin, L. Turchet, R. Nordahl, S. Dimitrov, A. Berrezag, and V. Hayward. Identification of virtual grounds using virtual reality haptic shoes and sound synthesis. EuroHaptics 2010, page 61, 2010.

[149] Rajinder Sodhi, Ivan Poupyrev, Matthew Glisson, and Ali Israr. Aireal: interactive tactile experiences in free air. ACM Transactions on Graphics (TOG), 32(4):134, 2013.

[136] Ladan Shams, Yukiyasu Kamitani, and Shinsuke Shimojo. Illusions: What you see is what you hear. Nature, 408(6814):788, 2000.

[150] Charles Spence and Massimiliano Zampini. Auditory contributions to multisensory product perception. Acta Acustica united with Acustica, 92(6):1009-1025, 2006.

[137] Ladan Shams, Yukiyasu Kamitani, and Shinsuke Shimojo. Visual illusion induced by sound. Cognitive Brain Research, 14(1):147-152, 2002.

[151] Mandayam A. Srinivasan and Cagatay Basdogan. Haptics in virtual environments: Taxonomy, research status, and challenges. Computers & Graphics, 21(4):393-404, 1997.

[138] T.B. Sheridan. Musings on telepresence and virtual presence. Presence: Teleoperators and virtual environments, 1(1): 120-126, 1992.

[152] Barry E Stein, Nancy London, Lee K Wilkinson, and Donald D Price. Enhancement of perceived visual intensity by auditory stimuli: a psychophysical analysis. Journal of cognitive neuroscience, 8(6):497-506, 1996.

[139] M. Slater. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535):3549-3557, 2009.

[153] Frank Steinicke, Gerd Bruder, Jason Jerald, Harald Frenz, and Markus Lappe. Estimation of detection thresholds for redirected walking techniques. IEEE Transactions on Visualization and Computer Graphics, 16(1):17-27, 2010.

[140] M. Slater, M. Usoh, and A. Steed. Steps and ladders in virtual reality. In Proceedings of the ACM Conference on Virtual Reality Software and Technology, pages 45-54, 1994.

[154] Frank Steinicke, Yon Visell, Jennifer Campos, and Anatole Lecuyer. Human Walking in Virtual Environments. Springer, 2013.

[141] M. Slater, M. Usoh, and A. Steed. Taking steps: the influence of a walking technique on presence in virtual reality. ACM Transactions on Computer-Human Interaction, 2(3):201-219, 1995.

[155] William Steptoe, Anthony Steed, and Mel Slater. Human tails: ownership and control of extended humanoid avatars. IEEE transactions on visualization and computer graphics, 19(4): 583-590, 2013.

[142] Mel Slater. A note on presence terminology. In Presence connect, volume 3, January 2003.

[156] Angus Stevenson. Oxford dictionary of English. Oxford University Press, USA, 2010.

[143] Mel Slater, Daniel Perez Marcos, Henrik Ehrsson, and Maria V Sanchez-Vives. Towards a digital body: the virtual arm illusion. Frontiers in human neuroscience, 2:6, 2008.

[157] R. L. Storms and M. J. Zyda. Interactions in Perceived Quality of Auditory-Visual Displays. Presence: Teleoperators & Virtual Environments, 9(6):557-580, 2000.

[144] Mel Slater, D-P Pertaub, and Anthony Steed. Public speaking in virtual reality: Facing an audience of avatars. IEEE Computer Graphics and Applications, 19(2):6-9, 1999.

[158] Russell L Storms and Michael J Zyda. Interactions in perceived quality of auditory-visual displays. Presence: Teleoperators and Virtual Environments, 9(6):557-580, 2000.

[145] Mel Slater and Maria V Sanchez-Vives. Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3:74, 2016.

[159] E.A. Suma, S. Clark, D. Krum, S. Finkelstein, M. Bolas, and Z. Warte. Leveraging change blindness for redirection in virtual environments. In Proceedings of the 2011 IEEE Virtual Reality Conference, pages 159-166. IEEE, 2011.

[146] Mel Slater, Bernhard Spanlang, and David Corominas. Simulating virtual environments within virtual environments as the basis for a psychophysics of presence. ACM Transactions on Graphics (TOG), 29(4):92, 2010.

[160] E.A. Suma, B. Lange, A. Rizzo, DM Krum, and M. Bolas. Faast: The flexible action and articulated skeleton toolkit. In Proceedings of the 2011 IEEE Virtual Reality Conference, pages 247-248. IEEE, 2011.

[147] Mel Slater, Bernhard Spanlang, Maria V Sanchez-Vives, and Olaf Blanke. First person experience of body transfer in virtual reality. PloS one, 5(5):e10564, 2010.
[148] Mel Slater, Martin Usoh, and Anthony Steed. Taking steps: The influence of a walking technique on presence in virtual reality. ACM Trans. on Computer-Human Interaction, 2(3):201-219, 1995.

[161] Evan A Suma, Zachary Lipps, Samantha Finkelstein, David M Krum, and Mark Bolas. Impossible spaces: Maximizing natural walking in virtual environments with self-overlapping architecture. IEEE Transactions on Visualization and Computer Graphics, 18(4):555-564, 2012.



[162] Ivan E. Sutherland. The ultimate display. In Proceedings of the IFIP Congress, pages 506-508, 1965.
[163] D. Swapp, J. Williams, and A. Steed. The implementation of a novel walking interface within an immersive display. In 3D User Interfaces (3DUI), 2010 IEEE Symposium on, pages 71-74. IEEE, 2010.
[164] David Swapp, Julian Williams, and Anthony Steed. The implementation of a novel walking interface within an immersive display. In 3D User Interfaces (3DUI), 2010 IEEE Symposium on, pages 71-74. IEEE, 2010.
[165] James Philip Thomas and Maggie Shiffrar. I can see you better if I can hear you coming: Action-consistent sounds facilitate the visual detection of human gait. Journal of Vision, 10(12):14-14, 2010.
[166] L. Turchet, S. Serafin, and R. Nordahl. Physically based sound synthesis and control of footsteps sounds. In Proc. Conf. on Digital Audio Effects (DAFX-10), Graz, Austria, 2010.
[167] M. Usoh, K. Arthur, M.C. Whitton, R. Bastos, A. Steed, M. Slater, and F.P. Brooks Jr. Walking > walking-in-place > flying, in virtual environments. In Proceedings of the 26th annual conference on Computer Graphics and Interactive Techniques, pages 359-364. ACM Press/Addison-Wesley Publishing Co., 1999.
[168] A. Väljamäe. Auditorily-induced illusory self-motion: A review. Brain Research Reviews, 61(2):240-255, 2009.
[169] K. Van Den Doel, P.G. Kry, and D.K. Pai. FoleyAutomatic: physically-based sound effects for interactive simulation and animation. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 537-544. ACM New York, NY, USA, 2001.

[175] Eva L Waterworth and John A Waterworth. Focus, locus, and sensus: The three dimensions of virtual experience. CyberPsychology & Behavior, 4(2):203-213, 2001. [176] Robert B Welch and David H Warren. Immediate perceptual response to intersensory discrepancy. Psychological bulletin, 88(3):638, 1980. [177] J.D. Wendt, M.C. Whitton, and F.P. Brooks. Gud wip: Gait-understanding-driven walking-in-place. In Proceedings of the 2010 IEEE Virtual Reality Conference, pages 51-58. IEEE, 2010. [178] B. Williams, S. Bailey, G. Narasimham, M. Li, and B. Bodenheimer. Evaluation of walking in place on a wii balance board to explore a virtual environment. Proceedings of the ACM Transactions on Applied Perception, 8(3):19, 2011. [179] Werner Wirth, Tilo Hartmann, Saskia Bocking, Peter Vorderer, Christoph Klimmt, Holger Schramm, Timo Saari, Jari Laarni, Niklas Ravaja, Feliz Ribeiro Gouveia, et al. A process model of the formation of spatial presence experiences. Media psychology, 9(3):493-525, 2007. [180] W.G. Wright, P. DiZio, and J.R. Lackner. Perceived self-motion in two visual contexts: dissociable mechanisms underlie perception. Journal of Vestibular Research, 16(1):23-28, 2006. [181] D.J. Zielinski, R.P. McMahan, and R.B. Brady. Shadow walking: An unencumbered locomotion technique for systems with under-floor projection. In Proceedings of the 2011 IEEE Virtual Reality Conference, pages 167-170. IEEE, 2011. [182] Peter Zimmermann. Virtual reality aided design. a survey of the use of VR in automotive industry. Product Engineering, pages 277-296, 2008.

[170] Kees van den Doel and Dinesh K Pai. The sounds of physical shapes. Presence: Teleoperators and Virtual Environments, 7(4):382-395, 1998.
[171] Erik Van der Burg, Christian NL Olivers, Adelbert W Bronkhorst, and Jan Theeuwes. Pip and pop: nonspatial auditory signals improve spatial visual search. Journal of Experimental Psychology: Human Perception and Performance, 34(5):1053, 2008.
[172] D. Västfjäll. The subjective sense of presence, emotion recognition, and experienced emotions in auditory virtual environments. CyberPsychology & Behavior, 6(2):181-188, 2003.
[173] R. Velazquez. On-shoe tactile display. In Proc. of the IEEE Intl. Workshop on Haptic Audio Visual Env. and Their Applications, 2008.
[174] Y. Visell, J. Cooperstock, B. L. Giordano, K. Franinovic, A. Law, S. McAdams, K. Jathal, and F. Fontana. A vibrotactile device for display of virtual ground materials in walking. In Proc. of Eurohaptics 2008, 2008.


layout & production: www.gedde.co

VIRTUAL REALITY AND THE SENSES DANISH SOUND INNOVATION NETWORK TECHNICAL UNIVERSITY OF DENMARK RICHARD PETERSENS PLADS, BUILDING 324 2800 KGS. LYNGBY WWW.DANISHSOUND.ORG

awe

