AR[t] 01 - AR Lab publication

April 2012

AUGMENTED REALITY, ART AND TECHNOLOGY

INTRODUCING ADDED WORLDS Yolande Kolstee

THE TECHNOLOGY BEHIND AUGMENTED REALITY Pieter Jonker

RE-INTRODUCING MOSQUITOS Maarten Lamers

HOW DID WE DO IT Wim van Eck




AR[t] Magazine about Augmented Reality, art and technology

APRIL 2012



COLOPHON

ISSN NUMBER 2213-2481

CONTACT The Augmented Reality Lab (AR Lab) Royal Academy of Art, The Hague (Koninklijke Academie van Beeldende Kunsten)

Prinsessegracht 4 2514 AN The Hague The Netherlands +31 (0)70 3154795 www.arlab.nl info@arlab.nl

EDITORIAL TEAM Yolande Kolstee, Hanna Schraffenberger, Esmé Vahrmeijer (graphic design) and Jouke Verlinden.

CONTRIBUTORS Wim van Eck, Jeroen van Erp, Pieter Jonker, Maarten Lamers, Stephan Lukosch, Ferenc Molnár (photography) and Robert Prevel.

COVER ‘George’, an augmented reality headset designed by Niels Mulder during his Post Graduate Course Industrial Design (KABK), 2008

www.arlab.nl



TABLE OF CONTENTS

07 WELCOME TO AR[t]
08 INTRODUCING ADDED WORLDS | Yolande Kolstee
12 INTERVIEW WITH HELEN PAPAGIANNIS | Hanna Schraffenberger
20 THE TECHNOLOGY BEHIND AR | Pieter Jonker
28 RE-INTRODUCING MOSQUITOS | Maarten Lamers
30 LIEVEN VAN VELTHOVEN — THE RACING STAR | Hanna Schraffenberger
36 HOW DID WE DO IT | Wim van Eck
42 PIXELS WANT TO BE FREED! | Jeroen van Erp
60 ARTIST IN RESIDENCE PORTRAIT: MARINA DE HAAS | Hanna Schraffenberger
66 A MAGICAL LEVERAGE — IN SEARCH OF THE KILLER APPLICATION
70 THE POSITIONING OF VIRTUAL OBJECTS | Robert Prevel
72 MEDIATED REALITY FOR CRIME SCENE INVESTIGATION | Stephan Lukosch
INTRODUCING AUGMENTED REALITY ENABLING HARDWARE TECHNOLOGIES | Jouke Verlinden
DIE WALKÜRE | Wim van Eck, AR Lab Student Project




WELCOME...

to the first issue of AR[t], the magazine about Augmented Reality, art and technology! Starting with this issue, AR[t] is an aspiring magazine series for the emerging AR community inside and outside the Netherlands. The magazine is run by a small and dedicated team of researchers, artists and lecturers of the AR Lab (based at the Royal Academy of Art, The Hague), Delft University of Technology (TU Delft), Leiden University and SMEs. In AR[t], we share our interest in Augmented Reality (AR), discuss its applications in the arts and provide insight into the underlying technology.

At the AR Lab, we aim to understand, develop, refine and improve the amalgamation of the physical world with the virtual. We do this through a project-based approach and with the help of research funding from RAAK-Pro. In the magazine series, we invite writers from the industry, interview artists working with Augmented Reality and discuss the latest technological developments.

It is our belief that AR and its associated technologies are important to the field of new media: media artists experiment with the intersection of the physical and the virtual and probe the limits of our sensory perception in order to create new experiences. Managers of cultural heritage are seeking new possibilities for worldwide access to their collections. Designers, developers, architects and urban planners are looking for new ways to better communicate their designs to clients. Designers of games and theme parks want to create immersive experiences that integrate both the physical and the virtual world. Marketing specialists are working with new interactive forms of communication. For all of them, AR can serve as a powerful tool to realize their visions.

Media artists and designers who want to acquire an interesting position within the domain of new media have to gain knowledge about and experience with AR. This magazine series is intended to provide both theoretical knowledge and a guide towards first practical experiences with AR. Our special focus lies on the diversity of contributions. Consequently, everybody who wants to know more about AR should be able to find something of interest in this magazine, be they art and design students, students from technical backgrounds, engineers, developers, inventors, philosophers or readers who just happened to hear about AR and got curious.

We hope you enjoy the first issue and invite you to check out the website www.arlab.nl to learn more about Augmented Reality in the arts and the work of the AR Lab.

www.arlab.nl



INTRODUCING ADDED WORLDS: AUGMENTED REALITY IS HERE! By Yolande Kolstee

Augmented Reality is a relatively recent computer-based technology that differs from the earlier known concept of Virtual Reality. Virtual Reality is a computer-based reality of which the actual, outer world is not directly a part, whereas Augmented Reality can be characterized by a combination of the real and the virtual. Augmented Reality is part of the broader concept of Mixed Reality: environments that consist of the real and the virtual. To make these differences and relations more clear, industrial engineer Paul Milgram and Fumio Kishino introduced the Mixed Reality Continuum diagram in 1994, in which the real world is placed on one end and the virtual world on the other.

[Diagram: the Virtuality Continuum by Paul Milgram and Fumio Kishino (1994): Real Environment, Augmented Reality (AR), Augmented Virtuality (AV), Virtual Environment; everything between the two extremes is Mixed Reality (MR).]


A SHORT OVERVIEW OF AR

We define Augmented Reality as integrating 3-D virtual objects or scenes into a 3-D environment in real time (cf. Azuma, 1997).

WHERE 3D VIRTUAL OBJECTS OR SCENES COME FROM

What is shown in the virtual world is created first. There are three ways of creating virtual objects:

1. By hand: using 3D computer graphics
Designers create 3D drawings of objects, game developers create 3D drawings of (human) figures, and (urban) architects create 3D drawings of buildings and cities. This 3D modeling by (product) designers, architects, and visual artists is done using specific software. Numerous software programs have been developed. While some software packages can be downloaded for free, others are pretty expensive. Well-known examples are Maya, Cinema 4D, 3ds Max, Blender, SketchUp, Rhinoceros, SolidWorks, Revit, ZBrush and AutoCAD. By now at least 170 different software programs are available.

2. By computer-controlled imaging equipment/3D scanners
We can distinguish different types of three-dimensional scanners – the ones used in the bio-medical world and the ones used for other purposes – although there is some overlap. Inspecting a piece of medieval art or inspecting a living human being is different but somehow also alike. In recent years we have seen a vigorous expansion of the use of image-producing bio-medical equipment. We owe these developments to the work of engineer Sir Godfrey Hounsfield and physicist Allan Cormack, among others, who were jointly awarded the Nobel Prize in 1979 for their pioneering work on X-ray computed tomography (CT). Another couple of Nobel Prize winners are Paul C. Lauterbur and Peter Mansfield, who won the prize in 2003 for their discoveries concerning magnetic resonance imaging (MRI). Although their original goals were different, in the field of Augmented Reality one might use the 3D virtual models that are produced by such systems. However, they have to be processed prior to use in AR because they might be too heavy. A 3D laser scanner is a device that analyses a real-world object or environment to collect data on its shape and its appearance (i.e. colour). The collected data can then be used to construct digital, three-dimensional models. These scanners are sometimes called 3D digitizers. The difference is that the above medical scanners look inside to create a 3D model, while the laser scanners create a virtual image from the reflection of the outside of an object.

3. Photo and/or film images
It is possible to use a (moving) 2D image like a picture as a skin on a virtual 3D model. In this way the 2D model gives a three-dimensional impression.

INTEGRATING 3-D VIRTUAL OBJECTS IN THE REAL WORLD IN REAL TIME

There are different ways of integrating the virtual objects or scenes into the real world. For all three we need a display possibility. This might be a screen or monitor, small screens in AR glasses, or an object onto which the 3D images are projected. We distinguish three types of (visual) Augmented Reality:

Display type I: Screen based
AR on a monitor, for example on a flatscreen or on a smart phone (using e.g. LAYAR). With this technology we see the real world on a computer screen, monitor, smartphone or tablet computer and, added at the same time, the virtual object. In that way, we can, for example, add information to a book by looking at the book and the screen at the same time.

Artist: KAROLINA SOBECKA | http://www.gravitytrap.com

Display type II: AR glasses (off-screen)
A far more sophisticated but not yet consumer-friendly method uses AR glasses or a head-mounted display (HMD), also called a head-up display. With this device the extra information is mixed with one's own perception of the world. The virtual images appear in the air, in the real world, around you, and are not projected on a screen. In type II there are two types of mixing the real world with the virtual world:

Video see-through: a camera captures the real world. The virtual images are mixed with the captures (video images) of the real world, and this mix creates an Augmented Reality.

Optical see-through: the real world is perceived directly with one's own eyes in real time. Via small translucent mirrors in goggles, virtual images are displayed on top of the perceived reality.

Display type III: Projection based Augmented Reality
With projection-based AR we project virtual 3D scenes or objects onto the surface of a building or an object (or a person). To do this, we need to know exactly the dimensions of the object we project AR info onto. The projection is seen on the object or building with remarkable precision. This can generate very sophisticated or wild projections on buildings. The Augmented Matter in Context group, led by Jouke Verlinden at the Faculty of Industrial Design Engineering, TU Delft, uses projection-based AR for manipulating the appearance of products.
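To make the video see-through mixing concrete, here is a minimal sketch in Python with OpenCV. It is our illustration under stated assumptions, not software from the AR Lab or any particular headset: the overlay file virtual_object.png (a pre-rendered image with an alpha channel) and the camera index 0 are placeholders.

```python
# A minimal sketch of video see-through compositing: mix a pre-rendered
# RGBA overlay with a live camera feed. Assumes OpenCV is installed;
# "virtual_object.png" and camera index 0 are placeholder assumptions.
import cv2
import numpy as np

overlay = cv2.imread("virtual_object.png", cv2.IMREAD_UNCHANGED)  # BGRA render

cap = cv2.VideoCapture(0)  # the camera that captures the real world
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Scale the overlay to the camera frame and split off its alpha channel.
    layer = cv2.resize(overlay, (frame.shape[1], frame.shape[0]))
    alpha = layer[:, :, 3:4].astype(np.float32) / 255.0  # 0 = real, 1 = virtual
    # Per-pixel mix of the real video signal and the virtual image.
    mixed = (1.0 - alpha) * frame + alpha * layer[:, :, :3]
    cv2.imshow("video see-through AR", mixed.astype(np.uint8))
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

An optical see-through headset performs the same mix optically in its mirrors or prisms, so the real world never passes through a camera at all.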

CONNECTING ART AND TECHNOLOGY

The 2011 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) was held in Basel, Switzerland. In the track Arts, Media, and Humanities, 40 articles were offered discussing the connection of 'hard' physics and 'soft' art. There are several ways in which art and Augmented Reality technology can be connected: we can, for example, make art with Augmented Reality technology, create Augmented Reality artworks, or use Augmented Reality technology to show and explain existing art (such as a monument like the Greek Parthenon or paintings from the grottos of Lascaux). Most of the contributions of the conference concerned Augmented Reality as a tool to present, explain or augment existing art. However, some visual artists use AR as a medium to create art.

The role of the artist in working with the emerging technology of Augmented Reality has been discussed by Helen Papagiannis in her ISMAR paper The Role of the Artist in Evolving AR as a New Medium (2011). In her paper, Helen Papagiannis reviews how the use of technology as a creative medium has been discussed in recent years. She points out that in 1988 John Pearson wrote about how the computer offers artists "new means for expressing their ideas" (p. 73, cited in Papagiannis, 2011, p. 61). According to Pearson, "Technology has always been the handmaiden of the visual arts, as is obvious, a technical means is always necessary for the visual communication of ideas, of expression or the development of works of art—tools and materials are required." (p. 73) However, he points out that new technologies "were not developed by the artistic community for artistic purposes, but by science and industry to serve the pragmatic or utilitarian needs of society." (p. 73, cited in Papagiannis, 2011, p. 61)

As Helen Papagiannis concludes, it is then up to the artist "to act as a pioneer, pushing forward a new aesthetic that exploits the unique materials of the novel technology" (2011, p. 61). Like Helen, we believe this also holds for the emerging field of AR technologies, and we hope artists will set out to create exciting new Augmented Reality art and thereby contribute to the interplay between art and technology. An interview with Helen Papagiannis can be found on page 12 of this magazine. A portrait of the artist Marina de Haas, who did a residency at the AR Lab, can be found on page 60.

REFERENCES

■ Milgram, P. and Kishino, F., "A Taxonomy of Mixed Reality Visual Displays," IEICE Transactions on Information Systems, vol. E77-D, no. 12, 1994, pp. 1321-1329.
■ Azuma, Ronald T., "A Survey of Augmented Reality," Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), pp. 355-385.
■ Papagiannis, H., "The Role of the Artist in Evolving AR as a New Medium," 2011 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) – Arts, Media, and Humanities (ISMAR-AMH), Basel, Switzerland, pp. 61-65.
■ Pearson, J., "The Computer: Liberator or Jailer of the Creative Spirit," Leonardo, Supplemental Issue, Electronic Art, 1 (1988), pp. 73-80.



BIOGRAPHY HELEN PAPAGIANNIS

Helen Papagiannis is a designer, artist, and PhD researcher specializing in Augmented Reality (AR) in Toronto, Canada. Helen has been working with AR since 2005, exploring the creative possibilities for AR with a focus on content development and storytelling. She is a Senior Research Associate at the Augmented Reality Lab at York University, in the Department of Film, Faculty of Fine Arts. Helen has presented her interactive artwork and research at global juried conferences and events including TEDx (Technology, Entertainment, Design), ISMAR (International Symposium on Mixed and Augmented Reality) and ISEA (International Symposium on Electronic Art). Prior to her Augmented life, Helen was a member of the internationally renowned Bruce Mau Design studio, where she was project lead on "Massive Change: The Future of Global Design". Read more about Helen's work on her blog and follow her on Twitter: @ARstories.

www.augmentedstories.com



INTERVIEW WITH HELEN PAPAGIANNIS BY HANNA SCHRAFFENBERGER

What is Augmented Reality?

Augmented Reality (AR) is a real-time layering of virtual digital elements including text, images, video and 3D animations on top of our existing reality, made visible through AR-enabled devices such as smart phones or tablets equipped with a camera. I often compare AR to cinema when it was first new, for we are at a similar moment in AR's evolution where there are currently no conventions or set aesthetics; this is a time ripe with possibilities for AR's creative advancement. Like cinema when it first emerged, AR has commenced with a focus on the technology with little consideration to content. AR content needs to catch up with AR technology. As a community of designers, artists, researchers and commercial industry, we need to advance content in AR and not stop with the technology, but look at what unique stories and utility AR can present.

So far, AR technologies are still new to many people and often AR works cause a magical experience. Do you think AR will lose its magic once people get used to the technology and have developed an understanding of how AR works? How have you worked with this 'magical element' in your work 'The Amazing Cinemagician'?

I wholeheartedly agree that AR can create a magical experience. In my TEDx 2010 talk, "How Does Wonderment Guide the Creative Process" (http://youtu.be/ScLgtkVTHDc), I discuss how AR enables a sense of wonder, allowing us to see our environments anew. I often feel like a magician when presenting demos of my AR work live; astonishment fills the eyes of the beholder questioning, "How did you do that?" So what happens when the magic trick is revealed, as you ask, when the illusion loses its novelty and becomes habitual? In Virtual Art: From Illusion to Immersion (2004), new media art-historian Oliver Grau discusses how audiences are first overwhelmed by new and unaccustomed visual experiences, but later, once "habituation chips away at the illusion", the new medium no longer possesses "the power to captivate" (p. 152). Grau writes that at this stage the medium becomes "stale and the audience is hardened to its attempts at illusion"; however, he notes that it is at this stage that "the observers are receptive to content and media competence" (p. 152). When the initial wonder and novelty of the technology wear off, will it be then that AR is explored as a possible media format for various content and receives a wider public reception as a mass medium? Or is there an element of wonder that need exist in the technology for it to be effective and flourish?



Picture: PIPPIN LEE

“Pick a card. Place it here. Prepare to be amazed and entertained.”



I believe AR is currently entering the stage of content development and storytelling; however, I don't feel AR has lost its "power to captivate" or "become stale", and that as artists, designers, researchers and storytellers, we continue to maintain wonderment in AR and allow it to guide and inspire story and content. Let's not forget the enchantment and magic of the medium. I often reference the work of French filmmaker and magician Georges Méliès (1861-1938) as a great inspiration and recently named him the Patron Saint of AR in an article for The Creators Project (http://www.thecreatorsproject.com/blog/celebrating-georges-méliès-patron-saint-of-augmented-reality) on what would have been Méliès' 150th birthday. Méliès was first a stage magician before being introduced to cinema at a preview of the Lumière brothers' invention, where he is said to have exclaimed, "That's for me, what a great trick". Méliès became famous for the "trick-film", which employed a stop-motion and substitution technique. Méliès applied the newfound medium of cinema to extend magic into novel, seemingly impossible visualities on the screen.

I consider AR, too, to be very much about creating impossible visualities. We can think of AR as a real-time stop-substitution, which layers content dynamically atop the physical environment and creates virtual actualities with shapeshifting objects, magically appearing and disappearing—as Méliès first did in cinema.

In tribute to Méliès, my Mixed Reality exhibit The Amazing Cinemagician integrates Radio Frequency Identification (RFID) technology with the FogScreen, a translucent projection screen consisting of a thin curtain of dry fog. The Amazing Cinemagician speaks to technology as magic, linking the emerging technology of the FogScreen with the pre-cinematic magic lantern and phantasmagoria spectacles of the Victorian era. The installation is based on a card trick, using physical playing cards as an interface to interact with the FogScreen. RFID tags are hidden within each physical playing card. Part of the magic and illusion of this project was to disguise the RFID tag as a normal object, out of the viewer's sight. Each of these tags corresponds to a short film clip by Méliès, which is projected onto the FogScreen once a selected card is placed atop the RFID tag reader. The RFID card reader is hidden within an antique wooden podium (adding to the aura of the magic performance and historical time period).

The following instructions were provided to the participant: "Pick a card. Place it here. Prepare to be amazed and entertained." Once the participant placed a selected card atop the designated area on the podium (atop the concealed RFID reader), an image of the corresponding card was revealed on the FogScreen, which was then followed by one of Méliès' films. The decision was made to provide visual feedback of the participant's selected card to add to the magic of the experience and to generate a sense of wonder, similar to the witnessing and questioning of a magic trick, with participants asking, "How did you know that was my card? How did you do that?" This curiosity inspired further exploration of each of the cards (and in turn, Méliès' films) to determine if each of the participant's cards could be properly identified.

You are an artist and researcher. Your scientific work as well as your artistic work explores how AR can be used as a creative medium. What's the difference between your work as an artist/designer and your work as a researcher?

Excellent question! I believe that artists and designers are researchers. They propose novel paths for innovation, introducing detours into the usual processes. In my most recent TEDx 2011 talk in Dubai, "Augmented Reality and the Power of Imagination" (http://youtu.be/7QrB4cYxjmk),



I discuss how as a designer/artist/PhD researcher I am both a practitioner and a researcher, a maker and a believer. As a practitioner, I do, create, design; as a researcher I dream, aspire, hope. I am a make-believer working with a technology that is about make-believe, about imagining possibilities atop actualities. Now, more than ever, we need more creative adventurers and make-believers to help AR continue to evolve and become a wondrous new medium, unlike anything we've ever seen before! I spoke to the importance and power of imagination and make-believe, and how they pertain to AR at this critical junction in the medium's evolution. When we make-believe and when we imagine, we are in two places simultaneously; make-believe is about projecting or layering our imagination on top of a current situation or circumstance. In many ways, this is what AR is too: layering imagined worlds on top of our existing reality.

You've had quite a success with your AR pop-up book 'Who's Afraid of Bugs?' In your blog you talk about your inspiration for the story behind the book: it was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. Can you tell us more?

Who's Afraid of Bugs? was the world's first Augmented Reality (AR) pop-up book designed for iPad 2 and iPhone 4. The book combines hand-crafted paper-engineering and AR on mobile devices to create a tactile and hands-on storybook that explores the fear of bugs through narrative and play. Integrating image tracking in the design, as opposed to the black and white glyphs commonly seen in AR, the book can hence be enjoyed alone as a regular pop-up book, or supplemented with Augmented digital content when viewed through a mobile device equipped with a camera. The book is a playful exploration of fears using AR in a meaningful and fun way.

Picture: HELEN PAPAGIANNIS


Rhyming text takes the reader through the storybook where various 'creepy crawlies' (spider, ant, and butterfly) are waiting to be discovered, appearing virtually as 3D models you can interact with. A tarantula attacks when you touch it, an ant hyperlinks to educational content with images and diagrams, and a butterfly appears flapping its wings atop a flower in a meadow. Hands are integrated throughout the book design, whether it's placing one's hand down to have the tarantula crawl over you virtually, the hand holding the magnifying lens that sees the ant, or the hands that pop up holding the flower upon which the butterfly appears. It's a method to involve the reader in the narrative, but it also comments on the unique tactility AR presents, bridging the digital with the physical. Further, the story for the AR pop-up book was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. AR provides a safe, controlled environment to conduct exposure therapy within a patient's physical surroundings, creating a more believable scenario with heightened presence (defined as the sense of really being in an imagined or perceived place or scenario), and provides greater immediacy than Virtual Reality (VR). A video of the book may be watched at http://vimeo.com/25608606.

In your work, technology serves as an inspiration. For example, rather than starting with a story which is then adapted to a certain technology, you start out with AR technology, investigate its strengths and weaknesses and so the story evolves. However, this does not limit you to only use the strengths of a medium. On the contrary, weaknesses such as accidents and glitches have for example influenced your work 'Hallucinatory AR'. Can you tell us a bit more about this work?

Hallucinatory Augmented Reality (AR), 2007, was an experiment which investigated the possibility of images which were not glyphs/AR trackables generating AR imagery. The project evolved out of accidents: incidents in earlier experiments in which the AR software was mistaking non-marker imagery for AR glyphs and attempted to generate AR imagery. This confusion by the software resulted in unexpected and random flickering AR imagery. I decided to explore the creative and artistic possibilities of this effect further and conduct experiments with non-traditional marker-based tracking. The process entailed a study of what types of non-marker images might generate such 'hallucinations' and a search for imagery that would evoke or call upon multiple AR imagery/videos from a single image/non-marker.

Upon multiple image searches, one image emerged which proved to be quite extraordinary. A cathedral stained glass window was able to evoke four different AR videos; it was the only instance, from among many other images, in which multiple AR imagery appeared. Upon close examination of the image, focusing in and out with a web camera, a face began to emerge in the black and white pattern. A fantastical image of a man was encountered. Interestingly, it was when the image was blurred into this face using the web camera that the AR hallucinatory imagery worked best, rapidly multiplying and appearing more prominently. Although numerous attempts were made with similar images, no other such instances occurred; this image appeared to be unique.

The challenge now rested in the choice of what types of imagery to curate into this hallucinatory viewing: what imagery would be best suited to this phantasmagoric and dream-like form? My criteria for imagery/videos were like-form and shape, in an attempt to create a collage-like set of visuals. As the sequence or duration of the imagery in Hallucinatory AR could not be predetermined, the goal was to identify imagery that possessed similarities, through which the possibility for visual synchronicities existed.



Themes of intrusion and chance encounters are at play in Hallucinatory AR, inspired in part by Surrealist artist Max Ernst. In What is the Mechanism of Collage? (1936), Ernst writes:

One rainy day in 1919, finding myself in a village on the Rhine, I was struck by the obsession which held under my gaze the pages of an illustrated catalogue showing objects designed for anthropologic, microscopic, psychologic, mineralogic, and paleontologic demonstration. There I found brought together elements of figuration so remote that the sheer absurdity of that collection provoked a sudden intensification of the visionary faculties in me and brought forth an illusive succession of contradictory images, double, triple, and multiple images, piling up on each other with the persistence and rapidity which are particular to love memories and visions of half-sleep (p. 427).

Of particular interest to my work in exploring and experimenting with Hallucinatory AR was Ernst's description of an "illusive succession of contradictory images" that were "brought forth" (as though independent of the artist), rapidly multiplying and "piling up" in a state of "half-sleep". Similarities can be drawn to the process of the seemingly disparate AR images jarringly coming in and out of view, layered atop one another. One wonders if these visual accidents are what the future of AR might hold: of unwelcome glitches in software systems as Bruce Sterling describes on Beyond the Beyond in 2009; or perhaps we might come to delight in the visual poetry of these Augmented hallucinations that are "As beautiful as the chance encounter of a sewing machine and an umbrella on an operating table." [1]

To a computer scientist, these 'glitches', as applied in Hallucinatory AR, could potentially be viewed or interpreted as a disaster, as an example of the technology failing. To the artist, however, there is poetry in these glitches, with new possibilities of expression and new visual forms emerging.

On the topic of glitches and accidents, I'd like to return to Méliès. Méliès became famous for the stop trick, or double exposure special effect, a technique which evolved from an accident: Méliès' camera jammed while filming the streets of Paris; upon playing back the film, he observed an omnibus transforming into a hearse. Rather than discounting this as a technical failure, or glitch, he utilized it as a technique in his films. Hallucinatory AR also evolved from an accident, which was embraced and applied in an attempt to evolve a potentially new visual mode in the medium of AR. Méliès introduced new formal styles, conventions and techniques that were specific to the medium of film; novel styles and new conventions will also emerge from AR artists and creative adventurers who fully embrace the medium.

[1] Comte de Lautréamont's often quoted allegory, famous for inspiring both Max Ernst and André Breton, qtd. in: Williams, Robert. Art Theory: An Historical Introduction. Malden, MA: Blackwell Publishing, 2004: 197.


“As beautiful as the chance encounter of a sewing machine and an umbrella on an operating table.” Comte de Lautréamont

Picture: PIPPIN LEE


THE TECHNOLOGY BEHIND AUGMENTED REALITY
By Pieter Jonker

Augmented Reality (AR) is a field that is primarily concerned with realistically adding computer-generated images to the image one perceives of the real world.

AR comes in several flavors. Best known is the practice of using flatscreens or projectors, but nowadays AR can be experienced even on smartphones and tablet PCs. The crux is that 3D digital data from another source is added to the ordinary physical world, which is for example seen through a camera. We can create this additional data ourselves, e.g. using 3D drawing programs such as 3D Studio Max, but we can also add CT and MRI data or even live TV images to the real world. Likewise, animated three-dimensional objects (avatars), which then can be displayed in the real world, can be made using a visualization program like Cinema 4D.

Instead of displaying information on conventional monitors, the data can also be added to the vision of the user by means of a head-mounted display (HMD) or head-up display. This is a second, less known form of Augmented Reality, already familiar to fighter pilots, among others. We distinguish two types of HMDs, namely Optical See-Through (OST) headsets and Video See-Through (VST) headsets. OST headsets use semi-transparent mirrors or prisms, through which one can keep seeing the real world. At the same time, virtual objects can be added to this view using small displays that are placed on top of the prisms. VSTs are in essence Virtual Reality goggles, so the displays are placed directly in front of your eyes. In order to see the real world, there are two cameras attached on the other side of the little displays. You can then see the Augmented Reality by mixing the video signal coming from the camera with the video signal containing the virtual objects.





UNDERLYING TECHNOLOGY

Screens and Glasses

Unlike screen-based AR, HMDs provide depth perception, as both eyes receive their own image. When objects are projected on a 2D screen, one can convey an experience of depth by letting the objects move. Recent 3D screens allow you to view stationary objects in depth. 3D televisions that work with glasses quickly alternate the right and left image; in sync with this, the glasses use active shutters which let the image reach the left or the right eye in turn. This happens so fast that it looks like you view both the left and the right image simultaneously. 3D television displays that work without glasses make use of little lenses which are placed directly on the screen. Those refract the left and right image, so that each eye can only see the corresponding image (see for example www.dimenco.eu/display-technology). This is essentially the same method as used on the well-known 3D postcards on which a beautiful lady winks when the card is slightly turned. 3D film makes use of two projectors that show the left and right images simultaneously; however, each of them is polarized in a different way. The left and right lenses of the glasses have matching polarizations and only let through the light of the corresponding projector.

The important point with screens is that you are always bound to the physical location of the display, while headset-based techniques allow you to roam freely. This is called immersive visualization: you are immersed in a virtual world. You can walk around in the 3D world, move around and even enter virtual 3D objects. Video see-through AR will become popular within a very short time and ultimately become an extension of the smartphone, because both display technology and camera technology have made great strides with the advent of smartphones. What currently still might stand in the way of smartphone models is computing power and energy consumption. Companies such as Microsoft, Google, Sony and Zeiss will soon enter the consumer market with AR technology.

Tracking Technology

A current obstacle for major applications, one which will soon be resolved, is the tracking technology. The problem with AR is embedding the virtual objects in the real world. You can compare this with color printing: the colors, e.g. cyan, magenta, yellow and black, have to be printed properly aligned to each other. What you often see in prints which are not yet cut are so-called fiducial markers on the edge of the printing plates that serve as a reference for the alignment of the colors. These are also necessary in AR. Often, you see that markers are used onto which a 3D virtual object is projected. Moving and rotating the marker lets you move and rotate the virtual object. Such a marker is comparable to the fiducial marker in color printing. With the help of computer vision technology, the camera of the headset can identify the marker and, based on its size, shape and position, conclude the relative position of the camera. If you move your head relative to the marker (with the virtual object), the computer knows how the image on the display must be transformed so that the virtual object remains stationary. And conversely, if your head is stationary and you rotate the marker, it knows how the virtual object should rotate so that it remains on top of the marker.

AR smartphone applications such as Layar use the built-in GPS and compass for the tracking. This has an accuracy of meters and measures angles to within 5-10 degrees. Camera-based tracking, however, is accurate to the centimetre and can measure angles to within a few degrees. Nowadays, using markers for the tracking is already out of date and we use so-called "natural feature tracking", also called "keypoint tracking". Here, the computer searches for conspicuous (salient) key points in the left and right camera image. If, for example, you twist your head, this shift is determined on the basis of those key points at more than 30 frames per second. This way, a 3D map of these keypoints can be built and the computer knows the relationship (distance and angle) between the keypoints and the stereo camera. This method is more robust than marker-based tracking because you have many keypoints, widely spread in the scene, and not just the four corners of the marker close together in the scene. If someone walks in front of the camera and blocks some of the keypoints, there will still be enough keypoints left and the tracking is not lost. Moreover, you do not have to stick markers all over the world.
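To make the marker principle concrete, the sketch below (our illustration, not the AR Lab's tracker) recovers the position and orientation of the camera relative to a square marker from the marker's four detected corners, using OpenCV's perspective-n-point solver. The corner pixel coordinates, the marker size and the camera intrinsics are made-up example values; in a real system they come from the marker detector and a camera calibration. The same computation underlies natural feature tracking, with 3D map points taking the place of the marker corners.

```python
# A hedged sketch of recovering camera pose from the four corners of a
# square marker (the principle described above; values are examples only).
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in metres (assumed)

# 3D corner positions in the marker's own coordinate frame.
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

# The same four corners as detected in the camera image (pixel coordinates);
# in a real system these come from the marker detector.
image_points = np.array([
    [320.0, 180.0], [420.0, 185.0], [415.0, 290.0], [325.0, 285.0]
], dtype=np.float32)

# Intrinsics of a calibrated camera: focal length and principal point.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted camera for this sketch

# solvePnP yields the rotation and translation of the marker relative to
# the camera: exactly the "relative position" the article refers to.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("marker position relative to camera (m):", tvec.ravel())
```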

Collaboration with the Royal Academy of Art (KABK) in The Hague takes place in the AR Lab (Royal Academy, TU Delft, Leiden University, various SMEs), which realizes applications together. TU Delft has done research on AR since 1999; since 2006, the university works with the art academy in The Hague. The idea is that AR is a new technology with its own merits, and artists are very good at finding out what is possible with the new technology. Here are some pictures of realized projects.

Fig 1. The current technology replaces markers with natural feature tracking, also called keypoint tracking. Instead of the four corners of a marker, the computer itself determines which points in the left and right images can be used as anchor points for calculating the 3D pose of the camera in 3D space. From top:
1: all points in the left and right images can be used to slowly build a complete 3D map. Such a map can, for example, be used to relive a past experience, because you can again walk through the now virtual space.
2: the 3D keypoint space and the trace of the camera position within it.
3: keypoints (the color indicates their suitability).
4: virtual objects (eyes) placed on an existing surface.
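In the spirit of the keypoint tracking shown in Fig 1, here is a small sketch (again our own illustration, assuming OpenCV; the frame file names are placeholders) that detects and matches salient ORB features between two camera frames, which is the raw material for estimating how the camera moved:

```python
# Detect and match "conspicuous" (salient) keypoints between two frames.
import cv2

frame_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
frame_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)  # detector for salient keypoints
kp_a, desc_a = orb.detectAndCompute(frame_a, None)
kp_b, desc_b = orb.detectAndCompute(frame_b, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary);
# cross-checking keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

# Each match links a point in frame A to a point in frame B; feeding such
# correspondences into a pose solver yields the camera motion.
print(f"{len(matches)} keypoint matches; best distance: {matches[0].distance}")
```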



Fig 2. Virtual furniture exhibition at the Salone del Mobile in Milan (2008); students of the Royal Academy of Art, The Hague show their furniture by means of AR headsets. This saves transportation costs.

Fig 3. Virtual sculpture exhibition at the Kröller-Müller Museum (2009). From left: 1) visitors on adventure with laptops on walkers, 2) inside with an optical see-through headset, 3) large pivotable screen on a field of grass, 4) virtual image.



Fig 4. Exhibition in Museum Boijmans Van Beuningen (2008-2009). From left: 1) Sgraffito in 3D; 2) the 3D print version may be picked up by the spectator; 3) animated shards: the table covered in ancient pottery can be seen via the headset; 4) scanning antique pottery with the CT scanner delivers a 3D digital image.

Fig 5. The TU Delft, partially in collaboration with the Royal Academy (with the oldest industrial design course in the Netherlands), has designed a number of headsets. This design of headsets is an ongoing activity. From left: 1) first optical see-through headset with Sony headset and self-made inertia tracker (2000), 2) on a construction helmet (2006), 3) SmartCam and tracker taped on a Cybermind Visette headset (2007), 4) headset design with engines by Niels Mulder, a student at the Royal Academy of Art, The Hague (2007), based on Cybermind technology, 5) low-cost prototype based on the Carl Zeiss Cinemizer headset, 6) future AR visor?, 7) future AR lens?


There are many applications that can be realized using AR; they will find their way in the coming decades:

1. Head-up displays have already been used for many years in the Air Force for fighter pilots; this can be extended to other vehicles and civil applications.
2. The billboards during the broadcast of a football game are essentially also AR; more can be done by also involving the game itself and allowing interaction of the user, such as projecting the offside line.
3. In the professional sphere, you can, for example, visualize where pipes under the street lie or should lie. Ditto for designing ships, houses, planes, trucks and cars. What's outlined in a CAD drawing could be drawn in the real world, allowing you to see in 3D if and where there is a mismatch.
4. You can easily find books you are looking for in the library.
5. You can find out where restaurants are in a city.
6. You can pimp theater, musical, opera or pop concerts with (immersive) AR decor.
7. You can arrange virtual furniture or curtains from the IKEA catalog and see how they look in your home.
8. Maintenance of complex devices will become easier; e.g. you can virtually see where the paper in the copier is jammed.
9. If you enter a restaurant or the hardware store, a virtual avatar can show you the place to find that special bolt or table.



SHOWING THE SERRA ROOM IN MUSEUM BOIJMANS VAN BEUNINGEN DURING THE EXHIBITION SGRAFFITO IN 3D

Picture: JOACHIM ROTTEVEEL





RE-INTRODUCING MOSQUITOS
By Maarten Lamers

AROUND 2004, MY YOUNGER BROTHER VALENTIJN INTRODUCED ME TO THE FASCINATING WORLD OF AUGMENTED REALITY. HE WAS A MOBILE PHONE SALESMAN AT THE TIME, AND SIEMENS HAD JUST LAUNCHED THEIR FIRST "SMARTPHONE", THE BULKY SIEMENS SX1. THIS PHONE WAS QUITE MARVELOUS, WE THOUGHT – IT RAN THE SYMBIAN OPERATING SYSTEM, HAD A BUILT-IN CAMERA, AND CAME WITH… THREE GAMES.

One of these games was Mozzies, a.k.a. Virtual Mosquito Hunt, which apparently won some 2003 Best Mobile Game Award, and my brother was eager to show it to me in the store where he worked at that time. I was immediately hooked… Mozzies lets you kill virtual mosquitos that fly around superimposed over the live camera feed. By physically moving the phone you could chase after the mosquitos when they attempted to fly off the phone's display. Those are all the ingredients for Augmented Reality in my personal opinion: something that interacts with my perception and manipulation of the world around me, at that location, at that time. And Mozzies did exactly that.

Now, almost eight years later, not much has changed. Whenever people around me speak of AR, because they got tired of saying "Augmented Reality", they still refer to bulky equipment (even bulkier than the Siemens SX1!) that projects stuff over a live camera feed and lets you interact with whatever that "stuff" is. In Mozzies it was pesky little mosquitos; nowadays it is anything from restaurant information to crime scene data. But nothing really changed, right?

Right! Technology became more advanced, so we no longer need to hold the phone in our hand, but get to wear it strapped to our skull in the form of goggles. But the idea is unchanged; you look at fake stuff in the real world and physically move around to deal with it. You still don't get the tactile sensation of swatting a mosquito or collecting "virtually heavy" information. You still don't even hear the mosquito flying around you…

It's time to focus on those matters also, in my opinion. Let's take up the challenge and make AR more than visual, exploring interaction models for other senses. Let's enjoy the full experience of seeing, hearing, and particularly swatting mosquitos, but without the itchy bites.



LIEVEN VAN VELTHOVEN — THE RACING STAR

"IT AIN'T FUN IF IT AIN'T REAL TIME" BY HANNA SCHRAFFENBERGER


WHEN I ENTER LIEVEN VAN VELTHOVEN'S ROOM, THE PEOPLE FROM THE EFTELING HAVE JUST LEFT. THEY ARE INTERESTED IN HIS 'VIRTUAL GROWTH' INSTALLATION. AND THEY ARE NOT THE ONLY ONES INTERESTED IN LIEVEN'S WORK. IN THE LAST YEAR, HE HAS WON THE JURY AWARD FOR BEST NEW MEDIA PRODUCTION 2011 OF THE INTERNATIONAL CINEKID YOUTH MEDIA FESTIVAL AS WELL AS THE DUTCH GAME AWARD 2011 FOR THE BEST STUDENT GAME. THE WINNING MIXED REALITY GAME 'ROOM RACERS' HAS BEEN SHOWN AT THE DISCOVERY FESTIVAL, MEDIAMATIC, THE STRP FESTIVAL AND THE ZKM IN KARLSRUHE. HIS VIRTUAL GROWTH INSTALLATION HAS EMBELLISHED THE STREETS OF AMSTERDAM AT NIGHT. NOW, HE IS GOING TO SHOW ROOM RACERS TO ME, IN HIS LIVING ROOM — WHERE IT ALL STARTED.

The room is packed with stuff and at first sight it seems rather chaotic, with a lot of random things lying on the floor. There are a few plants, which probably don't get enough light, because Lieven likes the dark (that's when his projections look best). It is only when he turns on the beamer that I realize that his room is actually not chaotic at all. The shoe, magnifying glass, video games, tape and stapler which cover the floor are all part of the game.

"You create your own race game tracks by placing real stuff on the floor," Lieven tells me. He hands me a controller and soon we are racing the little projected cars around the chocolate spread, marbles, a remote control and a flashlight. Trying not to crash the car into a belt, I tell him what I remember about when I first met him a few years ago at a Media Technology course at Leiden University. Back then, he was programming a virtual bird, which would fly from one room to another, preferring the room in which it was quiet. Loud and sudden sounds would scare the bird away into another room. The course for which he developed it was called sound space interaction, and his installation was solely based on sound. I ask him whether the virtual bird was his first contact with Augmented Reality. Lieven laughs.

"It's interesting that you call it AR, as it only uses sound!"

Indeed, most of Lieven's work is based on interactive projections and plays with visual augmentations of our real environment. But like the bird, all of them are interactive and work in real-time. Looking back, the bird was not his first AR work.

"My first encounter with AR was during our first Media Technology course — a visit to the Ars Electronica festival in 2007 — where I saw Pablo Valbuena's Augmented Sculpture. It was amazing. I was asking myself, can I do something like this but interactive instead?"

Armed with a bachelor in technical computer science from TU Delft and the new-found possibility to bring in his own curiosity and ideas at the Media Technology Master program at Leiden University, he set out to build his own interactive projection-based works.



ROOM RACERS

Up to four players race their virtual cars around real objects which are lying on the floor. Players can drop in or out of the game at any time. Everything you can find can be placed on the floor to change the route. Room Racers makes use of projection-based mixed reality: the structure of the floor is analysed in real-time using a modified camera and self-written software. Virtual cars are projected onto the real environment and interact with the detected objects that are lying on the floor. The game has won the Jury Award for Best New Media Production 2011 of the international Cinekid Youth Media Festival, and the Dutch Game Award 2011 for Best Student Game. Room Racers has been shown at several international media festivals. You can play Room Racers at the 'Car Culture' exhibition at the Lentos Kunstmuseum in Linz, Austria until the 4th of July 2012.

Picture: LIEVEN VAN VELTHOVEN, ROOM RACERS AT ZKM | CENTER FOR ARTS AND MEDIA IN KARLSRUHE, GERMANY ON JUNE 19TH, 2011
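As a purely speculative sketch of what such floor analysis could look like (the actual Room Racers software is Lieven's own self-written work; this Python/OpenCV version only illustrates the general idea, and the threshold value is an arbitrary assumption), a camera image of the floor can be turned into an obstacle mask for the cars to collide against:

```python
# Turn a camera view of the floor into a collision mask: a speculative
# illustration, not Lieven van Velthoven's implementation.
import cv2

camera = cv2.VideoCapture(0)  # placeholder camera index
ok, floor = camera.read()
camera.release()

gray = cv2.cvtColor(floor, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Pixels that stand out from a dark floor become obstacles; projected
# cars would then bounce off every non-zero pixel in the mask.
_, obstacle_mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)
cv2.imwrite("obstacle_mask.png", obstacle_mask)
```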





"The first time I experimented with the combination of the real and the virtual myself was in a piece called Shadow Creatures, which I made with Lisa Dalhuijsen during our first semester in 2007."

More interactive projections followed in the next semester, and in 2008 the idea for Room Racers was born. A first prototype was built in a week: a projected car bumping into real-world things. After that followed months and months of optimization. Everything is done by Lieven himself, mostly at night in front of the computer.

"My projects are never really finished, they are always work in progress, but if something works fine in my room, it's time to take it out in the world."

After having friends over and playing with the cars until six o'clock in the morning, Lieven knows it's time to steer the cars out of his room and show them to the outside world.

"I wanted to present Room Racers but I didn't know anyone, and no one knew me. There was no network I was part of."

Uninhibited by this, Lieven took the initiative and asked the Discovery Festival if they were interested in his work. Luckily, they were — and showed two of his interactive games at the Discovery Festival 2010. After the festival, requests started coming and the cars kept rolling. When I ask him about this continuing success, he is divided:

"It's fun, but it takes a lot of time — I have not been able to program as much as I used to."

His success does surprise him, and he especially did not expect the attention it gets in an art context.

"I knew it was fun. That became clear when I had friends over and we played with it all night. But I did not expect the awards. And I did not expect it to be relevant in the art scene. I do not think it's art, it's just a game. I don't consider myself an artist. I am a developer and I like to do interactive projections. Room Racers is my least arty project, nevertheless it got a lot of response in the art context."

A piece which he actually considers more of an artwork is Virtual Growth: a mobile installation which projects autonomous growing structures onto any environment you place it in, be it buildings, people or nature.

"For me AR has to take place in the real world. I don't like screens. I want to get away from them. I have always been interested in other ways of interacting with computers, without mice, without screens. There is a lot of screen-based AR, but for me AR is really about projecting into the real world. Put it in the real world, identify real-world objects, do it in real-time, that's my philosophy. It ain't fun if it ain't real-time. One day, I want to go through a city with a van and do projections on buildings, trees, people and whatever else I pass."

For now, he is bound to a bike, but that does not stop him. Virtual Growth works fast and stable, even on a bike. That has been witnessed in Amsterdam, where the audiovisual bicycle project 'Volle Band' put beamers on bikes and invited Lieven to augment the city with his mobile installation. People who experienced Virtual Growth on his journeys around Amsterdam, at festivals and parties, are enthusiastic about his ('smashing!') entertainment-art. As the virtual structure grows, the audience members not only start to interact with the piece but also with each other.

"They put themselves in front of the projector, have it projecting onto themselves and pass on the projection to other people by touching them. I don't explain anything. I believe in simple ideas, not complicated concepts. The piece has to speak for itself. If people try it, immediately get it, enjoy it and tell other people about it, it works!"

That Virtual Growth works becomes clear from the many happy smiling faces the projection grows upon. And that's also what counts for Lieven.

"At first it was hard, I didn't get paid for doing these projects. But when people see them and are enthusiastic, that makes me happy. If I see people enjoying my work, and playing with it, that's what really counts."

I wonder where he gets the energy to work that much alongside being a student. He tells me that what drives him is that he enjoys it. He likes to spend the evenings with the programming language C#. But the fact that he enjoys working on his ideas does not only keep him motivated; it has also caused him to postpone a few courses at university. While talking, he smokes his cigarette and takes the ashtray from the floor. With the road no longer blocked by it, the cars take a different route now. Lieven might take a different route soon as well. I ask him if he will still be working from his living room, realizing his own ideas, once he has graduated.

"It's actually funny. It all started to fill my portfolio in order to get a cool job. I wanted to have some things to show besides a diploma. That's why I started realizing my ideas. It got out of control and soon I was realizing one idea after the other. And maybe, I'll just continue doing it. But also, there are quite some companies and jobs I'd enjoy working for. First I have to graduate anyway."

If I have learned anything about Lieven and his work, I am sure his graduation project will be placed in the real world and work in real-time. More than that, it will be fun. It ain't Lieven, if it ain't fun.

Name: Lieven van Velthoven
Born: 1984
Study: Media Technology MSc, Leiden University
Background: Computer Science, TU Delft
Selected AR Works: Room Racers, Virtual Growth
Watch: http://www.youtube.com/user/lievenvv



HOW DID WE DO IT: ADDING VIRTUAL SCULPTURES AT THE KRÖLLER-MÜLLER MUSEUM
By Wim van Eck

ALWAYS WANTED TO CREATE YOUR OWN AUGMENTED REALITY PROJECTS BUT NEVER KNEW HOW? DON'T WORRY, AR[T] IS GOING TO HELP YOU! HOWEVER, THERE ARE MANY HURDLES TO OVERCOME WHEN REALIZING AN AUGMENTED REALITY PROJECT. IDEALLY YOU SHOULD BE A SKILLFUL 3D ANIMATOR TO CREATE YOUR OWN VIRTUAL OBJECTS, AND A GREAT PROGRAMMER TO MAKE THE PROJECT TECHNICALLY WORK. PROVIDED YOU DON'T JUST WANT TO MAKE A FANCY TECH-DEMO, YOU ALSO NEED TO COME UP WITH A GREAT CONCEPT!

My name is Wim van Eck and I work at the AR Lab, based at the Royal Academy of Art. One of my tasks is to help art students realize their Augmented Reality projects. These students have great concepts, but often lack experience in 3d animation and programming. Logically I should tell them to follow animation and programming courses, but since the average deadline for their projects is counted in weeks instead of months or years, there is seldom time for that... In the coming issues of AR[t] I will explain how the AR Lab helps students to realize their projects and how we try to overcome technical boundaries, showing actual projects we worked on by example. Since this is the first issue of our magazine I will give a short overview of recommendable programs for Augmented Reality development.

We will start with 3d animation programs, which we need to create our 3d models. There are many 3d animation packages; the more well known ones include 3ds Max, Maya, Cinema 4d, Softimage, Lightwave, Modo and the open source Blender (www.blender.org). These are all great programs; however, at the AR Lab we mostly use Cinema 4d (image 1) since it is very user friendly and because of that easier to learn. It is a shame that the free Blender still has a steep learning curve, since it is otherwise an excellent program. You can download a demo of Cinema 4d at http://www.maxon.net/downloads/demo-version.html, and these are some good tutorial sites to get you started:

http://www.cineversity.com
http://www.c4dcafe.com
http://greyscalegorilla.com

Image 1


Image 2

Image 3 | Picture by Klaas A. Mulder

Image 4

In case you don't want to create your own 3d models, you can also download them from various websites. Turbosquid (http://www.turbosquid.com), for example, offers good quality but often at a high price, while free sites such as Artist-3d (http://artist-3d.com) have a more varied quality. When a 3d model is not constructed properly it might give problems when you import or visualize it. In coming issues of AR[t] we will talk more about optimizing 3d models for Augmented Reality usage.

To actually add these 3d models to the real world you need Augmented Reality software. Again there are many options, with new software being added continuously. Probably the easiest to use is BuildAR (http://www.buildar.co.nz), which is available for Windows and OSX. It is easy to import 3d models, video and sound, and there is a demo available. There are excellent tutorials on their site to get you started. In case you want to develop for iOS or Android, the free Junaio (http://www.junaio.com) is a good option. Their online GLUE application is easy to use, though their preferred .m2d format for 3d models is not the most common. In my opinion the most powerful Augmented Reality software right now is Vuforia (https://developer.qualcomm.com/develop/mobile-technologies/Augmented-reality) in combination with the excellent game engine Unity (www.unity3d.com). This combination offers high-quality visuals with easy-to-script interaction on iOS and Android devices.

Sweet summer nights at the Kröller-Müller Museum

As mentioned in the introduction, we will show the workflow of AR Lab projects in these 'How did we do it' articles. In 2009 the AR Lab was invited by the Kröller-Müller Museum to present during the 'Sweet Summer Nights', an evening full of cultural activities in the famous sculpture garden of the museum. We were asked to develop an Augmented Reality installation aimed at the whole family and found a diverse group of students to work on the project. Now the most important part of the project started: brainstorming!

Our location in the sculpture garden was in-between two sculptures: 'Man and woman', a stone sculpture of a couple by Eugène Dodeigne (image 2), and 'Igloo di pietra', a dome shaped sculpture by Mario Merz (image 3). We decided to read more about these works, and learned that Dodeigne had originally intended to create two couples instead of one, placed together in a wild natural environment. We decided to virtually add the second couple and also add a more wild environment, just as Dodeigne initially had in mind. To be able to see these additions we placed a screen which can rotate 360 degrees between the two sculptures (image 4).

interaction on iOS and Android devices

37


Image 5

A webcam was placed on top of the screen, and a laptop running ARToolkit (http://www.hitl.washington.edu/artoolkit) was mounted on the back of the screen. A large marker was placed near the sculpture as a reference point for ARToolkit.

Now it was time to create the 3d models of the extra couple and environment. The students working on this part of the project didn't have much experience with 3d animation, and there wasn't much time to teach them, so manually modeling the sculptures would be a difficult task. Options such as 3d scanning the sculpture soon came up, but it still takes quite some skill to prepare a 3d scan for Augmented Reality usage. We will talk more about that in a coming issue of this magazine.

But when we look carefully at our setup (image 5) we can draw some interesting conclusions. Our screen is immobile, so we will always see our added 3d model from the same angle. Since we will never be able to see the back of the 3d model, there is no need to actually model this part. This is common practice when making 3d models; you can compare it with set construction for Hollywood movies, where they also only build what the camera will see. This already saves us quite some work. We can also see that the screen is positioned quite far away from the sculpture, and when an object is viewed from a distance it optically loses its depth. When you are one meter away from an object and take one step aside, you will see the side of the object; but if the same object is a hundred meters away, you will hardly see a change in perspective when changing your position (see image 6). From that distance people will hardly see the difference between an actual 3d model and a plain 2d image. This means we could actually use photographs or drawings instead of a complex 3d model, making the whole process easier again. We decided to follow this route.

Image 6
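To put rough numbers on this effect (a back-of-the-envelope check, assuming a sideways step of about half a meter): the change in viewing direction is

$$\theta = \arctan\!\left(\frac{b}{d}\right), \qquad \arctan\!\left(\frac{0.5}{1}\right) \approx 27^{\circ}, \qquad \arctan\!\left(\frac{0.5}{100}\right) \approx 0.3^{\circ}$$

where b is the step size and d the distance to the object: a clearly different vantage point at one meter, an imperceptible one at a hundred.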


Image 7

Image 8

Image 9

Image 10

Image 11


Image 12 | Original photograph by Klaas A. Mulder

To be able to place the photograph of the sculpture in our 3d scene we have to assign it to a placeholder, a single polygon; image 7 shows how this could look. This actually looks quite awful: we see the statue, but also all the white around it from the image. To solve this we need to make use of something called an alpha channel, an option you can find in every 3d animation package (image 8 shows where it is located in the material editor of Cinema 4d). An alpha channel is a grayscale image which declares which parts of an image are visible: white is opaque, black is transparent. Detailed tutorials about alpha channels are easily found on the internet. As you can see, this looks much better (image 9).

We followed the same procedure for the second statue and the grass (image 10), using many separate polygons to create enough randomness for the grass. As long as you see these models from the right angle they look quite realistic (image 11). In this case the 2.5d approach probably gives even better results than a 'normal' 3d model, and it is much easier to create. Another advantage is that the 2.5d approach is very easy to compute since it uses few polygons, so you don't need a very powerful computer to run it, or you can have many models on screen at the same time. Image 12 shows the final setup.
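As a concrete illustration of the alpha-channel step described above, here is a minimal sketch in Python with the Pillow imaging library (not the Cinema 4d material editor itself; the file names are hypothetical):

```python
from PIL import Image

# Load the photograph of the statue and its painted alpha mask:
# in the mask, white = opaque (keep), black = transparent (discard).
photo = Image.open("statue.png").convert("RGB")
mask = Image.open("statue_alpha.png").convert("L")  # grayscale mask

# Attach the mask as the image's alpha channel and save the cut-out,
# which is then ready to be used as the texture of a single polygon.
cutout = photo.copy()
cutout.putalpha(mask)
cutout.save("statue_cutout.png")
```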


For the iglo sculpture by Mario Merz we used a similar approach. A graphic design student imagined what could be living inside the iglo, and started drawing a variety of plants and creatures. Using the same 2.5d approach as described before, we used these drawings and placed them around the iglo, and an animation was shown of a plant growing out of the iglo (image 12).

We can conclude that it is good practice to analyze your scene before you start making your 3d models. You don't always need to model all the detail, and using photographs or drawings can be a very good alternative. The next issue of AR[t] will feature a new 'How did we do it'; in case you have any questions, you can contact me at w.vaneck@kabk.nl

The Lab collaborated in this project with students from different departments of the KABK: Ferenc Molnar, Mit Koevoets, Jing Foon Yu, Marcel Kerkmans and Alrik Stelling. The AR Lab team consisted of Yolande Kolstee, Wim van Eck, Melissa Coleman and Pawel Pokutycki, supported by Martin Sjardijn and Joachim Rotteveel.



PIXELS WANT TO BE FREED! INTRODUCING AUGMENTED REALITY ENABLING HARDWARE TECHNOLOGIES

BY JOUKE VERLINDEN



1. Introduction

From the early head-up display in the movie "Robocop" to the present, Augmented Reality (AR) has evolved into a manageable ICT environment that must be considered by product designers of the 21st century. Instead of focusing on a variety of applications and software solutions, this article discusses the essential hardware of Augmented Reality: display techniques and tracking techniques. We argue that these two fields differentiate AR from regular human-user interfaces, and that tuning them is essential in realizing an AR experience. As is often the case, there is a vast body of knowledge behind each of the principles discussed below; hence, a large variety of literature references is given. Furthermore, the first author of this article found it important to include his own preferences and experiences throughout this discussion. We hope that this material strikes a chord and makes you consider employing AR in your designs. After all, why should digital information always be confined to a dull, rectangular screen?


2. Display Technologies

To categorise AR display technologies, two important characteristics should be identified: imaging generation principle and physical layout.

Generic AR technology surveys describe a large variety of display technologies that support imaging generation (Azuma, 1997; Azuma et al., 2001); these principles can be categorised into:

1. Video-mixing. A camera is mounted somewhere on the product; computer graphics are combined with captured video frames in real time. The result is displayed on an oblique surface, for example, an immersive Head-Mounted Display (HMD).

2. See-through. Augmentation by this principle typically employs half-silvered mirrors to superimpose computer graphics onto the user's view, as found in head-up displays of modern fighter jets.

3. Projector-based systems. One or more projectors cast digital imagery directly on the physical environment.

As Raskar and Bimber (2004, p.72) argued, an important consideration in deploying an Augmented system is the physical layout of the image generation. For each imaging generation principle mentioned above, the imaging display can be arranged between user and physical object in three distinct ways:

a) head-attached, which presents digital images directly in front of the viewer's eyes, establishing a personal information display;
b) hand-held, carried by a user, which does not cover the whole field of view;
c) spatial, which is fixed to the environment.

The resulting imaging and arrangement combinations are summarised in Table 1.

                     1. Video-mixing      2. See-through        3. Projection-based
A. Head-attached     Head-mounted display (HMD)
B. Hand-held         Handheld devices
C. Spatial           Embedded display     See-through boards    Spatial projection-based

Table 1. Image generation principles for Augmented Reality

When the AR image generation and layout principles are combined, the following collection of display technologies is identified: HMDs, handheld devices, embedded screens, see-through boards and spatial projection-based AR. These are briefly discussed in the following sections.



2.1 Head-mounted display

Head-attached systems refer to HMD solutions, which can employ any of the three image generation technologies. Even the first head-mounted displays, developed at the dawn of Virtual Reality, already considered a see-through system with half-silvered mirrors to merge virtual line drawings with the physical environment (Sutherland, 1968). Since then, the variety of head-attached imaging systems has expanded and encompasses all three principles for AR: video-mixing, see-through and direct projection on the physical world (Azuma et al., 2001). A benefit of this approach is its hands-free nature. Secondly, it offers personalised content, enabling each user to have a private view of the scene with customised and sensitive data that does not have to be shared.

For most applications, however, HMDs have been considered inadequate, both in the case of see-through and video-mixing imaging. According to Klinker et al. (2002), HMDs introduce a large barrier between the user and the object, and their resolution is insufficient for IAP: typically 800 × 600 pixels for the complete field of view (rendering the user "legally blind" by American standards). Similar reasoning is found in Bochenek et al. (2001), in which both the objective and subjective assessments of HMDs were lower than those of hand-held or spatial imaging devices. However, new developments (specifically high-resolution OLED displays) show promising new devices, specifically for the professional market (Carl Zeiss) and entertainment (Sony); see Figure 1.

Figure 1. RECENT HEAD-MOUNTED DISPLAYS (ABOVE: KABK THE HAGUE; BELOW: CARL ZEISS).

2.2 Handheld display

Hand-held video-mixing solutions are based on smartphones, PDAs or other mobile devices equipped with a screen and camera. With the advent of powerful mobile electronics, handheld Augmented Reality technologies are emerging. By employing built-in cameras on smartphones or PDAs, video mixing is enabled, while concurrent use is supported by communication through wireless networks (Schmalstieg and Wagner, 2008).


Figure 2. THE VESP´R DEVICE FOR UNDERGROUND INFRASTRUCTURE VISUALIZATION (SCHALL ET AL., 2008). Callouts: GPS antenna, camera + IMU, joystick handles, UMPC.

The resulting device acts as a hand-held window on a mixed reality. An example of such a solution is shown in Figure 2: a combination of an Ultra Mobile Personal Computer (UMPC), a Global Positioning System (GPS) antenna for global position tracking, and a camera for local position and orientation sensing along with video mixing. As of today, such systems are found in each modern smartphone, and apps such as Layar (www.layar.com) and Junaio (www.junaio.com) offer such functions for free to the user — allowing different layers of content (often social-media based).

The advantage of using a video-mixing approach is that the lag times in processing are less influential than with see-through or projector-based systems — the live video feed is also delayed and, thus, establishes a consistent combined image. This hand-held solution works well for occasional, mobile use; long-term use can cause strain in the arms. The challenges in employing this principle are the limited screen coverage and resolution (typically a 4-inch diagonal at 320 × 240 pixels). Furthermore, memory, processing power and graphics processing are limited to rendering relatively simple 3D scenes, although these capabilities are rapidly improving with the upcoming dual-core and quad-core mobile CPUs.



2.3 Embedded display

Another AR display option is to include a number of small LCD screens in the observed object in order to display the virtual elements directly on the physical object. Although arguably an augmentation solution, embedded screens do add digital information on product surfaces. This practice is found in the later stages of prototyping mobile phones and similar information appliances. Such screens typically have a similar resolution as that of PDAs and mobile phones, which is QVGA: 320 × 240 pixels. Such devices are connected to a workstation by a specialised cable, which can be omitted if autonomous components are used, such as a smartphone. Regular embedded screens can only be used on planar surfaces, and their size is limited while their weight impedes larger use.

With the advent of novel, flexible e-Paper and Organic Light-Emitting Diode (OLED) technologies, it might be possible to cover a part of a physical model with such screens. To our knowledge, no such systems have been developed or commercialised so far. Although it does not support changing light effects, the Luminex material approximates this by using an LED/fibreglass based fabric (see Figure 3). A Dutch company recently presented a fully interactive light-emitting fabric based on integrated RGB LEDs, labelled 'Lumalive'. These initiatives can manifest as new ways to support prototyping scenarios that require a high local resolution and complete unobstructedness. However, the fit to the underlying geometry remains a challenge, as well as embedding the associated control electronics and wiring. An elegant solution to the second challenge was given by Saakes et al. (2010), entitled 'the slow display': temporarily changing the colour of photochromic paint by UV laser projection. This effect lasts for a couple of minutes and demonstrates how fashion and AR could meet.

Figure 3. IMPRESSION OF THE LUMINEX MATERIAL



2.4 See-through board

See-through boards vary in size between desktop and hand-held versions. The Augmented Engineering system (Bimber et al., 2001) and the AR extension of the haptic sculpting project (Bordegoni and Covarrubias, 2007) are examples of the use of see-through technologies, which typically employ a half-silvered mirror to mix virtual models with a physical object (Figure 4). Similar to the Pepper's ghost phenomenon, standard stereoscopic Virtual Reality (VR) workbench systems such as the Barco Baron are used to project the virtual information. In addition to the need to wear shutter glasses to view stereoscopic graphics, head tracking is required to align the virtual image between the object and the viewer.

An advantage of this approach is that digital images are not occluded by the user's hand or environment and that graphics can be displayed outside the physical object (i.e., to display the environment or annotations and tools). Furthermore, the user does not have to wear heavy equipment, and the resolution of the projection can be extremely high — enabling a compelling display system for exhibits and trade fairs. However, see-through boards obstruct user interaction with the physical object. Multiple viewers cannot share the same device, although a limited solution is offered by the Virtual Showcase, which establishes a faceted and curved mirroring surface (Bimber, 2002).

Figure 4. THE AUGMENTED ENGINEERING SEE-THROUGH DISPLAY (BIMBER ET AL., 2001).

2.5 Spatial projection-based displays

This technique is also known as Shader Lamps (Raskar et al., 2001) and was extended in (Raskar and Bimber, 2004) to a variety of imaging solutions, including projections on irregular surface textures and combinations of projections with (static) holograms. In the field of advertising and performance arts, this technique recently gained popularity labelled as Projection Mapping: projecting on buildings, cars or other large objects, replacing traditional screens as display means, cf. Figure 5. In such cases, theatre projector systems are used that are prohibitively expensive (over 30,000 euros).

The principle of spatial projection-based technologies is shown in Figure 6. Casting an image to a physical object is considered complementary to constructing a perspective image of a virtual object by a pinhole camera. If the physical object is of the same geometry as the virtual object, a straightforward 3D perspective transformation (described by a 4 × 4 matrix) is sufficient to predistort the digital image. To obtain this transformation, it suffices to indicate 6 corresponding points in the physical world and virtual world; an algorithm entitled Linear Camera Calibration can then be applied (see Appendix). If the physical and virtual shapes differ, the projection is viewpoint-dependent and the head position needs to be tracked. Important projector characteristics involve weight and size versus the power (in lumens) of the projector.

Figure 5. TWO PROJECTIONS ON A CHURCH CHAPEL IN UTRECHT (HOEBEN, 2010).

There are initiatives to employ LED lasers for direct holographic projection, which also decreases power consumption compared to traditional video projectors and ensures that the projection is always in focus without requiring optics (Eisenberg, 2004). Both fixed and hand-held spatial projection-based systems have been demonstrated. At present, hand-held projectors measure 10 × 5 × 2 cm and weigh 150 g, including the processing unit and battery. However, the light output is little (15–45 lumens).

The advantage of spatial projection-based technologies is that they support the perception of all visual and tactile/haptic depth cues without the need for shutter glasses or HMDs. Furthermore, the display can be shared by multiple co-located users. It requires less expensive equipment, which is often already available at design studios.

Challenges to projector-based AR approaches include optics and occlusion. First, only a limited field of view and focus depth can be achieved. To reduce these problems, multiple video projectors can be used. An alternative solution is to employ a portable projector, as proposed in the iLamps and the I/O Pad concepts (Raskar et al., 2003; Verlinden et al., 2008). Other issues include occlusion and shadows, which are cast on the surface by the user or other parts of the system. Projection on non-convex geometries depends on the granularity and orientation of the projector. The perceived quality is sensitive to projection errors (also known as registration errors), especially projection overshoot (Verlinden et al., 2003b). A solution for this problem is either to include an offset (dilatation) of the physical model or to introduce pixel masking in the rendering pipeline. As projectors are now being embedded in consumer cameras and smartphones, we are expecting this type of augmentation in the years to come.

Figure 6. PROJECTION-BASED DISPLAY PRINCIPLE (ADAPTED FROM (RASKAR AND LOW, 2001)); ON THE RIGHT, THE DYNAMIC SHADER LAMPS DEMONSTRATION (BANDYOPADHYAY ET AL., 2001).
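To make the predistortion step concrete: once the 3 × 4 transformation is known, every vertex of the virtual model maps to projector pixels through one matrix product and a perspective division. A minimal sketch (Python/numpy; the matrix values below are made up, in practice they come from the calibration described in the Appendix):

```python
import numpy as np

def project_points(P, points_3d):
    """Project Nx3 world points to projector pixels with a 3x4 matrix P."""
    pts = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    uvw = pts @ P.T                      # rows are (u*w, v*w, w)
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division -> (u, v)

# Hypothetical calibrated matrix and one corner of the physical object.
P = np.array([[800.0, 0.0, 320.0, 10.0],
              [0.0, 800.0, 240.0, 5.0],
              [0.0,   0.0,   1.0, 2.0]])
print(project_points(P, np.array([[0.1, 0.2, 1.5]])))
```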



3. Input Technologies

In order to merge the digital and physical, position and orientation tracking of the physical components is required. Here, we will discuss two different types of input technologies: tracking and event sensing. Furthermore, we will briefly discuss other input modalities.

3.1 Position tracking

Welch and Foxlin (2002) presented a comprehensive overview of the tracking principles that are currently available. In the ideal case, the measurement should be as unobtrusive and invisible as possible while still offering accurate and rapid data. They concluded that there is currently no ideal solution ('silver bullet') for position tracking in general, but some respectable alternatives are available. Table 2 summarises the most important characteristics of these tracking methods for Augmented Reality purposes. The data have been gathered from commercially available equipment (the Ascension Flock of Birds, ARToolkit, Optotrack, Logitech 3D Tracker, Microscribe and Minolta VI900). All these should be considered for object tracking in Augmented prototyping scenarios.

There are significant differences in tracker/marker size, action radius and accuracy. As the physical model might consist of a number of parts or a global shape and some additional components (e.g., buttons), the number of items to be tracked is also of importance. For simple tracking scenarios, either magnetic or passive optical technologies are often used.

In some experiments we found out that a projector could not be equipped with a standard Flock of Birds 3D magnetic tracker due to interference. Other tracking techniques should be used for this paradigm. For example, the ARToolkit employs complex patterns and a regular webcamera to determine the position, orientation and identification of a marker. This is done by measuring the size, 2D position and perspective distortion of a known rectangular marker, cf. Figure 7 (Kato and Billinghurst, 1999).

Passive markers enable a relatively untethered system, as no wiring is necessary. The optical markers are obtrusive when markers are visible to the user while handling the object. Although computationally intensive, marker-less optical tracking has been proposed (Prince et al., 2002).

Tracking type        Size of tracker (mm)          Typical number   Action radius (accuracy)   DOF   Issues
Magnetic             16 x 16 x 16                  2                1.5 m (1 mm)               6     ferro-magnetic interference
Optical, passive     80 x 80 x 0.01                >10              3 m (1 mm)                 6     line of sight
Optical, active      10 x 10 x 5                   >10              3 m (0.5 mm)               3     line of sight, wired connections
Ultrasound           20 x 20 x 10                  1                1 m (3 mm)                 6     line of sight
Mechanical linkage   defined by working envelope   1                0.7 m (0.1 mm)             5     limited degrees of freedom, inertia
Laser scanning       none                          infinite         2 m (0.2 mm)               6     line of sight, frequency, object recognition

Table 2. SUMMARY OF TRACKING TECHNOLOGIES.


Figure 7. WORKFLOW OF THE ARTOOLKIT OPTICAL TRACKING ALGORITHM, http://www.hitl.washington.edu/artoolkit/documentation/userarwork.html
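The same kind of pose recovery can be reproduced with today's off-the-shelf tools. A minimal sketch (Python with OpenCV rather than ARToolkit's own API; the corner pixel coordinates and camera matrix below are hypothetical) that recovers a marker's pose from its four detected corners:

```python
import numpy as np
import cv2

# A flat 80 x 80 mm marker: its corners in the marker's own coordinate frame.
s = 0.08
object_corners = np.array([[-s/2,  s/2, 0.0], [ s/2,  s/2, 0.0],
                           [ s/2, -s/2, 0.0], [-s/2, -s/2, 0.0]])

# The same corners as detected in the camera image (made-up pixel values).
image_corners = np.array([[312., 198.], [405., 203.], [398., 290.], [309., 284.]])

# Intrinsic camera matrix (focal length and principal point, here assumed).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

# solvePnP returns the rotation (as a Rodrigues vector) and translation that
# map marker coordinates into camera coordinates: the marker's pose.
ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners, K, None)
R, _ = cv2.Rodrigues(rvec)
print("marker position (m):", tvec.ravel())
```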

The employment of laser-based tracking systems is demonstrated by the Illuminating Clay system by Piper et al. (2002): a slab of Plasticine acts as an interactive surface — the user influences a 3D simulation by sculpting the clay, while the simulation results are projected on the surface. A laser-based Minolta Vivid 3D scanner is employed to continuously scan the clay surface. In the article, this principle was applied to geodesic analysis, yet it can be adapted to design applications, e.g., the sculpting of car bodies. This method has a number of challenges when used as a real-time tracking means, including the recognition of objects and their posture. However, with the emergence of depth cameras for gaming such as the Kinect (Microsoft), similar systems are now being devised with a very small technological threshold.

In particular cases, a global measuring system is combined with a different local tracking principle to increase the level of detail, for example, to track the position and arrangement of buttons on the object's surface.

Figure 8. ILLUMINATING CLAY SYSTEM WITH A PROJECTOR/LASER SCANNER (PIPER ET AL., 2002).


Such local positioning systems might have less advanced technical requirements; for example, the sampling frequency can be decreased to only once a minute. One local tracking system is based on magnetic resonance, as used in digital drawing tablets. The Sensetable demonstrates this by equipping an altered commercial digital drawing tablet with custom-made wireless interaction devices (Patten et al., 2001). The Senseboard (Jacob et al., 2002) has similar functions and an intricate grid of RFID receivers to determine the (2D) location of an RFID tag on a board. In practice, these systems rely on a rigid tracking table, but it is possible to extend this to a flexible sensing grid. A different technology was proposed by Hudson (2004): using LED pixels as both light emitters and sensors. By operating one pixel as a sensor whilst its neighbours are illuminated, it is possible to detect light reflected from a fingertip close to the surface. This principle could be applied to embedded displays, as mentioned in Section 2.3.

3.2 Event sensing

Apart from location and orientation tracking, Augmented prototyping applications require interaction with parts of the physical object, for example, to mimic the interaction with the artefact. This interaction differs per AR scenario, so a variety of events should be sensed to cater to these applications.

Physical sensors

The employment of traditional sensors labelled 'physical widgets' (phidgets) has been studied extensively in the Computer-Human Interface (CHI) community. Greenberg and Fitchett (2001) introduced a simple electronics hardware and software library to interface PCs with sensors (and actuators) that can be used to discern user interaction. The sensors include switches, sliders, rotation knobs and sensors to measure force, touch and light. More elaborate components like a mini joystick, Infrared (IR) motion sensor, air pressure sensor and temperature sensor are commercially available. A similar initiative is iStuff (Ballagas et al., 2003), which also hosts a number of wireless connections to sensors.

Some systems embed switches with short-range wireless connections, for example, the Switcheroo and Calder systems (Avrahami and Hudson, 2002; Lee et al., 2004) (cf. Figure 9). This allows a greater freedom in modifying the location of the interactive components while prototyping. The Switcheroo system uses custom-made RFID tags. A receiver antenna has to be located nearby (within a 10-cm distance), so the movement envelope is rather small, while the physical model is wired to a workstation. The Calder toolkit (Lee et al., 2004) uses a capacitive coupling technique that has a smaller range (6 cm with small antennae), but is able to receive and transmit for long periods on a small 12 mm coin cell. Other active wireless technologies would draw more power, leading to a system that would only last a few hours. Although the costs for this system have not been specified, only standard electronics components are required to build such a receiver.

Hand tracking

Instead of attaching sensors to the physical environment, fingertip and hand tracking technologies can also be used to generate user events. Embedded skins represent a type of interactive surface technology that allows the accurate measurement of touch on the object's surface (Paradiso et al., 2000). For example, the SmartSkin by Rekimoto (2002) consists of a flexible grid of antennae. The proximity or touch of human fingers changes the capacity locally in the grid and establishes a multi-finger tracking cloth, which can be wrapped around an object. Such a solution could be combined with embedded displays, as discussed in Section 2.3.


Figure 9. MOCKUP EQUIPPED WITH WIRELESS SWITCHES THAT CAN BE RELOCATED TO EXPLORE USABILITY (LEE ET AL., 2004).

Direct electric contact can also be used to track user interaction; the Paper Buttons concept (Pedersen et al., 2000) embeds electronics on the objects and equips the finger with a two-wire plug that supplies power and allows bidirectional communication with the embedded components when they are touched. Magic Touch (Pederson, 2001) uses a similar wireless system; the user wears an RFID reader on his or her finger and can interact by touching the components, which have hidden RFID tags. This method has been adapted to Augmented Reality for design by Kanai et al. (2007).

Optical tracking can be used for fingertip and hand tracking as well. A simple example is the light widgets system (Fails and Olsen, 2002), which traces skin colour and determines finger/hand position by 2D blobs. The OpenNI library enables hand and body tracking with depth range cameras such as the Kinect (OpenNi.org). A more elaborate example is the virtual drawing tablet by Ukita and Kidode (2004); fingertips are recognised on a rectangular sheet by a head-mounted infrared camera. Traditional VR gloves can also be used for this type of tracking (Schäfer et al., 1997).
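To give an impression of the skin-colour approach at its simplest, here is a rough sketch (Python with OpenCV; the HSV threshold is an assumption that needs per-setup tuning, and real systems are considerably more careful about lighting and calibration):

```python
import cv2
import numpy as np

def find_hand_blob(frame_bgr):
    """Return the centroid of the largest skin-coloured region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Crude skin-tone range in HSV (an assumed threshold, tune per setup).
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # largest 2D blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels
```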



3.3 Other input modalities

Speech and gesture recognition require consideration in AR as well. In particular, pen-based interaction would be a natural extension to the expressiveness of today's designer skills. Oviatt et al. (2000) offer a comprehensive overview of the so-called Recognition-Based User Interfaces (RUIs), including the issues and Human Factors aspects of these modalities. Furthermore, speech-based interaction can also be useful to activate operations while the hands are used for selection.

4. Conclusions and Further reading

This article introduces two important hardware systems for AR: displays and input technologies. To superimpose virtual images onto physical models, head-mounted displays (HMDs), see-through boards, projection-based techniques and embedded displays have been employed. An important observation is that HMDs, though best known by the public, have serious limitations and constraints in terms of field of view and resolution, and lend themselves to a kind of isolation. For all display technologies, the current challenges include an untethered interface, the enhancement of graphics capabilities, visual coverage of the display and improvement of resolution. LED-based laser projection and OLEDs are expected to play an important role in the next generation of IAP devices, because this technology can be employed by see-through or projection-based displays.

To interactively merge the digital and physical parts of Augmented prototypes, position and orientation tracking of the physical components is needed, as well as additional user input means. For global position tracking, a variety of principles exist. Optical tracking and scanning suffer from issues concerning line of sight and occlusion. Magnetic, mechanical linkage and ultrasound-based position trackers are obtrusive, and only a limited number of trackers can be used concurrently.

The resulting palette of solutions is summarized in Table 3 as a morphological chart. In devising a solution for your AR system, you can use this as a checklist or inspiration for display and input.

Display              Imaging principle   Video mixing | See-through | Projector-based
Display              Arrangement         Head-attached | Handheld/wearable | Spatial
Input technologies   Position tracking   Optical (passive markers, active markers) | Magnetic | 3D laser scanning | Ultrasound | Mechanical
Input technologies   Event sensing       Physical sensors (wired connection, wireless) | Virtual (surface tracking, 3D tracking)

Table 3. Morphological chart of AR enabling technologies.



Further reading

For those interested in research in this area, the following publication venues offer a range of detailed solutions:

■ International Symposium on Mixed and Augmented Reality (ISMAR) — the ACM-sponsored annual convention on AR, covering both specific applications and emerging technologies, accessible through http://dl.acm.org
■ Augmented Reality Times — a daily update on demos and trends in commercial and academic AR systems: http://artimes.rouli.net
■ Procams workshop — annual workshop on projector-camera systems, coinciding with the IEEE conference on Image Recognition and Robot Vision. The resulting proceedings are freely accessible at http://www.procams.org
■ Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302 — a personal copy can be downloaded for free at http://140.78.90.140/medien/ar/SpatialAR/download.php
■ BuildAR — a simple webcam-based application that uses markers, http://www.buildar.co.nz/buildar-free-version

Appendix: Linear Camera Calibration

This procedure has been published in (Raskar and Bimber, 2004) to some degree, but is slightly adapted here to be more accessible for those with less knowledge of the field of image processing. C source code that implements this mathematical procedure can be found in appendix A1 of (Faugeras, 1993). It basically uses point correspondences between original x, y, z coordinates and their projected u, v counterparts to resolve internal and external camera parameters. In general cases, 6 point correspondences are sufficient (Faugeras 1993, Proposition 3.11).

Let I and E be the internal and external parameters of the projector, respectively. Then a point P in 3D space is transformed to:

$$p = [I \cdot E] \cdot P \qquad (1)$$

where p is a point in the projector's coordinate system. If we decompose rotation and translation components in this matrix transformation, we obtain:

$$p = [R \;\; t] \cdot P \qquad (2)$$

in which R is a 3×3 matrix corresponding to the rotational components of the transformation and t the 3×1 translation vector. We then split the rotation matrix into the row vectors R1, R2 and R3 used in formulae 3 and 4. Applying the perspective division results in the following two formulae, in which the 2D point pi is split into (ui, vi):

$$u_i = \frac{R_1 \cdot P_i + t_x}{R_3 \cdot P_i + t_z} \qquad (3)$$

$$v_i = \frac{R_2 \cdot P_i + t_y}{R_3 \cdot P_i + t_z} \qquad (4)$$

Given n measured point–point correspondences (pi, Pi), i = 1..n, we obtain 2n equations:

$$R_1 \cdot P_i - u_i \, R_3 \cdot P_i + t_x - u_i \, t_z = 0 \qquad (5)$$

$$R_2 \cdot P_i - v_i \, R_3 \cdot P_i + t_y - v_i \, t_z = 0 \qquad (6)$$

We can rewrite these 2n equations as a matrix multiplication with a vector of 12 unknown variables, comprising the original transformation components R and t of formula 2. Due to measurement errors, a solution is usually non-singular; we wish to estimate this transformation with a minimal estimation deviation. In the algorithm presented in (Raskar and Bimber, 2004), the minimax theorem is used to extract these based on determining the singular values. In a straightforward manner, the internal and external transformations I and E of formula 1 can be extracted from the resulting transformation.
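For the curious, the estimation itself fits in a few lines of linear algebra. A minimal sketch (Python/numpy; it stacks formulae 5 and 6 for all correspondences and takes the singular vector with the smallest singular value, a common substitute for the exact routine in Faugeras' appendix):

```python
import numpy as np

def linear_camera_calibration(P_world, p_image):
    """Estimate the 3x4 projection matrix from n >= 6 correspondences.

    P_world: (n, 3) points in world coordinates.
    p_image: (n, 2) projected (u, v) counterparts.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(P_world, p_image):
        # One pair of rows per correspondence, i.e. formulae (5) and (6);
        # the 12 unknowns are ordered [R1 tx R2 ty R3 tz].
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(A)
    # The singular vector with the smallest singular value minimises the
    # estimation deviation of the homogeneous system A x = 0.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```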



References

■ Avrahami, D. and Hudson, S.E. (2002) 'Forming interactivity: a tool for rapid prototyping of physical interactive products', Proceedings of DIS '02, pp.141–146.
■ Azuma, R. (1997) 'A survey of augmented reality', Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp.355–385.
■ Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S. and MacIntyre, B. (2001) 'Recent advances in augmented reality', IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp.34–47.
■ Ballagas, R., Ringel, M., Stone, M. and Borchers, J. (2003) 'iStuff: a physical user interface toolkit for ubiquitous computing environments', Proceedings of CHI 2003, pp.537–544.
■ Bandyopadhyay, D., Raskar, R. and Fuchs, H. (2001) 'Dynamic shader lamps: painting on movable objects', International Symposium on Augmented Reality (ISMAR), pp.207–216.
■ Bimber, O. (2002) 'Interactive rendering for projection-based augmented reality displays', PhD dissertation, Darmstadt University of Technology.
■ Bimber, O., Stork, A. and Branco, P. (2001) 'Projection-based augmented engineering', Proceedings of International Conference on Human-Computer Interaction (HCI 2001), Vol. 1, pp.787–791.
■ Bochenek, G.M., Ragusa, J.M. and Malone, L.C. (2001) 'Integrating virtual 3-D display systems into product design reviews: some insights from empirical testing', Int. J. Technology Management, Vol. 21, Nos. 3–4, pp.340–352.
■ Bordegoni, M. and Covarrubias, M. (2007) 'Augmented visualization system for a haptic interface', HCI International 2007 Poster.
■ Eisenberg, A. (2004) 'For your viewing pleasure, a projector in your pocket', New York Times, 4 November.
■ Fails, J.A. and Olsen, D.R. (2002) 'LightWidgets: interacting in everyday spaces', Proceedings of IUI '02, pp.63–69.
■ Faugeras, O. (1993) Three-Dimensional Computer Vision: a Geometric Viewpoint, MIT Press.
■ Greenberg, S. and Fitchett, C. (2001) 'Phidgets: easy development of physical interfaces through physical widgets', Proceedings of UIST '01, pp.209–218.
■ Hoeben, A. (2010) 'Using a projected Trompe L'Oeil to highlight a church interior from the outside', EVA 2010.
■ Hudson, S. (2004) 'Using light emitting diode arrays as touch-sensitive input and output devices', Proceedings of the ACM Symposium on User Interface Software and Technology, pp.287–290.
■ Jacob, R.J., Ishii, H., Pangaro, G. and Patten, J. (2002) 'A tangible interface for organizing information using a grid', Proceedings of CHI '02, pp.339–346.
■ Kanai, S., Horiuchi, S., Shiroma, Y., Yokoyama, A. and Kikuta, Y. (2007) 'An integrated environment for testing and assessing the usability of information appliances using digital and physical mock-ups', Lecture Notes in Computer Science, Vol. 4563, pp.478–487.
■ Kato, H. and Billinghurst, M. (1999) 'Marker tracking and HMD calibration for a video-based augmented reality conferencing system', Proceedings of International Workshop on Augmented Reality (IWAR 99), pp.85–94.
■ Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V. and Matzke, D. (2002) 'Fata Morgana – a presentation system for product design', Proceedings of ISMAR '02, pp.76–85.
■ Oviatt, S.L., Cohen, P.R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., et al. (2000) 'Designing the user interface for multimodal speech and gesture applications: state-of-the-art systems and research directions', Human Computer Interaction, Vol. 15, No. 4, pp.263–322.
■ Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J. and Adler, A. (2000) 'Sensor systems for interactive surfaces', IBM Systems Journal, Vol. 39, Nos. 3–4, pp.892–914.
■ Patten, J., Ishii, H., Hines, J. and Pangaro, G. (2001) 'Sensetable: a wireless object tracking platform for tangible user interfaces', Proceedings of CHI '01, pp.253–260.
■ Pedersen, E.R., Sokoler, T. and Nelson, L. (2000) 'PaperButtons: expanding a tangible user interface', Proceedings of DIS '00, pp.216–223.
■ Pederson, T. (2001) 'Magic touch: a simple object location tracking system enabling the development of physical-virtual artefacts in office environments', Personal Ubiquitous Computing, January, Vol. 5, No. 1, pp.54–57.
■ Piper, B., Ratti, C. and Ishii, H. (2002) 'Illuminating clay: a 3-D tangible interface for landscape analysis', Proceedings of CHI '02, pp.355–362.
■ Prince, S.J., Xu, K. and Cheok, A.D. (2002) 'Augmented reality camera tracking with homographies', IEEE Computer Graphics and Applications, November, Vol. 22, No. 6, pp.39–45.
■ Raskar, R., Welch, G., Low, K-L. and Bandyopadhyay, D. (2001) 'Shader lamps: animating real objects with image based illumination', Proceedings of Eurographics Workshop on Rendering, pp.89–102.
■ Raskar, R. and Low, K-L. (2001) 'Interacting with spatially augmented reality', ACM International Conference on Virtual Reality, Computer Graphics and Visualization in Africa (AFRIGRAPH), pp.101–108.
■ Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S. and Forlines, C. (2003) 'iLamps: geometrically aware and self-configuring projectors', SIGGRAPH, pp.809–818.
■ Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302.
■ Rekimoto, J. (2002) 'SmartSkin: an infrastructure for freehand manipulation on interactive surfaces', Proceedings of CHI '02, pp.113–120.
■ Saakes, D.P., Chui, K., Hutchison, T., Buczyk, B.M., Koizumi, N., Inami, M. and Raskar, R. (2010) 'Slow Display', SIGGRAPH 2010 Emerging Technologies: Proceedings of the 37th annual conference on Computer graphics and interactive techniques, July 2010.
■ Schäfer, K., Brauer, V. and Bruns, W. (1997) 'A new approach to human-computer interaction – synchronous modelling in real and virtual spaces', Proceedings of DIS '97, pp.335–344.
■ Schall, G., Mendez, E., Kruijff, E., Veas, E., Sebastian, J., Reitinger, B. and Schmalstieg, D. (2008) 'Handheld augmented reality for underground infrastructure visualization', Journal of Personal and Ubiquitous Computing, Springer, DOI 10.1007/s00779-008-0204-5.
■ Schmalstieg, D. and Wagner, D. (2008) 'Mobile phones as a platform for augmented reality', Proceedings of the IEEE VR 2008 Workshop on Software Engineering and Architectures for Realtime Interactive Systems, pp.43–44.
■ Sutherland, I.E. (1968) 'A head-mounted three-dimensional display', Proceedings of AFIPS, Part I, Vol. 33, pp.757–764.
■ Ukita, N. and Kidode, M. (2004) 'Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera', Proceedings of IUI 2004, pp.169–175.
■ Verlinden, J.C., de Smit, A., Horváth, I., Epema, E. and de Jong, M. (2003) 'Time compression characteristics of the augmented prototyping pipeline', Proceedings of Euro-uRapid'03, p.A/1.
■ Verlinden, J. and Horváth, I. (2008) 'Enabling interactive augmented prototyping by portable hardware and a plugin-based software architecture', Journal of Mechanical Engineering, Slovenia, Vol. 54, No. 6, pp.458–470.
■ Welch, G. and Foxlin, E. (2002) 'Motion tracking: no silver bullet, but a respectable arsenal', IEEE Computer Graphics and Applications, Vol. 22, No. 6, pp.24–38.


LIKE RIDING A BIKE. LIKE PARKING A CAR. PORTRAIT OF THE ARTIST IN RESIDENCE

MARINA DE HAAS BY HANNA SCHRAFFENBERGER



"Hi Marina. Nice to meet you! I have heard a lot about you."

support of the lab, she has realized the AR artworks Out of the blue and Drops of white in the course of her study. In 2008 she graduated with

I usually avoid this kind of phrases. Judging from

an AR installation that shows her 3d animated

my experience, telling people that you have

portfolio. Then, having worked with AR for three

heard a lot about them makes them feel uncom-

years, she decided to take a break from technol-

fortable. But this time I say it. After all, it’s no

ogy and returned to photography, drawing and

secret that Marina and the AR Lab in The Hague

painting. Now, after yet another three years, ­

share a history which dates back much longer

she is back in the mixed reality world. Convinced

than her current residency at the AR Lab. At the

by her concepts for future works, the AR Lab

lab, she is known as one of the first students

has invited her as an Artist in Residence. That is

who overcame the initial resistance of the fine

what I have heard about her, and made me want

arts program and started working with AR. With

to meet her for an artist-portrait. ­Knowing quite



Knowing quite a lot about her past, I am interested in what she is currently working on in the context of her residency. When she starts talking, it becomes clear that she has never really stopped thinking about AR. There's a handwritten notebook full of concepts and sketches for future works. Right now, she is working on animations of two animals. Once she is done animating, she'll use AR technology to place the animals — an insect and a dove — in the hands of the audience.

"They will hold a little funeral monument in the shape of a tile in their hands. Using AR technology, the audience will then see a dying dove or a dying crane fly with a missing foot."

Marina tells me her current piece is about impermanence and mortality, but also about the fact that death can be the beginning of something new. Likewise, the piece is not only about death but also intended as an introduction and beginning for a forthcoming work. The AR Lab makes this beginning possible through financial support, but also provides technical assistance and serves as a place for mutual inspiration and exchange. Despite her long break from the digital arts, the young artist feels confident about working with AR again:

“It’s a bit like biking, once you’ve learned it, you never unlearn it. It’s the same with me and AR, of course I had to practice a bit, but I still have the feel for it. I think working with AR is just a part of me.”


After having paused for three years, Marina is positively surprised about how AR technology has emerged in the meantime:

“AR is out there, it’s alive, it’s growing and finally, it can be markerless. I don’t like the use of markers. They are not part of my art and people see them, when they don’t wear AR glasses. I am also glad that so many people know AR from their mobile phones or at least have heard about it before. Essentially, I don’t want the audience to wonder about the technology, I want them to look at the pictures and animations I create. The more people are used to the technology the more they will focus on the content. I am really happy and excited how AR has evolved in the last years!”

I ask how working with brush and paint differs from working with AR, but there seems to be surprisingly little difference.

"The main difference is that with AR I am working with a pen-tablet, a computer and a screen. I control the software, but if I work with a brush I have the same kind of control over it. In the past, I used to think that there was a difference, but now I think of the computer as just another medium to work with. There is no real difference between working with a brush and working with a computer. My love for technology is similar to my love for paint."

Marina discovered her love for technology at a young age:

"When I was a child I found a book with code and so I programmed some games. That was fun, I just understood it. It's the same with creating AR works now. My way of thinking perfectly matches with how AR works. It feels completely natural to me."

Nevertheless, working with technology also has its downside:

"The most annoying thing about working with AR is that you are always facing technical limitations and there is so much that can go wrong. No matter how well you do it, there is always the risk that something won't work. I hope for technology to get more stable in the future."

When realizing her artistic augmentations, Marina sticks to an established workflow:

"I usually start out with my own photographs and a certain space I want to augment. Preferably I measure the dimensions of the space, and then I work with that



room in my head. I have it in my inner vision and I think in pictures. There is a photo register in my head which I can access. It's a bit like parking a car. I can park a car in a very small space extremely well. I can feel the car around me and I can feel the space I want to put it in. It's the same with the art I create. Once I have a clear idea of the artwork I want to create, I use Cinema4D software to make 3d models. Then I use BuildAR to place my 3d models in the real space. If everything goes well, things happen that you could not have imagined." A result of this process is, for example, the AR installation Out of the blue, which was shown at Today's Art festival in The Hague in 2007:

“The idea behind ‘Out of the blue’ came from a photograph I took in an elevator. I took the picture so that the lights in the elevator looked like white ellipses on a black background. I took this basic elliptical shape as a basis for working in a very big space. I was very curious if I could use such a simple shape and still convince the audience that it really existed in the space. And it worked — people tried to touch it with their hands and were very surprised when that wasn’t possible.” The fact that people believe in the existence of her virtual objects is also important for Marina’s personal understanding of AR:


"For me, Augmented Reality means using digital images to create something which is not real. However, by giving meaning to it, it becomes real and people realize that it might as well exist." I wonder whether there is a specific place or space she'd like to augment in the future, and Marina has quite some places in mind. They have one thing in common: they are all known museums that show modern art.

“I would love to create works for the big museums such as the TATE Modern or MoMa. In the Netherlands, I’d love to augment spaces at the Stedelijk Museum in Amsterdam or Boijmans museum in Rotterdam. That’s my world. Going to a museum means a lot to me. Of course, one can place AR artworks everywhere, also in public spaces. But it is important to me that people who experience my work have actively chosen to go somewhere to see art. I don’t want them to just see it by accident at a bus stop or in a park.” Rather than placing her virtual models in a specific physical space, her current work follows a different approach. This time, Marina will place the animated dying animals in the hands of the audiences. The artist has some ideas about how to design this physical contact with the digital animals.

“In order for my piece to work, the viewer needs to feel like he is holding something in his hand. Ideally, he will feel the weight of


the animal. The funeral monuments will therefore have a certain weight."

It is still open where and when we will be able to experience the piece:

"My residency lasts 10 weeks. But of course that's not enough time to finish. In the past, a piece was finished when the time to work on it was up. Now, a piece is finished when it feels complete. It's something I decide myself, I want to have control over it. I don't want any more restrictions. I avoid deadlines."

Coming from a fine arts background, Marina has a tip for art students who want to follow in her footsteps and are curious about working with AR:

"I know it can be difficult to combine technology with art, but it is worth the effort. Open yourself up to art in all its possibilities, including AR. AR is a chance to take a step in a direction of which you have no idea where you'll find yourself. You have to be open for it and look beyond the technology. AR is special — I couldn't live without it anymore..."



BIOGRAPHY JEROEN VAN ERP Under Jeroen's joint leadership, Fabrique has grown through the years into a multifaceted design bureau. It currently employs more than 100 artists, engineers and storytellers working for a wide range of customers: from supermarket chain Albert Heijn to the Rijksmuseum. Fabrique develops visions, helps its clients think about strategies, branding and innovation, and realises designs. Preferably cutting straight through the design disciplines, so that the traditional borders between graphic design, industrial design, spatial design and interactive media are sometimes barely recognisable. In the bureau's vision, this cross-media approach will be the only way to create apparently simple solutions for complex and relevant issues.

JEROEN VAN ERP GRADUATED FROM THE FACULTY OF INDUSTRIAL DESIGN AT THE TECHNICAL UNIVERSITY OF DELFT IN 1988. IN 1992, HE WAS ONE OF THE FOUNDERS OF FABRIQUE IN DELFT, WHICH POSITIONED ITSELF AS A MULTIDISCIPLINARY DESIGN BUREAU. HE ESTABLISHED THE INTERACTIVE MEDIA DEPARTMENT IN 1994, FOCUSING PRIMARILY ON DEVELOPING WEBSITES FOR THE WORLD WIDE WEB - BRAND NEW AT THAT TIME.

The bureau also opened a studio in Amsterdam in 2008. Jeroen is currently CCO (Chief Creative Officer) and a partner, and in this role he is responsible for the creative policy of the company. He has also been closely involved in various projects as art director and designer. He is a guest lecturer for various courses and is a board member at NAGO (the Netherlands Graphic Design Archive) and the Design & Emotion Society.

www.fabrique.nl


A MAGICAL LEVERAGE IN SEARCH OF THE KILLER APPLICATION

BY JEROEN VAN ERP The moment I was confronted with the technology of Augmented Reality, back in 2006 at the Royal Academy of Arts in The Hague, I was thrilled. Despite the heavy helmet, the clumsy equipment, the shaky images and the lack of a well-defined purpose, it immediately had a profound impact on me. From the start, it was clear that this technology had a lot of potential, although at first it was hard to grasp why. Almost six years later, the fog that initially surrounded this new technology has gradually faded away. To start with, the technology itself is developing rapidly, as is the equipment.

But more importantly: companies and cultural institutions are starting to understand how they can benefit from this technology. At the moment there are a variety of applications available (mainly mobile applications for tablets or smart phones) that create added value for the user or consumer. This is great, because it not only allows the audience to gain experience in the field of this still-developing technology, but also the industry. But to make Augmented Reality a real success, the next step will be of vital importance.



INNOVATING OR INNOVATING?

Let's have a look at different forms of innovating in figure 1. On the left we see innovations with a bottom-up approach, and on the right a top-down approach to innovating. A bottom-up approach means that we have a promising new technique, concept or idea, although the exact goal or matching business model aren't clear yet. In general, bottom-up developments are technological or art-based, and are therefore what I would call autonomous: the means are clear, but the exact goal has still to be defined. The usual strategy to take it further is to set up a start-up company in order to develop the technique and hopefully to create a market. This is not always that simple.

Innovating from a top-down approach means that the innovation is steered on the basis of a more or less clearly defined goal. In contrast with bottom-up innovations, the goal is well-defined and the designer or developer has to choose the right means, and design a solution that fits the goal. This can be a business goal, but also a social goal. A business goal is often derived from a benefit for the user or the consumer, which is expected to generate an economic benefit for the company. A marketing specialist would state that there is already a market. This approach means that you have to innovate with an intended goal in mind. A business goal-driven innovation can be a product innovation (either physical products, services or a combination of the two) or a brand innovation (storytelling, positioning), but always with an intended economic or social benefit in mind. As there is an expected benefit, people are willing to invest.

It's interesting to note the difference on the vertical axis between radical innovations and incremental changes (Roberto Verganti, Design-Driven Innovation). Incremental changes are improvements of existing concepts or products. This is happening a lot, for instance in the automotive industry. In general, a radical innovation changes the experience of the product in a fundamental way, and as a result of this often changes an entire business. This is something Apple has achieved several times, but it has also been achieved by TomTom, and by Philips and Douwe Egberts with their Senseo coffee machine.

Figure 1.


This is something Apple has achieved several times, but it has also been achieved by TomTom, and by Philips and Douwe Egberts with their Senseo coffee machine.

HOW ABOUT AR?

What about the position of Augmented Reality? To start with, the Augmented Reality technique is not a standalone innovation. It's not a standalone product but a technique or feature that can be incorporated into products or services with a magical leverage. At its core it is a technique that was developed, and is still developing, with specialist purposes in mind. In principle there was no big demand from 'the market'. Essentially, it is a bottom-up technological development that still needs a concept, product or service.

You can argue about whether it is an incremental innovation or a radical one. A virtual reality expert will probably tell you that it is an improvement (an incremental innovation) of the VR technique. But if you look from an application perspective, there is a radical aspect to it. I prefer to keep the truth in the middle. At this moment in time, AR is in the blue area (figure 1).

It is clear that bottom-up innovation and top-down innovation are different species. But when it comes to economic leverage, the challenge is to become part of the top-down game. That provides a guarantee for further development and broad acceptance of the technique and its principles. So the major challenge for AR is to make the big step to the right part of figure 1, as indicated by the red arrow. Although the principles of Augmented Reality are very promising, it's clear we aren't there yet. An example: we recently received a request to 'do something' with Augmented Reality. The idea was to project the result of an AR application onto a big wall. Suddenly it occurred to me that the experience of AR wasn't suitable at all for this form of publishing. AR doesn't do well on a projection screen. It does well in the user's head, where time, place, reality and imagination can play an intriguing game with our senses. It is unlikely that the technique of Augmented Reality will lead to mass consumption as in 'experiencing the same thing with a lot of people at the same time'. No, by their nature, AR applications are intimate and intense, and this is one of their biggest assets.

FUTURE

We have come a long way, and the things we can do with AR are becoming more amazing by the day. The big challenge is to make it applicable in relevant solutions. There's no discussion about the value of AR in specialist areas, such as the military industry. Institutions in the field of art and culture have discovered the endless possibilities, and now it is time to make the big leap towards solutions with social or economic value (the green area in figure 1). This will give the technique the chance to develop further and eventually flourish. From that perspective, it wouldn't surprise me if the first really good, efficient and economically profitable application emerges for educational purposes.

Let's not forget we are talking about a technology that is still in its infancy. When I look back at the websites we made 15 years ago, I realize the gigantic steps we have made, and I am aware of the fact that we could hardly imagine then what the impact of the internet would be on society today. Of course, it's hard to compare the concept of Augmented Reality with that of the internet, but it is a valid comparison, because it gives the same powerless feeling of not being able to predict its future. But it will probably be bigger than you can imagine.



THE POSITIONING OF VIRTUAL OBJECTS ROBERT PREVEL

WHEN USING AUGMENTED REALITY (AR) FOR VISION, VIRTUAL OBJECTS ARE ADDED TO THE REAL WORLD AND DISPLAYED IN SOME WAY TO THE USER; BE THAT VIA A MONITOR, PROJECTOR, OR HEAD-MOUNTED DISPLAY (HMD). OFTEN IT IS DESIRABLE, OR EVEN UNAVOIDABLE, FOR THE VIEWPOINT OF THE USER TO MOVE AROUND THE ENVIRONMENT (THIS IS PARTICULARLY THE CASE IF THE USER IS WEARING A HMD). THIS PRESENTS A PROBLEM, REGARDLESS OF THE TYPE OF DISPLAY USED: HOW CAN THE VIEWPOINT BE DECOUPLED FROM THE AUGMENTED VIRTUAL OBJECTS?

To recap, virtual objects are blended with the real-world view in order to achieve an Augmented world view. From our initial viewpoint we can determine what the virtual object's position and orientation (pose) in 3D space, and its scale, should be. However, if the viewpoint changes, then how we view the virtual object should also change. For example, if I walk around to face the back of a virtual object, I expect to be able to see the rear of that object. The solution to this problem is to keep track of the user's viewpoint and, in the event that the viewpoint changes, to update the pose of any virtual content accordingly. There are a number of ways in which this can be achieved, for example by using positional sensors (such as inertia trackers), a global positioning system, or computer vision techniques. Typically the best results come from systems that take the data from many tracking systems and blend them together.
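One classic, minimal way to blend two such sources, shown here purely as an illustration and not as the TU Delft implementation, is a complementary filter: let a fast but drifting sensor dominate at short time scales while a slower absolute estimate corrects it over time. A sketch in Python, with invented names and an assumed blending constant:

```python
def blend_yaw(prev_yaw, gyro_rate, vision_yaw, dt, alpha=0.98):
    """Complementary filter for a single orientation angle (radians).

    The integrated gyroscope is smooth but drifts over time; the
    vision-based estimate is absolute but slower and noisier.
    Weighting the two keeps the best properties of both.
    """
    gyro_yaw = prev_yaw + gyro_rate * dt   # dead-reckoned orientation
    return alpha * gyro_yaw + (1.0 - alpha) * vision_yaw

# e.g. at 60 Hz: yaw = blend_yaw(yaw, rate_from_gyro, yaw_from_vision, 1 / 60)
```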



At TU Delft, we have been researching and developing techniques to track position using computer vision. Video cameras are often already part of AR systems; indeed, where the AR system uses video see-through, the use of cameras is necessary. Using computer vision techniques, we can identify landmarks in the environment and, using these landmarks, we can determine the pose of our camera with basic geometry. If the camera is not used directly as the viewpoint (as is the case in optical see-through systems), then we can still keep track of the viewpoint by attaching the camera to it. Say, for example, that we have an optical see-through HMD with an attached video camera. Then, if we calculate the pose of the camera, we can determine the pose of the viewpoint, provided that the camera's position relative to the viewpoint remains fixed.

The problem, then, has been reduced to identifying landmarks in the environment. Historically, this has been achieved by the use of fiducial markers, which act as points of reference in the image. Fiducial markers provide us with a means of determining the scale of the visible environment, provided that enough points of reference are visible, we know their relative positions, and these relative positions don't change. A typical marker often used in AR applications consists of a card with a black rectangle in the centre, a white border, and an additional mark to determine which edge of the card is considered the bottom. As we know that the corners of the black rectangle are all 90 degrees, and we know the distance between corners, we can identify the marker and determine the pose of the camera with regard to the points of reference (in this case the four corners of the card).
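Concretely, this pose computation can be sketched with OpenCV's perspective-n-point solver. Nothing below comes from the article itself: the marker size, the detected pixel coordinates and the camera intrinsics are invented placeholder values.

```python
import cv2
import numpy as np

# Corners of a 10 x 10 cm marker card in the marker's own coordinate
# frame (metres). Knowing these real distances is what gives us scale.
object_points = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float32)

# Pixel positions of the same corners as found in the camera image
# (hard-coded here; a real system would detect them every frame).
image_points = np.array([[312.0, 208.0],
                         [430.0, 214.0],
                         [424.0, 330.0],
                         [309.0, 325.0]], dtype=np.float32)

# Camera intrinsics from a one-off calibration (illustrative values).
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume lens distortion is negligible

# Solve the perspective-n-point problem: the rotation and translation
# of the marker relative to the camera (invert it for the camera pose).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("marker position in camera frame (m):", tvec.ravel())
```

With four coplanar points of known spacing, the solver has enough constraints to recover all six degrees of freedom of the camera pose, which is exactly why such cards work as points of reference.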

A large number of simple 'desktop' AR applications make use of individual markers to track camera pose, or conversely, to track the position of the markers relative to our viewpoint. Larger applications require multiple markers linked together, normally distinguishable by a unique pattern or barcode in the centre of the marker. Typically, the more points of reference that are visible in a scene, the better the results when determining the camera's pose. The key advantage of using markers for tracking the pose of the camera is that an environment can be carefully prepared in advance and, provided the environment does not change, should deliver the same AR experience each time. Sometimes, however, it is not feasible to prepare an environment with markers. Often it is desirable to use an AR application in an unknown or unprepared environment. In these cases, an alternative to using markers is to identify the natural features found in the environment.

The term 'natural features' describes the parts of an image that stand out. Examples include edges, corners and areas of high contrast. In order to use natural features to track the camera position in an unknown environment, we need to be able to first identify the natural features, and then determine their relative positions in the environment. Whereas you could place 20 markers in an environment and still only have 80 identifiable corners, there are often hundreds of natural features in any one image. This makes using natural features a more robust solution than using markers, as there are far more landmarks we can use to navigate, not all of which need to be visible. One of the key advantages of using natural features over markers is that, as we already need to identify and keep track of those natural features seen from our initial viewpoint, we can use the same method to continually update a 3D map of features as we change our viewpoint. This allows our working environment to grow, which could not be achieved in a prepared environment.
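As an illustration of this step (not taken from the article), the sketch below uses OpenCV's ORB detector to find and match such natural features between two viewpoints; the image file names are placeholders, and a real tracker would of course run on live frames.

```python
import cv2

# Two frames of the same scene from different viewpoints; the file
# names are placeholders for this illustration.
frame_a = cv2.imread("viewpoint_a.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("viewpoint_b.png", cv2.IMREAD_GRAYSCALE)

# Detect corner-like natural features and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
keypoints_a, descriptors_a = orb.detectAndCompute(frame_a, None)
keypoints_b, descriptors_b = orb.detectAndCompute(frame_b, None)

# Match descriptors between the two views; every surviving match is a
# candidate landmark observed from both camera positions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(descriptors_a, descriptors_b),
                 key=lambda m: m.distance)
print(f"{len(matches)} natural features tracked across the two views")
```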

Although we are able to determine the relative distance between features, the question remains: how can we determine the absolute position of features in an environment without some known measurement? The short answer is that we cannot; either we need to estimate the distance or we can introduce a known measurement. In a future edition we will discuss the use of multiple video cameras and how, given the absolute distance between the cameras, we can determine the absolute position of our identified features.
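As a small preview of that discussion, the standard pinhole stereo relation already shows why a known baseline fixes the scale. A sketch with illustrative numbers only:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d: once the baseline B (the
    absolute distance between the cameras) is known, a pixel disparity
    d becomes an absolute depth in metres."""
    return focal_px * baseline_m / disparity_px

# An 800 px focal length, a 12 cm baseline and a 40 px disparity place
# a feature at 800 * 0.12 / 40 = 2.4 m from the rig.
print(stereo_depth(800.0, 0.12, 40.0))
```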


MEDIATED REALITY FOR CRIME SCENE INVESTIGATION¹ STEPHAN LUKOSCH

CRIME SCENE INVESTIGATION IN THE NETHERLANDS IS PRIMARILY THE RESPONSIBILITY OF THE LOCAL POLICE. FOR SEVERE CRIMES, A NATIONAL TEAM SUPPORTED BY THE NETHERLANDS FORENSIC INSTITUTE (NFI) IS CALLED IN. INITIALLY CAPTURING ALL DETAILS OF A CRIME SCENE IS OF PRIME IMPORTANCE (SO THAT EVIDENCE IS NOT ACCIDENTALLY DESTROYED). NFI'S DEPARTMENT OF DIGITAL IMAGE ANALYSIS USES THE INFORMATION COLLECTED FOR 3D CRIME SCENE RECONSTRUCTION AND ANALYSIS.

Within the CSI The Hague project (http://www.csithehague.com) several companies and research institutes cooperate under the guidance of the Netherlands Forensic Institute in order to explore new technologies to improve crime scene investigation by combining different technologies to digitize, visualize and investigate the crime scene. The major motivation for the CSI The Hague project is that one can investigate a crime scene only once. If you do not secure all possible evidence during this investigation, it will not be available for solving the committed crime. The digitalization of the crime scene provides opportunities for testing hypotheses and witness statements, but can also be used to train future investigators. For the CSI The Hague project, two groups at the Delft University of Technology, Systems Engineering² and BioMechanical Engineering³, joined their efforts to explore the potential of mediated and Augmented Reality for future crime scene investigation and to tackle current issues in crime scene investigation.

In Augmented Reality, virtual data is spatially overlaid on top of physical reality. With this technology the flexibility of virtual reality can be used while staying grounded in physical reality (Azuma, 1997). Mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device (Mann and Barfield, 2003).

¹ This article is based upon (Poelman et al., 2012).
² http://www.sk.tbm.tudelft.nl
³ http://3me.tudelft.nl/en/about-the-faculty/departments/biomechanical-engineering/research/dbl-delft-biorobotics-lab/people/

Figure 1. MEDIATED REALITY HEAD MOUNTED DEVICE IN USE DURING THE EXPERIMENT IN THE DUTCH FORENSIC FIELD LAB.

In order to reveal the current challenges in supporting spatial analysis for crime scene investigation, structured interviews with five international experts in the area of 3D crime scene reconstruction were conducted. The interviews showed a particular interest in current challenges in spatial reconstruction and the interaction with the reconstruction data. The identified challenges are:

■ Time needed for reconstruction: data capture, alignment, data clean-up, geometric modelling and analyses are manual steps.
■ Expertise required to deploy dedicated software and secure evidence at the crime scene.
■ Complexity: situations differ significantly.
■ Time freeze: data capture is often conducted once, after a scene has been contaminated.

The interview sessions ended with an open discussion on how mediated reality can support crime scene investigation in the future. Based on these open discussions, the following requirements for a mediated reality system that is to support crime scene investigation were identified:

■ Lightweight head-mounted display (HMD): It became clear that the investigators who arrive first on the crime scene currently carry a digital camera. Weight and ease of use are important design criteria; experts would like an HMD close to a pair of glasses.
■ Contactless augmentation alignment (no markers on the crime scene): The first investigator who arrives on a crime scene has to keep the crime scene as untouched as possible. Technology that involves preparing the scene is therefore unacceptable.
■ Bare-hand gestures for user interface operation: The hands of the CSIs have to be free to physically interact with the crime scene when needed, e.g. to secure evidence, open doors, climb, etc. Additional hardware such as data gloves, or physically touching an interface such as a mobile device, is not acceptable.
■ Remote connection to and collaboration with experts: Expert crime scene investigators are a scarce resource and are often not available on location when requested. Setting up a remote connection to guide a novice investigator through the crime scene and to collaboratively analyze the crime scene has the potential to improve the investigation quality.

To address the above requirements, a novel mediated reality system for collaborative spatial analysis on location has been designed, developed and evaluated together with experts in the field and the NFI. This system supports collaboration between crime scene investigators (CSIs) on location, who wear an HMD (see Figure 1), and expert colleagues at a distance.

The mediated reality system builds a 3D map of the environment in real time, allows remote users to virtually join and interact together in shared augmented space with the wearer of the HMD, and uses bare-hand gestures to operate the 3D multi-touch user interface.



The resulting mediated reality system supports a lightweight head-mounted display (HMD), contactless augmentation alignment, and a remote connection to and collaboration with expert crime scene investigators. The video see-through of a modified Carl Zeiss Cinemizer OLED (cf. Figure 2) fulfills the requirement for a lightweight HMD, as its total weight is ~180 grams. Two Microsoft HD-5000 webcams are stripped and mounted in front of the Cinemizer, providing a full stereoscopic 720p pipeline: both cameras record at ~30 Hz in 720p, the images are projected in our engine, and 720p stereoscopic images are rendered to the Cinemizer.

Figure 2. HEAD MOUNTED DISPLAY, MODIFIED CINEMIZER OLED (CARL ZEISS) WITH TWO MICROSOFT HD-5000 WEBCAMS.

As for all mediated reality systems, robust real-time pose estimation is one of the most crucial parts, as the 3D pose of the camera in the physical world is needed to render virtual objects correctly at the required positions. We use a heavily modified version of PTAM (Parallel Tracking and Mapping) (Klein and Murray, 2007), in which the single-camera setup is replaced by a stereo camera setup, with matching and pose estimation based on 3D natural features. Using this algorithm, a sparse metric map (cf. Figure 3) of the environment is created, which can be used for pose estimation in our Augmented Reality system.

Figure 3. SPARSE 3D FEATURE MAP GENERATED BY THE POSE ESTIMATION MODULE.

In addition to the sparse metric map, a dense 3D map of the crime scene is created. The dense metric map provides a detailed copy of the crime scene, enabling detailed analysis, and is created from a continuous stream of disparity maps that are generated while the user moves around the scene. Each new disparity map is registered (combined) using the pose information from the pose estimation (PE) module to construct or extend the 3D map of the scene. The point clouds are used for occlusion and collision checks, and for snapping digital objects to physical locations.
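As a rough illustration of this dense-mapping step (a sketch under stated assumptions, not the project's actual pipeline), the following uses OpenCV to turn one stereo frame into world-space points, assuming a reprojection matrix Q from stereo calibration and a pose (R, t) supplied by the pose estimation module:

```python
import cv2
import numpy as np

def register_frame(left_gray, right_gray, Q, R, t):
    """Convert one stereo frame into world-space points.

    Q is the 4x4 reprojection matrix from stereo calibration; (R, t)
    is the current camera pose from the pose estimation module.
    """
    # Dense stereo matching; the parameters are illustrative only.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=9)
    disparity = matcher.compute(left_gray,
                                right_gray).astype(np.float32) / 16.0

    # Reproject disparities into 3D points in the camera frame.
    points = cv2.reprojectImageTo3D(disparity, Q)
    points = points[disparity > 0.0]  # keep valid pixels only

    # Register: move the points into the shared world frame using the
    # camera pose, so successive frames extend one common dense map.
    return points @ R.T + t.reshape(1, 3)

# Each processed frame then extends the dense map as the user moves:
# dense_map = np.vstack([dense_map, register_frame(left, right, Q, R, t)])
```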

By using an innovative hand tracking system, the mediated reality system can recognize bare-hand gestures for user interface operation. This hand tracking system utilizes the stereo camera rig to detect hand movements in 3D.

Figure 4. GRAPHICAL USER INTERFACE OPTIONS MENU.


The cameras are part of the HMD, and an adaptive algorithm has been designed to determine whether to rely on colour, on disparity, or on both, depending on the lighting conditions. This is the core technology to fulfill the requirement of bare-hand interfacing.
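The article leaves the internals of this adaptive algorithm open. Purely as a hypothetical illustration of the general idea, a segmentation routine might switch between a skin-colour cue and a disparity cue based on measured scene brightness; every name and threshold below is an assumption:

```python
import cv2
import numpy as np

def hand_mask(frame_bgr, disparity, brightness_threshold=60.0):
    """Choose a segmentation cue from the lighting (illustrative only)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if gray.mean() > brightness_threshold:
        # Enough light: use a simple skin-colour range in HSV space.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, np.array([0, 40, 60]),
                           np.array([25, 180, 255]))
    # Poor light: fall back on disparity and keep only the pixels
    # closest to the cameras (the hands held in front of the HMD).
    near = disparity > np.percentile(disparity, 95)
    return near.astype(np.uint8) * 255
```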

The user interface and the virtual scene are general-purpose parts of the mediated reality system. They can be used for CSI, but also for any other mediated reality application. The tool set, however, needs to be tailored to the application domain. The current mediated reality system supports the following tasks for CSIs: recording the scene, placing tags, loading 3D models, plotting bullet trajectories and placing restricted-area ribbons. Figure 4 shows the corresponding menu attached to a user's hand.

The mediated reality system has been evaluated on a staged crime scene at the NFI's lab with three observers, one expert and one layman with only limited background in CSI. Within the experiment the layman, facilitated by the expert, conducted three spatial tasks: tagging a specific part of the scene with information tags, using barrier tape and poles to spatially secure the body in the crime scene, and analyzing a bullet trajectory with ricochet. The experiment was analyzed along seven dimensions (Burkhardt et al., 2009): fluidity of collaboration, sustaining mutual understanding, information exchanges for problem solving, argumentation and reaching consensus, task and time management, cooperative orientation, and individual task orientation. The results show that the mediated reality system supports remote spatial interaction with the physical scene as well as collaboration in shared augmented space, while tackling current issues in crime scene investigation. The results also show that there is a need for more support to identify whose turn it is, who wants the next turn, and so on. Additionally, the results show the need to represent the expert in the scene, to increase the awareness and trust of working in a team and to counterbalance the feeling of being observed. Knowing the expert's focus and current activity could possibly help to overcome this issue. Whether traditional patterns for computer-mediated interaction (Schümmer and Lukosch, 2007) support awareness in mediated reality, or rather new forms of awareness need to be designed, will be the subject of future research.

Further tasks for future research include the design and evaluation of alternative interaction possibilities (e.g. by using physical objects that are readily available in the environment), sensor fusion, image feeds from spectral cameras or previously recorded laser scans to provide more situational awareness, and the privacy, security and validity of captured data. Finally, though the system is being tested and used for educational purposes within the CSI Lab of the Netherlands Forensic Institute (NFI), only the application and testing of the mediated reality system in real settings can show its added value for crime scene investigation.

REFERENCES

■ R. Azuma, A Survey of Augmented Reality, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, 1997, 355-385.
■ J. Burkhardt, F. Détienne, A. Hébert, L. Perron, S. Safin, P. Leclercq, An Approach to Assess the Quality of Collaboration in Technology-Mediated Design Situations, European Conference on Cognitive Ergonomics: Designing beyond the Product - Understanding Activity and User Experience in Ubiquitous Environments, 2009, 1-9.
■ G. Klein, D. Murray, Parallel Tracking and Mapping for Small AR Workspaces, Proc. International Symposium on Mixed and Augmented Reality, 2007, 225-234.
■ S. Mann, W. Barfield, Introduction to Mediated Reality, International Journal of Human-Computer Interaction, 2003, 205-208.
■ R. Poelman, O. Akman, S. Lukosch, P. Jonker, As if Being There: Mediated Reality for Crime Scene Investigation, CSCW '12: Proceedings of the 2012 ACM Conference on Computer Supported Cooperative Work, ACM, New York, NY, USA, 2012, 1267-1276, http://dx.doi.org/10.1145/2145204.2145394.
■ T. Schümmer, S. Lukosch, Patterns for Computer-Mediated Interaction, John Wiley & Sons, Ltd., 2007.





DIE WALKÜRE

ON FRIDAY DECEMBER 16TH 2011 THE SYMPHONY ORCHESTRA OF THE ROYAL CONSERVATOIRE PLAYED DIE WALKÜRE (ACT 1) BY RICHARD WAGNER AT THE BEAUTIFUL CONCERT HALL 'DE VEREENIGING' IN NIJMEGEN. THE AR LAB WAS INVITED BY THE ROYAL CONSERVATOIRE TO PROVIDE VISUALS DURING THIS LIVE PERFORMANCE. TOGETHER WITH STUDENTS FROM DIFFERENT DEPARTMENTS OF THE ROYAL ACADEMY OF ART, WE DESIGNED A SCREEN CONSISTING OF 68 PIECES OF TRANSPARENT CLOTH (400X20 CM) HANGING IN FOUR LAYERS ABOVE THE ORCHESTRA. BY PROJECTING ON THIS CLOTH WE CREATED VISUALS GIVING THE ILLUSION OF DEPTH. WE CHOSE 7 LEITMOTIVS (A LEITMOTIV IS A RECURRING THEME ASSOCIATED WITH A PARTICULAR PERSON, PLACE OR IDEA) AND CREATED ANIMATIONS REPRESENTING THESE USING COLOUR, SHAPE AND MOVEMENT. THESE ANIMATIONS WERE PLAYED AT KEY MOMENTS OF THE PERFORMANCE.



CONTRIBUTORS WIM VAN ECK Royal Academy of Art (KABK) w.vaneck@kabk.nl

Wim van Eck is the 3D animation specialist of the AR Lab. His main tasks are developing Augmented Reality projects, supporting and supervising students and creating 3D content. His interests include real-time 3D animation, game design and creative research.

JEROEN VAN ERP Fabrique jeroen@fabrique.nl

Jeroen van Erp co-founded Fabrique, a multidisciplinary design agency in which the different design disciplines (graphic, industrial, spatial and new media) are closely interwoven. As a designer he was recently involved in the flagship store of Giant Bicycles, the website for the Dutch National Ballet and the automatic passport control at Schiphol airport, among others.

PIETER JONKER Delft University of Technology P.P.Jonker@tudelft.nl

Pieter Jonker is Professor at Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering (3ME). His main interests and fields of research are: real-time embedded image processing, parallel image processing architectures, robot vision, robot learning and Augmented Reality.

YOLANDE KOLSTEE Royal Academy of Art (KABK) Y.Kolstee@kabk.nl

Yolande Kolstee has been head of the AR Lab since 2006. She holds the post of Lector (Dutch for researcher in professional universities) in the field of 'Innovative Visualisation Techniques in higher Art Education' for the Royal Academy of Art, The Hague.

MAARTEN LAMERS Leiden University lamers@liacs.nl

Maarten Lamers is assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and board member of the Media Technology MSc program. Specializations include social robotics, bio-hybrid computer games, scientific creativity, and models for perceptualization.

STEPHAN LUKOSCH Delft University of Technology S.g.lukosch@tudelft.nl

Stephan Lukosch is associate professor at the Delft University of Technology. His current research focuses on collaborative design and engineering in traditional as well as emerging interaction spaces such as augmented reality. In this research, he combines recent results from intelligent and context-adaptive collaboration support, collaborative storytelling for knowledge elicitation and decision-making, and design patterns for computer-mediated interaction.

FERENC MOLNÁR Photographer info@baseground.nl

Ferenc Molnár is a multimedia artist based in The Hague since 1991. In 2006 he returned to the KABK to study photography, and that is where he started to experiment with AR. His focus is on the possibilities and the impact of this new technology as a communication platform in our visual culture.


ROBERT PREVEL Delft University of Technology r.g.prevel@tudelft.nl

Robert Prevel is working on a PhD focusing on localisation and mapping in Augmented Reality applications at the Delft Biorobotics Lab, Delft University of Technology under the supervision of Prof.dr.ir P.P.Jonker.

HANNA SCHRAFFENBERGER Leiden University hkschraf@liacs.nl

Hanna Schraffenberger works as a researcher and PhD student at the Leiden Institute of Advanced Computer Science (LIACS) and at the AR Lab in The Hague. Her research interests include interaction in interactive art and (non-visual) Augmented Reality.

ESMÉ VAHRMEIJER Royal Academy of Art (KABK) e.vahrmeijer@kabk.nl

Esmé Vahrmeijer is graphic designer and webmaster of the AR Lab. Besides her work at the AR Lab, she is a part-time student at the Royal Academy of Art (KABK) and runs her own graphic design studio, Ooxo. Her interests are in graphic design, typography, web design, photography and education.

JOUKE VERLINDEN Delft University of Technology j.c.verlinden@tudelft.nl

Jouke Verlinden is assistant professor at the section of computer aided design engineering at the Faculty of Industrial Design Engineering. With a background in virtual reality and interaction design, he leads the "Augmented Matter in Context" lab, which focuses on the blend between bits and atoms for design and creativity. He is co-founder and lead of the minor programme on advanced prototyping, and editor of the International Journal of Interactive Design, Engineering and Manufacturing.

SPECIAL THANKS We would like to thank Reba Wesdorp, Edwin van der Heide, Tama McGlinn, Ronald Poelman, Karolina Sobecka, Klaas A. Mulder, Joachim Rotteveel and last but not least the Stichting Innovatie Alliantie (SIA) and the RAAK (Regionale Aandacht en Actie voor Kenniscirculatie) initiative of the Dutch Ministry of Education, Culture and Science.

NEXT ISSUE The next issue of AR[t] will be out in October 2012.



