DESIGNING FOR VR

CONTENTS

A DESIGNER’S GUIDE TO VR
NAVIGATION IN VR
A VR TOOLKIT TO HELP RAPID PROTOTYPING
PLAYING A 360 VIDEO IN VR
10 EASY STEPS TO CREATING AN UNWRAPPED SPHERE IN BLENDER
VR – A NEW WAY OF WORKING FOR A NEW MEDIUM


A DESIGNER’S GUIDE TO VR
By Lawrence Cox


INTRODUCTION

When the head of Oculus, Cory Ondrejka, was asked when Virtual Reality headsets would make it into the average American home, he replied, ‘As soon as we can get them there.’ VR’s potential applications in gaming, immersive experiences and learning will be the ones that get all the credit for that development, but once inside the average home, how we use VR can start to broaden. This includes day-to-day web interactions – those that have been conducted via smartphones for the past half-decade. Yet while the web is filled with insights and guidelines around the application of VR in gaming and immersive experiences, very little research has been done into the impact it may have on regular web experiences. So we decided to right the ship by rebuilding our agency site for VR and exploring our findings in a series of articles.

It’s no surprise that, as an industry, we’re still in the early stages of designing for native Virtual Reality experiences. It’s only this year that the headsets have begun to break through to consumer adoption. With that in mind, here we share a series of articles that showcase our learnings of this new medium, acquired while working on our agency projects.



FROM SCREEN SIZES TO VIEWABLE AREAS

Throughout the 1970s and 80s, we predominantly designed for newspapers and billboards. The 90s saw the rise of digital design, and in the early 2000s, that digital design was transplanted onto our mobile screens. So we’ve been conditioned to design within the parameters of specific 2D areas, no matter the platform or medium. Virtual Reality bucks that trend and puts designers in an existential panic. Rather than screen sizes, we’re designing for viewable areas. And the 360 degrees of vision means we’re spoilt for choice when it comes to content positioning and layout. We’ve moved from 2D to 3D. So without the parameters of two dimensions to guide interface design, the first question to ask is: Where does a designer put things?

When building AnalogFolk’s website-in-a-headset prototype, we optimised for a sit-down experience, as that’s how most of us enjoy longer-term web consumption. This means we can immediately start disregarding areas of our 360 degree canvas – those areas that aren’t visible and therefore not usable (for the most part). The space behind the person immediately becomes, well, behind them. Naturally, it’s impractical to have the user constantly looking around, as this can lead to a lack of orientation within the experience and, of course, quickly becomes tedious and tiring. Therefore, we need to work in the user’s field of vision, FOV for short. The specific FOV is headset-dependent, so it fluctuates from one to another, but the average tends to be between 90 and 110 degrees. As we were using the Oculus Rift (Development Kit 1), our FOV was 110 degrees.

VR expert Alex Chu discovered that, on average, you can comfortably rotate your head 30 degrees to the left or right to a maximum of 55 degrees, and you can comfortably look up 20 degrees to a maximum of 60 degrees. Looking down, you can comfortably go 12 degrees, to a maximum of 40 degrees. When you couple this with your FOV (from a seated position), you can plot the areas people can see comfortably and where they’ll strain.
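To make those numbers easier to work with, here’s a small helper (our own sketch, not part of Chu’s research) that classifies a gaze direction against the figures above. The zone names and the idea of a single classifier are ours, purely for illustration:

```csharp
using UnityEngine;

// Sketch: classify a head rotation against Alex Chu's comfort figures.
public enum GazeZone { Comfortable, Strenuous, Dead }

public static class ComfortZones
{
    // Horizontal: comfortable to 30 degrees, possible to 55.
    // Up: comfortable to 20 degrees, possible to 60.
    // Down: comfortable to 12 degrees, possible to 40.
    public static GazeZone Classify(float yawDegrees, float pitchDegrees)
    {
        float yaw = Mathf.Abs(yawDegrees);
        bool lookingUp = pitchDegrees >= 0f;
        float pitch = Mathf.Abs(pitchDegrees);

        float comfortPitch = lookingUp ? 20f : 12f;
        float maxPitch = lookingUp ? 60f : 40f;

        if (yaw > 55f || pitch > maxPitch)
            return GazeZone.Dead;
        if (yaw > 30f || pitch > comfortPitch)
            return GazeZone.Strenuous;
        return GazeZone.Comfortable;
    }
}
```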



With the available dimensions laid out, we can start to allocate suitable areas for content and rule out dead zones. Dead zones are the areas outside of these dimensions, where integral content should not be placed. However, that doesn’t make them completely redundant. Because it’s a novel, immersive experience, people are curious when they put on a VR headset. In user testing, they turned around and looked at things, exploring the new reality. So it’s beneficial to have something fun – an Easter egg, for example – that rewards their curiosity and makes the best use of VR’s 360 capabilities.

It’ll come as no surprise that the zones deemed comfortable should house the main content. Areas that are accessible but strenuous (that no man’s land between dead zones and suitable areas) can contain content, but the user should only have to interact with them briefly.



PLANE THINKING

Now we’ve narrowed our 360 degrees into comfortable zones for digesting content, we need to start thinking about depth. We’ve added another dimension, placing a user into a whole new reality, so how do we design for three dimensions?

Applying basic film, painting and photography principles, we settled on three planes within our interface: a foreground, midground and background. Each of those planes can have multiple canvases sitting at different depths within a plane to enhance the 3D feeling. Alex Chu’s research into depth perception found the best distance to place content to ensure you get a strong 3D effect is between 1 and 20 metres. Anything closer and the user becomes cross-eyed; anything further away becomes flat.

We tested a few depths of planes in our prototype, settling on the foreground at 3 metres away from the user, the midground at 6 metres (with slight variations of depth between different canvases), and the background at 20 metres, but this could be set further back depending on what type of background was selected.



FOREGROUND

We set the foreground plane as 10:3 (1200px by 360px). As the closest plane to the user, sitting 3 metres away, the other planes must be seen behind it. So we anchored it to the bottom of the FOV. That omnipresence in the user’s initial FOV made it the ideal area for navigational elements.

The plane itself can be divided into three canvases – a Primary (central) canvas and two Secondary ones – with the navigational elements spread across them. (Read more about navigational principles of VR in the next chapter.) These three canvases should correspond with the midground content zones, as I’ll discuss in a moment.

The peripheral Secondary canvases can be tilted (50 degrees) around the user to maintain a consistent depth measurement from the eye and set up an immersive experience (as seen below), rather than a feeling of looking at a flat surface.


MIDGROUND

After playing with a few plane sizes, we set the midground at 2:1 (1800px by 900px). In the interests of a simple interface, we split the plane into three canvases sitting 6 metres from the user, though more canvases could be added to give depth and complexity to the design.

As with the foreground navigation hub, the three canvases are made up of a Primary one flanked by two Secondary canvases. The Primary canvas dominates the user’s initial FOV, so is used to showcase main content. As the user will have to turn their head to view them fully, the Secondary canvases house the secondary content (corresponding with the navigational elements found in the matching foreground canvas). These peripheral canvases can once again be tilted forwards, so content is easily digestible and the experience feels wrapped around the user.
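To make this concrete, here’s a rough sketch of how a canvas could be positioned on these planes in Unity. The helper and its names are ours, purely for illustration: each canvas sits at a given depth and is swung around the seated user by a yaw angle (0 degrees for a Primary canvas, roughly ±50 degrees for the tilted Secondary ones), then turned towards the viewer.

```csharp
using UnityEngine;

// Sketch: place a canvas at a given depth, rotated around the user by
// yawDegrees so it keeps a consistent distance from the eye.
public static class CanvasPlacer
{
    public static void Place(Transform canvas, Transform user,
                             float depthMetres, float yawDegrees)
    {
        // Rotate the user's forward direction around the vertical axis.
        Vector3 direction = Quaternion.Euler(0f, yawDegrees, 0f) * user.forward;

        canvas.position = user.position + direction * depthMetres;

        // Turn the canvas towards the viewer (flip the argument if your
        // UI faces the other way).
        canvas.rotation = Quaternion.LookRotation(canvas.position - user.position);
    }
}

// Usage, matching the depths in the article:
//   CanvasPlacer.Place(navPrimary, user, 3f, 0f);   // foreground nav
//   CanvasPlacer.Place(primaryMid, user, 6f, 0f);   // midground Primary
//   CanvasPlacer.Place(leftMid,    user, 6f, -50f); // tilted Secondary
//   CanvasPlacer.Place(rightMid,   user, 6f,  50f); // tilted Secondary
```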



BACKGROUND

You have a few options when it comes to the background, but the obvious qualities of VR are put to best use by wrapping the user in a background sphere, whether that’s a 360 video or image (with the same aspect ratio as the midground – 2:1; we used 3600px by 1800px). Read up more on how to build that sphere and implement a 360 video in following articles. We had our background sphere sitting 15 metres from the user, but it could be anywhere up to 20 (particularly if you use multiple canvas depths in your midground).

Wherever you set your background and whether it’s spherical or flat-pack, its design has a sizeable impact on the VR interface. In fact, it ultimately dictates the overall user experience. Using a spherical background or skybox of an endless landscape will blow away a user with the vastness of the experience. That’s great for how immersive it feels, and the sense of ‘presence’ (meaning a user feels connected to a reality that’s outside of their own physical body via technology). But at the same time, such a vast immersiveness can be detrimental to engagement with the rest of the content, navigation and design. In our user tests, we found some people spent more time looking around the landscape than interacting with the design. Beauty can become a distraction.

On the other hand, placing someone in a confined space can make the experience feel underwhelming. So you need to find a balance and weigh up the pros and cons of both. The needs of your user and role of the experience you’re building can help to answer these questions. For example, are you helping people to complete a functional form or digest complex content? Then remove distractions wherever possible.


CONCLUSION

As this platform is still in its infancy, insights and learnings will constantly evolve. Such is the nature of new technologies: the only way we can learn from them is through iteration and development.

We’ll release our findings in the form of further articles, but to aid a new breed of VR designers in the here and now, we’ve created a Sketch template from our current findings. To go along with this, we’re in the process of making a Unity VR toolkit, which will allow designers and UX to quickly create click-through prototypes by themselves. Stay tuned for the release.



NAVIGATION IN VR
By Paolo Cattaneo


INTRODUCTION

In recent weeks, we’ve been working on creating VR design guidelines for user interfaces through actual user testing. The first usability study focused on two out of four navigation frameworks. In order to assess which type of navigation might work best, we tested five variations on 20 people using usability metrics.

For the test, we used an Oculus CV1 with gaze and click as an input method. Participants were presented with five designs in random order (to minimise familiarity biases) and asked to navigate to a specific page and back. For this round of user testing, we decided to focus on two frameworks: Hierarchical, and Hub and Spoke.
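For readers unfamiliar with the input method: gaze and click means aiming with the centre of your view and confirming with a single button press. A minimal sketch of how such input can be wired up in Unity (our illustration, not the actual test harness):

```csharp
using UnityEngine;

// Sketch of gaze-and-click input: cast a ray from the centre of the
// user's view and 'click' whatever it hits when a button is pressed.
public class GazeInput : MonoBehaviour
{
    public Camera headsetCamera;   // the VR camera
    public float maxDistance = 20f;

    void Update()
    {
        Ray gaze = new Ray(headsetCamera.transform.position,
                           headsetCamera.transform.forward);

        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, maxDistance) &&
            Input.GetButtonDown("Fire1"))
        {
            // Tell the gazed-at object it was clicked; it can react
            // however it likes (e.g. navigate to another screen).
            hit.collider.SendMessage("OnGazeClick",
                                     SendMessageOptions.DontRequireReceiver);
        }
    }
}
```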



HIERARCHICAL NAVIGATION

This follows a standard site structure, with the main navigation listing all index screens on all screens and in a consistent position.



HUB AND SPOKE NAVIGATION

This pattern gives users a central index on the main screen from which they can navigate out and back. Users can’t navigate between sub-screens, and must return to the main one each time.
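In code terms, the pattern boils down to two moves – out to a spoke, and back to the hub. A minimal Unity sketch (the scene names are hypothetical, not from our build):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of hub-and-spoke navigation: users go out to a sub-screen and
// must come back through the hub; there is no spoke-to-spoke move.
public class HubAndSpokeNav : MonoBehaviour
{
    const string HubScene = "Hub"; // hypothetical scene name

    public void OpenSpoke(string spokeScene)
    {
        SceneManager.LoadScene(spokeScene);
    }

    public void ReturnToHub()
    {
        SceneManager.LoadScene(HubScene);
    }
}
```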



Within the hierarchical framework, we decided to place the navigation in two different positions: at the user’s feet and within the work space zone. For the hub and spoke framework, menu items were placed within the content and peripheral zones.

Option A – HIERARCHICAL NAV: AT THE USER’S FEET

Option B – HIERARCHICAL NAV: IN THE USER’S FIELD OF VIEW

Option C – HUB AND SPOKE: AT THE USER’S FEET

Option D – HUB AND SPOKE: IN THE USER’S FIELD OF VIEW

Option E – HUB AND SPOKE: IN THE USER’S PERIPHERAL VIEW



FRAMEWORK

Users accomplished their task successfully with every design option. Options B, C and D achieved success rates of 100% without causing users any discomfort. They also saw comparable task completion times and received similar satisfaction ratings. Options A and E also achieved 100% success rates, but task completion times were longer and satisfaction rates lower. This is because users had to perform unnatural head movements in order to gaze at the correct menu item.

Option D performed best of all. The time it took to locate navigation items was shorter when using this design, and people were also able to complete the task more quickly and efficiently than they did when using the other options.

Option A performed poorly when it came to completion times and user satisfaction, as people struggled to gaze and click due to the proximity and size of the items.



PLACEMENTS

Options A, C and E performed poorly during testing. Many users struggled to notice the navigation, and in the case of option A, they lingered over the module, struggling to realise they were looking at the right piece of information. Overall, people were slower when using this design than they were when using options B and D.

IN SUMMARY

While most immersive VR experiences tend to either hide navigation items from the users or integrate them within the experience itself in order to maintain a high level of presence, when it comes to functional interfaces, users tend to struggle with this paradigm.

The navigation framework seems to be less important than the placement when it comes to completing a task. Overall, users responded well to designs where the navigation was in sight regardless of the framework. This suggests they were more concerned about accomplishing a task than exploring a virtual environment.

While the primary goal of most VR experiences is to have people explore and get ‘lost’ in a virtual world, there are situations where designing for speed and function could be more effective.

The navigation framework, however, does matter from an information architecture point of view. Based on our designs, the hierarchical navigation didn’t allow us to place more than a few items on the page, while the hub and spoke framework allowed users to browse through a considerable number of options and amount of content at the same time with ease. With hub and spoke, hierarchy can be implied by proximity, and secondary content can be pushed out of immediate view but still remain accessible.


While designing and testing for VR, it also became apparent that people naturally tend to rely on what they’ve learnt from years of browsing the web, so using its standards will increase ease of use and discoverability.

VR user interfaces still need affordances to indicate what can be interacted with, and where that interaction takes place; similarly, using the logo as a home button was a paradigm completely understood by participants.



A VR TOOLKIT TO HELP RAPID PROTOTYPING
By Rick Smith


INTRODUCTION

2016 saw a huge step forwards in the adoption of VR headsets in the consumer space. This was thanks to the release of the Oculus CV1 (Consumer Version) and the Google Daydream, plus the announcement that the minimum computing specification to run a VR headset would be lowered in the future. While this hasn’t instantly led to the widespread adoption everyone wanted, it’s still very much early days. We believe VR headsets will play a big role in computing in the future, but one of the main problems currently is that the user experience needs to be better.

Here at AnalogFolk, we’re working on a project to come up with a set of better UI paradigms for browsing and interacting with content in a VR headset, whether that be a seated experience or a full-blown, room-scale one. As a starter project, we’ve taken to adapting our current agency site into a VR experience, essentially looking at the problem through the lens of, ‘What if, in the future, everyone browses web content through a headset and not a computer screen?’ This has led us to develop a couple of functioning prototypes of layouts and navigation paradigms, so we can run some user testing on them and gain much-needed feedback.



While these were great for testing fully functional prototypes, the team found it frustratingly difficult to quickly iterate and respond to the findings. Many of us have come from environments where we use paper prototypes or loose wireframes to test these user experience hypotheses, and having to wait for proposed changes to be developed into a new prototype takes a lot more time than changing paper ones.

This insight led our team to develop a VR toolkit of sorts. After working with the experience designers and mapping out the bare minimum it would take to test these user journeys, we came up with a Unity project that could allow designers to quickly import their designs and make them interactable.

The toolkit lets our UI designers import designs from VR template Sketch files, so they can see whether their layout works visually in the headset. It also allows our experience designers to quickly create new pages or scenes and specify that the imported images be ‘clickable’ and link to these new scenes.

This gives us an environment in which the designers can develop and test new layouts and user journeys in a matter of minutes instead of days. When we’re happy with certain UI elements, we can save these to a library for quick re-use, allowing us to build more complex layouts even faster.
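The toolkit’s actual API isn’t shown here, but as a sketch of how the ‘clickable’ behaviour might work, a component like this (the names are ours, purely hypothetical) could be dropped onto an imported image and paired with a gaze-and-click raycaster like the one sketched in the previous chapter:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: makes an imported design image 'clickable', linking it to
// another prototype scene. Component and field names are hypothetical.
[RequireComponent(typeof(Collider))]
public class ClickableLink : MonoBehaviour
{
    public string targetScene; // set by the designer in the Inspector

    // Called by a gaze-and-click raycaster via SendMessage("OnGazeClick").
    void OnGazeClick()
    {
        if (!string.IsNullOrEmpty(targetScene))
            SceneManager.LoadScene(targetScene);
    }
}
```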




PLAYING A 360 VIDEO IN VR
By Rick Smith


INTRODUCTION

Over the past year, we’ve seen a number of cheaper consumer-grade 360 cameras hit the market. That’s led to a lot more user-generated 360 content making its way onto the web. The problem is, the web is still very much a 2D experience, so apart from sites such as YouTube, which can detect when you’re trying to view a 360 degree video and allow you to drag your view around, most sites are stuck showing you a warped panoramic version of the photo or video. We’re going to show you how to play this 360 content using Unity, allowing you to easily view the content more naturally through a Virtual Reality headset, such as an Oculus Rift. The steps for building a 360 viewing experience are easy to follow. So let’s get going…



WHAT’S UNITY?

As mentioned, we’re going to use Unity to view our content. Unity is a game engine for developing 3D environments through which you can view 360 content. But the most important thing about it is it’s free.

GETTING STARTED

Once installed, open Unity and create a new project. Name the project ‘360 Content Viewer’ and make sure 3D is selected. After the project has been created, you’ll be taken to an empty scene. We’ll be using a spherical 3D object to project our content onto, so we’ll need a 3D sphere to wrap around the current camera in the scene.

Unfortunately, we can’t use a standard 3D sphere from Unity, as it will project the content onto the outside of the sphere by default. So we’ll need to create an ‘unwrapped’ sphere. We’ve created this asset in a 3D modelling program called Blender for you already; or, if you’re feeling studious, you can read our 10-step guide to building one from scratch in the next article.
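As an aside, if you’d rather stay inside Unity, you could make a standard sphere viewable from the inside by flipping its normals at runtime – a sketch of that alternative is below. Bear in mind that the built-in sphere’s UV layout isn’t a clean equirectangular unwrap, which is one reason we build the sphere in Blender instead.

```csharp
using UnityEngine;

// Sketch: attach to a standard Unity sphere to make its surface visible
// from the inside, as an alternative to an 'unwrapped' Blender sphere.
public class InsideOutSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point every normal inwards so shading works from inside.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse triangle winding so the inside faces aren't culled.
        for (int sub = 0; sub < mesh.subMeshCount; sub++)
        {
            int[] tris = mesh.GetTriangles(sub);
            for (int i = 0; i < tris.Length; i += 3)
            {
                int tmp = tris[i];
                tris[i] = tris[i + 2];
                tris[i + 2] = tmp;
            }
            mesh.SetTriangles(tris, sub);
        }
    }
}
```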


Once you have the 360-content-sphere .blend file, go back into Unity and create a new folder in the ‘Assets’ folder called ‘Models’. Then simply drag the .blend file into that folder in the Unity window so it looks like this:


Next you can drag the 360-content-sphere from there onto your scene (see panel above). After that, you’ll have something like this:



If yours looks different, make sure the sphere object’s position and the Main Camera’s position are both at 0, 0, 0 in the inspector panel (on the right). You should also change the rotation of the sphere on the X-axis to -90: it won’t make a difference now, but when you add the content, it will make sure it’s orientated correctly.

Now save the scene, as we’re ready to import our 360 content.

To import a video, we need to create a new folder in the ‘Assets’ folder called ‘Streaming Assets’ – this is the folder into which you’ll need to drag your video file. This step can take a while, especially if your video file is quite large, so don’t worry if it looks as if Unity has frozen – it’s just doing its stuff! When the video has been imported into Unity, you can then drag this video onto the 360-content-sphere, which will create a ‘Materials’ folder in ‘Streaming Assets’ and will add the generated material as a component of the 360-content-sphere.



Now we have the content projected onto the sphere, we need to get the video to play, as it won’t by default. To do this, we need to create a script and attach it to the 360-content-sphere. Create a new folder in the ‘Assets’ folder called ‘Scripts’. In the folder, right-click and select ‘C# Script’, then name this script VideoPlay. If you double-click this new script, it will open in another program for you to edit, looking something like this:



Replace everything from the first curly brace to the last, so it looks like the below, then save the script.
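The original screenshot isn’t reproduced here, but in Unity versions of this era video was typically played through a MovieTexture, so the finished script would look something like this (the loop flag is our own assumption, remove it if you don’t want the clip to repeat):

```csharp
using UnityEngine;

// Plays the video that was dragged onto the 360-content-sphere.
public class VideoPlay : MonoBehaviour
{
    void Start()
    {
        // The imported video becomes the material's main texture,
        // which Unity of this era exposes as a MovieTexture.
        MovieTexture movie =
            GetComponent<Renderer>().material.mainTexture as MovieTexture;

        if (movie != null)
        {
            movie.loop = true; // assumption: loop the clip
            movie.Play();
        }
    }
}
```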

Now you can close that program and switch back to Unity, where you can drag that newly created script onto the 360-content-sphere, like you did with the video. This will ensure the video will autoplay as soon as the program is run. To test, you can hit the play button at the top and you should see the video playing in the top window. You won’t be able to pan around in this view, but if you connect up your VR headset, you should be able to look around the video as you normally would.



10 EASY STEPS TO CREATING AN UNWRAPPED SPHERE IN BLENDER
By Rick Smith


INTRODUCTION

As part of our exploration into how Virtual Reality will impact day-to-day web interactions, we’ve been creating a lot of 360 content, but we’ve struggled to find a good, fast way to view it. By default, most devices allow you to pan around the image or video on a 2D screen. Instead, we wanted to create an environment where we could view the content quickly and easily in a VR headset – experiencing it as an end user would. But traditionally this takes time and, on a project that requires multiple iterations, time is everything.

Our solution was building an unwrapped sphere of 360 content that we could then project directly into a VR headset, saving us precious hours.



STEP 1

The first thing you want to do is delete the default cube in Blender and insert a UV sphere into your scene. You can do this quickly by pressing Shift + A, which will bring up the Add menu, and selecting ‘UV Sphere’.

STEP 2

When you have the sphere in place, you need to delete the top and bottom vertices. Right-click the top vertex, then Shift + Right-click the bottom vertex and press delete to get rid of them. This will open up two holes at the top and bottom of the sphere.



STEP 3

Next, you have to reduce the diameter of those holes. Select one of them by Alt + Right-clicking on an edge of the hole, then Shift + Alt + Right-click on the other to select both at the same time. When you have both selected, press ‘E’ to start the extrude tool, then ‘S’ to switch to scale, and finally Shift + ‘Z’ to exclude the Z axis, so the scale happens only in the XY plane. This should close up the holes, but not all the way.



STEP 4

You’ll now have a sphere that’s slightly flatter at the top and bottom, so we want to return curvature to those areas. Keep the hole edges selected and press ‘S’ and ‘Z’ to scale on that axis again and raise / lower the vertices. This step is easier to do if you use the orthographic view from the front by pressing ‘1’ then ‘5’ on the numpad. (For those without a numpad, go to user preferences and tick ‘Emulate Numpad’ in the input settings – this will allow you to use the top row numbers.)



STEP 5

With that completed, we need to unwrap the sphere. Mark a seam by Alt + Right-clicking one of the vertical edges to highlight it. Then press Ctrl + ‘E’ to bring up a menu and select ‘Mark Seam’.



STEP 6

We can then proceed to unwrap the sphere. This is easier with a second window open, so you can see the result as it’s unwrapped. To do this, drag a new window out using the tab in the top-right, from right to left, then switch that window to ‘UV/Image Editor’.



STEP 7

Go back to the first window and press ‘U’ to bring up the UV menu, then select ‘Sphere Projection’. In the Tools panel in the bottom-left, make sure the ‘Direction’ option for ‘Sphere Projection’ is set to ‘Align to Object’, and that ‘Scale to Bounds’ is ticked. This will unwrap a map of small rectangles in the right-hand window.



STEP 8

You need to ensure all the faces are directed inwards, so you can project your content onto the inside of the sphere. In the ‘Shading/UVs’ tab, click the ‘Flip Direction’ button in the ‘Normals’ section.

STEP 9

Then you need to make sure your content appears in the sphere as a single, seamless surface, so a user won’t be able to see the individual faces of the sphere. In the ‘Shading’ section, set the ‘Faces’ option to ‘Smooth’.

STEP 10

That’s it! Your .blend file’s ready to be used in a Unity project to project your 360 video or photo onto.



VR – A NEW WAY OF WORKING FOR A NEW MEDIUM
By Emma Conti


INTRODUCTION

While the R&D team’s main focus is to make cool new things, we’re also learning that the traditional roles and ways of working we adhere to in agencies don’t always work. We’re having to evolve quickly to ensure we’re delivering the best possible stuff.



KEY FINDINGS

Roles are less defined than ever. UX and Design’s roles were already blurry, but, working in Unity, this has shifted even further. Instead of taking assets from Design and building them, the team had to rebuild all the designs from scratch. This process was time-consuming and the result too high fidelity for testing purposes. To get around this, the team developed a toolkit, so UX and Design could work straight into Unity’s testing lab. Once we were happy with the findings, we could then build and polish the final version.

Agile doesn’t work. You could argue that pure agile has never fully worked in agencies (but that’s a whole other article), although product-specific projects are suited to this framework. But for us, working in the new medium of VR, this methodology hasn’t been a success.



When planning out the tasks we wanted to achieve within each sprint, we had a broad, top-level list, but the detail wasn’t there. We had a rough idea of timings for getting things done, but, because working in VR is so new, they were way off and frequently changed on a daily basis. This didn’t improve over time, as you’d expect with agile.

We remain flexible, and change direction quickly, but we’re no longer held to sprints. Instead, we’re heading towards a more scientific approach: hypothesising, experimenting, testing and publishing findings. It’s too early to tell whether this is the best way forwards, but this is where we’re at and we’ll report back with our next set of findings.
