Design Report - Fuad Soudah (fsou4085), Assessment 4
Designing for Embodied Interaction

Virtual Reality
- The designed environment is placed in 3D: the x, y and z axes. Objects must always be placed with depth in mind.
- Interaction methods are limited, even to just one button in the case of Google Cardboard; viewing angles need to be bound to the application's interface.
- Perfect for compelling, slow-paced interaction.
- FOV (Field of View): ~100 degrees.
- Optimization is of quite high importance.
- Relies very heavily on hardware.
- Refresh rate varies: 60 Hz on smartphones, 90-120 Hz on VR headsets. Technically, it is best if 45-60 Hz is allocated per eye.

Mobile, Desktop, Web, Video Games
- The designed environment is placed in 2D; some elements may need an extra dimension for placement and animation purposes, yet the assumed standard is creation along the x and y axes.
- Ample interaction methods are available: Desktop - mouse, keyboard and other plug-in devices; Mobile - touch interaction and gestures; Gaming consoles - gamepad and motion-sensing controllers (Nintendo Wii, Microsoft Kinect, PlayStation Move); Web - largely the same as Desktop.
- Perfect for compelling, slow- and fast-paced interaction.
- FOV: Video Games ~65-90 degrees; Desktop ~30-60 degrees; Mobile ~15 degrees.
- Video Games: optimization is of medium importance; Web: optimization is of low-to-medium importance.
- Video Games: hardware requirements are heavy to medium; Web: low to medium.
- Refresh rate doesn't really matter as long as it's above 60 Hz; in theory, FPS (frames per second) exceeding the refresh rate will not be visually noticeable. Below 60 Hz the visuals may seem to flicker, tiring the eyes and in some cases leading to headaches.
Designing for VR is quite distinct. The most prominent issues one may come across are motion sickness due to low framerate (in which case optimization is crucial) and lack of immersion due to uncompelling visuals. Interaction methods are usually limited, so managing them ought to be done with care. Often, though, a myriad of workarounds is implemented to get things working.
Designing a website is an art that has potentially reached its peak in terms of what may be introduced. It is designed largely in 2D, unless one is dealing with animations or particular APIs for graphics manipulation. It usually comes with a specific structure: grids and elements that may be easily scanned and opened on a variety of devices, seamlessly and intuitively.
A website may be written in a variety of ways: you may write it yourself in a text-based application, or create it on platforms dedicated to those who cannot do any HTML, CSS or other web development. Even though a website is presented in 2D, writing one requires a somewhat more abstract approach, as maths, physics and even etymology can come into play.
As in any game engine, in Unity you may introduce a variety of lighting techniques. VR is particularly hardware-demanding, so optimization will save your life sooner than you'd expect. Pre-baked lighting (Global Illumination) will increase performance compared to real-time lighting. Also, real-time shadows do not run on Android, so pre-baking is almost a must-have.
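As a minimal sketch of how this could be enforced in practice, the editor script below marks every light in the open scene as Baked and disables its real-time shadows. The menu path and the idea of batch-converting lights are my own assumptions, not part of the original project.

```csharp
#if UNITY_EDITOR
using UnityEngine;
using UnityEditor;

// Batch-converts every light in the open scene to baked lighting and
// disables real-time shadows, which do not run on Android anyway.
public static class BakeAllLights
{
    [MenuItem("Tools/Set Scene Lights To Baked")] // menu path is an assumption
    static void SetLightsToBaked()
    {
        foreach (Light light in Object.FindObjectsOfType<Light>())
        {
            light.lightmapBakeType = LightmapBakeType.Baked; // editor-only property
            light.shadows = LightShadows.None;               // skip real-time shadows
        }
    }
}
#endif
```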
Designing for VR is closely related to designing video games, as both usually build on similar solutions (e.g. engines like Unity). Each usually requires a GUI (Graphical User Interface) and interaction methods. Technically, you could adapt video games to Virtual Reality, yet hardware demands are a limitation. Usually, a dedicated controller will manipulate the player's movement.
Designing for smartphones comes with difficulties such as the Midas Touch problem (every tap triggers a reaction) and the Fat Finger problem (inaccurate selection). The interface ought to adhere strictly to conventions, but that makes the process quite efficient, as the designer knows straight away which solutions will likely meet the users' needs and which will not.
What Worked?

- Complete Environment
We eventually managed to create a space that was relatively close to our vision (vaporwave aesthetics), albeit stitched up to meet the requirements of our project. The world was supposed to feel trippy and one-of-a-kind, evoking amazement and surprise, but with its own limitations, as it would eventually make the user decide to stop due to the repetitiveness of the experience (mirroring how it would work in real life as well). The experience was perceived as abstract and a tiny bit contained, with a balanced number of objects embedded so the user feels neither over- nor underwhelmed.
The world was at first created in a straightforward manner, with some interactivity enabled and objects resembling our intended (vaporwave) style.
- Positive User Evaluation
At first, users signalled that the experience made them sick: on one hand due to the low framerate and stuttering, and on the other because the visuals were undersaturated and overall felt quite unclean. Once the rendering techniques and the restructuring of the 3D models were complete, our users enjoyed the experience and described the additional emotions they felt as they experienced Trouble Cruise. We were thus able to tailor further development, convinced that we were on the right track.

- Optimized Application
We managed to include a vast number of objects that were detailed enough yet still relatively lightweight, so the application ran quite smoothly without causing motion sickness. LODs were implemented for certain objects; some objects were rescaled and some deleted. We also got rid of real-time lighting altogether. We could potentially have included baked lighting and occlusion culling for even further visual and performance improvements, but the time required for baking was simply exorbitant, and we were already quite happy with the current state.
The more objects our team introduced, the more difficult it became to manage the environment's performance. The Overdraw mode showcased key problem areas.
- Clean Project
A marginal number of errors, no missing prefabs, and a relatively balanced file structure. The final package was quite fair and contained, as a lot of effort was put into keeping it approachable for the user and for anyone who would like to introduce further iterations. Overall, quite a good job. Nevertheless, we had to carry on populating our world. In wireframe mode, you could see how densely populated with polygons the environment became.
I imported all the assets I had made for another project, of which the only part that could have impacted the rendering process was the Valve Lab Renderer asset. But when I tried using that asset alone in this project, nothing really happened: the colors remained washed out, a strange fog kept engulfing the player in VR mode, and my participants signalled that the app kept making them sick. Then I imported my older package in full: the colors radically improved, the performance kicked up, and my users signalled that the app was actually fun and enjoyable. This is one of those rare instances where I did something and genuinely have no idea why and how exactly it works.
With a properly configured ProOptimizer in 3ds Max, I was able to radically cut down the number of polygons, introduce LODs (Levels of Detail) and drastically improve the application's overall performance.
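For illustration, here is a minimal sketch of how such ProOptimizer-reduced meshes can be wired up as LOD levels in Unity at runtime. The component name, field names and transition thresholds are assumptions, not taken from the actual project.

```csharp
using UnityEngine;

// Attaches two pre-decimated meshes as runtime LOD levels.
public class LodSetup : MonoBehaviour
{
    public Renderer highDetail; // e.g. the original 3ds Max export
    public Renderer lowDetail;  // e.g. the ProOptimizer-reduced version

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods =
        {
            new LOD(0.5f,  new[] { highDetail }), // full detail down to ~50% screen height
            new LOD(0.05f, new[] { lowDetail })   // low-poly below that; culled under 5%
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```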
What Didn't?

- Educational Component
Our main objective was to introduce elements educating the audience about the impact of taking a variety of drugs, simulating how they influence a person and what the existing hazards are. The idea was to achieve this by introducing real-life representations of drugs that could be interacted with using Google Cardboard's button.
From the very beginning we planned on embedding external 3D models, as recreating many objects natively would have taken an enormous amount of time. Some came with a gigantic number of polygons, though. Unfortunately, optimizing each model would have taken too much time, so some had to be deleted.
Each interaction would trigger a response based on graphical illusions and manipulations of the in-game environment, such as distorted imagery, new objects popping up, and slowed-down or sped-up gameplay. The effects would have been mixable, offering insight into actual impact, even leading up to severe cases in which the game would stop responding, glitch out, or black in and out; the plan was to make use of such natural occurrences by even flooding the smartphone's hardware to make it unresponsive.

- Interaction Methods
Unfortunately, we never managed to accomplish our objectives due to the enormous difficulty of implementing the EventTrigger properly. Although we already had one version working with cubes, we were not able to adapt the effects to affect the player and their MainCamera. Objects resembling drug use were already featured, but the execution of the effects could not be done properly, partially due to the limitations of our programming skills, especially since the scripts we had were split between JavaScript and C#. (A sketch of what the missing piece might have looked like follows below.)
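Purely as a hedged sketch, the component below shows one way the missing link could have been wired up: a gaze click delivered through Unity's EventSystem (e.g. forwarded by an EventTrigger) slows the whole experience down for a few seconds, matching the "slowed-down gameplay" effect described above. The class name, scale and duration are illustrative assumptions.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical effect hook: a gaze click on a drug object slows gameplay.
public class DrugSlowdownTrigger : MonoBehaviour, IPointerClickHandler
{
    public float slowScale = 0.4f;     // assumption: 40% speed reads as "drugged"
    public float durationSeconds = 5f; // assumption: effect wears off after 5 s

    public void OnPointerClick(PointerEventData eventData)
    {
        StopAllCoroutines();
        StartCoroutine(SlowDown());
    }

    IEnumerator SlowDown()
    {
        Time.timeScale = slowScale;
        // WaitForSecondsRealtime ignores the altered timeScale.
        yield return new WaitForSecondsRealtime(durationSeconds);
        Time.timeScale = 1f; // restore normal speed
    }
}
```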
The interaction structure was implemented and working; however, the effects that were supposed to take place, unfortunately, did not.
Our environment became so huge that it exceeded some internal values within the engine, making the scene glitch out. The Z value should not surpass 100,000; ours did.
I experimented with baked lighting and reflection probes. The former took too long to render; the latter never really seemed to render at all. It's a shame, as the visuals would have radically improved had these techniques been implemented.
- Shader Manipulation
In the end we didn't manage to pull off our golden ticket: shader iterations, which would have invoked all the trippy elements and drug-like effects. Lacking this particular part, our environment became stripped of the educational component and of interaction altogether. As a result, what we were left with was really just an experience, and in a few cases it was even perceived as a music video (which was not particularly our aim).

- Machine Learning
One of the likely more interesting features was the implementation of machine learning techniques. Based on a script I developed this semester, I was able to transcribe the music's lyrics, generate synonyms for each sung line, and allocate them as tags for the Flickr API to fetch images, which would have produced a nuanced effect on the progression of the song. The system turned out to be inefficient, as it took a lot of time to roll out, but it was potentially powerful and working. Unfortunately, I ran out of time to implement the system in full. (A sketch of the fetching step follows below.)
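For illustration only, here is a minimal Unity-side sketch of the image-fetching step, assuming the synonyms for a sung line have already been generated elsewhere (the original used a separate script for that). It calls the public flickr.photos.search REST method; the class name, tag handling and the FLICKR_API_KEY placeholder are my assumptions, not the original code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Fetches candidate imagery for one sung line, using its synonym tags.
public class LyricImageFetcher : MonoBehaviour
{
    const string FLICKR_API_KEY = "YOUR_KEY_HERE"; // placeholder, not a real key

    public IEnumerator FetchImagesForLine(string[] synonymTags)
    {
        string url = "https://api.flickr.com/services/rest/" +
                     "?method=flickr.photos.search" +
                     "&api_key=" + FLICKR_API_KEY +
                     "&tags=" + UnityWebRequest.EscapeURL(string.Join(",", synonymTags)) +
                     "&format=json&nojsoncallback=1&per_page=5";

        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();
            if (string.IsNullOrEmpty(request.error))
                Debug.Log(request.downloadHandler.text); // JSON with photo IDs to load as textures
        }
    }
}
```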
Contribution

- Animation Implementation (Keyframing, Object Animations)
Implemented the early logic of the player's movement by animating the MainCamera and timing its position with the progression of the music. The player's movement followed a simple animated pathway, eased in and out by default, for the duration of the track (4:20); a sketch of this logic follows the list.

- Interaction Methods Implementation
Methods were developed based on the tutorial's outline and included a modified version that triggers different outcomes once an object is looked at and interacted with, plus additional UI elements monitoring the count of interactions, which would potentially have been helpful once we had the shader manipulation working.

- Rendering Techniques
Merged solutions used in another project of mine, revamping the overall quality and performance and meeting the users' need not to be made sick.

- Optimization (LOD Implementation)
Many 3D models were rebuilt one by one in ProOptimizer (3ds Max) and structured, functioning as LOD levels in some cases.

- Ongoing Beta and Bug Testing
Constant testing of performance, flickering and visuals. Also checked whether the application ran on Android at all (an issue at times), and sought and found solutions for such instances.

- Object Integration
Rebuilt some environments by enlarging them, enclosing spaces, and animating values such as color, size and position for particular objects like cars.

- User Evaluation (Formative and Summative User Testing)
A formative assessment was made during the development process with 15 participants; a summative assessment was made at a later, near-finished stage.

- Other (Machine Learning attempt, Package Preparation)
Figured out how to increase the Raycaster distance; attempted to include the machine learning scheme of nuanced imagery; recorded the film and edited it in Adobe Premiere; prepared the final .7z package.
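As referenced under Animation Implementation, here is a hedged sketch of what the eased, music-timed camera movement could look like. The waypoint array, component name and the use of Mathf.SmoothStep are my assumptions; only the 4:20 duration is taken from the project.

```csharp
using UnityEngine;

// Moves the MainCamera rig along hand-placed waypoints over the track's length.
public class CameraPathDriver : MonoBehaviour
{
    public Transform[] waypoints;           // assumption: path nodes set in the Inspector
    public float trackLengthSeconds = 260f; // 4:20, the music's duration

    void Update()
    {
        if (waypoints == null || waypoints.Length < 2) return;

        float t = Mathf.Clamp01(Time.timeSinceLevelLoad / trackLengthSeconds);
        float eased = Mathf.SmoothStep(0f, 1f, t); // eased in and out, as in the project
        float scaled = eased * (waypoints.Length - 1);
        int segment = Mathf.Min((int)scaled, waypoints.Length - 2);

        transform.position = Vector3.Lerp(
            waypoints[segment].position,
            waypoints[segment + 1].position,
            scaled - segment);
    }
}
```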