Expanded Interface

Preliminary Research

At the beginning of this brief we were shown a couple of videos as examples of an expanded interface. I was impressed with the object remote, which allowed the user to place an object such as a toy or a square into an oak bowl; the bowl would then play media associated with that object. I started by looking at different ideas posted on the blog, then moved on to my own research, watching videos on YouTube and other websites. I looked further into different interfaces such as touch interfaces, voice interfaces and the Kinect for the Xbox. I wanted to expand on one of these ideas, as I felt there were so many ways to experiment with them. After a little more research I stumbled onto flexible OLED film, a bendable display screen that can show both images and video. I was impressed by this and wanted to look into the idea further.
Starting my idea

My first idea after looking at the flexible OLED display was to create a digital book, which would allow the user to store their favourite books on a built-in hard drive. The book features OLED pages, so any text shown on the pages is loaded rather than printed as in an ordinary book. Along with this, the book also acts as a notepad and sketchbook: the user is able to write and draw on the pages and can save their work to the hard drive, which is located in the spine of the book. The idea behind this was to expand the way a book works and add the interactions of touching and drawing. I wanted to make the book more interactive and fun for its user, and to blend a notepad and sketchbook together with a reading book. Based on the feedback I was getting from my tutors, I realised this idea couldn't really go forward any more. I took certain elements from it and started developing a few more ideas, which I would later go on to use for my final piece.
Research

After I changed my idea, I started looking into holograms. This came to me after watching hologram videos; my inspiration comes from films like Iron Man and Minority Report. I searched mainly through YouTube to see what I could find, as I wasn't sure how I could present learning information as a hologram. I wanted to keep the idea of making learning fun from my original book concept, as I felt people would learn better that way. I was then told about infographics, which are a visual way of presenting information. After researching infographics I started developing my idea fully: the user interacts by voice and movement with an AI system, which acts as a search engine that projects information back to the user. The idea is aimed at visual learners, who find it easier to take in information by seeing it presented visually as images rather than by hearing or reading it. I then started looking at ways I could demonstrate this, as there was no chance of me creating a real hologram. First I had to find a topic to present. I wanted a topic people didn't know too much about, so they would learn something through my infographics. I was impressed by the Gulf of Mexico oil spill infographics I found on YouTube; I found them very informative and entertaining, and they made me decide to focus on a natural disaster.
I then looked into infographics in more depth, researching visual signs, pictographs and icons. I read authors like David Crow and Rayan Abdullah to build a general understanding; I managed to find a couple of their books, which I took out and highlighted key passages explaining how icons and images communicate. I also looked into AI software and speech recognition as ways I could experiment with the user talking to a machine. I found a piece of software called Zabaware, which gives the user the ability to control their computer with the help of an AI assistant, although I didn't manage to get it to open or work. Further research led me to Windows Speech Recognition, which can open programmes by voice command. This research helped me plan out my experiments, as I found different methods I could use for interaction, and the books helped me structure my infographics animation, as I read about what matters when trying to communicate visually.
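The kind of voice-command interaction I was researching can be sketched in a few lines of Python using the open-source speech_recognition package. This is only an illustration of the idea, not the Windows Speech Recognition tool I actually used, and the command phrase, player and filename in it are invented for the example:

```python
# A minimal sketch of opening a programme by voice command, assuming the
# open-source speech_recognition package and a working microphone. This
# illustrates the interaction researched above; it is not the software
# used in the project. The phrase and media player path are hypothetical.
import subprocess
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # tune for room noise
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    print("Heard:", command)
    if "open the video" in command:
        # Launch a media player with the animation (hypothetical path).
        subprocess.Popen(["vlc", "infographic_animation.mp4"])
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```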
Development

I did a rough storyboard in After Effects, because I hadn't used it in a while and wanted to familiarise myself with it again. I then drew my final storyboard on paper and used it as a guideline, adding more information as I went along. I knew I wanted the video to run roughly 1 minute 15 seconds, as infographics are mostly short and quick. For the imagery I chose a few pictures I thought suited the Japan earthquake, vectorised them in Photoshop and imported the files into After Effects. From there I started to piece the animation together, sticking to a black and white colour theme for the background, as I didn't want to keep changing colours. Once I was done I edited it in Premiere Pro, which I found much easier to use than After Effects. I then made a desktop animation, which is seen when the user starts up the AI system and interacts with it. For the computer voiceover I used a website called Natural Readers, which converts text into voice clips. I had tried recording other people's voices for the AI computer, but it wasn't what I was looking for; I wanted a computer-styled voice, as it fits my animation.
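Natural Readers is a web service, but a similar computer-styled voiceover could also be generated offline. As a rough sketch, assuming the open-source pyttsx3 text-to-speech package as a stand-in for the website, with an invented example line and output filename:

```python
# A rough sketch of generating a computer-styled voiceover offline,
# assuming the open-source pyttsx3 package as a stand-in for the
# Natural Readers website used in the project. The spoken line and
# filename are invented examples.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the voice slightly for clarity
engine.save_to_file(
    "Searching for information on the Japan earthquake.",
    "ai_voiceover.wav",  # clip to drop into the Premiere Pro timeline
)
engine.runAndWait()
```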
Development & Outcome

After putting the animation together and editing it, I moved on to the experiments. First I recorded a friend watching the animation, played from a projector onto a wall. My second experiment used a microphone and Windows Speech Recognition: I had the user open and play the video by voice command, once with the video projected onto a screen and once with it playing from the PC alone. I recorded both the user interacting and a screen capture. My last experiment was a video showing how the system would work if it existed. I filmed it within the university campus, in a room with a projector; due to complications with the media store I was not able to film it exactly how I wanted it to look (see blog). The idea is that the user enters the room, approaches the AI system and asks it to find a topic they want information on. The AI projects the holographic information from a Wikipedia page, then plays the infographic to the user. I edited the footage in Premiere Pro, using After Effects for the holographic videos, and placed text between certain scenes so people would understand how the system would work if they were using it. I will now see if I can film more people and get their reactions to the video animation.
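The ask-a-topic, fetch-and-present loop described above could also be mocked up in software rather than filmed. Below is a minimal sketch, assuming the open-source speech_recognition and wikipedia packages; it simply prints the summary where the concept would project a hologram, and the example topic is illustrative:

```python
# A minimal mock-up of the AI interaction described above: the user asks
# for a topic by voice, the system fetches a Wikipedia summary and
# presents it. Assumes the open-source speech_recognition and wikipedia
# packages; this is an illustrative sketch, not the project's software.
import speech_recognition as sr
import wikipedia

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    print("Ask for a topic...")
    audio = recognizer.listen(source)

topic = recognizer.recognize_google(audio)  # e.g. "Japan earthquake"
summary = wikipedia.summary(topic, sentences=3)

# In the concept this text would be projected as a hologram alongside
# the infographic animation; here it is simply printed to the screen.
print(summary)
```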