Space by Space

Spring 2012

Interactive Spaces and Technology: a guide to working with the Kinect

Student: Jennifer Wong
Advisor: Kris Mun


Interactive Spaces and Technology: Connecting people through Physical Social Interaction and Virtual Space


table of contents

:: project introduction
:: research
:: technology “how to”
:: conclusion



reason

project introduction

#storefronts #vacancy #urban_street_scape #urban_development #kinect #virtual_spaces #tangibility


proposal brief

Vacant storefronts are an increasingly pressing problem, especially in today’s economic climate. A study by the Rietveld Academy in the Netherlands found that a good number of storefronts and retail spaces around the world are being left uninhabited due to high rental rates and a lessened need for a physical shop presence. Many businesses are forced to shut down because of a lack of sales, due not only to a downed economy but also to a decrease in foot traffic in urban areas.

A dressed-up storefront window in the UK, used to make the street appear more lively after a majority of the stores closed.

Urban areas are slowly fading because of an increase in online communication and media culture. As William Mitchell noted in his article E-Bodies, the movement to the web allows place and time to become increasingly unlimited. More individuals are spending time online, socializing, working, and purchasing at home, and spending more time entrenched in the digital world via their mobile devices. Walking down the street, you can find multiple people staring down at their digital communication devices rather than socializing with the individuals around them.

The number of closed and vacant stores in New York City has been rising steadily over the years.

This project hopes to take advantage of how embedded technology already is in our lives and use it to revitalize urban storefronts. By taking these vacant storefronts and infusing them with communication and interaction technology, we can turn these abandoned spaces into tactile, modern virtual spaces for socialization.

This project looks at how the new and future technology of the Microsoft Kinect and 3D holographic projection can create an immersive social experience. These digital spaces will be linked to a virtual network. Individuals can occupy these virtual settings and become ‘virtualized’: a 3D representation of them and any other individuals on the network will be broadcast out, so that the same social gathering can appear all around the world at the same time, bridging spaces.

A network of audio and visual sensors will virtualize individuals and connect them in a more social and humanistic way than common online communication tools such as Chat Roulette or Small World, a Facebook app. Individuals would meet in a social setting in their real physical forms, interact with each other, and manipulate each other’s physical spaces, creating a physical-virtual social experience.

Using the capabilities of a Kinect, an image-mapped 3D model can be captured and projected onto a surface as a 3D image. Audio sensors and devices then relay conversations through the network in the same medium in which they were delivered. The network of spaces would simulate a single room with multiple individuals present. The intention is to create a tactile, physical virtual social interaction through the merging of the physical and virtual spheres.



diagram



idea diagram

Idea: to make the newly vacated spaces a place for social gathering among people from around the world, reclaiming the urban street facades as areas that will be the beginning of engagement and communication with people from around the world.

Virtual environment in retail space, connected on a worldwide network: New York, Amsterdam, Shanghai, Los Angeles.


composite




research

new technology

#kinect #holographic_imaging #head_tracking #hand_tracking #projection_techniques


Microsoft Kinect overview

“The prototype for Microsoft’s Kinect camera and microphone famously cost $30,000. At midnight Thursday morning, you’ll be able to buy it for $150 as an Xbox 360 peripheral.”

Camera: The Kinect’s camera uses both hardware and software to generate two different types of images. 1: 3D moving images captured by depth sensors, using reference points that the Kinect maps into the physical space. The Kinect uses infrared sensors so that ambient light does not affect the 3D image produced. 2: 2D images of the field of view captured by a camera, which can later be mapped onto the 3D capture.

Firmware: The Kinect uses algorithms to process the incoming data captured through its infrared sensors and translate it into a 3D model. The firmware can also distinguish human bodies by parts, joints, movement, and even faces. This is how the Kinect interprets the information it receives in order to react appropriately when the right gestures are given.
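As a minimal sketch of the mapping described above, here is how one depth pixel becomes a 3D point under a simple pinhole-camera model. The intrinsic values (fx, fy, cx, cy) are ballpark figures for the Kinect’s 640x480 depth camera, assumed for illustration; a real setup would calibrate its own.

#include <cstdio>

// One depth pixel (u, v) with depth z in meters becomes a 3D point in the
// sensor's coordinate frame via a pinhole camera model. The intrinsics
// below are approximate, illustrative values, not calibrated constants.
struct Point3 { float x, y, z; };

Point3 depthPixelToPoint(int u, int v, float z) {
    const float fx = 594.0f, fy = 591.0f;  // focal lengths in pixels (approx.)
    const float cx = 320.0f, cy = 240.0f;  // optical center of the 640x480 image
    Point3 p;
    p.x = (u - cx) * z / fx;  // offset from center, scaled by depth
    p.y = (v - cy) * z / fy;
    p.z = z;
    return p;
}

int main() {
    Point3 p = depthPixelToPoint(400, 200, 1.5f);  // a pixel seen at 1.5 m
    std::printf("x=%.3f y=%.3f z=%.3f\n", p.x, p.y, p.z);
    return 0;
}

This per-pixel mapping, run over the whole depth image, is what produces the point clouds that tools like glpclview display.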


Microsoft Kinect SDK for Windows (article produced by Ars Technica):

The Kinect for Windows SDK, a beta version of which is already available to developers, is being prepared for a commercial rollout in early 2012. The current beta version is targeted at academics, enthusiasts, and researchers who use the motion-sensing capabilities of the Kinect for Xbox 360 technology to create new applications.

Kinect apps have already popped up in health care, education, and other industries, Microsoft noted in an announcement today. Despite being designed for video games, the Kinect, which has 600 patents behind it, has moved beyond the gaming world both because of its usefulness and its price: the Kinect lets people buy a device with 3D motion capture, facial and voice recognition, microphones, depth sensors, and an RGB camera for $149.

While the software development kit released earlier this year targets non-commercial projects, Microsoft today said “the Kinect for Windows commercial program will launch early next year, giving global businesses the tools they need to develop applications on Kinect that could take their businesses and industries in new directions.” Microsoft’s announcement did not detail the terms under which the Kinect SDK will be released commercially.

Microsoft officials also discussed the forthcoming commercial SDK with the Financial Times, which details the Microsoft pilot program involving “more than 200 companies for use of the Kinect across 25 industries, from healthcare to education, advertising and the automotive industry.” For example, Toyota developed a virtual showroom allowing cars to be explored with gestures, and a Spanish technology group called Tedesys is using a Kinect device linked to a PC and monitor, allowing surgeons “to wave their way through patient records on screen during operations,” the Financial Times notes.

Microsoft Xbox official Alex Kipman told the paper “12 months from now, educational, academic and commercial applications will look nothing like what they are today.”

The Kinect for Windows SDK beta includes drivers, APIs for raw sensor streams and human motion tracking, along with more than 100 pages of technical documentation. It is targeted at developers who use C++, C#, or Visual Basic. Kinect applications are designed to be used in conjunction with Windows 7, and presumably the forthcoming Windows 8 will receive the same treatment.

Industry use of the Kinect has increased; its power is slowly being utilized and integrated into many different fields.


Holographic projection

programming4fun -- YouTube


head tracking

Johnny Lee (johnnylee.net): Johnny has created a program, a piece of code for developers to work with, using C# and DirectX. The software needs to know your display and sensor bar size, and it will track the motion of your head as you wear the glasses. His instructions:

1. Connect your Wiimote to your PC via Bluetooth. If you don’t know how to do this, you can follow this tutorial. I’ve been told it works with other Bluetooth drivers, but I have not tested them myself.

2. Download the WiiDesktopVR (v02) sample program. Read the README file on program usage and configuration. Launch the “WiiDesktopVR.exe” in the main folder.

A potentially more stable / Vista / 64-bit compatible version has been created by Andrea Leganza, and there may be more variants on the web. The code is built upon the Wiimote library.

> The idea of stereoscopic display can be used when looking at the screen with shutter/polarized glasses. This works best combined with head tracking, so that you get a sense of real depth and the change in perception.
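The computation underneath Lee’s demo is head-coupled perspective: the view frustum is skewed every frame to match where the viewer’s head sits relative to the screen. Below is a minimal sketch of that step, assuming the screen lies in the z = 0 plane centered at the origin and the tracker reports the head position in the same units; the names are mine for illustration, not taken from the WiiDesktopVR source.

#include <cstdio>

// Off-axis frustum bounds for head-coupled perspective. The screen acts
// as a window into the scene: it lies in the z = 0 plane centered at the
// origin, with half-width w and half-height h. (hx, hy, hz) is the tracked
// head position (hz > 0, in front of the screen); nearZ is the near clip
// distance. The results are the left/right/bottom/top values you would
// hand to a glFrustum-style projection call.
struct Frustum { float left, right, bottom, top; };

Frustum headCoupledFrustum(float w, float h,
                           float hx, float hy, float hz, float nearZ) {
    Frustum f;
    // Project the screen edges, as seen from the head, onto the near plane.
    f.left   = (-w - hx) * nearZ / hz;
    f.right  = ( w - hx) * nearZ / hz;
    f.bottom = (-h - hy) * nearZ / hz;
    f.top    = ( h - hy) * nearZ / hz;
    return f;
}

int main() {
    // Head 10 cm right of center, 60 cm from an 80 x 50 cm screen.
    Frustum f = headCoupledFrustum(0.40f, 0.25f, 0.10f, 0.0f, 0.60f, 0.1f);
    std::printf("l=%.3f r=%.3f b=%.3f t=%.3f\n",
                f.left, f.right, f.bottom, f.top);
    return 0;
}

Recomputing this frustum as the head moves is what produces the window-like parallax in the demo.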


Project addition

1. Technical stuff: Kinect. >> For the Kinect, the location should be changed to a corner, as opposed to a straight-on shot, which will provide better 3D imaging to work with. With the use of some other software and open-ware programs, I would be able to create handles in a 3D environment to move the Kinect-captured 3D model (sketched after this list). To create a 3D object that can be manipulated, I should try projecting the compilation onto a black perforated screen. One catch is that I would need a projector with a higher lumen output, around 3,000 lumens rather than the under-2,000 that most projectors produce. The increase in contrast ratio will make the shadows appear darker than if the image were projected onto a standard screen, giving a better feeling of depth.


2. Moveable floor. >> The idea is a floor made up of individual three-dimensional pixels, also known as voxels (see the data sketch after this list). These voxels would contain information and be responsive to the individuals involved in the communication. They could be manipulated between the users as an interaction piece, such as moving pieces up and down or sending colors and waves through the room, but they could also be manipulated so that separate rooms and spaces could be created for more private conversations. The construction of spaces in the given area is left completely open to the individuals within the room.

3. Sound. >> There are two solutions for sound, both of which deliver amazing quality, corresponding to the two types of spatial audio available: wave field synthesis and ambisonics. Wave field synthesis uses a single long speaker that can produce sound with high specificity and ‘shoot’ it to points in a room, similar to a laser. Ambisonics uses multiple speakers delayed with precise timing, which can be controlled with OSC (Open Sound Control; OSCulator can be used). An OSC sketch follows this list.

4. Interaction. >> The more individuals interact with each other in a digital yet physical sense, the more it will start to influence their idea of what a real interaction is. The main purpose of the project is to promote interaction between individuals even across great distances. These interactions should elicit emotions and responses from the individuals as they communicate with each other. The understanding of interactions will greatly affect the success or failure of any public urban project.
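Three of the items above lend themselves to quick sketches. First, the ‘handles’ in item 1 reduce to ordinary transform composition: a scale, a rotate, and a position handle each contribute one 4x4 matrix, and the composed matrix is applied to the captured model. This is generic 3D math for illustration, not code from any of the packages mentioned.

#include <cmath>
#include <cstdio>

// Column-major 4x4 transforms, composed right-to-left: the model is
// scaled first, then rotated, then positioned.
struct Mat4 { float m[16]; };

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

Mat4 scale(float s) {
    Mat4 r = {};
    r.m[0] = r.m[5] = r.m[10] = s;
    r.m[15] = 1.0f;
    return r;
}

Mat4 rotateY(float rad) {  // rotation about the vertical axis
    Mat4 r = {};
    r.m[0] = std::cos(rad);   r.m[2]  = -std::sin(rad);
    r.m[8] = std::sin(rad);   r.m[10] =  std::cos(rad);
    r.m[5] = 1.0f;            r.m[15] =  1.0f;
    return r;
}

Mat4 translate(float x, float y, float z) {
    Mat4 r = {};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    r.m[12] = x;  r.m[13] = y;  r.m[14] = z;
    return r;
}

int main() {
    // Enlarge the capture 1.5x, turn it ~30 degrees, push it 2 m back.
    Mat4 handle = multiply(translate(0.0f, 0.0f, -2.0f),
                           multiply(rotateY(0.5236f), scale(1.5f)));
    std::printf("first column: %.3f %.3f %.3f\n",
                handle.m[0], handle.m[1], handle.m[2]);
    return 0;
}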
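Second, one possible data model for the voxel floor in item 2. The grid size, fields, and wall-raising routine here are invented for illustration, not taken from a built system.

#include <cstdio>
#include <vector>

// A responsive voxel floor as a flat grid of cells. Each voxel stores a
// target height in meters and an RGB color; interactions write new values
// and the physical actuators would chase them.
struct Voxel { float height; unsigned char r, g, b; };

class VoxelFloor {
public:
    VoxelFloor(int w, int d) : w_(w), d_(d), cells_(w * d, Voxel{0.0f, 0, 0, 0}) {}

    Voxel& at(int x, int z) { return cells_[z * w_ + x]; }

    // Raise the border of a rectangular region to enclose a private
    // sub-room, as described in item 2.
    void raiseWalls(int x0, int z0, int x1, int z1, float h) {
        for (int z = z0; z <= z1; ++z)
            for (int x = x0; x <= x1; ++x)
                if (x == x0 || x == x1 || z == z0 || z == z1)
                    at(x, z).height = h;
    }

private:
    int w_, d_;
    std::vector<Voxel> cells_;
};

int main() {
    VoxelFloor floor(32, 32);
    floor.raiseWalls(4, 4, 12, 12, 1.8f);  // 1.8 m walls around a sub-room
    std::printf("wall height at (4, 8): %.1f m\n", floor.at(4, 8).height);
    return 0;
}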
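Third, a sketch of how the ambisonics timing in item 3 could be driven over OSC, assuming the liblo C library. The /speaker/N/delay address pattern and the port number are invented for this sketch; a real rig would define its own OSC namespace.

// Build with: g++ send_delays.cpp -llo   (assumes liblo is installed)
#include <lo/lo.h>
#include <cstdio>

int main() {
    // OSC receiver address; host and port are placeholders.
    lo_address target = lo_address_new("127.0.0.1", "8000");
    if (!target) return 1;

    // Stagger four speakers by 3 ms each to steer the wavefront.
    for (int ch = 0; ch < 4; ++ch) {
        char path[64];
        std::snprintf(path, sizeof(path), "/speaker/%d/delay", ch);
        lo_send(target, path, "f", ch * 0.003f);  // delay in seconds
    }

    lo_address_free(target);
    return 0;
}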


technical



bigger picture questions

Before progressing with the project, there were a few things I needed to look at and address:

How is this related/relevant to architecture? The use of technology to affect the way a room functions and behaves will change the way architecture is designed. When a space takes on the idea of being a multi-use room but remains as is (a square or rectangle), architecture becomes the design of something so simple that any program can fit. But when the program can start to alter the spatial conditions, there is a new set of variables that architects can manipulate and understand for making more human connections. In this project, the manipulation of space is only one quality that will change the way architects see space as being more temporal. The next is the use of the Kinect to expand rooms and spaces without having to physically expand them. The perception of a larger space will already change the social conditions that most people are used to.

How is this going to revive urban space? More and more urban spaces are becoming less inhabited because of fewer interactions. By bringing a different type of interaction into the urban space, the draw of being able to communicate through a full-body experience can engage more individuals. By taking communication out of a fully digital space and into a physical one, the interactions between individuals will start to change and draw interest, because this physical humanization of people in a digital way will start to bring people out from their hiding spots in our digital, technology-driven world.

What is really real? Reality is a state of perception; we know things are real because we have been told they are real. With a full-body, communicative, and experiential interaction, the only thing that separates this type of digital interaction from a claimed ‘real-life’ interaction is the ability to touch a physical body. However, the entire experience, emotion, and view is a real one, delivered through a digital space that is now tangible. Can you prove that an interaction like that is unreal? When people expose themselves digitally as they are, with real, visible actions, they are as real as anyone else, compared to digital interaction from behind an avatar or a computer screen.

What’s next? Technologies, especially ones that elicit responses from humans, are a steadily growing trend in society. By addressing this trend early on, architects can start to gain a better understanding of what it all means for the profession. The Kinect will end up playing a major role in many industries, having already branched out to the gaming, automobile, and museum industries. The future of the Kinect in architecture is incredible, because it can literally capture a person or any space in 3D to be recreated in 3D software. The device can literally turn the physical world into a digital one, removing the idea of distance between people, objects, and spaces. Not only is distance no longer a factor; because the digital is always accessible, the restriction of time is no longer an issue. In a sense, form is no longer an issue either in the digital tech age. The use of movable and malleable spaces that fit users’ needs and wants can start to increase. The ability to change a space with the click of a button or the touch of a screen will revolutionize the way we perceive what spatial design will become.


max / msp

Understanding Max/MSP

correct code design:

This piece of code was designed to output a live camera feed as well as the 3D capture feed. You are able to control the number of Kinects, the degree of tilt, the number of degrees of depth and the type of file output.


incorrect code design:

This piece of code was an addition to Pelletier’s original design, which I recreated on the left side. I tried to create a connection to add a second Kinect to the stream. The problem: the laptop would confuse the Kinects as to which was #1 and #2. Because I was using a port splitter, there was no way to determine whether this file worked without getting a computer with more USB hubs. The program would only capture and stream from one camera at a time, and never consistently from the same one.
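One workaround to sketch, at the libfreenect level rather than inside Max/MSP: enumerate the devices and open each one by explicit index instead of taking whichever enumerates first. The calls below are from the libfreenect C API installed in the how-to section. Note that this does not solve the bandwidth problem; two Kinects behind one port splitter still share a single USB bus, which matches the behavior observed here.

// Open two Kinects by explicit index rather than relying on enumeration
// order. Sketch only: error handling is minimal, and each Kinect really
// wants its own USB controller for full-rate streaming.
#include <libfreenect.h>
#include <cstdio>

int main() {
    freenect_context* ctx = nullptr;
    if (freenect_init(&ctx, nullptr) < 0) return 1;

    int n = freenect_num_devices(ctx);
    std::printf("Kinects found: %d\n", n);

    freenect_device* dev[2] = {nullptr, nullptr};
    for (int i = 0; i < n && i < 2; ++i) {
        if (freenect_open_device(ctx, &dev[i], i) < 0)
            std::printf("could not open device %d\n", i);
    }

    // ... register depth/video callbacks per device and run the event loop ...

    for (int i = 0; i < 2; ++i)
        if (dev[i]) freenect_close_device(dev[i]);
    freenect_shutdown(ctx);
    return 0;
}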






code design for: adding a 3D model

Patch controls: scale, position, rotate, properties, and the connection to your 3D model.

The 3D model integration only worked with a preset duck model.



project 3d imaging

To start off simple, Myles had recommended a style of 3D projection imaging that is common, simple, fast, and cheap: two perforated mesh screens layered in a dark room, with a projection directed towards the sheets. During this experimentation I noticed a few things. First, the projector needs to be able to output high lumens or the projection won’t work; without high lumens, the contrast between the lighter and darker areas is not well defined, making the image look flat.



stereoscopic imaging

The first idea of stereoscopic imaging came to me while talking to Perry Hoberman: this is the cheap way to get a 3D TV. The setup consists of two projectors, a silver screen, two circular polarizing filters over the projectors’ lenses, a pair of circular polarizing glasses, and a Matrox DualHead2Go Analog splitter.


The resulting experiment led to me making a silver screen out of Krylon chrome spray paint and buying a set of circular polarizing filters and a set of glasses. Unfortunately the computer was unable to split the image properly, so when it came to putting it on the screen, it wasn’t lined up or working properly. This idea still only works in theory for me, until I can get the two images being sent to the projectors to line up with each other.
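In principle, the two projector feeds are just the off-axis frustum from the head-tracking section computed twice, once per eye, with the viewpoint shifted sideways by half the interpupillary distance. A sketch follows; the 6.5 cm IPD is a typical average assumed here, and the screen dimensions are placeholders.

#include <cstdio>

// Same off-axis computation as in the head-tracking sketch, repeated so
// this example stands alone: screen in the z = 0 plane with half-extents
// w and h, viewpoint at (hx, hy, hz), near clip distance nearZ.
struct Frustum { float left, right, bottom, top; };

Frustum offAxisFrustum(float w, float h,
                       float hx, float hy, float hz, float nearZ) {
    Frustum f;
    f.left   = (-w - hx) * nearZ / hz;
    f.right  = ( w - hx) * nearZ / hz;
    f.bottom = (-h - hy) * nearZ / hz;
    f.top    = ( h - hy) * nearZ / hz;
    return f;
}

int main() {
    const float ipd = 0.065f;               // typical eye separation (m), assumed
    float hx = 0.0f, hy = 0.0f, hz = 0.7f;  // viewer 0.7 m from the screen

    // One frustum per projector; the circular polarizing filters and glasses
    // keep each eye's image separate on the silver screen.
    Frustum left  = offAxisFrustum(0.40f, 0.25f, hx - ipd / 2, hy, hz, 0.1f);
    Frustum right = offAxisFrustum(0.40f, 0.25f, hx + ipd / 2, hy, hz, 0.1f);

    std::printf("left eye:  l=%.4f r=%.4f\n", left.left, left.right);
    std::printf("right eye: l=%.4f r=%.4f\n", right.left, right.right);
    return 0;
}

Getting the two rendered images onto the two projectors, which is where the splitter failed above, is a separate problem from computing them.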



instruction

technology how-to

#Kinect #attempted #future #new_design_spaces #repurposing #incorporation


basic hacking

http://www.freenect.com/how-to-get-the-kinect-workingon-mac-os-x

Over the next couple of days I’ll post specific details on the hack so that anyone following along who hasn’t done much computer work before, like me, can do this easily. I hope it will be as easy as following the instructions, but it’s not about the instructions; it’s about what happens when you follow them.

Step 1: Download Xcode, CMake, and Git for Mac OS X.

> Xcode should come with your Mac as part of the developer tools that now come standard. If you don’t have it, you may have to buy it to get it.

> Next, CMake is an easy download. CMake is an open-source system for build automation, which basically allows you to build, test, and package the software.

> Finally, Git. Git is a source-control management system that is very interesting and very useful. It was designed so that multiple developers could code or fix pieces of a larger code base at the same time. The program tracks and traces which parts of the code have changed and makes the corresponding changes in the main code. Of course, this could end up harming another coder if you changed something drastic that affects their piece of code. The great thing, however, is that you can track who did what, so you know what changed, who changed it, and they can even include why they changed it.


So, to start the installation and hacking process (although I won’t actually be writing the hack), I’ll be doing all of this in Terminal.

Part I is the download of all the pre-made files onto the computer:

> git clone git://git.libusb.org/libusb.git
> git clone git://github.com/OpenKinect/libfreenect.git

Part II is the start of running all the pieces of the puzzle, which is where I started to run into problems:

> cd libusb

This moves you into the libusb folder that the clone placed in your home user directory. If you’re having trouble following what’s going on in Terminal, I found it helps to open your user directory so that you can see all the folders in it. It will be useful later if you get lost and haven’t gotten used to the syntax yet.

> patch -p1 <../libfreenect/platform/osx/libusb-osx-kinect.diff
> ./autogen.sh
> make
> sudo make install


This builds and installs the patched libusb. Next you will add a build folder inside the original libfreenect folder, so you move back out of libusb and then start making the build (note the cd into the build folder before running cmake; without it the build lands in the wrong place):

> cd ../libfreenect/c
> mkdir build
> cd build
> cmake ..
> make && sudo make install

The last command installs everything inside the build folder. After all of this, your files and folder structure should look like the following images:

** It is incredibly important to make sure that all the files live in the correct folders before trying to run the glview or glpclview commands. Without proper placement, numerous errors will occur.


downloaded master files


libfreenect files


libusb files


operation

The operation procedure is as follows, using Terminal on the Mac.

cd libfreenect

cd build


cd bin

Within the bin folder, there are several useful operations related to connecting and using the Kinect:

glpclview: gives you the live camera capture mapped onto the 3D model generated from the Kinect’s depth sensor.

glview: provides two windows, one with the live camera capture and the second with the depth map in color.

record: starts to capture and record the feed from the Kinect.

regview: gives you a test of the camera view.

registration_test_depth_...: this series of depth tests can check your Kinect and show you the 3D depth map generated by the depth sensor.

tiltdemo: tests the tilt motor on the Kinect and shows you the range of motion.


./glview

To exit the video feedback loop, press Ctrl + C.

* Please note that, depending on your RAM and hard drive space, this may severely slow down your computer, and in some instances you will have to terminate the program in order to stop the feedback, especially when using glpclview.


Capture from depth sensor from glview.

Capture from main video feed from glview.


processing

Processing for the Kinect involves a great amount of coding (Processing’s language is Java-based, with C++-like syntax). Analyzing and understanding the information and the code as written proved very difficult because of the high level of programming language used. Modifying the code myself did not seem possible because of the extensive code and the large number of attached files (objects) used to keep the code streamlined.

http://processing.org/


max / msp

Using Max/MSP by Cycling ’74 was the easier and better program of the two. The code had previously been assembled, so there was a good base to work from and to figure out the best way to modify the existing design to achieve a specific function. The Max/MSP design that was used provided most of the necessary objects and sliders. What I added was an extra package that would allow me to import models to manipulate and place into the 3D space in which the 3D captures would exist.

http://cycling74.com/


max / msp

Understanding Max/MSP: my training in Max/MSP comes from Perry Hoberman and Myles Sciotto. The material used to connect Max/MSP to the Kinect was designed by Jean-Marc Pelletier:

Creating in Max/MSP is simple: the drag-and-drop menu on the side of the program lets you set up all the buttons, controls, jitter packages, and so on.


By pressing Cmd + E you can toggle the interface between editable and presentation mode. Some of the shortcuts for creating elements are:

b - bang (sends a signal)
c - comment (adds comments that don’t affect the design)
t - toggle (switch)
n - new object
p - open object palette
f - float number box
i - integer number box
m - message
l - live (interactive piece)
j - jitter object (adds additional information on the jitter package)
x - command menu

Jitter packages are pre-written code packages specifically created to execute a particular function. Each jitter package has a different set of attributes that can be applied to it; these attributes can be seen in the jitter package’s menu window.



findings

project conclusion

#Kinect #Processing #attempted #future #new_design_spaces #repurposing #incorporation


wrap up

My research this past semester mainly focused on learning and understanding the computer programs that are able to control the Kinect. Along with that, I researched the creation of 3D projection screens and holograms. A lot of the work focused on my exploration of the technology and the programming possibilities of each program with the Kinect. Through a lot of trial and error, the exploration has led me to a greater understanding of the best ways to utilize the technology and incorporate it into the design process, with the specific details that come with setting up large-scale digital projects to be built in the physical world.

The major programs I worked with were Processing and Max/MSP, both extensive and difficult programs with their own ‘language’ and ‘style’. Processing was the harder of the two to understand, using a language similar to C++ and JavaScript, which required a lot more skill to tap into the Kinect. Max/MSP, however, was designed as a graphic interface utilizing pre-made packages for easier and faster code assembly and understanding.

Working with the Kinect has been a challenge because of the heavy amount of computing required to process the camera and 3D spatial sensor captures and then map the camera capture onto the 3D map. The processing power of my laptop was unable to handle the capacity needed to create the compilation. This major problem restricted further exploration of the project because of limited resources, but it can be explored further at a later date, with the correct framework in Max/MSP already set up.

This semester’s worth of research has led to a better understanding of how to utilize the functions of the Kinect in order to integrate the technology into a physical space. The exploration of its parameters and capabilities can lead to further developments and integration of the Kinect into future spaces.


http://142days.wordpress.com


