Thesis Topic
Augmented Reality Informed by Real Time Analysis
Thesis report by: Sureshkumar Kumaravel
Thesis advisor: Luis Fraguada
Acknowledgement
I would first like to thank my thesis advisor Mr. Luis Fraguada for his guidance and support, without whom my research would not have been possible. I would like to thank Prof. Gonzalo Delacámara, from whom I got the first encouragement that my thesis research was a solid topic and that I should pursue it to my fullest. I would like to thank IaaC, Barcelona for considering my research worthwhile and buying the hardware (Lenovo Phab 2 Pro) required for the completion of my research. I would also like to thank the experts Angel Munoz, who helped me with the Kinect hack, and Angelos Chronis, who helped me with the most important part of this thesis research, and also Maite Bravo, Ricardo Devesa, Maria Kuptsova and Jordi Vivaldi. Without their passionate participation and input, the research could not have been successfully conducted. Finally, I must express my very profound gratitude to my parents and to my friends (Viplav Goverdhan and Karthik Dhanabalan) for providing me with unfailing support and continuous encouragement throughout the process of researching and writing this thesis. My special thanks to Maria Comiche, who was always available irrespective of time and situation to answer my emails while I was borrowing and returning the hardware to the college. This accomplishment would not have been possible without them. Thank you.
Sureshkumar Kumaravel
TABLE OF CONTENTS
Abstract - 005
Research glimpse - 006
Research timeline - 007
Chapter 1: Basic vocabularies - 010
Chapter 2: Problem, questions and hypothesis - 052
Chapter 3: Preliminary experiments - 058
Chapter 4: The next step - 072
Chapter 5: Evolution of technology - 088
Chapter 6: Interface beta - 096
Chapter 7: The last change - 108
Chapter 8: The interface - 132
Bibliography - 150
Abstract
Augmented reality has slowly taken its place in the computer vision field, especially in the gaming industry, where it has merged into the workflow in a very short period. In architectural design we use it for mere presentation. The next step in architectural design is finding where we can actually incorporate augmented reality, and finding the right place for augmented reality in the architectural design process is the real problem. Many tech giants are coming up with concepts for how augmentation can be incorporated into modelling and editing using their respective smart glasses and HMDs (Head Mounted Devices). The real question is whether we want augmented reality only for editing models and visualising them for presentation. Augmented reality interacts in real time with the user, giving more scope for architectural integration. By enabling existing augmented reality with new layers of computed visual dimension for analysing the spatial surroundings, we can actually replace the conventional method of interior daylight analysis done through modelling software in architecture. With new computational visual metrics in real time, one can not only visualise data but also give and get feedback by editing or altering the surroundings according to spatial requirements and natural factors in real time. This tool can also be used to rectify problems from time to time in each phase of the project at the site. Making the interface react like an architect or designer, considering various factors while analysing spaces, helps augmented reality merge into the architectural design process rather than remain a mere tool. This thesis research proposes a real time interface, using augmented reality with new layers of computed vision, for doing architectural design analysis in real time. In this interface we can introduce virtual geometries into the real environment to edit and alter the surroundings. This way we are merging augmented reality into the conventional architectural design process. Therefore, architectural design analysis in the conventional approach can be done in real time using augmented reality, barring various micro factors such as sudden changes in daylight and artificial light, sudden rainy weather in summer and sunny days in winter, and macro factors such as time, seasons, daylight availability and artificial light availability and, most of all, the available technology.
Keywords : #AugmentedReality, #RealTime, #Daylight, #Windflow, #WindSimulations, #VirtualObjects, #EditRealEnvironment, #Kinect, #Unity3D.
Glimpse of the prototypes
Prototype Proposal: 21/12/2016
First Prototype: 27/03/2017
Final Prototype: 15/06/2017
Research Timeline
Voxels: 21/09/2016
Point cloud: 17/11/2016
Generating mesh: 23/11/2016
Real time mesh generation: 30/11/2016
False coloring: 02/02/2017
Prototype 1 - Daylight analysis: 27/03/2017
Final prototype - Real time wind simulation: 15/06/2017
01a Basic Vocabularies: Voxels

- Initial ideas
- Narrowing down the research
- Difference between polygon and voxel rendering
- Drawbacks of voxel rendering compared to polygon rendering
- Current usage and future possibilities
- Voxel engines
- Algorithms
Voxel as a research topic
The idea of voxels as a research topic started when I was rendering images for a studio design at school. While searching for rendering software online I came across a few GPU (Graphics Processing Unit) based renderers. While doing a deep search about those renderers I got very excited and went on to learn how rendering is done by the CPU and the GPU, what rasterization, ray tracing and ray casting mean, and finally about the primitives of rendering, which are polygons and voxels. The word VOXEL stuck with me for a while, and the rest is the following research.
GPU based rendering applications
GPU accelerated and physically correct render
Rendering process - Rasterization - Ray Casting - Ray Tracing
Rendering primitives - Polygons - Voxels
Polygons A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling
Difference in form based on number of polygons used
Polygon rendering - The main advantage of polygons is that they are faster than other representations. - While a modern graphics card can show a highly detailed scene at a frame rate of 60 frames per second or higher, ray tracers, the main way of displaying non-polygonal models, are incapable of achieving an interactive frame rate (10 frame/s or higher) with a similar amount of detail. - Polygons are incapable of accurately representing curved surfaces, so a large number of them must be used to approximate curves in a visually appealing manner. - Programmers must use multiple models at varying levels of detail to represent the same object in order to cut down on the number of polygons being rendered.
Voxels
A voxel is a unit of graphic information that defines a point in three dimensional space; each voxel stores a depth value along with information about its position, color and density.
Difference in form based on number of voxels
Voxel rendering
- Voxel based rendering techniques lead to new classes of real time 3D visualisation applications.
- They allow in-game modifiability and destructibility of terrain and objects.
- The triangle meshes used in most games are also only approximations of volumes, and just as you can refine a representation by using more triangles, it is possible to use a finer grid of voxels to limit the approximation effects.
- With the improvement in technology, the availability of more memory and the performance of this generation of GPUs, many developers are investing in voxel based game engines, not only for terrains but as whole game interfaces, to bring a more realistic feel and experience to gamers.
- But this is where memory consumption comes into play, which is the main disadvantage associated with the use of voxels.
- Hence new voxel based game engines, compression techniques and voxel rendering techniques have been developed.
Issues with voxel rendering
- A game world implemented as a voxel grid one million cells per side would contain 1,000,000,000,000,000,000 cells and require more storage than all of the world's computers.
- The good news is that in most voxel implementations many of the cells are identical to their neighbors, which allows for incredible compression.
- Having loads of voxels allows for superb accuracy and realism, but the sheer number of voxels required to make something not look like a big cube mess introduces even more problems.
- Sure, computers are good at "the maths", but asking a computer to calculate the position of literally billions of points in space, thirty times per second, is asking quite a lot.
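To make the scale of that storage problem concrete, the short sketch below runs the arithmetic for a cubic grid one million cells per side. The one byte per voxel figure and the 10,000:1 compression ratio are illustrative assumptions, not measured values.

```csharp
// Back-of-the-envelope storage estimate for a dense voxel grid (illustrative only).
using System;

class VoxelStorageEstimate
{
    static void Main()
    {
        const double cellsPerSide = 1_000_000;            // 10^6 cells along each axis
        double totalCells = Math.Pow(cellsPerSide, 3);    // 10^18 voxels in total
        double bytesPerVoxel = 1;                         // assume 1 byte each (e.g. a material id)
        double totalBytes = totalCells * bytesPerVoxel;

        Console.WriteLine($"Voxels: {totalCells:E2}");
        Console.WriteLine($"Raw storage: {totalBytes / 1e18:F1} exabytes");

        // If, say, 99.99% of the cells are identical to a neighbour and can be
        // run-length encoded away, the figure drops dramatically:
        double compressed = totalBytes * 0.0001;
        Console.WriteLine($"After a naive 10,000:1 compression: {compressed / 1e15:F1} petabytes");
    }
}
```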
Present condition of voxel rendering
- Due to various advancements in technology and in CPU and GPU performance, various startups and companies have emerged with quite a lot of voxel rendering engines for games.
- To suit modern rendering GPUs, these tools can not only export models and pieces as raw voxel objects, they can also export them as usable polygon objects.
- Voxels, when utilized properly, allow significantly higher complexity than triangles because voxels can be tiled in three dimensional space.
Various voxel engines under development are
- Atomontage engine (the most progressed engine)
- Qubicle
- MagicaVoxel
- VoxelShop
Various techniques used to read and compress voxels are
- Quadtrees
- Octrees
- Marching Squares
- Marching Cubes
- IsoSurfaces, etc.
Atomontage Engine
Qubicle Engine
Magicavoxel Engine
Quadtrees - A quadtree is a tree data structure in which each internal node has exactly four children. - Quadtrees are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions.
A square region has four quadrants, so each node produces four child nodes, each of which can produce four more, multiplying as needed to get accurate resolution wherever it is required.
Quadtrees create more nodes at curves and reduce the number of nodes along straight sections, reducing the data and performance required.
Augmented reality informed by real time analysis : Basic Vocabularies
Octrees
- An octree is a tree data structure in which each internal node has exactly eight children.
- Octrees are most often used to partition a three dimensional space by recursively subdividing it into eight octants.
- If a model is built from smaller, more numerous voxels, its perceived accuracy will improve; what holds developers back from doing this is processing power and bandwidth.
A voxel cube has 8 corners, so when detail is required in one particular space the cube is subdivided into 8 child nodes, and so on, multiplying to get accurate resolution wherever it is needed.
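A minimal sketch of that adaptive subdivision is given below. The node layout and the `needsDetail` criterion are assumptions for illustration, not a specific engine's API.

```csharp
// Minimal sketch of adaptive octree subdivision (illustrative only).
using System.Numerics;

class OctreeNode
{
    public Vector3 Center;
    public float HalfSize;
    public OctreeNode[] Children;          // null while the node is a leaf

    public OctreeNode(Vector3 center, float halfSize)
    {
        Center = center;
        HalfSize = halfSize;
    }

    // Split this cell into its eight octants.
    public void Subdivide()
    {
        Children = new OctreeNode[8];
        float h = HalfSize / 2f;
        for (int i = 0; i < 8; i++)
        {
            // Bits of i select +/- offsets along x, y, z.
            var offset = new Vector3(
                (i & 1) == 0 ? -h : h,
                (i & 2) == 0 ? -h : h,
                (i & 4) == 0 ? -h : h);
            Children[i] = new OctreeNode(Center + offset, h);
        }
    }
}

static class OctreeDemo
{
    // Refine only the cells that a caller-supplied test says need more detail,
    // down to a minimum cell size - this is what keeps the voxel count manageable.
    public static void Refine(OctreeNode node, System.Func<OctreeNode, bool> needsDetail, float minHalfSize)
    {
        if (node.HalfSize <= minHalfSize || !needsDetail(node)) return;
        node.Subdivide();
        foreach (var child in node.Children)
            Refine(child, needsDetail, minHalfSize);
    }
}
```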
Examples for the use of Octrees
Marching Squares - Marching squares is a computer graphics algorithm that generates contours for a two-dimensional scalar field (rectangular array of individual numerical values). A similar method can be used to contour 2D triangle meshes.
Working
- Marching squares uses four control nodes at the corners of each square and four nodes at the midpoints of its edges.
- The contour is above the isoline if a control node is black and below it if the control node is white.
- After the terrain is divided into these control nodes and nodes, lines can be drawn through the grid using the lookup table of cases to figure out where the contour goes up and down. Then, using the threshold value and linear interpolation, the line end points can be placed more accurately, as sketched in the code below.
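The sketch below shows the two core steps just described: computing the case index from the four corner values and interpolating where the isoline crosses an edge. The corner ordering and variable names are assumptions chosen for illustration.

```csharp
// Minimal sketch of the marching squares case index and edge interpolation,
// assuming a scalar value at each corner and a chosen iso-threshold.
using System;

static class MarchingSquaresSketch
{
    // Corners ordered: 0 = bottom-left, 1 = bottom-right, 2 = top-right, 3 = top-left.
    public static int CaseIndex(float[] corner, float iso)
    {
        int index = 0;
        for (int i = 0; i < 4; i++)
            if (corner[i] > iso) index |= 1 << i;   // bit set -> corner is "above" the isoline
        return index;                               // one of 16 possible configurations
    }

    // Linear interpolation along an edge whose endpoints have values vA and vB:
    // returns the parameter t in [0,1] where the isoline crosses the edge.
    public static float CrossingT(float vA, float vB, float iso)
    {
        return (iso - vA) / (vB - vA);
    }

    static void Main()
    {
        float[] corners = { 0.2f, 0.8f, 0.9f, 0.1f };
        int c = CaseIndex(corners, 0.5f);                   // which of the 16 line patterns to draw
        float t = CrossingT(corners[0], corners[1], 0.5f);  // where the bottom edge is crossed
        Console.WriteLine($"case {c}, bottom edge crossing at t = {t:F2}");
    }
}
```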
The red spots are above and blue spots are below
Working of marching squares to get contours using the algorithms
Use of marching squares in games to generate random terrains
Using linear interpolation to get the curves in the terrain; the higher the resolution, the more accurate the curve
3D terrains made with 2D data using the marching squares algorithm
Marching Cubes - Marching cubes is a computer graphics algorithm for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (sometimes called voxels).
Working
- First a three dimensional grid of cubes is generated and each vertex of each cube is assigned a density value (read in from the data file).
- Then we examine each cube in turn and determine, for each of its vertices, which side of the isosurface contour it lies on.
- There are two trivial cases, when all the vertices of the cube lie on the same side of the isosurface (all inside or all outside); in every other case we obtain a set of edges of the cube that are intersected by the isosurface.
- The exact point of intersection with the isosurface is then computed by interpolating the densities at the vertices to get the desired threshold density.
- Once the intersection points for a cube have been determined, we compute a three-dimensional polygonal model for the contour.
- This is a complicated process that first requires us to consider all possible configurations of the cube.
- The fact that each cube possesses eight vertices and that there are two states for each vertex ("inside" or "outside" the isosurface) means that there are 2^8 = 256 ways in which the contour can intersect the cube. Because of symmetry, these reduce to the 15 basic cases shown in the diagram at the left.
Marching cubes uses density interpolation to gain accuracy in the 3D form; the smaller and more numerous the cubes, the more interpolation can be performed and the higher the accuracy that can be attained.
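The classification and interpolation steps described above can be sketched as follows; the vertex ordering and the sample densities are assumptions, and a full implementation would also need the 256-entry triangle lookup table.

```csharp
// Minimal sketch of the marching cubes classification step, assuming a density
// value at each of the cube's eight vertices and a chosen iso-threshold.
using System;

static class MarchingCubesSketch
{
    // Returns the 8-bit configuration index (0..255). In a full implementation this
    // index selects the triangle layout from a precomputed 256-entry lookup table,
    // which by symmetry reduces to 15 basic cases.
    public static int CubeIndex(float[] density, float iso)
    {
        int index = 0;
        for (int v = 0; v < 8; v++)
            if (density[v] < iso) index |= 1 << v;   // bit set -> vertex is inside the surface
        return index;
    }

    // Interpolated intersection point on an edge between vertices p1 and p2.
    public static (float x, float y, float z) EdgeVertex(
        (float x, float y, float z) p1, (float x, float y, float z) p2,
        float d1, float d2, float iso)
    {
        float t = (iso - d1) / (d2 - d1);
        return (p1.x + t * (p2.x - p1.x),
                p1.y + t * (p2.y - p1.y),
                p1.z + t * (p2.z - p1.z));
    }

    static void Main()
    {
        float[] d = { 0.2f, 0.7f, 0.9f, 0.4f, 0.1f, 0.3f, 0.8f, 0.6f };
        Console.WriteLine($"cube configuration index: {CubeIndex(d, 0.5f)} of 256");
    }
}
```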
IsoSurfaces - An isosurface is a three-dimensional analog of an isoline. It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space; in other words, it is a level set of a continuous function whose domain is 3D-space.
Applications - Isosurfaces are normally displayed using computer graphics, and are used as data visualization methods in computational fluid dynamics (CFD), allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings. - An isosurface may represent an individual shock wave in supersonic flight, or several isosurfaces may be generated showing a sequence of pressure values in the air flowing around a wing. - Isosurfaces tend to be a popular form of visualization for volume datasets since they can be rendered by a simple polygonal model, which can be drawn on the screen very quickly. - In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT scan, allowing the visualization of internal organs, bones, or other structures. - Numerous other disciplines that are interested in three-dimensional data often use isosurfaces to obtain information about pharmacology, chemistry, geophysics and meteorology.
Narrowing down the research: Voxels
- Form finding: simple to complex form using voxels
- Self assembly blocks: self assembling bots
- Urban paradigm: voxel city
- Real time processing: algorithms, programming, computer vision, augmented reality
01b Basic Vocabularies: Paper References

1) Voxel carving and coloring: constructing a 3D model of an object from 2D images
2) A real time system for robust 3D voxel reconstruction of human motions
3) Real time dynamic 3D object shape reconstruction and high fidelity texture mapping for 3D video
4) Real time high resolution sparse voxelization with application to image based modeling
5) Real time visual comfort feedback for architectural design
Paper Reference 1
Title - Voxel carving and coloring: constructing a 3D model of an object from 2D images
Year - 2003
Author - A.O. Balan, CSE Department, Brown University
Abstract -
We present a simplistic approach to obtaining a 3D representation of an object moving in a scene. The object is monitored by four different cameras with known characteristics. At any given time frame, the silhouettes of the object are extracted from each of the four images using a background subtraction method. Some strategy is afterwards used to perform a computationally efficient 3D model construction by projecting voxels from the 3D scene onto the camera’s viewing planes. A 3D binary grid is obtained and visualized using a simple voxel coloring strategy that we developed. Depth maps are constructed as well for each camera image to help with the coloring.
Paper Reference 2
Title - A real time system for robust 3D voxel reconstruction of human motions
Year - 2000
Author - Cheung, K.M.; Kanade, Takeo; Bouguet, Jean-Yves; Holler, Mark
Abstract -
We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoids fitting of moving humans in real time. The system consists of five cameras. Each camera is connected to a PC which locally extracts the silhouettes of the moving person in the image captured by the camera. The five silhouette images are then sent, via local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. By using a simple and user-friendly interface, the user can display and observe, in real time and from any view-point, the 3D models of the moving human body. With a rate of higher than 15 frames per second, the system is able to capture non-intrusively sequences of human motions.
Paper Reference 3
Title - Real time dynamic 3D object shape reconstruction and high fidelity texture mapping for 3D video
Year - 2004
Author - Matsuyama, T.; Wu, Xiaojun; Takai, T.; Wada, T.
Abstract -
3D video is a real 3D movie recording the object's full 3D shape, motion, and precise surface texture. This paper first proposes a parallel pipeline processing method for reconstructing dynamic 3D object shape from multi-view video images, by which a temporal series of full 3D voxel representations of the object behavior can be obtained in real-time. To realize the real-time processing, we first introduce a plane-based volume intersection algorithm: represent an observable 3D space by a group of parallel plane slices, back-project observed multi-view object silhouettes onto each slice, and apply 2D silhouette intersection on each slice. Then, we propose a method to parallelize this algorithm using a PC cluster, where we employ 5 stage pipeline processing in each PC as well as slice-by-slice parallel silhouette intersection. Several results of the quantitative performance evaluation are given to demonstrate the effectiveness of the proposed methods. In the latter half of the paper, we present an algorithm of generating video texture on the reconstructed dynamic 3D object surface. We first describe a naive view-independent rendering method and show its problems. Then, we improve the method by introducing image-based rendering techniques. Experimental results demonstrate the effectiveness of the improved method in generating high fidelity object images from arbitrary viewpoints.
Paper Reference 4
Title - Real time high resolution sparse voxelization with application to image based modeling
Year - 2013
Author - Loop, C.; Zhang, C.; Zhang, Z.
Abstract -
We present a system for real-time, high-resolution, sparse voxelization of an image-based surface model. Our approach consists of a coarse-to-fine voxel representation and a collection of parallel processing steps. Voxels are stored as a list of unsigned integer triples. An oracle kernel decides, for each voxel in parallel, whether to keep or cull its voxel from the list based on an image consistency criterion of its projection across cameras. After a prefix sum scan, kept voxels are subdivided and the process repeats until projected voxels are pixel size. These voxels are drawn to a render target and shaded as a weighted combination of their projections into a set of calibrated RGB images. We apply this technique to the problem of smooth visual hull reconstruction of human subjects based on a set of live image streams. We demonstrate that human upper body shapes can be reconstructed to giga voxel resolution at greater than 30 fps on modern graphics hardware. © 2013 by the Association for Computing Machinery, Inc (ACM)
Paper Reference 5
Title - Real time visual comfort feedback for architectural design
Year - 2016
Author - Jones, N.; Reinhart, C.
Abstract -
Today’s predictions of visual comfort are based on high-quality physically-based visualization renderings. Unfortunately, designers and practitioners rarely realize the full benefit of physically-based lighting simulation due to the amount of time required for these simulations. Visual comfort analysis is generally performed late in the design process as a form of validation, if at all. We propose a design workflow wherein certain quantitative visual comfort metrics can be displayed immediately to the designer as the scene changes, often before the physically-based visualization reaches a finished quality. In our prototype software, live-updating predictions of daylight glare probability, task luminance, and contrast are presented alongside a progressively rendered image of the scene so that the user may decide when to accept the values and move on with the design process. In most cases, sufficiently accurate results are available within seconds after rendering only a few frames.
01c Basic Vocabularies: Real Time Processing
Real time processing (RTP)
Definition 1 - Data processing that appears to take place, or actually takes place, instantaneously upon data entry or receipt of a command.
Definition 2 - Real-time data processing is the execution of data within a short time period, providing near-instantaneous output.
- The processing is done as the data is input, so it needs a continuous stream of input data in order to provide a continuous output.
- Real-time processing systems are usually interactive processing systems.
Advantages
- RTP provides immediate updating of databases and immediate responses to user inquiries.
- Focus on the application
- Real time processing in embedded systems
- Error free
- 24x7 systems
- Memory allocation is best managed in this type of system.
Disadvantages
- Because of the online, direct-access nature of RTP networks, special precautions must be taken to protect the contents of the databases.
- More controls have to be built into the software and network procedures to protect data from unauthorized access or accidental destruction.
- Limited tasks
- Uses heavy system performance and resources
- Low multitasking
- Complex algorithms
- Device drivers and interrupt signals
- Precision of code
Present implementations
- Face detection and facial feature detection
- Vehicle queue, traffic lane detection and monitoring
- Real time image processing and real time feature point detection
- Color occlusion
01d Basic Vocabularies: Augmented Reality
Augmented Reality
Augmented reality simply means adding a new layer of information over a user's reality using computer generated images, enabling an enhanced interaction with reality in real time.
Applications of augmented reality
- Military - head up displays (HUD).
- Medical - to practice surgery in a controlled environment. AR can reduce the risk of an operation by giving the surgeon improved sensory perception, and the ability to image the brain in 3D on top of the patient's actual anatomy is powerful for the surgeon.
- Navigation - enhanced GPS systems use AR to make it easier to get from point A to point B.
- Sightseeing in AR - the ability to augment a live view of displays in a museum with facts and figures is a natural use of the technology. Using a smartphone equipped with a camera, tourists can walk through historic sites and see facts and figures presented as an overlay on their live screen.
- Gaming industry - with "Pokemon Go," you can jump into an AR game that works with your mobile device, superimposing mythical creatures over your everyday landscape.
- Layar reality browser - this app is designed to show the world around you by displaying real time digital information in conjunction with the real world, using the camera and the GPS location feature of your mobile device to augment your reality.
Example applications: AR in the medical field, the military, navigation, museums, architecture, advertising and the gaming industry.
01e Basic Vocabularies: False Coloring
False Color
- False color is a technique where color is added during the processing of a photographic or computer image to aid interpretation of the subject.
- A false-color image is an image that depicts an object in colors that differ from those a photograph (a true-color image) would show.
- At the moment, false color plays an important role in adding new layers of information over what really exists. This is very similar to augmenting a reality, which is adding a new layer of information to the real existence.
- Similarly, in the AEC industry we use a lot of false color for computational fluid dynamics simulations, which are an important source of information for our design process.
- We have been used to false colored data ever since our school days, when we colored our country maps with different colors to depict various details of a place. This is where I feel augmented reality can be implemented in the architectural design process, or any AEC workflow, to display CFD simulations in real time.
True Images
False Color Images
01f Basic Vocabularies: Point Cloud
Point Cloud
- A point cloud is a set of data points in some coordinate system. In a three-dimensional coordinate system, these points are usually defined by X, Y and Z coordinates, and often are intended to represent the external surface of an object. Point clouds may be created by 3D scanners.
- Each point in the point cloud holds the depth value of an object or form.
Explained
- Populate the entire screen space with points at regular intervals.
- Place an invisible sphere in front of it in 3D space, just like the image below.
- Project the points from the screen space onto the sphere in 3D space. Some points intersect the sphere and some don't.
- Viewed in 2D space it looks like the points all lie in the same plane, but in reality there is a difference in distance (d) between the view plane and the points on the sphere.
- This distance d is the depth from the view plane, which gives the sphere its 3D form.
- Each point on the sphere represents a depth value and a distance to the screen space.
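The sketch below illustrates the same idea in code: regular screen-space points are cast as rays towards an invisible sphere, and where a ray hits, the hit distance becomes that point's depth value. The camera setup, names and numbers are illustrative assumptions.

```csharp
// Depth-from-projection sketch: a grid of screen points is projected onto a sphere.
using System;
using System.Numerics;

static class DepthFromSphere
{
    // Ray-sphere intersection: returns the distance along the ray, or null if it misses.
    static float? Intersect(Vector3 origin, Vector3 dir, Vector3 center, float radius)
    {
        Vector3 oc = origin - center;
        float b = Vector3.Dot(oc, dir);
        float c = Vector3.Dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0) return null;                 // the ray does not touch the sphere
        return -b - MathF.Sqrt(disc);              // nearest hit distance = depth d
    }

    static void Main()
    {
        var sphereCenter = new Vector3(0, 0, 5);
        float sphereRadius = 2f;

        // A small grid of "screen" points on the plane z = 0, rays pointing along +z.
        for (float x = -1; x <= 1; x += 0.5f)
        for (float y = -1; y <= 1; y += 0.5f)
        {
            float? d = Intersect(new Vector3(x, y, 0), Vector3.UnitZ, sphereCenter, sphereRadius);
            Console.WriteLine(d.HasValue
                ? $"point ({x}, {y}) depth = {d.Value:F2}"
                : $"point ({x}, {y}) misses the sphere");
        }
    }
}
```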
1. Screen space populated with points
2. Placing a sphere in front of the points in 3D space
3. Projecting points onto the sphere from the screen space; distance (d) from the screen space to the sphere
4. The points as seen from the screen
Combining the research
- Voxels + real time processing: interface
- Augmented reality: see beyond human vision, interactive application
- Computer vision enabled false coloring
- Point cloud: mesh generation
- Editing reality: adding virtual objects to reality
- Final prototype: CFD simulations to serve the AEC field
Thesis Problem
- Augmented games: real time interaction, merged well into the field.
- Architectural presentation: used for presentation purposes only; merged into the field?
Augmented reality has slowly taken its place in the computer vision field, especially in the gaming industry, where it has merged into the workflow in a very short period. In architectural design we use it for mere presentation. The next step in architectural design is finding where we can actually incorporate augmented reality, and finding the right place for augmented reality in the architectural design process is the real problem. Many tech giants are coming up with concepts for how augmentation can be incorporated into modelling and editing using their respective smart glasses and HMDs (Head Mounted Devices). The real question is whether we want augmented reality only for editing models and visualising them for presentation. Augmented reality interacts in real time with the user, giving more scope for architectural integration.
Thesis Question
Finding at which phase of the architectural design process augmented reality can be implemented so that it merges as one with the process is the big question.
Where in Architectural Design Process?
Various stages of a design process (Swiss Re Tower)
Thesis Hypothesis

Data analysing:
Augmented reality + layers of computed false color vision + CFD analysis + real time performance = Final interface
By enabling existing augmented reality with new layers of computed visual dimension for analysing the spatial surroundings, we can actually replace the conventional method of interior daylight analysis done through modelling software in architecture. With new computational visual metrics in real time, one can not only visualise data but also give and get feedback by editing or altering the surroundings according to spatial requirements and natural factors in real time. This tool can also be used to rectify problems from time to time in each phase of the project at the site. Making the interface react like an architect or designer, considering various factors while analysing spaces, helps augmented reality merge into the architectural design process rather than remain a mere tool.
Data utilizing:
Final interface + editing/altering surroundings + virtual objects = Hypothesis
03 Preliminary Experiments

- Understanding real time processing
- Preliminary experiments to understand real time processing
- Experiment inference
- Variables influencing the research
Understanding real time processing
Methodology
The preliminary things I had to do to understand how real time processing works, based on my thesis research, were
- Choose an interface where I can work in real time.
- Choose a device capable enough to work in real time.
Based on the resources I had, the preliminary experiments were done using Grasshopper3D for Rhino3D, as I had very good knowledge of using it, together with a Kinect v2.0 for PC. The experiments conducted were
Experiment 1 - Understanding Grasshopper3D in real time
I used the Firefly plugin to get a real time feed from the computer's inbuilt speakers and converted the audio source into an input for manipulating the attractor points of a sphere to change its openings.
Experiment 2 - Understanding Grasshopper3D in real time using Kinect v2.0
Using the Firefly plugin I connected the Kinect v2.0 to Grasshopper3D to test whether the feed was in real time, since so many interfaces were used to conduct this experiment.
Experiment 3 - Working towards the goal in Grasshopper3D
Since the first two experiments were successful, I moved on towards my goal by trying to get the point clouds from the mesh the Kinect generates.
Kinect v2.0 for PC
Kinect is a line of motion sensing input devices by Microsoft for Xbox 360 and Xbox One video game consoles and Microsoft Windows PCs. Based around a webcam-style add-on peripheral, it enables users to control and interact with their console/computer without the need for a game controller, through a natural user interface using gestures and spoken commands. The first-generation Kinect was introduced in November 2010 in an attempt to broaden the Xbox 360's audience beyond its typical gamer base. A version for Microsoft Windows was released on February 1, 2012. A newer version, Kinect 2.0, was released with the Xbox One platform starting in 2013. Microsoft released the first beta of the Kinect software development kit for Windows 7 on June 16, 2011. This SDK was meant to allow developers to write Kinect apps in C++/CLI, C#, or Visual Basic .NET.
Hardware technology
- The hardware includes a time-of-flight sensor developed by Microsoft, replacing the older technology from PrimeSense.
- The new Kinect uses a wide-angle time-of-flight camera and processes 2 gigabits of data per second to read its environment.
- The new Kinect has greater accuracy, with three times the fidelity of its predecessor, and can track without visible light by using an active IR sensor.
- It has a 60% wider field of vision that can detect a user up to 3 feet from the sensor, compared to six feet for the original Kinect, and can track up to 6 skeletons at once.
- It can also detect a player's heart rate, facial expression, the position and orientation of 25 individual joints (including thumbs), the weight put on each limb, the speed of player movements, and gestures performed with a standard controller.
- The color camera captures 1080p video that can be displayed at the same resolution as the viewing screen, allowing for a broad range of scenarios. In addition to improving video communications and video analytics applications, this provides a stable input on which to build interactive applications.
- Kinect's microphone provides voice commands for actions such as navigation, starting games, and waking the console from sleep mode.
Kinect for Microsoft Windows
On February 21, 2011 Microsoft announced that it would release a non-commercial Kinect software development kit (SDK) for Microsoft Windows in spring 2011. The SDK includes
- Raw sensor streams - access to low-level streams from the depth sensor, color camera sensor, and four-element microphone array.
- Skeleton tracking - the capability to track the skeleton image of one or two people moving within Kinect's field of view for gesture-driven applications.
- Advanced audio capabilities - audio processing including sophisticated acoustic noise suppression and echo cancellation, beam formation to identify the current sound source, and integration with the Windows speech recognition API.
- Sample code and documentation.
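As a hedged illustration of the kind of low-level stream access the SDK exposes, the sketch below reads raw depth frames with the Kinect for Windows v2 SDK (Microsoft.Kinect). It compiles against that SDK and omits error handling and rendering for brevity.

```csharp
// Reading raw depth frames from a Kinect v2 sensor (minimal sketch).
using System;
using Microsoft.Kinect;

class DepthStreamSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        ushort[] depthData = new ushort[
            sensor.DepthFrameSource.FrameDescription.LengthInPixels];

        reader.FrameArrived += (s, e) =>
        {
            using (DepthFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;           // frames may be dropped
                frame.CopyFrameDataToArray(depthData);

                // Each value is a distance in millimetres from the sensor plane.
                ushort centre = depthData[depthData.Length / 2];
                Console.WriteLine($"depth at image centre: {centre} mm");
            }
        };

        Console.ReadLine();                           // keep the process alive
        reader.Dispose();
        sensor.Close();
    }
}
```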
Hacking the Kinect
The original Kinect v2.0 for Microsoft Windows comes with an adapter to convert the connection from the Kinect to the PC. The original price of the Kinect v2.0 for PC is 150 euros. My idea was to buy a second-hand Kinect v2.0 separately and buy the original adapter, to reduce the overall cost, as this was only for preliminary experiments. I therefore used an application called Wallapop in Barcelona to search for and buy a second-hand Kinect v2.0 for the Microsoft Xbox One, which the seller had listed at 30 euros. After a few negotiations and a talk with him about my research, the seller decided to give it to me for free as it was for a research project. So now the price of the Kinect v2.0 was 0 euros. All I had to buy was an adapter to convert the feed from the Kinect to the PC and connect an external power source. While searching for cheaper options I found a YouTube video about hacking the adapter for the Kinect v2.0 for PC, so I decided to do it myself.
Modifications
- USB 3.0 Type A to B cable - this converts the feed from the Kinect v2.0 to the PC. PCs without USB 3.0 cannot get the feed from the Kinect; as the feed is in real time, it requires a faster means of data transfer, and USB 3.0 was the latest technology available at the time of release.
- 12v DC power adapter - this is the external power source required for the Kinect v2.0 to run, as this version of the Kinect runs a substantial processor inside to support the sensors and data transmission.
- Male to female connector - this was the most vital part of the modification, as the original adapter has a single cable that transfers data as well as acting as the power supply. Therefore a modification had to be made: a cable was split to get the positive and negative of the power supply and was connected to the female connector to accept the male connector of the DC power adapter.
USB 3.0 Type A to B
12v DC power adapter
Male to Female Connector
Savings
- Original Xbox One Kinect for PC: 150 euros
- Wallapop Xbox One Kinect: free
Modification
- USB 3.0 A to B cable: 3.75 euros
- 12v DC power adaptor: 17.90 euros
- Male to female connector: 2.5 euros
Total spent: 24.15 euros instead of 150 euros
Preliminary test 1
Using the Firefly plugin to get a real time feed from the computer's inbuilt speakers and convert the audio source into an input for manipulating the attractor points of a sphere to change its openings.
Why audio?
Before testing any external device, the only device and input I could think of at that moment was the audio source, as I was listening to songs. But not every audio source was feasible for this test: a song that was too loud or too uniform gave the same input throughout the test. Therefore a sound input with varying music levels was required to notice the difference in input. This audio input is given to the attractor point, which drives a change in the circle openings and the form; a small sketch of this attractor-point logic follows the interface workflow below.
Interface Workflow
Rhinoceros
Grasshopper 3D
GH Script
Firefly plugin
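Outside Grasshopper, the attractor-point idea from this test can be sketched as follows: an audio level in [0, 1] scales how strongly an attractor point shrinks the openings of circles near it. The names, the falloff curve and the numbers are assumptions for illustration, not the actual Grasshopper definition.

```csharp
// Attractor-point sketch: audio level + distance to attractor drive opening radii.
using System;
using System.Numerics;

static class AttractorSketch
{
    // Opening radius for a circle centred at 'centre', given the attractor position
    // and the current audio level. Closer circles and louder audio -> smaller opening.
    public static float OpeningRadius(Vector2 centre, Vector2 attractor,
                                      float audioLevel, float baseRadius = 1f, float falloff = 5f)
    {
        float distance = Vector2.Distance(centre, attractor);
        float influence = audioLevel * MathF.Exp(-distance / falloff);   // 0..1
        return baseRadius * (1f - 0.9f * influence);                     // never fully closed
    }

    static void Main()
    {
        var attractor = new Vector2(0, 0);
        float audioLevel = 0.8f;    // e.g. a normalised peak level from the audio feed

        for (int x = 0; x <= 4; x++)
        {
            var centre = new Vector2(x, 0);
            Console.WriteLine($"circle at x={x}: opening radius = " +
                              $"{OpeningRadius(centre, attractor, audioLevel):F2}");
        }
    }
}
```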
Preliminary test 2
Using the Firefly plugin I connected the Kinect v2.0 to Grasshopper3D to test whether the feed was in real time, since so many interfaces were used to conduct this experiment.
Why skeleton motion tracking?
The Kinect's main features are depth sensing and the motion tracking of up to 6 humans, so why not test them in Grasshopper? If this test went smoothly, I was sure that the main test, where I wanted to generate a mesh in real time, would work. Moreover, tracking your skeleton and working with gestures in Grasshopper using the Kinect was fun and a big hit among the users in my college and among my friends. Even though the ideas were inspired by videos on the internet by other artists, the working Grasshopper script was done by me.
Interface Workflow
Rhinoceros
Grasshopper 3D
GH Script
Firefly plugin
Skeleton tracking
Depth from kinect
Hand movements
Kangaroo physics
Hand gestures
Hand gestures
Preliminary test 3
Since the first two experiments were successful, I moved on towards my goal by trying to get the point clouds from the mesh the Kinect generates.
Why mesh generation?
The two experiments above were a start for this ultimate test. This part of the experiment was to generate a mesh so that I could get a point cloud from it. Using this point cloud I wanted, for starters, to get the depth value of each point, which later would let me get the daylight values and solar radiation in real time.
Interface Workflow
Rhinoceros
Grasshopper 3D
GH Script
Firefly plugin
Inference
Interface chain: Rhinoceros - Grasshopper 3D - Firefly plugin
Problem
- When the mesh generation script began, the generation was not in real time, and later, while trying to manipulate various features like the depth field of view and point cloud generation between various distances from the Kinect source, the Rhinoceros application started to show screen tearing (low frame rates), then showed an error saying that there was not enough memory to run the application and force quit.
- Later, in the task manager, I found that once the mesh generation script is running, the application's memory allocation grows from 256 MB to 12,000-13,000 MB, runs out of memory within a few minutes and force quits.
Inference
- Due to the long interface chain from Rhinoceros to Grasshopper to Firefly and then to the Kinect, the mesh generation was not in real time.
- Therefore the decision was made to leave Grasshopper aside, as it needs considerable processing power just to run those applications in the first place, let alone to run them in real time.
- As a result, I decided to work on a dedicated application for real time processing.
Variables influencing the research so far
- Real time: real time sound processing, real time image processing, real time motion tracking
- Device: Kinect v2.0
- User interface: Rhinoceros, Grasshopper 3D, FireFly
- User
- Semi experiment: daylight analysis
04 The Next Step

- Microsoft Kinect Studio and SDK
- Kinect Fusion Explorer
- Initial interface ideas
- Initial prototype schematics
- Variables influencing the research
Kinect Studio & SDK
After the preliminary tests using the Kinect v2 in Grasshopper3D, I understood the amount of performance required to run the interface and then perform tasks in real time. Until then I had thought I would work on my entire thesis in Rhinoceros and Grasshopper, but from the experiment inference I understood that was not going to be the case. I decided that I needed to build a dedicated application to use the full potential of my computer's performance for the real time feed. My laptop's configuration at that moment was as follows
- Acer Aspire V Nitro V17 model - Intel i7 series, Skylake processor - 16GB DDR4 Ram - NVidia GeForce 960 4GB GDDR5 - Cool Boost exhaust
Finally I decided that I had to become a programmer if I was going to finish this research, and I set out to acquire this amazing new skill. To get started, I had to get used to the Kinect SDK and its user friendly interface called Kinect Studio.
Kinect Studio v2
Kinect v2 SDK
Surface with Normals
Point Cloud
Point Cloud colored based on depth
Motion and Body tracking
Everything that I was trying to do in Grasshopper3D was running smoothly in the Kinect Studio v2. Therefore to access these features I had to use the SDK provided by Microsoft for application developers. Finally I became an application developer.
I got my hands on all the available features of the SDK. Microsoft provides a browser interface for the SDK samples, and having tried various other SDKs I was very happy that Microsoft had gone all the way to make an interface for accessing them. The samples are categorised by programming language; I chose to work with C#. The feature that caught my eye was the Kinect Fusion Explorer - WPF sample, which had everything I needed to start making an application myself.
Kinect Fusion Explorer - WPF
Kinect Fusion Explorer
Mesh Generation
- This space shows the real time feedback of the mesh generated by the Kinect.
- Using the image options below the mesh window, we can generate various textures.
- Here the depth texture is shown.
Editing Options
- These options enable the user to edit the generated mesh.
- The Create Mesh button allows the user to save the generated mesh in three different formats.
- There is also an option to refresh the mesh, which regenerates the mesh from scratch.
Depth Feed - This window shows the depth feed of the existing mesh. - This is powered by the Infrared camera within the kinect.
Resolution
- Performing this task in Grasshopper3D ultimately led to crashing the entire software.
- Here one can manipulate the resolution of the Kinect's feed by increasing the voxels per meter, at the cost of the volume covered by the Kinect.
- By manipulating the X, Y and Z axis sliders one can set the screen space and move the Kinect's feed within the window.
Depth Threshold
- Performing this action in Grasshopper3D was a big deal, as it takes most of the computer's performance.
- Here it was smooth and ran at 30 fps.
Initially proposed UI prototype
User Interface Workflow
- Kinect v2: real time device to generate the mesh in real time
- MS Visual Studio: working interface using the Kinect v2 SDK
- OpenCV: to program the computer generated augmented vision
- Output: desktop application, mobile application, AR/VR application
- Final augmented reality application
Interface Strategy
The criteria to be considered and the different data required to make this interface work.
1) Lighting conditions - natural vs artificial
- Calculating the amount of natural light available.
- Analysing whether the natural light is enough for the number of people using that space.
- Calculating the light availability at any given time in that space.
- Using the above analysis we can calculate the amount of artificial light required during the day.
2) Lighting conditions - direct vs indirect
- Analysing where the space receives direct light and where it receives indirect light.
- As a result, those places can be manipulated to affect the lighting condition of the whole space.
3) Lighting conditions - intensity and luminosity hue
- Analysing the intensity of light required for a space or for a person using that space.
- By calculating the intensity of the light and the luminosity hue available, we can analyse whether the user and the space need more light, or whether it should be reduced, with respect to the natural and the induced lighting.
4) Lighting conditions - material analysis and surface texture
- We can analyse both the materials used in that space and the texture of those materials, as they play a very vital role in the lighting conditions.
- Therefore we gather information about the material, its texture, reflection quality, etc., as these factors play a very vital role in reflection and indirect lighting conditions.
Prototype Schematics
Front and side views: VR glasses, MS Kinect v2 mounted on the user
This initial prototype was inspired by the modern warfare games I used to play, so I thought, why not have a backpack gadget of my own housing a powerful yet compact processor supported by a powerful GPU. The backpack also houses a power supply circuit which is powered externally. The user mounts the MS Kinect v2 on the chest if they need to move around to do the analysis. To view the augmented computer vision of the simulation, the user wears a VR glass headset.
Back view: FPGA processor, GPU
Variables influencing the research so far
- Real time: real time sound processing, real time image processing, real time motion tracking
- Device: Kinect v2.0
- User interface: Rhinoceros, Grasshopper 3D, FireFly, Visual Studio
- Augmented reality: head mount device
- User
- Semi experiment: daylight analysis
- Main experiment: real time mesh generation
Prototype views: front (Kinect v2.0 and head mount device), side, back (processor and GPU)
05 The Evolution of Technology
Why portability as a variable?
From the first semester research, presentation and comments, I learnt that the portability of the interface as a tool is limited by the mere length of the power cables, as the Kinect requires an external power supply, as does the backpack that was designed to hold the processor and GPU. With the advancement in technology there should be a solution to this by some means. That is when I landed upon Google Tango, formerly known as Project Tango. This solution offers everything my first prototype would offer, as well as portability. Even though this changes everything I had worked on for the past three months, I would still embrace it any day, as it extends the possibilities my research can offer.
Importance of portability
- The user interface is still a dedicated application. Since it is powered by Android OS, this interface has the ability to reach thousands, maybe even millions, of users, which was definitely not possible with the Kinect.
- The type of user will change significantly, and in turn the reach of this interface as a tool will extend to various other fields, not just Architecture, Engineering and Construction (AEC).
- With the backpack prototype, only one user could use it at a time, making it difficult to pass it on from one user to another. Now, with Google Tango in smartphones, the number of users is not an issue anymore, as the device can be passed on to any number of users without any difficulty.
- Further, this being the first of these devices, if it can already do all this, imagine what can be done when the sensors and devices improve significantly.
Final variables: technology, device, type of user, user interface, augmented reality, computational analysis, real time, portability
Google Tango
About Google Tango
Google Tango (formerly named Project Tango while in testing) is an augmented reality computing platform, developed and authored by Google. It uses computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allows application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.
Google Tango is different from other emerging 3D-sensing computer vision products in that it is designed to run on a standalone mobile phone or tablet and is chiefly concerned with determining the device's position and orientation within the environment. The software works by integrating three types of functionality:
- Motion-tracking: using visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device's movements in space.
- Area learning: storing environment data in a map that can be re-used later, shared with other Tango devices, and enhanced with metadata such as notes, instructions, or points of interest.
- Depth perception: detecting distances, sizes, and surfaces in the environment.
Together, these generate data about the device in "six degrees of freedom" (3 axes of orientation plus 3 axes of motion) and detailed three-dimensional information about the environment. Project Tango was also the first project to graduate from Google X, in 2012.
Applications on mobile devices use Tango's C and Java APIs to access this data in real time. In addition, an API is also provided for integrating Tango with the Unity game engine; this enables the rapid conversion or creation of games that allow the user to interact and navigate in the game space by moving and rotating a Tango device in real space. These APIs are documented on the Google developer website.
Other devices so far The Peanut phone - “Peanut” was the first production Tango device, released in the first quarter of 2014. It was a small Android phone with a Qualcomm MSM8974 quad-core processor and additional special hardware including a fisheye motion camera, “RGB-IR” camera for color image and infrared depth detection, and Movidius Vision processing units. A high-performance accelerometer and gyroscope were added after testing several competing models in the MARS lab at the University of Minnesota. The Yellowstone tablet - “Yellowstone” is a 7-inch tablet with full Tango functionality, released in June 2014, and sold as the Project Tango Tablet Development Kit. It features a 2.3 GHz quad-core Nvidia Tegra K1 processor, 128GB flash memory, 1920x1200-pixel touchscreen, 4MP color camera, fisheye-lens (motion-tracking) camera, an IR projector with RGB-IR camera for integrated depth sensing, and 4G LTE connectivity. The device is sold through the official Project Tango website and the Google Play Store. As of May 27, 2017, the Tango tablet is considered officially unsupported by Google.
Commercially available devices
Lenovo Phab 2 Pro
The Lenovo Phab 2 Pro is the first commercial smartphone with the Tango technology. The device was announced at the beginning of 2016, launched in August, and became available for purchase in the US in November. The Phab 2 Pro has a 6.4 inch screen, a Snapdragon 652 processor and 64 GB of internal storage, with a rear facing 16 megapixel camera and an 8 MP front camera.
Asus Zenfone AR Asus Zenfone AR, announced at CES 2017, will be the second commercial smartphone with the Tango Technology. It runs Tango AR & Daydream VR on Snapdragon 821, with 6GB or 8GB of RAM and 128 or 256GB of internal memory depending on the configuration. The exclusive launch carrier in the US was announced to be Verizon in May, with a targeted release of summer 2017.
The device I was using - Lenovo Phab 2 Pro
Technical specifications
- 6.4" display (2560x1440)
- Snapdragon 652 processor
- 16MP camera
- Time of flight camera
- IR projector
- Fisheye motion camera
- 4GB RAM
- Fingerprint scanner
Variables influencing the research so far
- Real time: depth perception, motion tracking
- Device: Google Tango
- User interface: Unity 3D, Visual Studio, Android Studio
- Augmented reality and portability
- User
- Semi experiment: daylight analysis
- Main experiment: real time mesh generation, real time point cloud, final prototype interface
06 Interface Beta
Interface Breakdown

Interface prototype
- Software development tool
- Programming language
- Software development kit

Interface procedural breakdown
- Connect and read the Tango
- Read RGB camera value
- Read PointCloud value
- Toggle between RGB and PC
- Adding brightness filter
- Adding grayscale filter
- Color clustering
- Lux meter real world calculation
- Max and min value
- False color legend
- Create an optimal color contour
- Get lux value on touch at any space
- Adding virtual objects to reality
Connect and Read the Tango
- Getting started with this application by importing all necessary SDKs and fixing all update issues.
Hello World !!
- Like all the tutorials I followed to get started, for my own satisfaction I started my application by showing the message "Hello World".
- When I clicked the Build button and gave the application permission to use the camera on the mobile phone, this was the screen I saw, saying "Hello World !!".
Read RGB camera
- The next step after "Hello World !!" was getting the RGBA camera feed.
- This RGBA camera feed code was accessed from the Google Tango SDK for Android Studio, in Java and C++.
- To access C++ (native language) one has to download the NDK tools from the manager in Android Studio.
- As in the image on the left, the code was taken from the C++ RGB Depth Sync Example.
Read PointCloud
- The most important part of this application and example code is the point cloud generation.
- My entire application is based on the data from the point cloud that is generated in real time.
- Since this is a performance-consuming task, Google gives the option of using the GPU of the mobile device for a smooth real time feed of the RGBA image and the point cloud.
Toggle between RGB and PC
- Continuing from the previous step is the option to toggle between the RGBA feed only, the Point Cloud feed only, or both feeds together.
- That is where the "Debug Overlay" option comes in handy. This is an inbuilt piece of code provided by Google.
- When this option is enabled we can switch the Point Cloud feed on and off, and a slider lets us set the alpha value of the RGBA camera between 0 and 1. When the slider is set to 1, only the Point Cloud depth feed is visible, rendered in shades of grey based on the distance from the camera. When it is set to 0, the depth cannot be seen because the RGBA feed is fully opaque and overlays the depth feed.
- This example code, which gives access to the RGBA as well as the depth feed, is very important because I can now access the corresponding RGBA value of each point in the cloud. This in turn helps to calculate the lux value, as explained in the following steps.
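The slider is effectively a per-pixel blend between the two renderings. The example project does this in a GPU shader; the sketch below is only an illustrative CPU version of the same idea, operating on single grayscale samples.

```java
// Sketch of the per-pixel blend that the Debug Overlay slider controls.
// slider = 1.0 -> only the depth shading is visible;
// slider = 0.0 -> only the camera image is visible (matching the behaviour described above).
public final class OverlayBlend {
    public static int blend(int cameraGray, int depthGray, float slider) {
        float out = slider * depthGray + (1f - slider) * cameraGray;
        return Math.round(Math.max(0f, Math.min(255f, out))); // clamp to 0-255
    }
}
```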
Adding Brightness Filter
- Using the OpenCV library for Android Studio we can add various filters, just like in Photoshop but for video in real time.
- Adding the brightness filter to the RGBA feed is required for better computation, so that the program can more easily distinguish between the darker and lighter shades.
- This difference in shades is needed to normalise the value of the camera feed from Vector4 (0, 0, 0, 0)-(255, 255, 255, 100) to Vector4 (0, 0, 0, 0)-(1, 1, 1, 1).
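As a rough, illustrative sketch of such a brightness adjustment with the OpenCV Java bindings (the gain and offset values are placeholders, not the ones used in the app):

```java
import org.opencv.core.Mat;

// Sketch: simple linear brightness adjustment, dst = gain * src + offset,
// saturated to the 0-255 range of the 8-bit frame.
public final class BrightnessFilter {
    public static Mat apply(Mat rgbaFrame, double gain, double offset) {
        Mat brightened = new Mat();
        rgbaFrame.convertTo(brightened, -1, gain, offset); // -1 keeps the same type
        return brightened;
    }
}
```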
Adding Grayscale Filter
- To normalise the video feed from 0-255 in all RGB channels and 0-100 in the alpha channel down to 0-1 in all four channels, I add a grayscale filter on top of the brightness filter.
- Now the program can read the real-time video feed much better, with just a grayscale range of 0-1.
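A minimal sketch of the grayscale-and-normalise step with the OpenCV Java bindings, assuming an 8-bit RGBA input frame:

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: convert the RGBA frame to grayscale and rescale 0-255 to 0.0-1.0,
// which is the representation the later steps work with.
public final class GrayscaleNormalise {
    public static Mat apply(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        Mat normalised = new Mat();
        gray.convertTo(normalised, CvType.CV_32F, 1.0 / 255.0); // scale into 0-1
        return normalised;
    }
}
```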
Color Clustering
- Adding all those filters to normalise the video feed provides the computational input for the color clustering technique.
- Based on the brightness values in the real world, this technique colors the video feed with a jet color combination driven by the brightness of the RGBA video feed.
- In the image on the left you can see the video recolored according to the false color legend below.
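For illustration, a jet-style false coloring of a grayscale brightness frame can be produced with OpenCV's built-in color maps (assuming the OpenCV 3.x Java bindings, where applyColorMap lives in Imgproc). This is a sketch of the idea, not the app's exact clustering code.

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Sketch: map an 8-bit grayscale brightness image to a jet false-color image.
public final class JetFalseColor {
    public static Mat apply(Mat gray8u) {
        Mat colored = new Mat();
        Imgproc.applyColorMap(gray8u, colored, Imgproc.COLORMAP_JET);
        return colored;
    }
}
```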
Lux meter real world calculation
[Photos: spot readings taken with the Lux Meter across the room, ranging from 0 lux up to roughly 789 lux, with the window blinds open and closed.]
- Since the mobile device does not house a light (lux) sensor, this part must be done manually.
- I bought a Lux Meter to take manual readings with the window blinds both open and closed, to check how the lux values change.
- From the two images above you can see the difference in light (one with blinds open and the other with blinds closed) and the difference in lux value at each spot.
- Now I have to do a manual calibration and teach the program to do this lux calculation without the Lux Meter, using only the camera feed.
Calibrating the camera feed for lux calculation
- The false coloring that the program produces is the same for any video irrespective of the daylight, so blindly calibrating against just one position or video would never reach a useful level of accuracy.
- To eliminate this error, the manual calibration must take into consideration various locations with different lux levels.
- For this, the auto-focus option of the camera has to be removed, as it changes the exposure value automatically.
- The steps to follow are:
- Manually set the exposure value of the camera in the program, sweeping from the lowest to the highest value, for example from -1 EV to +1 EV.
- At -1 EV, take a series of images.
- Select a particular portion of the screen's pixels with their RGB values, take the lux reading from the Lux Meter at the same spot, and note both in a table.
- Repeat this for -0.9 EV up to +1.0 EV and note down the values.
- Perform these readings at various spots and compare them.
- From the tables gathered at various light sources and locations, take the mean of the lux values at each exposure level.
- Now, from -1.0 EV to +1.0 EV, there is a mean reading from various places.
- Take the mean of the lux values across the different exposure settings.
- Once there is a mean value for each exposure, create a legend for the generated false color, from lowest to highest (a sketch of such a brightness-to-lux table follows below).
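A minimal sketch of how such a calibration table could then be used to estimate lux from a normalised brightness value. The class and method names are illustrative, and the readings fed into it would come from the measured table described above, not from the placeholder values shown here.

```java
import java.util.TreeMap;

// Sketch: brightness-to-lux lookup built from manual calibration readings.
// Keys are normalised brightness values (0-1); values are the mean lux
// measured with the Lux Meter for that brightness.
public final class LuxCalibration {
    private final TreeMap<Double, Double> table = new TreeMap<>();

    public void addReading(double normalisedBrightness, double measuredLux) {
        table.put(normalisedBrightness, measuredLux);
    }

    // Linear interpolation between the two nearest calibration points.
    public double estimateLux(double brightness) {
        if (table.isEmpty()) return 0;
        java.util.Map.Entry<Double, Double> lo = table.floorEntry(brightness);
        java.util.Map.Entry<Double, Double> hi = table.ceilingEntry(brightness);
        if (lo == null) return hi.getValue();
        if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
        double t = (brightness - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + t * (hi.getValue() - lo.getValue());
    }
}
```

For example, after adding the measured pairs, estimateLux(0.5) would return the interpolated lux for a mid-range brightness; the accuracy of this estimate is exactly what the repeated comparison against the Lux Meter is meant to check.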
False color legend: 0 lux to 800 lux.
- Repeat the above step, calculate the lux value using the program and compare it with the lux value from the Lux Meter.
- Keep experimenting with the mean values until you reach a good level of accuracy when compared with the Lux Meter.
- According to my tutor Angelos Chronis, who is an expert in computational fluid dynamics, and to my own assessment, the result from the above experiment should have a considerable rate of accuracy.
Create an optimal color contour
- Using the K-means clustering algorithm, we can normalise this false-colored video feed in real time.
- This algorithm draws boundaries between the shades of colors. For example, when a value of 5 is given, the algorithm splits the false color feed on the left into the 5 most common (or most closely related) colors.
- This value can range from 0 to 255; when 255 is given, the result is similar to the image on the left.
- So, based on the legend we are creating, we can choose a value to suit it.
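A sketch of this color quantisation step with OpenCV's k-means (Java bindings). The number of clusters k corresponds to the value described above; the details here are illustrative rather than the app's exact code.

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.TermCriteria;

// Sketch: reduce the false-color frame to k dominant colors with k-means,
// which is what draws the clean boundaries between shades.
public final class ColorContour {
    public static Mat clusterLabels(Mat colorFrame, int k) {
        // Reshape to a list of pixels (one row per pixel, one column per channel), as floats.
        Mat samples = colorFrame.reshape(1, colorFrame.rows() * colorFrame.cols());
        Mat samples32f = new Mat();
        samples.convertTo(samples32f, CvType.CV_32F);

        Mat labels = new Mat();
        Mat centers = new Mat();
        TermCriteria criteria =
                new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 20, 1.0);
        Core.kmeans(samples32f, k, labels, criteria, 3, Core.KMEANS_PP_CENTERS, centers);

        // labels now holds, for every pixel, the index of its cluster centre;
        // replacing each pixel with its centre color yields the contoured image.
        return labels.reshape(1, colorFrame.rows());
    }
}
```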
Create an optimal color contour
- While teaching the program to draw these boundaries between colors, as in the image on the left, we can add a few lines of code teaching the program to color them according to the lighting standards for a living room, a study room or any other space.
- In the image on the left, for example, a green boundary means optimal light for a living room, and red means a light value below what a living room requires.
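A sketch of that classification logic follows. The 150-300 lux band mentioned in the comment is only an assumed example target range for a living room, not a quoted standard; the real app would take its thresholds from its own legend.

```java
// Sketch: classify an estimated lux value against an assumed target range for a space.
public final class LightStandard {
    public enum Rating { BELOW_TARGET, OPTIMAL, ABOVE_TARGET }

    // e.g. rate(estimatedLux, 150, 300) for an assumed living-room band.
    public static Rating rate(double lux, double minLux, double maxLux) {
        if (lux < minLux) return Rating.BELOW_TARGET; // shown in red in the interface
        if (lux > maxLux) return Rating.ABOVE_TARGET;
        return Rating.OPTIMAL;                        // shown in green
    }
}
```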
Get lux value on touch at any space
- This sample image is a perfect illustration of what K-means clustering can and should do.
- Now, with the help of K-means clustering and the point cloud, I can get the color value at any given spot on the screen by reading the RGB value of the corresponding point in the cloud and comparing it with the false color legend created from the calibration.
- Therefore, in this interface, when one touches any spot on the screen we can get the lux value, as in the sample image on the left.
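A sketch of the touch lookup, assuming the cloud points have already been projected to screen coordinates and paired with their estimated lux values (the ScreenPoint type and method names are illustrative, not part of the Tango SDK):

```java
import java.util.List;

// Sketch: return the lux value at the touched pixel by taking the nearest
// projected point of the cloud.
public final class TouchLux {
    public static final class ScreenPoint {
        public final float sx, sy; // screen coordinates of the projected point
        public final double lux;   // lux estimated from the calibrated legend
        public ScreenPoint(float sx, float sy, double lux) {
            this.sx = sx; this.sy = sy; this.lux = lux;
        }
    }

    public static double luxAtTouch(List<ScreenPoint> points, float touchX, float touchY) {
        double best = 0, bestDist = Double.MAX_VALUE;
        for (ScreenPoint p : points) {
            double dx = p.sx - touchX, dy = p.sy - touchY;
            double d = dx * dx + dy * dy;       // squared screen-space distance
            if (d < bestDist) { bestDist = d; best = p.lux; }
        }
        return best;
    }
}
```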
Adding Virtual Objects to Reality
- The last and final part of this interface and research is what we can do with all the data that we acquire.
- Getting and collecting data is one thing; there is a lot of data, and what we do with it is the key to this research.
- For the data utilization part of this research, I am using the plane fitting and depth perception features of the Google Tango.
- For starters I am introducing basic geometric shapes into the scene, as in the image above.
- On the right you can see how Google Tango detects real environment features such as walls and table surfaces and places the virtual object accordingly, based on the normal of the real-world feature.
- So what if I can introduce these virtual objects into the scene and make them interact with, and alter, the real-world condition?
- For example, as in the image above, introduce planes at the light source to block the light and calculate how much light has been blocked when there is a surplus of light inside a space. Once the virtual objects give a result, the designer or the user can build a real structure based on the result from the application and edit the space to reach optimal light conditions.
Users
[Diagram: potential users and drivers — LEED, USGBC and IGBC certification, energy efficiency, sustainability and cost effectiveness.]
To Do for the final prototype
- Connect and Read the Tango
- Read RGB camera value
- Read PointCloud value
- Toggle between RGB and PC
- Adding Brightness Filter
- Adding Grayscale Filter
- Color Clustering
- Lux meter real world calculation
- Max and Min Value
- False color legend
- Create an optimal color contour
- Get lux value on touch at any space
- Adding Virtual Objects to Reality
Reached this point before the start of the final semester
Final Semester Timeline
20/04/2017
- Combining the filters to the app - K means Clustering algorithm for better color mapping
04/05/2017
- Calibrating the false color legend - Proper PointCloud RGBA sync
18/05/2017
- Integrate Plane Fitting - Adding Geometries - Sync with reality
01/06/2017
- Color Contouring - Touch Interface - Small Room Space Demo
Final Presentation
07 Augmented reality informed by real time analysis : The Last Change
The Last Change
Jury Comments
Angelos Chronis - Computational expert, CFD expert.
- Do the lux calculations properly.
- The white color represents more than 2000 lux, but my meter says it is only 800 lux.
- Maybe it has to do with the camera sensor of my phone.
- Turn off the automatic brightness in the SDK.
- Generate many HDRI images and choose the one that suits mine; in the camera SDK app you can also specify whether the exposure should change automatically according to the focus in the space or stay fixed.
- Virtual objects: generate a virtual space using the point cloud; each point in the virtual space carries a light value; introduce a virtual geometry; the point cloud mesh will change accordingly and the light value of each point will change; this will be the process for the plane fitting interface.
Areti Markopoulou
- Insane.
- Feels that there is complication in my process.
- Take advantage of the existing simulations.
- Asks whether I have studied the existing daylight analysis tools to know the entire process behind them.
- Thinks the scenario of applying this tool at the end, to decide where to place the light fittings and to read whether the optimal conditions are attained or not, WORKS.
- Bringing the virtual objects into the space is fine, but the fact that light changes at each level of the space in terms of the height of the building, the seasons of the year, the time of the day, etc. feels very difficult to implement.
- This application changes according to its users: any user can understand it easily, and any user can be the one to utilise it for analysing light in a space.
- If you really manage to find a tool or piece of software, it would be an application with a complete narrative of how it is being used.
- What is the contribution of this tool compared to the existing analysis tools?
- I will push you to come up with a powerful narrative that makes sense of the project and makes it a complete project.
The Last Change
The research has taken a different path following the second semester's presentation, shaped by the comments from various experts, the time remaining, the technology available and the level of accuracy attainable. The last change I made to my research was based on the above factors, and also on the knowledge of software and programming I had within the limited time. At the initial stage of narrowing down, I listed the various false-color simulations that can be done in real time without considering time as a factor. The simulations that were possible, leaving time aside, are as follows.
False Color Simulations
- Light: intensity of light, light illuminance
- Coloring: color segmenting, normal maps, map legends, surface curve coloring, contouring
- Heat: change in temperature, thermal analysis, heat transfer, electronics cooling
- Wind: wind flow direction, velocity of the wind, low and high pressure, aerodynamics, ventilation analysis, HVAC
- Fluid: fluid movement
The final set of factors and variables considered before deciding the final prototype from the above simulations was:
- False color simulation
- Real-time simulation
- Image based
- Depth based
- Applications/uses
- Technology required
- Analysis to be considered, and
- Achievable within the available time
Final Prototype & UI: Wind Flow Simulation
[Diagram: introduce geometries, manipulate geometries and analyse the surface pressure based on wind flow, in real time, built on Unity3D/Unreal Engine.]
Interface Breakdown
Google Tango + Unity3D engine = The Interface
[Diagram: augmented reality, dynamic mesh generation, wind simulation and virtual object manipulation; applications — manipulating an existing location's wind flow, analysing the influence of existing structures on wind flow, automobile slipstream analysis.]
Frequently asked question
During my final presentation, and during various job interviews where I presented this research, I was constantly asked one question. It was annoying, but maybe they had a point when they asked it.
"We all know what Unity3D can do and what Google Tango is capable of. What is your input in this interface?"
My answer to all those questions was:
- While cooking a signature dish, every chef uses ingredients that are commonly used by various other chefs. What makes the dish a signature dish is how, and what, they make from those common ingredients.
- This application of mine is similar to a chef's signature dish. Even though I used the features of Google Tango and Unity3D to make the application, the way it works, its uniqueness and its purpose make it what it is: my signature dish.
Application 1
Use for this application: Analysing the influence of existing structures on wind flow and the influence of wind flow on existing structures.
About this application: This application, testing and analysing the influence of wind flow on existing structures and vice versa, was the first ever outdoor test of this interface. The location chosen was the corner of a building block with a bevelled corner. I imagined how the wind might flow at a 45-degree wall intersection and thought, why not put the application to the test.
Generating Mesh for analysis
Generating Wind flow
Wind flow against existing structures
Application 2
Use for this application: Manipulating an existing location's wind flow
About this application: This application, and this research, was built for exactly this purpose: an architect or designer can go to the site and use the application to simulate the existing wind conditions and, based on the wind flow on that site, start modelling in real time. If there are existing buildings or structures on the site, the application lets the user introduce virtual objects, and the simulated wind flow interacts with the real-world structures as well as with the virtually introduced objects, allowing the user to manipulate the wind flow.
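The actual prototype relies on Unity3D's particle system for this interaction, but the underlying idea of wind particles being advected and blocked by real or virtual geometry can be sketched in a few lines. The sketch below (in Java, for consistency with the earlier examples) is a simplified assumption of that idea, not the prototype's code.

```java
// Conceptual sketch: advance one wind particle by a uniform wind vector and
// keep it from entering an axis-aligned box (standing in for an existing
// building or a virtually introduced geometry). A real solver would deflect
// the particle along the surface instead of simply cancelling the step.
public final class WindSketch {
    public static float[] step(float[] pos, float[] wind, float dt,
                               float[] boxMin, float[] boxMax) {
        float[] next = {pos[0] + wind[0] * dt, pos[1] + wind[1] * dt, pos[2] + wind[2] * dt};
        boolean inside = next[0] > boxMin[0] && next[0] < boxMax[0]
                      && next[1] > boxMin[1] && next[1] < boxMax[1]
                      && next[2] > boxMin[2] && next[2] < boxMax[2];
        return inside ? pos : next; // blocked particles stay where they are
    }
}
```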
Generating Mesh for analysis
Generating Wind flow
Generating wind speed result using mesh plane
Application 3
Use for this application: Automobile slipstream analysis
About this application: This application shows that the research and the interface are not built only for Architecture, Engineering and Construction (AEC) purposes, but reach various other fields as well. It demonstrates how fields beyond AEC can benefit from such an interface. Here the automobile industry can benefit: the application helps eliminate the need for an entire space specially built for slipstream analysis of an automobile, which normally requires special conditions to reach the necessary level of accuracy. This application demonstrates how all of that can be avoided when the analysis is done through augmented reality.
Generating Mesh for analysis
Generating Wind flow
Wind flow against existing structures
Application 4
Use for this application: Wind flow in interior spaces (Virtual Reality)
About this application: This application demonstrates that the interface suits not only exteriors and augmented reality but also interiors and virtual reality. A 3D model of the existing space is made and, using the Google Tango motion tracking, one can actually walk through the virtual world at a similar scale and experience how the wind flows in that space, in a virtual condition, while getting the same result as the augmented version. The only change is that, once the application is upgraded to do better modelling, the virtual space can be edited like in existing 3D modelling software, based on the analysis.
Interior wind flow simulation
Virtual wind flow simulations
Virtual wind flow simulations
Inference: Performance, Accuracy and Resolution
- The mesh below the wind simulation shows the velocity result of the wind flow in that zone or area.
- The color on the mesh plane represents the wind speed in that zone. The resolution of that result depends on three factors: resolution, accuracy and performance.
- If a higher resolution of the wind speed result is required, more particles need to be generated, so there is a huge drop in performance; but with better resolution comes better accuracy.
- If a user needs better performance, the number of wind particles has to be reduced, which results in a much lower accuracy level.
- If a user needs maximum accuracy, the number of wind particles has to be increased, which results in very low performance and slows down the interface.
[Diagram: the trade-off triangle between resolution, accuracy and performance.]
Inference
Micro Factors
- These are sudden changes occurring within a long, otherwise constant period, when things are not supposed to happen:
- Sudden changes in weather conditions
- A period of sunny days in winter
- A few cloudy and rainy days in summer
- Changes in daylight conditions, etc.
Macro Factors
- These are the known facts; we know they will occur consistently:
- The four seasons of the year
- The amount of daylight and the wind conditions over different seasons and different geographical locations
Overall Inference
- The relationship between resolution, accuracy and performance, together with the micro and macro factors, determines the level of accuracy that can be attained using this application.
- Barring the above factors, the architectural design process can indeed be carried out in real time.
- This was the first smartphone of its type.
- There is a need for improvement in technology, for features such as hand gestures, a better rate of mesh generation and increased depth capture.
- A dedicated software development kit leads to software-hardware compatibility, which in turn leads to better performance.
- This is a platform for future applications based on real-time augmented analysis.
Prototype Images
Prototype Images
Prototype Images
Prototype Images
Thesis Statement
This research merges augmented reality into the critical part of the architectural design process, which in turn enables architects and designers to analyse and design simultaneously in real time.
08 Augmented reality informed by real time analysis : The Interface
The Interface
Interface Explained Main Menu
1
About the Application button - Click to view the abstract of this research and learn more about this application.
2
Press to Start Application button - Click to enter the main interface screen, from where you can start analysing and utilising the information.
Interface Explained Main Screen
1
Command Bar - This toolbar gives tips and directions on what to do when any button is selected or when the interface is idle.
2
Wind Flow - When this button is clicked the user gets options to switch on/off the wind simulation mode.
3
Tango - When this button is clicked the user gets options to generate mesh of the real world and various other advanced options related to mesh.
4
Geometry - When this button is clicked the user gets options to create and edit basic box geometry. It contains all the necessary tools to edit and transform the box.
Interface Explained Interface Chain of Command
The Tango mode - It is first in the chain of command. This option is used to Generate Mesh
The Geometry mode - It is second in the chain of command. This option is used to create and transform box geometry for modelling.
The Wind Flow mode - It is third in the chain of command. This option is used to simulate the wind flow analysis.
Interface Explained Meshing 1
Main Tango Button - When this button is clicked the user gets access to the various options under this menu. - At first only the basic options are available
2
Place Objects Button - This option uses the plane fitting feature of the Google Tango. When this button is clicked the user can place a Mesh Plane on any surface. Touch on the screen to place a Mesh Plane on any horizontal or vertical surface in the real world.
3
Points per object - This option lets the user place objects on a very small or a larger surface, based on the number of points per object. - The default is 200 points per object, which means the plane fitting needs to detect 200 points to place an object on the surface. - The smaller the number of Points per Object, the smaller the surface area needed to place a Mesh Plane. - The larger the number of Points per Object, the bigger the surface area has to be to place a Mesh Plane.
4
Activate Dynamic Mesh - This option enables the user to generate a mesh of the real world for augmenting and placing virtual objects. - If this option is selected, the user gains access to various other advanced options to edit the generated mesh.
5
Top View - This option gives the user a top-view camera at the bottom right corner of the screen, which helps the user view the generated mesh from the top.
Interface Explained Meshing 1
Place Objects Button - This option uses the plane fitting feature of the Google Tango. When this button is clicked the user can place a Mesh Plane on any surface. Touch on the screen to place a Mesh Plane on any horizontal or vertical surface in the real world.
2
Points per object - This option lets the user place objects on a very small or a larger surface, based on the number of points per object. - The default is 200 points per object, which means the plane fitting needs to detect 200 points to place an object on the surface. - The smaller the number of Points per Object, the smaller the surface area needed to place a Mesh Plane. - The larger the number of Points per Object, the bigger the surface area has to be to place a Mesh Plane.
3
Activate Dynamic Mesh - This option enables the user to generate a mesh of the real world for augmenting and placing virtual objects. - If this option is selected, the user gains access to various other advanced options to edit the generated mesh. - Click the Activate button to start generating the Dynamic Mesh of the real-world environment.
4
Meshing Options- This option enables the user with options to Clear, Pause/Resume and Export the Mesh generated
5
Mesh Grid Types - When the Activate button is clicked, the interface starts to generate a mesh with the default Grid Type 1. - Click Grid Type 2 and then click the Clear button from the Meshing Options to refresh the mesh generation; the new mesh will have Grid Type 2 as its grid pattern.
6
Texture UV Size - Move the Slider to reduce or increase the size of, or gap between, the grid cells. - Move the Slider and click the Clear button from the Meshing Options to refresh the mesh generation with a tighter or wider grid.
7
Clear Button - Click this button to clear the generated mesh and start generating a new mesh.
Pause/Resume Button - Click this button to pause or resume the mesh generation.
Export Button - Click this button to export the generated mesh to the SD card in the mobile phone for external use.
Interface Explained Geometry 1
Main Geometry Button - When this button is clicked the user gets access to the various options under this menu.
2
Create Box - Click this button and touch the screen to place a box of default size. - This option uses the Plane Fitting. It also works on the Dynamic Mesh created.
3
Edit Box - Click this button to edit the size of the box. Edit the size by moving the Slider and Touch on the Screen to update the change in box size.
4
Control Points - Click this button to display the control points for the box. - Touch on the Screen to update the change in box size. - The box size can still be edited using the Sliders
5
Deform Box - Click this button to deform the box form. When this button is pressed the Box has Control Points in Red Color. - Press and move the Control Point to deform the box. - When the Deform Box button is clicked the Edit Box and Create Box will be automatically switched off and can no longer be used.
6
Add Boxes - Click this button to add boxes of default size on top of any geometry. - Use this option to add an unlimited number of boxes by touching the screen on any surface in the real world or on virtual objects. - This option disables Control Points, therefore the boxes cannot be deformed.
7
Move Box - Click this button to move one box at a time. - Press, hold and drag the geometry to move the box. - This option disables Add Box, therefore new boxes cannot be added.
8
Edit Settings - Move Sliders to edit the Height, Width and Depth of the created box. - This option is accessible only when the Edit Box option is clicked
9
Create Plane - Use these options to Create Plane and use the Sliders to edit the Depth and Width of the Mesh Plane.
Interface Explained Wind Flow 1
Main Wind Flow Button - When this button is clicked the user gets access to the various options under this menu.
2
Activate Wind - Click this button to activate the wind flow to see the direction. - The Wind Flow will always be parallel to the phone’s Screen at the start of the app.
3
Wind Analysis - Click this button to activate the wind flow analysis over the virtual and real world structures.
4
Wind Density - Increase the Slider to increase the density of the wind flow. - This affects the phone's performance.
5
Wind Velocity - Increase the Slider to increase the velocity of the wind flow. - This affects the phone’s performance.
6
Wind Simulation Height - Increase the Slider to increase the height of the wind flow.
7
Wind Turbulence Multiplier - Increase the Slider to add more turbulence to the wind flow.
8
Wind Distribution- Increase the Slider to add more power to the wind flow.
9
Wind Effect on Geometry- Increase the Slider to affect the flow of wind when it hits any virtual or real world structures.
10
Cone/Edge - Toggle between Cone and Edge to change the flow of wind. - Cone gives a more distributed flow of wind. - Edge gives a single row of wind flow.
11
Resolution Setting - Click the Ground Mesh button to activate the resolution result on the ground mesh plane. - To activate the Ground Mesh Plane, use Place Objects from the Tango menu. - Click Geometry Mesh to show the resolution on the boxes in the scene.
12
Ground Mesh/Geometry Mesh UV Size - Move the Slider to increase or decrease the resolution of the wind flow. - A smaller value gives a faster and more readable result. - A larger value affects the phone's performance.
13
Velocity Value- Click the Velocity Value button to switch on the False Color Legend at the Top Left Corner of the screen.
14
False Color Legend - Displays the velocity of the wind based on the color of the wind flow which is affected by various virtual or real world structures.
Download App Google Tango Enabled Application Link
Non Google Tango Application Link
Google AR Core Application Link
Video Link Google Tango Application Working Video Youtube Link