Reality to virtuality | Experimental project on Point cloud technology


Static scenes to non-static scenes, reality to virtuality - A concert in 3D
An experimental approach to new technologies through artistic process



Date: 10.01.2017
Authors: Katharina Henggeler, Nico Lang, Nicolas Uebersax
Supervisors: Jonathan Banz, Ephraim Friedli
Professorships: Chair Karin Sander, Architecture and Art; Chair Andreas Wieser, Geosensors and Engineering Geodesy


- 1 Index -

1 Index
2 Introduction
3 Inspiration
4 Concept
5 Key ideas
5.1 Approach A: Abstract fusion
5.2 Approach B: Realistic fusion
6 Data collection
7 Realization
7.1 Approach A: Abstract fusion
7.2 Approach B: Realistic fusion
8 Results
9 Discussion
10 Future work


- 2 Introduction -



Terrestrial laser scanners are high-end measurement instruments, typically used to produce accurate and dense 3D models of the surrounding environment. A common application in the field of Geomatics is the modeling of buildings, bridges, dams or other constructions for extensive monitoring purposes. In other fields like Architecture, laser scanners are deployed for visualization purposes. As a single recording takes a few minutes, it is usually assumed that the objects of interest are at rest during the measurement.

If the application requires a complete model of a 3D object, several challenges arise. The main cause of data gaps is occlusion; the object is therefore usually scanned from several positions and the resulting point clouds are merged into a complete model by registration. Additionally, the point clouds contain noisy data points caused by reflective surfaces or by laser beams that hit multiple surfaces (mixed pixels). Another source of noisy artifacts are moving objects such as walking people. These noisy data points are usually removed to get a clean model of the static environment.
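The cleaning and merging steps described above can be illustrated with a few lines of code. The following is a minimal sketch using the open-source Open3D library (not a tool used in this project); the file names are placeholders and the parameter values are plausible defaults rather than values from our scans.

import open3d as o3d

# Two scans of the same scene taken from different positions
# (placeholder file names, not data from this project).
source = o3d.io.read_point_cloud("scan_position_1.ply")
target = o3d.io.read_point_cloud("scan_position_2.ply")

# Remove noisy points (e.g. mixed pixels or passers-by): points whose mean
# distance to their neighbors deviates strongly from the average are dropped.
source, _ = source.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
target, _ = target.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Refine the alignment of the two scans with point-to-point ICP,
# starting from a rough initial transform (identity here).
icp = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated rigid transform and merge the clouds into one model.
merged = source.transform(icp.transformation) + target
o3d.io.write_point_cloud("merged_model.ply", merged)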



- 3 Inspiration -



This project is not the first work to study laser scan artifacts in an artistic way. Figures 1 to 3 show screenshots of a video posted by ScanLAB on Vimeo (vimeo.com/scanlab). The interesting forms and effects shown in these pictures were the inspiration and starting point for this project.



Figure 1: Example of the movement aspect in a laser scan of a static scene. (vimeo.com/scanlab)




Figure 2: Idea of turning the scans into art. (vimeo.com/scanlab)




Figure 3: Use of laser scan properties to get a scene that looks as if illuminated by a single light source. (vimeo.com/scanlab)




- 4 Concept -



In our fields of study, Geomatics and Architecture, the artifacts described above are usually an undesired side effect of laser scanning and are to be avoided. Within the scope of the 360° LAB, we grasp the opportunity to explore the appealing visual effects of these artifacts. Our goal is to develop a graphic language in which this imperfection becomes part of the expression. Especially the distortion caused by moving objects inspired us to focus on the visualization of non-static scenes in point clouds recorded with a terrestrial laser scanner. In particular, the aim is a visualization of a concert of the band “Finger Finger” in the form of a video. To this end, a concert was recorded with the laser scanner and the same concert was also filmed. With these two recordings in different media at hand, the goal was to fuse the real video footage and the point cloud into a video clip for the song “When it’s done” by Finger Finger. During this project, the following questions are examined:

- How can motion be captured using the laser-scan measurement technique?
- How can we visualize the 3D point cloud to get an aesthetic that fits the underlying song?
- How can we virtually explore a 3D point cloud to give it a realistic feel of being part of the frozen scene?
- How can we fuse point cloud renderings with real video footage?


- 5 Key ideas -



The visualization of the point cloud should highlight the point cloud aesthetics, where the individual measurement points are visible in space. The idea is to focus on scenes where artifacts and interesting details in the point cloud arise from the moving people. To make the visualization “alive” we fuse the point cloud rendering with the real video footage. Further, we render the point cloud with a realistic camera movement, which is either extracted from real camera footage with a structure from motion algorithm or created artificially with a shaky characteristic. The dynamics of the video should follow the dynamics of the underlying song, which we decided to play backwards so that the theme of imperfection is stressed by artifacts in the music as well. And last but not least, we explore the effect of stroboscopic light to create a stop-motion impression and make the scenes appear more energetic. Two approaches to fusing the point cloud rendering with the real video footage were followed.



- 5.1 Approach A: Abstract fusion -

The point cloud is rendered such that the movement and the position in space of the virtual camera are similar to those of the real video footage. This means that the camera movement is shaky like that of the hand-held camera with which the real video footage was recorded. Further, the artificial camera follows a path that could have been produced by a person attending the concert. Therefore, artificial views (like a bird's-eye fly-through) are consciously avoided. Two ways of combining the real footage and the point cloud renderings are used within this approach. First, fast cuts between the two media are applied to generate an effect similar to a stroboscopic light. Second, the real video footage is used as an overlay to texture the point cloud renderings. The fusion in this approach is achieved on an abstract level, thus earning itself the name “abstract fusion”.

- 5.2 Approach B: Realistic fusion -

The second approach extracts the camera movement from the video footage, which is then used to render the point cloud with the video footage as the background. This produces a spatially correct fusion of the two media and is therefore named “realistic fusion”.





- 6 Data collection -



As mentioned in chapter 4 Concept, the key idea was to scan moving objects, people to be more precise, and then focus on the resulting artifacts. And where do people move more than at a concert? Therefore, the laser scans were taken during the concert of the band Finger Finger at the Toni Areal on October 6th, 2016. By scanning the natural scene at the concert with the laser scanner, time is to be visualized in space. The resolution of the laser scanner was adjusted such that a single scan takes about 4 minutes to record the full 360° environment. Thus, the resulting point cloud is a summary of this short time interval rather than a snapshot. The scanner was placed directly beside the singer in the center of the scene, between the band and the audience.

Below follow a few screenshots of the scans and the artifacts found in them. In Figure 4 and Figure 5 one can see how the laser scanner acts like a point light source to the scene, projecting “shadows” into the black space. Since objects are only recorded from one side, the viewer seems to look into the 3D objects when viewing the point cloud from beyond them towards the scanner location; the people's faces, for example, are even visible from behind. In Figure 7 the guitar's neck is bent, the guitarist has three legs, and the keyboardist has three heads. Figure 8 again shows how missing data appears as shadows. And lastly, in Figure 9, the periodic movement of the bassist's arm causes sinusoidal structures.
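Why periodic motion leaves a sinusoidal trace can be illustrated with a small toy simulation (our own illustration with assumed numbers, not part of the project's data processing): since the scanner sweeps slowly through azimuth, the horizontal axis of a scan is effectively a time axis, so a hand oscillating up and down is written into the point cloud as height versus time.

import numpy as np
import matplotlib.pyplot as plt

scan_duration = 240.0                        # roughly 4 minutes per full scan
t = np.linspace(0.0, scan_duration, 20000)   # measurement times during the sweep
azimuth = 360.0 * t / scan_duration          # azimuth grows linearly with time

# Assume the bassist's hand oscillates up and down with a period of about 1 second.
hand_height = 1.2 + 0.15 * np.sin(2.0 * np.pi * t / 1.0)

# The hand is only recorded while the beam points at it; assume a small azimuth window.
hit = (azimuth > 120.0) & (azimuth < 140.0)

plt.scatter(azimuth[hit], hand_height[hit], s=1)
plt.xlabel("azimuth [deg] (proportional to time)")
plt.ylabel("recorded height [m]")
plt.title("Periodic motion recorded as a sinusoidal structure")
plt.show()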



Figure 4: Screenshot of a laser scan showing how the scanner acts like a point light source to the scene. On this screenshot it is interesting to notice how we can see through the head of the singer and view into the guitarist's face from the back.




Figure 5: Screenshot of a laser scan showing how the scanner acts like a point light source to the scene.




Figure 6: Screenshot of a laser scan. View of the laser scanner and the deformation of people.




Figure 7: Screenshot of a laser scan. Artifacts: bent guitar neck, the keyboardist has three heads, the guitarist has three legs.




Figure 8: Screenshot of a laser scan. Artifacts: bent guitar neck; missing points appear as shadows when moving away from the scanner's point of view.




Figure 9: Screenshot of a laser scan. Artifacts: Periodic movement of the bassist's arm causes sinusoidal structures.




- 7 Realization -

- 7.1 Approach A: Abstract Fusion -



To arrive at the abstract fusion, basically three steps were necessary:

1. Point cloud visualization
2. Recording videos of the visualized point clouds
3. Cutting the recorded video snippets together according to the underlying music

Point cloud visualization

For step one, the 3D visualization software 3DS Max was used. In contrast to Cinema 4D (see chapter 7.2 Approach B: Realistic Fusion), loading a laser scan into 3DS Max is a standard feature. By creating a point cloud object, one can load point clouds in RCS or RCP format and even choose between several visualization options (see Figure 10). However, as the Faro Focus laser scanner saves the point clouds in the FLS format, the point cloud files had to be converted first: they were converted to E57 files using Faro Scene and then to RCP files using Autodesk ReCap.

Coming back to the visualization of the point clouds, displaying the points as white squares in black space is probably the simplest approach; however, some 3D structures are hardly visible this way (e.g. the singer's eyes in Figure 10, left). Coloring the points according to the direction of the surface normal (Figure 10, middle) makes such detailed structures more visible, but the colorful visualization does not fit the band's black-and-white concept. However, this colored version can easily be converted to black-and-white in Adobe Premiere (Figure 10, right).



Figure 10: Point cloud visualizations: white points (left), colorization by normal (middle), colorization by normal converted to black-and-white (right)
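The "color by normal" display mode can also be reproduced outside of 3DS Max. The following sketch uses the open-source Open3D library (an illustration under the assumption of a PLY export with a placeholder file name; not the pipeline used in the project) to estimate a normal per point and map its direction to an RGB color.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("concert_scan.ply")   # placeholder file name

# Estimate a surface normal for each point from its local neighborhood.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Map each unit normal (components in [-1, 1]) to an RGB color in [0, 1].
normals = np.asarray(pcd.normals)
pcd.colors = o3d.utility.Vector3dVector((normals + 1.0) / 2.0)

o3d.visualization.draw_geometries([pcd])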

Recording videos of the visualized point clouds

In order to record videos of the point clouds (step two), camera objects moving on circular or elliptical paths were created. However, this looked unnatural and rather far from the intended realistic camera movement. Also, rendering the camera scenes took far too long (up to 3 days for 3 seconds of HDTV quality) and was not exactly robust. Hence, we eventually decided to film the preview of the camera scenes in 3DS Max with a DSLR camera, effectively adding the desired realistic shaky feel to the recording and cutting a lot of unnecessary rendering time. On top of that, this adds yet another layer of reality and virtuality, which in turn fits perfectly with the name of this lecture: reality to virtuality... and possibly back again?

Figure 11: Adding layers of reality and virtuality
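For completeness, the kind of elliptical camera path we first animated, and the low-frequency jitter that could be added to fake a hand-held feel, can be sketched as follows (a toy example with assumed values; exporting such keyframes to 3DS Max is not shown).

import numpy as np

def elliptical_camera_path(n_frames, a=4.0, b=2.5, height=1.7):
    # Camera positions on an ellipse around the scene center (toy dimensions).
    angles = np.linspace(0.0, 2.0 * np.pi, n_frames)
    return np.stack([a * np.cos(angles), b * np.sin(angles),
                     np.full(n_frames, height)], axis=1)

def add_hand_shake(path, amplitude=0.03, smoothing=15, seed=0):
    # Overlay smoothed random jitter so the shake is low-frequency, not white noise.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, amplitude, size=path.shape)
    kernel = np.ones(smoothing) / smoothing
    for axis in range(3):
        noise[:, axis] = np.convolve(noise[:, axis], kernel, mode="same")
    return path + noise

frames = 25 * 10                                        # 10 seconds at 25 fps
shaky_path = add_hand_shake(elliptical_camera_path(frames))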


Cutting the recorded video snippets together according to the underlying music

In step three, the recorded video snippets were cut together using Adobe Premiere. Experimenting with various overlay techniques showed that multiplication, division, and hard and soft light, along with the strobe effect, gave the most appealing results. The tune “When it’s done” by the band Finger Finger served as a guide for the dynamics of the video. In order to stress the topic of artifacts in the music as well, we decided to play the song backwards. Since the video was cut to the backwards version of the song, playing the resulting video in reverse has an interesting effect: the original song is reconstructed and the dynamics of the video change completely.

Figure 12: Overlay effects in Adobe Premiere
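The overlay modes used in Premiere correspond roughly to simple per-pixel operations. The following sketch is our own approximation of them (Premiere's exact formulas may differ), with frames assumed to be arrays of floats in [0, 1].

import numpy as np

def multiply(base, overlay):
    # Darkens: the product of the two layers.
    return base * overlay

def screen(base, overlay):
    # Brightens: the inverse of multiplying the inverted layers.
    return 1.0 - (1.0 - base) * (1.0 - overlay)

def hard_light(base, overlay):
    # Multiply or screen depending on the overlay value.
    return np.where(overlay < 0.5,
                    2.0 * base * overlay,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - overlay))

def strobe(frames, period=4):
    # Keep every `period`-th frame and black out the rest (stop-motion feel).
    out = np.zeros_like(frames)
    out[::period] = frames[::period]
    return out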


- 7.2 Approach B: Realistic Fusion -



For this approach, the idea was to fuse the two data types, laser scans and real video footage, into one single scene with a realistic feel. To do so, we had to go through the following steps:

1. Determine the best software to use depending on data and goals
2. Check compatibility and export/import issues
3. Extract the camera movement from the real video footage
4. Integrate the scanned scene into the 3D space determined by the camera
5. Adjust render options and do the post-production



Determine the best software to use depending on data and goals

To realize our initial idea, 3D animation software was needed. Two main options were available: 3DS Max from Autodesk and Cinema 4D from Maxon. The first supports a built-in option for importing point clouds directly from ReCap (another Autodesk software), which was a considerable advantage compared to Cinema 4D (C4D). Importing point cloud files into C4D requires an external plugin called Krakatoa (from Thinkbox Software) to visualize and render the points inside the 3D animation software. Despite this disadvantage, we chose C4D for its built-in tool that generates a three-dimensional camera path out of video footage with a structure from motion algorithm. This capability was required for the following steps, and we could not find a good alternative in 3DS Max.

Check compatibility and export/import issues

The main issue was the conversion from the raw scans to the PTS format supported by Krakatoa. For this, a few additional steps were necessary. First, we used Geomagic Wrap to clean and partition the scans produced during the concert. Then we exported them as ASC files so that we could open them in CloudCompare. In CloudCompare, the file can be viewed as a point list; after a little reorganization it was possible to export it as an E57 file, a format that is quite stable and well suited for conversions.



Figure 13: Screenshot of the conversion process in CloudCompare

Finally, we opened the E57 file in Autodesk ReCap, where we could export the point list as a PTS file, one of the formats supported by Krakatoa (for import into C4D).
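Since PTS is a simple ASCII format (a point count on the first line, then one "x y z intensity r g b" line per point), the last conversion step could also be scripted directly. The following is a minimal sketch under the assumption of a plain "x y z [r g b]" ASC export; the file names and column layout are placeholders to adapt to the actual data.

def asc_to_pts(asc_path, pts_path, default_intensity=0, default_rgb=("255", "255", "255")):
    # Read the ASCII point list, skipping empty lines.
    with open(asc_path) as src:
        points = [line.split() for line in src if line.strip()]

    with open(pts_path, "w") as dst:
        dst.write(f"{len(points)}\n")                  # PTS header: number of points
        for p in points:
            x, y, z = p[:3]
            r, g, b = p[3:6] if len(p) >= 6 else default_rgb
            dst.write(f"{x} {y} {z} {default_intensity} {r} {g} {b}\n")

asc_to_pts("concert_scan.asc", "concert_scan.pts")     # placeholder file names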

Extract the camera movement from the real video footage

This step may sound very complex, but it is not as complicated as it seems. As mentioned before, Cinema 4D has a very good built-in tool that solves a 3D camera path in a few minutes. During this process (known as structure from motion) the program automatically tracks hundreds of points while the video plays; those points have to be easily recognizable throughout the video. Once the points are tracked, the program determines their movements and reconstructs their respective positions in virtual space. If the number of points and their quality are good enough, the camera is successfully solved and one can superpose the video footage with the 3D animation scene.


Figure 14: C4D screenshot after the camera tracking has been solved. The blue line is the path of the 3D camera.
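Cinema 4D's tracker is a closed tool, but the principle behind it can be sketched independently with OpenCV (our own illustration, with an assumed camera matrix and a placeholder file name): recognizable points are tracked from frame to frame, and the relative camera motion is estimated from the correspondences.

import cv2
import numpy as np

cap = cv2.VideoCapture("concert_footage.mp4")        # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# 1. Find easily recognizable points in the first frame.
pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

# Assumed camera intrinsics (focal length and principal point in pixels).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 2. Track the points into the next frame with sparse optical flow.
pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_next = pts_next[status.ravel() == 1]

# 3. Estimate the relative camera motion from the point correspondences.
E, inliers = cv2.findEssentialMat(good_prev, good_next, K,
                                  method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good_prev, good_next, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:\n", t)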

Integrate the scanned scene into the 3D space determined by the camera

This step was a tricky one. We started by setting a plane into the 3D scene. For this, we took the tracking points on the floor of the video footage and used them as reference points to create the plane. This plane then served as a base for importing the point cloud into Cinema 4D. Finally, some small adjustments were needed; for these, your eyes are the best tool to judge the final result.

Figure 15: C4D screenshot, setting of the normal plane
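Conceptually, this plane is simply a least-squares fit through the solved tracking points on the floor. A small sketch of such a fit (our own illustration with toy coordinates, not C4D's internal method):

import numpy as np

def fit_plane(points):
    # Least-squares plane through 3D points: the normal is the singular
    # vector of the centered points with the smallest singular value.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Toy example: a handful of solved tracking points lying roughly on the floor.
floor_points = np.array([[0.1, 0.0, 0.02],
                         [2.3, 0.1, -0.01],
                         [1.0, 3.2, 0.03],
                         [2.8, 2.9, 0.00]])
centroid, normal = fit_plane(floor_points)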


Adjust render options and do the post-production

We explored the render settings of Krakatoa. Here one can choose between different types of visualization, such as the number of points to be displayed or a blurring effect that links the points together.

Figure 16: C4D screenshots, two rendering options

For the post-production we chose Adobe After Effects. We mainly used this software to smart-cut (mask out) the foreground objects that hide parts of the point cloud, like the pillars of the room in our case. This step makes the video much more realistic. We also used After Effects to apply color filters and other experimental filters such as the stroboscopic effect.


Figure 17: After Effects screenshot, initial view

Figure 18: After Effects screenshot, animation smart cutting



Figure 19: After Effects screenshot, final view



- 8 Results -



Approach A: Abstract fusion

https://vimeo.com/198884264

Approach B: Realistic fusion

https://vimeo.com/198888131

Blog link https://blogs.ethz.ch/360/



- 9 Discussion -



Although the documentation of the two realizations (abstract and realistic fusion) reads like a rather smooth process, the actual work included quite a few challenges. It is only thanks to the broadness and flexibility of the lecture framing this project that we were still able to arrive at a satisfying result; had it been necessary to define a precise form of the final product at the very beginning, this would hardly have been possible.

The main problem was that no single 3D modeling and animation software was available that provided all the features and functionalities we needed. The software needed to be able to import and display point clouds in a satisfying way, as is possible in 3DS Max, for example (see chapter 7.1 Approach A: Abstract Fusion and especially Figure 10). In Cinema 4D, point cloud import is only possible with the Krakatoa plugin, which was not available to us until about two weeks before the presentation date. And even with this plugin it is not possible, for example, to color the points according to the direction of the surface normal as it is in 3DS Max. Cinema 4D, in turn, is more powerful than After Effects at extracting the camera movement from real video footage, which was necessary because of the dark lighting in the real video footage. This problem was also the reason why we developed two approaches in the first place: the realistic fusion using Cinema 4D and the abstract fusion using 3DS Max.

Another challenge concerned rendering the animation out of 3DS Max. The rendering time compared to the length of the video to be rendered was especially unacceptable, since the idea was to render or record as many scenes as possible, take snippets from them and cut them together into a new video. Investing this much time in rendering scenes that might not even be used in the end is inefficient. However, this problem was nicely solved by filming the screen with a DSLR camera, which even provided some additional possibilities (see chapter 7.1 Approach A: Abstract Fusion).

In conclusion, it is fair to say that we learned a lot during this project: how to use 3D modeling and animation software, about point cloud data formats, about the nature of laser scan artifacts, and, not least, that necessity may very well be the mother of invention.


- 10 Future work -



We do not consider this project to be finished. For us, it is rather an experimental exploration of the application of laser scanning technology in art, which can open up a new world to us. This is why we can imagine carrying on this project in the future. Up to now, the first approach has been developed further, but we think the second one still has a lot of potential. In addition, we only used a single scan, while about 30 scans were recorded during the concert, which opens up yet another range of possibilities.




