Zhiwei Liao
Collective Visual System Learning from Social Media
Harvard University Graduate School of Design
Collective Visual System: Learning From Social Media
By
Zhiwei Liao
Master of Architecture, University at Buffalo, 2005

Submitted in partial fulfillment of the requirements for the degree of Master in Design Studies, Technology Concentration, at the Harvard University Graduate School of Design, May 2017
Copyright © 2017 by Zhiwei Liao. The author hereby grants Harvard University permission to reproduce and distribute copies of this Final Project, in whole or in part, for educational purposes.
Signature of the Author ______________________________________________________________________________ Zhiwei Liao Harvard University Graduate School of Design
Certified by ________________________________________________________________________________________ Panagiotis Michalatos Assistant Professor of Architecture Harvard University Graduate School of Design
____________________________________________ John May Master in Design Studies, Co-Chair Design Critic in Architecture
___________________________________________ Kiel Moe Master in Design Studies, Co-Chair Associate Professor of Architecture and Energy
Abstract

Today, digital displays are incrementally influencing the spatial experience, from sensing to browsing. By the end of 2016, Instagram had already reached 600 million users. Since the built environment is visually consumed at an unprecedented rate, it is critical to understand the relationship and bidirectional influence between architecture and social media.

Addressing the importance of weighted points of view in architectural space, as opposed to the Isovist (Benedikt, 1979), the Collective Visual System comprises three components: Objects with Design Intent, Collective Visual Field, and Preferable Vantage Points. It can be used to control the flow of visual information and enhance visual outcomes in the integrated design fields, including but not limited to urban design, landscape, and architecture.

This thesis conducted research based on photos from social media. Generated by photogrammetric modeling, a set of points in three-dimensional space is processed with various algorithms, including feature matching in images, filtering, clustering, and 3D reconstruction. The 3D information reconstructed from the selected buildings is generated based on Structure from Motion (SfM), a photogrammetric range imaging technique which can reconstruct a 3D model from a set of overlapping 2D images.

The Collective Visual System will be discussed with three examples and tested on eight existing architectural landmarks as case studies, from classical to contemporary architecture: the Parthenon, the Pantheon, Piazza Del Campidoglio, the Sagrada Familia, Villa Savoye, Ronchamp, the Guggenheim Museum Bilbao, and the China Central Television (CCTV) Headquarters. The case studies explicitly address the importance of the visual conditions from social media in architecture.
Contents

Chapter 1 Research Background
Chapter 2 Defining Collective Visual System
  2.1 Overview
  2.2 Isovist
  2.3 Remote Sensing
  2.4 Inverted Retina
Chapter 3 Case Studies
  3.1 Overview
  3.2 Instagram
  3.3 3D Reconstruction
  3.4 Filtering Attention
  3.5 Website: https://zhiweiliao.github.io/Viz
  3.6 Case 1 - The Order of Parthenon
  3.7 Case 2 - Pantheon
  3.8 Case 3 - Campidoglio
  3.9 Case 4 - Sagrada Familia
  3.10 Case 5 - Villa Savoye
  3.11 Case 6 - Ronchamp
  3.12 Case 7 - Guggenheim, Bilbao
  3.13 Case 8 - CCTV Headquarters
Chapter 4 Conclusion
List of Figures
Bibliography
Appendix
  Appendix A Parsing Data
  Appendix B Data Visualization Index
  Appendix C Main JavaScript
  Appendix D Site Plan Scatter
  Appendix E Frontal View
  Appendix F Attention
  Appendix G Event Handler - Top View
  Appendix H Event Handler - Frontal View
  Appendix I Iconic Index
  Appendix J Map
  Appendix K Style
  Appendix L Alignment
  Appendix M DATA
  Appendix N Final Review Presentation
Chapter 1 Research Background

Multiplicity

I was once told an inspiring anecdote: the structural engineer who collaborated with Patrik Schumacher suggested a highly visible structural element. In fact, the architect did not like the idea, but he approved it once the marketing photos were taken. This story exemplifies the temporality of a building, referring to Deleuze's concept of Multiplicity: the different moments, and the nature of photography, which is to capture one of many, one of all of a building's forms, one of its multiplicities. In Schumacher's case, the weighted value of a single moment in the entire life span of the building can influence the public perception in the media.

Historical Context of Visual Devices and Equipment

Visual devices and equipment influence how people perceive art and architecture. In the timeframe from the 16th to the 21st century, visual devices and equipment, from the camera obscura to photography, from the stereoscope to the Kaiserpanorama, from photography to Instagram, can be formulated into a series of sequential categories, from individuality and collectiveness to connectedness. The visual information transferred within the three types of devices influences visual perception in the spaces which surround us at various scales. To date, visual information is quantifiable enough to understand the taxonomy of visual conditions in relation to time and space.

Camera obscura (from Latin "camera": (vaulted) chamber or room, and "obscura": darkened; plural: camerae obscurae) means dark room, in which the 3D space, not being seen because the illumination is diminished, is supplemented by a 2D upside-down projection. In the terms of the Collective Visual System (see Chapter 2), there are three rearrangements: the objects being seen are transformed from 3D to 2D, the vantage points are re-situated and constrained, and the visual field is relocated from the outside to the inside.

The stereoscope, a word derived from the Greek στερεός, solid, and σκοπεῖν, to see, is "an optical instrument, of modern invention, for representing, in apparent relief and solidity, all natural objects and all groups or combinations of objects, by uniting into one image two plane representations of these objects or groups as seen by each eye separately" (Brewster, 1856). To see the relief from 2D images, two flows of different visual information are composed into one. Visual information is transformed through the device, which is closely associated with space at an intimate scale: the visual field is situated between the eyes and the images.

Flatness of Digital Displays

Photos of spaces can recreate spatial experience. Representations of actual space are developed and consumed visually in a particular preconditioned format in which the experience is vividly flattened by the screen of the digital device: dissolving one dimension, the z-axis (reducing spatial content), into the other remaining two, x and y.
[Figure: a timeline (1500 to 2000) of visual devices and media in three categories (Individual Device, Collective Use, Connectivity), ranging from the camera obscura, telescope, microscope, magic lantern, binocular, and stereoscope, through the panorama, Kaiserpanorama, photography, zoetrope, chronophotography, and cinema, to television, communication satellites, the Internet, Facebook, Twitter, and Instagram.]

Fig. 1 Historical context of the visual devices
Social Media

The photos on social media are often selected and clustered. They can be used as ready-made artifacts transformed into an informational representation. For example, a hashtag can be used to identify a keyword or topic of interest. Subjective experiences, using texts, images, and videos, are posted and recorded on online social networking platforms such as Instagram, Facebook, and Twitter.

Redundant Visual Information

Redundant visual information is likely overwhelming our visual world. It is critical to distinguish the weighted or significant value from the sharable value. In a visual study of fish schools (Strandburg-Peshkin et al., 2013), there is a correlation between visual network transitivity and redundant information: higher network transitivity indicates a greater level of redundant information. The visual network in animal groups is similar to those in social media, from which we will most likely receive redundant visual information.

Material Lightness and Minimal Visual Information

In visual design, vantage points are set in advance, e.g., cameras are set up in desirable positions. There are two steps to follow in order to achieve visual lightness in architectural elements: 1. predefining the vantage points; 2. given the vantage points, designing the elements to reduce the visual information associated with volume, material texture, and object edge (contour).

Photogrammetric Modeling

In the past decade, photogrammetric modeling has been an emerging low-cost technology which reconstructs a 3D model of a given scene or building through computation based on a series of overlapping images. There are existing applications that use this reconstructed 3D information for urban tourism (Snavely, 2006), and it is also used in architecture to record historical buildings. That 3D information includes the coordinates of the target subject and the camera positions. There have been several promising methods for the modeling, such as Google Maps in 3D mode using satellite images.

Subjective Points of View vs. Objective Points of View

Being an architect, I barely know how people perceive buildings: the camera positions where they tend to frame their views, or the features which draw their attention. By computing a series of Instagram photos and reconstructing a set of 3D points and viewpoints, we can now quantify and visualize this otherwise hidden or implicit information and compare those viewpoints with the ones preferred by the architects. As a result, urban photos on social media at large, whose camera positions are situated by the built environment, can be analyzed.

Collective Visual Sensing (Sato, 2016)

The project Collective Visual Sensing aims to understand group attention and activities by analyzing information gathered from multiple wearable devices, such as wearable cameras and eye trackers. The authors describe a novel motion-correlation-based approach to search for and localize target individuals in a collection of first-person point-of-view videos. Due to the large camera motion and pose variation inherent in first-person videos, traditional approaches to person identification, such as face recognition, perform very poorly in many cases. To cope with the problem, they introduced a new approach for person identification without relying on appearance features.

Fig. 2 3D images with Earth View

Fig. 3 Collective Visual Sensing (Sato, 2016) is to understand group attention and activities by analyzing information gathered from multiple wearable devices, such as wearable cameras and eye trackers.

Fig. 4 Three categories of visual devices: Individual Use, Collective Use, Connective Use
Workflow

The steps for processing the data are as follows:

1. Download a large set of images from Instagram associated with a building name as input.
2. Run the VisualSFM program to reconstruct the 3D model from the 2D input images and output an NVM file. Meanwhile, the overlapping images are clustered into different categories.
3. Generate a C# file in Grasshopper to parse the data from the NVM file.
4. The C# file is written to output a JSON file for data visualization and a 3dm file for 3D visualization.
5. The JSON file is used with HTML, JS, and CSS to create an interactive website.
6. Download the satellite map of the particular location for visualization usage.
7. Process the images to identify their characteristics.
8. Analyze the Collective Visual System by incorporating architectural knowledge with the visualization.
9. Interpret the results to intervene in the design of the built environment.

Fig. 5 3D reconstruction model showing Detectable Edges and Vantage Points
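Step 3 of the workflow parses the NVM file that VisualSFM writes. As a minimal sketch of that step, here in JavaScript rather than the C# Grasshopper component used in the thesis (the function name and object shapes are illustrative), assuming the standard NVM_V3 layout: a header, a camera count followed by one record per camera (filename, focal length, rotation quaternion WXYZ, camera center XYZ, radial distortion, a trailing zero), then a point count followed by one record per point (XYZ, RGB, and a list of measurements):

```javascript
// Minimal NVM_V3 parser sketch: extracts Vantage Points (cameras) and
// Subject Points (3D points with their measurement counts).
function parseNvm(text) {
  // NVM is whitespace-delimited after the header, so tokenize the whole file.
  const tokens = text.trim().split(/\s+/);
  let i = 0;
  if (tokens[i++] !== 'NVM_V3') throw new Error('Not an NVM_V3 file');
  const num = () => parseFloat(tokens[i++]);

  const cameras = [];
  const cameraCount = num();
  for (let c = 0; c < cameraCount; c++) {
    const name = tokens[i++];
    const focal = num();
    const q = [num(), num(), num(), num()];   // rotation quaternion WXYZ
    const center = [num(), num(), num()];     // camera center XYZ
    const distortion = num();
    i++; // trailing 0
    cameras.push({ name, focal, q, center, distortion });
  }

  const points = [];
  const pointCount = num();
  for (let p = 0; p < pointCount; p++) {
    const xyz = [num(), num(), num()];
    const rgb = [num(), num(), num()];
    const measurements = num();               // the attention count in the thesis
    i += measurements * 4;                    // skip (image, feature, x, y) tuples
    points.push({ xyz, rgb, measurements });
  }
  return { cameras, points };
}
```

In the actual pipeline the same fields would then be serialized to JSON (step 4) for the website.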
[Figure: work-flow diagram of the analysis: (1) images from social media, (2) 3D reconstruction in computer vision software, (3) X, Y, Z coordinates parsed in Grasshopper, (4) JSON database, (5) data visualization with JS, HTML, and CSS, (6) satellite map, (7) image processing and computer-vision filtering, (8) analysis with architectural knowledge, (9) interpretation, (10) intervening in the built environment.]

Fig. 6 Work-flow of analysis
Fig. 7 3D reconstruction model of the Guggenheim in front view, a collective elevation view. The distribution of Subject Points relates to the visual information density of the subject. The circle size correlates to the measurement count, which is the number of overlapping images: the bigger the circle, the more images are associated with the center point of the circle.
Chapter 2 Defining Collective Visual System

2.1 Overview

The Collective Visual System is defined to understand the distribution of attention points and the points of view in space within a visual condition.

Three Components of the Collective Visual System

The three components of the Collective Visual System are: Objects with Design Intent (attention points), Collective Visual Field (visual condition), and Preferable Vantage Points (points of view).

Isovist vs. Collective Visual System

The isovist, as a representation of a panorama, provides all possible visual information from the environment to the viewers, whereas the Collective Visual System extracts the weighted points of view in architectural space from a given set of images.

The Central Area of the Lingering Garden

The 23,310-square-meter garden is divided into four distinctly themed sections: east, central, west, and north. The central area is the oldest part of the garden, and a unique feature of the garden is the 700-meter covered walk which connects the sections. The ensemble of structures in the central garden encircles a pond and grotto main feature. The garden can be understood as a Collective Visual System. In this system, the rocks and architectural elements can be deemed the Objects with Design Intent: carefully positioned and situated inside the garden, they serve as a series of attention points, with the covered walk as the vantage points and the garden itself as the collective visual field.

Objects with Design Intent

The Harvard GSD logo is a 2D pattern that represents a 3D object: a capital "H". This is the so-called "Necker Illusion". For the 3D object, an isometric view must be defined. In this isometric view, the visual information is reduced because the hidden elements only exist in the 3D imagery. If we redefine the viewpoints, we will have to recover the hidden parts to view it as a 3D object. There are two possible ways to recover the missing parts from the 2D pattern, disregarding the preconception of an "H".

Invariant Occlusion

With its entire body in one's visual field, a knot is an object containing Invariant Occlusion from any vantage point. The Invariant Occlusion eliminates the wholeness of the object and takes away a set of visual information, so it entails the potential that a 3D object can be designed to have less visual information within a Collective Visual Field.

Questions in Relation to the Three Components

Three questions are raised in relation to the three components of the Collective Visual System:
• What are the attention points?
• What are the occlusions?
• Where are the camera positions and directions?

3D Reconstruction Model as Collective Visual System

A cluster of images generates a 3D reconstruction model which consists of Vantage Points and Subject Points. A Vantage Point contains a set of XYZ coordinates and the vector information of a camera. A Subject Point contains a set of XYZ coordinates and a set of color information. The 3D reconstruction model can be deemed a Collective Visual System.

Subject Points as Attention Points

The Subject Points contribute to the visual information density. The size of each circle is associated with the number of overlapping images. Subject Points can be deemed Attention Points, which can also be detected by the primate retina. By examining the Subject Points of each 3D reconstruction model in the system, we can estimate where the Attention Points are distributed and concentrated on the surface.

Website

The implementation of the interactive visualization on the website is to reveal the camera positions and the collective attention, and to represent the major Collective Visual Field of the built environment.

Fig. 8 Visual sensory networks and effective information transfer in animal groups (Strandburg-Peshkin et al., 2013)

Fig. 9 Necker Illusion and two potentials after redefining the Vantage Points
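A Subject Point's measurement count, the number of overlapping images that observe it, is what the circle sizes in the figures encode. The ranking and sizing could be sketched as follows; the function name, the square-root area scaling, and the 20-pixel maximum are illustrative assumptions, not the thesis's exact implementation:

```javascript
// Sketch: rank Subject Points as Attention Points by measurement count
// and derive a display radius for each circle.
function attentionPoints(subjectPoints, topN = 3) {
  const maxCount = Math.max(...subjectPoints.map(p => p.measurements));
  return subjectPoints
    .map(p => ({
      ...p,
      // Area-proportional circles: radius grows with the square root of
      // the measurement count, normalized to an assumed 20px maximum.
      radius: 20 * Math.sqrt(p.measurements / maxCount),
    }))
    .sort((a, b) => b.measurements - a.measurements)
    .slice(0, topN);
}
```

The square-root scaling keeps circle *area*, rather than radius, proportional to attention, which reads more honestly in a plot.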
Fig. 10 Site Plan of Lingering Garden (1593), (留园), sketch by Peng Yigang
Fig. 14 Site Section showing the Architectural Elements for the composition of Attention
Fig. 11 Circulation as the path of Vantage Points
Fig. 12 Distribution of Rocks in the garden
Fig. 13 Distribution of Architectural Elements in the garden
[Figure labels: Path of Points of View in Motion; Point of view 1, Occlusion; Point of view 2, Occlusion 2]
Fig. 15 MArch Thesis in 2005 by Zhiwei Liao: a knot (mathematical) is made of concrete, forming a continuous intertwined loop to generate a visual spectacle (6 feet in diameter) and a spatial experience: walking around the object, there is always an invariant occlusion.
2.2 Isovist
Fig. 16 The camera location and the area that it can capture
Fig. 17 Isovist is used in the field of architecture for analysis of buildings and urban areas, typically as one of a series of methods used in space syntax.
An isovist is the set of all points visible from a given vantage point in space and with respect to an environment. The shape and size of an isovist is liable to change with position. Numerical measures are proposed that quantify some salient size and shape features (Benedikt, 1979). These measures in turn create a set of scalar isovist fields that provide an alternative description of environments.

Visibility graphs were developed to investigate spatial arrangement relationships in architecture, and to describe a configuration with reference to accessibility and visibility (Turner et al., 2001).

Isovist properties were used to numerically capture the visual properties of spatial configurations and to optimize spatial arrangements of two-dimensional elements (building footprints) (Schneider & König, 2012; Koenig et al., 2014).

Based on the idea that the geometry and morphology of the built-up environment influence perception, Fisher-Gewirtzman used Spatial Openness (SO), the volume of open space measured from all possible observation points, as the quality indicator of alternative spatial configurations (Fisher-Gewirtzman, 2003).

Morello & Ratti extend the isovist concept into three dimensions to measure visual perception over urban spaces (Morello & Ratti, 2009). The measurement provided a quantifiable basis for Kevin Lynch's urban analysis in his book The Image of the City (Lynch, 1960).

The visual field has its own form, resulting from the interaction of geometry and environment (Batty, 2001).
Fig. 18  Optimization process. The rows show the content of the archive with the best (Pareto optimal) solutions. The initial layouts are shown in the bottom row, the final ones (after 20 iterations) are shown at the top row. The colors show the area property of the Isovist field. (Koenig et al, 2014)
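The isovist described above can be approximated numerically by casting rays from the vantage point against the environment's wall segments and keeping the nearest hit per ray. A minimal 2D sketch (the names, the ray count, and the segment representation are illustrative):

```javascript
// Ray-segment intersection: solve origin + t*d = a + u*(b - a)
// for t >= 0 and 0 <= u <= 1; return the ray parameter t, or null.
function raySegment(origin, angle, seg) {
  const d = [Math.cos(angle), Math.sin(angle)];
  const [a, b] = seg;
  const e = [b[0] - a[0], b[1] - a[1]];
  const denom = d[0] * e[1] - d[1] * e[0];
  if (Math.abs(denom) < 1e-12) return null; // ray parallel to wall
  const f = [a[0] - origin[0], a[1] - origin[1]];
  const t = (f[0] * e[1] - f[1] * e[0]) / denom;
  const u = (d[0] * f[1] - d[1] * f[0]) / -denom;
  return t >= 0 && u >= 0 && u <= 1 ? t : null;
}

// Minimal 2D isovist: the nearest wall hits, taken over evenly spaced
// ray angles, approximate the boundary of the visible region.
function isovist(origin, walls, rayCount = 360) {
  const polygon = [];
  for (let k = 0; k < rayCount; k++) {
    const angle = (2 * Math.PI * k) / rayCount;
    let nearest = Infinity;
    for (const wall of walls) {
      const t = raySegment(origin, angle, wall);
      if (t !== null && t < nearest) nearest = t;
    }
    if (nearest < Infinity) {
      polygon.push([
        origin[0] + nearest * Math.cos(angle),
        origin[1] + nearest * Math.sin(angle),
      ]);
    }
  }
  return polygon; // ordered boundary points of the visible region
}
```

Benedikt's scalar measures (area, perimeter, and so on) would then be computed over this polygon; evaluating it on a grid of vantage points yields an isovist field.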
2.3 Remote Sensing
Fig. 19 The flight path for the Unmanned Aircraft System
Fig. 20 3d point cloud was built from the photos taken from the Unmanned Aircraft System
The aerial photographs for the photogrammetric modeling are provided by an Unmanned Aircraft System (UAS). This 2015 workshop at the GSD introduced the work-flow from planning the flight path for the UAS to using computer vision software to reconstruct textured polygonal models of a soccer field. The UAS, with an off-the-shelf digital camera, was sent to a zone about 50 meters above the measured field and took a set of sequential overlapping photos. The flight path of the UAS was preprogrammed on a Google map through Mission Planner, a ground station application for the system. The UAS was also programmed to take off and land autonomously, and the pre-programmed mission was uploaded to the drone before launching. The 3D point cloud was built from the photos by using Agisoft PhotoScan, a photogrammetric software, then geocorrected by using Ground Control Points and filtered to build a Digital Terrain Model (DTM) and a Canopy Height Model (CHM).
Fig. 21  A set of sequential overlapping photos taken by the UAS
2.4 Inverted Retina
Fig. 22 The schematic representation of the palantír of Orthanc, used by the wizard Saruman in Peter Jackson’s film adaptation of The Fellowship of the Ring (2001)
A palantír (pl. palantíri) is a fictional magical artifact from J. R. R. Tolkien’s fantasy legendarium. A palantír (sometimes translated as “Seeing Stone” but literally meaning “Farsighted” or “One that Sees from Afar”; cf. English television) is a crystal ball, used for both communication and as a means of seeing events in other parts of the world.
[Figure labels: eye tracker sensor as input device; core: sensors to measure time, location, motion, and light; volumetric display to represent a 3D object]

Fig. 23 A frameless electronic display to show a 3D world
The Palantír here is a fictional handheld electronic device to replace the flat-screen smartphone, with an advanced mobile computer operating system and machine learning features useful for urban life in the 22nd century. The innovative geometry provides a spherical frameless screen, which is the "inverted retina". The side iso-touchscreen avoids the interference of the fingers between the eyes and the screen found in conventional smartphones.
Chapter 3 Case Studies

3.1 Overview

Eight architecture case studies were conducted to test the Collective Visual System and to explicitly address the visual conditions associated with social media: the Parthenon, the Pantheon, Piazza Del Campidoglio, the Sagrada Familia, Villa Savoye, Notre Dame du Haut, the Guggenheim Museum Bilbao, and the China Central Television (CCTV) Headquarters.

Parthenon - Built in 447 BC, the Parthenon is regarded as one of the world's greatest cultural monuments. What are the visual orders that we can capture from this symbolic building with architectural order?

Pantheon - Built in 118-128 AD, the Pantheon's oculus in the authentic dome makes visible the movement of time. The interior space, full of perfect geometrical shapes, fills the field of vision.

Piazza Del Campidoglio - Built in 1538-1650, the piazza was designed by Michelangelo. The perspectival device is calibrated divergently toward the intimate facade and convergently in the direction of the guided attention.

Sagrada Familia - Groundbreaking in 1882, the church was designed by Antoni Gaudí. The three grand facades and the vaults of the nave are unique Gaudí designs, full of critical visual elements.

Villa Savoye - Built in 1931, the house was designed by Le Corbusier as a second residence and, sited as it was outside Paris, was designed with the car in mind. The sense of mobility that the car gave translated into a feeling of movement that is integral to the understanding of the building.

Notre Dame du Haut - Built in 1954, the building designed by Le Corbusier is a monumental architecture. The sacred nature of the space is shaped by both the exterior and the interior.

Guggenheim Museum Bilbao - Built in 1997, the iconic building was designed by Frank Gehry. The image of the "Bilbao effect" transforms the city. How do the viewers frame the fragmentized surfaces of the constructed building?

China Central Television (CCTV) Headquarters - Built in 2012, the looped tower designed by OMA is one of the most controversial buildings in China, with its iconic form and multiple appearances.

Comparing the Pie Charts

A pie chart is generated for each case study based on the percentage computed from the number of vantage points in the Collective Visual Systems. The comparison of the pie charts indicates the tendency toward simplicity or fragmentation of the architectural characteristics: for example, the sequence runs from the Guggenheim with 22 clusters on the far left to the Pantheon with 2 on the far right.

Interpreting the Subject Points: Horizon and Shape

The Parthenon provides a large portion of
[Figure: the eight case-study buildings placed on a timeline from 400 BC to 2000: 1 Parthenon, 2 Pantheon, 3 Piazza del Campidoglio, 4 Sagrada Familia, 5 Villa Savoye, 6 Notre Dame du Haut, 7 Guggenheim Museum Bilbao, 8 China Central Television (CCTV) Headquarters, each panel captioned with the short description given in the overview above.]
visual information on the Frieze, and the Pantheon on the pediment.

One-Point Perspective vs. Two Vanishing Points

From the observation of the mapping of the dominant Collective Visual System (CVS), there are four cases whose CVS has a perpendicular camera direction in relation to the building facade; these are likely one-point perspectives. Those cases are the Parthenon, the Pantheon, Piazza Del Campidoglio, and the Sagrada Familia. On the other hand, there are three cases whose major CVS has a set of diagonal camera directions producing a two-vanishing-point perspective: Villa Savoye, Ronchamp, and CCTV. This difference entails the characteristics of the design intent and architectural style in terms of visual perception.

Fragmentation of Bilbao and the Lingering Garden

Only 3 of the 22 clusters in the Guggenheim Bilbao project, all small, frame the entire building in their images. The other 19 clusters mainly frame the sculptures and artwork of the building in the foreground, with the fragmented façades of the building in the background. In the three major CVS clusters mapped on the satellite image, the sculptures in the foreground are the major Attention Points: Anish Kapoor's Tall Tree & the Eye, Louise Bourgeois's Maman, and Jeff Koons' Puppy. Echoing the spatial arrangement of the rocks and the architectural elements in the Lingering Garden, the Guggenheim Bilbao's sculptures dominantly catch the attention.

The Monumentality of Ronchamp

The thick concrete roof and the penetrations in the wall are the detectable features. The monumentality of the roof is interpreted by the image-identifier algorithm as a house. Because the scale is not taken into account, the results from the computer vision techniques are not accurate.

The Void of the CCTV Headquarters

Similar to the oculus of the Pantheon, the space in between the two towers of the CCTV Headquarters is in the central focus yet is not detected in the 3D Subject Points. The mullions of the curtain wall and the reflections of the adjacent buildings provide the scale-invariant features which are highly detected. The top two CVS are from the diagonal corner and in front of the south façade, which create the "O" and "Γ" imagery respectively. A histogram and a heat map are used to illustrate the distribution densities of the vantage points of the two CVS. The visual locus is linked to the symbol of the appearance of the buildings.

The curated photos shown on OMA's website appear to be more dynamic, and are hardly found in the CVS clusters. Interestingly, the monumentality of the Gateway Arch imagery is recognized by the image-identifier algorithm; in particular, the Azadi Tower (1971) is picked up by one result.
3.2  Instagram
Fig. 24  The 2500 raw images from social media are sorted according to their median intensity, compressed in 2x2 pixels and put into a Manhattan grid.
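The sorting described in the Fig. 24 caption can be sketched as follows; the images are assumed to be flat arrays of 0-255 grayscale values, and the object shapes and function names are illustrative:

```javascript
// Median grayscale intensity of one image's pixel array.
function medianIntensity(pixels) {
  const sorted = [...pixels].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Sort thumbnails by median intensity and assign each a cell in a
// square ("Manhattan") grid, darkest first.
function layoutByMedian(images) {
  const cols = Math.ceil(Math.sqrt(images.length)); // e.g. 50 for 2500 images
  return images
    .map(img => ({ ...img, median: medianIntensity(img.pixels) }))
    .sort((a, b) => a.median - b.median)
    .map((img, k) => ({ ...img, col: k % cols, row: Math.floor(k / cols) }));
}
```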
3.3  3D Reconstruction
Parthenon 43 cameras, 2223 points on real surface
Pantheon 57 cameras, 8813 points on real surface
Campidoglio 57 cameras, 3793 points on real surface
Sagrada Familia 60 cameras, 10939 points on real surface
Villa Savoye 257 cameras, 4833 points on real surface
Ronchamp 48 cameras, 3089 points on real surface
Guggenheim Bilbao 122 cameras, 13054 points on real surface
CCTV Headquarters 86 cameras, 5440 points on real surface
Fig. 25  The 3D reconstruction of the 8 landmarks
Note: Cameras in red, points on real surface with texture color
3.4  Filtering Attention
Parthenon 249 Test Images 4 Clusters
Pantheon 189 Test Images 2 Clusters
Campidoglio 225 Test Images 6 Clusters
Sagrada Familia 225 Test Images 6 Clusters
Villa Savoye 2025 Test Images 9 Clusters
Ronchamp 237 Test Images 3 Clusters
Guggenheim Bilbao 1353 Test Images 22 Clusters
CCTV Headquarters 245 Test Images 3 Clusters
Fig. 26  The pie-charts of the 8 buildings demonstrate the percentage of the view clusters.
Feature Matching

The tested photos are divided into different clusters based on their matching features. This process is achieved by using image matching in VisualSFM (Wu, 2013). Therefore, the images within a cluster are similar in terms of content, angle of view, and camera position.
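One simple way to derive such view clusters from pairwise feature matches is to treat the images as nodes of a graph, matched pairs as edges, and take connected components. A sketch with a hypothetical match-list input (the threshold, names, and data shapes are illustrative assumptions, not VisualSFM's internal method):

```javascript
// Sketch: cluster images into view groups from pairwise feature matches.
// `matches` is a list of [imageA, imageB, matchCount]; pairs with at least
// `minMatches` shared features are merged into the same cluster (union-find).
function clusterImages(imageIds, matches, minMatches = 16) {
  const parent = new Map(imageIds.map(id => [id, id]));
  const find = id => {
    while (parent.get(id) !== id) id = parent.get(id);
    return id;
  };
  for (const [a, b, count] of matches) {
    if (count >= minMatches) parent.set(find(a), find(b)); // union
  }
  const clusters = new Map();
  for (const id of imageIds) {
    const root = find(id);
    if (!clusters.has(root)) clusters.set(root, []);
    clusters.get(root).push(id);
  }
  return [...clusters.values()];
}
```

The cluster sizes then give the percentages shown in the pie charts of Fig. 26.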
Visual Fragmentation to Visual Simplicity (buildings ranked by number of view clusters):

Guggenheim Bilbao: 1353 test images, 22 clusters
Villa Savoye: 2025 test images, 9 clusters
Sagrada Familia: 225 test images, 6 clusters
Campidoglio: 225 test images, 6 clusters
Parthenon: 249 test images, 4 clusters
Ronchamp: 237 test images, 3 clusters
CCTV Headquarters: 245 test images, 3 clusters
Pantheon: 189 test images, 2 clusters
3.5  Website: https://zhiweiliao.github.io/Viz
Parthenon 43 cameras, 2223 points on real surface
Pantheon 57 cameras, 8813 points on real surface
Campidoglio 57 cameras, 3793 points on real surface
Sagrada Familia 60 cameras, 10939 points on real surface
Villa Savoye 257 cameras, 4833 points on real surface
Ronchamp 48 cameras, 3089 points on real surface
Guggenheim Bilbao 122 cameras, 13054 points on real surface
CCTV Headquarters 86 cameras, 5440 points on real surface
Fig. 27  The 3D reconstruction cameras are shown in the satellite image, the 3d points on real surface are shown on the elevation
The Satellite Plans

The Collective Visual System becomes the lens to investigate the following aspects:
1. The correlation between the camera positions and the photos
2. Data filtering
3. 2D representation
Comparing the Pie Charts. The pie chart is generated for each case study based on the percentage computed from the number of vantage points in the Collective Visual System. The comparison of the pie charts indicates the tendency toward simplicity or fragmentation in the architectural characteristics. For example, the sequence places the Guggenheim, with 22 clusters, on the far left, whereas the Pantheon, with 2, sits on the far right.
[Website interface annotations]
Satellite Image — camera locations; hovering activates the related image; event handler to filter data
Instavist Image — the photo moment
The 3D Points on the Real Surface — circle size relates to the attention count; the activated point shows its attention index; event handler to filter data according to attention index
Fig. 28  Each navigation window has an event handler underneath to filter the data; the colored dots, each associated with a related image, can be triggered interactively by hovering the mouse.
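The filtering behind these windows can be sketched outside the browser as well. A minimal Python example of filtering subject points by attention index, using the X/Y/Z/N field names written by the JSON exporter in Appendix A; the sample points are made up for illustration:

```python
import json

# Filter subject points by attention index, mirroring the website's event handler.
# Field names X/Y/Z/N follow the JSON exporter in Appendix A; the sample points
# below are illustrative, not the reconstructed data.
def filter_by_attention(points, min_count):
    return [p for p in points if p["N"] >= min_count]

points = json.loads('[{"X": 1.0, "Y": 2.0, "Z": 0.5, "N": 12},'
                    ' {"X": 0.2, "Y": 1.1, "Z": 0.9, "N": 3}]')
print(filter_by_attention(points, 5))
```

On the website the same predicate runs inside a D3 event handler; here it is expressed as a plain list comprehension over the exported JSON.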
3.6 Case 1 - The order of Parthenon
Fig. 29 The 3D reconstruction diagram of the Parthenon. The circle size refers to viewers’ attention: the number of matching photos.
Le Corbusier visited the Acropolis in Athens in 1911. He did not repeat or seek to precisely study the orders, the temple form. Instead, he produced a set of sketches which vividly evoke the sequential experience of the ascent of the Acropolis (Anderson, 1984). As Stanford Anderson observed, referring to the sketches: “We hold no vantage point from which we may possess the building objectively. And if we did possess such a vantage point, this drawing tells us we would be missing something else - experience itself and the knowledge which comes only through such experience. ... At a conceptual level, Le Corbusier is concerned with how we correlate experience and knowledge. ... This insistence on experience is more forceful when made in the presence of a work for which we have previously instilled modes of appropriation.” (Anderson, 1984). The sketches reveal Le Corbusier’s observation of the Parthenon. To some extent, if such knowledge and experience are not accessible to the public eye, can they be transferred in the context of social media? Or should architects retain such a seemingly unnoticeable repertoire? Or develop a new set of noticeable repertoires based on the collective vision? If the sketch is a representational tool to record and transfer the experience of a designer, can the collective vision match the level of the master’s vision? Or substitute for it? The Collective Visual Field is herein juxtaposed
Fig. 31 Sketch from Toward an Architecture, 1923, Le Corbusier
Fig. 30 Image from Toward an Architecture, 1923, with reproductions of photographs by Frederic Boissonnas taken from Le Parthenon
with the personal view curated by Le Corbusier to reveal the public preference, which primarily concerns framing in photography: the composition of foreground and background, and the ephemeral nature of architecture as presented in the images in the cloud, as opposed to the permanence of the buildings themselves and their suggested orders.
Fig. 32 The distribution of viewpoints
Parthenon 249 Test Images 4 Clusters
Cluster   Count   Percentage   Subject
I         43      17%          Temple portico front
II        10      4%           45 degree view
III       3       1%           Horizon
IV        3       1%           Close corner
(none)    190     76%          Non-cluster-able
Fig. 33  Pie chart in clusters
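The percentage column follows directly from the cluster counts and the 249 test images. A quick Python check, with the counts taken from the Parthenon cluster table:

```python
# Recompute the Parthenon cluster percentages from the counts (249 test images).
counts = {"I": 43, "II": 10, "III": 3, "IV": 3, "non-cluster-able": 190}
total = sum(counts.values())
percent = {k: round(100 * v / total) for k, v in counts.items()}
print(total, percent)
```

The same arithmetic yields the percentage columns of the other seven case studies.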
Fig. 34  Collage of Photos in clusters (Types I–IV)
3.7 Case 2 - Pantheon
Fig. 35 Front View of 3d Reconstruction Model
Fig. 36 The interior space full of perfect geometrical shapes fills the field of vision.
Fig. 37 The interior space full of perfect geometrical shapes fills the field of vision.
Fig. 38 Portico Front, the dense holes left on the pediment are for hanging the relief sculpture.
Pantheon 189 Test Images 2 Clusters
Cluster   Count   Percentage   Subject
I         56      30%          Temple portico front
II        37      19%          Rotunda with oculus
(none)    96      51%          Non-cluster-able
Fig. 39  Pie chart in clusters
Fig. 40  Collage of Photos in clusters (Types I–II)
3.8 Case 3 - Campidoglio
Fig. 41 Front View of 3d Reconstruction Model
Fig. 42  Sketches by Michelangelo
Fig. 43  Google Maps 3D photogrammetric model
Fig. 44  Mapping of the Collective Vantage Points
Campidoglio 225 Test Images 6 Clusters
Cluster   Count   Percentage   Subject
I         55      24%          Façade of Palazzo Senatorio
II        7       3%           Statue of Neptune
III       7       3%           3 Arches at the corner
IV        7       3%           Roma Barrio Judío
V         4       2%           Facade of Palazzo Nuovo
VI        3       1%           Drinking Fountain
(none)    142     63%          Non-cluster-able
Fig. 45  Collage of Photos in clusters (Types I–VI)
3.9  Case 4 - Sagrada Familia
Fig. 46  Mapping of Collective Vantage Points in Yellow, Subject in Green
Sagrada Familia 225 Test Images 6 Clusters
Cluster   Count   Percentage   Subject
I         60      27%          Nativity facade
II        17      8%           Passion facade
III       7       3%           The roof in the nave
IV        6       3%           Columns & Church Windows
V         4       2%           Looking to Nativity facade from north
VI        3       1%           Lit side aisle
(none)    128     57%          Non-cluster-able
Fig. 47  Collage of Photos of all clusters (Types I–VI)
3.10  Case 5 - Villa Savoye
Fig. 48  A perspective sketch by Le Corbusier
Cluster II: 129 cameras
Cluster VII: 2 cameras
Cluster I: 257 cameras
Fig. 49  The three camera clusters are shown with view angles overlaying on the satellite image
Villa Savoye 2025 Test Images 9 Clusters
Test Images: 2025    View Clusters: 9

Cluster   Count   Percentage   Subject
I         257     12.7%        North west corner
II        129     6.4%         South east corner
III       46      2.3%         Master bath, toward door
IV        37      1.8%         Master bath, toward window
V         15      0.7%         Roof garden
VI        11      0.5%         Bath sink under skylight
VII       3       0.1%         West elevation
VIII      3       0.1%         Bench
IX        3       0.1%         Spiral stair
(none)    1521    75.1%        Non-clusterable
Fig. 50  Collage of Photos of all clusters (Types I–IX)
Fig. 51  The non-cluster-able images make up 75% of the set
Fig. 52 The clustered images from social media, showing the collective vision
Entropy
The concept of “entropy” was first used by physicists as a thermodynamic parameter to measure the degree of “disorder” or “chaos” in a thermodynamic or molecular system. In a statistical sense, we can view this as a measure of degree of “surprise” or “uncertainty.”
In an intuitive sense, it is reasonable to assume that the appearance of a less probable event (symbol) gives us more surprise, and hence we expect that it might carry more information. On the contrary, the more probable event (symbol) will carry less information because it was more expected.
Fig. 53 Images sorted by entropy value from 3.4 (top) to 9.5 (bottom); the maximum, median, and minimum values are labeled
Image entropy is a quantity which is used to describe the ‘busyness’ of an image. Low entropy images, such as those containing a lot of black sky, have very little contrast and large runs of pixels with the same or similar DN values. An image that is perfectly flat will have an entropy of zero. Consequently, they can be compressed to a relatively small size. On the other hand, high entropy images, such as an image of heavily cratered areas on the moon, have a great deal of contrast from one pixel to the next and consequently cannot be compressed as much as low entropy images.
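The per-image entropy values can be computed from the gray-level histogram alone. A minimal Python sketch; the sample "images" here are flat lists of gray levels invented for illustration, not the thesis photos:

```python
import math
from collections import Counter

# Shannon entropy of an image's gray-level histogram, in bits per pixel:
# a perfectly flat image scores 0, busier images score higher.
def image_entropy(pixels):
    total = len(pixels)
    return sum((n / total) * math.log2(total / n)
               for n in Counter(pixels).values())

flat = [128] * 64       # one gray level: entropy 0
busy = list(range(64))  # 64 equally likely gray levels: entropy 6 bits
print(image_entropy(flat), image_entropy(busy))
```

This histogram-based measure ignores pixel arrangement; two images with the same gray-level distribution score identically however the pixels are shuffled.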
Entropy Value = 4.21
Entropy Value = 8.89
Entropy Value = 11.18
Entropy Value = 4.15
Entropy Value = 8.88
Entropy Value = 10.17
Fig. 54  Images with min, median and max entropy values, arranged by similarity of view angle
Fig. 55  Bar chart showing the entropy value of images
3.11  Case 6 - Ronchamp
Fig. 56  The images of cluster I downloaded from social media showing the collective vision
Ronchamp 237 Test Images 3 Clusters
Cluster   Count   Percentage   Subject
I         48      20%          45 degree exterior
II        17      7%           Interior Wall Penetration
III       5       2%           Clerestories
(none)    167     70%          Non-cluster-able
Fig. 57  Collage of Photos of all clusters (Types I–III)
Fig. 58  Collage of Type I
Fig. 59  Image Identifier
Fig. 60  Collage of Type II
Fig. 61  Image Identifier
3.12  Case 7 - Guggenheim, Bilbao
Fig. 62  The images of cluster I downloaded from social media showing the collective vision
Guggenheim Bilbao 1353 Test Images 22 Clusters
Cluster   Count   Percentage   Subject
I         122     9%           Sculpture & Exterior Facade
II        81      6%           Sculpture & Exterior Facade
III       58      4%           Sculpture & Exterior Facade
IV        24      2%           Interior Sculpture
V         23      2%           Sculpture & Bridge
VI        18      1.3%         Sculpture
VII       16      1.2%         Sculpture
VIII      14      1.0%         Building Overview
IX        11      0.8%         Interior window view
X         11      0.8%         Sculpture & Exterior Facade
XI        9       0.7%         Building Overview
XII       8       0.6%         Sculpture
XIII      6       0.4%         Painting
XIV       5       0.4%         Art studio
XV        5       0.4%         Exterior view sculpture
XVI       4       0.3%         Sculpture
XVII      4       0.3%         Sculpture
XVIII     4       0.3%         Sculpture
XIX       3       0.2%         Sculpture
XX        3       0.2%         Building Overview
XXI       3       0.2%         Sculpture
XXII      3       0.2%         Painting
(none)    918     68%          Non-cluster-able
Fig. 63  Collage of Photos of all clusters (Types I–XXII)
Fig. 64  Fragmented spatial experience of the Lingering Garden, similar to the exterior setting of the Guggenheim Museum
Fig. 65  Fragmented spatial experience from the placement of the sculptures, similar to the Lingering Garden
3.13  Case 8 - CCTV Headquarters
Fig. 66  The images of cluster I downloaded from social media showing the collective vision
CCTV Headquarters 245 Test Images 3 Clusters
Cluster   Count   Percentage   Subject
I         48      20%          Exterior Facade
II        41      17%          Exterior Facade
III       9       4%           Exterior Facade at night
(none)    147     60%          Non-cluster-able
Fig. 67  Collage of Photos of all clusters (Types I–III)
Fig. 68  Schematic representation of Type I
Fig. 69  Schematic representation of Type II
Fig. 70  The 81 images in the same cluster are identified via Wolfram Mathematica, a symbolic computation program, to represent what each picture depicts. Although the images are similar, only the first category is shown; the y-axis represents the percentage (top), and the 10 categories are shown in percentiles (bottom).
Fig. 71  The heat-map (top) and the tower comparison based on the computer-vision results (bottom), comparing the two towers with the one found by the image identifier.
Fig. 72  The image identifier recognized the 48th image in the collage as the Azadi Tower (left). Here is the comparison.
Fig. 73 (Top) The camera scatter diagram. (Bottom) The 96-“pixel” density histogram shows the density distribution of the cameras.
Fig. 74 The 24-“pixel” (top) and 48-“pixel” (bottom) density histograms show the density distribution of the cameras. The highest camera density (in red) is located approximately on the diagonal axis of the building.
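The "pixel" density histograms can be approximated by coarse 2D binning of the reconstructed camera positions. A Python sketch; the sample coordinates and the 10-unit cell size are illustrative assumptions, not the reconstructed CCTV cameras:

```python
from collections import Counter

# Bin camera (x, y) positions into a coarse grid of "pixels" and find the
# densest cell, as in the density histograms. Positions and cell size are
# made up for illustration.
def density_grid(positions, cell=10.0):
    return Counter((int(x // cell), int(y // cell)) for x, y in positions)

cams = [(3, 4), (7, 2), (8, 9), (14, 4), (2, 8)]
grid = density_grid(cams)
print(grid.most_common(1)[0])
```

Varying the cell size reproduces the 24-, 48-, and 96-"pixel" resolutions: coarser cells smooth the distribution, finer cells isolate the hot spots.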
Fig. 75  Highlighting the Void in red on the collage of Type I
Fig. 76  Highlighting the Void in red on the collage of OMA curated images
Chapter 4  Conclusion

a. This research defines the Collective Visual System and provides a method to understand the Collective Visual Field by 3D reconstruction of the Vantage Points and the points on the real surface.

b. The Collective Visual System relies on quantifiable data to generate useful information for spatial analysis.

c. The analysis of the Collective Visual System should be able to provide visually effective design guidelines for the built environment.

d. Intensifying Social Imagery – a dominant image cluster may exist. The images in this cluster represent a dominant imagery of the architecture.

e. The mapping of the Vantage Points, the collective viewers’ locations, provides a set of visual information to analyze the site context and environmental condition of the existing landmark buildings. The visual information can be developed to guide architects in designing architectural components and in manipulating form, material, texture and color.

f. The buildings can be categorized by whether they are viewed symmetrically or asymmetrically, regardless of their original design. For example, Villa Savoye, Ronchamp, and the CCTV Headquarters are designed as asymmetrical forms, yet their distribution scatters show an approximately symmetrical pattern. The preferred perceptions suggest a tendency: people are constantly in search of balanced imagery in the frame of their photos.

g. In the early architecture cases, the triangular shape on the facade is quite obvious to computer vision due to the contour manipulation by the designer: first, the contrast from the background; second, the shadow cast to highlight the horizontal line.

h. The advent of photography in the 19th century changed the way people visually consume architecture. The visual characteristics of modern architecture exemplify this technological influence. The emerging techniques shifted our visual perception in style: spatial experience in two and three dimensions, transparency and massiveness, smoothness and roughness, explicitness and implicitness, the picturesque and fragmentation. Visual preference becomes the design inquiry in the highly connected and normalized digital environment.
Fig. 77 A Turrell Projection is created by projecting a single, controlled beam of light from the opposing corner of the room to show the correlation between light, space and perception and the importance of situated embodiment.
Fig. 78 An illustration by Marie-Laure Cruschi showing the 2 dimensional shape for delightful visual consumption
Fig. 80 Felice Varini paints on architectural and urban spaces, such as buildings, walls and streets. The paintings are characterized by one vantage point from which the viewer can see the complete painting (usually a simple geometric shape such as circle, square, line), while from other view points the viewer will see ‘broken’ fragmented shapes. Varini argues that the work exists as a whole - with its complete shape as well as the fragments. “My concern,” he says “is what happens outside the vantage point of view.”
Fig. 79 The wrapping artists Christo Vladimirov Javacheff and Jeanne-Claude hid the detachable features of the building to trigger the invisible attention
List of Figures Fig. 1 Historical context of the visual devices Fig. 4 3D images with Earth View Fig. 2 Collective Visual Sensing (Sato ,2016) is to understand group attention and activities by analyzing information gathered from multiple wearable devices, such as wearable cameras and eye trackers. Fig. 3 3 categories of Visual Devices Fig. 5 3d reconstruction model showing Detectable edges & Vantage Points Fig. 6 Work-flow of analysis Fig. 7 3d reconstruction model of Guggenheim in front view, a collective elevation view. The distributions of Subject Points are related to the visual information density of the subject. The circle size correlates to the measurement count which is the number of overlapping images. The bigger the circle is, the more images are associated with the center point of the circle. Fig. 9 Necker Illusion and 2 potentials after redefining the Vantage Points Fig. 8 Visual sensory networks and effective information transfer in animal groups, (Ariana Strandburg-Peshkin et al, 2013) Fig. 10 Site Plan of Lingering Garden (1593), (留园), sketch by Peng Yigang Fig. 14 Site Section showing the Architectural Elements for the composition of Attention Fig. 11 Ciculation as the path of Vantage Points Fig. 12 Distribution of Rocks in the garden Fig. 13 Distribution of Architectural Elements in the garden Fig. 15 MArch Thesis in 2005 by Zhiwei Liao, a knot (mathematic) is made of concrete forming a continuous intertwined loop to generate visual spectacle (6 feet in diameter) and spatial experience: walk around the object in which there is always a invariant occlusion. Fig. 16 The camera location and the area that it can capture and its Fig. 17 Isovist is used in the field of architecture for analysis of buildings and urban areas, typically as one of a series of methods used in space syntax. Fig. 18 Optimization process. The rows show the content of the archive with the best (Pareto optimal) solutions. 
The initial layouts are shown in the bottom row, the final ones (after 20 iterations) are shown at the top row. The colors show the area property of the Isovist field. (Koenig et al, 2014) Fig. 19 The flight path for the Unmanned Aircraft System Fig. 20 3d point cloud was built from the photos taken from the Unmanned Aircraft System Fig. 21 A set of sequential overlapping photos taken by the UAS Fig. 22 The schematic representation of the palantír of Orthanc, used by the wizard Saruman in Peter Jackson’s film adaptation of The Fellowship of the Ring (2001) Fig. 23 A Frameless electronic display to show a 3D world Fig. 24 The 2500 raw images from social media are sorted according to their median intensity, compressed in 2x2 pixels and put into a Manhattan grid. 86
Fig. 25 The 3D reconstruction of the 8 landmarks Fig. 26 The pie-charts of the 8 buildings demonstrate the percentage of the view clusters. Fig. 27 The 3D reconstruction cameras are shown in the satellite image, the 3d points on real surface are shown on the elevation Fig. 28 Each navigation window has a event handler underneath to filter the data, the dots in colors associated with the related image can be interactively triggered by the hovering mouse. Fig. 29 The 3D Reconstruction diagram of the Parthenon. The circle size refer to viewers’ attention: the number of matching photos. Fig. 31 Sketch from Toward an Architecture, 1923, Le Corbusier Fig. 30 Image from Toward an Architecture, 1923, with reproductions of photographs by Frederic Boissonnas taken from Le Parthenon Fig. 32 The distribution of viewpoints Fig. 33 Pie chart in clusters Fig. 34 Collage of Photos in clusters Fig. 35 Front View of 3d Reconstruction Model Fig. 36 The interior space full of perfect geometrical shapes fills the field of vision. Fig. 37 The interior space full of perfect geometrical shapes fills the field of vision. Fig. 38 Portico Front, the dense holes left on the pediment are for hanging the relief sculpture. Fig. 40 Pie chart in clusters Fig. 41 Collage of Photos in clusters Fig. 42 Front View of 3d Reconstruction Model Fig. 43 Sketches by Michelangelo Fig. 44 Google Maps 3d photogremmetric Model Fig. 45 Mapping of the Collective Vantage Points Fig. 46 Collage of Photos in clusters Fig. 47 Mapping of Collective Vantage Points in Yellow, Subject in Green Fig. 48 Collage of Photos of all clusters Fig. 49 A perspective sketch by Le Corbusier Fig. 50 The three camera clusters are shown with view angles overlaying on the satellite image Fig. 51 Collage of Photos of all clusters Fig. 52 The images of non-cluster-able is 75% Fig. 53 The images of cluster from social media showing the collective vision Fig. 54 Images Sorted by Entropy value from 3.4 (top) to 9.5 (bottom) Fig. 
55 Images with Min, Median and Max Entropy Value, arranged per similarity of View angle Fig. 56 Bar chart showing the entropy value of images Fig. 57 The images of cluster I downloaded from social media showing the collective vision Fig. 58 Collage of Photos of all clusters Fig. 59 Collage of Type I 87
Fig. 60 Image Identifier Fig. 61 Collage of Type II Fig. 62 Image Identifier Fig. 63 The images of cluster I downloaded from social media showing the collective vision Fig. 64 Collage of Photos of all clusters Fig. 65 Fragmented Spatial Experience of Lingering Garden similar to the exterior setting of the Guggenheim Museum Fig. 66 Fragmented Spatial Experience from the placement of the Sculptures similar to the Lingering Garden Fig. 67 The images of cluster I downloaded from social media showing the collective vision Fig. 68 Collage of Photos of all clusters Fig. 69 Schematic representation of Type I Fig. 70 Schematic representation of Type II Fig. 71 The 81 images in the same cluster are identified via Wolfram Mathematica, a symbolic computation program, to represent what each picture depicts; although those images are similar, the first category is shown; the y-axis represents the percentage (top), the 10 categories are shown in percentile (bottom) Fig. 72 The heat-map (top) and the tower comparison based on the results from computer vision (bottom) showing comparison of two towers with one from the image identifier. Fig. 73 The image identifier recognized the 48th image in the collage as the Azadi Tower (left). Here is the comparison. Fig. 74 (Top) The cameras scatter diagram. (Bottom) The 96 “pixels” density histograms show the density distribution of the cameras. Fig. 75 The 24 “pixels” (top) and the 48 “pixels” (bottom) density histograms show the density distribution of the cameras. The highest camera density (in red) is located approximately on the diagonal axis of the building. Fig. 76 Highlighting the Void in red on the collage of Type I Fig. 77 Highlighting the Void in red on the collage of OMA curated images Fig. 78 A Turrell Projection is created by projecting a single, controlled beam of light from the opposing corner of the room to show the correlation between light, space and perception and the importance of situated embodiment. Fig.
81 Felice Varini paints on architectural and urban spaces, such as buildings, walls and streets. The paintings are characterized by one vantage point from which the viewer can see the complete painting (usually a simple geometric shape such as circle, square, line), while from other view points the viewer will see ‘broken’ fragmented shapes. Varini argues that the work exists as a whole - with its complete shape as well as the fragments. “My concern,” he says “is what happens outside the vantage point of view.” Fig. 79 An illustration by Marie-Laure Cruschi showing the 2 dimensional shape for delightful 88
visual consumption Fig. 80  The wrapping artists Christo Vladimirov Javacheff and Jeanne-Claude hided the detachable features from the building to trigger the invisible attention
Bibliography LE CORBUSIER AT THE PARTHENON. 2015. WASHINGTON, D.C.: . ANDERSON, S., 1987. The Fiction of Function. Assemblage, (2), pp. 19-31. ANDERSON, S., 1984. Architectural research programmes in the work of Le Corbusier. Design Studies, 5(3), pp. 151-158. AYMONINO, C., 2005. Museum Space at the Campodiglio Museum; Piazza del Campidoglio overview. BATTY, M., 2001. Exploring Isovist Fields: Space and Shape in Architectural and Urban Morphology. Environment and Planning B: Planning and Design, 28(1), pp. 123-150. BENEDIKT, M., 1979. To take hold of space: isovists and isovist fields. Environment and Planning.B, 6(1), pp. 47. BREWSTER, D., 1856. The stereoscope : its history, theory, and construction : with its application to the fine and useful arts and to education. London: . COLOMINA, B., 1987. Le Corbusier and Photography. Assemblage, (4), pp. 7-23. CONROY - DALTON, R. and BAFNA, S., 2003. The syntactical image of the city:a reciprocal definition of spatial elements and spatial syntaxes. CORBUSIER, L., 2007. Toward an architecture. Los Angeles, Calif.: . CRARY, J., 1999. Suspensions of perception : attention, spectacle, and modern culture. Cambridge, Mass.: MIT Press. DE FLORIANI, L., MARZANO, P. and PUPPO, E., 1994. Line-of-sight communication on terrain models. International Journal of Geographical Information Systems, 8(4), pp. 329-342. DEUTSCH, R., 2015. Data- Driven Design and Construction: 25 Strategies for Capturing, Analyzing and Applying Building Data. FARINELLA, G.M., 2013. Advanced Topics in Computer Vision. FOSTER, H. and DIA, A.F., 1988. Vision and visuality. Seattle: Bay Press. III, A.E.S., 2005. Isovists, enclosure, and permeability theory. Environment and Planning B: Planning and Design, 32(5), pp. 735-762. KOENIG, R., STANDFEST, M. and SCHMITT, G., 2014. Evolutionary multi-criteria optimization for building layout planning-Exemplary application based on the PSSA framework. 
LE CORBUSIER, 1887-1965, FRENCH [ARCHITECT] and JEANNERET, PIERRE, 1896-1965,FRENCH [ARCHITECT], 1928. Villa Savoye. Sketches, Poissy, France. LOWE, D.G., 1999. Object recognition from local scale- invariant features. MARR, D., 1982. Vision : a computational investigation into the human representation and processing of visual information / David Marr. San Francisco: . MICHELANGELO, 1., I., 1538. Piazza del Campidoglio, Campidoglio, Rome, Italy. MURRAY, S.(.C., 2013. Interactive data visualization for the web. Sebastopol, CA: . NAGAKURA, T., TSAI, D. and CHOI, J., 2015. Capturing History Bit by Bit. 90
PICON, A., 2010. Digital culture in architecture : an introduction for the design professions. Basel: Birkhäuser : Springer Verlag]. SATO, Y., 2015. Analyzing human attention and behavior via collective visual sensing for the creation of life innovation. SNAVELY, N., SEITZ, S.M. and SZELISKI, R., 2008. Modeling the World from Internet Photo Collections. International Journal of Computer Vision, 80(2), pp. 189-210. STRANDBURG-PESHKIN, A., TWOMEY, C.R., BODE, N.W.F., KAO, A.B., KATZ, Y., IOANNOU, C.C., ROSENTHAL, S.B., TORNEY, C.J., WU, H.S., LEVIN, S.A. and COUZIN, I.D., 2013. Visual sensory networks and effective information transfer in animal groups. WU, C., 2013. Towards Linear-Time Incremental Structure from Motion. WU, C., AGARWAL, S., CURLESS, B. and SEITZ, S.M., 2011. Multicore bundle adjustment. 廖智威 ZHIWEI, L., 2013. KNOT MAKING. 城市建筑, (19), pp. 38-40.
Appendix
Appendix A  Parsing Data
C#
Note: This code parses the 3D information in the *.NVM input file generated by the VisualSFM software. It is C# written in the Grasshopper plugin for Rhinoceros. Running it in Grasshopper produces two JSON files as output: the first contains the viewpoint positions and the image file names; the second contains the subject points and their measurement counts.
Workflow: Input NVM File → Stream Reader → JSON Writer → Output JSON file
private void RunScript(string file, ref object E, ref object camera, ref object B, ref object subject, ref object D, ref object NumCount)
{
  List<Point3d> campt = new List<Point3d>();
  List<Plane> camdir = new List<Plane>();
  List<Point3d> pt = new List<Point3d>();
  List<Color> col = new List<Color>();
  List<double> numCount = new List<double>();
  List<string> names = new List<string>();

  string line;

  // Read the file and process it line by line.
  System.IO.StreamReader reader = new System.IO.StreamReader(file);

  line = reader.ReadLine();
  if (line.Contains("NVM_V3")) {
    while ((line = reader.ReadLine()) != null)
    {
      // ................. extract cameras
      int cameras = 0; // initialize a camera count variable to 0
      // search line by line until the first line that can be converted to an integer
      while (!int.TryParse(line, out cameras) && line != null) {
        line = reader.ReadLine(); // the line was empty, read the next one
      }

      Print(cameras.ToString());

      for (int i = 0; i < cameras; ++i) {
        line = reader.ReadLine();              // read one line that contains one camera definition
        string[] campair = line.Split('\t');   // split in two at the tab character
        string imageName = campair[0];         // first part contains the filename of the camera
        // string imageName = Path.Combine(imagesLocation, temp);

        names.Add(imageName);

        string[] camdata = campair[1].Split(' '); // the second part contains the camera data

        // <focal length> <quaternion WXYZ> <camera center> <radial distortion> 0

        double focalLength = double.Parse(camdata[0]);
        Point3d campoint = new Point3d(
          double.Parse(camdata[5]),
          double.Parse(camdata[6]),
          double.Parse(camdata[7])
        );

        Quaternion q = new Quaternion(
          double.Parse(camdata[1]),
          double.Parse(camdata[2]),
          double.Parse(camdata[3]),
          double.Parse(camdata[4])
        );

        // Vector3d camAxis = q.Rotate(Vector3d.ZAxis);
        Plane camPlane = Plane.WorldXY;
        camPlane.Transform(q.MatrixForm());

        camPlane.Origin = campoint;
        camdir.Add(camPlane);

        campt.Add(campoint);

        // Print(campair[0]);
      }

      // ................. extract points
      // read lines until the first line that can be converted to a number: that is the number of points
      int pointcount = 0;
      while (!int.TryParse(line, out pointcount) && line != null) {
        line = reader.ReadLine();
      }

      Print(pointcount.ToString());

      for (int i = 0; i < pointcount; ++i) {
        line = reader.ReadLine();

        // <XYZ> <RGB> <number of measurements> <List of Measurements>
        // <Measurement> = <Image index> <Feature Index> <xy>
        string[] pdata = line.Split(' ');

        Point3d point = new Point3d(
          double.Parse(pdata[0]),
          double.Parse(pdata[1]),
          double.Parse(pdata[2])
        );

        Color color = Color.FromArgb(
          255,
          int.Parse(pdata[3]),
          int.Parse(pdata[4]),
          int.Parse(pdata[5])
        );

        double numMeasure = int.Parse(pdata[6]);
        pt.Add(point);
        col.Add(color);
        numCount.Add(numMeasure);
      }
      break;
    }
  }
  reader.Close();

  camera = campt;
  B = camdir;
  E = names;
  subject = pt;
  D = col;
  NumCount = numCount;
}
78 for(int i = 0; i < pointcount; ++i) { 79 line = reader.ReadLine(); 80 81 //<XYZ> <RGB> <number of measurements> <List of Measurements> 82 //<Measurement> = <Image index> <Feature Index> <xy> 83 string [] pdata = line.Split(‘ ‘); 84 85 Point3d point = new Point3d( 86 double.Parse(pdata[0]), 87 double.Parse(pdata[1]), 88 double.Parse(pdata[2]) 89 ); 90 91 Color color = Color.FromArgb( 92 255, 93 int.Parse(pdata[3]), 94 int.Parse(pdata[4]), 95 int.Parse(pdata[5]) 96 ); 97 98 double numMeasure = int.Parse(pdata[6]); 99 pt.Add(point); 100 col.Add(color); 101 numCount.Add(numMeasure); 102 } 103 break; 104 } 105 } 106 reader.Close(); 107 108 camera = campt; 109 B = camdir; 110 E = names; 111 subject = pt; 112 D = col; 113 NumCount = numCount; 114 }
Appendix A (Continued)
C#
private void RunScript(List<string> IMG, List<Plane> camPlanes, string file)
{
  StreamWriter w = new StreamWriter(file);

  // date
  Random randNum = new Random();
  DateTime minDt = new DateTime(2010, 10, 6, 10, 0, 0);
  DateTime maxDt = new DateTime(2017, 1, 17, 10, 0, 0);
  DateTime ramDate = new DateTime(2010, 10, 6, 10, 0, 0);
  string date;

  // Random.Next in .NET is non-inclusive to the upper bound (@NickLarsen)
  int minutesDiff = Convert.ToInt32(maxDt.Subtract(minDt).TotalMinutes + 1);
  w.WriteLine("[");

  // camera
  for (int i = 0; i < camPlanes.Count; ++i) {
    Plane cam = camPlanes[i];
    // string name = imageNames[i];

    // date: some random number that's no larger than minutesDiff, no smaller than 1
    int r = randNum.Next(1, minutesDiff);
    ramDate = minDt.AddMinutes(r);
    date = ramDate.ToString("MM-dd-yyyy");

    if (i < camPlanes.Count - 1) {
      w.Write("{");
      w.Write("\"IMG");
      w.Write("\": \"");
      w.Write(IMG[i]);
      w.Write("\",");

      w.Write(" \"X");
      w.Write("\":");
      w.Write(cam.Origin.X);
      w.Write(",");

      w.Write(" \"Y");
      w.Write("\":");
      w.Write(cam.Origin.Y);
      w.Write(",");

      w.Write(" \"Z");
      w.Write("\":");
      w.Write(cam.Origin.Z);
      w.Write(",");

      w.Write(" \"date");
      w.Write("\": \"");
      w.Write(date);
      w.Write("\"}, ");
    }
    else {
      w.Write("{");
      w.Write("\"IMG");
      w.Write("\": \"");
      w.Write(IMG[i]);
      w.Write("\",");

      w.Write(" \"X");
      w.Write("\":");
      w.Write(cam.Origin.X);
      w.Write(",");

      w.Write(" \"Y");
      w.Write("\":");
      w.Write(cam.Origin.Y);
      w.Write(",");

      w.Write(" \"Z\": ");
      w.Write(cam.Origin.Z);

      w.Write(", \"date");
      w.Write("\": \"");
      w.Write(date);
      w.Write("\"}");
    }
    Vector3d camNormal = cam.Normal;
  }
  w.WriteLine("]");
  w.Close();
}
Appendix A (Continued)
C#
private void RunScript(List<Point3d> points, List<double> MCount, string file)
{
  StreamWriter w = new StreamWriter(file);

  w.WriteLine("[");

  for (int i = 0; i < points.Count; ++i) {
    Point3d pts = points[i];
    double n = MCount[i];

    if (i < points.Count - 1) {
      w.Write("{");
      w.Write(" \"X");
      w.Write("\":");
      w.Write(pts.X);
      w.Write(",");
      w.Write(" \"Y");
      w.Write("\":");
      w.Write(pts.Y);
      w.Write(",");
      w.Write(" \"Z");
      w.Write("\":");
      w.Write(pts.Z);
      w.Write(",");
      w.Write(" \"N");
      w.Write("\":");
      w.Write(n);
      w.Write("},");
    }
    else {
      w.Write(" {");
      w.Write(" \"X");
      w.Write("\":");
      w.Write(pts.X);
      w.Write(",");
      w.Write(" \"Y");
      w.Write("\":");
      w.Write(pts.Y);
      w.Write(",");
      w.Write(" \"Z");
      w.Write("\":");
      w.Write(pts.Z);
      w.Write(",");
      w.Write(" \"N");
      w.Write("\":");
      w.Write(n);
      w.Write("}");
    }
  }
  w.WriteLine("]");
  w.Close();
}
Appendix B  Data Visualization Index
index.html
1 <!doctype html> 70 2 <html> 71 <span class=”fadeOnLoad0”>by Zhiwei Liao</span> 3 72 </h6> 4 <head> 73 </div> 5 <meta charset=”utf-8”> 74 <div id=”preloader” class=”row col-xs-12 col-md-12 text6 <meta name=”description” content=””> center “ style=”font-size: 1.0em”> 7 <meta name=”viewport” content=”width=device-width, initial- 75 <br/> scale=1”> 76 <span><i class=”fa fa-spinner fa-3x fa-spin” 8 <title>Collective Visual Field</title> style=”color: grey; margin: 10px;”></i></span> 9 <link rel=”stylesheet” href=”css/bootstrap.min.css”> 77 <br/> 10 <link rel=”stylesheet” href=”css/font-awesome.min.css”> 78 <span style=”color: grey”>Data is loading.</span> 11 <link rel=”stylesheet” href=”css/leaflet.css”> 79 <br/> 12 <link rel=”stylesheet” href=”css/scrolling-nav.css”> 80 </div> 13 <link rel=”stylesheet” href=”css/overlay.css”> 81 <div class=”row”> 14 <link rel=”stylesheet” href=”css/style.css”> 82 <div class=”col-xs-12 col-md-4 text-center fadeOnLoad1 15 <link href=’https://fonts.googleapis.com/css?family=Raleway’ fact pulse”> rel=’stylesheet’ type=’text/css’> 83 <div class=”title-numbers”><span class=”fixing”></ 16 <link href=’https://fonts.googleapis.com/ span>6 thousand</div> css?family=Raleway:100’ rel=’stylesheet’ type=’text/css’> 84 <div class=”title-numbers-name”> 17 <link rel=”stylesheet” href=”css/font-awesome.min.css”> 85 Instagram Photos Analysis</div> 18 <link rel=”stylesheet” href=”https://maxcdn.bootstrapcdn.com/ 86 </div> font-awesome/4.6.1/css/font-awesome.min.css”> 87 <div class=”col-xs-12 col-md-4 text-center fadeOnLoad2 19 <link href=”https://fonts.googleapis.com/ fact pulse”> css?family=Roboto+Slab:300|Roboto:700” rel=”stylesheet”> 88 <div class=”title-numbers”>7 hundred</div> 20 <link href=”https://fonts.googleapis.com/ 89 <div class=”title-numbers-name”>Visual css?family=Roboto+Slab:300” rel=”stylesheet”> Interaction</div> 21 <link href=’http://fonts.googleapis.com/ 90 </div> css?family=Raleway|Poiret+One’ rel=’stylesheet’ type=’text/css’> 91 
<div class=”col-xs-12 col-md-4 text-center fadeOnLoad3 22 </head> fact pulse”> 23 92 <div class=”title-numbers”>8</div> 24 <body data-spy=”scroll” data-target=”.navbar” data-offset=”50”> 93 <div class=”title-numbers-name”>Architectural 25 <!-- ..................................................................................... --> Sites</div> 26 <!-- Navigation bar--> 94 </div> 27 <!-- ..................................................................................... --> 95 </div> 28 <nav class=”navbar navbar-inverse navbar-fixed-top 96 </div> noBackground” id=”navbar” style=”visibility:visible;”> 97 </div> 29 <div class=”container-fluid”> 98 </div> 30 <div class=”navbar-header”> 99 <!-- ..................................................................................... --> 31 <a class=”navbar-brand” href=”#”>Foreground</a> 100 <!-- explanation page--> 32 </div> 101 <!-- ..................................................................................... --> 33 <ul class=”nav navbar-nav”> 102 <section id=”Vision” class=”intro-section”> 34 <li><a class=”page-scroll” href=”#Vision”>Vision</a></ 103 <div class=”container”> li> 104 <div class=”row”> 35 <li><a class=”page-scroll” href=”#Overview”>Overview</ 105 <div class=”col-lg-2”></div> a></li> 106 <div class=”col-lg-8”> 36 <!-- <li><a class=”page-scroll” 107 <h1>Vision</h1> href=”#dualviews”>Dual View</a></li> --> 108 <p>As we may have noticed quite often, a professional 37 <li class=”dropdown”><a class=”dropdown-toggle” data- photographer takes photos different from a normal person. What are the toggle=”dropdown” href=”#”>Dual View<span class=”caret”></span></a> professional efforts to make a better shot if using the same equipment? 
38 <ul class=”dropdown-menu”> 109 </p> 39 <li><a class=”page-scroll” href=”#part”>Parthenon</ 110 <p>As one of the architects who design buildings, I a></li> rarely know the way people perceive buildings, such as locations where they 40 <li><a class=”page-scroll” href=”#pant”>Pantheon</ tend to stand and frame their views, features which draw their attention a></li> and even details that they care about. Through computing a series of photos 41 <li><a class=”page-scroll” from social media such as Instagram and reconstructing a set of 3d points href=”#camp”>Campidoglio</a></li> including viewpoints, We can now visualize those information, hidden or not 42 <li><a class=”page-scroll” href=”#sagr”>Sagrada so obvious otherwise, and compare those with the ones that framed by the Familia</a></li> photographers or the architects. As a result, on the one hand, the photograph 43 <li><a class=”page-scroll” href=”#vill”>Villa techniques can be represented to the general public, on the other, urban Savoye</a></li> photos in the social media at large, which are shaped by the space, can be 44 <li><a class=”page-scroll” analysed and thus inform designers.</p> href=”#ronc”>Ronchamp</a></li> 111 <p>So here is the hypothesis: Computer Vision can 45 <li><a class=”page-scroll” improve the way people navigate the urban space and inform how we design href=”#gugg”>Guggenheim, Bilbao</a></li> it, virtually then physically.</p> 46 <li><a class=”page-scroll” href=”#cctv”>CCTV 112 </div> Headquarter</a></li> 113 </div> 47 </ul> 114 </div> 48 </li> 115 </section> 49 <li><a class=”page-scroll” href=”#Iconic”>Iconic Index</ 116 <section id=”Overview” class=”intro-section”> a></li> 117 <div class=”container”> 50 <li><a class=”page-scroll” href=”#site”>Site Location</ 118 <div class=”row”> a></li> 119 <div class=”col-lg-2”></div> 51 <li><a class=”page-scroll” 120 <div class=”col-lg-8”> href=”#Background”>Background</a></li> 121 <h1>Overview</h1> 52 </ul> 122 <p><strong></strong> In the past decade, 
we have 53 </div> been able to reconstruct 3d model of a given scene or a building through 54 </nav> computation based on a series of overlapping images. There are existing 55 <!-- ..................................................................................... --> application to use these reconstructed 3d information for urban tourism 56 <!-- Cover page--> (Snavely, 2006). However, the attempts have been focusing more on the 3D 57 <!-- ..................................................................................... --> representation, in which the relation between the viewpoints and the subject 58 <div id=”introSection”> is less investigated. This project is to map out the location of the viewpoints 59 <div class=”containervertical”> on a site plan in 2D. The users can thus navigate the Instagram photos by 60 <div class=”row”> referencing their individual viewpoint which can be filtered by brushing the 61 <br> time bar. A photo of the building facade is also shown adjacently to highlight 62 <br> the computer detectable features such as contour and texture, which is 63 <br> closely related to human perception. </p> 64 <div class=”row”> 123 </div> 65 <h1 class=”title-name”> 124 <div class=”col-lg-12”> 66 125 <!-- Trigger the modal with a button 67 <span 126 class=”fadeOnLoad0”>COLLECTIVE VISUAL FIELD <br>IN BUILT 127 <button ENVIRONMENT</span> type=”button” class=”btn btn-info btn-group-sm” data-toggle=”modal” data68 </h1> target=”#myModal”>Reconstruct 3D</button> 69 <h6 class=”title-numbers-name col-xs-12 text-center” 128 --> style=”padding:5px”> 129 <img class=”img-circle” data-toggle=”modal” data-
Appendix B (Continued) target=”#myModal0” src=”img/isoPts0.jpg” alt=”” height=”64”> data-dismiss=”modal”>Close</button> 130 <img class=”img-circle” data-toggle=”modal” data202 </div> target=”#myModal1” src=”img/isoPts1.jpg” alt=”” height=”64”> 203 </div> 131 <img class=”img-circle” data-toggle=”modal” data204 </div> target=”#myModal2” src=”img/isoPts2.jpg” alt=”” height=”64”> 205 </div> 132 <img class=”img-circle” data-toggle=”modal” data206 <div class=”modal fade” id=”myModal4” role=”dialog”> target=”#myModal3” src=”img/isoPts3.jpg” alt=”” height=”64”> 207 <div class=”modal-dialog modal-lg”> 133 <img class=”img-circle” data-toggle=”modal” data208 <div class=”modal-content”> target=”#myModal4” src=”img/isoPts4.jpg” alt=”” height=”64”> 209 <div class=”modal-header”> 134 <img class=”img-circle” data-toggle=”modal” data210 <button type=”button” class=”close” datatarget=”#myModal5” src=”img/isoPts5.jpg” alt=”” height=”64”> dismiss=”modal”>&times;</button> 135 <img class=”img-circle” data-toggle=”modal” data211 <h4 class=”modal-title”>3D Reconstruction</ target=”#myModal6” src=”img/isoPts6.jpg” alt=”” height=”64”> h4> 136 <img class=”img-circle” data-toggle=”modal” data212 </div> target=”#myModal7” src=”img/isoPts7.jpg” alt=”” height=”64”> 213 <div class=”modal-body”> 137 <!-- Modal --> 214 <p>Villa Savoye</p> 138 <div class=”modal fade” id=”myModal0” role=”dialog”> 215 <img src=”img/isoPts4.jpg” alt=”” 139 <div class=”modal-dialog modal-lg”> height=”400”> 140 <div class=”modal-content”> 216 </div> 141 <div class=”modal-header”> 217 <div class=”modal-footer”> 142 <button type=”button” class=”close” data218 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 143 <h4 class=”modal-title”>3D Reconstruction</ 219 </div> h4> 220 </div> 144 </div> 221 </div> 145 <div class=”modal-body”> 222 </div> 146 <p>Parthenon</p> 223 <div class=”modal fade” id=”myModal5” role=”dialog”> 147 <img src=”img/isoPts0.jpg” alt=”” 224 
<div class=”modal-dialog modal-lg”> height=”400”> 225 <div class=”modal-content”> 148 </div> 226 <div class=”modal-header”> 149 <div class=”modal-footer”> 227 <button type=”button” class=”close” data150 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 228 <h4 class=”modal-title”>3D Reconstruction</ 151 </div> h4> 152 </div> 229 </div> 153 </div> 230 <div class=”modal-body”> 154 </div> 231 <p>Ronchamp</p> 155 <div class=”modal fade” id=”myModal1” role=”dialog”> 232 <img src=”img/isoPts5.jpg” alt=”” 156 <div class=”modal-dialog modal-lg”> height=”400”> 157 <div class=”modal-content”> 233 </div> 158 <div class=”modal-header”> 234 <div class=”modal-footer”> 159 <button type=”button” class=”close” data235 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 160 <h4 class=”modal-title”>3D Reconstruction</ 236 </div> h4> 237 </div> 161 </div> 238 </div> 162 <div class=”modal-body”> 239 </div> 163 <p>Pantheon </p> 240 <div class=”modal fade” id=”myModal6” role=”dialog”> 164 <img src=”img/isoPts1.jpg” alt=”” 241 <div class=”modal-dialog modal-lg”> height=”400”> 242 <div class=”modal-content”> 165 </div> 243 <div class=”modal-header”> 166 <div class=”modal-footer”> 244 <button type=”button” class=”close” data167 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 245 <h4 class=”modal-title”>3D Reconstruction</ 168 </div> h4> 169 </div> 246 </div> 170 </div> 247 <div class=”modal-body”> 171 </div> 248 <p>Guggenheim Museum, Bilbao</p> 172 <div class=”modal fade” id=”myModal2” role=”dialog”> 249 <img src=”img/isoPts6.jpg” alt=”” 173 <div class=”modal-dialog modal-lg”> height=”400”> 174 <div class=”modal-content”> 250 </div> 175 <div class=”modal-header”> 251 <div class=”modal-footer”> 176 <button type=”button” class=”close” data252 <button type=”button” class=”btn btn-default” 
dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 177 <h4 class=”modal-title”>3D Reconstruction</ 253 </div> h4> 254 </div> 178 </div> 255 </div> 179 <div class=”modal-body”> 256 </div> 180 <p>Piazza Del Campidoglio</p> 257 <div class=”modal fade” id=”myModal7” role=”dialog”> 181 <img src=”img/isoPts2.jpg” alt=”” 258 <div class=”modal-dialog modal-lg”> height=”400”> 259 <div class=”modal-content”> 182 </div> 260 <div class=”modal-header”> 183 <div class=”modal-footer”> 261 <button type=”button” class=”close” data184 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 262 <h4 class=”modal-title”>3D Reconstruction</ 185 </div> h4> 186 </div> 263 </div> 187 </div> 264 <div class=”modal-body”> 188 </div> 265 <p>CCTV Headquarter</p> 189 <div class=”modal fade” id=”myModal3” role=”dialog”> 266 <img src=”img/isoPts7.jpg” alt=”” 190 <div class=”modal-dialog modal-lg”> height=”400”> 191 <div class=”modal-content”> 267 </div> 192 <div class=”modal-header”> 268 <div class=”modal-footer”> 193 <button type=”button” class=”close” data269 <button type=”button” class=”btn btn-default” dismiss=”modal”>&times;</button> data-dismiss=”modal”>Close</button> 194 <h4 class=”modal-title”>3D Reconstruction</ 270 </div> h4> 271 </div> 195 </div> 272 </div> 196 <div class=”modal-body”> 273 </div> 197 <p>Sagrada Familia</p> 274 </div> 198 <img src=”img/isoPts3.jpg” alt=”” 275 </div> height=”400”> 276 </div> 199 </div> 277 </section> 200 <div class=”modal-footer”> 278 <br><br><br><br><br><br><br> 201 <button type=”button” class=”btn btn-default” 279 <!-- ..................................................................................... -->
280 <!-- Main page--> 369 281 <!-- ..................................................................................... --> 370 282 <nav class=”navbar navbar-inverse noBackgroundInverse “ data- 371 spy=”affix” data-offset-top=”1530” data-offset-bottom=”1850”> 372 283 <div class=”nav navbar-nav “> 373 284 <div class=”container head6”> 374 285 <div class=”row”> 375 286 <div class=”col-sm-5”> 376 287 <div> 377 288 <svg width=”15px” height=”15px”> 378 289 <circle cx=”15” cy=”15” r=”7.5” fill=”#FFC300” 379 opacity=”1”> 380 290 </circle> 381 291 </svg>&nbsp Prevailing photo interest on site 382 292 <svg width=”15px” height=”15px”> 383 293 <circle cx=”15” cy=”15” r=”7.5” fill=”#B2BABB” 384 opacity=”1”> 385 294 </circle> 386 295 </svg>&nbsp Other Views</div> 387 296 </div> 388 297 <div class=”col-sm-4 “> 389 298 <div> 390 299 <svg width=”17.5px” height=”15px”> 391 300 <circle cx=”7.5” cy=”10” r=”5” fill=”#FFC300” 392 opacity=”1”> 393 301 </circle> 394 302 </svg>Viewer location </div> 395 303 </div> 396 304 <div class=”col-sm-3 “> 397 305 <div> 398 306 <svg width=”17.5px” height=”15px”> 399 307 <circle cx=”7.5” cy=”10” r=”5” fill=”#16a085” 400 opacity=”1”> 401 308 </circle> 402 309 </svg>Detected feature (size represents 403 measurement count)</div> 404 310 </div> 405 311 </div> 406 312 </div> 407 313 </div> 408 314 </nav> 409 315 <div class=”container”> 410 316 <div class=”row pad” id=”part”> 411 317 <div class=”col-sm-4 Titles”> 412 318 <div class=”pies” id=”pie-1”> 413 319 <p id=”Titl-1”> </p> 414 320 </div> 415 321 </div> 416 322 <div class=”col-sm-4 imagePlans”> 417 323 <div id=”imagePlan-1” class=”imgPlan”> </div> 418 324 </div> 419 325 <div class=”col-sm-4 imageElevs”> 420 326 <div id=”imageElev-1” class=”imgElev”> </div> 421 327 </div> 422 328 </div> 423 329 <div class=”row”> 424 330 <div class=”col-sm-4 Titles”></div> 425 331 <div class=”col-sm-4 Times”> 426 332 <div id=”Time-1” class=”Time”> </div> 427 333 </div> 428 334 <div class=”col-sm-4 measures”> 429 335 
<div id=”Measure-1”> </div> 430 336 </div> 431 337 </div> 432 338 <div class=”row pad2” id=”pant”> 433 339 <div class=”col-sm-4 Titles”> 434 340 <div class=”pies” id=”pie-2”> 435 341 <p id=”Titl-2”> </p> 436 342 </div> 437 343 </div> 438 344 <div class=”col-sm-4 imagePlans”> 439 345 <div id=”imagePlan-2” class=”imgPlan”> </div> 440 346 </div> 441 347 <div class=”col-sm-4 imageElevs”> 442 348 <div id=”imageElev-2” class=”imgElev”> </div> 443 349 </div> 444 350 </div> 445 351 <div class=”row”> 446 352 <div class=”col-sm-4 Titles”></div> 447 353 <div class=”col-sm-4 Times”> 448 354 <div id=”Time-2” class=”Time”> </div> 449 355 </div> 450 356 <div class=”col-sm-4 measures”> 451 357 <div id=”Measure-2”> </div> 452 358 </div> 453 359 </div> 454 360 <div class=”row pad2” id=”camp”> 455 361 <div class=”col-sm-4 Titles”> 456 362 <div class=”pies” id=”pie-3”> 457 363 <p id=”Titl-3”> </p> 458 364 </div> 459 365 </div> 460 366 <div class=”col-sm-4 imagePlans”> 461 367 <div id=”imagePlan-3” class=”imgPlan”> </div> 462 368 </div> 463
<div class=”col-sm-4 imageElevs”> <div id=”imageElev-3” class=”imgElev”> </div> </div> </div> <div class=”row”> <div class=”col-sm-4 Titles”></div> <div class=”col-sm-4 Times”> <div id=”Time-3” class=”Time”> </div> </div> <div class=”col-sm-4 measures”> <div id=”Measure-3”> </div> </div> </div> <div class=”row pad2” id=”sagr”> <div class=”col-sm-4 Titles”> <div class=”pies” id=”pie-4”> <p id=”Titl-4”> </p> </div> </div> <div class=”col-sm-4 imagePlans”> <div id=”imagePlan-4” class=”imgPlan”> </div> </div> <div class=”col-sm-4 imageElevs”> <div id=”imageElev-4” class=”imgElev”> </div> </div> </div> <div class=”row”> <div class=”col-sm-4 Titles”></div> <div class=”col-sm-4 Times”> <div id=”Time-4” class=”Time”> </div> </div> <div class=”col-sm-4 measures”> <div id=”Measure-4”> </div> </div> </div> <div class=”row pad2” id=”vill”> <div class=”col-sm-4 Titles”> <div class=”pies” id=”pie-5”> <p id=”Titl-5”> </p> </div> </div> <div class=”col-sm-4 imagePlans”> <div id=”imagePlan-5” class=”imgPlan”> </div> </div> <div class=”col-sm-4 imageElevs”> <div id=”imageElev-5” class=”imgElev”> </div> </div> </div> <div class=”row”> <div class=”col-sm-4 Titles”></div> <div class=”col-sm-4 Times”> <div id=”Time-5” class=”Time”> </div> </div> <div class=”col-sm-4 measures”> <div id=”Measure-5”> </div> </div> </div> <div class=”row pad2” id=”ronc”> <div class=”col-sm-4 Titles”> <div class=”pies” id=”pie-6”> <p id=”Titl-6”> </p> </div> </div> <div class=”col-sm-4 imagePlans”> <div id=”imagePlan-6” class=”imgPlan”> </div> </div> <div class=”col-sm-4 imageElevs”> <div id=”imageElev-6” class=”imgElev”> </div> </div> </div> <div class=”row”> <div class=”col-sm-4 Titles”></div> <div class=”col-sm-4 Times”> <div id=”Time-6” class=”Time”> </div> </div> <div class=”col-sm-4 measures”> <div id=”Measure-6”> </div> </div> </div> <div class=”row pad2” id=”gugg”> <div class=”col-sm-4 Titles”> <div class=”pies” id=”pie-7”> <p id=”Titl-7”> </p> </div> </div> <div class=”col-sm-4 imagePlans”> <div 
id=”imagePlan-7” class=”imgPlan”> </div> </div> <div class=”col-sm-4 imageElevs”> <div id=”imageElev-7” class=”imgElev”> </div> </div> </div> <div class=”row “> <div class=”col-sm-4 Titles”></div> <div class=”col-sm-4 Times”>
Appendix B (Continued) 464 <div id=”Time-7” class=”Time”> </div> 539 </p> 465 </div> 540 </div> 466 <div class=”col-sm-4 measures”> 541 </div> 467 <div id=”Measure-7”> </div> 542 </div> 468 </div> 543 </div> 469 </div> 544 </section> 470 <div class=”row pad2” id=”cctv”> 545 <section id=”Background” class=””> 471 <div class=”col-sm-4 Titles”> 546 <!-- ..................................................................................... --> 472 <div class=”pies” id=”pie-8”> 547 <!-- Credits --> 473 <p id=”Titl-8”> </p> 548 <!-- ..................................................................................... --> 474 </div> 549 <div class=”section” id=”creditsSection”> 475 </div> 550 <div class=”containervertical”> 476 <div class=”col-sm-4 imagePlans”> 551 <div class=”row pad”> 477 <div id=”imagePlan-8” class=”imgPlan”> </div> 552 <div class=”col-xs-12 col-md-12”> 478 </div> 553 <h2>Acknowledgements</h2> 479 <div class=”col-sm-4 imageElevs”> 554 <p></p> 480 <div id=”imageElev-8” class=”imgElev”> </div> 555 <p> I really appreciate the structure, teaching 481 </div> supports and knowledge of CS171, which allows me systematically adsorb the 482 </div> fundamental principles of visualization and the techniques around it. I really 483 </div> learn in depth by going through the process of developing the final project. I 484 <br><br><br> would like to thank the entire CS171 team. Their input inspired me in many 485 <div class=”container”> ways. 
</p> 486 <div class=”row”> 556 </div> 487 <div class=”col-sm-4”></div> 557 </div> 488 <div class=”col-sm-4 Times”> 558 <div class=”container”> 489 <div id=”Time-8” class=”Time”> </div> 559 <div class=”row”> 490 </div> 560 <div class=”col-xs-12 col-md-4 credits”> 491 <div class=”col-sm-4 measures”> 561 <h3>Data sources</h3> 492 <div id=”Measure-8”> </div> 562 <ul> 493 </div> 563 <li> 494 </div> 564 <mark>Structure from Motion</mark>, 495 </div> Changchang Wu, “VisualSFM: A Visual Structure from Motion System”,2011 496 <!-- ..................................................................................... --> 565 <a href=”http://ccwu.me/vsfm/” target=”_ 497 <!-- Dessert page--> blank”>link</a></li> 498 <!-- ..................................................................................... --> 566 <li> 499 <section class=”intro-section “> 567 <mark>Bundle Adjustment</mark>, 500 <div class=”container pad2” id=”Iconic”> Changchang Wu, Sameer Agarwal, Brian Curless, and Steven M. Seitz, 501 <div class=”row”> “Multicore Bundle Adjustment”, CVPR 2011</li> 502 <div class=”col-lg-2” id=”bar-1”></div> 568 <li> 503 <div class=”col-lg-8”> 569 <a href=”https://www.instagram.com/explore/ 504 <h1>Iconic Index</h1> tags/parthenon/” target=”_blank”>Parthenon</a> 505 <p><strong></strong> The bar chart shows the 570 </li> percentage of prevailing views in the total photos analyzed </p> 571 <li> 506 </div> 572 <a href=”https://www.instagram.com/explore/ 507 <div class=”container”> tags/pantheon/” target=”_blank”>Pantheon</a> 508 <!-- Select box to choose the ranking type --> 573 </li> 509 <form class=”form-inline”> 574 <li> 510 <div class=”form-group”> 575 <a href=”https://www.instagram.com/explore/ 511 <label for=”rankingType”>Group by</label> locations/371603/piazza-del-campidoglio/” target=”_blank”>piazza del 512 <select class=”form-control” id=”ranking-type”> campidoglio</a> 513 <option value=”year”>Built Year</option> 576 </li> 514 <option value=”percent”>Prevailing Views 
577 <li> Percentage (%)</option> 578 <a href=”https://www.instagram.com/explore/ 515 <option value=”instagram”>Instagram total tags/SagradaFamilia/” target=”_blank”>Sagrada Familia</a> upload</option> 579 </li> 516 </select> 580 <li> 517 <!-- Activity IV onchange=”updateVisualization()” 581 <a href=”https://www.instagram.com/explore/ --> tags/VillaSavoye/” target=”_blank”>Villa Savoye</a> 518 <!--<button type=”button” class=”btn btn-link” 582 </li> id=”change-sorting”><i class=”glyphicon glyphicon-sort”></i> Sort</ 583 <li> button>--> 584 <a href=”https://www.instagram.com/explore/ 519 </div> tags/ronchamp/” target=”_blank”>Ronchamp</a> 520 </form> 585 </li> 521 <!-- Parent container for the visualization --> 586 <li> 522 <div id=”chart-area”></div> 587 <a href=”https://www.instagram.com/explore/ 523 </div> tags/guggenheim/” target=”_blank”>Guggenheim, Bilbao</a> 524 </div> 588 </li> 525 </div> 589 <li> 526 </section> 590 <a href=”https://www.instagram.com/explore/ 527 <section id=”site” class=”pad”> tags/cctvheadquarters/” target=”_blank”>CCTV Headquarters</a> 528 <div class=”container”> 591 </li> 529 <div class=”row “> 592 <li> 530 <!-- map 593 <a href=”https://www.google.com/maps” .....................................................................................--> target=”_blank”>Google Maps</a> 531 <div class=”col-sm-8 “> 594 </li> 532 <div id=”mapid”></div> 595 </ul> 533 </div> 596 <br/> 534 <div class=”col-sm-4”> 597 <h3> </h3> 535 <div> 598 </div> 536 <br> 599 <div class=”col-xs-12 col-md-4 credits 537 <p class=”head4”> creditsmiddle”> 538 <mark>More about the research:</mark> 600 <h3>Images</h3> Collective Visual Field is to examine architecture based on internet photos 601 <p></p> as the collective visual perception. The workflow of the project: Processing 602 <ul> and generating 3D points cloud from internet photos and its data-set, 603 <li> Visualizing the data-set. 
A photo represents the visual consumption of a 604 <a href=”http://icouzin.princeton.edu/currentspecific subject in a particular moment, which contains information of a biology-visual-sensory-networks-and-effective-information-transferlocalized viewpoint and the interest of the viewer. This project is to examine in-animal-groups/” target=”_blank”>Current Biology: Visual sensory the visual perception in a collective way in the built environment through networks and effective information transfer in animal groups</a> by Ariana internet photos. As those photos are created with time and site specifically, Strandburg-Peshkin et al. we are able to investigate a pattern about the viewer’s behavior in the context 605 </li> of built environment through the geoData of the area, the exact time of the 606 <li> photo that was taken, the camera coordinates and the 3D point coordinates 607 <a href=”https://www.flickr.com/ of the subject. SFM (structure from motion) is implemented to compute the photos/79117486@N02/13918225550/sizes/l/” target=”_ dataSet. blank”>Guggenheim, Bilbao</a> by haymartxo
608 </li> 609 <li> 610 <a href=”https://www.dezeen. com/2014/11/26/rem-koolhaas-defends-cctv-building-beijing-chinaarchitecture/” target=”_blank”>Image of the CCTV building from Dezeen is courtesy of Shutterstock</a> 611 </li> 612 <li> 613 <a href=”http://library. artstor.org.ezp-prod1.hul.harvard.edu/library/ExternalIV. jsp?objectId=8CJGbzQuJTE6NjU8ZlN7R3srWXkseFp9&fs=true” target=”_ blank”>Museum Space at the Campodiglio Museum; Piazza del Campidoglio overview</a> by Carlo Aymonino 614 </li> Image and original data provided by ART on FILE </a> 615 <li> 616 <a href=”https://www.flickr.com/photos/ bradkaren/3642761216/sizes/l/” target=”_blank”>Parthenon</a> by Brad & Karen Francis 617 </li> 618 <li> 619 <a href=”https://www.instagram.com/p/ BKk4YA6Aa-L/?tagged=ronchamp” target=”_blank”>Ronchamp</a> by o.steinbauer 620 </li> 621 </ul> 622 </div> 623 <div class=”col-xs-12 col-md-4 credits”> 624 <h3>Plugins</h3> 625 <ul> 626 <li> 627 <a href=”http://www.crummy.com/software/ BeautifulSoup/bs4/doc/” target=”_blank”>Beautiful Soup</a> (web scraping) 628 </li> 629 <li> 630 <a href=”http://d3js.org/” target=”_blank”>d3. 
js</a> (visualizations) 631 </li> 632 <li> 633 <a href=”http://getbootstrap.com/gettingstarted/” target=”_blank”>Bootstrap</a> (webpage layout) 634 </li> 635 <li> 636 <a href=”http://fortawesome.github.io/FontAwesome/icons/” target=”_blank”>Font Awesome</a> (icons) 637 </li> 638 <li> 639 <a href=”http://leafletjs.com/” target=”_ blank”>Leafletjs</a> 640 </li> 641 </ul> 642 <br/> 643 <h3>Process book</h3> 644 <a href=”data/processBook.pdf” target=”_ blank”>Click here to 645 download our process book.</a> 646 <h3>Screen cast</h3> 647 <a href=”https://vimeo.com/195404358” target=”_ blank”>link</a> 648 </div> 649 </div> 650 </div> 651 </div> 652 </div> 653 </section> 654 <!-- Load JS libraries --> 655 <script src=”js/jquery.min.js”></script> 656 <script src=”js/jquery.easing.min.js”></script> 657 <script src=”js/bootstrap.min.js”></script> 658 <script src=”js/queue.min.js”></script> 659 <script src=”js/colorbrewer.js”></script> 660 <script src=”js/scrolling-nav.js”></script> 661 <script src=”js/leaflet.js”></script> 662 <script src=”js/stationMap.js”></script> 663 <script src=”js/d3.min.js”></script> 664 <script src=”js/d3.tip.js”></script> 665 <script src=”js/vis-facade.js”></script> 666 <script src=”js/vis-scatterPlot.js”></script> 667 <script src=”js/attention.js”></script> 668 <script src=”js/Timeline.js”></script> 669 <script src=”js/measure.js”></script> 670 <script src=”js/viewPercent.js”></script> 671 <script src=”js/main.js”></script> 672 </body> 673 674 </html>
Appendix C  Main JavaScript
main.js
1 // Variables for the visualization instances 2 var Plans = [], 3 Elevs = [], 4 Times = [], 5 Measures = []; 6 7 // decimal digits to minimize files’ size 8 var digit = 1; 9 10 // Date parser to convert strings to date objects 11 var parseDate = d3.time.format(“%m-%d-%Y”).parse; 12 var formatDate = d3.time.format(“%b-%y-%Y”); 13 var Configs; 14 var map; 15 var pie0, pie1, pie2, pie3, pie4, pie5, pie6, pie7; 16 17 //var dataAttention = [55,7,7,7,4,3,142]; 18 var partCamCount = [{ “type”: “Other”, “value”: 194 }, { “type”: “Major”, “value”: 55 }]; 19 var pantCamCount = [{ “type”: “Other”, “value”: 132 }, { “type”: “Major”, “value”: 57 }]; 20 var campCamCount = [{ “type”: “Other”, “value”: 168 }, { “type”: “Major”, “value”: 57 }]; 21 var sagrCamCount = [{ “type”: “Other”, “value”: 165 }, { “type”: “Major”, “value”: 60 }]; 22 var villCamCount = [{ “type”: “Other”, “value”: 1768 }, { “type”: “Major”, “value”: 257 }]; 23 var roncCamCount = [{ “type”: “Other”, “value”: 189 }, { “type”: “Major”, “value”: 48 }]; 24 var guggCamCount = [{ “type”: “Other”, “value”: 1231 }, { “type”: “Major”, “value”: 122 }]; 25 var cctvCamCount = [{ “type”: “Other”, “value”: 196 }, { “type”: “Major”, “value”: 49 }]; 26 27 pie0 = new PieChart(“pie-1”, partCamCount); 28 pie1 = new PieChart(“pie-2”, pantCamCount); 29 pie2 = new PieChart(“pie-3”, campCamCount); 30 pie3 = new PieChart(“pie-4”, sagrCamCount); 31 pie4 = new PieChart(“pie-5”, villCamCount); 32 pie5 = new PieChart(“pie-6”, roncCamCount); 33 pie6 = new PieChart(“pie-7”, guggCamCount); 34 pie7 = new PieChart(“pie-8”, cctvCamCount); 35 36 queue().defer(d3.csv, “data/configs.csv”) 37 .await(function(error, data) { 38 if (!error) { 39 Configs = data; 40 41 } 42 43 for (i = Configs.length; i > 0; i--) { 44 // load images 45 $(‘#imageElev-’ + i.toString()) 46 .prepend(‘<img src=”data/’ + Configs[i - 1].key + ‘/Elev.jpg” />’) 47 48 $(‘#imagePlan-’ + i.toString()) 49 .prepend(‘<img src=”data/’ + Configs[i - 1].key + ‘/Plan.jpg” />’) 50 } 51 52 
for (i = 1; i <= (Configs.length); i++) {
    $('#Titl-' + i)
        .prepend('<p>' + "<br>" + Configs[i - 1].title + ", built in " + Configs[i - 1].year + '</p>')
}
map = new Map("mapid", Configs)
loadData();
})

function loadData() {
    queue()
        .defer(d3.json, "data/partCam.json")
        .defer(d3.json, "data/partPts.json")
        .defer(d3.json, "data/pantCam.json")
        .defer(d3.json, "data/pantPts.json")
        .defer(d3.json, "data/campCam.json")
        .defer(d3.json, "data/campPts.json")
        .defer(d3.json, "data/sagrCam.json")
        .defer(d3.json, "data/sagrPts.json")
        .defer(d3.json, "data/savoCam.json")
        .defer(d3.json, "data/savoPts.json")
        .defer(d3.json, "data/roncCam.json")
        .defer(d3.json, "data/roncPts.json")
        .defer(d3.json, "data/guggCam.json")
        .defer(d3.json, "data/guggPts.json")
        .defer(d3.json, "data/cctvCam.json")
        .defer(d3.json, "data/cctvPts.json")
        .await(function(error, data0, data1, data2, data3, data4, data5, data6, data7,
            data8, data9, data10, data11, data12, data13, data14, data15) {

            data11 = data11.filter(function(d) {
                return d.X > -1.0 && d.X < 1.5
            })
            data13 = data13.filter(function(d) {
                return d.X > -20
            })

            if (!error) {
                for (i = 0; i < (Configs.length) * 2; i++) {
                    if ((i % 2) == 0) {
                        eval("data" + (i).toString()).forEach(function(d) {
                            d.date = parseDate(d.date);
                            d.X = +parseFloat(d.X).toFixed(digit);
                            d.Y = +parseFloat(d.Y).toFixed(digit);
                            d.Z = +parseFloat(d.Z).toFixed(digit);
                        });
                    } else {
                        eval("data" + (i).toString()).forEach(function(d) {
                            d.X = +parseFloat(d.X).toFixed(digit);
                            d.Y = +parseFloat(d.Y).toFixed(digit);
                            d.Z = +parseFloat(d.Z).toFixed(digit);
                        });
                    }
                }

                enableNavigation();

                for (i = 0; i < (Configs.length) * 2; i++) {
                    if ((i % 2) == 0) {
                        Plans.push(new Plan("imagePlan-" + ((i + 2) / 2).toString(), eval("data" + (i).toString()), eval("data" + (i + 1).toString())));
                        Times.push(new Time("Time-" + ((i + 2) / 2).toString(), eval("data" + (i).toString())));
                    } else {
                        Elevs.push(new Elev("imageElev-" + ((i + 1) / 2).toString(), eval("data" + (i).toString())));
                        Measures.push(new Measure("Measure-" + ((i + 1) / 2).toString(), eval("data" + (i).toString())));
                    }
                }
            }
        })
}

function brushed() {
    for (i = 0; i < Plans.length; i++) {
        Plans[i].z.domain(Times[i].brush.empty() ? Times[i].x.domain() : Times[i].brush.extent());
        Plans[i].wrangleData();
    }
}

function brushedMeasure() {
    for (i = 0; i < Elevs.length; i++) {
        Elevs[i].z.domain(Measures[i].brush.empty() ? Measures[i].x.domain() : Measures[i].brush.extent());
        Elevs[i].wrangleData();
    }
}

function enableNavigation() {
    $('body').removeClass('noscroll');
    $('#preloader').css("visibility", "hidden");
}
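The `eval("data" + i)` lookups above work, but string-built variable names are fragile. Since the `await` callback receives the datasets positionally after the error argument, the same indexing can be done by slicing `arguments` into a plain array. The `collectDatasets` helper below is an illustrative sketch, not part of the thesis code:

```javascript
// Sketch: replace eval("data" + i) with array indexing. queue()'s
// await callback is invoked as (error, data0, data1, ...), so slicing
// `arguments` past the error yields an indexable array of datasets.
function collectDatasets() {
  var error = arguments[0];
  if (error) throw error;
  // datasets[0], datasets[1], ... correspond to data0, data1, ...
  return Array.prototype.slice.call(arguments, 1);
}
```

Inside `.await(function() { ... })` one could then write `var datasets = collectDatasets.apply(null, arguments);` and use `datasets[i]` wherever the listing uses `eval("data" + i)`.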
Appendix D  Site Plan Scatter
vis-scatterPlot.js
Plan = function(_parentElement, _data, _data1) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.data1 = _data1;
    this.displayData = [];
    this.initVis();
}

Plan.prototype.initVis = function() {
    var vis = this;
    vis.margin = { top: 100, right: 100, bottom: 100, left: 100 };
    vis.width = 300 - vis.margin.left - vis.margin.right,
    vis.height = 300 - vis.margin.top - vis.margin.bottom;
    vis.svg = d3.select("#" + vis.parentElement).append("svg")
        .attr("width", vis.width + vis.margin.left + vis.margin.right)
        .attr("height", vis.height + vis.margin.top + vis.margin.bottom)
        .append("g")
        .attr("transform", "translate(" + vis.margin.left + "," + vis.margin.top + ")");
    vis.ratio = 1;
    vis.x = d3.scale.linear()
        .range([0, vis.width * vis.ratio]);
    vis.y = d3.scale.linear()
        .range([vis.height * vis.ratio, 0]);
    vis.z = d3.time.scale()
        .range([0, vis.width])
        .domain(d3.extent(vis.data, function(d) {
            return d.date;
        }));
    vis.xAxis = d3.svg.axis()
        .scale(vis.x)
        .orient("bottom");
    vis.yAxis = d3.svg.axis()
        .scale(vis.y)
        .orient("left");
    vis.zAxis = d3.svg.axis()
        .scale(vis.z)
        .orient("bottom");
    // filter out the far points
    vis.data = vis.data.sort(function(a, b) {
        return b.Z - a.Z;
    });
    vis.data = vis.data.slice(0, -6);
    // minimize the data size
    vis.data = vis.data.map(function(d) {
        return { IMG: d.IMG, X: d.X, Z: d.Z, date: d.date }
    });
    // filter out the subject points by measurement
    vis.data1 = vis.data1.filter(function(d) {
        return d.N > 5
    })
    vis.data1 = vis.data1.map(function(d) {
        return { X: d.X, Z: d.Z, N: d.N }
    });
    vis.wrangleData();
};

Plan.prototype.wrangleData = function() {
    var vis = this;
    var temp = vis.z.domain();
    vis.displayData = vis.data.filter(function(d) {
        return (d.date > temp[0] && d.date < temp[1])
    });
    vis.updateVis();
};

Plan.prototype.updateVis = function() {
    var vis = this;
    vis.min = d3.min(vis.data, function(d) { return d.X; });
    vis.max = d3.max(vis.data, function(d) { return d.X; });
    vis.min1 = d3.min(vis.data, function(d) { return d.Z; });
    vis.max1 = d3.max(vis.data, function(d) { return d.Z; });
    if (vis.min > vis.min1) { vis.min = vis.min1 }
    if (vis.max < vis.max1) { vis.max = vis.max1 }
    vis.x.domain([vis.min, vis.max]);
    vis.y.domain([vis.min, vis.max]);
    vis.svg.select(".x-axis")
        .transition()
        .attr("transform", "translate(0," + vis.height + ")")
        .call(vis.xAxis);
    vis.svg.select(".y-axis")
        .transition()
        .call(vis.yAxis)
    vis.svg.select(".z-axis")
        .transition()
        .attr("transform", "translate(0," + vis.height + ")")
        .call(vis.zAxis);
    vis.tip = d3.tip()
        .attr('class', 'd3-tip')
        .offset([0, 0])
        .html(function(d) {
            return "<strong></strong>" + "<img src='data/IMG/" + d.IMG + "' alt='' >" + formatDate(d.date) + '</img>';
        })
    vis.svg.call(vis.tip);
    /* ...this is a convenient way to create fuzziness for each dot to create a pseudo heatmap, although it is not used in the final visualization
    vis.radialGradient = vis.svg.append("defs")
        .append("radialGradient")
        .attr("id", "radialgradient");
    vis.radialGradient.append("stop")
        .attr("offset", "0%")
        .attr("stop-color", "rgba(255, 0, 255, 0.5)") //#C70039
        .attr("stop-opacity", 1);
    vis.radialGradient.append("stop")
        .attr("offset", "100%")
        .attr("stop-color", "rgba(255, 0, 255, 0.5)")
        .attr("stop-opacity", 0);
    */
    // Camera points
    vis.circles = vis.svg.selectAll(".camera")
        .data(vis.displayData)
    vis.circles
        .enter()
        .append("circle")
        .attr("class", "camera");
    vis.circles
        .transition()
        .attr("cx", function(d) {
            return vis.x(d.X);
        })
        .attr("cy", function(d) {
            return vis.y(d.Z);
        })
        .attr("r", 2.5)
    vis.circles
        .on('mouseover', vis.tip.show)
        .on('mouseout', vis.tip.hide);
    vis.circles
        .exit()
        .remove();

    // building points
    points();

    function points() {
        var circle1 = vis.svg
            .selectAll(".subject")
            .data(vis.data1)
        circle1
            .enter()
            .append("circle")
            .attr("class", "subject");
        circle1
            .transition()
            .attr("cx", function(d) {
                return vis.x(d.X);
            })
            .attr("cy", function(d) {
                return vis.y(d.Z);
            })
            .attr("r", 2)
        circle1
            .exit()
            .remove();
    }
}
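The domain logic in `updateVis` forces the x and y scales onto one shared extent so that plan distances are not distorted between axes. Isolated from D3, that merge step can be sketched as a pure function (the name `sharedDomain` is illustrative, not from the thesis code):

```javascript
// Compute one [min, max] domain covering two coordinate fields, the
// same way Plan.updateVis merges the X and Z extents so both scales
// share a domain and one unit of x equals one unit of y.
function sharedDomain(data, keyA, keyB) {
  var lo = Infinity, hi = -Infinity;
  data.forEach(function(d) {
    lo = Math.min(lo, d[keyA], d[keyB]);
    hi = Math.max(hi, d[keyA], d[keyB]);
  });
  return [lo, hi];
}
```

The result would be passed to both `vis.x.domain(...)` and `vis.y.domain(...)`, as the listing does with `[vis.min, vis.max]`.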
Appendix E  Frontal View
vis-facade.js
Elev = function(_parentElement, _data) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.initVis();
};

Elev.prototype.initVis = function() {
    var vis = this;
    vis.margin = { top: 30, right: 10, bottom: 20, left: 10 };
    vis.width = 300 - vis.margin.left - vis.margin.right;
    vis.height = 300 - vis.margin.top - vis.margin.bottom;
    vis.svg = d3.select("#" + vis.parentElement).append("svg")
        .attr("width", vis.width + vis.margin.left + vis.margin.right)
        .attr("height", vis.height + vis.margin.top + vis.margin.bottom)
        .append("g")
        .attr("transform", "translate(" + vis.margin.left + "," + vis.margin.top + ")");
    vis.x = d3.scale.linear()
        .range([0, vis.width]);
    vis.y = d3.scale.linear()
        .range([0, vis.height]);
    vis.z = d3.time.scale()
        .range([0, vis.width])
        .domain(d3.extent(vis.data, function(d) {
            return d.N;
        }));
    vis.xAxis = d3.svg.axis()
        .scale(vis.x)
        .orient("bottom");
    vis.yAxis = d3.svg.axis()
        .scale(vis.y)
        .orient("left");
    vis.zAxis = d3.svg.axis()
        .scale(vis.z)
        .orient("bottom");
    vis.data = vis.data.filter(function(d) {
        return d.N > 2
    })
    vis.wrangleData();
};

Elev.prototype.wrangleData = function() {
    var vis = this;
    vis.data = vis.data.map(function(d) {
        return { X: d.X, Y: d.Y, N: d.N }
    });
    var temp = vis.z.domain();
    vis.displayData = vis.data.filter(function(d) {
        return (d.N > temp[0] && d.N < temp[1])
    });
    vis.updateVis();
};

Elev.prototype.updateVis = function() {
    var vis = this;
    vis.min = d3.min(vis.data, function(d) { return d.X; });
    vis.max = d3.max(vis.data, function(d) { return d.X; });
    vis.min1 = d3.min(vis.data, function(d) { return d.Y; });
    vis.max1 = d3.max(vis.data, function(d) { return d.Y; });
    if (vis.min > vis.min1) { vis.min = vis.min1 }
    if (vis.max < vis.max1) { vis.max = vis.max1 }
    vis.x.domain([vis.min, vis.max]);
    vis.y.domain([vis.min, vis.max]);
    vis.svg.select(".x-axis")
        .transition()
        .attr("transform", "translate(0," + vis.height + ")")
        .call(vis.xAxis);
    vis.svg.select(".y-axis")
        .transition()
        .call(vis.yAxis)
    vis.tip = d3.tip()
        .attr('class', 'd3-tip')
        .offset([100, 0])
        .html(function(d) {
            return "<strong></strong>" + "<h1>" + d.N + "</h1>"
        });
    vis.svg.call(vis.tip);
    vis.circle = vis.svg.selectAll(".subject")
        .data(vis.displayData);
    vis.circle.enter()
        .append("circle")
        .attr("class", "subject");
    vis.circle.transition()
        .attr("cx", function(d) {
            return vis.x(d.X);
        })
        .attr("cy", function(d) {
            return vis.y(d.Y);
        })
        .attr("r", function(d) {
            return (d.N) * .15;
        })
    vis.circle
        .on('mouseover', vis.tip.show)
        .on('mouseout', vis.tip.hide);
    vis.circle.exit().remove();
    vis.svg.append("defs").append("clipPath")
        .attr("id", "clip")
        .append("rect")
        .attr("width", vis.width)
        .attr("height", vis.height);
};
Appendix F  Attention
attention.js
PieChart = function(_parentElement, _data) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.displayData = [];
    this.initVis();
};

PieChart.prototype.initVis = function() {
    var vis = this;
    vis.margin = { top: 0, right: 0, bottom: 0, left: 0 };
    vis.width = 300 - vis.margin.left - vis.margin.right;
    vis.height = 300 - vis.margin.top - vis.margin.bottom;
    vis.svg = d3.select("#" + vis.parentElement).append("svg")
        .attr("width", vis.width + vis.margin.left + vis.margin.right)
        .attr("height", vis.height + vis.margin.top + vis.margin.bottom)
        .append("g")
        .attr("transform", "translate(" + vis.width / 2 + "," + vis.height / 1.5 + ")");
    vis.pie = d3.layout.pie();
    vis.color = d3.scale.ordinal()
        .range(["#B2BABB", "#FFC300"]);
    vis.outerRadius = vis.width / 3;
    vis.innerRadius = vis.width / 24; // Relevant for donut charts
    vis.arc = d3.svg.arc()
        .innerRadius(vis.innerRadius)
        .outerRadius(vis.outerRadius);
    vis.wrangleData();
}

PieChart.prototype.wrangleData = function() {
    var vis = this;
    vis.displayData = vis.data;
    vis.displayData = vis.data.map(function(d) {
        return d.value
    });
    vis.updateVis();
};

PieChart.prototype.updateVis = function() {
    var vis = this;
    vis.g = vis.svg.selectAll(".arc")
        .data(vis.pie(vis.displayData))
        .enter()
        .append("g")
        .attr("class", "arc");
    vis.g.append("path")
        .attr("d", vis.arc)
        .style("fill", function(d, index) {
            return vis.color(index);
        });
    vis.g.append("text")
        .attr("transform", function(d) {
            return "translate(" + vis.arc.centroid(d) + ")";
        })
        .attr("text-anchor", "middle")
        .attr("fill", "#fff")
        .attr("class", "head5")
        .text(function(d) {
            return d.value
        });
}
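`d3.layout.pie()` converts the mapped values into proportional arc angles before `d3.svg.arc` draws them. The arithmetic behind that conversion can be sketched as a standalone function (`toArcs` is a hypothetical helper; unlike D3's layout, it does not apply D3's default descending sort):

```javascript
// Proportional start/end angles (in radians) for a pie chart,
// analogous to what d3.layout.pie() produces for PieChart's values.
function toArcs(values) {
  var total = values.reduce(function(s, v) { return s + v; }, 0);
  var angle = 0;
  return values.map(function(v) {
    var start = angle;
    angle += (v / total) * 2 * Math.PI; // each slice spans value/total of the circle
    return { value: v, startAngle: start, endAngle: angle };
  });
}
```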
Appendix G  Event Handler - Top View
Timeline.js
Time = function(_parentElement, _data) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.displayData = this.data;
    this.initVis();
}

/*
 * Initialize area chart with brushing component
 */
Time.prototype.initVis = function() {
    var vis = this; // read about the this
    vis.margin = { top: 10, right: 30, bottom: 50, left: 10 };
    var areaWidth = $("#Time-1").width();
    if (areaWidth > 400) {
        vis.width = $("#Time-1").width() - vis.margin.left - vis.margin.right;
    } else {
        vis.width = 400 - vis.margin.left - vis.margin.right;
    }
    vis.height = 80 - vis.margin.top - vis.margin.bottom;
    vis.svg = d3.select("#" + vis.parentElement).append("svg")
        .attr("width", vis.width + vis.margin.left + vis.margin.right)
        .attr("height", vis.height + vis.margin.top + vis.margin.bottom)
        .append("g")
        .attr("transform", "translate(" + vis.margin.left + "," + vis.margin.top + ")");
    vis.x = d3.time.scale()
        .range([0, vis.width])
        .domain(d3.extent(vis.displayData, function(d) {
            return ((d.date));
        }));
    vis.xAxis = d3.svg.axis()
        .scale(vis.x)
        .orient("bottom");
    vis.brush = d3.svg.brush()
        .x(vis.x)
        .on("brush", brushed);
    vis.svg.append("g")
        .attr("class", "x brush")
        .call(vis.brush)
        .selectAll("rect")
        .attr("y", -7)
        .attr("height", vis.height + 7);
    vis.svg.append("g")
        .attr("class", "x-axis axis")
        .attr("transform", "translate(0," + vis.height + ")")
        .call(vis.xAxis);
    vis.svg.append("defs").append("clipPath")
        .attr("id", "clip")
        .append("rect")
        .attr("width", vis.width)
        .attr("height", vis.height);
}
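The brush above invokes `brushed()` in main.js, which narrows each Plan's time domain to the brushed extent and refilters the scatter points. That filter step, isolated from D3 and the DOM, can be sketched as (the `filterByExtent` helper is illustrative, not from the thesis code):

```javascript
// Keep only records whose field value falls strictly inside the
// brushed extent [lo, hi], mirroring the date filter that
// Plan.wrangleData applies after brushed() updates the z domain.
function filterByExtent(records, field, extent) {
  return records.filter(function(d) {
    return d[field] > extent[0] && d[field] < extent[1];
  });
}
```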
Appendix H  Event Handler - Frontal View
measure.js
Measure = function(_parentElement, _data) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.displayData = this.data;
    this.initVis();
}

Measure.prototype.initVis = function() {
    var vis = this; // read about the this
    vis.margin = { top: 0, right: 30, bottom: 50, left: 10 };
    var areaWidth = $("#Measure-1").width();
    if (areaWidth > 400) {
        vis.width = $("#Measure-1").width() - vis.margin.left - vis.margin.right;
    } else {
        vis.width = 400 - vis.margin.left - vis.margin.right;
    }
    vis.height = 80 - vis.margin.top - vis.margin.bottom;
    vis.svg = d3.select("#" + vis.parentElement).append("svg")
        .attr("width", vis.width + vis.margin.left + vis.margin.right)
        .attr("height", vis.height + vis.margin.top + vis.margin.bottom)
        .append("g")
        .attr("transform", "translate(" + vis.margin.left + "," + vis.margin.top + ")");
    vis.x = d3.scale.linear()
        .range([0, vis.width])
        .domain(d3.extent(vis.displayData, function(d) {
            return ((d.N));
        }));
    vis.xAxis = d3.svg.axis()
        .scale(vis.x)
        .orient("bottom");
    vis.brush = d3.svg.brush()
        .x(vis.x)
        .on("brush", brushedMeasure);
    vis.svg.append("g")
        .attr("class", "x brush")
        .call(vis.brush)
        .selectAll("rect")
        .attr("y", -7)
        .attr("height", vis.height + 7);
    vis.svg.append("g")
        .attr("class", "x-axis axis")
        .attr("transform", "translate(0," + vis.height + ")")
        .call(vis.xAxis);
    vis.svg.append("defs").append("clipPath")
        .attr("id", "clip")
        .append("rect")
        .attr("width", vis.width)
        .attr("height", vis.height);
}
Appendix I  Iconic Index
viewPercent.js
var margin = { top: 40, right: 10, bottom: 60, left: 60 };
var width = 960 - margin.left - margin.right,
    height = 300 - margin.top - margin.bottom;
var svg = d3.select("#chart-area").append("svg")
    .attr("width", width + margin.left + margin.right)
    .attr("height", height + margin.top + margin.bottom)
    .append("g")
    .attr("transform", "translate(" + margin.left + "," + margin.top + ")");
var x = d3.scale.ordinal()
    .rangeRoundBands([0, width], .4);
var y = d3.scale.linear()
    .range([height, 0]);
var xAxis = d3.svg.axis()
    .scale(x)
    .orient("bottom");
var yAxis = d3.svg.axis()
    .scale(y)
    .orient("left");
var xAxisGroup = svg.append("g")
    .attr("class", "x-axis axis")
    .attr("transform", "translate(0," + height + ")");
var yAxisGroup = svg.append("g")
    .attr("class", "y-axis axis");
var data;
loadData();

function loadData() {
    d3.csv("data/configs.csv", function(error, csv) {
        csv.forEach(function(d) {
            d.year = +d.year;
            d.percent = +parseFloat(d.percent).toFixed(2) * 100; // parseFloat(d.X).toFixed(digit)
            d.instagram = +d.instagram;
        });

        // Store csv data in global variable
        data = csv;
        console.log(data)
        updateVisualization();
    });
}
d3.select("#ranking-type").on("change", updateVisualization);

function updateVisualization() {
    var change = d3.select("#ranking-type").property("value");
    data.sort(function(a, b) {
        return b[change] - a[change];
    });
    x.domain(data.map(function(d) {
        return d.title;
    }));
    y.domain([d3.min(data, function(d) {
        return d[change]
    }), d3.max(data, function(d) {
        return d[change]
    })]);
    var updatedRect = svg.selectAll("rect")
        .data(data)
    var tip = d3.tip()
        .attr('class', 'd3-tip')
        .offset([0, 0])
        .html(function(d) {
            return "<strong></strong>" + '<div>' + (d[change]) + '</div>';
        })
    svg.call(tip);
    updatedRect.enter().append("rect");
    updatedRect.transition()
        .attr("class", "rec")
        .attr("fill", "rgba(255, 195, 0, 1)")
        .attr("x", function(d) {
            return x(d["title"]);
        })
        .attr("y", function(d) {
            return y(d[change]);
        })
        .attr("width", x.rangeBand())
        .attr("height", function(d) {
            return height - y(d[change]);
        });
    svg.select(".x-axis")
        .transition()
        .attr("transform", "translate(0," + height + ")")
        .call(xAxis);
    svg.select(".y-axis")
        .call(yAxis);
    updatedRect
        .on('mouseover', tip.show)
        .on('mouseout', tip.hide);
    updatedRect.exit().remove();
}
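One caveat in the `loadData` conversion above: `+parseFloat(d.percent).toFixed(2) * 100` rounds to two decimals as a string first and then multiplies, which can reintroduce floating-point noise (0.07 * 100 does not evaluate to exactly 7 in IEEE-754 doubles). A safer ordering, sketched with a hypothetical `toPercent` helper, scales first and rounds last:

```javascript
// Convert a CSV fraction string (e.g. "0.07") to a whole-number
// percent without string-rounding artifacts: scale, then round.
function toPercent(raw) {
  return Math.round(parseFloat(raw) * 100);
}
```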
Appendix J  Map
stationMap.js
Map = function(_parentElement, _data) {
    this.parentElement = _parentElement;
    this.data = _data;
    this.displayData = [];
    this.initVis();
}

Map.prototype.initVis = function() {
    var vis = this;
    vis.map = L.map(vis.parentElement).setView([41.893359, 12.482802], 3);
    L.Icon.Default.imagePath = 'img/';
    vis.capmap = L.tileLayer(
        'https://api.mapbox.com/styles/v1/mapbox/dark-v9/tiles/256/{z}/{x}/{y}?' +
        'access_token=pk.eyJ1IjoiemxpYW8iLCJhIjoiY2l2eDV2NGJ2MDFncDJ0cHM5ZTJ1NDB1bSJ9.BiUu91Qd1Z2eWl7sBKYKrA', {
            minZoom: 0,
            maxZoom: 18,
        }).addTo(vis.map);
    for (i = 0; i < vis.data.length; i++) {
        vis.circle = L.circle(
            [+vis.data[i].latitude, +vis.data[i].longitude], 50000, {
                color: '#FFC300',
                fillColor: '#FFC300',
                fillOpacity: 1
            })
            .bindPopup(vis.data[i].title)
            .addTo(vis.map)
    }
    vis.popup = L.popup();

    // Note: the original listing referenced a bare `popup` variable here,
    // which is undefined in this scope; vis.popup is the intended object.
    function onMapClick(e) {
        vis.popup
            .setLatLng(e.latlng)
            .setContent("You clicked the map at " + e.latlng.toString())
            .openOn(vis.map);
    }
    vis.map.on('click', onMapClick);
}
Appendix K  Style
style.css
/* my CSS STYLES */
/* [The original 371-line, five-column listing was garbled in extraction and is
   not recoverable line-for-line. The recoverable rule groups are: base
   typography (body, .title-name, .subtitle-name, .title-numbers,
   .title-numbers-name, .head4-.head6, .intro-text); intro and background
   (#preintro, #byline, #introtitle, #introSection, .intro-section, #bg,
   #bg img, .noBackground, .noBackgroundInverse); map (#mapid); tooltips and
   labels (.tooltip-title, .labels, .d3-tip, .d3-tip.n:after); navbar and affix
   (.navbar, .affix, .affix ~ .container-fluid); the preloader (#preloader img,
   .loadercontainer, .block with @keyframes wave); @keyframes pulse_animation
   with .pulse; axis and brush styling (.axis path, .axis line, .axis text,
   .axisText, .brush, .brush .extent, .line); modal and padding helpers
   (.modal, .pad, .pad2); credits (#creditsSection, .credits, .creditsmiddle,
   plus a max-width: 1300px media query); and highlight/link colors
   (mark, a, btn:hover). */
Appendix L  Alignment
overlay.css
.imagePlans img {
    z-index: 0;
    width: 100%;
    margin: 10px;
    opacity: 1;
}

.imageElevs img {
    z-index: 0;
    width: 100%;
    margin: 10px;
    opacity: 1;
}

.Times .measures {
    padding-top: 30px;
    height: 120px;
    z-index: 30;
}

.pies {
    z-index: 20;
}

#imagePlan-1 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.6) rotate(-15deg) translateX(3px) translateY(65px);
}

#imageElev-1 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.65) rotate(0deg) translateX(-25px) translateY(-30px);
}

#imagePlan-2 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.5) rotate(-0deg) translateX(15px) translateY(45px);
}

#imageElev-2 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.65) rotate(0deg) translateX(-50px) translateY(100px);
}

#imagePlan-3 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.81) rotate(-.5deg) translateX(10px) translateY(45px);
}

#imageElev-3 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1) rotate(0deg) translateX(20px) translateY(0px);
}

#imagePlan-4 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.65) rotate(-0deg) translateX(18px) translateY(50px);
}

#imageElev-4 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.5) rotate(0deg) translateX(-45px) translateY(20px);
}

#imagePlan-5 svg {
    /*savo*/
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.0) rotate(-0deg) translateX(15px) translateY(100px);
}

#imageElev-5 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.4) rotate(5deg) translateX(35px) translateY(50px);
}

#imagePlan-6 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(2.2) rotate(-0deg) translateX(24px) translateY(35px);
}

#imageElev-6 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.7) rotate(0deg) translateX(-20px) translateY(20px);
}

#imagePlan-7 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.55) rotate(-0deg) translateX(38px) translateY(93px);
}

#imageElev-7 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.4) rotate(0deg) translateX(20px) translateY(25px);
}

#imagePlan-8 svg {
    z-index: 1;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(1.5) rotate(-0deg) translateX(0px) translateY(50px);
}

#imageElev-8 svg {
    z-index: 0;
    position: absolute;
    top: 0;
    opacity: 1;
    transform: scale(2.1) rotate(0deg) translateX(-50px) translateY(90px);
}

/************************************************/
/* camera and subjects */
/************************************************/

.camera:hover {
    stroke: rgba(0, 0, 255, 0);
    stroke-width: 0px;
    fill: rgba(0, 255, 0, 1);
}

.camera {
    stroke: rgba(255, 255, 255, 1);
    stroke-width: 0.5px;
    fill: rgba(255, 195, 0, .8);
}

.subject {
    stroke: rgba(0, 0, 0, 1);
    stroke-width: 0.2px;
    fill: rgba(22, 160, 133, .5);
}

.subject:hover {
    stroke: rgba(255, 87, 51, 1);
    stroke-width: 0.1px;
    fill: rgba(255, 87, 51, 1);
    fill-opacity: 1
}
Appendix M  DATA
campCam.json
1 [ 2014” }, { “IMG”: “14504819_180 673280_n.jpg”, “X”: 3.36122096268, “Z”: 2 { “IMG”: “14547592_3 157535725502_12864929903526 -2.09618032518, “Y”: -15.807905602, “date”: “03-1724823171218627_7345188846973 54336_n.jpg”, “X”: 1.07146166839, “Z”: 2014” }, { “IMG”: “14350586_180 222912_n.jpg”, “X”: -0.522900058402, “Y”: -12.5383258618, “date”: “06-20- 8707566015324_42651675244101 -0.0498555475535, “Y”: 0.319565822892, “Z”: 2014” }, { “IMG”: “14705153_1100 632_n.jpg”, “X”: 0.0792231051697, “Z”: -4.45579722371, “date”: “01-05- 468153402560_696571917917749 -0.929863706305, “Y”: -0.664674942873, “date”: 2014” }, { “IMG”: “14566630_352 248_n.jpg”, “X”: 2.54628443293, “Z”: “06-11-2015” }, { “IMG”: “145832 654431739164_175710584130922 0.424942947803, “Y”: -21.4350328985, “date”: “04-2547_985068561621337_75421536 0864_n.jpg”, “X”: -0.0177404382394, “Z”: 2012” }, { “IMG”: “14474485_337 5487099904_n.jpg”, “X”: -0.338521749763, “Y”: -2.33454703548, “date”: “01-25- 927303219608_12032201243083 -1.15156426078, “Y”: 0.519557926322, “Z”: 2016” }, { “IMG”: “14716369_1118 40736_n.jpg”, “X”: 0.0230747936005, “Z”: -6.04202589995, “date”: “02-17- 217801598857_499267715691288-1.14659149712, “Y”: -1.05364269569, “date”: “12-02- 2011” }, { “IMG”: “14487201_606 9856_n.jpg”, “X”: -0.32114773005, “Z”: 2016” }, { “IMG”: “14487306_247 916609517027_30373812007809 0.395978064719, “Y”: 1.97118475699, “date”: “02-07016352362857_14145393099098 8432_n.jpg”, “X”: -0.0286745299955, “Z”: 2011” }, { “IMG”: “14482703_880 03008_n.jpg”, “X”: -0.377585876448, “Y”: -2.26491671665, “date”: “04-26- 496502081070_8233284983096 -0.591279875707, “Y”: 0.559065510271, “Z”: 2016” }, { “IMG”: “14564990_115 082432_n.jpg”, “X”: 0.075536323373, “Z”: -6.52366132809, “date”: “07-15- 5526757847011_34676313363641 0.319803342964, “Y”: -0.102040058429, “date”: 2016” }, { “IMG”: “14449075_184 79456_n.jpg”, “X”: 0.177143064555, “Z”: “06-21-2014” }, { “IMG”: “145500 0291642857247_1943401709925 -0.417541379111, “Y”: 4.6282063727, 
[Data listing: a JSON array of Instagram photo records exported from the Structure From Motion reconstruction. Each record takes the form { "IMG": "<image filename>_n.jpg", "X": <float>, "Y": <float>, "Z": <float>, "date": "MM-DD-YYYY" }, pairing each downloaded photograph with its reconstructed camera position and posting date.]
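A record set in this schema can be consumed with a few lines of Python. The sketch below is illustrative only: the filenames and coordinate values are placeholder data, not entries from the appendix, and the helper names (`load_records`, `group_by_year`) are assumptions rather than part of the thesis pipeline. It parses the `date` field and tallies photo records per year, the kind of aggregation used to chart posting frequency over time.

```python
import json
from datetime import datetime

# Placeholder records following the appendix schema
# (filenames and coordinates are invented for illustration).
records_json = """
[
  { "IMG": "sample_0001_n.jpg", "X": 0.4437, "Y": -0.7001, "Z": -5.4492, "date": "06-02-2012" },
  { "IMG": "sample_0002_n.jpg", "X": -0.0061, "Y": 0.1276, "Z": -4.6637, "date": "12-17-2014" },
  { "IMG": "sample_0003_n.jpg", "X": 0.2944, "Y": -0.0380, "Z": -4.9635, "date": "11-05-2011" }
]
"""

def load_records(text):
    """Parse the JSON array and attach a datetime to each record."""
    records = json.loads(text)
    for r in records:
        r["parsed_date"] = datetime.strptime(r["date"], "%m-%d-%Y")
    return records

def group_by_year(records):
    """Count photo records per year of posting."""
    counts = {}
    for r in records:
        year = r["parsed_date"].year
        counts[year] = counts.get(year, 0) + 1
    return counts

records = load_records(records_json)
by_year = group_by_year(records)
```

The same loop could instead bin records by the reconstructed `X`/`Y`/`Z` position to relate viewpoints to the model geometry.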
Appendix N  Final Review Presentation