MArch 5th year architecture portfolio: β city: 50 years of human-machine collaboration



Fenland, England

21st century: a machine age. Inspired by Trevor Paglen's invisible images, the project focuses on the current extraordinary time-frame in which the relationship between the sentient and the autonomous becomes ambiguous. It starts in a machine era, as society marches towards the fourth industrial revolution. "The City of Tomorrow" by Carlo Ratti and Matthew Claudel points to the core question of this project: cities have evolved for thousands of years, but in a linear form. The arrival of digital layers will inevitably bring radical changes to our cities. Ratti and Claudel call this futurecraft, a new approach to envisioning cities. Everyone on planet Earth, professionals and the public alike, contributes to and is in control of shaping our future, with the machine's assistance. Yet what form of human-machine collaboration do we need to achieve our dream? With that, the project questions the future of the built environment under the omnipresence of autonomous agents such as self-driving cars and home robots. It is a 50-year operation originating from the Architecture Experiment Lab, which carries out deep data analysis and experiments on the current built environment. New forms of the built environment will be generated through the collaboration of human and machine. However, the project also questions the level of consciousness involved in machine techniques, and how or when human designers should be involved in the process. Set in the Fenland, an ideal site for the experimental events thanks to its abundance of natural energy resources, its vastness and its nakedness in technology, the operation provides a real-life safe site in the hinterland that allows autonomous agents to learn from and analyse their surroundings. Over a three-phase blueprint, the site will become a safe place for the inhabitation of both sentient and autonomous agents. Above all, the operation promises that humans will continue to lead planet Earth with the aid of artificial intelligence and machines.


“We [humans] no longer look at images; images look at us. They [machines] no longer simply represent things, but actively intervene in everyday life... We need to unlearn how to see like humans.” Trevor Paglen


“Cities are, by definition, plural, public, and productive. They are created by society itself, and they function as culture’s petri dish for progress... Designers produce mutations, some of which will grow, evolve, and develop into tangible artifacts that cause global change.”

Ratti and Claudel, 2016


HUMAN-MACHINE COLLABORATION | GAUGAN INPUT

The images are machine-readable segmentation maps. Data is assigned to each colour so that the map can be converted into an image pleasing to human eyes. A large amount of training data is needed to create the desired output.
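A minimal sketch of how such a segmentation map might be prepared in code, assuming a GauGAN-style generator that expects one colour per semantic class; the palette, class list and file names below are illustrative, not the project's actual colour coding:

# Build a machine-readable segmentation map: each colour stands for one class.
# Illustrative palette only; a real GauGAN/SPADE model defines its own colours.
import numpy as np
from PIL import Image

PALETTE = {
    "sky":   (134, 193, 46),   # hypothetical colour codes
    "tree":  (29, 60, 40),
    "road":  (110, 110, 110),
    "water": (40, 60, 150),
}

def blank_map(width, height, fill="sky"):
    seg = np.zeros((height, width, 3), dtype=np.uint8)
    seg[:, :] = PALETTE[fill]
    return seg

def paint(seg, cls, x0, y0, x1, y1):
    # "Assign data to a colour" by painting a region with its class colour.
    seg[y0:y1, x0:x1] = PALETTE[cls]
    return seg

seg = blank_map(512, 512)
paint(seg, "water", 0, 380, 512, 512)   # canal along the bottom of the frame
paint(seg, "road", 0, 330, 512, 380)    # road above it
Image.fromarray(seg).save("segmentation_input.png")  # fed to the generator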


HUMAN-MACHINE COLLABORATION | GAUGAN OUTPUT

The output images are then post-processed by a human to create a form of visual representation that is readable by humans. Through this process, humans come to understand more about the new visual language created by machines.
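A small sketch of what this human post-processing pass might look like, assuming simple tone, colour and sharpness adjustments to the raw output; the file names and enhancement factors are illustrative:

from PIL import Image, ImageEnhance

raw = Image.open("gaugan_output.png").convert("RGB")

# Human post-processing: nudge the machine's raw render towards a reading
# that human eyes find legible - tone, colour and edge definition.
img = ImageEnhance.Contrast(raw).enhance(1.2)
img = ImageEnhance.Color(img).enhance(1.1)
img = ImageEnhance.Sharpness(img).enhance(1.5)
img.save("gaugan_output_postprocessed.png")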


MASS SURVEILLANCE | PRIVACY

Today, society has become a modern Panopticon under omnipresent machine vision. Machine vision is widely adopted across different fields, which invisibly spreads fear among humans. The machine eye becomes the enemy of humans owing to a lack of understanding of this new technology.


A NEW BUILT ENVIRONMENT

The presence of machine vision is non-negligible, as if it were a future "smart dust". A new built environment is needed to provide a safe place for human and machine inhabitation.


Factors and considerations for a future of humans and machines


[Infographic: training a human driver vs a self-driving car]
Human driver: 45 hours of driving lessons + 20 hours of practice + past experience in a car; serious car crashes.
Self-driving car: 275 miles (25 years) of training + as much useful footage; 2x the car crashes per million miles driven on public roads compared with human driving, though they tend to be less severe.

Efficiency of regulation

Industry investment

EV charging station

Population living on site

Government funds

Innovation

4G coverage

Digital skills

Data-sharing environment

Cybersecurity

Quality of roads

Acceptance

Future orientation

Clouds, IoT and AI

Readiness

Society technology use

DEFINE THE PROBLEMS | AUTONOMOUS | ACCIDENT

There needs to be a public site where autonomous agents can be tested; a site that encompasses real lives. Autonomous agents are designed to improve humans' daily lives, yet recent reports of accidents caused by autonomous agents suggest otherwise. While the technology is advancing and preparing machines to serve mankind, is the built environment ready to welcome the new co-habitant?


DEFINE THE PROBLEMS | UNKNOWN | FEARS | BALANCE

Humans should not be afraid. Correct channels are needed to spread knowledge of machines and artificial intelligence to the general public. As machine vision spreads through society, the general public has lost confidence in the technology due to a lack of understanding. Meanwhile, most of these technologies are used as surveillance tools; humans see no benefit in them and are starting to hide away. Humans and machines need to develop a balanced relationship to march towards a better future.


SELF DRIVING CAR | PROPOSED LOCATIONS FOR CURRENT OR FUTURE TESTING IN PUBLIC, CONTROLLED, AND VIRTUAL ENVIRONMENTS


K-CITY, KOREA 79-ACRE

MCITY, MICHIGAN 32-ACRE

SELF DRIVING CAR | CONTROLLED ENVIRONMENT
1 Handling & stability track
2 Off-road test track
3 Testing facilities
4 Narrow road
5 Rural road
6 Motorway
7 Urban area
8 Pedestrian-centric area
9 Autonomous parking facilities
10 Tunnel
11 Roundabout
12 Asphalt/concrete road
13 Unpaved road
14 Road safety features test ground (steering pad, universal road, low-friction track)
15 Tree-lined street
16 Road construction noise
17 Open test area
18 Metal bridge deck
19 Building 2D facade
20 Ramp metering
21 Software-controlled traffic
22 Bike lanes
23 Open test area
24 Roundabout
25 Blind curve
26 Intersection
27 Brick paver road
28 Tunnel


SELF DRIVING CAR | AFFECTING FACTORS


project note
FRI 03/01/2021
Research: Machine vision across different dimensions.

1D: A space built up of lines, in which each object possesses a unique code as its identity.
2D: A space that appears flat, conveyed via patterns or electronic screens.
3D: A space with depth and distance.
4D: A space with the presence of time and speed. It is a collaboration between human and machine vision.
5D: A space with a new visual culture. The machine starts to think and perceive.

project note
TUES 14/01/2021
Research: Image recognition - how do machines perceive and interpret images?

IMAGE PROCESSING THROUGH AI [CONVOLUTIONAL NEURAL NETWORK (CNN)]

SCENARIOS WHEN MACHINE VISION MAY NOT OBTAIN AN ACCURATE RESULT:
Scenario 1: Alteration of colour
Scenario 2: Implementing noise
Scenario 3: Object displacement
[Figure: test sheets with classifier labels - Straw, Wig, Zebra; Road, Agriculture land, Carpet]

Research Test Record by: Michelle & Eve
Outer-space: 53% confidence
Glass: 98% confidence

AI | MACHINE LEARNING | IMAGE RECOGNITION
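To make Scenario 2 concrete, here is a hedged sketch that implements noise against a stock pretrained classifier and watches the label and confidence drift; the model choice, noise levels and input file are assumptions, not the set-up of the original Michelle & Eve test records:

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained ImageNet classifier stands in for the machine eye.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
labels = weights.meta["categories"]
prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def top1(img_tensor):
    # Return the classifier's best guess and its confidence.
    with torch.no_grad():
        probs = model(img_tensor.unsqueeze(0)).softmax(dim=1)
    conf, idx = probs.max(dim=1)
    return labels[idx.item()], conf.item()

img = prep(Image.open("field_photo.jpg").convert("RGB"))  # hypothetical input
print("clean:", top1(img))

# Scenario 2: implement noise at increasing levels and watch the label drift.
for level in (0.05, 0.5, 0.9):
    noisy = (img + level * torch.rand_like(img)).clamp(0, 1)
    print(f"noise {level:.0%}:", top1(noisy))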


project note

project note

FRI 17/01/2021 Test: CNN through Google Deep Dream Generator. The Google Deep Dream Generator uses a convolutional neural network to process an image and enhance its features. Deep Dream is trained on ImageNet, whose training set is dominated by images of cats and dogs, giving results a dream-like, hallucinogenic appearance.

MON 20/01/2021 Test

01: Basic
Input → Output
Neural network layers: 5, 10, 15, 20
Increasing the number of neural network layers increases the level of detail in the generated features.

02: Noise alteration
Noise level: 5%, 50%, 90%

03: Pixel value alteration
Pixel value: original, bright spot, black and white

04: Two inputs

Research: Image generation through machine learning [Generative adversarial networks (GANs)]

GANs architecture

GANs pit two AI systems (a generator and a discriminator) against each other to improve the quality of their results. GANs create new content that resembles the training data; this is a type of unsupervised machine learning. Reference works: Mario Klingemann - Memories of Passersby I; Trevor Paglen - Adversarially Evolved Hallucinations

AI | MACHINE LEARNING | GOOGLE DEEP DREAM The Google Deep Dream Generator uses a convolutional neural network to process an image and enhance its features. Deep Dream is trained on ImageNet, whose training set is dominated by images of cats and dogs, giving results a dream-like, hallucinogenic appearance.
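A condensed sketch of the Deep Dream mechanic behind these tests: gradient ascent on the activations of a chosen layer, so the network exaggerates whatever features that layer already responds to. The layer, step size and iteration count are illustrative assumptions:

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained GoogLeNet stands in for the "dreaming" network.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

activations = {}
def hook(module, inputs, output):
    activations["target"] = output

# Deeper layers produce more elaborate dream features (cf. the 5/10/15/20 layer test).
model.inception4c.register_forward_hook(hook)

img = T.ToTensor()(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):
    model(img)
    # Gradient ascent: push the image to excite the chosen layer even more,
    # so the network amplifies whatever it already "sees" there.
    loss = activations["target"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0)).save("dream.jpg")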



GANS ARCHITECTURE

WORKS DONE THROUGH GANS: Mario Klingemann - Memories of Passersby I

Trevor Paglen - Adversarially Evolved Hallucinations

AI | MACHINE LEARNING | GENERATIVE ADVERSARIAL NETWORKS (GANS) GANs pit two AI systems (a generator and a discriminator) against each other to improve the quality of their results. GANs create new content that resembles the training data; this is a type of unsupervised machine learning.
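The adversarial game can be stated in a few lines of code. This is a toy-sized sketch of the generator/discriminator loop described above, with invented architectures and hyperparameters rather than those of any referenced artwork:

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())   # generator: noise -> fake image
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))       # discriminator: image -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: batch of flattened training images in [-1, 1]
    b = real.size(0)
    # 1) The discriminator learns to tell training data from generated data.
    fake = G(torch.randn(b, 64)).detach()
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) The generator learns to fool the discriminator.
    fake = G(torch.randn(b, 64))
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Each call pits the two networks against each other; over many steps the
# generator's outputs come to resemble the training data.
train_step(torch.rand(16, 784) * 2 - 1)  # dummy batch for illustration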


Test image: The Aftermath of the First Smart War, Adversarially Evolved Hallucinations, 2017, Trevor Paglen

Material gathering: The GANs model used to produce the image draws on images of desertification, birth defects, burning oil fields, depleted uranium and other effects of the first Gulf War.

Output:

GANS IMAGE VS HUMAN REGENERATION This test is a reverse interpretation of the image generated by the machine. It compares the differences between the machine image and an image familiar to human eyes.


CONCEPTUALISE GANS MACHINE LEARNING PROCESS


CONCEPT IMAGE

OUTPUT IMAGE

IMAGE ANALYSIS

2D input

Sky_53.9% Canal_2.6% Road_7.2% Built_0.3%

Vision projection

3D REALISATION

COLLABORATION WITH MACHINE | EXPERIMENTAL SCENE GENERATION
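Class-share figures like those above (Sky_53.9%, Canal_2.6%, ...) can be read straight off a segmentation map by counting pixels per class colour. A sketch, reusing the idea of an illustrative palette; the colours and file name are assumptions:

import numpy as np
from PIL import Image

PALETTE = {"sky": (134, 193, 46), "canal": (40, 60, 150),
           "road": (110, 110, 110), "built": (200, 30, 30)}  # hypothetical colours

seg = np.asarray(Image.open("scene_segmentation.png").convert("RGB"))
total = seg.shape[0] * seg.shape[1]

# Count how many pixels carry each class colour and report its share of the frame.
for name, colour in PALETTE.items():
    share = (seg == colour).all(axis=-1).sum() / total
    print(f"{name.capitalize()}_{share:.1%}")   # e.g. Sky_53.9%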


CONCEPT IMAGE

OUTPUT IMAGE

IMAGE ANALYSIS

2D input

Sky_53.9% Tree_20.2% Farmland_10.2% Canal_2.6% Built_0.3%

Vision projection

3D REALISATION

COLLABORATION WITH MACHINE | EXPERIMENTAL SCENE GENERATION


[Infographic: data and energy]
A 15-megawatt data centre uses up to 360,000 gallons of water a day to cool down the server floor.
Data centres consume 3% of global electricity and contribute 2% of global carbon emissions.

ENERGY FORECAST

DATA | ENERGY

6 layers of physical protection:
01 - Property boundary
02 - Security perimeter
03 - Building access
04 - Security operations room
05 - Data centre floor
06 - Disk destroy room

MACHINE LEARNING IN BUILDING MANAGEMENT SYSTEM
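The heading above stands alone in the portfolio, so the following is only a hedged sketch of what a machine-learning building management system might do: predict server-floor cooling demand from sensor readings so cooling can be scheduled ahead of load peaks. The feature set, figures and model choice are invented for illustration:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical sensor log: [outside temp degC, server load %, humidity %, hour of day]
X = np.array([[14, 60, 70, 9], [22, 85, 55, 14], [8, 40, 80, 3], [18, 75, 60, 11]])
y = np.array([210, 340, 150, 290])  # cooling water demand, gallons/hour (invented)

model = GradientBoostingRegressor().fit(X, y)

# The BMS queries the model with a forecast state and schedules cooling in advance.
forecast = np.array([[20, 90, 58, 15]])
print(f"predicted demand: {model.predict(forecast)[0]:.0f} gal/h")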


1940

1990

2000

2010

Exploration of AI

AI as a service tool

AI as an interacting tool

AI as an interacting and interpretation tool

In 1956, the term artificial intelligence was coined at the Dartmouth conference. Throughout this period of exploration, AI winters occurred around 1973 and 1988, allowing scientists to reset the direction and aims of AI research. The first autonomous robots, Grey Walter's Machina Speculatrix "tortoises", were created, and the first autonomous vehicle, Navlab, followed in 1986. Meanwhile, Alan Turing proposed the Turing test to assess intelligent behaviour in machines.

Intelligent machines moved towards serving humans, leading to the rise of online services and entertainment. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. Robots also began replacing humans in Amazon fulfilment centres, and the first home robot, the Roomba, was invented.

The launch of Facebook in 2004 started the rise of online connection with the rest of the world, followed by social media such as WhatsApp, Snapchat and WeChat. Robots began to interact with people through speech recognition, e-payment systems and pop-star live concerts. Facebook's face recognition system and ImageNet drove AI into a new era.

AI deep learning, embodied in generative adversarial networks (GANs) and other machine learning frameworks, suggests that AI has the capability to carry out self-learning. Google DeepMind's AlphaGo beat a human champion at the game of Go; move 37 in particular portrayed the creativity within AI.

KEY QUESTION

HOW WILL AUTONOMOUS AGENTS AND AI BRING CHANGES TO THE FUTURE BUILT ENVIRONMENT AND HUMANITY?

2025-2075

Future


Click here to view film [00:08:50]


TYPE OF VISIONS

This proposal draws from two different perspectives to dive into the world of machine vision through human eyes. Participants will be able to sense the change of view by detecting the edge of the display board. The two views are:

SENTIENT AGENT

AUTONOMOUS AGENT

Contents labelled with the marker can be viewed through the Artivive app, which can be downloaded from the Apple App Store or the Google Play Store.


DAY 00001

CHAPTER 1 2025 | EARLY EXPLORATION: MACHINE TAKE-OVER Day 00001 Curious
Dawn of the machine age: humans now co-exist on planet Earth with the machine, an agent that views its surroundings differently. What makes sense to machines does not make sense at all to humans. Action should be taken to evolve a language that can link the two.


DAY 00029

VIDEO RECORD: First documentation and thoughts toward Fenland District.

data collection → big data (inputs) → image processing → recreation → output

Click here to view full film


DAY 00046

WHAT IS FENLAND DISTRICT?

data collection → big data (inputs) → image processing → recreation → output


DAY 00050

INITIAL SITE SURVEY AND SELECTION

data collection → big data (inputs) → image processing → recreation → output


DAY 00134

GNF001: WHAT IS F.E.N.S.?

data collection → big data (inputs) → image processing → recreation → output


DAY 00143

GNF001: ANALYSING F.E.N.S.

data collection → big data (inputs) → image processing → recreation → output


DAY 00216

GNF001: ANALYSING INPUT IMAGES.

data collection → big data (inputs) → image processing → recreation → output


DAY 00241

GNF001: RECREATION OF F.E.N.S. THROUGH GAUGAN.

data collection → big data (inputs) → image processing → recreation → output


DAY 00249

GNF001: RECREATION OF F.E.N.S. THROUGH GAUGAN.

data collection → big data (inputs) → image processing → recreation → output


DAY 00260

GNF001: GENERATING RESULT.

data collection → big data (inputs) → image processing → recreation → output


DAY 00260

GNF001: OUTPUT.

data collection → big data (inputs) → image processing → recreation → output


DAY 00260

GNF001: OUTPUT.

data collection → big data (inputs) → image processing → recreation → output


FL1: ENCODING THE SITE.

52°30’31.6”N 0°08’42.5”E | Scale 1:10000 on A1


DAY 00299

FL2: DATA COLLECTION FOR DRAWING FL2.

Legend
01 City grid
02 Extruded volume (black)
03 Subtracted volume (grey)
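A sketch of the legend's grid operations in code: start from a flat city grid, extrude volumes on some cells and subtract volumes on others. Grid size and selection rules are invented for illustration:

import numpy as np

rng = np.random.default_rng(seed=1)
grid = np.zeros((20, 20))              # 01 city grid: flat ground heights

extrude = rng.random((20, 20)) > 0.7   # 02 cells that gain built volume (black)
subtract = rng.random((20, 20)) > 0.85 # 03 cells carved below ground (grey)

grid[extrude] += rng.integers(1, 6, size=extrude.sum())   # storeys added
grid[subtract] -= rng.integers(1, 3, size=subtract.sum()) # volume removed

# Positive = extruded massing, negative = subtracted voids, zero = untouched grid.
print(f"extruded cells: {extrude.sum()}, subtracted: {subtract.sum()}")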


DAY 00311

FL2: DATA ANALYSING FOR DRAWING FL2.


DAY 00320

FL2 OUTPUT: β MASTERPLAN

Legend
Building | Empty volume | Main building | Data centre | Energy zone | Existing building | Main road | Road
Scale 1:10000 on A1


DAY 00332

50-YEAR PLAN: DEFINING β.

What is β?

[Chart: composition of β]
Sky: 46.0%
Road: 2.1%
Data centre (SPINE): 0.4%
Power resources (SPINE): 2.6%
Artificial tree: 19.2%
Edifice: 29.7%
F.E.N.S. categories (nature, farmland, canal, tree, road, built) map onto β categories (artificial tree, edifice, data centre, power resources and testbeds 1-3: SPADES, HEART, DIAMONDS).


DAY 00333

50-YEAR PLAN: MASTERPLAN STRATEGY.


DAY 00334

50-YEAR PLAN: RELATIONSHIP DIAGRAM.

SECTION

RELATIONSHIP


DAY 00335

50-YEAR PLAN: TIMELINE AND COLOUR.


DAY 00336

50-YEAR PLAN: SPATIAL ORGANIZATION CHART AND CODE SYSTEM.

[Chart legend: Infrastructure | Home]


DAY 00340

50-YEAR PLAN: DETAILING MASTERPLAN - TESTBED 1

OUTPUT 2 Overlay output 1 as noise on the initial masterplan to generate detailed components.

INPUT CORPUS [Text-to-image generation]
image 1: floors
image 2: road plan view
image 3: error on the building plan
image 4: building section
image 5: wall
image 6: error in building plan
image 7: floor
image 8: corridor

OUTPUT 1 [Machine-generated testbed 1]
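As a hedged sketch of this corpus-to-testbed step with present-day tools: generate one image per text prompt with an off-the-shelf text-to-image model, then overlay a result onto the masterplan as noise. The checkpoint, blend weight and file names are assumptions, not the project's original tool chain:

import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

prompts = ["floors", "road plan view", "error on the building plan",
           "building section", "wall", "error in building plan",
           "floor", "corridor"]

for i, prompt in enumerate(prompts, 1):
    pipe(prompt).images[0].save(f"image_{i}.png")   # OUTPUT 1 tiles

# OUTPUT 2: overlay a generated output onto the initial masterplan as noise.
masterplan = Image.open("masterplan.png").convert("RGB")
output1 = Image.open("image_1.png").convert("RGB").resize(masterplan.size)
Image.blend(masterplan, output1, alpha=0.35).save("testbed1_detail.png")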


DAY 00355

50-YEAR PLAN: DETAILING MASTERPLAN - TESTBED 2

OUTPUT 2 Overlay output 1 as noise on the initial masterplan to generate detailed components.

INPUT CORPUS [Text-to-image generation]
image 1: glitches within the building plan
image 2: glitches on a landscape
image 3: error on building
image 4: glitches within the building
image 5: window on the Manhattan grid
image 6: the triangular building on the Manhattan grid
image 7: windows within the building plan

OUTPUT 1 [Machine-generated testbed 2]


DAY 00363

50-YEAR PLAN: DETAILING MASTERPLAN - TESTBED 3

OUTPUT 2 Overlay output 1 as noise on the initial masterplan to generate detailed components.

INPUT CORPUS [Text-to-image generation]
image 1: nature
image 2: error on the linear building plan
image 3: building wall
image 4: anomaly
image 5: walls
image 6: stair

OUTPUT 1 [Machine-generated testbed 3]


DAY 00365

DATA COLLECTION FROM THE SURROUNDING TOWN: MANEA


DAY 00375

DETAIL INTERPRETATION: Wall [D01] OUTPUT:

5mm, 300mm, 100mm, 200mm, 5mm; layers upon layers, some giving a strong collision, some a gentle touch, and some hiccups in between. Ten seconds later comes a rise in degrees Celsius; I know I am at the other end.


DAY 00390

DETAIL INTERPRETATION: Stair [D02] OUTPUT:

The way we meet is rather distressing: manoeuvring at the edge of something, almost falling, taking a few trials to figure out the right way; the transition of depth, colour, height, width, as if a romantic encounter of things; I am lost in it.


DAY 00405

DETAIL INTERPRETATION: Floor [D03] OUTPUT:

Like déjà vu, a similar feeling to when I penetrated the first object, but different; the further I go, the more I feel the long-lost sensation of interacting with the ground; the particles cannot wait to cuddle me; I join the party of nature, dancing in it.


DAY 00425

DETAIL INTERPRETATION: Window [D04] OUTPUT: Reaching maximum lux level, my light sensors are distracted by the anomaly and go out of function; the white thing almost blinds me; all I can see is the frame that holds it up. Please tell me where I am.


DAY 00525

β OPERATION OUTPUT SET 1 | ARCHITECTURE EXPERIMENT LAB [1DE056]

The Architecture Experiment Lab is the ground zero of the β operation. It plays a fundamental role in experimenting with and generating suitable edifices under different machine vision requirements and conditions. The experiment is requisite because machine vision differs significantly from human perception. The lab is divided into three sections that carry out respective tests on facade materials and architectural structure. Data collected from the autonomous devices that manoeuvre around the city is sent to the data office for further analysis and testing under real-life conditions. Through this, the machine generates a set of architectural outputs suited to the machine environment. The outputs are also turned into physical and virtual 3D models, acting as study models for later research. The architecture across all three testbeds will be the outcome of this lab.


DAY 00525

525 days


DAY 00574

ARCHITECTURE EXPERIMENT LAB


DAY 00678


DAY 00891

EDIFICE TESTING AND GENERATION PROCESSES

1_ DATA INPUT

2_FACADE AND MATERIAL EXPERIMENT

3_STRUCTURAL EXPERIMENT

4_3D MODEL PRINTING

5_1:1 MODEL ASSEMBLY, RESEARCH AND REAL-LIFE TESTING


DAY 01235

GENERATING A SPACE

1DE056

MACHINE VISION OUTPUT:

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 5.0%
02_Lines: 10.0%
03_Anomaly: 75.0%
04_Normal: 10.0%
[Architecture elements]
01_Wall: 8.3%
02_Wall II: 39.5%
03_Structure: 32.3%
04_Detail structure: 5.3%
05_Floor: 8.5%
06_Opening: 6.1%
07_Circulation: 0.0%

DATA INPUT SET 2 [Segmentation map]
Wall | Wall II | Structure | Detail structure | Floor | Opening

DATA INPUT SET 3 [Plan and location]
=2;010203040506;3

DATA INPUT SET 4 [Scene set up]


HUMAN VISION

ARCHITECTURE EXPERIMENT LAB Date: 28/09/2028 [DE0456] 2;010203040506;3 Shot distance: approx. 1m


DAY 01286

β OPERATION OUTPUT SET 2 RESEARCH HUB [1D0456] | HUMAN ACTIVITY ZONE [1D06] | MACHINE WORKSHOP [1DE06]

RESEARCH HUB

HUMAN ACTIVITY ZONE

MACHINE WORKSHOP

The research hub is the gathering point for global research in artificial intelligence and machine learning. It houses researchers from different backgrounds (computer scientists, architects, urban planners, sociologists, ecologists and other professions) to study the existing problems between machine and human. Upon entrance, researchers are directed to the respective lifts to travel to their research bases. Open space is provided for group discussion and research; it is a place for collective brainstorming on the future built environment.

The human activity centre is created to reintroduce the machine to humans. It is a place to spread knowledge of machines and reduce human fear of the unknown agent. This is a crucial step in preparing humans for the future machine age.

The machine workshop is a space for hands-on experiments with the products created in the Architecture Experiment Lab and Research Hub. It also provides space to accommodate self-driving car repair and upgrade facilities.


DAY 01342

1342 days


DAY 01825

RESEARCH HUB


DAY 01843


DAY 01867

GENERATING A SPACE

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 5.0%
02_Lines: 10.0%
03_Anomaly: 75.0%
04_Normal: 10.0%
[Architecture elements]
01_Wall: 30.3%
02_Wall II: 18.7%
03_Structure: 8.9%
04_Detail structure: 0.8%
05_Floor: 6.2%
06_Opening: 21.1%
07_Circulation: 14.0%

DATA INPUT SET 2 [Segmentation map]
Wall | Wall II | Structure | Detail structure | Floor | Opening | Circulation

DATA INPUT SET 3 [Plan and location]
=2;0506;3

DATA INPUT SET 4 [Scene set up]

1D0456 MACHINE VISION OUTPUT:


HUMAN VISION

RESEARCH HUB | ENTRANCE Date: 31/01/2030 [DE0456] 2;0506;3 Shot distance: approx. 3m


DAY 01923

1923 days


DAY 02640

HUMAN ACTIVITY ZONE


DAY 02653


DAY 02674

GENERATING A SPACE

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 5.0%
02_Lines: 10.0%
03_Anomaly: 75.0%
04_Normal: 10.0%
[Architecture elements]
01_Wall: 14.7%
02_Wall II: 4.9%
03_Structure: 17.7%
04_Detail structure: 33.7%
05_Floor: 9.6%
06_Opening: 19.4%
07_Circulation: 0.0%

DATA INPUT SET 2 [Segmentation map]
Wall | Wall II | Structure | Detail structure | Floor | Opening

DATA INPUT SET 3 [Plan and location]
=0;040506;2

DATA INPUT SET 4 [Scene set up]

1D06 MACHINE VISION OUTPUT:


HUMAN VISION

HUMAN ACTIVITY ZONE | INTERACTION ZONE Date: 06/12/2032 [DE0456] 0;040506;2 Shot distance: approx. 3m


DAY 02920

2920 days


DAY 03015

MACHINE WORKSHOP


DAY 03276


DAY 03297

GENERATING A SPACE

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 5.0%
02_Lines: 10.0%
03_Anomaly: 75.0%
04_Normal: 10.0%
[Architecture elements]
01_Wall: 61.3%
02_Wall II: 10.0%
03_Structure: 11.4%
04_Detail structure: 11.2%
05_Floor: 5.9%
06_Opening: 0.0%
07_Circulation: 0.0%

DATA INPUT SET 2 [Segmentation map]
Wall | Wall II | Structure | Detail structure | Floor

DATA INPUT SET 3 [Plan and location]
=2;030405;4

DATA INPUT SET 4 [Scene set up]

1DE06 MACHINE VISION OUTPUT:


HUMAN VISION

MACHINE WORKSHOP | SELF-DRIVING CAR REPAIR ZONE Date: 31/10/2035 [DE0456] 2;030405;4 Shot distance: approx. 1m


DAY 03315

SUPPORTING FACILITIES: VERTICAL ACCESS


DAY 03387

GENERATING A SPACE

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 5.0%
02_Lines: 10.0%
03_Anomaly: 75.0%
04_Normal: 10.0%
[Architecture elements]
01_Wall: 1.0%
02_Wall II: 0.5%
03_Structure: 20.2%
04_Detail structure: 0.8%
05_Floor: 0.5%
06_Opening: 51.9%
07_Circulation: 25.1%

DATA INPUT SET 2 [Segmentation map]
Wall | Wall II | Structure | Detail structure | Floor | Opening | Circulation

DATA INPUT SET 3 [Plan and location]
=2;040506;4

DATA INPUT SET 4 [Scene set up]

003 MACHINE VISION OUTPUT:


HUMAN VISION

TESTBED 1 | VERTICAL ACCESS Date: 12/02/2034 [DE0456] 2;040506;4 Shot distance: approx. 10m


DAY 03413


DAY 03612

TESTBED 1 MASTERPLAN STRATEGY


DAY 03650

CHAPTER 2 2035 | MID EXPLORATION: HUMAN & MACHINE INTERACTION Day 03650 A little lost but hopeful
A new architecture system has been built by the machine with the input of human knowledge. Humans are the machine's fellow inventors but also its greatest obstacle. New chemistry is needed between the two to ensure the safety of this novel ecosystem. The site is no longer solely a working place; it is becoming a new paradise for the sentient and the autonomous. A huge experiment lies ahead, but it will be a brand-new way of living.


DAY 03660

β OPERATION OUTPUT SET 3 HOME [2EF067] | GARDEN [004]

HOME

GARDEN

“We will shift from living in a home to living with a home (Ratti and Claudel, 2016).”

This Home questions the form of living space in the presence of machines and artificial intelligence. What if the bed can perceive its user's routine? What if windows can detect light levels and transform themselves into the most suitable configuration? It is designed in the simplest cubic form, transformable into different spatial qualities and customised to its occupants.

The Garden is the first step towards a self-sustainable city. This vertical garden, set amid the Eden, becomes part of the functional artificial landscape within nature. It provides the main source of food, driven by the human and machine system. Its varied atmospheric conditions ensure that people from all countries can enjoy their local foods in this new city.


DAY 05700

5700 days


DAY 05761

HOME


DAY 05780

HOME CONCEPT


DAY 07665

7665 days


DAY 07703

GARDEN


DAY 07749

β OPERATION OUTPUT SET 4 POWER ZONE [002] | PICK-UP POINT [005]

POWER ZONE

PICK-UP POINT

Electricity is fundamental in this new ecosystem. The power zones located along the street will be the charging points for self-driving cars and other machines. Each also acts as an emergency office where humans can report machine errors.

The pick-up point is the modern bus station. It comes with wavelength releasers to interact with other machines. This intervention ensures human safety when engaging with self-driving cars at street level.


DAY 07793

SUPPORTING FACILITIES: POWER ZONE


DAY 07830

SUPPORTING FACILITIES: PICK-UP POINT


DAY 07899

GENERATING A SPACE

DATA INPUT SET 1
[Machine vision]
01_Point cloud: 2.0%
02_Lines: 20.0%
03_Anomaly: 38.0%
04_Normal: 40.0%
[β elements]
01_Edifice I: 0.4%
02_Artificial tree: 59.3%
03_Edifice II: 0.8%
04_Circulation: 14.6%
05_Nature: 24.9%

DATA INPUT SET 2 [Segmentation map]
Edifice I | Artificial tree | Edifice II | Circulation | Nature

DATA INPUT SET 3 [Plan and location]
=0;020304;2

DATA INPUT SET 4 [Scene set up]

Ef067 MACHINE VISION OUTPUT:


HUMAN VISION

TESTBED 2 | JUNCTION OF MACHINES AND HUMANS Date: 16/06/2048 [BC03456] 0;020304;2 Shot distance: AERIAL, approx. 30m


DAY 08210

TESTBED 2 MASTERPLAN STRATEGY


DAY 08395

β OPERATION OUTPUT SET 5 DATA CENTRE [EZCDE05671] | WATER TOWER [EZCDE05672] | WIND CATCHER [EZCDE0671] | SOLAR PANEL [EZCDE0672]

DATA CENTRE

Data is the new food of the century, and the data centre becomes a new architectural landscape. Given the Fenland's conditions, the water that once threatened the hinterland may become an advantage for the data farm. Instead of forming a banal landscape on the ground, the data farm submerges into pools of underground water.

WATER TOWER

Processed water will be used to maintain the temperature of the server floors while providing clean water to the city.

WIND CATCHER Energy captors

SOLAR PANEL Energy captors


DAY 08591

ENERGY ZONE MACHINE VISION OUTPUT: PLAN

UNDERWATER ELEVATION


DAY 08642


DAY 08951


DAY 10935

CHAPTER 3 2055 | LATE EXPLORATION: A NEW HOME FOR HUMANS Day 10953 Excited, confident
Dusk, thirty years since the β operation began. The operation is reaching the end of the tunnel. The city is now a self-sustaining paradise built through the collaboration of both agents. More spaces are created to turn the site into a human-familiar place. It is not a normal city; it is a kindergarten where humans relearn the way of seeing. Once testbed 3 is ready, the site will open for public inhabitation.


DAY 10950

β OPERATION OUTPUT SET 6 BUNKER [3C03] | CITY HALL [3C041] | AVS CENTRAL STATION [3C08]

AVS CENTRAL STATION

CATHEDRAL

CITY HALL

A human-machine encounter.

A new belief and norm.

A place of carnival and celebration.


DAY 10950

β OPERATION OUTPUT SET 6 EDUCATION HUB [3BC05] | CATHEDRAL [3C042] | MACHINE ARCHIVE AND MUSEUM [3C043]

EDUCATION HUB

BUNKER

MACHINE ARCHIVE AND MUSEUM

Not a normal school.

An escape from the machine world.

A machine graveyard.


DAY 11351

[AVS CENTRAL]


DAY 12045

[CATHEDRAL]


DAY 12410

[CITY HALL]


DAY 12697

[EDUCATION HUB]


DAY 13870

[BUNKER]


DAY 14965

[ARCHIVE AND MUSEUM]


DAY 16316

TESTBED 3 MASTERPLAN STRATEGY


DAY 16425

β CITY WELCOMING MANUAL


Testbed 1 09/08/2048 11:54 pm Shot distance: approx. 35km


Testbed 2 21/10/2053 12:13 pm Shot distance: approx. 35km


Testbed 3 23/11/2064 02:09 pm Shot distance: approx. 40km


VIEW FROM B1098 Date: 28/06/2056 [C045] Shot distance: STREET VIEW +1.8m, approx. 0.5km


TESTBED 1 | AERIAL VIEW Date: 06/08/2045 [DE0456] Shot distance: AERIAL, approx. 10m


TESTBED 1 | ON THE ARCHITECTURE EXPERIMENT STREET Date: 06/01/2038 [D05] Shot distance: STREET VIEW +1.5m, approx. 0.1km


TESTBED 2 | MEET AT THE JUNCTION OF HUMANS AND MACHINES Date: 01/04/2050 [F07] Shot distance: STREET VIEW +3m, approx. 0.1km


TESTBED 2 | EDEN Date: 31/05/2060 [E07] Shot distance: +10m, approx. 3m


ENERGY ZONE | UNDERWATER SERVER FARM Date: 26/04/2060 [D07] Shot distance: -30m, approx. 2m


ENERGY ZONE | THE POWER BANK Date: 14/02/2065 [D06] Shot distance: AERIAL, approx. 20m


TESTBED 3 | A NEW PARADISE Date: 04/09/2075 [BC03456] Shot distance: STREET VIEW +1.8m, approx. 0.1km


THE β CITY | DUSK Date: 31/12/2075


THANK YOU

END

