ADAM HEISSERER DESIGN PORTFOLIO
Solar Tracking Facade
Passive Sun Tracking with Thermally Active Materials

The following prototypes are dynamic sun-tracking systems that rely only on thermally active materials to operate. This eliminates the need for external power and makes solar energy both the object and the actuator of the system. They were developed as part of the University of Texas at Arlington Digital Architecture Research Consortium, and later as part of the Lake Flato Research and Development Program.

Static sun shading solutions are effective for the north and south facades, but there is no optimal static solution for the east and west. Photovoltaics are becoming more efficient and widely used every year, but a limiting factor is the timing imbalance between mid-day peak energy production and morning and evening peak demand. The solution to both the daylighting problem and the energy problem is a dynamic system that rotates to face the sun at any hour of the day. Several dynamic solutions exist, but their benefits are partially offset by the technical complexity, material cost, or operating cost of the system.
Prototype 1.0

The first prototypes were built with thermally active bimetal. The bimetal is composed of two layers of metal that expand at different rates when heated, so it bends when exposed to sunlight. The metal fins remain unbent and parallel at room temperature, and curl into a more horizontal position when heated. This allows for more solar heat gain from low-angle morning or winter sun when temperatures are typically colder, while preventing solar heat gain from high-angle mid-day or summer sun when temperatures are typically warmer. The metal fins sit within a double-glazed system, with an uninsulated single pane on the exterior that allows outdoor heat to actuate the system, and an insulated IGU on the interior side.
Prototype 1: Before heating
Prototype 1: After heating
Prototype 2.0

The following prototypes improved the rotational ability of the system by using bimetal coils, rather than bimetal fins, to actuate the system. This allows the fins, now a non-bending component, to be made from thin-gauge aluminum cut with a CNC mill. They are rotated by a wire suspended from the coil at the top of each column of fins.
Prototype 3.0

In this iteration, the wire is wound continuously from a disk attached to the center of the bimetal coil to a disk attached to the center of the aluminum fin. With a higher gear ratio, the fins rotate more actively and consistently. The aluminum fins are cut flat, and then pressed in a wooden mold to create indentations along the center of the fin. The indentations provide rigidity and allow the fin to rotate to a more vertical position without colliding with the vertical support.
Prototype 4.0

This prototype excludes any wire or vertical support in order to decrease points of failure and increase visual transparency. One end of a central axis is adhered to the glass itself, creating a thermal bridge to the bimetal coil that directly rotates the fin when heated. The drawbacks of this system are the difficulty of calibrating the desired rotation to the temperature, and the misalignment of the fins.
Prototype 2
1. Fins are CNC cut from a single sheet of aluminum.
2. Each fin is pressed in a wooden mold to provide rigidity.
3. The resulting profile is stronger and accommodates the vertical supports.
Prototype 5.0

This prototype uses thermally active plastic rather than metal. Two types of plastic with different thermal expansion coefficients are adhered to each other to create a composite that bends when heated. Polyethylene, the thermally active material, is black to absorb heat from sunlight. The combination of thin plastics is readily available and inexpensive. It also bends much more actively than bimetal, although it produces less force. Because the bi-plastic is produced by hand, it can be adhered either flat or on a curved radius at room temperature. This provides the design flexibility to make plastic that can either bend or unbend when heated.

Two strips of the bi-plastic are fixed on either side of a central rudder. When one side is heated by sunlight, the plastic on that side bends inward, pushing the rudder and rotating the entire frame and surface toward the sun. The surface of the panel was made with laser-cut pieces that hold a paper surface and rotate on a central wire. This prototype is capable of about 90 degrees of rotation.
Mobile Mapping
Mapping the Human Experience of Spaces

Mobile Mapping is the practice of recording someone’s experience of a space with mobile devices and sensors, and then creating a map based on the collected data. How do people move through spaces? What do they look at? What unseen characteristics of the space do they experience? This is an experiment in recording someone’s position in space while recording environmental data, and then combining the two into a map of the space.

Mobile phones and cameras can record position, either through GPS or through photogrammetry. These easily available positioning devices are paired with other sensors such as fitness watches or air quality meters to map environments based on an individual’s experience. This pairing of devices allows any designer to gather data about the built environment at a very granular level. The following is a series of post-occupancy evaluations that map air quality, sound, or an occupant’s heart rate. The data was gathered with mobile phone cameras and off-the-shelf sensors, and then mapped with Grasshopper.
Mobile Positioning

GPS is suitable for recording position in large outdoor spaces, but other spaces require a different technique. Photogrammetry is the process of constructing a 3D model from photographs, or from frames of a video. This makes it possible to model a space and record movements through it by recording video on a camera or phone. The photogrammetry process is done with COLMAP, a Structure-from-Motion software that creates a 3D point cloud model from video frames.
Mobile Data Collection

While the camera or phone is recording position, other devices and sensors are recording environmental data with a timestamp. I can record light levels, air quality, biometric data, or visual data from the video itself. This is done with any combination of Arduino sensors, fitness watches, or other off-the-shelf sensors. The timestamped data can then be synchronized with the timestamped position to visualize the data in 3D space. This example is a visualization of heart rate data logged while walking through the Confluence Park Pavilion in San Antonio.
Recording heart rate data while walking through the Confluence Park Pavilion.
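The synchronization step above can be sketched in a few lines. This is a hypothetical illustration, not the actual Grasshopper pipeline: the timestamps, positions, and heart rate values are invented, and each sensor sample is simply paired with the camera position whose timestamp is closest.

```python
# Hypothetical sketch: pair timestamped sensor readings with
# timestamped camera positions by nearest timestamp. Data below
# is illustrative, not actual logger output.

def nearest_position(t, positions):
    """Return the (x, y, z) position whose timestamp is closest to t."""
    return min(positions, key=lambda p: abs(p[0] - t))[1]

def synchronize(sensor_samples, positions):
    """Pair each (timestamp, value) sensor sample with a position."""
    return [(value, nearest_position(t, positions))
            for t, value in sensor_samples]

# Example: heart rate samples and camera positions on a shared clock.
positions = [(0.0, (0.0, 0.0, 0.0)), (1.0, (1.0, 0.0, 0.0)),
             (2.0, (2.0, 0.5, 0.0))]
heart_rate = [(0.1, 72), (1.9, 85)]

print(synchronize(heart_rate, positions))
# [(72, (0.0, 0.0, 0.0)), (85, (2.0, 0.5, 0.0))]
```

In practice the two clocks drift, so the recordings are aligned to a shared start event before matching timestamps.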
Object Detection

Object detection adds another layer of qualitative mapping to the space. Several video frames can be sampled and processed with an object detection algorithm, such as YOLOv4. The detected objects can then be mapped onto the location where the video frame was captured. Converting the subjective experience of object recognition into real spatial data allows for the comparison of different spaces, and the potential to identify correlations between the objects present in a space and the biometric experience of the space.
Post-Occupancy Evaluation

The atrium of Lake Flato’s Austin Central Library was tested for indoor air quality. Positioning was done with a pair of GoPro cameras. A combination of sensors on an Arduino microcontroller and other off-the-shelf data-logging air quality sensors was used to collect temperature, humidity, carbon dioxide, PM2.5, PM10, VOCs, illuminance, sound, and the heart rate of the surveyor. These air quality data points are mapped onto the nearest point in the point cloud and colored with a gradient to visualize the properties of the space.

Object detection mapped onto the Confluence Park Pavilion
Various perspectives of air quality mapping at the Austin Central Library atrium
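The gradient mapping described above reduces to a nearest-neighbor lookup plus a linear color ramp. The sketch below is an illustrative stand-in for the Grasshopper implementation; the CO2 value range and sample positions are invented.

```python
# Illustrative sketch: each point-cloud point takes its color from the
# nearest air quality sample, interpolated on a blue-to-red gradient.

def lerp_color(value, lo, hi):
    """Map value in [lo, hi] to an RGB gradient from blue to red."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return (int(255 * t), 0, int(255 * (1 - t)))

def dist2(a, b):
    """Squared distance between two 3D points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def color_cloud(cloud_points, samples, lo, hi):
    """samples: list of (position, value). Returns one RGB per point."""
    colors = []
    for p in cloud_points:
        _, value = min(samples, key=lambda s: dist2(p, s[0]))
        colors.append(lerp_color(value, lo, hi))
    return colors

samples = [((0, 0, 0), 400.0), ((10, 0, 0), 1200.0)]  # e.g. CO2 in ppm
cloud = [(1, 0, 0), (9, 1, 0)]
print(color_cloud(cloud, samples, 400.0, 1200.0))
# [(0, 0, 255), (255, 0, 0)]
```

For large point clouds, the brute-force nearest-sample search would be replaced with a spatial index such as a k-d tree.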
Filming Methods

These spaces were recorded with a mobile phone camera, but the collection process can be improved by using a camera with a wider angle of view. One or two GoPro action cameras with a wide fisheye lens are a good substitute for a mobile phone camera. There is also the potential to attach cameras and sensors to UAVs or vehicles, or to use crowd-sourced footage found online. The image to the left is a reconstruction from online footage of a small racing drone flying through Confluence Park. The yellow line indicates the drone’s flight path.
Visualization Methods

Some maps were visualized by overlaying a heat map on a drawing, and others by inserting the data within the 3D point cloud. Those images were created by color-coding the 3D point cloud itself to take on a color gradient relative to the nearest data collection point. The 3D point cloud is only a by-product of the photogrammetry process, but it can be rendered and densified to create a compelling 3D model. Visual photogrammetry produces point clouds based on visually unique objects, unlike LIDAR-based point clouds that are sampled evenly.
Mapping heart rate, illuminance, and temperature onto a point cloud of Confluence Park.
Quality Views Tool

This is a Grasshopper script that identifies the areas of a building with views to the outside. A views calculation is often done manually, which can be extremely time-consuming for large projects with unconventional floor plans. Automating this process saves time and provides a way to quantify the quality of views in a building beyond the typical pass/fail criteria of LEED certification. The following images are an analysis of the first and second floors of Lake Flato’s Austin Central Library, which achieved outside views for 95% of the regularly occupied area. The library prioritized wide-angle views to Lady Bird Lake to the south and the downtown skyline to the east.

A line is drawn connecting the corner of each window to all visible wall vertices. When a line is tangential to a vertex, it is projected onto the wall beyond. The endpoints of each of these lines are then joined into a single polyline that designates the view shed from the corner of each window. This process is repeated for each window corner, and all view sheds are compiled with transparency, revealing a gradient of the quality of views.
A line is drawn connecting the corner of each window to all visible wall vertices. When a line is tangential to a vertex, it is projected onto the wall beyond.
The endpoints of each of these lines are then joined into a single polyline that designates the view shed from the corner of each window.
This process is repeated for each window corner and all view sheds are compiled with transparency, revealing a gradient of the quality of views.
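At the core of the steps above is a visibility test. A minimal 2D sketch of that test, assuming walls are given as line segments in plan, is shown below: a sample point is "in view" of a window corner if the sight line between them crosses no wall. The geometry here is illustrative; the actual tool runs in Grasshopper.

```python
# Hypothetical 2D visibility test: a point sees the window corner
# if the segment between them properly crosses no wall segment.

def ccw(a, b, c):
    """Signed area test: >0 if a, b, c turn counterclockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if open segments p1-p2 and p3-p4 properly intersect."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible(window_corner, point, walls):
    """A point is visible if no wall blocks the sight line."""
    return not any(segments_cross(window_corner, point, w0, w1)
                   for w0, w1 in walls)

# One interior wall between the window corner and the far sample point.
walls = [((5.0, -1.0), (5.0, 1.0))]
corner = (0.0, 0.0)
print(visible(corner, (3.0, 0.0), walls))   # True  (in front of the wall)
print(visible(corner, (8.0, 0.0), walls))   # False (wall blocks the line)
```

Repeating this test over a grid of sample points and accumulating the results per window corner yields the compiled view-shed gradient.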
Energy Calendars

Each of these radial calendars represents one year of energy use for a different building. Each dot represents one hour of energy use, with a larger dot indicating more energy consumption. The color of each dot represents the outside temperature, from 40°F in blue to 90°F in red. Midnight is on the inside of the circle, with 11 pm on the outside. The calendars read clockwise, with January at the top.

Lake Flato monitors the energy consumption of several buildings as a means to learn more about how a house or building actually performs when occupied. The actual energy use is often radically different from the energy model produced during design. This detailed energy data helps the architect and building owner identify sources of unexpected energy use and remediate the issues that arise. When this data is displayed graphically, it reveals energy patterns and anomalies that are not apparent from the numerical data alone. It also shows that every building is heavily influenced by those who operate it.
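The calendar geometry described above can be sketched directly: each hour of the year becomes a dot at a polar coordinate, with midnight on the inside, 11 pm on the outside, January at the top, and the year reading clockwise. The radii below are arbitrary illustration values.

```python
# Sketch of the radial calendar layout (hypothetical radii).
import math

INNER, OUTER = 1.0, 2.0  # inner ring (midnight) and outer ring (11 pm)

def dot_position(day_of_year, hour, days_in_year=365):
    """Return (x, y) for one hour's dot on the radial calendar."""
    radius = INNER + (OUTER - INNER) * hour / 23.0
    # Angle 0 at the top (January 1), increasing clockwise.
    angle = 2 * math.pi * (day_of_year - 1) / days_in_year
    x = radius * math.sin(angle)  # clockwise: spring lands to the right
    y = radius * math.cos(angle)
    return (x, y)

# Midnight on January 1 sits at the top of the inner ring:
print(dot_position(1, 0))    # (0.0, 1.0)
# 11 pm on January 1 sits directly above it on the outer ring:
print(dot_position(1, 23))   # (0.0, 2.0)
```

Dot size and color are then driven by the hour's energy use and outdoor temperature, respectively.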
This net-zero part-time home in Fort Davis, Texas has the lowest energy use of all monitored projects. The calendar clearly indicates when the house is occupied for a few days at a time.
This house in Long Island, NY has a distinctive color range, indicating the colder outdoor temperatures compared to the Texas projects. This is a part-time home, mainly used in the spring and late summer of this year.
This is one floor of an office building in San Antonio. The eMonitor was installed in February, and a clear 5-day work week pattern is visible.
This is a 100-year-old house in San Antonio. Energy use is low, and a regular pattern of occupancy is visible. Heating and cooling loads stand out compared to the shoulder seasons.
This is data from the same house, showing only appliances.
This is data from the same house, showing only lighting loads.
This is the energy use for a small pavilion in North Texas. The highest energy use comes from well pumps running during rain events. In this way, this calendar is roughly the inverse of its solar energy production calendar.
Rather than energy use, this calendar shows energy production from a solar array at a small pavilion in North Texas. Cloudy days interrupt solar production, and there are more daylight hours in the summer.
This is a net-zero energy house in San Antonio. This calendar indicates when the house is consuming more energy than it is producing.
Mobile Meshes
Drawing with Photogrammetry

This is a series of drawings that were made with photogrammetry software and a Grasshopper script. The results are usually unpredictable, and vary depending on what happened to be picked up by the camera on that day. The light falling on objects and the direction of the camera influence the point cloud that gets constructed. The whole process usually takes less than an hour for each image, making for an interesting snapshot of what a space was like at a particular time.

The resulting meshes can be thought of as what the camera and computer “see” when they try to make sense of the space. Computer vision picks up on the most visually dense and recognizable pixels in the image, rather than uniform planes and fields of color. That’s why the images tend to be made of edges and other high-contrast or detailed objects like street lights, foliage, and ornamental facades.
Seattle - Bainbridge Island Ferry
Scottish Rite Cathedral, San Antonio, Texas
Confluence Park, San Antonio, Texas
Houston and Alamo, San Antonio, Texas
Capitol Hill, Seattle, Washington
Broadway and Travis, San Antonio, Texas
Small Church in Anchorage, Alaska
Life Graphs

Since 2014, I’ve kept a daily log of my life. I record aspects of productivity such as sleep, diet, exercise, socializing, music, work, and learning, as well as 58 feelings or affective states. This is a way of holding myself accountable and keeping track of what I accomplish each day. It has become a valuable tool for memory and self-analysis, and an excellent reference for reflecting on the past and planning for the future.
This is an unlabeled record of my productivity in 2018. Each column represents a day, each row represents a separate productivity category. A brighter square represents a better daily performance in that category.
Data-Driven Classification of Feelings

Rather than thinking of all feelings as being either notionally positive or negative, they can be classified based on their association with different states or behaviors. Happiness and productivity are probably the most meaningful classifiers of feelings. There is a clear positive correlation between the two, and the 58 feelings fall into distinct clusters. In the top right are positive feelings associated with productivity (accomplishment, focus, responsibility). The cluster just to the left (in orange) contains positive feelings associated with happiness more than productivity (love, content, joy, fun). In the middle (green) there is a cluster of neutral feelings that are not strongly associated with happiness or productivity. These tend to be physical states (drunk, pain) or challenging affective states (fear, wonder, panic). Toward the bottom is a cluster (light blue) of mildly negative feelings (frustration, stress, misery, tired). These are redeemable negatives. At the bottom left (dark blue) are the irredeemable negatives (loneliness, boredom, despair, depression).

Feelings can be classified in countless other ways, for example by caloric and water intake, shown on the right. With few exceptions, high calorie intake is strongly correlated with negative feelings.
Generative Design
Context-Based Massing Options with Optimization Algorithms

This is a two-part experiment in generative design. In part one, a site is scanned with drone photography and converted into a digital model. In part two, this site data will be used to evaluate the success of several building design options generated with a parametric model.

First, a drone photographs a vacant site from several different angles. COLMAP, a photogrammetry software, is used to construct a 3D point cloud from the photos (pictured here). After importing the point cloud into Rhino, each point is classified into one of a few categories based on its color and position. These categories include roads, buildings, and vegetation. This creates a data-rich digital environment that will be used later in the design evaluation process. Any generative design process is only as good as its understanding of the surrounding context.
Estimated locations of roads and paths.
Estimated locations of trees and vegetation.
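The classification step above can be sketched as a simple rule per point, assuming each point carries an (x, y, z) position and an (r, g, b) color. The thresholds and category rules below are illustrative stand-ins for the actual Grasshopper logic.

```python
# Hypothetical point classifier: color and height rules are invented.

def classify(point, color, ground_z=0.0):
    """Assign a point-cloud point to a site category."""
    x, y, z = point
    r, g, b = color
    if g > r and g > b:                  # green-dominant points
        return "vegetation"
    if z < ground_z + 0.5:               # near the ground plane
        return "road"
    return "building"                    # everything tall and non-green

cloud = [
    ((2.0, 3.0, 0.1), (95, 90, 88)),     # gray, near ground
    ((5.0, 1.0, 2.5), (60, 140, 50)),    # green, elevated
    ((8.0, 4.0, 6.0), (180, 170, 160)),  # light, tall
]
print([classify(p, c) for p, c in cloud])
# ['road', 'vegetation', 'building']
```

A real classifier would also account for local geometry (e.g. planarity), since color alone confuses shadows, pavement, and roofs.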
Generating a Footprint

After scanning the site, several different building footprint options are generated. Each is a different variation of the same parametric Grasshopper script. In this example, each footprint is composed of three overlapping rectangles and one subtractive rectangle. This is a relatively simple script that manipulates 16 variables to produce a wide range of options. In this system of data-driven idea generation, the architect’s role is to write the parameters that affect the family of design options, rather than to design static, bespoke options.

Building footprint options generated with the same 16-variable parametric model.
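A minimal stand-in for the parametric footprint can be written with axis-aligned rectangles: three additive rectangles overlap and one subtractive rectangle is cut out, so the 16 variables are the four parameters of each of the four rectangles. This sketch is hypothetical (the real script works with free geometry in Grasshopper) and estimates area by grid sampling.

```python
# Hypothetical footprint: rectangles are (x, y, width, height).

def inside(px, py, rect):
    """Half-open containment test for an axis-aligned rectangle."""
    x, y, w, h = rect
    return x <= px < x + w and y <= py < y + h

def footprint_area(additive, subtractive, extent=100, step=1.0):
    """Approximate footprint area (union minus cutout) on a grid."""
    count = 0
    steps = int(extent / step)
    for i in range(steps):
        for j in range(steps):
            px, py = i * step, j * step
            if any(inside(px, py, r) for r in additive) and \
               not inside(px, py, subtractive):
                count += 1
    return count * step * step

additive = [(0, 0, 20, 10), (10, 0, 10, 20), (0, 10, 10, 10)]
subtractive = (5, 5, 5, 5)
print(footprint_area(additive, subtractive))  # 375.0
```

Varying the 16 rectangle parameters produces the family of footprint options that the evaluation step then scores.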
Evaluating the Footprint

The next step is to create a process for evaluating the performance of the generated building footprints based on space-making, building area, perimeter-to-area ratio, solar orientation, access to roads and pathways, quality of views, and obstacle avoidance. Each performance test is done with a different algorithm, and the tests run simultaneously. The scores are averaged into one number for each footprint.
Performance tests for space-making (above) and proximity to roads and paths (below).
Public Potential
Daylight Potential
Quality View Potential

Optimization Algorithms
An optimization algorithm is a cyclical process that tests several solutions until an optimum solution is found. The algorithm constructs a predictive model, or fitness function, of how successful solutions are across a range of variables. There are several types of optimization algorithms, each with their own strengths. In this case, I used RBFMOpt, which excels at arriving at good results in as few iterations as possible. The optimization algorithm is stopped when enough satisfactory options are available, or when the designer is no longer willing to wait. In this example, a few hundred options were generated in a few minutes.
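The cycle can be sketched with a toy example. RBFMOpt builds a surrogate (radial basis function) model to choose each next sample; the sketch below substitutes plain random search and an invented two-variable fitness function, purely to illustrate the generate, evaluate, and keep-the-best loop.

```python
# Schematic optimization loop (hypothetical fitness function).
import random

def fitness(x, y):
    """Invented stand-in for the averaged footprint score; best at (3, 1)."""
    return -((x - 3.0) ** 2 + (y - 1.0) ** 2)

def optimize(iterations=2000, seed=42):
    """Random search: generate candidates, evaluate, keep the best."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        x, y = rng.uniform(0, 10), rng.uniform(0, 10)  # sample variables
        score = fitness(x, y)
        if score > best_score:
            best, best_score = (x, y), score
    return best, best_score

best, score = optimize()
print(best, score)  # a point near (3, 1), score near 0
```

A surrogate-based method like RBFMOpt reaches comparable results in far fewer evaluations, which matters when each evaluation is a full Grasshopper analysis.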
Useful Application of Optimization Algorithms

This process is only practical for well-constrained design problems with relatively few variables and clear performance criteria, such as optimizing fenestration for daylight performance, or other focused tasks. I consider optimization algorithms to be useful for these narrow use cases, but they would need to be paired with more flexible and generalizable methods, such as neural networks, in order to achieve a more realistic generative design process.
Public, daylight, and quality views potential across building footprint.
Generative Adversarial Networks

Drawing with Machine Learning
These images were made using StyleGAN2, a generative adversarial network (GAN) introduced by Nvidia. The model was trained for about 24 hours on a data set of 500 architectural drawings that included several different perspectives, authors, and rendering styles. The results tend to resemble section drawings, with columns and beams, light and shadow, and a distinct horizon line. As is, the model produces striking compositions, if not very substantive drawings. For training future models, it may be best to separate drawings by type, with separate models for perspectives, axonometrics, plans, and so on. Perspective drawings are poorly represented compared to orthographic views. This is a first step in exploring the limits of neural networks for producing designs and drawings.