VR + AR
STUCKEMAN CENTER FOR DESIGN COMPUTING
ISBN 978-0-578-76850-2
SCDC Director
José Pinto Duarte

SCDC Advisory Committee
Felecia Davis, David Goldberg, Benay Gürsoy, Marc Miller, Rodney Allen Trice

Stuckeman School of Architecture and Landscape Architecture
Patricia Kucker, Director
Eliza Pennypacker, Landscape Architecture Department Head
Mehrdad Hadighi, Architecture Department Head
Phil Choo, Graphic Design Department Head

This Publication
Editor: Rodney Allen Trice
Book Design: Brian Reed
Cover Image Credit: Janejira Kalsmith
Symposium Participants: Jesse Schell, Tomás Dorta, Rodney Allen Trice

Open House and Symposium Organization: Rodney Allen Trice
SCDC Open House Poster Designs: Janejira Kalsmith
Administrative Support: Jamie Behers
Video and Photo: David Goldberg, Stephanie Swindle Thomas
Web Design (scdcflash2019.psu.edu): Janejira Kalsmith
Open House Exhibiting Instructors
Ali Memari, Anne Verplanck, Benay Gürsoy, Darla Lindberg, Doug Miller, Esther Obonyo, Felecia Davis, Guido Cervone, Hong Wu, James Cooper, José Pinto Duarte, Julian Wang, Lisa Iulo, Loukas Kalisperis, Marc Miller, Marcus Shaffer, Nicholas Meisel, Tim Simpson, Travis Flohr, Shadi Nazarian, Sven Bilén
Open House Exhibiting PhD Students
Ali Ghazvinian, Anja Wutte, Cemal Koray Bingöl, Danielle Oprean, Debora Verniz, Eduardo Costa, Elena Vazquez, Eric Mainzer, Fahimmeh Farhadi, Farzaneh Oghazian, Keunhyoung Park, Krystian Kwieciński, In Pun, Jimi Demi Ajayi, Julio Diarte, Karen Kuo, Mahyar Hadighi, Mina Rahimian, Mona Mizaiedamabi, Naveem Muthumanickam, Nastaran Tebyianian, Nate Watson, Negar Ashrafi, Özgüç Bertuğ Çapunaman, Paniz Farrokhsiar, Sam Dzwill, Sukanya Ranade, Zainab Hakanian
Contents

Preface, Patricia Kucker ... 1
Welcome, José Pinto Duarte ... 3
Introduction, Rodney Allen Trice ... 5

Symposium
How VR and AR Are Changing the World, Jesse Schell ... 9
Social VR in Design, Tomás Dorta ... 25

Open House: SCDC Affiliated Research Projects
BIM + Optimization for Additive Construction ... 41
Mars Habitat Design: NASA Centennial Challenge ... 43
Data-Driven Building Design: Using Multi-Objective Optimization ... 45
Advanced Computational Studio: The Rio Studio ... 47
Form-Finding Exploration of Architectural Knitted Textiles Behavior ... 49
Shape Grammar in Designing 3D Architectural Knitted Textile Forms ... 51
Computationally Assessing Pollinator Habitat Resiliency ... 53
P/Elastic Space ... 55
Robotic Apprentices ... 57
Material Characterization for 3D Printing Concrete: Evaluating the Relationship Between Deposition and Layer Quality in Large-Scale 3D Printing ... 59
Computing Stitches and Crocheting Geometry ... 61
Phototropic Fiber Composite Origami Structure: Arch 497 Architectural Responsive Fiber Composites ... 63
The Grammar of Late Period Funerary Monuments at Thebes ... 65
Using Grammars to Trace Architectural Hybridity in American Modernism: The Case of William Hajjar’s Single-Family Houses in State College, PA ... 67
Urban Form and Energy Demand: A Comprehensive Understanding Using Artificial Neural Networks ... 69
Tooling Cardboard for Smart Reuse: A Digital and Analog Workflow for Upcycling Waste Corrugated Cardboard as a Building Material ... 71
Concrete Printing System Design: Six-Axis Robotic Concrete Printing ... 73
Mycelium-Based Bio-Composites for Architecture: Assessing the Effects of Cultivation ... 75
Understanding the Genesis of Form in Brazilian Informal Settlements: Towards a Grammar-Based Approach for Planning Favela Extensions in Steep Terrains in Rio de Janeiro ... 77
Designing for Shape Change: 3D-Printed Shapes and Hydro-Responsive Material Transformations ... 79
Preface
PATRICIA KUCKER
Stuckeman School Interim Director and Teaching Professor of Architecture
Patricia Kucker, an alumna of the Penn State Architecture program who has more than 25 years of experience in higher education administration, is the interim director of the Stuckeman School. Kucker, who holds an Ed.D. in Higher Education, Leadership and Policy from Vanderbilt University, began her career as a licensed architect, entered the teaching ranks of architecture programs, and later moved into the broader realm of higher education administration in the arts and design.

She most recently served as the provost and vice president for academic affairs at the University of the Arts (UArts) in Philadelphia. In her position, she worked closely with the university president to translate his vision into actionable strategies. As the chief academic officer, she oversaw and managed academic program evaluations, curricular development, the faculty development grant program, the academic operating budgets, and the faculty promotion and reappointment process, among other duties. She also initiated the development of the first UArts Day, a campus-wide celebration designed to connect students across the university, expand connections to alumni and the city of Philadelphia, and strengthen the creative and professional skills of students.

Prior to her appointment at UArts, Kucker was the associate dean for faculty affairs and curriculum in the College of Design, Art, and Planning at the University of Cincinnati from 2010 to 2016. During this time, Kucker led the National Association of Schools of Art and Design 10-year accreditation process of 11 degree programs across the three schools within the college. She also led several initiatives that supported the development of summer camp offerings annually serving more than 300 high school students and, in particular, students from underrepresented groups.

Before becoming associate dean, Kucker was the graduate program director and associate director of the School of Architecture and Interior Design at Cincinnati for four years. She also served as the head of the Department of Architecture for three years prior. Kucker previously held full-time teaching positions in architecture at the University of Arkansas, University of Virginia, and the University of North Carolina at Charlotte. She was also an adjunct faculty member at Temple University, University of Pennsylvania, Philadelphia College of Textiles and Science, Penn State, and Rensselaer Polytechnic Institute.

Kucker was nationally elected secretary of the Association of Collegiate Schools of Architecture, serving from 2010 to 2012. She led accreditation teams to architecture programs for more than 10 years, served on the National Architectural Accrediting Board from 2012 to 2015, and was twice elected treasurer. In 2008, she received a College of Arts and Architecture Alumni Award.

A licensed architect in Pennsylvania and New Jersey, Kucker began her career with Geddes Brecher Qualls Cunningham Architects in Philadelphia. After a brief stint with another practice in the city, she started her own firm, Patricia Kucker AIA Architects, where she practiced for six years before setting her sights on higher education as a full-time profession. Kucker served on the inaugural advisory team of the Stuckeman School at the request of former College of Arts and Architecture Dean Bobbi Korner in 2009 and on the School’s Advisory Board from 2011 to 2017.
SCDC ‘19
Every day, faculty and students of the Stuckeman School imagine an improved future world. As designers in an educational institution, we commit our research and educational resources toward the betterment of the world around us. Our graduates will be the design professionals stewarding the natural and physical environment and will be arbiters of communication and influence for a just society. In the Stuckeman School we investigate the complex challenges posed by contemporary society and use our disciplinary knowledge to offer solutions and inspiration for design that matters.

The Stuckeman School established three research centers to develop new knowledge that informs the disciplines, leads industry practices, and helps local communities in need achieve their goals. While serving professional degree education, the Stuckeman School research centers create a vibrant multidisciplinary ecosystem that nourishes collaboration and forward-looking design professionals.

The subject of the 2019 flash symposium, “VR/AR,” was led by faculty in graphic design and identifies new modalities for designers to both imagine and interact with a future reality, today. Both of our symposium speakers challenge us to consider how we depict and experience reality through digital images. Jesse Schell’s talk begins with a timeline of the ubiquitous technology of television to help us understand that technological breakthroughs reach us over time. Schell offers us William Gibson’s words: “The future is here now; it’s just not evenly distributed.”

Unlike our passive experience with television, the virtual reality (VR) environment is a digital fabrication that the body and mind can sense and experience as if it were completely real. The body’s complete engagement, or immersion, is an essential component of virtual reality technology. As a result, VR technology has become an important, flexible, and low-risk training tool for airline pilots, surgeons, and the military, all from the vantage point of a nondescript classroom. Today a VR roller coaster ride might be available at the mall, and the VR environment allows designers and clients to experience site locations that are distant and unreachable, or that do not yet exist.

Augmented reality (AR), on the other hand, overlays dynamic images onto the digitally captured real environment. For example, there are now hundreds of AR apps available for our smartphones that allow us to augment our Snapchat images and phone calls. The AR environment is well known to TV football fans when a graphic down marker appears on the field, or to those of us on Zoom who can add a charming beret or dark sunglasses to our own camera image. In the AR environment, images and information can change in response to the viewer, resulting in a dynamic interplay of mixed realities.

Tomás Dorta’s work introduces us to “Social VR,” a virtual environment that allows designers and clients to engage collaboratively in real-time design decisions even though they may be far apart. Social VR is the designer’s collaborative environment, in which clients and consultants from all over the world can work together to evolve a design model. Visualization and communication through real experiences and mixed realities are incredibly powerful instruments of design that help us experience the future today.

The 2019 VR/AR symposium was sponsored by the faculty and students of the Stuckeman Center for Design Computing (SCDC) research lab. The hopeful and forward-looking research of the SCDC members is also included in this publication as a testament to our commitment to envisioning a future with design that matters.
Welcome
JOSÉ PINTO DUARTE
Stuckeman Chair in Design Innovation & Director of the Stuckeman Center for Design Computing, Penn State
José Duarte is the Stuckeman Chair in Design Innovation and director of the Stuckeman Center for Design Computing at Penn State. He is also a professor of architecture and landscape architecture, and an affiliate professor of architectural engineering and engineering design. After obtaining his doctoral degree from M.I.T., Duarte returned to Portugal where he became dean of the Lisbon School of Architecture. A former president of eCAADe, the European association for education and research in computer-aided design, his research interests are in the use of computation to support context-sensitive design at different scales. One of his areas of research is concerned with the use of digital technologies to design and build customized housing. With this goal in mind, he cofounded the Additive Construction Lab at Penn State, which focuses on the additive manufacturing of architectural structures using concrete. Recently, he co-edited the book Mass Customization and Design Democracy (Routledge, NY, 2019) and co-led the Penn State team that was awarded second place in the finals of the NASA 3D-Printed Mars Habitat Challenge.
The Stuckeman Center for Design Computing (SCDC) was created following the establishment of an endowment donated by the Stuckeman family, with the mission to support the integration of digital technologies in the Stuckeman School at Penn State. The center’s main goal is to be a hub for research in the design computing field, understood from a broad perspective. The focus is not on technology itself, but on strategically deploying technology to address contemporary societal issues. In this regard, the activity of the center is linked to the mission of the school and the college; it supports the nurturing of future architects, landscape architects, and graphic designers, first by empowering them with cutting-edge computing technology and then by enabling them to help advance that technology. The center has a strong commitment to social and environmental good and aligns its goals with the United Nations’ Sustainable Development Goals. The ambition is for the SCDC to become a point of reference for design computing research, both nationally and internationally.

The center’s activity cuts across the four research focus areas around which graduate education and research at the Stuckeman School are organized (Design Computing; Material Matters; Sustainability; and Culture, Society, Space) and seeks collaboration with faculty and students in all of them. Nonetheless, it is most closely articulated with design computing, which examines the impact of emerging digital technologies on the creative processes that shape our built environment. Design computing identifies critical knowledge and advanced skills in the use of digital technologies in architecture, landscape architecture, and graphic design. Projects can explore contemporary discourse on digital media and design and investigate how design computing can be productively applied in sustainable design, interdisciplinary collaboration, and fabrication.
Research is organized into the following thrusts: (1) high-performance built environments; (2) additive manufacturing at construction scale; (3) bio and smart materials; (4) decoding and recoding informal settlements; and (5) smart cities and communities. To support this research, the center encompasses the following laboratories: Immersive Environments (IEL), Remote Collaboration (CoLab), Advanced Geometric Modeling (AdGeom), Digital Fabrication (DigiFab), Computational Textiles (SoftLab), Form and Matter (ForMat Lab), and Additive Construction (AddCon Lab, run jointly with the College of Engineering).

The SCDC has several initiatives to improve the dissemination of its work to the academic community and to society at large. The SCDC Open House is one of them: a one-day event designed to communicate the value of the research taking place in the center through demonstrations, an exhibition, and talks on finished and ongoing projects. It is paired with the SCDC Flash Symposium, an event that takes place on the same day with the aim of showcasing to the school cutting-edge research being developed at other institutions, promoting cross-fertilization and collaboration.

This book documents the SCDC Open House and Flash Symposium that took place in 2019, which focused on virtual reality (VR) and augmented reality (AR). VR and AR have a decades-long tradition at the Stuckeman School that goes back to the creation of the Immersive Environments Laboratory (IEL), a pioneering installation at Penn State that has been replicated several times. Today, recent technological developments have brought VR and AR back to the fore, providing more reliable and accessible systems and enabling sophisticated applications in the arts and design, which the symposium aimed to discuss.
Introduction
RODNEY ALLEN TRICE
Professor of Practice in Graphic Design, Penn State
Rodney Allen Trice is a 1987 alumnus of the graphic design program at Penn State who returned to his alma mater as a faculty member in August 2018. Trice spent the previous 30 years working primarily in the magazine publishing industry in New York, where he art directed and contributed to a number of mainstream publications, including Allure, Cosmopolitan, Essence, InStyle, Glamour, People, Seventeen, Teen Beat, US Weekly, and Vibe. More recently, he expanded his media experience by producing TV pitches and developing video and film concepts. Trice also created the visual identity for HIVE, a luxury-brand concept launch for one of the most successful and popular female entertainers in the world.

Trice has been the recipient of numerous industry awards, including Society of Publication Designers Merit Awards, Content Council Pearl Awards, and Content Marketing Institute Magnum Opus Awards. He was also named to Time magazine’s The Green Design 100 list.

He says that teaching design students at Baruch College, Parsons School of Design, the Fashion Institute of Technology, and in his own studio concept courses has brought him invaluable opportunities to understand the symbiotic relationship that he believes is essential for teaching visual communication. “This generation has changed what visual communication is through the various platforms they relentlessly use today,” says Trice. “It’s amazing to bring my knowledge to the table along with their engagement and watch the sparks fly.” Currently, Trice is particularly inspired by the evolution of media toward more personalized, experience-driven content and has thus begun exploring the realms of virtual and augmented reality.
From the moment I realized what augmented reality had the potential to do, even likely a decade from mass adoption, I knew that I had to be a part of authoring the new design gestalts that would surely be at the center of an entirely different way to even imagine communication design. I believe that augmented reality is going to completely change the graphic design world, with the same impact the printing press had when it launched the field in the first place. I’m thrilled to be a part of this new fourth wave and was equally excited when José Pinto Duarte, director of the Stuckeman Center for Design Computing (SCDC), put me in charge of creating a symposium on this subject.

Virtual reality has been of increasing interest among architecture students in the Stuckeman School, but beyond that there has been little push in these directions, which are surely going to influence all of the disciplines we teach going forward. I think it was wise for José to put focus here.

For this event, there was amazing dialogue and engagement created by assembling people like Jesse Schell, who has been a major player in the virtual reality world and continues to teach at the Entertainment Technology Center at Carnegie Mellon University. Talking with him and bringing him here to lecture was an exciting opportunity to further develop dialogue on the subject. Tomás Dorta from Montreal has developed, and continues to evolve, his concept of how virtual reality can become a tool that allows architects and designers around the world to work on something together at the same time. Tomás brought his invention to the Stuckeman School and had us all experiment and play with this amazing new device. Along with our own Penn State TLT (Teaching and Learning with Technology) and the CIE (Center for Immersive Experiences), we gave attendees a chance to listen to some very compelling developments, as well as play with some of the newest equipment in this technology.
I have thrown my entire life into the development of this immersive technology and
how it can impact the masses through various forms of mixed media and large-scale events. My interest in these immersive technologies will only grow as they become more mainstream, and those I spoke to at the event feel the same. The influence of augmented reality on mass social conditioning is something we all need to consider, whether or not we have any personal interest in augmented or virtual reality; it will be part of all of our futures. We just need to make sure we get a better handle on it before it gathers mass appeal, a better handle than current social media platforms have managed. If we can learn from our mistakes, this new technology will be amazing.

That is why I produced this event with such enthusiasm and excitement. This 2019 symposium on augmented and virtual reality gave us a brilliant look into the future. I hope the rest of this booklet is informative and helpful, and serves as a starting place for those of you who were unable to attend the event.
How VR & AR Are Changing the World
JESSE SCHELL

Jesse Schell is the CEO of Schell Games and a faculty member of the Entertainment Technology Center (ETC) at Carnegie Mellon University. Founded in 2002, Schell Games is the largest full-service educational and entertainment game development company in the United States. Prior to joining the ETC, Jesse was the creative director of the Disney Imagineering Virtual Reality Studio, where he worked and played for seven years as a designer, programmer, and manager on numerous projects for Disney theme parks and DisneyQuest. He is celebrated for his design work on Toontown Online, the first massively multiplayer game for kids.
HOW VR & AR ARE CHANGING THE WORLD

I want to talk to you about what I’ve been learning about virtual reality (VR) and augmented reality (AR). These are things I’ve been thinking about for a long time, and they are starting to take on more importance. As
Rodney mentioned, I had previously worked at the Disney Virtual Reality Studio. There was a time back in the 1990s when virtual reality was new. People then thought that VR was going to take over quickly, and there was a lot of experimentation and a lot of work being done. I worked on a thing called DisneyQuest. To sum it up, it was Disney’s virtual reality theme park in Florida; it was built in the 1990s and it stayed there for about 20 years or so. I was able to work on a number of very interesting attractions there. While I was working on that project I met a gentleman named Randy Pausch. You may know him from his famous book The Last Lecture. He’d helped out a bit, working on some of these Disney projects. Later he founded the Entertainment Technology Center, where I teach now; I returned to Carnegie Mellon in 2002 to start teaching. It is
a master’s program, and it is very unusual because it’s designed to be a professional master’s in entertainment technology. We have about 160 students. Forty percent of them have a computer science background, 40 percent are artists, and 20 percent have degrees in all kinds of disciplines, including biology, management, psychology, and architecture. The idea of the Entertainment Technology Center is that, in the future, advances will happen when you have all different disciplines working together on something. Divided disciplines can only go so far. Look at something like the invention of the smartphone. The iPhone could not have been made by engineers alone, because it’s too beautiful; engineers don’t think that way. They don’t think about making things especially pleasing to use. Nor could it have been built by artists alone, because it involves way too much technology. It could only be built by a group of artists and engineers willing to work outside their own disciplines and build something new together. So that’s what we focus on at the ETC. Virtual reality has been a core part of the curriculum at the ETC for 20 years. There is a class called “Building Virtual Worlds” that Randy Pausch himself began all those years ago and that Dave Culyba and I continue to teach today. The main rule of the class is that we’re always trying to give students interfaces and devices that are from the future. The rule is: if you can buy it at Walmart, we’re not going to work with that hardware.
We’re going to look at something that’s coming from farther down the road. It’s been a really fun way to experiment with different virtual worlds over the years. We’ve created, I believe at this point, something like 1,800 different virtual worlds through the class.

I also run a video game company in Pittsburgh called Schell Games. Here is a video that tells more about us. When virtual reality came into the marketplace, I became very interested in how good Schell Games could be at creating experiences in this medium. At this point, I think we’ve built about 15 different VR and AR experiences over the last few years. Some of our most popular ones include our escape room game I Expect You to Die. We also developed Happy Atoms, a physical and digital molecular chemistry set. Another game we are spending time on now is Until You Fall, a VR sword fighting game. Part of the reason that I really wanted to get in early on VR, even though it’s not obviously profitable in the beginning, is that by getting in early and learning the best practices as the market starts to become solid and stable, we would be in a good position to do more with it.

I remember when computers first came into homes, and at that time, there was a lot of debate. People really wanted them, but they were so expensive that it seemed silly. Who’s going to want this technology? But gradually people found ways to make them cheaper and more useful, and they came into the home. I think we’re getting ready to see the same evolution with VR. People ask why we are talking about this change right now when it’s not really here yet, but as William Gibson says, “The future is here now; it’s just not evenly distributed.”

In recent years, there’s been a surprising number of advances in VR technology. I just want to be clear about the kind of technologies I’m talking about. I’m mainly talking about headset-based technologies that can track the position of your head. When we track XYZ position in space as well as rotational position, we refer to it collectively as “six degrees of
freedom” or “6DOF” tracking, because we’re encoding both position and orientation in space. Most virtual reality systems right now are parasitic systems: they are running on hardware that was not designed for virtual reality. Phones were not designed for virtual reality. PCs were not designed for it. Game consoles were not designed for VR. So we have this situation right now where the tech is janky because we’re trying to make it work on these weird systems. This situation is not going to last long, though. Very soon we are going to see platforms like the Oculus Quest, which just came out a couple of months ago. This system is completely self-contained: it requires no PC, no game console, no phone, and it has no wires. You just put it on your head, you have these two wireless controllers, and you’re completely good to go. These systems are going to become the new way of doing things because they’re just so convenient, so easy to use, and so affordable.

It’s worth taking a moment to talk about the difference between virtual reality and augmented reality. A lot of my talk is going to focus on virtual reality because there’s so much more that can be done with it right now. The difference is that virtual reality completely blocks out the real world and paints a completely artificial world for us, whereas AR lets you look at the real world, either directly or through cameras, and then selectively adds certain parts of a simulation to it. These are two different technologies, though over time we are likely to see them merge. Still, I want to talk about some of the challenges of augmented reality. Here is an interesting example, created by some of the students in my “Building Virtual Worlds” class. It uses the HoloLens technology that Microsoft developed, which lets you look directly at the world and paint over parts of it. In this case, the students created an artificial
hospital room. It’s where you get to experience having a child at the hospital. It is a powerful and moving experience because you have a real identity and a real setting, and there are virtual characters interacting with you. When she comes over and says, “Please hold my hand,” you can’t help but reach out. Since the system is tracking your hands, the students were able to create this moment where you hold her hand.

The main problem with this technology, its central weakness, is that because of the way it works, you only get a very limited field of view. Even though the display is clear, only a small window of the virtual scene floats in one spot. So it all looks fine on a screen, but the person wearing it actually sees the image cut off. From a distance it looks fine, but when you walk up, it cuts off, and your brain immediately says it is all fake, it’s not a person, and I’m not really here anymore. That’s a constant challenge with these kinds of displays.

I think what’s going to happen is that VR headsets are going to use cameras to bring the real world inside the technology. They’re doing it already: the Oculus Quest system uses its cameras for safety. When you get to the edge of your safe area, it immediately switches to the cameras and shows you the real world. So I think VR headsets are going to become augmented reality headsets as well, and we will see it happening in the near future. True see-through AR glasses still have to make major leaps in display technology, and I wouldn’t expect to see them until the 2030s, so I think these camera-based headsets are likely to be dominant in the 2020s.

People often have confused views about the way technology comes into being. People assume that something’s invented and either it’s going to be a hit or it’s not. That’s it. And that’s never ever how technology works. Consider television. When you ask people “When was television invented?” they say, “Oh, I don’t
know, like 1948 or something like that.” That’s not right. Television was invented in 1884, as a mechanical system for sending images remotely: a spinning disk driven by a motor. People just kept working on it for 43 years, trying to find better ways to send images. Can you imagine how nutty somebody must have seemed in the year 1915? Nobody has ever seen this television thing, yet you’ve been working on it for 30 years! But 43 years later, what happens is a 20-year-old punk named Philo Farnsworth is out mowing the lawn and he realizes, “Hey, wait a minute! Just like I’m mowing the lawn back and forth, what if I could use a magnet to control a cathode ray tube and make the beam move back and forth the same way?” It turns out this is the massive breakthrough that television needs, and then it still takes several more years before it could be commercially viable.

It’s the same thing with VR. It was actually invented in 1968, when we had the first working VR system; it used actual mechanical tracking, a big mechanical device above your head, to create virtual 3D displays. Forty-three years later, a 20-year-old punk named Palmer Luckey shows up and says, “Hey, you guys! The way you’ve been doing it, there are a lot of ways it could be improved. If you use an LED display and plastic lenses, do the color correction in software, and get the frame rate up to 90 frames per second (FPS), we might really have something.” It was a meaningful breakthrough. So it’s important to remember that technologies take time, even though we think of them as sudden booms or breakthroughs. We think they came out of nowhere, but it’s not true. It’s never just that color TV came out in 1960; it had really slow growth, because what’s the point of having a color TV if there are no color shows, and what’s the point of making shows in color when no one has a color TV? This kind of “priming the pump” takes a while. It’s the same thing with VR.
So next, people will say, “VR is just fancy 3D glasses. 3D TV was a flop, so VR is gonna flop. It’s just 3D.” And it sounds like a reasonable argument. The key is understanding why 3D TV was a flop. The TV manufacturers thought it was going to be the next big thing: everyone’s going to want a 3D TV. Who here has a 3D TV? Really? No one? It’s because we don’t care about 3D images. We don’t care about stereoscopic images. They’re not new. In 1849, we started taking photographs in stereo. You could take all your photographs in stereo; it would be easy to make your phone take 3D pictures and let you look at them in 3D. But 3D is a once-in-a-while thing. It’s nice when you go to a theme park or see a fancy movie in 3D. It’s not something we care about using every day. We just don’t care. So the key thing to understand is that VR is not a technology about the eyes. It’s not about 3D. VR, by tracking your motion, is able to create the illusion that your body is in a place it isn’t actually in, and this illusion can be very powerful. I can’t give you that illusion with a TV screen or a movie. You’ll look at it, but you never think, “Oh, I am in a different place.” But when you are in VR, your body starts to make assumptions about what’s real and what’s not real. In terms of bodily presence, AR and VR pose a question of here versus there; it’s almost like fiction versus nonfiction. AR is all about the real world with some things we’re gonna put on it: I’m actually here, but I’m now going to interact with a virtual object in the physical space. VR takes you somewhere else, someplace that’s not here, a distant place. What makes VR different is that it brings your body, all of your bodily thinking, and all the ways you interact with everything in the world, and puts them into the world of the simulation.
A question that comes up a lot is about motion sickness. Some people tried VR once and got motion sickness: “I’m never trying VR again!” Motion sickness is real, so it’s important to understand how it works. What is motion sickness? It is a very strange phenomenon: you see certain visuals and you want to throw up. Why would nature do this? Well, the way it works is this. Your balance sense is in your inner ear, where there are these motion detectors: fluid moving around little tear-shaped structures lined with hairs that detect what motion is happening. This is how you balance. This is how you keep from falling over. Now, one thing nature learned at some point is that there are certain things in the world, certain toxins, that will separate what you see from your sense of balance. Toxic mushrooms are a good example: if you eat them, you start to get hallucinogenic effects that hijack your visual system. Nature has figured out that if your balance and your vision don’t line up, you’ve probably been poisoned. That’s why we have this involuntary response. It’s just nature trying to keep you from dying, and it works fine for 98 percent of toxins. It’s a problem if you’re on a boat. It’s a problem in the back seat of a car, on a roller coaster, or in certain kinds of virtual reality experiences. It’s a real thing we have to deal with, but it can be dealt with. Part of it is having a fast refresh rate. In normal video games, you draw 30 frames a second or, if you’re getting fancy, 60. In VR, we have to draw 90 frames a second in order to make the visuals seem really real. Then there’s the trick of the fast-refresh display, which has been a meaningful breakthrough. You have to be very careful about the kind of motion that you present to the guests.
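Those frame rates translate directly into a per-frame rendering budget. A quick back-of-the-envelope calculation (just an illustration, not from the talk):

```python
# Per-frame rendering budget at common refresh rates:
# the renderer must finish every frame within 1000 ms / fps.
for fps in (30, 60, 90):
    budget_ms = 1000 / fps
    print(f"{fps:2d} FPS -> {budget_ms:.1f} ms per frame")
```

At 90 FPS the whole scene has to be drawn (twice, once per eye) in roughly 11 milliseconds, which is why VR rendering is so much more demanding than ordinary games.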
Certain kinds of motion can easily get someone sick, but other kinds can be very nice, so there’s a real art to it. Motion sickness is tricky. It’s a lot like coffee or beer: everyone likes a different one because of the way the bitterness sensors in our mouths work. You only have one kind of sugar sensor, so you don’t hear people debating which of 20 kinds of sugar they want, but people do ask, “Which of 150 kinds of coffee do you like?” That’s because you have about 23 kinds of bitterness sensors, and everybody’s different. Everyone has very unique tastes. In the same way, about 20 percent of people never get motion sickness at all, and some people only get motion sickness in certain situations. So it’s very possible to keep motion sickness to a minimum. The next question is how we are going to use this technology. We’re finding a lot of applications in the world of enterprise and industry. At Finger Food in Canada, they have this whole freakin’ truck sitting in their lab. You put on a headset and it overlays a virtual augment on the truck. It’s crazy, because when you are in it, it’s hard to know what’s real and what’s not. I walked right up to it and started designing right on it. Traditionally, new truck design means carving the whole thing out of clay: they need to do wind tunnel tests, so a physical form is needed. It takes ages and is a huge amount of work. Now they can do virtual wind tunnel tests and adjust and change everything on the fly. So there are a lot of areas of three-dimensional design where these technologies can be very useful. It is certainly useful in the world of architecture. Medical training is able to use augmented reality, too. In military applications, things have been happening for 20 years; the new technologies are just making it more affordable and more flexible. Entertainment certainly is a great place for this. We’ve seen crazy stuff like virtual reality roller coasters, which sounds nuts. When they’re executed badly, it’s a bad experience. When executed well, it’s exciting.
There’s a new company, holoride, that we’ve worked with. They take this same experience, except you sit in the back seat of a car and it communicates all of its motion to the virtual reality headset. So wherever the car moves, you move in the virtual world. It’s a very powerful experience. So you’ve got these sort of big things that people are doing at location-based entertainment centers but there’s a lot of stuff that’s happening in the home already in terms of entertainment. When the initial VR systems came out, you’d have to buy a $1,000 or $1,500 PC and then you’d have to buy a $700 VR system. That’s a lot of money. And the people
who are really interested are the teenagers. Teenagers get very excited about VR. They’re looking to explore crazy things. They have time to themselves. They’re looking to have these deep experiences, but so far they haven’t been able to afford the technology. So we’re looking at the change likely to come in the next couple of years as VR becomes more affordable. I’ll show you some trailers for some of the games that we’ve developed. I Expect You to Die is a fun one where you are a spy. It is an escape-the-room experience where you solve puzzles to complete your mission. Another entertainment game we’ve developed is called Until You Fall, a sword-fighting experience. In it, you are slaying scary monstrosities using weapons in both hands, and because you are playing, you don’t realize how much of a workout you are getting. Entertainment is a fine thing, but there’s also a use for VR as a tool for art. There was a time when it could only be used by people who were both programmers and artists, but that’s really changing. This is a video by Glen Keane, one of the Disney animators, talking about what it’s like to paint in 3D space. It’s been exciting to watch artists start to embrace these tools. I remember an experience with Tilt Brush. I first brought it home and showed my daughter, who loves to draw. She was maybe 12 at the
time. She was fascinated with it, and that night when she went to bed, she got out of bed an hour later: “I’m sorry, I can’t sleep at all. I just can’t get the things I did today out of my mind.” It’s just so powerful. Education is certainly another important area for VR technology. In a survey by Greenlight Insights a few years ago that asked people what they wanted to use VR for, everyone was expected to say something game-related, and people did say “gaming,” but not nearly as many as said things like “tourism” and “watching movies.” In fact, more people said they would use it for “education” than said they would use it to play games. Of course, it is always this way with technology; there are people who really want to find ways to use these technologies for education. So now you have a question: What are you going to do? What subjects are going to be good? Certainly anything that involves space and interacting with space is exciting. It’s great for geography or traveling to distant places, because VR really gives you that sense of being in a space and being in a place, whether it’s a simulation of a real place or a building that hasn’t been built yet. Certainly anything involving abstraction, like math, since you can make the abstract more tangible. Part of what makes math hard is that you’re trying to imagine it all in your head or in little scratches on paper. If you can actually surround yourself with some of these structures and reach out and mentally experience these abstract structures, it becomes easier to understand because you can bring your bodily
sense of space into them. Certainly, many science experiments are worth replicating in VR because they involve rare substances, strange machines, or exotic materials. This application got me thinking a lot about how to do more with chemistry. One of the projects we developed, called Happy Atoms, was very much inspired by a 10-year-old girl who was given access to a college-level system for building atoms and molecules. The teacher explained how molecules are put together and she started building. She built a molecule and said to the teacher, “I think it works,” and he said, “I don’t know this molecule; let me look it up.” He looked in the CRC handbooks and couldn’t find anything, so he started calling college professors he knew and asked, “What is this thing? It should exist.” As it turned out, there was no known instance of this molecule: the student had discovered a legitimate molecule not previously identified. This story got me thinking about what would
happen if we could give these kinds of tools to as many kids as possible. Part of the challenge here is that though she built it, how did she know it was real? How did she know what it was? She had a teacher right by her side, but I can’t give everybody a teacher. So we started thinking about different ways to do that, and we built an application called Happy Atoms for students to discover the world of molecules. Happy Atoms is a physical and digital molecular chemistry set that lets students learn about atoms and molecules in an intuitive, hands-on way. The complete set contains 50 models representing 16 elements. The magnets represent the electrons, so you can easily tell what connects to what even when you don’t know what it is. The free app invites you to explore the world of molecules in a fun and engaging way. The image recognition technology identifies the molecules you build and provides detailed information about them, so now you can understand what the molecule you created is. One of the phrases I find myself saying is that augmented reality works best when there is a reality to augment. If you are just augmenting empty space, what’s the point? If I show you a 3D model floating in space, I could just as well look at it on a screen. But if you can augment physical objects that are interesting and learn more about them, that’s when augmented reality is at its strongest. So that’s atomic modeling, but what about the laboratory? It is a real issue and a real problem, particularly with high school students. A situation where you’re going to give high school students access to glassware,
toxins, and fire is problematic, and as a result, the experiments done in the physical world are few and far between. We started wondering: what if we could create a lab environment in virtual reality so that everyone could have access to a traditional chemistry lab? HoloLAB Champions is an immersive, safe, and entertaining environment that makes mastering lab skills fun. Players can perform a variety of lab challenges leading to a show-stopping final lab, or, in practice mode, hone their skills on specific tasks. Players are scored on accuracy and safety as they perform work that prepares them for success in a real lab. HoloLAB Champions is a chemistry companion developed with input from chemistry teachers. When we first developed this game, we assumed the most useful application would be the ability to experiment with lots of different chemical reactions, but as we talked to teachers more and more, they said that’s fine, but what we really need to teach is basic lab skills. There are lots of other subjects where we can apply this technology, like historical events. People think about this a lot because of the ability to put you in places where things are happening: a way to make history real and to put you in situations you could not normally be in. We had an interesting project at Carnegie Mellon that was all about dealing with police brutality. What is it like to be innocent and see someone being confronted by the police? This VR experience is really powerful and engaging, and it uses voice recognition so you can have these conversations. One of the things we noticed is how one participant put his hands up. We weren’t tracking hands; that was his body feeling real in the space. We also noticed that a lot of people didn’t put their hands up.
How do you create history in VR, though? It’s tough. In a game like Assassin’s Creed, they spent $20 million to reconstruct ancient Rome. It’s really hard and really expensive, because when you talk about ancient cities and ancient battles, the sheer amount of modeling and animation is just overwhelming. So we started thinking: is there anything we can do with history that would be possible? Then we started thinking about virtual reality as a performance medium. In HistoryMaker VR, we create avatars of historical figures and use inverse kinematics so that you can move and interact as these characters. We can track you automatically, so effectively what we’re doing is creating a virtual television studio for you. We created a set of historical figures, and we give you the ability to upload a speech that you may have written. You then get a teleprompter and perform as the character, and you can export YouTube videos of yourself as this animated character. It’s something that history teachers were very interested in, not only to give as assignments to their students but, to my surprise, to create their own performances to use in the classroom. VR in education is not going to happen overnight. The technology is very new right now and schools are slow to adopt new technologies. Which systems should people buy? It’s hard to know. The technology is fragile and not designed for school environments. And you need to consider hygiene; it’s a real issue. It’s weird enough that you’ve got to touch somebody else’s computer with the tips of your fingers; now you’re gonna rub the computer all over your face. Schools take their time and are careful, so we’re not going to see it in schools overnight. It’s weird to think about communicating in
VR, but a lot of our communication is about body language and presence with other people. Let’s do an experiment right now. Pick somebody near you and partner up. Just look at them. Now take your hands and bring them three inches from your neighbor’s face. You can feel that, right? That’s because there’s a special nucleus in your brain that deals with things inside arm’s reach; things outside arm’s reach don’t register in the same way. This video demonstrates the ways we are going to be able to communicate in VR. By 2025, VR home movies will be our most treasured possessions. When you ask people right now, “What’s your most valuable possession?” older people will talk about photo albums and younger people will talk about their phone. Photos and videos are how we record important events. It’s one thing to see a photo of a baby’s first steps; it’s another thing to put on the headset and be in the room again when it happens. How is that not going to be incredibly powerful for people? So I believe we may find that 360 video starts to become surprisingly popular, because these technologies are built over reality: they show us what we can already see, just in different ways. There are even ways we can potentially see through other people’s eyes.
There’s another technology coming that’s going to augment all of this: artificial intelligence. I’m going to introduce you to a boy named Milo. The idea of a virtual character that can make eye contact with us, sense our emotions, and have a meaningful conversation is something we’re very interested in. Conversation is something very natural for us. There’s a professor, Chris Swain, who makes an interesting prediction about how things are gonna go down in the 21st century. He talks about the history of film as it relates to the history of games. He points out that when film was new, people didn’t take it seriously. It was not considered an artistic medium; it was something for children or for stupid people, trivial entertainment that no one took seriously. But then something changed: films began to talk. Then film totally dominated the media. It took over the way we communicate in important ways. It became an art form that people really cared about and that was very meaningful to almost everybody. So here is the parallel to games, because games are in a similar position. Many people talk about games as just being for children or a waste of time. This professor argues that what games need to do, since they already talk, or maybe even talk too much, is that games
need to learn to listen. When we get to a place where games listen to what we say and respond intelligently, this medium is going to change in a significant way. He argues that it may well become the 21st century’s most emotional, most immersive, and most intellectual medium ever created. It’s a big statement, but it’s interesting to think about the fact that these technologies are creeping along at the same time that VR is developing. You have these characters who want to talk, and you start to interact with them in a more physical way. When I think about the future of technology, I always end up thinking about children and how children feel about it. Children, of course, love video games. How do they feel about virtual reality? As it turns out, they like it, rather a lot. They can easily embrace these things because their imaginations are just much stronger. Children have beautiful, blooming, growing imaginations, and it ends too quickly. A quick story: I was at a friend’s house once and he was showing me his new kitchen and new stainless steel refrigerator. It’s pretty cool, but he can’t stick magnets to it; the magnets fall off. I said, “Isn’t that weird? I mean, it’s stainless steel. It’s steel. It has iron, so why do magnets fall off? How is that even possible?” His 10-year-old was standing right there, pulls out her phone, and says, “Stainless steel is infused with nickel, which interrupts the magnetic power of the iron, so therefore the magnets do not work.” All of the knowledge in the world is right there, right in her hands, on the phone. I didn’t grow up with the internet. The internet is still kind of a strange intrusion on life that I’m still getting used to. Computers, not so much. I have a degree in network programming; computers were there from very near the beginning for me, and I feel like it’s a technology I know. The internet is still a “come late to the party” thing.
We’re all late; we’re
going to have a generation of children who think of virtual things in space as normal. Maybe in the year 2035, children will have these magical glasses. When they put the glasses on, they’ll see their imaginary friend right there playing with them. They can have a conversation with this imaginary friend and play all kinds of games together. It will seem totally normal to them, and this friend will be up for all kinds of adventures: whatever you want to do, your friend is going to want to do it with you. It is not just for playing, either. This friend is going to be always looking for social moments. Your friends will come over and you’ll introduce your imaginary friends to each other, or maybe the imaginary friends will introduce you to each other. You can imagine every kid walking around with a companion, and of course, these companions are always looking for teachable moments. That’s part of the power; it’s what good parents and good teachers do. Sure, they play with you, but at the right moment they say, “Hey, what can we learn from this?” and they find ways to push you and get you to grow. This is exactly what these imaginary friends will do. And what parent will be able to resist a playmate with infinite patience that is there to help your child? That’s what I think about when I think about the future of this technology. I don’t think of it as a bunch of weird little gimmicks. If we’re going to build this thing, we need to remember that what we’re building is the eyes of the next generation, and that’s pretty important. I like to think that we should take on that responsibility and try to do the best for the future. Thanks very much!
Social VR in Design
TOMÁS DORTA Tomás Dorta is a professor at the School of Design of the University of Montreal, a lecturer at Concordia University and Polytechnique Montréal, an associate professor at the Université de l’Ontario Français in Toronto, and the director of the design research laboratory Hybridlab. He teaches Ph.D. seminars and leads workshops and theoretical courses in industrial design, interaction design, and co-design. He is bringing to Penn State and the Stuckeman School the project he lead-designed: the Hybrid Virtual Environment 3D (Hyve-3D), an innovative social virtual reality co-design system. Dorta has developed, and continues to evolve, his concept of how virtual reality can become a tool that allows architects and designers around the world to work on the same thing together at the same time.
Figure 1. Virtual Reality continuum
In the context of this talk, I will be using the term “hybrid” in the sense of Augmented Virtuality, moving from the Virtual Environment towards the Real Environment along Milgram’s (1994) reality–virtuality continuum (see Figure 1). With regard to this SCDC symposium on VR+AR, the term “hybrid” stands for virtual reality that is “augmented” in the real environment. This clarification is important since I will be referring to “Social VR” as a particular kind of VR.
IN PERSPECTIVE Now let’s go back and consider how we treated information in design representations, starting with how information was processed in computer graphics. The first computer was mechanical (Babbage, 1833). At that time, the focus was on “numbers” and the machine was, in fact, a calculator. During World War II, the aim was to decode the messages encrypted by the German “Enigma” machine. Alan Turing helped design the electromechanical “Bombe” that broke Enigma, and the later “Colossus” (1943–45) continued this endeavor, moving from a “numerical” approach to a “logical” one. Shortly after that, at the University of Pennsylvania, Mauchly & Eckert (1946) built an electronic computer called “ENIAC.” Later, in 1963, one of my idols (you will see him mentioned twice in my presentation), Ivan Sutherland, invented the Sketchpad (Sutherland, 1963). With this system, information was processed “graphically” for the first time. However, instead of giving attention to the interaction the Sketchpad proposed between the light pen and the cathode-ray tube screen, the emphasis was put on the software as a computer-aided drafting (CAD) tool. Almost at the same time (1965), Doug Engelbart focused on the “interaction” between human and computer with the mouse. Then, in the year of my birth (1968), Sutherland invented the first head-mounted display (HMD), which marked the genesis of VR: it allowed a single user to be immersed in a virtual cube by projecting images directly to each eye (Sutherland, 1968). Many years later, in the 1990s, Carolina Cruz-Neira introduced the term “social immersion,” asking what would result if we could get together in those virtual environments (Cruz-Neira et al., 1993)! With regard to design representations, information was treated mechanically
and manually (analog). As far back as the Renaissance, architects were making mock-ups. As I see, you also do a lot of mock-ups here. During the Renaissance, however, the mock-up was literally the design process in and of itself: the architect was designing the building by making the mock-up, because planning the whole project through mathematical calculations and orthogonal drawings was not always practical. They also made mock-ups to present projects to clients. Then something very important arrived during the Renaissance through the work of Filippo Brunelleschi: the “perspective” was proposed as a way to represent the project before its construction. These perspective representations struck the eye as being much closer to reality. As we saw before, one main heritage of the Sketchpad was CAD (the “D” here standing for “drafting” before it was later used to mean “design” in computer-aided design). Then came computer animation, born by making sequences of perspectives. From the interactions we also began rendering the same perspectives in real time. Ultimately, information processing in design evolved into VR. But through all this, what happened to “social immersion” in design? That will be the main subject of this talk. At the beginning of my research career, during my doctoral research in the late 1990s, I was studying the active versus passive role of designers while they used VR. One can be actively representing through sketching during “ideation,” as we saw with the Sketchpad, or passively observing or navigating while “communicating” projects. In the latter case, the project was designed using other 3D modeling programs and then only visualized in VR. At that time (Dorta & Lalande, 1998), we found that the traditional sketch was contributing almost the same to the design process for ideation
(active), while VR contributed more to communication (passive). These seminal results led us to the first steps of our work to combine sketch and VR.
FIRST STEPS During the 19th century, panoramas were the medium for immersing several participants at once. A rotunda or cylindrical building was combined with cylindrical panoramic paintings to give the sense of immersion. The scenes were often inhabited with horses, real people, and real objects (props) placed in front of the panorama to heighten the effect of being, for example, on a battlefield (Oettermann, 1997). In Italy you can see the trompe-l’oeil effect in the domes of baroque churches, which appear higher than they are thanks to anamorphic views. Anamorphic techniques are important because, when you are in the middle of such images, these projection techniques are responsible for the feeling of being immersed in the depicted scene, of the painting’s depth extending beyond the surface holding the paint. In 2006, Luc Courchesne proposed the Panoscope, using 360-degree panoramas all around one person (Courchesne, 2006). Planetariums, likewise, use 360-degree domes as ceilings. In my opinion, it seems impractical to always have to look up, since it is not comfortable for a working position or for the neck. In an anamorphic image, when we see the image from a particular point of view the perspective appears correct, but from other points of view the image appears deformed. We started using that technique, and between 1999 and 2004 we began exploring it with the drafted VR, or non-immersive, approach. The computer represented 3D shapes in cylindrical 360-degree panoramas (equirectangular images). Then
these anamorphic images, used as graphical templates, were printed so that designers could apply already-acquired freehand sketching skills at the traditional drawing tables you have in the design studio. Those sketches were then digitized back into the computer. Finally, using the QuickTime VR software, the panorama was wrapped and the user was able to get corrected real-time perspectives (Figure 2).
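The perspective correction that QuickTime VR performed on the wrapped panorama can be sketched with a little projection math: each pixel of the desired view is traced as a ray, converted to longitude and latitude, and looked up in the equirectangular image. The following is a minimal NumPy illustration of the idea, not the lab’s actual software; the function name, parameters, and axis conventions are invented for the example:

```python
import numpy as np

def equirect_to_perspective(pano, yaw, pitch, fov_deg, out_w, out_h):
    """Sample a corrected perspective view from an equirectangular
    panorama (H x W x C array).  yaw/pitch in radians, fov in degrees."""
    H, W = pano.shape[:2]
    # Focal length (in pixels) implied by the horizontal field of view
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    # Pixel grid centered on the optical axis
    x = np.arange(out_w) - (out_w - 1) / 2
    y = np.arange(out_h) - (out_h - 1) / 2
    xv, yv = np.meshgrid(x, y)
    # Ray direction per output pixel for a camera looking down +Z
    dirs = np.stack([xv, yv, np.full_like(xv, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays: pitch (about X) first, then yaw (about Y)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T
    # Direction -> spherical coordinates -> equirectangular pixel
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])       # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))      # -pi/2 .. pi/2
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]
```

Turning the virtual Wacom tablet then simply means changing `yaw` and `pitch` and resampling, which is why the corrected perspective can follow the sketcher in real time.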
Figure 2. Drafted Virtual Reality-DVR (non-immersive) (Dorta, 1999; 2004).

I was working as a practicing architect in Montreal, in an architectural office now named Aedifica, and we showed 360-degree panoramas, both photorealistically rendered and freehand sketched, to a client. About the photorealistic one, they commented, “It is already built?” That is the issue with photorealistic computer graphics: it is too accurate. It doesn’t leave enough space for ambiguity, nor does it properly support abstraction. In the case of DVR, the client was open to talking about the proposed ideas. It was a sketch where we could see the space from the inside in real time. Later we used the same technique, but this time it was immersive, using a spherical virtual mirror to produce a 360-degree spherical panorama as a graphical template. We then projected that template in Courchesne’s Panoscope 360° and used a Wacom tablet to sketch in real time all around us (Dorta & Perez, 2006). You can see some beautiful examples of equirectangular and spherical handmade 360-degree panoramas (Figure 3). Equirectangular sketches are deformed, but you can understand the shape well enough to sketch over it, while the spherical ones have a central vanishing point allowing you to sketch. You can also see an old video (2006) where some of my interior design students
were sketching the space from inside, in real time, using basic geometry as a background reference. The deformed spherical panorama was sketched from the central vanishing point by putting the finger in the center of the image. We did that without developing specific software (Figure 3). For similar equirectangular sketching techniques proposed for use in head-mounted displays (HMDs), see Vistisen, Luciani & Ekströmer (2019).
Figure 3. Equirectangular and spherical DVR panoramas and the DVRi (immersive) technique.

In 2007, we developed the first augmented design tool: the Hybrid Ideation Space (HIS) (Dorta, 2007). It differed from the Panoscope in that it used a mirror at the top center of the space along with a central sketching station, a Wacom tablet, and a projector pointing upward at the mirror attached to the sketching station. This projection technique was inspired by Courchesne (2000) and Bourke (2005), the latter used in planetariums, both using anamorphic images. As you can see in the HIS, the participant is immersed in the space while sketching in real time on the Wacom. On the tablet, the participant sees the projected perspective according to the direction the Wacom tablet faces relative to the immersive screen. In this system, which was originally for one user, we developed software that corrects the deformed anamorphic equirectangular view into the perspective image used for sketching on the Wacom. Another aspect of the HIS is immersive model making: by placing a scaled mock-up (e.g., a car) on the model station, we can project the model immersively at life size. It is then also possible to sketch over this image. At that time, we began adding collaboration features to the system, allowing multiple HIS units to connect simultaneously. Participants could see the feeds from other systems' IP cameras. This was the first step of our social immersion explorations.
IMMERSIVE DESIGN COLLABORATION

In 2010 we interconnected the system remotely for the first time, between our HIS in Montreal and Professor Y. Kalay's HIS at the University of California, Berkeley College of Environmental Design. While the system was initially in a dedicated space, it was later integrated into the design studio. Of
course, the system was a functional prototype. We had to mask the natural light of the design studio by putting a dark fabric on top of the whole system. At that time, we started calling this space the "augmented design studio," since students and professors could now collaborate with external experts and collaborators on the project. We quickly noticed one implication of using a 360-degree screen: the participants were literally squeezed inside the space. By looking at their shoes, you could see the architect, the financial person, the engineer, and the client or landowner (with running shoes!). The client primarily connected with those immersive panorama images of the land instead of scaled models or technical drawings. Then in 2011, we began replicating this prototype in several remote locations, such as Basel, Switzerland; Aachen, Germany; and Quebec City, Canada. The goal was to understand and research remote immersive collaboration. The HIS also used ModBook laptops, before the arrival of the iPad: a MacBook combined with a sort of Wacom screen. A laptop was too heavy, but it still allowed several participants to use various devices together in the same space, connected to the same system. This allowed multi-user interaction and simultaneous immersive sketching.
EVOLUTION OF THE INTERACTION
We still needed to evolve the system's model station and projection mirror. In the beginning, we used a light bulb with a mirrored surface as very basic spherical optics to capture the scaled mock-up on the model station. In the original prototypes we also used plastic mirrored domes, the kind you find attached to the ceilings of drugstores and airports, to project the immersive images. We discovered these optics produced deformations and were not accurate enough. We were using a hidden laptop or a Mac mini to run the system and a Wacom tablet as the user interface.
At the beginning, before 2001, we were freehand sketching equirectangular (cylindrical) panoramas using pen and paper. Then, in 2007, participants used the Wacom to sketch very nice spherical panoramas in painting software, placing all the vanishing points at the center. Later, in 2010, we developed our own software to correct the anamorphic image into a proper perspective on the ModBook (Figure 4). At that time we also tested sketching with an iPod touch, since the iPad did not exist yet, and even sketching with a laser pointer. The laser pointer was very popular because it was easy for participants to point and
express design decisions, but the accuracy of the sketches was an issue.

Figure 4. Evolution of the immersive sketching techniques.

SOCIAL VR AS A CO-DESIGN TOOL

These evolutionary needs, tests, and explorations allowed us to develop the Hybrid Virtual Environment 3D (Hyve-3D) (Dorta et al., 2014; 2016a). This system was used during the co-design workshop we held yesterday here in the Immersive Environments Lab of the Stuckeman Family Building. The system was presented for the first time at the Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) 2014 Conference in Vancouver, patents were submitted, and our startup company was created: Systèmes Hybridlab. The concave spherical screen is open, compared to the HIS 360-degree screen, allowing more distance between participants. This new design was a local and remote VR co-design tool without the need for VR headsets, but it also included a very important component: the 3D cursor. The 3D cursor allows every participant to sketch using iPad Pros. All accessories, such as speakers, camera, and keyboard, were combined into a single reachable laptop (MacBook Pro). Other components were the optical mirror and a 4K projector pointed directly upward at the mirror at the center of the space.

In 2015 we developed the 3D cursor, which was also patented and presented at the SIGGRAPH 2015 Conference in Los Angeles (Dorta et al., 2015). The goal of the 3D cursor is to sketch in 3D while having a complementary orthogonal representation (top, side, and front views) instead of the perspective view that matches the projected immersive image. It is also a navigation device with the novel approach of a "3D Trackpad," which allows direct interaction by selecting and manipulating virtual objects in 3D through natural gestures. The idea of the Hyve-3D can be illustrated by taking the Wacom or the ModBook in our hands and changing the position and orientation of the drawing area in 3D. In the HIS, the perspective is sketched once, and if you need to see the object from another point of view, you have to redraw the perspective. In the Hyve-3D, you sketch in 3D by moving the 3D-tracked iPad (6DOF) and placing the drawing area (3D cursor) wherever desired in 3D space. For example, once a cube is sketched in 3D, all the perspectives of the cube are obtainable simply by navigating around.
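The core of the 3D cursor idea, a tracked drawing plane whose 2D strokes become 3D geometry, can be illustrated by lifting tablet coordinates through the plane's pose. This is a minimal sketch of the general principle, not the patented implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def stroke_to_world(points_2d, origin, u_axis, v_axis):
    """Lift 2D stroke points (tablet coordinates) onto a tracked drawing plane.

    origin: 3D position of the plane's reference corner.
    u_axis, v_axis: orthogonal vectors spanning the plane, scaled by
    world units per tablet unit. A 6DOF tracker would update all three
    each frame as the tablet moves.
    """
    p = np.asarray(points_2d, dtype=float)                    # N x 2
    return origin + p[:, [0]] * u_axis + p[:, [1]] * v_axis   # N x 3
```

Sketching the same square twice with the tablet held in two different orientations yields two faces of a cube in world space, which is the behavior the talk describes: the stroke lives wherever the drawing plane was when it was made.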
SOCIAL VR AS A RESEARCH TOOL

Because we work in academia, we are also expected to use the system as a research tool. In 2008 we developed a technique to measure the user experience (UX) during designing that we called "design flow" (Dorta et al., 2008). The technique is based on Csikszentmihalyi's notion of optimal experience, called "flow" (Csikszentmihalyi, 1990), which refers to the balance between perceived challenges and skills. During ideation, before participants give form to their ideas, they are "stressed" (high challenges, low skills). When the concepts begin to take form and very satisfactory results appear, participants hit the state of flow (high challenges, high skills), and then they switch to the state of "control" (low challenges, high skills). The design flow is presented in the form of specific patterns identified from field data. In 2016 we developed a less intrusive and more granular technique for assessing it by using
retrospection. Using a MIDI controller, one slider for perceived challenges and another for perceived skills, participants can assess themselves while watching a video of their design sessions (Safin et al., 2016). We can describe the challenges and skills of a design task through the analogy of a tennis game: when the ball is coming, participants face a challenge of the design problem, while the skill counterpart relates to the way one hits the ball back, resolving the problem via a design solution. This can be done individually or in collaboration. Later, in 2017, we developed the immersive retrospection technique (Beaudry-Marchand et al., 2017). Using an immersive video in Hyve-3D, participants can assess their key UX moments and "feel again" what happened during the activity. Then, using those key moments (changes in stress, flow, and control), we can conduct immersive interviews, giving participants the opportunity to explain why they felt the way they did. First, we used this technique to evaluate the UX of a museum installation, then to evaluate a co-design process. We found that there is a "first order of exchanges," where the participants talk during the activity, referring to design instructions and decisions. Then, during the immersive interviews, participants externalized a "second order of exchanges" that had been hidden in the co-design conversation (first order). This can be considered an externalization of the co-design cognition (Dorta et al., 2018). Another way this kind of social VR system can help in research is through "ethnography by telepresence" (Dorta et al., 2012). By interconnecting two HIS systems, one in France and the other in Belgium, we were able to be immersed in their representations and analyze participants' exchanges from an omnipresent point of view, without affecting
the activity. We observed a pedagogical setting with two teams of students and one professor during a project review. We also observed asymmetrically remote participants co-designing in a non-immersive setting of the Hyve-3D while, locally, we stood inside an immersive system. Moreover, the analysis can be asynchronous: we can review the scene immersively after the activity is done.
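The design flow states described earlier (stress, flow, control) can be read off a stream of (challenge, skill) self-assessments, such as the values sampled from the two MIDI sliders. This is a minimal sketch under the quadrant definitions given in the talk; the fixed threshold and the "apathy" label for the low/low quadrant come from Csikszentmihalyi's general model and are assumptions, not part of the published method.

```python
def design_flow_states(samples, threshold=0.5):
    """Label (challenge, skill) self-assessments, each scaled to [0, 1].

    stress: high challenge, low skill; flow: high/high;
    control: low challenge, high skill; apathy: low/low.
    """
    states = []
    for challenge, skill in samples:
        hi_c, hi_s = challenge >= threshold, skill >= threshold
        if hi_c and not hi_s:
            states.append("stress")
        elif hi_c and hi_s:
            states.append("flow")
        elif hi_s:
            states.append("control")
        else:
            states.append("apathy")
    return states
```

Plotting the resulting label sequence over a session is one way to surface the stress-to-flow-to-control patterns the technique identifies from field data.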
SOCIAL VR AS A PEDAGOGICAL TOOL

In 2012, we installed a HIS system in a professional design office in Milan and reviewed students' projects interconnected with another HIS in Montreal. At that time, students also prepared immersive videos of their projects. All the representations in the design studio were interrelated in a "representational ecosystem" (Dorta et al., 2016c), ranging from abstract mock-ups representing functional facets to immersive videos and rapid prototyping models. Another aspect was to use this representational ecosystem with laypersons: non-designers participated in the design of a library by making scaled models that were perceived in immersion using a 360-degree camera, making a sort of endoscopic incursion into the model. As further examples of social VR as a pedagogical tool, you can now see design studios using our social VR system, as in Wellington, New Zealand, where several students can be immersed at the same time. It is also in operation in Bordeaux, France, where they are using the system to teach theater and history courses. In my augmented design studio in Montreal, we ran projects internationally connected to Metz, France, while also reviewing students' projects connected with another system in Adelaide, Australia.
THE IMPACT OF IMMERSIVE CONTEXT

In the beginning of my talk, I showed some aerial photogrammetric images of the Hybridlab building in Montreal from Google Earth. The photogrammetric process allows you to build a 3D model by taking several overlapping pictures with some parallax. We collaborated with Victoria University of Wellington (New Zealand), where they scanned part of a town in China using a drone (which I showed during the workshop). At the CAADRIA 2016 Conference in Melbourne, we ran a workshop together using photogrammetry via Hyve-3D. We connected several systems to the same model, even some non-immersive systems at the same time, all interacting with the same representation. In 2018 we evaluated the impact of the virtual immersive context (using photogrammetry) on the design process in a pedagogical setting by comparing three modes: pen and paper, VR without context, and VR with context. The findings showed that participants engaged more with the immersive context using design gestures. We found participants sketched less in the immersive context, since traditionally they had to sketch the context to propose the idea (Beaudry-Marchand et al., 2018).

SOCIAL VR
Yesterday in this symposium we were talking about using VR headsets. For us, VR using headsets is an individual experience. For instance, just imagine a virtual world into which participants want to be teleported, while in fact they are in a "normal space." In that space they are looking in different directions, they don't see each other, and they are cut off from their bodies. When you go to a restaurant, it is to have a social and shared experience; when you go alone, you feel uncomfortable. The same is happening in movie theaters because, in fact, we have a desire to share that experience with others. Visiting some movie theaters nowadays, you can find 3D stereoscopic experiences in the form of "reliefs" (raised areas akin to embossing) coming off the screen, or "tunnel" theaters (with two lateral walls) where you watch the film from outside, through a frame.
In immersive movies that use HMD, which are shot with a particular setup where the technical crew is hidden from the scene, users can look around everywhere in the scene so the storytelling is affected. Users only need to see a particular part of the scene to follow
the narrative, not everywhere around. In Amsterdam, there are VR cinemas where you can experience movies individually. Couples can grab each other's hands, but they cannot point at the image and say, "Look at that!" They use chairs that can pivot 360 degrees. In 2016 we compared the Hyve-3D to VR headsets and found that 80 percent of participants missed at least one element of the scene when using an HMD. We also found more social interaction in the Hyve-3D experience compared to VR headsets (Dorta et al., 2016b). Of course, some solutions focus on avatars, like Facebook's, but I really prefer to see real faces with real expressions. Others even 3D-scan the headsets to be in the virtual world! It is important to see facial expressions, as in video games, but they are hidden by the headset. In the Hyve-3D you get a shared experience. It is hybrid because you have the real space, where you can use and point to the screens of other devices (phones, laptops, etc.), and then you have the immersive part, projected on the spherical concave screen. It is possible to perceive the information "emitted" by others. Furthermore, an environment's visual and audio information supports social interaction and spontaneous non-verbal communication. At Systèmes Hybridlab, we have been interconnecting several systems around the world to build a social VR network for co-design. There is always the possibility of collaborating asymmetrically, with immersive and non-immersive systems (using a laptop) working together. Since the system is mobile, some Hyve-3D users have placed it in museums and are interacting with the general public. We also started doing co-design workshops in several locations, for example, in Rome at eCAADe 2017 and in 2018 at ESTEED (Tunisia), where design students did their own photogrammetric
models. In Paris, non-designers designed a library in 2018, also engaging with physical mock-ups for better understanding; at eCAADe 2018 in Poland, always interconnecting with other Hyve-3D systems in Australia and Montreal; in 2018 in Nantes (France), where the room was very suitable, allowing the curtains to be taken off when there is no direct sunlight; in 2019 at CAADRIA in Wellington; and in Aalborg (Denmark), where we also made a photogrammetric model of the context, as in other workshops. For our workshop here at the Stuckeman Building we used that Aalborg model, since we didn't have time to make our own photogrammetric model. Those workshops always started with learning the co-design process using analog representations (pen and paper) before engaging the Hyve-3D. This summer (2019), we ran a school for professors at my university, once again involving non-designers as participants using mock-ups. The professors came from different disciplines such as cinema and nursing. The Hyve-3D system has had international and local visibility in mass media like Bloomberg and Fast Company, and in Canadian media like Radio-Canada. Thank you very much again for this invitation.
REFERENCES

Beaudry-Marchand, E., Han, X., Dorta, T. Immersive Videogrammetry as a UX Assessment Tool for Interactions in Museums. In: Fioravanti, A., Cursi, S., Elahmar, S., Gargaro, S., Loffreda, G., Novembri, G., Trento, A. (Eds.), ShoCK! - Sharing Computational Knowledge! Proceedings of the 35th eCAADe Conference - Volume 2, Sapienza University of Rome, Italy, 2017: pp. 729-738.

Beaudry-Marchand, E., Dorta, T., Pierini, D. Influence of Immersive Contextual Environments on Collaborative Ideation Cognition - Through design conversations, gestures, and sketches. In: Kepczynska-Walczak, A., Bialkowski, S. (Eds.), Computing for a Better Tomorrow, Proceedings of the 36th eCAADe Conference - Volume 2, Lodz University of Technology, Lodz, Poland, 2018: pp. 795-804.

Bourke, P. Spherical Mirror (Mirrordome) - A new approach to hemispherical dome projection. Planetarian, 2005; 34(4): pp. 5-9.

Cruz-Neira, C., Sandin, D., DeFanti, T. Surround-Screen Projection-Based Virtual Reality: The design and implementation of the CAVE. In: Proceedings of SIGGRAPH '93, Anaheim, CA, 1993: pp. 135-142.

Courchesne, L. Panoscope 360. In: Proceedings of the SIGGRAPH Conference, New Orleans, 2000.

Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience. Harper and Row, New York, 1990.

Dorta, T., Lalande, P. The Impact of Virtual Reality on the Design Process. In: Seebohm, T., Van Wyk, S. (Eds.) Digital Design Studios: Do Computers Make a Difference? Proceedings of the ACADIA 1998 Conference, Quebec, 1998: pp. 138-161.

Dorta, T. La Realidad Virtual Dibujada: Como una nueva manera de hacer computación. In: Llavaneras, G., Negrón, E. (Eds.) 1ra. Conferencia sobre aplicación de computadoras en Arquitectura, Proceedings of the conference, Caracas, December 1999: pp. 175-182.

Dorta, T. Drafted Virtual Reality: A new paradigm to design with computers. In: Lee, H., Choi, J. (Eds.) Proceedings of the CAADRIA 2004 Conference, Seoul, South Korea: CAADRIA, 2004: pp. 829-843.
Dorta, T., Pérez, E. Immersive Drafted Virtual Reality: A new approach for ideation within virtual reality. In: Luhan, G., Anzalone, P., Cabrinha, M., Clarke, C. (Eds.) Synthetic Landscapes, Proceedings of the ACADIA Conference, Louisville, KY, 2006: pp. 304-316.

Dorta, T. Implementing and Assessing the Hybrid Ideation Space: A cognitive artifact for conceptual design. International Journal of Design Sciences and Technology, 2007; 14(2): pp. 119-133.

Dorta, T., Pérez, E., Lesage, A. The Ideation Gap: Hybrid tools, design flow, and practice. Design Studies, 2008; 29(2): pp. 121-141.
Dorta, T., Lesage, A., Di Bartolo, C. Collaboration and Design Education through the Interconnected HIS: Immature vs. mature CI loops observed through ethnography by telepresence. In: Achten, H., Pavlicek, J., Hulin, J., Matejdan, D. (Eds.) Physical Digitality, Proceedings of the eCAADe 2012 Conference - Volume 2, Prague, Czech Republic, 2012: pp. 97-105.

Dorta, T., Kinayoglu, G., Hoffmann, M. Hyve-3D: A new embodied interface for immersive collaborative 3D sketching. In: Proceedings of the Conference and Installation ACM SIGGRAPH Studio, Vancouver, Canada: SIGGRAPH, 2014: p. 37.

Dorta, T., Kinayoglu, G., Hoffmann, M. Hyve-3D and Rethinking the "3D Cursor": Unfolding a natural interaction model for remote and local co-design in VR. In: Proceedings of the Conference and Installation ACM SIGGRAPH Studio, Los Angeles, CA: SIGGRAPH, 2015: p. 43.

Dorta, T., Kinayoglu, G., Hoffmann, M. Hyve-3D and the 3D Cursor: Architectural co-design with freedom in virtual reality. International Journal of Architectural Computing, 2016a; 14(2): pp. 87-102.

Dorta, T., Pierini, D., Boudhraâ, S. Why 360° and VR Headsets for Movies? Exploratory study of social VR. In: Proceedings of the 28th Conference Francophone sur l'Interaction Homme-Machine, Fribourg, Switzerland, 2016b: pp. 211-220.

Dorta, T., Kinayoglu, G., Boudhraâ, S. A New Representational Ecosystem for Design Teaching in the Studio. Design Studies, November 2016c: pp. 164-186.

Dorta, T., Beaudry-Marchand, E., Pierini, D. Externalizing Co-Design Cognition through Immersive Retrospection. In: Gero, J.S. (Ed.), Design Computing and Cognition, Springer, 2018: pp. 97-113.

Safin, S., Dorta, T., Pierini, D., Kinayoglu, G., Lesage, A. Design Flow 2.0 - Assessing Experience during Ideation with Increased Granularity: A proposed method. Design Studies, 47, November 2016: pp. 23-46.

Oettermann, S. The Panorama: History of a Mass Medium. Zone Books, New York, 1997.

Sutherland, I. E. Sketchpad: A man-machine graphical communication system.
In: Proceedings of the AFIPS Spring Joint Computer Conference, Washington, D.C.: Spartan Books, 1963: pp. 329-346.

Sutherland, I. E. A Head-Mounted Three-Dimensional Display. In: Proceedings of the AFIPS 1968 Fall Joint Computer Conference, Part I, 1968: pp. 757-764.

Vistisen, P., Luciani, D., Ekströmer, P. Sketching Immersive Information Spaces: Lessons learned from experiments in 'sketching for and through virtual reality'. In: Steinø, N., Kraus, M. (Eds.), eCAADe RIS 2019. Virtually Real. Immersing into the Unbuilt: Proceedings of the Seventh Regional International Symposium on Education and Research in Computer-Aided Architectural Design in Europe, Aalborg Universitetsforlag, 2019: pp. 25-36.
OPEN HOUSE: SCDC-Affiliated Research Projects

BIM + Optimization for Additive Construction
NAVEEN KUMAR MUTHUMANICKAM, KEUNHYOUNG PARK, NEGAR ASHRAFI, EDUARDO COSTA, SHADI NAZARIAN, JOSE DUARTE, ALI MEMARI
Mars Habitat Design: NASA Centennial Challenge
JOSE DUARTE, SHADI NAZARIAN, NAVEEN KUMAR MUTHUMANICKAM, KEUNHYOUNG PARK, NEGAR ASHRAFI, SAM DZWILL, ERIC MAINZER
Data-Driven Building Design: Using Multi-Objective Optimization
NAVEEN KUMAR MUTHUMANICKAM, TIM SIMPSON, JOSE DUARTE, LOUKAS KALISPERIS, LISA IULO, GORDON WARN
Advanced Computational Studio: The Rio Studio
FACULTY: JOSE P. DUARTE, TIM BAIRD, MARC L. MILLER, ESTHER OBONYO TA: DEBORA VERNIZ, SUKANYA RANADE, FAHIMEH FARHADI COLLABORATORS: DANIELLE OPREAN, ERIC MAINZER, EDUARDO C. COSTA
Form-Finding Exploration of Architectural Knitted Textiles Behavior FARZANEH OGHAZIAN, PANIZ FARROKHSIAR, FELECIA DAVIS
Shape Grammar in Designing 3D Architectural Knitted Textile Forms FARZANEH OGHAZIAN, PANIZ FARROKHSIAR, FELECIA DAVIS
Computationally Assessing Pollinator Habitat Resiliency TRAVIS FLOHR, DOUG MILLER, HONG WU, NASTARAN TEBYANIAN
P/Elastic Space
MONA MIRZAIEDAMABI SUPERVISORS: DARLA LINDBERG, FELECIA DAVIS
Robotic Apprentices
EDUARDO CASTRO E COSTA, JOSÉ DUARTE, SVEN BILÉN
Material Characterization for 3D Printing Concrete: Evaluating the Relationship Between Deposition and Layer Quality in Large-Scale 3D Printing NEGAR ASHRAFI, JOSE P. DUARTE, SHADI NAZARIAN, NICHOLAS MEISEL, SVEN BILEN
Computing Stitches and Crocheting Geometry
ÖZGÜÇ BERTUĞ ÇAPUNAMAN, CEMAL KORAY BINGÖL, BENAY GÜRSOY
Phototropic Fiber Composite Origami Structure: Arch 497 Architectural Responsive Fiber Composites JIMI DEMI AJAYI, FELECIA DAVIS, JULIAN HUANG, KAREN KUO, IN PUN, R.A. ZAINAB HAKANIAN
The Grammar of Late Period Funerary Monuments at Thebes ANJA WUTTE, JOSE DUARTE, PETER FERSCHIN, GEORG SUTER
Using Grammars to Trace Architectural Hybridity in American Modernism: The Case of William Hajjar’s Single-Family Houses in State College, PA
MAHYAR HADIGHI, JOSE DUARTE, LOUKAS KALISPERIS, JAMES COOPER, ALI MEMARI, ANNE VERPLANCK
Urban Form and Energy Demand: A Comprehensive Understanding Using Artificial Neural Networks MINA RAHIMIAN, JOSE DUARTE, LISA IULO, GUIDO CERVONE
Tooling Cardboard for Smart Reuse: A Digital and Analog Workflow for Upcycling Waste Corrugated Cardboard as a Building Material JULIO DIARTE, ELENA VAZQUEZ, MARCUS SHAFFER
Concrete Printing System Design: Six-Axis Robotic Concrete Printing NATE WATSON, NICHOLAS MEISEL, SVEN BILEN, JOSE DUARTE, SHADI NAZARIAN
Mycelium-Based Bio-Composites for Architecture: Assessing the Effects of Cultivation ALI GHAZVINIAN, PANIZ FARROKHSIAR, BENAY GÜRSOY
Understanding the Genesis of Form in Brazilian Informal Settlements: Towards a Grammar-Based Approach for Planning Favela Extensions in Steep Terrains in Rio De Janeiro DEBORA VERNIZ, JOSE DUARTE
Designing for Shape Change: 3D-Printed Shapes and Hydro-Responsive Material Transformations ELENA VAZQUEZ, BENAY GÜRSOY, JOSE DUARTE
BIM + Optimization for Additive Construction
NAVEEN KUMAR MUTHUMANICKAM 1, KEUNHYOUNG PARK 2, NEGAR ASHRAFI 3, EDUARDO COSTA 4, SHADI NAZARIAN 5, JOSÉ DUARTE 6, ALI MEMARI 7

1 PhD Candidate, Department of Architecture
2 PhD Candidate, Department of Civil Engineering
3 PhD Candidate, Department of Architecture
4 Post-Doctoral Researcher, SCDC
5 Associate Professor, Director of the Glass and Ceramics Laboratory
6 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
7 Professor and Bernard and Henrietta Hankin Chair in Residential Building Construction, Director of the PHRC
Autonomous construction assembly and 3D printing of concrete are becoming promising technologies for building structures in the near future. However, such technologies require design professionals to optimize the building design for the constructability constraints imposed by the robots and by 3D-printed material parameters, in addition to traditional design criteria. Recent advances in generative design show promising potential to leverage Building Information Modeling (BIM) for optimizing the building design for multiple objectives. Pennsylvania State University participated in the NASA 3D-Printed Mars Habitat Challenge, which had a Virtual Construction level focusing on leveraging BIM for the design of a Martian habitat, and an Actual Construction level focusing on the autonomous construction of a sub-scaled version of the design. This presentation demonstrates the underlying BIM process our team developed to identify the optimal design and perform a 4D simulation of the robotic construction process.

ADDITIVE MANUFACTURING
Additive Manufacturing (AM) leverages layer-by-layer deposition of material for rapid and efficient product manufacturing. Recent advances in CNC techniques and robotic arm innovations have made design for AM and part production much easier for researchers and industry professionals alike.
ADDITIVE CONSTRUCTION
Various software tools are used to accomplish each step in the process, such as generating the part geometry, slicing (dividing the geometry into individual layers), generating the toolpath along which the printer deposits material, and motion solvers to detect and avoid collisions. Taking advantage of recent advances in AM and 3D concrete printing research, layer-by-layer deposition of concrete using robotic arms enables rapid construction. However, unlike individual parts, buildings are a system of multiple interdependent systems, such as the envelope, structural, and Mechanical, Electrical and Plumbing (MEP) systems, handled by various design professionals. Traditionally, BIM software has been used as a platform to exchange 3D design data for these systems between multiple parties to arrive at a consensus. For additive construction, however, it is essential to extend the current capabilities of BIM to include data exchange between existing building design analysis tools and the slicing, toolpath design, and motion solver tools specific to AM. Based on a brief literature survey, very little work has been done on this front. Hence a novel BIM framework was developed that facilitates parametric generation of multiple building design options (using Dynamo for Revit), multi-disciplinary analysis (structural, environmental, and manufacturing), multi-objective optimization, and 4D simulation of the robotic construction process. Custom Application Programming Interfaces (APIs) were used for data exchange between the software tools, and an in-house algorithm developed in C# was used for multi-objective optimization.

Design for additive construction workflow: Generate Options → Select Design → Simulate Construction
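The slicing step named above, dividing geometry into individual layers, reduces to intersecting each triangle of the part mesh with a series of horizontal planes. A minimal sketch of that core operation, not the team's actual toolchain; function names are hypothetical:

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3 x 3 array of vertices) with the plane z = const.

    Returns the intersection segment as a list of 0 or 2 points; a slicer
    repeats this over every triangle and every layer height, then chains
    the segments into closed contours for toolpath generation.
    """
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = a[2] - z, b[2] - z
        if da * db < 0:                    # edge crosses the plane
            t = da / (da - db)             # interpolation parameter along the edge
            pts.append(a + t * (b - a))
    return pts

def layer_heights(z_min, z_max, layer_h):
    """Cutting planes from bottom to top, one per deposited layer."""
    return np.arange(z_min + layer_h / 2, z_max, layer_h)
```

Degenerate cases (a vertex or whole edge lying exactly in the plane) need extra handling in a production slicer; they are omitted here to keep the core idea visible.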
This research was supported by grants and prize money received from various stages of the NASA Mars Habitat 3D Printing Centennial Challenge. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Aeronautics and Space Administration (NASA). This is a collective effort from an interdisciplinary team comprising faculty members, student researchers, staff, and technicians from multiple departments across Penn State, namely Architecture, Architectural Engineering, Materials Engineering, Electrical Engineering, Structural Engineering, Additive Manufacturing, and Agricultural Engineering.
Khoshnevis, B 1998, 'Innovative rapid prototyping process makes large sized, smooth surfaced complex shapes in a wide variety of materials', Materials Technology, 13(2), pp. 52-63.
Kolarevic, B 2003, Architecture in the Digital Age: Design and Manufacturing, Spon Press, New York.
Mars Habitat Design: NASA Centennial Challenge

JOSÉ DUARTE 1, SHADI NAZARIAN 2, NAVEEN KUMAR MUTHUMANICKAM 3, KEUNHYOUNG PARK 4, NEGAR ASHRAFI 5, SAM DZWILL 6, ERIC MAINZER 7

1 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
2 Associate Professor, Director of the Glass and Ceramics Laboratory
3 PhD Candidate, Department of Architecture
4 PhD Candidate, Department of Civil Engineering
5 PhD Candidate, Department of Architecture
6 BAE Student, Department of Architectural Engineering
7 PhD Candidate, Department of Architecture
A team of interdisciplinary researchers from Penn State participated in the NASA 3D-Printed Habitat Challenge, competing to push the state of the art of additive construction to design and build sustainable shelters for humans to live on Mars. The challenge included two major levels: Virtual Construction (where the criterion was to leverage Building Information Modeling for design generation) and Actual Construction (where the criterion was to autonomously print a sub-scaled version of the virtual design using robots). Our design proposal included four modules, namely work, kitchen, sleep, and farming, with additional spaces for exercise and lounge facilities on the second floor of the sleep module. Each module is linked by a connector with well-defined safety doors, which can be sealed in case of emergency. All relevant sub-systems, such as life support; Mechanical, Electrical, and Plumbing; and entry and exit systems, were designed to make the habitat withstand Martian conditions (as described in the brief).
Render of Functionally Graded Material (FGM) transitioning from marscrete to cork to glass (left); microscopic scan showing transition in material composition (right).

This research was supported by grants and prize money received from various stages of the NASA Mars Habitat 3D Printing Centennial Challenge. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Aeronautics and Space Administration (NASA). This is a collective effort from an interdisciplinary team comprising faculty members, student researchers, staff, and technicians from multiple departments across Penn State, namely Architecture, Architectural Engineering, Materials Engineering, Electrical Engineering, Structural Engineering, Additive Manufacturing, and Agricultural Engineering.
43
SCDC ‘19
Longitudinal Section
Floor Plan | Second Floor
Floor Plan | First Floor
Assembly Sequence | Step 4
Rendering | Martian Habitat
Rendering | Lounge Space in Second Floor
Assembly Sequence | Step 3 Rendering | Work Module in Second Floor
Rendering | Kitchen and Dining Space
Assembly Sequence | Step 2
Rendering | Work Area in First Floor
Assembly Sequence | Step 1
Autonomous Construction | Process Simulation
Data-Driven Building Design Using Multi-Objective Optimization
NAVEEN KUMAR MUTHUMANICKAM1, TIM SIMPSON2, JOSÉ DUARTE3, LOUKAS KALISPERIS4, LISA IULO5, GORDON WARN6
1 PhD Candidate, Department of Architecture
2 Paul Morrow Professor of Engineering Design and Manufacturing and Co-director of the Center for Innovative Materials through Direct Digital Deposition
3 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
4 Emeritus Professor of Architecture
5 Associate Professor of Architecture, Director of the Hamer Center for Community Design
6 Associate Professor of Civil and Environmental Engineering
Orchestrating the various sub-systems in building design, such as structural, envelope, mechanical, electrical, and plumbing systems, makes it a complex multi-factor problem with multiple conflicting environmental, economic, social, and technical design objectives. Building designers therefore need to proactively evaluate large sets of design options to make better-informed decisions. In particular, all sub-systems must be optimized simultaneously to find an optimal solution that satisfies the designer's goals. However, it is difficult to establish design information about all sub-systems early in the design process, and hence most optimization efforts to date handle particular design phases with well-defined design information. The lack of interactivity to incorporate changes in design information (feedback) as the design progresses, and the limited data-exchange capabilities between analysis tools, have been identified as major drawbacks. We propose a novel optimization framework that uses a centralized server for enhanced data exchange between Building Information Modeling (BIM) tools and analysis tools, with an interactive front end that allows changes in design information as the design progresses.

Design options matrix showing optimal solutions (yellow) through progressive phases
Framework diagram | Conceptual, preliminary, and detailed phases: interactive user input drives generate-analyze-optimize loops for the structural, envelope, and mech-elec-plumb sub-systems through a centralized database, with daylighting, cooling load, and energy results explored in an interactive tradespace visualizer (sample readout for Design Option 12: daylighting factor 48%, energy consumption 345.5 K kWh/sq.m/yr, cooling load 97.34 K kWh/sq.m/yr).
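The tradespace exploration at the heart of this framework amounts to keeping only non-dominated design options. A minimal Python sketch of Pareto filtering, using hypothetical option data (the actual framework operates on BIM and analysis tools, not plain tuples):

```python
def dominates(a, b):
    """True if option a is at least as good as b in every objective
    (all objectives minimized here) and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Keep only the non-dominated design options."""
    return [a for a in options if not any(dominates(b, a) for b in options if b != a)]

# Hypothetical options as (energy use, cooling load, negated daylight factor),
# each expressed so that smaller is better.
options = [(345.5, 97.3, -0.48), (300.0, 110.0, -0.40),
           (360.0, 95.0, -0.50), (400.0, 120.0, -0.30)]
print(pareto_front(options))
```

The last option is dominated (worse on all three objectives than the second) and drops out; the first three form the tradespace a designer would explore interactively.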
This research was supported by NSF grants CMMI-1455424 and CMMI-1455444, RSB/Collaborative Research: A Sequential Decision Framework to Support Trade Space Exploration of Multi-Hazard Resilient and Sustainable Building Designs. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Advanced Computational Studio: The Rio Studio
FACULTY: JOSÉ DUARTE1, TIM BAIRD2, MARC L. MILLER3, ESTHER OBONYO4
TA: DEBORA VERNIZ5, SUKANYA RANADE6, FAHIMEH FARHADI7
COLLABORATORS: DANIELLE OPREAN8, ERIC MAINZER9, EDUARDO C. COSTA10
1 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
2 Professor of Landscape Architecture
3 Assistant Professor of Landscape Architecture
4 Associate Professor, Director, Global Building Network
5 PhD Candidate, Department of Architecture
6 MS Student, Department of Landscape Architecture
7 MS Student, Department of Landscape Architecture
8 Post-Doctoral Researcher, SCDC
9 PhD Candidate, Department of Architecture
10 Post-Doctoral Researcher, SCDC
Globally, cities are experiencing significant population growth coupled with fast urbanization processes, most notably in the global south. This migration toward urban areas puts enormous pressure on cities, giving rise to numerous social, infrastructural, ecological, and urban problems. The formal sector is unable (or unwilling) to provide enough appropriate dwellings for this growing population, who then resort to self-help processes, giving rise to informal settlements. These settlements are affordable, spatially rich, and adjusted to the site, but they lack infrastructure and public spaces. This advanced studio addresses informal settlements in Rio de Janeiro, Brazil, investigating the complex problems associated with rapid and informal development using a computational design approach. The first iteration of the studio took place in 2017 and focused on the redesign of Santa Marta (Figure 1).
Figure 1 | View of Favela Santa Marta
Figure 2 | Students Presenting Their Proposals at Santa Marta
The studio was concerned with the requalification and expansion of the community, and its primary goal was to develop a methodology to support design. The results of the studio were presented to the local community of Santa Marta in May of 2017. The second iteration explored issues of informal settlement and scarcity in Rio; it focused on organizing a new site for occupation while speculating how a site might be articulated to provide the greatest amount of access with the least amount of oversight or control. The third iteration took a proactive approach, designing infrastructure systems for settlements. This presented a host of problems and questions for the design team, reflecting the complicated relationship between the city dwellers who reside in favelas and the formal social, political, and planning structures that put these people and their homes at risk. At the end of each semester, students have the opportunity to travel to Brazil, present their proposals either to Santa Marta's residents (Figure 2) or at UFRJ (Federal University of Rio de Janeiro) (Figure 3), and visit some Brazilian architecture (Figure 4).
Figure 3 | Students Presenting Their Proposal at UFRJ
Figure 4 | Students Visiting Brazilian Modern Architecture
VIRTUAL ENVIRONMENTS
Architects are increasingly asked to work globally. However, architectural design rests on a deep understanding of the site and of the people for whom one is designing, and many architects state that good design stems from a correct interpretation of the design context, supported by site visits. This becomes a difficult endeavor when the site is in a remote location. This studio explores the hypothesis that technological aids, such as standard and 360-degree still images and movies, 2D plans, 3D digital models, and, specifically, immersive virtual environments (VEs), can adequately simulate the experience of visiting the site, thereby enabling architects to work on remote locations with significant cost and time savings. Preparatory work was concerned with the 3D reconstruction of the favela using cutting-edge scanning technology (Figure 5). The resulting model was then used as a basis to develop a basic VE. The array of digital aids developed for the studio will be tested in the learning environment (Figure 6). The VE model is used to introduce students in the United States to the site (Figure 7 and Figure 8). The model should enable students to navigate through the site and gather information similar to some of the information that architects normally gather during site visits. The VE might later be used to test the design solutions developed by students. At the end of the studio, students may actually visit the site, enabling them to compare their initial immersive experience with the real-world one and, therefore, verify the effectiveness of the developed VE in conveying the site to designers.
Figure 5 | Digital Model
Figure 6 | Physical Model
Figure 7 | Immersive Model
Figure 8 | Immersive Model
This studio draws on expertise and results from an ongoing research project, Decoding and Recoding Informal Settlements, which explores the use of digital technology to overcome limitations and promote social integration in informal settlements.
Form-Finding Exploration of Architectural Knitted Textile Behavior
FARZANEH OGHAZIAN1, PANIZ FARROKHSIAR2, FELECIA DAVIS3
1 PhD Candidate, Department of Architecture
2 MS Student, Department of Architecture
3 Associate Professor of Architecture, Director of SoftLab
VISION Considering knit as a material for architectural design raises two related problems: uncertainty in the behavior of knit, and the lack of a digital graphical interface, understandable by architects, in common three-dimensional software applications like Rhino. The former makes it hard to predict the behavior of regular and irregular 3D forms made of knitted textile, while the latter hinders architects from investigating the behavior of knit material further. This study explores the behavior of knitted structures and develops an algorithm for simulating knit behavior as a material for designing architectural forms. STEPS/METHOD This practice-based research study comprises two parts, physical modeling and digital graphical simulation: the behavior of knitted material is explored through physical models, and the results are translated into a digital algorithm that can be used for further development of architectural forms. The final digital algorithm was developed through a consecutive process of making four families of knitted textile forms: single tubular, tubular with a rectangular head, conic, and multiple conic shapes. A process for the digital simulation of knitted textiles is presented; it comprises five steps: 1. Generating the Base Grid 2. Designing Low-Poly Meshes 3. Smoothing Low-Poly Meshes 4. Simulating Physical Behavior 5. Generating the Final Algorithm
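Step 4, simulating physical behavior, is typically done with a particle-spring solver (e.g., Kangaroo in Grasshopper). The following standalone sketch of such a relaxation on a small quad grid of stitches is an assumption about the general approach, not the authors' actual algorithm:

```python
import math

def relax(points, springs, pinned, rest, k=0.2, gravity=-0.05, iters=500):
    """Jacobi-style relaxation: every spring pulls its endpoints toward
    its rest length, gravity pulls free vertices down in z, and pinned
    vertices never move. Returns the settled vertex positions."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        forces = [[0.0, 0.0, gravity] for _ in pts]
        for i, j in springs:
            d = [pts[j][a] - pts[i][a] for a in range(3)]
            length = math.sqrt(sum(c * c for c in d)) or 1e-9
            f = k * (length - rest) / length  # tension per unit vector
            for a in range(3):
                forces[i][a] += f * d[a]
                forces[j][a] -= f * d[a]
        for v, fv in enumerate(forces):
            if v not in pinned:
                for a in range(3):
                    pts[v][a] += fv[a]
    return pts

# A 2 x 3 grid of "stitches": top row pinned, the rest hangs and stretches.
points = [(x, 0.0, -y) for y in range(3) for x in range(2)]
springs = [(0, 1), (2, 3), (4, 5),          # courses (horizontal)
           (0, 2), (1, 3), (2, 4), (3, 5)]  # wales (vertical)
pts = relax(points, springs, pinned={0, 1}, rest=1.0)
```

With the top row pinned, the free vertices sag until spring tension along the wales balances the gravity term, which is the qualitative behavior the physical knit models exhibit under tension.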
Fig 1. A quad mesh is selected because it better represents the stitch and supports rules such as increasing and decreasing the number of stitches.
Fig 2. Exploration of different mesh divisions, mesh shapes, and joints at the macro scale using the WeaverBird plug-in (stages 1, 2, 3)
NEWS
• The shape and knitting procedure of the selected overall forms are developed through the application of shape grammar theory.
• The role of height and looseness, as independent variables, in the behavior of the specified forms is investigated.
• A method for translating the behavior of the physical models into digital inputs is developed, considering the shape of the knitted textile, the joints, and the patterns of knitting.
CONTRIBUTION The main contributions of this research are:
• Developing a design tool and method using knitted textile as a material for architectural applications.
• Developing an algorithm for simulating knitted textiles considering the behavior of the material.
Shape Grammar in Designing 3D Architectural Knitted Textile Forms
FARZANEH OGHAZIAN1, PANIZ FARROKHSIAR2, FELECIA DAVIS3
1 PhD Candidate, Department of Architecture
2 MSArch Student, Department of Architecture
3 Associate Professor of Architecture, Director of SoftLab
VISION This research study concerns the implementation of shape grammar theory in designing four families of knitted structures based on the form and the behavior of the material. These forms are tubular, tubular with a rectangular head, conic, and double conic structures. In designing a 3D form with knitted textile, given the respective roles of form and force in determining the final shape, there is a spectrum of design alternatives ranging from full force to full form. Our models in this study belong to the middle of this spectrum, where the desired form is knitted in a semi-3D shape and tension force gives the textile its final form. The rules of knitting are therefore informed by both the desired form and the behavior of the knitted material under tension.
Fig 1. Spectrum of Form/Force equilibrium for developing 3D forms.
STEPS/METHOD The method comprises four trial-and-error steps to find the appropriate logic and rules for knitting specific forms: 1. Process of Knitting 2. Procedure 3. Unrolled Surfaces 4. Shape Grammar SHAPE GRAMMAR AND DEFINING 3D FORMS There are different approaches to implementing shape grammar in the design of knitted structures. We used shape grammar rules to develop seamless structures such as:
• Individual forms with different geometrical properties (tubular, conic)
• Aggregates of multiple shapes with the same geometrical properties (multiple cones)
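The rule-based logic for a conic form can be paraphrased as a simple rewriting system in which an increase rule periodically widens each course (row) of stitches. This is an illustrative reading of the grammar, not the authors' actual rule set, and all parameter values are hypothetical:

```python
def knit_cone(start_stitches, courses, increase_every=2, increases=4):
    """Stitch counts per course for a conic form: every
    `increase_every`-th course applies the increase rule (adding
    `increases` evenly spaced stitches); all other courses apply
    the identity rule (plain knitting)."""
    rows = [start_stitches]
    for c in range(1, courses):
        if c % increase_every == 0:
            rows.append(rows[-1] + increases)  # increase rule
        else:
            rows.append(rows[-1])              # plain rule
    return rows

print(knit_cone(8, 7))  # [8, 8, 12, 12, 16, 16, 20]
```

Aggregating multiple cones then amounts to repeating the same rule sequence side by side and joining the final courses.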
Fig 3. Application of rules for developing part of tubular forms.
Fig 4. Unrolled diagrams for developing individual forms with different geometrical properties.
Fig 5. Application of shape grammar for aggregating forms with similar geometrical properties.
CONTRIBUTION The main contributions of this research are:
• Developing a design tool and method using knitted textile as a material for architectural applications.
• Developing an algorithm for simulating knitted textiles considering the behavior of the material.
ACKNOWLEDGEMENT This project has been supported by the Stuckeman Collaborative Design Research Award, and the Agnes Scollins Carey Early Career Professorship.
Computationally Assessing Pollinator Habitat Resiliency
TRAVIS FLOHR1, DOUG MILLER2, HONG WU3, NASTARAN TEBYANIAN4
1 Assistant Professor of Landscape Architecture
2 Research Professor, Director of the Center for Environmental Informatics
3 Assistant Professor of Landscape Architecture
4 PhD Candidate, Department of Architecture
This project presents a new computational planting design tool for assessing urban pollinator habitat quality and resiliency. Worldwide, pollinator populations are crashing, a decline driven in large part by losses of habitat and plant diversity. Design computing and geospatial analyses can facilitate a better understanding of existing habitat quality and of the impacts that proposed planting schemes have on pollinators, helping landscape architects efficiently design healthy, connected, and resilient landscapes in the face of climate change.
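One computable proxy for habitat quality is the continuity of floral resources across the season. The sketch below scores a planting scheme by its bloom-calendar coverage; the species data and scoring logic are hypothetical illustrations, not the project's actual assessment model:

```python
def bloom_coverage(planting, season=range(4, 11)):
    """Fraction of the months in `season` (April through October here)
    in which at least one species in the planting scheme is in bloom."""
    covered = {month for months in planting.values() for month in months}
    return sum(1 for month in season if month in covered) / len(season)

# Hypothetical bloom calendars, keyed by species, as sets of month numbers.
planting = {
    "golden alexanders": {4, 5, 6},
    "wild bergamot": {6, 7, 8},
    "new england aster": {8, 9, 10},
}
print(bloom_coverage(planting))  # 1.0 -> continuous floral resources
```

A real assessment would weight this by spatial connectivity and species redundancy, but the same month-by-month bookkeeping underlies those refinements.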
FRAMEWORK
EXISTING POLLINATOR HABITAT ASSESSMENT
EXISTING POLLINATOR HABITAT ASSESSMENT (Cont.)
PLANTING DESIGN RESILIENCY ASSESSMENT
P/Elastic Space
MONA MIRZAIEDAMABI1
SUPERVISORS: DARLA LINDBERG2, FELECIA DAVIS3
1 PhD Candidate, Department of Architecture
2 Professor of Architecture
3 Associate Professor of Architecture, Director of SoftLab
The purpose of this project is to experiment with fabric as an architectural material in order to develop a building intervention that works with an existing gridded structure to provide space: a practical ornamentation out of fabric, shadow, and comfort. Taking advantage of the elasticity of fabric opens up the possibility of building an always-changing structure. This change might result from users' decisions through their experience in the space, or it might happen in response to environmental conditions such as climate and light. The focus of the project is on designing a framework of parameters that allows users to change their physical environment according to their needs and desires: an architectural space that users define through the designed parameters. Considering the acoustic and light-filtering aspects of fabric, a fabric sub-structure is embedded in the grid structure to accommodate work/live schedules by providing a variety of public and private spaces and by bringing various types of light into the space. The light filtering and space making are achieved by investigating the shape of the sub-structure and the quality of the fabric of which it is made. In future applications, this project can assist in designing flexible social work environments that support contemporary fluid work/life schedules.
Robotic Apprentices: Leveraging Augmented Reality for Robot Training in Manufacturing Automation
EDUARDO CASTRO E COSTA1, JOSÉ DUARTE2, SVEN BILÉN3
1 Post-Doctoral Researcher, SCDC
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
3 Professor of Engineering Design, Electrical Engineering, and Aerospace Engineering, Head of SEDTAPP
Applications of digital technologies such as Augmented Reality (AR), Machine Learning (ML), and Robotic Manufacturing (RM) have developed rapidly in recent years due to increased availability and falling costs. These technologies are widely considered enablers of the next industrial revolution, known as Industry 4.0. AR can potentially improve operator productivity, while ML enables better use of the production data generated in industrial processes. Finally, RM bears the potential of both enhancing labor capacity and reducing labor costs in industrial manufacturing. In spite of the decreasing cost of RM, the complexity of programming and operating a robotic arm can still constitute an obstacle to integrating such practice in particular industries such as Architecture, Engineering, and Construction (AEC), which take longer to adopt technological innovation. Another characteristic of such industries is that they deal with customized manufacturing scenarios, requiring more flexibility than their mass-production counterparts. In these scenarios, a robot needs to learn more than just how to repeat a sequence of tasks. Typically, industrial robots are programmed by technicians or engineers whose expertise is programming robots rather than performing the tasks that the robot is being programmed for. We hypothesize, alternatively, that by "learning" from an experienced manufacturing-line operator, the robot is likely to perform a manufacturing task better than by being "trained" by a robot expert. We therefore propose exploring a system in which a robot can learn particular tasks by mimicking the actions of a manufacturing-line operator, similarly to how an apprentice learns from a master.
In the role of master, the operator performs their normal tasks while wearing an Augmented Reality Headset (ARH), which enables capturing data related to the actions needed for completing those tasks. The captured data would comprise a geometrical mesh corresponding to the workspace and the objects in it, and the poses of the operator’s hands, both of which can be captured by current commercial ARH solutions. In an initial phase, the captured data can be used, either after the fact or in real time, to directly instruct the robot how to perform the task by repeating the operator’s actions. In a subsequent phase, the data captured by ARH can be used to train the robot through Machine Learning, enabling it to perform the task in contexts that have not been captured during the training. A possible case study to support this research is an ongoing Penn State project that focuses on additive manufacturing of concrete structures. METHODOLOGY
The first step toward this objective is to establish a connection between an augmented reality headset and a robotic arm. We are currently experimenting with a Microsoft HoloLens AR headset (version 1) and an ABB IRB 2400 robotic arm with 6 degrees of freedom and a maximum payload of 16 kilograms. For prototyping the connection between these two elements, we are using Grasshopper, a visual programming interface for the CAD application Rhinoceros. These two pieces of software are popular in the AEC industry given their reduced cost and gentle learning curve. Additionally, we need three plug-ins for Grasshopper: Fologram, which establishes the connection between the HoloLens and Grasshopper; HAL Robotics, which provides inverse kinematics and collision detection; and Machina, which enables real-time control of the robotic arm. The Grasshopper definition performs a number of tasks, identified by groups of components, namely positioning the virtual robot, defining the tool, acquiring augmented reality data, calculating geometric data for the robot to follow, calculating joint angles through inverse kinematics, compiling robot actions, sending data to the robot controller in real time, and sending visualization data back to the headset. RESULTS
We have been testing at Digifab, the Digital Fabrication Lab of the Stuckeman School of Architecture and Landscape Architecture. Besides the aforementioned HoloLens and ABB robot, the hardware comprises an ABB IRC5 controller and a VR-ready Alienware 15 laptop. In our current setup, the robot mimics the movements of the user across a vertical mirroring plane. In augmented reality, the user is able to visualize the movements of both the existing physical robot and its virtual counterpart. Even though the HoloLens is able to track the user's hands without any markers, such tracking is limited to position only. The attached ArUco cubes enable more precise positional tracking, as well as determining orientation. Reportedly, the improved hand-tracking capabilities announced for the next-generation HoloLens should obviate the need for such cubes. At the moment, hands must be within the user's field of vision in order to be tracked.
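The mirroring behavior described above, in which the robot repeats the user's hand motion across a vertical plane, reduces to reflecting each tracked position. A standalone sketch of that step (the actual pipeline runs inside Grasshopper with Fologram, HAL Robotics, and Machina):

```python
def mirror_across_plane(point, plane_x=0.0):
    """Reflect a tracked hand position across the vertical plane
    x = plane_x, yielding a target for the robot's tool center point."""
    x, y, z = point
    return (2 * plane_x - x, y, z)

# Two hypothetical hand samples (meters) streamed from the headset.
hand_path = [(0.40, 0.10, 0.90), (0.35, 0.12, 0.95)]
robot_path = [mirror_across_plane(p) for p in hand_path]
print(robot_path)
```

In the actual setup these targets would still pass through inverse kinematics and collision checking (HAL Robotics) before being streamed to the controller (Machina).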
Material Characterization for 3D-Printing Concrete: Evaluating the Relationship Between Deposition and Layer Quality in Large-Scale 3D Printing
NEGAR ASHRAFI1, JOSÉ DUARTE2, SHADI NAZARIAN3, NICHOLAS MEISEL4, SVEN BILÉN5
1 PhD Candidate, Department of Architecture
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
3 Associate Professor, Director of the Glass and Ceramics Laboratory
4 Assistant Professor of Engineering Design and Mechanical Engineering
5 Professor of Engineering Design, Electrical Engineering, and Aerospace Engineering, Head of SEDTAPP
Scaling up Additive Manufacturing (AM) for automated building construction requires expertise from different fields of knowledge, including architecture, material science, engineering, and manufacturing, to develop processes that work for practical applications. OBJECTIVES
Schematic of Printing System and Setup
While concrete is a viable candidate for printing due to its common use in building, it raises important challenges in deposition due to the material deformation that occurs as concrete transitions from fresh to hardened states. This study aims to experimentally quantify the deformation of printed concrete layers under the influence of different processing variables, including layer thickness, layer width, and printing time. The goal is to develop a software tool to design toolpaths that compensate for material deformation.
PRINTING SYSTEM
The 3D printer used in this study consists of a pump (Duo-mix 2000) for extruding the material and an industrial 6-axis robotic arm (ABB IRB 6640) that provides a wide range of movement across its axes.
Schematic of Material Deformation in Concrete Printing
Generally, three main sources of deformation act on an extruded concrete layer: its self-weight, the weight of the following layer(s) printed on top of it, and the extrusion pressure. To study the behavior of 3D-printed concrete, three sets of experiments were designed to study the layer height and width deformation along with the effect of printing time.
METHODOLOGY
In the first set of experiments, four layers were printed to study the material deformation under the load of multiple layers. In each layer, the nozzle distance to the previous layer was measured at 10 selected points to identify the layer deformation. The results of these tests were used to design a new toolpath that takes the layer height deformation into account. After designing the new toolpath, the same process was followed to print new specimens.
Layer Height Deformation After Printing Several Layers
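The compensation strategy can be sketched as raising each layer's nozzle height to offset the settlement of the layers below. The numbers and the calibration function here are hypothetical, not the measured data:

```python
def compensated_heights(layer_count, nominal, deformation):
    """Nozzle z-height per layer when each printed layer settles by
    `deformation(n)` millimeters under the load of n layers above it.
    The nozzle is placed at the actual (deformed) top of the stack
    plus one nominal layer height."""
    heights = []
    for layer in range(layer_count):
        deformed_stack = sum(nominal - deformation(layer - i)
                             for i in range(layer))
        heights.append(deformed_stack + nominal)
    return heights

# Hypothetical calibration: 0.3 mm settlement per loading layer, capped at 1 mm.
deform = lambda n: min(0.3 * n, 1.0)
print(compensated_heights(4, nominal=10.0, deformation=deform))
```

Without compensation the nozzle would sit at exact multiples of the nominal height and progressively drag through the settled material; here each target drops slightly below those multiples to follow the real surface.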
The Comparison of Layer Height Between the Previous Toolpath and New Toolpath
In the second set of experiments, 1, 2, 3, 4, and 5 beads of concrete were printed in 5 layers and layer width was measured at 10 selected points to identify the layer deformation. The results were used to find a relationship between the number of printed layers and number of beads.
Printing Length: 2000mm Printing time: 11.83 sec
Layer Width Deformation After Printing Several Layers
Printing Length: 4000mm Printing time: 23.6 sec
Printing Length: 13000mm Printing time: 76.9 sec
In the last set of experiments, 3 squares were printed at different scales to represent different printing times. The layer height deformation was measured at selected points to identify the layer deformation in relation to printing time.
Computing Stitches and Crocheting Geometry
ÖZGÜÇ BERTUĞ ÇAPUNAMAN1, CEMAL KORAY BINGÖL2, BENAY GÜRSOY3
1 PhD Candidate, Department of Architecture
2 Instructor, Istanbul Bilgi University
3 Assistant Professor of Architecture, Director of ForMat Lab
This research explores the computability of the craft of crochet. The development process consists of two main stages: an analytical and systematic approach to crocheting three-dimensional objects to discover the underlying computational aspects, and a formal representation of the deduced computational logic in the form of a computer algorithm. During the first stage, different materials, tools, and techniques were systematically experimented with in order to understand their effects on the process of crocheting three-dimensional objects. These experiments and comparative analyses resulted in a set of rules and relations determining the geometric properties of the crocheted pieces. This set of rules was then used in the development of a computer algorithm that generates crochet patterns from three-dimensional objects modeled in CAD software. Constraints and opportunities discovered in the first stage constituted the variables of the algorithm, including determinate variables such as material properties (yarn weight) and tool size (crochet hook), and indeterminate variables such as the effect of the crafter's hand (grip on the yarn) while producing stitches. Additionally, during this phase, several workshops were conducted with groups of different experience levels in both computation and crocheting, in order to gain further insight into the computational variables of crocheting as well as to test the validity of the algorithm. The output of the developed computer algorithm is a crochet pattern that enables materialization of the digital simulation by hand. The overall process is thus a transition from digital simulation to physical object, in which physical variables continuously inform the algorithm and shape the outcome.
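The core of such a pattern generator maps each horizontal slice of the digital model to a stitch count and expresses row-to-row differences as increases or decreases. A simplified sketch, in which the gauge value standing in for yarn weight, hook size, and hand tension is hypothetical:

```python
import math

def crochet_pattern(radii, stitch_width=0.7):
    """Turn per-row radii (from horizontal slices of a 3D model) into
    stitch counts and increase/decrease instructions. `stitch_width`
    is the gauge: it stands in for yarn weight, hook size, and the
    crafter's hand tension."""
    counts = [max(3, round(2 * math.pi * r / stitch_width)) for r in radii]
    pattern = []
    for row, (prev, cur) in enumerate(zip(counts, counts[1:]), start=2):
        delta = cur - prev
        op = f"inc {delta}" if delta > 0 else (f"dec {-delta}" if delta < 0 else "even")
        pattern.append((row, cur, op))
    return counts, pattern

radii = [1.0, 1.5, 2.0, 1.5]  # a bulging, vase-like slice profile
counts, pattern = crochet_pattern(radii)
print(counts)   # [9, 13, 18, 13]
print(pattern)
```

The indeterminate variables the project identifies (such as grip on the yarn) would show up here as per-crafter adjustments to `stitch_width`.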
Cross-referencing of the different-sized swatches and outputs of the algorithm.
Abstractions for crocheting rules and their materialization on crocheted surfaces (plain stitch, increase, decrease).
The steps of the crochet pattern generation algorithm.
ACKNOWLEDGEMENTS This research was submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Industrial Design at İstanbul Bilgi University in 2016 and later published in CAAD Futures 2017: Çapunaman Ö.B., Bingöl C.K., Gürsoy B. (2017). Computing Stitches and Crocheting Geometry. In Computer-Aided Architectural Design: Future Trajectories. CAAD Futures 2017. Communications in Computer and Information Science, vol. 724. Springer, Singapore.
The four surface topologies used and the crocheted outcomes of each topology.
Phototropic Fiber Composite Origami Structure
Arch 497: Architectural Responsive Fiber Composites
JIMI DEMI AJAYI1, FELECIA DAVIS2, JULIAN HUANG3, KAREN KUO4, IN PUN5, R.A. ZAINAB HAKANIAN6
1 LARCH, Department of Landscape Architecture
2 Associate Professor of Architecture, Director of SoftLab
3 LARCH, Department of Landscape Architecture
4 LARCH, Department of Landscape Architecture
5 BArch, Department of Landscape Architecture
6 MArch I, Department of Landscape Architecture
Our team of landscape architects and architects developed a responsive fiber composite foldable structure by embedding conductive and resistive yarns into a fiberglass knit fabric. We used origami as a method to make folds in the fabric, allowing the structure to collapse flat. We hoped to make a lightweight portable structure that could take on different shapes when clipped and positioned; it could be used as a shelter in a landscape setting or as a portable structure. Our primary innovation and contribution lies in the introduction of simple electronic components to make an e-textile that communicates through the fabric via LEDs. We embedded conductive thread to carry current up a length of fiberglass knit, which could then carry an electronic signal to a series of LEDs sewn onto the front side of our origami project. These LEDs were connected to a photocell that turns them on and off according to the light level: in bright daylight the LEDs are off, and as evening arrives they turn on. We were inspired by Joe Choma and his students' origami structure at Clemson University, as well as the origami structure for the African Burial Ground Memorial and Museum by Felecia Davis, which communicates its message through its surface (Testado, 2017; Davis, 2000). We decided to try a smaller, flexible version to which we could also add some light-sensing capacity.
Testado, J. "See How Joseph Choma Built the 'Chakrasana' Arch Using His Fiberglass Folding Technique," Sept. 2017. https://archinect.com/news/article/150026343/see-how-joseph-choma-built-the-chakrasana-arch-using-his-fiberglass-folding-technique
Davis, F. "Un Covering/Re Covering: Memorial and Museum for the African Burial Ground," in White Papers, Black Marks: Architecture, Race and Culture, ed. L. Lokko. Routledge, U.K., 2000.
The Grammar of Late Period Funerary Monuments at Thebes
ANJA WUTTE1, JOSÉ DUARTE2, PETER FERSCHIN3, GEORG SUTER4
1 Researcher, SCDC; PhD Candidate, Department of Architecture, TU Wien; Member of the Center for Geometry and Computational Design (GCD) and the Digital Architecture Research Group
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
3 Assistant Professor of Digital Architecture and Planning at TU Wien, Member of the GCD and Head of the Digital Architecture Research Group
4 Associate Professor of Digital Architecture and Planning at TU Wien, Member of the GCD and Head of the Design Computing Research Group
CONCEPTS
• Analysis Methods to Study Ancient Egyptian Funerary Architecture
• Procedural Modeling of Architectural Heritage
• Shape Grammar
ABSTRACT This project examines the funerary architecture of ancient Egypt using interdisciplinary methods from architecture, computer science, and Egyptology. Late Period private rock-cut monuments of Thebes are analyzed in terms of the tradition and evolution of their geometry and architecture. The primary purpose is to examine design principles and building parameters of ancient Egyptian architecture. To fulfill this aim, private funerary monuments of Thebes from the 25th and 26th Dynasties are compared. The analyses include semi-automatic methods to study circulation and accessibility paths, natural lighting paths, proportions, and decoration concepts of the funerary monuments. The derived parameters of the building elements can be considered design or shape rules and define a procedural modeling system of the buildings' design characteristics, enabling new perspectives on the design concepts of these funerary monuments.
The tombs were modeled as conceptual HBIM models to examine the rooms for adjacencies, light conditions, and consistencies. Data for the models was gathered from excavation reports and architectural documentation such as floor plans. With a C++ implementation for AutoCAD (by G. Suter, TU Wien), the space source data was processed and converted into graph-based layout views representing architectural adjacencies, pedestrian circulation, and the natural lighting system.
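The graph-based layout views can be illustrated with a minimal room-adjacency graph and a breadth-first accessibility measure; the room names below are placeholders, not the actual tomb data:

```python
from collections import deque

def access_depths(adjacency, entrance):
    """Breadth-first traversal of a room-adjacency graph, returning each
    room's depth (number of thresholds crossed) from the entrance, a
    simple circulation and accessibility measure."""
    depths = {entrance: 0}
    queue = deque([entrance])
    while queue:
        room = queue.popleft()
        for nxt in adjacency[room]:
            if nxt not in depths:
                depths[nxt] = depths[room] + 1
                queue.append(nxt)
    return depths

# Placeholder layout: entrance -> court -> hall -> shrine / side chamber
adjacency = {
    "entrance": ["court"],
    "court": ["entrance", "hall"],
    "hall": ["court", "shrine", "side chamber"],
    "shrine": ["hall"],
    "side chamber": ["hall"],
}
print(access_depths(adjacency, "entrance"))
```

The same graph, annotated with lighting and decoration data per node, is the kind of structure the comparison tool described below can shade and cross-reference between tombs.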
SCDC ‘19
A user-friendly SPACE MODEL COMPARISON TOOL, which includes spatial graphs, tomb information, and color shading to identify the spatial qualities of room components, allows the user to locate correlations between the buildings. Additionally, information about the occurrence and distribution of decoration scenes was included. The user can choose a decoration category and scene through a user interface to see its location in the tombs, which also makes it possible to identify relationships between access, light, and decoration in the monuments. Furthermore, corresponding rooms of the monuments were compared in terms of their dimensions and spatial proportions to check for a consistent proportion system. To study the proportions of the buildings, the ancient Egyptian measuring unit, the cubit, was used.
CURRENT WORK AND EXPECTED RESULTS To describe and learn the existing design language of the funerary monuments, a shape grammar is being written. The grammar captures the compositional conventions of the tombs' designs and generates existing designs as well as new, hypothetical ones. It is expected that the analytic grammar will not only represent general design strategies but also provide insight into the typology of Late Period rock-cut funerary monuments. In a future project, other sets of ancient Egyptian buildings could be analyzed with their own shape grammars to allow comparison, which could lead to a better understanding of the design process of ancient Egyptian architects.
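As a small illustration of the proportion analysis, converting measured room dimensions to cubits can be sketched as below. The royal cubit length of roughly 0.525 m is a standard approximation; the sample dimensions are invented for illustration, not measurements from the study:

```python
ROYAL_CUBIT_M = 0.525  # approximate length of the ancient Egyptian royal cubit

def to_cubits(metres, cubit=ROYAL_CUBIT_M):
    """Convert a measured room dimension in metres to cubits."""
    return metres / cubit

def proportion_ratio(width_m, depth_m):
    """Width and depth of a room in whole cubits; rounding to whole
    cubits first can reveal the proportion the builders intended."""
    w = round(to_cubits(width_m))
    d = round(to_cubits(depth_m))
    return w, d

# Hypothetical hall measured at 5.25 m x 10.5 m -> 10 x 20 cubits, a 1:2 proportion
print(proportion_ratio(5.25, 10.5))
```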
FURTHER READING
A. Wutte, P. Ferschin, G. Suter, "Excavation Goes BIM: Building Analysis of Egyptian Funerary Monuments with Building Information Modeling Methods," CHNT, Vienna, 2015.
A. Wutte, "Data Visualization of Decoration Occurrence and Distribution: A Comparative Study of Late Egyptian Funerary Decoration in Thebes," Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage, Vienna, 2018.
A. Wutte, P. Ferschin, "Proportion Analysis of Ancient Egyptian Funerary Monuments." (in preparation)
A. Wutte, P. Ferschin, G. Suter, "Spatial Analysis of Egyptian Late Period Private Funerary Monuments at Thebes." (in preparation)
Using Grammars to Trace Architectural Hybridity in American Modernism
The Case of William Hajjar's Single-Family Houses in State College, PA
MAHYAR HADIGHI1, JOSÉ DUARTE2, LOUKAS KALISPERIS3, JAMES COOPER4, ALI MEMARI5, ANNE VERPLANCK6
1 PhD Candidate, Department of Architecture
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
3 Emeritus Professor of Architecture
4 Associate Professor of Architecture
5 Professor and Bernard and Henrietta Hankin Chair in Residential Building Construction, Director of the PHRC
6 Associate Professor of American Studies, School of Humanities, Penn State Harrisburg
Hajjar House by William Hajjar
Hansen House by William Hajjar
ABSTRACT The purpose of this study is to analyze William Hajjar’s single-family houses in State College, PA, within the context of the traditional architecture of an American college town. This analysis is performed using shape grammars as a computational design methodology. Hajjar was a member of the architecture faculty at Penn State, a practitioner in State College, and an influential figure in the history of architecture in the area. Shape grammars are used specifically to verify and describe the influences of traditional American architecture on Hajjar’s domestic architecture. The focus is on establishing Hajjar’s single-family architectural language and comparing it to the traditional architectural language of the context.
A Traditional House in the Area
METHODOLOGY This study is part of larger-scale research in which Hajjar's domestic architecture is compared with the modern and traditional architecture of his time, in the following steps:
1. Tracing Hajjar's life and practice to identify likely influences on his work
2. Developing a shape grammar for the houses he designed in State College
3. Identifying or developing grammars for some of his likely influences
4. Comparing Hajjar's grammar to the grammars of these influential works to determine the nature and extent of such influences
5. Identifying aspects of the social and technological context that may explain such influence, e.g., available technology
A. WILLIAM HAJJAR, A FACULTY MEMBER, RESEARCHER, AND PRACTITIONER AT PENN STATE
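Step 4, the grammar comparison, can be caricatured in a few lines: if each grammar is reduced to a set of rule labels, the rules shared between two grammars hint at influence. The rule names below are purely hypothetical placeholders, not rules from Hajjar's actual grammar or from any documented traditional grammar:

```python
def influence_overlap(grammar_a, grammar_b):
    """Compare two shape grammars represented as sets of rule labels.
    Returns the shared rules and the share of grammar_a they account for."""
    shared = grammar_a & grammar_b
    return shared, len(shared) / len(grammar_a)

# Hypothetical rule labels, for illustration only
hajjar = {"add_core", "attach_garage", "extend_wing", "split_level"}
traditional = {"add_core", "attach_porch", "attach_garage", "gable_roof"}
shared, ratio = influence_overlap(hajjar, traditional)
print(sorted(shared), ratio)
```

In the actual study the comparison is of course structural, not a label intersection, but the set-based framing conveys what "extent of influence" means computationally.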
Urban Form and Energy Demand
A Comprehensive Understanding Using Artificial Neural Networks
MINA RAHIMIAN1, JOSÉ DUARTE2, LISA IULO3, GUIDO CERVONE4
1 PhD Candidate, Department of Architecture
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
3 Associate Professor of Architecture
4 Professor of Geography, Associate Director of the Institute for Computational and Data Science
ABSTRACT To fully comprehend how the spatial and physical structure of communities impacts their energy demands, the multidimensionality of their urban form needs to be considered. However, due to computational limitations and the lack of data-rich environments, past research has neglected urban form's complexity and produced limited outcomes. In contrast to the deductive and abstract approaches of the previous literature, this study uses machine learning to factor in the many dimensions associated with the spatial structure of urban form and to explore their combined effect on building energy demand. The main advantage of using machine learning for knowledge discovery here lies in its capacity to handle non-normal data distributions and to model non-linear relationships.
METHODOLOGY Nineteen energy-relevant indicators of urban form were selected, related to density, compactness, diversity, green areas, orientation, passivity, and shading (Figure 1).
FIGURE 1: NINETEEN SELECTED VARIABLES OF URBAN FORM
Using parametric algorithms and GIS methods of spatial analysis, measurements of these nineteen indicators were extracted for all zip codes in San Diego County, California. Moreover, monthly values of energy consumption for each zip code, aggregated since 2012, were acquired from San Diego's main utility company. Figure 2 shows part of the dataset, which is then prepared for training an artificial neural network in order to find the relational pattern between urban form and energy consumption. In the synthesized dataset, the urban-form quantities act as the predictor variables and the monthly values of energy consumption are the response variable.
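A toy version of assembling the synthesized dataset, joining per-zip-code urban-form indicators with aggregated energy readings, could look like this in pandas. The zip codes, indicator values, and kWh figures are all made up for illustration:

```python
import pandas as pd

# Hypothetical miniature of the synthesized dataset
form = pd.DataFrame({
    "zip": [92101, 92102, 92103],
    "floor_space_index": [2.1, 1.4, 0.9],
    "green_space_density": [0.10, 0.22, 0.35],
})
energy = pd.DataFrame({
    "zip": [92101, 92101, 92102, 92102, 92103, 92103],
    "kwh": [410.0, 395.0, 520.0, 505.0, 610.0, 600.0],
})

# Aggregate the monthly readings to a mean per zip, then join with the predictors
monthly_mean = energy.groupby("zip", as_index=False)["kwh"].mean()
dataset = form.merge(monthly_mean, on="zip")
print(dataset)
```

The predictor columns then feed the neural network and `kwh` is the response.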
The dataset is then cleaned to prepare it for training. But first, to gain insight into the distribution and pattern of the data, it is useful to visualize it. The pair plot in Figure 3 shows the different relationships between the urban form variables.
FIGURE 2: PART OF THE SYNTHESIZED DATASET
In addition to providing insight into how the dataset is distributed, visualizing it helps with detecting outliers, data points that differ significantly from the other observations. After cleaning the data and ensuring its quality, the dataset is ready for analysis. Not all independent variables in a dataset impact the response variable; feature selection picks the right subset of features, improving the accuracy of the learned model. Omitting irrelevant independent variables reduces potential noise in the dataset and lowers the chance of overfitting the model. After omitting irrelevant features, an artificial neural network was trained on the dataset. Figure 4 shows the architecture of the artificial neural network as well as a plot evaluating the performance of the model by comparing its predictions to the true values of the test dataset.
FIGURE 4: THE ANN ARCHITECTURE + COMPARING THE PREDICTION RESULTS TO THE TEST RESULTS
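The feature-selection and training pipeline can be sketched with scikit-learn on synthetic data. The univariate F-test selector, the network size, and the synthetic dataset are illustrative assumptions, not the study's actual configuration or data:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 "communities" x 19 urban-form indicators
X = rng.normal(size=(200, 19))
# The response depends on only three indicators, plus a little noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 1.5 * X[:, 7] + rng.normal(scale=0.1, size=200)

# Feature selection: keep the k indicators most associated with the response
selector = SelectKBest(f_regression, k=3).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("selected indicators:", np.flatnonzero(selector.get_support()))
print("R^2 on test set:", round(model.score(X_te, y_te), 3))
```

On this synthetic data the selector recovers exactly the three informative indicators, mirroring how irrelevant urban-form variables are dropped before training.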
Findings of the trained model show that, among the nineteen variables of urban form, only the following have predictive value for community-wide energy consumption: floor space index, layer, aspect ratio, volumetric compactness, mixed-use index, green space density, community-building orientation, sky view obstruction, and passivity ratio.
Tooling Cardboard for Smart Reuse
A Digital and Analog Workflow for Upcycling Waste Corrugated Cardboard as a Building Material
JULIO DIARTE1, ELENA VAZQUEZ2, MARCUS SHAFFER3
1 PhD Candidate, Department of Architecture
2 PhD Candidate, Department of Architecture
3 Associate Professor of Architecture
Here we describe a hybridized digital and analog workflow for reusing waste corrugated cardboard as a building material. The work explores a combination of digital design and analog fabrication tools to create a workflow that helps designers/builders negotiate the material variability of waste cardboard. The workflow discussed here was implemented for designing and fabricating a variety of building parts using different sheets of waste cardboard combined with plywood.
STEP 1: Material Collection
STEP 2: Material Documentation
STEP 3: Building Parts Design with Parametric Tools
The implementation shows that combining digital and analog tools can create a novel approach to material reuse and facilitate a design/fabrication culture of smart reuse, one that supports informal building and making at recycling collection centers in developing countries as a source of housing alternatives.
STEP 4.1: Fabrication of Building Parts (Floor Panels)
STEP 4.2: Fabrication of Building Parts (Concrete Formwork)
STEP 4.3: Fabrication of Building Parts (Acoustic Panels)
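One way the parametric design step might negotiate sheet-thickness variability, using the measurements from the material-documentation step, is sketched below. The greedy selection routine and the measured thicknesses are hypothetical, not the workflow's actual algorithm:

```python
def pick_sheets(thicknesses_mm, target_mm, tol_mm=0.5):
    """Greedily select cardboard sheets (whose measured thicknesses vary)
    so the laminated stack approaches a target panel thickness."""
    stack, total = [], 0.0
    for t in sorted(thicknesses_mm, reverse=True):
        if total + t <= target_mm + tol_mm:
            stack.append(t)
            total += t
        if abs(total - target_mm) <= tol_mm:
            break  # within tolerance of the target panel thickness
    return stack, round(total, 2)

# Hypothetical measured sheet thicknesses in millimetres
measured = [4.1, 3.9, 4.3, 3.8, 4.0]
print(pick_sheets(measured, target_mm=12.0))
```

The point of the sketch is that documentation (Step 2) turns an unreliable waste stream into data a parametric tool (Step 3) can design against.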
Concrete Printing System Design
Six-Axis Robotic Concrete Printing
NATE WATSON1, NICHOLAS MEISEL2, SVEN BILÉN3, JOSÉ DUARTE4, SHADI NAZARIAN5
1 MS Student in Additive Manufacturing and Design
2 Assistant Professor of Engineering Design and Mechanical Engineering
3 Professor of Engineering Design, Electrical Engineering, and Aerospace Engineering, Head of SEDTAPP
4 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
5 Associate Professor, Director of the Glass and Ceramics Laboratory
Mycelium-Based Bio-Composites for Architecture
Assessing the Effects of Cultivation
ALI GHAZVINIAN1, PANIZ FARROKHSIAR2, BENAY GÜRSOY3
1 PhD Candidate, Department of Architecture
2 MS Student, Department of Architecture
3 Assistant Professor of Architecture, Director of ForMat Lab
Population growth increases the demand for construction. However, with the dominant “produce, use, discard” framework, the construction industry is not providing sustainable solutions, and our natural resources are becoming scarce. These changes necessitate a search for renewable materials or for alternative ways of using existing resources in construction. Research in the field of bio-design has started to offer promising solutions to this problem. Bio-design practices integrate “biology-inspired approaches to design and fabrication.” Architects and designers collaborate with biologists and materials scientists to explore how they can incorporate biomaterials into their design processes in the search for more sustainable solutions. Biomaterials are materials that are grown using living systems, such as bacteria, algae, and fungi, rather than manufactured. The growth process of these materials significantly affects the material properties; therefore, growth becomes part of the design process.
One of the most commonly used biomaterials in bio-design is mycelium, the fibrous root system of fungi made of hyphae. Mycelium-based composites result from the growth of mycelium on organic substrates under controlled environmental conditions. The result is a lightweight, foam-like matter that is capable of bearing load and can be fully composted. The aim of this research is to explore new ways to use mycelium-based materials in temporary and/or low-rise construction by enhancing the material properties and by developing unreinforced masonry structural systems using computational form-finding techniques. The initial studies of this research aim to assess the effects of cultivation factors; the effect of the substrate types and supplements used on the compressive strength of mycelium-based composites is reported here.
There are three main stages in this process: inoculation, cultivation/forming, and drying/heating. The material properties of mycelium-based composites depend on various cultivation factors: the strains of fungi used for inoculation, the substrate size and type, the environmental conditions during growth, the time of growth and forming, the forming procedure, and the processing techniques.
Working with living systems in design fields proposes a novel ontological and epistemological framework for design. First, because the materials are generated—grown—by designers themselves, designers can control and tweak material properties as part of the design process. This implies a design process where the form is not imposed on the material from above but the growth parameters and resulting material properties can help determine the form. Second, because growth processes are predominantly unpredictable, the inherent uncertainties of these growth processes can inform the design process, and unexpected discoveries can be made, which can benefit the outcome.
Six different mixtures of grey oyster mushroom spawn and three different substrate types (sawdust, straw, and a mixture of half sawdust and half straw), with and without supplement, have been compared. The compressive strength tests of the mycelium-based bio-composite samples were conducted with a 22-kip Instron testing machine. The results of the tests are shown below.
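For reference, compressive strength is simply the peak load divided by the loaded cross-section. The sample dimensions and failure load below are illustrative values, not measured results from this study:

```python
def compressive_strength_mpa(peak_load_kn, width_mm, depth_mm):
    """Compressive strength = peak load / loaded cross-section.
    1 kN = 1000 N, and 1 N/mm^2 = 1 MPa."""
    area_mm2 = width_mm * depth_mm
    return peak_load_kn * 1000.0 / area_mm2  # N/mm^2, i.e. MPa

# Hypothetical 50 mm cube sample failing at 0.75 kN
print(round(compressive_strength_mpa(0.75, 50, 50), 2))
```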
We would like to express our gratitude to John Pecchia and Fabricio Vieira from the Mushroom Research Center at Penn State for their support during the cultivation process, and to Ali Memari and Anand Swaminathan from the Department of Architectural Engineering at Penn State for their support during the compressive strength tests. This research has been partially funded by Stuckeman Center for Design Computing interdisciplinary faculty research grants in 2018–2019. Image credits: Paniz Farrokhsiar
Understanding the Genesis of Form in Brazilian Informal Settlements
Towards a Grammar-Based Approach for Planning Favela Extensions in Steep Terrains in Rio De Janeiro
DEBORA VERNIZ1, JOSÉ DUARTE2
1 PhD Candidate, Department of Architecture
2 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
INTRODUCTION The lack of housing is a worldwide problem. Informal settlements emerge as a result of self-construction; in Brazil, they are called favelas. Although favelas are a result of social and historical problems, they have resisted and grown in number and size during more than a century of existence, often occupying the limits of the formal city. Their urban fabric exposes an economic logic that optimizes the use of construction resources. Although favelas are not planned, they show an attractive visual configuration and have become an emblematic symbol of Rio de Janeiro. The physical attributes of favelas located on steep terrains inside the city can provide a basis for the development of a grammar-based computational approach to designing social housing settlements as a “planned favela.” This research uses favela Santa Marta as a case study (Fig. 1). By analyzing a favela’s physical attributes through a quality assessment tool, and by determining which changes can be made to these attributes, the research proposes physical attributes for that “planned favela.”
FIGURE 1: VIEW OF FAVELA SANTA MARTA
METHODOLOGY
1. Review of the literature related to informal settlements and quality standards for housing.
2. Data collection on the case study, favela Santa Marta.
3. Modeling Santa Marta by creating digital, physical, and immersive models (Fig. 2, Fig. 3, Fig. 4, and Fig. 5).
4. Creating a computational model of Santa Marta in the form of an analytic shape grammar (Fig. 6 and Fig. 7).
5. Evaluating Santa Marta with a quality assessment tool.
6. Data collection on the planned favela, by offering a workshop in which specialists on informal settlements (architects and urban planners) proposed physical changes to the favela urban fabric, and evaluating the resulting proposals with the quality assessment tool.
7. Proposing the planned favela shape grammar that compiles the results from the workshop (Fig. 8).
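The analytic grammar is far richer than any short snippet, but the mechanics of applying a placement rule such as the simplified R1 of Fig. 6 can be sketched on a unit grid. The coordinates, the 4-neighbour adjacency test, and the rule logic below are illustrative assumptions, not the study's actual rules:

```python
def apply_rule_r1(buildings, candidate_lots):
    """Simplified stand-in for a placement rule: put a new building on the
    first free lot that is adjacent to an existing building."""
    occupied = set(buildings)
    for lot in candidate_lots:
        if lot in occupied:
            continue
        x, y = lot
        # adjacency on a unit grid (4-neighbourhood)
        if any((x + dx, y + dy) in occupied
               for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]):
            return buildings + [lot]
    return buildings  # rule not applicable: no adjacent free lot

existing = [(0, 0), (1, 0)]
lots_along_path = [(2, 0), (3, 0), (4, 0)]
print(apply_rule_r1(existing, lots_along_path))
```

A derivation (Fig. 7) is then just the repeated application of such rules until no rule matches or the terrain is filled.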
FIGURE 2: PHYSICAL MODEL
FIGURE 3: DIGITAL MODEL
FIGURE 4: IMMERSIVE MODEL
FIGURE 5: IMMERSIVE MODEL
FIGURE 6: SIMPLIFIED RULES TO PLACE A NEW BUILDING (R1 TO R4).
FIGURE 7: SIMPLIFIED DERIVATION USING THE COMPUTATIONAL MODEL.
FIGURE 8: WORKSHOP IN RIO DE JANEIRO
EXPECTED RESULTS This research intends to use shape grammar theory to study Brazilian informal settlements and to develop a framework for planning favela expansions for housing settlements. It is hoped that this framework may constitute a step towards the development of alternative approaches to expanding existing favelas. Although favelas are a result of social and historical problems, they have resisted and grown in number and size for more than a century. Their physical configuration is visually attractive and has become an emblematic symbol of Rio de Janeiro. This research builds on existing shape grammar theory on stylistic evolution by applying it to move from an existing favela to a planned expansion. The ultimate goal is to extract design guidelines to follow in the design of new settlements in contexts similar to those where favelas are usually found. The idea is to produce urban environments that are as affordable as favelas and have the same kind of spatial richness, but avoid their flaws.
Designing for Shape Change
3D-Printed Shapes and Hydro-Responsive Material Transformations
ELENA VAZQUEZ1, BENAY GÜRSOY2, JOSÉ DUARTE3
1 PhD Candidate, Department of Architecture
2 Assistant Professor of Architecture, Director of ForMat Lab
3 Professor of Architecture and Landscape Architecture and Stuckeman Chair in Design Innovation, Director of the SCDC
Over the past years, architects and designers have become increasingly interested in shape-changing materials as a way to design more efficient and adaptable systems at various scales. Shape change in these systems is usually actuated by a change in environmental conditions (e.g., heat, humidity, lighting) and is embedded in the properties of the materials used, without the need for any electronic or mechanical control. One of the greatest challenges of designing with these materials is their dynamic nature, which dares architects to design with the fourth dimension, time. The question then is: how can we design with this dynamic behavior and gain enough control over material transformations to incorporate shape change into an iterative design process?
This research aims to develop a formal methodology for designing with the dynamic behavior of shape-changing materials, establishing a framework that cycles between abstraction and materialization processes. The framework developed (Figure 1) integrates material transformations into visual computing (Figure 2). The framework was developed through a case study in which we 3D-printed hydro-responsive wood-based composites. We first developed custom 3D-printing protocols and analyzed the effects of 3D-printing parameters on shape change. In the following stage, we 3D-printed kirigami-inspired geometries to amplify the material transformation of the wood-based composites and selected the most promising ones (Figures 3 and 4). The selected prototypes were studied to track shape change using 3D scanning (Figure 5). Through these experiments, we show how geometry, materials, fabrication settings, and activation conditions are integral constituents of shape-changing systems.
FIGURE 1: PROPOSED MODEL
FIGURE 2: RULES FOR SHAPE CHANGE
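For intuition about hydro-responsive bending, a back-of-the-envelope bilayer estimate can be sketched using Timoshenko's classic bimetal result, specialized to two layers of equal thickness and stiffness. The strain mismatch and strip dimensions below are assumed values for illustration, not measurements or the calibrated model from this study:

```python
import math

def bilayer_curvature(strain_mismatch, total_thickness_mm):
    """Timoshenko bimetal curvature, specialised to two layers of equal
    thickness and stiffness: curvature = (3/2) * mismatch strain / thickness."""
    return 1.5 * strain_mismatch / total_thickness_mm  # 1/mm

def bend_angle_deg(curvature_per_mm, length_mm):
    """Arc angle a strip of the given length subtends at this curvature."""
    return math.degrees(curvature_per_mm * length_mm)

# Hypothetical print: 1 mm thick strip, 60 mm long, 2% hygroscopic strain mismatch
kappa = bilayer_curvature(0.02, 1.0)
print(round(bend_angle_deg(kappa, 60.0), 1))
```

The sketch makes the design levers visible: thinner prints and longer strips bend further for the same moisture-driven strain mismatch, which is why print parameters and geometry are treated as integral parts of the shape-changing system.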
FIGURE 3: SELECTED PROTOTYPES FROM STUDY
FIGURE 4: PROTOTYPE FOR A RESPONSIVE SKIN SYSTEM
FIGURE 5: TRACKING SHAPE-CHANGE WITH 3D SCANNING
This publication is available in alternative media on request. The University is committed to equal access to programs, facilities, admission, and employment for all persons. It is the policy of the University to maintain an environment free of harassment and free of discrimination against any person because of age, race, color, ancestry, national origin, religion, creed, service in the uniformed services (as defined in state and federal law), veteran status, sex, sexual orientation, marital or family status, pregnancy, pregnancy-related conditions, physical or mental disability, gender, perceived gender, gender identity, genetic information, or political ideas. Discriminatory conduct and harassment, as well as sexual misconduct and relationship violence, violates the dignity of individuals, impedes the realization of the University’s educational mission, and will not be tolerated. Direct all inquiries regarding the nondiscrimination policy to Dr. Kenneth Lehrman III, Vice Provost for Affirmative Action, Affirmative Action Office, The Pennsylvania State University, 328 Boucke Building, University Park, PA 16802-5901; Email: kfl2@psu.edu; Tel 814-863-0471. U.Ed. ARC 21-43