Craftsmanship and Risk in Robotic Fabrication


Craftsmanship and Risk in Robotic Fabrication: Reimagining industrial workflows for interactive fabrication

A Master’s Thesis submitted in partial fulfillment of the requirements for the degree of

Master of Science in Computational Design Carnegie Mellon University - School of Architecture 2014


This master’s project, completed by Brian Smith, and entitled

Craftsmanship and Risk in Robotic Fabrication: Reimagining industrial workflows for interactive fabrication, has been approved with respect to its intellectual content and contributions.

Advisors:

Prof. Joshua Bard
Prof. Ramesh Krishnamurti



table of contents

abstract 7
overview 9
  craftsmanship & risk
  computer-aided manufacturing (CAM) vs digital fabrication
new fabrication 17
  control / input
  interface
  error management
case studies & precedents 21
  assistive fabrication
  crafted fabrication
contributions 31
  fall 2013
  spring 2014
  fall 2014
speculation 51
conclusions 55
appendix 59


abstract

Abstract: In recent years Digital Fabrication has experienced rapid growth and widespread acceptance in Architecture. With it comes the ability to design and create in ways that were never possible before. However, there is a major drawback that is not being addressed and that, in many critics’ eyes, discredits the merits of using digital fabrication: many of the tools are designed for productivity and repetition, not creativity and design. Because of this, there is a disconnect with the typical process of designing and making found in Architecture. In an attempt to investigate and argue for a solution, I am taking a Craftsmanship approach to Digital Fabrication by emphasizing the importance of the skilled hand. Through the hand, Risk is introduced into a process that is typically methodical and predetermined, transforming the typical uni-directional fabrication process into one that is interactive and haptic. Through a series of case studies and several experiments in Robotic Fabrication, I hope to gain insight and support for the argument that introducing risk into a digital workflow not only enhances the resulting output, but can become an integral part of the design process. While the resulting object might have a unique quality that distinguishes it from objects created using standard fabrication processes, the primary goal of this research is to discover how the process of designing and making the object can be changed. Initially this research is situated within an educational scope of Robotic Fabrication in Architecture, but it has an outward-facing trajectory with implications for other fields and fabrication processes in the commercial world.


overview


Overview : Craftsmanship and Risk

Craftsmanship and Risk: History

Architecture has a rich history of portraying the architect in a multi-functional role. The term Master Builder is often used to describe the architect as having multiple roles that engage a design at various stages from concept to construction. It was not uncommon for an architect to have first been a skilled tradesman who later adopted the role of architect. The architect had intricate knowledge of how to make and build, and leveraged this knowledge in all stages of the design process. In many cases “… apprenticeship to a craft like carpentry or bricklaying was the background for many of the first professional architects in America” [From Craft to Profession 53]. These highly skilled, hands-on processes taught invaluable knowledge that was evident in all aspects of the design process. It can be argued that this embedded knowledge is what allowed the Master Builder architects of old to create the masterpieces that are still occupied and studied today. “Frampton reminds us that through the mechanism of skill, the builder (like the carver) engages with the internal forces of material; these, in turn, provide a set of constraints that test and shape the building” [Thinking Through Craft 101].

figure 1.1 : interior design sketch that illustrates an abstract design concept in drawn form


Today, however, much of this knowledge has been lost. Architects rarely have the hands-on knowledge of the craft professions that historically was the norm. This is largely due to changes in the design process, and to the drastically different methods of fabrication and construction available today. It would be nearly impossible for a designer to have adequate knowledge of all the fields found in Architecture today. Drastic differences in laws and licensure mean that the Master Builder definition of the architect is no longer applicable; however, approaching Architecture and Design with a Craftsmanship-like ideology begins to embed the knowledge of making back into the design process.

Design vs Workmanship

The distinction between Design and Workmanship is one that is often confused and misinterpreted, the terms sometimes being used interchangeably. In this research I am treating them as related but distinctly separate ways of thinking and making. As David Pye states, “Design is what, for practical purposes, can be conveyed in words and by drawing: Workmanship is what, for practical purposes, cannot” [Pye 162]. In an Architectural sense, Design is the expression of an idea or concept through drawing and modeling, whereas Workmanship is the act of realizing a design in the real world. In a sense, the Designer generates a set of instructions that are then carried out by the Worker, resulting in the conceptual idea taking physical form. Typically, the goal of any design is to express its ideas in the clearest way possible, so that when the design goes to fabrication there is little room for misinterpretation and error. If the set of instructions is detailed enough, the result, whether it is a building or an object, is nearly guaranteed and in most cases required to match the design perfectly (within a slight degree of tolerance). This process front-loads the risk, putting it almost entirely on the design, which is evident in the way that Architects have to carry significant insurance for their designs. When a building fails, the blame typically works its way back to the architect / designer. This process has several large flaws, the most significant being that the designer is


figure 1.2 : an array of hand tools used by the craftsman - demonstrates the importance of the skilled hand in craftsmanship

figure 1.3 : metal fabricator ‘producing’ an object. workmanship describes

removed from the physical world. Despite having some common knowledge of how to make, Design often lacks the intricacies of how things are made, which can only be understood through gained experience. With today’s advances in technology and industry, the dichotomy of Design and Workmanship continues to be accentuated, widening the gap between “design conception / workmanly execution, abstract design / concrete object, designer / worker” [A Theory of Craft 166]. Speed and efficiency are driving less to be built by hand and more by machine. This poses a troubling problem in which the act (and therefore the knowledge) of making gets further and further removed from the act of design.



Craftsmanship

Situated somewhere between Design and Workmanship is Craftsmanship. Traditionally defined as “the quality of design and work shown in something made by hand,” the term is often misused and generalized to describe anything that is well made. For this research I am setting aside the traditional definition in favor of one written by David Pye. In his book, The Nature and Art of Workmanship, he defines it as: “The workmanship of risk [Craftsmanship] would be workmanship using any kind of technique or apparatus in which the quality of the result is not predetermined, but depends on the judgement, dexterity, and care which the maker exercises as he works. The essential idea is that the quality of the result is continually at risk during the process of making.” He makes the distinguishing point that Craftsmanship is not defined by the quality of the objects produced, but instead by the fact that the end result is not predetermined. Ultimately the skill of the craftsman’s hands can greatly influence the design of the object, and allows a subtle feedback loop between the object and the hand to emerge. As the master craftsman works with a material, they are constantly getting visual, tactile, and sensory feedback that allows them to modify their tool motion and mentally update the design. In doing so they ultimately create a designed object that is unique to the current piece of material, sensitive to its inherent features and flaws. This non-deterministic quality introduces the element of Risk into the process of making. The outcome of the resulting object is defined not only by the initial Design, but by the interpretation and execution of the maker’s hand. In contrast, a fabricator working with the same material would simply execute the design, which could result in a malformed or inaccurate object. This occurs because the role of the fabricator is to produce exactly what is specified.
There is no interpretation needed, and despite having a high level of skill learned through using the hand, there is no impetus for the fabricator to challenge the design. This is an important distinction to make, because with it comes the responsibility for the object that is made. No longer is the maker there to simply execute the design. Instead, the maker is now an integral part of the design process. At any moment the design can completely change based on a single stroke of the hand.



CAM and Digital Fabrication: While many people try to make a distinction between Computer-Aided Manufacturing and Digital Fabrication, I feel there is only a minimal difference between the two. In a sense, Digital Fabrication can be classified as a subset of CAM in that it relies on the same machines, and the two are nearly indistinguishable while an object is being made. The difference lies primarily in their respective design priorities and focus, but both ultimately rely on the same processes.

CAM (Industrial & Manufacturing):
- Minimize Risk
- Automation
- Repetition & Efficiency
- Accuracy / Precision
- Emphasis on ‘Manufacturing’
- Long Term Cost
- Time

Digital Fabrication (Architecture & Design):
- Somewhat minimize Risk
- Iteration
- Customization
- Accuracy / Precision (often with larger tolerances)
- Design Tool & Process
- Focuses on ‘Design’
- Techniques: tessellation, sectioning, folding, contouring, forming



Overview : Computer-Aided Manufacturing vs Digital Fabrication

Computer-Aided Manufacturing (CAM)

CAM is the process of controlling any machine tool with a computer. Widely used in the industrial and manufacturing sectors, it is typically associated with processes that require high levels of precision and accuracy. Its repeatability is favorable to any workflow where the ‘production’ of an object is the primary goal and the result is expected to fall within a minimal margin of error. There is little to no risk in the outcome because the machine simply executes the commands sent to it by the computer. For this very reason, once the process is started, there is minimal to no human interaction required, and in many cases, as a safety feature, the user is required to remain entirely outside of the workspace. Throughout every process that uses CAM, the primary goal is to minimize Risk through: Automation, Repetition & Efficiency, Accuracy & Precision, and Long Term Cost.

figure 2.1 : the first numerically controlled milling machine was created in 1942 to produce a metal punch for helicopter rotors

figure 2.2 : first computer-controlled milling machine was designed at MIT in 1952

With rapid advances in control and the technologies used in these machines, they continue to be at the forefront of making. Currently the limiting factor for many of these machines and processes is that they are extremely cost- and space-prohibitive. Many require entire shops to house them, and can only be set up by a trained installer / integrator. In recent years, however, these costs have been going down and these tools are becoming more readily available to the public.

figure 1.1 : CNC multi-axis milling machine

figure 1.2 : Metal milling machine

figure 1.1 : Water jet cutting machine


figure 1.2 : 6+ axis robotic arms


Digital Fabrication

Digital fabrication is currently a ‘buzz’ word in the maker and architectural worlds. A multitude of projects throw it around, often incorrectly. In most cases, Digital Fabrication is used to describe any process that requires customized and unique parts to be made, yet the machines that are used are those found in CAM. This begs the question, “What is Digital Fabrication and what separates it from CAM?” In my opinion the two are more closely related than people tend to admit, but the main difference is in the process. Digitally Fabricated projects rely on CAM tools because they call for objects that could not be produced with any other method. Despite having different goals and priorities, when it comes down to

figure 2.3 : digital fabrication is situated within COMPUTER-AIDED MANUFACTURING because it uses the same machines and processes, but with a focus on customization

the fabrication workflow (which this research is focused on), they are one and the same. Digital Fabrication is currently defined by the separation of the processes of design and making. During the design process there are some considerations regarding how to leverage the capabilities of a machine to best suit the design, but in the end there is a missing relationship and a distinct separation between the two processes that is epitomized by hitting a “Start” button. At that point the role of the maker is to simply sit back and observe as the machine carries out the making process. There is no interplay or interaction between the two. No feedback system allows for the subtle changes to the design or the making that are inherent in all other ‘hand’ making processes. The start button on the machine is pressed at the ‘end’ of the design; instead, I argue that it should be pressed from the start, and that there should be a flow of information, whether haptic, visual, sensory, or digital, that informs both the design and the process of realizing that design in built form. The level of risk inherent in current digital fabrication methods is minimal. Since machines are built for repetition and certainty, when something is made there are good assurances of what the result will be. This greatly limits the role of the designer, reducing it to being as robotic as the machine being used. This calls for methods of making in Digital Fabrication that can produce unknown results, thus putting the impetus of design back on the designer.

figure 2.4 : digital fabrication experiment at CCA involving laser cut paper that is assembled to create a complex surface

figure 2.5 : ChromaTEX project by SoftLAB constructed of printed paper and connected using binder clips


figure 2.6 : ICD / ITKE Research Pavilion was constructed using robotic fabrication methods to structurally weave carbon fiber thread




new fabrication


New Fabrication: Risk and Interaction

Risk and Interaction: After exploring the different case studies and developing an understanding of the varying levels of interaction found in each example, I am proposing a new way to approach fabrication that begins to answer the question, ‘How do you incorporate Risk and interaction into processes that are traditionally used for automation?’ In order to do so, three primary things need to change in current fabrication processes for Risk to become an integral element of the workflow: interactive methods of control and input; new interfaces that better engage that control / input; and error management that gives the user control over how to handle errors.

Control and Input

Currently the method of input within a Robotic Fabrication workflow is the mouse and keyboard. The only way for a user to interface with the workflow is by manipulating objects in a 3D environment with the mouse. This method is neither intuitive nor interactive, and the only feedback comes from the digital representation of the model. In an attempt to change this, I am referencing the way a Craftsman works, and am suggesting that the paradigm of user input should be focused on the body and the hand. Through the body and hand, the Craftsman is able to directly manipulate the object they are working with, transferring the skill and knowledge that has developed over the years into the object itself. Together the body and hand introduce the idea of Gestures as a means of control. Gestural control and input can be achieved using a range of methods. In terms of tracking the hand, there is the Leap Motion Controller1, which uses infrared to detect the location of multiple hands simultaneously. A newer, but equally powerful, method is to use the Myo armband to track motion and EMG2 data. This


data allows for the accurate recognition of predefined hand poses, as well as tracking where the hand is in space and how it is oriented. Both systems of hand tracking allow subtle motion of the hand to be used as a more natural way to control the robot. To track the body, a motion capture system like OptiTrack1 can be used. This system uses an array of infrared cameras and lights to track the location of reflective markers in space with a high level of precision. Using this system allows for high-precision object and full skeletal tracking. An alternate method is to use the Kinect sensor2 from Xbox, which houses an array of sensors that provide depth detection as well as skeletal tracking. Both methods allow the entire body to become the controller, making for a more interactive way to provide input for the Robotic Fabrication workflow. Despite not being a novel idea, incorporating gesture into the Robotic Fabrication process as a means of introducing Risk can have large implications for the process of making.
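The core of any of these tracking approaches is a mapping from a tracked hand position to a bounded robot target. A minimal sketch of that mapping, with hypothetical names (HandFrame is a stand-in for a single frame from any of the trackers above) and illustrative scale and workspace values that do not come from any real device:

```python
from dataclasses import dataclass

@dataclass
class HandFrame:
    x: float  # millimeters, in the tracker's coordinate system
    y: float
    z: float

def hand_to_robot_target(frame, scale=2.0, workspace=(-500.0, 500.0)):
    """Scale a hand position into a robot workspace target, clamped so
    subtle hand motion maps to bounded, safe robot motion."""
    lo, hi = workspace
    clamp = lambda v: max(lo, min(hi, v))
    return (clamp(frame.x * scale),
            clamp(frame.y * scale),
            clamp(frame.z * scale))
```

The clamping is what keeps the Risk on the object rather than on the operator: however erratic the gesture, the commanded target never leaves the workspace.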

figure 3.1 : the standard model of control: pointing and clicking with a mouse.

figure 3.2 : the introduction of hand gestures begins to change the way that a workflow can be controlled

figure 3.3 : gestures of the body can further expand the interactivity of control

figure 3.4 : innovative methods of control / input allow for a feedback system to develop, in which the user has direct control over the workflow

Interface

Accompanying a new method of user input comes the need for a new interface. The typical method, using software on a computer to design an object and then sending the toolpath for the robot to follow, is a limiting and antiquated way of working. It immediately separates the act of making from design because all the work is done in the computer. In order to step away from the current model, the interface needs to become something that, while being interactive, is unobtrusive and in a sense fades into the background. The user should have their focus on the object being made, and not on the interface being used. The interface should also work well with the new methods of input and control, while providing some form of haptic feedback that is not typically found in traditional interfaces.

figure 3.5 : current interfaces are minimal, uni-directional, and don’t provide interactivity or feedback while making

figure 3.6 : new interfaces will open the door for richer interaction during the fabrication process, allowing for the development of new workflows

1. OptiTrack Motive
2. Xbox Kinect Sensor



Error Management

The critical limiting factor in Robotic Fabrication today is the way Risk is handled through errors. Typically, the instant an error is encountered, regardless of its type or severity, current processes come grinding to a halt. In order for Risk to be evident in Robotic Fabrication, rather than eliminating errors altogether, they should be managed. Giving the user the option to determine what to do with an error allows them to override it at the risk of messing up the object being made. Obviously errors pertaining to user safety should not be overridable, but errors that involve the object itself are part of the Risk of making any object by hand. An excellent example of good error management is found in the FREE-D project when the user attempts to remove too much material. The tool initially shuts down, but then provides the user with the ability to override it by pressing a button. If too much material is actually removed, the software then updates the model to take into account the extra material that was removed. Allowing this level of control over errors allows the process of fabrication to become truly interactive.
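The policy described above can be sketched in a few lines. The error categories and the ask_user callback are assumptions for illustration, not part of any existing system: safety errors always halt, object-level errors defer to the user.

```python
# Error categories (hypothetical): safety errors halt unconditionally,
# object-level errors can be overridden at the user's own risk.
SAFETY = "safety"
OBJECT = "object"

def handle_error(category, ask_user):
    """Return True to continue fabrication, False to halt.

    ask_user() is any prompt (button press, gesture, dialog) that
    returns True when the user chooses to override the error."""
    if category == SAFETY:
        return False          # never overridable
    if category == OBJECT:
        return ask_user()     # the Risk is the user's to take
    return False              # unknown errors halt by default
```

The design choice is that the default is always to halt; continuing is an explicit decision the maker has to take, which is exactly where the Risk re-enters the workflow.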



case studies & precedents


Case Studies and Precedents: From Industrial to Crafted

From Industrial to Crafted:

figure 4.1 : various precedents that fall into different areas of the spectrum of interactivity, ranging from INDUSTRIAL through ASSISTIVE to CRAFTED

Before attempting to propose and test a new fabrication process, I looked through a variety of projects and noticed that they could be categorized along a spectrum of interactivity. On one side is what I am calling Industrial Fabrication, and on the other is Crafted Fabrication. Somewhere in between is what I am calling Assistive Fabrication. Each form of fabrication has different priorities that affect its elements of inherent Risk, the processes and interfaces that are used, and how the user controls the workflow. By examining the differences between these methods I have begun to understand the strengths and weaknesses of each, and am able to determine what is necessary to create Robotic Fabrication processes that are situated towards the Crafted side of the spectrum.


Industrial

Industrial Fabrication is most easily related to traditional CAM processes. Its primary goal is to minimize risk and cost through efficiency, precision, and repetition. Projects that fall into the industrial category are meant to ‘produce’ objects, and the end result is expected to be within a low threshold of tolerance from the original design. The fabrication process is used as a tool to merely bring an object into existence. For the sake of this research, I did not focus on any Industrial Fabrication precedents, but instead looked for projects that fell in the assistive to crafted range.

figure 4.2 : factory with multiple milling machines, all operating as an assembly line. Oriented for mass production

figure 4.3 : mass produced steel part. Every piece is expected to be within a margin of tolerance


figure 4.4 : most CAM tools are designed to be used in an industrial setting, with uniform material, in order to mass produce expected results


Case Studies and Precedents: Assistive Fabrication

Assistive Fabrication: Assistive Fabrication is my classification for any process that allows the user some control over the fabrication process. There is a higher level of Risk inherent in these processes because the user now has some control over how the object is actually made; however, at certain points in the process the machine will step in and take over control.

Interactive Construction Project Description

This project is an early look at ways in which we can begin to change the paradigm of how we fabricate. It takes a laser cutter, which typically resides firmly on the Industrial Fabrication side of the interactivity spectrum, and through the clever use of (ironically) laser pointers, creates a workflow that allows a user to have moderate control over the process of cutting a model with a laser cutter. Using a camera installed under the hood of the machine, the system is able to detect where on the surface of the material a laser is being pointed. In order to make the process practical, there is a range of laser pointers, each with a different color, allowing a wide range of functionality. Each laser corresponds to a different line type, thickness, or function. Once the camera detects a laser beam, custom-written software analyzes the motion and color of the laser to determine what the user is trying to do. For example, a red laser could indicate a rectangle, and a rectangle would then be fitted to the path that the user traces. Control would then be handed over to the laser cutter, and the rectangle would be cut. Among the interesting functionalities they implemented are the ability to create dashed lines or to add tabs to an outline.
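One piece of such a workflow, classifying a sampled pointer color into a command, can be sketched as follows. The color table and command names are hypothetical; the actual project runs detection on camera frames, not single RGB samples.

```python
# Hypothetical pointer-color -> command table for illustration.
COMMANDS = {
    (255, 0, 0): "rectangle",    # red pointer traces a fitted rectangle
    (0, 255, 0): "dashed_line",  # green pointer draws dashed cut lines
    (0, 0, 255): "add_tabs",     # blue pointer adds tabs to an outline
}

def classify_pointer(rgb):
    """Return the command whose reference color is nearest (by squared
    distance) to the sampled pixel color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(COMMANDS, key=lambda ref: dist2(ref, rgb))
    return COMMANDS[nearest]
```

Nearest-color matching rather than exact matching is what makes this robust to the noisy colors a camera actually sees under the hood of the machine.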



figure 4.5 : various laser pointers used to control the laser cutter. Different colors are used for different functionality

figure 4.6 : user interacting with the laser cutter.

Analysis

Interactive Construction poses an interesting way to interact with a machine that is not designed for interaction. While there is still a separation of design and fabrication, the fact that you can cut something, add some features, and cut another object, all without the use of the computer, makes this project a good model for my research. Despite the interesting level of interaction, I feel that it falls into Assistive Fabrication because after the initial design, the machine takes control and cuts the object. There is no user control over the process while the laser is running, and therefore the interaction halts until the cutter finishes.

Laser Origami Project Description

Laser Origami is a follow-up project that builds on the work of Interactive Construction. It uses the same methods and principles as the previous project, but begins to look at adding a third dimension to the process. By lowering the laser bed and therefore defocusing the laser, the team is able to heat the acrylic being cut, allowing the material to bend under gravity. This process uses the same laser pointers as the interface to the machine, but adds the functionality of creating a bend along an edge. While not dramatically different in process from the previous project, it greatly broadens the range of objects that can be produced using a laser cutter.

figure 4.7 : using a custom-built jig, as the laser is heating up the acrylic, the piece being cut is rotated to allow for a 180 degree bend

figure 4.8 : after being heated, gravity pulls the cut acrylic downward, creating a spring-like structure.

Analysis

figure 4.9 : different levels of bending and folding can be achieved, depending on the rig and the amount of time that the acrylic is heated

The added functionality moves this fabrication process slightly towards Crafted Fabrication, in that it introduces further levels of interaction with the object being created. An important takeaway from this precedent is that external forces can be used as a way to increase the interactivity and risk found within a design process.



FREE-D

Project Description

FREE-D is an investigation into the idea that rather than the machine becoming an extension of the human, the human becomes an extension of the machine. The project utilizes a custom-made cutting device reminiscent of a standard Dremel1, embedded with sensors that relay real-time location and orientation data to custom-written software. The intent is that the user selects a model to fabricate, and then begins to remove material however they see fit. If the user attempts to remove material that is within the boundary of the desired object, the tool shuts down. The interesting feature is that the tool has a button that allows the user to ignore the surface boundary of the object and remove more material than they should. Once the boundary of the object has been overridden, the user has the option of updating the model. By doing so, the form of the object is modified so that the extra material removed is no longer within the surface boundary.
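The boundary-stop and model-update behavior can be sketched with a toy model: a set of voxel coordinates standing in for the object's volume (an assumption for illustration; the real project works on a mesh surface). Overridden cuts that remove material inside the boundary simply shrink the model so the design matches reality.

```python
def apply_cut(model_voxels, removed_voxels, override=False):
    """Remove material from a toy voxel model. Cuts inside the object
    boundary are rejected unless the user overrides; with override,
    the model is updated to reflect the extra material removed."""
    inside = removed_voxels & model_voxels
    if inside and not override:
        return model_voxels, "tool_stopped"   # tool shuts down at the boundary
    return model_voxels - removed_voxels, "ok"
```

The returned status is what a tool controller would act on: "tool_stopped" spins the cutter down, while "ok" with a shrunken model is the small feedback loop between maker and design described above.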

Analysis

figure 4.10 : the custom-made smart milling tool.

figure 4.11 : example model that displays the imperfect signature of being made by the human hand

figure 4.12 : the paths that are continually being tracked during the milling process are graphically displayed here

This project is particularly interesting because it is an extreme case of Assistive Fabrication, in that the user is quite literally guided by the machine to produce the desired result. There is a moderate level of risk involved because the guiding can be overridden, and a small feedback loop arises when the user overrides and then updates the model. However, the result of the model as a whole is still somewhat expected. Some key points to take away from this project are the idea of Risk Management and the ability to override warnings that arise during the fabrication process. One thing that I want to avoid in my research is making the human simply become the ‘gantry’ that the machine uses to move the tool into position and make the object. The machine needs to be an extension of the hand, and not the other way around.

RobARCH 2014 - CMU Project Description

As part of the annual RobArch conference, Carnegie Mellon University developed a robotic steam bending process that avoided the use of elaborate molds to hold the bent wood while it cured. The process used two robots to synchronously perform the bending of the wood, and throughout the process the curvature of the bend was monitored using a motion capture system. If the system detects that a curve is incorrect, the idea is to have the motion of the robots updated so that the correct curvature is created. After the bending is complete, the curvature is checked once again, and if it still deviates from the expected curvature, the digital model is updated to accommodate the measured curvature of the wood, changing how the remaining pieces are bent.

Analysis

All Bent Out, while not allowing for much interaction from the user during the fabrication process, proposes

- 25 -


figure 4.13 : Two robots synchronously performing the bend. The black strips on the wood hold the reflective markers that were used to track the bending with the motion capture system.

figure 4.14 : Final module produced in the workflow as part of the Carnegie Mellon University workshop during RobARCH

the idea of the feedback loop into a fabrication process that is typically uni-directional. The ability to sense how an object is in the real world and compare that to the digital model is critical in creating any interactive workflow. In order to be functional and successful, the machine has to know the present state of the physical model and have the ability to update the digital version accordingly. This notion of feedback will be important to implement in my research.
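The sense-compare-update loop can be sketched in its simplest form, using bend angle as a stand-in for curvature. The function name and the proportional correction gain are assumptions for illustration, not the workshop's actual code.

```python
def corrected_bend_angle(target_deg, measured_deg, gain=1.0):
    """Given a measured bend (e.g. from motion capture) that deviates
    from the target, return an updated command angle that compensates
    for the wood's springback on the next pass."""
    error = target_deg - measured_deg
    return target_deg + gain * error
```

For example, if the target is a 90 degree bend but the wood springs back to 84 degrees, the next commanded bend becomes 96 degrees, over-bending by the measured deviation so that the physical result converges on the digital model.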



Case Studies and Precedents: Crafted Fabrication

Crafted Fabrication

Crafted Fabrication is any process in which the machine takes a secondary role, and the user is able to control the tool as if it were being held in the hand. This method of fabrication is the closest thing to true Craftsmanship and involves a high level of Risk, because the user has complete control over how the machine makes the object. These methods are truly interactive and rely on the skill of the user to ensure that a high-quality object is produced.

Spatial Sketch Project Description

Spatial Sketch is part of a larger research project titled Interactive Construction: Interactive Fabrication of Functional Mechanical Devices. The goal of this research was to create machines that would allow for truly interactive experiences when making with them. Spatial Sketch aimed to allow the user to sketch in 3D space, and to then seamlessly translate that ‘sketch’ into a 3D model that could be cut on a laser cutter and assembled into a custom lamp. The user’s only interaction with the machine is in the recording of the 3D sketch; the rest is carried out behind the scenes.

Analysis

This project begins to take a step in the right direction, giving the user an interface that provides interactive control of the process; however, it falls short once the motion is converted into a model and sent to the laser. That stage feels like post production, and there is no control over that aspect of the process. I do feel that this project could have moved closer to Crafted Fabrication if the 3d sketching more directly controlled the laser cutter. It is important to show that just because the initial input / control is interactive and allows for Risk does not mean that those elements will carry over into the actual making of the object. I need to find a way to integrally link designing and making in an interactive way.


figure 4.15 : Drawing in 3d using the custom made controller

figure 4.16 : Various lamps that were made using the Spatial Sketch workflow.

Robo.Op (WIP) Project Description

Robo.Op has the primary goal of developing an open-source toolkit to be used in robotic fabrication, but the process of robot interaction that it proposes is interesting and the closest to the Crafted Fabrication workflow that I am arguing for. Based in Processing1, this project opens the door for a wide range of ways to control the robot. In its current state, Processing is used as the hub through which the robot motion is passed. The goal in doing so is to allow the user to avoid the typical Robotic Fabrication workflow (which will be discussed later) and use more interactive and hands-on methods of control. The project relies heavily on the ability to stream motion commands to the robot in real-time, opening the possibility of live control.
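The core idea, streaming motion targets through a central hub as plain text messages, can be sketched as follows. This is a generic illustration and not Robo.Op's actual message format; the field layout, address, and port are all assumptions.

```python
def frame_command(x, y, z, quaternion=(1.0, 0.0, 0.0, 0.0)):
    """Serialize one motion target (position plus an orientation
    quaternion) as a newline-terminated ASCII message for streaming."""
    fields = [f"{v:.3f}" for v in (x, y, z, *quaternion)]
    return (" ".join(fields) + "\n").encode("ascii")

# Streaming to a robot-side server might then look like this
# (address and port are illustrative, not from Robo.Op):
# import socket
# sock = socket.create_connection(("192.168.0.10", 1025))
# sock.sendall(frame_command(350.0, 0.0, 600.0))
```

The key design choice is that the hub speaks a simple line-oriented protocol, so any input device that can produce a position can drive the robot.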

Analysis

This project is extremely interesting because of all the framework that it sets up for Robotic Fabrication. Using these tools allows a large number of interactive workflows to be developed that all draw upon the simple idea of passing data through a central hub. It is also the best example I can find that is looking towards creating a Crafted Fabrication workflow. It introduces the notion that by creating these new ways of using various fabrication tools, we are able to reduce the machine-oriented skill that is typically necessary to operate these machines. In doing so, people who don’t have those skills are now able to use these fabrication processes to make and create. In a sense, these new workflows can serve to somewhat democratize the fabrication process and make it accessible to all. However, just because everyone is able to use these machines does not mean that everyone can do it well, reemphasizing the need for Craftsmanship and Risk in these workflows.

figure 4.17 : Multiple end effectors and various adapters that are used for different experiments with the robot. Each has its own mounting plate, and is inefficient

figure 4.18 : An open source toolplate for the robot that allows for simpler tool prototyping and experimentation

figure 4.19 : Diagram demonstrating the Processing HUB that allows connection between a wide range of software and hardware


figure 4.20 : Code that is used to pass data from Processing into RAPID which is read by the robot to run




contributions


Contributions: Interactive Plaster Workflows

Interactive Plaster Workflows Over the course of 3 semesters I worked on a variety of projects in Robotic Fabrication. While each project had different problems it was attempting to solve, each in some form or another begins to introduce and explore new workflows that will enable a typically CAM-oriented robot cell to be used in interactive ways, allowing varying levels of risk to be introduced into the fabrication process. From the beginning, these projects were not intended to tackle the problem all at once; rather, each is an incremental step that investigates a specific aspect of crafted fabrication. The knowledge gained from each project is incorporated into the next and built upon. It is important to note that my goal is not to simply find and put forth a new workflow, but to instead investigate and better understand the different elements that will be needed for a truly crafted fabrication process.

figure 5.1 : ABB 6640 7-axis robot that was used for all of the following projects

figure 5.2 : View of the robot in its workspace


For each project that follows, I will briefly discuss:
• the project description & my goals
• the software and tools used
• the process
• the results and my observations
The projects used a variety of software, including Rhinoceros 3D (Rhino)1, Grasshopper2, HAL3, OptiTrack Motive4, Robot Studio5, as well as the Python6, PyQt7, Arduino8, Processing9 and RAPID10 programming languages, and all took place within the dFAB11 lab here at Carnegie Mellon University. The robot cell referenced in the following projects is an ABB 6640 mounted on a 6m track within the dFAB lab.

Gripper Bend (Fall 2013) Project Summary

In the first project, GripperBend, the primary goal was to simply gain familiarity with Robotic Fabrication and to build the skills necessary for deeper investigations into this method of fabrication. After experimenting with the software, I quickly learned how difficult it is to set up a simple toolpath. Besides trying to learn the basics, my secondary goal was to create a more flexible workflow that would allow simple changes to be made to the input without having to completely modify the workflow downstream. The project brief tasked us with developing a fabrication process that used a material of our choice in an innovative way. As our material we chose sheet metal, and we set out to create a workflow & end-effector that would allow us to produce custom bends with the robot. While the primary focus of the project was on the end-effector, for this paper I am drawing more attention to the process we developed.

Software

Rhino, Python, Grasshopper, Robot Studio, RAPID A simple toolpath was generated in Rhino + Grasshopper so that it could be parametrically defined. This would allow us to change the toolpath easily in the future without having to make many changes in the rest of the workflow. The toolpath was then fed into the HAL components to generate the motion commands for the robot, and the motion was finally simulated in Robot Studio and then run on the robot.

figure 5.3 : diagram showing the workflow that was used for this project.


1. Rhinoceros - commercial 3D computer graphics and computer-aided design (CAD) application software developed by Robert McNeel & Associates
2. Grasshopper - visual programming language developed by David Rutten at Robert McNeel & Associates. Runs within the Rhino workspace
3. HAL - Grasshopper plugin for industrial robot programming
4. OptiTrack Motive - optical motion tracking software that uses IR tracking
5. Robot Studio - robotic simulation and offline programming tool
6. Python - widely used general-purpose, high-level programming language
7. PyQt - Python binding of the cross-platform GUI toolkit Qt. It is one of Python’s options for GUI programming
8. Arduino - family of single-board microcontrollers, intended to make it easier to build interactive objects or environments
9. Processing - open source programming language and integrated development environment (IDE) built for the electronic arts, new media art, and visual design
10. RAPID - high-level programming language used to control ABB industrial robots
11. dFAB - the Carnegie Mellon School of Architecture’s Digital Fabrication Lab



Process

This process is the standard workflow template used with robot cells in most digital fabrication processes. One primary drawback is in how the toolpath is initially generated. Typically a design process generates the curve that is fed into the workflow, and the robot carries out the motion once the commands are sent. This workflow inherently separates the acts of design and making, which I am attempting to find an alternative to. In that attempt, we wrote a custom script in Python that would allow the input to become a simple string rather than a curve generated in Grasshopper. All we have to feed into the Grasshopper definition is a string of 1s and 0s which correspond to left-hand or right-hand bends. Ultimately we used this script to generate folded patterns in sheet metal based on L-systems.
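The binary input scheme lends itself to a short sketch: expand an L-system into a string of 0s and 1s, then map each character to a bend. The rewrite rules and the 90-degree angle below are placeholders, not the values we used.

```python
def expand_lsystem(axiom, rules, generations):
    """Rewrite the axiom string once per generation using the given rules."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def bends_from_string(pattern):
    """Map '0' to a left-hand bend and '1' to a right-hand bend.
    The bend angle here is illustrative."""
    return [("left", -90) if ch == "0" else ("right", 90) for ch in pattern]
```

Feeding the expanded string into the Grasshopper definition then yields the folded pattern, so the input stays open-ended while the downstream workflow is untouched.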

figure 5.4 : This diagram demonstrates the repetitive process of performing a fold with sheet metal

Results

After completing the project and looking back, I feel that this process still falls closer to the Industrial Fabrication side of the spectrum. While we created an alternate method of input for a robotic workflow, this process still has a minimal element of Risk, and has a pretty clear separation of Design and Workmanship. We did, however, create the foundation that could eventually become a more interactive fabrication workflow. Ultimately the takeaway from this project is that the workflow needs to be open to a wide range of input and should never be specific to a certain process or tool.

figure 5.5 : Through a series of right and left folds, this workflow is able to create a large number of variations. The string “00001111” creates this pattern

figure 5.6 : Because the input is binary, we are able to feed in fractal and L-system based patterns to create some interesting folds



Better End Effector (Fall 2013) Project Summary

The Better End Effector (BEE) was a programming project completed for the 15112 Fundamentals of Programming course. It was intended to be the final cumulative project for the semester, demonstrating the skills that we learned throughout the class, and could be any ‘program’ that we wanted it to be. I chose to write a simple simulation and visualization tool that would enhance some Robotic Fabrication research that I was helping with at the time. The software was intended to allow the user to visualize the resulting profile curve that is created with a variable trowel tool1. I wanted the software to allow for easier control of the tool, and to give the user the ability to control the tool in real time.


figure 5.7 : Image of the start screen for BEE program

figure 5.8 : variable trowel tool used in Morpheaux plastering research. The tool consists of two contoured trowel blades that are moved horizontally to allow for a varying profile to be achieved. It was used to create variable running molds while attached to a robot.

Software

Python, PyQt, Arduino The program was written in Python using the PyQt graphics package to create the interface. In doing so I was able to make a standalone program that simplifies and hides the difficult components of the workflow, allowing for simple user interaction and control of the robot during the fabrication process.

Process

The Robotic Fabrication process that inspired the project was done as part of the Morpheaux2 plaster research. It used the variable trowel tool to create custom and changing running molds. The concept of a variable running mold is entirely new in the world of plaster due to the limitations of current hand plastering techniques. The major drawback to the tool in its current state was that there was no way to easily control it. Rotation angles for the servo were pre-programmed into the Arduino, and had to be timed perfectly with the start of the robot motion. Because of this, the process involved a large amount of trial and error without knowing what profile would be produced. The software I wrote attempted to mitigate this issue by giving the user direct control over the rotation of the servo from within the interface. Several knobs and programmable sequences allow the user to preview what the profile might look like, and to update or change the servo rotation while the robot is creating the running mold.
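Mapping a dial value in the interface to a servo command is the simplest part of such a workflow. A hedged sketch follows; the one-byte encoding, angle range, and serial details are assumptions for illustration, not taken from the BEE source.

```python
def angle_to_command(angle_deg, min_angle=0, max_angle=180):
    """Clamp a requested servo rotation to its legal range and encode it
    as a single byte to be written over serial to the Arduino.
    (Encoding and range are illustrative assumptions.)"""
    clamped = max(min_angle, min(max_angle, angle_deg))
    return bytes([int(round(clamped))])

# Sending the command would look roughly like this (pyserial assumed,
# port name illustrative):
# import serial
# port = serial.Serial("/dev/ttyUSB0", 9600)
# port.write(angle_to_command(dial_value))
```

Clamping in software is what lets the interface expose direct control without letting a careless dial turn drive the servo past its mechanical limits.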

1. Variable Trowel Tool - tool used to create changing running mold profiles. Created by Josh Bard as part of Morpheaux research
2. Morpheaux - large research project studying the application of plastering techniques in digital domains



Results

After completing this project, I realized that on one hand, I had created an interactive workflow that allows Craftsmanship to define the creation of the running mold. While controlling the variable trowel there is the inherent Risk that the skill (or lack thereof) of the user could drastically change the resulting running mold. The user is able to interactively design the running mold as it is being produced. However, I had also introduced a way of eliminating the risk entirely by providing a preview of the running mold before it is actually made. The ability to have preprogrammed servo rotations also minimizes the risk, creating a workflow that falls back on the Industrial Fabrication side of the spectrum. This observation led me to the understanding that it is important not to merely introduce Risk for the sake of Risk, and that not knowing the end result doesn’t necessarily mean the crafted object will be a well-designed one. It also shows that the problem of making a Crafted Fabrication workflow is not solved only in software. Much more is needed to create a truly interactive workflow that gives the skill of the hand an integral place in the design process.

figure 5.9 : Main window layout of BEE program. The dials on the left allow for direct control of the Variable Trowel Tool, and at the center the visualization of the current plaster profile can be seen



figure 5.10 : Spectrum diagram (Assistive to Crafted) locating the semester’s projects. The first three projects discussed fall towards the assistive fabrication side of the spectrum



Motion Capture (Spring 2014) Project Summary

As part of ongoing research in our lab, we acquired a motion capture system that would be used for the ongoing plaster research and other various projects. During the semester I was tasked with getting the system up and running, as well as helping to develop a workflow that would allow us to seamlessly take motion capture data and bring it into the Rhino workspace. This seemingly simple step would allow for the motion data to easily be integrated into a range of other research projects. My goals for the motion capture system focused primarily on the many opportunities that it opens for developing new ways to interface with and control a Robotic Fabrication workflow.

Software

OptiTrack Motive, Python, Rhino, Grasshopper, Robot Studio The motion capture system we used is made by OptiTrack and runs with the Motive software. We developed two different methods of bringing the motion tracking data into the Rhino environment; the first, developed by Garth Zeglin, involved a powerful custom Python script that parses the data, performs translations, and exports planes for Rhino, while the second, developed by Josh Bard, was a Python component in Grasshopper that similarly parses the motion data but doesn’t perform any translations.

figure 5.12 : A sample motion capture environment. The setup in our lab for these projects only used 6 cameras with various arrangements

figure 5.13 : Objects that are intended to be tracked need to have reflective markers that are detected by the cameras. This is the setup that was used on our trowel to capture the motion of plastering

Process

After several simple tracking experiments, we developed a method of calibrating and orienting the motion capture system so that its coordinate system aligned with that of the robot. In doing so we drastically simplified the process of bringing the motion data into Rhino + Grasshopper. Since we no longer needed the translations, we were able to use the Python component in Grasshopper to bring in the motion data.
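Before the calibration step removed the need for translations, mapping mocap coordinates into the robot frame required a rigid transform. A simplified sketch, assuming the two frames differ only by a rotation about the vertical axis and an offset (the real scripts handle the general case):

```python
import math

def make_transform(theta_z, offset):
    """Precompute a rotation about Z (radians) plus a translation offset."""
    return (math.cos(theta_z), math.sin(theta_z), offset)

def mocap_to_robot(point, transform):
    """Map a mocap-frame point (x, y, z) into the robot frame."""
    c, s, (ox, oy, oz) = transform
    x, y, z = point
    return (c * x - s * y + ox, s * x + c * y + oy, z + oz)
```

Aligning the two coordinate systems at calibration time is what reduces this transform to the identity, so the Grasshopper component can consume the raw data directly.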


figure 5.14 : Software view of what the cameras are detecting. At this point in time, only 4 of the 6 cameras are actually detecting and tracking the markers


figure 5.15 : Diagram depicting the different coordinate axes that are used in this workflow. The difficulty comes in trying to convert from one to the other


Results

While this project was more of an open-ended learning process, it opens many doors and proposes different methods of interacting with a workflow. Human gesture, both of the body as a whole and of the hand, can now be used with a high level of precision as user input for a workflow. This method of control will, however, initially be used as a simple recording device, but in future experiments it can become a method of interactive control for a Robotic Fabrication workflow.

figure 5.16 : Perspective view of first motion capture test in Rhino

figure 5.17 : Right view of first motion capture test in Rhino

figure 5.18 : Top view of first motion capture test in Rhino

figure 5.19 : Front view of first motion capture test in Rhino

Replicating Master Plasterer (Spring 2014)

ACADIA 2014 Paper - Seeing is Doing: Synthetic Tools for Robotically Augmented Fabrication in High-Skill Domains

Project Summary

This project was part of a paper submission to the 2014 ACADIA1 Conference. The paper was submitted as a team by the Human-Machine Virtuosity Group2 here at Carnegie Mellon University, and I was responsible for the motion capture and robot demo portions of the paper. We were proposing an open-source toolkit for Robotic Fabrication that can be used to carry out various high-skilled processes. My obvious interest in the paper was the development of a workflow that would allow the skill and risk inherent in these processes to be transferred into the various robotic workflows that we created.

Software

OptiTrack Motive, Python, Rhino, Grasshopper, HAL, RAPID, Robot Studio

Process

The process that we developed was to first record a Master Plasterer as he performed a finishing routine on an 8x4 panel. Once the recording was complete we brought the motion data into Grasshopper and had to perform several rounds of cleanup and filtering in order to get a clean toolpath for the robot to follow. In the end, due to many unanticipated errors, we took a single horizontal finishing pass and used that as the toolpath the robot would replicate. The final motion was performed without plaster, because at this stage we were merely testing a proof of concept.

figure 5.20 : Series of images depicting Master Plasterer applying finishing pass to panel

figure 5.21 : Series of images depicting robot replicating the same finishing pass

1. ACADIA 2014 : Design Agency - yearly conference focused on research and development of computational methods that enhance design creativity
2. Human-Machine Virtuosity Group



Results

With the motion capture system we were able to get fairly accurate motion that closely matched the motion of the Master Plasterer. There was, however, a significant amount of post-production work necessary to get good motion. This poses some serious hurdles that will have to be overcome in order to create an interactive workflow, most notably noise reduction. We had this much trouble working from a static recording; doing the same level of post-processing on a live recording will be extremely difficult. In order to do so while still enabling direct live control, we will need to develop some smart noise filtering and motion processing so that the robot will always have good motion. Another important observation is that with motion capture, the visibility of markers is extremely important to the accuracy of the tracking. In a process where the trowel is moved all over, we were constantly losing markers. Also, any messy fabrication process will over time obscure the reflectivity of the markers, making the tracking inherently more difficult and error prone. In future experiments it will be important to develop a workflow that can handle noise and inconsistent data, as well as have redundant motion tracking capabilities. The alternative is to explore other methods of recording human gesture and input.
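The noise filtering mentioned above could start with something as simple as a moving-average pass over the recorded samples. A sketch, with the window size illustrative and real-time filtering left as the harder open problem:

```python
def smooth_path(points, window=5):
    """Apply a centered moving-average filter to a list of (x, y, z)
    samples; the window shrinks at the ends of the recording."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(sum(p[k] for p in points[lo:hi]) / n
                              for k in range(3)))
    return smoothed
```

A single-sample spike from a briefly occluded marker is spread across the window instead of reaching the robot at full amplitude, which is the behavior a live workflow would need as a baseline.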

figure 5.22 : Master Plasterer performing finishing pass

figure 5.23 : Motion capture data recorded from finishing pass

figure 5.24 : Robot using the same trowel to perform finishing pass

figure 5.25 : Simulation of robot performing finishing pass

Swatch & Cure (Spring 2014) Project Summary

This project is comprised of two smaller projects that dealt with different traditional plastering techniques and their translation into Robotic Fabrication processes. The goal was not only to reproduce the traditional techniques with the robot, but also to begin to think about ways to leverage the strengths of the robot to accentuate the traditional techniques in ways that were not previously possible. The first project looked at creating a 2D ‘Swatch’ and experimenting with different ways to texture plaster. The second project moved into 3D, in which we were called to explore an intersection of two planes (in our case the ceiling and a column) and to use running mold techniques as a way of transitioning from one plane to another.

figure 5.26 : 2d ‘Swatch’ exploration in profiling and marking

figure 5.27 : 3d ‘Cure’ exploration in contouring and intersection


Software

Rhino, Grasshopper, HAL, RAPID, Robot Studio

Process

The geometry and toolpaths were created in Rhino + Grasshopper, and the final form was the result of several design iterations, both physical and digital. This was our first time actually using plaster with the robot, let alone using plaster in general, and as a result we spent a large percentage of our time learning simple plastering techniques. There were several failed tests before we were able to find the right plaster mixing ratios and timing. Unlike in other projects, we settled on the robot motion early on, performing several tests all with the same robotic motions. In the end we spent more time working with the process of the material, and less with the process of the robot.

figure 5.28 : Custom end effector tool made to hold plaster trowels

figure 5.29 : Robot about to perform plastering procedure on inverted cone

figure 5.30 : Rendering of how the profile will change with the radius of the cone, and blend into the plane of the ceiling

Results

Ultimately these two projects fell close to the Assistive Fabrication category, but still within Industrial Fabrication. Both projects have the potential to become interactive processes, but in their current state they are still very industrial. The end result is known before the fabrication begins, and there is little Risk while the process is carried out. However, the process is nearly Assistive Fabrication because it is being done with plaster. The Risk and interaction fall within the material itself and are evident in the large amount of work that needs to be done cleaning and fixing the plaster as the robot is working. This introduces an interesting notion: the Risk doesn’t necessarily need to come from the machine process, but can originate from the unknown inherent in the material.

figure 5.31 : The finished plaster cone. In an architectural application it would be used as the capital of a column to transition into the ceiling

figure 5.31 : The trowel profile held against the surface of the plaster to demonstrate the accuracy provided with the robot




figure 5.32 : Spectrum diagram (Assistive to Crafted) locating the semester’s projects. The projects completed in the fall progressed towards Crafted Fabrication


Small Panel - First Test (Fall 2014)

ACADIA 2014 DEMO - Seeing is Doing: Synthetic Tools for Robotically Augmented Fabrication in High-Skill Domains

Project Summary

The first test with a small 2x4 panel was developed as a demo for our accepted ACADIA paper1. The goal was to take the Master Plasterer replication test that we did as part of the paper proposal and expand on it by beginning to look at ways in which a robot can begin to apply and render plaster itself. For this project we wanted to look at the motions of a Master Plasterer and break them down into a series of motions that can be performed by the robot. The ultimate goal was to begin to make progress towards the robot finishing a surface at a level comparable to that of a human. This research is not intended to replace the human, but to learn from the plasterer so that a robot can eventually work with a human.

Software

Rhino, Grasshopper, HAL, RAPID, Robot Studio

Process

We began by analyzing the motion of a plasterer and breaking it down into two different types of motion: Loading and Finishing. In most plasterwork, there are motions where the primary goal is simply to load the plaster onto the surface. Once the surface is adequately covered, the goal of the motion changes to finishing, or smoothing, the surface until it fully dries. We then broke these two primary motions into two sub-categories: Horizontal and Vertical. With those four variations of plastering motion, we created the toolpaths that would be loaded on the robot:
• Horizontal Loading
• Vertical Loading
• Horizontal Finishing
• Vertical Finishing
The purpose of loading four separate toolpaths on the robot was to allow us to manually choose which toolpath we wanted to use depending on the quality of the finish on the panel. We also incorporated a Load procedure that would be called in between the different loading passes so that we could put plaster on the trowel.
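The manual selection scheme amounts to a lookup from (motion type, direction) to a stored toolpath, with loading passes preceded by the Load procedure. A sketch with placeholder procedure names, not the RAPID routines actually loaded on the robot:

```python
# Placeholder names standing in for the four toolpaths stored on the robot.
TOOLPATHS = {
    ("loading", "horizontal"):   "load_horizontal_path",
    ("loading", "vertical"):     "load_vertical_path",
    ("finishing", "horizontal"): "finish_horizontal_path",
    ("finishing", "vertical"):   "finish_vertical_path",
}

def run_pass(motion_type, direction):
    """Return the sequence of procedures for one pass; a loading pass is
    preceded by a Load procedure to put plaster on the trowel."""
    key = (motion_type, direction)
    if key not in TOOLPATHS:
        raise ValueError(f"no toolpath for {key}")
    steps = ["load_trowel"] if motion_type == "loading" else []
    steps.append(TOOLPATHS[key])
    return steps
```

The table makes the limitation of the approach visible: every new motion variant means another entry, which is exactly what motivated further segmentation in the next test.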

figure 5.33 : Diagram depicting the vertical finishing pass.

figure 5.34 : Diagram depicting the horizontal finishing pass




Results

While this project was a large step in the direction of Crafted Fabrication, it was limited by the fact that we only had 4 types of motion, and the only way to update or modify that motion was to go back into Grasshopper, make the change, and then reload the new toolpaths on the robot. Many of our expectations for how the motion and the plaster would behave were false, and we were not able to easily change our motion based on what we were observing. This suggests that the motions we are using need to be segmented further to allow for more control over the fabrication process. Another notable problem was that the interface relied on a few buttons on the robot’s teach pendant. Because of this, the number of motions that could be called was limited to the number of buttons on the teach pendant (which is only 5), and the teach pendant itself is an antiquated and unintuitive interface to use with the robot.

figure 5.35 : The robot working on plastering the first 2x4 panel

figure 5.36 : While being able to cover the surface, the ultimate finish was far from the smooth finish we were looking for. Many changes to the process and the tool were required

Small Panel - Second Test (Fall 2014) Board of Trustees Demo

Project Summary

The second test on the 2x4 panel was developed as part of a demo to the Board of Trustees here at Carnegie Mellon University. In this project we still had the same primary goal: to finish a panel with skill equivalent to that of a Master Plasterer. We were, however, looking to make the process more interactive, further removing the computer from the process and putting more emphasis on using the teach pendant as the main interface. In developing the motion for the robot, I focused on finding ways to give the user more control over the motion, allowing their skill (and Risk) to play a more significant role in the fabrication process.

Software

Rhino, Grasshopper, HAL, RAPID, Robot Studio

figure 5.37 : Extremely inefficient and non-parametric Grasshopper definition

figure 5.38 : For every separate region that needed to be plastered, an entirely separate toolpath needed to be created. This is neither practical nor efficient



Process

Beginning much like the first 2x4 panel test, I took the four primary plaster motions and attempted to break them down even further into smaller motions. In doing so I was hoping to give more control to the user, allowing them to choose specific regions to plaster or finish. After examining the motion, I came to the conclusion that the loading passes needed more segmentation, while the finishing passes could be left nearly unchanged. To do so, the vertical loading passes were broken into separate rows, and further divided into columns. This created a grid on the surface from which we could choose a specific region for the robot to plaster. The finishing passes were split into two vertical and four horizontal motions, just enough to go over the whole surface with a small amount of overlap. All the different motions were loaded onto the robot, and a series of nested buttons was created on the teach pendant to allow for selection of finish type, direction, row, and column. Again, a Load procedure was incorporated in between any of the loading passes to allow us to put plaster on the trowel.
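The row-and-column segmentation can be sketched as a function splitting the panel into addressable regions. The dimensions and counts below are placeholders, not the actual grid we used on the panel:

```python
def grid_regions(width, height, cols, rows):
    """Split a panel into a (col, row) -> (x0, y0, x1, y1) grid of
    regions, each of which can be targeted by its own loading pass."""
    cell_w, cell_h = width / cols, height / rows
    return {(c, r): (c * cell_w, r * cell_h,
                     (c + 1) * cell_w, (r + 1) * cell_h)
            for r in range(rows) for c in range(cols)}
```

Because the regions are addressed by index, the pendant only needs to supply a (column, row) pair rather than a whole toolpath, which is what made the nested-button selection possible.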

figure 5.39 : Diagram depicting separated Vertical passes

figure 5.40 : Diagram depicting separated Horizontal passes

Results

This was easily the most engaging and interactive process we had developed, and Risk definitely played a significant role. Several times we had a good finish on one region of the panel and decided to go over it again to see if the finish could be improved, only to worsen its quality. Despite the better interaction, there are several serious negatives to this process. Again, if there were any problems with the motion, we would have to go back into Grasshopper to fix them and generate new motion, but with this workflow large amounts of work would have to be done to the Grasshopper script even for minor changes. This is due largely to the way the script had to be structured: in order to get all the different toolpaths, I had to copy a series of HAL components for each separate toolpath. Because of this, the script is not at all parametric and was extremely inefficient. The nested buttons that had to be navigated to select the desired motion were also not a good interface. From this project I took the notion that while the level of interaction is important, the behind-the-scenes implementation needs to be parametric, scalable, and efficient. Otherwise the crafted workflow is only viable in specific applications, whereas I am proposing a workflow that can be applied to a wide range of applications.



Contributions: Interactive Plaster Workflows

figure 5.41 : The robot would bring the trowel to a ‘Load’ position between each move in order to get more plaster

figure 5.42 : The finish on this test was closer to the quality we wanted, but some changes to the length of motion and amount of overlap will be necessary

figure 5.43 : In order to avoid sticking, the motions extended past the edge of the panel before the robot would retract

Interactive Table - Stylus (Fall 2014)

Board of Trustees Demo

Project Summary

As the second Demo for the Board of Trustees, this project takes a broader approach to plastering than the other projects have. As part of a team, we wanted to develop a workflow that would allow for an interactive sketching and visualization interface on the front end, which would then generate robot motion on the back end. The project as a whole focuses on putting the interaction in the earlier stages of the design, but I am interested in understanding how that interaction can extend to the actual Robotic Fabrication as well.

Software

Optitrack Motive, Rhino, Grasshopper, HAL, RAPID, Robot Studio

Process

Using an overhead projector and our motion tracking system, the process starts with a user sketching out different running mold curves. As the user sketches, the motion is captured and imported into Rhino, and then projected back onto the surface as an extruded rendering of what the resulting running mold will look like. As part of the projection, there is a small info panel that shows the current tool selected, where in the larger grid this particular piece will go, and several viewing options. Once a curve is selected, it is refined and rebuilt in Grasshopper and then processed through HAL to generate the robot motion. It is then loaded on the robot to be performed with the plaster.

figure 5.44 : Illustration of the ‘interface’ that we projected onto the worktable for this project. It gives proximity information, and shows the current trowel profile that is selected



Results

While the interaction in the design stages of this project is really powerful, creating an interesting feedback between sketching and model, the interaction stops once a final curve is picked to be produced in plaster. This is partly due to the nature of the project itself and the material being used. Since plaster has a set drying time and cannot be reworked afterwards, it is difficult to extend the interactivity to the production of the actual object. A similar workflow with a material that is malleable but not time dependent could begin to work with this feedback between sketching, visualizing, and making. This project demonstrated an excellent way to interact with the workflow and showed how to begin to introduce feedback into the process.

figure 5.45 : The first step in the process is to draw a curve while the motion capture system tracks the location of the drawing tool

figure 5.46 : The drawn curve is then turned into a 3d profile and projected back onto the worktable for viewing

figure 5.47 : Once a curve is selected, the motion is sent to the robot and the plaster running mold is created

Small Panel - Third Test (Fall 2014)

Project Summary

The third and most recent test of the 2x4 panel plastering was intended to put the interface temporarily back on the computer in order to create a framework that will allow for other gesture based interfaces in the future. For this project I had several simple goals from the beginning: to keep the implementation as efficient as possible, to make it easy to make changes, and to give an even higher level of control to the user than in the two previous panel projects. I also began to look at experiments beyond this project, and wanted to structure it so that it could easily adapt to future projects that incorporate automated and truly live control workflows.

Software

Rhino, Grasshopper, Python, HAL, RAPID, Robot Studio

figure 5.48 : Simple input allows this workflow to be flexible. An outside script can bypass the drop-down list and instead give Grasshopper a number to select which region to plaster

figure 5.49 : Having built in parametric control over the overlap and other measurements allows for the workflow to be adaptive

Process

Rather than approach this by creating every motion we could need, generating all those toolpaths, and loading them all on the robot, I chose to radically simplify the process. The current process uses only 2 toolpaths (Horizontal & Vertical) and instead relies on a server streaming an updated workobject1 to the robot. Once the workobject is updated to a new position, the loaded robot motion is run by the robot, referencing the new workobject, which essentially changes the start position of the motion. This



method allows for any location to be plastered with minimal code on the robot and the computer, and gives the user ultimate control over where and how the panel is loaded and finished. In its current state, either a start point can be moved around or a region can be selected with a single number. Both methods of selection can serve as a framework for incorporating other methods of user input/control.
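The single-toolpath-plus-workobject idea can be sketched as follows. The cell sizes and the text message format are illustrative placeholders of my own, not the actual server-to-RAPID protocol used on the robot.

```python
# Sketch of the workobject idea under stated assumptions: one loaded toolpath,
# shifted to any grid cell by updating the workobject origin. The message
# format below is a placeholder for the real streaming protocol.

def region_origin(region, cols=4, cell_w=12.0, cell_h=12.0):
    """Translate a single region number into a workobject origin (x, y, z)."""
    row, col = divmod(region, cols)
    return (col * cell_w, row * cell_h, 0.0)

def workobject_message(origin):
    """Encode the origin as a simple text message a server could stream."""
    return "WOBJ {:.1f} {:.1f} {:.1f}".format(*origin)

# Selecting region 6 shifts the same loaded toolpath to that cell:
print(workobject_message(region_origin(6)))   # -> WOBJ 24.0 12.0 0.0
```

Because only the origin changes, the motion loaded on the controller never has to be regenerated, which is what keeps the code on both the robot and the computer minimal.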

Results

I have yet to test this workflow by actually plastering the panel, but I can speculate that it will be much more successful than the previous iterations and a large step towards crafted fabrication. I do feel that with the computer serving as the interface, the interaction will be somewhat limited. This process allows the user complete control over where the robot loads and finishes the plaster, and with that complete control, the resulting finish is entirely dependent upon the skill of the user.

figure 5.50 : Diagram that depicts single finishing pass

figure 5.51 : Diagram that depicts single horizontal pass

Myo (Fall 2014)

Project Summary

This short project was an introductory look into the Myo1 and the potential it has to offer. In several small tests, I began to explore the data that is accessible through the sensors built into the hardware. It gives access to Gyroscope, Accelerometer, Orientation, and EMG2 data, and has been wrapped in a wide variety of languages, which makes this data easy to use and understand.

Software

Myo Connect3, Various Programming Languages, Python, Rhino, Grasshopper, Processing

figure 5.52 : Myo electromyography (muscle) sensor on arm

figure 5.53 : Visual representation of the data corresponding to the different poses built into the Myo gesture recognition: fist, fingers spread, double tap, and wave in & out



Process

On the simplest level I attempted to use the Myo Connect software’s ability to control native OS functions like keyboard and mouse input to manipulate a point in space in Rhino. This point is parametrically tied to the workflow I developed in the previous plaster panel test, and simply changes the start point that the workobject and simple toolpath reference. Ideally the user would be able to make a fist to trigger the software to start sending data to Rhino, which would then move the point around on the virtual panel. Once the point is at the desired location, the gesture of spreading the fingers tells Rhino to send the selected location to the robot as an updated workobject, and to perform the finishing motion.

figure 5.54 : Along with EMG data, the Myo also gives accelerometer, gyroscope, and orientation data
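The fist/fingers-spread control loop described above can be sketched as a small state machine. The pose names match the Myo's built-in gestures, but the class, the motion deltas, and the `sent` list are illustrative stand-ins for the Rhino point and the workobject update.

```python
# Illustrative state machine for the pose-driven loop: a fist starts
# streaming motion into the point position, and a fingers-spread pose
# commits the point as the new workobject location.

class PanelPointController:
    def __init__(self):
        self.point = [0.0, 0.0]   # virtual point on the panel in Rhino
        self.tracking = False
        self.sent = []            # locations committed to the robot

    def on_pose(self, pose):
        if pose == "fist":
            self.tracking = True                  # start moving the point
        elif pose == "fingers_spread":
            self.tracking = False
            self.sent.append(tuple(self.point))   # send as updated workobject

    def on_motion(self, dx, dy):
        if self.tracking:         # ignore motion unless the fist is held
            self.point[0] += dx
            self.point[1] += dy

ctrl = PanelPointController()
ctrl.on_motion(9.0, 9.0)          # ignored: no fist yet
ctrl.on_pose("fist")
ctrl.on_motion(3.0, 1.5)
ctrl.on_pose("fingers_spread")
print(ctrl.sent)                  # -> [(3.0, 1.5)]
```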

Results

Again I have not been able to test this workflow, but am interested to see how this higher level of interaction affects the quality of the finished plaster surface. I think this takes a large step forward, giving the user the sense of having more direct control over the robot’s motion, and thereby giving the sense that the robot becomes an extension of the hand.





speculation


Speculation: Future Outlook

Over the course of this research, the work has steadily progressed towards my ideal of Crafted Fabrication. Looking ahead, there are several projects that I foresee could continue to propel this work forward, while attempting to change the understanding of what it means to work in a Robotic Fabrication context.

CV1/Kinect2 Automated Plaster Rendering

Description

The natural direction that I feel the 2x4 panel projects should take is towards automation. Now that I have developed a workflow that allows for easy selection of regions or specific points on the surface to plaster, the next step is to have the computer determine which of those regions needs to be plastered next. While any automated process is obviously antithetical to the majority of my research, I feel that if successful it could be incorporated into a much larger workflow, where the interaction shifts to the robot working in unison with a human to plaster a surface. To do so, the plasterer would begin working while the robot ‘watches’; once it ‘learns’ the motion, it begins to work on a different portion of the surface. This process would pose an interesting paradigm where neither human nor robot is obsolete, and together they are able to accomplish what neither could alone.

1. Computer Vision - field that includes methods for acquiring, processing, analyzing, and understanding images
2. Xbox Kinect Sensor - motion sensing input device by Microsoft


Software

Built on a framework similar to the one used in the last panel experiment, the only new software/hardware needed is the CV/Kinect. Either method will allow the state of the physical model to be recorded, analyzed, and updated in the digital model. In doing so, either the user in a non-automated process or the computer in an automated process will be able to make accurate plastering decisions as they work.

figure 6.1 : OpenCV is a standard computer vision library that allows for relatively easy image processing

figure 6.2 : The Xbox Kinect 2.0 sensor is the most compact and powerful sensing kit that is cheaply and readily available.
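The decision step could be sketched as a depth comparison between the captured physical model and the target digital surface, picking the grid cell with the largest remaining deviation. Everything here is an assumption for illustration: the flat target, the 2x4 grid, and the synthetic depth map. A real pipeline would pull depth frames from the Kinect or a CV library.

```python
# Illustrative sketch: compare a captured depth map of the panel against the
# target surface and return the grid region with the largest mean error,
# i.e. the region the robot (or the user) should plaster next.
import numpy as np

def next_region(depth, target, rows=2, cols=4):
    """Return the (row, col) of the grid cell furthest from the target surface."""
    error = np.abs(depth - target)
    h, w = error.shape
    cell = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell[r, c] = error[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols].mean()
    idx = np.unravel_index(np.argmax(cell), cell.shape)
    return (int(idx[0]), int(idx[1]))

target = np.zeros((24, 48))         # idealized finished surface
depth = np.zeros((24, 48))
depth[0:12, 36:48] += 2.0           # one cell still needs plaster
print(next_region(depth, target))   # -> (0, 3)
```

In the non-automated variant, the same per-cell error map could simply be displayed to the user, who then picks the region themselves.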

Direct Myo Control

Description

In an attempt to better explore the strengths and weaknesses of the Myo, several small experiments using the Myo to directly control the robot would offer some key insight into developing Crafted Fabrication workflows. The first experiment would be to use the Myo as a means of simply moving the TCP1 of the robot: essentially, wherever you point, the end of the robot will follow. This level of control has interesting implications because it would demonstrate the most literal way of making the robot become an extension of the body, with the tool at the end becoming the maker’s ‘hand’. Another simple experiment would be to track the motion of the Myo in space, and translate that motion into a ‘gestural’ toolpath that the robot will follow. Doing so would allow the robot to move in more expressive and less mechanical ways, and could begin to move the Robotic Workflow in the direction of working ‘with’ the person rather than merely for them. A third test would be to control multiple robots at once, using a Myo on each arm. Despite mainly being an experiment to test what is possible with this device, it could uncover new robotic workflows and enable users to design and make in novel ways.

Software

The primary difference in the software workflow will be the input. Data would be read from the Myo and used to control a point in Rhino space that serves as the robot’s TCP. That point would then be streamed in real time to the robot, allowing for live control. The workflow for the second experiment would only differ in the way the data from the Myo is interpreted. Rather than streaming the point, it would be used to create a toolpath which would then be streamed to the robot. This process would have a delay as the toolpath is recorded, but would allow for instant playback of a recorded motion.

1. TCP - Tool center point, the point in space that the robot orients itself to
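The point-and-follow idea from the first experiment might look something like the sketch below: convert a pitch/yaw pointing direction from the Myo into a TCP target and append it to a stream. The reach radius, the axis conventions, and the list-based "stream" are illustrative assumptions, not the real-time robot interface.

```python
# Rough sketch: map a pointing direction (radians) from the Myo's orientation
# data into an x, y, z TCP target at a fixed reach in front of the robot.
import math

def aim_to_tcp(pitch, yaw, reach=500.0):
    """Convert a pointing direction into a TCP target on a sphere of radius `reach`."""
    x = reach * math.cos(pitch) * math.cos(yaw)
    y = reach * math.cos(pitch) * math.sin(yaw)
    z = reach * math.sin(pitch)
    return (x, y, z)

stream = []  # stand-in for a real-time socket to the robot controller

for pitch, yaw in [(0.0, 0.0), (0.2, 0.1)]:   # sampled Myo readings
    stream.append(aim_to_tcp(pitch, yaw))

print(stream[0])   # -> (500.0, 0.0, 0.0)
```

For the second experiment, the same samples would be accumulated into a complete toolpath before being sent, which introduces the recording delay noted above.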

Myo Recording

Description

This project would be designed to explore the use of the Myo as a ‘capture’ device rather than as gesture control. The idea stems from the fact that in many of the plaster projects, a range of errors were encountered. The most notable are joint errors on the robot and marker occlusion with the motion capture system. Joint errors occur when the robot attempts to reach a rotation on one of its joints that extends beyond its physical limits. This happens because the motion planner only calculates the joint angles needed to reach a point in space without paying attention to the limits; it may, for example, plan a move from 200 degrees to 0 degrees that is physically impossible. Marker occlusion occurs when either the reflective marker is blocked from the view of the cameras, or the reflectivity of the markers is changed. By using the Myo to record both motion data and EMG data, I believe that we can build smarter motion into the workflow. The motion data can be used to determine rotations of the forearm, which could then be directly translated into rotations of specific joints on the robot. In doing so, the robot will only perform motion that our arms are able to produce, which is well within the range of motion for a robot. The EMG data can be used to get more acute readings of the wrist movement, and of how the tool is being held in the hand. A large portion of the Master Plasterer’s skill is found in the subtle motions of the hand, and being able to detect, and hopefully emulate, those motions would greatly increase the dexterity of the robotic plastering process. While EMG data is complicated to understand, I think it could be refined into usable motion that can help achieve a finished surface.
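The joint-limit failure described above can be illustrated with a simple check that a motion planner could run before executing a pose. The six limit ranges below are illustrative, not those of any specific robot model.

```python
# Sketch of a pre-execution joint-limit check: a planner that only solves for
# joint angles can command a rotation outside the physical range, so flag any
# joint whose planned angle falls outside its (illustrative) limits.

JOINT_LIMITS = [(-180.0, 180.0), (-90.0, 150.0), (-180.0, 75.0),
                (-400.0, 400.0), (-120.0, 120.0), (-400.0, 400.0)]

def check_joints(angles):
    """Return the indices of any joints whose planned angle exceeds its limits."""
    return [i for i, (a, (lo, hi)) in enumerate(zip(angles, JOINT_LIMITS))
            if not lo <= a <= hi]

# A pose that asks joint 3 (index 2) to rotate to 200 degrees is flagged:
print(check_joints([0.0, 45.0, 200.0, 0.0, 0.0, 0.0]))   # -> [2]
```

Recording motion from a human arm sidesteps the problem from the other direction: the captured rotations are already within a range a robot can reproduce.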

Software

This software workflow would work in a manner similar to that of the Motion Capture system we currently use. Motion will be recorded from the Myo and will have to undergo some processing to make sense of the numbers, as well as transformations to reorient the data from the Myo’s world axis to the robot’s world axis. The remainder of the process would be similar to any of the other plastering projects, where the data is processed through HAL and robot motion is generated and streamed to the robot for execution. The only major difference is that there will need to be a calibration pose/procedure for the Myo every time it is used in order to accurately transform the data into the robot’s coordinate system.
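The calibration step might be sketched as follows, reduced to a single yaw axis for clarity: record the Myo's reading while the arm is held in a known reference pose, then subtract that offset from later readings to express them in the robot's frame. A full implementation would handle all three rotation axes, most likely with quaternions.

```python
# Minimal sketch of the per-session calibration: capture the Myo's arbitrary
# zero while the arm is in a known reference pose, and undo that offset for
# every subsequent reading. Yaw-only; the real data has roll/pitch/yaw.

def make_calibration(yaw_at_pose):
    """Capture the yaw offset while the arm is held in the reference pose."""
    def to_robot_frame(yaw_reading):
        return yaw_reading - yaw_at_pose   # remove the Myo's arbitrary zero
    return to_robot_frame

calibrate = make_calibration(yaw_at_pose=1.2)
print(round(calibrate(1.2), 6))   # -> 0.0  (the reference pose maps to zero)
print(round(calibrate(1.7), 6))   # -> 0.5
```

Because the Myo's zero changes every time it is put on, this capture has to be repeated at the start of each session, which is the calibration procedure noted above.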



conclusions


Conclusions: Results and Observations

Results and Observations

Craftsmanship and Risk

Throughout this research I have learned a great deal about the varying roles that Craftsmanship and Risk can play in the Robotic Fabrication process. Initially I proposed that Fabrication workflows as a whole need to include Risk as an integral component of the process; however, I want to revise that statement to say that there should be varying degrees of Risk integrated into the various Fabrication workflows. Not all Risk is good Risk, and being able to control and understand that Risk is often the more critical concept to grasp. That mastery over Risk and the unknown, through the act of making, is what defines the true skill of the Craftsman. I also understand that there are still times when you don’t want the end result of the making process to be unknown. To accommodate that scenario I want to make a new distinction: the motivation behind a fabrication process determines the role Craftsmanship and Risk should play. There are times when Digital Fabrication processes are used to ‘produce’ an object. In this case, the outcome should be expected every time the process is completed; otherwise the primary purpose of the tools would be negated. On the other side, there are times when Digital Fabrication is used as a process to ‘design’ an object. It is within this domain that Craftsmanship and Risk should play a critical role. When designing, the end result should never be known, even when we are using a tool that is intended to produce known results.

Interaction

I also have begun to better understand how different levels of interaction can affect how a tool is used and the success or failure of the resulting output. Digital Fabrication processes that have deeper and more integrated levels of interaction are able to introduce a better understanding of the object, and in doing so an embedded knowledge of the making is developed. This deeper understanding is at the core of Craftsmanship and many Tradecraft1 processes that have been around for as long as people have been making. While plaster was the medium through which these experiments were situated, similar processes can be applied to woodworking, stonecrafting, masonry, etc. These trades have endured for centuries because of the intricate understanding of the material and the work necessary to perform them. If a similar understanding can be developed in Digital Fabrication domains, then there is hope of it having a life beyond simply being the current trend.

Beyond Robotic Fabrication

Despite this research being focused on Robotic Fabrication, this new way of approaching and viewing fabrication can be applied to any other method of making in Digital Fabrication. The case studies alone show that there is work being done with other machines and processes that supports this idea. More importantly, the concept of using these machines as a design process, rather than merely a tool to produce an object, is universal and can be applied to any process. The different ways of controlling the robot, whether through streaming gathered data or direct control with a Myo, are only bound to Robotic Fabrication by the method of generating the motion for the robot. All that is needed to convert my research to a different machine is to generate motion specific to that machine. Nothing else in the process changes, and the role of Craftsmanship and Risk is still evident. Beyond this research I believe that work will continue to progress in this direction of Crafted Fabrication, to the point that Digital Fabrication will one day require the same level of skilled hand as any other method of Tradecraft.

1. Tradecraft - Woodworker, Stonesmith, Plasterer, Metalworker, etc.





appendix


Appendix: Sources

References

CRAFT
1. Adamson, Glenn. Thinking through Craft. Oxford: Berg, 2007.
2. Crawford, Matthew B. Shop Class as Soulcraft: An Inquiry Into the Value of Work. New York: Penguin, 2009.
3. McCullough, Malcolm. Abstracting Craft: The Practiced Digital Hand. Cambridge, MA: MIT Press, 1996.
4. Pye, David. The Nature and Art of Workmanship. Cambridge UP, 1968.
5. Risatti, Howard. A Theory of Craft: Function and Aesthetic Expression. Chapel Hill: U of North Carolina Press, 2007.
6. Woods, Mary N. From Craft to Profession: The Practice of Architecture in Nineteenth-Century America. Berkeley: U of California Press, 1999.
7. Wrensch, Thomas, and Michael Eisenberg. “The programmable hinge: toward computationally enhanced crafts.” Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology. ACM, 1998.

FABRICATION
1. Anzalone, Phillip, Joseph Vidich, and Joshua Draper. “Non-Uniform Assemblage: Mass Customization in Digital Fabrication.”
2. Braumann, Johannes, and Sigrid Brell-Cokcan. “Real-Time Robot Simulation and Control for Architectural Design.” 30th eCAADe Conference, Prague, 2012.
3. Fischer, T., et al. “Digital and Physical Computing for Industrial Robots in Architecture.” 2012.
4. Iwamoto, Lisa. Digital Fabrications: Architectural and Material Techniques. New York: Princeton Architectural Press, 2009.
5. Mellis, David, et al. “FAB at CHI: digital fabrication tools, design, and community.” CHI ’13 Extended Abstracts on Human Factors in Computing Systems. ACM, 2013.
6. Reiser, Susan L., and Rebecca F. Bruce. “Fabrication: a tangible link between computer science and creativity.” ACM SIGCSE Bulletin, Vol. 41, No. 1. ACM, 2009.
7. Williams, Kim. Digital Fabrication. Basel: Birkhäuser, 2012.

GESTURES
1. Amento, Brian, Will Hill, and Loren Terveen. “The sound of one hand: a wrist-mounted bio-acoustic fingertip gesture interface.” CHI ’02 Extended Abstracts on Human Factors in Computing Systems. ACM, 2002.
2. Gleeson, Brian, et al. “Gestures for industry: intuitive human-robot communication from human observation.” Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction. IEEE Press, 2013.
3. Holz, Christian, and Andrew Wilson. “Data miming: inferring spatial object descriptions from human gesture.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2011.
4. McNeill, David. Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, 1992.
5. Weigel, Martin, Vikram Mehta, and Jürgen Steimle. “More than touch: understanding how people use skin as an input surface for mobile computing.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2014.
6. Willis, Karl D.D., et al. “Spatial sketch: bridging between movement & fabrication.” Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 2010.

INTERFACES
1. Payne, Andrew. “Five Axis Robotic Motion Controller for Designers.” ACADIA 2011 Proceedings: Integration Through Computation. ACADIA, 2011.
2. Schubert, G., et al. “3d virtuality sketching: Interactive 3d-sketching based on real models in a virtual scene.” Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), Vol. 32, 2012.
3. Willis, Karl D.D., et al. “Interactive fabrication: new interfaces for digital fabrication.” Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 2011.

CASE STUDIES
1. Bard, Joshua, Steven Mankouche, and Matthew Schulte. “Digital Steam Bending: Recasting Thonet Through Digital Techniques.” ACADIA 2012 Proceedings: Synthetic Digital Ecologies. ACADIA, 2012.
2. Bard, Joshua, Steven Mankouche, and Matthew Schulte. “Morpheaux: Probing the Proto-Synthetic Nature of Plaster.” ACADIA 2012 Proceedings: Synthetic Digital Ecologies. ACADIA, 2012.
3. Mueller, Stefanie, Bastian Kruck, and Patrick Baudisch. “LaserOrigami: laser-cutting 3D objects.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013.
4. Mueller, Stefanie, Pedro Lopes, and Patrick Baudisch. “Interactive construction: interactive fabrication of functional mechanical devices.” Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology. ACM, 2012.
5. Oakley, Ian, and Doyoung Lee. “Interaction on the edge: offset sensing for small devices.” Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2014.
