
Guided Hand: Machine Aid for the Analog Process


Acknowledgments

Before delving into the contents of my project I would like to take the time to thank everyone who has supported me through this challenging effort, encouraging me along the way. I am forever grateful to Golan Levin and Eddy Kim, who trusted me this semester to take charge of the project and pushed me to fully realize all of the possibilities of this research, offering the utmost support and confidence in me. Thank you to Eric Brockmeyer, who was there from the start of the project encouraging the exploration of haptics and how to take advantage of the tool, and to Illah Nourbakhsh, who inspired an entirely different way of thinking about the human relationship with technology. Special thanks to both Golan Levin and Illah Nourbakhsh for helping fund the research done for this project. To the Human Robot Interaction team, A1: Robert Zacharias, Kim Baraka, and Su Baykal, thanks so much for being open to my ideas and helping me develop this collaborative tool! I would not have been able to do it without each of you. To all of the MTID students who listened when I needed to talk, and continuously offered unending streams of confidence in me, I am very grateful for your support and enthusiasm. Most of all, I send loving gratitude to my family and to Alex Fischer, without whom I wouldn’t have had the courage and confidence in myself to get to where I am today. Thank you all!


Abstract

This thesis research explores the use of haptic augmentation in developing new artistic tools for sculpture. The 3D printing pen is currently marketed and used predominantly as an arts-and-crafts toy, meant more for play than design. There is an opportunity for artists and designers to take advantage of the strengths of the 3D printing pen while crafting skillfully within the limitations provided by a haptic feedback device, creating and iterating designs and functional products in a rigorous and interactive design workflow. The Geomagic Touch is the haptic feedback device used to create the effect of a three-dimensional ruler, constraining the user's movements and providing textural effects. By adding a layer of accuracy and efficiency while maintaining a variable level of freedom, the 3D pen has the potential to be used on a more refined level for fabrication, sculpture, and design. Force feedback for freehand 3D printing changes the creation process entirely, allowing for a symbiosis of the design and fabrication processes and bringing back the art of craft and physical design that designers have lost in the digital age.


Introduction

Background

This thesis was inspired by my experience as an undergraduate architecture student at Carnegie Mellon. In that time I noticed a strong dependence among students on CAD and CAM tools to develop the architectural design concepts that formed in our minds, with less focus on the actual experience of the conditions we were creating until it came time to render. My personal dependence on technology felt like a crutch when I realized I had no idea what my creations really looked and felt like, while the models created by 3D printing, laser cutting, and milling tended to be presented as the final product, with no regard for texture, materiality, or the true experiential impact these details would have had. A few of my projects were developed with little to no idea in my head of what I was going to make, and in these projects I resorted to the physical world: taking arbitrary materials, cutting them into blocks or planes, gluing them together by hand, then setting the resulting product down, turning it every which way, taking photos up close, and starting over again, each time with a stronger understanding, my design strategy emerging before me. The projects done through this process were the ones where I felt the most inspired and excited by what was happening, the ones where I most considered materiality and had a sense of a story of progression through the space. These projects were also the most iterative, and the ones for which I concretely understood the reasoning behind my decisions. Reflecting on my education during my fifth year, I realized how dependent I truly was on physical development and interaction to feel engaged, and how interested I was in the process over the result. That year my projects explored processes whose final products told the story of their fabrication, taking advantage of the imperfections of the technology used to make them and exposing the machine’s craft.
The craft of machinery is displayed in my project Digital Nouveau, and the craft of computational design combined with unconventional fabrication methods was exposed in my thesis, Re-Imagine Earth. In light of the personal sense of fulfillment from these projects, I saw room for growth in the process of creation and design; I could use this space to explore new interactive tools that allow for a creation process that is more physically immersive, processes that take advantage of the intelligence of our technology while also enabling the real-time engagement and creativity that only a human can achieve. Moving forward with this way of thinking, I entered the Master of Tangible Interaction Design program, where I developed a drawing application called Sketch Aid, which helps architects understand the three-dimensional implications of drawing in section and plan by projecting their drawings into a perspectival space that updates in real time.


Overview

This section will discuss the development of design tools, beginning with basic analog tools such as the chisel or pen and moving on to design software and high-speed fabrication machines. This will build an understanding of how the traditional design process, from conceptualization to fabrication, allows for direct involvement of the designer, artist, and craftsman in the product they are creating, elaborating on the level of control the human has over the process in real time. The study of the process will expose potential methods of human-computer interaction that may have gone overlooked and under-utilized until recent years. Studying recent projects that speak to these potential gaps in the design-fabrication process reveals the benefits and drawbacks of working with purely analog tools, purely digital tools, or a hybrid of the two. It is important to first explain what it means for the human to be directly involved with the creative process of design for any product. The idea of involvement can be described by referencing Schön’s description of reflection in action in his paper, “Reflective Conversation with Materials”:

Reflection in action ... is closely tied to the experience of surprise. Sometimes, we think about what we are doing in the midst of performing an act. When performance leads to surprise, pleasant or unpleasant, the designer may respond by reflection in action: by thinking about what she is doing while doing it, in such a way as to influence doing. For example, when talented jazz musicians improvise together. -SCHÖN

Using this idea of reflection in action, having the human be highly involved in the design process means having the human consciously reflect on each change made. Schön goes on to explain “backtalk” between a human and a design medium as a conversation in which the medium imparts an understanding to the human, who can then respond by working the design further. With this understanding of involvement and conversation in the process of design, so begins a brief study of the most well-known tool for basic drawing tasks: the pencil, which uses graphite. Graphite is a swift and simple medium. Graphite, like virtually any physical two-dimensional medium, forces the designer to be selective about what to represent due to the limitations of the device. Within limitations, creativity can thrive. The pencil's low precision easily allows the designer to exaggerate lighting and angles and achieve an expressive quality through simple, intuitive gestures by a skilled artist. Although limited in dimension, graphite is free from reality and does not require high fidelity to instill a feeling of completion or understanding about what is being depicted. Additionally, graphite is a very forgiving medium, allowing for manipulation after placement. For these reasons the pencil is a primary starting tool in any design process, whether for character design, for architects' diagrams and sketches, or even for sketching out basic math or programming logic to solve a problem.

Figure 1: Industrial sketch
Figure 2: Rebecca Sugar designing Marceline from Adventure Time
Figure 3: DeForest Architects showing their process: started in pencil, outlined in pen

The pencil is an extremely common utensil for communication in design. However, it tends to fall short in precision, and can be difficult to use for communicating complicated ideas for those less experienced with the tool. Lack of precision inhibits a designer from communicating every possible detail, and thus forces the designer to simplify what is being communicated in a drawing. This limitation pushes the artist to focus each sketch on the development of a specific facet of the design, as opposed to having to resolve several issues at once. With the pencil the entry skill level is quite low, as the tool is simple and intuitive and the medium forgiving, but the ceiling is very high, as with practice high fidelity can be accomplished, especially in larger drawings. The intuitive use of the pencil is due to the one-to-one relationship between the tool and the hand: the physical actions of the artist have a direct impact on the output of the graphite onto the paper. The medium does not drip or move from where it has been placed on the page unless smudged. The medium is very predictable.


Beyond the pencil come more complicated features in the world of art, such as colors with paints, and textures that graphite cannot accomplish. Paint is less intuitive to use, more expensive, and less forgiving. Color makes the mimicry of lighting more complex than simple dark/light, while the medium of paint can drip and behave differently over different time periods, behaviors to be understood and taken advantage of by the artist for optimal results. For this reason paints, especially higher-quality paints, are used more commonly for premeditated productions. As shown in Figure 4, it is common practice for painters to use pencil before laying down paint. Paint is traditionally used for a completed design, while work in progress tends to be done in graphite.

Figure 4: Oil painting

There are three-dimensional types of sketching as well, such as iterative model-making, which is capable of being just as quick and intuitive as graphite. Physical modeling, like sketching, can have a low entry skill level with a high ceiling, and can also be taken advantage of at various scales to express, at certain levels of fidelity, different aspects of a project. This kind of modeling is typically done in the same early stages of design as general sketching, although from personal experience graphite sketching often comes first, even if only to lay out the shapes to be cut for the model. Again, although the process is quick and easy, it is inaccurate and a poor representation of what the final product is intended to look like; it is generally little more than an abstraction. Craftsmen working in the three-dimensional realm use graphite for preparation as well, marking dimensions and annotating the construction. Sculptors even work off of drawings for reference, which is generally the first method taught in a basic sculpting class.
The graphite markings used in sculpting and painting are referenced during final production to ensure fewer mistakes through planning ahead, as well as higher accuracy in construction and efficiency in time. These methods work very well for artistic output, such as a drawing to reference for a sculpture, but there is a definite transition away from this low-fidelity yet free and gestural process of making when there is a demand for mass production or measured construction, moving from informal sketches or artwork to formal designs of real products or buildings. A designer’s sketch is never intended as a final result, and so the designer will always resort to a more efficient or accurate tool which can yield more refined iterations than those done quickly on paper. This transition out of the early sketching stage takes us to the stages that require additional tools such as rulers, graph paper, and stencils for more resolute sketches. These tools, however, are cumbersome and time-consuming in comparison to contemporary digital technologies for design, such as the Adobe Creative Suite or 3D modeling software such as AutoCAD, Rhinoceros, and Maya. The tools used in the digital realm are generally designed for efficiency of time and precision of details. These kinds of tools can output multiple copies quickly, and designs can be iterated by saving new copies after making small changes, making the digital realm a powerful tool for the basic early design process as well as for final production. Each of these tools has some basic functionality in common, such as the ability to put objects on different layers with optional visibility, provide exact dimensions, and render. Each of these functionalities further allows for ease of iterating through various details without losing fidelity of the whole design and without losing time, in contrast to the previously discussed analog tools. Different components of the design can be duplicated and changed without needing to re-create everything, as would be the case for any iterative physical model or sketch. Additionally, beyond providing relatively intuitive functionality for what is simply time-consuming and confusing to do through analog methods, digital software is used to accomplish seemingly impossible design tasks in relatively little time when given access to scripting features or algorithm-based modeling interfaces such as the increasingly popular add-on for Rhinoceros called Grasshopper, shown below. Beautiful design patterns can become responsive to virtually any desired parameter, such as proximity, quantity, or surface conditions. Analysis tools can be used to understand behaviors and qualities such as structure, heat, light conditions, and air or water flow, and then used in combination with parametric algorithms to allow for designs that respond to the analysis. The tools available in the digital realm can create hundreds of iterative outputs in minutes.

Figure 5: Tutorial showing off the ease of creating complex 3D-printable models
Figure 6: Basic sample flaunting the parametric design of panel window sizes based on exposure to sunlight
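The parametric behavior illustrated in Figure 6 can be sketched in a few lines. This is a hypothetical illustration only: the function name, the linear mapping, and the sample exposure values are assumptions for demonstration, not the actual Grasshopper definition behind the figure.

```python
# Hypothetical sketch of a parametric rule in the spirit of Figure 6:
# each facade panel's window-opening ratio is driven by that panel's
# exposure to sunlight, as reported by an analysis tool.

def window_ratio(exposure, min_ratio=0.2, max_ratio=0.9):
    """Map a normalized sun-exposure value (0..1) to a window-opening ratio."""
    exposure = max(0.0, min(1.0, exposure))  # clamp to the valid range
    return min_ratio + (max_ratio - min_ratio) * exposure

# A facade represented as per-panel exposure values (assumed sample data).
facade_exposure = [0.1, 0.5, 0.95]
ratios = [window_ratio(e) for e in facade_exposure]
```

Because every panel is computed from the same rule, changing one parameter (say, `max_ratio`) regenerates the entire facade instantly, which is exactly the kind of rapid iteration the text attributes to parametric tools.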

Every iteration fabricated from a digital model can potentially be another kind of sketch model, or the final product: once a model or drawing made in a digital medium is fabricated, it can be used as a part of the process where the designer receives Schön’s “backtalk”, or it can succeed in the ways the designer desires and thus be deemed the final product. The digital prototype can have fidelity added to it without having to start the drawing over, and the scale can be measured in a wide range of units, be it pixels, millimeters, or miles; either way, each line in Adobe Illustrator can be selected, changed in color or thickness, and then printed; each surface in Rhinoceros or Maya can be scaled to proper size, output to a 3D printer or CNC machine, and fabricated to scale. For these reasons digital processing tools tend to dominate design and fabrication, as they are the more efficient tools in these fields. This brings us to the actual output part of any process: fabrication.

Emerging Dissonance

The design process has changed as computer-aided machines have taken over the fabrication of essentially all of our products and designs. CNCs, 3D printers, and ABB robots are used in manufacturing facilities to produce products that have been designed at the standard minimum resolution required for fabrication. This means that even if the model is not finalized, it must have a known scale in order to be fabricated properly. While scale is often helpful to consider in the early design stages, what is debilitating is that although machine fabrication is faster for outputting a high quantity of precise models, it is certainly not faster than a human at making a single low-resolution model, nor, as it is traditionally used today, does it express any inherent craft of the artist's hand in the making. It is excessively accurate for a sketch. It is not efficient for real-time feedback either, as the designer must wait for the machine to complete fabrication before being able to look at the physical model, interact with it, and decide what works and what does not. Because the model was fabricated by a machine, it is not as easy as shifting a piece over and re-taping it, which can be done easily with a hot-glued sketch model. It is not as simple to make small changes in a prototype that was vacuum-formed to a CNC prototype as it would be to make small changes in a Play-Doh mold. It is hard to discover something new while the machine is in the process of making; the maker is even discouraged from interacting with the tool to gain a deeper understanding of the material properties, or to express any kind of personal intimacy with the fabrication process, as milled objects are sanded down and 3D printers offer no manual control over extrusion rate, movement speed, or distance from extruder to target.
The difference in quality between machine-fabricated items and hand-crafted items is great, but the question is when quality and accuracy are more important than exploration, craft, or real-time interactive feedback. It is evident that these systems are incredibly beneficial for the speed and efficiency of manufacturing designed products, but they are excessively powerful for early prototypes or gestural art. Even digital design tools such as Rhinoceros and Adobe's Photoshop and Illustrator seem to have more power than necessary for quick tasks. A great struggle in a student’s process is the idea that anything made using these digital tools should be of the quality the tools are capable of, while nothing done using these tools is comfortably quick or easy to make. Options have been said to inhibit confidence in decisions, because as the number of options increases, the probability of making the wrong choice increases, as explained in Baba Shiv’s popular talk at TEDxStanford.

Shiv decided to test the theory on undergraduate students about to solve word puzzles. While one set of students was asked to choose between two teas — caffeinated or relaxing chamomile — the other group was told by the researchers which of the teas to drink. In the end, the students assigned a tea solved more puzzles than those who were given a choice. Shiv hypothesized that this is because making the choice allows a person to have doubt about their decision when faced with the prospect of immediate feedback. -TORGOVNICK

This issue of technology limiting our notion of what is to be done in the design process is addressed as troublesome by Sundström, in the essay “Inspirational Bits: Towards a Shared Understanding of the Digital Material”:

First, it feels that much of the emphasis at the early stages of design exploration is placed on what users do and, consequently, attention is directed away from exploring and thinking imaginatively about the technologies. -SUNDSTRÖM

The focus wound up being on what is certain and safe, what is printable or millable, and what looks enticing, rather than what is experientially enticing. The most inspirational process with the quickest results I found was to work with my hands; this is an exchange of quality for a quicker, more experiential process. With this in mind, and without forgetting the benefits of machine fabrication, a gap becomes more and more evident in the relationship between the designer and the intelligent tools being used to aid in the design process. This gap is in the feedback, in the response time from the designer designing to the product providing backtalk, sacrificed for the sake of quality. Much of what is done digitally is not design process so much as producing designs, and when it is used as a part of the design process it tends to lack the feedback provided by physical reality, and thus lacks the chance for symbiotic development of materiality, texture, and interaction design. For example, any three-dimensional digital modeling software will allow for the production of a surface with no thickness. This causes errors for students in the design process more often than it should, as they overlook wall thickness, structure, and embedded utilities, but most importantly, students fail to develop the essence of the experience. Renderings can be manipulated, and again are not considerate of all the other senses; they can be incredibly enticing and attractive as images, convincing clients to move forward with production, but can result in very poorly choreographed interactions and experiences. Throughout the architecture undergraduate program at Carnegie Mellon University the rendering is often the most coveted deliverable, and students thus enter their fourth year lacking an understanding of how to iterate spaces in section and plan; so they continue to default to Rhinoceros to quickly model the space, take a section cut, depict that as their process instead, and then overlay it with a rendering of the section as the largest, most detailed image in their presentation, while construction logic, scale, and spatial understanding are swept under the rug, replaced with enticing but unreal images of what the building will be. In any representation of a product the reality can be lost, and so it is important to maintain simultaneous development in the physical realm while working with the incredibly convenient digital realm.

Figure 7: By Matt Adler and Rohan Rathod. This studio had students disregard the reality of construction and focus instead on the experience.

As you can see in Figure 7, this project, as with all the projects presented in this studio, was presented as an object from afar. Although the focus in the studio was on the experiential quality of the exuberant, allowing us to disregard the constraints of reality, the tools were limited to computation and coding processes. As a result, no one considered the immersive experience of being inside the design, the tactile qualities, or the general human experience, because nobody was given the chance to develop physical models or use a hands-on approach. Instead, the work became limited by the tool. As Adler put it,


We weren’t able to cultivate generative ideas using the tools prescribed by the studio. It became like "oh you know how to do this, so that’s what your project is going to be about”. Everyone had developed some complexity in terms of form and hierarchy, but few people actually were able to grab important aspects of the subject they were studying and truly apply it to the scenario we were given, extrapolate the key pieces, however we reached them, and make it somehow "real". -ADLER

A heavy criticism received from reviewers was that we as designers were completely removed from our own designs, focusing on the process of computation and losing the personal touch of the designer. The entire process was driven by computation and machined output, although each project was inspired by researching some phenomenon found in nature, which is the subject Adler referred to. Although the studio expanded students’ toolbox of design methods and engaged a new and exciting understanding of conceptual development, the issue lay in how extremely different the computational design method was from the methods humans were accustomed to using to understand their designs spatially and experientially. At this point it becomes clear that there is a dissonance in the process, where digital tools provide complexity, accuracy, and efficiency at the cost of an immersive, experiential, and engaging design process. Craft is traded for smooth surfaces, and designs are reduced to form with less consideration for the interactive experience; digital fabrication as it stands today teaches designers comparably less than hands-on engagement does about the materials we use in our designs, and that potential is lost to the standard fabrication methods embedded in the pure machine process. This disconnect, however, provides room for the growing field of study and exploration of machine craft, human-machine collaboration, and tangible interaction design.


Prior Work

There are many existing interaction design projects that exploit the craft of tangible interactions with technology for fabrication and art. Many of these projects are proposed purely as concepts for design tools and interactions. They are generally not ready for mass distribution, are expensive to make, and were created by very experienced designers, and thus are not intended for recreation. Moreover, because these examples are all from the past five years, it is too soon to determine how these projects will be scaled, cheapened, and redesigned to be more intuitive or have a lower entry skill level. However, by starting the conversation and pushing toward new types of technological design tools, these projects are making great strides toward machine-aided design opportunities, which encourage the user to explore new materials and material behavior by allowing them to focus on the creative process of making rather than on the final “perfect” product.


Fiebig

A core advantage of hybrid mediums is having a tool such that the user has no need to prepare for their making process, but can simply go ahead and make without losing quality or accuracy. Christian Fiebig created a system that accomplishes this in his project, “Computer Augmented Craft”.

The idea isn’t necessarily to speed up, or simplify, the designer’s work. Rather, Fiebig hopes that by extrapolating the results of organic process, the machine will set up a feedback mechanism, which the designer can choose to rebel against or collaborate with. Or, a combination of both. "Ideally it will enable the designer to create something neither he nor the computer itself could have come up with," he explains. Fiebig is encouraging other designers to adapt the source code for the program and sensors. -CAMPBELL-DOLLAGHAN

Figure 11: The physical workstation
Figure 12: The digital model reconfiguring itself to suit the physical model as it is being built: real-time feedback.

This project is designed to help the designer accurately cut and weld metal strips of different lengths to create a unique design. Computer vision allows the program to understand what the user is doing in real time, provide measurements and angles, and even offer suggestions for how to make connections or where to place the next pieces of metal for optimal strength. Someone with no design in mind could potentially come up to this intelligent table, decide to make a chair on the spot and, while still having the authority to make executive design decisions, create a sturdy chair by using the suggestions from the table. What is interesting about this piece is that the outcome can still be ugly or a failure depending on how good the designer is at piecing things together and designing on the go, so that part of the process remains very independent and free, requiring the skills to design and create. However, the table still offers much-needed guidance on the technical construction regarding measurements and connections. With this tool there is a sense of reflection in action in model-making that can maintain accuracy.

Devendorf

Laura Devendorf follows the same trend as Fiebig with her interactive project titled “Being the Machine”, in which the user uploads a model and is able to choose whether or not to follow instructions given by a program that projects a laser point indicating where it recommends placing material to physically create that model.

Figure 13: Using a material (in this case, candy) and printing with it following the computer’s instruction.
Figure 14: Another model from the same process, this time using pipe cleaner.
Figure 15: The vision for a human 3D printer. The laser guide was ultimately removed from the body in her final product.


Zoran

Continuing along the vein of physically and digitally symbiotic interactions, Amit Zoran refers to his own projects as studies on interactions in hybrid mediums; he is the maker of FreeD, an intelligent dremel that is aware of the location of its tip relative to the location of a virtual model. Zoran explains his rationale behind the project in his essay, “FreeD – A Freehand Digital Sculpting Tool”:

Figure 16: Showing off the sculpting process with the boundary set as the alien model. Image (e) exemplifies the option to override in the sculpting of the mouth.

We designed the FreeD to allow complete gestural freedom - similar to working with a chisel or a knife - and to allow an intimate tangible experience with a raw material. Nevertheless, the FreeD also gives the user a “safety net” by relying on a pre-designed CAD model, similar to working with a digital machine. -ZORAN

The dremel indicates whether it sits within a tolerance of the edges of a model, guiding the wielder to carve out the shape of the model freehand. With this feature the tool becomes something a beginner can potentially use to learn to carve shapes accurately, while still having a lot to learn about the general craft and skills of a woodworker, sculptor, or carver. Zoran has worked on another project, called Digital Airbrush, which has essentially the same functionality but with an airbrush instead of a dremel: the airbrush is automatically turned off when it leaves the boundary of the painted subject. Both projects look into intelligent technology that uses the digital world to aid a physical analog process. The final result still depends on the skill of the designer, but the tools provide “guides” that make suggestions to the designer rather than forcing them. He calls the circumstance interactions in hybrid mediums because the digital technology interprets movements for an analog process and gives real-time feedback that guides the user, not unlike a ruler. The key takeaway from Zoran’s projects is that they give the artist or designer tools that allow for free movement within the confines of intended boundaries.
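The guard behavior described above, disabling the cutter when the tip would cut inside the target model, can be sketched as a signed-distance tolerance check. This is an illustrative assumption about the logic, not Zoran's actual implementation; the target model is simplified here to a sphere so the signed distance is trivial to compute, whereas FreeD tracks a full CAD mesh.

```python
import math

# Illustrative sketch (not Zoran's code) of a FreeD-style guard:
# the spinning tool is disabled whenever its tip would cut inside
# the virtual model. The model is simplified to a sphere.

def signed_distance_to_sphere(tip, center, radius):
    """Signed distance from the tool tip to the model surface:
    negative inside the model, positive outside."""
    return math.dist(tip, center) - radius

def tool_enabled(tip, center, radius, tolerance=1.0):
    """Allow cutting only while the tip stays at or outside the model
    surface, within a small tolerance band (arbitrary units, e.g. mm)."""
    return signed_distance_to_sphere(tip, center, radius) > -tolerance

# Tip well outside the model: cutting allowed.
# Tip deep inside the model: the tool shuts off, protecting the shape.
```

The tolerance band is what leaves room for the user's hand: within it the tool keeps running, so the final surface still carries the slight irregularity of freehand carving rather than a machine-perfect finish.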

Figure 17: Illustrating 3D tracking; lines highlight the intersection with the model, thereby deactivating the dremel.
Figure 18: Image painted by Zoran using the Digital Airbrush

Whether hybrid materiality or machine aid in the early processes of design is more conducive to creativity is a question that cannot ultimately be answered, as the answer may vary from person to person. However, it will be assumed in this essay that accuracy in prototyping is generally preferred unless it comes at the cost of time or money. To address these two issues, Zoran has designed his tools so that the designer uses the digital realm for design first, then switches over to the intelligent tools, which guide the designer’s hand while still allowing for overrides, and thus a continuation of the design process. For this reason, while both of Zoran’s projects speak to a hybrid medium, what is lacking is the ability to create from scratch using his tools. With Zoran’s projects the user is required to have a pre-set model or image to reference when moving into the analog side of things. However, time efficiency in production is, in some situations, not the most important feature of computer-aided design, since, as previously discussed, some processes are faster by hand while others are faster by machine; ultimately it is the real-time feedback that is primarily valued in the design process.


Lee

In his project “Haptic Intelligentsia,” Joong Han Lee explores a concept very similar to Devendorf’s approach of the human becoming the machine, although his version is much more limited. The project also has elements similar to FreeD in that it limits the user’s range of motion, although the limits can be overridden. In contrast to Zoran’s project, Lee’s concept uses haptic feedback to guide the user along a model’s surface instead of blocking the tool when it intersects the model. This is because the device uses a glue gun to extrude melted layers of glue, coming the closest among the listed precedents to an actual desktop 3D printer. The project takes the unlimited number of choices in an empty space and imposes limitations using the haptic feedback device, so that the user can simply trace along the guidelines and see the resulting model. Although this is the goal of the project, it is also its limitation, as the user is not a designer but an actuator of an existing input design. The project was presented not for anyone to input their own model, but for them to blindly follow instructions. The user is able to stray from the guidelines physically if desired, but that was not the intention of the interaction, the way it was for Zoran’s and Devendorf’s proposals. This approach, however, has great potential.

Figure 19: The user is incredibly removed from the process, as emphasized by the drastic encasing. Figure 20: Each model is unique due to the nature of the material and the actuation of a human.


Lia

The expressive sculptures of Lia, a mononymic Austrian generative artist, showcase the craft and beauty that 3D printing is capable of. Unlike Lee’s work, Lia’s models are made with a desktop 3D printer; however, each model iterates through the multitude of aesthetic opportunities that arise from combining different movement and extrusion speeds.

Figure 21: On the left, a combination of the known and accepted speed and extrusion rate, with some fluff on the ends. Figure 22: Two prints using the same tool path but different speed and extrusion rates, revealing a very different result.

What is so exciting about Lia’s work is that it dispels the concept of perfection in fabrication. In this work the models are imperfect, never exactly the same. The beauty lies in the material behavior of melted plastic, the controlled chaos. What is satisfying in these models is the perfect, tight composure of some parts while other parts are allowed to be free. Where Fiebig, Zoran, and Lee bring mechanical limitations to the human process, Lia takes the purely mechanical process of 3D printing and truly turns it into a craft by allowing direct control over the printing process. Although the results of this project demonstrate beauty, craft, and imperfection in the machine process, there remains room to expand the process to accommodate changes during the making, real-time feedback, or a one-to-one intimate relationship between the designer’s decision and the result of that choice. For the designer who wishes to improvise on the fly, or who desires hand craft, Lia’s work paves the way for further exploration.


Guided Hand Understanding the Space

The first approach to this thesis was to begin with a roughly assembled version of the concept and test it early, to ensure the feasibility of the interaction design. This began with acquiring a device that can track movement and also offer some sort of feedback to the user. After extensive research and conversations with several knowledgeable professors, such as Ralph Hollis, who focuses on haptics, and Golan Levin, who focuses on interactive art, the Geomagic Touch device from 3D Systems provided the optimal solution, offering high fidelity in both location tracking and haptic feedback. A 3D printing pen is then affixed to the six-axis arm of the Geomagic Touch. The 3Doodler was chosen as it was the most reliable 3D pen at the time.

Figure 19: First prototype

This first prototype was demoed at a gallery in the Frank Ratchye Studio for Creative Inquiry. The 3Doodler was simply taped to the haptic pen, while a basic code sample was edited slightly to place the model in the proper location relative to the device. The demo was set up to show a video of the device in action and attract attendants, who were then instructed briefly on how to use the device. Many students were excited by the interaction, citing the new experience of haptic feedback while modeling. The attendants who mentioned having a design background with an understanding of technology seemed to take more easily to the device, while one student who worked purely in the art space struggled with the haptics, at one point claiming the device was broken. The biggest challenge for students seemed to be relating the digital model they saw on the screen to the physical world: they struggled to find and follow the model in physical space, and often did not reference the cursor on the computer screen, which indicated the relationship of the pen tip to the model. Additionally, the offset of the 3Doodler from the original pen tip location created a large amount of give, which ultimately allowed for more freedom of movement within the constraints, arguably a feature and not a bug. The majority of students (about 25 out of 30) were able to figure out how to use the device and deposit a good amount of material along the invisible surface of the model. Each student was limited to around five to ten minutes of use, and thus it took about five to ten students to complete a model print.

Figure 20: The strata of different students working is apparent

The different approaches to building each of the models are apparent in Figure 20. In each model there are visible layers of construction strategies: some areas explore a triangle scaffold, some a curling textural exploration, and the second model is very obviously a scribbled rush testing the speed of the plastic extruder. These models first and foremost proved the feasibility of the project, but also exemplified the concept of human craft within machine constraints. The users, once they understood the haptic modeling space, were able to trace along the surface with ease while moving freely along the surface constraint, exploring methods for crafting the model instead of focusing on its form. With these results the concept was proven, and the areas that required development became apparent. Some of this development was purely technical: it became necessary to mount a 3D printing pen in place of the haptic pen on the Geomagic Touch instead of just taping it on; the quality of the 3Doodler was not sufficient and it needed to be replaced with a better model; and there was a need for an indication of where the model was in physical space. There was also a need for further exploration of the relationship between the human and the device; this was necessary to understand how to most effectively create a perceived environment of collaborative design and creation with the device: the designer should not misunderstand the intention of the robot or feel frustratingly inhibited by the process. Lastly, the project required several application concepts to expose the full potential of the device.


Robot Painter

The robot painter project was developed for the final project assignment in Illah Nourbakhsh’s course, Human Robot Interaction. This course was chosen for its investment in a theoretical understanding of the relationship between humans and robots, rich with papers on social interactions, human psychology, and the human-robot interaction designs that respond to them. My group included myself, Su Baykal, a psychology and HCI undergraduate, Robert Zacharias, an Emerging Media master’s student, and Kim Baraka, a Robotics master’s student. Our project reveals the natural human reaction to working creatively and collaboratively with the Geomagic Touch device, and the perception of creativity during the collaborative process. More specifically, results from the experiment depict how much control the user is willing to give to the device, how much credit they assign it, and how inspiring they find it under certain conditions. The study began with a single research question:

How does making art in collaboration with the haptic device affect self-perceptions of creativity? The question breaks down into variables: the independent variable is a change in one detail of what we tell the user about the robot, and the dependent variables are the user’s survey responses. The dependent variables are the user’s perception of creativity, willingness to collaborate, and credit assigned. The independent variable is whether the subject is told they are using the device as a proxy to collaborate with a human or with an algorithm. The most important constant is that, in either creative collaboration, the device actually references the same drawing, pushing the user to trace along its curves when in proximity. Additionally, the background story for having users test the device is constant: the users are told that our team will be submitting the best paintings done in collaboration with the device to a robot art competition, for which there will be a prize for the most creative piece.

Figure 21: Timeline of the study showing conditions. The study had two conditions, an “algorithm” collaborator and a “human” collaborator, each following the same timeline:

Technical warm-up with the device:
- Free trial painting, device off (2 mins)
- Follow the device’s instructions (2 mins)

Creative study tasks (random order):
- Free painting, device off (5 mins); take survey
- Subject told they are collaborating with an algorithm to create a painting (5 mins); take survey
- Subject told they are collaborating with a human through the device to create a painting (5 mins); take survey


After each of the creative study tasks the user was asked to fill out a survey with questions regarding perceptions of creativity and collaboration during the task. Additionally, they were asked to indicate, in percentages, what they believed the contributions of the robot and themselves were, and then how credit should be assigned between the robot and themselves. In addition to these Likert-scale questions there were a few open-ended questions. The setup of the collaboration is as shown below.

Figure 22: The paintbrush is mounted in place of the haptic pen. The artist is required to hold the paint cups within reach of the device to dip the brush. A small bowl of water is offered to clean the brush if desired.

Figure 23: Diagram of the study room layout: researcher at laptop, subject at the paint station, and the collaborator. The second study room was used exclusively for when the user was told they were collaborating with another human, so that the story would be more believable.


The collaborations yielded exciting results regarding human perception of collaboration and credit assignment.

Figure 24: The input curves on the left, followed by the resulting paintings. The haptic device snapped to a curve and then forced the user along it toward its end point. The proximity of some curves led to the effect of a vortex, as seen in both paintings.

Figure 25: Chart titled “Changing Perceptions of Openness to Collaboration While Painting,” comparing the algorithm and human conditions across the first and second halves of the collaboration, showing the change in willingness to collaborate over time.

Figure 25 shows the users’ answers to the question “How open to collaboration with your collaborator were you?” for both the first and second half of the collaboration process. Those told they were collaborating with an intelligent algorithm always changed their opinion after getting acquainted with the device, for better or for worse. Those told they were collaborating with a human mostly maintained their opinions, although those who did change always felt more open to collaboration over time.


Figure 26: Chart titled “Perceptions of Collaborator’s Creativity by Collaborator Type,” comparing the human and algorithm conditions. People felt the human was more creative than the algorithm despite the two being exactly the same.

Figure 26 shows people’s perception of the collaborator’s creativity. It is evident that, although the device used the same exact input drawing to guide the test subject along, subjects assigned a higher level of creativity to their collaborator when they believed it was a human. That said, in the open-ended answers there was a trend of word usage describing being dominated by the collaborator more when subjects believed it was a human than when it was an algorithm. Subjects appeared to respect the device’s force feedback more when they believed a human was controlling it. When told the collaborator was an algorithm, however, subjects used language describing the device as something they were in charge of and were telling what to do. This suggests that humans are more willing to take charge when collaborating with technology. That said, Figure 27 shows that although subjects assigned themselves a higher percentage of contribution, they still assigned themselves only slightly more than half the credit for the final product in collaboration with the robot.

Figure 27: Chart titled “Self Contribution vs. Self Credit by Collaborator Type,” comparing the human and algorithm conditions. The discrepancy between credit assignment and contribution perception was higher for the algorithm condition.


The single result that stood out among the crowd was that of a first-year architecture student, who was told the collaborator would be an algorithm. This subject was to paint first with the device collaboratively, and then paint with the device off. The subject had struggled with the training tasks, and when collaborating with the robot seemed to be exploring the device’s capabilities more than trying to produce a painting. The last condition for this subject, with the device turned off, generally yields the standard credit assignment of 100% to the subject and 0% to the collaborator, because there is no haptic feedback. However, this subject assigned the robot 20% credit, after producing what decidedly became my favorite painting of the many that were made.

I felt as though my previous interaction with the collaborator heavily influenced my work, and attempted to portray that as much as possible. -NICK, ARCHITECTURE 1ST YR STUDENT

Figure 28: Resulting painting with the robot inactive, after having painted with the “algorithm” collaboratively.


The results from the Robot Art project truly exemplified what I hoped to achieve: to have the haptic feedback device inspire ideas that would not have occurred otherwise. Our team’s results were intriguing enough to our Human Robot Interaction class that we have been encouraged to push the research further into a publishable format. Starting early 2016 our team will begin IRB level studies on a new set of subjects. With this in mind the project jumps out of the paint and paper process, taking advantage of the three-dimensional potential that the haptic device is capable of.


Machine Aid for the Analog Process

Guided Hand provides a symbiosis of the fabrication and design processes. This thesis allows users, who in the context of this paper could be any kind of designer, artist, or other visionary creator, to explore the 3D printing process in real time, gaining an intimate understanding of the material while being offered the freedom to perform free-hand printing within machine limitations. The system will first be described in parts, elaborating briefly on the digital and physical technologies used in the interaction, before delving into the terminology of the system, the physical development of the device, applications, and limitations. Finally, Guided Hand will be discussed as an exciting concept with great potential, and thus much work still to be done.

Breaking down the system begins with an understanding of both the computer and the haptic feedback device, the Geomagic Touch, which will simply be referred to as the device. The computer end of this interaction prototype is used to exemplify the possible range of interactions and applications, programmed using C++ on a Windows computer. The application first displays the command prompt, asking the user to drag and drop an .obj file and hit enter. Once a proper .obj model file is entered, the application displays three screens, not including the Command Prompt. These screens are views of the entered model: top, front, and perspective. These views ultimately display the digital space, which is able to render a number of objects within the view-space: points, lines, surfaces, and volumes. The views are provided for the user’s comprehension of the relationship of their current cursor position to the physical device.
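The drag-and-drop step above hinges on parsing the .obj model file. As an illustrative sketch (not the thesis application’s actual code), a minimal C++ reader for the vertex records of a Wavefront .obj file might look like this; face, normal, and texture records are ignored for brevity:

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };

// Read only the "v x y z" vertex records from a Wavefront .obj stream.
// Faces ("f"), normals ("vn"), and texture coordinates ("vt") are skipped.
std::vector<Vec3> loadObjVertices(std::istream& in) {
    std::vector<Vec3> verts;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;               // reading a whole token excludes "vn"/"vt"
        if (tag == "v") {
            Vec3 v{};
            if (ls >> v.x >> v.y >> v.z) verts.push_back(v);
        }
    }
    return verts;
}
```

The vertex list alone is enough to draw the top, front, and perspective views as point clouds; a fuller loader would also read faces for the surface and volume interactions.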


The device is essentially a robot arm with six degrees of freedom in its movement. The tool on the robot has a likeness to a normal pen, designed to be held in the user’s hand such that it is used as a three-dimensionally capable cursor for the application, relating the movement and orientation of the digital cursor to the pen. By communicating with the computer over an Ethernet connection, the device is able to understand the objects rendered in the digital space, and uses three motors to create the physical sensation of various forces. These forces are able to emulate many haptic sensations, including but not limited to the sensation of magnetic attraction, gravity, collision with a physical object, or the tactile sensation of touching along a surface. Haptics is defined as relating to the sense of touch; haptic technology, therefore, is technology that focuses on utilizing the sense of touch. The device is capable of seeing the objects that are within the digital space. As the user holds the pen of the device and the corresponding digital cursor is moved to collide with an object in the digital space, the device is able to physically respond in a number of ways, which will be explained using a set of terms.

Terminology

The geometries, which reflect the objects within the digital space, are explored in Guided Hand to provide opportunities for physical responses from the device. Like the digital space, these geometries are points, lines, planes, surfaces, and volumes. With these geometries come a number of possible haptic interactions: collision, attraction, and tactile interactions. Collision can be divided into two diametrically opposed subsets, containment and boundary exclusion. Boundary exclusion is the interaction that best represents interactions with objects in the physical world, where upon meeting the boundary of an object, a resistant force prevents passing through it. When the device pen tip has collided with the outside boundary of a volume or a surface, the device provides a resistant force. Doing so reveals the physically invisible object that the device sees, emulating its physical presence. Containment, on the other hand, can only be applied to a volume and, as the word itself states, will contain the pen tip within the confines of the volume. With the containment option the user can essentially feel a volume from the inside out, providing the opportunity to draw within the bounds three-dimensionally or to fill the volume with controlled or uncontrolled extrusions of plastic or resin, creating what will be referred to as infill when discussed in further detail in the applications section.
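The two collision modes can be made concrete with a minimal sketch. This assumes a simple spring (penalty) force model and a spherical volume; the Geomagic device computes its forces internally, so this stands in for that behavior rather than reproducing it:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Penalty-force model for a spherical volume of radius r centered at c.
// Boundary exclusion: push the tip outward once it penetrates the surface.
// Containment: push the tip back inward once it leaves the volume.
// k is the spring stiffness; force grows with the violation depth.
Vec3 sphereForce(Vec3 tip, Vec3 c, double r, double k, bool contain) {
    Vec3 d{tip.x - c.x, tip.y - c.y, tip.z - c.z};
    double dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (dist == 0.0) return {0, 0, 0};
    double depth = contain ? (dist - r)   // positive when outside the volume
                           : (r - dist);  // positive when inside the boundary
    if (depth <= 0.0) return {0, 0, 0};   // no violation, no force
    double s = (contain ? -1.0 : 1.0) * k * depth / dist;  // radial direction
    return {s * d.x, s * d.y, s * d.z};
}
```

In a real haptic loop this force would be recomputed at each servo tick from the current tip position and sent to the device’s motors.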


Attraction, unlike collision, is not designed to prevent the pen tip from reaching beyond the confines of a surface or volume. It instead focuses on physically guiding the pen tip, emulating the sensation of the pen tip being magnetic while invisible magnets float, locked in place around it, so that by moving the pen around the user is able to feel pushing and pulling. Snapping is a subset of attraction which can be applied to all four geometry objects: the point, curve, surface, or volume. The snapping force effectively pulls the pen toward the geometry and requires a stronger force to pull the pen away from the object. For a point, this simply means the pen is biased toward the point itself. As the pen tip gets closer to the point, the force becomes stronger, and inversely it gets exponentially weaker as the tip distances itself from the point, effectively making the attraction to the point literally feel like the sensation of snapping into place. Applying this same attraction-force logic to a curve means that the pen can freely move along the length of the curve, but has to be snapped on or off of the curve itself. This also applies to a surface and a volume. Like snapping, path following also tends the pen tip toward the object; the difference, however, is that path following is specific to a curve, and provides the opportunity for the pen to be pulled tangentially along the length of the curve, in effect biasing the user to trace along the line. The user of course has the strength, and thus the freedom, to resist the biased force of the device along the curve. For both attraction forces, snapping and path following, there is a specific distance under which the attraction takes effect, a distance which can be customized as desired. The strength of the force can also be customized, as is the case for many if not all of the haptic forces available.
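Point snapping can be sketched as a force that is active only within a capture radius and grows as the tip nears the anchor. The linear falloff below is an illustrative assumption; the device’s actual falloff curve (described above as roughly exponential) may differ:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Snap the pen tip toward an anchor point. The pull is active only within
// `radius`; inside it, the magnitude ramps from full strength at the anchor
// down to zero at the capture radius, so escaping the well feels like a snap.
Vec3 snapForce(Vec3 tip, Vec3 anchor, double radius, double maxForce) {
    Vec3 d{anchor.x - tip.x, anchor.y - tip.y, anchor.z - tip.z};
    double dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (dist >= radius || dist == 0.0) return {0, 0, 0};
    double mag = maxForce * (1.0 - dist / radius);  // stronger near the anchor
    double s = mag / dist;  // normalize d and scale in one step
    return {s * d.x, s * d.y, s * d.z};
}
```

Snapping to a curve or surface follows the same pattern, with the anchor replaced by the closest point on that geometry.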
Having gone over collision and attraction, the last haptic interaction explored in this thesis is that of tactile interactions, which thus far comprise vibration, friction, and dampening effects. Vibration is the easiest to comprehend: it simply vibrates the pen with a force of a given magnitude and a variable rate of change in the force direction, referred to in this essay as frequency, creating the vibration. Both the magnitude and the frequency can be controlled by the user to create a variety of textural effects. The vibration can be toggled in the x, y, and z axes, effectively capable of gyrating in all three directions simultaneously. The friction and dampening interactions both have variable magnitude and gain controls, where for friction, increasing the magnitude increases resistance while reducing the gain increases fidelity. A friction force with high magnitude and high gain essentially amounts to a snapping resistance to any movement. The dampening effect is essentially the same, except that instead of feeling like the snapping resistance of friction, it feels like the thick resistance of moving through molasses; dampening makes all movement require more force.
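A minimal sketch of two of these tactile effects, assuming sinusoidal vibration and velocity-proportional dampening; the magnitude, frequency, and per-axis toggles mirror the controls described above, while the exact formulas used by the device are not documented here:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;

// Vibration: a sinusoid applied per axis. `t` is time in seconds;
// `magnitude` and `freqHz` are the two user-tunable parameters, and the
// boolean flags toggle the effect on each axis independently.
Vec3 vibrationForce(double t, double magnitude, double freqHz,
                    bool x, bool y, bool z) {
    double s = magnitude * std::sin(2.0 * kPi * freqHz * t);
    return {x ? s : 0.0, y ? s : 0.0, z ? s : 0.0};
}

// Dampening: resist motion in proportion to the tip's velocity, like
// moving through molasses. `b` is the damping magnitude.
Vec3 dampingForce(Vec3 velocity, double b) {
    return {-b * velocity.x, -b * velocity.y, -b * velocity.z};
}
```

Summing the vibration output with a surface or snapping force is what produces the “textured line” effect used in the filigree application later on.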


To conclude, the types of interactions are collision, attraction, and tactile. These interactions will be used as methods to constrain a hand-facilitated fabrication process, where the device pen will be replaced by tools that create. The tools chosen for this thesis are the 3D printing pen and the paintbrush.

Tool Tips

There are many 3D printing pens, and for this project two different types are used. The first is the well-known plastic-extruder type of pen, embodied in the 3Doodler and Samto 3D pens. This kind of pen takes the same ABS or PLA plastic feedstock that works in a regular 3D printer, which is convenient, as these plastics are already common for rapid prototyping. The plastic extruder pen simply heats up its tip, pulls the plastic feed in through the back, and pushes it out of the heated tip, after which the plastic must cool to solidify again. The plastic extruder pen offers the opportunity to explore the craft of controlled chaos with the medium, the chaos being variable depending on the extrusion-rate setting, which can go quite high. The second type of pen explored in Guided Hand is the photosensitive resin-curing pen. The only UV pen currently on the market is the Creopop. A UV light on the pen activates the resin as it comes out of the pen tip resembling gel. This material is capable of extruding much thicker lines than the plastic alternative, which is particularly beneficial for extruding in mid-air while maintaining the shape of the output. The caveat to printing in mid-air is that the extrusion speed is significantly lower, and it is not uncommon for the resin to require more time under the UV light to cure, resulting in a faster rate of extrusion than curing and thus the need to pause printing now and again. Although the resin material is more dependable once cured, it can also be a little sticky with uncured resin if not given more time under the UV. With these two pens there is a multitude of possibilities for applications, which in Guided Hand were explored as filigree, sculpture, prototype, and recording. The filigree application has the option to use either path following; curve or point snapping; or containment.
For path following, the user would load a simple .obj file describing all of the points in a curve. This option takes some extra file preparation but can yield effective results. The curve-snapping option gives less direction from the device and thus more freedom to trace the lines in a unique or desirable way. However, if there is no digital drawing in mind, there is also the option of free-hand drawing within a contained volume. Although filigree is applied to a surface, for this interaction a volume was chosen to give a free range of motion within the confines of a vertically extruded shape, so that the user can pick up the pen from the printing surface without having to use extra force to snap off of the surface being filigreed. This interaction essentially allows for full freedom of movement within the confines of the outline, such that the device acts as a smart stencil which the user can fill with either the resin or the plastic extrusion. In Guided Hand, plastic was explored due to the interesting textural effect of the extrusion drying on a flat surface in a specific drawing pattern, which would not occur with the gelatinous resin extrusion. Using the plastic extruder, the user is also always open to the option of toggling the vibration tactile effect to draw textured lines along the surface.
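Path following along a curve sampled as a list of points can be sketched as two combined pulls: one onto the nearest sample of the curve, and one tangentially toward the next sample, which is what draws the hand along the line. This is an illustrative model, not the implemented interaction:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Pull the tip onto the polyline (strength `pull`), then bias it along the
// tangent toward the next sample (strength `drive`), so the user is drawn
// along the curve but can still resist and break away.
Vec3 pathFollowForce(Vec3 tip, const std::vector<Vec3>& path,
                     double pull, double drive) {
    if (path.size() < 2) return {0, 0, 0};
    size_t best = 0;  // nearest sample point on the curve
    for (size_t i = 1; i < path.size(); ++i)
        if (len(sub(path[i], tip)) < len(sub(path[best], tip))) best = i;
    Vec3 toCurve = sub(path[best], tip);
    size_t next = (best + 1 < path.size()) ? best + 1 : best;
    Vec3 tangent = sub(path[next], path[best]);
    double tl = len(tangent);
    Vec3 f{pull * toCurve.x, pull * toCurve.y, pull * toCurve.z};
    if (tl > 0.0) {
        f.x += drive * tangent.x / tl;
        f.y += drive * tangent.y / tl;
        f.z += drive * tangent.z / tl;
    }
    return f;
}
```

The curve samples here correspond to the points the user loads from the simple .obj file described above.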

The second application, sculpture, is essentially the three-dimensional version of filigree, although in this case the focus for the user might be on the outer edge or the general infill of the sculpture’s volume. Both the outer boundary of the model and its infill provide opportunity for creativity, craft, and development. In the Stanford Bunny prints, the first iteration, using resin to create a wire-frame, proved to be quite challenging, but it was an informative lesson for gaining a personal understanding of the rabbit volume’s boundaries. Moving on to a fully printed model, the second iteration focuses on understanding how to optimize infill and how to trust the haptic boundary to ensure the print is true to the model, with some tolerance for the human hand. By the third model an understanding of the rabbit is evidently attained, and the user was able to focus on textural qualities: note the denser strata of plastic at the head, where a rabbit’s fur is generally shorter, and the looser, more chaotic mess of plastic at the chest and rear, where its fur would generally be fluffier. At this point the user felt confident enough to choose not to print the front of the ear, creating the look of a more realistic rabbit ear and adding a sense of thought and detail to the final printed model. The Stanford Bunny print series is a good example of an infilled print, although infill is not strictly necessary, as shown in the Vase print, in which the user made the conscious decision to print only along the outer boundary of the volume. A sculptural application such as the Vase print, similarly to the filigree application, has the option of using surface snapping so the user can continuously ensure being on the boundary. However, surface snapping proved to be stifling and frustrating, as the user might often want to pull the pen away to observe and respond to what is being printed in real time, or might occasionally want to pull away from the surface for any other reason; attempting to pull away from a surface with snapping makes for an undesirably rough interaction that can lead to clumsily jerking the pen tip off the surface, perhaps hitting another surface to then have to snap away from, making the interaction with the device more about fighting for control and inhibiting the intuitiveness of the process. With the containment option, the user who created the Vase print was able to realize the opportunity to create a bottom for the vase while leaving the rest of it empty, without any infill. As shown in the sculpture application, Guided Hand provides the user with plenty of potential to learn new, crafty ways to print, but it is not absolutely necessary to print within the confines of a model, to print all of the model shown in the digital space, or even to have a model at all. By allowing some tolerance and freedom within the constraints, and because the user is stronger than the bounding forces and can break away from them, the user is able to make changes to a model in real time as it is being printed. Essentially, this means that the model in the digital space, rather than being the target output print, can be thought of as more of a template. With this in mind, the opportunity for prototyping arises, where with each print the user can make slight or subtle changes.


The aforementioned option of not having a model would essentially mean no haptic interaction, unless the model is replaced by something else that can act as a point of reference, not unlike how grid paper is used when drawing objects to scale. The notion of a grid is made possible by the haptic interaction of point or curve snapping, where the points or curves lie in a grid format which, in Guided Hand, can be set to variable spacings. Although the spacing can be made larger or smaller, it is by default set to a comfortable and understandable value. Using the grid, the user can draw free-hand along it and have a reference point for measurements. This can be helpful for users who don’t know what to make and simply want to doodle, or to color in cubes or squares of the grid, see what comes out, and respond to it in the next iteration. The user isn’t required to follow a grid either, though; if for any reason the user prefers to draw with the haptics off, this option is always available. The benefit of doing so might be unclear without considering the last application explored in Guided Hand: recording. By recording the printing path, the information can be loaded into 3D modeling software and developed further digitally. Potentially, a volume can be derived from the recording and fed back into the haptic device, making for a true feedback-loop system in which the user can continuously make and record small changes to the model during each print and feed the new iteration back into the device, evolving it slightly with each prototype. Eventually, an iteration will be considered by the user as the final version. This version does not have to be a singular phenomenon, however, because with the recorded printing path the opportunity arises to deploy the same path on a normal 3D printer, yielding mass-fabricated duplicates of the original hand-crafted custom print.
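The grid itself reduces to computing the nearest grid point for the current tip position, which a snapping force can then target. A sketch, assuming a uniform grid spacing:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Snap target for the three-dimensional ruler grid: the nearest grid point
// when grid lines are `spacing` apart on every axis. The haptic layer can
// then attract the pen tip toward this point with a point-snapping force.
Vec3 nearestGridPoint(Vec3 tip, double spacing) {
    return {std::round(tip.x / spacing) * spacing,
            std::round(tip.y / spacing) * spacing,
            std::round(tip.z / spacing) * spacing};
}
```

Varying `spacing` is exactly the “variable distances apart” control described above; recomputing the target each servo tick makes the whole lattice feel tangible.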
Beyond the option of a feedback loop, however, there are opportunities for other applications of recording that can occur in real time during the current print. For example, in filigree it is more common than not to have symmetrical patterns, and if the user draws and records a pattern, they should then have the option to mirror their pattern over some designated axis. This recording could manifest either through the filigree tracing or path-following interactions, or could even take advantage of another robot that works with the user by mirroring the path in real time. The opportunities for recording continue in the digital realm as well, in that a user who considers a digital medium the final product could prefer to draw the digital model in a very physical way. By printing freeform and recording, a user can see in person what it is they are making, gaining a deeper and more intimate understanding of the model, and then being given the opportunity to work on the model digitally afterward.

At this point the four applications of 3D printing have been explained, elaborating on some of the limitations of each interaction as currently designed and speculating on potential interactions that have not yet been implemented in the system. Even from this brief exploration it becomes apparent that a haptically guided 3D printing pen holds a great deal of potential, and that much future work remains given the limitations of the currently developed version of this interaction system. Before discussing that future work, the limitations must first be elaborated on.


Limitations

The limitations of this interaction system can be considered in two ways. First, there are the technical limitations that arose from factors such as time constraints and my own skill level as the developer of the system. It became overwhelmingly apparent throughout development that some problems were out of my scope, and these were set aside in order to focus on what could be done. Second, there are the inherent limitations of the interaction even at its full potential, which would require more user studies to understand, but which can still be discussed once the technical limitations preventing Guided Hand from reaching that potential have been explained.

The haptic device does track the location of the pen, but the cursor in the digital space actually reflects the hinge on the haptic device that connects the pen to the robot arm. This means there is a large tolerance between where the extrusion is actually happening and where the cursor believes it is happening. Because of this offset between the physical and the digital, recording is imperfect and does not yield an exact representation of the print. This could likely be resolved with code that performs the proper translation, if the API does indeed allow for it.

From personal experience, the general method for controlling the device was never very clear to me, even after working with it for about six months. The confusing elements of development primarily involved understanding the specific relationship between the application’s digital space and the device itself; the developer kit provided by Geomagic was more helpful for understanding the digital space than the device. This lack of understanding led to a dependence on some of the higher-level functions in the API, where the size of the model was determined automatically and could not be set manually.
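One hedged sketch of what that "proper translation" might look like: if the pen’s orientation can be read from the device, the extrusion point could be estimated by offsetting from the tracked hinge along the pen body. The function, the direction-vector representation, and the 40 mm offset are all assumptions; the real offset would have to be measured on the actual pen mount:

```python
import math

def estimate_tip(hinge_pos, pen_direction, tip_offset=40.0):
    """Estimate the extrusion point from the tracked hinge position.
    pen_direction points along the pen body from hinge toward nozzle;
    tip_offset is an assumed hinge-to-nozzle distance in mm."""
    norm = math.sqrt(sum(c * c for c in pen_direction))  # normalize direction
    return [h + tip_offset * c / norm
            for h, c in zip(hinge_pos, pen_direction)]
```

Recording this estimated tip position, rather than the raw hinge position, would tighten the correspondence between the physical print and its digital trace.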
The ultimate desire was to have the real-world measurements of a model reflected in the haptic device. Instead, the API seemed to have its own logic for translating the digital model to the device, which prevents the user from dictating their own model sizes. In addition to the model scaling, controlling the haptic forces on the device became something of a challenge near the end of development. To load a volume, the API provides a function that simply takes in the file name and does much of the work itself, visualizing the .obj file and feeding it to the haptic device. This, however, meant that the same function used to open the file also controlled the haptic feedback. When I later implemented customized controls such as the tactile interactions and attraction, these forces seemed to compete with the higher-level code used to create the forces responding to the imported volume; for this reason, toggling on tactile forces effectively shut down the collision forces. It also means that, for reasons unknown to me, the application will only load an .obj file that contains exactly one geometry. These are, of course, fixable issues, given more time or skill to write custom code that executes in the desired way rather than depending on the generic API functions.

Moving beyond the technical aspect leads to the actual user experience and the limitations of the interaction system itself. There is, of course, a learning curve to using this device for the applications proposed by Guided Hand, but these kinds of limitations are part of what makes the interaction so exciting, as they mean the user must learn the craft. Something that truly is missing from this interaction, however, is real-world visual feedback. Although augmented reality was attempted during development, to overlay a visual of the model being printed, it was set aside to make time for the core concept of the project. This made for a difficult time understanding the model, as the user continuously had to look back at the computer screen to grasp what should really be in front of them. Projection mapping was also considered, but likewise proved more challenging than the available time allowed. Both options require aligning the size and location of the visual feedback with the size and location of the invisible model that only the haptic device can see and the user can only feel. With more time, the two systems could be consolidated. Beyond consolidating a real-world visualization with the device, it is important to further develop the computer interface as well. The current interface depends purely on keyboard controls, as the focus was on the physical interaction rather than the software interaction. Software, however, is inarguably important and should be redesigned to better explain the controls and applications.


Future work

Based on the current limitations of the system, as well as the many exciting potential applications discussed earlier that have not yet been developed, there is much future work to be done. Like the limitations, the future work can be broken down into technical development and user-end development.

On the technical side, it is imperative to complete the challenges that have not yet been addressed, starting with the basic issues that need fixing, such as the model scaling and the ability to import multiple objects at once. After solving these, there is the challenge of calibrating the augmented reality or projection-mapped overlay, preferably both, as each could have very exciting advantages for different applications. An exciting aspect of a visual overlay is that it can provide more information than just the object in the haptic space, such as vector maps, a heat map of what is left to be printed versus what is not, or advice on where in the volume to extrude ink faster and where to extrude slower.

With that in mind, the force controls should also be refined. As stated in the limitations, there is a large amount of tolerance in the device. It could be argued that this tolerance is what provides human freedom and a sense of craft and creativity; however, the tolerance should really be something decided on and controlled by the user as desired. By developing the system to have a much higher fidelity and a much tighter relationship between the exact point of extrusion and the digital cursor, the user would have an extremely fine level of control over the outcome. The user should be able to decide how much give there is in confined movements and then work within the haptically imposed limitations that they themselves have set. With this level of control, the next step of future work lies in a wider variety of haptic controls.
What more control looks like in this scenario is the ability to overlay a kind of heat map onto any given object, whether a curve, surface, or volume, indicating areas where the forces behave one way and areas where they behave another. Imagine a gradient such that, as the color shifts from white to black, a number of variables change: the vibration of the haptic device could intensify, the resistance or tolerance level could tighten, or the extrusion speed could increase. Perhaps all of these happen at once, while the print is still being actuated and completely controlled by the user. This level of fidelity and control would, of course, require much more robust user interface development, which could certainly be accomplished, though it would be an entire project in itself and would require its own independent user studies.

To this end there is also work to be done on the user end, which means getting artists and designers to use the device for longer periods of time to gain insight into the effectiveness of this interaction system as a symbiotic design-fabrication process. It would be valuable to learn and document more about material behaviors, logging them for users to reference and offering preset values for desired results. To truly complete the project, though, would mean for someone to use this interaction system on an actual design-fabrication project and to say they were able to create in ways they had not envisioned possible before.
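The gradient idea above can be sketched as a mapping from a 0–1 heat-map value, sampled at the cursor, to a set of haptic and extrusion parameters. The parameter names and ranges here are illustrative placeholders, not calibrated values from any device:

```python
def haptic_params(gray):
    """Map a heat-map value (0 = white, 1 = black) sampled at the
    cursor to haptic/extrusion parameters. Ranges are placeholders."""
    g = min(max(gray, 0.0), 1.0)          # clamp out-of-range samples
    return {
        "vibration_hz": 250.0 * g,        # vibration intensifies toward black
        "resistance_n": 0.5 + 3.0 * g,    # stiffer resistance toward black
        "extrude_mm_s": 2.0 - 1.5 * g,    # slower, more careful extrusion
    }
```

Sampling this mapping continuously during a print would let all three variables shift at once as the pen moves across the painted gradient, as described above.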


Conclusion

Guided Hand does not pretend to be an ultimate solution to any problem, but rather an exploration that acknowledges some of the dissonance felt in current traditional methods of design and fabrication. To that end, the system is an exciting proposal that could pave the way for an entirely new process in which humans collaborate more closely with machines, each taking advantage of the other’s strengths and working together to yield unexpected and exciting results.




