WORKSHOP 2: ENCODED MATTER
Tutors: Rob Stuart Smith | Tyson Hosmer
Joumana Abdelkhalek | Morgan Graboski | Nessma AlGhoussein
Table of Contents
Introduction: Workshop Objectives
Chapter 1: Introduction & Initial Experimentation
Chapter 2: Processing Script of Swarming Agents
Trials 1.0: Single Agent Types
Trials 2.0: Multiple Agent Types
Chapter 3: Real-Time Milling
Trials 3.0: Real Time - Single Agent Types / Real Time - Multi Agent Types
Chapter 4: Arduino - LDR Sensor and Set-up of CNC
Trials 4.0: Feedback Loop of Script with Arduino
Trials 1.0: Single Agent Exploration
Trials 2.0: Multi-Agent Exploration
Trials 3.0: Single Agent Real-Time Exploration / Multi-Agent Real-Time Exploration (CNC Machine)
Trials 4.0: Single Agent Real-Time Arduino Feedback Loop Exploration with LDR
Introduction This workshop used Processing to create digital simulations of the swarm behaviour of agents, which were then milled, either from images, from 3D models, or directly from the code (in real time), using a Roland MDX CNC machine. To translate the designs from digital form into physical models, ArtCAM and Processing were used to produce strings of coordinates that were sent to the CNC milling machine and used to control it.
Processing
commons.wikimedia.org
Initial experimentation involved milling images and 3D models that were produced in Processing and manipulated in ArtCAM, which allowed the depth of the cut to be controlled by colour assignment. Later experimentation focused on creating a direct relationship between the digital and the physical realm: milling a series of scripts with single or multiple agents directly from Processing, cut in real time or semi-real time. This was taken a step further by adding an Arduino setup with light sensors, creating a feedback loop between the machine and the digital script.
ArtCAM
Roland MDX 540 CNC Mill www.creativetools.se
The CNC Machine has several different types of drill bits for cutting. From right to left, above: flat end, ball nose, and V-shape. The models produced during this workshop were mainly done using the 3mm ball-nose and 6mm V-shape bits.
Problematic: Many of the limitations encountered concerned the CNC machine's capabilities: 1. There is only one drill bit, so only a single agent can truly be milled in real time. 2. Milling of multiple-agent codes is therefore semi-real-time, as the resulting G-code has to be divided into segments that are sent to the machine in turn. 3. The machine's memory can hold at most 60,000 lines of code at any one time.
CNC Milled 3D Model Experimentation
Surface of the Moon http://nasa3d.arc.nasa.gov/models
Smooth Wave Surface Rhino Model 3mm Ball Nose Drill Bit
Rough Cut Angular Rhino Model 3mm Ball Nose Drill Bit - High StepOver
Trials 1.0 Single Agent Types
Coding Parameters: The swarming-agent code consists mainly of the following parameters:
1. The position (p), which defines the location of the agent, and the velocity (v), which defines the speed and direction of the agent.
2. The range of vision (rangeOfVis), within which the agents sense their surroundings and each other.
3. Upon sensing each other, the agents can act in different ways:
- They can cohere (coh), separate (sep), wander (wan), align (ali), or follow each other's trails (tra).
- They can be attracted to or repelled from a specific point (attractor).
- They can rotate or change type according to their neighbors.
The above parameters are local rules which, when changed, can produce a coherent, smart swarming system. The main objectives of the scripts below are to change the local parameters (a bottom-up design approach), to learn from the results, and, where possible, to create stigmergy patterns. Each script therefore has its own values for the range of vision, the number of agents, and the agents' starting points, and the agents are given different velocities and forces. The cohesion, separation, alignment, and wander vectors are also manipulated in many of the codes to produce different combinations. In some of the codes, the agent types change according to a defined set of rules.
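The general structure behind all of the trials can be summarised in the simplified sketch below. It is an illustrative reconstruction only, not the workshop script itself: it uses Processing's built-in PVector instead of the toxiclibs Vec3D used in the actual codes, and the class name, weights, and any variable not mentioned above are placeholders.
// Illustrative sketch only (not the workshop code): a minimal swarming agent
// with the parameters named above, written with Processing's built-in PVector.
ArrayList<Agent> agents = new ArrayList<Agent>();

void setup() {
  size(800, 600);
  for (int i = 0; i < 200; i++) {
    agents.add(new Agent(new PVector(random(width), random(height)),
                         new PVector(random(-1, 1), random(-1, 1))));
  }
}

void draw() {
  background(0);
  stroke(255);
  for (Agent a : agents) {
    a.flock(agents);
    a.update();
    a.display();
  }
}

class Agent {
  PVector p, v;            // position and velocity
  float rangeOfVis = 50;   // how far the agent senses its neighbours
  float maxVel = 4;        // maximum velocity

  Agent(PVector p, PVector v) {
    this.p = p;
    this.v = v;
  }

  void flock(ArrayList<Agent> all) {
    PVector coh = new PVector();
    PVector sep = new PVector();
    PVector ali = new PVector();
    int n = 0;
    for (Agent other : all) {
      float d = PVector.dist(p, other.p);
      if (other != this && d < rangeOfVis) {
        coh.add(other.p);                                 // move towards neighbours
        sep.add(PVector.sub(p, other.p).div(max(d, 1)));  // move away from close neighbours
        ali.add(other.v);                                 // match neighbours' heading
        n++;
      }
    }
    if (n > 0) {
      coh.div(n).sub(p);
      ali.div(n);
      coh.mult(0.01);   // these weights are the local rules each trial varies
      sep.mult(0.5);
      ali.mult(0.2);
      v.add(coh).add(sep).add(ali);
    }
    v.add(PVector.random2D().mult(0.1));  // wander
    v.limit(maxVel);
  }

  void update() {
    p.add(v);
    p.x = (p.x + width) % width;    // torus space: wrap around the edges
    p.y = (p.y + height) % height;
  }

  void display() {
    point(p.x, p.y);
  }
}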
Swarming Behaviour of Fish
Initial Stigmergy Tests
Frame 218
Frame 120
Frame 386
Frame 598
Frame 194
Four Corner Start: This script was one of the very first trials. The agents start from the four corners and initially appear to collide with one another; however, the closer they approach the centre of the space, the more they separate and move as single agents. This is due to the following combination of parameters altered in the code: the range of vision was very low (20), and the parameters that caused this behaviour are as follows:
Frame 62
Frame 114
Frame 236
Frame 269
p = new Vec3D(10, 10, 0); v = new Vec3D(random(-1, 1), random(-1, 1), 0); with a maximum velocity of 4.
coh.scaleSelf(0.02); sep.scaleSelf(2); ali.scaleSelf(0.0); wan.scaleSelf(0.0);
Frame 192
Frame 158
Frame 303
Frame 325
Single Line Start For this simple experiment with swarm code, 100 agents were initialized randomly along a single line. Due to their initial velocities being random values, a slightly different variation of the same result occurred each time. With a range of vision of 130, the agents quickly moved towards each other and continued to travel together. coh.scaleSelf(.01); sep.scaleSelf(0.1); ali.scaleSelf(0.7); wan.scaleSelf(0.2); tra.scaleSelf(0.2);
Frame 43
Frame 89
Frame 129
Frame 155
Single Type with Attractors In the example shown, the attractors were placed at random positions in random numbers, and the behaviour of the agents was influenced by the strength of the trail and the strength of the attractor. The code demonstrates that when the attractor is not within rangeOfVis, the agents behave randomly and do not react to it. In other cases, the agents are strongly drawn to the attractor because it lies within their range of vision.
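The attractor rule can be pictured with a fragment like the one below. This is a hedged guess at its general form, not the workshop code: attractorPos and attractorStrength are placeholder names, and the force is simply added to the agent's velocity.
// Illustrative only: attractorPos and attractorStrength are placeholder names.
Vec3D att = new Vec3D(0, 0, 0);
if (p.distanceTo(attractorPos) < rangeOfVis) {
  att = attractorPos.sub(p);        // vector pointing from the agent to the attractor
  att.normalize();
  att.scaleSelf(attractorStrength); // positive values attract, negative values repel
}
v.addSelf(att);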
The milled piece was generated from an image of the code and modelled in ArtCAM. The milled piece highlights an important factor: the density of the agents. The fewer the agents, the shallower the milled piece; as a result, the points at the attractors are the highest.
The following code is a single agent type which starts at random positions; however, it creates a very interesting stigmergy pattern, as shown in the simulation. The following combination of parameters was used for 500 agents, which start at random positions within the box width and box height:
for (int i = 0; i < 500; i++) {
  Vec3D p = new Vec3D(random(boxWidth), random(boxHeight), 0);
  Vec3D v = new Vec3D(random(-4, 4), random(-3.5, 3.5), 0);
  coh.scaleSelf(.01);
  sep.scaleSelf(0.01);
  ali.scaleSelf(0.2);
  wan.scaleSelf(0.3);
}
if (neighborList.size() < 25) tra.scaleSelf(0.9);
Frame 4
Frame 16
In this code, when an agent identifies more than 25 neighbors within its range of vision, it becomes more attracted to the trails of those agents, giving the following pattern.
Frame 71
Frame 53
Frame 60
Frame 84
Frame 93
This code was milled in real-time with a 6mm V-shape bit.
Trials 2.0 Multiple Agent Types
Two Agent Stigmergy 01 In the following code there were two types of agents, with no random variable given. The agents start with opposite velocities, and when they come within each other's range of vision they start seeking each other's trails, as shown in the code. The unique parameters of this code are: rangeOfVis = 80; tra.scaleSelf(1); However, the maximum velocity (v) is doubled, at 8 rather than 4.
Frame 117
Frame 174
Frame 141
Frame 190
Two Agent Stigmergy 02 The same idea of starting from opposing positions applies; however, the parameters were manipulated to create the following pattern: rangeOfVis = 20; tra.scaleSelf(0.5); coh.scaleSelf(0.01); sep.scaleSelf(0.3); wan.scaleSelf(0.1); ali.scaleSelf(0.0);
Frame 27
Frame 49
Frame 145
Frame 182
Frame 62
Frame 98
Frame 119
Frame 197
Frame 218
Frame 235
Top and Bottom 01 The concept of the previous code remains; however, it was made more controlled by changing the starting positions. The idea was to create two agent types that seek each other to form a stigmergy pattern. Hence, the parameters are the same as in the previous code.
Frame 12
Frame 37
Frame 112
Frame 131
Frame 59
Frame 78
Frame 96
Frame 168
Frame 191
Frame 198
Seeking Trails Agents of type one (yellow) and two (pink) share the same values for cohesion, separation, and alignment. coh.scaleSelf(.01); sep.scaleSelf(0.1); ali.scaleSelf(0.7); tra.scaleSelf(0.5); Agents of type one converge and are chased by agents of type two, who seek each other's trails.
1
2
5
3
4
6
7
8
Seeking Trails and Space Packing 01 Both agent types here share the same behavioural values: coh.scaleSelf(0.1); sep.scaleSelf(0.1); ali.scaleSelf(0.7); Agents of Type Two (pink) converge and seek the trail left by similar agents (stigmergy). Agents of Type One (yellow) are repelled by the trails left by agent Type Two and consequently fill up the remaining spaces outlined by the trails.
1
3
2
4
Seeking Trails and Space Packing 02 Agents of Type One (yellow) have the following behavioural parameter values:
coh.scaleSelf(10); ali.scaleSelf(0.1); tra.scaleSelf(5); Agents of Type Two (pink) have behavioural parameter values as follows:
coh.scaleSelf(0.1); sep.scaleSelf(0.5); ali.scaleSelf(0.9); tra.scaleSelf(2); Agents of Type One (yellow) start spreading into the frame from the four corners of the simulation; they are repelled by the trails of agent Type Two (pink) and try to make their way into the centre of the box without intersecting the trails.
1
2
6
7
3
4
5
8
9
10
Seeking Trails and Space Packing 03 Agents of type one (yellow) have the behavioural parameter values as follows: sep.scaleSelf(10); ali.scaleSelf(0.1); tra.scaleSelf(5); Agents of type two (pink) have the values of: coh.scaleSelf(0.1); sep.scaleSelf(0.9); ali.scaleSelf(0.1); tra.scaleSelf(0.5);
Frame 7
Agents of type one (yellow) start spreading into the frame from the four corners of the simulation; they are repelled by the trails of agents of type two (pink) and try to make their way into the centre of the box without intersecting the trails. Agents of type two (pink) start at random points along a central vertical zone and spread out into the box simulation, preventing agents of type one (yellow) from spreading into the central area of the simulation.
Frame 50
Frame 17
Frame 40
Frame 27
Frame 72
Frame 105
Single Agent Responding to Swarm The swarming agents in this code start at random points along a central vertical zone and spread out into the box simulation. They have the following values for their behavioural parameters:
coh.scaleSelf(0.01); sep.scaleSelf(0.9); ali.scaleSelf(0.1); wan.scaleSelf(0.3); The responding single agent (blue) has the following behavioural values:
sep.scaleSelf(10); ali.scaleSelf(0.1); tra.scaleSelf(5);
Frame 03
Frame 25
Frame 107
Frame 131
In this case, the single agent is repelled by the trails left by the swarming agents and travels inside the gaps of the box left by those trails.
Frame 48
Frame 76
Frame 91
Frame 162
Frame 190
Frame 201
Side Flocking 01 200 agents are initialized from one side of the environment. Type One (magenta) agents move towards each other as they move across the environment. Type 2 (red) agents are influenced by the movement of Type One agents, with a range of vision of 200, and traverse the environment following the trails of Type One. The magenta agents eventually begin swarming around in clumps, without ever really settling down due to their increasing wander.
Frame 14
Frame 50
Frame 165
Frame 220
if (type == 1){ coh.scaleSelf(0.3); sep.scaleSelf(0.4); wan.scaleSelf(0.2/(frameCount)); } if (type == 2){ sep.scaleSelf(0.5); tra.scaleSelf(0.7); }
Frame 113
Frame 76
Frame 392
Frame 314
Side Flocking 02 Two agent types are initialized on opposing sides of the environment. The range of vision of both agents is 30 pixels, and they each have the same stigmergy characteristics. coh.scaleSelf(0.03); sep.scaleSelf(0.4); ali.scaleSelf(0.1); wan.scaleSelf(0.1); tra.scaleSelf(0.25); Type One (magenta) agents begin swarming amongst themselves near where they are initialized, while Type Two (red) agents begin moving across the screen in small clumps. As the red agents approach the magenta, some are pulled into the large group travelling together while others travel through the dense trails left by the other agents. As these agents pass through, they take some of the magenta agents with them, reappearing on the other side of the environment due to the torus space.
Frame 23
Frame 55
Frame 154
Frame 168
Frame 87
Frame 102
Frame 134
Frame 188
Frame 237
Frame 55
Frame 237
Side Flocking 03 Like the previous iteration, two agent types begin on opposite sides. In this code, Type 1 agents (magenta) have a range of vision of 20 pixels, while Type 2 (red) have a range of vision of 50 pixels. if (type == 1) { coh.scaleSelf(0.03); sep.scaleSelf(0.4); tra.scaleSelf(0.25);
Frame 21
Frame 38
Frame 95
Frame 117
} else { coh.scaleSelf(0.03); sep.scaleSelf(0.4); ali.scaleSelf(0.1); wan.scaleSelf(0.1); tra.scaleSelf(0.25); } Both agent types begin moving towards each other, clumping together as they go due to cohesion. As they meet in the middle, the different agent types mix and begin to travel together. This creates a gradient of density in the trail population: as they travel together, the trails become very dense.
Frame 56
Frame 75
Frame 135
Frame 157
Frame 173
Frame 213
Frame 450
Frame 384
Frame 338
Frame 283
Frame 595
Frame 533
Side Flocking 02 - Milled Image In order to mill images from this code, .jpg files were imported into ArtCAM and depths were specified for each color. In this case, white was the deepest cut. The effect achieved is that where the trails were the densest in the digital simulation, they are milled the deepest into the foam. This image was milled on foam as well as clear acrylic to see how different materials demonstrate this depth change.
Image Milling: Blue Foam
Image Milling: Clear Acrylic
Swarming Agents Without Trails 01 The following code uses two types of agents. The first type has a lower rangeOfVis and a high cohesion rate, and hence forms defined blue circles which repel the pink agents; the pink agents are also high in cohesion but high in separation as well, and are therefore in continuous tension and movement. The parameters of this code are:
for (int i = 0; i < 1000; i++) {
  Vec3D p = new Vec3D(random(300, 900), random(300, 700), 0);
  Vec3D v = new Vec3D(random(-10, 10), random(-10, 10), 0);
The agents start from a central box and then spread over the whole simulation. The velocity has a wide range, between -10 and 10, which means an agent can keep moving in either direction.
if (type == 1) rangeOfVis = 20;
if (type == 2) rangeOfVis = 60;
if (type == 1) {
  coh.scaleSelf(0.2*frameCount);
  sep.scaleSelf(0.7*frameCount);
  ali.scaleSelf(0.8);
} else {
  coh.scaleSelf(0.01*frameCount);
  sep.scaleSelf(0.3*frameCount);
  ali.scaleSelf(0.8);
}
Frame 6
Frame 14
In this code, the idea of multiplying the variables by frameCount was used in order to see how the cohesion and the separation could increase the longer the agent lives.
Frame 121
Frame 32
Frame 63
Frame 144
Frame 169
Frame 192
Swarming Agents Without Trails 02 This is a continuation of the previous code, with the same parameters. The only change is that at frameCount 200, agent type 1 becomes agent type 2. The result looks like an explosion because of the different range of vision, separation, and cohesion values. Moreover, because the factors are multiplied by frameCount, type 1 acts differently once it changes to type 2, giving very interesting and dynamic simulations.
Frame 64
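The switch itself can be as simple as the fragment below; this is a hedged sketch of the rule described above, not the exact workshop code.
// Hedged sketch of the rule described above (not the exact workshop code):
// at frame 200 every agent of type 1 becomes type 2.
if ((frameCount == 200) && (type == 1)) {
  type = 2;
}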
Frame 265
Frame 98
Frame 346
Frame 246
Frame 210
Frame 479
Frame 548
Swarming Agents Without Trails 03 Instead of using frameCount, which is a global variable, this code uses the neighborList from the earlier codes. When an agent senses 25 neighbors it changes type. As a result, the agents don't all change at the same frame, and the rule becomes a more local one.
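A hedged sketch of this local rule is given below; it is illustrative only, since the exact threshold handling and the direction of the switch vary between the trials described here.
// Illustrative fragment only (not the exact workshop code): the agent switches
// type once it senses 25 neighbours; which type switches into which is what
// distinguishes trials 03 and 04.
if (neighborList.size() >= 25) {
  type = 2;
}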
Frame 10
Frame 19
Frame 108
Frame 134
Frame 26
Frame 65
Frame 73
Frame 364
Frame 482
Frame 612
Swarming Agents Without Trails 04 In this trial, from the same code series, the direction of the switch was changed, from type 2 to type 1. Hence, when a type 1 agent senses more than 25 neighbors it "eats" them and becomes bigger, giving a predator-like effect in the code.
Frame 16
Frame 59
Frame 478
Frame 478
Frame 220
Frame 296
Frame 367
Frame 976
Frame 1393
Frame 1899
Swarming Agents With Trails The same code was run again, this time with trails, in order to see the different patterns and track the agent movement. The pattern was very detailed and was later milled as an image. What was unique about this pattern is that, despite the huge number of trails, the difference between type 1 and type 2 remained very distinct.
Frame 25
Frame 225
Frame 192
Frame 274
Swarming Agents With Trails - Milled Image
Image Milling: Blue Foam
Agent Type Switching 01 Agents of type one (magenta) have the following behavioural parameter values: coh.scaleSelf(0.01); sep.scaleSelf(0.99); ali.scaleSelf(0.1); wan.scaleSelf(0.2); tra.scaleSelf(0.3); Agents of type two (blue) have a trail repulsion value of 10. Both agents leave trails: white and fading for agent type one; purple and permanent for agent type two.
1
2
5
6
The simulation starts with 300 agents of type one and one agent of type two. Agents of type one turn into type two if a certain set of rules is fulfilled:
if ((frameCount%30==0) && (type!=2) && (neighborList.size()>15)) {
  for (int i=0; i<frameCount; i++) { type=2; }
}
The result is a set of visible trails that are segmented due to the change in agent type.
3
4
7
8
9
10
13
14
11
12
15
16
Agent Type Switching 02 Agents of type one (magenta) have the following behavioural parameter values: coh.scaleSelf(0.01); sep.scaleSelf(0.99); ali.scaleSelf(0.1); wan.scaleSelf(0.2); tra.scaleSelf(0.3); Agents of type two (blue) have the following behavioural parameter values: coh.scaleSelf(0.5); ali.scaleSelf(0.5); tra.scaleSelf(10); Both agents leave trails, white corresponding to agent type one; purple corresponding to agent type two.
1
2
6
7
Agents switch types if a certain set of rules is fulfilled:
if ((frameCount%45==0) && (type!=2) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=2; }
}
if ((frameCount%90==0) && (type==2) && (neighborList.size()>5)) {
  for (int i=0; i<frameCount; i++) { type=1; }
}
The resultant trail is made by the two agent types simultaneously as they switch between types and change behaviour accordingly.
3
4
5
8
9
10
11
12
13
16
17
18
14 15
19
20
21
Agent Type Switching 02 - Milled Image
Image Milling: Blue Foam
Agent Type Switching 03 Agents of type one (magenta) have the following behavioural parameter values: coh.scaleSelf(0.01); sep.scaleSelf(0.99); ali.scaleSelf(0.1); tra.scaleSelf(0.3); Agents of type two (blue) have the following behavioural parameter values: coh.scaleSelf(0.4); sep.scaleSelf(1); ali.scaleSelf(0.5); tra.scaleSelf(20); Both agents leave trails, white corresponding to agent type one; purple corresponding to agent type two.
1
Agents switch types if a certain set of rules is fulfilled:
if ((frameCount%45==0) && (type!=2) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=2; }
}
if ((frameCount%45==0) && (type==2) && (neighborList.size()>5)) {
  for (int i=0; i<frameCount; i++) { type=1; }
}
The resultant trail is made by the two agent types simultaneously as they switch between types and change behaviour accordingly.
4
2
3
5
6
7
10
11
8
9
12
13
Agent Type Switching 04 Agents of type one (magenta) have the following behavioural parameter values: coh.scaleSelf(0.4); sep.scaleSelf(0.99); ali.scaleSelf(0.1); wan.scaleSelf(0.2); tra.scaleSelf(0.3); Agents of type two (blue) have the following behavioural parameter values: coh.scaleSelf(0.4); sep.scaleSelf(1); sep.scaleSelf(0.5); tra.scaleSelf(20);
1
2
5
6
Agents switch types if a certain set of rules is fulfilled:
if ((frameCount%45==0) && (type==1) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=2; }
}
if ((frameCount%45==0) && (type==2) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=1; }
}
Agent type one has a higher seek trail value which, together with the cohesion, drives the agents to form the ball-like centres.
3
4
7
8
10
9
14
13
11
12
15
16
Agent Type Switching 05 Agents of type one (magenta) have the following behavioural parameter values: coh.scaleSelf(0.05); sep.scaleSelf(0.99); ali.scaleSelf(0.1); wan.scaleSelf(0.2); tra.scaleSelf(0.3); Agents of type two (blue) have the following behavioural parameter values: coh.scaleSelf(0.9); ali.scaleSelf(0.9); tra.scaleSelf(0.9); Agents switch types if a certain set of rules is fulfilled:
if ((frameCount%45==0) && (type==1) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=2; }
}
if ((frameCount%90==0) && (type==2) && (neighborList.size()>10)) {
  for (int i=0; i<frameCount; i++) { type=1; }
}
The resulting trail is made by the two agent types simultaneously; because they have high cohesive attributes, they cover a large surface area, which led to further experimentation in Rhino + Grasshopper.
1
2
3
4
6
5
7
8
9
12
13
10
11
14
15
Image to Surface The resultant image was taken into Grasshopper to be converted into a 3D surface, which was baked into Rhino and later used as a model to mill from on ArtCAM.
Rhino + Grasshopper
Rhino + Grasshopper
Trials 3.0 Real-Time Milling
Single Agent Real-Time Milling Real-time milling began by experimenting with the previous "Single Agent Responding to Swarm" code (below). The single agent was milled onto blue foam in real time, and the final image of the swarm was captured and then milled on top of the real-time mill to create a comprehensive reading of the code in the physical model. To distinguish the two, the real-time milling of the single agent was cut deeper than the image milling. Two iterations of this are shown here.
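Conceptually, each frame the agent's current position is turned into a short movement command for the mill. The sketch below is schematic only: the G-code-style command format, the stock dimensions, and the sendToMill() helper are assumptions, not the actual driver used in the workshop.
// Schematic only: stockWidthMM, stockHeightMM and sendToMill() are placeholders,
// not the actual interface used to drive the Roland MDX in the workshop.
void millAgent(Agent a, float cutDepth) {
  // scale screen coordinates (pixels) to machine coordinates (mm)
  float x = map(a.p.x, 0, width, 0, stockWidthMM);
  float y = map(a.p.y, 0, height, 0, stockHeightMM);
  String move = "G1 X" + nf(x, 0, 2) + " Y" + nf(y, 0, 2) + " Z" + nf(cutDepth, 0, 2);
  sendToMill(move);
}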
Real Time- Single Agent with Image Overlay
Similar to the previous experimentation, the single agent here was milled in real-time followed by the milling-from-image of the multiple agents that affected its path. The code used for this model is the previous "Top and Bottom 01" code (adjacent). This overlay of different depths allows the reading of the single agent in the complex multi-agent environment from the physical model.
Multiple Agent Real-Time Milling The next phase of experimentation with real-time milling involved multiple-agent codes. The process here differs slightly, as there is only one drill bit, capable of milling one agent trail at a time. This means that a series of lines of G-code is produced and sent to the machine, which has a memory capacity of 60,000 lines. This buffering delay renders the process semi-real-time milling. The code explored here is the previous "Agent Type Switching 02" code. The two agent types were given different textures and depths to be easily identified in the physical model. These experimentations are shown here.
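The segmenting can be pictured as in the fragment below: moves accumulate in a buffer and are flushed to the machine in blocks that stay under its 60,000-line memory limit. This is a hedged sketch; flushToMill() is a placeholder for however a block was actually sent in the workshop.
// Hedged sketch of the semi-real-time segmenting (flushToMill() is a placeholder).
int maxLines = 60000;                              // machine memory limit
ArrayList<String> moveBuffer = new ArrayList<String>();

void queueMove(String move) {
  moveBuffer.add(move);
  if (moveBuffer.size() >= maxLines) {
    flushToMill(moveBuffer);   // send this block to the machine and wait for it
    moveBuffer.clear();
  }
}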
Problematic: The agents in the simulation were moving inside a torus space, which was not correctly translated to the machine. The code should have included a command to lift the drill bit once the agent hit the edge of the space, move at the safe height (zmax) to the agent's next position, and then plunge back into the material at this new position. As a result, the drill bit dragged through the material in the orthogonal lines shown, chasing the agent from one end of the torus space to the other.
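A hedged sketch of the missing edge handling is shown below: when the agent wraps across the torus edge, the bit should retract to a safe height, travel to the agent's new position, and plunge back in. zSafe, zCut, and sendToMill() are placeholders, not the workshop code.
// Hedged sketch of the missing edge handling (zSafe, zCut and sendToMill() are placeholders).
void moveWithWrapCheck(Vec3D prev, Vec3D next, float zSafe, float zCut) {
  // a jump of more than half the canvas means the agent wrapped across the torus edge
  boolean wrapped = abs(next.x - prev.x) > width / 2.0 || abs(next.y - prev.y) > height / 2.0;
  if (wrapped) {
    sendToMill("G1 Z" + zSafe);                          // lift the bit clear of the material
    sendToMill("G0 X" + next.x + " Y" + next.y);         // travel at safe height
    sendToMill("G1 Z" + zCut);                           // plunge back in at the new position
  } else {
    sendToMill("G1 X" + next.x + " Y" + next.y + " Z" + zCut);
  }
}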
These iterations stem from the same series of "Agent Type Switching" codes discussed earlier, except that the agents move inside a "box" environment, which means they bounce off the edge of the canvas rather than continuing through a torus space. The two agent types were given different textures and depths to be easily readable from the physical model.
The code explored for this series is the previous "Side Flocking 03" code. The two agent types were given different textures and depths to be easily identified in the physical model. These experimentations were run using the 6mm V-shape drill bit, which gave very clear results when the simulation trail was dotted, but rather messy results when the trail was a line. An interesting reading emerged, however, from the line command: the trail in the physical model became "cleaner" and clearer the more times the drill bit went over it, removing more material each pass.
The next iteration of this code aimed at making the cut into the material deeper the denser the trail was in the simulation. This meant that the depth factor changed over time the longer the simulation ran: the more agents pass over a certain trail in the simulation (the denser the trail), the deeper the cut in the material. This gave an interesting result which permitted an integrated reading of time and agent position from the physical model.
if ((started==true) && (cutCounter%cutFrames==0)) {
  ArrayList tp = a.myTrailPos;
  ArrayList densityValues = new ArrayList();
  // for every trail point of this agent, count how many other trail points lie within 10 px
  for (int j = 0; j < tp.size(); j++) {
    trail t = (trail) tp.get(j);
    t.update();
    Vec3D tPos = t.pos;
    int trailCount = 0;
    for (int k = 0; k < trailPop.size(); k++) {
      trail tOther = (trail) trailPop.get(k);
      Vec3D tOtherPos = tOther.pos;
      float tDist = tPos.distanceTo(tOtherPos);
      if (tDist < 10) {
        trailCount = trailCount + 1;
      }
    }
    densityValues.add(trailCount);  // local density of the trail at this point
  }
  println(densityValues);
}
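The computed densityValues would then need to be turned into cutting depths. A hedged continuation might look like the fragment below; the depth range and the way zCut is fed into the machine move are assumptions, not the workshop's actual values.
// Hedged continuation (not the workshop's exact code): map trail density to a cutting depth,
// so the densest trails are cut the deepest. The depth range here is a placeholder.
for (int j = 0; j < densityValues.size(); j++) {
  int d = (Integer) densityValues.get(j);
  float zCut = map(constrain(d, 0, 50), 0, 50, -20, -200);
  // zCut would then be written into the Z value of the move sent for this trail point
}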
Real-Time Milling on Transparent Acrylic
Trials 4.0 ARDUINO FEEDBACK LOOP WITH LDR
THE CIRCUIT:
ARDUINO: In order to create a feedback loop between the CNC machine and the real-time Processing code, an Arduino with an LDR sensor was used to bridge the information between the CNC machine and the Processing script. The items used were mainly the Arduino Uno board and the breadboard that forms the electrical circuit to the sensor.
Arduino Uno Board
https://www.arduino.cc/new_home/assets/illu-arduino-UNO.png
Why LDR?
Breadboard used
https://www.adafruit.com/images/970x728/64-02.jpg
As shown in the adjacent diagram, the photosensor is connected to the Arduino Uno board through a resistor, forming a voltage divider so the board can read the changing voltage. The wires connect the sensor to power and ground. https://blog.udemy.com/arduino-ldr/
The Arduino Code:
Light Sensor Used (LDR)
The main aim of introducing the sensor is to make the drill bit respond to changes in light intensity. The simple LDR reads the difference in light intensity inside the CNC machine, which in turn informs the drill bit to cut deeper or shallower according to the script.
The Arduino code was used to program the sensor to read light intensity as values in the range 0 to 1023.
REAL-TIME FEEDBACK LOOP The setup:
- the Processing code controls the path of the agent that is being milled in real time
- a pre-recorded video of the multiple agents that affect the single agent's path is projected onto the material being cut
- an LDR sensor is attached to the drill bit in the CNC machine; it reads light values and feeds them back into the Processing code
- the Processing code is scripted to change the depth of the cut according to the range of light values read by the sensor: when the light value is high, the drill bit cuts deeper into the material; when it is low, the drill bit only lightly scratches the surface
This feedback loop allows the light intensity to influence the machine's decisions and hence provides new results with each iteration.
Arduino and Breadboard
LDR
Laptop
Projector
reading light intensity
The Feedback Loop Set-up:
The following image shows the pre-recorded video of the simulation which was projected during the milling process:
As seen below, the drill bit used is connected to the LDR sensor and is in the process of cutting a shallow part due to a low light reading value.
Trials 4.1: All the trials were carried out with the single-agent code to allow an instant response from the sensor to the code. The code was run three times on three different pieces and gave a different result each time, due to the random values given to the agent's velocity and force factors as well as its random starting position. Trial 4.1.1
Trial 4.1.2
Trial 4.1.3
The agent moved along the same trail path more than once, removing the previous trails; as a result, the change in depth was not clear enough.
On Plexiglas the trails started to show different depths, which is clearly visible in the image.
This trial also shows varying depth according to the light intensity of the projection.
if (testValue > 400) { zValue = -200; }
if (testValue <= 400) { zValue = -50; }
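The testValue used above comes from the Arduino over a serial link. A minimal sketch of the Processing side of that link is shown below, assuming the Arduino simply prints the LDR reading (0 to 1023) once per loop; the port index and baud rate are assumptions, not the workshop's actual settings.
// Hedged sketch of the Processing side of the serial link (port and baud rate are assumptions).
import processing.serial.*;

Serial arduino;
int testValue = 0;

void setup() {
  // Serial.list() shows the available ports; index 0 is only a guess
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n');
}

void draw() {
  // the depth rule shown above would read testValue here
}

void serialEvent(Serial s) {
  String line = s.readStringUntil('\n');
  if (line != null) {
    testValue = int(trim(line));   // light reading used by the depth rule above
  }
}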
Trials 4.2: In trials 4.2, the actual simulation was projected rather than a pre-recorded video. This meant that the single agent being milled received both its position and depth inputs from the code simultaneously. The single agent was programmed with one of two behaviours. The first behaviour, shown below, makes the agent repelled by the projected light: when the agent passes over a light trail, thereby receiving high light values, the cut gets deeper.
Agent in simulation
Agent in milled piece.
The second behaviour encourages the agent to follow the light trail instead, thereby tracking the motion of the light trails over time. Clear depth variations are observed in this piece.
Agent in simulation
Agent in milled piece.
Wrap Up: Several parameters were explored throughout the workflow of this study:
- single agent behaviour
- multiple agent behaviour
- material experimentation
- drill bit variations
- behavioural change over time
- material manifestation of change over time
- real-time correspondence in a feedback loop
- Arduino integration
- LDR exploration
These experiments produced distinct results that proved informative and useful for further exploration. However, the limitations discussed in the introduction regarding the capabilities of the specific CNC machine used during the workshop remain restrictive for the actualisation of real-time milling of multiple agents, which would be the next possible step in this investigation. For this reason, real-time milling in this experiment was only truly possible when milling one single agent. Further explorations could include:
- investigation of other sensors affecting the single agent behaviour (sonar depth or others)
- real-time milling of multiple agents integrated with a live Arduino feedback loop
- integration of the various characteristics explored in this study:
. depth changing over time
. trails milled with different characteristics / using different drill bits
. agent characteristics/behaviours changing over time