Processing: Creative Coding and Computational Art
Ira Greenberg
14 3D RENDERING IN JAVA MODE
Last chapter, you generated simple 3D primitives and looked at some basic 3D modeling theory. You didn't, however, look at how the 3D data is actually mapped to the 2D screen, nor how lighting is implemented to describe volume. In this final chapter, I'll build upon many of the concepts introduced last chapter as well as throughout the book. I'll continue the discussion begun in Chapter 8 on utilizing OOP in Processing, including working in Java mode. In developing a 3D vector class, I'll revisit and expand upon motion concepts introduced in Chapter 11. And of course I'll be introducing a bunch of cool 3D concepts, building upon what was covered in Chapter 13.
P3D refresher

P3D, Processing's trusty software-based 3D renderer, took care of all the rendering for us last chapter. I specified "software-based" (as opposed to hardware-based) because P3D handles the 3D math crunching (and there is a lot of it) directly in software. If you're interested, and not faint of heart, you can see the actual P3D rendering code here: http://dev.processing.org/source/index.cgi/trunk/processing/core/src/processing/core/PGraphics3D.java?view=markup.

In addition to using software-based rendering, it is possible to use libraries that communicate directly with your computer's hardware to handle some of the number crunching. As you might suspect, hardware-based rendering is more robust than software-based approaches such as P3D. But don't fret, P3D is no slouch. And, Processing also comes equipped with a hardware-based 3D rendering option, called OPENGL. OPENGL is a popular 3D library that directly communicates with your computer's hardware for heavy number lifting. You'll learn how to use OPENGL in Processing a little later in the chapter (it's actually very simple).

Hopefully, you still remember how to invoke Processing's (software-based) 3D renderer P3D; it just needs to be included as an argument when you call Processing's size(w, h, P3D) function. For example, the following sketch creates three rings composed of spheres rotating in 3D space—all made possible because I simply included the P3D argument (shown in Figure 14-1):

// Interlocking Sphere Rings
float radius = 120.0;
int segments = 30;
float sphereSize = 7.0;

void setup(){
  size(400, 400, P3D);
  noStroke();
  sphereDetail(8);
}

void draw(){
  background(0);
  lights();
  translate(width/2, height/2);
  rotateY(frameCount*PI/50);
  rotateX(frameCount*PI/46);
  float x, y, z;
  for (int i=0; i<3; i++){
    float ang = 0;
    for (int j=0; j<segments; j++){
      pushMatrix();
      if (i==0){
        x = cos(ang)*radius;
        y = sin(ang)*radius;
        z = 0;
        fill(abs(y), abs(x), abs(y-x));
      }
      else if (i==1){
        x = cos(ang)*radius;
        z = sin(ang)*radius;
        y = 0;
        fill(abs(x-z), abs(z), abs(z));
      }
      else {
        y = cos(ang)*radius;
        z = sin(ang)*radius;
        x = 0;
        fill(abs(y-z), abs(z-y), abs(z+y));
      }
      translate(x, y, z);
      sphere(sphereSize);
      popMatrix();
      ang += TWO_PI/segments;
    }
  }
}
Figure 14-1. Interlocking Sphere Rings sketch
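Since OPENGL just came up, here is a quick preview of the switch to hardware rendering. This is a minimal skeleton of my own, not one of the book's sketches; it assumes the OpenGL library that ships with Processing, and the full treatment comes later in the chapter.

// Hardware-rendered skeleton (illustrative; OPENGL is covered in detail later)
import processing.opengl.*;

void setup(){
  size(400, 400, OPENGL); // swap P3D for OPENGL; the rest of a sketch stays the same
  noStroke();
}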
If you didn't bother looking at the scary P3D source code from the link I included earlier, you might be thinking this 3D stuff is a piece of cake; it just takes one extra argument in size(w, h, P3D) and varoom! Well, it almost is that easy in Processing, at least to do relatively simple stuff. However, 3D is so deeply encapsulated in Processing that it's hard to grasp what's actually going on beneath the surface, which is definitely not simple. Although you can choose to work at this very high level, I think you'll benefit much more by looking a little deeper and getting some sense of what actually happens when you include the P3D argument. This recommendation is not (only) an attempt on my part to torture you—would I really be doing my job if I didn't cause you some brain pain each chapter? Understanding how 3D is actually implemented will allow you to most fully tap into its (and your own) aesthetic and expressive potential. Of course, you'll also be able to impress prospective employers and cocktail party attendees with some impressive-sounding new vocabulary—"isometric orthogonal projections," anyone? First, though, I'd like to develop a sketch in Java mode to demonstrate how Processing is integrated with its powerful, industrial-strength partner language, Java.
Java mode, Processing's final frontier

In many ways, whether you realize it or not, you already know a lot of Java. The object-oriented examples earlier in the book, for instance, were essentially a slightly simplified form of Java. Even Processing's procedural front end, which allows us to make function calls instead of always needing to deal directly with objects, adheres to Java conventions. Java is a vast language, a gazillion times larger than Processing. However, much of the Java API deals with stuff most of us creative coders will not need to worry about. In addition, Java is well organized and, for the most part, consistent; so as you learn how to do something using one class in Java, you're actually learning a methodology for utilizing many classes—which, again, you've already been doing to some degree in Processing.
Procedural bird

To begin, I'll create a 3D procedural sketch in Processing and then convert it to an object-oriented structure, still working in continuous mode; then, finally, I'll convert the sketch to more standard Java. The initial procedural sketch, shown in Figure 14-2, is of a single birdlike 3D object flying in the great blue yonder (all right, it's just a light blue background fill color):

// Simple 3D Bird
float ang = 0, ang2 = 0, ang3 = 0, ang4 = 0;
float px = 0, py = 0, pz = 0;
float flapSpeed = .2;

void setup(){
  size(400, 400, P3D);
  noStroke();
}

void draw(){
  background(170, 130, 255);
  lights();
  fill(200, 100, 10);
  //flight
  px = sin(radians(ang3))*170;
  py = cos(radians(ang3))*300;
  pz = sin(radians(ang4))*500;
  translate(width/2+px, height/2+py, -500+pz);
  rotateX(sin(radians(ang2))*120);
  rotateY(sin(radians(ang2))*50);
  rotateZ(sin(radians(ang2))*65);
  //body
  box(20, 100, 20);
  fill(200, 200, 10);
  //left wing
  pushMatrix();
  rotateY(sin(radians(ang))*-20);
  rect(-75, -50, 75, 100);
  popMatrix();
  //right wing
  pushMatrix();
  rotateY(sin(radians(ang))*20);
  rect(0, -50, 75, 100);
  popMatrix();
  //wing flap
  ang += flapSpeed;
  if (ang > 3){
    flapSpeed *= -1;
  }
  if (ang < -3){
    flapSpeed *= -1;
  }
  //increment angles
  ang2 += .01;
  ang3 += 2;
  ang4 += .75;
}
Figure 14-2. Simple 3D Bird sketch
Make sure you run the sketch before continuing on. There's nothing new here, so I won't bore you with a long elucidation. I'm just using a couple of trig expressions to move the individual wings and also the entire bird. Nesting the wing code between the pushMatrix() and popMatrix() commands allows me to transform each wing separately. Again, you've looked at all this stuff before. The bird object is 3D, so I included the P3D argument in the size(400, 400, P3D) call. I also used Processing's lights() function, which gives me a basic default lighting setup. We'll look more at lighting later in the chapter. One reason you might want to restructure this code from a procedural to an object-oriented format is to encapsulate the bird into a class. The class structure will make it easier to eventually reuse the bird code—say, to build something like a flock of birds, which we'll do shortly.
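If the push/pop pairing still feels abstract, this tiny stand-alone sketch (mine, not the book's) shows the isolation at work: the rotation applied between pushMatrix() and popMatrix() affects only the first rectangle, not the second.

// pushMatrix()/popMatrix() isolation (illustrative sketch)
void setup(){
  size(400, 400, P3D);
}

void draw(){
  background(255);
  translate(width/2, height/2);
  pushMatrix();
  rotateY(PI/4);           // only the first rect is rotated
  rect(-150, -20, 100, 40);
  popMatrix();
  rect(50, -20, 100, 40);  // unaffected by the rotation above
}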
Creating a Bird class

We're going to use Processing's tabs feature to create our Bird class. Create a new sketch, and add a second tab to the sketch by pressing the tab arrow on the right side of the Processing window. Select New Tab, and name it Bird. You'll be treating the class, for now, as a PDE, so you don't need to add a suffix to the class name. (Remember, the .pde suffix will automatically be appended to the class name.) Next, you'll enter the Bird class code into the tab. One minor warning: At first glance, the following Bird class will look considerably more complicated than the procedural example, but when you eventually build your bird flock, the benefit of the OOP approach should become clearer.
class Bird{
  // properties
  float offsetX, offsetY, offsetZ;
  float w, h;
  int bodyFill;
  int wingFill;
  float ang = 0, ang2 = 0, ang3 = 0, ang4 = 0;
  float radiusX = 120, radiusY = 200, radiusZ = 700;
  float rotX = 15, rotY = 10, rotZ = 5;
  float flapSpeed = .4;
  float rotSpeed = .1;

  // constructors
  Bird(){
    this(0, 0, 0, 60, 80);
  }

  Bird(float offsetX, float offsetY, float offsetZ, float w, float h){
    this.offsetX = offsetX;
    this.offsetY = offsetY;
    this.offsetZ = offsetZ;
    this.h = h;
    this.w = w;
    bodyFill = color(200, 100, 10);
    wingFill = color(200, 200, 20);
  }

  // methods
  void setColor(int bodyFill, int wingFill){
    this.bodyFill = bodyFill;
    this.wingFill = wingFill;
  }

  void setFlight(float radiusX, float radiusY, float radiusZ, float rotX, float rotY, float rotZ){
    this.radiusX = radiusX;
    this.radiusY = radiusY;
    this.radiusZ = radiusZ;
    this.rotX = rotX;
    this.rotY = rotY;
    this.rotZ = rotZ;
  }

  void setWingSpeed(float flapSpeed){
    this.flapSpeed = flapSpeed;
  }

  void setRotSpeed(float rotSpeed){
    this.rotSpeed = rotSpeed;
  }

  void fly(){
    pushMatrix();
    float px, py, pz;
    fill(bodyFill);
    //flight
    px = sin(radians(ang3))*radiusX;
    py = cos(radians(ang3))*radiusY;
    pz = sin(radians(ang4))*radiusZ;
    translate(width/2+offsetX+px, height/2+offsetY+py, -500+offsetZ+pz);
    rotateX(sin(radians(ang2))*rotX);
    rotateY(sin(radians(ang2))*rotY);
    rotateZ(sin(radians(ang2))*rotZ);
    //body
    box(w/5, h, w/5);
    fill(wingFill);
    //left wing
    pushMatrix();
    rotateY(sin(radians(ang))*20);
    rect(0, -h/2, w, h);
    popMatrix();
    //right wing
    pushMatrix();
    rotateY(sin(radians(ang))*-20);
    rect(-w, -h/2, w, h);
    popMatrix();
    //wing flap
    ang += flapSpeed;
    if (ang > 3){
      flapSpeed *= -1;
    }
    if (ang < -3){
      flapSpeed *= -1;
    }
    //ang's run trig functions
    ang2 += rotSpeed;
    ang3 += 1.25;
    ang4 += .55;
    popMatrix();
  }
}
I designed the Bird class to enable you to create a flock of birds. To accomplish this, I created a number of methods: setColor(), setFlight(), setWingSpeed(), and setRotSpeed(), as well as a slew of properties to allow each bird to be somewhat customized. If you only wanted to generate a single bird, you could have kept the class much leaner.

It might help to do a quick review of the class code. The basic structure of a class is the class declaration, global property declarations, constructors, and methods. The Bird class follows this same structure. Below the class declaration are the global properties (global in scope), which can be seen from anywhere within the class. These properties will be directly accessible by using the syntax objectname.propertyname. You'll notice some of the properties are assigned initial values, while others are not. Setting some initial property values is helpful for creating a default state of an object.

Below the properties are the two Bird constructors. You'll remember a constructor is invoked when the new keyword is used to create an object. The first constructor, without any parameters, actually calls the next constructor, passing in some default argument values. This type of constructor chaining is perfectly legal and common in OOP. One benefit is being able to call a simple default constructor without any arguments. Another is being able to put object initialization code in just one of the constructors, instead of repeating it in each of them; the constructors without the initialization code then call the one that does the initializing. This chaining approach provides flexibility, allowing multiple constructors to accept different argument lists, while still ensuring the object is initialized fully and efficiently.

Below the constructors are five methods. The first four methods allow you to customize your bird object. If you're happy using the default settings, you don't need to invoke these. The last method, fly(), is what sets your bird a-soaring. The fly() method is very similar to the code in the draw() function in the procedural bird example. Notice that I nested all the fly() method code [including some inner pushMatrix() and popMatrix() calls] between an outer set of pushMatrix() and popMatrix() calls. I did this to allow the creation of an eventual flock. Since I'm calling Processing's translate() and rotate() functions (which affect the entire contents of the Processing window, and are not connected to any specific object) from within the fly() method, I needed to make sure that the multiple calls to translate() and rotate() didn't keep accumulating; this would have progressively sent the birds off the screen. To see an example of this, comment out the outer pushMatrix() and popMatrix() calls when you eventually get to the flocking example (after you've had a chance to run it the correct way, of course).

To try using the Bird class, simply enter the following code in the main PDE tab (the leftmost tab):

// Default OOP Bird
Bird b;

void setup(){
  size(400, 400, P3D);
  noStroke();
  b = new Bird();
}

void draw(){
  background(150, 120, 255);
  lights();
  b.fly();
}

The class encapsulates the Bird code, making it really simple to create a single flying bird based on default values. You can also customize the bird object by passing in some arguments when you instantiate, as well as invoking some of the other methods in the class. Here's a wackier version of our bird (shown in Figure 14-3):

// Custom OOP Bird I
Bird b;

void setup(){
  size(400, 400, P3D);
  noStroke();
  b = new Bird(0, 0, -1500, 600, 50);
  b.setColor(color(200, 20, 50), color(10, 255, 50, 170));
}

void draw(){
  background(150, 120, 255);
  lights();
  b.setFlight(100, 100, 2200, 5, 3, 20);
  b.setWingSpeed(.6);
  b.setRotSpeed(.5);
  b.fly();
}
Figure 14-3. Custom OOP Bird I sketch
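Before scaling all the way up to a flock, it may help to see the reuse argument with just two birds. This quick sketch is my own illustration, using only the Bird methods defined above; it isn't an example from the book.

// Two customized birds (illustrative sketch)
Bird b1, b2;

void setup(){
  size(400, 400, P3D);
  noStroke();
  b1 = new Bird();
  b2 = new Bird(150, -100, -800, 120, 40);
  b2.setColor(color(30, 90, 220), color(240, 240, 60));
  b2.setWingSpeed(.8);
}

void draw(){
  background(150, 120, 255);
  lights();
  b1.fly();
  b2.fly();
}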
Generating a flock

Finally, you can generate a whole flock using lots of random values. The general principle here is very similar to code we looked at much earlier in the book, when you generated lots of moving objects. Instead of using a single Bird object, you'll use an array of Bird objects. Also, since you'll be generating random values for most of the arguments passed into both the constructor and the methods, you'll use arrays for all of those values as well, which you need to populate up in the setup() function. If the arrays were instead filled down in draw(), new values would be generated every frame, making a twitching avian mess. To run this next example, shown in Figure 14-4, the Bird class code should remain the same, in its separate tab. The following code should be put in the leftmost main tab, replacing any previous code:

// Crazy Flocking 3D Birds
// flock array
int birdCount = 300;
Bird[] birds = new Bird[birdCount];
float[] x = new float[birdCount];
float[] y = new float[birdCount];
float[] z = new float[birdCount];
float[] rx = new float[birdCount];
float[] ry = new float[birdCount];
float[] rz = new float[birdCount];
float[] spd = new float[birdCount];
float[] rot = new float[birdCount];

void setup(){
  size(400, 400, P3D);
  noStroke();
  //initialize arrays with random values
  for (int i=0; i<birdCount; i++){
    birds[i] = new Bird(random(-300, 300), random(-300, 300),
      random(-500, -2500), random(5, 30), random(5, 30));
    birds[i].setColor(color(random(255), random(255), random(255)),
      color(random(255), random(255), random(255)));
    x[i] = random(20, 340);
    y[i] = random(30, 350);
    z[i] = random(1000, 4800);
    rx[i] = random(-160, 160);
    ry[i] = random(-55, 55);
    rz[i] = random(-20, 20);
    spd[i] = random(.1, 3.75);
    rot[i] = random(.025, .15);
  }
}

void draw(){
  background(150, 120, 255);
  lights();
  for (int i=0; i<birdCount; i++){
    birds[i].setFlight(x[i], y[i], z[i], rx[i], ry[i], rz[i]);
    birds[i].setWingSpeed(spd[i]);
    birds[i].setRotSpeed(rot[i]);
    birds[i].fly();
  }
}
Figure 14-4. Crazy Flocking 3D Birds sketch
Be sure to run this last example and see the crazy flock. Although a lot of arrays are necessary, reusing your Bird class allows you to generate a fairly complex program with a minimum amount of code; this is the cornerstone of OOP. You could take this one step further and encapsulate the flock code itself into a Flock class, sketched below. This would allow users to generate an entire flock with a simple instantiation, such as new Flock(). Of course, the flock could also be further customized, with the addition of more Flock properties and methods. It would also be possible to create more specific bird classes (Wrens, Bluejays, Robins, and so on), each subclass extending (inheriting from) the core Bird class. Hopefully, you see the power of OOP.
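Here is one possible shape for that Flock class, written as a quick illustration against the Bird class above; it is my sketch, not code from the book, and the random ranges simply echo the flock example. Because each bird stores its own flight settings, the parallel arrays disappear.

// A possible Flock wrapper (illustrative only)
class Flock{
  Bird[] birds;

  Flock(int birdCount){
    birds = new Bird[birdCount];
    for (int i=0; i<birdCount; i++){
      birds[i] = new Bird(random(-300, 300), random(-300, 300),
        random(-500, -2500), random(5, 30), random(5, 30));
      birds[i].setColor(color(random(255), random(255), random(255)),
        color(random(255), random(255), random(255)));
      birds[i].setFlight(random(20, 340), random(30, 350), random(1000, 4800),
        random(-160, 160), random(-55, 55), random(-20, 20));
      birds[i].setWingSpeed(random(.1, 3.75));
      birds[i].setRotSpeed(random(.025, .15));
    }
  }

  void fly(){
    for (int i=0; i<birds.length; i++){
      birds[i].fly();
    }
  }
}

With this in its own tab, draw() shrinks to a single flock.fly() call. Note that the chapter later uses the name Flock for the Java-mode sketch itself, so treat the class name here as purely illustrative.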
Soaring into Java

The last thing I'll demonstrate with the birds before moving on is how to convert the flock sketch into more standard Java. Again, most of the code you've been writing is already closely related to Java, so this will be easier than you may think. For example, the Processing functions you've been using throughout the book are connected to methods in Java classes: the size() and background() functions, as just two examples, refer to the same-named methods in Processing's base PApplet class. You can view the PApplet Java source code here: http://dev.processing.org/source/index.cgi/trunk/processing/core/src/processing/core/PApplet.java?view=markup.

The Processing developers created this procedural front end, as I discussed way back in the beginning of the book, to make it simpler for beginning coders to get started; it's obviously easier for a newbie coder working in basic mode to just call a prebuilt function than to create a custom function or worry about instantiating an object to invoke a method. However, this ease of use also has certain limitations, especially when using Processing for more elaborate and complex projects. Processing's continuous mode, incorporating the use of user-created functions and classes, pushes well past the limitations of basic mode, and Java mode pushes this flexibility even further, allowing experienced coders to integrate the vast Java language into Processing, as well as Processing into Java-based projects, including even using the Processing core classes outside of the Processing environment.
From continuous mode to Java mode

A question you may be asking at this point is, "If I've been creating custom Java-esque classes in Processing, haven't I already been working in Java mode?" Not technically, but, as you'll see shortly, you're most of the way there. The classes you've been creating in Processing thus far, either directly in the main tab or in separate tabs, are technically called inner classes (in Java speak). Inner classes are just classes that live within another class. Working in continuous mode, any custom classes you create are treated as inner classes within the main sketch, which gets converted to a single Java file when you run or export the sketch. In Java mode, by comparison, the classes you create in separate tabs will remain separate classes—not inner classes. This subtle difference as to where the classes reside has real implications, which you'll learn about shortly.

Specifying Java mode mostly just comes down to adding a few extra keywords to what you've already been doing, and being a little more explicit in your code with regard to following certain Java requirements. For example, to specify one of your custom classes as a real external Java class instead of an inner class, the class needs to live in a separate tab, with the tab name identical to the class name (including case) and with the suffix .java appended to it. Remember, by default, classes created in external tabs, unless explicitly appended with the .java suffix, are given a .pde suffix and again converted into inner classes. I'll be going over these syntactical issues throughout the chapter.

One of the interesting benefits of working in Java in Processing, as I mentioned earlier, is that you can eventually use your custom Java classes, including the core Processing classes, outside of the Processing environment—something you very well may want to do as you progress as a coder (I provide an example of how to do this in Appendix C, available online). Thus, as your Processing classes could eventually find their way into a Java environment, it is important to adhere to good Java standards as you create your custom classes in Processing.
Without further ado, here's our Bird class converted to more standard Java. I've included some comments in the code, and of course I'll provide a more detailed explanation to follow.

/* You need to import the PApplet class so the Bird class has
   access to Processing function calls - really PApplet method calls */
import processing.core.PApplet;

public class Bird{

  /* properties
     Properties are usually declared private in Java, using the
     "private" modifier. Public accessor and mutator (getter/setter)
     methods are then created for each private property, which are
     used to access/change the properties. I've just added
     getOffsetX() and setOffsetX() as an example. */
  private PApplet p;
  private float offsetX, offsetY, offsetZ;
  private float w, h;
  private int bodyFill;
  private int wingFill;
  private float ang = 0, ang2 = 0, ang3 = 0, ang4 = 0;
  private float radiusX = 120, radiusY = 200, radiusZ = 700;
  private float rotX = 15, rotY = 10, rotZ = 5;
  private float flapSpeed = .4f;
  private float rotSpeed = .1f;

  // constructors
  public Bird(PApplet p){
    this(p, 0, 0, 0, 60, 80);
  }

  public Bird(PApplet p, float offsetX, float offsetY, float offsetZ, float w, float h){
    this.p = p;
    this.offsetX = offsetX;
    this.offsetY = offsetY;
    this.offsetZ = offsetZ;
    this.h = h;
    this.w = w;
    bodyFill = p.color(200, 100, 10);
    wingFill = p.color(200, 200, 20);
  }

  // example getter/setter methods
  public void setOffsetX(float offsetX){
    this.offsetX = offsetX;
  }

  public float getOffsetX(){
    return offsetX;
  }

  // methods
  public void setColor(int bodyFill, int wingFill){
    this.bodyFill = bodyFill;
    this.wingFill = wingFill;
  }

  public void setFlight(float radiusX, float radiusY, float radiusZ, float rotX, float rotY, float rotZ){
    this.radiusX = radiusX;
    this.radiusY = radiusY;
    this.radiusZ = radiusZ;
    this.rotX = rotX;
    this.rotY = rotY;
    this.rotZ = rotZ;
  }

  public void setWingSpeed(float flapSpeed){
    this.flapSpeed = flapSpeed;
  }

  public void setRotSpeed(float rotSpeed){
    this.rotSpeed = rotSpeed;
  }

  public void fly(){
    p.pushMatrix();
    float px, py, pz;
    p.fill(bodyFill);
    //flight
    px = p.sin(p.radians(ang3))*radiusX;
    py = p.cos(p.radians(ang3))*radiusY;
    pz = p.sin(p.radians(ang4))*radiusZ;
    p.translate(p.width/2+offsetX+px, p.height/2+offsetY+py, -500+offsetZ+pz);
    p.rotateX(p.sin(p.radians(ang2))*rotX);
    p.rotateY(p.sin(p.radians(ang2))*rotY);
    p.rotateZ(p.sin(p.radians(ang2))*rotZ);
    //body
    p.box(w/5, h, w/5);
    p.fill(wingFill);
    //left wing
    p.pushMatrix();
    p.rotateY(p.sin(p.radians(ang))*20);
    p.rect(0, -h/2, w, h);
    p.popMatrix();
    //right wing
    p.pushMatrix();
    p.rotateY(p.sin(p.radians(ang))*-20);
    p.rect(-w, -h/2, w, h);
    p.popMatrix();
    //wing flap
    ang += flapSpeed;
    if (ang > 3){
      flapSpeed *= -1;
    }
    if (ang < -3){
      flapSpeed *= -1;
    }
    //ang's run trig functions
    ang2 += rotSpeed;
    ang3 += .7;
    ang4 += .55;
    p.popMatrix();
  }
}

When creating a new class in Java mode, the first thing you'll want to do is create and properly name a new tab. Again, you do this by using the tabs arrow on the right side of the Processing window. When you append the .java ending to the tab name, Processing will treat the class as an external, stand-alone Java class file. As I mentioned a page or so back, class files that you don't name with a suffix are treated as inner classes, regardless of whether the classes are created within the main tab or a separate tab. Thus, when you're not working in Java mode, the separate tabs are really just an organizational tool. This is not the case when you work in Java mode, where the separate .java files do remain separate from the main PDE code. To confirm all this, try exporting a sketch that uses both .pde and .java custom classes. When you export (File ➤ Export), the directory where the sketch applet was created will open. Within the directory, you should see at least one Java file, with the name of your sketch and the .java suffix, as well as any additional classes you appended with the .java suffix. Classes you created without specifying a suffix will be appended with the .pde suffix. If you then open the main sketch Java file and look at its contents, you'll notice that the .pde classes were put within this file, even if you originally created them within a separate tab. However, any .java classes you created will not be included within the main sketch Java file.
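To make the inner-class point concrete, the exported main sketch file has roughly the following shape. This is a simplified, hand-written approximation (the exact boilerplate the preprocessor generates varies by Processing version), and the sketch and class names are made up.

// Rough shape of an exported sketch's main .java file (approximation)
import processing.core.*;

public class MySketch extends PApplet {

  public void setup(){
    size(400, 400, P3D);
    // ... main tab code ...
  }

  public void draw(){
    // ... main tab code ...
  }

  // any .pde tab classes end up in here, as inner classes
  class Helper{
    // ...
  }
}

Classes you saved in .java tabs, by contrast, stay in their own files alongside this one.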
Returning to the Bird.java class code, at the top of the Bird.java file, you'll see an import statement:

import processing.core.PApplet;

This line explicitly provides the location of the PApplet class, which resides in the processing.core package. Java uses a package structure to organize classes. A package is just a directory, and the dot separating the names simply shows the directory nesting order. For example, processing.core.PApplet means a class named PApplet is within a directory named core, which is in another directory called processing. Usually, packages in Java are organized by functionality and/or logical class relationships. For example, there is a package in Java called java.awt (the awt stands for Abstract Windowing Toolkit) that contains 88 separate classes. As stated in the Java API, these classes are used for "creating user interfaces and for painting graphics and images." Processing's core classes are very logically stored in the processing.core package. Package names, by convention, are all lowercase, while class names begin with a capital (in PApplet's case, two capitals). Instead of importing one class at a time, you can alternatively import all the classes in a specific package by using an asterisk, like this:

import processing.core.*;

If I needed to use a bunch of classes within the core directory, instead of just the PApplet class, I would have used the latter format.

Below the import statement, the Bird code is pretty similar to what we've already looked at, with some minor changes of course. You'll notice the words private and public added in front of the properties and methods throughout the class. These words are referred to as access modifiers. Normally (and in contrast to Processing), instance properties in Java are declared private, and we rely on public methods to manipulate and access an object's private properties. As I introduced in Chapter 8, there is a standard pair of methods, referred to as getters/setters (or more pretentiously, accessors/mutators), that are used to access/manipulate the private properties. Since this is an established Java convention, you can assume that most Java classes follow this structure. When a property is declared private, it may not be accessed directly from another class, not even from Processing's main tab. For example, the property wingFill is declared private in the Bird class. If I instantiate a Bird object called b in the main tab and try to access its wingFill property like this: b.wingFill, I'll get an error stating the property is private and inaccessible. You can learn more about Java access modifiers here: http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html.

I needed to add one new property to the Bird.java class that didn't need to be added to the earlier .pde (inner) Bird class:

private PApplet p;

Notice that PApplet is the same class name as the class referred to in the import statement, at the top of the Bird.java class. PApplet is one of Processing's core classes, and it contains methods by the same name as Processing's built-in functions. If you want to be able to use the Processing API from within an external Java class, you need to use a reference to this PApplet class to be able to call all of Processing's extremely handy graphics functions. In the main sketch (leftmost) tab, when you issue a function call, as I mentioned earlier in the chapter, there is actually a class connected to all those calls. The class is indeed PApplet, and you can refer to it directly with the special keyword this. All along, you've been calling Processing's functions without the use of this, but it's also possible to append this to the front of Processing's function calls. For example, the following simple Processing sketch will run fine:

this.size(400, 400);
this.background(127, 75, 110);
this.fill(255, 200, 40);
this.strokeWeight(5);
this.stroke(20, 255, 175);
this.rectMode(this.CENTER);
this.rect(this.width/2, this.height/2, 150, 150);

Of course, the use of this here is unnecessary, but hopefully you get the point—whether you see them or not, classes abound in Processing (even if they are cleverly referred to as functions in the Processing reference). The PApplet class from the main sketch tab is very significant in Processing, as it's critically tied to Processing's graphics capabilities. Thus, when you work with external Java files, such as the Bird.java class, you need a reference to this special PApplet object to be able to call Processing's functions. You accomplish this by passing the keyword this as an argument from the main tab to any external classes that need to use Processing's graphics functions (really, PApplet methods). This is the reason I added a PApplet reference as the initial parameter to the two constructors in the Bird class:

// constructors
public Bird(PApplet p){
  this(p, 0, 0, 0, 60, 80);
}

public Bird(PApplet p, float offsetX, float offsetY, float offsetZ, float w, float h){
  this.p = p;
  this.offsetX = offsetX;
  this.offsetY = offsetY;
  this.offsetZ = offsetZ;
  this.h = h;
  this.w = w;
  bodyFill = p.color(200, 100, 10);
  wingFill = p.color(200, 200, 20);
}

Notice also within the Bird.java class that I'm using the keyword this. However, the this argument passed into the Bird constructor and the this within the assignment statements in the Bird constructor refer to two different classes. Huh? Every class can use the keyword this, and it always refers to the class in which it lives. When you pass in this as an argument from your main sketch to the Bird.java class constructors, this refers to an instance of the main sketch's PApplet class. Within the Bird constructors, this in the assignment statements refers to the Bird class. By assigning the PApplet reference (passed into the constructor) to the instance property p, declared at the top of your class (giving it global scope within the class), you can now use the PApplet reference throughout the entire Bird class to call Processing functions (PApplet methods). Whew! When you look through the rest of the Bird class, you'll see p. appended to the front of Processing's function calls. Astute readers may also notice that some of Processing's keywords (void, float, and int) don't have the appended p.; this is because these keywords are actually legal keywords in Java and don't require the PApplet reference; see, you've been using Java all along, but never realized it.

That's everything you need to do to use a real external Java class in Processing. To try running the Java-ized sketch, you'll just need to add one word to some code we used earlier: the keyword this, passed as an argument when instantiating the Bird object.

// Java-ized Bird
Bird b;

void setup(){
  size(400, 400, P3D);
  noStroke();
  b = new Bird(this);
}

void draw(){
  background(150, 120, 255);
  lights();
  b.fly();
}

Although the output doesn't look any different from the earlier .pde version, you should feel good about yourself: you've reached Processing's advanced Java mode. However, before you crack open the bubbly, there is one more simple edit you'll want to make to your main sketch code to bring it into "certified" Java mode. But before you make this last edit, why not run the sketch a couple of times first, to bask in your success. See if you can get the earlier flock code to run using the Bird.java file. (Hint: It just requires the addition of the this keyword as the first argument in the Bird instantiation statements.)

It's actually incredibly simple to convert your main flock sketch code into Java mode. You just need to add a class declaration line at the top of the code, including an open curly brace, and then add a closing curly brace at the very end of the code—that's it! Before you do this, though, add the line print(this); to the bottom of the setup() function and run the sketch as is. You should see output similar to the following:

Temporary_6210_2422[panel0,0,22,400x400,layout=java.awt.FlowLayout]

Now let's convert to Java mode and rerun the sketch, and then compare the two outputs.
Please note, once you move to full-blown Java mode, you can no longer use .pde classes within tabs; these classes should either be added directly as inner classes within the main sketch or, better yet, converted to .java files.
Add the following line to the top of your main sketch code, and also don't forget to add a closing curly brace at the bottom of all the code. (I'll now refer to the main sketch code as the Flock class.)

public class Flock extends PApplet {

Also, remember to add the this argument to the Bird instantiation calls:

birds[i] = new Bird(this, random(-300, 300), random(-300, 300),
  random(-500, -2500), random(10, 60), random(10, 60));

Rerunning the code now should output the following:

Flock[panel0,0,22,400x400,layout=java.awt.FlowLayout]

The output tells us that this now refers to the Flock class. You should remember that this also still references the PApplet class, through inheritance—implemented through the use of the extends keyword (public class Flock extends PApplet {). When a class extends another class, instances of the class refer to both data types (the class and its superclass). To test this—that Flock instances are indeed of both Flock and PApplet types—we'll add another method to the Flock class and then try to call it from within our Bird class. Add the following method to the Flock class, below the draw() method, but above the final closing curly brace:

void getBirdCount(){
  println("bird count = " + birdCount);
}

Finally, add the last two lines in the following code (the cast and the getBirdCount() call) to the bottom of the main Bird constructor:

public Bird(PApplet p, float offsetX, float offsetY, float offsetZ, float w, float h){
  this.p = p;
  this.offsetX = offsetX;
  this.offsetY = offsetY;
  this.offsetZ = offsetZ;
  this.h = h;
  this.w = w;
  bodyFill = p.color(200, 100, 10);
  wingFill = p.color(200, 200, 20);
  Flock f = (Flock)p; // explicit casting required
  f.getBirdCount();
}

I suspect the syntax (Flock)p probably looks odd. This is called explicit casting, in which you convert (or cast) an object from one type to another (in this case, from PApplet to Flock). I needed to explicitly cast the PApplet p reference to type Flock to be able to call the getBirdCount() method added to the Flock class. The reason I was able to cast the p object between the PApplet and Flock types is that the two classes are related through inheritance. The Flock class extends the PApplet class, making Flock objects of both Flock and PApplet types. Remember, in this relationship, the PApplet class is considered the superclass to the Flock subclass. Explicit casting is required when you want to assign a superclass reference (p) to a subclass reference (f), as I did in the line Flock f = (Flock)p;.

Rerunning the sketch, you should be able to successfully call the Flock class's getBirdCount() method from within the Bird class, using the this reference passed into the Bird constructor (with the required explicit casting, of course). Again, this was possible because the this argument passed references of both types (Flock and PApplet) through the magic of inheritance. One final way to demonstrate that Flock objects are of both Flock and PApplet types (one that may be more reassuring to some readers) is to pass an additional this argument in the Bird instantiation statements:

birds[i] = new Bird(this, this, random(-300, 300),
  random(-300, 300), random(-500, -2500),
  random(5, 30), random(5, 30));

This also requires that you update the constructors in the Bird class to account for the extra this argument being passed in:

// constructors
public Bird(PApplet p){
  this(new Flock(), p, 0, 0, 0, 60, 80);
}

public Bird(Flock f, PApplet p, float offsetX, float offsetY, float offsetZ, float w, float h){
  this.p = p;
  this.offsetX = offsetX;
  this.offsetY = offsetY;
  this.offsetZ = offsetZ;
  this.h = h;
  this.w = w;
  bodyFill = p.color(200, 100, 10);
  wingFill = p.color(200, 200, 20);
  // Flock f = (Flock)p; casting no longer necessary
  f.getBirdCount();
}
Because the this argument passed to the Bird constructor is of both Flock and PApplet types, you’re able to use the two different parameter types in the head of the Bird constructor. And since you’re now catching the this argument with a reference to both the PApplet and Flock classes, you no longer need to do the explicit casting. Whew again! Now let’s return to 3D, where you’ll eventually apply your newfound Java skills.
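Before returning to 3D, here's the same upcast/downcast idea boiled down to a generic, stand-alone Java example; the class names are purely illustrative and aren't from the book.

// Casting.java - a minimal illustration of superclass/subclass casting
class Animal { }

class Wren extends Animal {
  void sing(){
    System.out.println("tweet");
  }
}

public class Casting {
  public static void main(String[] args){
    Animal a = new Wren(); // implicit upcast: every Wren is an Animal
    Wren w = (Wren)a;      // explicit downcast, needed to reach Wren-only methods
    w.sing();
  }
}

The downcast compiles because the compiler takes your word that a really does point to a Wren; if it didn't at runtime, you'd get a ClassCastException, which is one reason passing the extra, already-typed Flock reference into the Bird constructor is arguably the tidier approach.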
Rendering in 3D

To begin to better understand the workings of a 3D renderer, you need to review a couple of core concepts. We'll begin by creating some 3D objects, utilizing a more sophisticated and efficient approach than was covered last chapter. The 3D objects you'll build include (yet another) cube, a table, and a drinking glass. Finally, you'll put these together to construct a little minimalist scene, including some nice mood lighting.

I'll begin the process by generating some initial classes. Last chapter, when you built the 3D primitives, you created a Point3D class. This very simple class described a vertex in 3D coordinate space. I want to build upon this concept with an improved and more sophisticated class. One of the conceptual limitations of using a Point class is that the specific vertex coordinate values of an object are actually less important than the relative positions and distances between the different points. You'll construct each of your objects centered around the origin, so when you move the object along an axis, you'll be translating all the object's vertices an equal amount along the axis. Obviously, the literal values of the vertices will change from the translation in relation to the larger coordinate system the object sits in (commonly referred to as the world coordinate space), but the relative positions and distances between the individual vertices will not. Thus, in 3D, it is common to create code structures that take into account these different coordinate spaces, as well as how to translate between them. However, this can get overwhelmingly complicated, especially for new coders (as well as more experienced coders). Thus, I'll be taking a very simplified approach to all this, and relying on a new class to help us keep track of some of this stuff.

The new class you'll be creating is called Vector3D, and you'll be building it in stages, adding new features as you need them. Initially, the Vector3D class will look a little like our Point3D class from last chapter, which only accounted for an object's vertices. The Vector3D class will still account for the object's vertices, but it will also include a number of methods that will allow you to perform some pretty cool 3D mathematical operations on the vertices.

I discussed vectors in Chapter 11, but a little review is probably necessary. A vector is a quantity that describes both direction and magnitude, as opposed to a scalar quantity, which only describes magnitude. So, if I say a car is moving at the rate of 50 mph, you know how fast the car is moving, but you have no idea in what direction it's moving. Thus, 50 mph, or the car's speed, is a scalar value. If, however, I tell you a car is traveling 50 mph from Oxford, Ohio to Columbus, Ohio (northeast), you now know both the speed and direction the car is traveling. We refer to both of these values together, speed and direction, as velocity—which is a vector quantity. The easiest way to think of a vector is as an arrow, with the head of the arrow representing direction and the length of the arrow representing distance, or magnitude (shown in Figure 14-5).
Figure 14-5. Vector diagram
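As a concrete check on the car example (my numbers, not the book's): a velocity of 30 mph east and 40 mph north is the vector (30, 40, 0), and its magnitude, the speed, works out to 50 mph by the Pythagorean theorem.

// speed (magnitude) of the velocity vector (30, 40, 0)
float speed = sqrt(30*30 + 40*40 + 0*0); // sqrt(2500) = 50.0
println(speed);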
The cool thing about a vector is that you can put it anywhere, on any coordinate system, and its internal value will remain the same, as would the distances between vertices on a 3D object translated along some axis or axes. (Of course, it's also possible to scale or skew a 3D object, thus changing its vector values, but we won't go there.) As you'll see a little later in the chapter, using vectors also allows you to perform powerful, yet relatively simple, 3D calculations. Without further ado, then, here's our new (still very under development) Vector3D.java class, in Java mode of course. To follow along, please create a new Processing sketch, and then create a new tab named Vector3D.java. The following code should be entered into the new tab:

import processing.core.PApplet;

public class Vector3D{

  public float x, y, z;
  private float[] origVals;

  public Vector3D(){
  }

  public Vector3D(float x, float y, float z){
    this.x = x;
    this.y = y;
    this.z = z;
    // capture original values
    origVals = new float[]{x, y, z};
  }

  //methods
  public void add(Vector3D v){
    x += v.x;
    y += v.y;
    z += v.z;
  }

  public void subtract(Vector3D v){
    x -= v.x;
    y -= v.y;
    z -= v.z;
  }

  public void multiply(float s){
    x *= s;
    y *= s;
    z *= s;
  }

  public void divide(float s){
    x /= s;
    y /= s;
    z /= s;
  }

  public void reset(){
    x = origVals[0];
    y = origVals[1];
    z = origVals[2];
  }

  public void setTo(Vector3D v){
    x = v.x;
    y = v.y;
    z = v.z;
  }

  public float getMagnitude(){
    return PApplet.sqrt(x*x + y*y + z*z);
  }
}

The Vector3D class is pretty straightforward, with two constructors and seven methods for performing basic operations on the vectors. The add() and subtract() methods use Vector3D objects as parameters, while the multiply() and divide() methods use scalar float values. A few paragraphs back, when I discussed Java conventions, I mentioned that class properties are usually declared private, while class methods, used to access and change the private properties, are declared public. Well, it didn't take me long to break with convention. You'll notice in the Vector3D class that I declared the x, y, and z properties as public. Before you call the Java police on me, there is a convention in Java for allowing public access to the components of points, just as I did. You'll be accessing the components often, and it would get tiresome to have to keep calling get() methods for each of them, instead of just .x, .y, and .z.

The reset() method is used to reinitialize the vector component values to their original values. You'll need this when you begin to transform the vertices. reset() simply assigns the x, y, and z properties the values stored in the origVals[] array. The setTo() method just applies new values to the vector, passed in as a Vector3D argument. The getMagnitude() method, which returns the length of a vector, includes some syntax you may not have seen before:

return PApplet.sqrt(x*x + y*y + z*z);

I appended the class name PApplet. to the front of Processing's sqrt() function. Normally, you use an instance of a class (an object) to invoke a method, not the actual class name itself; so what gives? When I switched to Java mode a couple pages ago, I added the special access modifier keywords private and public when declaring properties and methods, respectively. Besides access modifiers, there are some other keywords you can use when declaring the members (properties and methods) of a class. One of these keywords is static. When you declare a method with the static keyword, the method becomes a static method (also referred to as a class method). This works the same way for properties of a class. Up until this point in the book, you've mostly been using instance class members. The difference between an instance member and a static member is that the latter, as I mentioned before, doesn't require the use of an object to be invoked. Instead, you can call a static method by using the actual class name, connected via a dot to the method name, like so: ClassName.methodName. One of the benefits of this structure is that you can build utility-type classes that can be used by any other class to solve a single specific task. The two classes might not have any other relationship at all, and you wouldn't want to create any real relationship between the classes, nor would you need to ever instantiate a specific object of the class. The most famous Java class that works this way is the Math class, which only contains static members. The Math class methods are used for performing basic mathematical operations. For example, to generate a random value in Java, you write Math.random(). Processing's math functions rely internally on Java's Math class. The PApplet class contains many of its own static methods, which internally call the same-named methods in Java's Math class. If you're interested, the reason why Processing goes to this trouble is to ensure that calls to Processing's math functions return float values, as opposed to the double values returned by many of Java's Math methods. double values take up more memory than floats, but are also more precise; Processing trades this precision for some increased performance and ease of use.

The getMagnitude() method returns the magnitude of the vector, which is calculated using an abridged version of the distance formula, which is based on the Pythagorean theorem. The distance formula in three dimensions is as follows:

√((x₂-x₁)² + (y₂-y₁)² + (z₂-z₁)²)

When dealing with vectors, x₁, y₁, and z₁ will be 0, so you can simply eliminate these terms from the expression, leaving you with √(x₂² + y₂² + z₂²), which translates to the following in code:

sqrt(x*x + y*y + z*z);
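As a quick sanity check of the class (my own snippet, not the book's), you could run something like the following in a sketch's main tab, assuming the Vector3D.java tab above is in place; the (3, 4, 12) vector is handy because its magnitude comes out to a clean 13.

void setup(){
  Vector3D v1 = new Vector3D(3, 4, 12);
  Vector3D v2 = new Vector3D(1, 1, 1);
  println(v1.getMagnitude()); // sqrt(9 + 16 + 144) = 13.0
  v1.add(v2);                 // v1 is now (4, 5, 13)
  v1.reset();                 // back to the original (3, 4, 12)
  println(v1.x + ", " + v1.y + ", " + v1.z);
}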
Besides the Vector3D class, you also need to construct a 3D polygon class. The polygons will encapsulate the vectors, making it easier for you to build an object. I did a similar thing last chapter using Point3D objects for the cube's vertices, which I encapsulated inside of quadrangles. In this chapter, however, instead of using quadrangles, you'll build your objects with three-point polygons, better known as triangles. Working with triangles has certain advantages in 3D, as compared to polygons with more than three points. The most important advantage is that triangles are always planar; in other words, all three points of a triangle are guaranteed to lie on the same plane. Polygons with greater than three points can be non-planar, as illustrated in Figure 14-6.
Figure 14-6. Non-planar geometry
This may seem like an arbitrary factoid, especially if you’re totally new to 3D, but planar geometry has certain big advantages when rendering 3D geometry. For example, there are calculations in 3D that rely on being able to find a vector perpendicular to each of an object’s polygonal faces. These perpendicular lines are referred to as surface normals; I’ll discuss them in depth a bit later in the chapter. Planar geometry is required to be able to properly calculate a surface’s normal. In spite of the benefit of working with triangles for 3D, it’s very common, especially when constructing geometry in a 3D modeling application, to create polygons, as well as curves, with point counts greater than three. Although triangles are efficient for the renderer, they can be harder for people to build with; I personally find quads much easier. One solution to this, within the rendering pipeline, is to post-process higher-point-count geometry into triangles prior to performing rendering calculations. This process occurs in sophisticated 3D engines, and is referred to as tessellation. To keep things programmatically simpler, I’ll only be using triangles to construct the objects this chapter. Here’s a really simple Triangle3D.java class. Again, to keep things as simple as possible, you’ll notice I declared the Vector3D[] v property public, which will allow you to replace chunky complicated code such as getVects()[0].x with the simpler v[0].x.
public class Triangle3D{

  public Vector3D[] v = new Vector3D[3];

  public Triangle3D(){
    v[0] = new Vector3D();
    v[1] = new Vector3D();
    v[2] = new Vector3D();
  }

  public Triangle3D(Vector3D v0, Vector3D v1, Vector3D v2){
    v[0] = v0;
    v[1] = v1;
    v[2] = v2;
  }
}

To add the Triangle3D code to the current sketch, create a new tab named Triangle3D.java and enter the new class code into the tab. To test the new Vector3D and Triangle3D classes, I'll generate a rectangle in 3D space. The approach I'll use in constructing the rectangle will be very similar to what I'll use for more complicated geometry to come. Keeping with the Java mode approach, I'll generate one additional Rectangle3D.java class. This class should also be put in its own tab, named Rectangle3D.java.

public class Rectangle3D{

  private float w, h;
  private Vector3D[] v = new Vector3D[4];
  private Triangle3D[] t = new Triangle3D[2];

  public Rectangle3D(float w, float h){
    this.w = w;
    this.h = h;
    setVertices();
    setTriangles();
  }

  public float getWidth(){
    return w;
  }

  public void setWidth(float w){
    this.w = w;
  }

  public float getHeight(){
    return h;
  }

  public void setHeight(float h){
    this.h = h;
  }

  public void setVertices(){
    v[0] = new Vector3D(-w/2, -h/2, 0);
    v[1] = new Vector3D(w/2, -h/2, 0);
    v[2] = new Vector3D(w/2, h/2, 0);
    v[3] = new Vector3D(-w/2, h/2, 0);
  }

  public Vector3D[] getVertices(){
    return v;
  }

  public void setTriangles(){
    t[0] = new Triangle3D(v[0], v[1], v[2]);
    t[1] = new Triangle3D(v[0], v[2], v[3]);
  }

  public Triangle3D[] getTriangles(){
    return t;
  }
}

And here's the Java mode code to add to the leftmost main tab to actually draw the fabulous rectangle (shown in Figure 14-7) in 3D space (rotating, of course). Please replace any code currently in the main tab with the following:

public class MyController extends PApplet {

  Rectangle3D r;
  Triangle3D[] t;

  void setup(){
    size(400, 400, P3D);
    r = new Rectangle3D(200, 200);
    t = r.getTriangles();
  }

  void draw(){
    background(255);
    fill(225);
    translate(width/2, height/2);
    rotateY(frameCount*PI/100);
    for (int i=0; i<t.length; i++){
      beginShape(TRIANGLES);
      for (int j=0; j<3; j++){
        vertex(t[i].v[j].x, t[i].v[j].y, t[i].v[j].z);
      }
      endShape();
    }
  }
}
Figure 14-7. 3D Rectangle sketch
If you have any problem running this sketch, make sure that the tabs holding the class code are each named with the exact same name as the respective class they contain, including the .java suffix.
One of the really nice things about the geometry construction approach in the last example is that you should ideally be able to render any 3D object using it, as long as the object’s vertex data can be stored as an array of triangles. For a rendering engine to be effective, it shouldn’t matter what object you send to it to render, as long as the object adheres to certain standards, such as always being composed of an array of triangles. You’ll soon improve upon the last example by expanding upon this principle and creating a new common data type that each of the 3D objects will inherit from. Before you do that, though, let’s create a real 3D object class; this will be yet another spinning cube, but this one will be built more efficiently than the one we built last chapter.
An enlightened 3D cube

Last chapter, I conceptualized a cube as a grouping of 6 separate quadrangle faces, each with its own 4 unique vertices, giving us 24 separate vertices. This was a simple way to think about a cube, but not terribly efficient, since a cube is only really composed of eight unique vertices, and each vertex represents a corner joining three quadrangle faces. In the new Cube class (built with triangles instead of quadrangles), rather than think of each face as 2 separate triangles, each with 3 points, for a total of 36 vertices, I'll stick with the 8 unique vertices, which I'll share among the cube's 12 triangle faces. Adding to the current sketch, create a new tab named Cube.java and enter the following code:
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T public class Cube{ private float w, h, d; private Vector3D[]v = new Vector3D[8]; private Triangle3D[]t = new Triangle3D[12]; public Cube(float w, float h, float d){ this.w = w; this.h = h; this.d = d; setVertices(); setTriangles(); } //getter/setters public float getWidth(){ return w; } public void setWidth(float w){ this.w = w; } public float getHeight(){ return h; } public void setHeight(float h){ this.h = h; } public float getDepth(){ return d; } public void setDepth(float d){ this.d = d; } public void setVertices(){ v[0] = new Vector3D(-w/2, -h/2, d/2); v[1] = new Vector3D(w/2, -h/2, d/2); v[2] = new Vector3D(w/2, h/2, d/2); v[3] = new Vector3D(-w/2, h/2, d/2); v[4] = new Vector3D(w/2, -h/2, -d/2); v[5] = new Vector3D(-w/2, -h/2, -d/2); v[6] = new Vector3D(-w/2, h/2, -d/2); v[7] = new Vector3D(w/2, h/2, -d/2); } public Vector3D[] getVertices(){ return v; }
3 D R E N D E R I N G I N J AVA M O D E public void setTriangles(){ //front face t[0] = new Triangle3D(v[0], v[1], v[2]); t[1] = new Triangle3D(v[0], v[2], v[3]); //back face t[2] = new Triangle3D(v[4], v[5], v[6]); t[3] = new Triangle3D(v[4], v[6], v[7]); //right face t[4] = new Triangle3D(v[1], v[4], v[7]); t[5] = new Triangle3D(v[1], v[7], v[2]); //left face t[6] = new Triangle3D(v[5], v[0], v[3]); t[7] = new Triangle3D(v[5], v[3], v[6]); //top face t[8] = new Triangle3D(v[0], v[5], v[4]); t[9] = new Triangle3D(v[0], v[4], v[1]); //bottom face t[10] = new Triangle3D(v[2], v[7], v[6]); t[11] = new Triangle3D(v[2], v[6], v[3]); } public Triangle3D[] getTriangles(){ return t; } } One subtle but important point about the Cube class is the order of vertices I specified in the setTriangles() method. A triangle’s vertices can be connected in two different ways—clockwise or counterclockwise. This may not seem like such an important distinction, but it turns out that certain rendering calculations, especially lighting, have a direct correlation on the connecting direction you choose. In the Cube class, I made sure all the triangles, as viewed from the outside of the cube, were connected in a clockwise fashion. A little later in the chapter, when I discuss surface normals, I’ll revisit this point. To render a cube (as shown in Figure 14-8) with your new, enlightened cube code, replace the current code in the leftmost main tab with the following: public class MyController extends PApplet { Cube c; Triangle3D[]t; void setup(){ size(400, 400, P3D); c = new Cube(175, 175, 175); t = c.getTriangles(); noStroke(); }
void draw(){ background(100); lights();
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T fill(150); translate(width/2, height/2); rotateY(frameCount*PI/100); rotateX(frameCount*PI/75); for (int i=0; i<t.length; i++){ beginShape(TRIANGLES); for (int j=0; j<3; j++){ vertex(t[i].v[j].x, t[i].v[j].y, t[i].v[j].z); } endShape(); } } }
Figure 14-8. Enlightened Cube sketch
This code is actually quite similar to what I used to render the rectangle. I added a second axis rotation and Processing's lights() command, which gives a default lighting setup. Both the rotation and lights were added just to help better display the three-dimensionality of the cube. Besides that, the code works identically to the rectangle code. Again, by standardizing how geometry is structured, you can easily plug in new objects.

One limitation in the last sketch, from an OOP point of view, is that the actual triangle rendering is exposed in the main Processing tab, within the draw() function, and there is also no simple method for rendering more than one object at a time. A better solution is to create a separate class that will encapsulate the actual object drawing and also allow you to transparently evolve the guts of the geometry construction/drawing algorithm (its implementation) while maintaining a consistent interface for its use. This is a fundamental principle of OOP. By encapsulating functionality in classes with clearly defined public interfaces (the methods that you use to interact with the class), you can change how the class is eventually implemented internally, without changing how people interact with the class; who wants to learn new ways to do the same old (albeit perhaps improved) thing?
3 D R E N D E R I N G I N J AVA M O D E The new class will be called IG3D.java. The class for now will be quite simple, but I’ll soon add to it (as well as improve it). Here’s the initial code, which should be added to the current sketch, in a new tab named IG3D.java: import processing.core.*; public class IG3D{ private PApplet p; public IG3D(PApplet p){ this.p = p; } public void render(Cube c){ Triangle3D[]t = c.getTriangles(); for (int i=0; i<t.length; i++){ p.beginShape(p.TRIANGLES); for (int j=0; j<3; j++){ p.vertex(t[i].v[j].x, t[i].v[j].y, t[i].v[j].z); } p.endShape(); } } } The drawing code within the render() method works the same way as when it was in the main tab, but now the messy plotting stuff is properly encapsulated. To allow drawing to occur in an external Java file, I needed to pass a reference to the main PApplet, which I did by passing the argument this to the IG3D constructor. I then needed to use the passed-in PApplet reference, which I internally assigned to the global PApplet variable p, in front of any calls to Processing’s drawing functions (PApplet methods). You looked at this process earlier in the chapter, with the Flock.java class example. Here’s the code to paste into the main tab to try out the new IG3D class. Please replace whatever code is currently in the leftmost tab with the following: public class MyController extends PApplet { Cube c; IG3D i; void setup(){ size(400, 400, P3D); c = new Cube(175, 175, 175); i = new IG3D(this); noStroke(); }
void draw(){ background(100); lights(); fill(150); translate(width/2, height/2);
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T rotateY(frameCount*PI/100); rotateX(frameCount*PI/75); i.render(c); } } If you run the sketch now, the output will look identical to the last version you ran. However, the program structure is better and more highly encapsulated. Notice how little code you needed to actually enter into the main tab. There’s still one other significant modification you need to make to the IG3D class that will allow it to render multiple types of objects. If you look back at the IG3D code, you’ll notice in the render(Cube c) method that I included the parameter c of type Cube. This would be a fine approach if you knew that you were only ever going to construct cubes. However, that might get a little redundant (unless of course you’re a cubist—sorry). One (not very good) solution would be to have multiple overloaded render() methods in the IG3D class, each with a different object type parameter that would be invoked based on what type of object was passed in. So, if you had five different object types, you would have five render(object type reference) methods, each with a different type object as a parameter. This would work, as Java allows overloaded methods (multiple methods with the same name but a different parameter list). However, this is a flawed OOP strategy, since you’d need to keep going back into the class adding new render() methods every time a new object was developed. Ideally, you want to design a class that can be compiled once, without the need to keep returning to the source code—even as new object types are created. This may not at first seem possible, but it is the preferred OOP way (and of course it is possible). The correct solution will be based on one of the central tenets of OOP, inheritance, which will allow your different 3D object classes to ultimately share another common data type. Thus, instead of using specific parameters of each object subtype (Rectangle, Cube, etc.), you’ll use a shared common object type as the parameter in the render() method, which will catch all the different objects passed in as arguments that inherit from this shared type. We refer to this very cool OOP capability as polymorphism, which allows objects to assume multiple forms (data types).
Inheritance and polymorphism are covered in other places in the book, especially Chapter 8, if you’d like a more detailed review of these difficult concepts.
There are two ways you can carry out the inheritance strategy: either using another class as a superclass, which you’d extend, or using an interface, which you’d implement. The former would be most helpful if the different shapes all shared a bunch of common structures (properties and methods), while the latter strategy would be best if you’re mostly just interested in the extra data type association and some common methods. Remember that a class can only inherit (extend) from one other class, but it can implement many interfaces. In determining my current solution, since I’m just interested in adding a common data type to my objects and the four methods setVertices(), getVertices(), setTriangles(), and getTriangles(), I decided to use an interface.
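If the extends-versus-implements distinction is hazy, here's a bare-bones illustration. The class and interface names are made up for the example (they're not part of the sketch): a class may extend only one superclass, but it can implement as many interfaces as it likes.

class Shape3DBase {
  protected float w, h, d;        // properties shared by subclasses
}

interface Renderable {
  void render();                  // declared, not implemented
}

interface Spinnable {
  void spin(float theta);
}

// one superclass, any number of interfaces
class SpinningBox extends Shape3DBase implements Renderable, Spinnable {
  public void render(){ /* draw the box */ }
  public void spin(float theta){ /* rotate the box */ }
}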
3 D R E N D E R I N G I N J AVA M O D E In the current sketch, create a new tab and name it IGShape3D.java. Paste the following code in the tab: public interface IGShape3D { public void setVertices(); public Vector3D[] getVertices(); public void setTriangles(); public Triangle3D[] getTriangles(); } Methods within interfaces are simply declared, not implemented; that’s why there are no curly brace blocks for any of the four methods. Any class that implements an interface is required to also implement each method in the interface. This contract between a class and any implemented interfaces helps ensure consistency between the related classes that all share the common interface (remember, also a common type). Since each class will be required to internally implement the same methods (using their own customized implementation), you get the benefit of a common interface (the shared method names to invoke), but with an individualized implementation (the filled-in method blocks). To use the new IGShape3D interface, you need to update the Rectangle3D and Cube classes by adding implements IGShape3D after the class name in the class declaration statement (e.g., public class Cube implements IGShape3D{). Fortunately, the four declared methods in the IGShape3D interface are already implemented in the Rectangle3D and Cube classes—I thought ahead. You need to update the IG3D.java class, replacing the parameter Cube c in the render(Cube c) method with render(IGShape3D shape). Also, in the first line in this method, you need to replace c with shape, like so: Triangle3D[]t = shape.getTriangles(); Here’s the updated IG3D.java class: import processing.core.*; public class IG3D{ private PApplet p; public IG3D(PApplet p){ this.p = p; } public void render(IGShape3D shape){ Triangle3D[]t = shape.getTriangles(); for (int i=0; i<t.length; i++){ p.beginShape(p.TRIANGLES); for (int j=0; j<3; j++){ p.vertex(t[i].v[j].x, t[i].v[j].y, t[i].v[j].z); } p.endShape(); } }
}
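To see the polymorphism in action once Rectangle3D and Cube implement IGShape3D (as described above), you could temporarily drop something like the following into the main tab's draw() function; the sizes are arbitrary, and the two shapes will overlap, but the point is that the same render() call handles both types:

// both objects can be referenced through the shared IGShape3D type
IGShape3D shape = new Cube(175, 175, 175);
i.render(shape);
shape = new Rectangle3D(200, 200);
i.render(shape);   // same call, different class doing the work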
You should now be able to run the sketch as before, but of course with a much deeper and more profound sense of accomplishment. Just to ensure a mistake wasn't made, here are the steps I just discussed, in list form:
1. Add the IGShape3D.java code into a new tab of the same exact name.
2. Add the implements statement: public class Cube implements IGShape3D{.
3. Add the implements statement: public class Rectangle3D implements IGShape3D{.
4. Change the Cube c parameter to IGShape3D shape: public void render(IGShape3D shape){.
5. Change the object reference from c to shape: Triangle3D[]t = shape.getTriangles();.

I also suspect you might want to put all this work you've done to a little better use than just rendering yet another rotating cube. To that end, I'll show you how to construct a table and a vessel, which you'll eventually compose together with the existing Rectangle3D and Cube classes to generate a quaint, pastoral dining scene. In the existing sketch, create a new tab named Table.java, and enter the following code:

public class Table implements IGShape3D{
  private float tableTopThickness;
  private float legThickness;
  private float w, h, d;
  private Vector3D[]v = new Vector3D[40];
  private Triangle3D[]t = new Triangle3D[44];

  public Table(float w, float h, float d){
    this(w, h, d, (w+d)/40, h/10);
  }

  public Table(float w, float h, float d, float tableTopThickness,
               float legThickness){
    this.w = w;
    this.h = h;
    this.d = d;
    this.tableTopThickness = tableTopThickness;
    this.legThickness = legThickness;
    setVertices();
    setTriangles();
  }

  //getter/setters
  public float getWidth(){
    return w;
  }
  public void setWidth(float w){
    this.w = w;
  }
3 D R E N D E R I N G I N J AVA M O D E public float getHeight(){ return h; } public void setHeight(float h){ this.h = h; } public float getDepth(){ return d; } public void setDepth(float d){ this.d = d; } //required implemented methods public void setVertices(){ // table top (top surface) v[0] = new Vector3D(-w/2, -h/2, d/2); v[1] = new Vector3D(-w/2, -h/2, -d/2); v[2] = new Vector3D(w/2, -h/2, -d/2); v[3] = new Vector3D(w/2, -h/2, d/2); // table top (bottom surface) v[4] = new Vector3D(-w/2, -h/2+tableTopThickness , d/2); v[5] = new Vector3D(-w/2, -h/2+tableTopThickness, -d/2); v[6] = new Vector3D(w/2, -h/2+tableTopThickness, -d/2); v[7] = new Vector3D(w/2, -h/2+tableTopThickness, d/2); /**front legs**/ // left front leg (leg top) v[8] = new Vector3D(-w/2, -h/2+tableTopThickness, d/2); v[9] = new Vector3D(-w/2+legThickness, -h/2+ ➥ tableTopThickness, d/2); v[10] = new Vector3D(-w/2+legThickness, -h/2+tableTopThickness, ➥ d/2-legThickness); v[11] = new Vector3D(-w/2, -h/2+tableTopThickness, ➥ d/2-legThickness); // left front leg (leg bottom) v[12] = new Vector3D(-w/2, h/2 , d/2); v[13] = new Vector3D(-w/2+legThickness, h/2, d/2); v[14] = new Vector3D(-w/2+legThickness, h/2, d/2-legThickness); v[15] = new Vector3D(-w/2, h/2, d/2-legThickness); // right front leg (leg top) v[16] = new Vector3D(w/2, -h/2+tableTopThickness, d/2); v[17] = new Vector3D(w/2-legThickness, -h/2+ ➥ tableTopThickness, d/2); v[18] = new Vector3D(w/2-legThickness, -h/2+tableTopThickness, ➥ d/2-legThickness); v[19] = new Vector3D(w/2, -h/2+tableTopThickness, ➥ d/2-legThickness); // right front leg (leg bottom)
v[20] = new Vector3D(w/2, h/2, d/2);
v[21] = new Vector3D(w/2-legThickness, h/2, d/2);
v[22] = new Vector3D(w/2-legThickness, h/2, d/2-legThickness);
v[23] = new Vector3D(w/2, h/2, d/2-legThickness);
/**rear legs**/
// left rear leg (leg top)
v[24] = new Vector3D(-w/2, -h/2+tableTopThickness, -d/2);
v[25] = new Vector3D(-w/2+legThickness, -h/2+tableTopThickness, -d/2);
v[26] = new Vector3D(-w/2+legThickness, -h/2+tableTopThickness, -d/2+legThickness);
v[27] = new Vector3D(-w/2, -h/2+tableTopThickness, -d/2+legThickness);
// left rear leg (leg bottom)
v[28] = new Vector3D(-w/2, h/2, -d/2);
v[29] = new Vector3D(-w/2+legThickness, h/2, -d/2);
v[30] = new Vector3D(-w/2+legThickness, h/2, -d/2+legThickness);
v[31] = new Vector3D(-w/2, h/2, -d/2+legThickness);
// right rear leg (leg top)
v[32] = new Vector3D(w/2, -h/2+tableTopThickness, -d/2);
v[33] = new Vector3D(w/2-legThickness, -h/2+tableTopThickness, -d/2);
v[34] = new Vector3D(w/2-legThickness, -h/2+tableTopThickness, -d/2+legThickness);
v[35] = new Vector3D(w/2, -h/2+tableTopThickness, -d/2+legThickness);
// right rear leg (leg bottom)
v[36] = new Vector3D(w/2, h/2, -d/2);
v[37] = new Vector3D(w/2-legThickness, h/2, -d/2);
v[38] = new Vector3D(w/2-legThickness, h/2, -d/2+legThickness);
v[39] = new Vector3D(w/2, h/2, -d/2+legThickness);
}

public Vector3D[] getVertices() {
  return v;
}

public void setTriangles(){
  /***table top***/
  //top
  t[0] = new Triangle3D(v[0], v[1], v[2]);
  t[1] = new Triangle3D(v[0], v[2], v[3]);
  //bottom
  t[2] = new Triangle3D(v[5], v[4], v[7]);
  t[3] = new Triangle3D(v[5], v[7], v[6]);
  //front
  t[4] = new Triangle3D(v[0], v[3], v[7]);
  t[5] = new Triangle3D(v[0], v[7], v[4]);
  //right
  t[6] = new Triangle3D(v[3], v[2], v[6]);
  t[7] = new Triangle3D(v[3], v[6], v[7]);
  //back
  t[8] = new Triangle3D(v[2], v[1], v[5]);
  t[9] = new Triangle3D(v[2], v[5], v[6]);
  //left
  t[10] = new Triangle3D(v[1], v[0], v[4]);
  t[11] = new Triangle3D(v[1], v[4], v[5]);
  /***table legs***/
  // leg faces can be processed efficiently using a loop
  for (int i=0; i<32; i+=8){
    //front face
    t[12+i] = new Triangle3D(v[8+i], v[11+i], v[15+i]);
    t[13+i] = new Triangle3D(v[8+i], v[15+i], v[12+i]);
    //right face
    t[14+i] = new Triangle3D(v[11+i], v[10+i], v[14+i]);
    t[15+i] = new Triangle3D(v[11+i], v[14+i], v[15+i]);
    //back face
    t[16+i] = new Triangle3D(v[10+i], v[9+i], v[13+i]);
    t[17+i] = new Triangle3D(v[10+i], v[13+i], v[14+i]);
    //left face
    t[18+i] = new Triangle3D(v[9+i], v[8+i], v[12+i]);
    t[19+i] = new Triangle3D(v[9+i], v[12+i], v[13+i]);
  }
}

public Triangle3D[] getTriangles(){
  return t;
}
}

Although the Table class is lengthy, it's very similar to the Cube class. A table is really just a combination of five cubes (table top and four legs). The class follows the exact same structure as both the Rectangle3D and Cube classes, implementing the IGShape3D interface and, perforce, including implementations for the required setVertices(), setTriangles(), getTriangles(), and getVertices() methods. Here's the code to render a table (shown in Figure 14-9). Again, please replace the existing code in the leftmost main tab with the following code:

public class MyController extends PApplet {
  Table t;
  IG3D i;

  void setup(){
    size(400, 400, P3D);
    t = new Table(225, 165, 170);
    i = new IG3D(this);
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T noStroke(); } void draw(){ background(100); lights(); fill(150); translate(width/2, height/2); rotateY(frameCount*PI/100); rotateX(-PI/6); i.render(t); } }
Figure 14-9. Table sketch
The last object I'll create for our scene is a simple vessel, based on a drinking glass shape. Besides the general shape of a vessel, I'll also account for the thickness of its walls and the amount of detail to render. The plotting algorithm for this object is a bit more complicated than the table, but similar to the steps used last chapter to create the lathed toroid object. Lathing involves taking a cross-section shape and sweeping it around an axis, forming an object with radial symmetry, like a cylinder or wine glass. Figure 14-10 shows the cross-section shape used to create the vessel.
Figure 14-10. Vessel cross-section shape
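Before digging into the Vessel class itself, here's a stripped-down lathe sketch you can run on its own (regular Processing mode, no extra tabs needed). The profile values are made up; the point is simply to show a 2D cross section being copied radially around the y-axis, which is exactly what the Vessel class does with its cs[] array:

int detail = 24;                                  // number of radial copies
float[] profileX = {0, 40, 50, 50, 35, 0};        // made-up 2D profile (x)
float[] profileY = {-60, -60, -40, 40, 60, 60};   // made-up 2D profile (y)

void setup(){
  size(400, 400, P3D);
  stroke(255);
  noFill();
}

void draw(){
  background(0);
  translate(width/2, height/2);
  rotateX(-PI/8);
  rotateY(frameCount*PI/200);
  float step = TWO_PI/detail;
  for (int i=0; i<detail; i++){
    float ang = step*i;
    beginShape();
    for (int j=0; j<profileX.length; j++){
      // sweep each profile point around the y-axis
      vertex(cos(ang)*profileX[j], profileY[j], sin(ang)*profileX[j]);
    }
    endShape();
  }
}

Connecting neighboring copies with triangles, rather than just drawing the profile curves, is the extra step the Vessel class handles in setTriangles().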
Create a new tab named Vessel.java in the current sketch, and enter the following code (don't panic; this code looks much more complicated than it really is):

import processing.core.*;

public class Vessel implements IGShape3D{
  public float w;
  public float h;
  public float thickness;
  public int detail;
  public float cornerRadius;
  public int cornerSteps;
  // cross section vertices
  public Vector3D[]cs;
  // all vertices
  public Vector3D[]v;
public Triangle3D[]t; public PApplet p = new PApplet();
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T // constructor without detail param public Vessel(float w, float h){ // call main constructor this(w, h, w/20, 30); } // main constructor public Vessel(float w, float h, float thickness, int detail){ this.w = w; this.h = h; this.thickness = thickness; this.detail = detail; cornerRadius = thickness*3; cs = new Vector3D[37]; v = new Vector3D[cs.length*detail]; t = new Triangle3D[((cs.length-1)*2)*detail]; setCrossSection(); setVertices(); setTriangles(); } //getter/setters public float getWidth(){ return w; } public void setWidth(float w){ this.w = w; } public float getHeight(){ return h; } public void setHeight(float h){ this.h = h; } public void setCrossSection(){ float px=0, py=0, px2=0, py2=0, ang=90; float cornerSteps = ((w+h)/2)/10; float cornerRadius = thickness*3; int index = 0; cs[index++] = new Vector3D(0, h/2, 0); cs[index++] = new Vector3D(-w/2+cornerRadius, h/2, 0); for (int i=0; i<16; i++){ px = -w/2+cornerRadius+p.cos(PApplet.radians(ang))*-cornerRadius; py = h/2-cornerRadius+p.sin(PApplet.radians(ang))*cornerRadius; cs[index++] = new Vector3D(px, py, 0); ang-=45/16; } cs[index++] = new Vector3D(px, -h/2, 0);
3 D R E N D E R I N G I N J AVA M O D E ang = 90; for (int i=0; i<8; i++){ px2 = px+p.cos(PApplet.radians(ang))*thickness; py2 = -h/2+p.sin(PApplet.radians(ang))*-thickness; cs[index++] = new Vector3D(px2, py2, 0); ang-=135/8; } py2 = h/2-(cornerRadius+thickness); cs[index++] = new Vector3D(px2, py2, 0); ang = 180; for (int i=0; i<8; i++){ px = px2+cornerRadius+p.cos(PApplet.radians(ang))*cornerRadius; py = py2+p.sin(PApplet.radians(ang))*cornerRadius; cs[index++] = new Vector3D(px, py, 0); ang-=75/8; } cs[index++] = new Vector3D(0, py, 0); } //required implemented methods public void setVertices(){ float px = 0, pz = 0; float ang = 0; int index = 0; for (int i=0; i<detail; i++){ for (int j=0; j<cs.length; j++){ pz = p.cos(PApplet.radians(ang))*cs[j].z - ➥ p.sin(PApplet.radians(ang))*cs[j].x; px = p.sin(PApplet.radians(ang))*cs[j].z + ➥ p.cos(PApplet.radians(ang))*cs[j].x; v[index] = new Vector3D(px, cs[j].y, pz); index++; } ang+=360/detail; } } public Vector3D[] getVertices() { return v; } public void setTriangles(){ int index = 0; int len = cs.length; for (int i=0; i<detail; i++){ for (int j=0; j<len-1; j++){ if (i<detail-1){ t[index++] = new Triangle3D(v[j+(i*len)], ➥
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T v[j+((i+1)*len)], v[(j+1)+((i+1)*len)]); t[index++] = new Triangle3D(v[j+(i*len)], ➥ v[(j+1)+((i+1)*len)], v[(j+1)+(i*len)]); } else { t[index++] = new Triangle3D(v[j], v[j+(i*len)], ➥ v[(j+1)+(i*len)]); t[index++] = new Triangle3D(v[j],v[(j+1)+(i*len)], v[j+1]); } } } } public Triangle3D[] getTriangles(){ return t; } } Before discussing the vessel code (which I know probably looks like a big headache), let’s actually render the vessel (shown in Figure 14-11). Here’s the code to enter into the leftmost main tab of the current sketch, which should replace whatever code’s currently there: public class MyController extends PApplet { Vessel v; IG3D i; void setup(){ size(400, 400, P3D); v = new Vessel(150, 200, 10, 60); i = new IG3D(this); noStroke(); } void draw(){ background(100); lights(); fill(150); translate(width/2, height/2); rotateY(frameCount*PI/100); rotateX(-PI/2.5); i.render(v); } }
Figure 14-11. Vessel sketch
Hopefully, you noticed in the Vessel object rendering that there is both an outer and an inner surface. In addition, the bottom of the vessel, as well as its top lip, are both slightly curved. These facets of the object are what cause a certain degree of complexity. To generate the curved details, I relied on trig functions, not unlike what you've used throughout the book to render polygons (standard unit circle relationships); only here I rendered arcs, instead of closed shapes, that connect to the straight wall sections of the vessel in the initial cross section. The real challenge in this problem was not actually the lathing, but rather plotting the initial cross section itself. In the setCrossSection() method, I filled the cs[] array with the vertices making up the 2D cross-section shape. Again, I used a series of trig functions and straight line calls to create it. Once I had the cross section, the setVertices() method copied the cross-section points radially around the y-axis, creating, in a sense, a point cloud in the shape of the vessel. Finally, the setTriangles() method organized the vertices into triangles, filling the Triangle3D array, which the IG3D class eventually rendered. The thickness and detail properties specify the thickness of the vessel wall and the number of lathe segments radially copied around the y-axis, respectively.

As usual, the best way to understand precisely how the plotting code works is to mess with it. You can actually make a fairly wide range of shapes beyond your standard cup. For example, to generate a shallow tray with beveled edges (shown in Figure 14-12), replace the previous vessel instantiation statement, in the main tab:

v = new Vessel(150, 200, 10, 60);
with the following: v = new Vessel(250, 20, 5, 4);
Figure 14-12. Shallow Tray sketch
Before we move on, here’s the pastoral dining scene I promised (shown in Figure 14-13). It’s chock-full of good code, all stuff we’ve looked at before. I definitely recommend playing around with the sketch and adding your own objects to the scene. To run this sketch, leave all the tabs intact within the current sketch. The required classes you need are Cube.java, IG3D.java, IGShape3D.java, Rectangle3D.java, Table.java, Triangle3D.java, Vector3D.java, and Vessel.java. If you’ve been following along, it should just be a matter of putting the following code in the leftmost main tab (replacing whatever code is currently there): public class MyController extends PApplet { //room dimensions float rmWidth = 400; float rmHeigt = 200; float rmDepth = 400; //objects Rectangle3D ground, rWall, lWall, bWall; Table table; Table chair; Cube chairBack; Vessel sugarBowl; Cube sugar; Vessel glass; // for sugar cube pile int sugarCount = 80; Vector3D[]sugarJitter = new Vector3D[sugarCount];
//colors
color wallCol = color(245, 250, 225);
color groundCol = color(50, 25, 30);
color tableCol = color(150, 75, 30);
color chairCol = color(140, 190, 100);
color sugarBowlCol = color(160, 160, 190);
color sugarCol = color(240, 240, 240);
color glassCol = color(100, 10, 40, 90);
//controls zooming
float camDistZ = 368, dz = 0, ang=0;
IG3D i;

void setup(){
  size(400, 400, P3D);
  //instantiate room objects
  //room
  ground = new Rectangle3D(rmWidth, rmDepth);
  rWall = new Rectangle3D(rmDepth, rmHeigt);
  lWall = new Rectangle3D(rmDepth, rmHeigt);
  bWall = new Rectangle3D(rmWidth, rmHeigt);
  //table
  table = new Table(50, 16, 30);
  //seat
  chair = new Table(10, 10, 10);
  chairBack = new Cube(10, 14, 1);
  //sugar bowl
  sugarBowl = new Vessel(8, 3, .5, 6);
  sugar = new Cube(.6, .6, .6);
  for (int i=0; i<sugarCount; i++){
    sugarJitter[i] = new Vector3D(random(-1.8, 1.8), random(-.85, .85), random(-1.8, 1.8));
  }
  //drinking glass
  glass = new Vessel(2.5, 4, .25, 20);
  // renderer
  i = new IG3D(this);
  noStroke();
}

void draw(){
  background(100);
  lights();
  fill(255);
  dz = abs(cos(radians(ang+=.2))*camDistZ);
  translate(width/2, height/18, max(dz, 220));
  rotateX(-PI/16);
  rotateY(frameCount*PI/150);
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T //ground pushMatrix(); translate(0, rmHeigt); rotateX(PI/2); fill(groundCol); i. render (ground); popMatrix(); //right wall pushMatrix(); translate(rmWidth/2, rmHeigt/2, 0); rotateY(PI/2); fill(wallCol); i. render (rWall); popMatrix(); //left wall pushMatrix(); translate(-rmWidth/2, rmHeigt/2, 0); rotateY(-PI/2); fill(wallCol); i. render (lWall); popMatrix(); //back wall pushMatrix(); translate(0, rmHeigt/2, -rmDepth/2); fill(wallCol); i. render (bWall); popMatrix(); //table pushMatrix(); translate(0, rmHeigt-table.getHeight()/2, 0); rotateY(PI/4); fill(tableCol); i. render (table); popMatrix(); //4 chairs float px=0, pz=0, angle=90; rotateY(PI/4); for (int j=0; j<4; j++){ pushMatrix(); px = cos(radians(angle))*table.getWidth()/2; pz = sin(radians(angle))*table.getDepth()/2; translate(px, rmHeigt-chair.getHeight()/2, pz); if (j>0){ rotateY(-PI/2*j);
  }
  fill(chairCol);
  i.render(chair);
  translate(0, -chairBack.getHeight()/2-chair.getHeight()/2, chair.getWidth()/2);
  i.render(chairBack);
  angle+=90;
  popMatrix();
}

//vessels
//sugar bowl
pushMatrix();
translate(0, rmHeigt-table.getHeight(), 0);
rotateY(PI/4);
fill(sugarBowlCol);
i.render(sugarBowl);
fill(sugarCol);
for (int j=0; j<sugarCount; j++){
  //resetMatrix();
  pushMatrix();
  translate(sugarJitter[j].x, -sugarBowl.getHeight()/2+sugarJitter[j].y, sugarJitter[j].z);
  rotateX(radians(sugarJitter[j].x*20));
  rotateY(radians(sugarJitter[j].y*30));
  rotateZ(radians(sugarJitter[j].z*40));
  i.render(sugar);
  popMatrix();
}
popMatrix();
//4 glasses px=0; pz=0; angle=45; rotateY(PI/4); for (int j=0; j<4; j++){ pushMatrix(); px = cos(radians(angle))*(table.getWidth()/4); pz = sin(radians(angle))*(table.getDepth()/3.75); translate(px, rmHeigt-table.getHeight()-glass.getHeight()/2, pz); fill(glassCol); i.render(glass); angle+=90; popMatrix(); }
} }
Figure 14-13. Pastoral Dining Scene sketch
I suggest trying to add to the scene a little before moving on. It might be interesting to try to add a second room, or even create another class that encapsulates the entire table setting code, allowing you to build an entire dining hall. Whatever you decide, at least play with the sketch some before moving on.
The virtual camera Well, we’ve covered a lot of ground. But what I haven’t discussed is how the 3D rendering process actually occurs on your very 2D monitors. That, of course, is the great irony of 3D graphics—you do all this work to calculate a third dimension, only to have to do even more calculations to convert it all back to two dimensions. One of the structures you’ll code to help you understand the 3D-to-2D conversion is a virtual camera. Most 3D systems employ some type of virtual camera. In a 3D modeling and animation package, you usually have a camera icon that you can drag around, as well as view and render the world through. In Processing, using P3D, you don’t actually see the camera, but it is in a sense what you view the world through. The actual conversion, in terms of the math, from 3D to 2D is relatively simple. However, trying to account for 3D information in 2D space introduces some interesting, and also challenging, problems—a few of which we’ll look at (and even solve). First though, let’s look at a visual model of a virtual camera (shown in Figure 14-14).
Figure 14-14. Virtual camera model
The figure shows a basic camera model commonly used in 3D. The camera faces into the computer screen, along the negative z-axis. Although this setup is common in 3D, there are other 3D engines and libraries that reverse the z-axis, having it increase as you move into the screen. Some other systems even have the y- and z-axes switched. In Processing, as you surely know by now, the x-axis increases to the right, the y-axis increases downward, and the z-axis increases toward you. Processing follows what's referred to as a right-handed coordinate system, as opposed to a left-handed coordinate system. These designations are a little confusing, but are very commonly used in 3D. Here's one way to illustrate how it works: with the palms of both of your hands facing up, point your fingers in the direction of the positive x-axis (to the right; yeah, I know it's not very comfortable on the wrists). Now look at the direction your thumbs are pointing. On your right hand, your thumb should be pointing toward your chest, which represents the direction of the positive z-axis in a right-handed coordinate system. A left-handed system is the opposite, with the positive z-axis pointing away from you.

The view plane in Figure 14-14 represents the computer screen, where the 3D object data eventually gets flattened (2D projected). The camera is situated along the positive z-axis in regard to the view plane. The overall camera view, represented by the shaded pyramidal shape, is referred to as the view frustum. Near the camera, along the negative z-axis (relative to the camera) is the front clipping plane. You can also consider the right edge of the frustum (the bottom base of the pyramid) and the remaining sides of the frustum as clipping planes as well. Clipping planes set rendering boundaries; only what's inside the
frustum needs to be rendered. This is usually done for efficiency, as 3D rendering eats up lots of resources; why render stuff that's out of range? In addition, as geometry gets too close to the camera, some strange stuff can happen, which the front clipping plane can prevent. If you previewed the pastoral dining scene, you may have noticed a cup appearing in the scene, as if it grew out of the table; this was due to the table being clipped out of the scene as it passed some rendering threshold with regard to its distance to the virtual camera.

Well, enough theory; let's start building a simple camera. To begin, let's do a couple of renders using Processing's P3D camera, to sort of reverse-engineer our own; it's good to stand on the shoulders of giants. Using the current sketch, with all the custom classes in their separate tabs, enter the following code into the leftmost main tab, replacing whatever code is currently there. You might want to do a save-as first. Figure 14-15 shows the output of the first render.

public class MyController extends PApplet {
  IG3D i;
  Table t;

  void setup(){
    size(400, 400, P3D);
    background(0);
    lights();
    noStroke();
    fill(126, 63, 20);
    translate(width/2, height/2);
    rotateX(PI/-4);
    rotateY(PI/8);
    i = new IG3D(this);
    t = new Table(200, 140, 100);
    i.render(t);
  }
}
Figure 14-15. Partially Rotated Table sketch
When you run the sketch, you should see a partially rotated table. Even though you can't see it, there is a virtual camera model in Processing, built into the renderer. For example, add the line println(g.camera); to the bottom of your setup() function, and rerun the sketch. You precede camera with g. because camera is a property of the PGraphics class, one of Processing's core classes that is internally instantiated when you start a sketch. This PGraphics instance is named g in Processing and is publicly accessible from within your sketch. When you run the sketch with the println(g.camera); line, you should see the following output: processing.core.PMatrix@11a775. The output tells you that the camera object is of type PMatrix, which lives in the processing.core package. The stuff after the @ sign just identifies that particular object instance (a hexadecimal hash code), and you can safely ignore it.

Processing's PMatrix class is used to construct a matrix, which is just a structure for holding a series of values. You can think of a matrix as a table structure, with rows and columns. Although the actual structure of a matrix is not that complicated, mathematical operations involving matrices can be very complicated and are beyond the scope of this book. However, matrix math is fundamental to more advanced 3D coding, so if the 3D bug really bites, I recommend learning more about matrices. Here's a link to more on matrices: www.sacredsoftware.net/tutorials/Matrices/Matrices.xhtml. You'll also learn a lot about them in a class on linear algebra. Although we're not going to deal much with matrices in this book, it's important to understand why they are significant.
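To make the row-and-column idea concrete, here's a tiny, hand-rolled example of what a 4 x 4 transformation matrix does to a point. This isn't Processing's PMatrix API, just the underlying arithmetic, using the default camera values you'll see printed in a moment:

// a 4 x 4 matrix and a point in homogeneous coordinates (x, y, z, 1)
float[][] m = {
  {1, 0, 0, -200},
  {0, 1, 0, -200},
  {0, 0, 1, -346.4102f},
  {0, 0, 0, 1}
};
float[] pt = {100, 50, 0, 1};
float[] out = new float[4];
for (int row=0; row<4; row++){
  for (int col=0; col<4; col++){
    out[row] += m[row][col]*pt[col];
  }
}
// out is now {-100, -150, -346.4102, 1}

The first three columns here form an identity block, so they leave the point alone; the fourth column shifts everything by (-200, -200, -346.4102), which is how the default camera expresses "move the world in front of the eye."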
Multiple coordinate systems When you built your 3D objects earlier in the chapter, you constructed them around the origin (0, 0, 0). Thus, at birth, the objects lived centered around the top-left corner of the screen, which isn’t terribly useful. I’ve been using Processing’s translate() function to move the objects where I want them to live. Each object is made up of a series of vertices, so to move the object, the translate() function needs to internally move each of these vertices the same distance. Of course, moving the object doesn’t change the relationships between each of the object’s vertices, which I discussed earlier when I created the Vector3D class. However, the actual values of each of the vertices will of course change when you move the object. You can think about each object as having its own local internal coordinate space, which also lives in another larger world coordinate space. You can also think about the virtual camera and even the screen itself as having other separate coordinate spaces. Hopefully, you’re beginning to see the complexity of trying to keep all these separate coordinate spaces straight, especially when you have lots of objects, each with lots of vertices. Matrices help keep all this stuff organized and allow you to more efficiently do actual calculations translating between these different coordinate spaces. Add the following line of code to the bottom of the setup() function in the current sketch:
printCamera();

This command will output the following:

001.0000  000.0000  000.0000  -200.0000
000.0000  001.0000  000.0000  -200.0000
000.0000  000.0000  001.0000  -346.4102
000.0000  000.0000  000.0000  001.0000
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T That’s the output of the camera’s current matrix. We won’t worry about what all the numbers mean, except for the right column. If you look at the sketch code, you might be able to intuit where these values come from. Change the size(400, 400, P3D) function call to size(600, 500, P3D); and rerun the sketch. The fourth column of numbers now should be different. Try a couple of different size values, such as (600, 200, P3D). Hopefully, you noticed that not only was the fourth column of numbers changing, but so was the image. In fact, if you tried the arguments (600, 200, P3D), you probably saw a pretty warped table (shown in Figure 14-16). Why? I’ll get to this answer shortly.
Figure 14-16. Vector Diagram sketch
Let's go back to using size(400, 400, P3D);, and right beneath the background(0); command in the setup() function add the following line:

camera(width/2.0, height/2.0, (height/2.0) / tan(PI*60.0 / 360.0), width/2.0, height/2.0, 0, 0, 1, 0);

When you rerun the sketch, it should look as it did before you started messing with the size() call values. The new camera() call you added allows you to explicitly position and aim the camera. The values I used as arguments in the call are the defaults published in the Processing reference. The first three arguments set the camera's eye position (where the camera sits), the next three set the point the camera looks at (the center of the scene), and the last three control which axis is pointing up (the one with a value of 1). You may have also noticed that the printCamera() output returned to its original values. I recommend playing with these settings to better understand their effect, and also checking the printCamera() output to see how the actual values change. I've created a simple interactive version that illustrates some of the changes in real time. Moving the mouse to the left and right will change the first argument, moving up and down will change the second argument, dragging left and right will change the fourth argument, dragging up and down will change the fifth, and pressing the up and down arrows will change the third. Finally, pressing the x, y, or z keys will toggle the seventh, eighth, or ninth arguments to 1.0, respectively, while the other two will be reset to 0 (e.g., if you press y, the eighth argument will be 1.0 while the seventh and ninth arguments will be reset to 0). As usual, this code should replace any code in the leftmost main tab. Please leave all the additional tabs, with their respective classes, intact.
3 D R E N D E R I N G I N J AVA M O D E public class MyController extends PApplet { IG3D i; Table t; float eyeX, eyeY, angle, cntrX, cntrY, axisX, axisY, axisZ; void setup(){ size(400, 400, P3D); noStroke(); fill(126, 63, 20); // initialize camera arguments eyeX = width/2; eyeY = height/2; cntrX = width/2; cntrY = height/2; angle = 60; axisY = 1.0; i = new IG3D(this); t = new Table(200, 140, 100); } void draw(){ background(0); camera(eyeX, eyeY, (height/2.0) / tan(PI*angle / 360.0), cntrX, â&#x17E;Ľ cntrY, 0, axisX, axisY, axisZ); lights(); translate(width/2, height/2); rotateX(PI/-4); rotateY(PI/8); i.render(t); printCamera(); } void mouseMoved(){ eyeX = mouseX; eyeY = mouseY; } void mouseDragged(){ cntrX = mouseX; cntrY = mouseY; } void keyPressed() { if(key == CODED) { if (keyCode == UP) { //zoom in angle++; } else if (keyCode == DOWN) { //zoom out
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T angle--; } } else { // set which axis points up if (key == 'x') { axisX = 1.0; axisY = 0; axisZ = 0; } else if (key == 'y') { axisX = 0; axisY = 1.0; axisZ = 0; } else if (key == 'z'){ axisX = 0; axisY = 0; axisZ = 1.0; } } } } Hopefully, playing with this sketch gives you a better sense of how a basic virtual camera model works. You should be able to move and pivot the camera, as well as specify the vertical axis. However, there is still one other very significant feature of the camera model that I haven’t fully addressed yet, and it relates to why the table got warped earlier when I changed the ratio of the screen size.
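If you'd rather experiment with fixed values than the interactive controls, here are a couple of one-off camera() calls you could substitute for the one in draw(); the numbers are arbitrary, picked only to make the change obvious:

// raise the eye well above the scene, still aiming at the center
camera(width/2.0, -400, (height/2.0) / tan(PI*60.0/360.0),
       width/2.0, height/2.0, 0, 0, 1, 0);

// or slide the eye off to the right for a three-quarter view
camera(width/2.0 + 300, height/2.0, (height/2.0) / tan(PI*60.0/360.0),
       width/2.0, height/2.0, 0, 0, 1, 0);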
Projection In flattening 3D coordinates onto a 2D surface, you have some choices about how to account for the values along the z-axis. Mathematically speaking, when you simply remove the third dimension (the z components of the vertices), vertices a mile away or a micron away will still have the same x and y component values and thus will occupy the same place in 2D space. This may not sound like such a big deal, but it’s not at all how you perceive the physical world. Because of our binocular vision, and of course the way our brains work, visual data appears to change dramatically based on its relative distance along the real-world z-axis to our eyes. This phenomenon is often exploited by tourists capturing photos of themselves standing in seemingly impossible poses—supporting the leaning tower of Pisa with their outstretched arms being one of the most famous examples. Objects appear to decrease in size with distance; lines perpendicular to our eyes moving into the distance (e.g., railroad tracks) seem to converge to a point; and value, chroma, and texture properties all lose intensity and detail with distance. When you try to model 3D space with math and code, you need to build in this human filtering factor, causing things like perspectival distortion. Processing has a perspective() function that lets you explicitly change how the perspective mapping is calculated (commonly referred to as
projection), allowing you to simulate both a natural perspective and also more exaggerated (and of course interesting) effects. The next sketch (shown in Figures 14-17 through 14-19) interactively illustrates three types of projection: parallel, perspective, and exaggerated perspective (e.g., fish-eye lens). Parallel projection eliminates perspectival distortion, including scale shifts due to changes in distance. An orthographic projection is a type of parallel projection, where a 3D object is displayed in six straight-on 2D views. Orthographic views are used in architectural, engineering, and other modeling applications, where precision is required. In most 3D applications, it's possible to work interchangeably with both orthographic and perspective projections, and very common to even use a split screen, where both projections are viewed simultaneously. In the sketch, the ortho() function simply eliminates any perspectival distortion, collapsing the z-axis coordinates onto a common 2D plane. While running the sketch, press the o, p, or w keys to interactively view the cube in an orthographic, perspective, or wide-angle perspective projection. The following code should be entered into the main sketch tab, replacing whatever is currently there (of course, all the existing class code within the tabs should remain):

public class MyController extends PApplet {
  IG3D i;
  Cube c;
  float translateX1, translateX2, translateZ;

  void setup(){
    size(400, 400, P3D);
    noStroke();
    fill(150, 150, 200);
    i = new IG3D(this);
    c = new Cube(50, 50, 50);
    translateX1 = width/2-160;
    translateX2 = width/2+90;
    translateZ = 150;
  }

  void draw(){
    background(0);
    translate(0, 0, translateZ);
    lights();
    pushMatrix();
    translate(translateX1, height/2, -450);
    rotateX(frameCount*PI/150);
    rotateY(frameCount*PI/160);
    i.render(c);
    popMatrix();
pushMatrix(); translate(translateX2, height/2, -150); rotateX(frameCount*PI/150);
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T rotateY(frameCount*PI/160); i.render(c); popMatrix(); } void keyPressed(){ if (key == 'o'){ translateX1 = width/2-160; translateX2 = width/2+160; translateZ = 150; ortho(-width, width, -height, height, -10, 10); } else if (key == 'p'){ translateX1 = width/2-160; translateX2 = width/2+90; translateZ = 150; float fov = PI/3.0; float camZ = (width/2.0) / tan(radians(fov/2)); perspective(fov, 1.0, camZ/10, camZ*10); } else if (key == 'w'){ translateX1 = width/2-300; translateX2 = width/2+30; translateZ = 430; float fov = PI/1.5; float camZ = (width/2.0) / tan(radians(fov/2)); perspective(fov, 1.0, camZ/10, camZ*10); } } }
Figure 14-17. Perspective Projection sketch
Figure 14-18. Parallel Projection sketch
Figure 14-19. Extreme Perspective Projection sketch
The two rotating cubes have the same dimensions. The initial size difference when the sketch launches is due to the default perspective settings in the P3D renderer. Since the two cubes are translated different amounts along the z-axis (-450 and -150, respectively), the default perspective projection makes them appear different sizes, with the cube deeper in space appearing smaller. When the o key is pressed, the ortho() call eliminates this scale shift, showing both cubes at their true sizes. The six arguments in the ortho() call control the clipping volume, which is best understood by trying some different values in the function call. As of this writing, the last two arguments in the ortho() function for controlling the minimum and maximum clipping volume along the z-axis have no visible effect on the rendering, as long as these two values are not identical (if they are, you get no output). The p key resets the perspective projection to Processing's default values.
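To put rough numbers on that scale shift: the perspective formula you'll build later in the chapter divides the camera's distance to the view plane by that distance minus a vertex's z value. Taking the default view distance of about 346.4 (for a 400-pixel window), treating the two quoted depths as distances behind the view plane, and ignoring the sketch's extra translate(0, 0, translateZ) shift, you get something like this:

float viewDistance = 346.4102f;                          // default for a 400-pixel window
float nearScale = viewDistance / (viewDistance + 150);   // ~0.70 for the cube at z = -150
float farScale  = viewDistance / (viewDistance + 450);   // ~0.43 for the cube at z = -450
// under ortho(), both cubes are effectively drawn at a scale of 1.0,
// which is why the size difference disappears

These are ballpark figures; P3D's actual projection involves the full frustum, but the proportional idea is the same.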
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T The fov (field of view) variable relates to the angle of view. The default value PI/3, in radians, is equivalent to 60 degrees. If you’ve ever shopped for a real lens for a camera, you may have come across wide-angle, telephoto, and zoom lenses, with focal lengths specified in millimeters (mm). A standard lens will have a focal length of around 50 mm, a wideangle lens might have a focal length below 35 mm, a telephoto lens might have a focal length at 200 mm, and a zoom lens might specify a range between 28 and 70 mm. These different types of lenses have very different fields or angles of view. A very wide-angle lens, such as a fish-eye lens, can have a FOV of 180 degrees, while the longest telephoto lens could have a FOV of 2 degrees. As the FOV reaches these extremes, severe distortion becomes a big issue. In the sketch, pressing the w key creates a very wide-angle view (120 degrees), revealing the effects of this type of distortion. The camZ variable uses the calculation (width / 2.0) / tan(radians(fov / 2), which you’ll look at again when you build your own camera. The expression calculates the camera’s distance to the view plane by using the FOV (angle of vision) and the width of the Processing window. You can form a triangle based on these values and apply some basic trig to determine the camera’s distance along the z-axis. Figure 14-20 illustrates the relationship of FOV to camera distance.
Figure 14-20. Calculating camera distance with FOV
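Here's the calculation with the default numbers plugged in; note that the result is the same 346.41 that showed up in the fourth column of the default camera matrix printed earlier:

// inside a 400 x 400 sketch, with the default 60-degree field of view
float fov = radians(60);                          // PI/3
float viewDistance = (width/2.0f) / tan(fov/2);   // 200 / tan(30 degrees)
// 200 / 0.5774 = 346.41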
If this sounds confusing, don’t worry about it; it will be much easier to understand when you play with your custom camera shortly. Finally, the call perspective(fov, 1.0, camZ/10, camZ*10); uses the FOV, which I’ve discussed; an aspect ratio (second argument), relating to the proportion of the Processing window; and the near and far clipping
3 D R E N D E R I N G I N J AVA M O D E plane values (the third and fourth arguments, respectively). I used the default values specified in the Processing reference, but these last two arguments (as of this writing) just need to have positive values, with the fourth argument being larger than the third; if you reverse this order, you’ll get a strange rendering of what looks like the inside of the object instead of its exterior. I’ll discuss what can cause this odd inversion when I cover surface normals. To actually build your own shiny new virtual camera, you’ll need to work inside of the existing IG3D class, which hopefully is still happily living in its tab in your current sketch. As I mentioned earlier in the chapter (and in spite of the theory), it’s really not difficult building a simple virtual camera. The main thing you need to do is to account for the 2D projection of the 3D components of each vertex. However, if you forget about the z components for a minute, the camera becomes simpler still. In the IG3D class, within the render(IGShape3D shape) method, simply remove the last argument in the call: p.vertex(t[i].v[j].x, t[i].v[j].y, t[i].v[j].z); which should now read p.vertex(t[i].v[j].x, t[i].v[j].y); In the main sketch tab, replace whatever code is there with the following code (you may want to do a save-as first): public class MyController extends PApplet { IG3D i; Cube c; void setup(){ size(400, 400); noStroke(); fill(127, 255, 0); i = new IG3D(this); c = new Cube(100, 100, 100); } void draw(){ background(0); translate(width/2, height/2); i.render(c); } } Congratulations, you just created your own 3D virtual camera! Of course, in its current state, it’s pretty useless, but you’re able to render 3D data onto a 2D display—well, sort of. In actuality, you’re just ignoring the 3D components of each of the vertices. Since you removed P3D from the size() call, you also no longer have the benefit of using Processing’s 3D rotation, translation, or vertex functions. If you try running this sketch, you’ll just get a centered square, but at least it’s a very lovely chartreuse color. Before adding more to the camera, let’s add some 3D rotation capabilities to the sketch.
3D rotations revisited Last chapter, you learned how to code your own 3D rotations. The expressions you’ll use again to rotate around each of the axes are as follows: For rotation around the x-axis: y' = cos(θ) ✕ y – sin(θ) ✕ z; z' = sin(θ) ✕ y + cos(θ) ✕ z; For rotation around the y-axis: z' = cos(θ) ✕ z – sin(θ) ✕ x; x' = sin(θ) ✕ z + cos(θ) ✕ x; For rotation around the z-axis: x' = cos(θ) ✕ x – sin(θ) ✕ y; y' = sin(θ) ✕ x + cos(θ) ✕ y; These expressions, as opposed to their simpler forms (e.g., y = sin(θ) and x = cos(θ)), allow you to combine multiple rotations around all three axes. Since the rotations will act on the object vertices, you’ll encapsulate the rotations within the Vector3D class. Here’s the setRotation() method, to be added to the Vector3D class, anywhere beneath the constructors and above the closing curly brace: // rotation public void setRotation(float sinVals[], float cosVals[]){ reset(); float tempX = 0, tempY = 0, tempZ = 0; //x-axis tempY = cosVals[0] * y - sinVals[0] * z; tempZ = sinVals[0] * y + cosVals[0] * z; y = tempY; z = tempZ; //y-axis tempZ = cosVals[1] * z - sinVals[1] * x; tempX = sinVals[1] * z + cosVals[1] * x; z = tempZ; x = tempX; //z-axis tempX = cosVals[2] * x - sinVals[2] * y; tempY = sinVals[2] * x + cosVals[2] * y; x = tempX; y = tempY; } I’ve taken the liberty of giving you a more optimized version of the 3D rotations than we looked at last chapter, which might squeeze off a couple nanoseconds of rendering time
3 D R E N D E R I N G I N J AVA M O D E (thank me later). Initially, the setRotation() method calls the reset() method, which resets the x, y, and z components to their original values, prior to any rotation. I did this to prevent rotation values from accumulating. Without calling reset(), it wouldn’t be possible, when calling a rotation method from the draw() function, to rotate an object without it continually spinning. For example, say you wanted to have a table stay rotated 35 degrees around the x-axis, but then you also wanted to continually spin the table around the y-axis. You’d send a single value of 35 degrees (converted to radians of course) to the rotX() method and then a value like frameCount (which keeps increasing every frame) to the rotY() method; for this to work, you need to call reset() prior to rotating the vertices each frame. The rotation method expects arrays of sine and cosine values, which will be precomputed in the IG3D class. Huh? This is actually a very common optimization that you looked at in Chapter 11. Each call to a trig function takes time, and each of the vertices will be rotated the same amount each frame. If we put the trig calls directly within the rotation method, then the same sin() and cos() calculations will be performed for every individual vertex in each frame, which is a big waste of processing time. Instead, it’s more efficient to first calculate the sin() and cos() of the rotation angle for each frame, and then pass the precomputed results to the rotation method. To be able to do this, you need to add a couple properties and methods to the IG3D class, which will take care of the trig precomputing. In the IG3D class, add the following code right below the private PApplet p; declaration, near the top of the class: // arrays for precomputed trig functions private float sinVals[] = {0, 0, 0}; private float cosVals[] = {1, 1, 1}; I filled the two arrays with initial values based on 0 degrees of rotation. The reason the values are different for the two arrays is because sin(0) is equal to 0, but cos(0) is equal to 1. Next are the three new methods to precompute the trig values, which should be added to the IG3D class, anywhere beneath the closing curly brace of the IG3D constructor (and of course above the closing curly brace of the class): //precompute trig functions for rotations public void rotX(float ang){ sinVals[0] = p.sin(p.radians(ang)); cosVals[0] = p.cos(p.radians(ang)); } public void rotY(float ang){ sinVals[1] = p.sin(p.radians(ang)); cosVals[1] = p.cos(p.radians(ang)); } public void rotZ(float ang){ sinVals[2] = p.sin(p.radians(ang)); cosVals[2] = p.cos(p.radians(ang)); }
The rotX(), rotY(), and rotZ() methods simply do the precomputing of the trig values for each axis. You still need to apply these values to the actual rotation of the object’s
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T vertices. I’ll add an additional rotXYZ() method to the IG3D class that will take care of this. In this method, each of the vertices stored in the Triangle3D faces will call the setRotation() method in the Vector3D class, passing in the precomputed trig values— remember each of the cube’s vertices is a Vector3D object. Here’s the rotXYZ() method to add below the closing curly brace of the rotZ() method just added in the IG3D class: private void rotXYZ(Triangle3D[]t){ for (int i=0; i<t.length; i++){ for (int j=0; j<3; j++){ t[i].v[j].setRotation(sinVals, cosVals); } } } There is one final step before you can try out the new custom rotation methods. You need to actually call the rotXYZ() method, which you’ll do from inside the render(IGShape3D shape) method. Also, notice that I declared the rotXYZ() method with the private modifier. Up until this point, you’ve been making all your methods public. The private modifier is used to enforce proper class use. The rotXYZ() method is not intended to be called outside of the IG3D class; rather, it will only be called internally by the render(IGShape3D shape) method. By making it private, this rule is enforced. In the render(IGShape3D shape) method, below the line Triangle3D[]t = shape. getTriangles();, add the following line: rotXYZ(t); Since there were a lot of changes to the IG3D class, I’ve included the revised class here. I recommend taking a moment and making sure your version matches. // revised IG3D class import processing.core.*; public class IG3D{ private PApplet p; // arrays for precomputed trig functions private float sinVals[] = {0, 0, 0}; private float cosVals[] = {1, 1, 1}; public IG3D(PApplet p){ this.p = p; } //precompute trig functions for rotations public void rotX(float ang){ sinVals[0] = p.sin(p.radians(ang)); cosVals[0] = p.cos(p.radians(ang)); }
3 D R E N D E R I N G I N J AVA M O D E public void rotY(float ang){ sinVals[1] = p.sin(p.radians(ang)); cosVals[1] = p.cos(p.radians(ang)); } public void rotZ(float ang){ sinVals[2] = p.sin(p.radians(ang)); cosVals[2] = p.cos(p.radians(ang)); } private void rotXYZ(Triangle3D[]t){ for (int i=0; i<t.length; i++){ for (int j=0; j<3; j++){ t[i].v[j].setRotation(sinVals, cosVals); } } } public void render(IGShape3D shape){ Triangle3D[]t = shape.getTriangles(); rotXYZ(t); for (int i=0; i<t.length; i++){ p.beginShape(p.TRIANGLES); for (int j=0; j<3; j++){ p.vertex(t[i].v[j].x, t[i].v[j].y); } p.endShape(); } } } Now you can try out your optimized custom rotations (shown in Figure 14-21). In your sketchâ&#x20AC;&#x2122;s main tab, add the following three lines to the draw() function, right above the line i.render(c): i.rotX(frameCount*3); i.rotY(frameCount*2); i.rotZ(frameCount*1.5);
Figure 14-21. Customized 3D Rotation sketch
When you run the sketch, you should get a blobby green form somewhat resembling a rotating cube. Without lighting cues and perspective, there’s no way to see depth. To give some hint about what’s going on, comment out noStroke() in the setup() function, which will reveal the individual triangles making up the cube (shown in Figure 14-22). You can also add Processing’s smooth() function to setup(), which will anti-alias the strokes surrounding the triangles. This is actually one (albeit small) advantage that our renderer has over P3D, which can’t use the smooth() call.
Figure 14-22. Cube with Stroked Triangles sketch
If you look carefully at the spinning cube, with the strokes rendered, you’ll notice something doesn’t look right. It appears that the cube collapses in on itself, or that you’re somehow seeing both the inside and outside at the same time. No, this is not a quantum cube in n-dimensional space, but rather an inherent problem to rendering 3D data in 2D space (bummer, I know). The good news is that you can fix it; the bad news is it’s a little bit of work. However, before you get to patching up the cube, let’s finish building the custom camera and generate a more natural-looking perspective projection.
Calculating perspective Within the IG3D class, you need to add some calculations that, instead of simply disregarding the z components of the object’s vertices, use them to calculate perspective. To accomplish this, you’ll calculate the projectionRatio of viewDistance (the distance between the camera and the view plane) to viewDistance minus the z component of each vertex. The expression in code looks like this (but don’t add it to your sketch quite yet): projectionRatio = viewDistance / (viewDistance - t[i].v[j].z); This relatively simple expression handles all the perspective magic. The reason you subtract the z components from the viewDistance, instead of adding them together, is because you’re building a right-handed coordinate system (similar to Processing’s), where z-axis values decrease as they go into the screen. When you subtract by a negative value, you of course get a positive. Thus, objects further in space will have a larger negative value, and when you subtract these negative values from viewDistance, you’ll actually get a larger value. Dividing the original viewDistance by this increased value, you’ll get a fractional value less than 1. You then multiply the x and y components of the vertex by this projectionRatio value, causing the vertices to be offset, which generates the perspective projection. The code will look like this (but again, don’t add it to the sketch yet): projectedX = t.v[j].x * projectionRatio; projectedY = t.v[j].y * projectionRatio; You could arbitrarily set a value for viewDistance. However, there is a handy expression that generates this value for us, based on an angle of view for the virtual camera and the size of the sketch window. You actually looked at this expression a few pages ago, when I discussed Processing’s camera() function. Here’s the expression in code: float viewDistance = p.width/2 / p.tan(p.radians(viewAngle / 2)); Now, let’s finally implement this within the current sketch. You’re going to add a project() method to the IG3D class that will take care of all the projection conversion math. The method will be called internally by the render(IGShape3D shape) method, just like the rotXYZ() method you created earlier, so you’ll declare the project() method using the private access modifier as well (since it should never be explicitly called outside of the class). The project() method should be added anywhere beneath the closing curly brace of the IG3D class constructor (and above the final closing curly brace of the entire class, of course). private void project(Triangle3D t){ float projectedX = 0, projectedY = 0; float projectionRatio = 0; viewDistance = p.width/2 / p.tan(p.radians(viewAngle/2)); p.beginShape(p.TRIANGLES); for (int j=0; j<3; j++){ // calculate perspective ratio projectionRatio = viewDistance/(viewDistance - t.v[j].z); // 2D perspective projection projectedX = t.v[j].x * projectionRatio;
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T projectedY = t.v[j].y * projectionRatio; p.vertex(projectedX, projectedY); } p.endShape(); } You also need to edit (actually simplify) the existing render(IGShape3D shape) method in the IG3D class, which will now simply coordinate the rendering process and let the project() method handle the actual drawing. You also need to add a call to the project() method. Here’s the updated render(IGShape3D shape) method, which should replace the existing one in the IG3D class: public void render(IGShape3D shape){ Triangle3D[]t = shape.getTriangles(); rotXYZ(t); for (int i=0; i<t.length; i++){ project(t[i]); } } The last things you need to add before you can try the cool new projection feature are viewDistance and viewAngle properties. You’ll eventually need to access the viewDistance value in more than one place in the IG3D class, so you’ll give it global scope by declaring it up at the top of the class; you’ll do the same for the viewAngle property. Beneath the private PApplet p; declaration, near the top of the IG3D class, add the following two lines: private float viewAngle = 60; private float viewDistance; Also go ahead and add a public setViewAngle() method to the IG3D class, which will allow you to change the viewAngle (and thus the camera’s distance to the view plane) from the main sketch tab. As with the project() method, add the setViewAngle() method to the class—anywhere beneath the closing curly brace of the IG3D constructor and above the final closing curly brace of the entire class. public void setViewAngle(float angle){ viewAngle = angle; } Now try rendering the cube again; it should look somewhat more three-dimensional, with some minor perspective distortion. However, using the setViewAngle() method you just added to the IG3D class, you can really increase the perspective distortion (as shown in Figure 14-23). In the main sketch tab, add the following line, right beneath the translate(width/2, height/2); call: i.setViewAngle(130);
Figure 14-23. Cube with Perspective Distortion sketch
If you run the sketch again, the distortion should be much more visible now. As always, I suggest playing with these values some to understand the range of possibilities. Since you didn’t add any clipping, it’s possible to generate some pretty wacky distortion by increasing the value of the viewAngle property further. Besides the viewAngle, you’ll also want to be able to translate the object along the z-axis. However, since you’re not currently using P3D, you can’t use Processing’s three-argument version of its translate(x, y, z) function. You can, of course, still translate along the x- and y-axes using the two-argument version, translate(x, y). You might as well write your own three-argument version to get a better understanding of how the translation works. If you were building a “real” rendering engine, you’d probably want to build a stand-alone Transform class and let the class internally handle all the different possible transformations, including using matrix math to help with conversions between the different coordinate systems (world, object, camera, etc.). However, that is way beyond this book, so you’ll implement a much more down-and-dirty translation.
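If you'd rather see numbers than eyeball the distortion, here's a quick stand-alone check of the perspective math (this is just a scratch sketch for intuition, not part of the IG3D class; the 400-pixel window width is assumed to match the earlier examples):

// scratch sketch: how viewAngle and z affect the projection ratio
float w = 400; // assumed sketch width
float[] viewAngles = {60, 130};
float[] zVals = {0, -100, -500, -1000};
for (int a=0; a<viewAngles.length; a++){
  // same formula project() uses
  float viewDistance = w/2 / tan(radians(viewAngles[a]/2));
  println("viewAngle " + viewAngles[a] + ", viewDistance " + viewDistance);
  for (int i=0; i<zVals.length; i++){
    float ratio = viewDistance/(viewDistance - zVals[i]);
    println("  z = " + zVals[i] + " -> projectionRatio = " + ratio);
  }
}

Vertices at z = 0 keep a ratio of 1.0, and the further a vertex sits down the negative z-axis, the smaller its ratio gets, which is exactly the shrinking-with-distance effect you see on screen. Notice also that the wide 130-degree view angle produces a much shorter viewDistance, so the ratio falls off faster; that's where the exaggerated distortion comes from.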
A brief word of encouragement: If it isn’t obvious, my intention in developing this simple rendering engine is not to replace P3D, but just to give you a deeper understanding of how 3D works. I do realize this material is complicated and not for everyone. On the other hand, if you’re loving this stuff, do feel free to try replacing P3D, but you’ll definitely need some additional books for that.
Translating A real concern when you combine transformations such as translations and rotations is the order of operations. If you move an object away from the origin (0, 0, 0) and then rotate it, the object will still rotate around the origin. The distance you moved the object will act as the radius of rotation—this is like spinning a yo-yo in the air, with the string distance equaling the distance you moved the object. However, if you rotate an object at the origin first and then move it, you can have the object rotate around its own (local) center point and yet still move around the screen. Some of you may be thinking that this order seems backward with regard to what you do in Processing. In Processing, you normally call translate() before rotate(). What you don't see, however, is that transformations are actually applied in the reverse order that they are specified in your code. There is a logical reason (of course), involving the use of matrices, why this makes sense—which is also beyond the scope of this discussion. To learn more about 3D transformations, check out www.glprogramming.com/red/chapter03.html. To implement our "easy" custom translation solution, you'll apply translations as the final step in the rendering process. Within the IG3D class, you'll add a translateXYZ() method as well as a transVals property to hold the translation values. The actual translation will happen within the project() method. Add the following transVals declaration/instantiation statement to the IG3D class; it should be put with the other instance property declaration statements at the top of the class— anywhere between the opening curly brace of the IG3D class and the start of the IG3D constructor: private Vector3D transVals = new Vector3D(); Now add the translateXYZ() method to the class, anywhere below the closing curly brace of the IG3D constructor: public void translateXYZ(float x, float y, float z){ transVals = new Vector3D(x, y, z); } Finally, you need to apply the translation within the project() method by adding the values stored in the transVals Vector3D object to the following three expressions: projectionRatio = viewDistance/(viewDistance - ➥ (t.v[j].z+transVals.z)); projectedX = t.v[j].x * projectionRatio + transVals.x; projectedY = t.v[j].y * projectionRatio + transVals.y; Here's the updated project() method: private void project(Triangle3D t){ float projectedX = 0, projectedY = 0; float projectionRatio = 0; viewDistance = p.width/2 / p.tan(p.radians(viewAngle/2)); p.beginShape(p.TRIANGLES); for (int j=0; j<3; j++){
3 D R E N D E R I N G I N J AVA M O D E // calculate perspective ratio projectionRatio = viewDistance/(viewDistance - ➥ (t.v[j].z+transVals.z)); // 2D perspective projection with translation projectedX = t.v[j].x * projectionRatio + transVals.x; projectedY = t.v[j].y * projectionRatio + transVals.y; p.vertex(projectedX, projectedY); } p.endShape(); } Applying the translation values this way allows you to calculate the projectionRatio, taking into account any translation along the z-axis, which gets applied to the 3D-to-2D perspective projection. Then, x and y translation values are simply added directly to the projected (flattened) x and y coordinates. Of course, in reality, if you had two objects at different distances along the z-axis, and you wanted to center both, you couldn’t use the same x translation value (i.e., width/2), as an object further away from the camera would need to travel much further to be centered than an object close to the camera. However, by projecting the 3D object coordinates to 2D screen coordinates first, you can simply translate the geometry along the x- and y-axes the same way you do when working in 2D. This type of challenge illustrates some of the joys of dealing with multiple coordinate systems. Before moving on, let’s test out the new and improved custom renderer. I created a little sketch (shown in Figure 14-24) using some Table objects hurtling through space. The hurtling happens thanks to the custom 3D translation and trusty 3D virtual camera [technically just the project() method]. Replace the code in the main sketch tab of the current sketch with the following: // Tablescape public class MyController extends PApplet { IG3D ig3d; int objs = 20; Table[] t = new Table[objs]; float[] x = new float[objs]; float[] y = new float[objs]; float[] z = new float[objs]; float[] spdX = new float[objs]; float[] spdY = new float[objs]; float[] spdZ = new float[objs]; float[] xRotSpd = new float[objs]; float[] yRotSpd = new float[objs]; float[] zRotSpd = new float[objs]; float tableSize = 50; color tableFill = color(0); void setup(){ size(400, 400); noStroke();
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T // instantiate 3D renderer ig3d = new IG3D(this); //initialize values for (int i=0; i<objs; i++){ t[i] = new Table(tableSize, tableSize, tableSize); x[i] = random(width); y[i] = random(height); z[i] = random(-1000, 150); spdX[i] = random(-3, 3); spdY[i] = random(-3, 3); spdZ[i] = random(-3, 3); xRotSpd[i] = random(.2, 2); yRotSpd[i] = random(.2, 2); zRotSpd[i] = random(.2, 2); } smooth(); } void draw(){ fill(255, 30); rect(0, 0, width, height); fill(tableFill); for (int i=0; i<objs; i++){ ig3d.translateXYZ(x[i], y[i], z[i]); ig3d.setViewAngle(80); ig3d.rotX(frameCount*xRotSpd[i]); ig3d.rotY(frameCount*yRotSpd[i]); ig3d.rotZ(frameCount*zRotSpd[i]); ig3d.render(t[i]); x[i]+=spdX[i]; y[i]+=spdY[i]; z[i]+=spdZ[i]; if (x[i]>width+t[i].getWidth()/2) { x[i] = 0; } else if (x[i]<-t[i].getWidth()/2){ x[i] = width; } if (y[i]>height+t[i].getHeight()/2) { y[i] = 0; } else if (y[i]<-t[i].getHeight()/2){ y[i] = height; }
3 D R E N D E R I N G I N J AVA M O D E if (z[i]>200){ z[i] = -2000; } else if (z[i]<-2000){ z[i] = 200; } } } }
Figure 14-24. Tablescape sketch
Holy cow, we’ve covered a lot this chapter—from coding in Java mode, to developing a much improved 3D modeling methodology, to creating a pretty cool new 3D renderer. Your minds must be buzzing with excitement. (I’ve also heard rumors that a few of you have lives outside of reading this book.) Thus, let’s look at the last step in the rendering process—lighting. Before you do, though, as always, I recommend playing with the last sketch first, and also doing a save-as.
Lighting To properly understand lighting in 3D, you need to first understand how the computer recognizes the surface of an object—since that's what will reflect the light back, creating the illusion of a 3D object (on your very 2D screen). A surface, with regard to the IG3D renderer you've been creating, is a triangle, and it just so happens that there is an easy way (mathematically) to detect a triangle's surface, based on its three vertices—or more precisely, the three vectors making up the sides of the triangle. In the Vector3D class, you created methods to handle multiplication and division of a vector quantity with a scalar quantity. You can envision these two operations as simply lengthening or shortening an arrow—changing only the magnitude of the vector. It is also possible to multiply a vector quantity by another vector quantity, which is precisely what is required to solve a surface detection problem.
Multiplying vectors There are actually two ways to multiply vectors together, and I'll discuss both, as each will provide useful information in improving the renderer. The first way multiplies two vectors together to generate a third vector, referred to as the vector product, or more commonly the cross-product. The second way multiplies two vectors together and yields a single scalar value, commonly referred to as the dot product. If you have two vectors, V1 and V2, you can denote the two-vector multiplication operations as follows: Cross-product: V1 ✕ V2 Dot product: V1 • V2 The cross-product calculation yields a third vector that is perpendicular to the two that were multiplied together. Since two vectors also define a plane, the third vector is not only perpendicular to the other two vectors, but also to the plane they define. This perpendicular line is commonly referred to as the surface normal. Thus, the cross-product provides a handy way to detect the surface of an individual polygon, which is critical in lighting calculations, as well as for a common optimization technique called back-face culling. Back-face culling, also referred to as hidden face removal, removes polygons hidden from the camera, such as the polygons on the back side of a cube. It should be pretty obvious that it would be a waste of processing power to render such polygons (which can't be seen). Shortly, you'll add both lighting and back-face culling capabilities to your custom renderer. In the next example, I'll generate and plot a cube's surface normals. To do so, I'll need to add methods to both the Vector3D and IG3D classes. Obviously, to follow along, you'll need the current sketch (the one you've been constructing throughout this entire chapter) loaded in Processing. In the Vector3D class, add the following getCrossProduct() method (it can be put anywhere beneath the two Vector3D constructors):
public Vector3D getCrossProduct(Vector3D v1, Vector3D v2){ v1 = new Vector3D(v1.x-x, v1.y-y, v1.z-z); v2 = new Vector3D(v2.x-x, v2.y-y, v2.z-z); Vector3D cp = new Vector3D(); // actual cross product calculation cp.x = v1.y*v2.z - v1.z*v2.y; cp.y = v1.z*v2.x - v1.x*v2.z; cp.z = v1.x*v2.y - v1.y*v2.x; // normalize cp.divide(cp.getMagnitude()); return cp; } There are three Vector3D objects involved in invoking the getCrossProduct() method; the first calls the method and the other two are passed in as arguments. Each of these three objects holds a different vertex of the same triangle. You don't actually need the specific vertex values, but rather you need the values of the vectors of two sides of the triangle. I calculated these vector values in the first two lines of the method v1 = new Vector3D(v1.x-x, v1.y-y, v1.z-z); v2 = new Vector3D(v2.x-x, v2.y-y, v2.z-z); by subtracting one of the vertices from the other two. I also reused the same vector objects (v1 and v2) to hold these new values. This may look a little odd, but it's OK to do this, as the assignment on the left happens after the subtraction/instantiation on the right side of the expressions. The actual cross-product calculation cp.x = v1.y*v2.z - v1.z*v2.y; cp.y = v1.z*v2.x - v1.x*v2.z; cp.z = v1.x*v2.y - v1.y*v2.x; involves multiplying the components of each of these two vectors together. Prior to doing the actual cross-product calculation, I needed to create another Vector3D object, cp, which I then used to store the three resulting values from the cross-product calculation. These three values make up the x, y, and z components of the triangle's normal. I'm not going to deal with how the three cross-product expressions were derived, but if you'd like to go deeper into the math, here's a link: http://planetmath.org/encyclopedia/CrossProduct.html. The last step in the getCrossProduct() method involves what's called normalizing the vector, and in this (very confusing-sounding) case, involves actually normalizing the normal. The relationship between the two similar-sounding terms, normal and normalizing, ends there, as they refer to two completely separate concepts. Normalizing involves dividing each of the components (x, y, and z) of a vector by the overall length (the magnitude) of the vector, making the length of the normalized vector equal to 1. You'll notice in the normalizing calculation that I used the existing divide() and getMagnitude() Vector3D methods I created earlier; it's good to plan ahead. (If some of this sounds familiar, I discussed normalizing vectors in Chapter 11 as well).
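Before wiring the cross-product into the renderer, it can be reassuring to check it on a triangle whose normal you already know. This is just a throwaway test (it assumes the Vector3D class you've been building, with its public x, y, and z fields, is in the sketch); drop it into setup() and read the console:

// sanity check: a triangle lying flat in the z = 0 plane
Vector3D a = new Vector3D(0, 0, 0);
Vector3D b = new Vector3D(100, 0, 0);
Vector3D c = new Vector3D(0, 100, 0);
Vector3D n = a.getCrossProduct(b, c);
println(n.x + ", " + n.y + ", " + n.z); // prints 0.0, 0.0, 1.0

The triangle sits flat in the xy plane, so its (normalized) normal should point straight along the z-axis, and it does. Swap the b and c arguments and the normal flips to 0, 0, -1, which is a nice preview of why vertex winding order matters later in the chapter.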
Rendering the normals You’ll be using surface normals in calculations to generate directional lighting, and also to remove (actually, just not render) hidden polygons (from the camera’s view). Before you implement these features, though, let’s actually render the surface normals. In the IG3D class, you’ll add a renderNormals() method that will actually plot the cube’s normals. Generally, it isn’t necessary to actually draw the surface normals, as they’re primarily used for internal calculations. However, when creating 3D models, there are times when it is very helpful to be able to see the normals, and especially to know which way a polygon is facing. In addition, the direction of the normal allows you to identify any potential rendering problems, such as non-planar geometry, which can easily occur when you’re working with polygons that have more than three vertices. In the IG3D class, add the following renderNormals() method anywhere beneath the IG3D constructor: // draw normals public void renderNormals(IGShape3D shape, float len){ Vector3D cp = new Vector3D(); float normLen = 75; float lineX1 = 0, lineY1 = 0, lineX2 = 0, lineY2; float projectedX1 = 0, projectedY1 = 0, projectedX2 = 0, ➥ projectedY2 = 0; float projectionRatio = 0; Triangle3D[]t = shape.getTriangles(); for (int i=0; i<t.length; i++){ // get cross product cp = t[i].v[0].getCrossProduct(t[i].v[1], t[i].v[2]); // calculate perspective ratio projectionRatio = viewDistance/(viewDistance - ➥ (cp.z+transVals.z)); // normal line base-centered in triangle lineX1 = (t[i].v[0].x + t[i].v[1].x + t[i].v[2].x)/3; lineY1 = (t[i].v[0].y + t[i].v[1].y + t[i].v[2].y)/3; // normal line tip-centered in triangle lineX2 = lineX1+cp.x * len; lineY2 = lineY1+cp.y * len; // 2D perspective projection with translation projectedX1 = lineX1 * projectionRatio + transVals.x; projectedY1 = lineY1 * projectionRatio + transVals.y; projectedX2 = lineX2 * projectionRatio + transVals.x; projectedY2 = lineY2 * projectionRatio + transVals.y; p.line(projectedX1, projectedY1, projectedX2, projectedY2); } } Rather than explain this method now, let’s draw the normals first (shown in Figure 14-25). In the main sketch tab, replace whatever code is currently there with the following (again, you might want to do a save-as first):
3 D R E N D E R I N G I N J AVA M O D E public class MyController extends PApplet { IG3D i; Cube c; void setup(){ size(400, 400); fill(127, 255, 0); i = new IG3D(this); c = new Cube(150, 150, 150); smooth(); } void draw(){ background(0); i.translateXYZ(width/2, height/2, -100); i.setViewAngle(70); i.rotX(frameCount*3); i.rotY(frameCount*2); i.rotZ(frameCount*1); stroke(0); i.render(c); stroke(255); i.renderNormals(c, 100); } }
Figure 14-25. Rendered Normals sketch
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T Running the sketch, you should see a green cube with black outlines around the individual triangles making up the cube faces, and white lines emanating perpendicularly from the center of each triangle. The white lines represent the surface normals. As I mentioned earlier, I normalized the normals in the getCrossProduct() method, so I added a length argument to the renderNormals(IGShape3D shape, float len) method, just to let you better see the normals. Again, it’s not essential for lighting and back-face culling calculations to actually render the normals. The renderNormals() method uses a lot of the same principles as the render() method. In fact, I’m using the same viewDistance and projectionRatio expressions to project the normals from 3D space to 2D screen space, as I did with the cube vertices. The rest of the code simply centers the normals at the center point of each triangle, which was just an aesthetic decision on my part to allow you to better see them. The normals are perpendicular to a plane, and the overall angle of the plane is what you’re most interested in, with regard to lighting and hidden face removal—so it doesn’t really matter where on the plane the normal emanates from.
Removing hidden faces Before adding some lighting, we'll implement hidden face removal and also try to fix the strange inverting polygon problem. To handle the hidden face removal, you need a way of comparing the angle between the virtual camera's view and the surface normal of each triangle face. Try to visualize the camera's view to the cube (actually to each of the cube's vertices), which you can think about as a vector quantity (describing both distance and direction). The cube's surface normals are also vectors. If the angle between the camera's view and the individual surface normals is greater than 90 degrees and less than 270 degrees, the camera won't be able to see the polygon, and thus it shouldn't be drawn. Figure 14-26 illustrates this relationship. When I introduced the cross-product, I mentioned one other approach for multiplying two vectors together, called the dot product. Unlike the cross-product, which returns another vector perpendicular to the two vectors multiplied together, the dot product simply returns a single scalar value. However, this single float value is very useful, as it corresponds to the angle between the two multiplied vectors. Using the dot product along with some simple trig, you can find the precise angle between the multiplied vectors. The actual equation describing this relationship looks like this: a • b = |a| |b| cos(θ) a • b, on the left side of the expression, represents the dot product calculation between the two vectors, and |a| and |b| represent the lengths, or magnitudes, of the vectors. θ is the angle between a and b, in radians. Fortunately, you don't need to actually solve for the precise angle between the vectors to implement a hidden face–removal solution. It just so happens that you can simply use the sign (positive or negative) of the value returned from the dot product calculation (the left side of the previous expression) to determine which way the polygon is facing. The dot product calculation (in 3D) is solved as follows: V1 • V2 = V1.x ✕ V2.x + V1.y ✕ V2.y + V1.z ✕ V2.z
Figure 14-26. Hidden face removal rules
In your custom rendering engine, you're using a right-handed coordinate system, and the polygon faces of the objects are built in a clockwise direction. Based on this, a polygon face is visible when the dot product returns a positive value (the angle between the vectors would be less than 90 or greater than 270 degrees). To implement the hidden face–removal solution, you'll need to add a getDotProduct() method to the Vector3D class and an isFaceVisible() method to the IG3D class. Here's the getDotProduct() method to add to the Vector3D class; it can be added (as usual) anywhere beneath the two Vector3D constructors: public float getDotProduct(Vector3D v){ return x*v.x + y*v.y + z*v.z; }
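Here's a tiny worked example of the sign test before it gets buried inside isFaceVisible() (again a throwaway snippet, assuming the Vector3D class with getDotProduct() and getMagnitude() is available):

// the sign of the dot product tells you which way a face points
Vector3D view = new Vector3D(0, 0, 300);       // roughly the camera's view direction
Vector3D facingCamera = new Vector3D(0, 0, 1); // normal of a front-facing polygon
Vector3D facingAway = new Vector3D(0, 0, -1);  // normal of a back-facing polygon
println(view.getDotProduct(facingCamera));     // 300.0: positive, so draw it
println(view.getDotProduct(facingAway));       // -300.0: negative, so cull it
// and if you ever do want the actual angle, rearrange the equation above:
float theta = acos(view.getDotProduct(facingCamera) /
    (view.getMagnitude() * facingCamera.getMagnitude()));
println(degrees(theta)); // 0.0 degrees: the vectors point the same way

The renderer never needs the acos() call; checking whether the dot product is positive or negative is all the back-face test requires.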
Next you’ll make the necessary changes to the IG3D class. First, add the new isFaceVisible() method, which should be added anywhere beneath the IG3D constructor. This method should also be declared private, since it will only be called internally within the IG3D class.
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T // hidden face removal private boolean isFaceVisible(Triangle3D t){ boolean isVisible = false; Vector3D cameraView = new Vector3D(t.v[0].x, ➥ t.v[0].y, viewDistance); Vector3D norm = t.v[0].getCrossProduct(t.v[1], t.v[2]); if (cameraView.getDotProduct(norm)>0){ isVisible = true; } return isVisible; } The isFaceVisible() method returns a Boolean value (true or false). This method will be called from within the IG3D class’s render() method, only allowing triangles to be drawn when the method returns true. Notice I created a cameraView Vector3D object in the isFaceVisible() method: Vector3D cameraView = new Vector3D(t.v[0].x, t.v[0].y, viewDistance); This vector represents the difference between camera’s position and any vertex on the triangle being evaluated. I arbitrarily used the vertex t.v[0], but I could have used t.v[1] or t.v[2] as well. Also, notice that instead of using the z coordinate of the vertex (t.v[0].z), I needed to use the viewDistance. Next, I calculated each of the individual triangle’s normals (using the cross-product), with the following line: Vector3D norm = t.v[0].getCrossProduct(t.v[1], t.v[2]); Finally, using the getDotProduct() method, along with the cameraView and norm vectors, I had the method return true if getDotProduct() returned a positive value, or false if it was negative. Within the render() method (still in the IG3D class), you need to add a conditional statement in the for loop, which will call the isFaceVisible() method. The conditional statement should be put around the existing call project(t[i]);. This will ensure that the project() method will only be called if the polygon face is visible, avoiding unnecessary calculations. Here’s the completed render() method (which should replace the existing one). I put the new conditional block in bold: // updated render() method public void render(IGShape3D shape){ Triangle3D[]t = shape.getTriangles(); rotXYZ(t); for (int i=0; i<t.length; i++){ if (isFaceVisible(t[i])){ project(t[i]); } } } Hidden face removal can reduce the amount of polygons to render by 50 percent, so it’s an important optimization technique. The approach I implemented is a pretty simple
method (sorry for that news), suitable for a single object, like our cube, but not suitable for handling many objects or more complex overlapping geometry. My goal here in including all this (pretty complicated) material is really just to shed some light on these internal processes, as the principles involved are applicable to other aspects of graphics programming, such as lighting. Since you put in all this hard work, let's finally see some hidden face removal in action. Here's the code to put into the main sketch tab; again, you may want to do a save-as first. This code should replace whatever code is currently there: public class MyController extends PApplet { IG3D i; Rectangle3D r; void setup(){ size(400, 400); fill(127, 255, 0); i = new IG3D(this); r = new Rectangle3D(250, 200); } void draw(){ background(0); i.translateXYZ(width/2, height/2, -100); i.setViewAngle(70); i.rotY(frameCount*2); stroke(0); i.render(r); stroke(255); i.renderNormals(r, 100); } } Running the sketch, you should see a rotating single-sided rectangle (composed of two triangles) that disappears as it rotates between 90 and 270 degrees. The rendered normals are not being back-face culled, which is why you see them the whole time. You can also render a cube, which will be culled to three faces. Replace the code in the main sketch tab with the following: public class MyController extends PApplet { IG3D i; Cube c; void setup(){ size(400, 400); fill(127, 255, 0); i = new IG3D(this); c = new Cube(150, 150, 150); }
void draw(){ background(0); i.translateXYZ(width/2, height/2, -100); i.setViewAngle(70);
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T i.rotX(frameCount*2); i.rotY(frameCount*3); i.rotZ(frameCount*.5); stroke(0); i.render(c); stroke(255); i.renderNormals(c, 100); } } The cube is composed of 12 triangles, making up its 6 sides. Each of the three vertices making up each triangle is connected in clockwise order. This order is significant—as I alluded to in the discussion about the dot product—in determining which side of the triangle will be rendered and which will be hidden. To see this point, within the isFaceVisible() method in the IG3D class, change the greater-than sign (>) in the following line if (cameraView.getDotProduct(norm)>0){ to less-than (<), and rerun the sketch (shown in Figure 14-27).
Figure 14-27. Inverted Cube with Hidden Faces sketch
You should now see the three sides of the cube, but from an inside vantage point. Another way you could make this inversion happen, keeping the greater-than sign intact, would be by reconnecting all the cube’s triangles in a counterclockwise rotation—but that seemed like too much work. Remember to put the greater-than sign back in the line if (cameraView.getDotProduct(norm)>0){ before continuing on.
Depth sorting Rerun the sketch. Do you notice that occasionally there seems to be a jitter, where some of the side triangle edges seem to flip from side to side? Since each of the triangles making up the faces has a 100 percent opacity fill, you shouldn't be able to see through any of the cube's faces, so what's causing this jittering?
The jitter problem is caused by the renderer not knowing in what order to draw the cube's faces. The last thing drawn will always be on top of what's been drawn previously. You and I know which sides of the cube should be in front of the others, based on our vantage point. However, the computer only renders the cube's faces in the order directed in the setTriangles() method in the Cube class, which the for loop obediently and sequentially runs through in the IG3D render() method. The way the rendering works is that each sequential triangle is rendered over the previous ones. Thus, as the cube rotates, the original triangle drawing order, which might work for a static cube in one specific orientation, doesn't hold true for all orientations, and this problem becomes much more noticeable once you begin hiding back-facing polygons. A way you can solve this problem is by explicitly and dynamically rendering the triangles in the correct z-stacking order. Z-stacking relates to how 2D geometry is stacked, or layered, on an implied z-axis. In most 2D graphics software applications, there are methods for shifting layers—and vector-based artwork—behind and in front of each other. This is precisely what you need to do with the code to fix your twitchy cube. To accomplish this, you'll use a relatively simple sorting method, called a bubble sort, to sort the triangles by their average z-axis values, and then render them in order from back to front. This back-to-front rendering approach is commonly referred to as the painter's algorithm, based on a simplified (and quite incorrect) notion of how a painter works—where forms in the background are painted first, followed by the middleground and then foreground. Having taught painting for a number of years, I can tell you this approach would not have cut it in my classroom. Nonetheless, the "bad painter's" algorithm will suffice to solve our rendering problem. To begin, you'll create a new private method in the IG3D class called sortFaces(). The method will use a bubble sort algorithm, which sorts values by comparing neighboring values. When two values are out of order, the algorithm switches the values. The algorithm runs iteratively, using two for loops, which allows the entire array to be properly sorted. As far as sorting algorithms go, a bubble sort is not very efficient, but it's relatively simple to implement and works fine for a smallish set of values. Add the following sortFaces() method to the IG3D class, anywhere beneath the IG3D constructor: private int[] sortFaces(Triangle3D[]t){ float[]zStack = new float[t.length]; int[]sortOrder = new int[t.length]; for (int i=0; i<t.length; i++){ zStack[i] = (t[i].v[0].z + t[i].v[1].z + t[i].v[2].z)/3; sortOrder[i] = i; } for (int i=0; i<t.length; i++) { for (int j=1; j<t.length-i; j++) { if (zStack[j-1] > zStack[j]) { float zTemp = zStack[j-1]; zStack[j-1] = zStack[j]; zStack[j] = zTemp;
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T int orderTemp = sortOrder[j-1]; sortOrder[j-1] = sortOrder[j]; sortOrder[j] = orderTemp; } } } return sortOrder; } This method is pretty geeky looking (all right, very geeky looking), and a bubble sort algorithm is indeed the type of thing you’d learn in a computer science class. Again, what’s essentially happening in the bubble sort is that neighboring values are compared and switched if they’re out of order. What adds another level of complexity to the method is that I’m actually sorting two arrays at the same time. The zStack[] array is filled with the average z component values for each triangle. The sortOrder[] array is filled with consecutive integers (beginning at 0), based on the number of triangles in the object (in the cube’s case, 12). I do the actual bubble sort calculation on the z component values (stored in zStack[]), but also order the integers in sortOrder[] based on these sorting calculations. Then the method returns the ordered list of integers stored in the sortOrder[] array. I’ll call the sortFaces() method from within the render() method in the IG3D class. I’ll then use the returned ordered list of integers to run through the triangles, rendering them in back-to-front order. Here’s the updated render() method with the added depth sorting. I put the new code in bold. This revised render() method should obviously replace the existing one (again in the IG3D class). public void render(IGShape3D shape){ Triangle3D[]t = shape.getTriangles(); rotXYZ(t); int[] sortOrder = sortFaces(t); for (int i=0; i<t.length; i++){ // only render visible faces if (isFaceVisible(t[sortOrder[i]])){ project(t[sortOrder[i]]); } } } The line int[] sortOrder = sortFaces(t); calls the new sortFaces() method, which returns a sorted array of integers, which is assigned to sortOrder[]. Notice in the method how I’m using an array in an array within the if statement head and also to call the project() method: if (isFaceVisible(t[sortOrder[i]])){ project(t[sortOrder[i]]); } This is perfectly legal (albeit somewhat confusing) syntax, as the inner stuff is evaluated first. Since the sortOrder[] array is filled with ordered integers, these values can be used
3 D R E N D E R I N G I N J AVA M O D E to access data in the t[] array, which will now be accessed and rendered in the correct order. Rerun your current sketch, and the jittery problem should be gone (or at least lessened). I’ve included the code again that should already be in the main sketch tab. After you finish basking in your cube de-jittering success, you’ll add some lighting to the IG3D renderer. public class MyController extends PApplet { IG3D i; Cube c; void setup(){ size(400, 400); fill(127, 255, 0); i = new IG3D(this); c = new Cube(150, 150, 150); } void draw(){ background(0); i.translateXYZ(width/2, height/2, -100); i.setViewAngle(70); i.rotX(frameCount*2); i.rotY(frameCount*3); i.rotZ(frameCount*.5); stroke(0); i.render(c); stroke(255); i.renderNormals(c, 100); } }
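If the parallel-array sorting still feels abstract, here's a stand-alone trace of the same bubble sort idea, using made-up depth values rather than real Triangle3D data:

// made-up average z values for three triangles
float[] zStack = {40, -120, 10};
int[] sortOrder = {0, 1, 2};
for (int i=0; i<zStack.length; i++) {
  for (int j=1; j<zStack.length-i; j++) {
    if (zStack[j-1] > zStack[j]) {
      float zTemp = zStack[j-1]; zStack[j-1] = zStack[j]; zStack[j] = zTemp;
      int oTemp = sortOrder[j-1]; sortOrder[j-1] = sortOrder[j]; sortOrder[j] = oTemp;
    }
  }
}
for (int i=0; i<sortOrder.length; i++){
  println(sortOrder[i]); // prints 1, then 2, then 0
}

The triangle that started at index 1 has the most negative (furthest) average z, so it comes out of the sort first and gets drawn first; the closest triangle (index 0) is drawn last, on top of everything else, which is exactly the painter's order render() now follows.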
Lighting You've actually done most of the hard work already, with regard to adding a light source. And for this we all owe much thanks to Johann Heinrich Lambert, the 18th century luminary (forgive the pun) who developed a law (Lambert's cosine law) that essentially says that the illumination on a surface is directly proportional to the cosine of the angle between the observer's view (which will be our light source vector) and the surface normal. That description should sound a little familiar, as you looked at a similar type of relationship when I discussed back-face culling. With culling you used the camera's view vector and a triangle's normal, and you learned how the dot product fits into this relationship, specifically in the following expression:
a • b = |a| |b| cos(θ) So in a sense you already solved the light problem (well, not quite). Using what you’ve already learned, you’ll (pretty easily) be able to calculate the ratio of the amount of illumination that a surface should radiate. However, you still have to do something with this ratio. One way to apply the ratio is to alter a polygon’s fill color using bitwise operations.
(I cover bitwise operations in both Chapter 10 and Appendix B.) Although using bitwise operations might impress your friends, there is a much easier way using Processing's red(), green(), and blue() color component functions; I recommend the easy path. The first step you need to take is creating a light source, which will be represented as a Vector3D object. You'll also want to create a setLightPosition() method to allow the light's position to be changed from within the main sketch tab. At the top of the IG3D class, directly below the transVals declaration/instantiation, add the following line: private Vector3D lightSource = new Vector3D(0, 0, 350); Then, anywhere below the IG3D constructor, add the following setLightPosition() public method: public void setLightPosition(Vector3D v){ lightSource.setTo(v); } You'll also need an instance property to hold the current fill value. Below the lightSource declaration, at the top of the IG3D class, add the following line: private int currentFillColor; Then, within the IG3D constructor, you'll add an assignment for the currentFillColor property. I've put the updated IG3D constructor below, with the new code in bold: public IG3D(PApplet p){ this.p = p; currentFillColor = p.g.fillColor; } The assignment currentFillColor = p.g.fillColor; may look a little odd. The PApplet object, which is passed into the IG3D constructor, has access to a public PGraphics object called g. The PGraphics class is one of Processing's core classes. Within the PGraphics class is a public property called fillColor, which holds the value of the current fill color in Processing. I admit this is a bit of a behind-the-scenes hack, but as this is the last chapter and all . . . You won't find this sort of information in the standard Processing reference; instead, you'll need to look on the dev.processing.org site, which I recommend exploring as you progress in your Processing mastery. The current link to Processing's PGraphics class is http://dev.processing.org/source/index.cgi/trunk/processing/core/src/processing/core/PGraphics.java?view=markup. Now let's get down to the actual lighting calculation methods. Anywhere beneath the IG3D constructor, add the following setLighting() method:
private void setLighting(Triangle3D t){ // get light ratio float intensity = getLightIntensity(t); // get RGB color components from current fill color float r = p.red(currentFillColor); float g = p.green(currentFillColor); float b = p.blue(currentFillColor); // update color components r*=intensity; g*=intensity; b*=intensity; // reset fill with updated color values p.fill(p.color(r, g, b)); } This method doesn't do the actual light intensity calculation, but rather takes that info and applies it to the fill value. The getLightIntensity(t) call will handle the actual calculating, which you'll get to next. Notice the calls to Processing's red(), green(), and blue() functions, which again save you from uglier bitwise operations. Once you get the actual component values out of the currentFillColor, you simply multiply these values by the intensity, and then call Processing's fill() function, using these updated values to generate a new fill color. This method will be called for each triangle, which will generate a unique intensity ratio and thus change the fill value. The intensity value will very conveniently be between 0 and 1.0 (again, thanks to Lambert), which will correspond perfectly to the angle the triangle is facing in regard to the light source vector. For example, when the polygon face is directly facing the light source, the intensity value will be very near 1.0, causing the fill color to be 100 percent radiant. When the polygon is facing away from the light, the value will be near 0, sometimes even causing the fill to go (unnaturally) to black—something you can fix by adding a little ambient light. Next, you'll add the getLightIntensity() method. As usual, add the method anywhere beneath the IG3D constructor: private float getLightIntensity(Triangle3D t){ Vector3D lightView = new Vector3D(); lightView.setTo(lightSource); lightView.subtract(t.v[0]); Vector3D norm = t.v[0].getCrossProduct(t.v[1], t.v[2]); float lightViewMag = lightView.getMagnitude(); return(lightView.getDotProduct(norm)/lightViewMag); } Notice that this method, as well as the last one you looked at, was declared private, as it's an internal method, which wouldn't be called outside of the class. This method is the real workhorse for generating lighting. It's not that long a method, but it's nice and dense. At the top of the method, I declared a lightView Vector3D object, which will hold the light vector from the light source to a vertex of each triangle passed to the method. The next
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T line simply copies the x, y, and z position values from the lightSource to the lightView, using the Vector3D setTo() method. Next I subtract the x, y, and z values of one of the triangle’s vertices from the lightView coordinates. The lightView now holds a vector quantity from the light source to the triangle. Then I call the same getCrossProduct() method discussed earlier in the back-face removal discussion. The final calculation requires the magnitude of the lightView vector, so I get that next. And finally, I have the function return the dot product calculation between the lightView vector and the polygon normal, all of which is divided by the magnitude of the lightView vector. In the lighting expression you looked at earlier, the dot product is divided by the magnitude of both vectors (the lightView and norm). However, in the getCrossProduct() call, the normal is automatically normalized, so this additional division could be skipped. The last step is calling the setLighting() method, which internally handles calling these other methods. You’ll add the call setLighting(t); within the IG3D class’s project() method, right above the line p.beginShape(p.TRIANGLES);. That’s it! You should now have basic lighting in your IG3D rendering engine (shown in Figure 14-28). Let’s try it out. Here’s the code for the main sketch tab (which, as usual, should replace whatever is currently there): public class MyController extends PApplet { IG3D i; Cube c; void setup(){ size(400, 400); fill(127, 255, 0); noStroke(); i = new IG3D(this); c = new Cube(150, 150, 150); } void draw(){ background(0); i.translateXYZ(width/2, height/2, -200); i.setViewAngle(70); i.rotX(frameCount*3); i.rotY(frameCount*2); i.rotZ(frameCount*1); i.setViewAngle(75); i.render(c); } }
Figure 14-28. Illuminated Cube with a Direct Light Source sketch
Hopefully, you see the illuminated cube in all its 3D glory.
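As a quick numerical gut check of what the intensity ratio is doing to the fill (nothing here touches the IG3D code):

// Lambert's cosine law in three println() calls
float full = 255; // a fully lit color component
println(full * cos(radians(0)));  // facing the light head-on: 255.0
println(full * cos(radians(60))); // tilted 60 degrees: about 127.5
println(full * cos(radians(90))); // edge-on to the light: roughly 0

That falloff is the whole lighting model: getLightIntensity() produces the cosine ratio, and setLighting() multiplies it into each color component.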
Advanced lighting Besides overall diffuse directional light, which you just generated, there are other types of light sources. For example, spotlights, point lights (think lightbulb hanging from a wire), and linear lights (fluorescent tubes), just to name a few. In addition, lights can have certain properties, such as falloff radii, edge softness, and illumination shape (e.g., conic illumination of a spotlight). Lights can also interact with different types of materials, such as the hot spots of intense light seen on highly reflective surfaces (specularity), or the glossiness seen on wet surfaces. As you might imagine, it all comes down to a bunch of mathematical calculations. Ultimately, though, these values all need to act on the fill color of each pixel, so many of them are simply added together to generate the cumulative light. You can get a little bit of insight into this situation by adding some ambient light to the IG3D renderer. Ambient light, similar to ambient sound, is the overall light, not connected to any specific light source. Generally, when working in 3D animation, you want fairly low levels of ambient light, as too much can flatten out a scene, lessening the visual drama. However, too little ambient light can lead to black shadows and a loss of detail. You'll create an ambientLt property and a setAmbientLight() method, and then you'll integrate the ambient light into your overall lighting calculation in the existing setLighting() method. At the top of the IG3D class, under the currentFillColor property declaration, add the following line:
private float[] ambientLt = {30, 15, 5}; You'll store the ambient light as an RGB value, allowing you to not only change the light intensity, but also alter the color of the light. It's very common when working in 3D to create subtle coloration using light sources, which adds more complexity, richness, and ultimately an increased sense of verisimilitude to a scene.
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T Anywhere beneath the IG3D constructor, add the following setAmbientLight() method: public void setAmbientLight(float[] ambCol) { ambientLt = ambCol; } Finally, in the existing setLighting() method, within the IG3D class, you’ll add the ambientLt values to the individual components, directly below the existing three multiplication/assignment expressions: r*=intensity; g*=intensity; b*=intensity; The three new expressions to add are as follows: r+=ambientLt[0]; g+=ambientLt[1]; b+=ambientLt[2]; Here’s the edited setLighting() method, with the three new lines of code in bold: private void setLighting(Triangle3D t){ // get light ratio float intensity = getLightIntensity(t); // get RGB color components from current fill color float r = p.red(currentFillColor); float g = p.green(currentFillColor); float b = p.blue(currentFillColor); // update color components r*=intensity; g*=intensity; b*=intensity; r+=ambientLt[0]; g+=ambientLt[1]; b+=ambientLt[2]; // reset fill with updated color values p.fill(p.color(r, g, b)); } Now try rerunning the sketch. You should see a slight increase in value as well as a subtle orange color mixing with the existing green surface. That’s the last feature I’ll add to the IG3D rendering engine. As always, please mess around with it, and definitely improve upon it; then send me the faster, leaner, and more well-organized code. Who knows, maybe one of you eventually will write P3D2; or better yet, IG3D2.
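If you want to push the ambient color around from the main sketch tab instead of editing the class, you can call the new setter; the values below are arbitrary:

// e.g., in setup(), after the IG3D object i has been created
i.setAmbientLight(new float[]{20, 20, 60}); // cooler, bluish ambient fill

Large values will wash the shading out quickly, since they're added after the directional intensity has been applied.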
One word of caution: The IG3D renderer was developed strictly for demonstration purposes only. Without additional development, it will most likely produce very inconsistent results on any geometry beyond a single cube.
Lighting, the Processing way All right, so all that work, and all yooz gots to show for it is another lousy stink'n spin'n cube. The goal here was never to replace P3D, but to simply provide a deeper understanding about how 3D works. Even though this chapter has gone on for a long time, I've only scratched the surface of 3D. It's a vast and exciting area of research, both in the arts and sciences. In fact, it's perhaps the place where these two disparate disciplines most come together. Before I really sign off, I'll provide one final interactive example showcasing some of Processing's built-in 3D lighting capabilities, and then I really will let you get back to your busy lives. The final sketch example of the book is of a star field with a mess of orbiting planets. The sketch includes six different lighting setups (shown in Figures 14-29 through 14-34), accessible by pressing the number keys 1 through 6 on the keyboard. The following list gives the settings of each:
1: Only ambient light
2: Ambient and directional light
3: Ambient, directional, and specular light
4: Ambient, directional, and specular light, and a spotlight
5: Ambient, directional, and specular light, a spotlight, and a point light
6: Processing's default lights() call
Next is the finished lighting example, titled Orbiting Planets (which should be entered into a brand new sketch, all within the main sketch tab). Please note that the Orbiting Planets sketch requires that the image starfield.gif be added to the sketch's data directory. The image can be downloaded from the Download section of the friends of ED website, at www.friendsofed.com/.
// Orbiting Planets /* required: "starfield.gif" needs to be added to the data directory */
import processing.opengl.*; PImage stars; int planetCount = 40; float[]x = new float[planetCount];
float[]y = new float[planetCount]; float[]z = new float[planetCount]; float[]angle = new float[planetCount]; float[]radius = new float[planetCount]; float[]orbit = new float[planetCount]; float[]speed = new float[planetCount]; color[]planetColor = new color[planetCount]; boolean lightSetup1, lightSetup2, lightSetup3, lightSetup4, ➥ lightSetup5, lightSetup6 = true; void setup(){ size(600, 400, OPENGL); stars = loadImage("starfield.gif"); noStroke(); // fill arrays for (int i=0; i<planetCount; i++){ y[i] = random(1.5, 1.7); angle[i] = random(360); radius[i] = random(.1, 5); orbit[i] = random(130, 178); speed[i] = random(.5, 2); planetColor[i] = color(random(255), random(255), random(255), 255); } } void draw(){ background(0); //star box fill(255, 190, 255, 255); pushMatrix(); translate(width/2, height/2, -175); textureMode(NORMALIZED); starBox(); popMatrix(); //set lighting if (lightSetup1){ ambientLight(110, 110, 110); } else if (lightSetup2){ ambientLight(40, 40, 30); directionalLight(120, 130, 170, 1, 1, -1); } else if (lightSetup3){ ambientLight(60, 40, 60); lightSpecular(50, 145, 175); directionalLight(102, 102, 102, 1, 1, -1);
3 D R E N D E R I N G I N J AVA M O D E specular(160, 160, 160); shininess(9); } else if (lightSetup4){ ambientLight(40, 30, 50); lightSpecular(50, 145, 175); directionalLight(102, 102, 102, 1, 1, -1); specular(160, 160, 160); shininess(9); spotLight(37, 75, 85, -100, height/2, 800, 1, 0, -1, PI/3, 5); } else if (lightSetup5){ pointLight(10, 100, 130, width, height/2, -100); ambientLight(30, 20, 40); lightSpecular(50, 145, 175); directionalLight(102, 102, 102, 1, 1, -1); specular(160, 160, 160); shininess(10); spotLight(37, 75, 85, -100, height/2, 800, 1, 0, -1, PI/3, 5); } else if (lightSetup6){ lights(); } // orbiting planets orbit(); // central large planet hevihevi(); } void hevihevi(){ pushMatrix(); translate(width/2, height/1.75, -230); rotateZ(PI/16); rotateY(frameCount*PI/90); fill(55, 100, 110); sphereDetail(32); sphere(150); popMatrix(); } void orbit(){ for (int i=0; i<planetCount; i++){ pushMatrix(); sphereDetail(10); fill(planetColor[i]); rotateX(PI/10); x[i] = cos(radians(angle[i]))*orbit[i]; z[i] = sin(radians(angle[i]))*orbit[i]*2;
P R O C E S S I N G : C R E AT I V E C O D I N G A N D C O M P U TAT I O N A L A R T translate(width/2+x[i], (height+10)/y[i], -75+z[i]); angle[i]+=speed[i]; rotateY(frameCount*PI/40); rotateX(PI/5); rotateZ(-PI/5); sphere(radius[i]); popMatrix(); } } void starBox(){ // back wall plotWall(); //left wall pushMatrix(); rotateY(PI/2); plotWall(); popMatrix(); //right wall pushMatrix(); rotateY(-PI/2); plotWall(); popMatrix(); //bottom wall pushMatrix(); rotateX(PI/2); plotWall(); popMatrix(); //top wall pushMatrix(); rotateX(-PI/2); plotWall(); popMatrix(); } void plotWall(){ beginShape(); texture(stars); vertex(-width/2, -height/2, -200, 0, 0); vertex(-width/2, height/2, -200, 1, 0); vertex(width/2, height/2, -200, 1, 1); vertex(width/2, -height/2, -200, 0, 1); endShape(CLOSE); }
3 D R E N D E R I N G I N J AVA M O D E //interactivity void keyPressed(){ if (key=='1'){ clearLights(); lightSetup1 = true; } else if (key=='2'){ clearLights(); lightSetup2 = true; } else if (key=='3'){ clearLights(); lightSetup3 = true; } else if (key=='4'){ clearLights(); lightSetup4 = true; } else if (key=='5'){ clearLights(); lightSetup5 = true; } else if (key=='6'){ clearLights(); lightSetup6 = true; } } void clearLights(){ lightSetup1 = false; lightSetup2 = false; lightSetup3 = false; lightSetup4 = false; lightSetup5 = false; lightSetup6 = false; }
Figure 14-29. Default lights() sketch
Figure 14-30. Ambient Light sketch
Figure 14-31. Ambient + Directional Light sketch
Figure 14-32. Ambient + Directional + Specular Light sketch
Figure 14-33. Ambient + Directional + Specular + Spot Light sketch
Figure 14-34. Ambient + Directional + Specular + Spot + Point Light sketch
Running the sketch, you should see a large, central, rotating bluish planet (called Hevi Hevi, originally coined by writer Bruce Coville), with 40 satellite planets orbiting around it. When the sketch starts up, the lighting is controlled by Processing's lights() function, which you've used throughout this chapter. Pressing the keys 1 through 6 will change the light setup, progressively moving from a simple ambient light source (the 1 key) to a full light setup with numerous types of lights (the 5 key). Pressing 6 resets Processing's lights() function. The orbiting planets were created by using a series of translations and rotations—stuff you've done many times throughout the book.
The starBox, which creates the star field, utilizes texture maps on five separate rectangular surfaces, each translated to form the inside structure of a box. This is a common approach in 3D, sometimes referred to as an environment map, or in this case, a sky dome. It is much cheaper, in terms of rendering resources, to generate an environment using a series of images mapped to the inside of a cube or sphere than to try to render a real star field. This approach also gives better visual results than simply using a flat background image.
The texture mapping works by attaching two extra coordinates, called u and v, to each vertex, which tie the image to the geometry. Also discussed in Chapter 11, u-v mapping is commonly found in 3D applications, and gives the benefit of having images adhere directly to object vertices, allowing effects such as object/image map deformations. In Processing, u-v mapping is built into vertex() calls by passing in two extra arguments, for the u and v coordinates, respectively. In addition, the function call texture() needs to precede the vertex() image mapping calls. Both the texture() and vertex() calls also need to be nested between the beginShape() and endShape() function calls. There is one other texturing call I used, textureMode(), which controls how the uv coordinate values map the image to the polygon. There are two constant arguments you can pass to the textureMode() function: IMAGE and NORMALIZED. Using the IMAGE argument, the uv coordinates should be the size of the actual image. However, you can use larger and smaller values to both distort and scale the image. Using NORMALIZED, you use a 0 or 1 for the two arguments to specify the mapping. To map an image on a quadrangle, you'd use the uv order (0, 0) (1, 0) (1, 1) (0, 1) for the last two arguments in the four vertex() calls. Of course, when mapping an image onto a polygon, you also need to properly load your image, which as you'll remember first needs to be added to the sketch's data directory by selecting Add File from Processing's Sketch pull-down menu. (A stripped-down u-v mapping example follows below.)
The lighting commands use lots of arguments, and the best way to really understand how they work is to play with them. If your sketch includes a draw() function, then any lighting commands need to be called from within draw(). Many of the lights work together, forming a cumulative lighting effect, similar to what I demonstrated in the custom IG3D renderer when I added ambient lighting to the directional light. In general, the light calls should be made prior to the individual drawing calls, similar to how the fill() command works.
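To isolate just the u-v mapping idea (apart from all the lights and orbits), here's a minimal sketch; the image name is a placeholder for whatever file you add to your own data directory:

// minimal u-v texture mapping test
PImage img;
void setup(){
  size(400, 400, P3D);
  img = loadImage("anyImage.jpg"); // placeholder filename
  textureMode(NORMALIZED); // u and v run from 0 to 1
}
void draw(){
  background(0);
  beginShape();
  texture(img); // must come before the vertex() calls
  vertex(50, 50, 0, 0, 0);   // x, y, z, u, v
  vertex(350, 50, 0, 1, 0);
  vertex(350, 350, 0, 1, 1);
  vertex(50, 350, 0, 0, 1);
  endShape(CLOSE);
}

Switching textureMode() to IMAGE and swapping the 0s and 1s for pixel coordinates gives the same result, and, as noted above, values larger or smaller than the image dimensions will scale or distort the mapping.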
Processing's light functions
The different lights in Processing include the following (a stripped-down sketch combining them follows the list):
ambientLight(): This uses three arguments to specify overall (and very flat) lighting. Press 1 to see the scene rendered utilizing only ambient light (boooor-ing!).
directionalLight(): This uses six arguments; the first three specify RGB color and the last three specify the direction the light points along the individual axes, as follows:
    x-axis: –1 = left, 0 = middle, 1 = right
    y-axis: –1 = bottom, 0 = middle, 1 = top
    z-axis: –1 = front, 0 = middle, 1 = rear
Press 2 to see the scene rendered utilizing ambient light and directional light.
lightSpecular(), specular(), and shininess(): These all work together to generate specular light effects. Specularity describes surface highlights and relates to how light bounces off (usually shiny) surfaces. The specular() and shininess() calls set material properties, which interact with the specular light. In general, as the highlight areas on objects increase in size, the objects appear less shiny; at the other extreme, tiny hotspots can make a surface feel glossy, even wet (think about the tiny highlights on the surface of an eyeball). Press 3 to see the scene rendered utilizing ambient light, directional light, and specular lights.
spotLight(): This uses 11 arguments (yes, it's too many). The first three specify color; the next three specify the xyz coordinates of the light; the next three, as with directionalLight(), set the direction the light points along the three axes; and the last two set the angle of the spotlight's cone of light (in radians) and the concentration of the light in the center of the illuminated area. As this last argument increases, the spot beam's diameter decreases, but with increased contrast along the edge of the illuminated area. In this respect it works much the way shininess() affects the lightSpecular()/specular() functions. Press 4 to see the scene rendered utilizing ambient light, directional light, specular lights, and a spotlight.
pointLight(): This uses six arguments, the first three for color and the last three for its coordinates. Point lights are omnidirectional, as opposed to directional lights, which are unidirectional. They are useful for adding fill lighting to dark areas in a scene, as well as for adding subtle color effects; I used a point light for both fill light and coloration on the dark side of planet Hevi Hevi. Press 5 to see the scene rendered utilizing ambient light, directional light, specular lights, a spotlight, and a point light.
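The following stripped-down sketch shows all five light types from the list above working cumulatively inside draw(), with a plain sphere standing in for planet Hevi Hevi. It is not the book's planet sketch; the color, position, and direction values are made up, and are meant only as a sandbox for playing with the arguments.

// Cumulative lighting sandbox (a sphere stands in for the planet;
// all argument values are arbitrary starting points to experiment with)
void setup(){
  size(400, 400, P3D);
  noStroke();
  sphereDetail(24);
}

void draw(){
  background(0);

  ambientLight(40, 40, 60);                    // dim, flat base light
  directionalLight(200, 200, 255, -1, 0, -1);  // RGB color, then direction
  lightSpecular(255, 255, 255);                // specular light color
  specular(255, 255, 255);                     // material specular color
  shininess(5.0);                              // controls highlight size
  spotLight(255, 230, 200,                     // color
            width/2, height/2, 400,            // position
            0, 0, -1,                          // direction (into the screen)
            PI/4, 2);                          // cone angle, concentration
  pointLight(150, 50, 50, 0, height/2, 200);   // color, then position

  translate(width/2, height/2);
  rotateY(frameCount*PI/100);
  sphere(120);
}

Because the light calls are made before the sphere is drawn, each one adds to the previous ones; commenting them out one at a time is a quick way to see what each light contributes.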
OPENGL renderer
Finally, notice in the size(600, 400, OPENGL); call that I used the argument OPENGL instead of P3D. I also began the sketch with the import statement import processing.opengl.*;. OPENGL is a very popular API for generating hardware-accelerated 3D; P3D, on the other hand, is a software-only 3D engine. OPENGL is really a specification that many hardware manufacturers follow, allowing fairly high-level programming calls (in OPENGL) to communicate at a very low level with the computer's hardware. The benefit of using OPENGL is greatly increased performance. Processing comes with what's referred to as a binding to OPENGL. In actuality, the binding is through Java (JOGL), but I think by now you
get the relationship between Java and Processing. This very cool link to OPENGL is handled in Processing by the OPENGL library, which is one of the core libraries included in the standard Processing download. You gain access to the OPENGL library through the import statement at the top of the sketch, which you can also have Processing write for you by selecting Import Library ➤ OPENGL, found under Sketch in the top menu bar. I recommend replacing the OPENGL argument in the size() call with P3D to see the difference in performance; on my machine, it's substantial. Processing's libraries are beyond the scope of this book, but I recommend familiarizing yourself with them. They can be found at http://processing.org/reference/libraries/index.html.
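As a quick reference, here is the skeleton of a sketch running under the OPENGL renderer. The geometry is just a placeholder box rather than the chapter's planet scene, but the two changes noted in the comments (the import statement and the renderer argument in size()) are all that's needed to switch any P3D sketch over.

// OPENGL renderer skeleton: only the import statement and the
// renderer constant in size() differ from a P3D sketch
import processing.opengl.*;   // Sketch > Import Library > OPENGL

void setup(){
  size(600, 400, OPENGL);     // was: size(600, 400, P3D);
  noStroke();
}

void draw(){
  background(0);
  lights();
  translate(width/2, height/2);
  rotateY(frameCount*PI/100);
  rotateX(frameCount*PI/120);
  box(150);
}

Swapping OPENGL back to P3D (and optionally removing the import) is all it takes to compare performance between the two renderers.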
Summary
In this final chapter, you delved further beneath Processing's hood, worked in Java mode, and looked at the nuts and bolts of a 3D rendering engine. Hopefully, you didn't find the transition too daunting. The genius of Processing is that real coding (and learning about coding) happens almost without you knowing it. One minute you're trying to get some cool shape to spin, and the next thing you know you're manipulating some fairly complex arrays and nested for loops in an external Java file. Processing makes coding fun, and it eases users into deeper and more complex coding structures, but only when they're ready. Utilizing three working modes (basic, continuous, and Java), users can work and play with programming at any level, making the development time from newbie to fully fluent programmer faster than with a standard one-size-fits-all approach (and it's certainly more fun!). Most importantly, Processing addresses a large part of the population (us creative types) who traditionally might have been terrified or bored silly sitting in a computer science class. Hopefully, if I did my job right, you've glimpsed the creative potential of code and maybe even begun to find your own creative coding voice. Coding is similar to other art forms in that hands-on practice is required to reach mastery. Beyond the theory, the language API, and the math lies a powerful medium for self-expression, which is precisely the message that motivated me to write this book, and a good place to leave it.