The Dancing Pixels: Creating a Self Pattern with Processing


Istanbul Okan University ARCDE 111 - Parametric Design Midterm Assignment

Self Pattern: The Dancing Pixels

Instructor: Hasan GÖKBORA
Semester: Spring 2020/2021
Name: Efekan ÇAKIR
Student ID: 160209005
E-Mail: cakirefekan@gmail.com
Course: Parametric Design

Submission Date: April 28, 2021


Contents

1 Assignment Definition
2 The Work: The Dancing Pixels
  2.1 The Video Tutorial
    2.1.1 The Tutorial Documentation
    2.1.2 The Tutorial Experience
  2.2 Let's move the Pixels!
3 The Codes of The Dancing Pixels


1 Assignment Definition

Generating a pattern that interacts with yourself by defining variables, functions, and control statements in Processing.

2 The Work: The Dancing Pixels

This work was made with Processing version 3.5.4. The Dancing Pixels is mainly based on a tutorial from Processing's official website, named "Video". This tutorial requires the video library for Processing in the runtime environment in order to use the webcam.

2.1 The Video Tutorial

2.1.1 The Tutorial Documentation

The tutorial gives the following code block to pixelise the real-time images coming from the webcam:

import processing.video.*;

// Size of each cell in the grid, ratio of window size to video size
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {
  size(640, 480);
  // Initialize columns and rows
  cols = width/videoScale;
  rows = height/videoScale;
  background(0);
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  // Begin loop for columns
  for (int i = 0; i < cols; i++) {
    // Begin loop for rows
    for (int j = 0; j < rows; j++) {
      // Where are you, pixel-wise?
      int x = i*videoScale;
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];
      fill(c);
      stroke(0);
      rect(x, y, videoScale, videoScale);
    }
  }
}

Firstly, this code block creates a display window of 640x480 px. Then, to build the grid, a constant called videoScale is defined: it is the size of each grid cell in pixels, so the window is divided into width/videoScale columns and height/videoScale rows. With videoScale = 8, each cell is 8 pixels wide, and the 640x480 window yields 80 columns and 60 rows, stored in the cols and rows variables. The first line of the code block imports the Processing video library so the webcam can be used. To access the camera frames, we must assign them to an object of type Capture; for this reason we declare Capture video;. The captured frames are then bound to this variable with the line video = new Capture(this, cols, rows). According to the tutorial page, this line actually means: "Hey listen, I want to do video capture and when the camera has a new image I want you to alert this sketch." The second and third arguments define the dimensions of the capture, which here match the number of cells in the grid. With the video.start() call, the camera begins delivering frames into the Capture object assigned to video. That is all for our setup() function, for now. Then we need to define an event function to read each new frame from the camera: void captureEvent(Capture video) { video.read(); }. Now we are ready to process our images. In the draw() function, we load the pixel data of the current frame with video.loadPixels(). After getting the real pixels of the frame, we can start a loop to process each pixel.
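As a quick check of the grid arithmetic described above, here is a minimal plain-Java sketch (the class name GridSize is just for illustration; Processing provides width and height for you) that reproduces the cell counts:

```java
// Minimal sketch of the grid arithmetic: a 640x480 window with
// videoScale = 8 yields an 80x60 grid of cells, not an 8x8 one.
public class GridSize {
    public static void main(String[] args) {
        int width = 640, height = 480, videoScale = 8;
        int cols = width / videoScale;  // 80 columns
        int rows = height / videoScale; // 60 rows
        System.out.println(cols + "x" + rows); // prints "80x60"
    }
}
```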

We need a nested loop to access each pixel, indexed by column and row: the outer loop iterates over the columns and the inner loop over the rows of the current column. Then we start processing the image. We define two new variables, x and y. x is the horizontal position of our virtual pixel in the grid system (not a real pixel of the image), and y is its vertical position. With the line color c = video.pixels[i + j*video.width], we read the real pixel's colour into the variable c, of type color. Then we draw a square at position (x, y) with side length videoScale using the rect() function, fill it with the colour c, and stroke it with 0, which means black. We have succeeded in creating a pixelised image.
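The index expression i + j*video.width is the standard row-major mapping from a (column, row) pair to a flat array. A small plain-Java sketch (the class name and values are purely illustrative) shows the round trip:

```java
// Hypothetical standalone example of the 1D index formula i + j*width,
// which maps a (column i, row j) pair into row-major flat storage,
// as Processing's pixels[] array does.
public class PixelIndex {
    public static void main(String[] args) {
        int width = 4, height = 3;
        int[] pixels = new int[width * height];
        // Fill each cell with a value derived from its (i, j) position
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                pixels[i + j * width] = 10 * j + i;
            }
        }
        // Reading back with the same formula recovers the cell:
        System.out.println(pixels[2 + 1 * width]); // column 2, row 1 -> prints 12
    }
}
```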



2.1.2 The Tutorial Experience

In my experience, almost everything went fine; I only got an error in the first step of the tutorial. I could not get an image from my webcam: Processing threw an error, IllegalStateException: Could not find any devices. To fix the problem, I searched on Google and found a solution on the Processing forum. The user says we should define the capture object like this: cam = new Capture(this, "pipeline:autovideosrc"). The "pipeline:autovideosrc" string fixes the problem. But if you want to define the capture size as in our tutorial, "pipeline:autovideosrc" must be the last argument of the constructor, like this: video = new Capture(this, cols, rows, "pipeline:autovideosrc"). You can see the webcam solution topic on the Processing forum: https://discourse.processing.org/t/processing-cant-find-the-camera-videolibraries-dont-work/25128/13

2.2 Let's move the Pixels!

After creating the pixelised image, I could start to manipulate the pixels and their data. I preferred to show the pixels as circles only. Our old pixel-drawing call was rect(x, y, videoScale, videoScale), and our new circle-drawing call is circle(x, y, map(brightness(c), 0, 255, 0, 20)). There is a new function here, map(): it linearly rescales its first parameter from one range to another. The brightness() function returns the brightness of the colour passed to it, c in our code. So the map() call scales the brightness of c from the range 0-255 to the range 0-20, and that value is the diameter of the newly created circle. In other words, the brightness of each pixel determines the size of its circle.
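The linear rescaling that map() performs can be sketched in a few lines of plain Java; remap() here is a hypothetical stand-in for Processing's built-in map(), since plain Java has no such function:

```java
// Sketch of how Processing's map() rescales a value linearly from an
// input range [inLo, inHi] to an output range [outLo, outHi].
public class MapDemo {
    static float remap(float value, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (value - inLo) * (outHi - outLo) / (inHi - inLo);
    }
    public static void main(String[] args) {
        // A brightness of 255 (white) maps to the maximum diameter of 20...
        System.out.println(remap(255, 0, 255, 0, 20));    // prints 20.0
        // ...and mid-grey maps to half of it.
        System.out.println(remap(127.5f, 0, 255, 0, 20)); // prints 10.0
    }
}
```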



We succeeded in moving to circles, but that is not yet a real pattern; it only gives a circled version of the image. To make a pattern, let's create harmony. In my opinion, dancing contains real harmony, and it is generally performed as a community (together): each member of the community connects to another to express that harmony. We express this with circles and some lines... We currently have a 2D arrangement of columns and rows; in my imagination, I needed to convert that data into a 1D arrangement, so I created a line in the vertical plane. In the figure, the red lines express the columns and the blue lines express the rows. If I can lay the points out along such a line, it means I effectively have the data as a 1D array. Mathematically, this means the centre points all have to share the same x-position while their y-positions differ. I assigned each point an x-position of half the screen width and a y-position of x + y/3, where x and y are the variables we defined before. I preferred to divide y by 3 because x + y directly may give the same result for different pixels. For example, the pixel located in the 2nd column and 3rd row gives 2 + 3 = 5, and the pixel in the 3rd column and 2nd row also gives 5 from 3 + 2. To prevent that conflict, y is divided by 3. Having one point per pixel means I cannot draw a line directly, because a line needs two distinct points, so I preferred to draw a thin rectangle with the rect() function, which needs only one point, a width, and a height: rect(width*1/2, x+y/3, 50, 1). In that line, 50 is the width of each rectangle and 1 is its height, so it looks like a line.
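The collision can be checked in a few lines of plain Java (a hypothetical example using the sketch's actual pixel coordinates, i.e. the column and row indices multiplied by videoScale = 16):

```java
// Quick check of the collision described above: x + y gives the same value
// for the cells (2, 3) and (3, 2), while x + y/3 keeps them distinct.
public class CollisionCheck {
    public static void main(String[] args) {
        int videoScale = 16;
        int xA = 2 * videoScale, yA = 3 * videoScale; // column 2, row 3
        int xB = 3 * videoScale, yB = 2 * videoScale; // column 3, row 2
        System.out.println((xA + yA) == (xB + yB));         // prints true  -> collision
        System.out.println((xA + yA / 3) == (xB + yB / 3)); // prints false -> distinct
    }
}
```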

Then I drew circles at the same points as the rectangles with circle(width*1/2, x+y/3, 5). Now we need to move each circle along its rectangle, which means its x-position should change, so we need another parameter to determine it. We can again use the brightness of the pixel's colour. We define a new variable, extraX. If we used the brightness directly, we could not control the positions of the circles, so we again use the map() function to control them. We define extraX as a float and assign it the mapped brightness of the pixel's colour: float extraX = map(brightness(c), 0, 255, -100, 100). Our new range runs from -100 to 100 because, if we started from 0 again, our circles could only move in the positive direction (to the right) on the horizontal axis. Then we add this extraX value to the x-position of each circle: circle(width*1/2+extraX, x+y/3, 5)



Now we have created the base of our pattern. Let's connect the vertices (circles) to each other. The best method, in my judgement, is to connect each vertex to the next one. There is one exception: the last vertex has no next vertex, and we have to handle that with a conditional statement. Before that, we should define the next vertices. Each vertex now has three parameters: its x-position x, its y-position y, and its distance from the base point, extraX. To access the next vertex we need the same parameters for it, so we should define them. First of all, we need the extraX of the next vertex, which comes from the colour of its pixel. We read pixel colours as color c = video.pixels[i + j * video.width], so the colour depends on i and j. We define them for the next vertex as nextI and nextJ: int nextI = (i + 1) and int nextJ = (j + 1). With the next i and j values we can read the next pixel's colour as video.pixels[nextI + nextJ * video.width] and assign it to a variable nextC, the colour of the next pixel as a color data type: color nextC = video.pixels[nextI + nextJ * video.width]. Then we can determine the next vertex's distance from the base point from the brightness of its colour: float nextExtraX = map(brightness(nextC), 0, 255, -100, 100). Now we have the first parameter of the next vertex; we still need its x-position and y-position. We get the current vertex's positions with int x = i * videoScale; int y = j * videoScale;, so we define nextX and nextY like this: int nextX = (i + 1) * videoScale; int nextY = (j + 1) * videoScale;. Now we have every parameter of the next vertex. Let's connect each vertex to the next one. We use the line() function to draw a line; it needs only the source's x- and y-position and the target's x- and y-position. The line call should look like this: line(width*1/2+extraX, x+y/3, width*1/2+nextExtraX, nextX+nextY/3); Do not forget the conditional statement to handle the last vertices. It should look like this: if (nextI != cols && nextJ != rows) {}, because for the last vertex in a row or column, i equals cols minus 1, so nextI would equal cols itself. There is no vertex at that index, so without the guard the code would fail. For this reason we created the if statement; our drawing code should go inside it.
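The effect of the boundary guard can be illustrated with a small plain-Java walk over a toy grid (the class name, grid size, and the pixels stand-in array are purely illustrative):

```java
// Hypothetical walk over a small grid showing why the boundary check
// nextI != cols && nextJ != rows is needed before reading the next vertex.
public class NextVertex {
    public static void main(String[] args) {
        int cols = 3, rows = 2;
        int[] pixels = new int[cols * rows]; // stand-in for video.pixels
        int connected = 0;
        for (int i = 0; i < cols; i++) {
            for (int j = 0; j < rows; j++) {
                int nextI = i + 1;
                int nextJ = j + 1;
                // Without this guard, the last column or row would index a
                // cell that does not exist (or run off the end of the array).
                if (nextI != cols && nextJ != rows) {
                    int nextC = pixels[nextI + nextJ * cols];
                    connected++;
                }
            }
        }
        System.out.println(connected); // prints 2: only 2 of the 6 cells have a below-right neighbour
    }
}
```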

Now we can actually remove the rectangles we used as lines. And to add some visual aesthetics, we can colour the vertices. It's easy: we add the fill(c) and noStroke() calls before drawing the current vertex. Then, after drawing the circle, we add stroke(c) to colourise the line.



3 The Codes of The Dancing Pixels

import processing.video.*;

String videoName = "pipeline:autovideosrc";
// Size of each cell in the grid, ratio of window size to video size
int videoScale = 16;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {
  size(960, 640);
  // Initialize columns and rows
  cols = width / videoScale / 2;
  rows = height / videoScale / 2;
  video = new Capture(this, cols, rows, videoName);
  video.start();
  //noLoop();
}

// Read image from the camera
void captureEvent(Capture video) {
  video.read();
}

void draw() {
  background(255);
  video.loadPixels();
  // Begin loop for columns
  for (int i = 0; i < cols; i++) {
    // Begin loop for rows
    for (int j = 0; j < rows; j++) {
      // Where are you, pixel-wise?
      int x = i * videoScale;
      int y = j * videoScale;
      int nextX = (i + 1) * videoScale;
      int nextY = (j + 1) * videoScale;
      color c = video.pixels[i + j * video.width];
      fill(c);
      stroke(0);
      /****************************/
      rect(x, y, videoScale, videoScale);
      /****************************/
      noFill();
      circle(x, y + height / 2, map(brightness(c), 0, 255, 1, 20));
      /****************************/
      float extraX = map(brightness(c), 0, 255, -100, 100);
      fill(c);
      //fill(0);
      noStroke();
      circle(width * 3 / 4 + extraX, x + y / 3, 5);
      stroke(c);
      int nextI = (i + 1);
      int nextJ = (j + 1);
      if (nextI != cols && nextJ != rows) {
        color nextC = video.pixels[nextI + nextJ * video.width];
        float nextExtraX = map(brightness(nextC), 0, 255, -100, 100);
        line(width * 3 / 4 + extraX, x + y / 3,
             width * 3 / 4 + nextExtraX, nextX + nextY / 3);
      }
      /****************************/
    }
  }
}


