
We talked about a ton of things this week.

1. Objects review
2. The Matrix
3. Rotation
4. 3D
5. Pixels!
6. Pixels and the video camera
7. OpenCV
8. More Libraries: Minim
9. More Libraries: OSC
10. FaceOSC
11. Kinect
12. OpenTSPS
13. Arduino
14. Arduino & Processing
15. Etc.

 
 
Objects review
We looked at different ways that we can implement objects in our code.
In the first example, we take one of the data sketches from last week and make it more interactive by turning each data item into an object.

This is what the draw loop looks like:

void draw() {
  background (0);
  for (int i = 0; i < items.length; i++) {
    items[i].drawRect(); 
    items[i].activateHover();
    if (items[i].isHover == true && mousePressed == true) {
      items[i].drawText();
    }
  }
}
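That loop assumes each element of items is an instance of a class with a drawRect(), an activateHover(), an isHover flag, and a drawText(). Here's a minimal sketch of what such a class might look like (the fields and method bodies are my assumptions, not the code from class):

class Item {
  float x, y, w, h;
  String label;
  boolean isHover = false;

  Item (float x, float y, float w, float h, String label) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.label = label;
  }

  void drawRect() {
    fill(isHover ? 200 : 100); // highlight when hovered
    rect (x, y, w, h);
  }

  void activateHover() {
    // true when the mouse is inside this rectangle
    isHover = mouseX > x && mouseX < x + w && mouseY > y && mouseY < y + h;
  }

  void drawText() {
    fill(255);
    text(label, x, y - 5);
  }
}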

In the second, we take an example from Dan Shiffman’s excellent Learning Processing and see how we can use objects to create game objects that detect collision.

void draw() {
  background(255);
  // Move and display balls
  ball1.move();
  ball2.move();
  
  if (ball1.intersect(ball2)) {
    // New! An object can have a function that takes another object as an
    // argument. This is one way to have objects communicate. In this case
    // they are checking to see if they intersect.
    ball1.highlight();
    ball2.highlight();
  }
  
  ball1.display();
  ball2.display();
}

Finally, in the third example, we use objects as a way to control the flow of the story by making each page an object:

void draw() {
  switch (pageNum) {
  case 0:
    pages[0].display(); 
    break;
  case 1:
    pages[1].display(); 
    break;
  case 2:
    pages[2].display(); 
    break;
  case 3:
    pages[3].display(); 
    break;
  case 4:
    pages[4].display(); 
    break;
  }
}

In the third example, you can see how we used the switch/case structure instead of our usual if/else.

 
 
The Matrix
In class, we talked about how the matrix is a bit like adding layers in Photoshop and Illustrator. It is similar in that you can affect the position and movement of one “layer” or matrix without affecting the other, but it does a bit more than that — it also resets your coordinate system.
Read up on how to use the Matrix and Translations here: http://processing.org/tutorials/transform2d/.

In brief:
pushMatrix(); – saves the current coordinate system
translate (x, y, z); – moves the origin (0,0) to a new location
popMatrix(); – restores the coordinate system that pushMatrix() saved
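Here's a tiny sketch of that in action. The second rect lands back at the top-left corner because popMatrix() restores the saved coordinate system (the shapes and numbers are just for illustration):

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  pushMatrix();
  translate (width/2, height/2); // (0,0) is now the center of the canvas
  rect (0, 0, 50, 50);           // so this rect draws at the center
  popMatrix();                   // (0,0) is back at the top-left corner
  rect (0, 0, 50, 50);           // so this one draws at the top-left
}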

Intermission: A word on Processing Render Modes
You can control a bit of how Processing renders your sketch by switching render modes; besides the default renderer, there are three to choose from: P2D, P3D, and PDF. You can read more about why you would need to do this here.

For the next few examples, we will be working in the 3D space — introducing a new axis, the z-axis — so we’ll need to work in P3D. To do that, we just need to append another parameter to our size() function, like so:

void setup() {
  size(640, 480, P3D);
}

 
 
The Matrix and Rotation
Unless you want to rotate around the canvas's top-left corner, you cannot rotate without translating first. Processing will always rotate around (0,0), so in order to change the rotation point, you need to move (0,0)!

In this example, we rotate around the z-axis:

pushMatrix(); 
  translate (width/2, height/2); 
  rotateZ (radians (angle)); 
  angle += 5; 
  rectMode (CENTER); 
  rect (0, 0, 50, 50); 
popMatrix(); 

 
 
The Matrix and 3D
Drawing shapes in 3D is almost the same as drawing them in 2D. A rect is a box, and an ellipse is a sphere.

The biggest difference — other than that you are now adding a third dimension to your shape — is that when you create a box or a sphere, you only describe the size:

sphere(28);

and not the location! That's because you need to use translate() to describe the location of 3D shapes:

noStroke();
lights();
translate(58, 48, 0);
sphere(28);

This example in our repository shows a 3D box rotating based on the location of your mouse.
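If you don't have the repo handy, a minimal sketch in that spirit could look like this (the box size and mapping ranges are my own choices, not the repo's):

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);
  lights();
  translate (width/2, height/2, 0);
  // map the mouse position to rotation angles
  rotateY (map (mouseX, 0, width, -PI, PI));
  rotateX (map (mouseY, 0, height, -PI, PI));
  box(100);
}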

 
 
Pixels
Processing makes it easy for us to work with pixels. What's a pixel? Well, it's basically a tiny unit that contains color information. Every Processing canvas, picture, video, or shape is made up of a combination of pixels, and the way that we access those pixels is through a pixels[] array.

For instance, if you want to get/set/use all the pixels that make up the entire canvas, you would just use:

pixels

To control the pixels in a picture of a cow, you would use:

cow.pixels

Or the video camera:

camera.pixels

To get only ONE single pixel, just call the specific index of that pixel, in the same way you would a normal array:

pixels[3300];
cow.pixels[22];
camera.pixels[940]; 

But you will probably never want to call just one; you'll want control over the entire pixels array, in which case you'll loop through them all like so:

for (int i = 0; i < cow.pixels.length; i++) {
    color thisColor = cow.pixels[i]; 
}

A really, really important thing to know about pixels is that they can be accessed in two ways: by their (x, y) coordinates, or by their index in the pixels[] array.

If you have a pixel's (x, y) location but want to know its index in the pixels array, you simply use this handy formula:

x + y * width

For example, in an image 100 pixels wide, the pixel at (3, 2) is pixels[3 + 2 * 100], i.e. pixels[203].

So say you wanted to use the location to double-for-loop through all the pixels; you can still access them using the index of the array like so:

for (int y = 0; y < cow.height; y++) {
   for (int x = 0; x < cow.width; x++) {
      color thisColor = cow.pixels[x + y * cow.width];
   }
}

One last important thing about working with pixels: before we use an image's, a video's, or the canvas's pixels, we just need to tell Processing to load them all into an array, like so:

loadPixels(); 
cow.loadPixels(); 
camera.loadPixels(); 

And when we finish doing whatever we want with them, we tell Processing to update:

updatePixels(); 
cow.updatePixels(); 
camera.updatePixels(); 

It’s a complex topic, so I would suggest going through this tutorial to understand it better: http://www.processing.org/tutorials/pixels/

We have a ton of pixels examples in our class repo, too.

This one simulates analog TV noise by assigning random grey values to every pixel on the screen:

void draw() {
  loadPixels();

  for (int i = 0; i < pixels.length; i++) {
    pixels[i] = color (random (255)); 
  }
  updatePixels(); 
}

This one adjusts the brightness of an image based on your mouse location (you'll need to download the tuna.jpg file here).
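The repo has the full code; a minimal greyscale sketch of the same idea, assuming the image is the same size as the canvas, might look like this:

PImage img;

void setup() {
  size(640, 360);
  img = loadImage("tuna.jpg"); // the image linked above, in the data folder
}

void draw() {
  img.loadPixels();
  loadPixels();
  for (int y = 0; y < img.height; y++) {
    for (int x = 0; x < img.width; x++) {
      int loc = x + y * img.width;
      float b = brightness(img.pixels[loc]);
      // pixels near the mouse get brighter, far ones get darker
      float adjust = map (dist(x, y, mouseX, mouseY), 0, width, 2, 0);
      pixels[loc] = color(b * adjust);
    }
  }
  updatePixels();
}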

This one shows you how to manipulate a double-for-loop so you only affect every 10th pixel.

  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      if (x % 10 == 0) { //this affects every 10th column. Try making a box area
        pixels[x + y*width] = color (random(255)); 
      }
    }
  }
  updatePixels(); 

And this cool one uses the brightness of the pixels to extrude them along the z-axis.

 
 
The Video Camera
Before we combine pixels and the video camera, let’s figure out how to access the camera.

The first thing we need to do is import the Processing video library, like so:

import processing.video.*;

Then we will declare a Capture object and call it “video”, like so:

Capture video;

To initialize it, we just need to say:

 video = new Capture (this); 

You can specify the size of your capture as well as the desired frame rate by initializing it in one of the ways listed in the reference:

Capture(parent)
Capture(parent, requestConfig)
Capture(parent, requestWidth, requestHeight)
Capture(parent, requestWidth, requestHeight, frameRate)
Capture(parent, requestWidth, requestHeight, cameraName)
Capture(parent, requestWidth, requestHeight, cameraName, frameRate)

To start the camera, we say so in setup():

video.start();  

And to play back the feed, this is what we write in draw():

  if (video.available()) { //first check to see if we are receiving a stream
    video.read(); //then read the stream
  }

 image(video, 0, 0); //we are displaying the video as we would a PImage, by calling "image"!
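
Putting all of those pieces together, a minimal live-camera sketch looks like this:

import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture (this, width, height);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0);
}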
 

Choosing another camera that’s not your built-in default

Use this code from the Processing examples to list the cameras that are currently in your system:

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    cam = new Capture(this, 640, 480);
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
  }

It will print out a list to the console. Find your camera in the list and include it in the arguments when you initialize, like so:

 cam = new Capture(this, cameras[2]); 

 
 
Pixels and the Video Camera
So now we can combine pixels and live video! Woot!

If you haven't already guessed, we can apply basically anything that we have done with still images to moving images. Accessing camera pixels is the same as accessing PImage pixels.

In this example we convert video pixels into little ellipses, to get that halftone effect.

if (vid.available()) {
    vid.read(); 
    vid.loadPixels(); 
    
    for (int x = 0; x < vid.width; x+=5) {
      for (int y = 0; y < vid.height; y+=5) {
        float bright = brightness(vid.pixels[x + (y * vid.width)]);
        float mapBright = map (bright, 0, 255, 1, 2); 
        ellipse (x, y, mapBright, mapBright); 
      }
    }
    
  }

And in this one, we extrude those pixels in 3D space and rotate as we go!

void draw() {
  background (0); 
  ellipseMode (CENTER); 
  
  float rotY = map (mouseX, 0, width, 0, 360); 
  float rotX = map (mouseY, 0, height, 0, 360); 
  pushMatrix();
  translate (width/2, height/2); 
  rotateY(radians (rotY)); 
  rotateX (radians (rotX)); 
  
  if (vid.available()) {
    vid.read(); 
    vid.loadPixels(); 
    
    for (int x = 0; x < vid.width; x+=10) {
      for (int y = 0; y < vid.height; y+=10) {
        float bright = brightness(vid.pixels[x + (y * vid.width)]);
        float mapBright = map (bright, 0, 255, 3, 9); 
        pushMatrix();
        translate (x - width/2, y - height/2, mapBright*50);
        ellipse (0, 0, mapBright, mapBright); 
        popMatrix(); 
      }
    }
    
  }
  popMatrix(); 
}

There are a ton of things we can do with live image processing: blob detection, background subtraction, detecting movement, color, and patterns. We can even do photoshoppy things like adding blur or changing color. Sometimes those tasks involve complicated algorithms, so next we will look at libraries that people have made to make our lives easier.

 
 
OpenCV

OpenCV is a widely used library for computer vision. There are a few ports of it to Processing, but the one we are going to use isn't on the Processing.org site. It's by Greg Borenstein, and I find that it is much more reliable (and easier to use) than the others. Download it here.

Copy the downloaded folder into your Processing libraries folder and restart the program. If you do it right, you should be able to go to File > Examples > Contributed Libraries > OpenCV for Processing.

Here’s an example using video and face detection:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

 
 
Minim
If you want to work with audio, look no further than the Minim library. It used to be a separate download, but now it's baked into Processing. To learn how to use it, check out the docs here.

In class, we looked at how we can turn a switch on based on the volume of the noise in the room, or how to pause and play a song using the spacebar.
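
Here's a minimal sketch of the play/pause idea, assuming you've put a file called song.mp3 into your sketch's data folder (the file name is just a placeholder):

import ddf.minim.*;

Minim minim;
AudioPlayer song;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  song = minim.loadFile("song.mp3"); // swap in your own file
}

void draw() {
  // white while playing, black while paused
  background(song.isPlaying() ? 255 : 0);
}

void keyPressed() {
  if (key == ' ') {
    if (song.isPlaying()) {
      song.pause();
    } else {
      song.play();
    }
  }
}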

 
 
OSC
OSC (short for Open Sound Control) is a communications protocol that lets different apps talk to each other over a network.

You can learn more about it here. The Processing implementation for using OSC is called OSCP5, and you can download it here.
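
Once OSCP5 is installed, receiving data takes very little code. Here's a minimal listener that just prints whatever arrives (the port number is arbitrary; it has to match whatever app is sending):

import oscP5.*;

OscP5 oscP5;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12000); // listen on port 12000
}

void draw() {
  background(0);
}

// called automatically whenever an OSC message arrives
void oscEvent(OscMessage msg) {
  println("received: " + msg.addrPattern());
}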

I mention OSC now because it is how we will bring live data into Processing later on. But first …

 
 
The Kinect
If you didn’t already know, the Microsoft Kinect, launched as an accessory to the Xbox gaming console, has become very popular in the art installation/ hacker world. At its most basic, it is a depth camera, able to give you 3D information about the world in front of you. But more than that, you can get skeleton tracking, face tracking, joint detection, etc.

To get the Kinect to talk to Processing, we need the Simple OpenNI library, here. There are instructions on that page, but it's actually easier to go to the Tools > Library > Add Library feature in the Processing IDE and just download it from there.

However, the docs are in the actual download, so you might want to download the zip anyway, even if you don't use it to install the actual library.

Once you've downloaded the library, restart Processing. Your Examples folder (under File > Examples) should show SimpleOpenNI under Contributed Libraries.

You need to plug a Kinect into your computer before you run these. I would start by looking at the DepthImage example to see what the Kinect sees, and then the User example to see skeleton data.
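
For reference, the heart of the DepthImage example boils down to something like this (a sketch of the idea, not the exact example code):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth(); // ask the Kinect for the depth stream
}

void draw() {
  context.update(); // grab the latest data from the Kinect
  image(context.depthImage(), 0, 0);
}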

 
 
FaceOSC
Remember when I said OSC would be important? This is where we will use it. FaceOSC is an app made by Kyle McDonald. It looks for any faces in a camera feed, along with amazing details like eye location, mouth height, etc., and sends them out as numbers via OSC. What we need to do in Processing is "catch" those numbers and use them in our sketch.

Download FaceOSC here: https://github.com/kylemcdonald/ofxFaceTracker/downloads
Learn how to use it with OSC here: https://github.com/kylemcdonald/ofxFaceTracker/wiki/Osc-message-specification
Access the OSC data using Processing (as well as some other frameworks) using these templates by Dan Wilcox: https://github.com/CreativeInquiry/FaceOSC-Templates

If you weren't in class, all you need to do is:
1. Make sure you have a camera
2. Open up the FaceOSC app (just double-click it like a normal app)
3. Point the camera at your face to make sure it's detecting something (you'll know when it's working). If it doesn't work, try playing with the lights around you, as it might be too bright or too dark.
4. Open one of the Processing examples under the templates made by Dan Wilcox. The FaceOSCReceiver is a good place to start (there's also a minimal sketch below).
5. Run the sketch, and you should see the Processing sketch responding to the OSC app. Yippee!
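
And if you'd rather start from scratch than from the templates, here's a minimal sketch of the "catching" side. FaceOSC sends on port 8338 by default, and the address pattern below comes from the message spec linked above:

import oscP5.*;

OscP5 oscP5;
float mouthHeight = 1;

void setup() {
  size(400, 400);
  oscP5 = new OscP5(this, 8338); // FaceOSC's default port
}

void draw() {
  background(255);
  // draw a "mouth" that opens and closes with yours
  fill(0);
  ellipse (width/2, height/2, 150, mouthHeight * 20);
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = msg.get(0).floatValue();
  }
}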

 
 
OpenTSPS
Open TSPS (Toolkit for Sensing People in Spaces) is another app that takes camera information and sends it out via OSC. It was created by the people at Rockwell Group. Go ahead and download the app from the link above. Their website is pretty self-explanatory, so I will let you guys peruse it.

Here’s how you would use it with Processing.

 
 
The Arduino
The Arduino is how we will get Processing to talk to the physical world.
You will need the board, of course (you can get these now at any old Radio Shack) and the software, which you can download here: http://arduino.cc/

If you don't have a board, 123d.circuits.io is an AMAZING resource that lets you run simulations on a virtual Arduino + breadboard. It's a great way to practice (and even learn about electronics!).

Because Arduino and Processing are so friendly with each other, the minute you open up the Arduino interface you will see the similarities. Arduino is built on C and Processing on Java, but even with these differences the languages still feel quite similar.

These are the first things you need to know about writing Arduino code (from a Processing point of view):
1. Instead of setup() and draw(), it's setup() and loop().
2. We use pinMode(pin, mode) to tell the Arduino whether the thing connected to a specific pin is an INPUT or an OUTPUT.
3. We say analogWrite(pin, value) to control something analog (like fading an LED), and analogRead(pin) to get a value from an analog sensor (like a temperature, light, or pressure sensor).
4. We say digitalWrite(pin, value) to write out digital values (turn something on or off), and digitalRead(pin) to get digital values (from a switch).
5. In Processing, all we need to do is RUN. In Arduino, we do two things: COMPILE, and then UPLOAD. Which makes sense — you first assemble all the code, and then move it to the board :)

Finally, how do we get those values into Processing? Instead of OSC, Arduino and Processing talk over another protocol, serial. Read more about serial here: http://www.processing.org/reference/libraries/serial/ and on the Arduino site, here: http://arduino.cc/en/reference/serial#.UwA9i0JdLRo
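
To give you a feel for the Processing side, here's a minimal sketch that reads single bytes (the kind an Arduino sends with Serial.write()) and maps them to a circle size. The port index and baud rate are assumptions; match them to your own setup:

import processing.serial.*;

Serial port;
float val = 0;

void setup() {
  size(400, 400);
  println(Serial.list()); // find your Arduino in this list
  // index 0 is a guess; use the index of your Arduino's port
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(0);
  ellipse (width/2, height/2, val, val);
}

// called whenever a new byte arrives
void serialEvent(Serial p) {
  val = p.read(); // a value from 0 to 255
}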

The examples that we used in class are here. Both the Arduino code and Processing code are included.

To recap what we did in class:

To run the examples, you'll need an Arduino, a USB cable, a breadboard, an LED or two, a switch, and a resistor.
1. Plug in your Arduino.
2. Open the Arduino equivalent of the sketch (it has an "_ard" or .ino suffix, and won't open in Processing).
3. You'll see in the top of the sketch where you'll need to hook up the LED/ switch/ whatever you need.
4. Compile it first (the check button).
5. Under Tools, first select your board (the name of it should be written on your board itself) and the serial port (choose the cu.usb one).
6. Now, hit upload! The little LED on your board should blink while it uploads.
7. Open the equivalent Processing sketch and hit play (you know how to do that).

 
 
CLOSING
That’s it! I know that was a ton of stuff, so feel free to write me if you have questions (in fact, please do).

Here are some more resources to send you on your way:

Join the community:
Share your work, see what others have done: http://www.openprocessing.org/
Collaborative Coding: http://sketchpad.cc/

Libraries:
2D Physics: Box2d
3D Physics: ToxicLibs
Buttons and Sliders: ControlP5
Geometry: Geomerative
And so much more!

Further Learning:
Super Handy: 25 Life-Saving Tips for Processing
Simulate Nature: Nature of Code
A really cool book I just bought: Generative Design
Processing and Data: Visualizing Data
Watch for this Coursera course to come back: https://www.coursera.org/course/compartsprocessing
Processing, Arduino, and Openframeworks in a book: Programming Interactivity

Computer Vision:
Kinect: Making Things See

The Arduino:
Use the Arduino without an Arduino: 123d.circuits.io
The Arduino and the World: Making Things Talk
