Tag Archive for 'spatialmedia'

linearMusic

linearMusic is an interactive installation that reacts to the presence of passers-by and turns them into performers. It consists of a camera tracking pedestrians on a street, a projection on a wall covering the surface tracked by the camera, and sound played through speakers.

When a pedestrian is seen by the camera, a set of lines appears on the wall, following her and “wrapping” around her silhouette. As more people pass in front of the camera, the lines reform themselves to wrap around all of them. A soundscape plays as people interact with the piece, and each individual affects a specific quality of the sound by moving left to right and up and down.

There are two ways to interact with the installation: one can simply walk past the projection, or actively engage with it to see how one’s movements affect the soundscape.

linearMusic was tested in the hall at ITP, but the ideal setup for the installation would be a large wall of a warehouse in Chelsea.

flora

Better late than never… Here is the documentation for flora, the Spatial Media midterm Eyal and I made.


linearMusic Tracking Test

We’ve got the visuals almost there; we just need to play with the camera calibration and adjust the colours. The next and hardest part to tackle is the audio…

PoopWatch

PoopWatch is an enhanced sidewalk designed to monitor dog owners and ensure they clean up after their pets. It tracks people walking their dogs on the sidewalk, and when the animal relieves itself, checks whether the owner picks up the poop. If the owner fails to do so, the system notices automatically and proceeds with the following:

  • PoopWatch will place a bright outline around the dog poop, to ensure that any passers-by notice it and don’t accidentally step in it.
  • PoopWatch will track the dog on the rest of its walk and draw a path from the poop to the dog, so that it can be identified and its owner brought to justice.

PoopWatch is equipped with pressure and temperature sensors across the entire sidewalk area. As weight is applied, the system identifies whether the walker is a human or an animal by counting the pressure points following the path (two for a human, four for an animal). If an animal pauses and leaves something behind, the temperature sensors measure how warm the dropping is to determine whether it is poop. If it is recognized as poop, PoopWatch registers the animal as an offender and follows its footsteps.
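The decision logic described above could be sketched like this (a hypothetical fragment for a fictional system; the names and the 30 °C freshness threshold are all illustrative assumptions):

```cpp
// Hypothetical sketch of the PoopWatch decision logic; all names and the
// temperature threshold are illustrative, not a real system.
enum class Walker { Human, Animal, Unknown };

// Two pressure points moving along the path means a biped, four a quadruped.
Walker classifyWalker(int pressurePoints) {
    if (pressurePoints == 2) return Walker::Human;
    if (pressurePoints == 4) return Walker::Animal;
    return Walker::Unknown;
}

// A fresh dropping is near body temperature; colder objects (a dropped
// glove, a leaf) are ignored. 30 degrees Celsius is an assumed threshold.
bool isPoop(float objectTempCelsius) {
    return objectTempCelsius > 30.0f;
}

// An offence is flagged only when an animal leaves a warm dropping behind.
bool isOffence(int pressurePoints, float objectTempCelsius) {
    return classifyWalker(pressurePoints) == Walker::Animal
        && isPoop(objectTempCelsius);
}
```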

The PoopWatch sidewalk consists of state-of-the-art magic asphalt which changes color when heat is applied to it. A network of heating pipes is embedded in the asphalt and controlled by a computer system which can independently address every square inch. This technology is used to draw the poop outline and the dog path once the offending animal has been flagged.

View the PDF

Change Counter

The change counter is an in-class exercise we did for Spatial Media: write an application that counts the change in a given image. We didn’t finish on time and it’s been on my mind, so I decided to have another go at it. Here is the openFrameworks code for the finished app. The sample images can be found here.

testApp.h

#ifndef _TEST_APP
#define _TEST_APP
 
#define WHITE_THRESHOLD 244
#define AREA_THESHOLD 50
#define COLOR_THESHOLD 10
 
#include "ofMain.h"
 
class coin {
    public:
        void    load(string imgName);
 
        int     area;
        int     color;
};
 
class change {
    public:
        void    load(string imgName);
        void    seedFill(int x, int y, int tag);
        int     count(coin penny, coin nickel, coin dime, coin quarter);
 
        unsigned char* colorPixels;
        unsigned char* grayPixels;
        unsigned char* tagPixels;
 
        ofImage img;
        int     currTag;
};
 
class testApp : public ofBaseApp {
    public:
        void    setup();
 
        coin    penny;
        coin    nickel;
        coin    dime;
        coin    quarter;
 
        change  inMyPocket;
};
 
#endif

testApp.cpp

#include "testApp.h"
 
//--------------------------------------------------------------
void coin::load(string imgName) {
    // load the image
    ofImage img;
    img.loadImage(imgName);
    unsigned char* pixels = img.getPixels();
 
    // calculate the coin's area and average colour
    area = 0;
    color = 0;
    int numPixels = img.width * img.height;
    for (int i=0; i < numPixels; i++) {
        int pixelAvg = pixels[3*i+2];  // blue channel only: low for copper, high for silver
        if (pixelAvg < WHITE_THRESHOLD) {
            area++;
            color += pixelAvg;
        }
    }
    color /= area;
 
    printf("[area: %i  color: %i]\n", area, color);
}
 
//--------------------------------------------------------------
void change::load(string imgName) {
    // load the image
    img.loadImage(imgName);
    colorPixels = img.getPixels();
 
    // init the grayscale and the tags
    grayPixels = new unsigned char[img.width*img.height];
    tagPixels = new unsigned char[img.width*img.height];
    for (int i=0; i < img.width * img.height; i++) {
        int pixelAvg = (colorPixels[3*i] + colorPixels[3*i+1] + colorPixels[3*i+2]) / 3;
        grayPixels[i] = pixelAvg;            
        tagPixels[i] = 0;
    }
 
    // find the blobs
    printf("Coin Detection\n");
    currTag = 0;
    for (int i=1; i < img.width-1; i++) {
        for (int j=1; j < img.height-1; j++) {
            if (grayPixels[j*img.width + i] < WHITE_THRESHOLD && tagPixels[j*img.width + i] == 0) {
                currTag++;
                printf("> Found a coin with color %i at (%i, %i)!\n", grayPixels[j*img.width + i], i, j);
                seedFill(i, j, currTag);                
            }
        }
    }
}
 
//--------------------------------------------------------------
void change::seedFill(int x, int y, int tag) {
    if (grayPixels[y*img.width + x] < WHITE_THRESHOLD) {
        tagPixels[y*img.width + x] = tag;
 
        if (y > 0 && tagPixels[(y-1)*img.width + x] == 0)             // north
            seedFill(x, y-1, tag);  
        if (y < img.height-1 && tagPixels[(y+1)*img.width + x] == 0)  // south
            seedFill(x, y+1, tag);
        if (x < img.width-1 && tagPixels[y*img.width + (x+1)] == 0)   // east
            seedFill(x+1, y, tag);
        if (x > 0 && tagPixels[y*img.width + (x-1)] == 0)             // west 
            seedFill(x-1, y, tag);
    }
}
 
//--------------------------------------------------------------
int change::count(coin penny, coin nickel, coin dime, coin quarter) {
    // identify the coins
    int** coinAreas = new int*[currTag];
    for (int i=0; i < currTag; i++) {
        coinAreas[i] = new int[2];
        coinAreas[i][0] = 0;
        coinAreas[i][1] = 0;
    }
    for (int i=0; i < img.width * img.height; i++) {
        if (tagPixels[i] != 0) {
            coinAreas[tagPixels[i]-1][0]++;
            int pixelAvg = colorPixels[3*i+2];  // blue channel, as in coin::load
            coinAreas[tagPixels[i]-1][1] += pixelAvg;
        }
    }    
    for (int i=0; i < currTag; i++) {
        coinAreas[i][1] /= coinAreas[i][0];
    }
 
    // count the value of all the change
    printf("Coin Identification\n");
    int value = 0;
    for (int i=0; i < currTag; i++) {
        if (ABS(coinAreas[i][0] - quarter.area) < AREA_THESHOLD) {
            printf("> Found a quarter");
            value += 25;
        }
        else if (ABS(coinAreas[i][0] - dime.area) < AREA_THESHOLD) {
            printf("> Found a dime");
            value += 10;
        }
        else if (ABS(coinAreas[i][1] - penny.color) < COLOR_THESHOLD) {
            printf("> Found a penny");
            value += 1;
        } 
        else {
            printf("> Found a nickel");
            value += 5;
        }
        printf(" [%i - %i]\n", coinAreas[i][0], coinAreas[i][1]);
    }
 
    printf("Found %i coins worth %i cents!\n", currTag, value); 
 
    return value;
}
 
//--------------------------------------------------------------
void testApp::setup() {
    printf("Coin Reference\n");
    printf("> Penny: ");
    penny.load("image1.jpg");
    printf("> Nickel: ");
    nickel.load("image2.jpg");
    printf("> Dime: ");
    dime.load("image3.jpg");
    printf("> Quarter: ");
    quarter.load("image4.jpg");
 
    printf("---------------------------\n");
    inMyPocket.load("image5.jpg");
    inMyPocket.count(penny, nickel, dime, quarter);
 
    printf("---------------------------\n");
    inMyPocket.load("image6.jpg");
    inMyPocket.count(penny, nickel, dime, quarter);
 
    printf("---------------------------\n");
    inMyPocket.load("image7.jpg");
    inMyPocket.count(penny, nickel, dime, quarter);
 
    printf("---------------------------\n");
    inMyPocket.load("image8.jpg");
    inMyPocket.count(penny, nickel, dime, quarter);
 
    std::exit(0);
}
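One caveat with the recursive seedFill above: every filled pixel adds a stack frame, so a very large blob could overflow the call stack. A sketch of an equivalent iterative version, written as a standalone function that takes the image dimensions as parameters instead of reading the oF image members:

```cpp
#include <vector>
#include <utility>

#define WHITE_THRESHOLD 244  // same value as in testApp.h

// Iterative flood fill: same result as the recursive seedFill, but the
// pending pixels live in a heap-allocated stack instead of the call stack.
void seedFillIterative(unsigned char* grayPixels, unsigned char* tagPixels,
                       int width, int height, int x, int y, int tag) {
    std::vector<std::pair<int, int>> stack;
    stack.push_back({x, y});
    while (!stack.empty()) {
        auto [cx, cy] = stack.back();
        stack.pop_back();
        int idx = cy * width + cx;
        if (grayPixels[idx] >= WHITE_THRESHOLD || tagPixels[idx] != 0) continue;
        tagPixels[idx] = tag;
        if (cy > 0)          stack.push_back({cx, cy - 1});  // north
        if (cy < height - 1) stack.push_back({cx, cy + 1});  // south
        if (cx < width - 1)  stack.push_back({cx + 1, cy});  // east
        if (cx > 0)          stack.push_back({cx - 1, cy});  // west
    }
}
```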

flora Vine Tests

The following two sketches are vine tests for the flora project. The final design will probably have to incorporate elements from both these applets.

This first sketch is a curve-fitting example. It draws a path following the mouse, adding a key point whenever the mouse has moved far enough from the last key point in a direction different enough from the previous segment.

ArrayList pts;
PVector lastPt, beforeLastPt;
int distThreshold = 50;
float slopeThreshold = 0.5;
 
void setup() {
  size(400, 400);
  smooth();
 
  pts = new ArrayList();
  lastPt = new PVector();
  beforeLastPt = new PVector();
}
 
void draw() {
  // clear the screen
  background(255);
 
  // draw a path through all key points
  beginShape();
  for (int i=0; i < pts.size(); i++) {
    PVector currPt = (PVector)pts.get(i);
    // draw the key point
    noStroke();
    fill(0);
    ellipse(currPt.x, currPt.y, 5, 5);
    // add the vertex to the path
    noFill();
    stroke(255, 0, 0);
    curveVertex(currPt.x, currPt.y);
  }
  endShape();
 
}
 
void mouseDragged() {
  // if the new point is far enough from the last key point...
  if (dist(mouseX, mouseY, lastPt.x, lastPt.y) > distThreshold) {
    // ...and there are less than 2 key points total
    // OR
    // ...the slope between the new point and the last key point is
    //    different enough from the slope between the last 2 key points...
    if (pts.size() < 2 ||
        abs(slope(mouseX, mouseY, lastPt.x, lastPt.y) - slope(lastPt.x, lastPt.y, beforeLastPt.x, beforeLastPt.y)) > slopeThreshold) {
      // ...add a new key point
      beforeLastPt = lastPt;
      lastPt = new PVector(mouseX, mouseY);
      pts.add(lastPt);
    }
  }
}
 
float slope(float x1, float y1, float x2, float y2) {
  if (x2-x1 == 0) return 0;  // avoid division by 0!
  return (y2-y1)/(x2-x1);
}
 
void keyPressed() {
  if (key == ' ') {
    // reset all the key points
    pts.clear();
    lastPt = new PVector();
    beforeLastPt = new PVector();
  }
}

This second sketch draws a vine using circles placed along a bezier path. The circles have varying diameters, which results in a tapering effect. It also adds leaves at regular intervals on the curve.

taperedBranch 

int ax1 = 385;
int ay1 = 20;
int cx1 = 10;
int cy1 = 10;
 
int ax2 = 15;
int ay2 = 380;
int cx2 = 390;
int cy2 = 390;
 
void setup() {
  size(400, 400);
  smooth();
}
 
void draw() {
  background(255);
 
  ax1 = mouseX;
  ay1 = mouseY;
 
  noFill();
  stroke(128);
  bezier(ax1, ay1, cx1, cy1, cx2, cy2, ax2, ay2);
  stroke(128, 0, 0);
  //line(ax1, ay1, cx1, cy1);
  //line(cx2, cy2, ax2, ay2);
 
  noStroke();
  int steps = 300;
  int leafSteps = 6;
  boolean leafDir = true;
  float px = 0;
  float py = 0;
  for (int i = 0; i <= steps; i++) {
    float t = i / float(steps);
    float x = bezierPoint(ax1, cx1, cx2, ax2, t);
    float y = bezierPoint(ay1, cy1, cy2, ay2, t);
    float s = sin(i*PI/(steps*2))*10;
    fill(#743632);
    noStroke();
    ellipse(x, y, s, s);
 
    if (i%leafSteps == 1) {
      float a = atan2(y - py, x - px);
      leaf(x, y, leafDir? a+PI*3/4 : a+PI*1/4, leafDir? s*3 : -s*3);
      leafDir = !leafDir;
    }
 
    px = x;
    py = y;
  }
}
 
void leaf(float x, float y, float r, float s) {
  stroke(#07903D);
  fill(#1AEA11);
 
  pushMatrix();
    translate(x, y);
    rotate(r);
    beginShape();
      curveVertex(0, 0);
      curveVertex(0, 0);
      curveVertex(s/2, -s/4);
      curveVertex(s, 0);
      curveVertex(s, 0);
      curveVertex(s/2, s/4);
      curveVertex(0, 0);
      curveVertex(0, 0);
    endShape();
    line(s/4, 0, s*3/4, 0);
  popMatrix();
}

flora Technical Document

Software
We are aiming to code the front-end of the piece (graphics and animation) in Processing/Java and the back-end (blob detection and tracking) in openFrameworks/OpenCV, and have both components communicate with each other using the OSC protocol. The information to be transmitted will be very minimal: a list of blob IDs and xy-coordinates. If Processing turns out to be too slow, we will convert the front-end to openFrameworks/OpenGL code and have both components running in the same application.

Computer Graphics
The growth algorithm will consist of a central vine which follows the path drawn by the fingers. The direction of the growth is determined by generating a path following a finger as it travels across the surface. To keep the path as smooth as possible, the following algorithm registers as few control points as possible, using threshold values for the distance and slope between the last few points.

IF (the new point is far enough from the last key point) {
    IF (there are less than 2 key points total) {
        add a new key point at the end of the list;
    }
    ELSE IF (the slope between the new point and the last key point is
             different enough from the slope between the last 2 key points) {
        add a new key point at the end of the list;
    }
}
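For reference, the same filter as standalone C++ (a minimal Point struct is assumed, with the threshold values taken from the vine test sketch):

```cpp
#include <vector>
#include <cmath>

// Sketch of the key-point filter above; the thresholds match the
// Processing vine test (50 px distance, 0.5 slope difference).
struct Point { float x, y; };

const float DIST_THRESHOLD  = 50.0f;
const float SLOPE_THRESHOLD = 0.5f;

float slope(const Point& a, const Point& b) {
    if (b.x - a.x == 0) return 0;  // avoid division by zero
    return (b.y - a.y) / (b.x - a.x);
}

// Appends (x, y) to the path only if it is far enough from the last key
// point AND changes direction enough relative to the previous segment.
void addKeyPoint(std::vector<Point>& pts, float x, float y) {
    Point p = {x, y};
    if (pts.size() >= 1) {
        const Point& last = pts.back();
        if (std::hypot(x - last.x, y - last.y) <= DIST_THRESHOLD) return;
        if (pts.size() >= 2) {
            const Point& beforeLast = pts[pts.size() - 2];
            if (std::fabs(slope(last, p) - slope(beforeLast, last)) <= SLOPE_THRESHOLD)
                return;
        }
    }
    pts.push_back(p);
}
```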

The leaf/flower particles will be created in an external graphics program and imported into flora as SVG graphics. These will be placed along the central vine, rotated and scaled depending on their position in the overall plant. The particle animations will be computed automatically based on keyframes for the birth, alive, and dead states. Pre-rendered animations cannot be used because the growth and nurturing features require stopping at any intermediate state between birth, alive, and dead. The keyframes will consist of positions for a fixed number of control points which determine the shape of the particle. Each control point will have a birth, alive, and dead position, and will tween from its current position to one of these targets depending on the triggered animation.
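The tweening idea could be sketched as follows (the ControlPoint layout and the speed parameter are assumptions, not the final design). Because each step moves from the *current* position toward the target, an animation can be interrupted mid-flight and retargeted at any time:

```cpp
// Hypothetical sketch of control-point tweening: each point lerps from
// wherever it currently is toward the keyframe target of the triggered
// state, so animations can be stopped or redirected at any moment.
struct ControlPoint {
    float x, y;            // current position
    float birthX, birthY;  // keyframe targets
    float aliveX, aliveY;
    float deadX, deadY;
};

// One tween step; speed in (0, 1] controls how fast the point converges.
void tweenToward(ControlPoint& p, float targetX, float targetY, float speed) {
    p.x += (targetX - p.x) * speed;
    p.y += (targetY - p.y) * speed;
}
```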

Surface
We will use a single horizontal surface as the touch interface and the display, which will be cut to custom shape as represented in the design document. This surface will consist of a sheet of acrylic covered with a thin film such as vellum or mylar. The purpose of the film is twofold:
• It will make the surface more pleasant to touch and will allow the fingers to move smoothly across it.
• It will act as a diffuser for the IR light if we end up using the Diffused Illumination technique for touch detection (see Sensing Device section).

Display Device
The application will be projected on the touch/display surface using a projector. Depending on the setup used (see Sensing Device section), the image will either be front-projected or rear-projected.

Sensing Device
Touch detection can be achieved using one of the two following options.
The first is to build a touch interface following the Diffused Illumination technique. This works by shining infrared light using IR emitters below the screen, which is covered by a diffuser such as vellum or mylar. When the finger touches the surface, it reflects more light than the diffuser, and can therefore be detected by the camera. The camera and projector used to display the software are both also placed below the screen. The advantage of using this technique is that we can accurately detect points of contact on the touch surface, and that we do not have to worry about shadows covering the projected image. The disadvantage is that the table needs to be built, and this will cost time and money.

The second option is to use the pre-existing setup at ITP. Because the projector and camera are on top of the surface, the user’s entire arm will be tracked and not just the point of contact with the screen. One solution to this problem would be to track from which edge the arm enters the tracked area and to assume the finger is on the opposite end. For example, if the arm comes in from the right, we know that the fingers are at the leftmost edge of the tracked blob. This however does not solve the problem of the shadows covering the projected image. We also cannot detect the moment when the finger touches or releases the screen, which is necessary for our interface. This can be solved by installing a contact mic on the surface, which could set a flag in the software whenever it “hears” the finger tapping the surface.

Computer Sensing
The computer will identify blobs using a background subtraction algorithm. The blobs will be tracked and identified so that their movement can be followed from frame to frame. This will be done using a customized version of Stefan Hechenberger’s extension to the openFrameworks OpenCV libraries. The way the tracker works is by comparing the current list of blobs with the list from a previous frame, and assigning the same ID to those that are near each other. This is limited by a maximum distance a blob can travel per frame, and inconsistencies are resolved by using more than one previous frame when performing the comparison. Once the blobs are identified, their position will be transmitted to the front-end software, which will generate graphics.
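The matching step can be sketched as a nearest-neighbour search (the Blob struct and the distance cap are simplifying assumptions; the actual tracker builds on the OpenCV extension mentioned above):

```cpp
#include <vector>
#include <cmath>

// Sketch of the matching step: each current blob inherits the ID of the
// closest previous blob within maxDist; otherwise it gets a fresh ID.
struct Blob { int id; float x, y; };

void matchBlobs(const std::vector<Blob>& prev, std::vector<Blob>& curr,
                float maxDist, int& nextId) {
    for (Blob& b : curr) {
        float best = maxDist;
        b.id = -1;
        for (const Blob& p : prev) {
            float d = std::hypot(b.x - p.x, b.y - p.y);
            if (d < best) { best = d; b.id = p.id; }
        }
        if (b.id == -1) b.id = nextId++;  // no old blob close enough: new ID
    }
}
```

A real tracker must also resolve cases where two current blobs claim the same ID, which is where comparing against more than one previous frame comes in.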

Points of Failure
• The background subtraction could stop functioning properly if the light conditions in the room change. If this is the case, we would need to research adaptive background subtraction techniques.
• The blob detection could be imprecise if the blobs and the background have similar colour. This potential bug can be eliminated if we use the Diffused Illumination setup.
• The software may lag if there are many input points and/or growths at the same time. If this happens in Processing, we can switch to openFrameworks for a speed increase. If the problem is still not solved, we may have to resort to OpenGL vertex buffers and display lists to move more processing onto the graphics card.
• The way the directions will be conveyed may not be as clear as we want it to be. User-testing and multiple iterations will eventually solve this problem.

View the PDF

Manners, Please

Invited your boss for dinner to wow him into giving you that new promotion?
Need to persuade your mother-in-law that you are good enough for her daughter?
Having your dentist over for brunch to solicit capital for your latest great movie script?

Then Manners, Please is the dining table enhancement for you!

Manners, Please is a projection system that overlays the dining room table, reminding you of good dining etiquette. This intelligent system will give you subtle hints throughout the meal, which will help impress your guests with your impeccable table manners.

Don’t know which fork to use?
Manners, Please will highlight the fork, spoon, or glass you should use corresponding to the food or drink you are currently consuming.

whichFork moreWine

Is one of the guests low on wine?
As the host, it is your duty to make sure everyone’s glass is always full, and Manners, Please helps you always be on your toes by projecting a faint halo around the wine bottle when someone finishes their glass.

Done eating?
Manners, Please will place guides on your plate, helping you place your utensils properly, parallel to one another in the position of ten o’clock to four o’clock.

whenDone

More than a projection
Manners, Please is also a state-of-the-art table, incorporating sophisticated haptic feedback technology, which gently vibrates under your elbows if you carelessly rest them on the table, or under your napkin if you’ve got food on your face and need to wipe it off.

How it works
The Manners, Please hardware consists of a projector and colour camera installed on the ceiling, both facing down and covering the entire dining table area. The table looks like a regular dining table, but it is equipped with vibrating motors along the perimeter hidden underneath the surface, which are used for haptic feedback.

All sensing is done using the camera. Manners, Please uses pattern recognition to determine where objects are placed on the dining room table. In training mode, the system prompts the user to place the items under the camera one at a time, so that their shape and colour can be determined and added to the internal database. When it is active, the system can then recognize tableware, and can tell if glasses or plates are empty based on colour analysis.
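The colour-analysis step might look something like this (purely illustrative; the names and the tolerance value are invented for this fictional system):

```cpp
#include <cmath>

// Illustrative sketch of the colour analysis: compare the average colour
// seen inside a glass's outline against the trained "empty" reference.
// All names and the tolerance are assumptions about a fictional system.
struct Rgb { float r, g, b; };

bool looksEmpty(const Rgb& observed, const Rgb& emptyReference, float tolerance) {
    float dr = observed.r - emptyReference.r;
    float dg = observed.g - emptyReference.g;
    float db = observed.b - emptyReference.b;
    // Euclidean distance in RGB space: small means "matches the empty glass"
    return std::sqrt(dr * dr + dg * dg + db * db) < tolerance;
}
```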

View the PDF

iFlora Design Document

A flourishing interactive experience at the Brooklyn Botanic Garden.

iFlora is a kiosk located in the main hall of the Steinhardt Conservatory at the Brooklyn Botanic Garden. The interactive environment is an introduction to the 4 main pavilions of the Conservatory: a digital representation of what you will experience as you walk through each area.

iFlora is a tabletop touch screen divided into 4 zones, one for each pavilion. It is shaped like a rounded square, with each edge facing the direction of the pavilion it represents. As you drag your finger in each zone, you trigger a particle-based growth animation representative of the type of flora found in that area: sandstorms and cacti for the Desert Pavilion, thick deep-green vines for the Tropical Pavilion, ferns and wild flowers for the Warm Temperate Pavilion, and finally, lily pads and schools of fish for the Aquatic House.

When you place your finger on an empty space on the screen, you give birth to a plant. The longer you hold your finger down, the longer it will grow and the longer it will live. If you drag your finger along the screen, the plant will grow following your path. When you let go, the plant slowly starts dying, but you can help it live longer by touching it and dragging some more to nurture its growth.

If iFlora is idle for several minutes, an animation of a finger tapping appears on screen, indicating to any viewers how to interact with the table. The iconic finger appears in a random position on the screen, generating the growth animation corresponding to its location. As all it does is tap briefly, the generated plant will have a very short life span, enticing any viewers to help it grow by interacting with the table.

iFlora is a playful installation meant to encourage you to visit each pavilion. As no screen-based experience can rival viewing these environments face-to-face, it is not designed using photorealistic visuals, but with highly stylized graphics and animation that are better suited for the screen. It is not meant to be a replacement or a catalog of the Conservatory, but more of an addendum, a teaser of all the awe-inspiring areas surrounding you.

View the PDF (14.3 MB!)

Couch Potato

Couch Potato is an interactive coffee table for the living room, designed to help you make the most out of your relaxation time. Once you sink into the couch, the last thing you want is to have to get back up. That’s where Couch Potato comes in, by centralizing tasks and keeping you comfortably on the couch.

CouchPotato-games

Can’t find the remote?
Couch Potato has an integrated keypad which can be used as a remote controller for most TVs, cable boxes, and DVD players. It can even be configured to control the lights in the room, so you can dim them to an appropriate level when watching a movie.

A little chilly?
Simply use the corresponding slider on Couch Potato to set the desired temperature without having to get up.

Bored?
Couch Potato comes with a variety of games for quality time-wasting such as Solitaire and Minesweeper.

Hungry?
Use Google Maps to find restaurants nearby. You can place your order too, using the keypad to dial the number and the built-in speaker and microphone to talk. You can even set up Couch Potato to buzz the delivery guy in when the doorbell rings!

screenshot-food
Continue reading ‘Couch Potato’