Posts Tagged Portfolio

Controlling a Virtual Camera with FaceTracker

A preliminary test using FaceTracker to control a virtual camera within a 3D gaming or design environment. With some tuning, the effect could be made pretty convincing. The biggest obstacle is that FaceTracker loses non-front-facing faces. Perhaps the face detection training model could be extended. Failing this, it might be helpful to supplement the FaceTracker data with Kinect skeletal tracking to smooth or approximate the head position when necessary.

For this version, I ported ofxFaceTracker to Cinder and wrote a simple camera control system to interface with the FaceTracker data. By shifting control of the camera position toward a motion-vector-based rather than a purely position-driven approach, it should be possible to make brief drops in face tracking less apparent. Some turbulence forces might also mask a freeze in the face-controlled camera data.
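As a rough illustration of the motion-vector idea (in Processing, with hypothetical names and numbers – this is not the ciFaceTracker API):

// Minimal sketch of a velocity-driven virtual camera.
// "headX/headY" stand in for per-frame head coordinates from a
// face tracker; all names and constants here are illustrative.
float camX, camY;          // virtual camera position
float velX, velY;          // camera velocity
float prevHeadX, prevHeadY;
boolean tracking = true;   // false while the tracker has lost the face

void updateCamera(float headX, float headY) {
  if (tracking) {
    // Derive a motion vector from the change in head position...
    velX = lerp(velX, headX - prevHeadX, 0.2);
    velY = lerp(velY, headY - prevHeadY, 0.2);
    prevHeadX = headX;
    prevHeadY = headY;
  } else {
    // ...so that during a dropout the camera coasts and decays
    // rather than snapping or freezing in place.
    velX *= 0.95;
    velY *= 0.95;
  }
  camX += velX;
  camY += velY;
}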

Using Jason Saragih’s FaceTracker library:

web.mac.com/jsaragih/FaceTracker/FaceTracker.html

Adapted for Cinder from Kyle McDonald’s ofxFaceTracker:

github.com/kylemcdonald/ofxFaceTracker

My Cinder port of ofxFaceTracker is now available at:

https://github.com/Hebali/ciFaceTracker

 


2 Comments

MeandriCA by Morgen Fleisig & Patrick Hebron

This project was developed for and presented at the ITP Big Screens 2010 event @ the IAC Video Wall in Chelsea, NY.

The aspect ratio of the IAC wall is 10.625:1, which does not translate well to web video. This video represents the middle third of that screen.

For every one of the 8,160 x 768 pixels on the wall of the IAC, the algorithm evaluates the state of each of the eight neighbors of the center pixel in a nine-square grid. From these evaluations, it generates such natural behaviors as the flooding of lowland areas by rising water levels and the formation of new land masses by sedimentary deposition. The same pattern of behavior exists at multiple scales and can be seen in chemical reactions, culture growth, spillage and lava flows, to name a few. The algorithms themselves can be modified to simulate other variations of these behaviors. To operate in real-time, our algorithm must compute a baseline of 150,994,944 pixel comparisons every frame. To achieve this, we have designed it to take advantage of the computer’s standard processor (CPU) as well as its hardware-accelerated graphics processor (GPU).
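As a rough illustration of the eight-neighbor pattern, here is a toy flooding rule sketched in Processing; the actual piece is implemented in C++ and GLSL, runs at a far larger scale, and its rules are more involved:

// A toy eight-neighbor update in the spirit of the flooding
// behavior described above. Grid size, water level and the rule
// itself are illustrative, not MeandriCA's actual implementation.
int cols = 320, rows = 240;               // toy grid, far smaller than the wall
float waterLevel = 0.5;
float[] terrain = new float[cols * rows]; // per-cell elevation

boolean[] floodStep(boolean[] cur) {
  boolean[] next = new boolean[cols * rows];
  for (int y = 1; y < rows - 1; y++) {
    for (int x = 1; x < cols - 1; x++) {
      int i = y * cols + x;
      boolean wetNeighbor = false;
      // Examine all eight neighbors in the nine-square grid.
      for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
          if (dx == 0 && dy == 0) continue;
          if (cur[(y + dy) * cols + (x + dx)]) wetNeighbor = true;
        }
      }
      // A low-lying cell floods once water reaches any neighbor.
      next[i] = cur[i] || (wetNeighbor && terrain[i] < waterLevel);
    }
  }
  return next;
}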

MeandriCA was created using C++, OpenGL, GLSL and Cinder.

No Comments

Aluminum Foil: My work as a child

aluminumfoil.hebali.com

Here is a slideshow of some of my childhood work. Though I primarily focus on software development in my “adulthood,” my interests have remained much the same. The above image is of a design model for a virtual reality headset I constructed when I was nine or ten.

This collection covers a sample of works from ages 5 to 13.

No Comments

Rock Paper Scissors – Genetic Optimization Visualization

Week 1: “Genetic Optimization Visualization”

For our first session of Games & Art, we looked at Rock Paper Scissors (RPS) as well as at strategies for winning this time-honored game.

In the standard game, one point is awarded for rock beating scissors, paper beating rock or scissors beating paper. With this point assignment, there is no numerical strategy that will create a distinct advantage. Basically, the best way to win is to try to psych the other player out or, to go a bit further, to make your rock, paper or scissors choices as random as possible. In other words, R, P and S should each have a 33.3% chance of being chosen in each round. This makes sense given that RPS is generally thought of as being more along the lines of a coin toss than a game.

If we change the point distributions awarded to a winning R, P or S, however, the possibility of numerical strategies for winning begins to emerge.
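To see this numerically, here is a quick check in Processing, with made-up payoffs: winning with rock scores two points while paper and scissors still score one. Against an opponent who keeps playing uniformly, rock now carries a positive expected score and scissors a negative one, so both players are pushed away from the uniform mix.

// Illustrative payoffs: points for winning with each throw.
float wR = 2, wP = 1, wS = 1;

// Expected score of each throw against an opponent playing
// rock/paper/scissors with probabilities p, q, r (zero-sum).
float evRock(float p, float q, float r)     { return wR * r - wP * q; }
float evPaper(float p, float q, float r)    { return wP * p - wS * r; }
float evScissors(float p, float q, float r) { return wS * q - wR * p; }

void setup() {
  // Against a uniform opponent, rock is now strictly best:
  println(evRock(1/3.0, 1/3.0, 1/3.0));     // +0.333...
  println(evPaper(1/3.0, 1/3.0, 1/3.0));    //  0.0
  println(evScissors(1/3.0, 1/3.0, 1/3.0)); // -0.333...
}

It works out that the zero-sum equilibrium plays each throw in proportion to the winning payoff of the throw it defeats – here 1/4 rock, 1/2 paper, 1/4 scissors – at which point every throw’s expected score returns to zero.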

Read the rest of this entry »

No Comments

All About Digitalis and DEcomp

Over winter break, I decided that this semester I would finally be ready to create the first installment of a film called Digitalis, which I’ve been developing for the past few years. With four months, an 8-core Mac Pro and a lot of work, I’ve learned a ton and have created a short film, called “Digitalis: An Introduction to DEcomp,” as well as the software I used to make the film, DEcomp Visual Toolkit 4.

WATCH “DIGITALIS: AN INTRODUCTION TO DECOMP” ONLINE

VIEW “MAKING OF…” SLIDESHOW

Read the rest of this entry »

No Comments

Conway’s Game of Life

I wrote a simple implementation of Conway’s Game of Life. Available here:

Conway’s Game of Life for Processing

Conway’s Game of Life with Gosper Glider Gun Loader

You can toggle between setup and run modes by pressing the ‘r’ key. The program will open in setup mode. Begin by drawing some initial state on the board (see Wikipedia for ideas), then press ‘r’ to bring the world to life. The ‘c’ key clears the board.
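The heart of the program is the standard birth/survival rule. A minimal sketch of that update in Processing (the posted code wraps this in the setup/run modes described above):

// One generation of Conway's Game of Life, wrapping at the edges.
boolean[][] lifeStep(boolean[][] cur) {
  int w = cur.length, h = cur[0].length;
  boolean[][] next = new boolean[w][h];
  for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
      int n = 0;
      // Count live neighbors, wrapping around the board edges.
      for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
          if (dx == 0 && dy == 0) continue;
          if (cur[(x + dx + w) % w][(y + dy + h) % h]) n++;
        }
      }
      // A live cell survives with 2 or 3 neighbors;
      // a dead cell is born with exactly 3.
      next[x][y] = cur[x][y] ? (n == 2 || n == 3) : (n == 3);
    }
  }
  return next;
}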

Here is a Gosper glider gun, created with the above software:

No Comments

A particle bag and a bag for particles.

When two objects collide with one another in the physical world, nature doesn’t require a different procedure to calculate the collision’s effect upon each possible shape – rectangle, triangle, ellipse and so forth.  Computationally, it is much more straightforward to calculate whether a regular shape such as those listed above has experienced a collision than it is to do so for an irregular shape. Yet, any procedure that works for irregular shapes should also work for regular ones.

As a starting place, I envisioned a dynamic scenario that involved both regular and irregular geometries: a cloth bag filled with marbles being dropped from some height onto the ground. In an attempt to deal with this scenario in a “universal” manner, I decided to think of all parts of this system as being composed of the same type of thing, which we will call particles. The size of a particle can vary, and it can either exist freely or be constrained to other particles. So the marbles are large, unconstrained particles, and the points along the bag’s surface are small particles tied to one another in the sense that one particle and the next cannot be more than a certain distance apart. This constraint can be extended to include the property of stretchiness. From here, we apply any forces – gravity, air turbulence, etc. – to all particles and scale the effect of each force by the mass of each particle.

So far, I have implemented the bag portion of this simulation. Next I will add the marbles. Without the marbles, the bag feels more like a chain necklace. More to come.
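A minimal sketch of the distance constraint between two neighboring bag particles, assuming a simple relaxation approach (the posted Particle Bag 5 code may differ in its details):

// One relaxation pass of the distance constraint between two
// neighboring particles. "stretch" > 0 allows some elasticity.
void constrain(PVector a, PVector b, float restLen, float stretch) {
  PVector delta = PVector.sub(b, a);
  float d = delta.mag();
  float maxLen = restLen * (1 + stretch);
  if (d > maxLen && d > 0) {
    // Pull both particles halfway back toward the allowed distance.
    float correction = (d - maxLen) / d * 0.5;
    a.add(PVector.mult(delta, correction));
    b.sub(PVector.mult(delta, correction));
  }
}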

Here is a video of the bag in action:

http://vimeo.com/9252290

Here is the source code:

Particle Bag 5 source code (Processing)

In the process of creating the particle bag, I did a few tests to get the hang of using polar coordinates. To initialize the bag, I set the particles around the circumference of a circle using polar coordinates. This way of representing coordinates is ideal for the purpose, and I can’t think of an easy or efficient way to achieve the same effect without them. So, for the purpose of meeting the oscillation goals of this week’s assignment, I wrote the following:

View Polar Coordinates Oscillation

Download Polar Coordinates Oscillation source code (Processing)
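For reference, laying particles around a circle with polar coordinates comes down to a couple of lines. This sketch uses an arbitrary particle count and radius:

// Place particles evenly around a circle by converting
// polar coordinates (theta, radius) to Cartesian (x, y).
int numParticles = 60;
float radius = 150;
PVector[] ring = new PVector[numParticles];

void setup() {
  size(400, 400);
  for (int i = 0; i < numParticles; i++) {
    // The angle sweeps the full circle in equal steps.
    float theta = TWO_PI * i / numParticles;
    ring[i] = new PVector(width/2 + radius * cos(theta),
                          height/2 + radius * sin(theta));
  }
}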

No Comments

Controlled Chaos – Particle Walker

The question of whether anything in nature is truly random is a daunting one – perhaps unanswerable in the terms of our present understanding. Yet many things in nature appear random, at least insofar as we can apprehend them. For this assignment, the goal was to visually represent a natural phenomenon without using any programmatically generated “randomness.” This was a difficult challenge, which made clear how readily we look to the notion of randomness in our consideration of the characteristics of the natural world. In answer to this challenge, I created the Particle Walker, which combines some conceptual elements of cellular automata with a simulation of the physical dynamics of elastic collisions.

The Particle Walker procedure:

Upon startup, the user is presented with a motionless walker (a gray square). The user clicks somewhere inside of the walker, drags to another point inside the walker and then releases the mouse to instantiate a single “particle,” which is represented by a small circle. The particle’s initial position is constituted by the coordinates of the initial mouse click. The difference between this point and the location where the mouse button was released constitutes the particle’s velocity vector. The particles have been programmed to have dynamic collisions with the boundaries of the walker and with one another. The particle-particle collision procedure incorporates Conservation of Momentum equations, making the interaction between particles something like that between real-world billiard balls. The user may add as many particles to the walker as he or she wishes and may also change their radii and maximum velocities as well as the size of the walker itself.
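For equal-mass particles, the Conservation of Momentum equations reduce to exchanging the velocity components along the collision normal. A rough sketch of that response in Processing, assuming equal masses for simplicity (not the exact posted code):

// Equal-mass elastic collision response: swap the velocity
// components along the collision normal, leaving the
// tangential components untouched.
void collide(PVector posA, PVector velA, PVector posB, PVector velB) {
  PVector n = PVector.sub(posB, posA);
  n.normalize();
  // Project each velocity onto the collision normal.
  float ua = velA.dot(n);
  float ub = velB.dot(n);
  // For equal masses, conservation of momentum and kinetic
  // energy reduce to swapping the normal components.
  velA.add(PVector.mult(n, ub - ua));
  velB.add(PVector.mult(n, ua - ub));
}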

The image below presents a relatively small group of particles bouncing around a walker: (For demonstration purposes, a particle is filled with red if it has had a collision within a certain number of frames – in this case ten – and is otherwise gray.)

The key to the Particle Walker is that when a particle collides with one of the walker boundaries, it moves the entire walker by a single pixel in one of four directions: up, down, left or right, depending on which boundary is hit. So, if there were only one particle in the system and it were bouncing back and forth between the left and right boundaries, the walker would simply move back and forth by one pixel along the horizontal axis. But with a large number of particles instantiated, the walker’s movement will not be as regular.

We may assume that for a large number of particles viewed over many frames, there is likely to be a more or less equal distribution in the number of collisions that each of the four boundaries experiences. Yet, over the course of a few frames, an even balance is by no means assured. For example, it is possible that four particles will hit the left wall before any hit the right. From this, some quite irregular motion starts to emerge, something which feels a lot like Brownian Motion. Of course, there is nothing random about the Particle Walker’s motion; it is entirely deterministic. For any particular time value, we could extract the position and dimensions of the walker as well as the positions, velocity vectors and radii of the particles, and calculate any future state of the system from these data points. The irregularity of the walker’s motion can therefore be attributed to the complexity of the initial state – the positions and velocities of the particles and walker at the time of their instantiation – being repeatedly compounded by the dynamics simulation over many successive states.
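In code, the walker rule is tiny. The sketch below pairs each boundary with a one-pixel nudge; which wall maps to which direction is my guess here, not necessarily the posted mapping:

// Reflect a particle off the walker boundaries and nudge the
// whole walker one pixel per hit (sketch only).
int walkerX, walkerY;
int walkerW = 200, walkerH = 200;

void bounce(PVector pos, PVector vel) {
  if (pos.x < walkerX)           { vel.x *= -1; walkerX -= 1; }
  if (pos.x > walkerX + walkerW) { vel.x *= -1; walkerX += 1; }
  if (pos.y < walkerY)           { vel.y *= -1; walkerY -= 1; }
  if (pos.y > walkerY + walkerH) { vel.y *= -1; walkerY += 1; }
}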

The two images below were generated by drawing the walker’s path (as represented by the 2D position of its center) over the course of many frames. The characteristics of the generated image will be affected by the size of the walker and particles as well as the maximum velocity and number of particles in the system. The line has been shaded to show its temporal path – a darker portion represents an earlier frame and bright green represents a later one.

An image generated by the Particle Walker.

A variation using different parameters.

Download Particle Walker source code (Processing).

View Particle Walker web applet (may not work in all browsers).

A question that arises from this: even though the system is entirely deterministic, so that all future states can be calculated from complete knowledge of any previous state, can a particular state be calculated from an earlier state in fewer steps than it would take to solve each state between the two in question? I believe the answer is no.

2 Comments

3D Scanning and Re-Presentation Final Projects

In the past few weeks, I have taken a whirlwind tour through the worlds of 3D scanning & position/orientation sensing. The product of this study is still a work in progress.

Here is my project concept statement:

If Photorealism is only one of many possible styles of painting, then the “realism” of our natural vision must similarly be one of many possible modes of realism. There is no reason to suspect that our natural vision presents the world as it really is. We are abstractly aware of this and yet are limited in our ability to experience other modes. It is the goal of this project to enable the user to experience a physical environment in a visual style which differs from his or her natural mode of vision. This is achieved through the use of a 3D scanner, which images a physical space and provides a software system with a cloud of three-dimensional coordinates. The software uses geometric analysis to interpret these points into an alternate view of the space – a view which may be described as a blocky, Lego-like version of the physical environment. This alternate view is then sent to the user’s head-mounted display. A 3D positioning and orientation sensor is used to locate the user within the depicted space. This system is intended for use in an interactive cinema experience, entitled “Parallax Digitalis,” which will use prerecorded and interactive sequences to convey the story of a young man entrapped by the false notion that he is a deity. The current implementation of this project serves as a proof-of-concept for my longtime theoretical exploration of the possible uses of descriptive geometry in the telling of cinematic narratives.
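One plausible way to produce the blocky view described above is to bin the scanner’s point cloud into a coarse voxel grid and draw a cube wherever a bin collects enough points. The sketch below illustrates only that binning idea, not the project’s actual geometric analysis; the voxel size is arbitrary:

// Bin a 3D point cloud into a coarse voxel grid (sketch only).
import java.util.HashMap;

float voxelSize = 10.0;
HashMap<String, Integer> bins = new HashMap<String, Integer>();

void binPoints(PVector[] cloud) {
  for (PVector p : cloud) {
    // Quantize each coordinate to its voxel index.
    String key = floor(p.x / voxelSize) + "," +
                 floor(p.y / voxelSize) + "," +
                 floor(p.z / voxelSize);
    Integer n = bins.get(key);
    bins.put(key, n == null ? 1 : n + 1);
  }
}
// Drawing: for each bin with enough points, translate to the
// bin's center and call box(voxelSize).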

Here are the results so far:

Bust (Original Data) Card Detection

Bust Stage Decomposition

Building Blocks

And the steps taken to reach these results… (continue on next page)

Read the rest of this entry »

2 Comments

The Visible Human

870 slices. 100,800 pixels per slice. Potentially 87,696,000 data points. Of course, we don’t need all of them. Some are background and others represent volumetrically hidden tissue. Since deep tissue tended to be the most red, we employed some color filtering. This was not enough, so we added some basic edge detection. Within a particular slice, as the loader moves across a row, it checks the color value difference between one column and the next. If the difference is substantial, the pixel is added to the data set for display. Even so, the current implementation will run on my Mac Pro but not on my MacBook: the loaded dataset is about 300,000 points, but all 87.6 million candidate pixels must be processed during loading.

For stylization, on each iteration of the draw loop we display about 1/6 of the total number of points, selected randomly each time. This gives the model an animated quality and makes it feel less congested, so that as the viewer approaches, he or she may peer into the model’s internal organs and skeletal system.
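The row-wise difference test might look something like the following sketch; the threshold value here is illustrative, not the one we used:

// Keep a pixel when its color jumps relative to the previous
// column in the same row of a slice.
float threshold = 60;

boolean keepPixel(PImage slice, int x, int y) {
  if (x == 0) return false;
  color a = slice.get(x - 1, y);
  color b = slice.get(x, y);
  // Sum the channel-wise differences and compare to the threshold.
  float diff = abs(red(b) - red(a)) +
               abs(green(b) - green(a)) +
               abs(blue(b) - blue(a));
  return diff > threshold;
}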

No Comments