Every few weeks, though, I find myself porting an openFrameworks addon to Cinder. And now that there’s ofxAddons, a site that tracks every addon on GitHub, my porting has become even more frequent. Some of these are easy enough: ofVec3f becomes Vec3f, ofColor becomes ColorA, etc. But more complex addons like ofxOpenNi or ofxAssimpModelLoader are time-consuming or difficult to port. Of course, the Cinder community has its own “addons” (which are instead called “Blocks”). But there are fewer of them, they often take longer to reach the community and in some cases they are harder to wrangle. Cinder’s OpenNi block in particular is a nightmare to get set up.

I believe the greatest strength of each framework is also its greatest weakness. openFrameworks is extensively documented and supported by a large and active community. This means a constant stream of new features, but also that the tools themselves sometimes feel like a disorganized hodgepodge. Cinder is supported by a smaller community, which is rigorous about API standards and best practices. However, this often translates to month-long forum debates that do not result in any new features. So we beat on.

I’ve learned a great deal about programming by reading the Cinder source code and that nagging feeling of wanting to do things properly and efficiently keeps me a Cinder devotee. But this week, something happened that changed my attitude: I met Polycode.

I’d been aware of Polycode since it first surfaced, but only gave it proper attention for the first time this week. I was looking for inspiration while implementing the 3D environment and scenegraph components of my software project, Foil. I was impressed with Polycode’s Assimp-based 3D model loader, scenegraph structure, Bullet-based physics simulation and overall architecture. For a moment, I considered making the switch. But then, I began to find that Polycode was extremely slow in loading large skeletal animation files and started looking elsewhere for solutions.

This brought me back to openFrameworks, which unsurprisingly has a very nice Assimp loader addon. My first impulse was to port ofxAssimpModelLoader to Cinder. After a bit of frustration and wasted time, I decided to put Cinder aside temporarily and try working in openFrameworks instead.

Within a day, I had connected ofxAssimpModelLoader, ofxBullet and ofxTimeline with openFrameworks’ built-in ofLight, ofCamera, etc. classes to reproduce the majority of Polycode’s 3D model/skeleton, physics and animation functionality. Over the course of the next few days, I will integrate addons including ofxTerminal, ofxAssimpOpenNISkeletonSync, ofxOpenSteer/ofxPathfinding/ofxFlock, ofxParticleSystem, ofxFaceTracker and ofxOpenNi to produce a scriptable 3D modeling/animation/gaming environment, potentially called “ofxGoneWild” or maybe “ofxKitchenSink.” ofxAddons galore! There is some work in connecting these pieces, and without a fair amount of C++ and Xcode experience, it would be difficult. Yet, given the volume of features that comes with these addons, it’s a pretty quick process.

I still love Cinder and will continue to work in it. But the draw of openFrameworks’ vast collection of addons is immense. The ability to build a custom physics-simulating, keyframe-animated 3D environment in a week is earth-shattering. It’s not Maya, but it’s an extremely good start. With the support of Assimp and Bullet, openFrameworks provides access to the same model loader and physics engine that are available in Maya 2013. So, for this project at least, I’m with openFrameworks.

(A very early screenshot of ofxGoneWild.)

A preliminary test using FaceTracker to control a virtual camera within a 3D gaming or design environment. With some tuning, the effect could be made pretty convincing. The biggest obstacle is that FaceTracker loses non-front-facing faces. Perhaps the face detection training model could be extended. Failing this, it might be helpful to supplement the FaceTracker data with Kinect skeletal tracking to smooth or approximate the head position when necessary.

For this version, I ported ofxFaceTracker to Cinder and wrote a simple camera control system to interface with FaceTracker data. By shifting control of the camera position towards a motion-vector-based rather than a purely position-driven approach, it should be possible to make brief drops in face tracking less apparent. Some turbulence forces might also mask a freeze in face-controlled camera data.
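The motion-vector idea above can be sketched in a few lines. This is an illustrative reduction to 2D, not the actual control system: the struct name, the smoothing constant and the update scheme are all my own. The key property is that when tracking drops out (deltas go to zero), the camera coasts and decelerates instead of freezing.

```cpp
#include <cassert>

// Hypothetical smoothing of face-tracker deltas: rather than snapping the
// camera to the measured face position, we accumulate a smoothed motion
// vector. A brief tracking dropout (delta = 0) then lets the camera coast
// and decelerate rather than halt abruptly.
struct SmoothedCamera {
    float x = 0.0f, y = 0.0f;    // camera position
    float vx = 0.0f, vy = 0.0f;  // smoothed motion vector
    float smoothing = 0.8f;      // 0 = follow raw deltas, 1 = never update

    // dx, dy: frame-to-frame change in tracked face position (0 when lost)
    void update(float dx, float dy) {
        vx = smoothing * vx + (1.0f - smoothing) * dx;
        vy = smoothing * vy + (1.0f - smoothing) * dy;
        x += vx;
        y += vy;
    }
};
```

A turbulence force, as mentioned above, could be added to `update` as a small random term on `vx`/`vy` to further disguise frozen input.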

Using Jason Saragih’s FaceTracker library:

web.mac.com/jsaragih/FaceTracker/FaceTracker.html

Adapted for Cinder from Kyle McDonald’s ofxFaceTracker:

github.com/kylemcdonald/ofxFaceTracker

My Cinder port of ofxFaceTracker is now available at:

https://github.com/Hebali/ciFaceTracker


As the name suggests, a 3-strip film process is one in which the red, green and blue component images are recorded onto three separate sheets of black & white film. Actually, the first version of this process, introduced by Technicolor in 1916, was a two-color system (red and green). The basic idea is the same for both systems: an image enters the camera’s lens and passes through a beam-splitter, which directs the component beams through two or three color filters and onto separate film emulsions. Naturally, this process made for a bulky camera and used 2x or 3x as much film stock as a B&W film. Nonetheless, it was an incredible innovation and Walt Disney and others quickly put it to good use. This, of course, was later replaced by color film that allowed the color components to be layered on top of one another on a single sheet of film. In the digital era, this 3-strip idea found new life. Many high-quality digital motion picture cameras still use 3 component image sensors and a beam-splitter. These are commonly called 3CCD cameras.

To imitate this process, I set up my camera on a steady tripod and obtained three sheets of color filter material: red, green and blue. As Eric told us, the color filters generally used in digital image sensors tend not to be rich, saturated reds, greens and blues, but instead more pastel, so as not to produce true color separations and instead allow some channel blending at the sub-pixel level. In this process, I used saturated filters that I hoped would produce stark color separations. I put the camera on a timer and shot a very still scene (an empty room) three times in succession, each time with a new color filter in front of the lens. I repeated this process twice; the second time, I included a gray card in the shot so that I could do some calibration later. Here are the results:

The bottom image is much better than the top. This is largely because the gray card allowed me to calibrate more precisely, but also because the original component exposures for the bottom image are better. Overall, the color accuracy relative to the original scene is pretty good and could certainly be fine-tuned further with more extensive testing.

This project was developed for and presented at the ITP Big Screens 2010 event @ the IAC Video Wall in Chelsea, NY.

The aspect ratio of the IAC wall is 10.625:1, which is not very conducive to the web. This video represents the middle third of that screen.

For every one of the 8,160 x 768 pixels on the wall of the IAC, the algorithms evaluate the state of each of the eight neighbors of the center pixel in a nine-square grid to generate such natural behaviors as the flooding of lowland areas by rising water levels and the reformation of new land masses by sedimentary deposition. The same pattern of behavior exists at multiple scales, and can be seen in chemical reactions, culture growth, spillage, and lava flows, to name a few. The algorithms themselves can be modified to simulate any variation. To operate in real-time, our algorithm must compute a baseline of 150,994,944 pixel comparisons every frame. To achieve this, we have designed our algorithm to take advantage of the computer’s standard processor (CPU) as well as its hardware-accelerated graphics processor (GPU).
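The per-pixel neighbor evaluation can be sketched as a plain CPU pass. This is a toy stand-in for the actual rules, not MeandriCA's implementation: a dry cell floods when enough of its eight neighbors in the nine-square (Moore) neighborhood are water. The grid representation, function name and threshold are all illustrative; the real piece runs an equivalent pass per-pixel on the GPU.

```cpp
#include <vector>

// One step of a toy flooding rule over a w-by-h grid of cells
// (1 = water, 0 = dry land). Each cell counts its eight neighbors;
// a dry cell becomes water when at least `threshold` neighbors are wet.
std::vector<int> floodStep(const std::vector<int>& grid, int w, int h, int threshold) {
    std::vector<int> next = grid;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int wet = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;   // skip the center cell
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    wet += grid[ny * w + nx];
                }
            }
            if (grid[y * w + x] == 0 && wet >= threshold)
                next[y * w + x] = 1;                    // lowland floods
        }
    }
    return next;
}
```

Reading from `grid` and writing into `next` keeps the update synchronous, so every cell sees the same generation of neighbors, which is what makes a GPU port straightforward.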

MeandriCA was created using C++, OpenGL, GLSL and Cinder.

In all stages of this project’s development, I believe it will be important to consistently reconcile the visual and conceptual elements with one another. I am hoping to strike a delicate balance between the light-hearted, playful tone found in Disney or Pixar animated films and the understated presentation of natural wonder that can be found in a few good nature documentaries. At the same time, I am building the system as an interactive one. For this purpose, I am looking to the books *Theory of Games and Economic Behavior* by John von Neumann and Oskar Morgenstern and *Evolution and the Theory of Games* by John Maynard Smith. Through these books, I am exploring how concepts of evolutionarily stable systems could be applied to the gameplay mechanics of my project. A few more visual and conceptual influences are depicted here:

Click to enlarge. Conway’s Game of Life being computed on the graphics card for 8160×768 pixels at 60 frames per second. The full frame is rendered to a frame buffer; only the left 1360×768 is drawn to the screen.

Here is a slideshow of some of my childhood work. Though I primarily focus on software development in my “adulthood,” my interests have remained much the same. The above image is of a design model for a virtual reality headset I constructed when I was nine or ten.

This collection covers a sample of works from ages 5 to 13.

Here are the rules:

- Each player receives seven cards, the rest remain in the deck.
- Turns go clockwise around the circle.
- The game ends when one player holds a complete sorted deck. If something prevents this from happening, the game should be replayed from the beginning.
- During a player’s turn, he/she may ask one other player if that player holds a particular card (the 5 of Hearts, for instance). If the other player holds the desired card, they must hand it over along with any other cards that are “sequential” with the one that was requested. (See more on “sequential” cards below). If the other player does not hold the requested card, the player takes one card from the deck. In either case, the player does not go again until every other player has had a turn.
- SEQUENTIAL CARDS: Players should order the cards in their hand in the following manner: Hearts then Diamonds then Clubs then Spades and from Two to Ace within each suit. Sequential cards are ones that touch one another within this ordering. So if a player is asked for a 4-D (4 of Diamonds) and their hand contains: J-H, K-H, A-H, 2-D, 3-D, 4-D, 6-D, they would have to hand over everything except for J-H and 6-D.
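The "sequential cards" hand-over can be expressed compactly in code. This sketch is my own, not part of the game's rules: cards are numbered 0..51 in the ordering the rules describe (Hearts, Diamonds, Clubs, Spades; Two..Ace within each suit, so 0 = 2-H, 12 = A-H, 13 = 2-D, ...). "Sequential" cards are then simply consecutive integers, and the hand-over is the maximal consecutive run containing the requested card.

```cpp
#include <algorithm>
#include <vector>

// Return every card the asked player must hand over: the requested card
// plus the maximal run of consecutive card numbers around it in their hand.
// An empty result means the player does not hold the requested card.
std::vector<int> cardsToHandOver(std::vector<int> hand, int requested) {
    std::sort(hand.begin(), hand.end());
    auto it = std::find(hand.begin(), hand.end(), requested);
    if (it == hand.end()) return {};
    auto lo = it, hi = it;
    while (lo != hand.begin() && *(lo - 1) == *lo - 1) --lo;    // extend downward
    while (hi + 1 != hand.end() && *(hi + 1) == *hi + 1) ++hi;  // extend upward
    return std::vector<int>(lo, hi + 1);
}
```

In the numbering above, the worked example from the rules comes out right: a hand of J-H, K-H, A-H, 2-D, 3-D, 4-D, 6-D is {9, 11, 12, 13, 14, 15, 17}, and asking for the 4-D (15) surrenders {11, 12, 13, 14, 15}, i.e. everything except the J-H and 6-D.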

That’s how GoSort! is played. This sorting algorithm isn’t nearly as efficient as a computational sorting algorithm such as MergeSort or QuickSort, but hopefully it is more fun.

Week 1: “Genetic Optimization Visualization”

For our first session of Games & Art, we looked at Rock Paper Scissors (RPS) as well as at strategies for winning this time-honored game.

In the standard game, one point is awarded for rock beating scissors, paper beating rock or scissors beating paper. With this point assignment, there is no numerical strategy that will create a distinct advantage. Basically, the best way to win is to try to psych the other player out or, to go a bit further, to make your rock, paper or scissors choices as random as possible. In other words, R, P and S should each have a 33.3% chance of being chosen in each round. This makes sense given that RPS is generally thought of as being more along the lines of a coin toss than a game.

If we change the point distributions awarded to a winning R, P or S, however, the possibility of numerical strategies for winning begins to emerge.

One variation we considered in class was the following set of point assignments: R:+10, P:+3, S:+1

In this variation, the reward for winning with an S is only +1, which is much less than R or P. To make matters worse, if the other player happens to choose R, then the effective cost of choosing S can be thought of as -11 since the other player has gained 10 while you have also lost the opportunity to gain 1. Therefore, we can see that an optimal strategy for winning at this variation of the game would not be an even 33%/33%/33% distribution between R, P and S. Instead, we should favor our choices towards R or P. But to what extent?
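The asymmetry described above is easy to verify with a quick expected-value calculation before bringing in any search procedure. This function is my own sketch, and the scoring convention (my expected winnings minus my opponent's) is an assumption, but it shows directly that under R:+10, P:+3, S:+1 a rock-heavy strategy outperforms a scissors-heavy one against a uniform opponent.

```cpp
// Expected net point gain of a mixed strategy (pR, pP, pS) against another
// (qR, qP, qS) under the R:+10, P:+3, S:+1 reward scheme, counted as my
// expected winnings minus my opponent's. Probabilities must each sum to 1.
double expectedNet(double pR, double pP, double pS,
                   double qR, double qP, double qS) {
    // my winning cases: R beats S (+10), P beats R (+3), S beats P (+1)
    double myWin  = 10.0 * pR * qS + 3.0 * pP * qR + 1.0 * pS * qP;
    // opponent's winning cases, same rewards from their side
    double oppWin = 10.0 * qR * pS + 3.0 * qP * pR + 1.0 * qS * pP;
    return myWin - oppWin;
}
```

Against a uniform 33%/33%/33% opponent, pure Rock nets (10 - 3)/3 points per round while pure Scissors nets (1 - 10)/3, which is exactly the intuition that the optimal distribution should shift away from S.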

To find this out, I looked towards genetic algorithms. With the help of Dan Shiffman’s excellent genetic algorithm text, I devised the following model for an algorithm that could be used to determine the optimal choice-probability distribution for any particular point distribution:

First, I defined a Player class of objects, which contains four data points: an R, P and S choice-probability value as well as the number of points the player has scored. Next, an initial population of players is generated. The choice-probabilities for each player are initially random, but hover somewhere close to an even 33%/33%/33% distribution. Once the population has been generated, the players are put into matches against one another. In each match, each player chooses an R, P or S in accordance with its choice-probability model. The winner of each match is awarded with the appropriate number of points. After the players have played a sufficient number of games against one another, their fitness for mating is determined by the number of points they have scored. So if one player has scored 10 points, the probability of its being selected for mating is ten times that of a player who has scored only 1 point.

The mating process consists of pairing two players together and using some of each of their “genetic” traits to form a child player. I had to experiment with the best way to go about this mating process. Since the R, P and S choice probabilities for each player must add up to a total of 100%, we cannot disassociate these values from one another. My first thought was to simply average the two parents’ choice-probability values to form the child’s values, but this turned out to produce somewhat muddy results and does not always come out to 100% probability total. The approach I eventually implemented was to take a number line between 0 and 1 and insert two barriers into it: one barrier separating R from P and one separating P from S. These two barriers can be moved anywhere in the number line and so long as they don’t cross one another or go above 1 or below 0, we will end up with positive R, P and S probability values that sum to 1.0 (or 100%). Using this way of dividing the probability space works well for the mating process. Once two parents have been chosen for mating, the algorithm randomly decides whether to take one barrier value from each parent or both from one. Finally, to ensure that the gene pool does not get too incestuous, each child is given a 10% chance of mutating. If a child is chosen for mutation, its barrier values are moved randomly in either direction by a small amount.
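The barrier representation can be sketched directly; the type and function names here are illustrative, not the project's own. The nice property is visible in the code: any ordered pair of barriers in [0, 1] automatically yields three non-negative probabilities that sum to 1, so crossover and mutation can never produce an invalid distribution.

```cpp
#include <algorithm>

// Two barriers b1 <= b2 in [0, 1] divide the unit interval into the R, P
// and S choice probabilities: [0, b1) = R, [b1, b2) = P, [b2, 1] = S.
struct Genome { double b1, b2; };

double probR(const Genome& g) { return g.b1; }
double probP(const Genome& g) { return g.b2 - g.b1; }
double probS(const Genome& g) { return 1.0 - g.b2; }

// One crossover variant from the description: take the R/P barrier from one
// parent and the P/S barrier from the other, re-sorting to keep b1 <= b2.
Genome crossover(const Genome& a, const Genome& b) {
    Genome child{ a.b1, b.b2 };
    if (child.b1 > child.b2) std::swap(child.b1, child.b2);
    return child;
}

// Mutation: nudge each barrier by a small signed amount, clamping to [0, 1]
// and re-ordering so the invariant still holds.
Genome mutate(Genome g, double d1, double d2) {
    g.b1 = std::clamp(g.b1 + d1, 0.0, 1.0);
    g.b2 = std::clamp(g.b2 + d2, 0.0, 1.0);
    if (g.b1 > g.b2) std::swap(g.b1, g.b2);
    return g;
}
```

This is why the barrier encoding beats averaging raw probability triples: the constraint lives in the representation itself rather than needing a renormalization step after every operation.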

With this genetic algorithm, the program is able to automatically find an optimal choice-probability model for a given RPS point distribution by using a survival-of-the-fittest approach where fitness is determined by the player’s accumulated winnings over many games. The size of the population, the number of opponents each player plays against, the number of matches each pair plays against one another and the number of generations over which the algorithm is run all have an effect on the accuracy of the algorithm’s determination of what constitutes an “optimal” choice-probability model.

Next, I set about the task of creating a visualization that would present the optimal choice-probability distribution for a group of RPS point distribution variations. For visualization purposes, I had the good fortune of RPS being a three-dimensional system, which maps well to both 3D coordinates and RGB values.

I assigned Rock to the X axis, Paper to the Y axis and Scissors to the Z axis. As we move along the X axis from 0 to 9, for instance, we add one to the point value of winning with Rock, and so forth for each of the axes and its respective R, P or S point value.

At each coordinate in the grid, I represent the choice-probability distribution with a color. Rock is associated with the Red color dimension, Paper with Green and Scissors with Blue. So, if for instance, Rock was worth 9 and Paper and Scissors were both worth 0, the color of that grid coordinate should in theory be bright Red as an optimal strategy for this point distribution would presumably bias towards the choice of Rock.

To show how the number of players, opponents, matches and generations affect the success of the optimization process, I ran my visualization using varied parameters for these variables and recorded the results. Here are a few samples:

Notice that with a small number of players, opponents and matches, the arrangement of color values is relatively unordered. With a larger number of players, opponents and matches, the color arrangements take on a linear gradient quality. This shows that greater optimization (or accuracy, if you prefer) has been achieved with the larger population. If the smaller population were allowed to run for a much larger number of generations, its accuracy would begin to approach that of the larger population.

I wrote the genetic algorithm in C++ and the visualization uses openFrameworks.

Week 2: “Genetic RPS Newsfeed”

For last week’s assignment, I was interested in exploring the aesthetics of RPS as a type of zero-player game. In the book *Evolution and the Theory of Games*, John Maynard Smith considers the ways in which game theory, or the investigation of strategic advantage, can be applied to our understanding of individual fitness and evolution within a population. For instance, with the Hawk-Dove game, he demonstrates how the aggressiveness of an individual’s strategy in competing for food affects population balance and, in turn, how evolutionarily stable populations can emerge from the proper mix of strategies utilized by individuals as part of an ecosystem. To call RPS a zero-player game is in some respects counterintuitive. There are, of course, two players. In a statistical sense, though, these two individuals are merely playing out the probabilities. Though each run of the aforementioned genetic algorithm is likely to produce slightly different optimal probability distributions for a given point distribution, the results vary only slightly and therefore indicate that, to a large extent, there can be little individual differentiation in the numerical aspect of choosing a winning strategy.

What remains, then, is a sort of mating dance. If no individual is truly more fit than any other, competition must be about the extent to which an individual is able to present him or herself as more fit than another. That is to say, RPS is about psyching the other player out. The best way to win seems to be to focus on executing the steps of your own dance while trying to remain undistracted by that of your opponent. It is hard to conjure random or statistically-weighted choices while in the midst of a mating dance. So for this week, I was interested in testing whether strategic advantage could be achieved by enhancing a player’s ability to do so.

Using my genetic algorithm, I precompiled a list of R, P or S choices for a particular point distribution (R:10 P:3 S:1) and converted this list to an audio feed. Before entering a game of RPS with a human opponent, I donned a TV news anchor-style earpiece, which would be used to feed me a stream of R, P or S’s during game play. I then played a number of games against opponents who were not given any technological aid. After playing several opponents over a number of games, it was clear that this aid was indeed giving me an advantage over my opponents. Of course, RPS remains a game of probability, and so I found that this device created an advantage but did not lead to a complete domination of all opponents in all cases.

Is RPS a zero-player game? Understanding, of course, that I am using this term metaphorically and not by the technical definition (which is intended to describe dynamics more like Conway’s Game of Life, etc.), I submit that RPS is a game of narrow margins. But so too is the mating game. If RPS falls somewhat short in meeting the criteria required by Smith’s comprehension of evolutionary advantage, perhaps it forms a better metaphor for the evolutionary state of humankind. That is to say, the availability of technology and the complexity of our society is such that the fitness criteria for an individual human cannot be clearly named. Rather, we live and die by something far more ephemeral: with all the same technologies, medicines, etc. at our disposal, we have only the slight bias of our mating dance, our swagger, to put us ahead. In the end, this seems to make all the difference, or possibly none at all.
