Scape


I am exploring natural simulations, terrain generators, and evolutionary and genetic algorithms as starting points for interactive or gaming experiences. Here are a few images that have come out of my early programming work on the project:

In all stages of this project’s development, I believe it will be important to consistently reconcile the visual and conceptual elements with one another. I am hoping to strike a delicate balance between the light-hearted, playful tone found in Disney or Pixar animated films and the understated presentation of natural wonder found in a few good nature documentaries. At the same time, I am building the system as an interactive one. For this purpose, I am looking to the books Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern and Evolution and the Theory of Games by John Maynard Smith. Through these books, I am exploring how concepts of evolutionarily stable strategies could be applied to the gameplay mechanics of my project. A few more visual and conceptual influences are depicted here:

Mood Board #1
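To give a feel for where the game-theory reading might lead, here is a minimal sketch of Maynard Smith’s hawk-dove game iterated under replicator dynamics. The Python code, payoff values, and baseline term are my own illustrative choices, not anything taken from the books:

```python
import numpy as np

# Hawk-Dove game (after Maynard Smith): V = value of the contested
# resource, C = cost of an escalated fight. Values are illustrative.
V, C = 2.0, 4.0
payoff = np.array([
    [(V - C) / 2, V],      # Hawk vs. Hawk, Hawk vs. Dove
    [0.0,         V / 2],  # Dove vs. Hawk, Dove vs. Dove
])
BASELINE = C  # background fitness; keeps fitness positive for the update

x = np.array([0.9, 0.1])  # start with 90% hawks, 10% doves

# Discrete replicator dynamics: strategies that earn above-average
# payoff grow. For C > V the population settles at a mix of V/C hawks,
# the evolutionarily stable strategy.
for _ in range(500):
    fitness = BASELINE + payoff @ x
    x = x * fitness / (x @ fitness)

print(x)  # -> approximately [0.5, 0.5] for V=2, C=4
```

What attracts me to this for gameplay is that the stable mix emerges from local payoffs rather than being scripted, so a designer tunes incentives instead of outcomes.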

  1. #1 by Eric Mika on November 16, 2010 - 9:44 am

    Patrick: Your work here reminds me of the ways in which the technical origins of computer-generated graphics often distract from their content, and how that can be an interesting threshold to explore.

    For example, any time I see something “natural” skinned with artificial textures, I start to get obsessive and immediately search for the recurring patterns and seams where the texture tiles and repeats — compulsive reverse-engineering. Once they’re found and the technical means behind the graphic’s construction is unraveled, I feel a vague sense of indignation that I was ever supposed to buy into such a shoddily conceived world in the first place.

    Here’s a particularly painful example:
    http://www.rantsandwaves.com/portfolio/3dgraphics/intro3b.jpg

    Occasionally perfectionists might go out of their way to avoid repetition. Take the texture on the back of the iBooks bookcase, for example: http://colleenish.files.wordpress.com/2010/04/bookshelf.png?w=420

    It’s clearly not dynamically generated, but they flip and translate the texture at random to break any sense of excess order (something like the sketch below). I still notice this, though, and in that capacity it might be even more distracting than things that are fake in the conventional way. That distraction might be interesting if it could be amplified.
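    Mechanically, I imagine it amounts to something like this toy sketch (Python/numpy, with random data standing in for the wood-grain tile):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tile = rng.random((32, 32))  # stand-in for the wood-grain tile image

    # Assemble a 4x4 grid of tiles, randomly mirroring each copy so the
    # repetition is harder to spot than straight edge-to-edge tiling.
    rows = []
    for _ in range(4):
        row = []
        for _ in range(4):
            t = tile
            if rng.random() < 0.5:
                t = np.fliplr(t)  # mirror horizontally
            if rng.random() < 0.5:
                t = np.flipud(t)  # mirror vertically
            row.append(t)
        rows.append(np.hstack(row))
    texture = np.vstack(rows)
    print(texture.shape)  # (128, 128)
    ```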

    Your generative approach to natural simulation certainly sidesteps my grievances with the tiled texture. However, in a lot of ways the issue of how technical understanding pollutes our experience of an image goes full circle to your explorations of forced perspective with Digitalis. I think forced perspective is so powerful because it breaks our technical expectations.

    e.g.:

    Typical 3D scene: “This is a world, I know it’s fake, and it’s fake in predictable ways.”

    3D scene with forced perspective: “This is a world, I know it’s fake… holy shit.” (Aside: I would be super interested to see your forced perspective techniques applied to outdoor scenes.)

    To bring it back to your cellular-automata style GLSL generative work, I wonder if there is some way to apply the technical subversions of Digitalis to the sometimes-too-predictable world of Perlin marble and undulating slime.

  2. #2 by Patrick on November 16, 2010 - 11:17 am

    Eric,
    Thanks for this very thoughtful response. It comes just as I am beginning to think about how to reintroduce perspective augmentation into my terrain generation project. Perfect timing!

    In computer graphics, there seems to be this weird phenomenon… You go see the latest cutting-edge special effects blockbuster and the graphics look really convincing, until you see the next season’s cutting-edge graphics. When we look at the real world, we see the product of many subtle behaviors that are difficult to emulate: a ray leaves a light source, hits an object, some of that light is reflected to the viewer, some of it continues on, bounces off another object, and some of that too reaches your eye. This keeps going until the energy has been so dissipated that an additional bounce would contribute no noticeable amount of light to the viewer.

    Using ray tracing and global illumination techniques, we can mimic this to a reasonable extent. Calculating each bounce takes work, so we limit the simulation to a certain number of them. How many is enough? In the real world, there isn’t an explicit limit; the values just taper toward zero. When we look at a CG image, we see a lack of detail in this respect. It’s subtle, so we can’t quite put our finger on what’s wrong, but we know it isn’t quite real. Then, when we see better CG, the difference points us toward what was wrong.
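    Back-of-the-envelope, what a bounce cap throws away is just the tail of a geometric series. A toy sketch (Python, treating every bounce as attenuating by one fixed average albedo, which is a big simplification):

    ```python
    # Contribution of bounce n scales roughly like albedo**n, since each
    # surface reflects only a fraction of the light that reaches it.
    ALBEDO = 0.6  # illustrative average reflectance

    def light_with_cap(max_bounces):
        """Total light reaching the eye if tracing stops at max_bounces."""
        return sum(ALBEDO ** n for n in range(max_bounces + 1))

    true_total = 1.0 / (1.0 - ALBEDO)  # the series never truncated
    for cap in (1, 2, 4, 8):
        missing = true_total - light_with_cap(cap)
        print(f"{cap} bounces: missing {missing / true_total:.1%} of the light")
    ```

    The error never hits zero at any fixed cap; it just tapers, which is exactly the subtle deficit we sense without being able to name it.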

    The iBooks bookcase is faking this kind of behavior. The corners are darkened because fewer ambient bounces reach concave corners. Calculating rays is time consuming, so they’ve faked it through texturing, as you say. Similarly, video games use a lot of normal and bump mapping to make a mesh appear to contain more vertices than it really does. These techniques work just well enough to fall somewhere on the rim of the uncanny valley. But really, what should we expect? We’ve been looking at the world for a long time, so every person is in some sense an expert on what looks real. Any imperfection in a CG image will stand out. But that’s only true if you’re trying to imitate natural vision.
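    The essence of the normal-mapping trick is that diffuse shading only ever looks at the normal, so you can lie about it per pixel. A minimal illustration (Python, with arbitrarily chosen vectors):

    ```python
    import numpy as np

    def shade(normal, light_dir):
        """Lambertian diffuse term: brightness depends only on the normal."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        return max(float(np.dot(n, l)), 0.0)

    light = np.array([0.0, 0.5, 1.0])
    flat = np.array([0.0, 0.0, 1.0])        # the polygon's true normal
    perturbed = np.array([0.3, -0.2, 1.0])  # normal fetched from a normal map

    print(shade(flat, light), shade(perturbed, light))
    # Two different brightnesses from the same flat polygon: the lighting
    # implies surface detail the geometry doesn't actually have.
    ```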

    There are lots of reasons to imitate natural vision, and the movie industry has systematically found every one and completely driven it into the ground… monsters, explosions, other planets and storms. My hope for the future of computer graphics is that, now that we’ve explored what objects we can put in front of a camera (real or virtual), we can explore what those objects would look like through a system of vision that differs from our natural one. Here, we have less of a basis of comparison for what looks credibly real. We also have the opportunity to find new ways of understanding the world – our mode of vision impacts our understanding of objects, so seeing them differently will lead to a different understanding.

    As I’ve worked on my procedural terrain generation, I’ve kept coming back to the question of how to introduce my perspective work back into the mix. I’m not certain I have an answer yet, but seeing your reflections is very helpful. The key for me is that I don’t want forced perspective or any other kind of perspective to be used as a gimmick. It has to tell us something we couldn’t otherwise know. Forced perspective and its underlying projective geometry can tell us a lot about the world – for example, it is used in the mathematics of 3D scanning, the Kinect, etc. I explored one angle of how this could apply to narrative concepts with Digitalis. Now I need to figure out how this kind of perspectival decomposition can enrich the terrain work. I haven’t applied my forced perspective software to a full outdoor scene, but I have applied it to a single tree mesh. The results are interesting… I’ll try to find them and post them now.
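    For what it’s worth, the ambiguity that forced perspective exploits drops straight out of the pinhole projection model. A toy sketch (Python; the focal length and sizes are arbitrary):

    ```python
    # Pinhole projection: apparent size = focal * size / depth, so scaling
    # an object's size and its distance by the same factor leaves the
    # image unchanged. Forced perspective lives in that ambiguity.
    FOCAL = 1.0  # illustrative focal length

    def apparent_size(size, depth):
        return FOCAL * size / depth

    print(apparent_size(1.0, 2.0))    # small tree, close
    print(apparent_size(10.0, 20.0))  # huge tree, far away: same image
    ```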

    Thanks!
