Over winter break, I decided that this semester I would finally create the first installment of a film called Digitalis, which I’ve been developing for the past few years. With four months, an 8-core Mac Pro and a lot of work, I’ve learned a ton and created a short film, “Digitalis: An Introduction to DEcomp,” as well as the software I used to make it, DEcomp Visual Toolkit 4.
The Origin of Digitalis and DEcomp
As an undergraduate student of Philosophy, I did my thesis on the aesthetic properties of digital special effects. The core argument of the paper was that the medium of computer-generated imagery is in some ways a hybrid of photography and painting, but must be seen as a distinct medium, one that possesses properties absent from any other. In a way, this is quite obvious – clearly computers can make movies that couldn’t have been made before. But artists have yet to realize the full implications of this emerging medium’s unique potential. Digital effects allow us to present anything imaginable in the appearance of photographic truth. For the most part, though, this power is still being used to create bigger explosions and more unusual monsters.

As my theoretical work developed, I came to believe that the real power of this medium lies not in its ability to conjure new things to put in front of the camera, but rather in its ability to find new ways for the camera to look at the world. All of our worldly experiences are filtered through one mode of vision, making it easy to forget that there is nothing absolute or necessary about the structure of our visual perception. As Einstein showed, objects are not fixed in themselves; they exist in relation to an observer whose point of view plays an essential role in defining the structure of the object itself. Though our bodies are tied to one mode of vision, we already know from artists like Picasso that our minds are able to occupy other modes of visual perception. Computers offer an ideal mechanism for simultaneously designing new ways of visualizing the world and the aesthetic or narrative applications of those techniques.

After college, I began to explore how I could apply these theoretical ideas to my own filmmaking work. The film that emerged from this process is Digitalis.
My undergraduate thesis in Philosophy at Bard College is available here.
An article I wrote for Film Comment Magazine about digital effects is available here.
The Story of Digitalis
Digitalis is my first cinematic exploration of how unconventional optical modes can be used to tell stories and show things about the nature of our experience that cannot be shown through any other medium. One of the earliest influences on the narrative of Digitalis was the film The Last Temptation of Christ. I was drawn to the idea of a prophet who, in the final hour, came to resent the burden of being holy, who couldn’t fully understand what made him different from others and felt estranged. I thought this story interfaced perfectly with the canon of superhero and villain narratives, so I began to devise a character who was seen by those around him as a superhero of sorts but who could not see this in himself. Naturally, I didn’t want to merely convey the idea through dialogue. I wanted to create some visual trope that could explain the difference between the character’s perception of himself and the perception of him held by the other characters.

After some thinking and experimentation, I came to the idea of using forced perspective to explain this concept. An example of forced perspective is the classic tourist photo of a person seeming to hold up the Leaning Tower of Pisa. The basic idea is that, because of the way our vision works, we can keep the appearance of an object constant with respect to some particular point of view by altering its scale and contour to compensate for a change in its distance from the camera, or vice versa. I realized that this was the perfect structure to explain my character. To the other characters, the protagonist appears to be capable of physically impossible movements – he can step from the front of the room to the back in one small move, for example. But in the protagonist’s understanding of himself, there is nothing unusual about his abilities. It is a difference of perspective.
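The Pisa trick can be stated precisely: with a pinhole camera at the origin, uniformly scaling an object about the camera center changes its size and its distance in exactly compensating ways, so its projected image is unchanged. A minimal sketch of the principle in plain Python (an illustration only, not DEcomp code):

```python
# Pinhole projection: a point (x, y, z) in camera space maps to
# image coordinates (f*x/z, f*y/z) for focal length f.
def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

# Uniformly scale a point about the camera center (the origin).
def scale_about_camera(point, s):
    return tuple(s * c for c in point)

corner = (0.5, -0.25, 2.0)                  # a vertex two units from the camera
big_far = scale_about_camera(corner, 3.0)   # 3x larger and 3x farther away

# Both versions land on exactly the same image point: forced perspective.
assert project(corner) == project(big_far)
```

The same equality is what lets a hand a few feet from the lens and a tower hundreds of feet away appear to touch in the photograph.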
The other characters want to see the protagonist as a superhero, so their visual modes of perception match this desire – they want a savior and therefore bend the world around their preconceived notion of the protagonist. But the protagonist, called Daniel Bellefonte, sees himself as awkward and clumsy and as someone to whom others cannot relate. Along the way, I began to develop an antagonist character, named Vincent Digitalis, who is obsessed with uncovering the true nature of Daniel Bellefonte’s abilities. The short film I’ve created this semester (detailed below) is narrated by Vincent and provides an introduction to these ideas. For technical reasons, I was not yet able to merge human actors with the computer-generated environments. So for this film, I found a format that would allow me to discuss these characters and their narrative world without having to show the characters onscreen. My goal is to next do a short film that brings human actors into the mix and, further down the road, to do a feature film based on this narrative world.
The Story of DEcomp
DEcomp, which stands for “Decompositional Environment Composer,” is a 3D software tool I’ve written in order to bring the narrative world of Digitalis to the screen. While simple forced-perspective transformations could potentially be done manually in a program like Maya, all pre-existing 3D modeling environments take what I call an “object-centric” approach and as such are not capable of performing complex operations within the logic of forced perspective. Early in the process of creating the Digitalis narrative, I realized that to make the film, I would need to write my own software specifically set up for “perspective-centric” 3D modeling. I began to imagine the set of tools that would become DEcomp – tools that allow a user to stretch and distort the geometry of objects without changing their perspectival appearance for a given camera position. There are infinitely many ways that any particular geometry can be manipulated in forced perspective, so I set out to build tools that would not limit the scope of geometric possibilities, but also would not overwhelm the user with options. Some of the tools allow the user to manipulate objects graphically, while others use a series of drop-down menus and nodes to select the desired properties of a transformation.

At the beginning of the process of creating DEcomp, I had almost no exposure to computer science proper. So I immersed myself in the subject and began to learn how to program by working towards DEcomp. Versions 1 and 2 were extremely crude: they could handle only a very limited number of forced perspective operations and had cumbersome interfaces. I wrote DEcomp 3 last summer, just before entering ITP, and with this version was finally able to get a glimpse of what Digitalis could be. But DEcomp 3 was still quite slow and limited in its functionality. At the end of my first semester at ITP, I felt my technical skills had advanced enough to warrant a complete rewrite of DEcomp.
I also realized that the film I was hoping to make this semester would require a much more powerful tool. So I began work on DEcomp 4 (detailed below).
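The “perspective-centric” approach can be illustrated with a toy operation: sliding each vertex along its own ray through the camera center. Because every vertex can move by a different amount, the object’s 3D shape distorts freely, yet its appearance from that one privileged camera position never changes. A hypothetical sketch in plain Python (DEcomp itself is Objective-C; none of these names come from its actual API):

```python
# Pinhole projection with the camera at the origin.
def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

# Slide a vertex along the ray from the camera center through it.
# A factor t > 1 pushes it farther away (and makes it larger in 3D);
# t < 1 pulls it closer. Its projected position is unaffected.
def slide_along_ray(vertex, t):
    return tuple(t * c for c in vertex)

triangle = [(1.0, 0.0, 4.0), (0.0, 1.0, 4.0), (-1.0, -1.0, 4.0)]
depth_factors = [0.5, 2.0, 4.0]   # a different stretch per vertex

distorted = [slide_along_ray(v, t) for v, t in zip(triangle, depth_factors)]

# The distorted triangle is wildly different in 3D...
assert distorted != triangle
# ...but projects to exactly the same 2D triangle.
assert [project(v) for v in triangle] == [project(v) for v in distorted]
```

From any other camera position, of course, the distorted and original triangles look nothing alike, which is exactly the effect the film exploits.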
Digitalis: An Introduction to DEcomp
The short film I created this semester is narrated by Vincent Digitalis and presents itself as a sort of informational video, one that hopes to convince the viewer that Daniel Bellefonte is in fact not a superhero. In having Vincent argue for a geometric rather than supernatural explanation of Daniel’s abilities, I was able to give the viewer a brief overview of the concepts of the Digitalis world, which I hope to develop more fully in future versions of the film. I attempted to blur the line between the film’s production and its narrative world by having Vincent refer to DEcomp as a technology he has developed to better understand the “allegedly supernatural” abilities of Daniel Bellefonte. There are four main shots in the film: two monoscopic and two stereoscopic; two in regular perspective and two in forced perspective. As the film progresses, each successive shot helps the viewer to better understand how the camera’s relation to the scene is driving the sense we have of it. In the last shot, when we finally see a stereoscopic version of the forced perspective scene, it becomes clear that the apparent distances of objects from the camera are not what they should be. Finally, the camera unhinges from the privileged point in space for which the scene appears normal and flies around a distorted, forced perspective version of the warehouse in which the film is set.

This short film serves two main learning purposes for me: it lets me see how viewers respond to the narrative and visual ideas behind the Digitalis project, and it has allowed me to make a complete pass through the production process of a narrative film made with DEcomp. Over the course of the semester, I wrote and rewrote the screenplay as I developed the digital warehouse in which the film takes place, wrote new features for the software, generated forced perspective versions of the warehouse, animated the camera and edited the voice-over and score.
These processes overlapped and shaped one another. In fact, their concurrence was essential to both the final product and my learning. This was most true of the relationship between my work on new features for the software and the construction of the film’s forced perspective model. Designing the model for the film allowed me to see more clearly what sorts of tools would be most useful, and some of the features I was working on at the beginning of the semester lost priority to new ones that emerged from the needs of the production process.

In the making of this film, I’ve been most heavily influenced by the aesthetics and grammatical styles of Walt Disney, Pixar and Peter Jackson. As such, I’ve stylized the film in a way that I believe will appeal to both adults and children. At the same time, however, the film is built around mathematical ideas that may be difficult for some viewers. In the early stages of developing this project, before the visual elements were in place, I found it difficult to convey some of these ideas verbally. Yet, as the visual components of the project have come to life, I’ve been pleased to see that most viewers have no trouble understanding these ideas when they are demonstrated visually. In presenting this work, I am eager to see how a wider audience understands the project. Vincent’s voice-over was performed by my great friend, the fabulous actor Matthew Shear.
DEcomp Visual Toolkit 4
As I developed the screenplay and visual material for “Digitalis: An Introduction to DEcomp,” it became clear that I would need to make DEcomp more powerful and add a number of new features in order to make the film. The final warehouse model for the film contains 2,687 objects made up of 985,568 triangles, which takes a fair amount of computing power and demands good programming practices. Starting over winter break and continuing through the course of this semester, I did a complete rewrite of my software. I wrote DEcomp in Objective-C, relying on only two external frameworks: Apple’s Cocoa APIs and OpenGL. Since there is no other 3D software built upon the same geometric premises as DEcomp, I had to write all geometric aspects of the software entirely from scratch. In this version, I centered the geometry engine around vector representations, which enabled an immense number of new forced perspective operations. I also adapted some of the parametric tools I’d developed for earlier versions of the software, because I found that each approach facilitated its own way of working with space and therefore brought out different aesthetic properties in objects.
Here are some of the new features of DEcomp Visual Toolkit 4:
- The object node-tree and geometric datatypes were completely redesigned for significantly greater performance. Data transformations were optimized through the use of function passing and object polymorphism.
- The modeling engine was rebuilt around a vector-based rather than parametric approach to geometric transformations, leading to sizable speed increases as well as a broader set of mathematical possibilities in the logic of forced perspective modeling.
- A variety of new forced perspective transformation tools were added, allowing the user to more easily vary the compositional elements of a geometric scene.
- Back-buffer rendering was used to draw triangle identification markers to an offscreen buffer, allowing for the addition of highly accurate graphical object selection and manipulation tools.
- A systematic organization of algorithms for the sorting and transforming of geometric assets was implemented, allowing the user to apply any mathematically-possible forced perspective operation to a scene.
- An efficient algorithm for the assessment of a scene transformation’s geometric validity was implemented, allowing for new machine learning-based tools.
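The back-buffer selection feature above is a form of what is often called color picking: each triangle is rendered to an offscreen buffer in a unique flat color that encodes its index, and the pixel under the cursor is read back and decoded to identify the triangle. A sketch of the encoding half in plain Python (the rendering and read-back would happen in OpenGL, e.g. via glReadPixels; this is an illustration, not DEcomp’s code):

```python
# Encode a triangle index as a unique 24-bit RGB color, and decode it back.
# In the real pipeline these flat colors are drawn to an offscreen buffer,
# and the color under the cursor is read back to recover the index.
def index_to_rgb(i):
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def rgb_to_index(rgb):
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# The round trip is exact for every index that fits in 24 bits,
# far more than the 985,568 triangles in the warehouse model.
for i in (0, 255, 985_567, 0xFFFFFF):
    assert rgb_to_index(index_to_rgb(i)) == i
```

Because the decoded index comes from the exact pixel under the cursor, selection is accurate per-triangle, with no ray-casting approximations.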
As this iteration of the project comes to a close, a new set of possibilities has emerged from the things I’ve learned, and I’m looking forward to continuing the development of Digitalis and DEcomp as well as exploring new ideas that have come out of this semester. For my machine learning class, I began to explore how intelligent algorithms can be used to find aesthetically pleasing forced perspective structures for a given geometry. There are many possibilities here, and I plan to begin implementing some of them this summer. In the first week of the summer, I’ve already begun work on a new Digitalis sequence, which I hope will include human actors. I’m really liking the way it looks so far (it should be far more advanced than the Introduction film), and I’ll post a progress report soon. I’m also working on an interactive, game-like version of DEcomp.
This semester, I’ve had four incredible classes that were absolutely perfect forums for my work on the projects of DEcomp and Digitalis. My class with Dan Shiffman focused on vector operations, physics simulation, fractals, cellular automata and genetic algorithms; my class with Danny Rozin focused on optical techniques and properties from both scientific and artistic perspectives, image processing and tracking; my class with Heather Dewey-Hagborg focused on machine learning through digital neural networks, collective intelligence, natural language processing and face, pattern and image recognition; and my class with Douglas Rushkoff focused on narrative structures and techniques in classical drama and modernism as well as contemporary forms of digital and interactive media. Really, I couldn’t have asked for a better combination of things to study. Each class was both directly applicable to work on this project and exposed me to new ideas and fields that expanded my horizons and advanced the goals of my work.
ITP Professors: Dan Shiffman, Danny Rozin, Douglas Rushkoff, Heather Dewey-Hagborg, Nancy Hechinger, Dan O’Sullivan, David Nolen and Stewart Smith.
ITP Classmates: Yin Ho, Don Miller, Patricia Adler, Morgen Fleisig and really everyone.
ITP Staff: Rob Ryan, George Agudow and Brian Kim.
Friends: Matt Shear, Reb Leopold and Scott Goodman.
Bard Professors: John Pruitt, Hap Tivey and Garry Hagberg.
Family: Mom, Dad, Diana, Haywood and Rue.