Back when I was working on The Fall of the House of Escher I became obsessed with live z-displacement in the media. That particular show was wrestling with questions of quantum physics, time, reality, and all manner of perceptual problems, and so it made a lot of sense to love the look and suggested meaning inherent in the displacement of an image in a third dimension. This particular technique is perhaps most well known to us because of the Rutt Etra Video Synthesizer. This video synth was an attempt to begin manipulating video in the same manner as sound was being manipulated in the 1970s, and was truly a groundbreaking examination of our relationship to video – live video especially. One of the most famous elements of the Rutt Etra synth was z-displacement, the change in the depth of an image based on its luminance.
You can get a better sense of what this kind of effect looks like by playing with the Rutt Etra-izer online. This simple tool lets you play with one slice of what the original video synth did. Additionally, if you're a Mac user you might want to check out what v002 had to share when it comes to plug-ins, as well as some thoughts from Bill Etra. You can find more from v002 here: Rutt Etra v002
So that's all well and good, but what am I after? Well, in the pursuit of better understanding how to program with TouchDesigner, as well as how to explore some of the interesting ideas from the 1970s, I wanted to know how to replicate this kind of look in a TouchDesigner network. There are plenty of other people who have done this already, and that's awesome. I, however, happen to subscribe to the kind of art philosophy that asks students to copy the work of others – to practice their hands at someone else's technique, to take what works and leave what doesn't. Art schools have often required students to copy the work of masters in order to foster a better appreciation and understanding of a particular form, and I think there's a lot to that in programming. So today we'll look at how to make this kind of effect in TouchDesigner, and then ask how we might manipulate this idea in ways that differ from how the original method was intended to work.
To begin, this idea started when I was looking through the wiki page about Generative Design. Specifically, the sample network about image manipulation really got my attention. Buried in this example network is something that's Rutt Etra flavored, and it's from this example that I started to pull out some ideas to play with. As we work our way through this example it'll be easy to get lost, frustrated, or confused. Before that happens to you, take a moment to think about what we're trying to do. Ultimately, we want to take an image (video later, but for now an image), add together the RGBA values of each individual pixel, and use that number to transform said pixel in the z dimension. In other words, we're really after creating a faux depth map out of an image. It's also important to remember that an image that's 1920 x 1080 contains 2,073,600 pixels. That's a lot of points, so we're going to start by creating a grid that simplifies those dimensions – partially for our own sanity, partially for the sake of the processing involved, and partially to remain true to the aesthetic of the original idea. Once we replicate the original, then we can start to talk about where we can play. That's enough disclaimers for us to get started, so let's do some programming.
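Before we start patching, it might help to see the math we're chasing written out in plain Python. This is only an illustration of the mapping (the variable names here are invented for the example; the network we build will do this with operators rather than a script):

depth = (r + g + b) / 3.0   # average the color channels into a single luminance-like value
tz = depth * strength       # scale by a strength control; this becomes the point's z position

That's the whole trick. Everything that follows is just doing this for every point of a surface, in real time.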
Let’s start by looking at the whole network:
From this vantage point we can see that we’re going to make a network that uses a little bit of everything – some texture operators (TOPs), some channel operators (CHOPs), and some surface operators (SOPs).
Starting with Texture Operators
First things first, let’s make a new container and dive inside. To our empty container let’s start by adding an In TOP as well as a Movie In TOP. We’re going to start by connecting the Movie In to the second input on the In TOP. Why? The Movie In TOP is going to allow us to pass a video stream into this container. We may, however, want some image to show up while we’re working or when we don’t have a video stream coming in; this is where that second input comes in handy. This gives us a default image to use when we don’t have a stream coming into the container. Here’s what you should have so far:
Next, we want to down-sample this image a bit. This step is actually helping us plan ahead: we're going to use some expressions to set the parameters of some of our other operators, and this step will help us with that. To do this down-sampling, we're going to use a Resolution TOP. After that we'll end our TOP string in a Null TOP. All of that should look like this:
Before we move on, let's make one change to the Resolution TOP in our string. Open up the parameters window for your Resolution TOP and, on the Common page, change the Output Resolution parameter to Quarter:
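To put some numbers on that, assuming a 1920 x 1080 stream coming in, quarter resolution works out to something like:

1920 / 4 = 480
1080 / 4 = 270
480 * 270 = 129,600 points

That's a far friendlier amount of data to push through CHOPs than the 2,073,600 pixels we started with.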
Channel Operators
There are a number of different transformations that we'll need to perform on the data from our image, and we'll start this process by moving to channel operators. If we think back to our intent to transform luminance into depth, then we know that one of the operations we need to complete is to add together the RGBA values of our pixels so that we have a single number we can use to manipulate a surface. To get started we first need to convert our image into channel data that we can manipulate. To do this we're going to add a TOP to CHOP to our network. This operator is going to allow us to look at our TOP as if it were all channel data. We'll see what that looks like in a moment. First, however, let's make a few changes to our operator. On the Image page of the TOP to CHOP, make sure that you're set to "Next Frame" as the download type. On the Crop page you'll also want to make sure that you're set to the full image:
Next we need to assign a TOP so we have a texture to convert into channel data. You can do this by dragging the TOP onto this CHOP, or you can enter the name of the target TOP on the Image page. Your TOP to CHOP should now look something like this:
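If you'd rather type than drag, the TOP parameter on the Image page just takes the name (or path) of the operator you want to sample. For example, if you point it at the null at the end of your TOP string and you've kept the default names, that field would read:

null1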
The viewer on this operator can be taxing on your system, so at this point I'd recommend that you click the small bulls-eye in the upper left-hand corner of your CHOP to turn off the viewer. This will save you some system resources as you're working and keep you from seeing too much lag while you're editing this network.
The next series of channel operators is going to allow us to make some changes to our data. First we’ll use a Shuffle CHOP to reorganize our samples into sets of channels, then we’ll use some Math CHOPs to add together our pixel values and to give us some control over the z-displacement, and finally we’ll use a Rename CHOP to make it easy to use this data.
Let's start this process by connecting our TOP to CHOP to a Shuffle CHOP and setting our method to be Sequence Channels by Name:
Next we'll add our Math CHOP and set the Combine Channels operation to Add; let's also set the Multiply value to 1/3:
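To see why 1/3 is a sensible multiplier, take a pixel whose red, green, and blue values are 0.2, 0.5, and 0.8: adding the channels gives 1.5, and multiplying by 1/3 brings that back down to 0.5, a rough luminance value that lives in the same 0 to 1 range as the original channels. Keeping the value normalized like this makes the displacement we apply later much easier to reason about.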
Next we're going to add a slider, and another Math CHOP. Why? Great question. For starters, I want to be able to control the strength of this effect with a slider. At some point I'm going to want to be able to drive the strength of this effect, and a slider is a great way for us to do that. Why another Math CHOP? Another excellent question. While we could just use one Math CHOP and apply our slider there, that would also mean there's no way to isolate the effect of the slider. There's some redundancy here, but there's also a little more flexibility in keeping our different alterations to the data separate. Is this the best way to code a final component? Maybe not, but it is a fine way to work with something that's still being developed. Alright, let's add our slider and second Math CHOP:
Next we need to write a simple expression in our math2 operator in order to be able to use the slider as an input method. On the Multi-Add page we’re going to use the out value from the slider as the value that the incoming channel information is multiplied by – if we think about the structure of our slider’s output we’ll remember that it’s a normalized value ranging from 0 – 1, and we can think of this as being the same as 0 – 100%. Alright, here’s our simple reference expression:
op( 'slider1/out1' )[ 'v1' ]
In plain English this expression reads: give me the number value of the channel called 'v1' from the operator called 'out1' that's inside of 'slider1'. If you click on the + sign on the bottom right of your slider (making it viewer active) you can now move it left and right to see the change in the number value of your math2 CHOP.
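As a quick sanity check, with the slider at its halfway point out1 reports v1 as 0.5, so every channel passing through math2 gets multiplied by 0.5 and the displacement runs at half strength; drag the slider all the way to the left and the effect disappears entirely.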
Before we move away from this portion of our network we need to add one final operator, a Rename CHOP. The Rename CHOP allows us to rename the channels inside of an operator. Later we'll want to use this number to alter a value coming from a surface operator chain, specifically its tz (z translation) channel, and in order to do that easily we need to give our channel that same name. In the To field of the Rename CHOP type tz:
Surface Operators
Now that we have started the process of converting our texture into channel data, we need to think about what we're going to do with that information. We're going to start by adding a Grid SOP to our network. We'd like this operator to pull some of its dimensions from our video – this will make sure that we're dealing with a surface that has the correct aspect ratio. In our Grid SOP we're going to use some simple expressions to pull some information from our null1 TOP (remember that null1 is at the end of our TOP chain). First we'll use the width and height of our texture to set the number of rows and columns – we can use dot notation to ask the TOP for those values, with the following expressions for rows and columns:
op( 'null1' ).width
op( 'null1' ).height
Next, for the grid's size we can internally reference the number of rows and columns we just set. In these expressions me refers to the operator they live in, the grid itself, which means each cell ends up roughly one unit square (something we'll come back to when we normalize our data later on):
me.par.cols
me.par.rows
We'll also want to make sure that we've set our grid to be a Polygon with Rows as the connectivity type. Connecting by rows is what gives us the scan-line look: each row of points becomes its own horizontal line rather than part of a solid mesh.
Now we'll need to get ready for our next step by adding a CHOP to SOP. This operator is going to allow us to create some geometry out of our CHOP data.
What gives? Well, in order for this operator to work properly, we need to feed it some CHOP data.
Channel Operators Again
Now we're finally making progress, even if it doesn't feel like it just yet. We have our texture operators converted into channel data, we have a piece of geometry that we can alter based on the dimensions of the input texture, and we're ready to combine our channel data with our surface data; we just have some final housekeeping to do.
Let's start by converting our Grid SOP into channel data with a SOP to CHOP.
Like we did before we’re going to use a Shuffle CHOP, but this time we’re going to set it to Sequence all Channels:
Next we’ll use a Math CHOP to find the absolute value of our shuffle – we can do this by setting the Channel Pre Op to be Positive:
Next we’ll use an Analyze CHOP to find the maximum value coming out of our Math CHOP.
Now we're going to normalize our data against itself. We're going to add another Math CHOP to our network. We'll connect its input to our sopto1 CHOP, and in this case we'll use the Multi-Add page to multiply by 1 over the maximum from our Analyze CHOP. We can do this with a simple reference expression:
1 / op( 'analyze1' )[ 'tx' ]
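It's worth pausing on why this step matters. Our grid's size was set from its row and column counts, so its points are spread across hundreds of units. If the widest point sits at, say, tx = 240, this expression multiplies every channel by 1/240, pulling the whole surface back into a tidy -1 to 1 range that's much easier to frame with a camera.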
All of that should look something like this:
We can finally start putting all of this together with one more Math CHOP. This final Math CHOP is going to combine our SOP to CHOP string and our TOP to CHOP string. You'll want to make sure that the SOP string is in the first input on the Math CHOP, with the TOP string coming in underneath. Next make sure that Combine Channels is set to Add, and that Match By is set to Channel Name. Because our SOP data carries tx, ty, and tz channels while our TOP data only carries the channel we renamed tz, matching by name means that only the z positions of our points get changed.
Now let's end this set of operations in a Null CHOP, and move back to where we left off with our surface operators. Back in our CHOP to SOP, let's set our CHOP reference to be our last CHOP null (in my case that's null2). All of that should finally look like this:
At long last we’ve finally transformed a grid in the z direction with the information from an input Texture. Ba da bing bang boom:
Rendering
Dear flying spaghetti monster help us… this has been a lot of work, but so far it's not very fancy. What gives? Well, if we really want to make something interesting out of this, we need to render it – that is, we need to change it from being just some geometric information back into some pixels. If we think back to what we learned while we were playing with Instancing, rendering isn't too hard; we just need a few things to make it work (some geometry, a camera, and a light source… this time we'll skip that last one).
Let's start by adding a camera and a geometry component to our network.
For our render to work properly, we need our chopto SOP to be inside of our Geo. At this point you can use your favorite piece of profanity and get ready to remake this network OR you can HOLD YOUR HORSES and think about how we might solve this problem. My favorite way to address this kind of issue is to jump inside of the Geo COMP, add an In SOP, and set it to render and display. This means that we can pass our geometry into this component without needing to encapsulate all of the geometry inside.
Now let’s connect our chopto to our inlet on the Geo comp:
Ideally we want the image from our original texture to be used when rendering our z transformation. To do this, let's jump back inside of our Geo COMP and add a Constant Material. A Constant, unlike a Phong, isn't shaded; in other words, it doesn't respond to lighting. While this isn't great in some respects, the payoff is that it's much cheaper to render, which makes it a fine material to start with. We'll also want to make sure that our Constant is using our original TOP as a Color Map. In the Color Map field we can tell this operator to look for the operator called in1 in the directory above with the following path:
../in1
Next we need to apply this material to our Geo. To do that let’s jump out of the Geo comp and head to the render page of the parameters window. We can tell our Geo to look for the material called constant1 that’s inside of the geo comp like this:
./constant1
Holy macaroni, we’re almost there. The last step we need to take is to add a Render TOP to our network:
At long last we have finally replicated the z-translation aesthetic that we set out to emulate. From here you might consider changing the orientation of the geo comp, or the camera comp for a more interesting angle on your work.
Play Time
That’s all well and good, but how can we turn it up a little? Well, now that we have a portion of this built we can start to think about what the video stream going into this operation looks like, as well as how we modify the video coming out of it. Let’s look at one example of what we can do with the video coming out.
I've made a few changes to the stream above (some simple moving animation, and a change to the Resolution TOP) to have something a little more interesting to play with, but it's still not quite right. One thing I want to look at is giving the lines a little more of a neon look and feel. To do this I'm going to start by adding a Blur TOP to my network:
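The Blur TOP's Filter Size parameter is the one to experiment with here; something in the neighborhood of 20 to 30 pixels tends to read as a soft halo rather than a smear, though the right value is entirely a matter of taste (and larger blurs cost more to compute).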
Next I’m going to add an Add TOP and plug both of my TOPs into it, adding the blur back to the original render:
Finally, I’m going to add a Constant TOP set to black, and a Composite TOP. I’ll composite my Add TOP and my Constant TOP together to end with a final composition:
Now it's your turn to play. What happens when you play with the signal processing after your render? What happens when you alter the video stream heading into this component? Also, don't forget that we built a slider that controls the strength of the effect – play and make something fun.
Looking to take a closer look at what makes this process work? Download the tox file and see what makes this thing tick: rut