
TouchDesigner | 3D solutions for a 2D world

One of the fascinating pieces of working in TouchDesigner is the ability to use 3D tools to solve 2D problems. For the last seven months or so I’ve been working on Daniel Fine’s thesis project – Wonder Dome. Dome projection is a wild ride, and one of the many challenges we’ve encountered is thinking about how to place media on the dome without the time-intensive process of pre-rendering all of the content specifically for this projection environment. To address some of these issues we started working with Los Angeles-based Vortex Immersion Media, whose lead programmer is TouchDesigner specialist Jeff Smith of Eve Vapor. Part of the wonderful opportunity we’ve had in working with Vortex is getting to take an early look at Jeff’s custom-built dome mapping tool. Built exclusively in TouchDesigner, it’s an incredibly powerful piece of software that makes the dome warping and blending process straightforward. The next step in the process for us was to consider how we were going to place content on the interior surface of the dome. The dome mapping tool that we’re using takes a square raster as its input, which can be visualized as a polar array. If you’re at all interested in dome projection, start by looking at Paul Bourke’s site – the wealth of information there has proven invaluable to the Wonder Dome team as we’ve wrestled with dome projection challenges. This square image is beautifully mapped to the interior surface of the dome, making placing content a matter of considering where on the array you might want a piece of artwork to live.

There are a number of methods for addressing this challenge, and my instinct was to build a simple TouchDesigner network that would allow us to see, place, and manipulate media in real time while we were making the show. Here’s what my simple asset placement component looks like:

asset placement

This component-based approach makes it easy for the design and production team to see what something looks like in the dome in real time, and to place it quickly and easily. Additionally, this component is built so that adding animation to still assets is simple and straightforward.

Let’s start by taking a look at the network that drives this object, and cover the conceptual structure behind its operation.

Screenshot_022814_125736_AM

In this network we have a series of sliders that are controlling some key aspects of the media – orientation along a circular path, distance from the center, and zoom. These sliders also pass out values to a display component to make it easy to take note of the values needed for programming animation.
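
To give a sense of how those slider values get used downstream, the mapping looks something like this – the slider names are hypothetical stand-ins, and each slider COMP puts out a 0–1 value on the v1 channel of its out1 CHOP:

op("slider_rot/out1")["v1"] * 360    # orientation: degrees of travel around the circular path
op("slider_dist/out1")["v1"] * 5     # distance from the center of the square raster
op("slider_zoom/out1")["v1"] * 2     # scale applied to the media rectangle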

We also have a render chain that’s doing a few interesting things. First we’re taking a piece of source media, and using that to texture a piece of geometry with the same aspect ratio as our source. Next we’re placing that rectangle in 3D space and locking its movement to a predefined circular pathway. Finally we’re rendering this from the perspective of a camera looking down on the object as though it were on a table.

I’m using a circle SOP to create the pathway that my Geo COMP will rotate around. I ended this network in a null so that if we needed to make any changes I wouldn’t have to change the export settings for this pathway.

Screenshot_022714_095918_AM

You’ll also notice that we’re looking at the parameters for the circle where I’ve turned on the bulls-eye so we’re only seeing the parameters that I’ve changed. I’ve made this a small NURBS curve to give me a simple circle.

The next thing I want to think about is setting up a surface to be manipulated in 3D space. This could be a rectangle or a grid. I’m using a rectangle in this particular case, as I don’t need any fancy deformation to be applied to this object. In order to see anything made in 3D space we need to render those objects. The render process looks like a simple chain of operators: a geo COMP, a camera COMP, a light COMP, and a render TOP. In order to render something in 3D space we need a camera (the perspective we’re viewing the object from), a piece of geometry (something to render), and a light (something to illuminate the object).

Screenshot_022814_125903_AM
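
If you prefer building that chain from a script rather than by hand, a minimal sketch might look like this – it assumes you’re working inside a container at /project1 and uses default operator names:

container = op("/project1")
geo = container.create(geometryCOMP, "geo1")       # something to render
cam = container.create(cameraCOMP, "cam1")         # the perspective we render from
light = container.create(lightCOMP, "light1")      # something to illuminate the object
render = container.create(renderTOP, "render1")
render.par.camera = cam.name                       # point the render TOP at each piece
render.par.geometry = geo.name
render.par.lights = light.name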

We can see in my network below that I’ve used an in TOP so that I can feed this container from the parent portion of the network. I’ve also given this a default image so that I can always see something in my container. You might also notice that while I have a camera and a geo, I don’t have a light COMP. This is because I’m using a material type that doesn’t require any lighting. More about that in a moment. We can also see that my circle is being referenced by the Geo, and that the in TOP is also being referenced by the Geo. To better understand what’s happening here we need to dive into the Geo COMP.

Screenshot_022814_125926_AM

Inside of the Geo COMP we can see a few interesting things at work. One thing you’ll notice is that I have a constant MAT and an info CHOP inside of this object. Both of these operators are referencing the in TOP that’s in the parent network. My constant MAT references the in TOP to establish what is going to be applied to the Geo as a material. My info CHOP gives me quick access to several of the attributes of my source file, including the resolution of the source media. I can use this information to determine the aspect ratio of the source, and then make sure that my rectangle is sized to match. Using this process I don’t have to rely on a particular aspect ratio for my source material: I can pass this container any shape of rectangular image, and it will size itself appropriately.
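
To make that concrete, the sizing expressions end up looking something like this – “info1” is a stand-in name for my info CHOP, which reports the source TOP’s resolution as resx and resy channels:

op("info1")["resx"] / op("info1")["resy"]   # Size X of the rectangle: the aspect ratio
1                                           # Size Y: held at 1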

Initially I just had three sliders that controlled the placement of my media in this environment. Then I started thinking about what I would really need during our technical rehearsals. It occurred to me that I would want the option to place the media on the surface of the dome from a position other than behind the media server. To address this need I built a simple TouchOSC interface to replicate my three sliders. Next I captured that OSC information with TouchDesigner, and passed that stream of floats into my container. From here I suddenly had to do some serious thinking about what I wanted this object to do. Ideally, I wanted to be able to control the container either from the media server or from a remote access panel (TouchOSC). I also wanted the ability to record the position information that was being passed so I could use it later. This meant that I needed to think about how I was going to capture and recall the same information from three possible sources. To do this I started by packaging my data with merge CHOPs. I also took this opportunity to rename my channels. For example, my OSC data read osc_rot, osc_dist, and osc_zoom – the rotation, distance, and zoom sliders from my TouchOSC panel. I repeated this process for the sliders, and for the table that I was using. I also knew that I wanted to rename my streams and pass them all to a null CHOP before exporting across the network. To keep my network a little more tidy I used a base to encapsulate all of the patching, selecting, and switching that needed to happen for this algorithm to work properly.

Screenshot_022814_010413_AM

Inside of the base COMP we can see that I’m taking my three in CHOPs, selecting for the appropriate channel, passing this to a switch (so I can control what value is driving the rendering portion of my network), and then back out again. You may also notice that I’m passing the switch values to a null, and then exporting that to an opViewer TOP. The opViewer TOP creates a rendered image of the channel operator at work. Why would I do this? Well, I wanted a confidence monitor for my patch-bay. The base COMP allows you to assign a TOP to its display. Doing this meant that I could see into a portion of the base without having to actually be inside of the component.

Screenshot_022814_010506_AM
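
If you’re curious what drives the switch CHOP’s Index parameter, it’s an expression along these lines – the operator name here is a stand-in, and the idea is to read the radio panel value exported from the button container described a little further down:

op("panel1")["radio"]   # index of the radio button currently on, used as the switch index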

With all of the patching setup, I needed to build an interface that would control all of these changes. I also needed a way to capture the values coming out of TouchOSC, store them in a table, and then recall them later.

Screenshot_022814_010616_AM

The solution here was to build a few buttons to drive this interface. To drive the switch CHOP in my base component, I used three buttons encapsulated inside of a container COMP and set to operate as radio buttons. I then used a panel CHOP in the container to export which button was currently toggled into the on position. Next I added a button COMP to record the values set from TouchOSC. Using a CHOP to DAT I was able to capture the float values streaming into my network, and I knew that what I wanted was to be able to copy a set of these values to a table. To do this I used a panel execute DAT. This class of DAT looks at the panel properties of a specified container (buttons and sliders also qualify here), and runs a script when the specified conditions in the DAT are met. This is the portion of the network that gave me the most headache. Understanding how these DATs work, and the best method of working with them, took some experimentation. To troubleshoot this, I started by writing my script in a text DAT and running it manually. Once I had a script that was doing what I wanted, I set to the task of better understanding the panel execute DAT. For those interested in Python scripting in TouchDesigner, here’s the simple method that I used:

m = op("chopto1")
n = op("table1")
n.copy(m)

Here the operator chopto1 is the DAT that is capturing the OSC stream. The operator table1 is an empty table that I want to copy values to – it’s the destination for my data. The copy method is called on the destination (table1) and takes the source you want to pull from (chopto1) as its argument.

Screenshot_022814_010809_AM

Finally ready to work with the panel execute DAT, I discovered that all of my headaches were caused by misplacing the script. To get the DAT to operate properly I just had to make sure that my script lived inside the callback for the panel event I’d specified – between the def line and the return call.

Screenshot_022814_010926_AM
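
For context, here’s roughly what the DAT ends up looking like once the script is in the right place – the callback name and signature vary between TouchDesigner builds, so treat this as a sketch rather than a drop-in:

def offToOn(panelValue):
    m = op("chopto1")   # the CHOP to DAT capturing the incoming values
    n = op("table1")    # the destination table
    n.copy(m)           # overwrite the table with the current set of values
    return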

One last helpful hint / tip that I can offer from working on this component is how to specify the order of your buttons in a container. One handy feature in the container parameters page is the ability to have TouchDesigner automatically array your buttons rather than placing them yourself. The catch: how do you specify the order the buttons should appear in? If you look at the parameter page for the buttons themselves, you’ll notice that they have a smartly named parameter called “Alignment Order.” This sets their order in the parent control panel.

If I’ve learned nothing else, I have learned that sometimes it’s the simplest things that are the easiest to miss.

TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to playback pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt a little clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

Screenshot_011514_125716_AM

First things first let’s take a quick inventory of the operators that we’re using:

Button Comp – this acts as the trigger for our animation.
Animation Comp – this component holds four channels of information that will drive our torus.
Trail CHOP – I’m using this to have a better sense of what’s happening in the Animation Comp.
Geometry Comp – this is holding our 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation Comp. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is the wiki. We can also dive into the Animation Comp and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right clicking on the Animation Comp and selecting “Edit Animation…” from the pop-up menu.

open animation editor

We should now see a new window at the bottom of the screen that looks like a time-line.

Screenshot_011614_113551_PM

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

Screenshot_011614_114525_PM

Next I want to add three channels for some object translation. This time I’m going to type the following into the Names Field “trans[xyz]”.

Screenshot_011614_114908_PM

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

Screenshot_011614_115144_PM

Let’s key frame some animation to get started; if we want to change things we can come back to the editor. First, click on one of your channels so that it’s highlighted. Now, along the time line, you can hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can then translate them up or down in the animation editor, change the attack of their slope, as well as their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

Screenshot_011614_115803_PM

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

Screenshot_011614_115829_PM

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10 second window (600 frames ÷ 60 frames per second). Coming from any kind of animation workflow, the idea of a frame-based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about Frame Rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

Screenshot_011714_121051_AM

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

Screenshot_011714_122911_AM

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

trail animation

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building Buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button Comp to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last, but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation Comp click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

Screenshot_011714_123836_AM

Now when you click on your button you should trigger your animation.
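
As an aside, if you’d rather cue the animation from a script than from a button, you can also pulse the Animation Comp’s cue parameter directly. The parameter name below is an assumption on my part, so check it against the parameter names in your build:

op("animation1").par.cuepulse.pulse()   # assumed name of the Cue Pulse parameter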

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next let’s jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

Screenshot_011714_124226_AM

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the transform and the noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flags on the torus1 SOP, and turn them on for the noise1 SOP.
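
As a side note, those flags can also be set from a script rather than clicked – a small sketch, assuming the default operator names we’re using here:

op("torus1").display = False   # hide the source torus
op("torus1").render = False
op("noise1").display = True    # show and render the deformed result
op("noise1").render = True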

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op(“../animation1/out”)[“transz”]

This is telling the transform SOP to look in the parent of this object for the operator named “animation1”, grab its out node, and pull the channel named “transz”. Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

Screenshot_011714_125740_AM
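
To put some numbers to that, here’s a quick sanity check of what those multipliers mean at the 60 FPS we looked at earlier:

fps = 60                                           # absFrame advances by 60 every second
for multiplier in (0.1, 0.3):
    degrees_per_second = fps * multiplier          # 6 and 18 degrees per second
    seconds_per_turn = 360 / degrees_per_second    # one revolution every 60 and 20 seconds
    print(multiplier, degrees_per_second, seconds_per_turn)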

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op(“../animation1/out”)[“noise”]

When you’re done your noise SOP should look something like this:

Screenshot_011714_010238_AM

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation drive both the trail CHOP and our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation, all we need to do is go back to editing the Animation COMP and adjust our key frames.

geo animation

There you go – now you’ve built an animation sequence that’s rendered in real time and triggered by hitting a button.

Escher Image and Animation Research

In thinking about what the media and animation for The Fall of the House of Escher might look like I’ve been sifting through the internet this summer looking for images that abstractly represent the universe and the behavior of particles and waves. Some of the more interesting work that I’ve found uses simple geometric shapes and particle systems to evoke a sense of scale, distance, and perspective. The work of the motion graphics designer Mr. Div is a prime example of someone who makes works that are both simple and strikingly captivating.

The gif to the right is my attempt at copying his piece “Tri-Heart”. I think the value of copy-art as a practice can’t be overstated. Recreating a work that you see from scratch teaches you more than simply following a tutorial. You are forced to wrestle with questions of how and why, and solve problems that don’t necessarily have clear solutions. While I don’t think this animation, specifically, is going to find its way into Escher, there are qualities of it that I really like and that feel distinctly quantum.

On the other end of the spectrum, in the “just follow along with a tutorial” category, is a fascinating how-to created by minutephysics. Their quick After Effects tutorial covers how to create a simple simulation of the formation of the universe. While it’s not scientifically accurate, the aesthetic conveys the look and feel of much more complex simulations of the universe’s formation. Both of their videos are worth a watch, and the result of the tutorial can be seen here. Again, I don’t know that this exact animation will be something that I use, but it has elements that are inspiring and interesting.

In many respects there is a daunting amount of media in this production – interactive elements, large scale animations, moments of world-shifting perspective, and the expression of the abstract inner space of the atom. There’s a life-time of work in this production, and it goes up in October. There’s lots to do.