TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to play back pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

[screenshot]

First things first let’s take a quick inventory of the operators that we’re using:

Button Comp – this acts as the trigger for our animation.
Animation Comp – this component holds four channels of information that will drive our torus.
Trail CHOP – I’m using this to have a better sense of what’s happening in the Animation Comp.
Geometry Comp – this is holding our 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation Comp. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is the wiki. We can also dive into the Animation Comp and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right-clicking on the Animation Comp and selecting “Edit Animation…” from the pop-up menu.

[animation: open animation editor]

We should now see a new window at the bottom of the screen that looks like a time-line.

[screenshot]

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

[screenshot]

Next I want to add three channels for some object translation. This time I’m going to type the following into the Names field: “trans[xyz]”.

[screenshot]

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

[screenshot]
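If the bracket pattern is new to you, here’s a quick sketch of how the Names dialog expands “trans[xyz]” into separate channels. This is plain Python, and the expand_pattern helper is just my illustration of the naming convention, not anything from TouchDesigner itself:

# Expands a channel-name pattern like "trans[xyz]" into individual
# names, mirroring what the Names dialog does with our input.
def expand_pattern(pattern):
    if "[" not in pattern:
        return [pattern]
    prefix, rest = pattern.split("[", 1)
    letters, suffix = rest.split("]", 1)
    return [prefix + letter + suffix for letter in letters]

print(expand_pattern("trans[xyz]"))  # ['transx', 'transy', 'transz']
print(expand_pattern("trans[XYZ]"))  # ['transX', 'transY', 'transZ']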

Let’s key frame some animation to get started; if we want to change things later we can always come back to the editor. First, click on one of your channels so that it’s highlighted. Now, along the time line, hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can translate them up or down in the animation editor, change the attack of their slope, and change their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

[screenshot]

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left-hand corner of our network we can see a few time-code settings:

[screenshot]

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10-second window. Coming from any kind of animation workflow, the idea of a frame-based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.
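If you want to double-check that window for yourself, the arithmetic is simply frame count divided by frame rate (plain Python here, nothing TouchDesigner-specific):

# 600 frames at 60 frames per second gives us a 10 second animation.
start_frame = 1
end_frame = 600
fps = 60
duration_seconds = (end_frame - start_frame + 1) / fps
print(duration_seconds)  # 10.0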

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

[screenshot]

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

[screenshot]

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

[animation: trail animation]
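To make that “Hold” behavior concrete, here’s a rough sketch of the idea in plain Python. The keys list and sample function are mine, purely for illustration; this is not how TouchDesigner stores or evaluates key frames:

# (frame, value) pairs standing in for key frames on one channel.
keys = [(1, 0.0), (300, 1.0), (600, 0.0)]

def sample(keys, frame):
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:  # Trim Right = Hold: keep the last value
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:  # simple linear interpolation between keys
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

print(sample(keys, 450))   # 0.5 - halfway down the ramp
print(sample(keys, 9000))  # 0.0 - held at the final key frame's value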

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button Comp to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation Comp click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

[screenshot]
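To unpack what that expression is doing (the names here come straight from our network):

# op("button1/out1") points at the Out CHOP inside our button COMP,
# and ["v1"] pulls its channel: 1 while the momentary button is
# pressed, 0 otherwise. When the value flips to 1, the cue fires.
op("button1/out1")["v1"]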

Now when you click on your button you should trigger your animation.

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next let’s jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

[screenshot]

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the Transform and the Noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flags on the torus1 SOP, and turn them on for the noise1 SOP.

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the Transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op("../animation1/out")["transz"]
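If relative paths are new to you, here’s a sketch of how that lookup resolves. I’m assuming the whole tutorial network lives in a container I’m calling /project1; your path may differ:

# Assumed layout (names mirror this tutorial; /project1 is my guess):
#   /project1/animation1        <- our Animation COMP
#   /project1/geo1              <- our Geometry COMP
#   /project1/geo1/transform1   <- the SOP holding this expression
#
# Paths resolve like file paths, relative to the operator that owns
# the expression. From transform1, ".." steps up and out of geo1, so:
#   "../animation1/out"  ->  /project1/animation1/out
# and ["transz"] then pulls the named channel off that CHOP.
op("../animation1/out")["transz"]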

This is telling the Transform SOP that out of the parent of this object, we want to look at the operator named “animation1” and we want the channel named “transz”. Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

[screenshot]
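As a quick check on what those multipliers mean in practice: at 60 FPS, me.time.absFrame grows by 60 every second, so the rotation rates work out like this:

# Degrees of rotation added per second for each multiplier.
fps = 60
for multiplier in (0.1, 0.3):
    print(multiplier, "->", fps * multiplier, "degrees per second")
# 0.1 -> 6.0 degrees per second (rx)
# 0.3 -> 18.0 degrees per second (ry)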

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op("../animation1/out")["noise"]

When you’re done your noise SOP should look something like this:

[screenshot]

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation both in the Trail CHOP and in our Geo. It’s important to remember that we’ve connected the changes in our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation, all we need to do is go back to editing the Animation COMP and adjust our key frames.

[animation: geo animation]

There you go – now you’ve built an animation sequence that’s rendered in real time and triggered by hitting a button.

Cue Building for Non-Linear Productions

The newly devised piece that I’ve been working on here at ASU finally opened this last weekend. Named “The Fall of the House of Escher,” the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day-like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times; on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will go unseen.

This presents a tremendous challenge to any designer dealing with traditionally linear storytelling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and by the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape early on, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed by then, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways through a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a measure of moving from room to room. This ideological approach was made easier with the addition of a convention for the “choice” moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton’s Watchout was largely out of the picture. It was for this reason that I settled on using Troikatronix Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods; methods that would prove to be essential in this process.

[screenshot: Fall_of_the_House_of_Escher_SHOW_DEMO.izz]

In building this navigation method it was first important to establish the locations in the house, and to create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house, I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.

[screenshot: Fall_of_the_House_of_Escher_SHOW_DEMO.izz 2]
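As a thought experiment, here’s the shape of that room-based navigation in plain Python. This isn’t Isadora code, and the room names are invented placeholders; it’s only meant to show the underlying map-plus-state idea:

# Each room lists the rooms it connects to; "choice" stands in for
# the space outside the normal house used for audience-choice moments.
connections = {
    "parlor":  ["hallway"],
    "hallway": ["parlor", "study", "choice"],
    "study":   ["hallway", "choice"],
    "choice":  ["parlor", "study"],
}

current_room = "parlor"

def travel(destination):
    # Move to a connected room; each room carries its own cue set,
    # so knowing the current room tells us exactly which cues apply.
    global current_room
    if destination in connections[current_room]:
        current_room = destination
        print("now in:", current_room)
    else:
        print("no direct path from", current_room, "to", destination)

travel("hallway")  # now in: hallway
travel("choice")   # now in: choice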

The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, the challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility needed for this kind of production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated was ultimately the solution to the complex question of how to tackle non-linear storytelling.