
TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to play back pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

Screenshot_011514_125716_AM

First things first let’s take a quick inventory of the operators that we’re using:

  • Button COMP – this acts as the trigger for our animation.
  • Animation COMP – this component holds four channels of information that will drive our torus.
  • Trail CHOP – I’m using this to get a better sense of what’s happening in the Animation COMP.
  • Geometry COMP – this holds the 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation COMP. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is at the wiki. We can also dive into the Animation COMP and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is to look at the animation editor. We can find this by right clicking on the Animation COMP and selecting “Edit Animation…” from the pop-up menu.

open animation editor

We should now see a new window at the bottom of the screen that looks like a time-line.

Screenshot_011614_113551_PM

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

Screenshot_011614_114525_PM

Next I want to add three channels for some object translation. This time I’m going to type the following into the Names field: “trans[xyz]”.

Screenshot_011614_114908_PM

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

Screenshot_011614_115144_PM

Let’s key frame some animation to get started; if we want to change things we can come back to the editor. First, click on one of your channels so that it’s highlighted. Now, along the time line, you can hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can then translate them up or down in the animation editor, change the attack of their slope, as well as their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

Screenshot_011614_115803_PM

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

Screenshot_011614_115829_PM

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10 second window. Coming from any kind of animation workflow, the idea of a frame based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.
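If you ever want to double check that math, it’s easy to verify with a little plain Python (nothing TouchDesigner specific here):

frames = 600          # the end frame of our network's time line
fps = 60              # our network's frames per second
print(frames / fps)   # 10.0 – a ten second animation window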

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

Screenshot_011714_121051_AM

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

Screenshot_011714_122911_AM

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

trail animation

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building Buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button COMP to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last, but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation COMP click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

Screenshot_011714_123836_AM

Now when you click on your button you should trigger your animation.
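As an aside, if you’d rather script this trigger than build a reference expression, a Panel Execute DAT watching the button can pulse the Animation COMP directly. Treat the sketch below as an assumption-laden alternative rather than the method used above – callback names vary between TouchDesigner builds, so check your DAT’s default template, and confirm the parameter name on the Animation COMP’s parameter page:

# Panel Execute DAT callback – runs when button1's panel value flips on.
# 'animation1' and the 'cuepulse' parameter name are assumptions; verify
# both in your own network before relying on this.
def onOffToOn(panelValue):
    op('animation1').par.cuepulse.pulse()
    return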

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next let’s jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

Screenshot_011714_124226_AM

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the Transform and the Noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flags on the torus1 SOP, and turn them on for the noise1 SOP.

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the Transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op("../animation1/out")["transz"]

This is telling the Transform SOP that out of the parent of this object, we want to look at the operator named “animation1” and we want the channel named “transz”. Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

Screenshot_011714_125740_AM
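As a quick sanity check on those rotation speeds: our network runs at 60 FPS, so absFrame advances 60 steps every second, and a little arithmetic tells us how fast the torus tumbles:

print(60 * 0.1)   # 6.0 – rx advances 6 degrees per second
print(60 * 0.3)   # 18.0 – ry advances 18 degrees per second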

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op("../animation1/out")["noise"]

When you’re done your noise SOP should look something like this:

Screenshot_011714_010238_AM

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation both run the trail CHOP, and our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation all we need to do is to go back to editing the Animation COMP and adjust our key frames.

geo animation

There you go, now you’ve built an animation sequence that’s rendered in real time, and triggered by hitting a button.

Interface Building – Execute DATs | TouchDesigner

Sometimes it’s easy to forget about the most obvious features of a device. In my case, I finally decided to do some investigating about the nature and function of the LAN port on the back of an InFocus 2116. It is not uncommon to see projectors with network access ports these days, but I had always assumed that they only worked with the access software that the manufacturer is looking to sell / distribute. InFocus produces a free piece of software called ProjectorNet that’s designed to give system admins quick access to the settings and status of connected projectors. This seems like a handy piece of software, but just wasn’t something I had been in a position to review or experiment with. Last week when I finally gave myself some time to look at my LAN options for this InFocus, I noticed something when I booted up the machine – in a rather unassuming way, the projector was listing an IP address on the lamp-up screen.

Being the curious type, I decided to see what I got if I pinged the address. I also looked for open ports, and discovered that it was listening for HTTP. Opening up a web browser I decided to try my luck and see what would happen if I just typed in the IP address of the projector itself. I was greeted by a lovely log-in screen for the projector.

Screenshot_122113_034210_PM

Selecting Administrator from the drop down menu, and leaving the password field blank (I just guessed that the password was either going to be blank or “admin”), I was shocked to see the holy grail of projector finds: access to all of the projector’s settings and calibration tools. Jackpot. For anyone who has ever been in the unfortunate position of trying to wrangle the menus of a projector, you’ll know how maddening this experience can be – especially if there’s any chance that the previous user might have left the projector in ceiling mode (upside down) or rear-projection mode (backwards).

Screenshot_122113_034239_PM

As it turns out, the kind of projector wrangling and futzing I’d been doing by hand is something I could have been doing remotely all along. In thinking about how to use this find to my best advantage I started thinking about the production that I’ll be working on in the Spring of 2014 – Wonder Dome. One of the challenges of Wonder Dome is the complex multi-projector installation, calibration, and operation that our team will be working with. Suddenly having the ability to manage our projection system over the network is a huge win – and a discovery that started me working on the application of this particular find.

Our media server is going to run a custom piece of software developed in Derivative’s TouchDesigner. As I’ve been working on various parts of the media system, the issue of easy calibration has been high on our wish list. To that end it seemed like being able to power and manage the projectors from within TouchDesigner would be more than handy. Here’s the small part of our calibration window dedicated to this process:

Screenshot_122113_035750_PM

Here I have four fields where the IP addresses of the projectors can be entered. Saving the show file will mean that we’ll only need to do this process once, but it also means that if for some reason we swap out a projector, we can easily change the IP address. The Projector Status button opens all three addresses in separate tabs of my default web browser. Let’s take a look at how to make that work.

Here’s what this part of the network looks like:

Screenshot_122113_040214_PM

Here I have four Field Components, and a Button Component. In this particular network I’ve altered one field comp to act as a static label (Projector IP Address), and I’ve altered the button. Turning off the top field was fairly straightforward. Looking at the Panel page of this Comp you’ll notice a toggle for “Enable.” By setting this parameter to “Off” the panel element is no longer active.

Screenshot_122113_041200_PM

I knew that I wanted the button to pull from three IP addresses. I started by first adding three Field COMPs. Next I added my Button COMP. To pull in the three strings from the Field COMPs I needed to add inputs to the button. Let’s take a look inside of the Button COMP to see how this works.

Screenshot_122113_042847_PM

Other than the usual button ingredients, I’ve added a few other elements. I have three In DATs, one Text DAT, three Substitute DATs, a single Merge DAT, a Null DAT, all ending in a Panel Execute DAT.

Here the important starting principle is that our Panel Execute DAT needs the following string in order to open our web page: "viewfile http://IP_Address_Here". Listing three viewfile commands means that all of those files are opened at once. Practically, that meant that in order to make this panel command work I needed to correctly format my IP addresses and add them to the Panel Execute DAT in order to open the three web pages. If we take a look at the format of the In – Text – Substitute DAT string we’ll see how this works.

Screenshot_122113_042950_PM

Here’s how the following DATs work in this network.

  • In DAT – this pulls in the text string entered into the Field Comp.
  • Text DAT – in this DAT I’ve formatted my command for the Execute DAT, with the exception of including a placeholder for the IP address of my projectors.
  • Substitute DAT – the substitute DAT uses the string of my Text DAT, and then removes the placeholder and replaces that value with the IP address of my projectors.

Let’s look at the parameters of the Substitute DAT so we can see how this node works.

Screenshot_122113_043321_PM

Here I specified that the term “P1” should be replaced by the contents of the In DAT. I pulled in the value of that string with the expression op("in1")[0,0], which means: in the operator named “in1”, pull the contents of the first cell in the first row of the table.

These three Substitute DATs are then combined with a Merge DAT, passed to a Null (just in case I need to make any further modifications at another point), and finally passed into a Panel Execute DAT.
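For reference, the merged table that lands in the Panel Execute DAT ends up as three viewfile commands, one per row. The addresses below are hypothetical placeholders – your projectors’ actual IPs would appear instead:

viewfile http://192.168.1.101
viewfile http://192.168.1.102
viewfile http://192.168.1.103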

Let’s quickly take a closer look at the Panel Execute DAT to make sure that we know exactly what it’s doing. First off we want to make sure that we’re using T-Script for this particular method. You can check this by looking for the “T” in the upper right hand corner of the properties dialog box.

Screenshot_012414_122114_AM

We also want to make sure that we force this DAT to stay speaking T-Script. We can do this by bringing up the “Common” page, and selecting “Node” for the language method.

Screenshot_012414_122434_AM

Next let’s test this to make sure it’s working. First we’ll move up a level so we can see our button. We’ll make our button something we can interact with by clicking on the View Active button in the bottom right hand corner (it’s the button that looks like a + sign). Now we should be able to click our button which should in turn launch three browser windows.

Screenshot_012414_122801_AM

Bingo, bango our button now opens up three tabs in Chrome. If you need some more information about working with buttons in general you can do some more reading here.

Multiple Windows | TouchDesigner

For an upcoming project that I’m working on our show control needs to be able to send out video content to three different projectors. The lesson I’ve learned time and again with TouchDesigner is to first start by looking through their online documentation to learn about what my options are, and to get my bearings. A quick search of their support wiki landed me on the page about Multiple Monitors.

To get started I decided to roll with the multiple window component method – this seemed like it would be flexible and easy to address out of the gate. Before I was ready for this step I had to get a few other things in order in my network. Ultimately, the need that I’m working to fill is distortion and blending for the interior surface of a dome using three projectors that need to warp and edge blend in real time. First up on my way to solving that problem was looking at using a cube map in order to address some of this challenge. In this first network we can see six faces of a cube map composited together, exported to a phong shader, and then applied to a dome surface which is then rendered in real time from three different perspectives.

Screenshot_121913_105543_PM

A general overview of the kind of technique I’m talking about can be found here. The real meat and potatoes of what I was after in this concept testing was in this part of the network:

Screenshot_121913_105619_PM

Here I have three camera components driving three different Render TOPs, which are in turn passing to three Null TOPs that are named P1, P2, and P3 – projector 1 – 3. As this was a test of the concepts of multiple monitor outs, you’ll notice that there isn’t much difference between the three different camera perspectives and that I haven’t added in any edge blending or masking elements to the three renders. Those pieces are certainly on their way, but for the sake of this network I was focused on getting multiple windows out of this project.

If we jump out of this Container Comp we can see that I’ve added three Window Components and a Button to my network. Rather than routing content into these window elements, I’ve instead opted to just export the contents to the window comps.

Screenshot_121913_105514_PM

If we take a closer look at the parameters of the Window COMP we can see what’s going on here in a little more detail:

Screenshot_121913_111605_PM

Here we can see that I’ve changed the Operator path to point to my Null TOP inside of my container COMP – the path is “/project1/P1”. The general translation of this pathway would be “/the_name_of_container/the_name_of_the_operator“. Setting the Operator path to your target operator will export the specified null when the window is opened, but it will not display the contents of the null in the node itself. If you’d like to see a preview of the render on the window node, you’ll also need to change the node pathway on the Common page of the Window COMP. Here’s what that looks like:

Screenshot_121913_111619_PM

Finally, I wanted to be able to test using a single button to open and close all three windows. When our media server is up and running I’d like to be able to open all three windows with a single click rather than opening them one window comp at a time. In order to test this idea, I added a single button component to my network. By exporting the state of this button to the “Open” parameter of the window on the Window Page I’m able to toggle all three windows on and off with a single button.
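If you’d rather script this than export the button’s state, a few lines of Python can open (or close) every window at once. Treat this as a hedged sketch: it assumes the Window COMPs are named window1 through window3, and that your build exposes the Window page’s open and close controls as winopen and winclose pulse parameters (recent builds do; older builds used an Open toggle, so check the parameter names in yours):

# open all three output windows in one go
for name in ('window1', 'window2', 'window3'):
    op(name).par.winopen.pulse()

# and close them again
for name in ('window1', 'window2', 'window3'):
    op(name).par.winclose.pulse()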

AutoCAD and Dynamic Blocks for Media Designers

This semester (Fall 2013) I decided to take an AutoCAD course taught by ASU’s Jennifer Setlow. Jen’s course is primarily designed to serve lighting and scenic designers. That said, it’s already proven to be an invaluable experience for a media designer, as it’s exposed me to many of the models and methods that a lighting designer would use when creating a lighting plot.

As a final project Jen asked that students identify a project that would be challenging intellectually and technically. Ideally, this project would also be useful to the student in some capacity that reaches beyond the classroom itself.

With that in mind, as a final project I’ve opted to create drawings of the projectors that we keep in stock at ASU. In addition to detail drawings of the projectors themselves, I also want to create a set of dynamic properties that allow the designer to visualize the throw distance of the projectors when placed in a drafting of the theatre. My hope is that this will allow for easier plotting and planning, not only for myself but for future designers.

One of the problems to consider here is how to dynamically resize a portion of a block in a drawing based on another changing property of the block. In other words, I want to be able to shift the shape of the projector’s cone by dragging a dynamic handle on the drawing.

We can solve this problem with a little bit of digging on the Internet, and some careful work in AutoCAD. My initial starting point was to look at a helpful video from CAD Masters (you can see the whole channel here).

For the sake of this process, I’m going to focus on a simple implementation of this particular concept. To get started on this we first have to create a new drawing. With our new drawing created we need to add a few features that we can use.

Let’s start by making a rectangle, and a triangle (to represent our projection cone).

Next I’m going to convert this shape to a block. First I’ll select the whole object, then type “Block” into my command line.

The block command will bring up a dialog box that will allow me to convert this object into a block (essentially a single object). First I’m going to give my block a name, in my case I’m going to call it “Barco1.” For the Base Point I’m going to click on the button that says “Pick Point” and select the bottom center of the projector. Next I’m going to make sure that I check the box that says “Open in Block Editor.” Finally I’m going to click, “OK.”

This will open our new block in the Block Editor, where we can make some of the more interesting changes to our projector. In the Block Editor we have a new contextual ribbon, and a new palette (the Block Authoring Palettes).

Here in the block editor we’re going to use a linear parameter as a handle. Let’s place this parameter coming out of the center point of our cone.

I only want a single handle on this parameter, so I’m going to click on the bottom blue arrow and hit the delete key to delete just that handle.

Next I’m going to associate an action with this parameter. From the Actions tab on the Block Authoring Palette, I’m going to select “Scale.” After this I’ll first select my parameter, and then my object to scale (in this case the triangle) and hit enter.

Finally I’ll click on “Test Block” in my ribbon to see if this block is working the way I had expected.

For now this is pretty close.
Coming up, how to dynamically see zoom and lens shift.

TouchDesigner | Import from a System Folder

One of the handy building blocks to have in your arsenal as you’re working with any media system is an understanding of how it works with system folders. Isadora, for example, pulls assets from the folder specified when you load the original file. This means that you can change an asset, save it with the same name, and by placing it in the proper system folder your changes show up without having to re-import any files. What, then, if I want to automatically pull files from a folder on my computer? In TouchDesigner that’s a relatively simple thing to accomplish without too much work. Better yet, the underlying principles open up a lot of much more interesting possibilities.

Alright, let’s start with a simple problem that we can solve with TouchDesigner. I want to build a simple patch where I can use two control panel buttons to navigate through a list of images (or movies) in a specified folder – essentially I want to be able to add my photos or videos to a folder on my computer, and then display them in TouchDesigner without having to drag them into the network or having to add a Movie-In TOP.

Let’s start by looking at the full network.

Our system is made of three strings – a series of CHOPs, a series of DATs, and a TOP. We’re going to use two buttons to control a series of channel operators to give us a number. We’re going to use this number to select a specific file from a folder, and then we’re going to use that file name in a Movie-In Texture Operator to be displayed.

Let’s start with our CHOP string.

Button – Trigger – Count – Math – Null

First things first, the important place to start in this particular string is actually in the middle at the Count CHOP. The Count CHOP, in this case, is going to watch for a change in a signal and use that as a mechanism for determining when to increment or reset. There are lots of different ways to count, but in this particular case the Count CHOP is an excellent method for incrementing from a button press. Alright, so if we know that we’re going to start with a count, then we don’t actually need the Trigger CHOP – especially as I’m going to watch a Button for a signal change. That said, the Trigger CHOP might be handy if I’m watching something that doesn’t have a specific binary, or a signal where I want a certain threshold to act as the trigger. You’ll notice that my Trigger is bypassed, but I’ve left it in my network just in case I want to use a different kind of triggering method. Alright, with some of those things in mind, let’s move back to the beginning of this string.

First off I start by adding two Button COMPs to my network (if you want to learn more about Buttons and Control Panels start here). Zooming into the Button COMP I can set the text for the button itself. First I want to find the Text TOP labeled “bg.” Then I want to open the parameter window for this TOP and change the Text to “Forward” (you can name this whatever you want, I just wanted to know that this button was going to move me forward through my set of photos).

We’ll do the same thing for the other button, and in my case I changed both its name and its color. You can do both from the Text TOP in your second Button COMP.

Moving back out of the Button COMP I’m going to make a few other changes to my buttons. I want to change their position on the control panel, and I want to make them a little larger to accommodate the longer words inside of them. To do this I’m going to open up the parameter window of the Button COMPs and change the width of both to 66. I’m also going to change the Y location of button labeled “Forward” to 55 to make sure that my buttons aren’t on top of one another.

Before moving on, I also want to change the button Type for both of these COMPs to “Momentary.”

Next I’m going to connect my Button called “forward” to the top inlet on my Count CHOP, and the Button “reset” to the middle inlet on the Count CHOP. This means that every time I click the forward button I’ll increment the count up by 1, and when I click reset it will set the counter back to 0.

In my Math CHOP the only change that I’m going to make is to Pre-Add 1 to the integer coming from my count. I’m going to do this because TouchDesigner counts the first row of a table as 0. When we get to our DATs we’ll see that the 0 line in our table is actually the header of the table. While I could tell TouchDesigner not to show the header, that doesn’t feel like the best solution – especially as I want to be able to see what the columns are referring to.

Last but not least, I’m going to end this string of CHOPs in a Null. If you’re new to TouchDesigner, it’s important to know that ending a string in a Null can be a true life saver. Ending in a Null allows you to make changes up stream of the Null without any headache. Let’s imagine in a complicated network that you’ve connected 20 or 50 different operators to a CHOP. You decide that you want to change something in your string, and you delete the final CHOP (the one that’s connected to everything), leaving you to reconnect all of those patch cords in order to get your network back up and running again. The better solution is to end the string in a Null, and then connect the Null to those 20 or 50 or 1000 other operators. Changing things up stream of the Null is easy, especially as you don’t have to reconnect all of your patch cords again.

Now let’s take a look at our DAT stream.

(Text) – Folder – Select – Null

Here the Text DAT isn’t an essential part of my network, at least in terms of operation. As a general rule of thumb, I try to add comments to my networks as I’m working in them in order to make sure that I can remember how I did something. I highly recommend having some kind of commenting system, especially as this can prove to be a helpful tool to communicate with collaborators about what you’re doing with a specific part of a network. All of that to say, that we can ignore my first Text DAT.

The Folder DAT will pull the contents of a specified system folder. Ultimately, I’ll need the pathway of the files in this folder. I can set the Folder DAT to give me all of that information, and specify if I want it to include the content of sub folders, or the names of sub folders. In my case I only want the names and locations of the files in this single folder.

Next I want to select a single line out of this whole table. To do this, we’re going to connect the Folder DAT to a Select DAT.

First up I want to set this DAT to extract Rows by Index. I want to make sure that both Include First Row and Include First Column are turned off. Next I want to export the Null CHOP to both the Start Row Index and the End Row Index. Finally, I want to set the Start Column Index to 5, and the End Column Index to 5 – I’m telling the Select DAT that I’m only interested in the sixth column in this table (but you set it to 5, what’s the deal? Remember, 0 counts as a column, so when you’re counting columns in your table 0 is the new 1). I’m also telling it that I only want to see the row that’s specified by the CHOP string.

Last but not least, I’m going to pass this to a Null.

Finally, let’s add a Movie-In TOP to our network. Now, in the Movie-In parameter dialog we’re going to do a little typing. We need to tell the Movie-in TOP that it should reference a cell in a table in a DAT. Easy. Right? Sure it is. We’re going to first click the little plus that shows up as your pointer hovers over the left side of the parameter window.

Next we’re going to toggle the field from a constant into an expression by clicking on the little blue box.

Finally, we’re going to write a short expression to tell TouchDesigner where to look for the pathway of a file to load. Our expression looks like this:

op('null1')[0,0]

If we were writing that in English it would read: look at the operator named “null1” and find the contents of the cell in row 0, column 0. It’s important to note that the name in single quotes should be the name of the DAT Null that we created before, otherwise this won’t act as expected.
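As an aside, you could also collapse the Select DAT and its Null into a single expression that indexes the Folder DAT directly, since DATs accept a [row, column] pair and columns can be addressed by name. This is a hedged variant rather than what’s built above: it assumes the Folder DAT is named 'folder1', that it reports a 'path' column, and that the CHOP string’s Null is named 'null2' (since 'null1' is taken by the DAT string). The Pre-Add 1 from our Math CHOP still does the work of skipping the header row:

op('folder1')[int(op('null2')[0].eval()), 'path']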

There you have it. We’ve now made a network that will allow us to load the contents of a folder, sequentially based on how we use two buttons in our control panel.

Sending and Receiving OSC Values with TouchDesigner

The other day I posted a look at how to send and receive TouchOSC data with Troikatronix’s Isadora. In an effort to diversify my programming background I’m often curious about how to translate a given task from one programming environment to another. To this end, I was curious about repeating this same action in Derivative’s TouchDesigner. If you’re one of a growing number of artists interested in visual programming, VJing, media design, interactive system design, media installation, or media for live performance, then it’s well worth your time to look at both Isadora and TouchDesigner.

That said, I initially started thinking about control panel interfaces for TouchOSC when I saw Graham Thorne’s instructional videos about how to transmit data from Isadora to TouchOSC (if you have a chance take a look at his posts here: Part 1 & Part 2). Last year I used TouchOSC accelerometer data to drive an installation at Bragg’s Pie Factory in Downtown Phoenix, but I didn’t do much with actual control panels. Receiving data from TouchOSC in TouchDesigner is a piece of cake, but sending it back out again didn’t work the first time that I tried it. To be fair, I did only spend about 3 minutes trying to solve this particular question, as it was a rather fleeting thought while working on another project.

Getting this working in Isadora, however, gave me a renewed sense of focus and determination, so with a little bit of experimenting I finally got this working the way that I wanted. While there are any number of things you might do with TouchDesigner and TouchOSC, I wanted to start with a simple deliverable: I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it’s unlikely that I’d need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

For the most part this is pretty easy, but there are a few things to watch for in order to get the results that you want.

Here’s the network stream of Channel Operators (CHOPs):

OSC In – Select – Math – Rename – OSC Out

OSC In

Derivative Documentation on the OSC In CHOP 

The OSC In CHOP is pretty excellent out of the gate. We need to do a little configuring to get this up and running, but after that we’re truly off to the races. First off, it’s important to make sure that your TouchOSC device is connected to your network and broadcasting to your TouchDesigner machine. If this process is new to you it’s worth taking a little time to read about how to set this all up – start on this page if this is new to you. Once your TouchOSC device is talking to your TouchDesigner network we’re ready to start making magic happen. One of the first things that I did was to activate (just adjust up or down) the two faders that I wanted to work with. This made sure that their IDs were sent to TouchDesigner, making it much easier for me to determine how I was going to rename them in the future.

Screenshot_100913_094345_PM

Select

Derivative Documentation on the Select CHOP

When you’re working with the OSC In CHOP, all of your TouchOSC data comes through a single pipe. One of the first things to do here is to use the select CHOP to single out the signal that you’re looking for (as a note, you could also export the Channel in question to a Null, or handle this a variety of other ways – in my case a Select CHOP was just the first method I tried). This operator allows you to pull out a single Channel from your OSC In.

Screenshot_100913_094428_PM

Math

Derivative Documentation on the Math CHOP

Next I wanted to remap the values of my incoming signal to be inverted. A Math CHOP lets us quickly remap the range for our data by changing a few values in the Range Tab of that parameters window. You’ll notice that you first need to specify the from range (what’s the range of incoming values), and then specify the to range (what’s the range of outgoing values). In my case, the from values are 0 and 1, and the to values are 1 and 0.

Screenshot_100913_094528_PM
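If you’re curious what that remap is doing under the hood, it’s just linear interpolation. Here’s the same operation as a plain Python sketch (the function name and defaults are mine, not anything built into TouchDesigner):

def remap(x, from_min=0.0, from_max=1.0, to_min=1.0, to_max=0.0):
    # scale x from the 'from' range into the 'to' range; because the
    # 'to' range is flipped here, fader positions are inverted
    return to_min + (x - from_min) / (from_max - from_min) * (to_max - to_min)

print(remap(0.25))   # 0.75 – a fader at 25% drives the other fader to 75%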

Rename

Derivative Documentation on the Rename CHOP

The Rename CHOP is one of the most important steps in your whole network. When communicating with TouchOSC you need to know exactly what object you’re trying to drive with your data. To accomplish this you need to know the UDP address of your device, the port number that you’re broadcasting to, and the name of the slider or button that you’re wanting to change. While we can specify the UDP address and port number with another CHOP, we need to use the Rename CHOP (a quick aside – there are, of course, other ways to do this – this just happens to be the method that I got working after a bit of trial and error) to specify the name of the object that we’re driving. In my case I wanted fader 1 to drive fader 2. You’ll notice that the formatting of the fader name is important. In this particular OSC layout there are multiple tabs. We can see this as evidenced by the “1/” preceding the name of our fader. In changing the name of this input we need to enter the precise name of the fader that we want to change – this is why we activated both faders initially, as this gave us the names of both objects in the TouchOSC panel. You’ll notice that I changed the name of my fader to “1/fader2”.

Screenshot_100913_094606_PM

OSC Out

Derivative Documentation on the OSC Out CHOP 

Now that we’ve processed and named our signal we’re ready to send it back out to TouchOSC. In this CHOP we need to input the UDP address of the TouchOSC device (you can find this in the info tab of the TouchOSC app) as well as the port that we’re sending to. We also want to make sure that we’re sending this as a sample, and finally we want to turn off “send events every cook.” This last step is important because it ensures that we’re only sending values when they change. If we send messages for every cook we won’t be able to see the inverse relationship between our two sliders. Correction, we’ll be able to see the relationship so long as we don’t set up slider 2 to drive slider 1. In order to create an inverse relationship with both sliders we only want to transmit data as it’s changed (otherwise we’ll find that we’re fighting with the constantly streamed position data from our slider that isn’t being activated).

Screenshot_100913_094705_PM

The last step in this process is to copy the whole string and set up fader 2 to transmit to fader 1, this last step allows both faders to drive one another in turn.

There you have it. You’ve now successfully configured your TouchDesigner network to receive and send data to a mobile device running TouchOSC.

Sending and Receiving TouchOSC Values in Isadora

Sometime last year (in 2012) I came across Graham Thorne’s instructional videos about how to transmit data from Isadora to TouchOSC. Here’s Part 1 and Part 2 – if this is something that interests you, I’d highly recommend that you watch these two videos first.

While I didn’t have a TouchOSC project at the time that I was working on, it got me thinking about how interfaces communicate information about what’s happening in a patch, and how that information communicates to the operator or user of a given system. This year I’m working on the thesis project of Daniel Fine (an MFA student here at ASU), and one of the many challenges we’re bound to face is how to visualize and interact with a system that’s spread across multiple computers and operating systems, controlling a variety of different systems in the installation / performance space.

To that end, I thought that it would be important to start thinking about how to send control data to a TouchOSC interface, and how to then ensure that we can see relationships between different control values in a given control panel. That all sounds well and good, but it’s awfully vague. A concrete exploration of this kind of concept was what I needed to start planning, in order to more fully wrap my head around how this idea could be exploited in a performance setting.

In order to do this I decided that I wanted to accomplish a simple task with a TouchOSC control panel. On a simple panel layout I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it’s unlikely that I’d need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

First let’s look at the whole patch:

The Whole Patch

The set up for this starts by configuring Isadora to receive data from TouchOSC. If you’re new to this process start by reading this post (or at least through the Stream Set-Up section) to learn about how to start receiving data in Isadora from TouchOSC. Next we’re going to use a few simple actors to make this happen. We’re going to use the following actors:

  • OSC Listener
  • Limit Scale Value (there are other ways to scale values, I just like this method as a way to clearly see what values you’re changing and in what way)
  • OSC Transmitter

OSC Listener

Once you have your connections between TouchOSC and Isadora set up you’ll want to make sure that you’ve isolated a single slider. We can do this by using the OSC Listener actor. The OSC Listener reports the data coming from a given channel that you specify in the input inlet on the actor. The listener then sends out the transmitted values from the value outlet.

Limit Scale Value

We have two sliders that we’re working with – Channels 1 and 2 respectively (they also have names, but we’ll get to that later). We first want to look at Channel 1. We’re going to set the OSC Listener actor to channel 1 and then connect the value output to the value inlet on a Limit-Scale Value actor. The Limit-Scale Value actor allows you to change, scale, or otherwise remap floats or integers to a new range of values. This is important because TouchOSC uses normalized values (values from 0-1) for the range of the sliders in its control panels. In order to create an inverse relationship between two sliders we want to remap 0-1 to output as values from 1-0. We can do that by entering the following values in the inlets on the actor:

  • limit min: 0
  • limit max: 1
  • out min: 1
  • out max: 0

OSC Transmit

The remaining step is to connect the output from our Limit-Scale Value actor to an OSC Transmit actor. The OSC Transmit actor, like its name suggests, transmits data wrapped in the OSC protocol. In order to fully understand how this data is being transmitted we need to know a few things about this actor. In looking at its components we can see that it is made up of the following inlets:

  • UDP Addr – UDP address. This is the IP address of the computer or device that you’re talking to. You can determine what this address is in TouchOSC by looking at the info panel. Imagine that this is the street name for a house that you’re sending a letter to.
  • Port – this is the port on the device that you’re sending data to. It’s important that you know what port you’re trying to talk to on a given device so that your message can be parsed. If the UDP Address is the street name, the port number is akin to the house number that you’re trying to send a letter to.
  • Address – The address in the case of TouchOSC is the individual target / name of an asset that you want to change. Each of the sliders and buttons on a TouchOSC panel has a name (for example – /1/fader1), and the address is how you tell Isadora what slider you are wanting to change. You can determine these names by looking closely at your Stream Set-up when you’re connecting your device to Isadora. To follow with our letter sending metaphor above, the Address is the name of the person you’re sending the letter to.
  • Use Type – this allows us to toggle the sending mechanism on and off.
  • Value – this is the value that we’re transmitting to our other device.

To use the OSC Transmit actor we need to fill in all of the appropriate fields with the information from our mobile device. You’ll need to specify the UDP Address, the Port number, the Address, and connect the value out from our Limit-Scale Value actor to the value inlet of the OSC Transmit actor.

In this test I started by having fader1 drive fader2. Once I got this working, I then repeated all of the steps above, for the other fader – if you look closely at the full patch this will make more sense. The resulting interaction can be seen in the gifs below.


Cue Building for Non-Linear Productions

The newly devised piece that I’ve been working on here at ASU finally opened this last weekend. Named “The Fall of the House of Escher,” the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements that’s largely invisible to the audience is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of story telling that drew its inspiration from the Choose Your Own Adventure books that were originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of story telling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times, and on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will be unseen.

This presents a tremendous challenge to any designers dealing with traditionally linear story telling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting – to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways in a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a measure of moving from room to room. This ideological approach was made easier with the addition of a convention for the “choice” moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – in that establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear asset-based playback system like Dataton’s Watchout was largely out of the picture. It was for this reason that I settled on using Troikatronix’s Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods; methods that would prove to be essential in this process.

Fall_of_the_House_of_Escher_SHOW_DEMO.izz

In building this navigation method it was first important to establish the locations in the house, and create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.

Fall_of_the_House_of_Escher_SHOW_DEMO.izz 2

The real lesson to come out of non-linear story telling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, this challenge was compounded by the nature of our approach: a single control panel approach would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to complex questions of how to solve a problem like non-linear story telling.

TouchDesigner Concepts for Projection Mapping

This coming year at ASU I’ll be working with several talented artists and media makers on a project that’s tentatively being called “WonderDome.” As Dan Fine’s thesis project, he’s exploring what it means to create a playing space that exists inside of a dome of projection. The audience and performer will share a single immersive media environment where a story will be told with puppets, video, and just about all the theatre magic you can imagine. Central to this endeavor are questions about immersive media systems and how to approach the complex issue of mesh warping media for a curved surface that’s being stitched together with somewhere between six and eight projectors.

The traditional approach for this kind of project would be to make the flat media first. After creating the structure and installing the projectors, we’d create a sample mesh-warped After Effects comp and ultimately run all of our flat media through that comp to split it up into several pieces with the appropriate distortion applied. While that’s certainly a tried and true method, it doesn’t leave much room for error, and it also makes it difficult to use live video.

On our wish-list of media systems is a d3 media server by d3technologies. While their systems may well be out of our price range, their approach is one that’s gaining traction in smaller projection circles. We can pull apart some of the same process by working with Derivative’s TouchDesigner to get a sense of how we may well be thinking of projection mapping in the not too distant future.

Instead of waiting to get onsite, or crafting media specially fit for a particular structure, we instead start with a 3D model. In our computer generated model we add projectors, lights, paint our surfaces with textures or video, effectively assembling our system virtually.

The videos below take a quick and rough look at what this kind of workflow looks like in TouchDesigner, and how it differs from the kind of projection mapping you may already be accustomed to.

Edge Blending

Geometry and Cameras

Putting it All Together

House of Escher | Media Design

In December of 2012 I was approached at an ASU School of Theatre and Film party and asked if I would be interested in working on a project that would begin the following semester, and premiere a new work in the Fall of 2013. As this is exactly the kind of opportunity that I came to ASU to pursue, I eagerly agreed to be a part of the project.

Some Background

ASU’s School of Theatre and Film (soon to also include Dance) has a very interesting graduate school program for performers and designers. Operating on a cohort based system, the school admits a group of performers, directors, and designers (Scenic, Costume, and Lighting) every three years. One of the other graduate programs at the school, the one in which I’m enrolled, can enroll students each year. My program, Interdisciplinary Digital Media and Performance (IDM), straddles both the School of Arts, Media and Engineering and the School of Theatre and Film. Students in my program have pursued a variety of paths, and one skill that’s often included in those various paths is media and projection design for stage productions. Just today as I was watching the live web-cast of the Xbox One announcement, I was thinking to myself, “some designer planned, created, and programmed the media for this event… huh, I could be doing something like that someday.”

The latest cohort of actors, designers, and directors started in the Fall of 2011, which means that the group is due to graduate in the Spring of 2014. In both the second and third years of the cohort’s program they work to create a newly devised piece that’s performed in one of the theatres on campus at ASU. Occasionally this group also needs a media designer, and it’s their new show, premiering in the Fall of 2013, that I was asked to be a part of.

The Fall of the House of Escher

Our devising process started with some source material that we used as the preliminary research to start our discussion about what show we wanted to make. Our source materials were Edgar Allan Poe’s The Fall of the House of Usher, M.C. Escher, and Quantum Mechanics. With these three pillars as our starting point we dove into questions of how to tackle these issues, tell an interesting story, and work to meet the creative needs of the group.

One of our first decisions focused on the structure of the show that we wanted to create. After a significant amount of discussion we finally settled on tackling a Choose Your Own Adventure (CYOA) kind of structure. This partially arose as a means of exploring how to more fully integrate the audience experience with the live performance. While it brought significant design limitations and challenges, it ultimately was the methodology the group decided to tackle.

Shortly after this we also settled on a story as a framework for our production. Much of our exploratory conversation revolved around the original Poe work, and it was soon clear that the arc of the Fall of the House of Usher would be central to the story we set out to tell. The wrinkle in this simple idea came as our conversations time and again came back to how Poe and Quantum Mechanics connect with one another. As we talked about parallel universes, and the problems of uncertainty, we decided to take those very conversations as a cue for what direction to head with the production. While one version of the CYOA model takes patrons on the traditional track of Poe’s gothic story, audience members are also free to send our narrator down different dark paths to explore what else might be lurking in the Usher’s uncanny home. Looking at the photo below you can see where the audience has an opportunity to choose a new direction, and how that impacts the rest of the show. 

While this was a fine starting point, we also realized that only giving the audience an opportunity to explore one avenue of possibility in the house felt a little flat. To address that point we discussed a repeated journey through the house in a Groundhog Day-esque repeated style. Each run of the show will send the audience through the CYOA section three times, allowing them the opportunity to see the other dark corners of the house, and learn more about the strange inhabitants of the home. I did a little bit of map-making and mapped out all of the possible paths for our production; that is, all of the possible permutations of the three-legged journey through the house. The resulting map means that there are twelve different possible variations for the production. A challenge, to be sure.

Media and the House

So what’s media’s role in this production? The house is characterized by its Escher-patterned qualities. Impossible architecture and tricks of lighting and perspective create a place that is uncanny, patterned, but also somehow strangely captivating. Just when it seems like the house has shared all of its secrets there are little quantum blips and pulses that help us remember that things are somehow not right, until ultimately the house collapses.

Our host (who spends his/her time slipping between the slices of the various paths the audience tumbles through) is caught as a destabilized field of particles only sometimes coalesced. The culminating scene is set in a place beyond the normal, a world of quantum weirdness – small as the inside of an atom, and vast as the universe itself. It’s a world of particles and waves, a tumbling peek inside of the macro and micro realities of our world that are either too big or too small for us to understand on a daily basis.

Media’s role is to help make these worlds, and to help tell a story grounded in Poe’s original, but transformed by a madcap group of graduate students fighting their way out of their own quantum entanglement.