Category Archives: Design

TouchDesigner | Buttons for Interfaces

Building any system that requires an operator who isn’t the system designer often involves creating a control panel to operate the patch or network in question. A control panel can also be useful for controlling various elements of a system that might otherwise be too complicated to manipulate directly.

In TouchDesigner one of the handy built-in components that you might think about using is a button. Buttons have various states of operation that offer different types of control. Let's start this process by looking at some actual buttons in TouchDesigner so we can see how they work in practice.

Let's start by creating a new network, and deleting the template operators – if you're new to Touch you can draw a rectangular selection by clicking and dragging with the right mouse button. Select all of the operators and hit delete or backspace. You can also close your palette browser. You should now be left with an empty network.

To get started we’re going to double click the empty network or hit TAB on your keyboard. This brings up the Operator Dialog window. Make sure that you’re in the COMP (component) tab of the dialog window, and select a “Button” from the “Panels” column.

Place this button anywhere on your network. If we look at the parameters of this component we can start to get a sense of how it works, and how we can customize its operation for whatever you might be designing.

Looking at the parameter window we can see how this component is organized. This component has several tabs of parameters that all relate to different types of functions and methods for manipulating the button. Below is a quick summary of what each tab deals with:

  • The Layout page gives us the ability to manipulate how the button is placed and represented on the control panel itself.
  • The Panel page gives us some control over how the button behaves in the panel.
  • The Button page allows for manipulating the kind of button operation methods.
  • The Color page allows for some alteration in the color of the button.
  • The Drag page controls the drag and drop behavior of the button.
  • The Common page controls some of the higher-order mechanisms for this component – for now we don't need to worry about this page.

To learn more about the Button COMP make sure that you look at Derivative’s documentation here.

Before we move on, let’s open up our control panel so we can see what our button looks like, and where it’s situated. In the upper left hand corner of your network there’s a small square shaped button. Mousing over this button will reveal the help text “open viewer.” Clicking on this button will bring up the panel viewer for the container that you’re working inside of.

Clicking on your open viewer button should mean that you see something like this:

At this point you should be able to click on your button in the viewer. To help us understand what's happening with the button itself let's attach a Null to the CHOP out from the container. In your network add a Null Channel Operator (CHOP). Attach this to the CHOP out from the button (you can tell that it's a Channel Operator out because of its color). With the null attached, when we click on our button in the viewer we can see what kind of signal is being generated by this button component.

Before we take a closer look at button modes, let’s first see how we can position our button in the viewer. First select the button component to bring up the parameters window. Here on the Layout page we can see X and Y as variables that we can manipulate. These allow us to change the position of the button. The values for X and Y are in pixels. For now, I’m going to leave my button in the bottom left corner.

Next let's take a look at the Button page for this component. Two of the most commonly used states for buttons are Momentary and Toggle. Momentary transmits a signal as a pulse: clicking the button sends an on, releasing the button sends an off. A toggle enables a button state to be activated or deactivated. In this case, clicking the button toggles it to the on position, where it stays until you click the button again, sending an off. There are additional button types to choose from, but for now these two will help us work through some basic ideas about how you might use and change a button. I'm going to leave my button set as Toggle Down for now.
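The difference between the two modes can be sketched in plain Python. This is only an illustration of the behavior you'd see on the null we attached earlier, not TouchDesigner code; the class and method names are my own:

```python
class Button:
    """Minimal model of two common button modes."""
    def __init__(self, mode="toggle"):
        self.mode = mode   # "momentary" or "toggle"
        self.state = 0     # the value our Null CHOP would report

    def press(self):
        if self.mode == "momentary":
            self.state = 1           # on only while held
        else:
            self.state = 1 - self.state  # flip on each click

    def release(self):
        if self.mode == "momentary":
            self.state = 0           # off as soon as the mouse is released

momentary = Button("momentary")
momentary.press();   assert momentary.state == 1
momentary.release(); assert momentary.state == 0

toggle = Button("toggle")
toggle.press(); toggle.release()
assert toggle.state == 1  # stays on until the next click
```

Clicking the toggle a second time would flip it back to 0, which is exactly the on/off pattern you'll see in the null's channel viewer.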

Now that we have a basic understanding of how this component works, let's start customizing its appearance. To do this we need to dive inside of our Button. We can do this by zooming in with the scroll wheel until we dive into the component, or we can select the button and hit the "i" key on the keyboard (i in this case is a shortcut for "in").

Inside of our button we can see the Channel Operators, Texture Operator, and Data Operator that are driving this interface element. One of the most interesting elements to take note of is the panel1 CHOP. Here you can see that TouchDesigner tracks several different kinds of interaction for our button. We can trigger actions based on the state of our button (is it on or off), the rollover position of our mouse (is the mouse hovering over the button), and whether the button is being actively selected (the moment when the mouse button is being actively clicked).

Let’s start our customization of this button by first looking at the Text TOP (Texture Operator). If you click on the text TOP labeled “bg” we can see the parameters associated with this TOP. On the first page we see “Text.” I’m going to change the name of my button here to “Play.”

Next I want to adjust the color of this button. Before we do that, let's first draw our attention to the fact that our button changes its color depending on whether we're mousing over it, or have turned it on or off. A closer look at the Text TOP and its associated DAT and CHOP shows that something very specific is happening. An Expression CHOP is watching the button and reporting the state of our button as a variable called "i," which is then used by the DAT to dynamically change the color of our button. The table DAT defines what color values should be used for the button in the states: off, on, rollover, rollover on. If we want to keep the established convention of different colors for different states, we then need to edit the table DAT rather than the Text TOP.

Let's start by zooming into the table DAT. We can see that we're only using a single color value here for the dynamic content. This means that our options for changing the color of the button right now are limited to gray scale. Bummer. Let's make some changes to our table so we can have some color. To do this we're first going to make our DAT active by clicking on the small + sign in the bottom right corner of this operator.

With this DAT in active mode we can now edit the table. Color values for our text are generated from three different numbers – values for red, green, and blue. We're going to add two more columns and rename one column so we can use three different numbers to generate some different colors. With our DAT active, right click on the column labeled "background" and select add right. Repeat this step so that you have two empty columns to the right of the background column. Next let's edit the text in the "background" cell, changing it to "bg_r." In the columns next to this one add the names "bg_g" and "bg_b." This is going to break our button for a moment, and that's okay, don't panic. You should now have a table that looks like this:

For now copy the values from bg_r into the other columns. Now your table should look like this:
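To make the convention concrete, here's a minimal Python sketch of the state-to-color lookup that the table DAT and Expression CHOP implement together. The specific color values and the function name are hypothetical stand-ins, not TouchDesigner code:

```python
# Hypothetical stand-in for the button's table DAT: one row per button
# state, with the bg_r, bg_g, bg_b columns we just created.
COLOR_TABLE = {
    #  state            bg_r  bg_g  bg_b
    "off":         (0.25, 0.25, 0.25),
    "on":          (0.10, 0.60, 0.20),
    "rollover":    (0.35, 0.35, 0.35),
    "rollover_on": (0.15, 0.75, 0.30),
}

def button_color(on, rollover):
    """Mimic the lookup: pick a table row based on the panel state."""
    if on and rollover:
        return COLOR_TABLE["rollover_on"]
    if rollover:
        return COLOR_TABLE["rollover"]
    return COLOR_TABLE["on"] if on else COLOR_TABLE["off"]

assert button_color(on=True, rollover=False) == (0.10, 0.60, 0.20)
```

Because every state maps to its own RGB triple, editing the table (rather than the Text TOP directly) is what keeps the different-color-per-state convention intact.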

Next let's fix our Text TOP. Click on the Text TOP and then click on the Color page. On this page we can see that our values for the background color are all 0s and in red, telling us that something is broken. If we click on the text "Background Color" on this page we can see the expressions driving the dynamic changes for this TOP.

Next we're going to edit the text in these expressions to account for our new columns in the table DAT. We can see that there are three fields – bgcolorr, bgcolorg, and bgcolorb – that correspond to our bg_r, bg_g, and bg_b columns. We're going to change only the last reference in the expressions in these cells to reflect the changes in our table. What we're after here is to make sure that:

  • bgcolorr is looking at bg_r
  • bgcolorg is looking at bg_g
  • bgcolorb is looking at bg_b

We’ll do that by changing the term “background” in the expressions to match the columns that we’ve added.
Our edited expressions should look like the following (your button should be working again):

Now we need to change our table DAT in order to generate the colors that we want for our button. To help me determine the float values for the colors that I want, I'm going to add a Constant TOP to the network momentarily. With my constant added to the network, I'll click on the white rectangle to bring up the color selector. This window will also give me the float values that I need to edit my table.

After a little bit of exploring I’ve found some colors that will work as a place to start. My resulting table DAT looks like this:

We now have a Button that we’ve given a unique name to, and made changes to a table DAT so that we can see a custom set of colors on our button that dynamically change as we operate our control panel. Now that we’ve made one, try making some buttons on your own.

Sending and Receiving OSC Values with TouchDesigner

The other day I posted a look at how to send and receive TouchOSC data with Troikatronix's Isadora. In an effort to diversify my programming background I'm often curious about how to translate a given task from one programming environment to another. To this end, I was curious about repeating this same action in Derivative's TouchDesigner. If you're one of a growing number of artists interested in visual programming, VJing, media design, interactive system design, media installation, or media for live performance then it's well worth your time to look at both Isadora and TouchDesigner.

That said, I initially started thinking about control panel interfaces for TouchOSC when I saw Graham Thorne's instructional videos about how to transmit data from Isadora to TouchOSC (if you have a chance take a look at his posts here: Part 1 & Part 2). Last year I used TouchOSC accelerometer data to drive an installation at Bragg's Pie Factory in Downtown Phoenix, but I didn't do much with actual control panels. Receiving data from TouchOSC in TouchDesigner is a piece of cake, but sending it back out again didn't work the first time that I tried it. To be fair, I only spent about 3 minutes trying to solve this particular question, as it was a rather fleeting thought while working on another project.

Getting this working in Isadora, however, gave me a renewed sense of focus and determination, so with a little bit of experimenting I finally got this working the way that I wanted. While there are any number of things you might do with TouchDesigner and TouchOSC, I wanted to start with a simple deliverable: I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it's unlikely that I'd need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

For the most part this is pretty easy, but there are a few things to watch for in order to get the results that you want.

Here's the network stream of Channel Operators (CHOPs):

OSC In – Select – Math – Rename – OSC Out


Derivative Documentation on the OSC In CHOP 

The OSC In CHOP is pretty excellent out of the gate. We need to do a little configuring to get this up and running, but after that we're truly off to the races. First off it's important to make sure that your TouchOSC device is connected to your network and broadcasting to your TouchDesigner machine. If this process is new to you it's worth taking a little time to read about how to set this all up – start on this page if this is new to you. Once your TouchOSC device is talking to your TouchDesigner network we're ready to start making magic happen. One of the first things that I did was to activate (just adjust up or down) the two faders that I wanted to work with. This made sure that their IDs were sent to TouchDesigner, making it much easier for me to determine how I was going to rename them later.


Derivative Documentation on the Select CHOP

When you’re working with the OSC In CHOP, all of your TouchOSC data comes through a single pipe. One of the first things to do here is to use the select CHOP to single out the signal that you’re looking for (as a note, you could also export the Channel in question to a Null, or handle this a variety of other ways – in my case a Select CHOP was just the first method I tried). This operator allows you to pull out a single Channel from your OSC In.
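Conceptually the Select CHOP just filters one named channel out of the stream. A rough Python sketch of the idea, with hypothetical channel names and values (not TouchDesigner code):

```python
# Everything from TouchOSC arrives through one pipe as named channels.
# A made-up snapshot of what the OSC In CHOP might be holding:
osc_in = {"1/fader1": 0.72, "1/fader2": 0.10, "1/toggle1": 1.0}

def select(channels, name):
    """Mimic a Select CHOP: pull a single channel out of the stream."""
    return {name: channels[name]}

assert select(osc_in, "1/fader1") == {"1/fader1": 0.72}
```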



Derivative Documentation on the Math CHOP

Next I wanted to remap the values of my incoming signal to be inverted. A Math CHOP lets us quickly remap the range for our data by changing a few values in the Range Tab of that parameters window. You’ll notice that you first need to specify the from range (what’s the range of incoming values), and then specify the to range (what’s the range of outgoing values). In my case, the from values are 0 and 1, and the to values are 1 and 0.
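The remap the Math CHOP's Range page performs is a simple linear map, which can be sketched in plain Python (the function name is mine, not a TouchDesigner call):

```python
def remap(value, from_lo, from_hi, to_lo, to_hi):
    """Map a value from one range onto another, like the Math CHOP's Range page."""
    normalized = (value - from_lo) / (from_hi - from_lo)
    return to_lo + normalized * (to_hi - to_lo)

# From range 0-1, to range 1-0: the fader signal is inverted.
assert remap(0.0, 0, 1, 1, 0) == 1.0
assert remap(0.25, 0, 1, 1, 0) == 0.75
assert remap(1.0, 0, 1, 1, 0) == 0.0
```

Swapping the to-range endpoints is all it takes to flip the signal, which is exactly the inverse slider relationship we're after.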



Derivative Documentation on the Rename CHOP

The Rename CHOP is one of the most important steps in your whole network. When communicating with TouchOSC you need to know exactly what object you're trying to drive with your data. To accomplish this you need to know the UDP address of your device, the port number that you're broadcasting to, and the name of the slider or button that you want to change. While we can specify the UDP address and port number with another CHOP, we need to use the Rename CHOP (a quick aside – there are, of course, other ways to do this – this just happens to be the method that I got working after a bit of trial and error) to specify the name of the object that we're driving. In my case I wanted fader 1 to drive fader 2. You'll notice that the formatting of the fader name is important. In this particular OSC layout there are multiple tabs, as evidenced by the "1/" preceding the name of our fader. In changing the name of this input we need to enter the precise name of the fader that we want to change – this is why we activated both faders initially; it gave us the names of both objects in the TouchOSC panel. You'll notice that I changed the name of my fader to "1/fader2".
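The rename step leaves the value untouched and only re-addresses it, as this small Python sketch illustrates (hypothetical names and values, not TouchDesigner code):

```python
def rename(channels, old, new):
    """Mimic the Rename CHOP: same values, new channel name."""
    return {new if name == old else name: value
            for name, value in channels.items()}

# Fader 1's position, re-addressed so TouchOSC applies it to fader 2.
out = rename({"1/fader1": 0.72}, "1/fader1", "1/fader2")
assert out == {"1/fader2": 0.72}
```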



Derivative Documentation on the OSC Out CHOP 

Now that we've processed and named our signal we're ready to send it back out to TouchOSC. In this CHOP we need to input the UDP address of the TouchOSC device (you can find this in the info tab of the TouchOSC app) as well as the port that we're sending to. We also want to make sure that we're sending this as a sample, and finally we want to turn off "send events every cook." This last step is important because it ensures that we only send values when they change. If we sent messages on every cook we wouldn't be able to see the inverse relationship between our two sliders – or rather, we'd still see the relationship so long as we didn't also set up slider 2 to drive slider 1. In order to create an inverse relationship with both sliders we only want to transmit data as it changes (otherwise we'd find ourselves fighting with the constantly streamed position data from the slider that isn't being touched).
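The effect of turning off "send events every cook" can be sketched like this (a plain Python illustration of the behavior; the class name and the list standing in for the network send are my own):

```python
class OSCOutSketch:
    """Illustrates 'send events every cook' turned OFF: a value is
    only transmitted when it actually changes between cooks."""
    def __init__(self):
        self.last = None
        self.sent = []          # stands in for the network send

    def cook(self, value):
        if value != self.last:  # skip redundant frames
            self.sent.append(value)
            self.last = value

out = OSCOutSketch()
for v in [0.5, 0.5, 0.5, 0.6, 0.6, 0.7]:
    out.cook(v)
assert out.sent == [0.5, 0.6, 0.7]   # three sends, not six
```

With redundant frames suppressed, the idle fader stops streaming its position, so the two faders can drive each other without fighting.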


The last step in this process is to copy the whole string and set up fader 2 to transmit to fader 1; this allows both faders to drive one another in turn.

There you have it. You’ve now successfully configured your TouchDesigner network to receive and send data to a mobile device running TouchOSC.

Cue Building for Non-Linear Productions

The newly devised piece that I've been working on here at ASU finally opened this last weekend. Named "The Fall of the House of Escher," the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of MC Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements that's largely invisible to the audience is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of story telling that drew its inspiration from the Choose Your Own Adventure books that were originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of story telling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times, and on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will be unseen.

This presents a tremendous challenge to any designers dealing with traditionally linear story telling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting – to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways in a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating then was a measure of moving from room to room. This ideological approach was made easier with the addition of a convention for the "choice" moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – in that establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear asset-based playback system like Dataton's Watchout was largely out of the picture. It was for this reason that I settled on using Troikatronix's Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods; methods that would prove to be essential in this process.

Fall_of_the_House_of_Escher_SHOW_DEMO.izz

In building this navigation method it was first important to establish the locations in the house, and create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.

Fall_of_the_House_of_Escher_SHOW_DEMO.izz 2

The real lesson to come out of non-linear story telling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, this challenge was compounded by the nature of our approach: a single control panel approach would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to complex questions of how to solve a problem like non-linear story telling.

Book Keeping

Not all of the work of media design is sexy. In fact, for all of the excitement generated by flash of stunning video, interactive installations, and large scale projection there is a tremendous amount of planning and paper work involved in the process. In the case of a theatrical project, one of the designer’s crucial pieces of work comes in the form of creating a cue sheet. For the uninitiated, a cue sheet is a list of all of the moments in a show that contain media. These documents help the designer, stage manager, and director establish a common understanding about how media / lights / sound are going to be used in a given production.

Drafting a useful cue sheet is often a matter of preference, but it warrants mentioning some things that can help the media designer get organized as she / he wrestles with a growing list of content to generate. While one could certainly use a word-processing program to generate a cue sheet, I prefer working in a spreadsheet. Excel or Google Spreadsheets are a fine medium for writing cues, and have features that can be extremely helpful.

Cue Sheet Must-Haves

In my opinion there are a few must-have columns in organizing your cue sheet:

The Cue – Whatever you’ve settled on using, numbers or letters or some combination of the two, you need a column that puts the cue name / number in plain sight.

The Page Number – If you’re working from a traditional script, keep track of the page number. At some point you’ll be struggling to find the place in the script where a particular cue is getting called, and knowing the page number can ensure that you stay organized in the planning and technical rehearsal process.

Duration – How long does the cue last? In talking with the director it’s important to have a shared understanding of what’s happening in a given moment in a production. Specifying how long a particular effect or video is going to last can help provide some clarity for the designer as she/he is editing, animating, or programming.

From the Script – What’s in the source material that’s driving you to design a particular look? Did a character say something in particular? Is there any stage direction that’s inspiring your choice? Having a quick reference to what’s inspiring you can be a godsend while you’re crafting the content for a production.

Notes – For all the times you say "I'll remember," you will invariably forget something. Write it down. Did the director say something inspiring? Write it down. Did the lighting designer mention something about the amount of ambient light during a particular moment? Write it down. Write down what you're thinking, or brainstorming. You're never obligated to keep anything, but having a record of what you've been thinking gives you something to start from when you sit down to edit / animate / program.

Shooting Notes – If you’re going to need to record video for a production, make note of what particulars you need to keep in mind at the shoot. Do you need a green screen? A particular lighting effect? A particular costume? Keeping track of what you need for a particular moment is going to make the filming process that much easier.

Checklists. At the end of my cue sheet I keep a checklist for each cue. Three columns that help me keep track of what work I’ve done, and what work is left to be done.

Filmed / Animated – Is this cue filmed or animated?
Edited – Is this footage cut and prepped for the playback system?
Installed – Is this footage installed in the playback system?

Working with a Spreadsheet

Simple Formulas

One of the truly magical parts of working with a spreadsheet is one's ability to use simple formulas to automate a workflow. A simple example comes out of the show that I'm currently working on, The Fall of the House of Escher. A new work that's come out of the collaborative process of the 2014 ASU Graduate Acting / Directing / Design cohort, this show is built around a structural focus on giving the audience as much agency as possible. Central to this endeavor is a choose-your-own-adventure model for the production. Given this structure, our production has sections that are distinguished from one another by an alpha-numeric prefix – 1A, 2A, 3A, et cetera. Additionally, I want to pair the section code with the number of a particular cue. This, for example, might look like 3A-34 (section 3A, cue 34). This combination of letters and numbers could make renumbering cues, should one change, a difficult process. To simplify this particular problem I can use a simple formula for combining the contents of columns.


First I start by creating separate columns for the section of the show, and the cue number. Next I created a third column intended to be a combination of these other two columns. Here I insert the following formula: =A24&"-"&B24

Here Google Spreadsheets (or Excel, for that matter) will read this formula as: display the contents of cell A24, insert a "-" symbol, and then display the contents of cell B24. This may not seem like a big deal until you consider the time saved when a cue is added or removed, forcing a change in the numbering convention of all the following cues.
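The same idea translated into Python makes the payoff obvious: labels are derived, so renumbering is one recalculation rather than dozens of hand edits. The cue data here is made up for illustration:

```python
# Same idea as =A24&"-"&B24: cue labels derived from (section, number).
cues = [("1A", 1), ("1A", 2), ("3A", 34)]
labels = [f"{section}-{number}" for section, number in cues]
assert labels == ["1A-1", "1A-2", "3A-34"]

# Insert a cue mid-list and the labels simply recompute.
cues.insert(1, ("1A", 2))
cues[2] = ("1A", 3)
assert [f"{s}-{n}" for s, n in cues] == ["1A-1", "1A-2", "1A-3", "3A-34"]
```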

Conditional Formatting

In Google Spreadsheets, conditional formatting largely comes in varieties that change the background of a particular cell; Excel has a much wider range of possibilities for this kind of implementation. For me, however, the simplicity of automatic color coding is tremendously helpful. For example, let's consider the final three checklist categories that I talked about earlier. Let's say that I want every cell that contains the word "No" to be color coded red. In order to achieve this look, first I'd highlight the cells that I want the formula to apply to.

Next I’d click on the background formatting button in the toolbar and select “Conditional Formatting” from the bottom of the list.

House_of_Escher_Media_Cue_Sheet 2

Finally I'd write my simple formula to control the change of color for the cells selected.

House_of_Escher_Media_Cue_Sheet 3

Multiple Tabs

Last but not least, maintaining multiple tabs on a worksheet saves time and organizational energy. Additionally this allows you to cross reference notes, cells, and thoughts in your workbook. You might, for example, maintain a working cue sheet where you can brainstorm some ideas and be a little less tidy. You can then use simple reference formulas to pull the relevant data to your final cue sheet that you give to the Stage Manager. Should you have the good fortune of having an assistant, you might make a separate page in your workbook to outline her/his responsibilities on a project.

A cleanly organized cue-sheet is far from exciting, but it does ensure that you stay focused and productive as you work.

Live Camera as Mask | TouchDesigner

Back in May I wrote a quick piece about how to use a camera as a mask in Isadora. This is a very powerful technique for working with live cameras, and no matter how many times I see it I'm still struck by how fun it is. This isn't difficult to program, and one of the questions I wanted to answer this summer was how to create a network in TouchDesigner that could accomplish the same process. Before we get started it's going to be important to make sure that you have a camera (a web-cam is fine) connected to your computer. While you can create this same effect using any recorded footage, it's certainly more compelling and playful when you're working with a live camera. You'll also want to make sure that you know how the Feedback TOP works in TouchDesigner. If you're new to this programming environment you might take a moment to read through how to work with the Feedback TOP here.

Getting Started

We’ll start by creating a new project, and by selecting all of the standard template operators and deleting them. We can stay inside of the default container that TouchDesigner starts us off with as we create our network. Let’s start by taking a look at what we’re going to do, and the operators that are going to be involved.

Unlike other tutorials, this time we're going to work almost exclusively with Texture Operators (TOPs). The effect we're looking to create is to use the feed from a live camera as a mask to hide, or reveal, another layer that could be either a video or photo (in our case we'll work with a photo today, though it's the same process and the same TOP when working with video). To do this we first need to remove a portion of our video stream; we'll create a little bit of motion blur with the Feedback TOP, then composite this mask with our background layer, and finish by creating a final composite with a black background layer.

Some like it Hot

Without a Kinect we’re really just approximating separation. Luckily, if you can control your light there are some ways to work around this. Essentially what we’re looking to create is an image where the hottest (or brightest) portion is the subject we want to separate from the background. In another post I’ll talk about some more complicated methods; for now let’s look at what we can do with just a camera and some light.

We’ll start by creating a string of TOPs like this:

Movie In – Monochrome – Threshold – Feedback – Blur – Level – Composite

You’ll want to make sure that you’ve connected the Threshold to the second inlet on the Composite TOP, and assigned the Target TOP of the Feedback operator as the Composite operator in this first portion of the string.

Remember to adjust the opacity on the Level operator to a value between 0.8 and 0.95. You will also need to spend some time adjusting the parameters of the Blur operator to fine-tune the aesthetic that you’re after.


The Threshold TOP is going to be your best friend in this string. This operator is going to help you control how much background you throw away. By adjusting the threshold value you’re controlling which pixel values get passed as white and which values are thrown away and converted to alpha. This means that as long as you can keep light on your subject, and mostly off of your background, you’ll be able to isolate the subject from unwanted background pixels. This will take some adjusting, but it’s well worth your time and attention. If you need a little more fine-grained control here, you can insert a Level TOP to adjust your image before it gets to the Threshold TOP.
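The per-pixel logic of the Threshold TOP can be sketched in plain Python for illustration. The function name and the 0–1 luminance range are assumptions for this sketch; TouchDesigner performs this work on the GPU.

```python
def threshold_pixel(luminance, threshold=0.5):
    """Pass bright pixels as white; discard dark ones as transparent.

    Returns an (r, g, b, a) tuple: white and opaque when the pixel's
    luminance meets the threshold, fully transparent otherwise.
    """
    if luminance >= threshold:
        return (1.0, 1.0, 1.0, 1.0)  # kept as white
    return (0.0, 0.0, 0.0, 0.0)      # converted to alpha (thrown away)

# A well-lit subject (high luminance) survives; a dim background does not.
print(threshold_pixel(0.8))  # subject pixel -> (1.0, 1.0, 1.0, 1.0)
print(threshold_pixel(0.2))  # background pixel -> (0.0, 0.0, 0.0, 0.0)
```

Raising or lowering `threshold` here mirrors dragging the threshold value in the TOP’s parameters: it decides how much of the dimly lit background is discarded.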

Composite Composite Composite

The final component of this network is to composite all of our images. First we’ll need to add a Movie In TOP as well as a Constant for our backgrounds. Next we need to add two more Composite TOPs and finally a Null. It might be useful to think of this as a string of three Composite TOPs ending in a Null, with some additional operators at each stage. First, the composite of our live camera and feedback string is going to be combined with our Movie In TOP. In the Composite TOP’s parameters make sure that the Operand method is Multiply. This replaces the white values from our previous string with the pixel values from the Movie In TOP. Next we’re going to composite this string with a Constant. In this case I’m using a constant black background. Depending on the venue or needs of a production you might well choose another color; you can do this by adjusting the parameters of the Constant TOP. Finally we’ll end the whole string with a Null.
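Why Multiply works as the operand can be seen with a little per-channel arithmetic. This is an illustrative Python stand-in, not TouchDesigner code; the function name is made up for the sketch.

```python
def composite_multiply(mask_px, image_px):
    """Multiply operand, one pixel at a time: each channel of the mask
    scales the corresponding channel of the image."""
    # Where the mask is white (1.0) the image shows through unchanged;
    # where the mask is black/transparent (0.0) the result stays black.
    return tuple(m * i for m, i in zip(mask_px, image_px))

white_mask = (1.0, 1.0, 1.0, 1.0)
black_mask = (0.0, 0.0, 0.0, 0.0)
image = (0.2, 0.5, 0.9, 1.0)

print(composite_multiply(white_mask, image))  # image pixel passes through
print(composite_multiply(black_mask, image))  # pixel is masked out
```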

We’ve now used a live feed as a mask to reveal another video or image. Next you might think about where in these strings you could add other operators to achieve different effects or moods. Happy programming.

Visualizing OSC Data | TouchDesigner

After looking at how to work with accelerometer data in Isadora in an earlier post, I thought it might also be worth looking at how to approach the same challenge working with Derivative’s TouchDesigner. In the Spring of 2013, for an installation piece, I used TouchDesigner to create a sculpture with a reactive projection component. While I learned a lot about working with TouchOSC in the process, I didn’t spend much time really digging into understanding what kind of data I was getting out of my sensor – in this case I’m using an iPod Touch running TouchOSC, broadcasting data over a wireless network. This type of sensor is one that I hope to use in future live performances, and so spending some time really digging into what the sensor detects is an area of interest for me.

TouchOSC to TouchDesigner

To start we need to first capture the OSC data with TouchDesigner. I’m using TouchOSC for this process. With TouchOSC installed on your mobile device you first need to make sure that you’ve identified your IP address, and chosen a port number that corresponds to the port number you’ve selected in TouchDesigner. Next we need to ensure that TouchOSC is broadcasting the accelerometer data. You can do this by first tapping on the “Settings” icon in the upper corner of the TouchOSC panel. From here select “Options.” Under Options we need to make sure that “Accelerometer (/xyz)” is enabled. Back in TouchDesigner we’ll need to do a little work to get started. You’ll need to create a new network, and toss out all of the template OPs so we can start fresh. You can choose to create a new container or to use the one that’s present in the template file. Once we’re in this first container we’ll start by creating an OSC In Channel Operator (CHOP). I’ve set my network port to 5001 (the same as in TouchOSC). Once your port numbers are set you should see three bars in the CHOP reporting out the data from TouchOSC. We can see with a quick glance here that we have three channels of information: accxyz1, accxyz2, and accxyz3. Once we dive into our network we’ll come back to this in order to control our visualization.

The Big Picture

First let’s take a look at the full patch and talk through what our next steps are going to be. In order to have a fuller sense of what the accelerometer data is doing, we’re going to create a simple visualization. We’ll do this by creating a small sphere for each data stream, and map their change to the vertical axis, while mapping a constant change to the horizontal axis. We’ll then set up a simple feedback effect so we can see a rough graph of this data over time that will automatically be cleared each time our visualization wraps around the screen. This approach should allow us to see how the three data channels from the accelerometer correlate to one another, as well as giving us a chance to consider how we might work with this kind of data flow inside of a TouchDesigner network. Start to finish we’ll work with all of the different classes of operators (Components, Textures, Channels, Surfaces, Materials, and Data), as well as look at connecting operators and exporting parameters.

Getting our OSC Data Ready

While the process of programming often bounces back and forth between modules, I’m going to start us out by looking at how I’ve parsed the OSC data so that it can be useful for us as we transition to other operators in our network. The flow of operators looks like this:

OSC In – Select – Lag – Math – Null

The same OSC In is passed to two copies of the “select – lag – math – null” string. While there are plenty of other ways to accomplish this same flow of data, this will allow us to make changes to the mapping of data coming out of individual channels (if we happen to need that option). We start with the OSC In, and then pass this value to a Select CHOP. The Select CHOP helps us to pull a single stream of data out of our OSC In CHOP. In adding the Select CHOP make sure that in your parameters window (short-cut key “p”) you’ve selected the appropriate channel. You’ll notice that in the example below, I’ve selected “accxyz1” as the first select operator. Next I’ve added a Lag CHOP. This operator will allow us to control the rate at which the data is being processed from our sensor. In essence, this allows us to smooth out the noise from the accelerometer, in effect making it a little less sensitive. Your mileage may vary here, but for my current configuration I’ve set the Lag values to 0.5 and 0.5. After you get your system up and running you may well want to return to these parameters, but for the time being this is a fine place to start. The Math CHOP allows us to remap the incoming values to a different range. This is another place that will require some individual adjustments once you get your whole network set up. To get started we can begin by setting the From Range to -2 and 2. The To Range should be set to -1.5 and 1.5. While you’re not likely to see values of -2 or 2 coming from your accelerometer when you get started, this will make sure that there isn’t any data that’s outside of our measurement range. We’ll end our string with a Null. It’s considered good TouchDesigner programming practice to always end your strings with a Null. This ensures that if you want to change your string, add operators, or in any way alter your work, you don’t need to re-export any data.
So long as you have ended your work with a Null, you can always make changes upstream of this object. While it feels cumbersome when you’re first starting in TouchDesigner, it’s hands down a practice that will save you time and headaches as you make increasingly complicated networks. Last, but not least, don’t forget to add a Text DAT to the front end of your string. This operator doesn’t do anything in terms of function, but it does allow you a space to write comments to yourself (or whoever is going to be working with your network). Making a few notes about how your network is working, and your underlying thoughts in setting up your chain of operators will help refresh your memory when you come back to a portion of your network after focusing on other areas.
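The Lag and Math steps above can be sketched in plain Python. Note the assumptions: `lag` here is a simple exponential-smoothing stand-in (TouchDesigner’s actual Lag CHOP filter differs), and `remap` is the linear range conversion the Math CHOP performs with the values given above.

```python
def lag(previous, current, lag_time=0.5):
    """Smooth incoming sensor values; a larger lag_time reacts more slowly.
    An illustrative stand-in for the Lag CHOP, not its exact filter."""
    alpha = 1.0 / (1.0 + lag_time)
    return previous + alpha * (current - previous)

def remap(value, from_lo=-2.0, from_hi=2.0, to_lo=-1.5, to_hi=1.5):
    """Math CHOP-style range remap: From Range -2..2, To Range -1.5..1.5."""
    t = (value - from_lo) / (from_hi - from_lo)
    return to_lo + t * (to_hi - to_lo)

# A sudden jump from 0 to 1 is softened rather than passed straight through.
smoothed = lag(0.0, 1.0)

print(remap(2.0))   # 1.5: the top of the measurement range maps to the top of the target range
print(remap(0.0))   # 0.0: the center of the range is preserved
```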

Working with Surface Operators

Anytime we’re drawing some geometry in TouchDesigner we need to use Surface Operators (SOPs). While we could do this same process with an image and by only using Texture Operators (TOPs), that means that we’re relying on something other than TouchDesigner to create our artwork. For this particular exercise, I think it’s worth thinking about how we might use TouchDesigner exclusively to create the visualization we’re going to see. Let’s take a look at the whole SOP network before we dive into how to put this all together.

As a general practice we can see out of the gate that we’re going to use a single sphere and then create three copies of that single geometry, which we’ll then change by exporting our TouchOSC CHOPs to change the behavior of our spheres. We’ll end the whole process by passing our SOPs into a Geometry component and applying a Phong shader.

Let’s start by first creating a single Sphere SOP. Next we’ll connect that to a Transform SOP. Next we’ll pass our sphere into a Geo Component. To do this, we’ll need to make a few changes to the standard Geo Component. Start by creating a new Geo Component. Double click on the Geo. Once inside, delete the placeholder Torus and add an In SOP. Make sure that you turn on both the blue display flag and the purple render flag on the In SOP. By replacing the Torus with an In we’ll now be able to pass SOP strings into the Geo Component from outside. Exit the Geo by hitting the “u” key (u for up), or by using the mouse wheel to zoom out of the component. You should now notice that there’s an inlet on the left side of our Geo Component. You can now connect the Transform SOP to the Geo, and you should see a picture of the sphere in the window.

Next create a Phong Material (MAT). Our Phong is going to allow us to add some color to our Geo. Drag the Phong on top of the Geo – you should see an arrow with a plus sign appear to tell you that you’re applying this Phong shader to this Geo Component. In the options for the Phong you can now select any color you’d like. With one whole string completed, you can select the string from Transform through Phong, then copy and paste the string two more times for a total of three spheres. Make sure that you connect the Sphere SOP to the two additional strings, and you should be in business.

Next we’re going to export the CHOPs from our accelerometer channels to change the location of our spheres. To make this tutorial readable, I’m going to forgo detailing how to export CHOPs. If this isn’t something you’ve done before, you can look for more information about exporting CHOPs in TouchDesigner by first looking at Derivative’s official documentation here.

Export the Null CHOPs for accxyz1-3 to the Transform SOPs of the three respective Spheres. Make sure that you export the Nulls to the Y position field of the Transform SOPs. All three spheres should now be moving up and down based upon the change in accelerometer data for each channel.

We’re also going to want to be able to change the size of our spheres based upon the overall look of our visualization. To do this we’re going to set up a simple string of CHOPs that we’ll then export to all three Transform SOPs. Start by creating a Constant CHOP and connect it to a Null CHOP. The Constant outputs a constant number in a channel and will allow us to change the dimensions of our spheres collectively. Next export the Null from this string to all three size attributes (x, y, z) of all three Transform SOPs. In the parameters of the Constant change the value of the channel to 0.025.

Finally, don’t forget to add a Text DAT. Again, this is a great place to add some notes to yourself, or notes for your operator about what you’ve done and what’s important to consider when coming back to work with this string later.

Changes over Time

Moving up and down is only a part of this challenge. While what we’ve made so far is interesting, it still lacks a few key elements. For starters, we want to be able to see this change over time. In our case that means seeing the spheres move from left to right across the screen. Here’s a look at the string of operators that’s going to help us make that happen:

Wave – Math – Null 
Wave – Math – Trigger – Null

We’re going to use a single Wave CHOP to drive the lateral movement of our spheres, and to act as the trigger to erase our video feedback that we’ll make once we’ve set our geometry to render.

The first thing we need to do is to create a Wave CHOP. This operator will create waves at regular intervals, which we can control with the attributes of this operator. First we want to change the type of wave to Ramp. A ramp will start at 0 and increase to a specified value before returning to 0. Next we’ll set the Period and the Amplitude to 30.

Next we need to look at the Channel tab of this operator’s parameters. The values that we’re changing here control the time it takes for the wave to cycle. You may ultimately find that you’d like this to happen faster or slower, but we can use these values at least as a place to start. We need to set the End and REnd values to 1800 (60 frames per second * 30 seconds = 1800 frames). This will ensure that we have enough time for our animation to actually wrap from right to left.
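The frame arithmetic and the shape of the ramp can be sketched out in plain Python. This assumes TouchDesigner’s default timeline rate of 60 fps; the `ramp` function is an illustrative stand-in for the Wave CHOP’s Ramp type.

```python
# At 60 fps, a 30-second cycle spans 1800 frames: hence End/REnd = 1800.
FPS = 60
period_seconds = 30
end_frame = FPS * period_seconds
print(end_frame)  # 1800

def ramp(frame, period=end_frame, amplitude=30):
    """Ramp wave: climbs linearly from 0 to amplitude over one period,
    then wraps back to 0 (like the Wave CHOP's Ramp type)."""
    return (frame % period) / period * amplitude

print(ramp(0))    # 0.0 at the start of the cycle
print(ramp(900))  # 15.0 halfway through the 1800-frame period
```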

The first Math CHOP that we’re going to change is on the top string. This operation is going to scale our wave values to make sure that we’re starting just off the screen on the left, and traveling to just off the screen on the right. In the From Range insert the values 0 and 30. In the To Range change the values to -2.1 and 2.1. Connect this to the Null. Next export the Null CHOP to the X position field of all three Transform SOPs.

One of the things we can do to plan ahead, is to build a trigger based on this same wave that will erase our video feedback that’s going to be generated in the rendering process. Create a new Math CHOP and connect it to the same Wave CHOP. In the From Range change the values to 0 and 30, and in the To Range change the values to 0 and 1.

Next we’re going to add a Trigger CHOP to our second string. A trigger can be used for any number of purposes, and in our case it’s going to help ensure that we have a clean slate for each time our spheres wrap from the right side of the screen back to the left. With the Math CHOP connected to the trigger, the only change we should need to make is to ensure that the Trigger Threshold is set to 0. Connect the trigger to a Null. We’ll come back to this CHOP once we’re building the rendering system for our visualization.


Rendering a 3D object is fairly straightforward provided that you keep a few things in mind. The process of rendering an object requires at least three components: at least one Geometry (the thing being rendered), a Camera (the perspective from which it’s being rendered), and a Light (how the object is being illuminated). We’ve already got three Geo components, now we just need to add a light and a camera. Next we’ll add a Render TOP, and we should see our three spheres being drawn in the Render TOP window. If you want to know a little more about how this all works, take a moment to read through the process of rendering in a little more detail. Let’s take a look at our network to get a sense of what’s going to happen in the rendering process.

There are a couple of things happening here, but the long and short of it is that we’re going to create a little bit of video feedback, blur our image, add a black background, and create a composite out of all of it.

We’re going to start by making these two strings:

Render – Feedback – Level – Composite – Blur
Render – Level – Over – Over – Out

Here’s a closer look at the left side of the strings of operators that are rendering our network (as a note, our Render TOP is off the screen to the left):

Once our string is set up, make sure that your Feedback is set to target Comp1. Essentially what we’re creating here is a layer of rendered video that is the persistent image of the spheres passing from left to right. Additionally, at this point we want to export the Trigger CHOP to the Bypass parameter of the Feedback TOP. When the bypass value is greater than 0 it turns off the feedback effect. This means that when our trigger goes off the screen will clear, and then the bypass value will be set back to 0.
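The bypass behavior described above can be sketched in plain Python. Scalars stand in for whole frames here, and the function name and trail logic are illustrative assumptions, not TouchDesigner’s actual feedback math.

```python
def feedback_output(current_frame, previous_trail, bypass):
    """Return this frame's fed-back value.

    When bypass > 0 the feedback is off and only the fresh frame passes
    through (a clean slate); at 0 the brighter of the new frame and the
    accumulated trail persists, approximating a feedback trail.
    """
    if bypass > 0:
        return current_frame
    return max(current_frame, previous_trail)

# While the trigger sits at 0, the old trail (0.9) persists behind the
# new frame (0.3); the moment the trigger fires, the trail is cleared.
print(feedback_output(0.3, 0.9, bypass=0.0))  # 0.9: trail kept
print(feedback_output(0.3, 0.9, bypass=1.0))  # 0.3: trail cleared
```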

Now let’s look at the right side of our rendering network:

Here we can see our Over TOPs acting as composite agents. The final Over TOP is combined with a Constant TOP acting as a black background. Finally all of this is passed to an Out TOP so it can be passed out of our container.

That’s it! Now we’ve built a simple (ha!) visualization for the three channels of data coming out of an iPod Touch accelerometer passed over a wireless network. The next steps here are to start to play. What relationships are interesting, or not interesting? How might this be used in a more compelling or interesting way? With a programming environment like TouchDesigner the sky is really the limit, it’s just a matter of stretching your wings.

Mapping and Melting

This semester I have two courses with conceptual frameworks that overlap. I’m taking a Media Installations course, and a course called New Systems Sculpture. Media Installations is an exploration of a number of different mediated environments, specifically art installations. This course deals with the issues of building playback systems that may be interactive, or may be self-contained. New Systems is taught through the art department, and is focused on sculpture and how video plays as an element in the creation of sculptural environments.

This week in the Media Installations course each person was charged with mapping some environment or object with video. While the Wikipedia entry on the subject is woefully lacking, the concept is becoming increasingly mainstream. The idea is to use a projector to paint surfaces with light in a very precise way. In the case of this course, the challenge was to cover several objects with simultaneous video.

I spent some time thinking about what I wanted to map, and what I wanted to project, and was especially interested in using a zoetrope kind of effect in the process. I kept thinking back to late nights playing with Yooouuutuuube, and I wanted to create something that was loosely based on that kind of idea. To get started I found a video that I thought might be especially visually interesting for this process. My friend and colleague Mike Caulfield has a tremendously inspiring band called The Russian Apartments. The music he writes is both haunting and inspiring, an electronica ballad and call to action. It’s beautiful, and whenever I listen to a track it stays with me throughout the day. Their videos are equally interesting, and Gods, a video from 2011, kept coming back to me.

To start I took the video and created a composition in After Effects with 16 versions of the video playing simultaneously. I then offset each video by 10 frames from the previous. The effect is a kind of video lag that elongates time for the observer, creating strange transitions in both space and color. I then took this source video and mapped it to individual surfaces, creating a mapped set of objects all playing the same video with the delay. You can see the effect in the video below:

Meanwhile in my sculpture course we were asked to create a video of process art: continuous, no cuts, no edits. I spent a lot of time thinking about what to record, and could not escape some primal urge to record the destruction of some object. In that vein I thought it would be interesting to use a slow drip of acetone on styrofoam. Further, I wanted to light this with a projector. I decided to approach this by using the mapping project that I had already created, and instead to frame the observation of this action from a closer vantage point. Interestingly, Gods has several scenes where textures crumble and melt in a similar way to the acetone effect on the styrofoam. It wasn’t until after I was done shooting that I saw this similarity. You can see the process video below:

Tools Used:

Mapping – MadMapper
Media Generation – Adobe After Effects
Projector – InFocus IN2116 DLP Projector