
TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to play back pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt a little clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

[Image: the example network]

First things first let’s take a quick inventory of the operators that we’re using:

Button Comp – this acts as the trigger for our animation.
Animation Comp – this component holds four channels of information that will drive our torus.
Trail CHOP – I’m using this to have a better sense of what’s happening in the Animation COMP.
Geometry Comp – this is holding our 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation COMP. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is the wiki. We can also dive into the Animation COMP and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right clicking on the Animation COMP and selecting “Edit Animation…” from the pop-up menu.

[Image: opening the animation editor]

We should now see a new window at the bottom of the screen that looks like a time-line.

[Image: the animation editor time line]

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

[Image: adding the noise channel]

Next I want to add three channels for some object translation. This time I’m going to type “trans[xyz]” into the Names field.

[Image: adding the trans[xyz] channels]

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

[Image: four channels in the animation editor]

Let’s key frame some animation to get started; if we want to change things we can come back to the editor later. First, click on one of your channels so that it’s highlighted. Now, along the time line, hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can translate them up or down in the animation editor, and change the attack of their slope as well as their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

[Image: key framed channels]

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 on the right (if you haven’t changed the duration of your network). In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

[Image: time-code settings]

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10 second window (600 frames ÷ 60 frames per second). Coming from any kind of animation workflow, the idea of a frame-based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.
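If you’d rather not do that math by hand, you can also ask the network itself. Here’s a quick sketch you can run in the textport, using the Time class attributes as they’re documented on the wiki (verify the names in your build):

t = root.time
# 600 frames at 60 FPS works out to a 10 second window
print((t.end - t.start + 1) / t.rate)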

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

[Image: Trail CHOP connected to the Animation COMP]

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

[Image: Animation COMP parameters]

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

[Image: the triggered animation in the Trail CHOP]

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button COMP to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last, but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation COMP click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op('button1/out1')['v1']

[Image: the cue expression]

Now when you click on your button you should trigger your animation.
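As a side note, if you’d prefer a callback over a parameter expression, a CHOP Execute DAT watching the button’s output can do the same job. This is a minimal sketch; I’m assuming the Animation COMP’s Cue parameter is scripted as cue, so double check the parameter name in your build:

def onOffToOn(channel, sampleIndex, val, prev):
    # each time the button flips on, pulse the Animation COMP's cue point
    op('animation1').par.cue.pulse()
    return

Point the CHOP Execute DAT at button1/out1 and the button should behave just like the expression above.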

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next let’s jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

[Image: the geometry network]

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the Transform and the Noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flags on the torus1 SOP, and turn them on for the noise1 SOP.

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the Transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op('../animation1/out')['transz']

This is telling the Transform SOP to go up to its parent, look for the operator named “animation1,” and pull the channel named “transz.” Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. At 60 FPS that works out to rotations of 6 and 18 degrees per second. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

[Image: the Transform SOP parameters]

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op('../animation1/out')['noise']

When you’re done your noise SOP should look something like this:

[Image: the Noise SOP parameters]

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation run in both the Trail CHOP and our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation, all we need to do is go back to editing the Animation COMP and adjust our key frames.

[Image: the triggered geometry animation]

There you go: now you’ve built an animation sequence that’s rendered in real time and triggered by hitting a button.

Custom Quartz Compositions in Isadora

The What and Why

The more I work with Isadora, the more I feel like there isn’t anything it can’t do. As a programming environment for live performance it’s a fast way to build, create, and modify visual environments. One of the most interesting avenues for exploration in this regard is working with Quartz Composer. Quartz is a part of Apple’s integrated graphics technologies for developers and is built to render both 2D and 3D content using the system’s GPU. This, for the most part, means that Quartz is fast. On top of being fast, it gives you access to GPU-accelerated rendering, making for visualizations that would be difficult if you were relying on CPU strength alone.

Quartz has been interesting to me largely for its quick access to a GPU-accelerated, high-performance rendering environment capable of 2D, 3D, and transparency. What’s not to love? As it turns out, there’s lots to be challenged by in Quartz. Like all programming environments it’s rife with its own idiosyncrasies, idioms, and approaches to the rendering process. It’s also a fair dose of fun once you start to get your bearings.

Why does all of this matter? If you purchase the Isadora Core Video upgrade you have access to all of the Core Image processing plugins native to OS X. In addition to that you’re now able to use Quartz Composer patches as Actors in Isadora. This makes it possible to build a custom Quartz Composer patch and use it within the Isadora environment. Essentially this opens up a whole new set of possibilities for creating visual environments, effects, and interactivity for the production or installation that you might be working on.

Enough Already, Let’s Build Something

There are lots of things to keep in mind as you start this process, and perhaps the most useful guideline I can offer is to be patient. Invariably there will be things that go wrong or misbehave. It’s the nature of the beast, and paying close attention to the details of the process is going to make or break you when it all comes down to it in the end.

We’re going to build a simple 3D Sphere in Quartz then prep it for control from Isadora. Easy.

Working in Quartz

First things first: if you don’t already have it, you’ll need to download Quartz Composer. Check out I Love QC’s video about how to do this:

The next thing we’re going to do is to fire up QC. When prompted to choose a template, select the basic composition, and then click “Choose.”

[Image: the template chooser]

One of the first things we need to talk about is what you’re seeing in the Quartz environment. The grid-like window that you start with is your patch editor. Here you connect nodes in order to create or animate your scene.

[Image: the patch editor]

You should also see a window that’s filled with black. This is your “viewer” window. Here you’ll see what you’re creating in the patch editor.

[Image: the viewer window]

Additionally you can open up two more windows by clicking the corresponding icons in the patch editor. First find the button for the Patch Library, and click it to open up a list of nodes available for use within the network.

The Patch Library holds all of the objects that are available for use within the Quartz editor. While you can scroll through all of the possible objects when you’re programming, it’s often more efficient to use the search box at the bottom of the library.

[Image: the Patch Library]

Next, open up the patch inspector. The patch inspector lets you see and edit the settings and parameters for a given object.

[Image: the patch inspector]

Let’s start by making a sphere. In the Patch Library search for “sphere” and add it to your patch. Out of the gate we’ll notice that this sucks. Rather, it doesn’t currently look interesting, or like a sphere for that matter. What we’re currently seeing is a sphere rendered without any lighting effects. This means that we’re only seeing the outline of the sphere on the black background.

[Image: the unlit sphere]

This brings us to one of the programming conventions in Quartz. In Quartz we have to place objects inside of other components in order to tell QC that we want a parent’s properties to propagate to the child component.

To see what that means let’s add a “lighting” patch to our network. Congratulations, nothing happened. In order to see the lighting object’s properties change the sphere, we need to place the sphere inside of that object. Select the sphere object in the editor, Command-X to cut, double click on the Lighting object, then Command-V to paste.

[Image: the sphere inside the Lighting patch]

This is better, but only slightly.

[Image: the lit sphere]

Let’s start by changing the size properties of our sphere. Open the patch inspector and click on the Sphere object in the editor. Now we can see a list of properties for our sphere. First let’s adjust the diameter; I’m going to change mine to 0.25.

[Image: adjusting the sphere’s diameter]

Next, select “Settings” from the drop down menu in the patch inspector. Here I’m going to turn up the number of subdivisions of my sphere to give it a smoother appearance.

[Image: increasing the sphere’s subdivisions]

With our sphere looking pretty decent I want to add some subtle animation to give it a little more personality. We can do this by adding an LFO (low-frequency oscillator). We’ll use our LFO to give our sphere a little up and down motion. In the Patch Library search for LFO and add it to your editor next to your sphere.

[Image: the LFO added to the editor]

Next click the “Result” outlet on the “Wave Generator (LFO)” and connect it to the “Y Position” inlet on the sphere.

Wonderful… but this is going to make me seasick.

Next we’re going to make some adjustments to the LFO. With your patch inspector open, click on the LFO and make the following changes (there’s a sketch of what these values produce just after the list):

Period to 2
Phase to 0
Amplitude to .01
Offset to 0
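To get a feel for what these numbers do, here’s the wave the LFO should be generating, sketched in Python. I’m assuming a sine shape and this particular phase convention, since QC doesn’t spell out the exact formula:

import math

period, phase, amplitude, offset = 2.0, 0.0, 0.01, 0.0

def lfo(t):
    # one full cycle every 'period' seconds, swinging 'amplitude' units
    # above and below 'offset'
    return offset + amplitude * math.sin(2 * math.pi * (t / period + phase))

The peak-to-peak travel is only 2 * amplitude = 0.02 units, which is why the bob reads as gentle.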

[Image: the LFO settings]

Now you should have a sphere that’s very gently bouncing in the space.

Next let’s return to the parent lighting patch to make some changes to the lighting in this environment. We can get back to the parent either by clicking on the button “edit parent” or by clicking on the position in the string of objects where we want to go.

[Image: navigating back to the parent patch]

In the root patch let’s click on the lighting object and change some parameters:

Material Specularity to 0.1
Material Shininess to 128
Light 1 Attenuation to .2
Light 1 X Position to -0.25
Light 1 Y Position to 0.5
Light 1 Z Position to 1

[Image: the lighting settings]

Excellent. We should now have a sphere gently bobbing in space with a light located just barely to the left, up, and away (as a note these are locations in relation to our perspective looking at the sphere).

At this point we could leave this as it is and open it in Isadora, but it wouldn’t be very exciting. In order for Isadora to have access to make changes to a QC patch we have to “publish” the inputs that we want to change. In other words, we have to choose what parameters we want to have access to in Isadora before we save and close our QC patch.

I’m thinking that I’d like this sphere to have a few different qualities that can be changed from Isadora. I want to be able to:

  • Change the Lighting Color (Hue, Saturation, and Luminosity as separate controls)
  • Change the position of the light
  • Change the Sphere Size

In QC, in order to pass a value to an object, the parameter in question needs to be published from the root patch. This will make more sense in a second, but for now let’s dive into making some parameters available to Isadora. First up we’re going to add an HSL Color patch to our patch editor. This is going to give us the ability to control color as Hue, Saturation, and Luminosity as individual parameters.

Connect the Color outlet of the HSL to the Light 1 Color inlet on the lighting object.

[Image: the HSL Color patch connected to the Lighting object]

Now let’s do some publishing. Let’s start by right clicking on the HSL object. From the pop-up menu select “Publish Inputs” and one at a time publish Hue, Saturation, and Luminosity. You’ll know a parameter is being published if it’s got a green indicator light.

[Image: the published HSL inputs]

Next publish the X, Y, and Z position inputs for the lighting object. This time make sure you rename them to Light X Pos, Light Y Pos, and Light Z Pos as you publish the parameters.

[Image: the published light position inputs]

At this point we’ve published our color values and our position values, but only for the light. I still want to be able to change the diameter of the sphere from Isadora. To do this we need to publish the diameter parameter from the “sphere” object, then again from the lighting object.

First double click on the lighting object to dive inside of it. Now publish the diameter parameter on the sphere, and make sure to name it “Sphere Diameter.” When you return to the root patch you’ll notice that you can now see the “Sphere Diameter” parameter.

[Image: the published Sphere Diameter parameter]

We now need to publish this parameter one more time so that Isadora will be able to make changes to this variable.

Here we need to pause to talk about good housekeeping. Like all things in life, the more organized you can keep your patches, the happier you will be in the long run. To this end we’re going to do a little input splitting, organizing, and commenting. Let’s start by right clicking anywhere in your patch and selecting “Add Note.” When you double click on this sticky note you’ll be able to edit the text inside of it. I’m going to call my first note “Lighting Qualities.”

Next I’m going to go back to my HSL Color patch, right click on it, select “Input Splitter,” and choose Hue. You’ll notice that you now have a separate input for Hue that’s split off from the HSL. Repeat this process for Saturation and Luminosity. I’m going to do the same thing to my published lighting position variables. Next I’m going to make another note called “Sphere Qualities,” then split my sphere diameter and drag it inside of this note. When I’m done my patch looks like this:

[Image: the organized patch with notes and input splitters]

Right now this seems like a lot of extra work. For something this simple, it sure is. The practice, however, is important to consider. In splitting out the published inputs, and organizing them in notes we can (at a glance) see what variables are published, and what they’re driving. Commenting and organizing your patches ultimately makes the process of working with them in the future all the easier.

With all of our hard work done, let’s save our Quartz patch.

Working in Isadora

Before we fire up Isadora it’s important to know where it looks to load Quartz patches. Isadora looks in the Compositions folder located in each of the System Library, local Library, and User Library directories, and you can tell Isadora to look in any combination of the three at start-up. Make sure that you copy your new Quartz composition into one of those directories (I’d recommend giving your custom Quartz comps a unique color or folder to make them easier to find in the future).
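On a standard OS X install those three locations are as follows (the user-level folder may not exist until you create it):

/System/Library/Compositions
/Library/Compositions
~/Library/Compositions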

With your QC patch in place and Isadora fired up, let’s add our custom patch to a scene. Double click anywhere in the programming space and search for the name of your patch. I called mine Simple Sphere. We can now see that we have our composition with all of the variables we published in QC.

[Image: the custom actor in Isadora]

We can see what our composition looks like by adding a CI Projector and connecting the image output from our QC actor to the image inlet on the projector. Let’s also make sure that we set the CI Projector to keep the aspect ratio of our image.

[Image: the CI Projector connected to the QC actor]

When you do this you should see nothing. What gives?!
If you look back at your custom actor you’ll notice that the diameter of the sphere is currently set to 0. Change that parameter to 0.1, or any other size you choose.

[Image: the sphere with its diameter set]

You should now see a dim floating sphere. What gives?!
If you look at the position of your light you’ll notice that it’s currently set to 0,0,0, roughly the same location as the sphere. Let’s move our light so we can see our sphere:

Light 1 X Position to -0.25
Light 1 Y Position to 0.5
Light 1 Z Position to 1

[Image: the lit sphere on stage]

If you’re connected to an external monitor or projector you’ll also want to make sure that you set the render properties to match the resolution of your output device:

[Image: the render properties]

There you have it. You’ve now built a custom Quartz patch that you can drive from Isadora.

Book Keeping

Not all of the work of media design is sexy. In fact, for all of the excitement generated by the flash of stunning video, interactive installations, and large scale projection, there is a tremendous amount of planning and paperwork involved in the process. In the case of a theatrical project, one of the designer’s crucial pieces of work comes in the form of creating a cue sheet. For the uninitiated, a cue sheet is a list of all of the moments in a show that contain media. These documents help the designer, stage manager, and director establish a common understanding about how media / lights / sound are going to be used in a given production.

Drafting a useful cue sheet is often a matter of preference, but it warrants mentioning some things that can help the media designer get organized as she / he wrestles with a growing list of content to generate. While one could certainly use a word-processing program to generate a cue sheet, I prefer working in a spreadsheet. Excel or Google Spreadsheets are a fine medium for writing cues, and have features that can be extremely helpful.

Cue Sheet Must-Haves

In my opinion there are a few must-have columns in organizing your cue sheet:

The Cue – Whatever you’ve settled on using, numbers or letters or some combination of the two, you need a column that puts the cue name / number in plain sight.

The Page Number – If you’re working from a traditional script, keep track of the page number. At some point you’ll be struggling to find the place in the script where a particular cue is getting called, and knowing the page number can ensure that you stay organized in the planning and technical rehearsal process.

Duration – How long does the cue last? In talking with the director it’s important to have a shared understanding of what’s happening in a given moment in a production. Specifying how long a particular effect or video is going to last can help provide some clarity for the designer as she/he is editing, animating, or programming.

From the Script – What’s in the source material that’s driving you to design a particular look? Did a character say something in particular? Is there any stage direction that’s inspiring your choice? Having a quick reference to what’s inspiring you can be a godsend while you’re crafting the content for a production.

Notes – For all the times you say “I’ll remember,” you will invariably forget something. Write it down. Did the director say something inspiring? Write it down. Did the lighting designer mention something about the amount of ambient light during a particular moment? Write it down. Write down what you’re thinking, or brainstorming. You’re never obligated to keep anything, but having a record of what you’ve been thinking gives you something to start from when you sit down to edit / animate / program.

Shooting Notes – If you’re going to need to record video for a production, make note of what particulars you need to keep in mind at the shoot. Do you need a green screen? A particular lighting effect? A particular costume? Keeping track of what you need for a particular moment is going to make the filming process that much easier.

Checklists – At the end of my cue sheet I keep a checklist for each cue: three columns that help me keep track of what work I’ve done, and what work is left to be done.

Filmed / Animated – Is this cue filmed or animated?
Edited – Is this footage cut and prepped for the playback system?
Installed – Is this footage installed in the playback system?

Working with a Spreadsheet

Simple Formulas

One of the truly magical parts of working with a spreadsheet is the ability to use simple formulas to automate a workflow. A simple example comes out of the show that I’m currently working on, The Fall of the House of Escher. A new work that’s come out of the collaborative process of the 2014 ASU Graduate Acting / Directing / Design cohort, this show is built around a structural focus on giving the audience as much agency as possible. Central to this endeavor is a choose-your-own-adventure model for the production. Given this structure, our production has sections that are distinguished from one another by an alphanumeric prefix – 1A, 2A, 3A, et cetera. Additionally, I want to pair the section code with the number of a particular cue. This, for example, might look like 3A-34 (section 3A, cue 34). This combination of letters and numbers could make renumbering cues, should one change, a difficult process. To simplify this particular problem I can use a simple formula for combining the contents of two columns.

[Image: the House of Escher media cue sheet]

First I create separate columns for the section of the show and the cue number. Next I create a third column intended to be a combination of the other two. Here I insert the following formula: =A24&"-"&B24

Here Google Spreadsheets (or Excel, for that matter) reads this formula as: display the contents of cell A24, insert a “-” symbol, then append the contents of cell B24. If A24 holds 3A and B24 holds 34, the cell displays 3A-34, and copying the formula down the column updates the row references automatically. This may not seem like a big deal until you consider the time saved when a cue is added or removed, forcing a change in the numbering convention of all the following cues.
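If you’d rather use a named function, both programs also offer CONCATENATE, which should produce the same result:

=CONCATENATE(A24,"-",B24)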

Conditional Formatting

Conditional formatting largely comes in varieties that change the background color of a cell based on its contents. Excel has a much wider range of possibilities for this kind of implementation than Google Spreadsheets. For me, however, the simplicity of automatic color coding is tremendously helpful. For example, let’s consider the final three checklist categories that I talked about earlier. Let’s say that I want every cell that contains the word “No” to be color coded red. In order to achieve this look, first I’d highlight the cells that I want the formula to apply to.

Next I’d click on the background formatting button in the toolbar and select “Conditional Formatting” from the bottom of the list.

[Image: selecting Conditional Formatting]

Finally I’d write my simple formula to control the change of color for the cells selected.

[Image: the conditional formatting rule]
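In Google Spreadsheets this rule can be as simple as choosing “Text is exactly” and entering No. If you’d rather use a custom formula, say to make the match case-sensitive, something like this should work (the column letter is just illustrative):

=EXACT(H2,"No")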

Multiple Tabs

Last but not least, maintaining multiple tabs in a workbook saves time and organizational energy. Additionally this allows you to cross reference notes, cells, and thoughts across your workbook. You might, for example, maintain a working cue sheet where you can brainstorm ideas and be a little less tidy. You can then use simple reference formulas to pull the relevant data into the final cue sheet that you give to the stage manager. Should you have the good fortune of having an assistant, you might make a separate page in your workbook to outline her/his responsibilities on a project.

A cleanly organized cue-sheet is far from exciting, but it does ensure that you stay focused and productive as you work.

Live Camera as Mask | TouchDesigner

Back in May I wrote a quick piece about how to use a camera as a mask in Isadora. This is a very powerful technique for working with live cameras, and no matter how many times I see it I’m still struck by how fun it is. It isn’t difficult to program, and one of the questions I wanted to answer this summer was how to create a network in TouchDesigner that could accomplish the same process. Before we get started it’s going to be important to make sure that you have a camera (a web-cam is fine) connected to your computer. While you can create this same effect using any recorded footage, it’s certainly more compelling and playful when you’re working with a live camera. You’ll also want to make sure that you know how the Feedback TOP works in TouchDesigner. If you’re new to this programming environment you might take a moment to read through how to work with the Feedback TOP here.

Getting Started

We’ll start by creating a new project, and by selecting all of the standard template operators and deleting them. We can stay inside of the default container that TouchDesigner starts us off with as we create our network. Let’s start by taking a look at what we’re going to do, and the operators that are going to be involved.

Unlike other tutorials, this time we’re going to work almost exclusively with Texture Operators (TOPs). The effect we’re looking to create uses the feed from a live camera as a mask to hide, or reveal, another layer that could be either a video or a photo (in our case we’ll work with a photo today, though it’s the same process and the same TOP when working with video). To do this we’ll first remove a portion of our video stream, then create a little bit of motion blur with the Feedback TOP, next composite this mask with our background layer, and finish by creating a final composite with a black background layer.

Some like it Hot

Without a Kinect we’re really just approximating separation. Luckily, if you can control your light there are some ways to work around this. Essentially what we’re looking to create is an image where the hottest (or brightest) portion is the subject we want to separate from the background. In another post I’ll talk about some more complicated methods; for now let’s look at what we can do with just a camera and some light.

We’ll start by creating a string of TOPs like this:

Movie In – Monochrome – Threshold – Feedback – Blur – Level – Composite

You’ll want to make sure that you’ve connected the Threshold to the second inlet on the Composite TOP, and assigned the Target TOP of the Feedback operator as the Composite operator in this first portion of the string.
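If you prefer scripting your wiring to dragging wires, those two steps look something like this from a Text DAT. The operator names here are stand-ins for whatever yours are called, and I’m assuming the Feedback TOP’s Target TOP parameter is scripted as top, so verify that against your build:

# plug the Threshold TOP into the Composite TOP's second input
op('thresh1').outputConnectors[0].connect(op('comp1').inputConnectors[1])
# point the Feedback TOP's Target TOP back at the Composite TOP
op('feedback1').par.top = op('comp1')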

Remember to adjust the opacity on the Level operator to a value between 0.8 and 0.95. You will also need to spend some time adjusting the parameters of the Blur operator to fine tune the aesthetic that you’re after.

Threshold

The Threshold TOP is going to be your best friend in this string. This operator is going to help you control how much background you throw away. By adjusting the threshold value you’re controlling which pixel values get passed as white and which are thrown away and converted to alpha. This means that as long as you can keep light on your subject, and mostly off of your background, you’ll be able to isolate the subject from unwanted background pixels. This will take some adjusting, but it’s well worth your time and attention. If you need a little more fine-grained control here, you can insert a Level TOP to adjust your image before it gets to the Threshold TOP.

Composite Composite Composite

The final component of this network is compositing all of our images. First we’ll need to add a Movie In TOP as well as a Constant for our backgrounds. Next we need to add two more Composite TOPs and finally a Null. It might be useful to think of this as a string of three Composite TOPs ending in a Null, with some additional operators at each stage. First, the composite of our live camera and feedback string is combined with our Movie In TOP. In this Composite TOP’s parameters make sure that the Operand method is Multiply; this replaces the white values from our previous string with the pixel values from the Movie In TOP. Next we’re going to composite this string with a Constant. In this case I’m using a constant black background. Depending on the venue or the needs of a production you might well choose another color, which you can do by adjusting the parameters of the Constant TOP. Finally we’ll end the whole string with a Null.

We’ve now used a live feed as a mask to reveal another video or image. Next you might think about where in these strings you could add other operators to achieve different effects or moods. Happy programming.

House of Escher | Media Design

In December of 2012 I was approached at an ASU School of Theatre and Film party and asked if I would be interested in working on a project that would begin the following semester, and premiere a new work in the Fall of 2013. As this is exactly the kind of opportunity that I came to ASU to pursue, I eagerly agreed to be a part of the project.

Some Background

ASU’s School of Theatre and Film (soon to also include Dance) has a very interesting graduate school program for performers and designers. Operating on a cohort-based system, the school admits a group of performers, directors, and designers (Scenic, Costume, and Lighting) every three years. One of the other graduate programs at the school, the one in which I’m enrolled, can enroll students each year. My program, Interdisciplinary Digital Media and Performance (IDM), straddles both the School of Arts, Media and Engineering and the School of Theatre and Film. Students in my program have pursued a variety of paths, and one skill that’s often included in those various paths is media and projection design for stage productions. Just today as I was watching the live web-cast of the Xbox One announcement, I was thinking to myself, “some designer planned, created, and programmed the media for this event… huh, I could be doing something like that someday.”

The latest cohort of actors, designers, and directors started in the Fall of 2011, which means that the group is due to graduate in the Spring of 2014. In both the second and third years of the cohort’s program they work to create a newly devised piece that’s performed in one of the theatres on campus at ASU. Occasionally this group also needs a media designer, and it’s their new show for 2014 that I was asked to be a part of.

The Fall of the House of Escher

Our devising process started with some source material that we used as preliminary research to start our discussion about what show we wanted to make. Our source materials were Edgar Allan Poe’s The Fall of the House of Usher, M.C. Escher, and quantum mechanics. With these three pillars as our starting point we dove into questions of how to tackle these issues, tell an interesting story, and work to meet the creative needs of the group.

One of our first decisions focused on the structure of the show that we wanted to create. After a significant amount of discussion we finally settled on a Choose Your Own Adventure (CYOA) kind of structure. This partially arose as a means of exploring how to more fully integrate the audience experience with the live performance. While it brought significant design limitations and challenges, it was ultimately the methodology the group decided to tackle.

Shortly after this we also settled on a story as a framework for our production. Much of our exploratory conversation revolved around the original Poe work, and it was soon clear that the arc of the Fall of the House of Usher would be central to the story we set out to tell. The wrinkle in this simple idea came as our conversations time and again came back to how Poe and quantum mechanics connect with one another. As we talked about parallel universes and the problems of uncertainty, we decided to take those very conversations as a cue for what direction to head with the production. While one version of the CYOA model takes patrons down the traditional track of Poe’s gothic story, audience members are also free to send our narrator down different dark paths to explore what else might be lurking in the Ushers’ uncanny home. Looking at the photo below you can see where the audience has an opportunity to choose a new direction, and how that impacts the rest of the show.

While this was a fine starting point, we also realized that giving the audience an opportunity to explore only one avenue of possibility in the house felt a little flat. To address that point we discussed a repeated journey through the house in a Groundhog Day-esque style. Each run of the show sends the audience through the CYOA section three times, allowing them the opportunity to see the other dark corners of the house and learn more about the strange inhabitants of the home. I did a little bit of map-making and charted all of the possible paths for our production; that is, all of the possible permutations of the three-legged journey through the house. The resulting map means that there are twelve different possible variations of the production. A challenge, to be sure.

Media and the House

So what’s media’s role in this production? The house is characterized by its Escher-patterned qualities. Impossible architecture and tricks of lighting and perspective create a place that is uncanny, patterned, but also somehow strangely captivating. Just when it seems like the house has shared all of its secrets there are little quantum blips and pulses that help us remember that things are somehow not right, until ultimately the house collapses.

Our host (who spends his/her time slipping between the slices of the various paths the audience tumbles through) is caught as a destabilized field of particles, only sometimes coalesced. The culminating scene is set in a place beyond the normal, a world of quantum weirdness: small as the inside of an atom, and vast as the universe itself. It’s a world of particles and waves, a tumbling peek inside the macro and micro realities of our world that are either too big or too small for us to understand on a daily basis.

Media’s role is to help make these worlds, and to help tell a story grounded in Poe’s original but transformed by a madcap group of graduate students fighting their way out of their own quantum entanglement.

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing issues about how today’s research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today’s obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren’t just booze-filled fun-times, but something a little more insipidly inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Are you too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you’re not so easily taken advantage of.


With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it’s easy to claim that one is impartial and objective at all times, monitoring one’s physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program to establish a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject’s heart rate increases beyond the set threshold the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.
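The mapping at the heart of the station is simple. Here’s a rough sketch of the idea in Python rather than Isadora; the threshold and fade span are stand-ins, not the values from the installation:

def clothing_opacity(bpm, threshold, fade_span=30.0):
    # fully opaque at or below the arousal threshold; the further the
    # heart rate climbs past it, the more transparent the clothing layer
    overshoot = max(0.0, bpm - threshold)
    return max(0.0, 1.0 - overshoot / fade_span)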


So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you’re interested in making your own heart rate monitor, look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I’m at my best. In order to work in a programming space that allowed me to code faster I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control (OSC) is a messaging protocol that’s being used more and more often in theatrical contexts, and it seemed like this project might be a perfect time to learn a little bit more about it. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
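I won’t reproduce the amended Processing sketch here, but the idea is just to wrap each new reading in an OSC message. Here’s the shape of it sketched in Python with the python-osc package; the address and port are made up, so use whatever the receiving program expects:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 1234)  # the listening application

def broadcast_bpm(bpm):
    # send the latest heart rate reading as a single float
    client.send_message('/heartrate/bpm', float(bpm))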


Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that given the deadlines I needed to meet I was going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me in terms of my affection for a visual programming language. I also like the idea of using Max because I’d like the De-objectifier to be able to stand on its own without any other software, and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice.
My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there’s a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode with the heart rate data not changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Boyd described the aesthetic style for these videos as a start-up with almost too much money: it’s paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It’s a look that’s infomercial snake-oil slick.

Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like “oh, that’s what you like huh?” or “you better not be looking at him / her.” The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a “you try it, I dare you” kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard from participants to the question “what are you doing?” was “finding out I’m a bad person.” It didn’t surprise users that their heart rate was changed by the images presented to them, but it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return-to-resting-rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like “I think I just like this guy’s picture better.”
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno

Programming for Arduino – Arduino

Program to Read Serial data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel OSC Library for Processing 
Programming Initial Tests – MaxMSP
Programming and Playback – Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects