Tag Archives: Media Installations

Geometric Landscapes | TouchDesigner

rolling landscape

Working on a new piece to premiere in Mexico, I spent a lot of time experimenting with creating landscapes and backgrounds. Searching for a way into this exploration, I wanted to play with the idea of instancing objects in 3D space, and the illusion of moving and shifting planes in space. This is already a popular visual style, and I wanted to try my hand at exploring what it might look like to make something like this in TouchDesigner.

I’ve talked about instancing before, and so this challenge seemed like something that would be both fun and interesting to play with. I also wanted something that mixed material methods – shaded, flat, and wireframe in appearance. Let’s take a look at how we can make something interesting happen using these ideas as a starting point.

Rendering is going to make or break us when thinking about how to set up this project. With that in mind, it’s important to remember that a typical rendering set-up needs something to be rendered (some geometry), a perspective from which to draw the object(s) (a camera), and a light source (a light – we don’t always need a light, but as a rule of thumb it’s good to assume that we need one). Our Geometry, Light, and Camera are all components, while our render operator is a Texture Operator (TOP). As a point of reference, here’s what a generic rendering set-up looks like:

classic render setup

We can tell our Render TOP to look at multiple Geometry Components in the same network, or we can nest our active surfaces inside of a single Geo. Much of this depends on what you’re looking to create. The most important thing to consider is that surface operators (SOPs) are computed on the CPU unless placed inside of a Geometry COMP – placed inside of a Geo they’re computed on the GPU instead. This makes a huge difference in your performance, and as a best practice it’s good to place any rendered geometry inside of a Geo.

Now let’s take a look at what our rendering set-up is going to look like for making our geometric landscape:

rolling landscape render set-up

Here we can see that we have a similar set-up on the left – a light, a geo, a camera, and a Render TOP. On the right I’ve got the geometry viewer open so we can see a little more about the relationship in the scene of the camera, light, and geometry. We can see that the camera is set above our geometry looking down, we can also see that we have a cone light set-up with a wide angle and a wide delta.

Now that we have a general sense of what we’re making let’s dig-in and make something interesting.

Let’s start with an empty network, and begin by adding a Geo to our network.

geo

To get started we’ll need to dive inside of the Geo to start making some changes. You can do this by double clicking on the Geo, using the quick key “i” (shortcut for inside), or by scroll-wheel zooming into your Geo. Inside our Geo we’ll see a torus that has its render and display flags set (the small blue and purple circles on the surface operator).

Inside of our Geo let’s start by first deleting the Torus SOP. Next let’s add a Grid SOP. Our grid is going to act as the key generative element for us inside of this network – it’s going to give our surface its wire texture, the shading on the surface, and the locations where our spheres get placed. Our Grid is the central piece of what we’re making, and we’ll see how in just a bit. Once we’ve added a grid to our network, we need to make a few changes to some of its properties. First we want to make sure that its Primitive type is set to Polygon; we want to make sure that our orientation is set to ZX Plane; finally we want to change the size to 20 x 20.

grid setup

Next we’ll connect our Grid SOP to a Noise SOP. We’re going to use the Noise SOP to drive some of the shifting locations of the points in our grid. Before we move on, let’s make one quick change. On the Transform page in the Noise SOP’s parameters we can see that the translate z parameter is set to change with the second count of our project – me.time.seconds. This is excellent, and it keeps our noise animated over time, but it also means that we’re only working with 600 samples (in a default TouchDesigner network) because me.time.seconds is locked to our timeline. If, instead, we want noise that doesn’t have a hiccup every 10 seconds, we can use the call me.time.absSeconds. This uses the absolute number of seconds that TouchDesigner has been running to drive the transformations in the Noise SOP. It’s a small change, but it makes for a nicer look (at least in my opinion).

noise SOP
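If you’d rather make this change with a line of Python than by retyping the expression, here’s a minimal sketch you could run from the Textport. It assumes the Noise SOP is named noise1 and that its translate z parameter is tz – both are assumptions about your network, so adjust the names to match what you actually have.

# A minimal sketch: swap the timeline-locked expression for the absolute-time
# version. 'noise1' and 'tz' are assumed names - check them in your own network.
noise = op('noise1')
noise.par.tz.expr = 'me.time.absSeconds'   # was 'me.time.seconds' by default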

Next we’ll add a Material SOP to our network. Our Material SOP is going to allow us to assign a material to our grid. We’ll do this by also adding a Wireframe Material to our network. Before we assign our wireframe to our material we should see something like this:

before assignment

To assign the wireframe to the material, we’re just going to drag and drop the MAT onto the SOP.

assign MAT

Finally, we’re going to end this string by adding a Null SOP, making sure to turn on the render and display flags.

wire null

At this point we’ve made the wireframe-outlined elements of our grid. We’re now going to use the same data stream that we’ve already programmed to help us create another layer of texture, and to create the locations for our spheres. Let’s start by adding our spheres to the network. To do this we’re going to do some instancing. When we’re instancing we need some location information for where to generate the copies of our source geometry. To get this information we’re going to use a SOP to CHOP to convert our SOP information into CHOP data.
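Conceptually, the SOP to CHOP just turns each point of our noisy grid into one sample across a set of tx, ty, and tz channels – one instance position per point. Here’s a plain-Python sketch of that idea (this isn’t the TouchDesigner API, just the concept), using a sine/cosine stand-in for the noise:

import math

# One tx/ty/tz sample per grid point, displaced by a fake "noise". The point
# count below is illustrative - in TouchDesigner it depends on the Grid SOP's
# rows/columns settings, not on the 20 x 20 size.
SIZE, POINTS = 20.0, 21

tx, ty, tz = [], [], []
for i in range(POINTS):
    for j in range(POINTS):
        x = -SIZE / 2 + SIZE * i / (POINTS - 1)
        z = -SIZE / 2 + SIZE * j / (POINTS - 1)
        y = math.sin(x * 0.5) * math.cos(z * 0.5)   # stand-in for the Noise SOP
        tx.append(x)
        ty.append(y)
        tz.append(z)

print(len(tx), 'instance positions')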

SOPto

Now before we move on we need to change gears for just a moment. What we’re going to do next is to add another Geo inside of our current Geo – in Russian nesting doll fashion. Why? Well, we’re going to do this in order to take advantage of the Geo’s ability to instance. Why not use the Copy SOP? I love me some Copy SOP action, but in this case instancing is more efficient for this particular activity.

So, let’s add another Geo to our network:

new geo

Next we’re going to replace the torus inside of this Geo with a sphere. We’re also going to add a material inside of the Geo. Easy, right? I’m going to set my sphere to be pretty small ( 0.09, 0.09, 0.09 ), I’m also going to make sure that I connect my sphere to a Null (in case I want to make any other changes), and then turn on the Null’s display and render flags. Finally I’m going to add a Constant Material. When all is said and done you should have something like this:

inside the sphere

Excellent. Now when we back out of this nested Geo we should see just a single sphere. What?! Well, now we can set the Geo to instance – my favorite part.

small sphere

Let’s start by taking a look at the parameters of our Geo. Specifically, we want to look at the Instance page. Here we first need to turn on Instancing. Next we’re going to tell our Geo to look at the CHOP called sopto1. Finally we’ll set the TX, TY, and TZ parameters to correspond to the channels called tx, ty, and tz. If this seems like crazy talk, that’s okay – check out the picture below and it should make more sense:

instance page
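For reference, the same Instance-page setup can also be scripted from the Textport. Treat this as a hedged sketch: the Geo name (geo2) and the parameter names below are assumptions that may differ between TouchDesigner builds, so check the parameters on your own Geo if it doesn’t take.

g = op('geo2')                 # the nested Geo holding the sphere (assumed name)
g.par.instancing = True        # turn on instancing
g.par.instanceop = 'sopto1'    # the CHOP supplying the instance data (assumed parameter name)
g.par.instancetx = 'tx'        # map the CHOP channels to translate X/Y/Z (assumed parameter names)
g.par.instancety = 'ty'
g.par.instancetz = 'tz'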

We also want to head to the Render page of the Geo, and set this Geo to use the material ./constant1 (this means use the material inside of this Geo called constant1 – ./ is a directory pointer indicating where to look for the thing in question, in this case a constant).

constant

Alright, now we can finally see some of our handiwork – you should now see a sphere instanced at each of the vertices of the grid that we’re transforming with noise.

Spheres

Now let’s kick it up a notch by adding another Geo to our network.

last geo

We’re going to treat this Geo slightly differently. Inside let’s add an In SOP and a Phong Material. On our In SOP let’s make sure that the display and render flags are turned on, and for your Phong choose a nice dark color – I’m choosing a deep crimson.

geo3

The In SOP allows us to pass in the geometry that we’ve already made, acting as a kind of short-cut for us. When we go back to our Geo we just need to make sure that we’ve set our Render material to be ./phong1 – like with our constant this is the pathway to and the name of the material we want to use.

geo3mat

Alright, now you should have a network that looks something like this:

complete geo network

Now we’re ready to move out of our Geo and get ready to render our scene. Zooming out of our Geo we should see a network that looks something like this:

work space

In order to render our scene we need to revisit what we talked about at the beginning of the post – we need to add a Camera, a Light, and a Render TOP. Let’s go ahead and add these to our network.

rendersetupGeo

What gives?! This doesn’t look right at all. Well, part of what’s happening is that we don’t have our camera and light positioned correctly to render the scene the way we want. To make this easier to understand, let’s change up our workspace so we can use the geometry viewer (one of my favorite tools). Let’s start by dividing the workspace into two windows. We can do this by using the split workspace icon in the menu bar:

menu bar

I like the vertical split, but you can choose whatever arrangement works for you. When you initially click on this split you’ll see two views of your current network location:

split step1

In order to see the geometry viewer we need to change the pane type selection for one of our windows. Clicking on the small drop down triangle will reveal a menu of network views. Let’s select Geometry Viewer from the list.

pane type

You should now see your network on one side, and the geometry from our Geo comp on the other side.

geo viewer

Now we’re cooking with gas. Alright, let’s make our lives just a little easier and change the scale of our light and camera to make them easier to see. Click on your light COMP; if your parameters window has disappeared you can bring it back by pressing “p” on your keyboard. In the scale parameter, let’s turn that up to 10, 10, 10.

light scale

Great, now we can see part of the reason that our scene isn’t rendering the way we want it to. Our light is currently set up as a point light that’s positioned at the edge of our geometry. Let’s make some changes to our light’s properties so we get something closer to what we want. Let’s start by changing the location of our light; I’m going to place mine at 0, 10, 0.

lightplacement

Now let’s take a look at the Light page of the parameters window. Here I’m going to change my light to a Cone, and alter the cone size, delta, and rolloff. Experiment with different settings here to find something that you like.

light properties

Before you get too much experimenting done, however, you’ll probably notice that the cone of our light isn’t pointing towards the surface of our geometry. We can fix this by heading back to the Xform page of the parameters window, and setting the rotation values to -90, 0, 0.

light ROT

With our light starting to take shape, let’s get our camera in order. Over at our camera, let’s make the same initial change we made to the light and turn the scale up to 10, 10, 10 – this is going to make it much easier for us to find our camera.

cameraScale

With the scale turned up we can see that our camera needs to be translated back and up, and rotated downwards so it’s looking at the geometry. After a little bit of adjusting I think I like my camera at:

TX    0
TY    6.2
TZ    18.9
RX    -16.8
RY    0
RZ    0

cameraTrans
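If you’d rather set these by script than by typing them into the parameter window, something like the sketch below works from the Textport. The camera name cam1 is an assumption, and of course your favorite position will likely differ from mine.

cam = op('cam1')                                   # assumed camera name
cam.par.tx, cam.par.ty, cam.par.tz = 0, 6.2, 18.9  # translate
cam.par.rx, cam.par.ry, cam.par.rz = -16.8, 0, 0   # rotate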

Boom! Alright, let’s close our geometry viewer and take a closer look at our Render TOP to see what we’ve made.

finalRender

Nice work. This is a good looking start, and now that you know how it’s made you can start to really have fun – camera placement, light placement, noise amplitude, you name it, go crazy, make something fun or weird, or just silly.

Sending and Receiving OSC Values with TouchDesigner

The other day I posted a look at how to send and receive TouchOSC data with Troikatronix’s Isadora. In an effort to diversify my programming background I’m often curious about how to translate a given task from one programming environment to another. To this end, I was curious about repeating this same action in Derivative’s TouchDesigner. If you’re one of a growing number of artists interested in visual programming, VJing, media design, interactive system design, media installation, or media for live performance, then it’s well worth your time to look at both Isadora and TouchDesigner.

That said, I initially started thinking about control panel interfaces for TouchOSC when I saw Graham Thorne’s instructional videos about how to transmit data from Isadora to Touch OSC (If you have a chance take a look at his posts here: Part 1 & Part 2). Last year I used TouchOSC accelerometer data to drive an installation at Bragg’s Pie Factory in Downtown Phoenix, but I didn’t do much with actual control panels. Receiving data from TouchOSC in TouchDesigner is a piece of cake, but sending it back out again didn’t work the first time that I tried it. To be fair, I did only spend about 3 minutes trying to solve this particular question as it was a rather fleeting thought while working on another project.

Getting this working in Isadora, however, gave me a renewed sense of focus and determination, so with a little bit of experimenting I finally got this working the way that I wanted. While there are any number of things you might do with TouchDesigner and TouchOSC, I wanted to start with a simple deliverable: I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it’s unlikely that I’d need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

For the most part this is pretty easy, but there are a few things to watch for in order to get the results that you want.

Here’s the network stream of Channel Operators (CHOPs):

OSC In – Select – Math – Rename – OSC Out

OSC In

Derivative Documentation on the OSC In CHOP 

The OSC In CHOP is pretty excellent out of the gate. We need to do a little configuring to get this up and running, but after that we’re truly off to the races. First off it’s important to make sure that your TouchOSC device is connected to your network and broadcasting to your TouchDesigner machine. If this process is new to you it’s worth taking a little time to read about how to set this all up – start on this page if this is new to you. Once your TouchOSC device is talking to your TouchDesigner network we’re ready to start making magic happen. One of the first things that I did was to activate (just adjust up or down) the two faders that I wanted to work with. This made sure that their IDs were sent to TouchDesigner, making it much easier for me to determine how I was going to rename them in the future.
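If you want to sanity-check the OSC In CHOP without a phone in hand, a tiny script using the third-party python-osc package can stand in for TouchOSC. This is just a sketch: the IP address, port, and fader address below are placeholders – match them to the port you set on the OSC In CHOP.

import time, math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 10000)   # TouchDesigner machine and OSC In port (placeholders)

for i in range(200):
    value = (math.sin(i * 0.1) + 1) / 2        # sweep a fader-like value between 0 and 1
    client.send_message('/1/fader1', value)    # the same style of address TouchOSC sends
    time.sleep(0.05)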

Screenshot_100913_094345_PM

Select

Derivative Documentation on the Select CHOP

When you’re working with the OSC In CHOP, all of your TouchOSC data comes through a single pipe. One of the first things to do here is to use the Select CHOP to single out the signal that you’re looking for (as a note, you could also export the channel in question to a Null, or handle this in a variety of other ways – in my case a Select CHOP was just the first method I tried). This operator allows you to pull out a single channel from your OSC In.

Screenshot_100913_094428_PM

Math

Derivative Documentation on the Math CHOP

Next I wanted to remap the values of my incoming signal to be inverted. A Math CHOP lets us quickly remap the range for our data by changing a few values on the Range page of the parameters window. You’ll notice that you first need to specify the from range (the range of incoming values), and then specify the to range (the range of outgoing values). In my case, the from values are 0 and 1, and the to values are 1 and 0.

Screenshot_100913_094528_PM
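Under the hood that from/to remap is just a linear interpolation. For reference, here’s the same math as a plain Python function (not TouchDesigner code, just the idea):

def remap(value, from_low, from_high, to_low, to_high):
    # Normalize against the incoming range, then scale into the outgoing range
    normalized = (value - from_low) / (from_high - from_low)
    return to_low + normalized * (to_high - to_low)

print(remap(0.25, 0, 1, 1, 0))   # fader 1 at 0.25 becomes 0.75 for fader 2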

Rename

Derivative Documentation on the Rename CHOP

The Rename CHOP is one of the most important steps in your whole network. When communicating with TouchOSC you need to know exactly what object you’re trying to drive with your data. To accomplish this you need to know the UDP address of your device, the port number that you’re broadcasting to, and the name of the slider or button that you’re wanting to change. While we can specify the UDP address and port number with another CHOP, we need to use the Rename CHOP (a quick aside – there are, of course, other ways to do this – this just happens to be the method that I got working after a bit of trial and error) to specify the name of the object that we’re driving. In my case I wanted fader 1 to drive fader 2. You’ll notice that the formatting of the fader name is important. In this particular OSC layout there are multiple tabs. We can see this as evidenced by the “1/” preceding the name of our fader. In changing the name of this input we need to enter the precise name of the fader that we want to change – this is why we activated both faders initially; this gave us the names of both objects in the TouchOSC panel. You’ll notice that I changed the name of my fader to “1/fader2”.

Screenshot_100913_094606_PM

OSC Out

Derivative Documentation on the OSC Out CHOP 

Now that we’ve processed and named our signal we’re ready to send it back out to TouchOSC. In this CHOP we need to input the UDP address of the TouchOSC device (you can find this in the info tab of the TouchOSC app) as well as the port that we’re sending to. We also want to make sure that we’re sending this as a sample, and finally we want to turn off “send events every cook.” This last step is important because it ensures that we’re only sending values when they change. If we send messages for every cook we won’t be able to see the inverse relationship between our two sliders. Correction – we’ll be able to see the relationship so long as we don’t set up slider 2 to drive slider 1. In order to create an inverse relationship with both sliders we only want to transmit data as it changes (otherwise we’ll find that we’re fighting with the constantly streamed position data from the slider that isn’t being activated).

Screenshot_100913_094705_PM
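To make the “only send on change” idea concrete, here’s a sketch of the same behavior outside of TouchDesigner, again using the third-party python-osc package. The device IP, port, and fader address are placeholders for whatever your TouchOSC info tab reports.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('10.0.0.20', 9000)    # TouchOSC device IP and incoming port (placeholders)
last_sent = None

def send_inverted(fader1_value):
    """Send fader 1's inverted value to fader 2, but only when it actually changes."""
    global last_sent
    value = 1.0 - fader1_value                 # the Math CHOP's inverted range
    if value != last_sent:                     # mirrors turning off 'send events every cook'
        client.send_message('/1/fader2', value)
        last_sent = value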

The last step in this process is to copy the whole string and set up fader 2 to transmit to fader 1; this last step allows both faders to drive one another in turn.

There you have it. You’ve now successfully configured your TouchDesigner network to receive and send data to a mobile device running TouchOSC.

TouchDesigner | Sculpture

In the ever-growing list of tools that I’m experimenting with, Derivative’s TouchDesigner is a tool that time and again keeps coming up as something that’s worth learning, experimenting with, and developing competencies around its workflow. TD is a node-based environment, and a program in TD is called a network. Inside of the network, nodes can be directly connected with patch cords or linked by exporting parameters.

Nodes, also called Ops (Operators), are split into families specific to the characteristics of their behavior: CHOPs (Channel Operators), TOPs (Texture Operators), SOPs (Surface Operators), MATs (Materials), and DATs (Data Operators). Nodes from within the same family can pass data directly to one another through patch cords (similar to MaxMSP and Isadora). The output of nearly every node can also be passed into other nodes by exporting parameter values. This divides the passing of data into two distinct processes: one centered on like-to-like connections, and one about moving from like to different.

TouchDesigner’s nodes are the most powerful when they’re connected. Like Max, single nodes do little by themselves. Also like Max, the flexibility of TD is its ability to build nearly anything, and with that comes the fact that little is already built. Similar to Isadora is the native ability to build user interfaces as a part of the very fabric of building a program / user experience.

One of the projects that I’m working on this semester is for a sculpture course. This course, called New Systems, is intended to address the link between media and sculpture. One of the areas that I’m interested in exploring is collecting data from a circus apparatus and using that to drive a visualization in performance. I’m most interested in the direct link between how an apparatus is behaving and how that data can be interpreted in other ways. To that end, this semester I set to the work of building an apparatus and determining how to parse that data. In my case I decided to use this opportunity to experiment with TouchDesigner as a means of driving the media. While I was successful in welding together a square from stainless steel, after some consultation with my peers in my sculpture course it was determined that this structure was probably not safe to perform on. Originally I had planned to use a contact mic to capture some data from my interaction with the apparatus, and after a little bit of thinking and consultation with my adviser (Jake Pinholster) I decided that gyroscope data might be more useful.

My current plan is to move away from this being a performance apparatus and instead think of it as an installed sculptural piece that serves as a projection surface. For data I’ll be using an iPod Touch running Hexler’s TouchOSC. TouchOSC passes data as UDP packets over a wired or wireless network using Open Sound Control (OSC). One of the many things that TouchOSC can do is pass the accelerometer data from an iOS device out to other applications. In my case TouchOSC is passing this information to TouchDesigner. TD is then used to pull this information and drive some of the media.

One of the challenges that my adviser posed in this process was to create three scenes that the media moved through. For the sake of experimentation I applied this challenge to the idea of working with containers in TouchDesigner. Containers are a method of encapsulation in TD; they’re a generic kind of object that can hold just about any kind of system. In my case I have three containers that are equivalent to different scenes. The timeline moves the viewer through the different containers by cross-fading between them. Each container holds its own 3D environment that’s rendered in real time and linked to the live OSC inputs coming from an iPod Touch.

The best way to detail the process of programming this installation is to divide it up into the component pieces that make it all work. The structure of this network is defined by three hierarchical levels: the container, control, and final output; the individual composited scene; and the underlying geometry.


Want to work your way through these ideas one chunk at a time? Visit their individual Posts here:

The Underlying Geometry

The Individual Composited Scene

The Container, Control, and Final Output

Want to work your way through the whole process? Keep on scrolling.


The Underlying Geometry

One of the benefits of working with TouchDesigner is the ability to work in 3D. 3D objects are in the family of operators called SOPs – Surface Operators. One of the aesthetic directions that I wanted to explore was the feeling of looking into a long box. The world inside of this box would be characterized by examining artifacts as either particles or waves with a vaguely dual-slit kind of suggestion. With that as a starting point I headed into making the container for these worlds of particles and waves.

Before making any 3D content it’s important to know how TouchDesigner processes these objects in order to display them. On their own, Surface Operators can’t be displayed as a rendered texture. In TouchDesigner’s idiom textures are two-dimensional surfaces, and it follows that the objects that live in that category are called TOPs, Texture Operators. Operators from different families can’t be directly connected with patch cords. In order to pass the information from a SOP to a TOP one must use a TOP called a Render. The Render TOP must be connected to three COMPs (Components) in order to create an image that can be displayed. The Render TOP requires a Geometry COMP (something to be rendered), a Light COMP (something to illuminate the scene), and a Camera COMP (the perspective from which the object is to be rendered). In this respect TD pulls from conventions familiar to anyone who has worked with Adobe’s After Effects.

Knowing the component pieces required in order to successfully render a 3D object, it’s easier to understand how I started to create the underlying geometry. The Geometry COMP is essentially a container object (with some special attributes) that holds the SOPs responsible for passing a surface to the Render TOP. The default Geometry COMP contains a torus as its geometry.

We can learn a little about how the COMP is working by taking a look inside of the Geometry object. 

Here the things to pay close attention to are the two flags on the torus object. You’ll notice in the bottom right corner there is a purple and a blue circle that are illuminated. The purple circle is a “Render Flag” and tells TouchDesigner to render the object, and the blue circle is a “Display Flag” which tells TouchDesigner that this is the object that should be displayed in the Geometry COMP.

Let’s take a look at the network that I created.

Now let’s dissect how my geometry network is actually working. At first glance we can see that multiple objects are being combined into a single piece of geometry that’s ultimately being passed out of this Geometry COMP. 

If we look closer we’ll see that the SOP network looks like this:

Grid – Noise – Transform – Alpha Noise (here the bypass flag is turned on)

Grid creates a plane that’s made out of polygons. This is different from a rectangle that’s composed of only four points. In order to create a surface that can deform I needed a SOP with points in the middle of it. The grid is attached to a Noise SOP that’s animating the surface. Noise is attached to a Transform SOP that allows me to change the position of this individual plane. The last stop in this chain is another Noise SOP. Originally I was experimenting with varying the transparency of the surface. Ultimately, I decided to move away from this look. Rather than cutting this out of the chain, I simply turned on the Bypass Flag, which turns off this single SOP. This whole chain is repeated eight times (for a total of eight grids).

These planes are then connected so that the rest of the network looks like this:

Merge – Transform – Facet – Texture – Null – Out

Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move the object as a whole in space. Facet is a handy operator that allows you to compute the normals of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring; ultimately I ended up turning on the bypass flag for this SOP. A Null, like in other environments, is really just a placeholder kind of object. In the idiomatic structure of TouchDesigner, the Null is operationally an object that one places at the end of an operator string. This is considered a best practice for a number of reasons. High on the list of reasons to end a string in a Null is that this allows easy access for making changes to a string. TouchDesigner allows the programmer to insert operators between objects. By always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.


The Individual Composited Scene

There are always large questions to answer when thinking about creating an interactive work: Who is it for? What does it look like? What are you trying to communicate? How much instruction do you provide, how little instruction do you provide? And on and on. As I started to think about how this piece was going to work as an installation rather than as a performance apparatus, I started by thinking about what kind of data I could use to drive the visual elements of this work. One of the sensors that I knew I could easily incorporate into my current sculptural configuration was an iPod Touch. The Touch has an on-board gyroscope and accelerometer. After a conversation with my adviser (Jake Pinholster) we decided that this would be a direction of exploration worth pulling apart, and from there I went back to TouchDesigner to start thinking about how I wanted to incorporate live data into the piece I was making.

When dealing with a challenge like building an interactive sculptural system that has at least three different visualizations, it can be challenging to think about where to start. Different programmers are bound to have different approaches to addressing this question. My approach was to start by thinking about what kind of input data I had to work with. Because I was dealing with a sensor that relayed spatial information, this also helped me think about how to represent that data. Next I thought about what different kinds of ways I wanted to present this information, and finally I addressed how to play back this experience for users. Some of my more esoteric and existential questions (why am I making this? what does it mean? what does it represent?) were addressed through the methodical programming process, and others were sussed out over contemplative cups of coffee. As much as I wish that these projects could have a straight line of execution, a checklist even, I’m discovering more and more that the act of creating and programming is often a winding path with happy (and unhappy) discoveries along the way.

My first step on this journey, however, was to address what kind of inputs I had to use. Hexler has an excellent app for sending UDP messages over wireless connections called TouchOSC. OSC, or Open Sound Control, is a communications protocol that uses UDP messages to send data over wired and wireless networks. It’s functionally similar to MIDI and has some additional flexibilities and constraints. In the case of TouchOSC, one of the parameters that you can enable from your iOS device is sending xyz data from the accelerometer. Getting TouchOSC up and running does require a few steps to get the ball rolling. First, both the computer that’s receiving and the device that’s broadcasting need to be on the same network. Your broadcasting device will need the IP address of the receiving computer, and a specified port to send the data to (how to find your IP address on a Mac, and on a PC). Once this information is set on your broadcasting device, it’s time to add a Channel Operator to your TouchDesigner network.

In TouchDesigner, there is a CHOP called “OSC in.” This CHOP will allow you to receive OSC data over a wireless network. Once you’ve added the CHOP to your TD network you’ll have to specify the port that Touch OSC is broadcasting to, and then you should be in business. In my case once this was set up I could instantly see a stream of accelerometer data coming from my iPod Touch. In order to use these values, however, I needed to take some additional steps. The raw OSC data from Touch OSC comes in as a range of data from -1 to 1. Additionally, the data comes in from one CHOP. My flow of operators looks like:

OSC In – Select – Lag – Math – Null

OSC In is the data input. The Select CHOP allows you to select a single channel out of a bundle of channels. In this case I used this to separate my X, Y, and Z inputs into different streams. The Lag CHOP helps to smooth out the attack and decay rates of the input data. In my case this ensured that the final values used to control another object were kept from being too jittery. The Math CHOP is tremendously powerful; in my case I wanted to be able to map the values of my raw data [ -1 to 1 ] to a larger range of values, say 0 to 200. Finally I ended my string in a Null. A Null in this case is very useful in case I need to add any other operators into my string.
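As a rough reference, here’s the Lag-then-Math idea in plain Python: smooth each incoming accelerometer sample, then remap it from -1..1 to 0..200. The smoothing factor is an assumption for illustration, not the Lag CHOP’s actual internals.

smoothed = 0.0

def process(raw):
    """Smooth one accelerometer sample (-1..1) and remap it to 0..200."""
    global smoothed
    smoothed += 0.1 * (raw - smoothed)    # simple exponential smoothing (the Lag idea)
    return (smoothed + 1) / 2 * 200       # remap -1..1 to 0..200 (the Math idea)

for sample in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(round(process(sample), 2))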

Before thinking about how to use these values, it’s important to take a moment to revisit how geometry is rendered in TouchDesigner. The Geometry COMPs that are used to create the objects to be displayed can’t be visualized without using a Render TOP. The Render TOP requires three components in order to generate an image that can be seen: a source geometry, a light, and a camera. The Geometry COMP provides the location of surfaces, and the light provides the necessary information about how the object is being lit. The Camera COMP controls the perspective that the object is being rendered from. This is similar to an approach that one might use when creating 3D content in After Effects – an object to be rendered, a light so the object can be seen, and a camera to control the perspective the audience sees of the object. Because rendering means combining multiple COMPs, that structure can inform how we use live data.

With some scaled values processed and ready to export, I was ready to think about how these values could influence the viewer’s perspective of the geometry. One of my initial thoughts was to render a cube that a user could look inside of. As the observer changed the orientation of the sensor, the virtual environment would also change in kind. While it’s possible to do this by rotating and translating the geometry itself, I decided to focus on the orientation of the camera instead. This has a few advantages. One important advantage is the ability to tell a camera to look directly at a specified geometry. This means that in translating the camera (left or right, up or down, in or out) the camera stays focused on the center of the target geometry. This makes changing perspective much simpler.

Initially I was thinking of rendering the entire 3D scene as a single geometry. In doing this, however, I was experiencing some challenges when thinking about the placement of lights and the overall organization of the geometry, and in applying texture to the surfaces. By using a Phong shader one can apply texture maps to the 3D geometry COMPs that have been created. By separating the interior and exterior pieces of the geometry and then compositing them after rendering I was able to apply different shaders to each geometry.

The portion of my network responsible for compositing the geometry looks like this:

Render 1, Render 2, Constant (black solid) – Composite – Transform – Null – Out

Render 1, Render 2, and the Constant are the three source surfaces. Render 1 is the box, Render 2 is the merged set of waves, and the Constant is a black background. Another approach to this would be to set one of the camera backgrounds as black. These three flow into a Composite TOP. Next is the Transform TOP (this allowed for some small adjustments that needed to be made in order to help align the projection with the sculpture). Originally I made this string with a Null as the final output of this component. I would eventually find that I needed an Out to pass this scene into another display module.

I used the same techniques as above for the other two scenes – starting with establishing my data stream, generating the geometry, rendering out layers to be composited and then passed out to the visual stream.

Are these pictures too small? You can see higher quality versions by looking at this Flickr Gallery: Graduate School Documentation


The Container, Control, and Final Output

In thinking about how to meet the objectives that I had for this piece, one of my central questions was how to make sure that I could move through three cued scenes – either with manual or automatic triggers. I knew that I had three different aesthetic environments that I wanted to move through. I explored several different options, and the one that ultimately made sense to me given my current level of proficiency in TouchDesigner (at this point I had only been programming in Touch for a total of three weeks) was to use a cross-fading approach. Here’s what the whole network looks like:

In thinking about how to ensure that I was being efficient I decided to encapsulate my three different scenes in their own respective containers. You’ll notice on the left hand side that there are three containers – each holding its own 3D environment. These are joined through crossfading TOPs, then through a final composite (for a mask), until ending in a Null that was used as the display canvas.

I spent a lot of time thinking about how this piece was going to be both interactive and autonomous. It needed to be interactive in that the user was able to see how their interaction with an object was driving the visual media; it needed to be autonomous in its ability to transition between scenes and then loop back to the beginning of the network. I don’t think I’ve totally cracked the nut that is the right balance of interactivity and self-directed programming, but it feels like I did make strides towards addressing this question. My solution to these issues was to allow the interaction with the projection to be centered around the control of perspective, but to drive the transitions through the scenes with time-line triggers.

Unlike some other interactive programming environments, TouchDesigner has a timeline built into the fabric of the control system. The Timeline is based in frames, and the programmer can specify the number of frames per second as well as the total number of frames for a given project. My Timeline triggering system was the following string of CHOPs:

Timeline – Trigger – Null

Timeline reports out the current frame number. The Trigger CHOP can be set to trigger at a given threshold (or in my case a frame number). This in turn is passed to a Null and exported to a Crossfade TOP as the rate for the crossfade. The crossfades are daisy-chained together before finally being attached to the Null that’s output to the projector.
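To make the triggering logic concrete, here’s a small Python sketch of the idea: given the current frame and a per-scene trigger frame, produce a 0–1 value you could use as a crossfade amount. The frame numbers and fade length are placeholders, not the values from my network.

def crossfade(frame, trigger_frame, fade_frames=120):
    # Ramp from 0 to 1 over fade_frames once the trigger frame is passed
    t = (frame - trigger_frame) / fade_frames
    return max(0.0, min(1.0, t))

# Scene 1 -> 2 triggered at frame 600, scene 2 -> 3 at frame 1200 (daisy-chained fades)
for frame in (500, 650, 720, 1250, 1320):
    print(frame, crossfade(frame, 600), crossfade(frame, 1200))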

With the system working I also needed to make a mask for the final projection to ensure that I wasn’t displaying any empty grid onto the floor of the gallery where this was being installed. I would typically make a mask for something like this in Photoshop, but decided to try making this all in the TouchDesigner programming environment. My TOP operator string for this looked like:

Constant – Transform – Blur – Composite

I started by creating a black constant that’s then passed to a transform so that it can be positioned into place. This is then passed to a blur to soften the edges, and finally to a composite to create a mask that contains a left, right, top, and bottom side. In hindsight I realize that I could use a single constant passed to four Transform TOPs to be a little more tidy. The mask as a composited object is then composited with the final render stream before being passed to the Null that’s connected to the projector.

In the end I’m fairly happy with this project. It’s been a steep learning curve, but well worth the hassle, angst, and late nights. It’s no small thing to have made a piece of interactive media-driven sculpture in a programming environment that’s totally new to me. For as hard as all of this work has proven to be, I have to remind myself that I’m actively doing the work that I came to graduate school to do. Every day I realize that I’ve been changed by my time in the desert, and by my time with the gifted and brilliant artists and friends that I’ve found here.

Are these pictures too small? You can see larger versions of them here:

TouchDesigner | Time Trigger

Ultimately I don’t know how practical this method is, but at the moment it feels like a victory. Creating a time based trigger in TouchDesigner can be accomplished in a ton of ways; the method I’m using relies on the following CHOPs: LFO, Constant, Count, and Trigger. 

The LFO, set to the same sample rate as the project, will cross 0 once per second. The Constant indicates the rate at which you wish to count. Count will use the input from the LFO and the Constant to count at the rate specified by the Constant every time the LFO crosses 0. This is attached to a Trigger CHOP. The trigger’s threshold can be set to the time (in seconds) appropriate for the transition. Bang. A time based triggering method using four CHOPs.
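If the chain is hard to picture, the logic boils down to something like the plain-Python sketch below. This isn’t TouchDesigner code, just an illustration of what the four CHOPs are doing together; the sample rate and threshold values are arbitrary.

    # Plain-Python illustration of the LFO > Constant > Count > Trigger idea:
    # each time a 1 Hz signal crosses zero on the way up, add the constant rate
    # to a running count; fire once the count passes the threshold (in seconds).
    import math

    def time_trigger(threshold_seconds, rate=1.0, sample_rate=60, run_seconds=30):
        count = 0.0
        prev = 0.0
        for n in range(run_seconds * sample_rate):
            lfo = math.sin(2 * math.pi * (n / sample_rate))  # 1 Hz oscillator
            if prev < 0 <= lfo:       # rising zero crossing, once per second
                count += rate         # the Constant CHOP's contribution
            if count >= threshold_seconds:
                print('trigger fired at ~%.2f seconds' % (n / sample_rate))
                return
            prev = lfo

    time_trigger(threshold_seconds=5)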



It’s also worth noting that another method for achieving this same triggering effect would be to use just two CHOPs: Timeline and Trigger. The Timeline CHOP will report the current position as a frame number. Provided that you know the time (and therefore the frame number), you can use this to set the threshold for the trigger value.
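The only arithmetic involved is converting the time you care about into a frame number, which depends on your project’s frame rate (30 fps is an assumed example here):

    # Seconds-to-frames conversion for setting the Trigger CHOP's threshold
    # when reading the Timeline CHOP. 30 fps is an assumed project rate.
    def seconds_to_frames(seconds, project_fps=30):
        return seconds * project_fps

    print(seconds_to_frames(12.5))   # 375.0 frames into the timeline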

Phase 2 | Halfway House

Media design is an interesting beast in the theatre. Designers are called upon to create digital scenery, interactive installations, abstract imagery, immersive environments, ghost like apparitions, and a whole litany of other illusions or optical candy. The media designer is part system engineer, part installation specialist, and part content creator. This kind of design straddles a very unique part of the theatrical experience as it sits somewhere between the concrete and the ephemeral. We’re often asked to create site specific work that relates to the geometry and architecture of the play, and at the same time challenged to explore what can be expressed through sound and light. 

One of the compelling components of ASU’s School of Theatre and Film (SoTF) is its commitment to staging new works. In addition to producing works that are tried and true, ASU also encourages its students to create works for the stage. As a part of this commitment the department has developed a three-phase program to serve the process of developing a work for full main-stage production. 
  • Phase 1 – Phase one is between a staged reading and a workshop production of a play. This phase allows the team to focus on sorting out the nuts and bolts of the piece – what is the play / work really addressing, and what are the obstacles that need to be addressed before it moves on to the next stage of production. 
  • Phase 2 – Phase two is a workshop production environment. With a small budget and a design team, the production team creates a staged version of the work that operates within strict design constraints. Here the lighting plot is fixed, scenic elements are limited, and media has access to two fixed projectors focused on two fixed screens. This phase is less about the technical aspects of the production, and more focused on getting the work up in front of an audience so that the writer and director have a chance to get some sense of what direction to move next.
  • Phase 3 – Phase 3 is a full main-stage production of a work. Here the production has a full design team, a larger budget, and far fewer constraints on the implementation of the production. 
While productions can skip one of the stages, ideally they are produced in at least one phase (either one or two) before being put up as a phase three show. 
This semester I was selected to be the media designer on call for the two original works slotted in as Phase 2 productions: Los Santos and The Halfway House. These two new works are both written by current ASU playwrights, who are invested in receiving some critical and informative feedback about their work. The process begins with production meetings where directors pitch their visions of the production and start the brainstorming / creating process with the designers. Ultimately, Los Santos decided against using any media for their production. The Halfway House, however, did decide that it wanted some media driven moments in its production. 
My role in this process was to work with the director to find the moments where media could be utilized in the production, film and edit the content, and program the playback system for the short run of the production. After reading through the play a few times I met with Laurelann Porter, the director, to talk about how media could be used for this show. Important to the design process was understanding the limitations of the production. In the case of the Phase 2 productions, the projectors and screens are fixed. This limitation is in part a function of reducing the amount of tech time, as well as limiting the complications imposed by the set and lighting when doing complex projection. Looking at the script I thought the best use of media would be to enhance some of the transition moments in the production. Several of the transitions in the show involve moments where there is action taking place “elsewhere” (this is the language used by the playwright). These moments seemed perfect for media to help illustrate. In meeting with the director we identified the major moments that would benefit from some media presence, and started brainstorming from there.
A large part of the production process is planning and organization. Lighting, sound, and media designers are all tasked with identifying the moments when their mediums will be used, and with creating a cue sheet. Cue sheets are essentially a set of discretely identified moments that allow a stage manager to give directions about how the show runs. Media, lights, and sound all have their own board operators (actual humans), and the stage manager gives them directions about when to start or stop a given cue. Creating a cue sheet with this fact in mind helps to ensure that a designer has a working understanding of how to plan the moments that are being created. My process of reading the script looked like this:
  • 1st time through – for the story and arc of the action
  • 2nd time through – identify possible moments for media
  • 3rd time through – refine the moments and start to create a working cue sheet
  • 4th time through – further refinement, label cues, look for problematic moments
After talking with the director and identifying what moments were going to be mediated material, it was time to create a shooting list and plan for how to use a single afternoon with the actors to record all of the necessary footage for the show. We had one afternoon with the actors to film the transition moments. I worked with the director to determine a shooting order (to make sure that we efficiently used the actors’ time), and to identify locations and moments that needed to be captured. From here it was a matter of showing up, setting up, and recording. This transitioned smoothly into the editing process, which was a matter of cutting and touching up the footage for the desired look.

The School of Theatre and Film currently has two show control systems at our disposal: Dataton’s Watchout4 and TroikaTronix’s Isadora. Given the timing of the Phase 2 productions, I knew that the Isadora machine was going to be available to me for show control. Like MaxMSP, Isadora is a node-based visual programming environment. Importantly, Isadora is truly designed with performance in mind, and has a few features that make it easier to use in a theatrical production environment. 

Typically a theatrical production requires additional steps for media that are similar to the lighting process – lensing and plotting, for example. For the Phase 2 productions the shows use a standard lighting and media plot that doesn’t change. This means that there’s little additional work in terms of projector placement, focusing, masking, and the like that I have to do as a designer. For a larger production I would need to create a system diagram that outlines the placement of computers, projectors, cable, and other system requirements. Additionally, I would need to do the geometry to figure out where to place the projectors to ensure that I had a wide enough throw with my image to cover my desired surfaces, and I would need to work with the lighting designer to determine where on the lighting plot there was room for this equipment. This element of drafting, planning, and system design can easily be taken for granted by new designers, but it’s easily one of the most important steps in the process as it has an effect on how the show looks and runs. With all of the physical components in place, and the media assets created, the designer then looks at programming the playback system. In the case of Isadora this also means designing an interface for the operator.
One of the pressing realities of designing media for a theatrical installation is the need to create a playback system knowing that someone unfamiliar with the programming environment will be operating the computer driving the media. ASU’s operators are typically undergraduate students who may or may not be technical theatre majors. In some cases an operator may be very familiar with a given programming interface, while others may never have run media for a show. Theatre in educational institutions is a wonderful place for students to have an opportunity to learn lots of new tools, and get their feet wet with a number of different technologies. In this respect I think it’s incumbent upon the designer to create a patch that has an interface that’s as accessible as possible for a new operator. In my case, each moment in the show where there is media playing (a cue) has a corresponding button that triggers the start, playback, and stop for the given video. 

Media is notoriously finicky in live performance. It can be difficult to program, washed out by stage lights, perform poorly if it’s not encoded properly, or suffer any host of other possible problems. In the case of The Halfway House, the process went very smoothly. The largest problem had more to do with an equipment failure that pushed back equipment installation than with the editing or programming process. While this is a simple execution of using media in a production, it was valuable for a number of the individuals involved in the process – the director, lighting designer, sound designer, and stage manager to name only a few. There are large questions in the theatre world about the role of media in production – is it just fancy set dressing? how is it actively contributing to telling the story of the show? is it worth the cost? does it have a place in an idiom largely built around the concept of live bodies? And the list goes on. I don’t think that this implementation serves to address any of those questions, but for the production team it did start the process of demystifying the work of including media in a production, and that’s not nothing.

Tools Used
Programming and Playback – Isadora | TroikaTronix
Projector – InFocus HD projector
Video Editing – Adobe After Effects, Adobe Premiere
Image Editing – Adobe Photoshop
Filming / Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects

Emerge | Commons

This year I was fortunate to have the opportunity to contribute to the performance schedule of ASU’s conference about art, science, and the future. This is the second year that Emerge has happened at ASU, with the final night being a culminating festival of performance and art. In the Fall of 2012 I worked with a group of artists to put together a proposal for creating a performance in Neeb Plaza on ASU’s campus. This courtyard, nestled between Neeb Hall, the Art building, and Design, houses a new student-generated installation called X-Space each year. Looking to solicit the creation of new works, the Herberger Institute put out a call for artists interested in organizing a performance that occurs in X-Space. Called X-Act, applicants were asked to consider how they would use the space and engage the campus. Early in January my team found out that our proposal, Commons, was selected. One of the stipulations of the grant was that we would have a showing during the final showcase of Emerge. With this news in mind, our team started the process of creating the installation we had proposed.

One of the elements that our team was committed to realizing was finding a way to integrate projection into the performance on this very geometrically interesting space. I started by measuring the physical dimensions of the space in order to determine the distance required for the projectors that I had available for this project. Using a bit of math one can calculate the throw distance of a projector. Alternatively, it’s also easy to use Projector Central’s Projection Calculator in order to lock down approximate distances that you might need. With the numbers in front of me I was able to start making a plan about potential projector placement, as well as my options for the performance given the constraint of the size of image that I could create. With the limitations of distance roughly mapped out I headed to the space after dark to do some initial tests. The hard truth about the amount of ambient light in the plaza, and the limits of the InFocus projectors, meant that I needed to shy away from projecting large in favor of being brighter. The compromise between brightness and size was to map the front surfaces of X-Space. To accomplish this, I needed to connect two projectors with a Matrox TripleHead. This piece of equipment allows for multi-monitor work where the computer sees the two projectors as though they were a single canvas. 
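For anyone curious, the back-of-the-napkin version of that throw math looks something like this. The throw ratio here is a made-up example value, so check your projector’s spec sheet (or the Projection Calculator) for the real number.

    # Rough projector throw math: throw distance = throw ratio x image width.
    # The 1.5 ratio is an assumed example value, not the InFocus spec.
    def throw_distance(image_width_ft, throw_ratio=1.5):
        return image_width_ft * throw_ratio

    def image_width(distance_ft, throw_ratio=1.5):
        return distance_ft / throw_ratio

    print(throw_distance(20))   # ~30 ft back to fill a 20 ft wide surface
    print(image_width(15))      # ~10 ft wide image from 15 ft away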

It took about 4 hours to pull all of the necessary equipment, install, and focus the projectors. 
Once I had the projectors up and in place I was finally able to start mapping the surfaces. I had decided early on that I was going to use a piece of software called Modul8 to control my media playback. Modul8 is a VJ software package that’s robust and easy to use. Unlike other pieces of software, Modul8 is more like an instrument than an autonomous agent that can run independently. While there are a bunch of functions that you can automate inside of the software, it’s largely built around the idea of live-mixing the media that you’re using. In terms of automation, Modul8 allows the operator to use audio input to control a number of playback triggers. For this project the team used a track by DJ Earworm for audio, largely motivated by the desires of the group recruited for the dance performance. One of the additional benefits of Modul8 is its ability to send media out over Syphon. This means that this piece of playback software can be easily integrated with the mapping tool MadMapper. Here it was as important to know what media system (projectors, hardware, and software) I was using as it was to have the conceptual idea around the performance itself. 
Media Diagram


After getting the hardware installed I started mapping the surfaces of X-Space, creating individual quads and masks for each plane. All in all it took me about three hours to create the maps and masks for the architecture. At this point I was finally able to start experimenting with what kind of media I wanted to use, and how I wanted to arrange it in the space. In total I budgeted about 16 hours to get this project up and running. Implementing the plan I had created ended up taking about 16.5 hours. This meant that I had one night where I worked on this installation until 3:15 AM, and another night where I was working until just before midnight. We also had a rather unfortunate miscommunication with the Emerge planning staff about this installation, and the importance of having a security guard available to monitor the site overnight. Installation started on Thursday evening, and each of the team members took a shift overnight to monitor the outdoor equipment. Luckily we ended up with security for the second night, and didn’t have to pull any more all-nighters. 


Finally, while this project looked beautiful on the empty space, there was a miscommunication about audience placement and how stanchions were going to be used at the actual event. While the team had discussed the importance of roping off the performance space, that request was lost on the actual event planners. Consequently the audience largely obstructed the projections as they used the actual stage space as seating. Additionally, the space was filled with stage lighting and projectors rented for another performance, which only served to wash out Commons’ media and distract audience members. While this was certainly not a failure, it did leave a lot to be desired given the time, planning, and sleepless nights that implementation required. It’s just another lesson learned, even if learned the hard way.



Tools Used
Programming and Playback – Modul8
Mapping – MadMapper
Multi-Monitor Control – Matrox TripleHead2Go Digital
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop 
Documentation – iPhone 4S, Canon 7D
Editing Documentation – Adobe Premiere, Adobe After Effects





Commons
X-Act Proposal

Ethan Jackson | MSD New Production Innovation | College of Design
Chelsea Pace | MFA Performance | School of Theatre and Film
Kris Pourzal | MFA Dance | School of Dance
Matthew Ragan | MFA Interdisciplinary Digital Media and Performance | School of Theatre and Film & Arts, Media + Engineering

Activating a campus of this size is a challenge. With so many majors across so many schools, it would be impossible to activate the entirety of the campus with only arts students, or any small group of students for that matter. We propose Commons.

We are excited to propose a 100-person ensemble comprised mostly of non-performers. This ensemble would be assembled by the team over the next several months by contacting graduate students and undergraduates from various departments and schools across ASU, both inside and outside of the Herberger Institute. The intention is that the sample of students would be a proportional and accurate representation of the population of the Tempe campus student body.


This project is ambitious, and we are not ignorant of the challenges presented by gathering an ensemble of this size. The difficulty is doubled when you consider that we intend to bring mostly non-performers into the ensemble. The groundwork for the process of contacting graduate students to enlist undergraduates from across campus is already being laid through contacts in Preparing Future Faculty and the Graduate and Professional Student Association. 

The piece inherently activates the campus by reaching out across so many disciplines and getting people together, working together, and making art. The choreography will be created by the team and also crowd-sourced from the assembled ensemble, and the music (potentially) will be a remix of music sourced by the group.

Not to be confused with a flash-mob, Commons will be a collaboration with all 100 performers. Created as an ensemble, the performers will truly have ownership over the piece, and more than just regurgitating choreography, the piece will be brought to life by the population of Arizona State University.


The piece will use the X-Space, the cement plaza to the south, and the wall of the building west of the X-Space. When the audience enters the cement courtyard immediately south of the space, the floor will be lit with interactive projections triggered by the movement of the crowd. After the audience has gathered, the ensemble will emerge from the X-Space installation and begin a choreographed sequence. Theatrical lights, projections, and sound will be utilized to create an immersive environment for both the audience and the performers.

As the choreography builds and more performers are added, a live video feed will begin and will be projected several stories high onto the textured wall of the building to the west of the courtyard. The projection will be live video of the performance and of the audience.


The piece is approximately 30 minutes in duration and would allow various groups of students who are professionally distanced from performing to express themselves in a performative and expressive way. The piece ends with the performers exiting through the crowd and out onto campus, where they will continue to perform choreography for 10 minutes in a space that is significant to their experience at ASU.

Rather than making something with HIDA students that only HIDA students see and perform in, Commons will truly activate the campus to come together, make something, and take it out into their communities across campus. 

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing issues about how today’s research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussion, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today’s obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren’t just booze-filled fun-times, but something a little more insidiously inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Are you too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you’re not so easily taken advantage of.


With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it’s easy to claim that one is impartial and objective at all times, monitoring one’s physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program to establish a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject’s heart rate increases beyond the set threshold, the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity. 


So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you’re interested in making your own heart rate monitor look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I’m at my best. In order to work in a programming space that allowed me to code faster I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control is a messaging protocol that’s being used more and more often in theatrical contexts, and it seemed like this project might be a perfect time to learn a little bit more about OSC. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application. 
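My actual changes lived in the Processing sketch with Schlegel’s oscP5 library, but the idea translates to a handful of lines in most languages. As a rough illustration only, here’s what a similar serial-to-OSC bridge might look like in Python with the pyserial and python-osc packages; the serial port, baud rate, OSC address pattern, and message format are all assumptions, not what the real patch used.

    # Illustrative serial-to-OSC bridge (not the original Processing/oscP5 code).
    # Reads heart rate values from the Arduino and rebroadcasts them over OSC.
    import serial                                   # pyserial
    from pythonosc.udp_client import SimpleUDPClient

    arduino = serial.Serial('/dev/tty.usbmodem1411', 115200)  # assumed port/baud
    osc = SimpleUDPClient('127.0.0.1', 9000)        # assumed listener address/port

    while True:
        line = arduino.readline().strip()
        try:
            bpm = int(line)                         # assuming one BPM value per line
        except ValueError:
            continue                                # skip anything that isn't a number
        osc.send_message('/heartrate/bpm', bpm)     # assumed OSC address pattern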


Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that for the deadlines that I needed to meet I was just going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me in terms of my affection for a visual programming language. I also like the idea of using Max because I’d like the De-objectifier to be able to stand on its own without any other software and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice. 
My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there’s a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode with the heart rate data not changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.
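Stripped of the Isadora specifics, the mapping in that model scene amounts to something like the sketch below: the further the heart rate climbs past the calibrated threshold, the more transparent the clothing layer becomes. The threshold and fade range values here are invented for illustration.

    # Plain-Python sketch of the De-objectifier's core mapping (not the Isadora patch).
    def clothing_opacity(heart_rate, threshold, fade_range=30.0):
        """Opacity (0..1) of the clothing layer drawn over the model image."""
        if heart_rate <= threshold:
            return 1.0                                    # calm: fully opaque
        return max(0.0, 1.0 - (heart_rate - threshold) / fade_range)

    # Calibration sets the threshold per person, e.g. 80 bpm:
    print(clothing_opacity(72, 80))    # 1.0 - nothing revealed
    print(clothing_opacity(95, 80))    # 0.5 - halfway transparent
    print(clothing_opacity(110, 80))   # 0.0 - fully transparent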

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Here Boyd described the aesthetic style for these videos as a start-up with almost too much money. It’s paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It’s a look that’s infomercial snake-oil slick. 




Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process, and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like “oh, that’s what you like huh?” or “you better not be looking at him / her.” The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a “you try it, I dare you” kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard from participants to the question “what are you doing?” was “Finding out I’m a bad person.” It didn’t surprise users that their heart rate was changed by the images presented to them; it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return to resting rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like “I think I just like this guy’s picture better.”
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno

Programming for Arduino – Arduino

Program to Read Serial data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel OSC Library for Processing 
Programming Initial Tests – MaxMSP
Programming and Playback – Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects