# TouchDesigner | The Object CHOP

The Object CHOP has long been one of the most challenging CHOPs for me to really wrap my head around. Following along with some conversations on the Facebook Help Group, it’s clear that I’m not the only one who has bumped their head against how to take advantage of this operator.

With that in mind, here are a few tricks and techniques that you might find helpful when working with the object CHOP.

## Distance Between Many Objects Part 1

At first glance, it seems like the object CHOP can only perform calculations between single objects, but in fact you can use this operator to perform calculations between many objects provided that you format the input data correctly, and set up your object CHOP to account for multiple samples.

In a first example let’s say that we want to find the distance between several green spheres and a blue box:

First let’s collect our position information. I’ve used an object CHOP per sphere to find its position, but you might also use a script CHOP, or put positions in a table that you reference for the spheres, or drive them with custom parameters. How you position them doesn’t matter. What we need, however, is a single CHOP with three channels that hold the transformation information of those spheres. My trick in this network is to use object CHOPs to find their positions, then put them in sequence with a join CHOP:

Next we can use a single object CHOP that’s fed reference positions from this join CHOP, and a target Geometry COMP:

Other important pieces here are the start and end parameters on the channel page.

This is where we set how many samples the object CHOP will evaluate. This can be a bit confusing – especially here, as the join CHOP has started at a sample index of 1 rather than 0. The devil is in the details, so it’s worth keeping a close eye out for these kinds of oddities. Because of this we compensate in our start position by moving back one sample index.

Next, make sure to set your object CHOP to output measurements, and to measure distance. What you’ll then end up with is a single channel with a sample for each distance between your box and spheres. We can convert this to a table if we want to see the actual values:
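Under the hood, what the object CHOP produces here is one Euclidean distance per sample. As a plain-Python sanity check (outside of TouchDesigner, with made-up positions), the calculation looks something like this:

```python
import math

# hypothetical positions for four spheres (one CHOP sample each)
spheres = [(-1.5, 0.0, 0.0), (-0.5, 0.0, 0.0), (0.5, 0.0, 0.0), (1.5, 0.0, 0.0)]

# a single target box position
box = (0.0, 1.0, 0.0)

# one distance per sample, just like the object CHOP's output channel
distances = [math.dist(sphere, box) for sphere in spheres]
```

Each entry in `distances` corresponds to one sample in the object CHOP's distance channel.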

## Distance Between Many Objects Part 2

We may also want to measure distances between multiple blue boxes. Say, for example, that we have two different blue boxes and we want to know the distances from our spheres to both of those boxes.

Similar to our first exercise we’ll start by collecting all of the position information for our spheres. We also need position information for our boxes. In this case, however, we need to stretch a single sample to be 4 samples long – this is part of our data preparation step to ensure we correctly calculate distance.

Here a simple stretch CHOP has been used to make sure we have four samples of data for each box. Next we can join this data so all of our box position information is in a single set of CHOP channels:

Before moving on, we need to take a moment to adjust our sphere position data. In our first example we only collected the four positions; now we need to set up the correct extend behavior for this series so that our CHOPs know what values to use when CHOPs of mismatched lengths are combined. We can use an extend CHOP set to cycle to do this trick:
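The cycle extend behavior effectively repeats the shorter channel until it matches the longer one. Here's a rough plain-Python sketch of that idea – the sample values are invented, but the pairing logic is the same:

```python
# four sphere x-positions (one sample each); two boxes stretched to
# four samples each and joined into a single eight-sample sequence
sphere_x = [-1.5, -0.5, 0.5, 1.5]
box_x = [0.0] * 4 + [2.0] * 4

# "cycle" extend: repeat the shorter channel until the lengths match
cycled_sphere_x = [sphere_x[i % len(sphere_x)] for i in range(len(box_x))]

# now each index pairs one sphere with one box, ready for a
# per-sample distance calculation
pairs = list(zip(cycled_sphere_x, box_x))
```

With the sphere samples cycled, sample 0 through 3 measure against the first box, and sample 4 through 7 measure against the second.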

Finally, we can then use an object CHOP to calculate the distance between our box and our spheres:

## Distance Between Many Objects plus Bearing

If we also calculate the bearing between our boxes and spheres, we’ll end up with rotation information… what can we do with this? We could use this to calculate the correct rotation for a set of instances. For example:

Here each line is correctly rotated, scaled, and placed based on calculations from the object CHOP.

## Bearing

You can also use the object CHOP to just calculate bearing – or rotation from one object to another. Here you can see how this might be used to rotate instances to sit flat on a sphere’s surface, or rotate an arrow to point towards an object:
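To get a feel for what a bearing calculation involves, here's a simplified plain-Python illustration (my own, not the object CHOP's exact math) that finds the y-axis rotation needed to point one object at another with atan2:

```python
import math

def bearing_y(source, target):
    """Rotation in degrees around the y axis that aims +z at the target.

    This is a simplified, hypothetical look-at calculation; the object
    CHOP also produces rx and rz channels for a full 3D rotation.
    """
    dx = target[0] - source[0]
    dz = target[2] - source[2]
    return math.degrees(math.atan2(dx, dz))

# an arrow at the origin pointing at an object off to the +x / +z side
rotation = bearing_y((0.0, 0.0, 0.0), (1.0, 0.0, 1.0))  # 45.0
```

Feeding a rotation like this into an instance's rotate parameters is what lets an arrow track a moving object.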

## Bearing and Distance

Or you might use the combination of bearing and distance to make some strange abstract art:

## Collision

You can also use the object CHOP to simulate a kind of collision calculation where the distance you’re measuring can help you tell how close an object is to another and if they’re on top of one another:
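For roughly spherical objects, that kind of collision check boils down to comparing the measured distance against the sum of the two objects' radii. A minimal sketch, with invented radii:

```python
import math

def spheres_colliding(pos_a, pos_b, radius_a, radius_b):
    # two spheres overlap when the distance between their centers
    # is less than the sum of their radii
    return math.dist(pos_a, pos_b) < (radius_a + radius_b)

touching = spheres_colliding((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.6, 0.6)  # True
apart = spheres_colliding((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), 0.6, 0.6)     # False
```

In a network, the object CHOP's distance channel feeds the same comparison – a logic CHOP or expression checking distance against a threshold.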

#### GitHub

Clone the Repo to Follow Along

# TouchDesigner | Reflection and Refraction

## I can haz Reflections?! Refractions?

Zoe loves all things reflective and refractive, and it was almost a year ago that they started looking into how to achieve compelling illusions of reflection and refraction. Then I went to Macau, then Chicago, then Zoe dove headlong into their thesis project… fast forward to 2019, and it was time for me to finally follow through on a long overdue promise to create some examples of reflection and refraction in TouchDesigner. It didn’t hurt that Zoe gently reminded me that it was time for more refractive rendering in life. A good place to start with these kinds of questions is to look at existing references in the world.

## Reflection

Reflections are hard. In part that’s because they often mean that we need to see the whole world – even the parts that our virtual camera can’t. We might know this intuitively, but the reach of this is easy to forget. When we point the camera in our smartphone at a mirror we see ourselves, the world behind us, above us, and and and. If we point a virtual camera at a virtual mirror we need the same things. That can be a wobbly bit to wrap your head around, and to develop a better sense of this challenge I took a look at a reference book I picked up earlier this year – OpenGL 4 Shading Language Cookbook – Second Edition. This has a great chapter on reflection techniques, specifically generating them by using cube maps. Cube maps look like an unfolded box, and have a long history of use in computer graphics.

One of the primary challenges of using cube maps is that you also need to know the perspective of the object that’s reflective. In other words, cube maps can be very convincing as long as you move the camera, but not the reflective object. But what if we want the option to move both the camera and the object? In this quick tutorial, we look at how we can use a cube map to create convincing reflections, as well as what steps we need to consider if we want not only the camera to move, but the object itself.
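For context, a cube-map reflection is sampled with a reflection vector – GLSL's reflect() computes R = I - 2(N·I)N from the incident direction and the surface normal. A plain-Python version of that formula (my own illustration, assuming a normalized normal):

```python
def reflect(incident, normal):
    # GLSL-style reflect: R = I - 2 * dot(N, I) * N
    # assumes `normal` is already normalized
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# a ray heading straight down onto an upward-facing surface bounces back up
bounced = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # (0.0, 1.0, 0.0)
```

In the cube-map approach, that reflected direction is what indexes into the unfolded-box texture to fetch the color of the world "behind" the camera.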

## Refraction

The one and only Carlos Garcia (L05) has a great example posted on the TouchDesigner forum. It helps illustrate that part of what we’re after with this kind of work is the sleight of hand that hints at refraction, but isn’t necessarily true to the physics of light. Almost all realtime rendering tricks sit somewhere between the Truth (with a capital T) of the world, and the truth (sneaky lower case t) of perception. We’ve all fallen for the perceptual tricks of optical illusions, and many times the real work of the digital alchemist is to fool observers into believing a half truth. Carlos’ example proves just that point, and helps us see that with a little tricksy use of the displacement TOP we can achieve a healthy bit of trickery.

That’s an excellent start to our adventure, but we can dig in a little more if we keep searching. Another post on the forum links over to an article on Medium that showcases an approach for webGL that leverages the use of a UV map to “pre-compute” the direction of displacement of the light that passes through a transparent object. This is an interesting approach, and in fact there’s a middle ground between the webGL example and Carlos’ TOX that gives us some interesting results.

In the following tutorial we can see how we remix these two ideas. The big picture perspective here is that we can leverage TouchDesigner’s real-time rendering engine to provide the same “pre-computed” asset that’s utilized in the webGL approach, and then use Carlos’ displacement TOP technique. We can also short-cut Carlos’ use of rendering a second version of the object as a mask, and instead use a threshold TOP looking at our alpha channel to achieve the same effect. This isn’t a huge change, but it’s a bit faster in some implementations and saves us the wobbles that sometimes come with multiple render passes. Finally, a little post processing can help us achieve some more convincing effects that help sell our illusion as true to the eye.

# TouchDesigner | Delay Scripts

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon or reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need some-something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:

```python
import time

# it may be tempting to do this, but
# this is not the correct way to delay
# an operation in Python inside of
# TouchDesigner
time.sleep(1)
print("oh, hello there")
```

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method. There’s lots of interesting nuance to this method, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:

```python
# delay scripts example
# first we write out our script as a string
delay_print = "print('hello world')"

# next we run this string, and specify
# how many frames we want to wait before
# we run this operation
run(delay_print, delayFrames=60)
```


If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?

```python
# delay scripts example
# first we write out our script as a string
delay_print = '''print(
    '{noun} sure does {verb} {adj}'.format(
        noun=args[0],
        verb=args[1],
        adj=args[2]))'''

noun = "Matthew"
verb = "love"
adj = "TouchDesigner"

# next we run this string.
# this time we'll pass in arguments for our script.
# args are accessed as a list called args. Arguments
# go in order in the run command, and you can see in
# how the script is formatted how we access them.
run(delay_print, noun, verb, adj, delayFrames=60)
```

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:

```python
# delay scripts example
# first we write out our script as a string
delay_script = "constant_op.par.value0 = args[0]"
constant_op = op('constant1')

# next we run our script
for each_time in range(10):
    run(delay_script, (each_time + 1), delayFrames=60 * (each_time + 1))
```


What you should see is that your constant CHOP increments every second:

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

Happy Programming.

# TouchDesigner | Deferred Lighting – Cone Lights

With a start on point lights, one of the next questions you might ask is “what about cone lights?” Well, it just so happens that there’s a way to approach a deferred pipeline for cone lights just like with point lights. This example still has a bit to go with a few lingering mis-behaviors, but it is a start for those interested in looking at complex lighting solutions for their realtime scenes.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

This approach is very similar to point lights, with the additional challenge of needing to think about lights as directional. We’ll see that the first and last stages of this process are consistent with our Point Light example, but in the middle we need to make some changes. We can get started, again, with color buffers.

## Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

• position – renderselect_postition
• normals – renderselect_normal
• color – renderselect_color
• uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

```glsl
// TouchDesigner vertex shader

// struct and data to fragment shader
out VS_OUT{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 uv;
} vs_out;

void main(){
    // packing data for passthrough to fragment shader
    vs_out.position = TDDeform(P);
    vs_out.normal = TDDeformNorm(N);
    vs_out.color = Cd;
    vs_out.uv = uv[0].st;

    gl_Position = TDWorldToProj(vs_out.position);
}
```

```glsl
// TouchDesigner frag shader

// struct and data from our vertex shader
in VS_OUT{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 uv;
} fs_in;

// color buffer assignments
layout (location = 0) out vec4 o_position;
layout (location = 1) out vec4 o_normal;
layout (location = 2) out vec4 o_color;
layout (location = 3) out vec4 o_uv;

void main(){
    o_position = fs_in.position;
    o_normal = vec4( fs_in.normal, 1.0 );
    o_color = fs_in.color;
    o_uv = vec4( fs_in.uv, 0.0, 1.0 );
}
```

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

## Light Attributes

Here we’ll begin to see a divergence from our previous approach.

We are still going to compute and pack data for the position, color, and falloff for our point lights like in our previous example. The difference now is that we also need to compute a look-at position for each of our lights. In addition to our falloff data we’ll need to also consider the cone angle and delta of our lights. For the time being cone angle is working, but cone delta is broken – pardon my learning in public here.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our null SOP null_lightpos. We convert this to CHOP data and use the attributes from this null (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene. In this case we’re using a null since we want to position the look-at points at some other position than our lights themselves. Notice that our circle has one transform SOP to describe light position, and another transform SOP to describe look-at position. In the next stage we’ll use our null_light_pos CHOP and our null_light_lookat CHOP for the lighting calculations – we’ll also end up using the results of our object CHOP null_cone_rot to be able to describe the rotation of our lights when rendering them as instances.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

• position – null_light_pos
• color – null_light_color
• falloff – null_light_falloff
• light cone – null_light_cone

This means that sample 0 from each of these four CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

The additional light cone attribute here is used to describe the radius of the cone and the degree of softness at the edges (again pardon the fact that this isn’t yet working).
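The texelFetchBuffer() lookups in the next stage are just indexed reads into these flat arrays. A plain-Python emulation of the packing scheme (values invented) might help make the sample-to-light mapping concrete:

```python
# each CHOP holds one sample per light; every three channels map to a vec3
light_pos = [(0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]
light_color = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
light_falloff = [(1.0, 0.1, 0.01), (1.0, 0.1, 0.01)]
light_cone = [(30.0, 5.0, 0.5), (45.0, 5.0, 0.5)]

def fetch_light(index):
    # analogous to the texelFetchBuffer( lightPos, light ) calls in the
    # shader: sample `index` of every buffer describes the same light
    return {
        "position": light_pos[index],
        "color": light_color[index],
        "falloff": light_falloff[index],
        "cone": light_cone[index],
    }

second_light = fetch_light(1)  # all attributes for light number 1
```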

## Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

```glsl
// TouchDesigner glsl TOP

uniform int numLights;
uniform vec3 viewDirection; // camera location
uniform vec3 specularColor;
uniform float shininess;

uniform sampler2D samplerPos;
uniform sampler2D samplerNorm;
uniform sampler2D samplerColor;
uniform sampler2D samplerUv;

uniform samplerBuffer lightPos;     // position as xyz
uniform samplerBuffer lightColor;   // color as rgb
uniform samplerBuffer lightFalloff; // falloff constant, linear, quadratic
uniform samplerBuffer lightLookat;  // lookat position xyz
uniform samplerBuffer lightCone;    // angle, delta, and falloff

out vec4 fragColor;

#define PI 3.14159265359

void main()
{
    vec2 screenSpaceUV = vUV.st;
    vec2 resolution = uTD2DInfos[0].res.zw;

    // parse data from g-buffer
    vec3 position = texture( sTD2DInputs[0], screenSpaceUV ).xyz;
    vec3 normal = texture( sTD2DInputs[1], screenSpaceUV ).xyz;
    vec4 color = texture( sTD2DInputs[2], screenSpaceUV );
    vec2 uv = texture( sTD2DInputs[3], screenSpaceUV ).rg;

    vec3 cameraVec = normalize(viewDirection - position);

    // set up placeholder for final color
    vec3 finalColor = vec3(0.0);

    // loop through all lights
    for ( int light = 0; light < numLights; ++light ){

        // parse lighting data based on the current light index
        vec3 currentLightPos = texelFetchBuffer( lightPos, light ).xyz;
        vec3 currentLightColor = texelFetchBuffer( lightColor, light ).xyz;
        vec3 currentLightFalloff = texelFetchBuffer( lightFalloff, light ).xyz;
        vec3 currentLightLookat = texelFetchBuffer( lightLookat, light ).xyz;
        vec3 currentLightCone = texelFetchBuffer( lightCone, light ).xyz;

        // cone attributes
        float uConeAngle = currentLightCone.x;
        float uConeDelta = currentLightCone.y;
        float uConeRolloff = currentLightCone.z;

        // calculate the distance between the current fragment and the light source
        float lightDist = length( currentLightPos - position );
        vec3 lightVec = normalize( currentLightPos - position );

        // spot
        float fullcos = cos(radians((uConeAngle / 2.0) + uConeDelta));
        fullcos = (fullcos * 0.5) + 0.5;
        float scale = 0.5 / (1.0 - fullcos);
        float bias = (0.5 - fullcos) / (1.0 - fullcos);
        vec2 coneLookupScaleBias = vec2(scale, bias);

        vec3 spot = normalize( currentLightLookat - currentLightPos );
        float spotEffect = dot( spot, -lightVec );
        float coneAngle = radians( uConeAngle / 2.0 );
        float ang = acos(spotEffect);

        float dimmer;
        if( ang > coneAngle )
            dimmer = 0.0;
        else
            dimmer = texture( sTDSineLookup, 1.0 - ( ang / coneAngle ) ).r;

        vec3 toLight = normalize( currentLightPos - position );
        vec3 diffuse = max( dot( normal, toLight ), 0.0 ) * color.rgb * currentLightColor;

        float diffuseDot = clamp(dot(lightVec, normal), 0.0, 1.0);
        vec3 colorSum = diffuseDot * currentLightColor;

        vec3 halfAng = normalize((cameraVec + lightVec).xyz);
        float specDot = pow(clamp(dot(halfAng, normal), 0.0, 1.0), shininess);
        colorSum += specDot * currentLightColor * 0.3 * diffuse;
        colorSum *= dimmer;

        // accumulate lighting
        finalColor += colorSum;
    }

    // final color out
    fragColor = TDOutputSwizzle( vec4( finalColor, color.a ) );
}
```
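The heart of the spot calculation above is the cone test: measure the angle between the cone's axis and the direction from the light to the fragment, kill the light outside the cone, and ramp it smoothly inside. Here's a simplified plain-Python version of that logic – it uses math.sin in place of the sTDSineLookup texture, and the function and names are my own:

```python
import math

def cone_dimmer(light_pos, lookat_pos, frag_pos, cone_angle_deg):
    """Simplified spot-light dimmer: 0 outside the cone, ramping up inside."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    # cone axis: from the light toward its look-at point
    axis = norm(tuple(l - p for l, p in zip(lookat_pos, light_pos)))
    # direction from the light to the fragment (the shader's -lightVec)
    to_frag = norm(tuple(f - p for f, p in zip(frag_pos, light_pos)))

    # angle between the cone axis and the fragment direction
    cos_ang = max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, to_frag))))
    ang = math.acos(cos_ang)

    half_cone = math.radians(cone_angle_deg / 2.0)
    if ang > half_cone:
        return 0.0
    # smooth ramp from the cone's edge (0) to its center (1)
    return math.sin((1.0 - ang / half_cone) * math.pi / 2.0)

# a fragment directly on the cone axis is fully lit
center = cone_dimmer((0, 2, 0), (0, 0, 0), (0.0, 0.0, 0.0), 40.0)
# a fragment far off-axis gets no light at all
outside = cone_dimmer((0, 2, 0), (0, 0, 0), (5.0, 2.0, 0.0), 40.0)
```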

If you look at the final pieces of our for loop you’ll find that much of this process is borrowed from the example Malcolm wrote (Thanks Malcolm!). This starting point serves as a baseline to help us get started from the position of how other lights are handled in Touch.

## Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see the lights represented in some way – if only so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Our divergence here is that rather than using spheres, we’re instead using cones to represent our lights. In a future iteration the width of the cone base should scale along with our cone angle, but for now let’s celebrate the fact that we have a way to see where our lights are coming from. You’ll notice that the rotate attributes generated from the object CHOP are used to describe the rotation of the instances. Ultimately, we probably don’t need these representations, but they sure are handy when we’re trying to get a sense of what’s happening inside of our shader.

## Post Processing for Final Output

Finally, we need to assemble our scene and do any final post-process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post-process treatments like adding a glow or bloom to your render.

# TouchDesigner | Deferred Lighting – Point Lights

A bit ago I wanted to get a handle on how one might approach real time rendering with LOTS of lights. The typical openGL pipeline has some limitations here, but there’s a lot of interesting potential with Deferred Lighting (also referred to as deferred shading). Making that leap, however, is no easy task, and I asked Mike Walczyk for some help getting started. There’s a great starting point for this idea on the Derivative forum, but I wanted a 099 approach and wanted to pull it apart to better understand what was happening. With that in mind, this is a first pass at looking through using point lights in a deferred pipeline, and what those various stages look like.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

## Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

• position – renderselect_postition
• normals – renderselect_normal
• color – renderselect_color
• uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

```glsl
// TouchDesigner vertex shader

// struct and data to fragment shader
out VS_OUT{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 uv;
} vs_out;

void main(){
    // packing data for passthrough to fragment shader
    vs_out.position = TDDeform(P);
    vs_out.normal = TDDeformNorm(N);
    vs_out.color = Cd;
    vs_out.uv = uv[0].st;

    gl_Position = TDWorldToProj(vs_out.position);
}
```

```glsl
// TouchDesigner frag shader

// struct and data from our vertex shader
in VS_OUT{
    vec4 position;
    vec3 normal;
    vec4 color;
    vec2 uv;
} fs_in;

// color buffer assignments
layout (location = 0) out vec4 o_position;
layout (location = 1) out vec4 o_normal;
layout (location = 2) out vec4 o_color;
layout (location = 3) out vec4 o_uv;

void main(){
    o_position = fs_in.position;
    o_normal = vec4( fs_in.normal, 1.0 );
    o_color = fs_in.color;
    o_uv = vec4( fs_in.uv, 0.0, 1.0 );
}
```

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

## Light Attributes

Next we’re going to compute and pack data for the position, color, and falloff for our point lights.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our Circle SOP circle1. We convert this to CHOP data and use the attributes from this circle (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

• position – null_light_pos
• color – null_light_color
• falloff – null_light_falloff

This means that sample 0 from each of these three CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

## Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.
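Conceptually, the per-light loop can be sketched in plain Python: for each light, compute a diffuse term, scale it by a distance-based attenuation, and add it into the fragment's running total. This is a simplified sketch of the idea (specular omitted, names my own), not the shader itself:

```python
import math

def shade_fragment(frag_pos, normal, albedo, lights):
    """Accumulate diffuse lighting from many point lights for one fragment.

    `lights` is a list of dicts with position, color, and (constant,
    linear, quadratic) falloff terms -- mirroring the packed CHOP samples.
    """
    final = [0.0, 0.0, 0.0]
    for light in lights:
        offset = [l - f for l, f in zip(light["position"], frag_pos)]
        dist = math.sqrt(sum(c * c for c in offset))
        to_light = [c / dist for c in offset]

        # lambertian diffuse term
        ndotl = max(sum(n * t for n, t in zip(normal, to_light)), 0.0)

        # 1 / (1 + linear * d + quadratic * d^2) attenuation; the
        # constant term is hard-coded to 1.0, matching the shader
        _, k_lin, k_quad = light["falloff"]
        attenuation = 1.0 / (1.0 + k_lin * dist + k_quad * dist * dist)

        for i in range(3):
            final[i] += ndotl * attenuation * albedo[i] * light["color"][i]
    return final

# one white light directly above a white, upward-facing fragment
lit = shade_fragment(
    frag_pos=(0.0, 0.0, 0.0),
    normal=(0.0, 1.0, 0.0),
    albedo=(1.0, 1.0, 1.0),
    lights=[{"position": (0.0, 1.0, 0.0),
             "color": (1.0, 1.0, 1.0),
             "falloff": (1.0, 0.0, 0.0)}],
)
```

Because the loop runs once per output pixel rather than once per piece of geometry, adding more lights only grows this inner loop – which is exactly the appeal of the deferred approach.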

Here’s a look at the GLSL that’s used to combine our various buffers:

```glsl
uniform int numLights;
uniform vec3 viewDirection;
uniform vec3 specularColor;
uniform float shininess;

uniform sampler2D samplerPos;
uniform sampler2D samplerNorm;
uniform sampler2D samplerColor;
uniform sampler2D samplerUv;

uniform samplerBuffer lightPos;
uniform samplerBuffer lightColor;
uniform samplerBuffer lightFalloff;

out vec4 fragColor;

void main()
{
    vec2 screenSpaceUV = vUV.st;
    vec2 resolution = uTD2DInfos[0].res.zw;

    // parse data from g-buffer
    vec3 position = texture( sTD2DInputs[0], screenSpaceUV ).rgb;
    vec3 normal = texture( sTD2DInputs[1], screenSpaceUV ).rgb;
    vec4 color = texture( sTD2DInputs[2], screenSpaceUV );
    vec2 uv = texture( sTD2DInputs[3], screenSpaceUV ).rg;

    // set up placeholder for final color
    vec3 finalColor = vec3(0.0);

    // loop through all lights
    for ( int light = 0; light < numLights; ++light ){

        // parse lighting data based on the current light index
        vec3 currentLightPos = texelFetchBuffer( lightPos, light ).xyz;
        vec3 currentLightColor = texelFetchBuffer( lightColor, light ).xyz;
        vec3 currentLightFalloff = texelFetchBuffer( lightFalloff, light ).xyz;

        // calculate the distance between the current fragment and the light source
        float lightDist = length( currentLightPos - position );

        // diffuse contribution
        vec3 toLight = normalize( currentLightPos - position );
        vec3 diffuse = max( dot( normal, toLight ), 0.0 ) * color.rgb * currentLightColor;

        // specular contribution
        vec3 toViewer = normalize( position - viewDirection );
        vec3 h = normalize( toLight - toViewer );
        float spec = pow( max( dot( normal, h ), 0.0 ), shininess );
        vec3 specular = currentLightColor * spec * specularColor;

        // attenuation
        float attenuation = 1.0 / ( 1.0 + currentLightFalloff.y * lightDist
            + currentLightFalloff.z * lightDist * lightDist );

        diffuse *= attenuation;
        specular *= attenuation;

        // accumulate lighting
        finalColor += diffuse + specular;
    }

    // final color out
    fragColor = TDOutputSwizzle( vec4( finalColor, color.a ) );
}
```

## Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see the lights represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

## Post Processing for Final Output

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

# presets and cue building – a beyond basics checklist | TouchDesigner 099

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

• Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don’t. Your mileage may vary here, so like any general advice please think through the implications of your decisions before committing to an idea to implement.
• The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit.
• I’ll be wrong about some things. The scope of anyone’s knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don’t work, or that something I suggest won’t work actually does. As with all advice, it’s okay to be suspicious.

## A General Checklist

### Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what’s the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you’re up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn’t matter where you do it, just give yourself an outline to follow – otherwise you’ll get lost along the way or forget the features that were deal breakers.

### Data Structures

These aren’t always sexy, but they’re more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

• What existing tools do you like – what’s their data structure / solution?
• How is your data organized – arrays, dictionaries, etc.
• Do you have a readme to refer back to when you extend your project in the future?
• Do you have a way to add entries?
• Do you have a way to recall entries?
• Do you have a way to update entries?
• Do you have a way to copy entries?
• Do you have a validation process in-line to ensure your entries are valid?
• Do you have a means of externalizing your cues and other config data?
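As a sketch of what answering those questions might look like, here’s a minimal cue store in plain Python. The names and structure are hypothetical – this is a starting point, not a TouchDesigner API:

```python
import copy
import json

# our cues live in a plain dictionary keyed by cue name
cues = {}

def add_cue(name, data):
    # in-line validation: every cue must at least name a target tox
    if 'Tox' not in data:
        raise ValueError('cue is missing a Tox entry')
    cues[name] = data

def recall_cue(name):
    return cues[name]

def update_cue(name, **changes):
    cues[name].update(changes)

def copy_cue(src, dst):
    # deepcopy so edits to the new cue don't touch the original
    cues[dst] = copy.deepcopy(cues[src])

def externalize(path):
    # write cues to disk so config lives outside the project file
    with open(path, 'w') as f:
        json.dump(cues, f, indent=4)

add_cue('cue1', { 'Tox': 'Swank', 'Level': 0.5 })
copy_cue('cue1', 'cue2')
update_cue('cue2', Level=1.0)
```

Having add / recall / update / copy / validate as explicit functions means there’s exactly one code path for each operation when you later wire them up to a UI.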

### Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to time provided by your controller.

The importance of how you work with time in your TOXes, and in your project in general, can’t be overstated. Whatever you decide in regards to time, just make sure it’s a purposeful decision, not one that catches you off guard.

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you’re really after here, so it’s a good start to really dig into what you’re expecting your project to accomplish. If you’re just after an optimized render system for multiple scenes, you might check out this example.

### Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are up and downsides to using this method so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you’re being careful with how you’re driving their parameters. Python offers an amazing extensible scripting language for Touch, but it’s worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

### Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

### Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of Movie File In TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

### Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of dependencies – a given visuals TOX wouldn’t operate correctly in a network that didn’t have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that’s all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

### Use Extensions

If you haven’t started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you’re going to find. In the past I’ve used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it’s on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it’s turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
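A skeleton of that pattern might look like the sketch below. The class name and the private helpers are hypothetical; the setup and cleanup bodies would be your module’s own scripts:

```python
class ModuleExt:
    """Hypothetical extension sketch for an external TOX with a Play(state) method."""

    def __init__(self, ownerComp):
        # ownerComp is the COMP this extension is attached to
        self.ownerComp = ownerComp
        self.active = False

    def Play(self, state):
        # state is truthy to activate the module, falsy to deactivate it
        if state:
            self._setup()
            self.active = True
        else:
            self._cleanup()
            self.active = False

    def _setup(self):
        # un-bypass ops, select assets, reset timers, etc.
        pass

    def _cleanup(self):
        # bypass ops, clear selects, reset for the next Play() call
        pass
```

The payoff is that the controller never needs to know what a module does internally – it only ever calls `Play(True)` or `Play(False)`.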

### Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That’s all well and good Matt, but I don’t get it – why should my TOXes all have a fixed number of custom pars? Let’s consider building a data structure for cues. Let’s say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:

```json
{
    "cues": {
        "cue1": {
            "Tox": "Swank",
            "Level_1": 0,
            "Noise": 1,
            "Level3": 4,
            "Blacklvl": 0.75
        },
        "cue2": {
            "Tox": "Curl",
            "Bouncy": 0.775,
            "Curve": 100.0,
            "Augment": 13,
            "Blklvl": 0.75
        },
        "cue3": {
            "Tox": "Boop",
            "Boopness": 0.775
        }
    }
}
```

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the circle k – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a name space to the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:

```json
{
    "cues": {
        "cue1": {
            "Tox": "Swank",
            "Par1": 0,
            "Par2": 1,
            "Par3": 4,
            "Par4": 0.75
        },
        "cue2": {
            "Tox": "Curl",
            "Par1": 0.775,
            "Par2": 100.0,
            "Par3": 13,
            "Par4": 0.75
        },
        "cue3": {
            "Tox": "Boop",
            "Par1": 0.875,
            "Par2": null,
            "Par3": null,
            "Par4": null
        }
    }
}
```

Okay, so that’s prettier… So what? If we look back at our lesson on dictionary for loops we’ll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially we’re able to store the par name as the key and the target value as the value in our dictionary, and we’re just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
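The payoff can be sketched without TouchDesigner at all. Because every key in the cue is a par name, applying a cue collapses to a single loop. The `Target` class below is a hypothetical stand-in for a TOX’s custom parameters – in Touch you’d be setting `op(...).par` members instead:

```python
class Target:
    # hypothetical stand-in for a TOX's four standardized custom pars
    Par1 = Par2 = Par3 = Par4 = 0.0

def apply_cue(target, cue):
    for par_name, value in cue.items():
        # skip the tox path entry and any unused par slots
        if par_name == 'Tox' or value is None:
            continue
        setattr(target, par_name, value)

cue = { 'Tox': 'Curl', 'Par1': 0.775, 'Par2': 100.0, 'Par3': 13, 'Par4': 0.75 }
tox = Target()
apply_cue(tox, cue)
```

Because the par names are uniform across every module, this one function works for every cue – no per-tox special cases, and no errors from mismatched par names.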

### Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated as well, or point out where you need to invest more time and energy in developing your own programming skills.

### Separation is a good thing… mostly

Richard’s killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I’d always suggest keeping the UI on another machine or in a separate instance. It’s handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc in. I’ve been very careful to take care of the little details in the ui too such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that’s a slider, sometimes even they don’t need them.

I’m with Richard 100% here on all fronts. That said, be mindful of why and when you’re splitting up your processes. It might be tempting to do all of your video handling in one process that gets passed to a process only for rendering 3D, before going to a process that’s for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There’s a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

• “What problem am I solving by adding this level of additional complexity?”
• “Is there another way to solve this problem without an additional process?”
• “What are the possible problems / issues this might cause?”
• “Can I test this in a small way before re-factoring the whole project?”

### Logging and Errors

It’s not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it’ll be important to see where things went wrong. Sometimes the default logging methods aren’t enough, or they happen too fast. A good logging methodology and format can help with that. You’re welcome to make your own, you’re also welcome to use and modify the one I made.
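As a starting point, a minimal logger might be nothing more than timestamped lines with a consistent format – a plain-Python sketch, not the tox mentioned above (in a real project you’d append to a file or a Text DAT rather than a list):

```python
from datetime import datetime

log_lines = []

def log(severity, message):
    # a consistent 'timestamp | SEVERITY | message' format makes searching easy
    stamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    line = '{} | {} | {}'.format(stamp, severity.upper(), message)
    log_lines.append(line)
    return line

log('info', 'project started')
log('error', 'lost connection to render node 2')
```

Even this tiny structure pays off later: a uniform format means you can grep for `ERROR` lines, or sort by timestamp, without writing a parser.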

### Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it’s at. Simple, complete proof-of-concept tests that aren’t baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren’t always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

### Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

# Python in TouchDesigner | The Channel Class | TouchDesigner

The Channel Class Wiki Documentation

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know as it might shape the way we think of channels and samples.
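A plain-Python analogy might make that concrete. This is hypothetical data for illustration, not the actual TouchDesigner API:

```python
# a CHOP pictured as a dict of channels, each channel a list of samples
chop = {
    'chan1': [ 0.0, 0.25, 0.5, 0.75, 1.0 ],
    'chan2': [ 1.0, 1.0, 1.0, 1.0, 1.0 ],
}

# every channel in a CHOP has the same number of samples,
# so collecting the lengths should yield a single value
sample_counts = { len(samples) for samples in chop.values() }
```

Indexing into a channel by sample – `chop['chan1'][2]` – is exactly the mental model we’ll use for CHOP references below.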

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with a channel called ‘chan1’. We know we can write a python reference for this chop like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch where in that list of values we want to grab an index – instead Touch is moving through that index with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

Okay, what’s really hiding from us is that CHOP references have a default value that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

In actuality, the real reference syntax is:
op( stringNameToOperator )[ channelNameOrIndex ][ arrayPosition ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

• valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).
• index (Read Only) The numeric index of the channel.
• name (Read Only) The name of the channel.
• owner (Read Only) The OP to which this object belongs.
• vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

python
op( 'null1' )[0].valid
op( 'null1' )[0].index
op( 'null1' )[0].name
op( 'null1' )[0].owner
op( 'null1' )[0].exports
op( 'null1' )[0].vals

I’m going to plug that table DAT into an eval DAT to evaluate the python expressions so I can see what’s going on here. What I get back is:

True
0
chan1
/project1/base_the_channel_class/null1
[]
0.0 0.25 0.5 0.75 1.0

If we were to look at those side by side it would be:

| Python In | Python Out |
| --- | --- |
| `op( 'null1' )[0].valid` | True |
| `op( 'null1' )[0].index` | 0 |
| `op( 'null1' )[0].name` | chan1 |
| `op( 'null1' )[0].owner` | /project1/base_the_channel_class/null1 |
| `op( 'null1' )[0].exports` | [] |
| `op( 'null1' )[0].vals` | 0.0 0.25 0.5 0.75 1.0 |

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – Every time the value is true (1), I want the circle to be green, when it’s false (0) I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!

python
def onValueChange(channel, sampleIndex, val, prev):

    # set up some variables
    offColor     = [ 1.0, 0.0, 0.0 ]
    onColor      = [ 0.0, 1.0, 0.0 ]
    targetCircle = 'circle{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr = onColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg = onColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb = onColor[2]

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr = offColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg = offColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb = offColor[2]
    return

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:

python
def onValueChange(channel, sampleIndex, val, prev):

    # set up some variables
    offColor     = 'circle_off'
    onColor      = 'circle_on'
    targetCircle = 'select{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top = onColor

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top = offColor
    return

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice, if you’re into what’s going on here take it all apart, otherwise you can let it ride.

Okay… so, what I want is a little container that displays a channel’s name, an indicator if the value is > 0 or < 0, another green / red indicator that corresponds to the >< values, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

You can pull mine apart to see how I made it work here: base_the_channel_class.

Happy Programming!

BUT WAIT! THERE’S MORE!

Ivan DesSol asks some interesting questions:

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun, games, and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head that’s okay – let’s take a look at it this way.

Let’s say we have a list like this:

fruit = [ 'apple', 'orange', 'kiwi', 'grape' ]

We know that we can retrieve values from our list with an index:

print( fruit[ 0 ] ) | returns "apple"
print( fruit[ 1 ] ) | returns "orange"
print( fruit[ 2 ] ) | returns "kiwi"
print( fruit[ 3 ] ) | returns "grape"

If, however, we try:

print( fruit[ 4 ] )

Now we should see an out of range error… because there is nothing at index 4 in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an index, and how Touch is interpreting our previous reference, are different:

| reference | value |
| --- | --- |
| `op( 'null7' )[ 'chan1' ]` | 0.6331658363342285 |
| `op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]` | 0.6381909251213074 |

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages 0 still represents an index position. So what we’re actually seeing here is an off by one error.

Now that we know what the problem is, it’s easy to fix:

op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]

Now we’re right as rain:

| reference | value |
| --- | --- |
| `op( 'null7' )[ 'chan1' ]` | 0.6331658363342285 |
| `op( 'null7' )[ 'chan1' ][ me.time.frame ]` | 0.6381909251213074 |
| `op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]` | 0.6331658363342285 |

Hope that helps!

# Building a Calibration UI | Reusing Palette Components – The Stoner | TouchDesigner

Here’s our second stop in a series about planning out part of a long term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

As many of you have already found there’s a wealth of interesting examples and useful tools tucked away in the palette in TouchDesigner. If you’re unfamiliar with this feature, it’s located on the left hand side of the interface when you open Touch, and you can quickly summon it into existence with the small drawer and arrow icon:

Tucked away at the bottom of the tools list is the stoner. If you’ve never used the stoner it’s a killer tool for all your grid warping needs. It allows for keystoning and grid warping, with a healthy set of elements that make for fast and easy alterations to a given grid. You can bump points with the keyboard, you can use the mouse to scroll around, and there are options for linear curves, bezier curves, perspective mapping, and bilinear mapping. It is an all around wonderful tool. The major catch is that using the tox as is runs you about 0.2 milliseconds when we’re not looking at the UI, and about 0.5 milliseconds when we are looking at the UI. That’s not bad – in fact that’s downright snappy in the scheme of things – but it’s going to have limitations when it comes to scale, and using multiple stoners at the same time.

That’s slick. But what if there was a way to get almost the same results at a cost of 0 milliseconds for photos, and only 0.05 milliseconds when working with video? As it turns out, there’s a gem of a feature in the stoner that allows us to get just this kind of performance, and we’re going to take a look at how that works as well as how to take advantage of that feature.

Let’s start by taking a closer look at the stoner itself. We can see now that there’s a second outlet on our op. Let’s plug in a null to both outlets and see what we’re getting.

Well hello there, what is this all about?!

Our second output is a 32-bit texture made up of only red and green channels. Looking closer we can see that it’s a gradient: green in the top left corner, and red in the bottom right corner. If we pause here for a moment we can look at how we might generate a ramp like this with a GLSL TOP.

If you’re following along at home, let’s start by adding a GLSL TOP to our network. Next we’ll edit the pixel shader.

out vec4 fragColor;

void main()
{
    fragColor = vec4( vUV.st, 0.0, 1.0 );
}

So what do we have here exactly? For starters we have an explicit declaration of our out vec4 (in very simple terms, the texture that we want to pass out of the shader), and a main function where we assign values to that output texture.

What’s a vec4?

In GLSL vectors are a data type. We use vectors for all sorts of operations, and as a data type they’re very useful to us because we often want variables with several components. Keeping in mind that GLSL runs in pixeltown (one of the largest boroughs on your GPU), it’s helpful to be able to think of variables that carry multiple values – like, say, information about the red, green, blue, and alpha values for a given pixel. In fact, that’s just what our vec4 is doing for us here: it represents the RGBA values we want to associate with a given pixel.

vUV is an input variable that we can use to locate the texture coordinate of a pixel. This value changes for every pixel, which is part of the reason it’s so useful to us. So what is this whole vec4( vUV.st, 0.0, 1.0 ) business? In GL we can fill in the values of a vec4 with a vec2 – vUV.st is our uv coordinate as a vec2. In essence what we’ve done is say that we want to use the uv coordinates to stand in for our red and green values, blue will always be 0, and our alpha will always be 1. It’s okay if that’s a bit wonky to wrap your head around at the moment. If you’re still scratching your head you can read more at the links below.
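For a rough sense of what that shader produces, here’s a small NumPy sketch that builds the same kind of red/green UV ramp on the CPU. This is just an illustration of the math, not the TOP’s actual implementation:

```python
import numpy as np

# Build a width x height RGBA ramp where red = u and green = v,
# mirroring fragColor = vec4( vUV.st, 0.0, 1.0 ) from the shader.
# Row 0 is the bottom of the image, as in GL texture coordinates.
def uv_ramp(width, height):
    u = np.linspace(0.0, 1.0, width)
    v = np.linspace(0.0, 1.0, height)
    uu, vv = np.meshgrid(u, v)      # uu varies per column, vv per row
    ramp = np.zeros((height, width, 4), dtype=np.float32)
    ramp[..., 0] = uu               # red channel: horizontal coordinate
    ramp[..., 1] = vv               # green channel: vertical coordinate
    ramp[..., 3] = 1.0              # alpha is always 1
    return ramp

ramp = uv_ramp(4, 4)
# Bottom-right pixel is pure red, top-left pixel is pure green,
# matching the gradient we see out of the stoner's second outlet.
```

Note that the 32-bit float precision matters here: an 8-bit ramp would quantize the lookup coordinates and produce visible banding when used for warping.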

Okay, so we’ve got this silly gradient, but what is it good for?!

Let’s move around our stoner a little bit to see what else changes here.

That’s still not very sexy – I know, but let’s hold on for just one second. We first need to pause for a moment and think about what this might be good for. In fact, there’s a lovely operator that this plays very nicely with: the remap TOP. Say what now? The remap TOP can be used to warp input1 based on a map in input2. Still scratching your head? That’s okay. Let’s plug in a few other ops so we can see this in action. We’re going to rearrange our ops here just a little and add a remap TOP to the mix.

Here we can see that the red / green map is used on the second input of our remap TOP, and our movie file is used on the first input.
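Conceptually, the remap operation uses the red and green channels of the map as lookup coordinates into the source image. Here’s a hedged NumPy sketch of that idea using nearest-neighbor sampling – the real remap TOP filters far more smoothly, so treat this as a conceptual model only:

```python
import numpy as np

def remap(image, coord_map):
    # Warp image using a red/green coordinate map.
    # coord_map[..., 0] holds normalized u (column) lookups,
    # coord_map[..., 1] holds normalized v (row) lookups.
    h, w = image.shape[:2]
    cols = np.clip((coord_map[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip((coord_map[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return image[rows, cols]

# An identity map (red = u, green = v) leaves the image unchanged,
# which is exactly why the stoner's untouched ramp is a "no-op" warp.
h, w = 4, 4
uu, vv = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
identity = np.dstack([uu, vv])
image = np.arange(h * w).reshape(h, w)
warped = remap(image, identity)
```

Pushing the stoner’s points around is, in this model, just editing the values in `coord_map` – which is why the baked red/green texture alone is enough to reproduce the warp.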

Okay. But why is this anything exciting?

Richard Burns just recently wrote about remapping, and he very succinctly nails down exactly why this is so powerful:

It’s commonly used by people who use the stoner component as it means they can do their mapping using the stoners render pipeline and then simply remove the whole mapping tool from the system leaving only the remap texture in place.

If you haven’t read his post yet it’s well worth a read, and you can find it here.

Just like Richard mentions we can use this new feature to essentially completely remove or disable the stoner in our project once we’ve made maps for all of our texture warping. This is how we’ll get our cook time down to just 0.05 milliseconds.

Let’s look at how we can use the stoner to do just this.

For starters we need to add some empty bases to our network. To keep things simple for now I’m just going to add them to the same part of the network where my stoner lives. I’m going to call them base_calibration1 and base_calibration2.

Next we’re going to take a closer look at the stoner’s custom parameters. On the Stoner page we can see that there’s now a place to put a path for a project.

Let’s start by putting in the path to our base_calibration1 component. Once we hit enter we should see that our base_calibration1 has a new set of inputs and outputs:

Let’s take a quick look inside our component to see what was added.

Ah ha! Here we’ve got a set of tables that will allow the stoner UI to update correctly, and we’ve got a locked remap texture!

So, what do we do with all of this?

Let’s push around the corners of our texture in the stoner and hook up a few nulls to see what’s happening here.

You may need to toggle the “always refresh” parameter on the stoner to get your destination project to update correctly. Later on we’ll look at how to work around this problem.

So far so good. Here we can see that our base_calibration1 has been updated with the changes we made to the stoner. What happens if we change the project path now to be base_calibration2? We should see that inputs and outputs are added to our base. We should also be able to make some changes to the stoner UI and see two different calibrations.

Voila! That’s pretty slick. Better yet if we change the path in the stoner project parameter we’ll see that the UI updates to reflect the state we left our stoner in. In essence, this means that you can use a single stoner to calibrate multiple projectors without needing multiple stoners in your network. In fact, we can even bypass or delete the stoner from our project once we’re happy with the results.

There are, of course, a few changes that we’ll make to integrate this into our project’s pipeline, but understanding how this works will be instrumental in what we build next. Before we move ahead, take some time to look through how this works, and read through Richard’s post as well as some of the other documentation. Like Richard mentions, this approach to locking calibration data can be used in lots of different configurations and means that you can remove a huge chunk of overhead from your projects.

Next we’ll take the lessons we’ve learned here combined with the project requirements we laid out earlier to start building out our complete UI and calibration pipeline.

# WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis: a fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection, a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases boundaries between the performer and the audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid October of 2013 the team of designers and our performer had our first workshop weekend where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16 foot diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that’s working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on using a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16 foot diameter dome to build we opted to only purchase the hub pieces for this structure, and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their use period that we were able to have for free. Our only hiccup here was that the parachute was multi colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

# Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about the challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines, which are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short throw lensing. Our three HD projectors at a 16 foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

## TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges in the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer both build the content, and build the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software that we were test driving as a development platform is Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs, as well as with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

## TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that we needed for him to have some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backwards through a media stack, zoom, and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be understated, and it was thrilling to have something so quickly accessible.

# Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackling this. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog here)

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.

# Thoughts from the Participants:

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that an Ultra Wide Band RFID Real Time Location System may be the answer, but such systems are expensive. That said, I am now in communications with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.

How do you plan to overcome some of these challenges?

Bootcamping my MacBook Pro will help in the short term computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge but as I am trying to internally catalogue what resources or skills I am utilizing in this process, so far one of the major ones is patience. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, but they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, directing audience attention in an immersive space I knew would be a challenge, mostly due to the fundamental shape of the space and audience relationship. Working with such limitations for media and lights is extremely difficult in regard to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.

Soft skills: knowing that it isn’t about you, patience, sense of humor
Practical skills: puppeteering, possibly the ability to run some cues from a handheld device

# Case Study: Vesturport’s Woyzeck

The challenge of re-imagining a classic work often lies in finding the right translation of ideas, concepts, and imagery for a modern context. Classic pieces of theatre carry many pieces of baggage to the production process: their history, the stories of their past incarnations, the lives of famous actors and actresses who performed in starring roles, the interpretation of their designers, and all the flotsam and jetsam that might be found with any single production of the piece in question. A classic work, therefore, is not just the text of the author but a historical thread that traces the line of the work from its origin to its current manifestation. The question that must be addressed in the remounting of a classic work is, why: why this classic work, why now, why does this play matter more than any other?

In 2008 Iceland’s Vesturport theatre company presented their re-imagining of Büchner’s Woyzeck, a work about class, status, and madness. Written between 1836 and 1837, Büchner’s play tells the story of Woyzeck, a lowly soldier stationed in a German town. He lives with Marie, with whom he has had a child. For extra pay Woyzeck performs odd jobs for the Captain and is involved in medical experiments for the Doctor. Over the course of the play’s serialized vignettes Woyzeck’s grasp on the world begins to break apart as the result of his confrontation with an ugly world of betrayal and abuse. At the end of the play a jealous, psychologically crippled, and cuckolded Woyzeck ruthlessly lures Marie to the pond in the woods, where he kills her. There is some debate about the actual ending to Büchner’s play. While the version that is most frequently produced has a Woyzeck who is unpunished, there is some speculation that one version of the play ended with the lead character facing a trial for his crime. As a historical note, Büchner’s work is loosely based upon the true story of Johann Christian Woyzeck, a wigmaker, who murdered the widow with whom he lived. Tragically, Büchner died in 1837 from typhus and never saw Woyzeck performed. It wasn’t, in fact, performed until 1913. In this respect, Woyzeck has always been a play that is performed outside of its original time in history. It has always been a window backwards to a different time, while simultaneously being a means for the theatre to examine the time in which it is being produced.

It therefore comes as no surprise that in 2008 a play offering a commentary on the complex social conditions of class and status opens in a country standing at the edge of a financial crisis that would come to shape the next three years of its economic standing in the world. A play about the use and misuse of power in a world where a desperate Woyzeck tries to explain to a bourgeoisie captain that the poor are “flesh and blood… wretched in this world and the next…” (Büchner) rings as a warning about what that corner of the world was soon to face.

# The Response to Vesturport’s Aesthetic

From the moment of its formation, Vesturport has been a company that often appropriates material and looks to add an additional element of spectacle – early in their formation as a troupe they mounted productions of Romeo and Juliet and Titus Andronicus. This additional element of spectacle is specifically characterized by a gymnastic and aerial (contemporary circus) aesthetic. The company’s connection to a circus aesthetic is often credited to the background of Gisli Örn Gardarsson, the company’s primary director, as a gymnast (Vesturport). The use of circus as a mechanism for storytelling is both compelling and engaging. Peta Tait captures this best as she talks about what circus represents:

Circus performance presents artistic and physical displays of skillful action by highly rehearsed bodies that also perform cultural ideas: of identity, spectacle, danger, transgression. Circus is performative, making and remaking itself as it happens. Its languages are imaginative, entertaining and inventive, like other art forms, but circus is dominated by bodies in action [that] can especially manipulate cultural beliefs about nature, physicality and freedom. (Tait 6)

The very nature of circus as a performance technique, therefore, brings a kind of translation to Vesturport’s work that is unlike the work of other theatre companies. They are also unique in their use of language, as their productions frequently feature translations that fit the dominant language of a given touring venue. More than a company that features the use of circus as a gimmick, Vesturport uses the body’s relationship to space as a translation of ideas into movement, just as their use of language itself is a constant flow of translation.

Vesturport’s production of Woyzeck invites the audience to play with them as “Gardarsson’s gleefully physical staging of Büchner’s masterpiece … is played out on an industrial set of gleaming pipes, green astroturf, and water-filled plexiglass tanks” (Vesturport). Melissa Wong, in writing for Theatre Journal sees a stage that “resembled a swimming pool and playground” that fills the stage with a “playful illusion.” The playful atmosphere of the production, however, is always in flux as a series of nightmarish moments of abuse are juxtaposed against scenes of slapstick comedy and aerial feats. Wong later sees a Woyzeck who “possessed a vulnerability that contrasted with the deliberately grotesque portrayals of the other characters.” Wong’s ultimate assessment of the contrasting moments of humor and spectacle is that they “served to emphasize the pathos of the play, especially at the end when the fun and frolicking faded away to reveal the broken man that Woyzeck had become.” Not all American critics, however, shared her enthusiasm for Vesturport’s production. Charles Isherwood in writing for the New York Times sees the use of circus as a distraction, writing that, “the circus is never in serious danger of being spoiled by that party-pooping Woyzeck…it’s hard to fathom what attracted these artists to Büchner’s deeply pessimistic play, since they so blithely disregard both its letter and its spirit.” Jason Best shares a similar frustration with the production, writing “by relegating Büchner’s words to second place, the production ends up more impressive as spectacle than effective as drama.” Ethan Stanislawski was frustrated by a lack of depth in Gardarsson’s production saying “this Woyzeck is as comical, manic, and intentionally reckless as it is intellectually shallow.”

# Circus as an Embodied Language

Facing such sharp criticism, why does this Icelandic company use circus as a method for interrogating text? Certainly one might consider the mystique of exploring new dimensions of theatricality, or notions of engaging the whole body in performance. While these are certainly appealing suggestions, there is more to the idea of circus as a physical manifestation of idea. Tait writes “… aerial acts are created by trained, muscular, bodies. These deliver a unique aesthetic that blends athleticism and artistic expression. As circus bodies, they are indicative of highly developed cultural behavior. The ways in which spectators watch performers’ bodies – broadly, socially, physical and erotically – come to the fore with the wordless performance of an aerial act.” Spivak reminds us that:

Logic allows us to jump from word to word by means of clearly indicated connections. Rhetoric must work in the silence between and around words in order to see what works and how much. The jagged relationship between rhetoric and logic, condition and effect of knowing, is a relationship by which a world is made for the agent, so that the agent can act in an ethical way, a political way, a day-to-day way; so that the agent can be alive in a human way, in the world. (Spivak 181)

Woyzeck’s challenge is fundamentally about understanding how to live in this world – a world that is unjust, exploitative, and frequently characterized by subjugation. Gardarsson uses circus to depict a world that is both ugly and beautiful. He uses circus to call our attention to these problems as embodied manifestations. The critics miss what’s happening in the production, and this is especially evident when looking at what Tait has to say about the role of new circus as a medium:

New circus assumes its audience is familiar with the format of traditional live circus, and then takes its artistic inspiration from a cultural idea of circus as identity transgression and grotesque abjection, most apparent in literature [and] in cinema. Early [new circus in the 1990’s] shows reflected a trend in new circus practice to include queer sexual identities and expand social ideas of freakish bodies. Artistic representation frequently exaggerates features of traditional circus…. (Tait 123)

What Isherwood misses is that the use of garish spectacle that makes light of an ugly world is, in fact, at the very heart of what Gardarsson is trying to express. The working-poor Woyzeck who questions, and thinks, and is criticized for thinking is ruining the Captain and the Doctor’s circus-filled party. Woyzeck’s tragedy lies in his fight to survive, to be human, in the inhuman world that surrounds him – what could be more “deeply pessimistic” (as Isherwood calls it) than a vision of the world where fighting to be human drives a man to destroy the only anchor to the world (Marie) that he ever had?

# Conclusions

Melissa Wong best sums up the production in seeing the tragedy in a Woyzeck “who seemed in some ways to be the most humane character in the production…the one who failed to survive.” Her assessment of Gardarsson’s use of levity is that it points “to the complicity of individuals [the audience] who, as part of society, had watched Woyzeck’s life as entertainment without fully empathizing with the depth of his existential crisis” (Wong). She also rightly points out that the use of humor in the play “enabled us to access questions that in the bleakness of their full manifestation might have been too much to bear” (Wong). Tait also reminds us that the true transformative nature of circus as a medium is not what is happening with the performer, but how the experience of viewing the performer is manifest in the viewer.

Aerial motion and emotion produce sensory encounters; a spectator fleshes culturally identifiable motion, emotionally. The action of musical power creates buoyant and light motion, which corresponds with reversible body phenomenologies in the exaltation of transcendence with and of sensory experience. The aerial body mimics the sensory motion of and within lived bodies in performance of delight, joy, exhilaration, and elation. Aerial bodies in action seem ecstatic in their fleshed liveness. (Tait 152)

Here circus functions as a mechanism for translation and confrontation in a play whose thematic elements are difficult to grapple with. Vesturport’s method and execution look to find the spaces between words, and while not perfect, strive to push the audience into a fleshed and lived experience of Büchner’s play rather than a purely intellectual theatrical exercise.

# Works Cited

Büchner, Georg. Woyzeck. Trans. Eric Bentley. New York: Samuel French, 1991.

Best, Jason. “Woyzeck | Review.” 14 October 2005. The Stage. The Stage Media Company Limited. 3 October 2013 <www.thestage.co.uk/reviews/review.php/10047/woyzeck>.

Isherwood, Charles. Outfitting Woyzeck With a Pair of Rose-Colored Glasses. 17 October 2008. 2 October 2013 <theater.nytimes.com/2008/10/17/theater/reviews/17woyz.html>.

Pareles, Jon. “Shaking Up ‘Woyzeck’ With Early Rock and Flying Trapeze.” 13 October 2008. The New York Times. <www.nytimes.com/2008/10/14/arts/music/14cave.html?_r=2&scp=1&sq=woyzeck&st=cse&oref=slogin&>.

Richardson, Stan. Woyzeck nytheatre.com review. 15 October 2008. The New York Theatre Experience. 2 October 2013 <www.nytheatre.com/Review/stan-richardson-2008-10-15-woyzeck>.

Spivak, Gayatri Chakravorty. Outside in the Teaching Machine. New York: Routledge, 1993.

Stanislawski, Ethan. Theatre Review (NYC): Woyzeck by Georg Büchner at UNDER St. Marks and BAM. 21 October 2008. 4 October 2013 <blogcritics.org/theater-review-nyc-woyzeck-by-georg/>.

Tait, Peta. Circus Bodies: Cultural Identity in Aerial Performance. New York: Routledge, 2005.

Thielman, Sam. Review: “Woyzeck”. 16 October 2008. 5 October 2013 <http://variety.com/2008/legit/reviews/woyzeck-3-1200471537/>.

Vesturport. Woyzeck by Georg Büchner | A Vesturport and Reykjavik City Theatre production. 15 January 2000. 7 October 2013 <http://vesturport.com/theater/woyzeck-georg-buchner/>.

Wong, Melisa Wansin. “Woyzeck (review).” Theatre Journal 61.4 (2009): 638-640.

Woyzeck. Dir. Gisli Örn Gardarsson. Vesturport. Vesturport and Reykjavik City Theatre. Vesturport, 2009.