
presets and cue building – a beyond basics checklist | TouchDesigner 099

from the facebook help group

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

https://www.facebook.com/groups/touchdesignerhelp/permalink/835733779925957/

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

  • Some of the suggestions below come from experience building and working on distributed systems, and some from single server systems. Sometimes those ideas play well together, and sometimes they don’t. Your mileage may vary here, so like any general advice, please think through the implications of your decisions before committing to an idea to implement.
  • The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit. 
  • I’ll be wrong about some things. The scope of anyone’s knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don’t work, or that something I said wouldn’t work actually does. As with all advice, it’s okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what’s the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you’re up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… it doesn’t matter where you do it, just give yourself an outline to follow – otherwise you’ll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren’t always sexy, but they’re more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

  • What existing tools do you like – what’s their data structure / solution?
  • How is your data organized – arrays, dictionaries, etc.
  • Do you have a readme to refer back to when you extend your project in the future?
  • Do you have a way to add entries?
  • Do you have a way to recall entries?
  • Do you have a way to update entries?
  • Do you have a way to copy entries?
  • Do you have a validation process in-line to ensure your entries are valid?
  • Do you have a means of externalizing your cues and other config data? (A minimal sketch of these operations follows just below.)
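
To make a few of those questions concrete, here’s a minimal sketch of what add / recall / update operations might look like for a cue dictionary externalized to a Text DAT. The DAT name ( 'cues_json' ) and the key layout are assumptions for illustration, not a prescription:

import json

# 'cues_json' is a hypothetical Text DAT holding our cue dictionary
def load_cues():
    return json.loads( op( 'cues_json' ).text )

def save_cues( cues ):
    op( 'cues_json' ).text = json.dumps( cues, indent = 4 )

def add_cue( name, data ):
    cues = load_cues()
    cues[ 'cues' ][ name ] = data
    save_cues( cues )

def recall_cue( name ):
    # .get() returns None instead of raising an error on a missing entry
    return load_cues()[ 'cues' ].get( name )

def update_cue( name, key, val ):
    cues = load_cues()
    cues[ 'cues' ][ name ][ key ] = val
    save_cues( cues )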

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.
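
As a concrete point of reference, the controller end of that streaming setup might be as small as the sketch below. This would live in an Execute DAT; the OSC Out DAT name ( 'oscout1' ) and the address pattern are assumptions, and in production you might stream timecode rather than raw seconds:

def onFrameStart( frame ):
    # send the controller's clock to every node on the network
    op( 'oscout1' ).sendOSC( '/controller/time', [ absTime.seconds ] )
    return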

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to the time provided by your controller.

The importance of how you work with time in your TOXes, and in your project in general, can’t be overstated. Whatever you decide in regards to time, just make sure it’s a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you’re really after here, so it’s a good idea to really dig into what you’re expecting your project to accomplish. If you’re just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.
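
To keep yourself honest about those budgets, a little Python audit helps. A rough sketch, assuming a hypothetical target COMP called 'base_fx' (cookTime reports the duration of an op’s last cook in milliseconds):

def report_heavy_ops( comp, threshold = 0.05 ):
    # recursively print any op whose last cook exceeded the threshold
    for child in comp.children:
        if child.cookTime > threshold:
            print( child.path, round( child.cookTime, 4 ) )
        if child.isCOMP:
            report_heavy_ops( child, threshold )

report_heavy_ops( op( 'base_fx' ) )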

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are upsides and downsides to using this method, so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use Script CHOPs or Script SOPs? Make sure that you’re being careful with how you’re driving their parameters. Python offers an amazing, extensible scripting language for Touch, but it’s worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of Movie File In TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

icon_library.PNG

icon_library_example.PNG

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of dependencies – a given visuals TOX wouldn’t operate correctly in a network that didn’t have our assets TOX as well. Dependencies can be frustrating to work with in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that’s all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven’t started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you’re going to find. In the past I’ve used custom extensions for every external TOX. Each module has a Play(state) method, where state indicates if it’s on or off. When the module is turned on it sets off a series of scripts to ensure that things are correctly set up, and when it’s turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated, this might be the right kind of solution.
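
As a sketch of that idea (not a definitive implementation), an extension with a Play( state ) method might be organized like this. The internal op names are placeholders for whatever your module actually needs to set up and tear down:

class ModuleExt:
    def __init__( self, ownerComp ):
        self.ownerComp = ownerComp

    def Play( self, state ):
        if state:
            # set up: start playback, un-bypass ops, etc.
            self.ownerComp.op( 'moviefilein1' ).par.play = True
        else:
            # clean up and reset so the next Play( True ) starts fresh
            self.ownerComp.op( 'moviefilein1' ).par.play = False
            self.ownerComp.op( 'timer1' ).par.initialize.pulse()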

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.
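
If you script the creation of your custom pars, you get that standard for free on every new module. A hedged sketch using the counts from the rule above; appendCustomPage, appendToggle, and appendFloat are standard COMP and Page methods, while the page and par names here are just examples:

def standardize( comp ):
    # give every module the same custom par layout
    page = comp.appendCustomPage( 'Standard' )
    for i in range( 1, 4 ):
        page.appendToggle( 'Toggle{}'.format( i ) )
    for i in range( 1, 7 ):
        page.appendFloat( 'Par{}'.format( i ) )

standardize( op( 'base_module1' ) )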

That’s all well and good Matt, but I don’t get it – why should my TOXes all have a fixed number of custom pars? Let’s consider building a data structure for cues. Let’s say that all of our toxes have a different number of custom pars, and that they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:

{
      "cues": {
           "cue1": {
                "Tox": "Swank",
                "Level_1": 0,
                "Noise": 1,
                "Level3": 4,
                "Blacklvl": 0.75
           },
           "cue2": {
               "Tox": "Curl",
               "Bouncy": 0.775,
               "Curve": 100.0,
               "Augment": 13,
               "Blklvl": 0.75
           },
           "cue3": {
               "Tox": "Boop",
               "Boopness": 0.775
           }
      }
}

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the Circle K – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a namespace with the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:

{
      "cues": {
           "cue1": {
                "Tox": "Swank",
                "Par1": 0,
                "Par2": 1,
                "Par3": 4,
                "Par4": 0.75
           },
           "cue2": {
               "Tox": "Curl",
               "Par1": 0.775,
               "Par2": 100.0,
               "Par3": 13,
               "Par4": 0.75
           },
           "cue3": {
               "Tox": "Boop",
               "Par1": 0.875,
               "Par2": None,
               "Par3": None,
               "Par4": None
           }
      }
}

Okay, so that’s prettier… So what? If we look back at our lesson on dictionary for loops we’ll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially, we’re able to store the par name as the key and the target value as the value in our dictionary, and we’re just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
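
For what that might look like in practice, here’s a rough sketch of a loader that pushes a cue from the dictionary above onto its target tox. pars() handles the pattern-matched assignment; 'cues_json' is again a hypothetical Text DAT, and this assumes the target tox is addressable by the name stored under 'Tox':

import json

def apply_cue( cue_name ):
    cues = json.loads( op( 'cues_json' ).text )
    cue = cues[ 'cues' ][ cue_name ]
    target = op( cue[ 'Tox' ] )

    for par_name, val in cue.items():
        if par_name == 'Tox' or val is None:
            continue
        # pars() returns every par on the target matching the name
        for p in target.pars( par_name ):
            p.val = val

apply_cue( 'cue1' )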

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops, be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Woah, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you’ll never drop below 60, it’s better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black while your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having realistic expectations will keep you from getting frustrated, and will point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

Richard’s killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I’d always suggest keeping the UI on another machine or in a separate instance. It’s handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc. in. I’ve been very careful to take care of the little details in the UI too, such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that’s a slider, and sometimes even they don’t need them.

I’m with Richard 100% here on all fronts. That said, be mindful of why and when you’re splitting up your processes. It might be tempting to do all of your video handling in one process that gets passed to a process only for rendering 3D, before going to a process that’s for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need a strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There’s a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

  • “What problem am I solving by adding this level of additional complexity?”
  • “Is there another way to solve this problem without an additional process?”
  • “What are the possible problems / issues this might cause?”
  • “Can I test this in a small way before re-factoring the whole project?”

Don’t Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process, it’s important to know what you’re loading at start, and what’s only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren’t going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they’re ready to be called without any frame drops. Or you might think about a way to automate your start up so you can test to make sure you have all your assets (especially if your dev computer isn’t the same as your performance / installation machine).
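
One way to approach that force cook is with an Execute DAT’s onStart() callback. A minimal sketch, assuming a shared assets base full of TOPs ( 'base_assets' is a hypothetical name):

def onStart():
    assets = op( 'base_assets' )
    # force cook everything now so the first select doesn't drop frames
    for child in assets.findChildren( type = TOP ):
        child.cook( force = True )
    return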

Logging and Errors

It’s not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it’ll be important to see where things went wrong. Sometimes the default logging methods aren’t enough, or they happen too fast. A good logging methodology and format can help with that. You’re welcome to make your own; you’re also welcome to use and modify the one I made.
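
If you do roll your own, even something tiny goes a long way. A bare-bones sketch that timestamps messages into a Text DAT (the DAT name and the line format are placeholders):

import datetime

def log( message, level = 'INFO' ):
    stamp = datetime.datetime.now().strftime( '%Y-%m-%d %H:%M:%S' )
    op( 'text_log' ).write( '{} | {} | {}\n'.format( stamp, level, message ) )

log( 'cues loaded', level = 'DEBUG' )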

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it’s at. Simple, complete proof-of-concept tests that aren’t baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren’t always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

Python in TouchDesigner | The Channel Class | TouchDesigner

The Channel Class Wiki Documentation

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know, as it might shape the way we think of channels and samples.

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with a channel called ‘chan1’. We know we can write a python reference for this chop like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]

python_refs.PNG

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch which index in that list of values we want to grab – instead Touch is moving through that list with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

multi_sample_chop.gif

Okay, what’s really hiding from us is that CHOP references have a default value that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

In actuality, the real reference syntax is:
op( stringNameToOperator )[ channelNameOrIndex ][ arrayPosition ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

  • valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).
  • index (Read Only) The numeric index of the channel.
  • name (Read Only) The name of the channel.
  • owner (Read Only) The OP to which this object belongs.
  • vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

op( 'null1' )[0].valid
op( 'null1' )[0].index
op( 'null1' )[0].name
op( 'null1' )[0].owner
op( 'null1' )[0].exports
op( 'null1' )[0].vals

I’m going to plug that table DAT into an eval DAT to evaluate the python expressions so I can see what’s going on here. What I get back is:

True
0
chan1
/project1/base_the_channel_class/null1
[]
0.0 0.25 0.5 0.75 1.0

If we were to look at those side by side it would be:

Python In                     Python Out
op( 'null1' )[0].valid        True
op( 'null1' )[0].index        0
op( 'null1' )[0].name         chan1
op( 'null1' )[0].owner        /project1/base_the_channel_class/null1
op( 'null1' )[0].exports      []
op( 'null1' )[0].vals         0.0 0.25 0.5 0.75 1.0

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – every time the value is true (1), I want the circle to be green; when it’s false (0), I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = [ 1.0, 0.0, 0.0 ]
    onColor         = [ 0.0, 1.0, 0.0 ]
    targetCircle    = 'circle{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = onColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = onColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = onColor[2]

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = offColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = offColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = offColor[2]
    return

channel_execute_par.gif

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = 'circle_off'
    onColor         = 'circle_on'
    targetCircle    = 'select{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = onColor

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = offColor
    return

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice, if you’re into what’s going on here take it all apart, otherwise you can let it ride.

channel_execute_select.gif

Okay… so, what I want is a little container that displays a channel’s name, an indicator for whether the value is > 0 or < 0, another green / red indicator that corresponds to the same test, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

multi_sample_more_members.gif

You can pull mine apart to see how I made it work here: base_the_channel_class.

Happy Programming!


BUT WAIT! THERE’S MORE!

Ivan DesSol asks some interesting questions:

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel, let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

multi_sample_ref.PNG

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun, games, and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head, that’s okay – let’s take a look at it this way.

Let’s say we have a list like this:

fruit = [ 'apple', 'orange', 'kiwi', 'grape' ]

We know that we can retrieve values from our list with an index:

print( fruit[ 0 ] )   # returns "apple"
print( fruit[ 1 ] )   # returns "orange"
print( fruit[ 2 ] )   # returns "kiwi"
print( fruit[ 3 ] )   # returns "grape"

If, however, we try:

print( fruit[ 4 ] )

Now we should see an out of range error… because there is nothing at index 4 in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an index and how Touch interprets our previous reference are different:

reference                                          value
op( 'null7' )[ 'chan1' ]                           0.6331658363342285
op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]    0.6381909251213074

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation – we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages, 0 is still an index position. So what we’re actually seeing here is an off-by-one error.

Now that we know what the problem is, it’s easy to fix:

op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]

Now we’re right as rain:

reference                                                  value
op( 'null7' )[ 'chan1' ]                                   0.6331658363342285
op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]            0.6381909251213074
op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]    0.6331658363342285

Hope that helps!

Building a Calibration UI | Reusing Palette Components – The Stoner | TouchDesigner

Here’s our second stop in a series about planning out part of a long term installation’s UI. We’ll focus on the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the effort. In this case we’ll be looking at a five sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

stoner

As many of you have already found, there’s a wealth of interesting examples and useful tools tucked away in the palette in TouchDesigner. If you’re unfamiliar with this feature, it’s located on the left hand side of the interface when you open Touch, and you can quickly summon it into existence with the small drawer and arrow icon:

pallet

Tucked away at the bottom of the tools list is the stoner. If you’ve never used the stoner, it’s a killer tool for all your grid warping needs. It allows for keystoning and grid warping, with a healthy set of elements that make for fast and easy alterations to a given grid. You can bump points with the keyboard, you can use the mouse to scroll around, and there are options for linear curves, bezier curves, perspective mapping, and bilinear mapping. It is an all around wonderful tool. The major catch is that using the tox as-is runs you about 0.2 milliseconds when we’re not looking at the UI, and about 0.5 milliseconds when we are looking at the UI. That’s not bad – in fact that’s downright snappy in the scheme of things – but it’s going to have limitations when it comes to scale, and to using multiple stoners at the same time.

stoner

That’s slick. But what if there was a way to get almost the same results at a cost of 0 milliseconds for photos, and only 0.05 milliseconds when working with video? As it turns out, there’s a gem of a feature in the stoner that allows us to get just this kind of performance, and we’re going to take a look at how that works as well as how to take advantage of that feature.

stoner_fast

Let’s start by taking a closer look at the stoner itself. We can see now that there’s a second outlet on our op. Let’s plug in a null to both outlets and see what we’re getting.

stoner_nulls

Well hello there, what is this all about?!

Our second output is a 32 bit texture made up of only red and green channels. Looking closer we can see that it’s a gradient of green in the top left corner, and red in the bottom right corner. If we pause here for a moment we can look at how we might generate a ramp like this with a GLSL Top.

glsl_vuvst

If you’re following along at home, let’s start by adding a GLSL Top to our network. Next we’ll edit the pixel shader.

out vec4 fragColor;

void main()
{
 fragColor = vec4( vUV.st , 0.0 , 1.0 );
}

So what do we have here exactly? For starters, we have an explicit declaration of our out vec4 (in very simple terms – the texture that we want to pass out of the shader), and a main() function where we assign values to our output texture.

What’s a vec4?

In GLSL, vectors are a data type. We use vectors for all sorts of operations, and as a data type they’re very useful to us as we often want variables with several positions. Keeping in mind that GLSL is used in pixeltown (one of the largest boroughs on your GPU), it’s helpful to be able to think of variables that carry multiple values – like, say, information about the red, green, blue, and alpha values for a given pixel. In fact, that’s just what our vec4 is doing for us here: it represents the RGBA values we want to associate with a given pixel.

vUV is an input variable that we can use to locate the texture coordinate of a pixel. This value changes for every pixel, which is part of the reason it’s so useful to us. So what is this whole vec4( vUV.st, 0.0, 1.0 ) business? In GL we can fill in the values of a vec4 with a vec2 – vUV.st is our uv coordinate as a vec2. In essence, what we’ve done is say that we want to use the uv coordinates to stand in for our red and green values, blue will always be 0, and our alpha will always be 1. It’s okay if that’s a bit wonky to wrap your head around at the moment. If you’re still scratching your head you can read more at the links below.

Read about more GLSL Data Types

Read about writing your own GLSL TOP

Okay, so we’ve got this silly gradient, but what is it good for?!

Let’s move around our stoner a little bit to see what else changes here.

pushingpoints

That’s still not very sexy – I know – but let’s hold on for just one second. We first need to pause for a moment and think about what this might be good for. In fact, there’s a lovely operator that this plays very nicely with: the Remap TOP. Say what now? The Remap TOP can be used to warp input1 based on a map in input2. Still scratching your head? That’s okay. Let’s plug in a few other ops so we can see this in action. We’re going to rearrange our ops here just a little and add a Remap TOP to the mix.

remapTOP.PNG

Here we can see that the red / green map is used on the second input of our Remap TOP, and our movie file is used on the first input.

Okay. But why is this anything exciting?

Richard Burns just recently wrote about remapping, and he very succinctly nails down exactly why this is so powerful:

It’s commonly used by people who use the stoner component as it means they can do their mapping using the stoner’s render pipeline and then simply remove the whole mapping tool from the system, leaving only the remap texture in place.

If you haven’t read his post yet it’s well worth a read, and you can find it here.

Just like Richard mentions we can use this new feature to essentially completely remove or disable the stoner in our project once we’ve made maps for all of our texture warping. This is how we’ll get our cook time down to just 0.05 milliseconds.

Let’s look at how we can use the stoner to do just this.

For starters we need to add some empty bases to our network. To keep things simple for now I’m just going to add them to the same part of the network where my stoner lives. I’m going to call them base_calibration1 and base_calibration2.

calibration_bases

Next we’re going to take a closer look at the stoner’s custom parameters. On the Stoner page we can see that there’s now a place to put a path for a project.

stoner_path

Let’s start by putting in the path to our base_calibration1 component. Once we hit enter we should see that our base_calibration1 has a new set of inputs and outputs:

base_capture1_added

Let’s take a quick look inside our component to see what was added.

inside_base1.PNG

Ah ha! Here we’ve got a set of tables that will allow the stoner UI to update correctly, and we’ve got a locked remap texture!

So, what do we do with all of this?

Let’s push around the corners of our texture in the stoner and hook up a few nulls to see what’s happening here.

working_with_calibration1

You may need to toggle the “always refresh” parameter on the stoner to get your destination project to update correctly. Later on we’ll look at how to work around this problem.

So far so good. Here we can see that our base_calibration1 has been updated with the changes we made to the stoner. What happens if we change the project path now to be base_calibration2? We should see that inputs and outputs are added to our base. We should also be able to make some changes to the stoner UI and see two different calibrations.

working_with_calibration2.PNG

Voila! That’s pretty slick. Better yet, if we change the path in the stoner project parameter we’ll see that the UI updates to reflect the state we left our stoner in. In essence, this means that you can use a single stoner to calibrate multiple projectors without needing multiple stoners in your network. In fact, we can even bypass or delete the stoner from our project once we’re happy with the results.

no_stoner

There are, of course, a few changes that we’ll make to integrate this into our project’s pipeline, but understanding how this works will be instrumental in what we build next. Before we move ahead, take some time to look through how this works, and read through Richard’s post as well as some of the other documentation. Like Richard mentions, this approach to locking calibration data can be used in lots of different configurations and means that you can remove a huge chunk of overhead from your projects.

Next we’ll take the lessons we’ve learned here combined with the project requirements we laid out earlier to start building out our complete UI and calibration pipeline.

WonderDome | Workshop Weekend 1

WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis: a fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection and a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases boundaries between the performer and the audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid October of 2013 the team of designers and our performer had our first workshop weekend where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16 foot diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that’s working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on using a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16 foot diameter dome to build we opted to only purchase the hub pieces for this structure, and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their use period that we were able to have for free. Our only hiccup here was that the parachute was multi colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about the challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines that are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short throw lensing. Our three HD projectors at a 16 foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges in the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer both build the content, and build the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software that we were test driving as a development platform is Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs, as well as with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that we needed him to have some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backwards through a media stack, zoom, and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be overstated, and it was thrilling to have something so quickly accessible.

Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackling this. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog here)

Broad Strokes

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.


Thoughts from the Participants:

From Adam Vachon

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that an Ultra Wide Band RFID Real Time Location System may be the answer, but such systems are expensive. That said, I am now in communications with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.

How do you plan to overcome some of these challenges?

Bootcamping my MacBook Pro will help in the short term, computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.

 

From Julie Rada

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge, but as I try to internally catalogue what resources or skills I am utilizing in this process, so far one of the major ones is patience. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, but they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, directing audience attention in an immersive space I knew would be a challenge, mostly due to the fundamental shape of the space and audience relationship. Working with such limitations for media and lights is extremely difficult in regard to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.

Notes about performer skills:

Soft skills: knowing that it isn’t about you, patience, sense of humor
Practical skills: puppeteering, possibly the ability to run some cues from a handheld device

Vesturport’s Woyzeck | A Case Study

Case Study: Vesturport’s Woyzeck

The challenge of re-imagining a classic work often lies in finding the right translation of ideas, concepts, and imagery for a modern context. Classic pieces of theatre carry many pieces of baggage to the production process: their history, the stories of their past incarnations, the lives of famous actors and actresses who performed in starring roles, the interpretation of their designers, and all the flotsam and jetsam that might be found with any single production of the piece in question. A classic work, therefore, is not just the text of the author but a historical thread that traces the line of the work from its origin to its current manifestation. The question that must be addressed in the remounting of a classic work is, why: why this classic work, why now, why does this play matter more than any other?

In 2008 Iceland’s Vesturport theatre company presented their re-imagining of Büchner’s Woyzeck, a work about class, status, and madness. Written between 1836 – 1837, Büchner’s play tells the story of Woyzeck, a lowly soldier stationed in a German town. He lives with Marie, with whom he has had a child. For extra pay Woyzeck performs odd jobs for the captain and is involved in medical experiments for the Doctor. Over the course of the play’s serialized vignettes Woyzeck’s grasp on the world begins to break apart as the result of his confrontation with an ugly world of betrayal and abuse. The end of the play finds a jealous, psychologically crippled, and cuckolded Woyzeck who ruthlessly lures Marie to the pond in the woods where he kills her. There is some debate about the actual ending to Büchner’s play. While the version that is most frequently produced has a Woyzeck who is unpunished, there is some speculation that one version of the play ended with the lead character facing a trial for his crime. As a historical note, Büchner’s work is loosely based upon the true story of Johann Christian Woyzeck, a wigmaker who murdered the widow with whom he lived. Tragically, Büchner died in 1837 from typhus and never saw Woyzeck performed. It wasn’t, in fact, performed until 1913. In this respect, Woyzeck has always been a play that is performed outside of its original time in history. It has always been a window backwards to a different time, while simultaneously being a means for the theatre to examine the time in which it is being produced.

It therefore comes as no surprise that in 2008 a play offering a commentary on the complex social conditions of class and status opens in a country standing at the edge of a financial crisis that would come to shape the next three years of its economic standing in the world. A play about the use and misuse of power in a world where a desperate Woyzeck tries to explain to a bourgeois captain that the poor are “flesh and blood… wretched in this world and the next…” (Büchner) rings as a warning about what that corner of the world was soon to face.

The Response to Vesturport’s Aesthetic

From the moment of its formation, Vesturport has been a company that often appropriates material and looks to add an additional element of spectacle – early in their formation as a troupe they mounted productions of Romeo and Juliet and Titus Andronicus. This additional element of spectacle is specifically characterized by a gymnastic and aerial (contemporary circus) aesthetic. The company’s connection to a circus aesthetic is often credited to the company’s primary director, Gisli Örn Gardarsson, and his background as a gymnast (Vesturport). The use of circus as a mechanism for storytelling is both compelling and engaging. Peta Tait captures this best as she talks about what circus represents:

Circus performance presents artistic and physical displays of skillful action by highly rehearsed bodies that also perform cultural ideas: of identity, spectacle, danger, transgression. Circus is performative, making and remaking itself as it happens. Its languages are imaginative, entertaining and inventive, like other art forms, but circus is dominated by bodies in action [that] can especially manipulate cultural beliefs about nature, physicality and freedom. (Tait 6)

The very nature of circus as a performance technique, therefore, brings a kind of translation to Vesturport’s work that is unlike the work of other theatre companies. They are also unique in their use of language, as their productions frequently feature translations that fit the dominant language of a given touring venue. More than a company that features the use of circus as a gimmick, Vesturport uses the body’s relationship to space as a translation of ideas into movement, just as their use of language itself is a constant flow of translation.

Vesturport’s production of Woyzeck invites the audience to play with them as “Gardarsson’s gleefully physical staging of Büchner’s masterpiece … is played out on an industrial set of gleaming pipes, green astroturf, and water-filled plexiglass tanks” (Vesturport). Melissa Wong, writing for Theatre Journal, sees a stage that “resembled a swimming pool and playground” and that fills the production with a “playful illusion.” The playful atmosphere of the production, however, is always in flux as a series of nightmarish moments of abuse are juxtaposed against scenes of slapstick comedy and aerial feats. Wong later sees a Woyzeck who “possessed a vulnerability that contrasted with the deliberately grotesque portrayals of the other characters.” Her ultimate assessment of the contrasting moments of humor and spectacle is that they “served to emphasize the pathos of the play, especially at the end when the fun and frolicking faded away to reveal the broken man that Woyzeck had become.” Not all American critics, however, shared her enthusiasm for Vesturport’s production. Charles Isherwood, writing for the New York Times, sees the use of circus as a distraction: “the circus is never in serious danger of being spoiled by that party-pooping Woyzeck… it’s hard to fathom what attracted these artists to Büchner’s deeply pessimistic play, since they so blithely disregard both its letter and its spirit.” Jason Best shares a similar frustration with the production, writing “by relegating Büchner’s words to second place, the production ends up more impressive as spectacle than effective as drama.” Ethan Stanislawski was frustrated by a lack of depth in Gardarsson’s production, saying “this Woyzeck is as comical, manic, and intentionally reckless as it is intellectually shallow.”

Circus as an Embodied Language

Facing such sharp criticism, why does this Icelandic company use circus as a method for interrogating text? Certainly one might consider the mystique of exploring new dimensions of theatricality, or notions of engaging the whole body in performance. While these are certainly appealing suggestions, there is more to the idea of circus as a physical manifestation of idea. Tait writes “… aerial acts are created by trained, muscular bodies. These deliver a unique aesthetic that blends athleticism and artistic expression. As circus bodies, they are indicative of highly developed cultural behavior. The ways in which spectators watch performers’ bodies – broadly, socially, physically and erotically – come to the fore with the wordless performance of an aerial act.” Spivak reminds us that:

Logic allows us to jump from word to word by means of clearly indicated connections. Rhetoric must work in the silence between and around words in order to see what works and how much. The jagged relationship between rhetoric and logic, condition and effect of knowing, is a relationship by which a world is made for the agent, so that the agent can act in an ethical way, a political way, a day-to-day way; so that the agent can be alive in a human way, in the world. (Spivak 181)

Woyzeck’s challenge is fundamentally about understanding how to live in this world – a world that is unjust, exploitative, and frequently characterized by subjugation. Gardarsson uses circus to depict a world that is both ugly and beautiful; he uses circus to call our attention to these problems as embodied manifestations. The critics miss what’s happening in the production, and this is especially evident when looking at what Tait has to say about the role of new circus as a medium:

New circus assumes its audience is familiar with the format of traditional live circus, and then takes its artistic inspiration from a cultural idea of circus as identity transgression and grotesque abjection, most apparent in literature [and] in cinema. Early [new circus in the 1990’s] shows reflected a trend in new circus practice to include queer sexual identities and expand social ideas of freakish bodies. Artistic representation frequently exaggerates features of traditional circus…. (Tait 123)

What Isherwood misses is that the use of garish spectacle that makes light of an ugly world is, in fact, at the very heart of what Gardarsson is trying to express. The working-poor Woyzeck who questions, and thinks, and is criticized for thinking is ruining the Captain and the Doctor’s circus-filled party. Woyzeck’s tragedy lies in his fight to survive, to be human, in the inhuman world that surrounds him – what could be more “deeply pessimistic” (as Isherwood calls it) than a vision of the world where fighting to be human drives a man to destroy the only anchor to the world (Marie) that he ever had?

Conclusions

Melissa Wong best sums up the production in seeing the tragedy in a Woyzeck “who seemed in some ways to be the most humane character in the production…the one who failed to survive.” Her assessment of Gardarsson’s use of levity is that it points “to the complicity of individuals [the audience] who, as part of society, had watched Woyzeck’s life as entertainment without fully empathizing with the depth of his existential crisis” (Wong). She also rightly points out that the use of humor in the play “enabled us to access questions that in the bleakness of their full manifestation might have been too much to bear” (Wong). Tait also reminds us that the true transformative nature of circus as a medium is not what is happening with the performer, but how the experience of viewing the performer is manifest in the viewer.

Aerial motion and emotion produce sensory encounters; a spectator fleshes culturally identifiable motion, emotionally. The action of muscular power creates buoyant and light motion, which corresponds with reversible body phenomenologies in the exaltation of transcendence with and of sensory experience. The aerial body mimics the sensory motion of and within lived bodies in performance of delight, joy, exhilaration, and elation. Aerial bodies in action seem ecstatic in their fleshed liveness. (Tait 152)

Here circus functions as a mechanism for translation and confrontation in a play whose thematic elements are difficult to grapple with. Vesturport’s method and execution look to find the spaces between words, and while not perfect, strive to push the audience into a fleshed and lived experience of Büchner’s play rather than a purely intellectual theatrical exercise.

Works Cited

Büchner, Georg. Woyzeck. Trans. Eric Bentley. New York: Samuel French, 1991.

Best, Jason. “Woyzeck | Review.” 14 October 2005. The Stage. The Stage Media Company Limited. 3 October 2013 <www.thestage.co.uk/reviews/review.php/10047/woyzeck>.

Isherwood, Charles. “Outfitting Woyzeck With a Pair of Rose-Colored Glasses.” 17 October 2008. The New York Times. 2 October 2013 <theater.nytimes.com/2008/10/17/theater/reviews/17woyz.html>.

Pareles, Jon. “Shaking Up ‘Woyzeck’ With Early Rock and Flying Trapeze.” 13 October 2008. The New York Times. <www.nytimes.com/2008/10/14/arts/music/14cave.html?_r=2&scp=1&sq=woyzeck&st=cse&oref=slogin&>.

Richardson, Stan. “Woyzeck | nytheatre.com review.” 15 October 2008. The New York Theatre Experience. 2 October 2013 <www.nytheatre.com/Review/stan-richardson-2008-10-15-woyzeck>.

Spivak, Gayatri Chakravorty. Outside in the Teaching Machine. New York: Routledge, 1993.

Stanislawski, Ethan. “Theatre Review (NYC): Woyzeck by Georg Büchner at UNDER St. Marks and BAM.” 21 October 2008. 4 October 2013 <blogcritics.org/theater-review-nyc-woyzeck-by-georg/>.

Tait, Peta. Circus Bodies: Cultural Identity in Aerial Performance. New York: Routledge, 2005.

Thielman, Sam. “Review: ‘Woyzeck’.” 16 October 2008. Variety. 5 October 2013 <http://variety.com/2008/legit/reviews/woyzeck-3-1200471537/>.

Vesturport. “Woyzeck by Georg Büchner | A Vesturport and Reykjavik City Theatre production.” 15 January 2000. 7 October 2013 <http://vesturport.com/theater/woyzeck-georg-buchner/>.

Wong, Melissa Wansin. “Woyzeck (review).” Theatre Journal 61.4 (2009): 638-640.

Woyzeck. Dir. Gisli Örn Gardarsson. Vesturport and Reykjavik City Theatre. Vesturport, 2009.

Cue Building for Non-Linear Productions

The newly devised piece that I’ve been working on here at ASU finally opened this last weekend. Named “The Fall of the House of Escher,” the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day-like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings, before we begin the cycle again. The production is shaped to take the audience through the choice section two times; on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will go unseen.

This presents a tremendous challenge to any designers dealing with traditionally linear storytelling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways in a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a measure of moving from room to room. This ideological approach was made easier with the addition of a convention for the “choice” moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were, in fact, making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.
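Isadora patches aren’t expressed as code, but it may help to sketch the kind of data structure this room-based approach implies. The following Python sketch is purely illustrative – every room name, behavior label, and function here is hypothetical, not something pulled from the actual show file:

```python
# Hypothetical sketch of a room-based cue structure.
# Each room knows its at-rest behavior, its story-dependent
# variations, and which rooms it physically touches.

rooms = {
    "library": {
        "at_rest": "library_idle",            # base behavior for the room
        "variations": {                       # keyed by where we are in the story
            "loop_1": "library_loop1_cues",
            "loop_2": "library_loop2_cues",
            "finale": "library_finale_cues",
        },
        "connects_to": ["hallway", "choice_space"],
    },
    "choice_space": {                         # the out-of-house "choice" convention
        "at_rest": "choice_idle",
        "variations": {"any": "choice_feedback_cues"},
        "connects_to": ["library", "hallway"],
    },
    # ... one entry per location in the house
}

def travel(current_room, destination):
    """Move to a new room only if the map says the two rooms touch."""
    if destination in rooms[current_room]["connects_to"]:
        return destination
    raise ValueError(f"no path from {current_room} to {destination}")
```

The point of the sketch is the shape of the thing: cues belong to rooms, rooms belong to a map, and navigation is constrained by that map rather than by a linear cue stack.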

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton’s Watchout was largely out of the picture. It was for this reason that I settled on using Troikatronix Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods – methods that would prove to be essential in this process.

In building this navigation method it was first important to establish the locations in the house, and to create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house, I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.


The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, the challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to complex questions of how to solve a problem like non-linear storytelling.

Lessons from the Road

You need a tech rider.

Better yet, you need a tech rider with diagrams, specific dimensions, and clear expectations.

In early June I was traveling with my partner, Lauren Breunig, to an aerial acrobatics festival in Denver, Colorado. Lauren is an incredibly beautiful and talented aerialist. One of the apparatuses that she performs on is what she calls “sliding trapeze.” This is essentially a trapeze bar with fabric loops instead of ropes.

Earlier this year Lauren was invited to perform at the Aerial Acrobatics Arts Festival of Denver in their “innovative” category. As an aerialist Lauren has already performed in many venues across the country, both on her invented apparatus as well as on more traditional circus equipment. In all of these cases she’s had to submit information about her apparatus, clearance requirements, and possible safety concerns.

So when it came time to answer some questions about rigging for the festival it seemed like old hat. One of the many things that Lauren had to submit was the height requirement for her bar, given that a truss would be suspended somewhere between 27 and 29 feet above the floor of the stage. The height of the truss was less critical than the height of her bar: in her case, the minimum distance from the floor to the rigging points is 15.5 feet. At this height her apparatus is high enough off of the ground that she can safely perform all of her choreography. This is also the lower limit of a height where she can jump to her bar unaided. Where this gets tricky is how one makes up the difference between the required rigging points and the height of the truss. The festival initially indicated that they would drop steel cable to make up that difference, making it seem as though the performers only needed to worry about bringing their apparatus.

When we dropped off Lauren’s equipment we discovered that the realities of the rigging were slightly different than what the email correspondence had indicated. The truss had been set at a height of 27 feet, but the festival was no longer planning on dropping any cables for performers. Additionally, they told us that they only had limited access to span sets and other equipment for making up the height difference. Luckily Lauren had packed some additional span sets, and had thought through some solutions that used webbing (easily available from REI) to make up any discrepancies that might come up. This also, unfortunately, made her second-guess the specs she had sent to the festival originally, and left her wondering if she had accurately determined the correct heights for her apparatus.

Memory Measurements

Having rigged and re-rigged this apparatus in numerous venues, Lauren had a strong sense of how her equipment worked with ceilings less than 20 feet. This also meant that she didn’t have any fixed heights, and instead had lots of numbers bouncing around her head – one venue was rigged at 15.5 feet, but the ceiling was really at 17 feet; in another the beams were at 22 or 23 feet, and the apparatus had been rigged at heights between 15.5 and 17 feet; and so on and so on. Additionally, she typically rigs her own equipment, and is therefore able to make specific adjustments based on what she’s seeing and feeling in a given space. For the festival, this wasn’t a possibility. So, after the miscommunication about the rigging situation, and suddenly feeling insecure about the measurements she had sent ahead, we found ourselves talking through memories of other venues and trying to determine what height she actually needed.

Reverse engineering heights

We started by first talking through previously rigged situations – how high were the beams, how long is the apparatus, how far off the ground was she. Part of the challenge here was that this particular apparatus hangs at two different lengths because the fabric ropes stretch. This means that without a load it hangs at a different distance from the floor than with a load. While this isn’t a huge difference, it’s enough to prevent her from being able to jump to her bar if it’s rigged too high, or to put her in potential danger of smashing her feet if it’s rigged too low. While there were several things we knew, it was difficult to arrive at a hard and fast number with so many variables that were unknown or only known as a range.
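For what it’s worth, the back-of-the-envelope arithmetic we kept doing in our heads is easy enough to write down. In this sketch only the 27-foot truss and the 15.5-foot minimum rigging-point height come from the actual situation; the loop length and stretch figures are stand-ins:

```python
# Back-of-the-envelope rigging math (illustrative numbers only).
truss_height = 27.0          # ft, where the festival set the truss
min_rig_height = 15.5        # ft, minimum safe height for the rigging points

# Span sets / webbing have to make up the difference between the two.
drop_needed = truss_height - min_rig_height
print(f"make up roughly {drop_needed:.1f} ft with span sets / webbing")  # 11.5 ft

# The bar hangs below the rigging points, and the fabric loops stretch
# under load, so the loaded bar height is lower still.
loop_length = 3.0            # ft, stand-in for the unloaded loop length
stretch = 0.5                # ft, stand-in for stretch under load
bar_height_loaded = min_rig_height - loop_length - stretch
print(f"loaded bar height is about {bar_height_loaded:.1f} ft")
```

Had we written even this much down ahead of time, the half-remembered venue numbers would have mattered far less.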

Drawing it out

Ultimately what helped the most was sitting down and drawing out some of the distances and heights. While this was far from perfect, it did finally give us some reference points to point to rather than just broadly talk through. A diagram goes a long way toward providing a concrete representation of what you’re talking about, and that’s worth remembering as the real value of this process. It meant that we were suddenly able to talk about things that we knew, only remembered, or guessed. This process, however, still didn’t solve all of the problems Lauren was facing. We still had some questions about the wiggle room in our half-remembered figures, and about making sure that she would be rigged at a height that was both safe and visually impressive. Finally, after an hour of drawing, talking, and drawing again, we got to a place where we were reasonably confident about how she might proceed the next day. In thinking about this process, I realized that we could have made our lives a lot easier if we had done a little more homework before coming to the festival.

What she really needed

A Diagram

A complete drawing of the distances, apparatus, performer, rigging range, and artist-provided equipment would have made a lot of this easier. While the rigging process went without a hitch once she was in the theater, being able to send a drawing of what her apparatus looked like and how it needed to be rigged would have put us at ease and ensured that all parties were on the same page. A picture codifies concepts that might otherwise be difficult to communicate, and in our case this would have been a huge help.

A Fuller tech rider

While Lauren did send a tech rider with her submission, it occurred to us that a fuller tech rider would have helped the festival, and it would have helped us. When dealing with an apparatus that she had to jump to reach, it would have been helpful for us to know exactly how high she could jump. There’s also a sweet spot that’s not too high for this apparatus, but where Lauren still needs a boost to reach the bar; this would have been another helpful range to have known in advance. While we have a reasonable amount of rigging materials, there’s also some equipment that we don’t have. Specifying what we plan to provide, or can provide with adequate notice, would have been a helpful inclusion in the conversation she was having with the festival. In hindsight, some of the statements that should have been added to her rider include:

  • the artist can jump for heights of
  • the artist needs assistance for heights
  • the artist will provide rigging for
  • the artist requires confirmation by

What does this have to do with projectors?

Let’s face it, tech riders are not the most exciting part of the production world. That said, by failing to specify what you need and what you are planning on providing, it’s easy to suddenly find yourself in a compromising position. While the consequences are different for an aerialist vs. a projectionist, the resulting slow-down in the tech process, or the need to reconfigure some portion of a performance, are very real concerns. The closer you are to a process or installation, the more difficult it becomes to really see all of the moving parts. Our exposure to any complicated process creates blind spots in the areas that we’ve automated, set up once, or take for granted simply because they seem factual and straightforward. These are the privileges, and pitfalls, of working with the same equipment or apparatus for extended periods of time – we become blind to our assumptions about our process. Truly, this is the only way to work with a complicated system. At some point, some portion of the process becomes automated in our minds or in our practice in order to facilitate higher-order problem solving. Once my projectors are hung and focused, I don’t think about the lensing when I’m trying to solve a programming problem.

While this may well be the case when you’re on your home turf, it’s another thing entirely to think about setting up shop somewhere new. When thinking about a new venue, it becomes imperative to look at your process with eyes divorced from your regular practice, and to instead think about how someone with unfamiliar eyes might look at your work. That isn’t to say that those eyes don’t have any experience, just that they’re fresh to your system / apparatus. In this way it might be useful to think of the tech rider as a kind of pre-flight checklist. Pilots have long known that there are simply too many things to remember when looking over a plane before take-off. Instead, they rely on checklists to ensure that everything gets examined. Even experienced pilots rely on these checklists, and even obvious items get added to the list.

Knowing your equipment

Similarly, it’s not enough to just “know” your equipment. While intuition can be very useful, it’s also desperately important to have documentation of your actual specifications – what are the actual components of your machine, what are your software version numbers, how much power do you need, etc. There are always invisible parts of our equipment that are easy to take for granted, and it’s these elements that are truly important to think about when you’re setting up in a new venue. Total certainty may well be a pipe-dream, but it isn’t impractical to take a few additional steps to ensure that you’re ready to tackle any problems that may arise.

Packing your Bags

The real magic of this comes down to packing your bags. A solid rider and an inventory of your system will cover most of your bases, but good packing is going to save you. Finding room for that extra roll of gaff tape, or that extra power strip, or that USB mouse may mean that it takes you longer to pack or that you travel one bag heavier, but it will also mean a saved trip once you’re at the theatre. Including an inventory in your bags may seem like a pain, but it also means that you have a quick reference to know what you brought with you. It also means that when you’re in the heat of strike you know exactly what goes where. Diagrams and lists may not be the sexiest part of the work we do, but they do mean saved time and fewer headaches. At the end of the day, a few saved hours may mean a few more precious hours of sleep, or better yet a chance to grab a drink after a long day.

A Variety of Approaches | Lessons from Grad School


I’m learning a lot in grad school. Some of the lessons that I’m learning are consistent with my goals and aspirations, some are lessons about realigning my expectations with reality, and some are unexpected discoveries about the nature of a discipline’s approach. As an interdisciplinary student my coursework is a purposeful patchwork from multiple departments and schools. This approach means that I’m fortunate to see the world through multiple lenses, and it also means that at times I’m a servant to many masters. In my case, I’ve seen the approach of the school of Art (in my second semester I took a media and sculpture course), AME (Arts, Media + Engineering), and the school of Theatre and Film.

In thinking and talking about why we make art/sculpture/programs it seems like I’ve continually run into similar questions – questions that are rooted in the desire to find meaning, direction, or justification for the art. While one might think of this as more an ideological exercise than a useful discussion, I think there’s value in wrestling with questions of motivation and function. “Why” and “for what” help to focus the creator in the process of finding the path for a particular project. To that end, there are six statements that I’ve heard time and again in talking with other makers, performers, designers, and the like.

Six Statements of focus:

  • The act of creation is about
  • The aesthetic experience is
  • The function of the object/art/program is
  • The proof is in
  • Value is derived from
  • The meaning of the object/art/program

How a discipline finishes the above statements can help to illustrate how its practitioners are encouraged to think of the world, and their contribution to their particular field. As a disclaimer, I don’t think any of the following observations are good or bad. These are my observations about how new and developing artists in these respective fields are encouraged to think about their work, and the process of making their work.

The Artist / Sculptor’s Method

  • The act of creation is about the exploration.
  • The aesthetic experience is both in the artist’s method and in the viewer’s observation.
  • The function of the object/art/program is inconsequential; the suggestion of a function is just as powerful.
  • The proof is in the critique of the work by an outside artist who is successful.
  • Value is derived from the act of creating something new; if the art is successful or not is in some ways inconsequential so long as the artist is being pushed to deepen his/her methods and unique style.
  • The meaning of the object/art/program can be explicit, implied, or absent; this is the maker’s choice, and they are in no way bound to create a piece that has specific meaning. 

In many ways this approach is about consciously making Art with a capital A, while trying to imagine that you’re only creating art with ironic italics. There’s something of an identity crisis in this approach that almost feeds off the expectation that an audience may willingly accept impenetrable art as a sign that it must be intellectually advanced. Discussions in this environment tend to start from a place of process rather than working backwards from the intended experience. For example, my class often spent more time talking about what we were currently engaged in doing, rather than exploring what we wanted the audience to experience in seeing our work. Here it feels like the answers are hidden, and that part of the artist’s experience is finding solutions on your own. Ironically, there’s a very Randian kind of perspective to this field – a kind of rugged individualism that covets the secrets to other people’s magic tricks. There is also a quiet acceptance that good work may take a lot of time, or it may take very little. Sometimes the artist just has to spend 14 hours sanding, and that’s just a part of the work. There is some kind of hipster-zen clarity here that can be read as detachment or a general disregard for the world.

The Programmer’s Method

    • The act of creation is about novelty and newness.
    • The aesthetic experience is secondary to the methodology in the programming.
    • The function of the object/art/program, even if inconsequential, must be based on logical rules.
    • The proof is in the procedural methodology; further, the proof is in the object / program’s reliable operation.
    • Value is derived from efficiencies and brevity (of the code).
    • The meaning of the object/art/program is allowed to be absent, or so abstract as to be invisible.

The programmer’s approach is built on rules. The starting point for a creative work might be an interest in continuing to explore a particular procedure, or curiosity about how to accomplish a particular end. Some works are born out of the necessity of a project or contract. More than anything, I’ve noticed that this perspective is always grounded in the procedural steps for accomplishing a particular task. An effective program requires an understanding of the necessary pieces to accomplish a particular end. It also often requires a bit of creative problem solving in order to ensure that one isn’t stopped by hurdles.

Depending on the project, the programmer may or may not start with the aesthetic of the finished product. In many cases, before the programmer can start to address how a particular system looks, s/he first must think about how to ensure that the system is consistently producing the intended results. Unlike the Artist’s method, the programmer relies on the experience of others who have solved similar problems. Before reinventing the wheel, the programmer first tries to establish how someone else has solved the same problem – what was the most elegant solution requiring the fewest system resources? What trade-offs need to be made in order to ensure consistent, stable operation? More importantly, the programmer lives in a world characterized as a race. Lots of other programmers are all working to solve the same problem, for the same pay-day. “Perfect” comes in a distant second to “done,” and while the goal is to always have elegant solutions, having a solution always trumps not having one.

The Media (Theatrical) Designer’s Method

    • The act of creation is about conveying a message or feeling.
    • The aesthetic experience is primary to the work, and should have a purposeful relationship to the world of the production.
    • The function of the object/art/program is to help tell the story of the production or performance.
    • The proof is in the observer’s and the actor’s relationship with the media.
    • Value is derived from the purposeful connection or disconnection of the art / program / work to the world it exists inside of: the play or performance.
    • The meaning of the object/art/program can be abstract or didactic so long as it is purposeful.

The media designer is in an interesting position in the theatre. Somewhere between lights, set, and sound is the realm of the media designer. Designers for the theatre are often bound by the world of the play and how their work supports the larger thematic and idiomatic conventions of the script. More importantly, the media designer’s work must live in the same world as the performance. The media may be comprised of contrasting images or ideas; it might be in aesthetic dissonance with the world or it may be in harmony, but it always lives in the same place as the performance. This work must also consciously consider the role and placement of the audience, the relationship between the media and the performers, and the amount of liveness required for a particular performance.

Between the artist and the programmer, the media designer sometimes relies on the magic of implied causation (when the actor performs a particular gesture, a technician presses a button to cue the shift in the media, giving the audience the illusion of a direct relationship between the actor and the media), but may also need to create a system of direct causation (the actor or dancer is actually the impetus for changes in the media). Like the programmer, the media designer is also in a sort of race. The countdown to opening night is always an element of the design process. While “done” still trumps “perfect,” this question takes on a different kind of dynamic for the media designer. “Done” might be something that happens during the second or third night of tech, and ideally “perfect” happens before opening.

Personal Essay

Twenty-six of my thirty-one years have, in some way, involved performance: from community musicals where I performed alongside my mother, to gravity-defying circus performance for the Christopher Reeve Foundation. I have also worked purposefully to provide educational access for populations that have not traditionally been able to engage with the arts. In this respect it was my work for an educational outreach program in rural New Hampshire and Vermont that had a deeply resonant impact on my view of the power of arts in education. Over the course of a five-year period working for Keene State College’s Upward Bound Program I was a residential director, teacher, advisor, counselor, college-coach, and facilitator. As I transitioned to another position at Keene State, my role changed from supporting potential students to supporting college faculty and staff. In my role as Rich Media Specialist for Keene State’s Center for Engagement, Learning, and Teaching I worked as an instructional designer, Blackboard administrator, media maker, researcher, and faculty collaborator. While working full time in higher education, I also continued to develop as a performer through an ongoing circus training regimen. In thinking about graduate school I saw that I had been shaped by three distinct forces: performance-based art, technology, and a passion for teaching. I came to ASU to create a life where those three forces might co-exist in a meaningful and transformative program of study. In fact, that’s what I’ve found at ASU. In my first year I will have participated in, or contributed to (as performer, media creator, or system designer), eleven Phoenix-area productions while also having served as instructor or TA to over 350 students. My introduction to ASU has been, to say it mildly, a whirlwind of exposure to new ideas, methods, and opportunities to collaborate or participate. Especially interesting to me has been the opportunity to engage other artists in a critical dialogue about the impact, consequences, and outcomes of including digital media in live performance.

I sometimes find it difficult to know what I will be doing in the next ten days, let alone ten years. That said, my vision for a professional life after graduating from ASU does include some specific goals. Without a doubt, my work will include some element of physical computing to address the question of how to integrate real-time data from performers into the experience of seeing a theatrical production. Specifically, I plan to start a circus company with a heavy emphasis on the incorporation of traditional and generative media as elements of the performance. This involves the development of both physical apparatuses capable of capturing and transmitting meaningful data, as well as the development of applications to parse and interpret the data for playback-system integration. Further, I think this kind of work is potentially most meaningful when partnered with an educational institution where performers, media makers, and technicians can collaborate on this process. Finally, my hope is that ten years after graduation I will be in a position to spearhead the implementation of an integrated technology and circus program for the development of artists looking to transcend the traditional ideas of physical and mediated performance.

The Composite Effect

Though on first inspection it may be difficult to believe, every media consumable is an artifact born of composited forms. The composite is the leviathan that directly, or indirectly, manipulates the interpretation of cultural forms, and it will only get stranger. The current status of the composite in the creation of cultural forms can be explored by first examining its use in music, the manipulation of images (still and moving), and the growing field of augmented reality. The imminent status of the composite might best be understood by observing the futures suggested by an independent filmmaker and a large manufacturing company.

Contemporary Examples of the Composite

In The Language of New Media, Lev Manovich defines digital compositing as having a specific and well-defined meaning. Particularly, he writes that “it refers to the process of combining a number of moving image sequences, and possibly stills, into a single sequence with the help of a special compositing software…” (Manovich, Kindle ed. 136). Here Manovich is specifically talking about the use of this term in relation to the manipulation of video. This initial definition can be illustrated by the weather report on most news programming. While providing some glib observations on city life in tandem with an abridged description of the temperature, the weather reporter is often seen standing in front of some form of map. In some cases this map includes animated patterns of repeating cloud formations, often indicating where there might be rain or snow. The image of this person standing in front of a map is, in fact, a lie. The truth is that this particular reporter is standing in front of an evenly lit backdrop that is either bright green or blue. This flat color can then be removed, and in its place a still image or animation is added. Here we can see a simple example of a composite: the combination of a number (in this case two) of moving image sequences into a single sequence. While Manovich initially talks about this in relation to images, he later applies the same principles in talking about the DJ.
The DJ’s art is measured by his ability to go from one track to another seamlessly. A great DJ is thus a compositor and anti-montage artist par excellence. He is able to create a perfect temporal transition from very different musical layers; and he can do this in real time, in front of a dancing crowd. (Manovich, Kindle ed. 144)

The rise of digital authoring and distribution tools has helped to disseminate the work of the DJ beyond a single fixed set in a club. For the modern DJ, the recording studio is the computer and the distributor is the internet. The specific form born out of this type of creative act is sometimes referred to as a mash-up. While this form may be a composite of only two or three songs, many popular artists create works that are comprised of multiple samples from multiple songs. Jordan Roseman, known as DJ Earworm, describes his work by saying “what I do is take a bunch of songs apart and put them back together again in a different way. I end up with tracks called mashups.” Roseman’s work is the quintessential composite in its nature. His modular approach to disassembling music into component pieces before reassembling it into a new unified work reads as though he were quoting Manovich. In writing about the process of working with composited work Manovich notes that, “a typical new media object is put together from elements that come from different sources, these elements need to be coordinated and adjusted to fit together” (Manovich, Kindle ed. 138). The DJ’s appropriation and rearrangement of material in the pursuit of creating a new work is a variation on the theme of sampling and looping, and has largely been met with enthusiasm. Many devoted fans of specific artists attend concerts to witness the act of live mixing and compositing. Greg Gillis, known as Girl Talk, has garnered especially high praise and support for the form. Gillis is a composite artist: a musician whose work is entirely based in the deconstruction of established works so that he can reassemble the component pieces into something different.

While the mash-up is quickly finding a home in the realm of pop music, arguably the most well-established form of composite is now largely invisible. The manipulation of photographic imagery has become so commonplace that it goes unnoticed by most consumers. In this regard advertising, particularly, has become the champion of the composited work. Open any Sunday shopper advert for any corporate chain and the imagery involved is entirely created out of a number of independent images paired and layered together. This pairing and stacking of visual components is now so commonplace as to seem banal. High-end advertising is especially notorious for the use of composited works. The perfect photograph of a hamburger is a fine example of this magic: the composition is carefully framed and arranged by a food photographer; later the photo is manipulated to ensure a proper distribution of sesame seeds; another artist will color correct the image for the most appetizing shade of green in the lettuce and red in the tomato; finally a designer will move the hamburger digitally to ensure that it is being presented in an appropriately branded environment. Layer upon layer, the composited image has ceased to be a record of the actual object, and has instead become an abstract representation of what it ought to be. This is the work of the composite in advertising. Perhaps a more startling example can be found in the pages of the Ikea catalog. According to the Wall Street Journal, twelve percent of the images in the Swedish furniture distributor’s catalog were not actual photographs of real objects but were instead three-dimensional computer renderings (Hansegard). To accomplish this, a scene is first created in wire-frame with 3-D modeling software. The resulting wireframe is then painted with textures and lit in order to resemble an actual room. The WSJ goes on to note that in 2013 one out of four images used in the print catalog and online will be made exclusively of computer renderings (Hansegard). This is both astonishing in its execution and a prime example of the true power of the composite. The amalgamated form of polygons and texture is layered and manipulated until the line between real object and assembled representation becomes invisible.
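The weather-map example from earlier is simple enough to sketch in code. Assuming frames arrive as NumPy arrays, a naive green-screen composite might look like the following – the threshold is an arbitrary placeholder, and real broadcast keyers are far more sophisticated:

```python
import numpy as np

def chroma_key_composite(foreground, background, threshold=80):
    """Naive green-screen composite: wherever the foreground pixel is
    dominantly green, show the background (e.g. the weather map) instead."""
    r = foreground[..., 0].astype(int)
    g = foreground[..., 1].astype(int)
    b = foreground[..., 2].astype(int)
    is_green = (g - np.maximum(r, b)) > threshold   # mask of "backdrop" pixels
    out = foreground.copy()
    out[is_green] = background[is_green]
    return out

# e.g. composite a reporter frame over an animated map frame:
# frame = chroma_key_composite(camera_frame, map_frame)
```

Two image sequences, one mask, one combined result: Manovich’s definition in a dozen lines.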

The spread of the composite into all cultural forms is further fueled by the efforts and explorations of the technology juggernaut Google. In April of 2012 Google released one of its first promotional videos for the development of “Project Glass” (Google). The video presents a montage of moments out of the day of the subject. The video is shot entirely from a first-person perspective and demonstrates what life might be like if, instead of looking at the screen of a mobile device, that same information were accessible through a heads-up display. This additional semi-transparent layer of information is often referred to as augmented reality. Google’s device, Project Glass, is a lightweight augmented reality (AR) system worn in place of a pair of glasses. The wearable device contains a touch surface for control, a camera, wireless antennas, an onboard processor, and a prism display. The promise of AR is that it will change the existing relationship between user and screen forever. Simply put, there is no more screen as it’s conceptualized today, because the screen is potentially everywhere and everything. When using this device, every visual experience is a composited image composed of the physical world and an additional layer of data. Google’s efforts with AR aren’t the first examples of the layered-reality push. With the rise of smart phones as portable computing devices, many applications (apps) have been developed across mobile platforms for access to additional layers of information. AR applications typically take advantage of GPS data (in order to determine physical location), gyroscopic inputs (in order to determine the orientation of the device), and Internet connectivity (to populate the field of view with data). While not the immersive experience of Project Glass, AR apps have proven to be an interesting investigation into the possibilities of the composite as rendered in real time with only a mobile device.
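The geometry behind that kind of app is modest. A stripped-down sketch of how a point of interest might be projected onto the screen from GPS and compass data follows; the field of view and screen width are hypothetical, and a real app would also fold in the gyroscope for pitch and roll:

```python
import math

def poi_screen_x(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
                 screen_width_px=1080, fov_deg=60.0):
    """Map a point of interest onto the screen's horizontal axis, given
    the device's GPS position and compass heading."""
    # Bearing from the user to the POI, in degrees clockwise from north.
    d_lon = math.radians(poi_lon - user_lon)
    lat1, lat2 = math.radians(user_lat), math.radians(poi_lat)
    y = math.sin(d_lon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon))
    bearing = math.degrees(math.atan2(y, x)) % 360

    # Angular offset between where the camera points and where the POI sits.
    offset = (bearing - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None                                    # POI is off-screen
    return screen_width_px * (0.5 + offset / fov_deg)  # pixel column for the label
```

Everything past that point – pulling data over the network, drawing the label – is the same layering act as the weather map, just computed live.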

Future Implications of Compositing: Constant Composited Reality

While the promises of Project Glass might be intriguing, the future is often difficult to predict. Corning, maker of the Gorilla Glass used in many touch-screen devices, has its own vision of what the future might resemble, and in February of 2012 released a short promotional video, “A Day Made of Glass 2,” suggesting the possibilities of touch surfaces in the near future (Corning Incorporated). In their six-minute montage of the near future, nearly every surface is fabricated out of touch-sensitive transparent glass. Each interaction with a different product reveals sheets of glass that can be made transparent or opaque with a single press or swipe. In Corning’s day of glass, every surface suddenly becomes a window of composited information. Tablet computers are transparent sheets that act as windows into a world where a sea of data is waiting to be revealed. For Corning it is not enough that every surface might be a display; every surface should have the potential of being a composite display. While this may initially seem like a flight of technological fancy, it’s worth mentioning that in the summer of 2012 the display manufacturer Samsung began showcasing its transparent LCD screens (Sidev). Corning’s vision of the future seems limited only by the current economy of cost for these displays, and in many ways lacks any real revolutionary implications about the role of the composite in the consumption of information.

Beyond Corning’s slick vision of a world crafted of transparent touch surfaces, the video short made by Sight Systems offers a more interesting suggestion of what the future of displays might look like. While the short is built around a questionable storyline, the real magic happens in its representation of an augmented reality display worn as contact lenses. Sight Systems seems to suggest a vision of AR that is not simply additional layered information but instead approaches composited reality. Here the proposal isn’t merely that the world could contain additional layered information about text messages and weather, but rather that the world ought to be constructed according to the desires of the subject. A world of constantly composited visual representations. A world of constructed realities filled with notifications and advertisements ad nauseam.

Closing Thoughts

The divining of specific manifestations of future technological advancements or ideological implementations is beyond reasonable conjecture. What is not unreasonable, however, is a recognition of the influence of the composite. Today’s new media artifacts are crafted realities built upon stacked films of both visual data and invisible metadata. Digital reality and expression are predicated upon notions and expressions of the composite, and it is not presumptuous to assume that this is only the beginning. The future may well be a world of composited reality.

Works Cited

Corning Incorporated. “A Day Made of Glass 2. Same Day. Expanded Corning Vision.” Online Video Clip. YouTube. YouTube, 12 Feb. 2012. Web. 13 Dec. 2012. <http://www.youtube.com/watch?v=jZkHpNnXLB0>.
 
Eveleth, Rose. “How fake images change our memory and behavior.” bbc.com. BBC, 13 Dec. 2012. Web. 14 Dec. 2012. <http://www.bbc.com/future/story/20121213-fake-pictures-make-real-memories/1>.
illegal A.R.T. “Girl Talk.” illegal-art.net. illegal art. Web. 15 Dec. 2012. <http://illegal-art.net/girltalk/>.
Google. “Project Glass: One day….” Online Video Clip. YouTube. YouTube, 4 April, 2012. Web. 13 Dec. 2012. <http://www.youtube.com/watch?v=9c6W4CCU9M4>.
 
Hansegard, Jens. “IKEA’s New Catalogs: Less Pine, More Pixels.” wsj.com. Wall Street Journal, 23 Aug. 2012. Web. 13 Dec. 2012. <http://online.wsj.com/article/SB10000872396390444508504577595414031195148.html?mod=WSJEUROPE_business_LeadStoryCollection>.
 
Manovich, Lev. The Language of New Media. Ed. Roger F. Malina. Kindle Edition. Cambridge: The MIT Press, 2001.
Roseman, Jordan. “earworm MASHUPS.” Web. 15 Dec. 2012. <http://djearworm.com/>.
 
SidevDisplaySystems. “L’écran transparent Samsung NL22B bientôt chez SIDEV.” Online Video Clip. YouTube. YouTube, 16 Jul. 2012. Web. 13 Dec. 2012. <http://www.youtube.com/watch?feature=player_embedded&v=rZslQZ6iMgA>.
 
Sight Systems. “Sight.” Online Video Clip. Vimeo. Vimeo, 24 July, 2012. Web. 13 Dec. 2012. <https://vimeo.com/46304267>.