# presets and cue building | TouchDesigner 099

I’ve been thinking about state machines and cuing systems lately. Specifically, I haven’t found many good resources that talk new artist programmers through how to think about or wrestle with these ideas. Like many Touch programmers I’ve tried lots of different ways of thinking about this problem, and just today I saw someone post on the Facebook help group.

## from the facebook help group:

Hi, i’m working arround Matthews AME394 Simple VJ-Setup Tutorial. No Questions, but how can i do nearly the same with different blending times between the moduls. I tried a lot with getting different values out of a table DAT into the length parameter of a timerCHOP. But cannot figur out the right steps to get my goal. Any helps? this i need in a theater situation with different scenes to blend one after another with scenebuttons or only one button and a countCHOP or something else.

This challenge is so very familiar, and while there are lots of ways to solve this problem sometimes the hardest part is having an idea of where to start. Today what I want to look at is just that – where do we start? This isn’t the best solution, or the only solution – it’s just a starting point solution. It’s a pass at the most basic parts of this equation to help us get started in thinking about what the real problems are, how we want to tackle them, and how we can go about exposing the real issues we need to solve for.

So where do we start? In this simple little state machine we’re going to start with a table full of states. I’m going to keep this as simple as possible… though it might well uncover the more interesting and more challenging pieces that lie ahead.

I’m going to start with the idea that I’ve got a piece of content (an image or a movie) that I want to play. I want to apply some post process effects (in this case just black level and image inversion changes), and I want to have different transition times between these fixed states. Here the only transition I’m worrying about is one that goes from one chain of operations to another. I’m also going to stick with images for now.

So what do we need in our network to get started?!

We’re going to borrow from an idea that often gets used in these kinds of challenges, and we’re going to think of this as operating with two decks – an A deck, and a B deck. Our deck is essentially a chain of operators that allow for all of the possibilities that we might want to explore in our application. In this case I’m only working with a level TOP, but you can imagine that we might use all sorts of operations to make for interesting composition choices.

Alright, so we’re going to lay out a quick easy deck:

moviefilein > level > fit

Next we’re going to repeat this whole chain, then connect both of our fit TOPs to a cross TOP:

If you’re scratching your head at this fit TOP in the chain, that’s okay. For us, the fit TOP is going to act as our safety. This makes sure that no matter what the resolution of the incoming file might be, we always make sure that both decks have matching proportions. We’d probably want a little more thought in how this would work for an event or a show, but for now this is enough to help ensure that we don’t experience any unexpected resolution shifts during our transitions.

Next we’re going to add a simple tweening system to our network to control how we blend between states. In this case I’m going to use a constant, a speed, and a null. I need to make sure that my speed is set to clamp, and that my min and max values are 0 and 1 respectively. Right now I only have two different decks, so I don’t want to go any higher than 1 or any lower than 0.

Now we’re cooking with propane! So where do we go next?

## some simple cues

| movie_file | trans_time | blk_lvl | invert |
| --- | --- | --- | --- |
| Banana.tif | 1 | 0 | 0 |
| Butterfly1.tif | 2 | 0.12 | 1 |
| Butterfly5.tif | 5 | 0.2 | 0 |
| Mettler.2.jpg | 10 | 0.05 | 0 |
| OilDrums.jpg | 0.5 | 0.25 | 1 |
| Starfish.tif | 1 | 0 | 1 |

In this simple examination of this challenge I’m going to use a table to store our cues. In a larger system I’d probably use python storage (which is really a dictionary), but for the sake of keeping it simple let’s start with just a table. Our simple cues are organized above, and we can put all of those values into a table DAT. You’ll notice that for now I’m only worrying about file name and not path – all of these files come from the same directory so we can treat them mostly the same way. We’ll also notice that I’m thinking of my transition times in terms of seconds. All of this can, of course, be much more complicated. The trick is to sort out a very simple example first to identify pressure points and challenges before you dig yourself into a hole.
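To see what the python storage version of the same idea might look like, here’s a quick sketch – storage really is just a dictionary tucked into an operator. The cue names ('M1', 'M2') are invented for the example:

```python
# The same cues as a dictionary, ready for python storage.
# Cue names like 'M1' are invented for this sketch.
cues = {
    'M1': {'movie_file': 'Banana.tif',     'trans_time': 1, 'blk_lvl': 0,    'invert': 0},
    'M2': {'movie_file': 'Butterfly1.tif', 'trans_time': 2, 'blk_lvl': 0.12, 'invert': 1},
}

# Inside Touch you'd tuck this away and pull it back with:
#   parent().store('cues', cues)
#   cues = parent().fetch('cues')
```

The nested dictionary keys mirror the table columns above, which makes moving between the two representations painless later on.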

Okay, let’s add a table DAT to our network and copy all of our cues over.

Now that we have all of our pieces organized it is time to think through the logic of how we make this all work. For now let’s use a button, a count CHOP, and a CHOP Execute DAT. We need to make sure our button is set to be momentary, and we also need to make sure our count CHOP is set to loop – starting at 1 and ending at 6. That matches our row indices from our table DAT.

This is great Matt, but why Python?

Well, we could do a lot of this with a complex set of CHOPs and selects but these kinds of states tend to be better handled, logically at least, through written code. Python will let us explicitly describe exactly what happens, and in what order those things happen. That’s no small thing, and while it might be a little rocky to wrap your head around using Python in Touch at first, it’s well worth it in the end. So what do we write in our CHOP Execute?

## a little bit of logic | python
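The original post embeds the script here; below is a sketch of what it plausibly contained, reconstructed from the walkthrough that follows. The operator names ('cues', 'cross1', 'constant1', 'moviefilein1', 'level1', and their B deck twins) are assumptions – match them to your own network. Inside TouchDesigner op() is built in; the small stand-in classes at the top just let the logic run outside of Touch.

```python
# Stand-ins so this sketch runs outside of TouchDesigner.
# In Touch, op() is built in and none of these classes are needed.
class _Par:
    pass

class _Op:
    def __init__(self, cells=None):
        self.par = _Par()
        self.cells = cells or {}

    def __getitem__(self, key):
        # table DAT style lookup: op('cues')[row, 'column']
        return self.cells[key]

_network = {
    'cues': _Op({(1, 'movie_file'): 'Banana.tif',
                 (1, 'trans_time'): '1',
                 (1, 'blk_lvl'): '0',
                 (1, 'invert'): '0'}),
    'cross1': _Op(), 'constant1': _Op(),
    'moviefilein1': _Op(), 'level1': _Op(),
    'moviefilein2': _Op(), 'level2': _Op(),
}

def op(name):
    return _network[name]

def onValueChange(channel, sampleIndex, val, prev):
    cues  = op('cues')       # table DAT holding our cue sheet
    cross = op('cross1')     # cross TOP blending deck A and deck B
    tween = op('constant1')  # constant CHOP feeding the speed CHOP
    cue   = int(val)         # current row, driven by the count CHOP

    movie_file = cues[cue, 'movie_file']
    trans_time = float(cues[cue, 'trans_time'])
    blk_lvl    = float(cues[cue, 'blk_lvl'])
    invert     = int(cues[cue, 'invert'])

    if cross.par.index > 0.5:
        # we're seeing mostly deck B, so stage the new cue in deck A...
        op('moviefilein1').par.file = movie_file
        op('level1').par.blacklevel1 = blk_lvl
        op('level1').par.invert = invert
        # ...and drive the cross index back toward 0 over trans_time seconds
        tween.par.value0 = (1 / trans_time) * -1
    else:
        # we're seeing mostly deck A, so stage the new cue in deck B
        op('moviefilein2').par.file = movie_file
        op('level2').par.blacklevel1 = blk_lvl
        op('level2').par.invert = invert
        tween.par.value0 = 1 / trans_time
    return
```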

Uhhhhhhh… wait. What?

Okay. First off we just define a set of variables that we’re going to use. This makes our code a little easier to write, and easier to read. Next all of the action really happens in our onValueChange function.

We’re going to do all of this in a little logical if statement. If this thing, do that thing… in all the other cases, do something else.

First we check to see what our deck position is… which means that we check to see which output we’re currently seeing more of. If our cross TOP’s index is greater than 0.5 we know that we’re closer to 1, which also means we’re seeing more of deck B than deck A. That means that we want to make changes in deck A before we start transitioning. First we change our file, change all of our settings, then finally set a value in our constant CHOP. But why 1 / that value? And why multiplied by -1?

A default network runs at 60 fps. A speed CHOP fed by a constant with a value of 1 will rise by a count of 1 over 60 frames. Said another way, an input value of 1 in a default network will make our speed CHOP’s output increase by a count of one every second. If we cut that input in half we go twice as slow – a value of 0.5 will increase by a count of 1 every 2 seconds. 1 / our table value lets us think in seconds rather than in fractions while we’re writing our cues. Yeah, but what about being multiplied by -1?! Well, if we want to get back to the 0 index in our cross TOP we need a negative value feeding our speed CHOP. Multiplying by -1 here means that we don’t need to think about the order of cues in our table DAT – our bits of Python will keep us on the rails. Our else statement does all of the same things, but to our B deck. It also uses a positive value to feed our speed CHOP, since we need an increasing value.
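If the frame math feels abstract, the same idea can be checked with a quick simulation – a clamped speed CHOP is really just accumulating its input value once per frame (a sketch, assuming the 60 fps default):

```python
def simulate_tween(trans_time, fps=60):
    """Roughly how a clamped speed CHOP accumulates the value we feed it."""
    rate = 1 / trans_time   # the value our script writes into the constant CHOP
    value, frames = 0.0, 0
    while value < 1.0:
        value = min(value + rate / fps, 1.0)   # clamp at 1, like our speed CHOP
        frames += 1
    return frames / fps     # seconds it took to finish the blend

# a trans_time of 2 in the cue table takes about 2 seconds to reach a full blend
```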

There you have it, a simple cuing system.

This is great Matt, but what if I want to tween settings on that level TOP? Or any other set of complicated things?! Well, I’d bet that at this point you’ve got enough to get you started. You might use a column to indicate if you’re transitioning to a totally new cue or just to new values in the same source image. You could also choose to put your parameter values in CHOPs instead so you could manipulate them with other CHOPs before exporting them to your decks.

What if I don’t want linear transitions?! A speed is just a linear ramp! That’s okay. You might choose to use a lookup CHOP and a more complicated curve. You could even make several types of curves with animation COMPs and switch between them. Or you could use a lag CHOP to change your attack and release slopes. Or you could use a trigger CHOP, or a filter CHOP. There are lots of ways to shape curves with math, now it’s up to you to figure out exactly what you’re after.
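To make the lookup CHOP idea concrete, here’s one classic s-curve sketched in plain Python – in the network you’d bake a curve like this into a CHOP and run the speed’s 0 to 1 ramp through it:

```python
import math

def ease_in_out(t):
    """Map a linear 0-1 ramp onto a smooth s-curve: slow in, slow out."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

# 0 stays 0 and 1 stays 1, but the middle of the transition moves fastest
```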

Happy programming!

# TouchDesigner | Email | Shrink Instance

## Original Email – Mon, Jul 13, 2015 at 9:43 PM

Hey Matt!
So I’ve tried banging my brain around 10 different ways to where I’m sure it now resembles a sphereSOP – noiseSOP but I’m still coming up a little short. Naturally, I thought about your THP 494 Shape lesson but I was not able to figure out what I’m missing so I’m coming straight to teacher for some guidance.

I’m trying to recreate the attached image. I’m using a GridSOP, MetaballSOP – MagnetSOP which sort of works but I’m missing some very important part of the process. Can you have a look at this and give me a hint as to what I’m missing?

You certainly do not have to correct the work unless you want to but a point in the right direction (haha) would really help me sleep tonight!

## Reply – Tue, Jul 14, 2015 at 12:23 AM

That was a good brain teaser.

I’m including two different approaches to solve this problem. The first looks at using your magnet approach, and the second is more GPU focused.

Following your model with the magnet, you had just about nailed it – all of the information you needed was in that SOP to CHOP, it was just a matter of reformatting it. In the magnet-based approach, I grabbed the tz channel of the SOP to CHOP, scaled it, renamed it scale, merged it into your instance CHOP, then applied it to the scale parameter of your instances. Pretty right on with your existing approach. The drawback here is that the magnet SOP is very expensive – nearly 7 milliseconds by itself – bottlenecking your performance at about 30 fps. This also keeps you pretty limited in terms of the number of points you can work with – CPU bottlenecks can be tricky to work around.

So, I started to think about how I would solve this problem on the GPU, and remembered that an array of pixels is just a different kind of grid. The second approach translates a circle TOP, and then converts that to CHOP information, merges this with a SOP to CHOP (using the xyz data from a grid), and then instances from there. I was looking at over 1400 instances without a problem. The challenge you’ll encounter, however, is when you try to replicate that many source textures. I did a quick test, and things slowed down when I had that many texture instances drawn using the newer texture instance approach. I was, however, able to get performance back up to 60 FPS if I loaded a 3D Texture Array TOP, and then turned off its active parameter. Markus uses this trick often.
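The heart of the GPU-friendly version is just a mapping from pixel values to instance scales. Sketched outside of Touch (the parameter names here are invented for illustration):

```python
def pixel_to_scale(luminance, min_scale=0.0, max_scale=1.0):
    """Each pixel of the circle TOP becomes the scale of one grid instance."""
    return min_scale + luminance * (max_scale - min_scale)

# bright pixels near the circle's center make large instances,
# while dark pixels at the edges shrink their instances toward min_scale
```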

Anyway, that’s as far as I got in the little bit of time that I carved out. The next steps (in terms of mimicking your source image) would be to get the displacement right – pushing instances up and down to make an opening in the center of the array.

Alright, well I put another hour into this because I got really interested in the idea of displacement. I think this still needs a little more work to really dial it in, but it’s a solid starting point for sure.

Hope this helps.

Best,
Matthew

Look at the example file on GitHub – shrinkInstance_locked

# 2D Sliders | TouchDesigner

The other day I took a moment to write down how to change a horizontal slider into a vertical one in TouchDesigner. This got me thinking, and I realized that in addition to a good ol’ vertical slider, a 2D slider (or an XY slider) would be another handy tool to have at your disposal. Why make these yourself when there are a ton of tools already made in the TUIK tool set? Good question – there are a ton of wonderful pre-made control tools at our disposal there, but if we want to take some time to better understand how those tools work and the programming logic behind them then it’s worth taking some time to make our own versions. It’s part mental exercise, part challenge, all fun. Maybe.

For starters, why use a 2D slider anyway? There are lots of things you can do with these kinds of interface objects. These are especially handy for corner pinning objects – this control surface reports out an x and y value, which you can then scale and apply to the corner of an image. With four of these 2D sliders you now can control the four vertices of an image. More generally, these kinds of controls are useful when you want two values to come out of a single control surface. That’s all well and good, but let’s get to making one, shall we?

We’re going to start this process with a regular horizontal slider. Let’s begin by adding a Slider COMP to your network.

Eventually, we’re going to make some changes to the parent object for our slider, but for now we’re going to dive inside and re-arrange the guts of our component just a little. When you get inside of the Slider you should see a familiar set-up: a Panel CHOP, an Out CHOP, and a Container called Knob.

In another post I talked about how we can decode what’s happening in the expression in the x parameter field of our Knob. In summary, the expression is:

me.parent().panel.u.val*me.parent().par.panelw-me.par.panelw/2

In English this might read – my parent’s panel u value (.5) * my parent’s width (100) – my panel’s width / 2. Great. Why? Well by multiplying the u value (a normalized value) with the width of the panel I get my position in terms of pixels – almost. The last thing I need to do is to take into consideration the width of the knob (divided by 2 so you split its value equally left and right).
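Running the sample numbers through the expression makes the arithmetic plain (the 10 pixel knob width is an assumption – it’s the size we’ll set shortly):

```python
u       = 0.5   # normalized horizontal position reported by the panel
panel_w = 100   # parent panel width in pixels
knob_w  = 10    # knob width in pixels (assumed; we resize the knob below)

knob_x = u * panel_w - knob_w / 2
# knob_x == 45.0: the knob spans 45-55 px, so it sits centered at 50
```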

This is all excellent news, but how does it help us with a 2D slider? First we need to think about what’s happening in a 1D slider vs. a 2D slider. In our regular horizontal (or vertical) slider we’re just tracking changes in a single direction – left and right, or up and down. In a 2D slider, we want to track both of those changes – up and down and left and right at the same time. To do this, we’re going to take our existing expression, and the expression that we used in our vertical slider and use both of them.

We already have an expression that allows us to track horizontal change, so how do we change that to track vertical change? In the expression above we’re going to make the following changes – first we want to use the value “v” instead of “u” that’s coming out of our panel (If you just had a moment when you said – “u, v, what now?!” take a break and read the Wikipedia page about uv mapping). We’ll also need to look at panelh (panel height) instead of panelw (panel width). This means our new expression for watching the change in a vertical dimension is:

me.parent().panel.v.val*me.parent().par.panelh-me.par.panelh/2

So far so good? Okay, next we’re going to use this expression in the Y parameter field of the knob component. This means that we have the two following expressions that are in the x and y parameters of our knob component:

x = me.parent().panel.u.val*me.parent().par.panelw-me.par.panelw/2
y = me.parent().panel.v.val*me.parent().par.panelh-me.par.panelh/2

Okay, now we’re going to make one more change to our knob’s parameters while we’re here. Instead of this control object being a rectangle, let’s make it a square. We can notice that its current width and height are 5 and 20 respectively. Let’s change that to 10 and 10. You certainly don’t have to work with a square knob, but I like the way this looks for this type of control. We also need to make some changes to our Panel CHOP.

The panel CHOP gives us all sorts of useful information about what’s happening in a panel at any given moment. In our case we can use this CHOP to tell us the horizontal and vertical position of the knob for our slider. If we look at how our panel is currently set up we can see that it’s selecting the u value and renaming it “v1.” We want our slider to send out both u and v information, so let’s change the select to read “u v” – separating our variables with spaces tells TouchDesigner that we’d like both to be selected. We can also rename these values while we’re here. I chose to rename them xPos and yPos (x Position, y Position). Choose whatever name makes sense for you and isn’t too long – if you end up needing to call these values in another expression, having a shorter name is helpful.

Now let’s take a moment and head back up to our parent container. Here in our parent let’s change the width and height to be 100 and 100. We should now see a square panel with a square inside of it, that we can drag about.

If we connect a Null CHOP or a Trail CHOP we can see the values reported out of our new tool. Sweet.

Alright, this is pretty cool, and I’m mostly happy with it… but it’s missing something. One of the things you’ll see with other 2D sliders are guide lines that pass through the middle of the central knob – these help you visually maintain a sense of where your control object is pointed, and let’s face it, they just look cooler.

Okay, so let’s add some guide lines. Where do we start?

We can begin by diving back into our Slider component. Let’s work the easy way, and take our knob component, and make two copies of it. We can call these two new additions xWire and yWire. Instead of hard coding in the dimensions of these wires, we are instead going to use some expressions to define what they look like. Why? Well, while we certainly could do this by hard coding the numbers, it would mean that if we make any changes to the parent component, we also need to make changes to several pieces inside of the component. This works just fine if you’re only making something you want to use once, but if you want to make a component that you can use and reuse then using some expressions is going to save you a ton of time – and as a bonus you’ll learn more about using Python expressions in TouchDesigner. Enough of my soap box, let’s do this.

Let’s start with our xWire component. I’d like my guide lines to be one pixel tall (or wide for the yWire), and the same width as the parent component. To make this happen let’s use the following expression in the Width field:

me.parent().par.panelw

In plain English this reads – what is my parent component’s panel width parameter. By using this expression we know that the width of this object will always match the width of our parent. Perfect. Before you celebrate too soon, we need to add one more expression. We’d like the line that we’re drawing to stay aligned with our central knob (no really, we do want that). For this we need to keep in mind that our xWire that stretches the width of the parent component needs to move vertically – this may seem counterintuitive, but it’ll make perfect sense here in one moment. How do we do that? Well, luckily we were just practicing ways to make a knob stay aligned to a position when we made a vertical slider. We’re going to use some of the same ideas from that expression here. In the Y parameter field we’re going to use this expression:

op('panel1')['yPos'] * me.parent().par.panelh

Say what now?! In plain English we’re saying – look at the operator called “panel1” and find the value called “yPos”, multiply that by the panel height of my parent. Now we should have a working xWire component. Before we jump up to look at our 2D slider, here’s what you should have so far:

If we jump out of our 2D slider and take a look at what we have so far, we should see something like this:

This is almost what we want, right? Now we just need to repeat this process to add our yWire guideline. Let’s dive back inside of our slider to finish up. For our yWire we’re going to set the width of our component to 1 and use an expression to change the height:

me.parent().par.panelh

We’re also going to use an expression to change the X position of our component:

op('panel1')['xPos'] * me.parent().par.panelw

These are like our other expressions just calling different values. At this point you should have something that looks like this:

Now we’re finally ready to back out of our 2D slider, and admire our hard work.

Nice.

Alright, the last thing to do after you’ve done all of this hard work is to save this component as a .tox file that you can reuse. If you’ve never done that before, you can read how that works here.

# Inside Wonder Dome | TouchDesigner

In approaching some of the many challenges of Wonder Dome one of the most pressing and intimidating was how to approach programming media playback for a show with a constant media presence. One of the challenges we had embraced as a team for this project was using Derivative’s TouchDesigner as our primary programming environment for show-control. TouchDesigner, like most programming environments, has very few limitations in terms of what you can make and do, but it also requires that you know what it is that you want to make and to do. Another challenge was the fact that while our team was full of bright and talented designers, I was the person with the broadest TouchDesigner experience. One of the hard conversations that Dan and I had during a planning meeting centered around our choices of programming environments and approaches for Wonder Dome. I told Dan that I was concerned that I would end up building an interface / patch that no one else knew how to use, fix, or program. This is one of the central challenges of a media designer – how do you make sure that you’re building something that can be used / operated by another person? I wish there were an easy answer to this question, but sadly this is one situation that doesn’t have simple answers. The solution we came to was for me to do the programming and development – start to finish. For a larger implementation I think we could have developed an approach that would have divided some of the workload, but for this project there just wasn’t enough time for me to both teach the other designers how to use / program in TouchDesigner and to do the programming needed to ensure that we could run the show. Dan pointed out in his thesis paper on this project that our timeline shook out to just 26 days from when we started building the content of the show until we opened.

The question that follows, then, is – how did we do it? How did we manage to pull off this herculean feat in less than a month, what did we learn along the way, and what was an approach that, at the end of the process, gave us results that we used?

# Organization

Make a plan and stay organized. I really can’t emphasize this enough. Wonder Dome’s process lived and died in our organization as a team, and as individuals. One of the many hurdles that I approached was what our cuing system needed to be, and how it was going to relate to the script. With three people working on media, our cue sheet was a bit of a disaster at times. This meant that in our first days working together we weren’t always on the same page in terms of what cue corresponded to what moment in the play. We also knew that we were going to run into times when we needed to cut cues, re-arrange them, or re order them. For a 90 minute show with 20 media cues this is a hassle, but not an impossibility. Our 25 minute long kids show had, at the beginning, over 90 media cues.

In beginning to think about how to face this task I needed an approach that could be flexible, and responsive – fast fast fast. The solution that I approached here was to think about using a replicator to build a large portion of the interface. Replicators can be a little intimidating to use, but they are easily one of the most powerful tools that you can use in TouchDesigner. Here the principle is that you set up a model operator that you’d like subsequent copies to look like / behave like. You then use a table to drive the copies that you make – one copy operator per row in the table. If you change the table, you’ve changed / remade your operators. In the same way if you change your template operator – this is called your “Master Operator” – then you change all of the operators at once. For those reasons alone it’s easy to see how truly powerful this component is, but it also means that a change in your table might render your control panel suddenly unusable.

Getting started here I began by first formatting my cue sheet in a way that made the most sense for TouchDesigner. This is a great time to practice your Excel skills and to use whatever spreadsheet application / service that you prefer to do as much formatting as possible for you. In my case I used the following as my header rows:

• Cue Number – what was the number / name for the cue. Specifically this is what the stage manager was calling for over headset. This is also the same name / number for the cue that was in the media designer script. When anyone on the team was talking about M35 I wanted to make sure that we were all talking about the same thing.
• Button Type – Different cues sometimes need different kinds of buttons. Rather than going through each button and making changes during tech, I wanted to be able to update the master cue sheet for the replicator, and for the properties specified to show up in the button. Do I want a momentary button, a toggle, a toggle down, etc. These things mattered, and by putting these details in the master table it was one less adjustment that I needed to make by hand.
• Puppet – Wonder Dome had several different types of cues. Two classifications came to make a huge difference for us during the tech process. Puppet entrances / exits, and puppet movements. Ultimately, we started to treat puppet entrances and exits as a different classification of cue (rather than letters and numbers we just called for “Leo On” and “Leo Off”, this simplified the process of using digital puppets in a huge way for us), but we still had puppet movements that were cued in TouchDesigner. During the tech process we quickly found out that being able to differentiate between what cues were puppet movements and what cues were not was very important to us. By adding this column I could make sure that these buttons were a different color – and therefore differentiated from other types of cues.

Here I also took a programming precaution. I knew that invariably I was going to want to make changes to the table, but might not want those changes to be implemented immediately – like in the middle of a run for example. To solve this problem I used a simple copy script to make sure that I could copy the changed table to an active table when we were in a position to make changes to the show. By the end of the process I was probably fast enough to make changes on the fly and for them to be correctly formatted, but at the beginning of the process I wasn’t sure this was going to be the case. The last thing I wanted to do was to break the show control system, and then need 25 minutes to troubleshoot the misplacement of a 1 or 0. At the end of the day, this just made me feel better, and even if we didn’t need it in place I felt better knowing that I wasn’t going to break anything if I was thinking on my feet.
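That precaution can be surprisingly small. In Touch it might be a one-line script on a “push changes” button – something like op('cues_active').text = op('cues_edit').text, where both table names are invented for this sketch. Outside of Touch, the same idea with a stand-in class:

```python
class _Table:
    """Stand-in for a table DAT's .text attribute outside of Touch."""
    def __init__(self, text=''):
        self.text = text

# the edited cue sheet and the live one the replicator actually reads
cues_edit   = _Table('cue\tbutton_type\tpuppet\nM35\tmomentary\t1')
cues_active = _Table()

def push_changes():
    # the live table only changes when we deliberately ask it to
    cues_active.text = cues_edit.text
```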

Above you can see a replicator in action – looking at an example like this I think helps to communicate just how useful this approach was. Method, like organization, is just a way to ensure that you’re working in a way that’s meaningful and thoughtful. I’m sure there are other methods that would have given us the same results, or even better results, but this approach helped me find a way to think about being able to quickly implement cue sheet changes into our show control environment. It also meant that we standardized our control system. With all of the buttons based on the same Master Operator it gave the interface a clean and purposed look – staring down the barrel of a 25 show run, I wanted something that I didn’t mind looking at.

Thinking more broadly when it comes to organization, beyond just the use of replicators for making buttons I also took the approach that the show should be as modular and organized as possible. This meant using base and container components to hold various parts of the show. Communication to lighting and sound each had their own module, as did our puppets. For the sake of performance I also ended up placing each of the locations in their own base as well. This had the added bonus of allowing for some scripting to turn cooking on and off for environments that we were using or not using at any given point in the show. We had a beast of a media server, but system resources were still important to manage to ensure smooth performance.

# Show Control

Show control, however, is about more than just programming buttons. Driving Wonder Dome meant that we needed a few additional features at our fingertips during the show. Our show control system had two preview screens – one for the whole composite, and one for puppets only. One of the interesting features of working in a dome is how limited your vision becomes. The immersive quality of the projection swallows observers, which is downright awesome. This also means that it’s difficult to see where all of the media is at any given point. This is one of the reasons that we needed a solid preview monitor – just to be able to see the whole composition in one place. We also needed to be able to see the puppets separately at times – partially to locate them in space, but also to be able to understand what they looked like before being deformed and mapped onto the curved surface of the dome.

The central panel of our control system had our cues, our puppet actions, our preview monitors, and a performance monitor. During the show there were a number of moments when we had a dome transformation happening while, nearly simultaneously, a puppet was entering or exiting. While originally I was trying to drive all of this with a single mouse, I quickly abandoned that idea. Instead I created a simple TouchOSC interface to use on an iPad with my other hand. This allowed me to take a two-handed approach to driving the media – it added some challenge, but paid itself back tenfold with a bit of practice. This additional control panel also allowed me to drive the glitch effects that were a part of the show. Finally it also made for an easy place to reset many of the parameters of various scenes. In the change over between shows many elements needed to be reset, and by assigning a button on my second interface for this task I was able to move through the restore process much faster.

If you’d like to learn more about using TouchOSC with TouchDesigner there are a few pages that you might take a glance at here:

# Cues

Beyond creating a system for interacting with TouchDesigner, a big question for me was how to actually think about the process of triggering changes within my network. Like so many things, this seems self evident on the face of it – this button will do that thing. But when you start to address the question of “how” then the process becomes a little more complicated. Given the unstable nature of our cue sheet, I knew that I needed a name-based approach that I called from a central location. Similar to my module based approach for building the master cue sheet, I used the same idea when building a master reference sheet.

With a little push and guidance from the fabulous Mary Franck, I used an evaluate DAT to report out the state of all of the buttons from the control panel, and name them in a way that allowed for easy calling – specifically I made sure that each cue maintained its letter and number naming convention from our cue sheet.

On the face of this it seems like that’s an awful lot of scripts to write – it is, but like all things there are easier and harder ways to solve any problem. My approach here was to let Google spreadsheets do some work for me. Since the cue sheet was already set up as a spreadsheet, writing some simple formulas to do the formatting for me was a quick and easy way to tackle this. It also meant that with a little bit of planning my tables for TouchDesigner were formatted quickly and easily.

It was also here that I settled on using a series of Execute DATs to drive the cooking states of the various modules to control our playback performance. I think these DATs were some of the hardest for me to wrap my head around – partially because this involved a lot of careful monitoring of our system’s overall performance, and the decisions and stacking necessary to ensure that we were seeing smooth video as frequently as possible. While this certainly felt like a headache, by the time the show was running we rarely dropped below 28 frames per second.

If you want to read a little more about some of the DAT work that went into Wonder Dome you can start here:

# Communication

All of the designers on the Wonder Dome team had wrestled with the challenges of communication between departments when it comes to making magic happen in the theatre. To this end, Adam, Steve, and I set out from the beginning to make sure that we had a system for lights, media, and sound to all be able to talk with one another without any headache. What kinds of data did we need to share? To create as seamless a world as possible we wanted any data that might be relevant for another department to be easily accessible. This looked like different things for each of us, but talking about it from the beginning ensured that we built networks and modules that could easily communicate.

In talking with lighting, one of our thoughts was about passing information relative to the color of the environment that we found ourselves in at any given point. To achieve this I cropped the render to a representative area, took the average of the pixel values in that area, then converted the texture data to channel data and streamed the RGBA values to lighting over OSC. We also made a simple crossfader in our stream for the times when we wanted the lighting in the scene to be different from the average of the render.
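In operator terms that chain was roughly Crop TOP → TOP to CHOP → OSC Out. The averaging step itself is simple enough to sketch in plain Python – the pixel values, region size, and OSC address here are all invented for illustration:

```python
# Average the RGBA values of a cropped region of the render.
# Each pixel is an (r, g, b, a) tuple of floats in 0-1, as in a TOP.
def average_rgba(pixels):
    n = len(pixels)
    sums = [0.0, 0.0, 0.0, 0.0]
    for px in pixels:
        for i in range(4):
            sums[i] += px[i]
    return tuple(s / n for s in sums)

# A 2x2 sample of the cropped region (hypothetical values).
region = [(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0),
          (0.0, 0.0, 1.0, 1.0), (1.0, 1.0, 1.0, 1.0)]
r, g, b, a = average_rgba(region)
# These channels would then stream to lighting over OSC,
# e.g. on an address like /media/wash/rgba (address is made up here).
```

In the actual network the TOP to CHOP does this averaging for free once the render is cropped down – the sketch is just the arithmetic laid bare.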

This technique was hardly revolutionary, but it did create very powerful transitions in the show and allowed media to drive lighting for the general washes that filled the space. This had the added benefit of offloading some programming responsibility from lighting. While I had done a lot of work in the past to coordinate with sound, I hadn’t done much work coordinating with lights. In fact, this particular solution was one that we came up with one afternoon while we were asking questions like “what if…” about various parts of the show. We knew this was possible, but we didn’t expect to solve this problem so quickly and for it to be so immediately powerful. Through the end of the run we continued to consistently get positive audience response with this technique. Part of the reason this solution was so important was because Adam was busy building a control system that ultimately allowed him to control two moving lights with two wacom tablets – keeping the wash lighting driven by media kept both of his hands free to operate the moving lights.

The approach to working with sound was, of course, very different from working with lights. Knowing that we wanted to use spatialized sound for this show Stephen Christensen built an incredible Max patch that allowed him to place sound anywhere he wanted in the dome. Part of our conversation from the beginning was making sure that media could send location data about puppets or assets – we wanted the voice of the puppeteers to always be able to follow the movement of the puppets across the dome. This meant that I created an OSC stream for sound that carried the location of the puppets, as well as any other go or value changes for moments where sound and media needed to be paired together.

Communicating with sound wasn’t just a one way street though. Every day the Wonder Dome had a 90 minute block of free time when festival visitors were allowed to explore the dome and interact with some of the technology outside of the framework of the show. One of the components that we built for this was a 3D environment that responded to sound, animating the color and distribution of objects based on the highs, mids, and lows from the music that was being played. Here sound did the high, mid, low processing on its end, and then passed me a stream of OSC messages. To get a smoother feel from the data, I used a Lag CHOP before using it to drive any parameters in my network.
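A Lag CHOP is, at its heart, an exponential smoother. Here’s a minimal sketch of that idea in plain Python – the coefficient and sample values are arbitrary, not what the show used:

```python
# One-pole smoothing, roughly what a Lag CHOP does to a jumpy channel.
def lag(samples, coeff=0.5):
    """coeff in (0, 1]; smaller values smooth more heavily."""
    out = []
    prev = samples[0]
    for s in samples:
        # Move part of the way toward each new sample instead of jumping.
        prev = prev + coeff * (s - prev)
        out.append(prev)
    return out

# A jumpy "lows" band from the sound analysis (values invented).
lows = [0.0, 1.0, 0.0, 1.0]
smoothed = lag(lows)
```

The smoothed channel still follows the music, it just stops snapping between extremes – which is exactly what you want before driving the color or position of anything on screen.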

# Components and Reuse

Perhaps the most important lesson to be learned from this project was the importance of developing solid reusable components. This, again, isn’t anything revolutionary but it is worth remembering whenever working on a new project. The components that you build to use and reuse can make or break your efficiency and the speed of your workflow. One example of this would be a tool that we created to make placing content on the dome easier. Our simple tool for moving images and video around the dome was used time and again throughout the project, and if I hadn’t taken the time early on to create something that I intended to reuse, I would have instead spent a lot of time re-inventing the wheel every time we needed to solve that problem.

In addition to using this placement tool for various pieces of media in the show, this is also how we placed the puppets. During the development phase of this tool I thought we might want to be able to drive the placement of content from an iPad or another computer during tech. To make this easier, I made sure that there was a mechanism embedded in the tool to allow for easy control from multiple inputs. This meant that when we finally decided to adapt this tool for use with the puppets, we already had a method for changing their location during the show. There are, of course, limits to how much anyone can plan ahead on any project, but I would argue that taking the time to really think about what a component needs to do before developing it makes good sense. I also made use of local variables when working with components in order to make it easier to enable or disable various pieces of the tool.

I nearly forgot to mention one of the most critical parts of this process: documentation and commenting. If I hadn’t commented my networks I would have been lost time after time. One of the most important practices to develop and to continue is good commenting. Whenever I was working on something that I couldn’t understand immediately just by looking at it, I added a comment. I know that some programmers use the ability to insert comments with individual operators, but I haven’t had as much success with that method. Personally, I find that inserting a text DAT is the best way for me to comment. I typically write in a text editor using manual carriage returns. I also make sure that I date my comments, so if I make a change I can leave the initial comments and then append them with new information. I can’t say enough about the importance of commenting – especially if you’re working with another programmer. Several times during the process I would help lighting solve a problem, and good commenting helped ensure that I could communicate the important details about what was happening in the network to the other programmer.

I think it’s also important to consider how you document your work. This blog often functions as my method of documentation. If I learn something that I want to hold onto, or something that I think will be useful to other programmers, then I write it down. It doesn’t do me any good to solve the same problem over and over again – writing down your thoughts and process helps you organize your approach. There have been several times when I’ve found shortcuts or new efficiencies in a process only while writing about it – the act of taking it all apart to see how the pieces connect makes you question what you did the first time and whether there’s a better way. At times it can certainly feel tedious, but I’ve also been served time and again by the ability to return to what I’ve written down.

# TouchDesigner | These are the DATs you’ve been looking for

If you’re new to TouchDesigner, it’s easy to feel like DATs are a hard nut to crack. This is especially true if you’re also new to programming in general. Scripting can be daunting as you’re getting started, but it’s also incredibly important – take it from someone who is still learning, dat by dat.

So what’s the big deal about DATs anyway? Better yet, why should you care? DATs can help in all sorts of ways, but let’s look at a concrete example of how they can help solve some interesting problems that you might face if you’re out to save some information to use later.

As the Wonder Dome team has been busy building interfaces, programming methods, and performance tools we’ve hit countless situations where being able to save some data for later use is absolutely necessary.

Our lighting designer, Adam Vachon, wants to be able to mix color live during a rehearsal and then record that mix into a cue later. Better yet, he might want to create a cue sheet with all of that data saved in a single table so he can quickly recall it during tech. Over in media, we want to be able to place video content in lots of different places across the dome with varying degrees of visual effects applied, and we also want to be able to record that data for later recall.

DATs are a wonderful solution for this particular problem. With a few DATs and some simple scripts we can hold onto the position of our sliders to use later. Let’s take a look at how we can make that happen.

First let’s look at a simple problem. I want to be able to add the values from one table to the bottom of another table. If you’re new to programming, this process is called appending. We can see an example of this if we look at two different tables that we want to add together.

Here we have two tables, and we’d like to combine them. We can do this by writing a simple script that tells TouchDesigner to take the contents of cells from table2 and to add them to table1 in a specific order. One of the things that’s important to understand is how tables are referenced in TouchDesigner. One of the ways that a programmer can pull information from a cell is to ask for the data by referencing the address of the cell. This is just like writing a formula in something like Google Spreadsheets or Excel – you just need to know the name of the cell that you want information from. Let’s take a look at how the addressing system works:

Take a moment to study table3 and you’ll be referencing cells in a flash. It’s just rows and columns, with the only catch that the numbering system starts at 0. Cool, right? Okay, so if we want to write our script to append cells from one table to another we’re going to use this format:
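If you don’t have TouchDesigner in front of you, a nested list is a workable mental model for this addressing – the table contents below are invented for illustration:

```python
# A table DAT is addressed as [row, column], both starting at 0.
table3 = [
    ["name", "x",  "y"],   # row 0 (often a header)
    ["tree", "10", "20"],  # row 1
    ["rock", "15", "5"],   # row 2
]

# op('table3')[1, 0] in TouchDesigner is analogous to:
cell = table3[1][0]   # row 1, column 0
```

The only habit to build is remembering that the first row and first column are both index 0, not 1.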

n = op('table1')
m1 = op('table2')[0,0]
m2 = op('table2')[0,1]
m3 = op('table2')[0,2]

n.appendRow( [ m1, m2, m3 ] )

So what’s happening here? First we’re defining table1 as a variable we’re calling n. Next we’re naming three new variables m1, m2, and m3. These correspond to the data in the first row of table2, in columns 0, 1, and 2. The next operation in our script is to append n (that’s table1) with a new row using the values m1, m2, and m3 in that order. You might decide that you want these added to n in a different order, which is easy, right? All you have to do is change the order in which you’ve listed them – try making the order of variables in the brackets [ m2, m1, m3 ] instead to see what happens. Alright, at this point our network should look like this:

Now, to run our script we’re just going to right click on text3, and select “Run Script” from the contextual menu.

Great! Now we’ve successfully appended one table with data from the first row of another table.

If you’re still with me, now we can start to make the real magic happen. Once we understand how a script like this works, we can put it to work to do some interesting tasks for us. Let’s look at a simple example where we have three sliders whose data we want to be able to save.

To get started, let’s make three slider COMPs, and connect them to a merge CHOP.

Now let’s add a CHOP to DAT, and export the Merge CHOP to it.

The chopto DAT is a special kind of operator that allows us to see CHOP data in DAT format. This converts our CHOP into a table of three floats. At this point you can probably guess where we’re headed – we’re going to use the simple script that we just wrote to append the contents of our chopto to another table. Before we get there, we still need to get a few more ducks in a row.

Next let’s create a table with one row and three columns. Name these columns anything you want, in my case I’m going to call them (rather generically) Value 1, Value 2, and Value 3. I’m also going to create a big empty table, and finally I’m going to connect both of these with a merge DAT. Why two tables? I want my first table to hold my header information for the final table. This way I can clear the whole table of saved floats without also deleting the first row of my final table.

As a quick reminder, the names of your DATs are going to be very important when we start to write our script. The names of our DATs are how we identify them, and consequently how we point TouchDesigner to the data that we want to use.

Next I’m going to add a button COMP to my network, and a panel execute DAT. In the panel execute DAT I’m going to make sure that it’s looking at the operator button1 and watching for the panel value select. I’m also going to make sure that the On to Off box is checked – this tells the DAT when to run the script. Next I’m going to slightly alter the script we wrote earlier so that it’s right for our tables here. I’m also going to make sure that the script is in the right place in the DAT. Take a closer look at the example below to see how to format your DAT.
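For reference, here’s the shape of that logic as a sketch that runs outside TouchDesigner – since op() only exists inside Touch, a tiny stand-in table class is used here, and all of the names are illustrative rather than the exact script from the example file:

```python
# Minimal stand-in for a table DAT so the append logic can run here;
# inside TouchDesigner you would use op('table1') and op('chopto1')
# directly instead of this class.
class FakeTable:
    def __init__(self, rows=None):
        self.rows = rows or []

    def appendRow(self, cells):
        # Table DATs store text, so values become strings.
        self.rows.append([str(c) for c in cells])

def on_off_to_on(target, slider_values):
    """What the panel execute DAT's script does on the button release:
    grab the current slider values and append them as a new row."""
    target.appendRow(slider_values)

table1 = FakeTable()
on_off_to_on(table1, [0.25, 0.5, 0.75])  # pretend chopto contents
```

Every press of the button adds one more row, which is exactly the cue-recording behavior we’re after.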

Alright, now it’s time for DAT table magic. At this point you can make your sliders and button viewer active, and you’re ready to make changes and then record slider states. Happy appending.

In case you still have questions you can take a closer look at my example here – record_method_example.

# Multiple Windows | TouchDesigner

For an upcoming project that I’m working on our show control needs to be able to send out video content to three different projectors. The lesson I’ve learned time and again with TouchDesigner is to start by looking through their online documentation to learn about what my options are, and to get my bearings. A quick search of their support wiki landed me on the page about Multiple Monitors.

To get started I decided to roll with the multiple window component method – this seemed like it would be flexible and easy to address out the gate. Before I was ready for this step I had to get a few other things in order in my network. Ultimately, the need that I’m working to fill is distortion and blending for the interior surface of a dome using three projectors that need to warp and edge blend in real time. First up on my way to solving that problem was looking at using a cube map in order to address some of this challenge. In this first network we can see six faces of a cube map composited together, exported to a phong shader, and then applied to a dome surface which is then rendered in real time from three different perspectives.

A general overview of the kind of technique I’m talking about can be found here. The real meat and potatoes of what I was after in this concept testing was in this part of the network:

Here I have three camera components driving three different Render TOPs, which are in turn passing to three Null TOPs that are named P1, P2, and P3 – projector 1 – 3. As this was a test of the concepts of multiple monitor outs, you’ll notice that there isn’t much difference between the three different camera perspectives and that I haven’t added in any edge blending or masking elements to the three renders. Those pieces are certainly on their way, but for the sake of this network I was focused on getting multiple windows out of this project.

If we jump out of this Container Comp we can see that I’ve added three Window Components and a Button to my network. Rather than routing content into these window elements, I’ve instead opted to just export the contents to the window comps.

If we take a closer look at the parameters of the Window Comp we can see what’s going on here in a little more detail:

Here we can see that I’ve changed the Operator path to point to my null TOP inside of my container COMP. Here we can see that the path is "/project1/P1". The general translation of this pathway would be "/the_name_of_container/the_name_of_the_operator". Setting Operator path to your target operator will export the specified null when the window is opened, but it will not display the contents of the null in the node itself. If you’d like to see a preview of the render on the window node, you’ll also need to change the node pathway on the Common Page of the Window Comp. Here we can see what that looks like:

Finally, I wanted to be able to test using a single button to open and close all three windows. When our media server is up and running I’d like to be able to open all three windows with a single click rather than opening them one window comp at a time. In order to test this idea, I added a single button component to my network. By exporting the state of this button to the “Open” parameter of the window on the Window Page I’m able to toggle all three windows on and off with a single button.

# TouchDesigner | Import from a System Folder

One of the handy building blocks to have in your arsenal as you’re working with any media system is an understanding of how it works with system folders. Isadora, for example, pulls assets from the folder specified when you load the original file. This means that you can change an asset, save it with the same name, and by placing it in the proper system folder your changes show up without having to re-import any files. What then, if I want to automatically pull files from a folder on my computer? In TouchDesigner that’s a relatively simple thing to accomplish without too much work. Better yet, the underlying principles open up a lot of much more interesting possibilities.

Alright, let’s start with a simple problem that we can solve with TouchDesigner. I want to build a simple patch where I can use two control panel buttons to navigate through a list of images (or movies) in a specified folder – essentially I want to be able to add my photos or videos to a folder on my computer, and then display them in TouchDesigner without having to drag them into the network or having to add a Movie-In TOP.

Let’s start by looking at the full network.

Our system is made of three strings – a series of CHOPs, a series of DATs, and a TOP. We’re going to use two buttons to control a series of channel operators to give us a number. We’re going to use this number to select a specific file from a folder, and then we’re going to use that file name in a Movie-In Texture Operator to be displayed.

Button – Trigger – Count – Math – Null

First things first, the important place to start in this particular string is actually in the middle at the Count CHOP. The Count CHOP, in this case, is going to watch for a change in a signal and use that as a mechanism for determining when to increment or reset. There are lots of different ways to count, but in this particular case the Count CHOP is an excellent method for incrementing from a button press. Alright, so if we know that we’re going to start with a count, then we don’t actually need the Trigger CHOP – especially as I’m going to watch a Button for a signal change. That said, the Trigger CHOP might be handy if I’m watching something that doesn’t have a specific binary, or a signal where I want a certain threshold to act as the trigger. You’ll notice that my Trigger is bypassed, but I’ve left it in my network just in case I want to use a different kind of triggering method. Alright, with some of those things in mind, let’s move back to the beginning of this string.

First off I start by adding two Button COMPs to my network (If you want to learn more about Buttons and Control Panels Start here). Zooming into the button COMP I can set the text for the button itself. First I want to find the Text TOP labeled “bg.” Then I want to open the parameter window for this TOP and change the Text to “Forward” (you can name this whatever you want, I just wanted to know that this button was going to move me forward through my set of photos).

We’ll do the same thing for the other button, and in my case I changed both its name and its color. You can do both from the Text TOP in your second Button COMP.

Moving back out of the Button COMP I’m going to make a few other changes to my buttons. I want to change their position on the control panel, and I want to make them a little larger to accommodate the longer words inside of them. To do this I’m going to open up the parameter window of the Button COMPs and change the width of both to 66. I’m also going to change the Y location of button labeled “Forward” to 55 to make sure that my buttons aren’t on top of one another.

Before moving on, I also want to change the button Type for both of these COMPs to “Momentary.”

Next I’m going to connect my Button called “forward” to the top inlet on my Count CHOP, and the Button “reset” to the middle inlet on the Count CHOP. This means that every time I click the forward button I’ll increment the count up by 1, and when I click reset it will set the counter back to 0.

In my Math CHOP the only change that I’m going to make is to Pre-Add 1 to the integer coming from my count. I’m going to do this because TouchDesigner counts the first row of a table as 0. When we get to our DATs we’ll see that row 0 in our table is actually the header of the table. While I could tell TouchDesigner not to show the header, that doesn’t feel like the best solution – especially as I want to be able to see what the columns are referring to.
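The whole CHOP string – count, reset, pre-add – boils down to a small state machine. Here’s a plain Python sketch of that logic (a rough analogy for thinking it through, not the network itself):

```python
# Button - Count - Math(+1) - Null, as a little state machine.
class FileSelector:
    def __init__(self):
        self.count = 0

    def forward(self):
        # The "Forward" button: increment the count by 1.
        self.count += 1

    def reset(self):
        # The "Reset" button: back to zero.
        self.count = 0

    @property
    def row_index(self):
        # The Math CHOP's Pre-Add 1: row 0 of the folder table is the
        # header, so the first file lives at row 1.
        return self.count + 1

sel = FileSelector()
sel.forward()
sel.forward()
```

After two presses of Forward the selector points at row 3 of the table – two files in, once you account for the header offset.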

Last but not least, I’m going to end this string of CHOPs in a Null. If you’re new to TouchDesigner, it’s important to know that ending a string in a Null can be a true life saver. Ending in a Null allows you to make changes upstream of the Null, without any headache. Let’s imagine in a complicated network that you’ve connected 20 or 50 different operators to a CHOP. You decide that you want to change something in your string, and you delete the final CHOP (the one that’s connected to everything) leaving you to reconnect all of those patch cords in order to get your network back up and running again. The better solution is to end the string in a Null, and then connect the Null to those 20 or 50 or 1000 other operators. Changing things upstream of the Null is easy, especially as you don’t have to reconnect all of your patch cords again.

Now let’s take a look at our DAT stream.

(Text) – Folder – Select – Null

Here the Text DAT isn’t an essential part of my network, at least in terms of operation. As a general rule of thumb, I try to add comments to my networks as I’m working in them in order to make sure that I can remember how I did something. I highly recommend having some kind of commenting system, especially as this can prove to be a helpful tool to communicate with collaborators about what you’re doing with a specific part of a network. All of that to say, that we can ignore my first Text DAT.

The Folder DAT will pull the contents of a specified system folder. Ultimately, I’ll need the pathway of the files in this folder. I can set the Folder DAT to give me all of that information, and specify if I want it to include the content of sub folders, or the names of sub folders. In my case I only want the names and locations of the files in this single folder.

Next I want to select a single line out of this whole table. To do this, we’re going to connect the Folder DAT to a Select DAT.

First up I want to set this DAT to extract Rows by Index. I want to make sure that both Include First Row and Include First Column are turned off. Next I want to export the Null CHOP to both the Start Row Index and the End Row Index. Finally, I want to set the Start Column Index to 5, and the End Column Index to 5 – I’m telling the Select DAT that I’m only interested in the sixth column in this table (but you set it to 5, what’s the deal? Remember that 0 counts as a column, so when you’re counting columns in your table, 0 is the new 1). I’m also telling it that I only want to see the row that’s specified by the CHOP string.
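As a plain Python sketch of that selection – the folder contents and the column layout below are invented, so check your own Folder DAT to confirm which column actually holds the path:

```python
# A stand-in for the Folder DAT's table: row 0 is the header, and
# (in this invented layout) column 5 holds the full file path.
folder_table = [
    ["name", "basename", "extension", "type", "size", "path"],
    ["a.jpg", "a", "jpg", "file", "1024", "C:/photos/a.jpg"],
    ["b.jpg", "b", "jpg", "file", "2048", "C:/photos/b.jpg"],
]

row_index = 2      # from the CHOP string (count + 1)
column_index = 5   # the path column
selected_path = folder_table[row_index][column_index]
```

The Select DAT is doing exactly this lookup for us: one row index from the CHOPs, one fixed column index, one cell out.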

Last but not least, I’m going to pass this to a Null.

Finally, let’s add a Movie-In TOP to our network. Now, in the Movie-In parameter dialog we’re going to do a little typing. We need to tell the Movie-in TOP that it should reference a cell in a table in a DAT. Easy. Right? Sure it is. We’re going to first click the little plus that shows up as your pointer hovers over the left side of the parameter window.

Next we’re going to toggle the field from a constant into an expression by clicking on the little blue box.

Finally, we’re going to write a short expression to tell TouchDesigner where to look for the pathway of a file to load. Our expression looks like this:

op('null1')[0,0]

If we were writing that in English it would read – look at the operator named “null1” and find the contents of the cell at row 0, column 0. It’s important to note that the name in single quotes should be the name of the DAT Null that we created before, otherwise this won’t act as expected.

There you have it. We’ve now made a network that will allow us to load the contents of a folder, sequentially based on how we use two buttons in our control panel.

# WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis. A fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection, a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases boundaries between the performer and the audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid October of 2013 the team of designers and our performer had our first workshop weekend where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16 foot diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that’s working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on using a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16 foot diameter dome to build we opted to only purchase the hub pieces for this structure, and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their use period that we were able to have for free. Our only hiccup here was that the parachute was multi-colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

# Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about the challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines that are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short throw lensing. Our three HD projectors at a 16 foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

## TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges in the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer both build the content, and build the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software that we test drove as a development platform was Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs, as well as with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

## TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that we needed him to have some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backwards through a media stack, zoom, and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be overstated, and it was thrilling to have something so quickly accessible.

## Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackle it. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog.)

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.

## Thoughts from the Participants:

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that an Ultra Wide Band RFID Real Time Location System may be the answer, but such systems are expensive. That said, I am now in communications with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.
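The addressing question is mostly modular arithmetic: DMX512 carries 512 channels per universe, so a multi-universe rig has to map each fixture’s absolute channel onto a (universe, channel) pair. As a rough sketch of the mapping a patch would need to do (the function name and its 1-based, 0-indexed-universe conventions are illustrative assumptions):

```python
CHANNELS_PER_UNIVERSE = 512  # fixed by the DMX512 standard

def patch(absolute_channel):
    """Map a 1-based absolute channel to (universe, channel-in-universe).

    Universes are numbered from 0; channels within a universe run 1-512.
    """
    universe = (absolute_channel - 1) // CHANNELS_PER_UNIVERSE
    channel = (absolute_channel - 1) % CHANNELS_PER_UNIVERSE + 1
    return universe, channel

patch(1)    # -> (0, 1)
patch(513)  # -> (1, 1): first channel of the second universe
```

The fiddly part in practice is less the arithmetic than keeping fixture footprints from straddling a universe boundary, which is where a patching tool earns its keep.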

How do you plan to overcome some of these challenges?

Boot Camping my MacBook Pro will help in the short term, computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge, but as I try to internally catalogue what resources or skills I am utilizing in this process, patience has so far been one of the major ones. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, but they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, directing audience attention in an immersive space I knew would be a challenge, mostly due to the fundamental shape of the space and audience relationship. Working with such limitations for media and lights is extremely difficult in regard to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.