TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to play back pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt a little clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

[Image: Screenshot_011514_125716_AM]

First things first let’s take a quick inventory of the operators that we’re using:

Button COMP – this acts as the trigger for our animation.
Animation COMP – this component holds four channels of information that will drive our torus.
Trail CHOP – I’m using this to get a better sense of what’s happening in the Animation COMP.
Geometry COMP – this holds the 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation COMP. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is at the wiki. We can also dive into the Animation COMP and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right-clicking on the Animation COMP and selecting “Edit Animation…” from the pop-up menu.

[Animation: open animation editor]

We should now see a new window at the bottom of the screen that looks like a time-line.

[Image: Screenshot_011614_113551_PM]

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

[Image: Screenshot_011614_114525_PM]

Next I want to add three channels for some object translation. This time I’m going to type the following into the Names Field “trans[xyz]”.

[Image: Screenshot_011614_114908_PM]

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

[Image: Screenshot_011614_115144_PM]

Let’s keyframe some animation to get started, and if we want to change things we can come back to the editor. First, click on one of your channels so that it’s highlighted. Now along the time line you can hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can then translate them up or down in the animation editor, change the attack of their slope, as well as their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

[Image: Screenshot_011614_115803_PM]

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

[Image: Screenshot_011614_115829_PM]

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10 second window. Coming from any kind of animation workflow, the idea of a frame-based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.
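
If you want to double-check that arithmetic, it’s just the frame count divided by the frame rate. A quick sketch in Python, using the values shown above:

# duration of the default timeline: frames 1 through 600 at 60 FPS
start_frame, end_frame, fps = 1, 600, 60
duration_seconds = (end_frame - start_frame + 1) / fps
print(duration_seconds)  # 10.0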

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

[Image: Screenshot_011714_121051_AM]

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

[Image: Screenshot_011714_122911_AM]

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.
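
As a side note, if you’d rather make these changes from a script than from the parameter dialog, here’s a minimal sketch. The menu tokens ‘sequential’ and ‘hold’ are my assumption and may differ between TouchDesigner builds, so roll over the parameter labels to confirm their Python names:

# switch the Animation COMP to sequential playback and hold the final value
anim = op('animation1')
anim.par.playmode = 'sequential'  # Play Mode -> Sequential (token assumed)
anim.par.trimright = 'hold'       # Range page: Trim Right -> Hold (token assumed)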

[Animation: trail animation]

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button COMP to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last, but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation COMP click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

[Image: Screenshot_011714_123836_AM]

Now when you click on your button you should trigger your animation.
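
As an aside, the reference expression isn’t the only way to wire this up. A CHOP Execute DAT watching button1/out1 could pulse the cue instead. This is only a sketch; ‘cuepulse’ as the Cue pulse parameter’s Python name is my assumption, so verify it on your build:

# CHOP Execute DAT callback: fires when the button channel flips from 0 to 1
def onOffToOn(channel, sampleIndex, val, prev):
    op('animation1').par.cuepulse.pulse()  # parameter name assumed
    return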

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next let’s jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

[Image: Screenshot_011714_124226_AM]

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the transform and the noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flag on the torus1 SOP, and turn them on for the noise1 SOP.
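
If you prefer to set those flags programmatically, here’s a short sketch; the paths assume the SOPs live inside geo1, as they do in this example:

# turn display/render off on the source torus, and on for the final noise SOP
op('geo1/torus1').display = False
op('geo1/torus1').render = False
op('geo1/noise1').display = True
op('geo1/noise1').render = True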

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the Transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op("../animation1/out")["transz"]
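
A quick note on that path: the same channel can be reached relatively or absolutely. The absolute version below is a sketch that assumes this network lives at /project1, so adjust it to match where your operators actually live:

op('../animation1/out')['transz']         # relative: up one level, then down
op('/project1/animation1/out')['transz']  # absolute path (location assumed)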

This relative path tells the Transform SOP to go up one level to this object’s parent, find the operator named “animation1”, and pull the channel named “transz”. Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

[Image: Screenshot_011714_125740_AM]
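
It’s also worth doing the math on those multipliers. The Transform SOP’s rotate values are in degrees, so at 60 FPS the expressions above work out to a slow, steady tumble:

# rotation speed implied by the expressions above, assuming a 60 FPS timeline
fps = 60
rx_deg_per_sec = 0.1 * fps                 # 6 degrees per second on rx
ry_deg_per_sec = 0.3 * fps                 # 18 degrees per second on ry
rx_seconds_per_rev = 360 / rx_deg_per_sec  # one full x revolution per minute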

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op("../animation1/out")["noise"]

When you’re done your noise SOP should look something like this:

[Image: Screenshot_011714_010238_AM]

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation both in the Trail CHOP and in our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation, all we need to do is go back to editing the Animation COMP and adjust our key frames.

[Animation: geo animation]

There you go, now you’ve built an animation sequence that’s rendered in real time, and triggered by hitting a button.

Multiple Windows | TouchDesigner

For an upcoming project that I’m working on our show control needs to be able to send out video content to three different projectors. The lesson I’ve learned time and again with TouchDesigner is to first start by looking through their online documentation to learn about what my options are, and to get my bearings. A quick search of their support wiki landed me on the page about Multiple Monitors.

To get started I decided to roll with the multiple window component method – this seemed like it would be flexible and easy to address out of the gate. Before I was ready for this step I had to get a few other things in order in my network. Ultimately, the need that I’m working to fill is distortion and blending for the interior surface of a dome using three projectors that need to warp and edge blend in real time. First up on my way to solving that problem was looking at using a cube map in order to address some of this challenge. In this first network we can see six faces of a cube map composited together, exported to a phong shader, and then applied to a dome surface which is then rendered in real time from three different perspectives.

[Image: Screenshot_121913_105543_PM]

A general overview of the kind of technique I’m talking about can be found here. The real meat and potatoes of what I was after in this concept testing was in this part of the network:

[Image: Screenshot_121913_105619_PM]

Here I have three camera components driving three different Render TOPs, which are in turn passing to three Null TOPs named P1, P2, and P3 (projectors 1–3). As this was a test of the concepts of multiple monitor outs, you’ll notice that there isn’t much difference between the three different camera perspectives and that I haven’t added in any edge blending or masking elements to the three renders. Those pieces are certainly on their way, but for the sake of this network I was focused on getting multiple windows out of this project.
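
For the curious, the same three-way fan-out could also be built by script. This is only a sketch: the operator names match what’s described above, but the Render TOPs’ geometry and light parameters are left at their defaults here and would still need to be pointed at your scene:

# build three camera -> render -> null chains inside the container
container = op('/project1')
for i in (1, 2, 3):
    cam = container.create(cameraCOMP, 'cam{}'.format(i))
    ren = container.create(renderTOP, 'render{}'.format(i))
    out = container.create(nullTOP, 'P{}'.format(i))  # P1, P2, P3
    ren.par.camera = cam.name            # point each render at its camera
    out.inputConnectors[0].connect(ren)  # wire render -> null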

If we jump out of this Container Comp we can see that I’ve added three Window Components and a Button to my network. Rather than routing content into these window elements, I’ve instead opted to just export the contents to the window comps.

[Image: Screenshot_121913_105514_PM]

If we take a closer look at the parameters of the Window COMP we can see what’s going on here in a little more detail:

[Image: Screenshot_121913_111605_PM]

Here we can see that I’ve changed the Operator path to point to my null TOP inside of my container COMP: the path is “/project1/P1”. The general translation of this pathway would be “/the_name_of_container/the_name_of_the_operator“. Setting Operator path to your target operator will export the specified null when the window is opened, but it will not display the contents of the null in the node itself. If you’d like to see a preview of the render on the window node, you’ll also need to change the node pathway on the Common Page of the Window COMP. Here we can see what that looks like:

[Image: Screenshot_121913_111619_PM]

Finally, I wanted to be able to test using a single button to open and close all three windows. When our media server is up and running I’d like to be able to open all three windows with a single click rather than opening them one window comp at a time. In order to test this idea, I added a single button component to my network. By exporting the state of this button to the “Open” parameter of the window on the Window Page I’m able to toggle all three windows on and off with a single button.
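
For what it’s worth, the same thing could be done from a script instead of an export. This is a sketch; ‘winopen’ as the Open parameter’s Python name is my assumption and has shifted between TouchDesigner versions, so confirm it on your build:

# open all three output windows at once
for name in ('window1', 'window2', 'window3'):
    op(name).par.winopen.pulse()  # parameter name assumed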

WonderDome | Workshop Weekend 1

WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis. A fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection, a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases boundaries between the performer and the audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid October of 2013 the team of designers and our performer had our first workshop weekend where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16 foot diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that’s working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on using a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16 foot diameter dome to build we opted to only purchase the hub pieces for this structure, and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their use period that we were able to have for free. Our only hiccup here was that the parachute was multi-colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about a challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines that are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short throw lensing. Our three HD projectors at a 16 foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges at the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer both build the content, and build the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software we test drove as a development platform was Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs, as well as with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that we needed him to have some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backwards through a media stack, zoom, and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be overstated, and it was thrilling to have something so quickly accessible.
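
For a flavor of what that hookup looks like, here’s a minimal sketch of an OSC In DAT callback receiving TouchOSC messages. The address and target operator are hypothetical stand-ins rather than our actual show network:

# OSC In DAT callback: route incoming TouchOSC messages to parameters
def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    if address == '/media/zoom':           # hypothetical fader address
        op('transform1').par.sx = args[0]  # hypothetical zoom target
    return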

Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackling this. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog here)

Broad Strokes

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.


Thoughts from the Participants:

From Adam Vachon

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that Ultra Wide Band RFID Real Time Location Systems may be the answer, but such systems are expensive. That said, I am now in communications with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.

How do you plan to overcome some of these challenges?

Boot Camping my MacBook Pro will help in the short term computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.


From Julie Rada

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge, but as I try to internally catalogue what resources or skills I am utilizing in this process, so far one of the major ones is patience. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, and they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, I knew directing audience attention in an immersive space would be a challenge, mostly due to the fundamental shape of the space and the audience relationship. Working within such limitations for media and lights is extremely difficult when it comes to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.

Notes about performer skills:

Soft skills: knowing that it isn’t about you, patience, sense of humor
Practical skills: puppeteering, possibly the ability to run some cues from a handheld device