
Inside Wonder Dome | TouchDesigner

first test grid

In approaching the many challenges of Wonder Dome, one of the most pressing and intimidating was how to program media playback for a show with a constant media presence. One of the challenges we had embraced as a team for this project was using Derivative’s TouchDesigner as our primary programming environment for show control. TouchDesigner, like most programming environments, has very few limitations in terms of what you can make and do, but it also requires that you know what it is that you want to make and do. Another challenge was the fact that while our team was full of bright and talented designers, I was the person with the broadest TouchDesigner experience. One of the hard conversations that Dan and I had during a planning meeting centered around our choices of programming environments and approaches for Wonder Dome. I told Dan that I was concerned that I would end up building an interface / patch that no one else knew how to use, fix, or program. This is one of the central challenges of a media designer – how do you make sure that you’re building something that can be used / operated by another person? I wish there were an easy answer to this question, but sadly this is one situation that doesn’t have simple answers. The solution we came to was for me to do the programming and development – start to finish. For a larger implementation I think we could have developed an approach that would have divided some of the workload, but for this project there just wasn’t enough time for me to both teach the other designers how to use / program in TouchDesigner and to do the programming needed to ensure that we could run the show. Dan pointed out in his thesis paper on this project that our timeline shook out to just 26 days from when we started building the content of the show until we opened.

The question that follows, then, is: how did we do it? How did we manage to pull off this herculean feat in less than a month, what did we learn along the way, and what approach, at the end of the process, gave us results that we actually used?

Organization

organization

Make a plan and stay organized. I really can’t emphasize this enough. Wonder Dome’s process lived and died by our organization as a team, and as individuals. One of the many hurdles that I approached was what our cuing system needed to be, and how it was going to relate to the script. With three people working on media, our cue sheet was a bit of a disaster at times. This meant that in our first days working together we weren’t always on the same page in terms of what cue corresponded to what moment in the play. We also knew that we were going to run into times when we needed to cut cues, re-arrange them, or re-number them. For a 90 minute show with 20 media cues this is a hassle, but not an impossibility. Our 25 minute long kids’ show had, at the beginning, over 90 media cues.

In beginning to think about how to face this task I needed an approach that could be flexible and responsive – fast fast fast. The solution that I reached for here was a replicator to build a large portion of the interface. Replicators can be a little intimidating to use, but they are easily one of the most powerful tools in TouchDesigner. The principle is that you set up a model operator that you’d like subsequent copies to look like / behave like. You then use a table to drive the copies that you make – one copy per row in the table. If you change the table, you’ve changed / remade your operators. In the same way, if you change your template operator – this is called your “Master Operator” – then you change all of the copies at once. For those reasons alone it’s easy to see how truly powerful this component is, but it also means that a change in your table might render your control panel suddenly un-usable.

button replicator set-up

Getting started, I began by first formatting my cue sheet in a way that made the most sense for TouchDesigner. This is a great time to practice your Excel skills and to use whatever spreadsheet application / service you prefer to do as much formatting as possible for you. In my case I used the following as my header columns (a sketch of a few sample rows follows the list below):

  • Cue Number – what was the number / name for the cue. Specifically this is what the stage manager was calling for over headset. This is also the same name / number for the cue that was in the media designer script. When anyone on the team was talking about M35 I wanted to make sure that we were all talking about the same thing.
  • Button Type – Different cues sometimes need different kinds of buttons. Rather than going through each button and making changes during tech, I wanted to be able to update the master cue sheet for the replicator, and for the specified properties to show up in the button. Do I want a momentary button, a toggle, a toggle down, etc.? These things mattered, and by putting these details in the master table it was one less adjustment that I needed to make by hand.
  • Puppet – Wonder Dome had several different types of cues. Two classifications came to make a huge difference for us during the tech process. Puppet entrances / exits, and puppet movements. Ultimately, we started to treat puppet entrances and exits as a different classification of cue (rather than letters and numbers we just called for “Leo On” and “Leo Off”, this simplified the process of using digital puppets in a huge way for us), but we still had puppet movements that were cued in TouchDesigner. During the tech process we quickly found out that being able to differentiate between what cues were puppet movements and what cues were not was very important to us. By adding this column I could make sure that these buttons were a different color – and therefore differentiated from other types of cues.
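As a sketch – the cue names and values here are hypothetical, not the show’s real sheet – a few rows of that master table for the replicator might have looked something like this:

cue    buttontype    puppet
M1     momentary     0
M2     toggledown    0
M35    momentary     1

One replicant button per row; the buttontype and puppet columns carry the properties that the replicated buttons pick up from the Master Operator.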

Here I also took a programming precaution. I knew that invariably I was going to want to make changes to the table, but might not want those changes to be implemented immediately – like in the middle of a run, for example. To solve this problem I used a simple copy script to make sure that I could copy the changed table to an active table when we were in a position to make changes to the show. By the end of the process I was probably fast enough to make changes on the fly and have them correctly formatted, but at the beginning of the process I wasn’t sure this was going to be the case. The last thing I wanted to do was to break the show control system, and then need 25 minutes to troubleshoot the misplacement of a 1 or 0. At the end of the day, this just made me feel better, and even if we didn’t need it in place I felt better knowing that I wasn’t going to break anything if I was thinking on my feet.
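The script itself was nothing fancy – the same DAT copy method that shows up elsewhere in these posts. A minimal sketch, with hypothetical table names:

# 'cues_staging' is the table I edit freely during notes;
# 'cues_live' is the table the replicator actually reads
op('cues_live').copy(op('cues_staging'))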

replicator in action

Above you can see a replicator in action – looking at an example like this helps to communicate just how useful this approach was. Method, like organization, is just a way to ensure that you’re working in a way that’s meaningful and thoughtful. I’m sure there are other methods that would have given us the same results, or even better results, but this approach helped me find a way to quickly implement cue sheet changes in our show control environment. It also meant that we standardized our control system. With all of the buttons based on the same Master Operator, the interface had a clean and purposeful look – staring down the barrel of a 25 show run, I wanted something that I didn’t mind looking at.

Thinking more broadly when it comes to organization, beyond just the use of replicators for making buttons, I also took the approach that the show should be as modular and organized as possible. This meant using base and container components to hold various parts of the show. Communication to lighting and sound each had their own module, as did our puppets. For the sake of performance I also ended up placing each of the locations in its own base as well. This had the added bonus of allowing for some scripting to turn cooking on and off for environments that we were or weren’t using at any given point in the show. We had a beast of a media server, but system resources were still important to manage to ensure smooth performance.
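That scripting can be as small as flipping the cooking flag on a base. A minimal sketch, assuming hypothetical names for two location bases:

# turn cooking off for the environment that isn't on stage right now
op('/project1/location_forest').allowCooking = False
# keep the active environment cooking
op('/project1/location_ship').allowCooking = True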

notThatStory_fullMap

If you want to learn more about replicators you can read through this post about getting started using them.

Show Control

Show control, however, is about more than just programming buttons. Driving Wonder Dome meant that we needed a few additional features at our fingertips during the show. Our show control system had two preview screens – one for the whole composite, and one for puppets only. One of the interesting features of working in a dome is how limited your vision becomes. The immersive quality of the projection swallows observers, which is downright awesome. This also means that it’s difficult to see where all of the media is at any given point. This is one of the reasons that we needed a solid preview monitor – just to be able to see the whole composition in one place. We also needed to be able to see the puppets separately at times – partially to locate them in space, but also to be able to understand what they looked like before being deformed and mapped onto the curved surface of the dome.

show_control

The central panel of our control system had our cues, our puppet actions, our preview monitors, and a performance monitor. During the show there were a number of moments when we had a dome transformation happening while, nearly simultaneously, a puppet was entering or exiting. While originally I was trying to drive all of this with a single mouse, I quickly abandoned that idea. Instead I created a simple TouchOSC interface to use on an iPad with my other hand. This double handed approach to driving the media added some challenge, but paid itself back ten fold with a bit of practice. This additional control panel also allowed me to drive the glitch effects that were a part of the show. Finally, it also made for an easy place to reset many of the parameters of various scenes. In the change over between shows many elements needed to be reset, and by assigning a button on my second interface for this task I was able to move through the restore process much faster.

2014-04-09 14.44.46

If you’d like to learn more about using TouchOSC with TouchDesigner there are a few pages that you might take a glance at here:

TouchOSC | Serious Show Control
Sending and Receiving OSC Values
Visualizing OSC Data

Cues

Beyond creating a system for interacting with TouchDesigner, a big question for me was how to actually think about the process of triggering changes within my network. Like so many things, this seems self-evident on the face of it – this button will do that thing. But when you start to address the question of “how”, the process becomes a little more complicated. Given the unstable nature of our cue sheet, I knew that I needed a name-based approach that I could call from a central location. Similar to my module-based approach for building the master cue sheet, I used the same idea when building a master reference sheet.

With a little push and guidance from the fabulous Mary Franck, I used an evaluate DAT to report out the state of all of the buttons from the control panel, and to name them in a way that allowed for easy calling – specifically I made sure that each cue kept its letter and number naming convention from our cue sheet.

master ref
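Each cell in that evaluate DAT held a small expression pointing at a button’s panel value – something along these lines, where the button path and panel value name are placeholders of my choosing rather than the show’s actual names:

op('cues/M35').panel.state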


On the face of it, that seems like an awful lot of scripts to write – it is, but like all things there are easier and harder ways to solve any problem. My approach here was to let Google Spreadsheets do some of the work for me. Since the cue sheet was already set up as a spreadsheet, writing some simple formulas to do the formatting for me was a quick and easy way to tackle this. It also meant that with a little bit of planning my tables for TouchDesigner were formatted quickly and easily.

excel script formatting

It was also here that I settled on using a series of Execute DATs to drive the cooking states of the various modules to control our playback performance. These DATs were some of the hardest for me to wrap my head around – partially because this involved a lot of careful monitoring of our system’s overall performance, and the decisions and stacking necessary to ensure that we were seeing smooth video as frequently as possible. While this certainly felt like a headache, by the time the show was running we rarely dropped below 28 frames per second.

cooking on and off
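As a hedged sketch of the idea – module and table names here are hypothetical, this is my reconstruction rather than the show’s actual script, and the exact callback names can differ between TouchDesigner versions – an Execute DAT managing cooking might look like:

def onFrameStart(frame):
    # 'current_location' is a one-cell table holding the active scene's name
    active = op('current_location')[0, 0].val
    # every location base lives inside a 'locations' COMP in this sketch
    for base in op('/project1/locations').children:
        base.allowCooking = (base.name == active)
    return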

If you want to read a little more about some of the DAT work that went into Wonder Dome you can start here:

Evaluate DAT Magic
These are the DATs You’ve Been Looking For

Communication

All of the designers on the Wonder Dome team had wrestled with the challenges of communication between departments when it comes to making magic happen in the theatre. To this end, Adam, Steve, and I set out from the beginning to make sure that we had a system for lights, media, and sound to all be able to talk with one another without any headache. What kinds of data did we need to share? To create as seamless a world as possible we wanted any data that might be relevant for another department to be easily accessible. This looked like different things for each of us, but talking about it from the beginning ensured that we built networks and modules that could easily communicate.

Screenshot_032314_115125_AM

In talking with lighting, one of our thoughts was about passing information relative to the color of the environment that we found ourselves in at any given point. To achieve this I cropped the render to a representative area, took the average of the pixel values in that area, converted that texture data to channel data, and streamed the RGBA values to lighting over OSC. We also made a simple crossfader in our stream for the times when we wanted the lighting in the scene to be different from the average of the render.
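As a rough sketch of that operator chain – my reconstruction, not a screenshot of the show network:

render TOP – crop TOP – analyze TOP (set to average) – topto CHOP – oscout CHOP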

WD_Adam

This technique was hardly revolutionary, but it did create very powerful transitions in the show and allowed media to drive lighting for the general washes that filled the space. This had the added benefit of offloading some programming responsibility from lighting. While I had done a lot of work in the past to coordinate with sound, I hadn’t done much work coordinating with lights. In fact, this particular solution was one that we came up with one afternoon while we were asking questions like “what if…” about various parts of the show. We knew this was possible, but we didn’t expect to solve this problem so quickly or for it to be so immediately powerful. Through the end of the run we continued to consistently get positive audience response with this technique. Part of the reason this solution was so important was because Adam was busy building a control system that ultimately allowed him to control two moving lights with two Wacom tablets – keeping the wash lighting driven by media kept both of his hands free to operate the moving lights.

Screenshot_032314_115026_AM

The approach to working with sound was, of course, very different from working with lights. Knowing that we wanted to use spatialized sound for this show, Stephen Christensen built an incredible Max patch that allowed him to place sound anywhere he wanted in the dome. Part of our conversation from the beginning was making sure that media could send location data about puppets or assets – we wanted the voice of the puppeteers to always be able to follow the movement of the puppets across the dome. This meant creating an OSC stream for sound that carried the location of the puppets, as well as any other go or value changes for moments where sound and media needed to be paired together.

Screenshot_032314_114946_AM

Communicating with sound wasn’t just a one way street though. Every day the Wonder Dome had a 90 minute block of free time when festival visitors were allowed to explore the dome and interact with some of the technology outside of the framework of the show. One of the components that we built for this was a 3D environment that responded to sound, animating the color and distribution of objects based on the highs, mids, and lows of the music that was being played. Here sound did the high, mid, and low processing on its end, and then passed me a stream of OSC messages. To get a smoother feel from the data I used a Lag CHOP before using it to drive any parameters in my network.
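The receiving chain on my end was short – again a sketch rather than the literal network:

oscin CHOP – select CHOP (highs, mids, lows) – lag CHOP – null CHOP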

Components and Reuse

Perhaps the most important lesson to be learned from this project was the importance of developing solid reusable components. This, again, isn’t anything revolutionary, but it is worth remembering whenever you start a new project. The components that you build to use and reuse can make or break your efficiency and the speed of your workflow. One example of this would be the tool we created for placing content on the dome. Our simple tool for moving images and video around the dome was used time and again throughout the project, and if I hadn’t taken the time early on to create something that I intended to reuse, I would have instead spent a lot of time re-inventing the wheel every time we needed to solve that problem.

Screenshot_032314_114332_AM

In addition to using this placement tool for various pieces of media in the show, this is also how we placed the puppets. During the development phase of this tool I thought we might want to be able to drive the placement of content from an iPad or another computer during tech. To make this easier, I made sure that there was a mechanism embedded in the tool to allow for easy control from multiple inputs. This meant that when we finally decided to adapt this tool for use with the puppets, we already had a method for changing their location during the show. There are, of course, limits to how much anyone can plan ahead on any project, but I would argue that taking the time to really think about what a component needs to do before developing it makes good sense. I also made use of local variables when working with components in order to make it easier to enable or disable various pieces of the tool.

Screenshot_032314_114451_AM

You can read more about some of this process here:

3D Solutions for a 2D World
Container Display

Documentation and Comments

comment example

I nearly forgot to mention one of the most critical parts of this process: documentation and commenting. If I hadn’t commented my networks I would have been lost time after time. One of the most important practices to develop and to continue is good commenting. Whenever I was working on something that I couldn’t understand immediately just by looking at it, I added a comment. I know that some programmers use the ability to insert comments on individual operators, but I haven’t had as much success with that method. Personally, I find that inserting a text DAT is the best way for me to comment. I typically write in a text editor using manual carriage returns. I also make sure that I date my comments, so if I make a change I can leave the initial comments and then append the comment with new information. I can’t say enough about the importance of commenting – especially if you’re working with another programmer. Several times during the process I would help lighting solve a problem, and good commenting helped ensure that I could communicate important details about what was happening in the network to the other programmer.

I think it’s also important to consider how you document your work. This blog often functions as my method of documentation. If I learn something that I want to hold onto, or something that I think will be useful to other programmers, then I write it down. It doesn’t do me any good to solve the same problem over and over again – writing down your thoughts and process helps you organize your approach. There have been several times when I’ve found shortcuts or new efficiencies in a process only while writing about it – the act of taking it all apart to see how the pieces connect makes you question what you did the first time and whether there’s a better way. At times it can certainly feel tedious, but I’ve also been served time and again by the ability to return to what I’ve written down.


TouchDesigner | These are the DATs you’ve been looking for

Silly DAT screenshot

If you’re new to TouchDesigner, it’s easy to feel like DATs are a hard nut to crack. This is especially true if you’re also new to programming in general. Scripting can be daunting as you’re getting started, but it’s also incredibly important – take it from someone who is still learning, dat by dat.

So what’s the big deal about DATs anyway? Better yet, why should you care? DATs can help in all sorts of ways, but let’s look at a concrete example of how they can help solve some interesting problems that you might face if you’re out to save some information to use later.

As the Wonder Dome team has been busy building interfaces, programming methods, and performance tools we’ve hit countless situations where being able to save some data for later use is absolutely necessary.

Our lighting designer, Adam Vachon, wants to be able to mix color live during a rehearsal and then record that mix into a cue later. Better yet, he might want to create a cue sheet with all of that data saved in a single table so he can quickly recall it during tech. Over in media, we want to be able to place video content in lots of different places across the dome, with varying degrees of visual effects applied, and we also want to be able to record that data for later recall.

DATs are a wonderful solution for this particular problem. With a few DATs and some simple scripts we can hold onto the position of our sliders to use later. Let’s take a look at how we can make that happen.

First let’s look at a simple problem. I want to be able to add the values from one table to the bottom of another table. If you’re new to programming, this process is called appending. We can see an example of this if we look at two different tables that we want to add together.

two

Here we have two tables, and we’d like to combine them. We can do this by writing a simple script that tells TouchDesigner to take the contents of cells from table2 and add them to table1 in a specific order. One of the things that’s important to understand is how tables are referenced in TouchDesigner. One of the ways that a programmer can pull information from a cell is to ask for the data by referencing the address of the cell. This is just like writing a formula in something like Google Spreadsheets or Excel – you just need to know the name of the cell that you want information from. Let’s take a look at how the addressing system works:

Screenshot_030514_124543_AM

Take a moment to study table3 and you’ll be referencing cells in a flash. It’s just rows and columns, with the only catch being that the numbering system starts at 0. Cool, right? Okay, so if we want to write our script to append cells from one table to another we’re going to use this format:

n = op("table1")           # the destination table
m1 = op("table2")[0, 0]    # row 0, column 0 of table2
m2 = op("table2")[0, 1]    # row 0, column 1
m3 = op("table2")[0, 2]    # row 0, column 2

n.appendRow([m1, m2, m3])

So what’s happening here? First we’re defining table1 as a variable we’re calling n. Next we’re naming three new variables m1, m2, and m3. These correspond to the data in the first row of table2, in columns 0, 1, and 2. The next operation in our script is to append n (that’s table1) with a new row using the values m1, m2, and m3 in that order. You might decide that you want these added to n in a different order, which is easy, right? All you have to do is change the order that you’ve listed them in – try making the order of variables in the brackets [ m2, m1, m3 ] instead to see what happens. Alright, at this point our network should look like this:

Screenshot_030514_125145_AM

Now, to run our script we’re just going to right click on text3, and select “Run Script” from the contextual menu.

simple script

Great! Now we’ve successfully appended one table with data from the first row of another table.

If you’re still with me, now we can start to make the real magic happen. Once we understand how a script like this works, we can put it to work to do some interesting tasks for us. Let’s look at a simple example where we have three sliders that we want to be able to save the data from.

Screenshot_030514_125808_AM

To get started, let’s make three slider COMPs, and connect them to a merge CHOP.

Screenshot_030514_125922_AM

Now let’s add a CHOP to DAT, and point it at our merge CHOP.

chop to

The chopto DAT is a special kind of operator that allows us to see CHOP data in DAT format. This converts our CHOP into a table of three floats. At this point you can probably guess where we’re headed – we’re going to use the simple script we just wrote to append the contents of our chopto to another table. Before we get there, we still need to get a few more ducks in a row.

Next let’s create a table with one row and three columns. Name these columns anything you want; in my case I’m going to call them (rather generically) Value 1, Value 2, and Value 3. I’m also going to create a big empty table, and finally I’m going to connect both of these with a merge DAT. Why two tables? I want my first table to hold the header information for the final table. This way I can clear the whole table of saved floats without also deleting the header row of my final table.

Screenshot_030514_010521_AM

As a quick reminder, the names of your DATs are going to be very important when we start to write our script. The names of our DATs are how we identify them, and consequently how we can point TouchDesigner to the data that we want to use.

Next I’m going to add a button COMP to my network, and a panel execute DAT. In the panel execute DAT I’m going to make sure that it’s looking at the operator button1 and watching the panel value select. I’m also going to make sure that the On to Off box is checked – this tells the DAT when to run the script. Next I’m going to slightly alter the script we wrote earlier so that it’s right for our tables here. I’m also going to make sure that the script is in the right place in the DAT. Take a closer look at the example below to see how to format your DAT.

Screenshot_030514_011212_AM
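In case the screenshot is hard to read, here’s a hedged reconstruction of the script body. The callback wrapper is generated by the DAT itself and its name varies between TouchDesigner versions (onToOff in older builds, onOnToOff in newer ones), so match whatever your build gives you, and note that the table names here are just my names for this example network:

def onToOff(panelValue):
    # grab the current slider values from the chopto DAT and append them
    # as a new row; swap the indices if your chopto lays channels out
    # as rows instead of columns
    m = op('chopto1')
    n = op('table2')  # the big empty table under the header row
    n.appendRow([m[0, 0], m[0, 1], m[0, 2]])
    return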

Alright, now it’s time for DAT table magic. At this point you can make your sliders and button viewer active, and you’re ready to make changes and then record slider states. Happy appending.

slider table action

In case you still have questions you can take a closer look at my example here – record_method_example.

TouchDesigner | 3D solutions for a 2D world

One of the fascinating pieces of working in TouchDesigner is the ability to use 3D tools to solve 2D problems. For the last seven months or so I’ve been working on Daniel Fine’s thesis project – Wonder Dome. Dome projection is a wild ride, and one of the many challenges we’ve encountered is thinking about how to place media on the dome without the time-intensive process of pre-rendering all of the content specifically for this projection environment.

To address some of these issues we started working with Los Angeles based Vortex Immersion Media, whose lead programmer is the TouchDesigner specialist Jeff Smith of Eve Vapor. Part of the wonderful opportunity we’ve had in working with Vortex is getting to take an early look at Jeff’s custom built Dome Mapping tool. Built exclusively in TouchDesigner, it’s an incredibly powerful piece of software designed to make the dome warping and blending process straightforward. The next step in the process for us was to consider how we were going to place content on the interior surface of the dome. The dome mapping tool that we’re using takes a square raster as an input, and can be visualized by looking at a polar array. If you’re at all interested in dome projection, start by looking at Paul Bourke’s site – the wealth of information there has proven invaluable to the Wonder Dome team as we’ve wrestled with dome projection challenges. This square image is beautifully mapped to the interior surface of the dome, making placing content a matter of considering where on the array you might want a piece of artwork to live.

There are a number of methods for addressing this challenge, and my instinct was to build a simple TouchDesigner network that would allow us to see, place, and manipulate media in real time while we were making the show. Here’s what my simple asset placement component looks like:

asset placement

This component based approach makes it easy for the design and production team to see what something looks like in the dome in real time, and to place it quickly and easily. Additionally, this component is built so that adding animation for still assets is simple and straightforward.

Let’s start by taking a look at the network that drives this object, and cover the conceptual structure behind its operation.

Screenshot_022814_125736_AM

In this network we have a series of sliders that are controlling some key aspects of the media – orientation along a circular path, distance from the center, and zoom. These sliders also pass out values to a display component to make it easy to take note of the values needed for programming animation.

We also have a render chain that’s doing a few interesting things. First we’re taking a piece of source media, and using that to texture a piece of geometry with the same aspect ratio as our source. Next we’re placing that rectangle in 3D space and locking its movement to a predefined circular pathway. Finally we’re rendering this from the perspective of a camera looking down on the object as though it were on a table.

Here I’m using a circle SOP to create the pathway that my Geo COMP will rotate around. I ended this network in a null so that if we needed to make any changes I wouldn’t have to change the export settings for this pathway.

Screenshot_022714_095918_AM

You’ll also notice that we’re looking at the parameters for the circle where I’ve turned on the bulls-eye so we’re only seeing the parameters that I’ve changed. I’ve made this a small NURBS curve to give me a simple circle.

The next thing I want to think about is setting up a surface to be manipulated in 3D space. This could be a rectangle or a grid. I’m using a rectangle in this particular case, as I don’t need any fancy deformation to be applied to this object. In order to see anything made in 3D space we need to render those objects. The render process looks like a simple chain: a geo COMP, a camera COMP, a light COMP, and a render TOP. In order to render something in 3D space we need a camera (a perspective that we’re viewing the object from), a piece of geometry (something to render), and a light (something to illuminate the object).

Screenshot_022814_125903_AM

We can see in my network below that I’ve used an in TOP so that I can feed this container from the parent portion of the network. I’ve also given this a default image so that I can always see something in my container. You might also notice that while I have a camera and a geo, I don’t have a light COMP. This is because I’m using a material type that doesn’t require any lighting. More about that in a moment. We can also see that my circle is being referenced by the Geo, and that the in TOP is also being referenced by the Geo. To better understand what’s happening here we need to dive into the Geo COMP.

Screenshot_022814_125926_AM

Inside of the Geo COMP we can see a few interesting things at work. One thing you’ll notice is that I have a constant MAT and an info CHOP inside of this object. Both of these operators are referencing the in TOP that’s in the parent network. My constant is referencing the in to establish what is going to be applied to the Geo as a material. My info CHOP gives me quick access to several of the attributes of my source file. Included in this list of attributes is the resolution of the source media. I can use this information to determine the aspect ratio of the source, and then make sure that my rectangle is sized to match. Using this process I don’t have to rely on a particular aspect ratio for my source material – I can pass this container any shape of rectangular image, and it will size itself appropriately.
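As a sketch of that sizing trick, the rectangle SOP’s Size X parameter might carry an expression like this (assuming the info CHOP is named info1 and Size Y is left at 1):

op('info1')['resx'] / op('info1')['resy']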

Initially I just had three sliders that controlled the placement of my media in this environment. Then I started thinking about what I would really need during our technical rehearsals. It occurred to me that I would want the option to place the media on the surface of the dome from a position other than behind the media server. To address this need I built a simple TouchOSC interface to replicate my three sliders. Next I captured that OSC information with TouchDesigner, and then passed that stream of floats into my container.

From here I suddenly had to do some serious thinking about what I wanted this object to do. Ideally, I wanted to be able to control the container either from the media server, or from a remote access panel (TouchOSC). I also wanted the ability to record the position information that was being passed so I could use it later. This meant that I needed to think about how I was going to capture and recall the same information from three possible sources. To do this I first started by packaging my data with merge CHOPs. I also took this opportunity to rename my channels. For example, my OSC data read osc_rot, osc_dist, and osc_zoom – the rotation, distance, and zoom sliders from my TouchOSC panel. I repeated this process for the sliders, and for the table that I was using. I also knew that I wanted to rename my stream, and pass it all to a null CHOP before exporting it across the network. To keep my network a little more tidy I used a base to encapsulate all of the patching, selecting, and switching that needed to happen for this algorithm to work properly.

Screenshot_022814_010413_AM

Inside of the base COMP we can see that I’m taking my three in CHOPs, selecting for the appropriate channel, passing this to a switch (so I can control what value is driving the rendering portion of my network) and then back out again. You may also notice that I’m passing the switch values to a null, and then exporting that to an opViewer TOP. The opViewer TOP creates a rendered image of the channel operator at work. Why would I do this? Well, I wanted a confidence monitor for my patch-bay. The base COMP allows you to assign a TOP to its display. Doing this meant that I could see into a portion of the base without having to actually be inside of this component.

Screenshot_022814_010506_AM

With all of the patching set up, I needed to build an interface that would control all of these changes. I also needed a way to capture the values coming out of TouchOSC, store them in a table, and then recall them later.

Screenshot_022814_010616_AM

The solution here was to build a few buttons to drive this interface. To drive the switch CHOP in my base component, I used three buttons encapsulated inside of a container COMP and set to operate as radio buttons. I then used a panel CHOP in the container to export which button was currently toggled into the on position. Next I added a button COMP to record the values set from TouchOSC. Using a CHOP to DAT I was able to capture the float values streaming into my network, and I knew that what I wanted was to be able to copy a set of these values to a table. To do this I used a panel execute DAT. This class of DAT looks at the panel properties of a specified container (buttons and sliders also qualify here), and runs a script when the specified conditions in the DAT are met. This is the portion of the network that gave me the most headache. Understanding how these DATs work, and the best method of working with them, took some experimentation. To troubleshoot this, I started by writing my script in a text DAT, and then running it manually. Once I had a script that was doing what I wanted, I then set to the task of better understanding the panel execute DAT. For those interested in Python scripting in TouchDesigner, here’s the simple method that I used:

m = op("chopto1")   # source: the DAT capturing the OSC stream
n = op("table1")    # destination: an empty table
n.copy(m)

Here the operator chopto1 is the DAT that is capturing the OSC stream. The operator table1 is an empty table that I want to copy values to – it’s the destination for my data. With the Python method copy, you call copy on the destination and pass it the source that you want to pull from.

Screenshot_022814_010809_AM

Finally ready to work with the panel execute DAT, I discovered that all of my headaches were caused by misplacing the script. To get the DAT to operate properly I just had to make sure that my intended script was between the def line for the specified condition and the return call.

Screenshot_022814_010926_AM

One last helpful hint / tip that I can offer from working on this component is how to specify the order of your buttons in a container. One handy feature in the container parameters page is the ability to have TouchDesigner automatically array your buttons rather than placing them yourself. The catch: how do you specify the order the buttons should appear in? If you look at the parameter page for the buttons themselves, you’ll notice that they have a smartly named parameter called “Alignment Order.” This sets their alignment order in the parent control panel.

If I’ve learned nothing else, I have learned that sometimes it’s the simplest things that are the easiest to miss.

TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to playback pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let’s look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like “can we see that a little slower?” by saying “sure – I’ll need to change some key-frames and re-render the video, so we can look at it tomorrow.” Driving the media through Quartz meant that I could say “sure, let’s look at that now.”

In working with TouchDesigner I’ve come up with lots of different methods for achieving that same end, but all of them have ultimately felt clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

Screenshot_011514_125716_AM

First things first let’s take a quick inventory of the operators that we’re using:

Button Comp – this acts as the trigger for our animation.
Animation Comp – this component holds four channels of information that will drive our torus.
Trail CHOP – I’m using this to have a better sense of what’s happening in the animation Comp.
Geometry Comp – this is holding our 3D assets that we’re going to change in real time.

Let’s start by looking at the Animation Comp. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is the wiki. We can also dive into the animation comp and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right clicking on the animation comp and selecting “Edit Animation…” from the pop-up menu.

open animation editor

We should now see a new window at the bottom of the screen that looks like a time-line.

Screenshot_011614_113551_PM

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I’m going to add my “noise” channel. To do this I’m going to type “noise” into the name field, and click Add Channels.

Screenshot_011614_114525_PM

Next I want to add three channels for some object translation. This time I’m going to type the following into the Names Field “trans[xyz]”.

Screenshot_011614_114908_PM

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

Screenshot_011614_115144_PM

Let’s key frame some animation to get started – if we want to change things we can come back to the editor. First, click on one of your channels so that it’s highlighted. Now, along the time line you can hold down the Alt key to place a key frame. While you’re holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you’ve placed some key frames you can then translate them up or down in the animation editor, change the attack of their slope, as well as their function. I want an effect that can be looped, so I’m going to make sure that my first and last key frames have the same values. I’m going to repeat this process for my other channels as well. Here’s what it looks like when I’m done:

Screenshot_011614_115803_PM

Here we see a few different elements that help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

Screenshot_011614_115829_PM

There’s lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we’ve got a 10 second window. Coming from any kind of animation workflow, the idea of a frame based time line should feel comfortable. If that’s not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it’s going to relate to the performance of our geometry.

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

Screenshot_011714_121051_AM

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

Screenshot_011714_122911_AM

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

trail animation

Before we start to send this information to a piece of geometry, let’s build a better button. I’ve talked about building buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button Comp to your network, and change its Button Type to Momentary. Next we’re going to make the button viewer active. Last, but not least, we’re going to use the button to drive the cue point trigger for our animation. In the Animation Comp click on the small “+” button next to Cue. Now let’s write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

Screenshot_011714_123836_AM

Now when you click on your button you should trigger your animation.

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next lets jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

Screenshot_011714_124226_AM

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the transform and the noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flag on the torus1 SOP, and turn them on for the noise1 SOP.

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let’s start with translating our torus. In the Transform SOP we’re going to write some simple expressions. First up, let’s connect our translation channel from the Animation COMP. We’re going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the transform SOP we’re going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we’re going to write:

op("../animation1/out")["transz"]

This is telling the transform SOP that from the parent of this object, we want to look at the operator named "animation1" and we want the channel named "transz". Next we’re going to write some expressions to get our slow tumbling movement. In the rx and ry lines we’re going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we’re telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn’t make sense to you, take some time to play with the values you’re multiplying by to see how this changes the animation. When we’re done, our Transform SOP should look like this:

Screenshot_011714_125740_AM

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op("../animation1/out")["noise"]

When you’re done your noise SOP should look something like this:

Screenshot_011714_010238_AM

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation drive both the trail CHOP and our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation, all we need to do is go back to editing the Animation COMP and adjust our key frames.

geo animation

There you go – now you’ve built an animation sequence that’s rendered in real time and triggered by hitting a button.

Interface Building – Execute DATs | TouchDesigner

Sometimes it’s easy to forget about the most obvious features of a device. In my case, I finally decided to do some investigating about the nature and function of the LAN port on the back of an InFocus 2116. It is not uncommon to see projectors with network access ports these days, but I had always assumed that they only worked with the access software that the manufacturer is looking to sell / distribute. InFocus produces a free piece of software called ProjectorNet that’s designed to give system admins quick access to the settings and status of connected projectors. This seems like a handy piece of software, but it just wasn’t something I had been in a position to review or experiment with. Last week when I finally gave myself some time to look at my LAN options for this InFocus, I noticed something when I booted up the machine – in a rather unassuming way, the projector was listing an IP address on the lamp-up screen.

Being the curious type, I decided to see what I got if I pinged the address. I also looked for open ports, and discovered that it was listening for http. Opening up a web browser I decided to try my luck and see what would happen if I just typed in the IP address of the projector itself. I was greeted by a lovely log-in screen for the projector.

Screenshot_122113_034210_PM

Selecting Administrator from the drop down menu, and leaving the password field blank (I just guessed that the password was either going to be blank or “admin”), I was shocked to see the holy grail of projector finds: access to all of the projector’s settings and calibration tools. Jackpot. For anyone who has ever been in the unfortunate position of trying to wrangle the menus of a projector, you’ll know how maddening this experience can be – especially if there’s any chance that the previous user might have left the projector in ceiling mode (upside down) or rear-projection mode (backwards).

Screenshot_122113_034239_PM

As it turns out, all of that menu wrangling and futzing is something I had been wasting time doing by hand. In thinking about how to use this find to my best advantage I started thinking about the production that I’ll be working on in the Spring of 2014 – Wonder Dome. One of the challenges of Wonder Dome is the complex multi-projector installation, calibration, and operation that our team will be working with. Suddenly having the ability to manage our projection system over the network is a huge win – and a discovery that started me working on the application of this particular find.

Our media server is going to run a custom piece of software developed in Derivative’s TouchDesigner. As I’ve been working on various parts of the media system, the issue of easy calibration has been high on our wish list. To that end it seemed like being able to power and manage the projectors from within TouchDesigner would be more than handy. Here’s the small part of our calibration window dedicated to this process:

Screenshot_122113_035750_PM

Here I have four fields where the IP addresses of the projectors can be entered. Saving the show file means that we’ll only need to do this process once, but it also means that if for some reason we swap out a projector, we can easily change the IP address. The Projector Status button opens all three addresses in separate tabs of my default web browser. Let’s take a look at how to make that work.

Here’s what this part of the network looks like:

Screenshot_122113_040214_PM

Here I have four Field Components, and a Button Component. In this particular network I’ve altered one field comp to act as a static label (Projector IP Address), and I’ve also altered the button. Turning off the top field was fairly straightforward: looking at the Panel page of this Comp you’ll notice a toggle for “Enable.” By setting this parameter to “Off”, the panel element is no longer active.

Screenshot_122113_041200_PM

I knew that I wanted the button to pull from three IP addresses. I started by first adding three field Comps. Next I added my button comp. To pull in the three strings from the field comps I needed to add inputs to the button. Let’s take a look inside of the button comp to see how this works.

Screenshot_122113_042847_PM

Other than the usual button ingredients, I’ve added a few other elements: three In DATs, one Text DAT, three Substitute DATs, a single Merge DAT, and a Null DAT, all ending in a Panel Execute DAT.

Here the important starting principle is that our Panel Execute DAT needs the following string in order to open our web page: "viewfile http://IP_Address_Here". Listing three viewfile commands means that all of those pages are opened at once. Practically, that meant that in order to make this panel command work I needed to correctly format my IP addresses and add them to the Panel Execute DAT in order to open the three web pages. If we take a look at the format of the In – Text – Substitute DAT string we’ll see how this works.

Screenshot_122113_042950_PM

Here’s how the following DATs work in this network.

  • In DAT – this pulls in the text string entered into the Field Comp.
  • Text DAT – in this DAT I’ve formatted my command for the Execute DAT, with a placeholder standing in for the IP address of my projectors.
  • Substitute DAT – the substitute DAT takes the string from my Text DAT, and then replaces the placeholder with the IP address of my projectors.

Let’s look at the parameters of the Substitute DAT so we can see how this node works.

Screenshot_122113_043321_PM

Here I specified that the term “P1” should be replaced by the contents of the In DAT. I exported the values of the string with the expression op("in1")[0,0], which means: in the operator named "in1", pull the contents of the first cell in the first row of the table.

These three Substitute DATs are then combined with a Merge DAT, passed to a Null (just in case I need to make any further modifications at another point), and finally passed into a Panel Execute DAT.
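With hypothetical IP addresses filled in, the merged table that lands in the Panel Execute DAT ends up holding three commands like these:

viewfile http://192.168.10.101
viewfile http://192.168.10.102
viewfile http://192.168.10.103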

Let’s quickly take a closer look at the Panel Execute DAT to make sure that we know exactly what it’s doing. First off we want to make sure that we’re using T-Script for this particular method. You can check this by looking for the “T” in the upper right hand corner of the properties dialog box.

Screenshot_012414_122114_AM

We also want to make sure that we force this DAT to stay speaking T-Script. We can do this by bringing up the “Common” page, and selecting “Node” for the language method.

Screenshot_012414_122434_AM

Next let’s test this to make sure it’s working. First we’ll move up a level so we can see our button. We’ll make our button something we can interact with by clicking on the View Active button in the bottom right hand corner (it’s the button that looks like a + sign). Now we should be able to click our button which should in turn launch three browser windows.

Screenshot_012414_122801_AM

Bingo, bango our button now opens up three tabs in Chrome. If you need some more information about working with buttons in general you can do some more reading here.

Multiple Windows | TouchDesigner

For an upcoming project that I'm working on, our show control system needs to be able to send video content to three different projectors. The lesson I've learned time and again with TouchDesigner is to start by looking through Derivative's online documentation to learn what my options are and to get my bearings. A quick search of their support wiki landed me on the page about Multiple Monitors.

To get started I decided to roll with the multiple Window Component method – this seemed like it would be flexible and easy to address out of the gate. Before I was ready for this step, though, I had to get a few other things in order in my network. Ultimately, the need that I'm working to fill is distortion and blending for the interior surface of a dome, using three projectors that need to warp and edge blend in real time. First up on my way to solving that problem was looking at using a cube map to address some of this challenge. In this first network we can see the six faces of a cube map composited together, exported to a Phong shader, and then applied to a dome surface, which is then rendered in real time from three different perspectives.

Screenshot_121913_105543_PM

A general overview of the kind of technique I'm talking about can be found here. The real meat and potatoes of what I was after in this concept testing was in this part of the network:

Screenshot_121913_105619_PM

Here I have three Camera Components driving three different Render TOPs, which in turn pass to three Null TOPs named P1, P2, and P3 (projectors 1–3). As this was a test of the concept of multiple monitor outs, you'll notice that there isn't much difference between the three camera perspectives, and that I haven't yet added any edge blending or masking elements to the three renders. Those pieces are certainly on their way, but for the sake of this network I was focused on getting multiple windows out of this project.
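
As a rough illustration of that wiring, here's a hedged Python sketch that builds the same three render chains by script. The camera names (cam1–cam3) and the idea of scripting the layout at all are my own additions; in the actual network these nodes were connected by hand:

    # Hypothetical sketch: three cameras -> three Render TOPs -> nulls P1-P3.
    for i in range(1, 4):
        ren = parent().create(renderTOP, 'render{}'.format(i))
        ren.par.camera = 'cam{}'.format(i)   # assumes cam1-cam3 already exist
        out = parent().create(nullTOP, 'P{}'.format(i))
        out.inputConnectors[0].connect(ren)  # wire the render into the null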

If we jump out of this Container Comp we can see that I've added three Window Components and a Button to my network. Rather than routing content into these window elements, I've instead opted to just export the contents of the nulls to the Window Comps.

Screenshot_121913_105514_PM

If we take a closer look at the parameters of the Window Comp, we can see what's going on here in a little more detail:

Screenshot_121913_111605_PM

Here we can see that I've changed the Operator path to point to my Null TOP inside of my Container COMP; the path is "/project1/P1". The general translation of this pathway would be "/the_name_of_the_container/the_name_of_the_operator". Setting the Operator path to your target operator will send the specified null to the window when it's opened, but it will not display the contents of the null in the node itself. If you'd like to see a preview of the render on the Window node, you'll also need to change the node pathway on the Common page of the Window Comp. Here we can see what that looks like:
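
For reference, the same setting can also be made by script. In this sketch both the Window Comp's name ("window1") and the assumption that the Operator parameter's scripting name is "winop" are mine:

    # Point a Window COMP at the null it should display when opened.
    w = op('window1')              # hypothetical Window COMP name
    w.par.winop = '/project1/P1'   # the Operator path described above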

Screenshot_121913_111619_PM

Finally, I wanted to be able to test using a single button to open and close all three windows. When our media server is up and running I'd like to be able to open all three windows with a single click rather than opening them one Window Comp at a time. In order to test this idea, I added a single Button Component to my network. By exporting the state of this button to the "Open" parameter on the Window page of each Window Comp, I'm able to toggle all three windows on and off with a single button.
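
The network does this with a parameter export, but the same idea can be sketched in Python for clarity. Everything here – the window names and the assumption that the Open parameter's scripting name is "winopen" – is mine rather than something from the original project:

    # One call to open or close all three projector windows at once.
    def set_windows(open_state):
        for name in ('window1', 'window2', 'window3'):
            op(name).par.winopen = open_state  # 1 opens, 0 closes

    set_windows(1)  # open all three output windows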

AutoCAD and Dynamic Blocks for Media Designers

This semester (Fall 2013) I decided to take an AutoCAD course taught by ASU's Jennifer Setlow. Jen's course is primarily designed to serve lighting and scenic designers. That said, it's already proven to be an invaluable experience for a media designer, as it's exposed me to many of the models and methods that a lighting designer would use when creating a lighting plot.

As a final project Jen asked that students identify a project that would be challenging intellectually and technically. Ideally, this project would also be useful to the student in some capacity that reaches beyond the classroom itself.

With that in mind, as a final project I've opted to create drawings of the projectors that we keep in stock at ASU. In addition to detail drawings of the projectors themselves, I also want to create a set of dynamic properties that allow the designer to visualize the throw distance of the projectors when placed in a drafting of the theatre. My hope is that this will allow for easier plotting and planning, not only for myself but for future designers.

One of the problems to consider here is how to dynamically resize a portion of a block in a drawing based on another changing property of the block. In other words, I want to be able to shift the shape of the projector's cone by dragging a dynamic handle on the drawing.

We can solve this problem with a little bit of digging on the Internet, and some careful work in AutoCAD. My initial starting point was to look at a helpful video from CAD Masters (you can see the whole channel here).

For the sake of this process, I’m going to focus on a simple implementation of this particular concept. To get started on this we first have to create a new drawing. With our new drawing created we need to add a few features that we can use.

Let's start by making a rectangle and a triangle (to represent our projection cone).

Next I’m going to convert this shape to a block. First I’ll select the whole object, then type “Block” into my command line.

The Block command will bring up a dialog box that will allow me to convert this object into a block (essentially a single object). First I'm going to give my block a name; in my case I'm going to call it "Barco1." For the Base Point I'm going to click on the button that says "Pick Point" and select the bottom center of the projector. Next I'm going to make sure that I check the box that says "Open in Block Editor." Finally I'm going to click "OK."

This will open our new block in the Block Editor, where we can make some of the more interesting changes to our projector. In the Block Editor we have a new contextual ribbon and a new palette (the Block Authoring Palettes).

Here in the block editor we’re going to use a linear parameter as a handle. Let’s place this parameter coming out of the center point of our cone.

I only want a single handle on this parameter, so I'm going to click on the bottom blue arrow and hit the Delete key to remove just that handle.

Next I'm going to associate an action with this parameter. From the Actions tab on the Block Authoring Palette, I'm going to select "Scale." After this I'll first select my parameter, then my object to scale (in this case the triangle), and hit Enter.

Finally I’ll click on “Test Block” in my ribbon to see if this block is working the way I had expected.

For now this is pretty close.
Coming up: how to dynamically visualize zoom and lens shift.