Tag Archives: TouchDesigner

Vertical Sliders Please | TouchDesigner

If you’ve done any interface building in TouchDesigner, you’ve surely noticed that there is a Button and a Slider COMP at your disposal. These are outstanding building blocks to use when you’re creating interfaces, and they give you a ton of flexibility and dependability. In the past I’ve taken some time to talk about making some changes to the Button COMP, but what about the Slider COMP? Won’t someone think of the slider COMPs?! Perhaps most importantly, what about making a vertical slider – the standard Slider COMP is horizontal, and wouldn’t it be nice to have one that worked up and down instead of left to right? Well, today is your lucky day – let’s look at how to make a few changes to our stock horizontal slider and turn it into a vertical slider instead.

Let’s start by first looking at what’s inside of our slider, and what makes it tick so to speak.

slider insides

Unlike the Button COMP, the slider looks much simpler – after all, it’s only made up of a container (the knob), a Panel CHOP, and an Out CHOP. That seems pretty bare bones, but if we take a closer look at the knob we can see some interesting things at work here. Of most interest to us is the expression in the x parameter of the knob. Here you should see an expression that reads:

me.parent().panel.u.val*me.parent().par.panelw-me.par.panelw/2

What on earth does that mean?! Well let’s take a closer look at this by first experimenting with a Constant CHOP that will let us see these numbers individually. In our Constant I’m going to break this longer expression into its component parts so we can see what’s happening.

me.parent().panel.u.val

me.parent().par.panelw

me.par.panelw

At this point you should be seeing an error message. What gives?

error

The answer to that question lies in the last expression that we’re using – me.par.panelw. A Constant CHOP doesn’t have a panel associated with it, so by asking for this value we’re getting an error message telling us that we are in fact asking for the impossible. If we want the panel width of our knob, we need to change this expression slightly so it instead reads: op('knob').par.panelw. Once you make that change, you should now see something like this in your Constant.

fixed

 

Okay, so what are all of these values?

Our first expression (me.parent().panel.u.val) is looking for the u value from the parent’s control panel. We can confirm this value by looking at the Panel CHOP in this slider. At the middle position of the container, the panel u value is .5 – all the way to the right is 1, all the way to the left is 0. So far so good.

samesies

Our second expression (me.parent().par.panelw) is looking for the width parameter of the parent operator. If we look at our parent’s panel values we can see that panelw is in fact 100. That’s two values down, one to go.

sliderparent

Our third expression (me.par.panelw) is asking for the width parameter of the current operator. We can verify this by looking at our width parameter of the knob, and can see that they match.

panelw

Okay. Now let’s look at what is happening in the full expression:

me.parent().panel.u.val*me.parent().par.panelw-me.par.panelw/2

In English this might read – my parent’s panel u value (.5) * my parent’s width (100) – my panel’s width / 2. Great. Why? Well, by multiplying the u value (a normalized value) with the width of the panel I get my position in terms of pixels – almost. The last thing I need to do is to take into consideration the width of the knob (divided by 2 so you split its value equally left and right). So what can we do with this new information?
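First, to put some actual numbers on what we just unpacked, here’s the same math worked through in plain Python. The slider and knob widths below are hypothetical values, just for illustration:

u = 0.5             # parent panel u value (normalized 0 to 1)
parent_width = 100  # parent's panelw in pixels
knob_width = 20     # knob's panelw in pixels (hypothetical)

# same shape as the knob's x expression:
# me.parent().panel.u.val * me.parent().par.panelw - me.par.panelw / 2
x = u * parent_width - knob_width / 2
print(x)  # 40.0 -- the knob's left edge, which centers a 20 px knob at pixel 50

That half-width offset is what keeps the knob visually centered on the panel value instead of hanging off to the right.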

Let’s make a vertical slider.

First let’s start by taking our Slider COMP and swapping the width and height values:

setp1vertslider

Next, let’s dive into our slider, and make a few changes. Inside of the slider, on the knob copy the expression from the x parameter and paste it into the y parameter. Let’s also swap the width and height values:

xtoy

Hang on to your boots, because now we need to pay close attention to some details. We did some swapping around with our expressions, so before this is working correctly we need to make sure we change the expressions so they’ll work the way we expect them to. First up we need to change our u value to v, and then we need to look at panelh instead of panelw – remember that we’ve swapped our directions; instead of moving left and right we’re moving up and down, and our expression needs to reflect that change. Here’s what your expression should look like:

me.parent().panel.v.val*me.parent().par.panelh-me.par.panelh/2

newknob

Finally, we need to change the Panel CHOP to output the v value instead of the u value:

utov

Congratulations, you should now have a vertical slider that works just like the horizontal ones.

vsliders

 

OSC Remote Control | TouchDesigner

There’s a lot to love about the internet, really. But I think one of my favorite things is how it connects people, how it flattens old hierarchies (not really, but let me wax idealistic for the sake of this intro) and connects people. In starting to program with TouchDesigner, I did the thing that any smart n00b would do – I joined the forum. The TouchDesigner forum is a great place to ask questions, find answers, learn from some of the best, and to offer help. We’ve all been stuck on a problem, and a commons like this one is a great place to ask questions and keep tabs on what others are doing. To that end I shared a technique for sending and receiving OSC data with TouchDesigner back in October of 2013. I also shared this on the forum, because this happened to be something that I figured others might want to know more about. My post was a simple example, but often it’s the simple examples that help move towards complex projects. As it turns out, someone else was fighting the same battle, and had some questions about how to make some headway – specifically, they wanted to look at how to create an interface that could be controlled remotely with TouchOSC or from the TouchDesigner control panel itself. Ideally, each interface’s changes would be reflected in the other – changes on a smartphone would show up in the TouchDesigner control panel, and vice versa. I caught the first part of the exchange, and then I got swallowed by the theatre. First there was The Fall of the House of Escher, then Before You Ruin It took over my life, and then I spent almost a month solid in Wonder Dome. Long story short, I missed responding to a question, and finally made up for my bad Karma by responding, even if belatedly. It then occurred to me that I might as well write down the process of solving this problem. If you want to see the whole exchange you can read the thread here.

Enough jibber-jabber, let’s start programming.

For the sake of our sanity, I’m going to focus on just working with two sliders and two toggle buttons. The concepts we’ll cover here can then be applied to any other kind of TouchOSC widget, but let’s start small first. One more disclaimer: this one creates a messy network. If we took a little more time we could clean it up, but for now we’re going to let it be a little bit sloppy – as much as it pains me to do so.

First things first, if you’re new to using TouchOSC you should take some time to get used to the basics and start by reading through this overview on using TouchOSC. Open Sound Control (OSC) messages are messages that are sent over a network. OSC can be used locally on a computer to allow for communication between programs, and it can also be used between multiple machines in order to allow them to communicate with one another. Interface tools like TouchOSC allow users to create custom interfaces that control some aspect of a program. That’s a very simplistic way of looking at OSC, but it’s a good start. The important takeaway here is that when we use OSC we’re actually relying on some network protocols to facilitate the communication between computers.
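If you’ve never seen what one of these messages looks like in code, here’s a minimal sketch using the python-osc library outside of TouchDesigner – the IP address, port, and addresses below are placeholders, not values from this project:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 10000)  # target machine and port
client.send_message("/1/fader1", 0.75)           # address pattern plus a float argument
client.send_message("/1/toggle1", 1)             # address pattern plus an int argument

Every message is just an address pattern with one or more values attached – which is exactly what we’ll see showing up in TouchDesigner in a moment.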

To get started I selected a simple interface on an iPod Touch that came with TouchOSC. Again, I wanted something with two toggles and two sliders to serve as our example. We can set-up our mobile device by starting TouchOSC and hitting the circular option / configuration button in the corner.

Once we do that we’ll see a screen with a few different options. We want to select the OSC connection tab.


From here we need to configure a few settings. First off you’ll want to add the IP address of your computer into the “Host” field. You can use the ipconfig command to quickly find your computer’s IP address (if this sounds like another language, check out this YouTube video to see what I’m talking about). I also want to take note of the IP address of the device – this is the “Local IP address.” I also want to take a close look at the outgoing and incoming ports; I’ll need to use these numbers in TouchDesigner to make sure that I can talk and listen to this device.


Alright, now that we know a few things about our mobile device now we can head back to TouchDesigner.

First off, let’s take a look at how to listen to the incoming data. In a new network, add an OSC In CHOP.

OSC_In

In the parameters box let’s make sure that we’re listening to the same port that we’re broadcasting on – in my case it’s port 10000. Now you should be able to start hitting buttons and moving sliders to see some new channels appear in your CHOP. Here it’s important to take a closer look at the naming convention for our channels. Let’s look at 1/fader1 first. TouchOSC has a great utility for creating your layouts, and if you let it do the naming for you this is the kind of format that you’ll see. The semantics of this are page/widgetNameWidgetNumber. So by looking at 1/fader1 we can read this to mean that this is on page one, it’s a fader, and its number (the order it was created) is one. This naming convention is going to be important to take note of, and will save you a lot of headaches if you take some time to really wrap your head around how these widgets are named.

Before we start building an interface to control let’s take a moment to get a few more ducks in a row. I’m going to connect my OSC In CHOP to four different select CHOPS – each control is going to be routed to a button or slider, and I want to make sure that I’m only dealing with one channel to do this.

4selects

For my own sanity I’ve named each of the selects with a name that corresponds to the channel it’s carrying. You can choose any naming convention that you’d like, but definitely choose a naming convention. This kind of patching can get messy quickly, and a solid method for naming your operators will serve you well.

Before we move on let’s take a closer look at one of the selects to see how we can pull out a particular channel.

fader1

You’ll notice that in the Channel Names I’ve used the name *fader1 rather than 1/fader1. What gives?! Either of those names is going to give me the same result in this case. I’ve elected to use the asterisk modifier to save myself some time and because I’m only using page one of this particular interface. What does the asterisk modifier do? I’m so glad you asked – this particular modifier will give results that match anything after the asterisk. If, for example, I had 1/fader1, 2/fader1, and 3/toggle1 as incoming channels, the asterisk would pull out the 1/fader1 and 2/fader1 channels. Naming and patterning isn’t important for this particular example, but it’s a technique to keep in your back pocket for a rainy day.
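If you ever want to sanity check a pattern before typing it into a Select CHOP, the asterisk behaves like a glob-style wildcard. Here’s a rough stand-alone parallel using Python’s fnmatch module with the same channel names from the example above:

from fnmatch import fnmatch

channels = ["1/fader1", "2/fader1", "3/toggle1"]
matches = [name for name in channels if fnmatch(name, "*fader1")]
print(matches)  # ['1/fader1', '2/fader1']

TouchDesigner’s own pattern matching has a few more tricks than this, but the asterisk works on the same principle.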

Okay, now that we’ve got our OSC In data ready, let’s build a quick interface. In this instance I’m going to use two vertical sliders from the TUIK presets in the palette browser. You can find them here:

TUIK

I’m also going to add two Button COMPs  and a Container COMP to my network.

interfaceSetup

Before we move forward, I need to make a few quick changes to my buttons. You’ll notice that the sliders both have CHOP inlets on their left-hand side, but my buttons do not. This means that my sliders are already set up to receive a channel stream to change them. My buttons aren’t ready yet, so let’s take a look at the changes we need to make. Let’s start by diving into our button.

buttonCOMP

If this is your first time taking a look inside of a Button COMP, take a moment to read through a quick overview about working with buttons in TouchDesigner. We’re going to want to add an In CHOP to this component, and we’re also going to want the changes from the Touch interface to drive how our button’s color changes. Here’s what that’s going to look like:

new button

Here I’ve added an In CHOP that feeds to a Math CHOP that I’ve renamed to “i”, which is finally connected to our Out CHOP. So why a Math, and why rename it? If you look closely at the Text TOP you’ll see that it’s driven by several expressions allowing it to change color based on its active state – this is part of what’s happening in the expression CHOP originally called “i”. Here our Math CHOP is set to add our In CHOP and the “ii” CHOP (formerly just “i”) together. By changing the names of the operators, I’ve avoided re-writing the scripts and saved myself some time.

With our buttons and sliders finally ready we can start connecting our interface elements. There are lots of ways to build interface elements in TouchDesigner, but today I’m just going to use the ability to wire components vertically to do this. First I’m going to layout my buttons and sliders in an approximate location that matches my TouchOSC layout just to help me get organized initially. Next I’m going to connect them through their vertical inlets to my container COMP. In the end, if you’re following along, it should look something like this:

vierticalwires

Finally, you’ll want to take some time to adjust the placement of your buttons and sliders, as well as adjusting their color or other elements of their appearance. My final set up looks like this:

finalcontrol

If you’re still with me, this is where the real magic starts to happen. First up we’re going to connect our corresponding OSC In values to the control panel elements that we want to control. Fader 1 to fader 1, toggle 1 to toggle 1, and so on. Next we’re going to connect all four of our control panel elements to a single Merge CHOP.

almost there

Next we’ll need to do a little renaming. To make things easier I’m going to use a Rename CHOP and a Constant CHOP. Here our Constant CHOP holds all of the names that we want to apply to the channels that are coming into our Rename CHOP. Here’s where all of that funny business about naming our channels becomes important. To make sure that I’m feeding data back to TouchOSC in a way that properly associates changes to my sliders, I need to follow the naming conventions exactly the way they’re coming into TouchDesigner. 1/fader1, which has since become v1, needs to have its name changed back to 1/fader1. You can see what I mean by taking a closer look at the network below:

rename

Last but not least we’re going to connect our Rename CHOP to an OSC Out CHOP. When we set up our OSC Out we need to know the IP address of the device that we want to communicate with, and we need to know the port that we’re broadcasting to. I also like to set my OSC Out to send samples, and to turn off “Send every cook.” Sending with every cook is going to create a lot more network traffic, and while it’s not an issue for TouchOSC, if you’re working with someone using MaxMSP having this trick up your sleeve is going to make them much happier. Here’s what our OSC Out operator should look like:

OSCOUT
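If you’re curious about what turning off “Send every cook” buys you, here’s a rough sketch of the idea in plain Python with python-osc – only push a value across the network when it has actually changed. The IP, port, and address are placeholders, and in TouchDesigner the OSC Out CHOP handles all of this for you:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.20", 9000)  # device IP and its incoming port
last_sent = {}

def send_if_changed(address, value):
    # skip the network traffic when nothing has moved
    if last_sent.get(address) != value:
        client.send_message(address, value)
        last_sent[address] = value

send_if_changed("/1/fader1", 0.5)  # sends
send_if_changed("/1/fader1", 0.5)  # skipped, no change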

Whew! Alright gang, at this point you should be ready to start making the magic happen. If you’ve got everything set-up correctly you should now be able to drive your control panel in TouchDesigner either from the panel we created, or from a TouchOSC setup. As a bonus (why we did all of this hard work) you should also be able to see your changes in either environment reflected in the other.

simplefeedback

Inside Wonder Dome | TouchDesigner

first test grid

In approaching some of the many challenges of Wonder Dome, one of the most pressing and intimidating was how to approach programming media playback for a show with a constant media presence. One of the challenges we had embraced as a team for this project was using Derivative’s TouchDesigner as our primary programming environment for show control. TouchDesigner, like most programming environments, has very few limitations in terms of what you can make and do, but it also requires that you know what it is that you want to make and to do. Another challenge was the fact that while our team was full of bright and talented designers, I was the person with the broadest TouchDesigner experience. One of the hard conversations that Dan and I had during a planning meeting centered around our choices of programming environments and approaches for Wonder Dome. I told Dan that I was concerned that I would end up building an interface / patch that no one else knew how to use, fix, or program. This is one of the central challenges of a media designer – how do you make sure that you’re building something that can be used / operated by another person? I wish there were an easy answer to this question, but sadly this is one situation that doesn’t have simple answers. The solution we came to was for me to do the programming and development – start to finish. For a larger implementation I think we could have developed an approach that would have divided some of the workload, but for this project there just wasn’t enough time for me to both teach the other designers how to use / program in TouchDesigner and to do the programming needed to ensure that we could run the show. Dan pointed out in his thesis paper on this project that our timeline shook out to just 26 days from when we started building the content of the show until we opened.

The question that follows, then, is – how did we do it? How did we manage to pull off this herculean feat in less than a month, what did we learn along the way, and what was an approach that, at the end of the process, gave us results that we used?

Organization

organization

Make a plan and stay organized. I really can’t emphasize this enough. Wonder Dome’s process lived and died in our organization as a team, and as individuals. One of the many hurdles that I approached was what our cuing system needed to be, and how it was going to relate to the script. With three people working on media, our cue sheet was a bit of a disaster at times. This meant that in our first days working together we weren’t always on the same page in terms of what cue corresponded to what moment in the play. We also knew that we were going to run into times when we needed to cut cues, re-arrange them, or reorder them. For a 90 minute show with 20 media cues this is a hassle, but not an impossibility. Our 25 minute long kids’ show had, at the beginning, over 90 media cues.

In beginning to think about how to face this task I needed an approach that could be flexible and responsive – fast fast fast. The solution that I approached here was to think about using a replicator to build a large portion of the interface. Replicators can be a little intimidating to use, but they are easily one of the most powerful tools that you can use in TouchDesigner. Here the principle is that you set up a model operator that you’d like subsequent copies to look like / behave like. You then use a table to drive the copies that you make – one copy operator per row in the table. If you change the table, you’ve changed / remade your operators. In the same way, if you change your template operator – this is called your “Master Operator” – then you change all of the operators at once. For those reasons alone it’s easy to see how truly powerful this component is, but it also means that a change in your table might render your control panel suddenly unusable.

button replicator set-up

Getting started here I began by first formatting my cue sheet in a way that made the most sense for TouchDesigner. This is a great time to practice your Excel skills and to use whatever spreadsheet application / service you prefer to do as much formatting as possible for you. In my case I used the following as my column headers:

  • Cue Number – what was the number / name for the cue. Specifically this is what the stage manager was calling for over headset. This is also the same name / number for the cue that was in the media designer script. When anyone on the team was talking about M35 I wanted to make sure that we were all talking about the same thing.
  • Button Type – Different cues sometimes need different kinds of buttons. Rather than going through each button and making changes during tech, I wanted to be able to update the master cue sheet for the replicator, and for the properties specified to show up in the button. Do I want a momentary button, a toggle, a toggle down, etc.? These things mattered, and by putting these details in the master table it was one less adjustment that I needed to make by hand.
  • Puppet – Wonder Dome had several different types of cues. Two classifications came to make a huge difference for us during the tech process. Puppet entrances / exits, and puppet movements. Ultimately, we started to treat puppet entrances and exits as a different classification of cue (rather than letters and numbers we just called for “Leo On” and “Leo Off”, this simplified the process of using digital puppets in a huge way for us), but we still had puppet movements that were cued in TouchDesigner. During the tech process we quickly found out that being able to differentiate between what cues were puppet movements and what cues were not was very important to us. By adding this column I could make sure that these buttons were a different color – and therefore differentiated from other types of cues.

Here I also took a programming precaution. I knew that invariably I was going to want to make changes to the table, but might not want those changes to be implemented immediately – like in the middle of a run, for example. To solve this problem I used a simple copy script to make sure that I could copy the changed table to an active table when we were in a position to make changes to the show. By the end of the process I was probably fast enough to make changes on the fly and for them to be correctly formatted, but at the beginning of the process I wasn’t sure this was going to be the case. The last thing I wanted to do was to break the show control system, and then need 25 minutes to troubleshoot the misplacement of a 1 or 0. At the end of the day, this just made me feel better, and even if we didn’t need it in place I felt better knowing that I wasn’t going to break anything if I was thinking on my feet.
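The copy script itself can be tiny. Here’s a sketch of the idea in TouchDesigner Python – the operator names are stand-ins I’ve made up for this example, not the actual Wonder Dome paths:

# 'cues_edit' is the table you tweak between runs, 'cues_active' is the table
# the replicator actually reads from. Both names are placeholders.
def push_cue_changes():
    op('cues_active').copy(op('cues_edit'))  # overwrite the active table in one step

Running something like this from a button, only when we were in a safe spot, meant the replicator never saw a half-edited table.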

replicator in action

Above you can see a replicator in action – looking at an example like this I think helps to communicate just how useful this approach was. Method, like organization, is just a way to ensure that you’re working in a way that’s meaningful and thoughtful. I’m sure there are other methods that would have given us the same results, or even better results, but this approach helped me find a way to think about being able to quickly implement cue sheet changes into our show control environment. It also meant that we standardized our control system. With all of the buttons based on the same Master Operator it gave the interface a clean and purposed look – staring down the barrel of a 25 show run, I wanted something that I didn’t mind looking at.

Thinking more broadly when it comes to organization, beyond just the use of replicators for making buttons I also took the approach that the show should be as modular and organized as possible. This meant using base and container components to hold various parts of the show. Communication to lighting and sound each had their own module, as did our puppets. For the sake of performance I also ended up placing each of the locations in its own base as well. This had the added bonus of allowing for some scripting to turn cooking on and off for environments that we were or weren’t using at any given point in the show. We had a beast of a media server, but system resources were still important to manage to ensure smooth performance.

notThatStory_fullMap

If you want to learn more about replicators you can read through this post about getting started using them.

Show Control

Show control, however, is about more than just programming buttons. Driving Wonder Dome meant that we needed a few additional features at our fingertips during the show. Our show control system had two preview screens – one for the whole composite, and one for puppets only. One of the interesting features of working in a dome is how limited your vision becomes. The immersive quality of the projection swallows observers, which is downright awesome. This also means that it’s difficult to see where all of the media is at any given point. This is one of the reasons that we needed a solid preview monitor – just to be able to see the whole composition in one place. We also needed to be able to see the puppets separately at times – partially to locate them in space, but also to be able to understand what they looked like before being deformed and mapped onto the curved surface of the dome.

show_control

The central panel of our control system had our cues, our puppet actions, our preview monitors, and a performance monitor. During the show there were a number of moments when a dome transformation was happening while, nearly simultaneously, a puppet was entering or exiting. While originally I was trying to drive all of this with a single mouse, I quickly abandoned that idea. Instead I created a simple TouchOSC interface to use on an iPad with my other hand. This double-handed approach to driving the media added some challenge, but paid itself back tenfold with a bit of practice. This additional control panel also allowed me to drive the glitch effects that were a part of the show. Finally, it also made for an easy place to reset many of the parameters of various scenes. In the changeover between shows many elements needed to be reset, and by assigning a button on my second interface for this task I was able to move through the restore process much faster.

2014-04-09 14.44.46

If you’d like to learn more about using TouchOSC with TouchDesigner, there are a few pages that you might take a glance at here:

TouchOSC | Serious Show Control
Sending and Receiving OSC Values
Visualizing OSC Data

Cues

Beyond creating a system for interacting with TouchDesigner, a big question for me was how to actually think about the process of triggering changes within my network. Like so many things, this seems self-evident on the face of it – this button will do that thing. But when you start to address the question of “how” the process becomes a little more complicated. Given the unstable nature of our cue sheet, I knew that I needed a name-based approach that I could call from a central location. Similar to my module-based approach for building the master cue sheet, I used the same idea when building a master reference sheet.

With a little push and guidance from the fabulous Mary Franck, I used an Evaluate DAT to report out the state of all of the buttons from the control panel, and to name them in a way that allowed for easy calling – specifically, I made sure that each cue maintained its letter and number naming convention from our cue sheet.

master ref

 

On the face of this it seems like that’s an awful lot of scripts to write – it is, but like all things there are easier and harder ways to solve any problem. My approach here was to let Google Spreadsheets do some work for me. Since the cue sheet was already set up as a spreadsheet, writing some simple formulas to do the formatting for me was a quick and easy way to tackle this. It also meant that with a little bit of planning my tables for TouchDesigner were formatted quickly and easily.

excel script formatting

It was also here that I settled on using a series of Execute DATs to drive the cooking states of the various modules to control our playback performance. I think these DATs were some of the hardest for me to wrap my head around – partially because this involved a lot of considered monitoring of our system’s overall performance, and the decisions and stacking necessary to ensure that we were seeing smooth video as frequently as possible. While this certainly felt like a headache, by the time the show was running we rarely dropped below 28 frames per second.

cooking on and off

If you want to read a little more about some of the DAT work that went into Wonder Dome you can start here:

Evaluate DAT Magic
These are the DATs You’ve Been Looking For

Communication

All of the designers on the Wonder Dome team had wrestled with the challenges of communication between departments when it comes to making magic happen in the theatre. To this end, Adam, Steve, and I set out from the beginning to make sure that we had a system for lights, media, and sound to all be able to talk with one another without any headache. What kinds of data did we need to share? To create as seamless a world as possible we wanted any data that might be relevant for another department to be easily accessible. This looked like different things for each of us, but talking about it from the beginning ensured that we built networks and modules that could easily communicate.

Screenshot_032314_115125_AM

In talking with lighting, one of our thoughts was about passing information relative to the color of the environment that we found ourselves in at any given point. To achieve this I cropped the render to a representative area, took the average of the pixel values in that area, converted the texture data to channel data, and streamed the RGBA values to lighting over OSC. We also made a simple crossfader in our stream for the times when we wanted the lighting in the scene to be different from the average of the render.
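To give a flavor of the idea, here’s an illustrative sketch in plain Python with numpy and python-osc – the IP, port, OSC address, and crop region are all made up for the example, and in the show this was done with TOPs and CHOPs rather than a script:

import numpy as np
from pythonosc.udp_client import SimpleUDPClient

lighting = SimpleUDPClient("10.0.0.50", 8000)  # placeholder console IP and port

def send_average_color(frame, region):
    # frame: H x W x 4 float array, region: (top, bottom, left, right) in pixels
    top, bottom, left, right = region
    crop = frame[top:bottom, left:right]
    r, g, b, a = crop.reshape(-1, 4).mean(axis=0)
    lighting.send_message("/media/wash", [float(r), float(g), float(b), float(a)])

frame = np.random.rand(1080, 1920, 4)             # stand-in for the dome render
send_average_color(frame, (400, 680, 760, 1160))  # a representative central area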

WD_Adam

This technique was hardly revolutionary, but it did create very powerful transitions in the show and allowed media to drive lighting for the general washes that filled the space. This had the added benefit of offloading some programming responsibility from lighting. While I had done a lot of work in the past to coordinate with sound, I hadn’t done much work coordinating with lights. In fact, this particular solution was one that we came up with one afternoon while we were asking questions like “what if…” about various parts of the show. We knew this was possible, but we didn’t expect to solve this problem so quickly or for it to be so immediately powerful. Through the end of the run we continued to consistently get positive audience response with this technique. Part of the reason this solution was so important was because Adam was busy building a control system that ultimately allowed him to control two moving lights with two Wacom tablets – keeping the wash lighting driven by media kept both of his hands free to operate the moving lights.

Screenshot_032314_115026_AM

The approach to working with sound was, of course, very different from working with lights. Knowing that we wanted to use spatialized sound for this show, Stephen Christensen built an incredible Max patch that allowed him to place sound anywhere he wanted in the dome. Part of our conversation from the beginning was making sure that media could send location data about puppets or assets – we wanted the voice of the puppeteers to always be able to follow the movement of the puppets across the dome. This meant creating an OSC stream for sound that carried the location of the puppets, as well as any other go or value changes for moments where sound and media needed to be paired together.

Screenshot_032314_114946_AM

Communicating with sound wasn’t just a one-way street though. Every day the Wonder Dome had a 90 minute block of free time when festival visitors were allowed to explore the dome and interact with some of the technology outside of the framework of the show. One of the components that we built for this was a 3D environment that responded to sound, animating the color and distribution of objects based on the highs, mids, and lows from the music that was being played. Here sound did the high, mid, and low processing on its end, and then passed me a stream of OSC messages. To get a smoother feel from the data I used a Lag CHOP before using it to drive any parameters in my network.
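The Lag CHOP does that smoothing for you, but if you’re curious what it amounts to, a simple one-pole filter in plain Python captures the same flavor – the factor below is an arbitrary example value, not a setting from the show:

def smooth(samples, factor=0.25):
    # factor near 0 = lots of lag, factor near 1 = very responsive
    value = samples[0]
    out = []
    for s in samples:
        value += factor * (s - value)
        out.append(round(value, 3))
    return out

print(smooth([0.0, 1.0, 1.0, 0.2, 0.9]))  # the jumps get eased out over a few samples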

Components and Reuse

Perhaps the most important lesson to be learned from this project was the importance of developing solid reusable components. This, again, isn’t anything revolutionary, but it is worth remembering whenever working on a new project. The components that you build to use and reuse can make or break your efficiency and the speed of your workflow. One example of this would be the tool that we created for placing content on the dome. Our simple tool for moving images and video around the dome would be used time and again throughout the project, and if I hadn’t taken the time early on to create something that I intended to reuse, I would have instead spent a lot of time re-inventing the wheel every time we needed to solve that problem.

Screenshot_032314_114332_AM

In addition to using this placement tool for various pieces of media in the show, this is also how we placed the puppets. During the development phase of this tool I thought we might want to be able to drive the placement of content from an iPad or another computer during tech. To make this easier, I made sure that there was a mechanism embedded in the tool to allow for easy control from multiple inputs. This meant that when we finally decided to adapt this tool for use with the puppets, we already had a method for changing their location during the show. There are, of course, limits to how much anyone can plan ahead on any project, but I would argue that taking the time to really think about what a component needs to do before developing it makes good sense. I also made use of local variables when working with components in order to make it easier to enable or disable various pieces of the tool.

Screenshot_032314_114451_AM

You can read more about some of this process here:

3D Solutions for a 2D World
Container Display

Documentation and Comments

comment example

I nearly forgot to mention one of the most critical parts of this process: documentation and commenting. If I hadn’t commented my networks I would have been lost time after time. One of the most important practices to develop and to continue is good commenting. Whenever I was working on something that I couldn’t understand immediately by just looking at it, I added a comment. I know that some programmers use the ability to insert comments with individual operators, but I haven’t had as much success with that method. Personally, I find that inserting a Text DAT is the best way for me to comment. I typically write in a text editor using manual carriage returns. I also make sure that I date my comments, so if I make a change I can leave the initial comments and then append the comment with new information. I can’t say enough about the importance of commenting – especially if you’re working with another programmer. Several times during the process I would help lighting solve a problem, and good commenting helped ensure that I could communicate important details about what was happening in the network to the other programmer.

I think it’s also important to consider how you document your work. This blog often functions as my method of documentation. If I learn something that I want to hold onto, or something that I think will be useful to other programmers, then I write it down. It doesn’t do me any good to solve the same problem over and over again – writing down your thoughts and process helps you organize your approach. There have been several times when I find shortcuts or new efficiencies in a process only when I’m writing about it – the act of taking it all apart to see how the pieces connect makes you question what you did the first time and whether there’s a better way. At times it can certainly feel tedious, but I’ve also been served time and again by the ability to return to what I’ve written down.

 

 

TouchOSC | Serious Show Control

TouchOSC Buffet

I know. You love your iDevice / Android. You love your phone, your tablet, your phablet, your you name it. You love them. Better yet, you’ve discovered Hexler’s TouchOSC and the thought of controlling your show / set / performance set you on fire – literally. You were beside yourself with glee and quickly set yourself to the task of triggering things remotely. Let’s be honest, it’s awesome. It’s beyond awesome, in some respects there are few things cooler than being able to build a second interface for a programming environment that works on your favorite touch screen device. But before we get too proud of ourselves, let’s have a moment of honesty. True honesty. You’ve dabbled and scrambled, but have you ever really sat down to fully understand all of the different kinds of buttons and switches in TouchOSC? I mean really looked at them and thought about what they’re good for? I know, because I hadn’t either. But it’s time, and it’s time in a big way.

TouchOSC_Editor_-_Untitled_1_touchosc

First things first, make sure you download a copy of the TouchOSC editor for your OS of choice. If you happen to be using Windows, save yourself a huge hassle and make sure that you download the version of the editor (32 or 64 bit) that corresponds to the version of Java that you’re running – otherwise you’ll find yourself unable to open the editor and be grumpy. Great. Now in the editor create a new control setup. Make sure you’re configured to work on the device that you’re planning on playing with / using the most. In my case I’m working with an iPad layout. I prefer to use my iPad in a horizontal orientation most of the time for show control. I’m also only working with one page for now, and happy to leave it named “1”. You’ll notice that the box next to auto for OSC is checked – we’ll leave that for now. Alright, now we’re going to do something that most of us have never done.

In your empty control panel, right click and add one of every different kind of object. Yep. Add one of each so we can look at how they all work. You might choose to skip the repetition of vertical and horizontal sliders – that’s a-okay, but make sure you pick one of them. By the end you should have something that looks like this:

TouchOSC_Editor_-_Untitled_1_touchosc

That’s certainly not sexy, but it’s going to teach you more than you can imagine. Next, upload that interface to your device of choice. If you’ve never uploaded a new interface to TouchOSC you’ll need to know the IP address of your computer. If you’re using a Windows machine you can use the command prompt and ipconfig to find this information; on a Mac use the Network pane in System Preferences. Next you’ll want to transfer this layout to your device. Hexler has a wonderful piece of documentation about how to get this done, and you can read it here.

Next take a moment to read through what all of those lovely widgets do and how they talk to your programming environment. After that the real fun starts. Start listening to your network with your programming environment of choice, and look at what kinds of messages you get from all of these different kinds of buttons and sliders.

Isadora

If you’re working with Troikatronix Isadora you can start to see the signal flow coming from your layout by first going to the communications drop down menu, and then selecting “Stream Setup.”

Communications_and_Menubar

Next you’ll want to select “auto detect input.”

Menubar_and_Untitled

Now you can start moving sliders and toggling buttons so you can record their address. Make sure that you select “Renumber Ports” and click “OK” on the bottom of the page.

Menubar_and_Untitled

Now you’re ready to use the OSC listener actor to use those inputs to drive any number of elements inside of your scene. You’ll notice that the “channel number” on the listener actor corresponds to the port number on the Stream-Setup dialog.

Menubar_and_Untitled

 

Now you’re off to the races. Do something exciting, or not, but play with all of the buttons, sliders, and gizmos so you know how they all work. For extra credit, remember that you can also send messages back to your layout. To do this you need to use the OSC Transmit actor. You’ll also need to know the IP address of your device, and the port that you’re transmitting to – you can find all of this information on the same page in TouchOSC where you initially set your target IP address.

Quartz Composer

If you’re working with Apple’s Quartz Composer you have to do a little more work to get your OSC stream up and running. For starters, you’ll need to know a little bit more about your OSC messages. You’ll notice in the TouchOSC editor that each object has an address that’s associated with it. For example, my fader is ” /1/fader1 “. To TouchOSC this reads as Fader 1 on Page 1 of your layout. In order to read information from this slider in Quartz, we’ll need to know this address, along with the type of information that we’re sending. In the case of a slider we’re sending a float (a number with fractional parts). To get started let’s open up a new Quartz Composer patch, and add an “OSC Receiver” from the library.

Menubar_and_Untitled_-_Editor

Now let’s take a closer look at that node. If we look at the inspector, and at the settings page we can see lots of useful information.

Menubar_and_OSC_Receiver_and_Untitled_-_Editor

Here we can see what port we’re set to receive information on by default, as well as an example of how we need to format the keys from our layout so Quartz can properly receive the signals coming over the network. Let’s set up our fader to be properly decoded by Quartz. First we’ll need to remove the test key. Next we’ll want to add a new key by typing /1/fader1 in the dialog box and designating this as a float. Finally, click the plus sign to add this key to the receiver. I usually add something else to my Quartz patch to make sure that I’m passing values through my network, but this is optional.

Menubar_and_OSC_Receiver_and_Untitled_-_Editor

There you go. Now, to add the other buttons and sliders from your layout, you’ll need to similarly add keys for each of the buttons, sliders, and gizmos. Remember that you can find this information in your TouchOSC editor for this layout:

TouchOSC_Editor_-_Untitled_1_touchosc

Now you’re cooking with gas. Experiment with your layout, and with how you can use these buttons and sliders to drive your Quartz Composer creations. For extra credit remember that you can also transmit back to your device. In this case you’ll need to use the “OSC Sender” object in Quartz. You’ll need to know the IP address of your target device, as well as the port that you’re communicating on. Have fun, and make something interesting… or if nothing else, learn something along the way.

TouchDesigner

If you’re working with Derivative’s TouchDesigner you have two options for receiving OSC information – CHOPs or DATs. Both of these methods work well but might have different uses for different situations.

Screenshot_032514_083023_PM

Let’s start by looking at the OSC In CHOP. First we need to specify what port we want to listen to. Take a look at the TouchOSC layout to make sure the ports and addresses match. Then start moving sliders and buttons to see them appear in TouchDesigner.

Screenshot_032514_083345_PM

To use these to drive various parts of your network you can either use a select CHOP or reference these channels with expressions.

Next let’s look at what we see coming from the OSC In DAT.

Screenshot_032514_083545_PM

Here instead of a changing signal we see a table with a message header and our float values coming in as static messages. Depending on the circumstance, one or the other of these methods is going to help you drive your network.

For extra credit use the OSC Out CHOP to push values back to your TouchOSC layout – remember that you’ll need the IP address of your device, the port you’re receiving messages on, and you’ll need to format the header of the message correctly so that it corresponds to the right slider or button. Have fun learning and playing with your layout of all of the different kinds of controls.

TouchDesigner | Evaluate DAT Magic

Let’s say you’re an ambitious person, maybe too ambitious, and all of a sudden you have a control panel with 68 buttons that need to trigger various scenes, effects, and transitions. Let’s also say that your TouchDesigner network is vast enough that you need (and I mean in a big way) a simple method for getting information about what’s active in your network. If you find yourself programming the media for a live theatrical event with these same kinds of concerns, the Evaluate DAT might just be the best friend that you didn’t know you needed.

My example above might sound a little far fetched, so let’s instead look at a more concrete example. Let’s imagine that we have a digital puppet that we’d like to trigger to enter and exit with a button set to toggle. Just to make this more interesting, let’s say that you actually have three puppets and three visual effects that you’d like to all control with buttons from a single control panel. If all of your puppets and effects are all located in a similar place this might not be a huge hassle for you, if however, you’re building a complicated network it will soon become very important to be able to call the states of these buttons across your network.

Here’s my example set-up for us – a control panel that has all of our controls, a base component where we’re going to store the status of our buttons, and a scene where our puppet needs to make an entrance and an exit based on our button’s active state.

Screenshot_031614_012640_AM

I’m going to assume that you’re already familiar with working with control panels in TouchDesigner, but if that’s not the case you can learn a little more here before you keep reading. Let’s start by taking a look inside of our “trigger_ref” base component where we are going to store the status of our buttons to use in other places in our network.

Screenshot_031614_013152_AM

This should look pretty impressively boring at first glance. To really get a feel for what’s happening here, let’s take a closer look at how this is being driven by our control panel.

evaluate DAT in action

Alright, so here we can see that Puppet 1, 2, and 3 all act as Toggle switches, VFX 1 acts as a momentary trigger, and VFX2 and VFX3 both act as toggles. Before we start to look at why this is important, let’s first see how this is working.

To really make our Evaluate DAT sing we need to start by feeding it a table with some instructions about what information we would like it to evaluate. In my case I’ve made a table with a name for the item that I’d like to call in one column, and the pathway to the operator and channel that’s being evaluated in the other column.

Screenshot_031614_013937_AM
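Written out as plain text, a couple of rows from that instruction table might look something like this – the operator paths (and the out1 CHOP inside the button) are placeholders for my example layout rather than anything you have to match exactly:

"Puppet1"    int(op('../ctrl/button0/out1')['v1'])
"Puppet2"    int(op('../ctrl/button1/out1')['v1'])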

If we were to write these instructions in English they might read something like: “TouchDesigner, please watch operator ‘button0’ in ‘ctrl’ and tell me when its v1 channel changes, and give that the name ‘Puppet1’.” It’s important to note that the formatting and syntax of your request to the Evaluate DAT matters. In my case I needed to put the names in the first column in quotations. I also made sure to tell the Evaluate DAT that I wanted an integer (you’ll notice that I’ve wrapped the operator’s pathway in int() to specify that I want an integer returned to me). Next, in the Evaluate DAT I need to check a few settings in the operator’s parameters.

Screenshot_031614_014828_AM

The most important parameter to take note of here is that I’ve told the Evaluate DAT that I’d like the output in Expressions. This ensures that my input table is being evaluated.

Okay, but why is this interesting / important / worth your attention? While it might be tempting to avoid an organization method like this, it means that when you’re working with a large control panel, or a large cue stack, that you can refer to a button by its name rather than by a complicated network pathway. Better yet, once you write the expression to call this value from our Evaluate DAT it becomes easy to copy and paste that expression and only need to change the name of the value that you’re calling. Let’s go back to our puppet example.

Screenshot_031614_020356_AM

Here I have my video that I’d like to change with my puppet button. I’ve already set up my network with a few operators. First I have a constant CHOP and a trigger CHOP. Next I have a Movie In TOP, a Level TOP, and a Null TOP. I’d like to use my trigger CHOP to change the opacity of my movie.

To start let’s make sure that our constant CHOP is set up with a channel that we’ll name “Puppet1”.

Screenshot_031614_021044_AM

Let’s also make sure that our trigger is set to have a sustain value of 1, and let’s make sure that we’re happy with how the trigger is going to fire and release. For now I’m only interested in making sure that my sustain is changed.

Screenshot_031614_020704_AM

Next we’re going to write a quick expression that will allow our Level TOP to reference the trigger CHOP. In your Level TOP, use the following expression on the Opacity parameter on the Post page:

op( "trigger1" )[ "Puppet1" ]

Screenshot_031614_021154_AM

Now we’re going to go back to our constant CHOP to see how this really makes our network programming fun. Here for the value of the Puppet1 channel we’re going to write the following expression:

op( “../trigger_ref/buttons” ) [ "Puppet1" , 1 ]

Screenshot_031614_021552_AM

Now, let’s finally see what this means when we use this in conjunction with our control panel.

puppet inout

Great, so now we can toggle the opacity of a video on and off with a button. What’s important here is what happens if you suddenly decide that you want this action to happen with the Puppet 2 button instead. Now, rather than having to look up where your original button was, along with its reference, you can instead just change the expression in the Constant CHOP to be:

op( “../trigger_ref/buttons” ) [ "Puppet2" , 1 ]

Right? We’ve changed all of those button values into a table that we can centrally reference by name. If you’re just getting started with TouchDesigner this might not seem like a huge revelation, but once you start to build more complicated networks the ability to call something by a name (maybe even a name that’s in your cue sheet) becomes hugely important.

TouchDesigner | Container Display

A quick one today that addresses problems I didn’t know that I had until I decided that I wanted to show off. In Wonder Dome I’m finding that sometimes the best way to build a scene is to use methods encapsulated inside of a container that make some change to some source imagery. This approach gives me quick access to the parameters that I’m the most likely to use in a given method, without requiring that I always dive into the container to make changes. For example, I’ve built a handy tool for moving video assets around in the dome. This is fantastic, but if I drop this container into another container that needs its own set of buttons I quickly end up with a mess. Here’s an example of what I’m talking about:

Screenshot_030614_010956_PM

Looking at this container from the root folder I can see the control panels for this network, and for its child. There are lots of ways to address this. I might, for example, just dive into the container, find the child container, and turn off its display parameter.

Screenshot_030614_012024_PM

That’s a fine approach if I’m only working with a couple of different scenes, but what happens when I suddenly have 10 or 20 scenes in a given network, each with a number of containers with control panels inside of them? Then I have a headache on my hands as I have to methodically go through and carefully disable each of the containers that I don’t want displayed.

We can solve this problem by using a python expression to conjure some TouchDesigner black magic. I’m going to solve this problem by asking my encapsulated containers to look at their parent operator and to complete a simple logical operation. When I’m ready to lock down a scene in my network, I add a capital “P” to the name of the container – for me this means it’s “P”erformance ready.

Now, instead of toggling that display parameter on my encapsulated container instead I’m going to use this expression:

0 if me.parent().name[0] == "P" else 1

“What on earth does this mean?!”
We can call for the name of our parent with the following expression:

me.parent().name

Better yet, we can ask for a specific character from that name. If we ask for the character in the 0 position (remember that in programming languages 0 is often the first number in a list), we get the first letter from the name of our parent operator. Putting these two ideas together we get the expression:

me.parent().name[0]

Alright, now that’t we have the fist character from our parent operator’s name, let’s do a simple logical operation. Using an if else statement we can set one of two values. In this case I’m toggling between the values 0 or 1 (off or on). The syntax of this starts with the value if your logical operation is true, and then specifies what value to use if that statement is false. With this in mind, the whole expression is:

0 if me.parent().name[0] == "P" else 1

Alright, so if we were to look at our expression as a sentence it might read something like: “Hey TouchDesigner, if my parent’s name starts with the letter ‘P’ set this value to 0, otherwise leave it as 1.”
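If you want to convince yourself of how that conditional behaves before trusting it with a control panel, you can try the same logic in plain Python with a couple of made-up names standing in for containers:

for name in ("Pscene_forest", "scene_forest"):
    display = 0 if name[0] == "P" else 1
    print(name, display)  # prints: Pscene_forest 0, then scene_forest 1

Rename the container so it starts with a capital P and the expression flips the display parameter off; take the P away and it comes right back.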

Screenshot_030614_012941_PM

 

Alright, now that we’ve written our expression in the right place, let’s see what it’s doing.

button expressions

 

Now we have a quick way to turn child control panels on and off without needing to dive into the container and hunt for our container display parameters.

TouchDesigner | Replicators, and Buttons, and Tables, oh my

replicator cue buttons

I want 30 buttons. Who doesn’t want 30 buttons in their life? Control panels are useful for all sorts of operations in TouchDesigner, but they can also be daunting if you’re not accustomed to how they work. In the past when I’ve wanted lots of buttons and sliders I’ve done all of my lay-out work the hard way… like the hardest way possible, one button or slider at a time. This is great practice, and for anyone who is compulsively organized this activity can be both maddening and deeply satisfying at the same time. If you feel best when you’re meticulously aligning your buttons and sliders in perfect harmony, this might be the read for you. If, however, you like buttons (lots of them), and you want to be able to use one of the most powerful components in TouchDesigner, then Replicators might just be the tool you’ve been looking for – even if you didn’t know it.

Replicators, big surprise, make copies of things. On the surface of it, this might not seem like the most thrilling of operators, but let’s say that you want to automate a process that might be tedious if you set out to do it by hand. For example, let’s say you’re designing a tool or console that needs 30 buttons of the same type, but you’d like them to follow a template and have unique names. You could do this one button at a time, but if you ever change something inside of the button (like maybe you want the colors to change depending on their select state), making that change to each button might be a huge hassle. Replicators can help us make that process fast, easy, and just plain awesome.

First thing, let’s make a container and dive inside of it to start making our buttons. Next let’s make a table. I’m going to use my table DAT to tell the replicator how to make the copies of my button. As you start to think about making a table imagine that each row (or column) is going to be used to create our copies. For example, I’ve created a table with thirty rows, Cue 1 – Cue 30.

Screenshot_030614_011703_AM

Next let’s make a Button COMP and rename it button0 (don’t worry, this will make sense in a few minutes).

Screenshot_030614_012057_AM

Alright, now we're gonna do some detailed work to set up our button so some magic will happen later. First up we're going to make a small change to the Align Order of our button. The Align Order parameter of a button establishes the order in which your buttons are arranged in the control panel when you use the Align parameter (if this doesn't make sense right now, hang in there and it will soon). We're going to use a simple python expression to specify this parameter. Expand the field for the Align Order and type the python expression:

me.digits

What’s this all about? This expression is calling for the number that’s in the name of our button. Remember that we renamed our button to button0. Using this expression we’ve told TouchDesigner that the alignment order parameter of this button should be 0 (in many programming languages you start counting at 0, so this makes this button the first in a list).

Screenshot_030614_012854_AM
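If you want to see what me.digits is actually returning, a quick sanity check from a text DAT in the same network does the trick – the button names here are just examples:

print(op('button0').digits)   # prints 0
print(op('button12').digits)  # prints 12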

Alright, now let’s start making some magic happen. Let’s go inside of our button and make a few changes to the background text. If this is your first time looking inside of the button component take a moment to get your bearings – we have a table DAT that’s changing the color of the button based on how we interact with this component, we have some text that we’re using for the background of the button, and we have a few CHOPs so we can see what the state of our button is when it’s toggled or pressed.

Screenshot_030614_013526_AM

Next we’re going to take a closer look at just the text TOP called bg. Looking first at the text page of the parameters dialogue gives us a sense of what’s happening here, specifically we can see that the text field is where we can change what’s displayed on this button. First up we’re going to tell this TOP that we want to pull some data from a DAT. In the DAT field type the path:

../table1

Next change the DAT Row to the following python expression:

me.parent().digits

Finally, clear the text parameter so it's blank. With any luck your button should now read "Cue 1," or whatever you happened to type in the first row of your table.

Screenshot_030614_014601_AM

So what's happening here?! First, we're telling this TOP that we're going to use some information from a table. Specifically, we're going to use a table whose name is table1, and its location is in the parent network (that's what ../ means). If we were to take our network path ../table1 and write it as a sentence it might read something like this – "hey TouchDesigner, I want you to use a table DAT for this TOP; to find it, look in the network above me for something named table1." Next we have an expression that's telling this TOP what row to pull text from. We just used a similar expression when we were setting our alignment order, and the same idea applies here – me.parent().digits as a sentence might read "hey TouchDesigner, look at my parent (the component that I'm inside of) and find the number at the end of its name." Remember that we named this component button0, and that 0 (the digits from that name) corresponds to the first row in our table.
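As a side note, you can get the same result without the DAT and DAT Row parameters by writing a single expression directly into the Text parameter of the bg TOP. This is just a sketch using the names we've already set up:

op('../table1')[me.parent().digits, 0]

Either approach works – the parameter-based version just keeps the table reference visible on the parameter page.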

Now it’s time to blow things up. Let’s add a replicator component to our network.

Screenshot_030614_015654_AM

In the parameters for our replicator first specify that table1 is the template table. Next let's change the Operator Prefix to button to match the convention that we already started. Last but not least, set your Master Operator to be your button0.

Screenshot_030614_020047_AM

If we set up everything correctly we should now have a total of 30 buttons arranged in a grid in our network. They should also each have a unique name that corresponds to the table that we used to make this array.

Screenshot_030614_020310_AM

Last but not least, let’s back up and see what this looks like from the container view. At this point you should be sorely disappointed, because our control panel looks all wrong – in fact it looks like there’s only one button. What gives?

Screenshot_030614_020604_AM

The catch here is that we still need to do a little bit of housekeeping to get to what we're after. In the Container's parameters on the layout page make sure the Align parameter is set to "Layout Grid Columns" or "Layout Grid Rows" depending on which view better suits your needs. You might also want to play with the Align Margin if you'd like your buttons to have a little breathing room.

Screenshot_030614_020954_AM

Bingo-Bango, you've just made some replicator mojo happen. With your new grid of buttons you're ready to do all sorts of fun things. Remember that each of these buttons is based on the template of the master operator that we specified in the replicator. In our case that's button0. This means that if you change that operator – maybe you change the color, or add an expression inside of the button – clicking "recreate all operators" on the replicator remakes your buttons with those changes applied to every one of them.

Happy replicating.


A big thank you to Richard Burns and Space Monkeys for sharing this brief tutorial that inspired me to do some more looking and playing with Replicators:

TouchDesigner | These are the DATs you’ve been looking for

Silly DAT screenshot

If you're new to TouchDesigner, it's easy to feel like DATs are a hard nut to crack. This is especially true if you're also new to programming in general. Scripting can be daunting as you're getting started, but it's also incredibly important – take it from someone who is still learning, dat by dat.

So what's the big deal about DATs anyway? Better yet, why should you care? DATs can help in all sorts of ways, but let's look at a concrete example of how they can help solve some interesting problems that you might face if you're out to save some information to use later.

As the Wonder Dome team has been busy building interfaces, programming methods, and performance tools we’ve hit countless situations where being able to save some data for later use is absolutely necessary.

slider action

Our lighting designer, Adam Vachon, wants to be able to mix color live during a rehearsal and then record that mix into a cue later. Better yet, he might want to create a cue sheet with all of that data saved in a single table so he can quickly recall it during tech. Over in media, we want to be able to place video content in lots of different places across the dome with varying degrees of visual effects applied, and we also want to be able to record that data for later recall.

DATs are a wonderful solution for this particular problem. With a few DATs and some simple scripts we can hold onto the position of our sliders to use later. Let's take a look at how we can make that happen.

First let’s look at a simple problem. I want to be able to add the values from one table to the bottom of another table. If you’re new to programming, this process is called appending. We can see an example of this if we look at two different tables that we want to add together.

two

Here we have two tables, and we'd like to combine them. We can do this by writing a simple script that tells TouchDesigner to take the contents of cells from table2 and to add them to table1 in a specific order. One of the things that's important to understand is how tables are referenced in TouchDesigner. One of the ways that a programmer can pull information from a cell is to ask for the data by referencing the address of the cell. This is just like writing a formula in something like Google Spreadsheets or Excel - you just need to know the name of the cell that you want information from. Let's take a look at how the addressing system works:

Screenshot_030514_124543_AM

Take a moment to study table3 and you'll be referencing cells in a flash. It's just rows and columns, with the only catch being that the numbering system starts at 0. Cool, right? Okay, so if we want to write our script to append cells from one table to another we're going to use this format:

n = op("table1")
m1 = op("table2")[0,0]
m2 = op("table2")[0,1]
m3 = op("table2")[0,2]

n.appendRow( [ m1, m2, m3 ] )

So what's happening here? First we're defining table1 as a variable we're calling n. Next we're naming three new variables m1, m2, and m3. These correspond to the data in the first row of table2, in columns 0, 1, and 2. The next operation in our script is to append n (that's table1) with a new row using the values m1, m2, and m3 in that order. You might decide that you want these added to n in a different order, which is easy, right? All you have to do is change the order that you've listed them in – try making the order of variables in the brackets [ m2, m1, m3 ] instead to see what happens. Alright, at this point our network should look like this:

Screenshot_030514_125145_AM

Now, to run our script we’re just going to right click on text3, and select “Run Script” from the contextual menu.

simple script

Great! Now we’ve successfully appended one table with data from the first row of another table.
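As a side note, if you want the whole row without naming each cell individually, the row method on a DAT hands it back in one call. Here's a sketch using the same table names as above:

n = op('table1')
m = op('table2')

# append the entire first row of table2 to the bottom of table1
n.appendRow(m.row(0))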

If you're still with me, now we can start to make the real magic happen. Once we understand how a script like this works, we can put it to work doing some interesting tasks for us. Let's look at a simple example where we have three sliders whose data we want to be able to save.

Screenshot_030514_125808_AM

To get started, let’s make three slider COMPs, and connect them to a merge CHOP.

Screenshot_030514_125922_AM

Now let's add a CHOP to DAT and point it at our merge CHOP.

chop to

The chopto DAT is a special kind of operator that allows us to see CHOP data in DAT format. This converts our CHOP into a table of three floats. At this point you can probably guess where we're headed – we're going to use the simple script that we just wrote to append the contents of our chopto to another table. Before we get there, we still need to get a few more ducks in a row.
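As a quick aside, if you ever want to peek at one of those floats from a script, you can index the chopto just like any other table. A sketch, assuming the default name chopto1:

# read the value of the first slider out of the chopto as a float
val = float(op('chopto1')[0, 0])
print(val)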

Next let’s create a table with one row and three columns. Name these columns anything you want, in my case I’m going to call them (rather generically) Value 1, Value 2, and Value 3. I’m also going to create a big empty table, and finally I’m going to connect both of these with a merge DAT. Why two tables? I want my first table to hold my header information for the final table. This way I can clear the whole table of saved floats without also deleting the first row of my final table.

Screenshot_030514_010521_AM

As a quick reminder, the names of your DATs are going to be very important when we start to write our script. The names of our DATs are how we identify them, and consequently how we point TouchDesigner to the data that we want to use.

Next I'm going to add a button COMP to my network, and a panel execute DAT. In the panel execute DAT I'm going to make sure that it's looking at the operator button1 and watching for the panel value select. I'm also going to make sure that the On to Off box is checked – this tells the DAT when to run the script. Next I'm going to slightly alter the script we wrote earlier so it works with our tables here. I'm also going to make sure that the script is in the right place in the DAT. Take a closer look at the example below to see how to format your DAT.

Screenshot_030514_011212_AM
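If the screenshot is hard to make out, here's roughly the shape of what ends up inside the panel execute DAT. This is only a sketch – I'm assuming the onToOff callback name my build uses, and the operator names are the ones from my network, so swap in your own:

def onToOff(panelValue):
    # append the current slider values from the chopto to our storage table
    n = op('table2')   # the big empty table we're appending to
    m = op('chopto1')  # the chopto holding our three slider floats
    n.appendRow([m[0, 0], m[0, 1], m[0, 2]])
    return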

Alright, now it’s time for DAT table magic. At this point you can make your sliders and button viewer active, and you’re ready to make changes and then record slider states. Happy appending.

slider table action

TouchDesigner | 3D solutions for a 2D world

One of the fascinating pieces of working in TouchDesigner is the ability to use 3D tools to solve 2D problems. For the last seven months or so I've been working on Daniel Fine's thesis project – Wonder Dome. Dome projection is a wild ride, and one of the many challenges we've encountered is thinking about how to place media on the dome without the time intensive process of pre-rendering all of the content specifically for this projection environment. To address some of these issues we started working with Los Angeles based Vortex Immersion Media, whose lead programmer is TouchDesigner specialist Jeff Smith of Eve Vapor. Part of the wonderful opportunity we've had in working with Vortex is getting an early look at Jeff's custom built Dome Mapping tool. Built exclusively in TouchDesigner, it's an incredibly powerful piece of software designed to make the dome warping and blending process straightforward.

The next step in the process for us was to consider how we were going to place content on the interior surface of the dome. The dome mapping tool that we're using takes a square raster as an input, and can be visualized by looking at a polar array. If you're at all interested in dome projection, start by looking at Paul Bourke's site – the wealth of information there has proven invaluable to the Wonder Dome team as we've wrestled with dome projection challenges. This square image is beautifully mapped to the interior surface of the dome, making placing content a matter of considering where on the array you might want a piece of artwork to live.

There are a number of methods for addressing this challenge, and my instinct was to build a simple TouchDesigner network that would allow us to see, place, and manipulate media in real time while we were making the show. Here’s what my simple asset placement component looks like:

asset placement

This component-based approach makes it easy for the design and production team to see what something looks like in the dome in real time, and to place it quickly and easily. Additionally, this component is built so that adding animation for still assets is simple and straightforward.

Let’s start by taking a look at the network that drives this object, and cover the conceptual structure behind its operation.

Screenshot_022814_125736_AM

In this network we have a series of sliders that are controlling some key aspects of the media – orientation along a circular path, distance from the center, and zoom. These sliders also pass out values to a display component to make it easy to take note of the values needed for programming animation.

We also have a render chain that’s doing a few interesting things. First we’re taking a piece of source media, and using that to texture a piece of geometry with the same aspect ratio as our source. Next we’re placing that rectangle in 3D space and locking its movement to a predefined circular pathway. Finally we’re rendering this from the perspective of a camera looking down on the object as though it were on a table.

Here I'm using a circle SOP to create the circular pathway that my Geo COMP will rotate around. I ended this network in a null so that if we need to make any changes I won't have to change the export settings for this pathway.

Screenshot_022714_095918_AM

You’ll also notice that we’re looking at the parameters for the circle where I’ve turned on the bulls-eye so we’re only seeing the parameters that I’ve changed. I’ve made this a small NURBS curve to give me a simple circle.

The next thing I want to think about is setting up a surface to be manipulated in 3D space. This could be a rectangle or a grid. I'm using a rectangle in this particular case, as I don't need any fancy deformation to be applied to this object. In order to see anything made in 3D space we need to render those objects. The render setup looks like a simple chain of operators: a geo COMP, a camera COMP, a light COMP, and a render TOP. In order to render something in 3D space we need a camera (a perspective that we're viewing the object from), a piece of geometry (something to render), and a light (something to illuminate the object).

Screenshot_022814_125903_AM

We can see in my network below that I’ve used an in TOP so that I can feed this container from the parent portion of the network. I’ve also given this a default image so that I can always see something in my container. You might also notice that while I have a camera and a geo, I don’t have a light COMP. This is because I’m using a material type that doesn’t require any lighting. More about that in a moment. We can also see that my circle is being referenced by the Geo, and that the in TOP is also being referenced by the Geo. To better understand what’s happening here we need to dive into the Geo COMP.

Screenshot_022814_125926_AM

Inside of the Geo COMP we can see a few interesting things at work. One thing you'll notice is that I have a constant MAT and an info CHOP inside of this object. Both of these operators are referencing the in TOP that's in the parent network. My constant MAT references the in TOP to establish what gets applied to the Geo as a material. My info CHOP gives me quick access to several of the attributes of my source file, including the resolution of the source media. I can use this information to determine the aspect ratio of the source, and then make sure that my rectangle is sized to match. Using this process I don't have to rely on a particular aspect ratio for my source material – I can pass this container any shape of rectangular image, and it will size itself appropriately.
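For example, the size parameters on the rectangle SOP might use an expression along these lines – a sketch, assuming the Info CHOP is named info1 and reports resx and resy channels for the TOP the way it does in my build:

op('info1')['resx'] / op('info1')['resy']

Putting that in Size X and leaving Size Y at 1 keeps the rectangle matched to the aspect ratio of whatever image arrives at the in TOP.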

Initially I just had three sliders that controlled the placement of my media in this environment. Then I started thinking about what I would really need during our technical rehearsals. It occurred to me that I would want the option to place the media on the surface of the dome from a position other than behind the media server. To address this need I built a simple TouchOSC interface to replicate my three sliders. Next I captured that OSC information with TouchDesigner, and then passed that stream of floats into my container.

From here I suddenly had to do some serious thinking about what I wanted this object to do. Ideally, I wanted to be able to control the container either from the media server or from a remote access panel (TouchOSC). I also wanted the ability to record the position information that was being passed so I could use it later. This meant that I needed to think about how I was going to capture and recall the same information from three possible sources. To do this I first started by packaging my data with merge CHOPs. I also took this opportunity to rename my channels. For example, my OSC data read osc_rot, osc_dist, osc_zoom – the rotation, distance, and zoom sliders from my TouchOSC panel. I repeated this process for the sliders, and for the table that I was using. I also knew that I wanted to rename my stream and pass it all to a null CHOP before exporting it across the network. To keep my network a little more tidy I used a base to encapsulate all of the patching, selecting, and switching that needed to happen for this algorithm to work properly.

Screenshot_022814_010413_AM

Inside of the base COMP we can see that I'm taking my three in CHOPs, selecting for the appropriate channel, passing this to a switch (so I can control what value is driving the rendering portion of my network), and then back out again. You may also notice that I'm passing the switch values to a null, and then exporting that to an opViewer TOP. The opViewer TOP creates a rendered image of the channel operator at work. Why would I do this? Well, I wanted a confidence monitor for my patch-bay. The base COMP allows you to assign a TOP to its display. Doing this meant that I could see into a portion of the base without having to actually be inside of this component.

Screenshot_022814_010506_AM

With all of the patching setup, I needed to build an interface that would control all of these changes. I also needed a way to capture the values coming out of TouchOSC, store them in a table, and then recall them later.

Screenshot_022814_010616_AM

The solution here was to build a few buttons to drive this interface. To drive the switch CHOP in my base component, I used three buttons encapsulated inside of a container COMP and set to operate as radio buttons. I then used a panel CHOP in the container to export which button was currently toggled into the on position. Next I added a button COMP to record the values set from TouchOSC. Using a CHOP to DAT I was able to capture the float values streaming into my network, and I knew that what I wanted was to be able to copy a set of these values to a table. To do this I used a panel execute DAT. This class of DAT looks at the panel properties of a specified container (buttons and sliders also qualify here), and runs a script when the specified conditions in the DAT are met.

This is the portion of the network that gave me the most headache. Understanding how these DATs work, and the best method of working with them, took some experimentation. To troubleshoot this, I started by writing my script in a text DAT and running it manually. Once I had a script that was doing what I wanted, I then set to the task of better understanding the panel execute DAT. For those interested in Python scripting in TouchDesigner, here's the simple method that I used:

m = op("chopto1")
n = op("table1")
n.copy(m)

Here the operator chopto1 is the DAT that is capturing the OSC stream. The operator table1 is an empty table that I want to copy values to – it's the destination for my data. The Python method copy starts by specifying the destination, and then the source that you want to pull from.

Screenshot_022814_010809_AM
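One thing worth noting is that copy replaces the entire contents of the destination table every time it runs. If you'd rather keep a running history of saved positions, appending works just as well here – a sketch with the same operator names:

m = op('chopto1')
n = op('table1')

# add the current OSC values to the bottom of the table instead of overwriting it
n.appendRow([m[0, 0], m[0, 1], m[0, 2]])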

Finally ready to work with the panel execute DAT, I discovered that all of my headaches were caused by misplacing the script. To get the DAT to operate properly I just had to make sure that my intended script was between the parameter specified, and the return call.

Screenshot_022814_010926_AM

One last helpful hint / tip that I can offer from working on this component is how to specify the order of your buttons in a container. One handy feature in the container parameters page is your ability to have TouchDesigner automatically array your buttons rather than placing them yourself. The catch: how do you specify the order the buttons should appear in? If you look at the parameter page for the buttons themselves, you'll notice that they have a smartly named parameter called "Alignment Order." This sets their alignment order in the parent control panel.

If I’ve learned nothing else, I have learned that sometimes it’s the simplest things that are the easiest to miss.

TouchDesigner | Animation Comp

The needs of the theatre are an interesting bunch. In my time designing and working on media for live productions I’ve often found myself in situations where I’ve needed to playback pre-built content, and other times when I’ve wanted to drive the media based on the input of the performers or audience. There have also been situations when I’ve needed to control a specific element of the media, while also making space for some dynamic element.

Let's look at an example of this so we can get to the heart of the matter. For a production that I worked on in October we used Quartz Composer to create some of the pieces of media. Working with Quartz meant that I could use sound and video inputs to dynamically drive the media, but there were times when I wanted to control specific parameters with a predetermined animation method. For example, I wanted to have an array of cubes that were rotating and moving in real time. I then wanted to be able to fly through the cubes in a controlled manner. The best part of working with Quartz was my ability to respond to the needs of the directors in the moment. In the past I would have answered a question like "can we see that a little slower?" by saying "sure – I'll need to change some key-frames and re-render the video, so we can look at it tomorrow." Driving the media through Quartz meant that I could say "sure, let's look at that now."

In working with TouchDesigner I've come up with lots of different methods for achieving that same end, but all of them have ultimately felt a little clunky or awkward. Then I found the Animation Component.

Let’s look at a simple example of how to take advantage of the animation comp to create a reliable animation effect that we can trigger with a button.

Let’s take a look at our network and talk through what’s happening in the different pieces:

Screenshot_011514_125716_AM

First things first let’s take a quick inventory of the operators that we’re using:

Button Comp – this acts as the trigger for our animation.
Animation Comp – this component holds four channels of information that will drive our torus.
Trail CHOP – I'm using this to get a better sense of what's happening in the animation Comp.
Geometry Comp – this is holding our 3D assets that we're going to change in real time.

Let's start by looking at the Animation Comp. This component is a little bit black magic in all of the best ways, but it does take some exploring to learn how to best take advantage of it. The best place to start when we want to learn about a new operator or component is at the wiki. We can also dive into the animation comp and take a closer look at the pieces driving it, though for this particular use case we can leave that alone. What we do want to do is look at the animation editor. We can find this by right clicking on the animation comp and selecting "Edit Animation…" from the pop-up menu.

open animation editor

We should now see a new window at the bottom of the screen that looks like a time-line.

Screenshot_011614_113551_PM

If you’ve ever worked with the Graph Editor in After Effects, this works on the same principle of adding key frames to a time line.

In thinking about the animation I want to create, I know that I want the ability to affect the x, y, and z position of a 3D object, and I want to control the amount of noise that drives some random-looking distortion. Knowing that I want to control four different elements of an object means that I need to add four channels to my animation editor. I can do this by using the Names dialog. First I'm going to add my "noise" channel. To do this I'm going to type "noise" into the name field, and click Add Channels.

Screenshot_011614_114525_PM

Next I want to add three channels for some object translation. This time I'm going to type the following into the Names field: "trans[xyz]".

Screenshot_011614_114908_PM

Doing this will add three channels all at once for us – transx, transy, transz. In hindsight, I’d actually do this by typing trans[XYZ]. That would mean that I’d have the channels transX, transY, transZ which would have been easier to read. At this point we should now have four channels that we can edit.

Screenshot_011614_115144_PM

Let's key frame some animation to get started – if we want to change things we can always come back to the editor. First, click on one of your channels so that it's highlighted. Now along the time line you can hold down the Alt key to place a key frame. While you're holding down the Alt key you should see a yellow set of cross hairs that show you where your key frame is going. After you've placed some key frames you can then translate them up or down in the animation editor, change the attack of their slope, as well as their function. I want an effect that can be looped, so I'm going to make sure that my first and last key frames have the same values. I'm going to repeat this process for my other channels as well. Here's what it looks like when I'm done:

Screenshot_011614_115803_PM

Here we see a few different elements help us understand the relationship of the editor to our time line. We can see 1 on the far left, and 600 (if you haven’t changed the duration of your network) on the right. In this case we’re looking at the number of frames in our network. If we look at the bottom left hand corner of our network we can see a few time-code settings:

Screenshot_011614_115829_PM

There's lots of information here, but for now I just want to talk about a few specific elements. We can see that we start at Frame 1 and end at Frame 600. We can also see that our FPS (Frames Per Second) is set to 60. With a little bit of math we know that we've got a 10 second window. Coming from any kind of animation work flow, the idea of a frame based time line should feel comfortable. If that's not your background, you can start by digging in at the Wikipedia page about frame rate. This should help you think about how you want to structure your animation, and how it's going to relate to the performance of your geometry.

At this point we still need to do a little bit of work before our animation editor is behaving the way we want it to. By default the Animation Comp’s play mode is linked to the time line. This means that the animation you see should be directly connected to the global time line for your network. This is incredibly powerful, but it also means that we’re watching our animation happen on a constant loop. For many of my applications, I want to be able to cue an animation sequence, rather than having it run constantly locked to the time line. We can make this change by making a few adjustments in the Animation Comp’s parameters.

Before we start doing that, let’s add an operator to our network. I want a better visual sense of what’s happening in the Animation Comp. To achieve this, I’m going to use a Trail CHOP. By connecting a Trail CHOP to the outlet of the animation comp we can see a graph of change in the channels over time.

Screenshot_011714_121051_AM

Now that we’ve got a better window into what’s happening with our animation we can look at how to make some changes to the Animation Comp. Let’s start by pulling up the Parameters window. First I want to change the Play Mode to “Sequential.” Now we can trigger our animation by clicking on the “Cue Point” button.

Screenshot_011714_122911_AM

To get the effect I want, we still need to make a few more changes. Let’s head to the “Range” page in the parameters dialog. Here I want to set the Trim Right to “Hold” its value. This means that my animation is going to maintain the value that is at the last key frame. Now when I go back to the Animation page I can see that when I hit the cue button my animation runs, and then holds at the last values that have been graphed.

trail animation

Before we start to send this information to a piece of geometry, let's build a better button. I've talked about building buttons before, and if you need a primer take a moment to skim through how buttons work. Add a Button Comp to your network, and change its Button Type to Momentary. Next we're going to make the button viewer active. Last but not least, we're going to use the button to drive the cue point trigger for our animation. In the Animation Comp click on the small "+" button next to Cue. Now let's write a quick reference expression. The expression we want to write looks like this:

op("button1/out1")["v1"]

Screenshot_011714_123836_AM

Now when you click on your button you should trigger your animation.
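If you want to double check that the channels are actually moving, you can also drop a text DAT next to the animation COMP and run a quick sketch like this – the names are the ones from this network:

a = op('animation1/out')
print(a['transz'].eval(), a['noise'].eval())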

At this point we have some animation stored in four channels that’s set to only output when it’s triggered. We also have a button to trigger this animation. Finally we can start to connect these values to make the real magic happen.

Let’s start by adding a Geometry COMP to our network. Next lets jump inside of our Geo and make some quick changes. Here’s a look at the whole network we’re going to make:

Screenshot_011714_124226_AM

Our network string looks like this:

Torus – Transform – Noise

We can start by adding the transform and the noise SOPs to our network and connecting them to the original torus. Make sure that you turn off the display and render flags on the torus1 SOP, and turn them on for the noise1 SOP.

Before I get started there are a few things that I know I want to make happen. I want my torus to have a feeling of constantly tumbling and moving. I want to use one of my channels from the Animation COMP to translate the torus, and I want to use my noise channel to drive the amount of distortion I see in my torus.

Let's start with translating our torus. In the Transform SOP we're going to write some simple expressions. First up, let's connect our translation channel from the Animation COMP. We're going to use relative paths to pull the animation channel we want. Understanding how paths work can be confusing, and if this sounds like Greek you can start by reading what the wiki has to say about pathways. In the tz line of the transform SOP we're going to click on the little blue box to tell TouchDesigner that we want to write an expression, and then we're going to write:

op("../animation1/out")["transz"]

This is telling the transform SOP that out of the parent of this object, we want to look at the operator named "animation1" and we want the channel named "transz". Next we're going to write some expressions to get our slow tumbling movement. In the rx and ry lines we're going to write the following expressions:

me.time.absFrame * 0.1
me.time.absFrame * 0.3

In this case we're telling TouchDesigner that we want the absolute frame (a number that just keeps counting upwards as long as your network is running) to be multiplied by 0.1 and 0.3, respectively. If this doesn't make sense to you, take some time to play with the values you're multiplying by to see how this changes the animation. When we're done, our Transform SOP should look like this:

Screenshot_011714_125740_AM

Next in the Noise SOP we’re just going to write one simple expression. Here we want to call the noise channel from our Animation COMP. We’ve already practiced this in the Transform SOP, so this should look very familiar. In the Amplitude line we’re going to write the following expression:

op("../animation1/out")["noise"]

When you’re done your noise SOP should look something like this:

Screenshot_011714_010238_AM

Let’s back out of our Geo and see what we’ve made. Now when we click on our button we should see the triggered animation both run the trail CHOP, and our Geo. It’s important to remember that we’ve connected the changes to our torus to the Animation COMP. That means that if we want to change the shape or duration of the animation all we need to do is to go back to editing the Animation COMP and adjust our key frames.

geo animation

There you go – now you've built an animation sequence that's rendered in real time and triggered by hitting a button.