Category Archives: Live Performance

presets and cue building – a beyond basics checklist | TouchDesigner 099

from the facebook help group

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

https://www.facebook.com/groups/touchdesignerhelp/permalink/835733779925957/

I’ve been thinking about this question for a day now, and it’s a hard one. It’s difficult mostly because there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult because general advice deals in meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

  • Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don’t. Your mileage may vary here, so, like any general advice, please think through the implications of your decisions before committing to an idea to implement.
  • The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit. 
  • I’ll be wrong about some things. The scope of anyone’s knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don’t work, or that something I suggest won’t work actually does. As with all advice, it’s okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what’s the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you’re up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn’t matter where you do it, just give yourself an outline to follow – otherwise you’ll get lost along the way, or forget the features that were deal breakers.

Data Structures

These aren’t always sexy, but they’re more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions (a rough sketch of one possible cue entry follows the list):

  • What existing tools do you like – what’s their data structure / solution?
  • How is your data organized – arrays, dictionaries, etc.
  • Do you have a readme to refer back to when you extend your project in the future?
  • Do you have a way to add entries?
  • Do you have a way to recall entries?
  • Do you have a way to update entries?
  • Do you have a way to copy entries?
  • Do you have a validation process in-line to ensure your entries are valid?
  • Do you have a means of externalizing your cues and other config data?
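
To make that concrete, here’s a minimal sketch of what a cue entry and an add-with-validation step might look like. The keys, names, and values here are all hypothetical – the shape of the structure is the important part:

# a hypothetical cue structure - keys and values are placeholders
cues = {
    'cue_001': {
        'tox': 'modules/scene_a.tox',
        'transition': 'fade',
        'duration': 2.5,
    },
}

def add_cue(name, tox, transition='fade', duration=2.0):
    # a simple in-line validation pass before the entry lands in the structure
    if name in cues:
        raise ValueError('cue name already in use: {}'.format(name))
    cues[name] = {'tox': tox, 'transition': transition, 'duration': duration}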

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.
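
As a quick illustration, a controller might stream time to its nodes with nothing more than an Execute DAT and an OSC Out DAT. This is only a sketch – the op name is a hypothetical stand-in, and it assumes an OSC Out DAT already pointed at your nodes:

# Execute DAT callback on the controller - stream absolute time every frame
# 'oscout1' is a hypothetical OSC Out DAT aimed at your render nodes

def onFrameStart(frame):
    # absTime.seconds keeps counting across timeline loops,
    # so nodes always see a continuously increasing clock
    op('oscout1').sendOSC('/controller/time', [absTime.seconds])
    return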

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to time provided by your controller.

The importance of how you work with time in your TOXes, and in your project in general, can’t be overstated. Whatever you decide in regards to time, just make sure it’s a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you’re really after here, so it’s a good start to really dig into what you’re expecting your project to accomplish. If you’re just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.
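
One way to keep yourself honest here is a quick audit script. The sketch below (the target path is hypothetical) walks a component’s children and prints any op whose last cook ran longer than that budget:

# a rough cook-time audit - print children whose last cook exceeded 0.05 ms
target = op('/project1/base_module')  # hypothetical path

for child in target.findChildren(maxDepth=1):
    if child.cookTime > 0.05:
        print(child.path, child.cookTime)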

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are upsides and downsides to using this method, so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you’re being careful with how you’re driving their parameters. Python offers an amazing, extensible scripting language for Touch, but it’s worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of Movie File In TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

icon_library.PNG

icon_library_example.PNG

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.
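
The mechanics of actually using a shared asset are pleasantly dull – a select in your module just points back at the bin. A hypothetical one-liner:

# repoint a module's Select TOP at a texture in a shared assets bin
# rather than loading a new file from disk - all paths here are hypothetical
op('base_module/select_background').par.top = '/project1/shared_assets/tex_city'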

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of a dependency – a given visuals TOX wouldn’t operate correctly in a network that didn’t have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that’s all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven’t started writing extensions, now is the time to start. Cue building and recalling are well suited to this kind of task, as are any number of challenges that you’re going to find. In the past I’ve used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it’s on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it’s turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
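
A stripped-down sketch of that Play(state) pattern might look like the class below – the op names are hypothetical stand-ins for whatever your module actually needs to wake up or put to sleep:

class VisualsModule:

    def Play(self, state):
        # state = 1 turns the module on, state = 0 cleans it up
        if state:
            op('moviefilein1').par.cuepulse.pulse()  # recue the movie
            op('noise1').bypass = False              # wake the heavy ops
        else:
            op('noise1').bypass = True               # quiet things back down
            op('moviefilein1').par.cuepulse.pulse()  # reset for the next Play() call
        return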

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That’s all well and good Matt, but I don’t get it – why should my TOXes all have a fixed number of custom pars? Let’s consider building a data structure for cues. Let’s say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like this (the par names below are hypothetical):
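
# every tox carries its own par names and its own par count - messy
cues = {
    'cue_001': {
        'tox': 'modules/sphere_noise.tox',
        'pars': {'Noiseamp': 0.8, 'Rotspeed': 20},
    },
    'cue_002': {
        'tox': 'modules/particles.tox',
        'pars': {'Birthrate': 500, 'Wind': 0.2, 'Glow': 1.0},
    },
}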

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the Circle K – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy Python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a namespace with the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like the sketch below (again with hypothetical names):
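
# every tox exposes the same custom pars under a shared naming scheme
cues = {
    'cue_001': {
        'tox': 'modules/sphere_noise.tox',
        'pars': {'Val0': 0.8, 'Val1': 20, 'Val2': 0},
    },
    'cue_002': {
        'tox': 'modules/particles.tox',
        'pars': {'Val0': 500, 'Val1': 0.2, 'Val2': 1.0},
    },
}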

Okay, so that’s prettier… So what? If we look back at our lesson on dictionary for loops we’ll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially we’re able to store the par name as the key, and the target value as the value in our dictionary, and we’re just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
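
Using the hypothetical dictionary above, the recall loop stays tiny:

# push a cue's stored values onto the loaded tox
target = op('base_module')  # hypothetical target COMP

for parName, val in cues['cue_001']['pars'].items():
    # pars() returns a list of parameters matching the name
    target.pars(parName)[0].val = val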

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops, be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you’ll never drop below 60, it’s better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black while your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated, and will point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

Richard’s killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I’d always suggest keeping the UI on another machine or in a separate instance. It’s handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc. in. I’ve been very careful to take care of the little details in the UI too, such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that’s a slider, and sometimes even they don’t need them.

I’m with Richard 100% here on all fronts. That said, be mindful of why and when you’re splitting up your processes. It might be tempting to do all of your video handling in one process that passes to another process only for rendering 3D, before going to a process that’s for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need a strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There’s a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

  • “What problem am I solving by adding this level of additional complexity?”
  • “Is there another way to solve this problem without an additional process?”
  • “What are the possible problems / issues this might cause?”
  • “Can I test this in a small way before re-factoring the whole project?”

Don’t Forget a Startup Procedure

How your project starts up matters. Regardless of your asset management process it’s important to know what you’re loading at start, and what’s only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren’t going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they’re ready to be called without any frame drops. Or you might think about a way to automate your start up so you can test to make sure you have all your assets (especially if your dev computer isn’t the same as your performance / installation machine).
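
As a sketch, a start-up script that warms a shared assets bin might be as short as this (the path is hypothetical):

# force cook everything in the shared assets bin at start-up
# so the first recall during the show doesn't drop frames
for child in op('/project1/shared_assets').findChildren(maxDepth=1):
    child.cook(force=True)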

Logging and Errors

It’s not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it’ll be important to see where things went wrong. Sometimes the default logging methods aren’t enough, or they happen too fast. A good logging methodology and format can help with that. You’re welcome to make your own; you’re also welcome to use and modify the one I made.
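
If you do roll your own, it doesn’t need to be fancy to be useful – a sketch along these lines is enough to get timestamped entries into a file (the file name is hypothetical):

import datetime

# a bare-bones file logger
LOG_FILE = 'project_log.txt'

def log(message):
    stamp = datetime.datetime.now().isoformat()
    with open(LOG_FILE, 'a') as f:
        f.write('{} | {}\n'.format(stamp, message))

log('cue_001 loaded')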

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it’s at. Simple, complete proof-of-concept tests that aren’t baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren’t always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

textport for performance | TouchDesigner

consoleText

I love a good challenge, and today on the TouchDesigner slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea about how you might make that happen. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.
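
The original tox isn’t reproduced here, so treat the class below as a close approximation of the idea rather than the exact source – the op and member names are stand-ins:

class ConsoleText:

    def __init__(self):
        # boundaries read by a Select DAT downstream to window the table
        self.StartRow = 0
        self.EndRow = 10
        self.MaxRows = 10

    def Display(self, message):
        # each new entry gets its own row in a table DAT
        op('text_buffer').appendRow([message])

        # once we pass the row limit, walk both boundaries forward
        # so the newest row always stays in view
        if op('text_buffer').numRows > self.MaxRows:
            self.StartRow += 1
            self.EndRow += 1
        return

    def PrintAndDisplay(self, message):
        # calling one method from another inside the class
        print(message)
        self.Display(message)
        return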

Okay, so what exactly are we doing here?!

The big picture is that we want a way to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to a text DAT, allows us to do some simple clean-up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have, we increment both members by one… moving them both along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.

Want to download the tox and take it for a test drive? You can find the source code here.

TouchDesigner | Understanding Extensions

genGeoClass

So you’ve made a killer component that you love using, but you suddenly find yourself wondering how best to re-use it in future projects. You could make a killer control panel for it, or create a more generalized method for passing in values with CHOPs or DATs. You could just resign yourself to some more complex scripting – reaching deep into your component to set parameters one at a time. You could hard code it; you’ll probably be making some job specific changes to your custom component anyway, so what’s a little more hard coding? The 50000 series now features custom parameters, or you could use variables, or storage. Any one of these options might be right for your component, or maybe they’re just not quite right. Maybe what you really need is a little better reach with Python, but without as much head scratching specificity. If you find yourself feeling this way, then extensions are about to make your TouchDesigner programming life so, so much better.

Using extensions requires a bit of leg work on your part as a programmer; it also means that you’ll want to do this for components that you’ll find yourself reusing time and again – after all, if you’re going to take some time to really think about how you want a reusable piece of code to work in a larger system, it only makes sense to do this with something you know will be useful again. That is to say, this approach isn’t right for every circumstance, but the circumstances it is right for will really make a difference for you. We’re going to look at a rather impractical example to give us a lay of the land when it comes to using extensions – that’s okay. In fact, as you’re learning how to apply this approach to your workflow it’s worth practicing several times to make sure you have a handle on the ins and outs of the process.

Before we get too much further, what exactly is this extension business? If you’re to the point with TouchDesigner where you’re comfortable using Python to manipulate your networks, you’ll no doubt have come to rely on a number of methods – anything with a . followed by a call. For example:

  • op('moviefilein1').width – returns the width of the file
  • op('moviefilein1').height – returns the height of the file
  • op('table1').numRows – returns the number of rows
  • op('table1').numCols – returns the number of columns

In each of these examples, it’s the member after the dot that extends how you might think of working with an operator. Custom extensions mean that you, the programmer, are now free to create your own set of extensions for a given component. The classic example for this kind of custom component and custom extension set in TouchDesigner would be a movie player. Once you build a movie player that cross fades between two videos, wouldn’t it be lovely to use something like op('videoPlayer').PlayNext() or op('videoPlayer').Loop()? The big idea here is that you should be free to do just that when working with a component, and custom extensions are a big part of that puzzle. This keeps us from reaching deep into the component to change parameters, and instead allows us to write modular code with layers of abstraction. If you’re still not convinced that this is a powerful feature, consider that when you start a car you’re not in the business of specifying how precisely the starter motor sequences each electrical signal to help the car turn over, or which spark plugs fire in which order – you issue a command, car.start(), with the expectation that the start() function holds all of the necessary information about how the vehicle actually starts. While this example might be reductive, it helps to illustrate the importance of abstraction. It’s impractical for you, the driver, to always be caught up in starting sequences in order to drive a car (one might make an argument against this, but for now let’s roll with the fact that many drivers don’t understand the magic behind a car starting when they turn the key); this embedded function allows the driver to focus on higher order operations – navigation, manipulation, etc. At the end of the day, that’s really what we’re after here – how do we add a layer of abstraction that simplifies how we manipulate a given component?

That’s all well and good, but let’s get to the practical application of these concepts. In this case, before we actually start to see this in action, we need to have a working component to start with. We are going to imagine that we want to build a generative component with a faceted torus that we use in lots of live sets. We want to be able to change a number of different elements for this torus – its texture, background, rotation, and deformation, to name a few. Let’s begin by putting together a simple render network to build this component, and then we can look at how extensions complement the work we’ve already done.

First let’s add an empty Base COMP to our network.

emptyBase

Inside of our new base let’s add a Camera, Geo, and Light COMP, as well as a Render TOP connected to an Out TOP. We’re building a simple rendering network, which should feel like old hat.

simpleRender

Let’s add a movie file in TOP, and a Composite TOP to our network. We’ll composite our render over our movie file in, so we have a background. In the image below only the changed parameters for the Composite TOP are shown.

simpleRenderWithComposit

Next let’s look inside of our geo COMP, and make a few changes. First let’s change our geo to be a polygon rather than a mesh. We’ll also turn off the display and render flags for the torus (don’t worry, we’ll turn them on further down the chain).

torus

Next we’ll add a noise SOP.

Noise SOP

Next we’ll add a facet SOP, turning on unique points and compute normals.

facetSOP

Finally, let’s add a null SOP. On the null, let’s turn on the display and render flags. When it’s all said and done we should have something that looks like this.

noiseTorusChain

Let’s move up one layer out of our geo, back into the base we started in. Here let’s add a phong Material and apply it to our geo. Let’s also add a movie file in TOP connected to a null TOP, and set it as the color map for our phong.

colorMapAndMaterial

While we’re still thinking about our material, let’s make a few changes. Let’s set our diffuse color to white, our specular color to a light gray, and turn up our shininess to 255.

phongNonDefaults

Shown are the non default parameters for the Phong Material.

Let’s also make a few changes to our light COMP. I’m after a kind of shiny faceted torus, so let’s change our light to a cone light, place it overhead and to the right of our geometry, and set it to look at our geo.

Shown are the non default parameters for the Light Component.

I’ve gone ahead and changed the file in my Movie File In TOP to a different default image so I can see the whole torus. In the end you should have a network that looks something like this.

textureTorus

Thinking ahead, I know that I’m going to want to have the freedom of changing a few parameters for this texture. I’d like to be able to control if it’s monochrome, as well as a few of the levels of the image. Let’s add a monochrome TOP and a level TOP between the movie file in and the null TOP.

postProcess

We’re almost ready to start thinking about extensions, but first we need to build a control network to operate our component. Let’s start by adding a constant CHOP and calling it attrAssign. Here I’m going to start thinking about what I want to control in this component. I want to drive the rotation of the x, y, and z axes for our geo, and I want to control the amplitude of the noise, the saturation of our image, the black level, brightness, and opacity. I’m going to think of those parameters as:

  • rx
  • ry
  • rz
  • noiseAmp
  • monoVal
  • blkLvl
  • bright
  • opacity

I’ll start out my constant CHOP with those channel names, and some starting values.

attrAssign

For this particular component, I want to be able to set values, and have it smartly figure out transitions rather than needing to constantly feed it a set of smoothly changing values. There are a couple of different ways we might set this up, but I’m going to take the route of using a speed CHOP for one set of operations, and a filter CHOP to smooth everything out. First I want to isolate the rx, ry, and rz channels, which we can do with a select CHOP. Next we’ll connect that to a speed CHOP. We can merge this changed set of channels back into the stream with a replace CHOP – replacing our static rx ry rz channels with the dynamic ones.

selectSpeedReplace

Finally, we can smooth out our data with a Filter CHOP, and end our chain of operations in a null CHOP.

controlChops

Our last step here is to export or reference each of our control parameters. Our rotation channels should be referenced by our geo1 for rx, ry, and rz. The Noise SOP in geo1 should be connected to the channel noiseAmp, and our image controls should be connected to their respective parameters – Monochrome, Black Level, Brightness, and Opacity (a sketch of the reference expressions follows below). In the end, you should end up with a complete network that looks something like this.

complete BaseCOMP
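
If you’re referencing rather than exporting, the expressions look something like this – the op names match the network above:

# typical reference expressions for this step
# on geo1's Rotate X parameter:
op('null1')['rx']

# on the Level TOP's Black Level parameter:
op('null1')['blkLvl']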

Alright, we now finally have a basic generative component set up, and we’re ready to start thinking about how we want our extensions to work with this bad boy. Let’s start with the simplest ideas here, and work our way up to something more complex. For starters we need to add a text DAT to our network. I’m going to call mine genGeoClass.

genGeo

Let’s add our first class to our genGeoClass text DAT. Our class is going to contain all of our functions that we want to use with our component. There are a few things we need to keep in mind for this process. First, white space is going to be important – tabs matter, and this is a great place to really learn that the hard way. Namespace also matters. We’re eventually going to promote our extensions (more on that later on down), and in order for that to work correctly our functions need to begin with capital letters. That will make more sense as we see that in practice, but for now it’s important to have that tucked away in your mind.

Let’s begin by adding a simple print command. First we define our class, and place our functions inside of the class. When we’re writing a class in Python we need to explicitly place self in our definitions. There are a number of very good reasons for this, and I’d encourage you to read up on the topic if you’re curious:

Why ‘self’ is used explicitly
Why the explicit self has to stay

For our purposes, let’s get our class started with the following:

class GenGeo:

    def SimplePrint( self ):
 
        print( 'Hello World' )
        
        return

Before we can see this in action, we need to set up our base COMP to use extensions. Let’s back out of our base, and take a look at our component parameters.

baseExtensions

Here I’ve set the module reference to be the operator called genGeoClass inside of base1. We can also see that we’re specifically referencing the GenGeo() class that we just wrote. I’ve also gone ahead and turned on promote extensions. Make sure you click "Re-Init Extensions" at the bottom of the parameters page, and then we can see our extension in action.

Next let’s add a text DAT to the same directory as our base1. Here we’ll use the following piece of code to call the SimplePrint() function we just defined:

op( 'base1' ).SimplePrint()

Let’s open our text port, and run our text DAT.

SimpleText

That should feel like a little slice of magic. If you choose not to promote your extensions, the syntax for calling a function looks something like this:

op( 'base1' ).ext.GenGeo.SimplePrint()

Okay, this has been a lot of work so far to only print out "Hello World." How can we make this a little more interesting? I’m so glad you asked. Here’s a set of functions that I’ve already written. We can copy and paste these into our genGeoClass text DAT, and suddenly we have a whole new host of functions we can call that perform some meta operations for us.

class GenGeo:

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

To better understand what all of these do let’s look at a quick cheat sheet that I made:

# Test Print Statement
op( 'base1' ).SimplePrint()

# Set Rows and Columns
op( 'base1' ).TorusPar( 20 , 20 )

# Reset Rows and Columns to 10 x 20
op( 'base1' ).TorusParReset()

# Set the texture of a movie file in TOP
op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )

# Reset the Texture of movie file in TOP
op( 'base1' ).TextureReset()

# Set the Rotation Speed for the x y and / or z axis
op( 'base1' ).Rot( 10 , 15 , 20 )

# Reset the Rotation speed to 0, and the rotation values to 0
op( 'base1' ).RotReset()

# Set the Amplitude parameter of the Noise SOP for the Torus
op( 'base1' ).TorusNoise( 0.8 )

# Make the texture Monochrome
op( 'base1' ).Mono( 1.0 )

# Control the Black Level, Brightness, and Opacity of the Texture
# that's applied to the Torus
op( 'base1' ).Levels( 0.25 , 1.5 , 0.8 )

# Reset all post process effects
op( 'base1' ).PostProcessReset()

# Turn off Background Image - 0 will turn the Background back on
op( 'base1' ).Background( 1 )

This is wonderful, but there’s one last thing for us to consider. Wouldn’t it be great if we had some initialization values in here, so at start-up, or when we make a new instance of this comp, we default to a reliable base state? That would be lovely, and we can set that with an __init__ definition. Let’s add the following to our class:

    def __init__( self ):
 
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1

        return

That means our whole class should now look like this:

class GenGeo:

    def __init__( self ):
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

Alright, so why do we care? Well, this application of extensions frees us to think differently about this component. Let’s say that I want to make a few changes to this component’s behavior. First I want to set a new image to be the texture for the torus, next I want to change the rotation speed on the x and y axes, and finally I want to turn up the noise SOP. Previously, I might think about this by writing a series of scripts that looked something like:

op( 'base1/attrAssign' ).par.value0 = 20
op( 'base1/attrAssign' ).par.value1 = 30
op( 'base1/attrAssign' ).par.value3 = 0.8
op( 'base1/moviefilein1' ).par.file = 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg'

Instead, I can now write that like this:

op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )
op( 'base1' ).Rot( 20 , 30 , 0 )
op( 'base1' ).TorusNoise( 0.8 )

That might not seem like a huge difference here in our example network, but as we build larger and more complex components, this suddenly becomes hugely powerful as a working approach.

extensionsInAction

Check out the example file on GitHub if you get stuck along the way, or want to see exactly how I made this work.

THP 494 & 598 | Touch OSC – A Case Study | TouchDesigner

Core Concepts

  • Hexler’s Touch OSC
  • Simple Network Communication with Open Sound Control
  • Sending Floats from TouchOSC to TouchDesigner
  • Sending Floats from TouchDesigner to TouchOSC
  • Sending Messages from TouchDesigner to TouchOSC

Geometric Landscapes | TouchDesigner

rolling landscape

Working on a new piece to premiere in Mexico I spent a lot of time experimenting with creating landscapes and backgrounds. Searching for a way into this exploration I wanted to play with the idea of instancing objects in 3D space, and the illusion of moving and shifting planes in space. This is already a popular visual style, and I wanted to try my hand at exploring what it might look like to make something like this in TouchDesigner.

I’ve talked about instancing before, and so this challenge seemed like something that would be both fun and interesting to play with. I also wanted something that mixed material methods – shaded, flat, and wireframe in appearance. Let’s take a look at how we can make something interesting happen using these ideas as a starting point.

Rendering is going to make or break us when thinking about how to set up this project. With that in mind, it’s important to remember that a typical rendering set-up needs something to be rendered (some geometry), a perspective from which to draw the object(s) (a camera), and a light source (a light – we don’t always need a light, but as a rule of thumb it’s good to think that we need one). Our Geometry, Light, and Camera are all components, while our render operator is a Texture Operator (TOP). As a point of reference, here’s what a generic rendering set-up looks like:

classic render setup

We can tell our Render TOP to look at multiple Geometry Components in the same network, or we can nest our active surfaces inside of a single Geo. Much of this depends on what you’re looking to create. The most important thing to consider is that surface operators (SOPs) are computed on the CPU unless placed inside of a Geometry COMP – placed inside of a Geo they’re computed on the GPU instead. This makes a huge difference in your performance, and as a best practice it’s good to place any rendered geometry inside of a Geo.

Now let’s take a look at what our rendering set-up is going to look like for making our geometric landscape:

rolling landscape render set-up

Here we can see that we have a similar set-up on the left – a light, a geo, a camera, and a Render TOP. On the right I’ve got the geometry viewer open so we can see a little more about the relationship in the scene of the camera, light, and geometry. We can see that the camera is set above our geometry looking down, we can also see that we have a cone light set-up with a wide angle and a wide delta.

Now that we have a general sense of what we’re making let’s dig-in and make something interesting.

Let’s start with an empty network, and begin by adding a Geo to it.

geo

To get started we’ll need to dive inside of our geo to start making some changes. You can do this by double clicking on the Geo, using the quick key "i" (shortcut for inside), or by scroll wheel zooming into your geo. Inside our Geo we’ll see a torus that has its render and display flags set (the small blue and purple circles on the surface operator).

Inside of our geo let’s start by first deleting the Torus SOP. Next let’s add a Grid SOP. Our grid is going to act as the key generative element for us inside of this network – it’s going to give our surface its wire texture, the shading on the surface, and the location of where our spheres get placed. Our Grid is the central piece of what we’re making, and we’ll see how in just a bit. Once we’ve added a grid to our network, we need to make a few changes to some of its properties. First we want to make sure that it’s set to be a Polygon for Primitive type; we want to make sure that our orientation is set to ZX Plane; finally we want to change the size to 20 x 20.

grid setup

Next we’ll connect our Grid SOP to a Noise SOP. We’re going to use the Noise SOP to drive some of the shifting locations of the points in our grid. Before we move on, let’s make one quick change. On the Transform page in the Noise SOP’s parameters we can see that the translate z parameter is set to change with the second count of our project – me.time.seconds. This is excellent, and it keeps our noise animated over time, but it also means that we’re only working with 600 samples (in a default TouchDesigner network) because me.time.seconds is locked to our timeline. If, instead, we want noise that doesn’t have a hiccup every 10 seconds, we can use the call me.time.absSeconds. This uses the absolute number of seconds that TouchDesigner has been running to drive the transformations in the noise SOP. It’s a small change, but makes for a nicer look (at least in my opinion).

noise SOP

Next we’ll add a Material SOP to our network. Our Material SOP is going to allow us to assign a material to our grid. We’ll do this by also adding a Wireframe Material to our network. Before we assign our wireframe to our material we should see something like this:

before assignment

To assign the wireframe to the material, we’re just going to drag and drop the MAT onto the SOP.

assign MAT

Finally, we’re going to end this string by adding a Null SOP, making sure to turn on the render and display flags.

wire null

At this point we’ve made the wireframe-outlined elements of our grid. We’re now going to use the same data stream that we’ve already programmed to help us create another layer of texture, and to create the locations for our spheres. Let’s start by adding our spheres to the network. To do this we’re going to do some instancing. When we’re instancing we need some location information for where to generate the copies of our source geometry. To get this information we’re going to use a SOP to CHOP to convert our SOP information into CHOP data.

SOPto

Now before we move on we need to change gears for just a moment. What we’re going to do next is to add another Geo inside of our current Geo – in Russian nesting doll fashion. Why? Well, we’re going to do this in order to take advantage of the Geo’s ability to instance. Why not use the Copy SOP? I love me some Copy SOP action, but in this case the use of instancing is more efficient for this particular activity.

So, let’s add another Geo to our network:

new geo

Next we’re going to replace the torus inside of this Geo with a sphere. We’re also going to add a material inside of the geo. Easy, right? I’m going to set my sphere to be pretty small ( 0.09, 0.09, 0.09 ), I’m also going to make sure that I connect my sphere to a Null (in case I want to make any other changes), and then turn on the Null’s display and render flags. Finally I’m going to add a constant Material. When we’re all said and done you should have something like this:

inside the sphere

Excellent. Now when we back out of this nested Geo we should see just a single sphere. What?! Well, now we can set the Geo to instance – my favorite part.

small sphere

Let’s start by taking a look at the parameters of our Geo. Specifically, we want to look at the Instance page. Here we first need to turn on Instancing. Next we’re going to tell our Geo to look at the CHOP called sopto1. Finally we’ll set the TX, TY, and TZ parameters to correspond to the channels called tx, ty, and tz. If this seems like crazy talk, that’s okay – check out the picture below and it should make more sense:

instance page

We also want to head to the Render page of the Geo, and set this geo to use the material ./constant1 (this means: use the material inside of this geo called constant1 – ./ is a directory pointer indicating where to look for the thing in question, in this case a constant).

constant

Alright, now we can finally see some of our handy work – you should now see a sphere instanced at each of the vertices of the grid that we’re transforming with noise.

Spheres

Now let’s kick it up a notch by adding another Geo to our network.

last geo

We’re going to treat this Geo slightly differently. Inside let’s add an In SOP and a Phong Material. On our In SOP let’s make sure that the display and render flags are turned on, and for your phong choose a nice dark color – I’m choosing a deep crimson.

geo3

The In SOP allows us to pass in the geometry that we’ve already made, acting as a kind of short-cut for us. When we go back to our Geo we just need to make sure that we’ve set our Render material to be ./phong1 – like with our constant this is the pathway to and the name of the material we want to use.

geo3mat

Alright, now you should have a network that looks something like this:

complete geo network

Now we’re ready to move out of our Geo and get ready to render our scene. Zooming out of our Geo we should see a network that looks something like this:

work space

In order to render our scene we need to revisit what we talked about at the beginning of the post – we need to add a Camera, a Light, and a Render TOP. Let’s go ahead and add these to our network.

rendersetupGeo

What gives?! This doesn’t look right at all. Well, part of what’s happening is that we don’t have our camera and light positioned correctly to render the scene. To make this easier to understand, let’s change up our work space so we can use the geometry viewer (one of my favorite tools). Let’s start by dividing the workspace into two windows. We can do this by using the split work space icon in the menu bar:

menu bar

I like the vertical split, but you can choose whatever arrangement works for you. When you initially click on this split you’ll see two views of your current network location:

split step1

In order to see the geometry viewer we need to change the pane type selection for one of our windows. Clicking on the small drop down triangle will reveal a menu of network views. Let’s select Geometry Viewer from the list.

pane type

You should now see your network on one side, and the geometry from our Geo comp on the other side.

geo viewer

Now we’re cooking with gas. Alright, let’s make our lives just a little easier and change the scale of our light and camera to make them easier to see. Click on your light COMP; if your parameters window has disappeared you can bring it back by pressing "p" on your keyboard. In the scale parameter, let’s turn that up to 10, 10, 10.

light scale

Great, now we can see part of the reason that our scene isn’t rendering the way we want it to. Our light is currently set up as a point light that’s positioned at the edge of our geometry. Let’s make some changes to our light’s properties so we get something closer to what we want. Let’s start by changing the location of our light; I’m going to place mine at 0, 10, 0.

lightplacement

Now let’s take a look at the Light page of the parameters window. Here I’m going to change my light to a Cone, and alter the cone size, delta, and rolloff. Experiment with different settings here to find something that you like.

light properties

Before you get too much experimenting done, however, you’ll probably notice that the cone of our light isn’t pointing towards the surface of our geometry. We can fix this by heading back to the Xform page of the parameters window, and setting the rotation values to -90, 0, 0.

light ROT

With our light starting to take shape, let’s get our camera in order. Over at our camera, let’s make the same initial change we made to the light and turn the scale up to 10, 10, 10 – this is going to make it much easier for us to find our camera.

cameraScale

With the scale turned up we can see that our camera needs to be translated back and up, and rotated downwards, so it’s looking at the geometry. After a little bit of adjusting I think I like my camera at:

TX    0
TY    6.2
TZ    18.9
RX    -16.8
RY    0
RZ    0

cameraTrans

Boom! Alright, let’s close our geometry viewer and take a closer look at our Render TOP to see what we’ve made.

finalRender

Nice work. This is a good looking start, and now that you know how it’s made you can start to really have fun – camera placement, light placement, noise amplitude, you name it, go crazy, make something fun or weird, or just silly.

Shuffling Words Around | Isadora

About a month ago I was playing about in Isadora and discovered the Text/ure actor. Unlike some of the other text display actors, this one hides a secret. This actor lets you copy and paste in a whole block of text, which you can then display one line at a time. Why do that? Well, that’s a fine question, and at the time I didn’t have a good reason to use this technique, but it seemed interesting and I tucked it into the back of my mind. Fast forward a few months, and today on the Facebook group – Isadora User Group (London) – I see the following call for help:

Isadora_User_Group__London_

And that’s when I remembered the secret power of our friend the Text/ure actor. Taking Raphael’s case as an example let’s look at how we might solve this problem in Izzy.

First off we need to start by formatting our list of words. For the sake of simplicity I’m going to use a list of 10 words instead of 100 – the method is the same for any size list, but 10 will be easier for us to work with in this example. Start off by firing up your favorite plain text editing program. I’m going to use TextWrangler as my tool of choice on a Mac, if you’re on a PC I’d suggest looking into Notepad++.

In TextWrangler I’m going to make a quick list of words, making sure that there is a carriage return between each one – in other words I want each word to live on its own line. So far my sample text looks like this:

untitled_text

Boring, I know, but we’re still just getting started.

Next I’m going to open up Isadora and create a new patch. To my programming space I’m going to add the Text/ure actor:

Untitled

So far this looks like any other actor with inlets on the left, and outputs on the right. If we look closely, however, we’ll see a parameter called “line” that should catch our attention. Now for the magic. If we double click on the actor in the blue space to the right of our inlets, we suddenly get a pop up window where we can edit text.

Edit_Text_and_Untitled

Next let’s copy and paste our words into this pop up window. Once all of your text has been added, click "OK."

Edit_Text_and_Untitled 2

Great. Now we have our text loaded into the Text/ure actor, but we can’t see anything yet. Before we move on, let’s connect this actor to a projector and turn on a preview so we can get a sense of what’s happening. To do this start by adding a Projector actor, then connecting the video outlet of the Text/ure actor to the video inlet of the Projector.

Untitled 2

Next show stages – you can do this from the menu bar, or you can use the keyboard shortcut Command G. If you’re already connected to another display then your image should show up on your other display device. If you’d like to only see a preview window you can force preview with the keyboard shortcut Command-Shift F.

Untitled___Stage_1_and_Untitled

Alright, now we’re getting somewhere. If we want to change what text is showing up we change the line number on the Text/ure actor.

changing-text

Alright. So now to the question of shuffling through these different words. In Raphael’s original post, he was looking to not only be able to select different words, but also to have a shuffling method (and I’m going to assume that he doesn’t want to repeat). To crack this nut we’re going to use the shuffle actor, and some logic.

Let’s start by adding a shuffle actor to our patch, and let’s take a moment to look at how it works.

Untitled 3

Our Shuffle actor has a few parameters that are going to be especially important for us – min, max, shuffle, and next. Min, like the label says, is the lowest value in the shuffle stack; Max is the highest value. Shuffle will reset our counter and reshuffle our stack. The next trigger will give us the next number in the stack. On the outlet side of our actor we see Remaining and Value. Value is the shuffled number that we’re working with; Remaining is how many numbers are left. If we think of this as a deck of cards then we can start to imagine what’s happening here. On the left, shuffle is like actually shuffling the deck. Next is the same as dealing the next card. On the right, the Value would be the face value of the card dealt, while Remaining is how many cards are left in the deck.

Alright already, why is this important?! Well, it's important because once we get to the end of our shuffled stack we can't deal any more cards until we re-shuffle the deck. We can avoid this problem by adding a Comparator actor to our patch. The Comparator is a logical operator that compares two values, and then tells you when the result is a match (True) and when it isn't (False).

[screenshot: the Comparator actor]

To get our logic working the way we want, let's start by connecting the Shuffle's Remaining value to value2 of the Comparator, leaving value1 at 0 – that way the true trigger fires exactly when the deck runs out. Next we'll connect the true trigger from the Comparator back to the Shuffle inlet on the Shuffle actor.

[screenshot: the Comparator's true trigger wired back to the Shuffle inlet]

Great, now we’ve made a small feedback loop that automatically reshuffles our deck when we have used all of the values in our range. Now we can connect the Value outlet of the Shuffle Actor to the Line input of the Text/ure actor:

[screenshot: the Shuffle's Value outlet wired to the Text/ure line input]

There we have it. Now the logic of our Shuffle and Comparator will let us keep running through our list of words, which are in turn sent to our projector.
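If it helps to see this logic outside of Isadora, here's a rough Python sketch of what the Shuffle and Comparator are doing together – purely illustrative, since Isadora isn't running any of this code:

    import random

    class Shuffler:
        """Mimics Isadora's Shuffle actor: deals each value once, then reshuffles."""
        def __init__(self, minimum, maximum):
            self.minimum = minimum
            self.maximum = maximum
            self.shuffle()

        def shuffle(self):
            # the 'shuffle' trigger: rebuild the deck and shuffle it
            self.deck = list(range(self.minimum, self.maximum + 1))
            random.shuffle(self.deck)

        def next(self):
            # the Comparator's job: when nothing remains, reshuffle first
            if len(self.deck) == 0:
                self.shuffle()
            return self.deck.pop()  # the Value outlet; len(self.deck) is Remaining

    shuffler = Shuffler(1, 10)
    for _ in range(15):
        print(shuffler.next())  # runs through all ten lines, reshuffles, keeps going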

[animation: the shuffled words displayed in sequence]

OSC Remote Control | TouchDesigner

There's a lot to love about the internet, really. But I think one of my favorite things is how it connects people, and how it flattens old hierarchies (not really, but let me wax idealistic for the sake of this intro). In starting to program with TouchDesigner, I did the thing that any smart n00b would do – I joined the forum. The TouchDesigner forum is a great place to ask questions, find answers, learn from some of the best, and to offer help. We've all been stuck on a problem, and a commons like this one is a great place to ask questions and keep tabs on what others are doing. To that end I shared a technique for sending and receiving OSC data with TouchDesigner back in October of 2013. I posted this to the forum as well, because it happened to be something that I figured others might want to know more about. My post was a simple example, but often it's the simple examples that help move towards complex projects.

As it turns out, someone else was fighting the same battle and had some questions about how to make some headway – specifically, they wanted to look at how to create an interface that could be controlled remotely with TouchOSC or from the TouchDesigner control panel itself. Ideally, each interface's changes would be reflected in the other – changes on a smartphone would show up in the TouchDesigner control panel, and vice versa. I caught the first part of the exchange, and then I got swallowed by the theatre. First there was The Fall of the House of Escher, then Before You Ruin It took over my life, and then I spent almost a month solid in Wonder Dome. Long story short, I missed responding to a question, and finally made up for my bad karma by responding, even if belatedly. It then occurred to me that I might as well write down the process of solving this problem. If you want to see the whole exchange you can read the thread here.

Enough jibber-jabber, let’s start programming.

For the sake of our sanity, I'm going to focus on just working with two sliders and two toggle buttons. The concepts we cover here can then be applied to any other kind of TouchOSC widget, but let's start small first. One more disclaimer: this approach creates a messy network. If we took a little more time we could clean it up, but for now we're going to let it be a little bit sloppy – as much as it pains me to do so.

First things first, if you're new to using TouchOSC you should take some time to get used to the basics and start by reading through this overview on using TouchOSC. Open Sound Control (OSC) messages are messages that are sent over a network. OSC can be used locally on a single computer to allow programs to communicate with one another, and it can also be used between multiple machines so that they can talk to each other. Interface tools like TouchOSC allow users to create custom interfaces that control some aspect of a program. That's a very simplistic way of looking at OSC, but it's a good start. The important takeaway is that when we use OSC we're actually relying on network protocols (typically UDP) to facilitate the communication between computers.
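If you're curious what one of these messages looks like in code, here's a tiny sketch using the third-party python-osc library – nothing you need for TouchOSC or TouchDesigner, and the address and port here are made up:

    from pythonosc.udp_client import SimpleUDPClient

    # an OSC message is just an address pattern plus values, sent over UDP
    client = SimpleUDPClient('192.168.1.5', 10000)  # receiver's IP and port
    client.send_message('/1/fader1', 0.75)          # address, then the value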

To get started I selected a simple interface on an iPod Touch that came with TouchOSC. Again, I wanted something with two toggles and two sliders to serve as our example. We can set up our mobile device by starting TouchOSC and hitting the circular option / configuration button in the corner.

Once we do that we’ll see a screen with a few different options. We want to select the OSC connection tab.

[screenshot: TouchOSC options screen]

From here we need to configure a few settings. First off you'll want to add the IP address of your computer into the "Host" field. To find your computer's IP address you can use the ipconfig command on Windows (ifconfig on a Mac) – if this sounds like another language, check out this YouTube video to see what I'm talking about. I also want to take note of the IP address of the device – this is the "Local IP address." Finally, take a close look at the outgoing and incoming ports; I'll need these numbers in TouchDesigner to make sure that I can talk and listen to this device.

[screenshot: TouchOSC OSC connection settings]

Alright, now that we know a few things about our mobile device we can head back to TouchDesigner.

First off, let’s take a look at how to listen to the incoming data. In a new network at an OSC In CHOP.

[screenshot: the OSC In CHOP]

In the parameters box let's make sure that we're listening on the same port that we're broadcasting on – in my case that's port 10000. Now you should be able to start hitting buttons and moving sliders to see new channels appear in your CHOP. Here it's important to take a closer look at the naming convention for our channels. Let's look at 1/fader1 first. TouchOSC has a great utility for creating your layouts, and if you let it do the naming for you this is the kind of format that you'll see. The semantics are page/widgetNameWidgetNumber. So by looking at 1/fader1 we can read this to mean that it's on page one, it's a fader, and its number (the order it was created) is one. This naming convention is important to take note of, and will save you a lot of headaches if you take some time to really wrap your head around how these widgets are named.
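If you ever need to pull those pieces apart in a script, the convention is regular enough to parse. A quick hypothetical Python sketch:

    import re

    def parse_channel(name):
        """Split a TouchOSC-style channel name like '1/fader1' into its parts."""
        match = re.fullmatch(r'(\d+)/([A-Za-z]+)(\d+)', name)
        if match is None:
            return None
        page, widget, number = match.groups()
        return {'page': int(page), 'widget': widget, 'number': int(number)}

    print(parse_channel('1/fader1'))   # {'page': 1, 'widget': 'fader', 'number': 1}
    print(parse_channel('2/toggle3'))  # {'page': 2, 'widget': 'toggle', 'number': 3}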

Before we start building an interface to control, let's take a moment to get a few more ducks in a row. I'm going to connect my OSC In CHOP to four different Select CHOPs – each control is going to be routed to a button or slider, and I want to make sure that I'm only dealing with one channel at a time.

[screenshot: four Select CHOPs]

For my own sanity I've named each of the selects with a name that corresponds to the channel it's carrying. You can choose any naming convention that you'd like, but definitely choose a naming convention. This kind of patching can get messy quickly, and a solid method for naming your operators will serve you well.

Before we move on let’s take a closer look at one of the selects to see how we can pull out a particular channel.

[screenshot: the fader1 Select CHOP's parameters]

You’ll notice that in the Channel Names I’ve used the name *fader1 rather than 1/fader1. What gives?! Either of those names is going to give me the same result in this case. I’ve elected to use the asterisk modifier to save myself some time and because I’m only using page one of this particular interface. What does the asterisk modifier do? I’m so glad you asked – this particular modifier will give results that match anything after the asterisk. If, for example, I had 1/fader1, 2/fader1, and 3/toggle1 as incoming channels, the asterisk would pull out the 1/fader1 and 2/fader1 channels. Naming and patterning isn’t important for this particular example, but it’s a technique to keep your back pocket for a rainy day.

Okay, now that we’ve got our OSC In data ready, let’s build a quick interface. In this instance I’m going to use two vertical sliders from the TUIK presets in the pallet browser. You can find them here:

[screenshot: TUIK sliders in the palette browser]

I’m also going to add two Button COMPs  and a Container COMP to my network.

[screenshot: sliders, buttons, and a Container COMP in the network]

Before we move forward, I need to make a few quick changes to my buttons. You'll notice that the sliders both have CHOP inlets on their left-hand side, but my buttons do not. This means that my sliders are already set up to receive a channel stream to change them. My buttons aren't ready yet, so let's take a look at the changes we need to make. Let's start by diving into our button.

[screenshot: inside the Button COMP]

If this is your first time taking a look inside of a button COMP, take a moment to read through a quick overview about working with buttons in TouchDesigner. We're going to want to add an In CHOP to this component, and we also want the changes from the Touch interface to drive how our button's color changes. Here's what that's going to look like:

[screenshot: the modified button network]

Here I've added an In CHOP that feeds a Math CHOP (which I've renamed "i"), which is finally connected to our Out CHOP. So why a Math, and why rename it? If you look closely at the Text TOP you'll see that it's driven by several expressions allowing it to change color based on its active state – part of what drives those expressions is the CHOP originally called "i". Our Math CHOP is set to add our In CHOP and the "ii" CHOP (formerly just "i") together. By renaming the operators rather than re-writing the scripts, I've saved myself some time.

With our buttons and sliders finally ready we can start connecting our interface elements. There are lots of ways to build interfaces in TouchDesigner, but today I'm just going to use the ability to wire components vertically. First I'm going to lay out my buttons and sliders in an approximate location that matches my TouchOSC layout, just to help me get organized initially. Next I'm going to connect them through their vertical inlets to my Container COMP. In the end, if you're following along, it should look something like this:

[screenshot: components wired vertically to the Container COMP]

Finally, you’ll want to take some time to adjust the placement of your buttons and sliders, as well as adjusting their color or other elements of their appearance. My final set up looks like this:

[screenshot: the finished control panel layout]

If you’re still with me, this is where the real magic starts to happen. First up we’re going to connect the our corresponding OSC In values to our control panel elements that we want to control. Fader 1 to fader 1, toggle 1 to toggle 1 and so on. Next we’re going to connect all four of our control panel elements to a single Merge CHOP.

[screenshot: panel elements merged into a single Merge CHOP]

Next we’ll need to do a little renaming. To make things easier I’m going to use a Rename CHOP and a Constant CHOP. Here our Constant CHOP holds all of the names that we want to apply to the channels that are coming into our Rename CHOP. Here’s where all of that funny business about naming our channels becomes important. To make sure that I’m feeding back data to TouchOSC in a way that properly associates changes to my sliders I need to follow the naming conventions exactly the way they’re coming into TouchDesigner. 1/fader1 that’s since become v1 needs to have it’s name changed back to 1/fader1. You can see what I mean by taking a closer look at the network below:

[screenshot: the Rename and Constant CHOPs]

Last but not least we're going to connect our Rename CHOP to an OSC Out CHOP. When we set up our OSC Out we need to know the IP address of the device that we want to communicate with, and the port that we're broadcasting to. I also like to change my OSC Out to send samples, and to turn off "Send every cook." Sending with every cook is going to create a lot more network traffic, and while that's not an issue for TouchOSC, if you're working with someone using MaxMSP this trick is going to make them much happier. Here's what our OSC Out operator should look like:

[screenshot: the OSC Out CHOP's parameters]
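As an aside, here's what that return trip looks like stripped down to code – a sketch using the third-party python-osc library, where the IP, port, and values are stand-ins for whatever your TouchOSC connection screen reports:

    from pythonosc.udp_client import SimpleUDPClient

    # the mobile device's IP and its incoming port - placeholders here
    device = SimpleUDPClient('192.168.1.12', 9000)

    # the names must match TouchOSC's convention exactly, which is why the
    # Rename CHOP matters: 'v1' means nothing to TouchOSC, '/1/fader1' does
    device.send_message('/1/fader1', 0.5)
    device.send_message('/1/toggle1', 1.0)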

Whew! Alright gang, at this point you should be ready to start making the magic happen. If you've got everything set up correctly you should now be able to drive your control panel in TouchDesigner either from the panel we created or from a TouchOSC setup. As a bonus (and the reason we did all of this hard work) you should also be able to see your changes in either environment reflected in the other.

[animation: the two interfaces mirroring each other]

Inside Wonder Dome | TouchDesigner

[photo: first projection test grid]

In approaching some of the many challenges of Wonder Dome, one of the most pressing and intimidating was how to program media playback for a show with a constant media presence. One of the challenges we had embraced as a team for this project was using Derivative's TouchDesigner as our primary programming environment for show control. TouchDesigner, like most programming environments, has very few limitations in terms of what you can make and do, but it also requires that you know what it is you want to make and do. Another challenge was the fact that while our team was full of bright and talented designers, I was the person with the broadest TouchDesigner experience.

One of the hard conversations that Dan and I had during a planning meeting centered around our choices of programming environments and approaches for Wonder Dome. I told Dan that I was concerned that I would end up building an interface / patch that no one else knew how to use, fix, or program. This is one of the central challenges of a media designer – how do you make sure that you're building something that can be used / operated by another person? I wish there were an easy answer to this question, but sadly this is one situation that doesn't have simple answers. The solution we came to was for me to do the programming and development – start to finish. For a larger implementation I think we could have developed an approach that divided some of the workload, but for this project there just wasn't enough time for me to both teach the other designers how to use / program in TouchDesigner and to do the programming needed to ensure that we could run the show. Dan pointed out in his thesis paper on this project that our timeline shook out to just 26 days from when we started building the content of the show until we opened.

The question that follows, then, is – how did we do it? How did we manage to pull off this herculean feat in less than a month, what did we learn along the way, and what approach, at the end of the process, gave us results that we could actually use?

Organization

Make a plan and stay organized. I really can't emphasize this enough. Wonder Dome's process lived and died by our organization as a team, and as individuals. One of the many hurdles that I approached was what our cuing system needed to be, and how it was going to relate to the script. With three people working on media, our cue sheet was a bit of a disaster at times. This meant that in our first days working together we weren't always on the same page about which cue corresponded to which moment in the play. We also knew that we were going to run into times when we needed to cut cues, re-arrange them, or re-order them. For a 90 minute show with 20 media cues this is a hassle, but not an impossibility. Our 25 minute long kids' show had, at the beginning, over 90 media cues.

In beginning to think about how to face this task I needed an approach that could be flexible and responsive – fast fast fast. The solution I reached for was a replicator to build a large portion of the interface. Replicators can be a little intimidating to use, but they are easily one of the most powerful tools in TouchDesigner. The principle is that you set up a model operator that you'd like subsequent copies to look and behave like. You then use a table to drive the copies that you make – one copy per row in the table. If you change the table, you've changed / remade your operators. In the same way, if you change your template operator – this is called your "Master Operator" – then you change all of the operators at once. For those reasons alone it's easy to see how truly powerful this component is, but it also means that a change in your table might render your control panel suddenly un-usable.
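To make that a little more concrete, here's a sketch of what a Replicator COMP's callbacks DAT might hold for a button-building setup like this one – the layout spacing and the 'Cue Number' column are illustrative, not lifted from the show file:

    # callbacks DAT for a Replicator COMP - onReplicate runs whenever the
    # template table changes and the replicants get rebuilt
    def onReplicate(comp, allOps, newOps, template, master):
        for i, replicant in enumerate(allOps):
            # stack the buttons in a tidy column in the network
            replicant.nodeX = 0
            replicant.nodeY = -i * 120
            # stash the cue name from the table (row 0 is the header row)
            replicant.comment = template[i + 1, 'Cue Number'].val
        return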

[screenshot: button replicator set-up]

Getting started, I began by formatting my cue sheet in a way that made the most sense for TouchDesigner. This is a great time to practice your Excel skills and to use whatever spreadsheet application / service you prefer to do as much of the formatting as possible for you. In my case I used the following as my header rows:

  • Cue Number – what was the number / name for the cue. Specifically, this is what the stage manager was calling for over headset. This is also the same name / number for the cue in the media designer's script. When anyone on the team was talking about M35, I wanted to make sure that we were all talking about the same thing.
  • Button Type – Different cues sometimes need different kinds of buttons. Rather than going through each button and making changes during tech, I wanted to be able to update the master cue sheet for the replicator and have the specified properties show up in the button. Do I want a momentary button, a toggle, a toggle down, etc.? These things mattered, and by putting these details in the master table it was one less adjustment that I needed to make by hand.
  • Puppet – Wonder Dome had several different types of cues. Two classifications came to make a huge difference for us during the tech process: puppet entrances / exits, and puppet movements. Ultimately we started to treat puppet entrances and exits as a different classification of cue (rather than letters and numbers we just called for "Leo On" and "Leo Off" – this simplified the process of using digital puppets in a huge way for us), but we still had puppet movements that were cued in TouchDesigner. During tech we quickly found out that being able to differentiate between which cues were puppet movements and which were not was very important to us. By adding this column I could make sure that these buttons were a different color – and therefore differentiated from other types of cues.
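To give a flavor of what the master table held, here's a sketch that writes a miniature version of it as a CSV – the rows are invented, but the headers are the ones above:

    import csv

    # a tiny, made-up slice of a cue sheet with the headers described above
    cues = [
        {'Cue Number': 'M35', 'Button Type': 'momentary', 'Puppet': '0'},
        {'Cue Number': 'M36', 'Button Type': 'toggle',    'Puppet': '0'},
        {'Cue Number': 'M37', 'Button Type': 'momentary', 'Puppet': '1'},
    ]

    with open('cue_sheet.csv', 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=['Cue Number', 'Button Type', 'Puppet'])
        writer.writeheader()
        writer.writerows(cues)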

Here I also took a programming precaution. I knew that invariably I would want to make changes to the table, but might not want those changes implemented immediately – like in the middle of a run, for example. To solve this problem I used a simple copy script to make sure that I could copy the changed table to an active table when we were in a position to make changes to the show. By the end of the process I was probably fast enough to make changes on the fly and have them correctly formatted, but at the beginning of the process I wasn't sure that would be the case. The last thing I wanted to do was to break the show control system, and then need 25 minutes to troubleshoot the misplacement of a 1 or a 0. At the end of the day this just made me feel better – even if we didn't strictly need it, I knew I wasn't going to break anything while thinking on my feet.
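The copy script itself can be tiny. In TouchDesigner, a DAT's copy() method pulls in the contents of another DAT, so something like this one-liner (with hypothetical operator names) does the job:

    # snapshot the edited cue table into the table the show actually reads;
    # 'cues_edit' and 'cues_active' are placeholder names
    op('cues_active').copy(op('cues_edit'))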

[animation: a replicator in action]

Above you can see a replicator in action – looking at an example like this helps communicate just how useful this approach was. Method, like organization, is just a way to ensure that you're working in a way that's meaningful and thoughtful. I'm sure there are other methods that would have given us the same results, or even better ones, but this approach helped me quickly implement cue sheet changes in our show control environment. It also meant that we standardized our control system. With all of the buttons based on the same Master Operator, the interface had a clean and purposed look – staring down the barrel of a 25-show run, I wanted something that I didn't mind looking at.

Thinking more broadly when it comes to organization, beyond the use of replicators for making buttons I also took the approach that the show should be as modular and organized as possible. This meant using Base and Container COMPs to hold the various parts of the show. Communication to lighting and sound each had their own module, as did our puppets. For the sake of performance I also ended up placing each of the locations in its own Base. This had the added bonus of allowing for some scripting to turn cooking on and off for environments we were or weren't using at any given point in the show. We had a beast of a media server, but system resources were still important to manage to ensure smooth performance.
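Those cooking scripts don't need to be fancy, either – in TouchDesigner every operator has an allowCooking flag you can set from Python. A sketch, with made-up component names:

    def set_location(active):
        """Cook only the Base that holds the current location."""
        locations = ['base_forest', 'base_ocean', 'base_castle']  # placeholder names
        for name in locations:
            op(name).allowCooking = (name == active)

    set_location('base_ocean')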

[screenshot: full network map]

If you want to learn more about replicators you can read through this post about getting started using them.

Show Control

Show control, however, is about more than just programming buttons. Driving Wonder Dome meant that we needed a few additional features at our fingertips during the show. Our show control system had two preview screens – one for the whole composite, and one for puppets only. One of the interesting features of working in a dome is how limited your vision becomes. The immersive quality of the projection swallows observers, which is downright awesome. This also means that it’s difficult to see where all of the media is at any given point. This is one of the reasons that we needed a solid preview monitor – just to be able to see the whole composition in one place. We also needed to be able to see the puppets separately at times – partially to locate them in space, but also to be able to understand what they looked like before being deformed and mapped onto the curved surface of the dome.

[screenshot: the show control panel]

The central panel of our control system had our cues, our puppet actions, our preview monitors, and a performance monitor. During the show there were a number of moments when a dome transformation happened nearly simultaneously with a puppet entering or exiting. While originally I was trying to drive all of this with a single mouse, I quickly abandoned that idea. Instead I created a simple TouchOSC interface to use on an iPad with my other hand. Taking a two-handed approach to driving the media added some challenge, but paid for itself tenfold with a bit of practice. This additional control panel also allowed me to drive the glitch effects that were a part of the show. Finally, it made for an easy place to reset many of the parameters of various scenes. In the change-over between shows many elements needed to be reset, and by assigning a button on my second interface to this task I was able to move through the restore process much faster.

[photo: the iPad running TouchOSC]

If you’d like to learn more about using TouchOSC with TouchDesigner there a few pages that you might take a glance at here:

TouchOSC | Serious Show Control
Sending and Receiving OSC Values
Visualizing OSC Data

Cues

Beyond creating a system for interacting with TouchDesigner, a big question for me was how to actually think about the process of triggering changes within my network. Like so many things, this seems self-evident on the face of it – this button will do that thing. But when you start to address the question of "how," the process becomes a little more complicated. Given the unstable nature of our cue sheet, I knew that I needed a name-based approach that I could call from a central location. Similar to my module-based approach for building the master cue sheet, I used the same idea when building a master reference sheet.

With a little push and guidance from the fabulous Mary Franck, I used an Evaluate DAT to report out the state of all of the buttons from the control panel, and to name them in a way that allowed for easy calling – specifically, I made sure that each cue kept its letter-and-number name from our cue sheet.
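An Evaluate DAT handles this with one expression per row, but the same idea written as a script might look like this – gather every cue button's panel state under its cue-sheet name (the paths here are placeholders):

    # build a dict of cue name -> current button state, keyed the way the
    # cue sheet (and the stage manager) names cues
    buttons = op('ctrl/buttons').findChildren(type=buttonCOMP)
    states = {b.name: b.panel.state.val for b in buttons}
    print(states.get('M35'))  # the current state of cue M35's button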

[screenshot: the master reference table]

 

On the face of it this seems like an awful lot of scripts to write – it is, but like all things there are easier and harder ways to solve any problem. My approach here was to let Google Spreadsheets do some of the work for me. Since the cue sheet was already a spreadsheet, writing some simple formulas to do the formatting was a quick and easy way to tackle this. It also meant that with a little bit of planning, my tables for TouchDesigner were formatted quickly and easily.

[screenshot: spreadsheet formulas formatting the tables]

It was also here that I settled on using a series of Execute DATs to drive the cooking states of the various modules to control our playback performance. I think these DATs were some of the hardest for me to wrap my head around – partially because this involved a lot of careful monitoring of our system's overall performance, and the decisions and stacking necessary to ensure that we were seeing smooth video as frequently as possible. While this certainly felt like a headache, by the time the show was running we rarely dropped below 28 frames per second.

[screenshot: Execute DATs toggling cooking on and off]

If you want to read a little more about some of the DAT work that went into Wonder Dome you can start here:

Evaluate DAT Magic
These are the DATs You’ve Been Looking For

Communication

All of the designers on the Wonder Dome team had wrestled with the challenges of communication between departments when it comes to making magic happen in the theatre. To this end, Adam, Steve, and I set out from the beginning to make sure that we had a system for lights, media, and sound to talk with one another without any headache. What kinds of data did we need to share? To create as seamless a world as possible, we wanted any data that might be relevant to another department to be easily accessible. This looked like different things for each of us, but talking about it from the beginning ensured that we built networks and modules that could easily communicate.

[screenshot: the communication network]

In talking with lighting, one of our thoughts was about passing information about the color of the environment we found ourselves in at any given point. To achieve this I cropped the render down to a representative area, averaged the pixel values in that area, converted that texture data to channel data, and streamed the RGBA values to lighting over OSC. We also put a simple crossfader in the stream for the times when we wanted the lighting in the scene to be different from the average of the render.
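In TouchDesigner this was a chain of TOPs ending in a CHOP-to-OSC stream, but the underlying math is simple enough to show in plain Python – a sketch with numpy and python-osc, where the address, IP, and port are invented:

    import numpy as np
    from pythonosc.udp_client import SimpleUDPClient

    lighting = SimpleUDPClient('192.168.1.20', 8000)  # lighting's IP / port: placeholders

    def send_color(pixels, manual_rgba, fade):
        """Average a cropped RGBA region and crossfade toward a manual color.
        fade = 0.0 sends the pure scene average, 1.0 the manual override."""
        average = pixels.reshape(-1, 4).mean(axis=0)
        out = average * (1.0 - fade) + np.asarray(manual_rgba) * fade
        lighting.send_message('/lighting/rgba', [float(v) for v in out])

    # a fake 4x4 warm-orange crop, just to exercise the function
    frame = np.tile(np.array([1.0, 0.5, 0.2, 1.0]), (4, 4, 1))
    send_color(frame, manual_rgba=[0.0, 0.0, 1.0, 1.0], fade=0.0)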

[photo: Adam's lighting setup]

This technique was hardly revolutionary, but it created very powerful transitions in the show and allowed media to drive lighting for the general washes that filled the space. This had the added benefit of offloading some programming responsibility from lighting. While I had done a lot of work in the past coordinating with sound, I hadn't done much work coordinating with lights. In fact, this particular solution was one that we came up with one afternoon while we were asking "what if…" questions about various parts of the show. We knew this was possible, but we didn't expect to solve this problem so quickly or for it to be so immediately powerful. Through the end of the run we continued to get consistently positive audience response with this technique. Part of the reason this solution was so important was because Adam was busy building a control system that ultimately allowed him to control two moving lights with two Wacom tablets – keeping the wash lighting driven by media kept both of his hands free to operate the moving lights.

[screenshot: the lighting OSC stream]

The approach to working with sound was, of course, very different from working with lights. Knowing that we wanted spatialized sound for this show, Stephen Christensen built an incredible Max patch that allowed him to place sound anywhere he wanted in the dome. Part of our conversation from the beginning was making sure that media could send location data about puppets or assets – we wanted the voice of the puppeteers to always be able to follow the movement of the puppets across the dome. This meant creating an OSC stream for sound that carried the location of the puppets, as well as any other go or value changes for moments where sound and media needed to be paired together.

[screenshot: the sound OSC stream]

Communicating with sound wasn't just a one-way street, though. Every day the Wonder Dome had a 90 minute block of free time when festival visitors were allowed to explore the dome and interact with some of the technology outside the framework of the show. One of the components we built for this was a 3D environment that responded to sound, animating the color and distribution of objects based on the highs, mids, and lows of the music being played. Here sound did the high / mid / low processing on its end, and then passed me a stream of OSC messages. To get a smoother feel from the data, I ran it through a Lag CHOP before using it to drive any parameters in my network.
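Under the hood a Lag CHOP is a smoothing filter; the flavor of it in Python is a one-pole filter like this (a Lag CHOP's actual controls are lag times in seconds – the coefficient here is just illustrative):

    def smooth(values, coeff=0.2):
        """Each output steps a fraction of the way toward the new input;
        smaller coeff means more lag."""
        state = values[0]
        out = []
        for v in values:
            state += (v - state) * coeff
            out.append(state)
        return out

    print(smooth([0, 1, 1, 1, 0, 0]))  # hard jumps become ramps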

Components and Reuse

Perhaps the most important lesson to be learned from this project was the importance of developing solid reusable components. This, again, isn't anything revolutionary, but it is worth remembering whenever you start a new project. The components that you build to use and reuse can make or break your efficiency and the speed of your workflow. One example would be the tool we created for placing content on the dome. This simple tool for moving images and video around the dome was used time and again throughout the project, and if I hadn't taken the time early on to create something I intended to reuse, I would have spent a lot of time re-inventing the wheel every time we needed to solve that problem.

[screenshot: the content placement tool]

In addition to using this placement tool for various pieces of media in the show, this is also how we placed the puppets. During the development phase of this tool I thought we might want to be able to drive the placement of content from an iPad or another computer during tech. To make this easier, I embedded a mechanism in the tool to allow for easy control from multiple inputs. This meant that when we finally decided to adapt this tool for use with the puppets, we already had a method for changing their location during the show. There are, of course, limits to how far anyone can plan ahead on any project, but I would argue that taking the time to really think about what a component needs to do before developing it makes good sense. I also made use of local variables when working with components in order to make it easier to enable or disable various pieces of the tool.

[screenshot: the placement tool's inputs]

You can read more about some of this process here:

3D Solutions for a 2D World
Container Display

Documentation and Comments

[screenshot: a comment in the network]

I nearly forgot to mention one of the most critical parts of this process: documentation and commenting. If I hadn't commented my networks I would have been lost time after time. One of the most important practices to develop and maintain is good commenting. Whenever I was working on something that I couldn't understand immediately just by looking at it, I added a comment. I know that some programmers use the ability to attach comments to individual operators, but I haven't had as much success with that method. Personally, I find that inserting a Text DAT is the best way for me to comment. I typically write in a text editor using manual carriage returns. I also make sure that I date my comments, so if I make a change I can leave the initial comments and then append new information. I can't say enough about the importance of commenting – especially if you're working with another programmer. Several times during the process I would help lighting solve a problem, and good commenting helped ensure that I could communicate important details about what was happening in the network to the other programmer.

I think it’s also important to consider how you document your work. This blog often functions as my method of documentation. If I learning something that I want to hold onto, or something that I think will be useful to other programmers then I write it down. It doesn’t do me any good to solve the same problem over and over again – writing down your thoughts and process help you organize your approach. There have been several times when I find shortcuts or new efficiency in a process only when I’m writing about it – the act of taking it all a apart to see how the pieces connect make you question what you did the first time and if there’s a better way. At times it can certainly feel tedious, but I’ve also been served time and again by the ability to return to what I’ve written down.