Category Archives: Media Installations

TouchDesigner | Email | Volumetric Lights

dest.PNG

Hello Matthew,
I've been following your videos about TouchDesigner – great work! I really appreciate the content and learning resources you've made.

We are experimenting with TouchDesigner so that we can perhaps use it in our dynamic / interactive installations.

Recently I have been researching the use of TD with DMX/Art-Net lighting and have come to a certain problem which I just cannot solve easily. I've been looking all over the internet, read two books about TD, and still cannot figure this out. So I wanted to get in touch with you and perhaps ask for advice (if you have time, of course 🙂 )

Let me introduce the problem:

Imagine you have a 3D lighting sculpture, where there are DMX light sources all over a certain 3D space.

The lights are not uniformly spaced relative to each other (no cubic shape or anything like that); they are randomly placed in the space.

Now, I want to take a certain shape (for example a sphere) and map it as a lighting effect to the lighting fixtures. The sphere would, for example, have its radius increased / decreased over time, and the application should “map” which light source should light up when the sphere “crosses” it in space.

I would then somehow sample the color of the points and feed that information to a DMX CHOP after some other operations…

It’s kinda difficult to explain, but hopefully I got it right.. 🙂

Do you know of any tricks or components I could use, so that I could “blend” 3d geometry with points in space in order to control lighting?

I'm certainly able to work out how DMX works and all the other stuff; I just don't know how to achieve the effect in 3D.

(In 2D it would be really simple. For an LED screen, for example, it's pretty straightforward – I would just draw a circle or whatever on a TOP and then sample it.)

Thanks a lot,
I would appreciate any tips or advice, really.. 🙂

Best regards,


Great question!

A sphere is a pretty easy place to start, and there are a few ways we can tackle this.

The big picture idea is to sort out how you can compute the distance from your grid points to the center of your sphere. By comparing this distance with the radius of your sphere we can then determine if a point is inside or outside of that object.

We could do this in SOP space and use a group SOP – this is the most straightforward to visualize, but also the least efficient – the grouping and transformation operations on SOPs are pretty expensive, and while this is a cool technique, you bottleneck pretty quickly with this approach.

To do this in CHOPs, what we need is to first compute the difference between our grid points and the center of our sphere – we can do this with a math CHOP set to subtract. From there we'll use another math CHOP to compute the length of our vector. In essence, this tells us how far away any given point is from the center of our sphere. From here we have a few options – we might use a delete CHOP to remove samples that are outside of our radius, or we might use a logic CHOP to tell us if we're inside or outside of our sphere.
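
If it helps to see the CHOP math spelled out, here's a minimal Python sketch of the same idea – the fixture positions, sphere center, and radius here are invented for illustration:

import math

# hypothetical fixture positions and an animated sphere
points = [(0.0, 0.0, 0.0), (0.5, 0.2, -0.3), (1.2, 0.8, 0.4)]
sphere_center = (0.5, 0.5, 0.0)
sphere_radius = 0.75

for x, y, z in points:
    # subtract the sphere center from the point (the math CHOP set to subtract)
    dx, dy, dz = x - sphere_center[0], y - sphere_center[1], z - sphere_center[2]

    # length of the resulting vector (the second math CHOP)
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)

    # inside / outside test (the logic CHOP)
    inside = 1 if dist <= sphere_radius else 0
    print(dist, inside)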

From there we should be able to pretty quickly see results.

2D-concept.gif

 

3D-concept.gif

Attached is a set of examples made in 099:

  • base_SOPs – this illustrates how this works in SOP space using groups
  • base_concept – here you can see how the idea works out with just a flat regular distribution of points. It’s easier to really pull apart the mechanics of this idea starting with a regular distribution first as it’s much easier to debug.
  • base_volume – the same ideas but applied to a 3D volume.
  • base_random – here you can see this process applied to a pseudo-random distribution of points. This is almost the same network as we looked at in base_concept, with a few adjustments to compensate for the different point density.

base_dist_test

 

presets and cue building – a beyond basics checklist | TouchDesigner 099

from the facebook help group

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

https://www.facebook.com/groups/touchdesignerhelp/permalink/835733779925957/

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

  • Some of the suggestions below come from experience building and working on distributed systems, some from single-server systems. Sometimes those ideas play well together, and sometimes they don't. Your mileage may vary here, so like any general advice, please think through the implications of your decisions before committing to an idea to implement.
  • The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit. 
  • I'll be wrong about some things. The scope of anyone's knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don't work, or that something I said wouldn't work actually does. As with all advice, it's okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what's the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you're up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… it doesn't matter where you do it, just give yourself an outline to follow – otherwise you'll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren't always sexy, but they're more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions, and then take a look at the sketch after the list:

  • What existing tools do you like – what’s their data structure / solution?
  • How is your data organized – arrays, dictionaries, etc.
  • Do you have a readme to refer back to when you extend your project in the future?
  • Do you have a way to add entries?
  • Do you have a way to recall entries?
  • Do you have a way to update entries?
  • Do you have a way to copy entries?
  • Do you have a validation process in-line to ensure your entries are valid?
  • Do you have a means of externalizing your cues and other config data?
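
For what it's worth, here's a minimal sketch of what that checklist might look like in practice – a Python cue store with add / recall / update / copy / validate / externalize functions. The keys in REQUIRED_KEYS are stand-ins for whatever your project actually needs:

import json

cues = {}
REQUIRED_KEYS = {'tox', 'transition', 'duration'}  # stand-in schema

def validate(entry):
    # in-line validation - make sure an entry carries every key we expect
    return REQUIRED_KEYS.issubset(entry.keys())

def add_cue(name, entry):
    if not validate(entry):
        raise ValueError('invalid cue: {}'.format(name))
    cues[name] = dict(entry)

def recall_cue(name):
    return cues.get(name)

def update_cue(name, **changes):
    cues[name].update(changes)

def copy_cue(source, target):
    cues[target] = dict(cues[source])

def externalize(path):
    # write the cue data to disk so it lives outside the project file
    with open(path, 'w') as f:
        json.dump(cues, f, indent=4)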

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you're always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you're driving your component in relationship to time provided by your controller.
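
To make that concrete, here's a hedged sketch of the idea – a controller streams a single channel of elapsed seconds into the TOX, and the TOX derives its own local loop position from it instead of keeping its own clock. The In CHOP, channel name, and clip length are all invented for the example:

# inside a TOX that expects time as an input rather than running its own clock
master_time = op('in1')['master_time'].eval()  # seconds streamed from the controller

clip_length = 30.0  # this module's loop length in seconds - a stand-in value

# every machine that receives the same master_time lands on the same position
local_pos = master_time % clip_length
normalized = local_pos / clip_length  # 0-1, handy for driving parameters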

How you work with time in your TOXes, and in your project in general, deserves careful thought. Whatever you decide in regards to time, just make sure it's a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post-process effects? There are several different approaches you might use depending on what you're really after here, so it's a good start to really dig into what you're expecting your project to accomplish. If you're just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.
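
One way to keep yourself honest here is a quick audit script. This is only a sketch – it assumes the cookTime member (milliseconds of the last measured cook) and findChildren(), and simply flags children that are over the budget described above:

# report any children of a target COMP that cooked longer than our budget
budget = 0.05  # ms
target = op('base1')  # stand-in path to the TOX under inspection

for child in target.findChildren(depth=10):
    if child.cookTime > budget:
        print('{} cooked in {} ms'.format(child.path, child.cookTime))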

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are up and downsides to using this method so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you're being careful with how you're driving their parameters. Python offers an amazing, extensible scripting language for Touch, but it's worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you'll find a large array of movie file in TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

icon_library.PNG

icon_library_example.PNG

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of a dependency – a given visuals TOX wouldn't operate correctly in a network that didn't have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that's all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven't started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you're going to find. In the past I've used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it's on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it's turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
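
As a rough sketch of that pattern (the op names here are invented, and your setup / teardown steps will differ), a module extension with a Play(state) method might look like:

class Module:

    def Play(self, state):
        # state is 1 when the module is activated, 0 when it's deactivated
        if state:
            # set up - un-bypass the heavy ops and reset our animation
            op('render1').bypass = False
            op('speed1').par.resetpulse.pulse()
        else:
            # clean up - bypass everything so the module stops cooking
            op('render1').bypass = True
        return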

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That's all well and good Matt, but I don't get it – why should my TOXes all have a fixed number of custom pars? Let's consider building a data structure for cues. Let's say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:
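
A hedged sketch of that mess, with invented tox names and pars – notice that every entry carries its own par names and its own par count:

cues = {
    'cue1': {
        'tox': 'sphereNoise.tox',
        'pars': {'Noiseamp': 0.8, 'Rotspeed': 20, 'Color': 0.5}
    },
    'cue2': {
        'tox': 'particleField.tox',
        'pars': {'Count': 4000, 'Wind': 0.2, 'Gravity': -9.8, 'Seed': 7}
    },
    'cue3': {
        'tox': 'feedbackLoop.tox',
        'pars': {'Decay': 0.95, 'Blur': 2}
    }
}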

That's a bummer. Looking at this we can tell right away that there might be problems brewing at the Circle K – what happens if we mess up our tox loading / targeting and our custom pars can't get assigned? In this set-up we'll just fail during execution and get an error… and our TOX won't load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a name space to the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:
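
Again a sketch with invented names – the same cues, but now every TOX answers to one shared par set:

cues = {
    'cue1': {
        'tox': 'sphereNoise.tox',
        'pars': {'Float0': 0.8, 'Float1': 20, 'Float2': 0.5, 'Toggle0': 1}
    },
    'cue2': {
        'tox': 'particleField.tox',
        'pars': {'Float0': 0.2, 'Float1': -9.8, 'Float2': 7, 'Toggle0': 0}
    },
    'cue3': {
        'tox': 'feedbackLoop.tox',
        'pars': {'Float0': 0.95, 'Float1': 2, 'Float2': 0, 'Toggle0': 1}
    }
}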

Okay, so that's prettier… So what? If we look back at our lesson on dictionary for loops we'll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially we're able to store the par name as the key and the target value as the value in our dictionary, and we're just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
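
For what it's worth, the loading loop that falls out of that uniform structure stays pleasantly small. A sketch, assuming the dictionary above and a loader COMP whose name is invented here:

def load_cue(name):
    cue = cues[name]
    target = op('loader')  # stand-in for the COMP your tox loads into

    target.par.externaltox = cue['tox']
    target.par.reinitnet.pulse()

    # par name as key, target value as value - one loop covers every module
    for par_name, value in cue['pars'].items():
        setattr(target.par, par_name, value)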

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you'll never drop below 60, it's better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black when your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated as well, or point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

Richard's killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I'd always suggest keeping the UI on another machine or in a separate instance. It's handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc. in. I've been very careful to take care of the little details in the UI too, such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that's a slider; sometimes even they don't need them.

I'm with Richard 100% here on all fronts. That said, be mindful of why and when you're splitting up your processes. It might be tempting to do all of your video handling in one process that passes to another process just for rendering 3D, before going to a third process for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you're doing, you need a strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There's a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

  • “What problem am I solving by adding this level of additional complexity?”
  • “Is there another way to solve this problem without an additional process?”
  • “What are the possible problems / issues this might cause?”
  • “Can I test this in a small way before re-factoring the whole project?”

Don't Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process, it's important to know what you're loading at start, and what's only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren't going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they're ready to be called without any frame drops. Or you might think about a way to automate your start up so you can test to make sure you have all your assets (especially if your dev computer isn't the same as your performance / installation machine).
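
A sketch of that force-cook idea, assuming a shared assets COMP full of TOPs (the path here is invented):

def preload_assets():
    # force cook every TOP in our shared assets bin so first use is free
    for asset in op('/project1/assets').findChildren(type=TOP):
        asset.cook(force=True)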

Logging and Errors

It's not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it'll be important to see where things went wrong. Sometimes the default logging methods aren't enough, or they happen too fast. A good logging methodology and format can help with that. You're welcome to make your own; you're also welcome to use and modify the one I made.
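
If you do roll your own, it doesn't need to be fancy to be useful. A minimal sketch – one timestamped line per event, appended to a file on disk; the path and format are just placeholders:

import datetime

LOG_PATH = 'project_log.txt'  # stand-in path

def log(message, level='INFO'):
    # timestamp | level | message - one line per event so it's easy to grep
    stamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    with open(LOG_PATH, 'a') as f:
        f.write('{} | {} | {}\n'.format(stamp, level, message))

log('tox loaded: sphereNoise.tox')
log('missing asset: bg.mov', level='ERROR')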

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it's at. Simple, complete proof-of-concept tests that aren't baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren't always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

TouchDesigner | Understanding Extensions

genGeoClass

So you've made a killer component that you love using, but you suddenly find yourself wondering how best to re-use it in future projects. You could make a killer control panel for it, or create a more generalized method for passing in values with CHOPs or DATs. You could just resign yourself to some more complex scripting – reaching deep into your component to set parameters one at a time. You could hard code it; you'll probably be making some job-specific changes to your custom component anyway, so what's a little more hard coding? The 50000 series now features custom parameters, or you could use variables, or storage. Any one of these options might be right for your component, or maybe they're just not quite right. Maybe what you really need is a little better reach with Python, but without as much head-scratching specificity. If you find yourself feeling this way, then extensions are about to make your TouchDesigner programming life so, so much better.

Using extensions requires a bit of leg work on your part as a programmer; it also means that you'll want to do this for components that you'll find yourself reusing time and again – after all, if you're going to take some time to really think about how you want a reusable piece of code to work in a larger system, it only makes sense to do this with something you know will be useful again. That is to say, this approach isn't right for every circumstance, but the circumstances it is right for will really make a difference for you. We're going to look at a rather impractical example to give us a lay of the land when it comes to using extensions – that's okay. In fact, as you're learning how to apply this approach to your workflow it's worth practicing several times to make sure you have a handle on the ins and outs of the process.

Before we get too much further, what exactly is this extension business? If you’re to the point with TouchDesigner where you’re comfortable using Python to manipulate your networks, you’ll no doubt have come to rely on a number of methods – anything with a . followed by a call. For example:

  • op('moviefilein1').width – returns the width of the file
  • op('moviefilein1').height – returns the height of the file
  • op('table1').numRows – returns the number of rows
  • op('table1').numCols – returns the number of columns

In each of these examples, it's that dot call that extends how you might think of working with an operator. Custom extensions mean that you, the programmer, are now free to create your own set of extensions for a given component. The classic example of this kind of custom component and custom extension set for TouchDesigner would be a movie player. Once you build a movie player that cross-fades between two videos, wouldn't it be lovely to use something like op('videoPlayer').PlayNext() or op('videoPlayer').Loop()? The big idea here is that you should be free to do just that when working with a component, and custom extensions are a big part of that puzzle. This keeps us from reaching deep into the component to change parameters, and instead allows us to write modular code with layers of abstraction.

If you're still not convinced that this is a powerful feature, consider that when you start a car you're not in the business of specifying how precisely the starter motor sequences each electrical signal to help the car turn over, or which spark plugs fire in which order – you issue a command, car.start(), with the expectation that the start() function holds all of the necessary information about how the vehicle actually starts. While this example might be reductive, it helps to illustrate the importance of abstraction. It's impractical for you, the driver, to always be caught up in starting sequences in order to drive a car (one might make an argument against this, but for now let's roll with the fact that many drivers don't understand the magic behind a car starting when they turn the key); this embedded function allows the driver to focus on higher-order operations – navigation, manipulation, etc. At the end of the day, that's really what we're after here: how do we add a layer of abstraction that simplifies how we manipulate a given component?

That's all well and good, but let's get to the practical application of these concepts. In this case, before we actually start to see this in action, we need to have a working component to start with. We are going to imagine that we want to build a generative component with a faceted torus that we use in lots of live sets. We want to be able to change a number of different elements for this torus – its texture, background, rotation, and deformation, to name a few. Let's begin by putting together a simple render network to build this component, and then we can look at how extensions complement the work we've already done.

First let’s add an empty Base COMP to our network.

emptyBase

Inside of our new base let’s add a Camera, Geo, and Light COMP, as well as a Render TOP connected to an Out TOP. We’re building a simple rendering network, which should feel like old hat.

simpleRender

Let’s add a movie file in TOP, and a Composite TOP to our network. We’ll composite our render over our movie file in, so we have a background. In the image below only the changed parameters for the Composite TOP are shown.

simpleRenderWithComposit

Next let's look inside of our geo COMP, and make a few changes. First let's change our geo to be a polygon rather than a mesh. We'll also turn off the display and render flags for the torus (don't worry, we'll turn them on further down the chain).

torus

Next we’ll add a noise SOP.

Noise SOP

Next we’ll add a facet SOP, turning on unique points and compute normals.

facetSOP

Finally, let’s add a null SOP. On the null, let’s turn on the display and render flags. When it’s all said and done we should have something that looks like this.

noiseTorusChain

Let’s move up one layer out of our geo, back into the base we started in. Here let’s add a phong Material and apply it to our geo. Let’s also add a movie file in TOP connected to a null TOP, and set it as the color map for our phong.

colorMapAndMaterial

While we're still thinking about our material, let's make a few changes. Let's set our diffuse color to white, our specular color to a light gray, and turn up our shininess to 255.

phongNonDefaults

Shown are the non default parameters for the Phong Material.

Let’s also make a few changes to our light COMP. I’m after a kind of shiny faceted torus, so let’s change our light to a cone light, place it overhead and to the right of our geometry, and set it to look at our geo.

Shown are the non default parameters for the Light Component.

I've gone ahead and changed the file in my movie file in TOP to a different default image so I can see the whole torus. In the end you should have a network that looks something like this.

textureTorus

Thinking ahead, I know that I’m going to want to have the freedom of changing a few parameters for this texture. I’d like to be able to control if it’s monochrome, as well as a few of the levels of the image. Let’s add a monochrome TOP and a level TOP between the movie file in and the null TOP.

postProcess

We're almost ready to start thinking about extensions, but first we need to build a control network to operate our component. Let's start by adding a constant CHOP and calling it attrAssign. Here I'm going to start thinking about what I want to control in this component. I want to drive the rotation of the x, y, and z axes for our geo, and I want to control the amplitude of the noise, the saturation of our image, the black level, brightness, and opacity. I'm going to think of those parameters as:

  • rx
  • ry
  • rz
  • noiseAmp
  • monoVal
  • blkLvl
  • bright
  • opacity

I’ll start out my constant CHOP with those channel names, and some starting values.

attrAssign

For this particular component, I want to be able to set values and have it smartly figure out transitions, rather than needing to constantly feed it a set of smoothly changing values. There are a couple of different ways we might set this up, but I'm going to take the route of using a speed CHOP for one set of operations, and a filter CHOP to smooth everything out. First I want to isolate the rx, ry, and rz channels; we can do that with a select CHOP. Next we'll connect that to a speed CHOP. We can merge this changed set of channels back into the stream with a replace CHOP – replacing our static rx, ry, and rz channels with the dynamic ones.

selectSpeedReplace

Finally, we can smooth out our data with a Filter CHOP, and end our chain of operations in a null CHOP.

controlChops

Our last step here is to export or reference each of our control parameters. Our rotation channels should be referenced by our geo1 for rx, ry, and rz. The noise SOP in geo1 should be connected to the channel noiseAmp, and our image controls should be connected to their respective parameters – Monochrome, Black Level, Brightness, and Opacity. In the end, you should end up with a complete network that looks something like this.

complete BaseCOMP

Alright, we now finally have a basic generative component set up, and we’re ready to start thinking about how we want our extensions to work with this bad boy. Let’s start with the simplest ideas here, and work our way up to something more complex. For starters we need to add a text DAT to our network. I’m going to call mine genGeoClass.

genGeo

Let’s add our first class to our genGeoClass text DAT. Our class is going to contain all of our functions that we want to use with our component. There are a few things we need to keep in mind for this process. First, white space is going to be important – tabs matter, and this is a great place to really learn that the hard way. Namespace also matters. We’re eventually going to promote our extensions (more on that later on down), and in order for that to work correctly our functions need to begin with capital letters. That will make more sense as we see that in practice, but for now it’s important to have that tucked away in your mind.

Let’s begin by adding a simple print command. First we define our class, and place our functions inside of the class. When we’re writing a class in Python we need to explicitly place self in our definitions. There are a number of very good reasons for this, and I’d encourage you to read up on the topic if you’re curious:

Why ‘self’ is used explicitly
Why the explicit self has to stay

For our purposes, let's get our class started with the following:

class GenGeo:

    def SimplePrint( self ):
 
        print( 'Hello World' )
        
        return

Before we can see this in action, we need to set up our base COMP to use extensions. Let’s back out of our base, and take a look at our component parameters.

baseExtensions

Here I've set the module reference to be the operator called genGeoClass inside of base1. We can also see that we're specifically referencing the GenGeo() class that we just wrote. I've also gone ahead and turned on promote extensions. Make sure you click "Re-Init Extensions" at the bottom of the parameters page, and then we can see our extension in action.

Next let's add a text DAT to the same directory as our base1. Here we'll use the following piece of code to call the SimplePrint() function we just defined:

op( 'base1' ).SimplePrint()

Let’s open our text port, and run our text DAT.

SimpleText

That should feel like a little slice of magic. If you choose not to promote your extensions, the syntax for calling a function looks something like this:

op( 'base1' ).ext.GenGeo.SimplePrint()

Okay, this has been a lot of work so far to only print out "Hello World." How can we make this a little more interesting? I'm so glad you asked. Here's a set of functions that I've already written. We can copy and paste these into our genGeoClass text DAT, and now we suddenly have a whole new host of functions we can call that perform some meta operations for us.

class GenGeo:

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

To better understand what all of these do let’s look at a quick cheat sheet that I made:

# Test Print Statement
op( 'base1' ).SimplePrint()

# Set Rows and Columns
op( 'base1' ).TorusPar( 20 , 20 )

# Reset Rows and Columns to 10 x 20
op( 'base1' ).TorusParReset()

# Set the texture of a movie file in TOP
op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )

# Reset the Texture of movie file in TOP
op( 'base1' ).TextureReset()

# Set the Rotation Speed for the x y and / or z axis
op( 'base1' ).Rot( 10 , 15 , 20 )

# Reset the Rotation speed to 0, and the rotation values to 0
op( 'base1' ).RotReset()

# Set the Amplitude parameter of the Noise SOP for the Torus
op( 'base1' ).TorusNoise( 0.8 )

# Make the texture Monochrome
op( 'base1' ).Mono( 1.0 )

# Control the Black Level, Brightness, and Opacity of the Texture
# that's applied to the Torus
op( 'base1' ).Levels( 0.25 , 1.5 , 0.8 )

# Reset all post process effects
op( 'base1' ).PostProcessReset()

# Turn off Background Image - 0 will turn the Background back on
op( 'base1' ).Background( 1 )

This is wonderful, but there's one last thing for us to consider. Wouldn't it be great if we had some initialization values in here, so that at start-up, or when we make a new instance of this comp, we default to a reliable base state? That would be lovely, and we can set that with an __init__ definition. Let's add the following to our class:

    def __init__( self ):
 
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1

        return

That means our whole class should now look like this:

class GenGeo:

    def __init__( self ):
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

Alright, so why do we care? Well, this application of extensions frees us to think differently about this component. Let's say that I want to make a few changes to this component's behavior. First I want to set a new image to be the texture for the torus, next I want to change the rotation speed on the x and y axes, and finally I want to turn up the noise SOP. Previously, I might have thought about this by writing a series of scripts that looked something like:

op( 'base1/attrAssign' ).par.value0 = 20
op( 'base1/attrAssign' ).par.value1 = 30
op( 'base1/attrAssign' ).par.value3 = 0.8
op( 'base1/moviefilein1' ).par.file = 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg'

Instead, I can now write that like this:

op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )
op( 'base1' ).Rot( 20 , 30 , 0 )
op( 'base1' ).TorusNoise( 0.8 )

That might not seem like a huge difference here in our example network, but as we build larger and more complex components, this suddenly becomes hugely powerful as a working approach.

extensionsInAction

Check out the example file on GitHub if you get stuck along the way, or want to see exactly how I made this work.

TouchDesigner | Replicators, and Buttons, and Tables, oh my

replicator cue buttons

I want 30 buttons. Who doesn’t want 30 buttons in their life? Control panels are useful for all sorts of operations in TouchDesigner, but they can also be daunting if you’re not accustomed to how they work. In the past when I’ve wanted lots of buttons and sliders I’ve done all of my lay-out work the hard way… like the hardest way possible, one button or slider at a time. This is great practice, and for anyone who is compulsively organized this activity can be both maddening and deeply satisfying at the same time. If you feel best when you’re meticulously aligning your buttons and sliders in perfect harmony, this might be the read for you. If, however, you like buttons (lots of them), and you want to be able to use one of the most powerful components in TouchDesigner, then Replicators might just be the tool you’ve been looking for – even if you didn’t know it.

Replicators, big surprise, make copies of things. On the surface of it, this might not seem like the most thrilling of operators, but let's say that you want to automate a process that might be tedious if you set out to do it by hand. For example, let's say you're designing a tool or console that needs 30 buttons of the same type, but you'd like them to follow a template and have unique names. You could do this one button at a time, but if you ever change something inside of the button (like maybe you want the colors to change depending on their select state), making that change to each button might be a huge hassle. Replicators can help us make that process fast, easy, and just plain awesome.

First thing, let's make a container and dive inside of it to start making our buttons. Next let's make a table. I'm going to use my table DAT to tell the replicator how to make the copies of my button. As you start to think about making a table, imagine that each row (or column) is going to be used to create our copies. For example, I've created a table with thirty rows, Cue 1 – Cue 30.
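
Thirty rows is a chore to type by hand; a quick script can fill the table for you. A sketch, assuming the table DAT is named table1:

# fill table1 with thirty cue names, one per row
table = op('table1')
table.clear()

for i in range(30):
    table.appendRow(['Cue {}'.format(i + 1)])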

Screenshot_030614_011703_AM

Next let's make a button COMP and rename it button0 (don't worry, this will make sense in a few minutes).

Screenshot_030614_012057_AM

Alright, now we're gonna do some detailed work to set up our button so some magic will happen later. First up we're going to make a small change to the Align Order of our button. The Align Order parameter of a button establishes the order in which your buttons are arranged in the control panel when you use the Align parameter (if this doesn't make sense right now, hang in there and it will soon). We're going to use a simple python expression to specify this parameter. Expand the field for the Align Order and type the python expression:

me.digits

What’s this all about? This expression is calling for the number that’s in the name of our button. Remember that we renamed our button to button0. Using this expression we’ve told TouchDesigner that the alignment order parameter of this button should be 0 (in many programming languages you start counting at 0, so this makes this button the first in a list).

Screenshot_030614_012854_AM

Alright, now let’s start making some magic happen. Let’s go inside of our button and make a few changes to the background text. If this is your first time looking inside of the button component take a moment to get your bearings – we have a table DAT that’s changing the color of the button based on how we interact with this component, we have some text that we’re using for the background of the button, and we have a few CHOPs so we can see what the state of our button is when it’s toggled or pressed.

Screenshot_030614_013526_AM

Next we’re going to take a closer look at just the text TOP called bg. Looking first at the text page of the parameters dialogue gives us a sense of what’s happening here, specifically we can see that the text field is where we can change what’s displayed on this button. First up we’re going to tell this TOP that we want to pull some data from a DAT. In the DAT field type the path:

../table1

Next change the DAT Row to the following python expression:

me.parent().digits

Finally, clear the text parameter so it's blank. With any luck your button should now read "Cue 1," or whatever you happened to type in the first row of your table.

Screenshot_030614_014601_AM

So what’s happening here?! First, we’re telling this TOP that we’re going to use some information from a table. Specifically, we’re going to use a table whose name is table1, and it’s location is in the parent network (that’s what ../ means). If we were to take our network path ../table1 and write it as a sentence it might read something like this – “hey TouchDesigner I want you to use a table DAT for this TOP, to find it look in the network above me for something named table1.” Next we have an expression that’s telling this TOP what row to pull text from. We just used a similar expression when we were setting our alignment order, and the same idea applies here – me.parent().digits as a sentence might read “hey TouchDesigner, look at my parent (the component that I’m inside of) and find the number at the end of its name.” Remember that we named this component button0, and that 0 (the digits from that name) correspond to the first row in our table.

Now it’s time to blow things up. Let’s add a replicator component to our network.

Screenshot_030614_015654_AM

In the parameters for our replicator, first specify that table1 is the template table. Next let's change the Operator Prefix to button to match the convention that we already started. Last but not least, set your Master Operator to be your button0.

Screenshot_030614_020047_AM

If we set up everything correctly we should now have a total of 30 buttons arranged in a grid in our network. They should also each have a unique name that corresponds to the table that we used to make this array.

Screenshot_030614_020310_AM

Last but not least, let’s back up and see what this looks like from the container view. At this point you should be sorely disappointed, because our control panel looks all wrong – in fact it looks like there’s only one button. What gives?

Screenshot_030614_020604_AM

The catch here is that we still need to do a little bit of housekeeping to get to what we're after. In the Container's parameters on the layout page, make sure the Align parameter is set to "Layout Grid Columns" or "Layout Grid Rows" depending on which view better suits your needs. You might also want to play with the Align Margin if you'd like your buttons to have a little breathing room.

Screenshot_030614_020954_AM

Bingo-bango, you've just made some replicator mojo happen. With your new grid of buttons you're ready to do all sorts of fun things. Remember that each of these buttons is based on the template of the master operator that we specified in the replicator – in our case that's button0. This means that if you change that operator – maybe you change the color, or add an expression inside of the button – clicking "Recreate All Operators" in the replicator remakes your buttons with those changes applied to every button.

Happy replicating.


A big thank you to Richard Burns and Space Monkeys for sharing this brief tutorial that inspired me to do some more looking and playing with Replicators:

Delicious Max/MSP Tutorial 4: Vocoder

This week I was gutsy – I did two Max/MSP tutorials. I know, brave. Sam's tutorials on YouTube continue to be a fascinating way to learn Max, as well as yielding some interesting projects. This second installment this week is about building a vocoder. The audio effect, now commonplace, is still incredibly rewarding, especially when run through a mic rather than a recorded sample. There is a strange pleasure in getting to hear the immediate effects of this on your voice, which is further compounded by the ability to add multiple ksliders (keyboards) to the mix. Below is the tutorial I followed along with yesterday, and a resulting bit of fun that I had as a byproduct.

A silly patch made for a dancer’s birthday using the technique outlined by Sam in his tutorial above.

Delicious Max/MSP Tutorial 2: Step Sequencer

Another Max/MSP tutorial from dude837 this afternoon. Today I built the step sequencer in the video below. This seems like slow going, and maybe a little strange, since I keep jumping around in this set of tutorials. This is a rough road. Maybe it's not rough so much as slow at times. I guess that's the challenge of learning anything new – it always has times when it's agonizingly slow, and times when ideas and concepts come fast and furious. The patience that learning requires never ceases to amaze me. Perhaps that's what feels so agonizing about school when we're young – it's a constant battle to master concepts, a slow road that never ends. Learning to enjoy the difficult parts of a journey is tough business. Anyway, enough of that tripe. On to another tutorial.



Sound Trigger | MaxMSP

Programming is often about solving problems, sometimes problems that you didn't know you actually had to deal with. This past week the Media Installations course that I'm taking spent some time discussing issues of synchronization between computers for installations, especially in situations where the latency of wired or wireless connections creates a problem. When 88 computers all need to be "listening" in order to know when to start playback, how can you solve that problem?
Part of the discussion in the class centered around using the built-in microphones on modern laptops as a possible solution. Here the idea was that if every computer had its microphone turned on, the detection of a sound (say, a clap) would act as a trigger for all the machines. Unless you are dealing with distances where the speed of sound becomes a hurdle for accuracy, this seemed like a great solution. So I built a patch to do just that.
This Max patch uses the internal microphone to listen to the environment and send out a trigger message (a "bang" in Max-speak) when a set sonic threshold is crossed. As an added bonus, this patch also turns off the systems involved with detection once it's set in motion. Generally, it has seemed to me that a fine way to keep things from going wrong is to streamline your program so that it's running as efficiently as possible.

Here’s a link to the simple trigger patch

Tools Used
Programming – MaxMSP
Countdown Video – Adobe After Effects
Screen Cast – ScreenFlow
Video Editing – Adobe Premiere

Sparrow Song | Drawing with Light

One of the effects that I've used in two productions now is where lines appear to draw in over time in a video. This effect is fairly easy to generate in After Effects, and I wanted to take a quick moment to detail how it actually works.

This process can start many ways. For Sparrow Song it started by connecting a laptop directly to the projectors being used, and using Photoshop to map light directly onto the set. You can see in the photo to the right that each surface that's intended to be a building has some kind of drawn-on look. In Photoshop each of these buildings exists as an independent layer. This makes it easy to isolate effects or changes to individual buildings in the animation process.

Here’s a quick tutorial about how I animated the layers to create the desired effect:


Now that I've done this several times it finally feels like a fairly straightforward process – even if it can be a rather time-consuming one.

Here’s an example of what the rendered video looks like to the playback system.

Here’s an album of documentation photos from the closing show.


Tools Used
Digital Drawing Input – Wacom intuos4
Mapping and Artwork – Adobe Photoshop
Animation and Color – Adobe After Effects
Photos – Canon EOS 7D
Photo Processing – Adobe LightRoom 4

Mapping and Melting

This semester I have two courses with conceptual frameworks that overlap. I'm taking a Media Installations course, and a course called New Systems Sculpture. Media Installations is an exploration of a number of different mediated environments, especially art installations. This course deals with the issues of building playback systems that may be interactive or may be self-contained. New Systems is taught through the art department, and is focused on sculpture and how video plays as an element in the creation of sculptural environments.

This week in the Media Installations course each person was charged with mapping some environment or object with video. While the Wikipedia entry on the subject is woefully lacking, the concept is becoming increasingly mainstream. The idea is to use a projector to paint surfaces with light in a very precise way. In the case of this course, the challenge was to cover several objects with simultaneous video.

I spent some time thinking about what I wanted to map and what I wanted to project, and was especially interested in using a zoetrope kind of effect in the process. I kept thinking back to late nights playing with Yooouuutuuube, and I wanted to create something that was loosely based on that kind of idea. To get started I found a video that I thought might be especially visually interesting for this process. My friend and colleague Mike Caulfield has a tremendously inspiring band called The Russian Apartments. The music he writes is both haunting and inspiring, an electronica ballad and call to action. It's beautiful, and whenever I listen to a track it stays with me throughout the day. Their videos are equally interesting, and Gods, a video from 2011, kept coming back to me.


To start I took the video and created a composition in After Effects with 16 versions of the video playing simultaneously. I then offset each video by 10 frames from the previous one. The effect is a kind of video lag that elongates time for the observer, creating strange transitions in both space and color. I then took this source video and mapped it to individual surfaces, creating a mapped set of objects all playing the same video with the delay. You can see the effect in the video below:




Meanwhile, in my sculpture course we were asked to create a video of process art: continuous, no cuts, no edits. I spent a lot of time thinking about what to record, and could not escape some primal urge to record the destruction of some object. In that vein I thought it would be interesting to use a slow drip of acetone on styrofoam. Further, I wanted to light this with a projector. I decided to approach this by using the mapping project that I had already created, and instead to frame the observation of this action from a closer vantage point. Interestingly, Gods has several scenes where textures crumble and melt in a similar way to the acetone effect on the styrofoam. It wasn't until after I was done shooting that I saw this similarity. You can see the process video below:




Tools Used:

Mapping – MadMapper
Media Generation – Adobe After Effects
Projector – InFocus IN2116 DLP Projector