Category Archives: How-To

presets and cue building | TouchDesigner 099

I’ve been thinking about state machines and cuing systems lately. Specifically, that there aren’t many good resources that I’ve found so far that talk new artist programmers through how to think about or wrestle with these ideas. Like many Touch programmers I’ve tried lots of different ways of thinking about this problem, and just today I saw someone post a question about exactly this on the Facebook help group.

from the facebook help group:

Hi, i’m working arround Matthews AME394 Simple VJ-Setup Tutorial. No Questions, but how can i do nearly the same with different blending times between the moduls. I tried a lot with getting different values out of a table DAT into the length parameter of a timerCHOP. But cannot figur out the right steps to get my goal. Any helps? this i need in a theater situation with different scenes to blend one after another with scenebuttons or only one button and a countCHOP or something else.

This challenge is so very familiar, and while there are lots of ways to solve this problem sometimes the hardest part is having an idea of where to start. Today what I want to look at is just that – where do we start? This isn’t the best solution, or the only solution – it’s just a starting point solution. It’s a pass at the most basic parts of this equation to help us get started in thinking about what the real problems are, how we want to tackle them, and how we can go about exposing the real issues we need to solve for.

So where do we start? In this simple little state machine we’re going to start with a table full of states. I’m going to keep this as simple as possible… though even this bare-bones version might well uncover the more interesting and more challenging pieces that lie ahead.

I’m going to start with the idea that I’ve got a piece of content (an image or a movie) that I want to play. I want to apply some post process effects (in this case just black level and image inversion changes), and I want to have different transition times between these fixed states. Here the only transition I’m worrying about is one that goes from one chain of operations to another. I’m also going to stick with images for now.

So what do we need in our network to get started?!

We’re going to borrow from an idea that often gets used in these kinds of challenges, and we’re going to think of this as operating with two decks – an A deck, and a B deck. Our deck is essentially a chain of operators that allow for all of the possibilities that we might want to explore in our application. In this case I’m only working with a level TOP, but you can imagine that we might use all sorts of operations to make for interesting composition choices.

Alright, so we’re going to lay out a quick easy deck:

moviefilein > level > fit 

adeck.PNG

Next we’re going to repeat this whole chain, then connect both of our fit TOPs to a cross TOP:

ab_deck.PNG

If you’re scratching your head at this fit TOP in line, that’s okay. For us, the fit TOP is going to act as our safety. It makes sure that no matter what the resolution of the incoming file might be, both decks always have matching proportions. We’d probably want a little more thought in how this would work for an event or a show, but for now this is enough to help ensure that we don’t experience any unexpected resolution shifts during our transitions.

Next we’re going to add a simple tweening system to our network to control how we blend between states. In this case I’m going to use a constant, a speed, and a null. I need to make sure that my speed is set to clamp, and that my min and max values are 0 and 1 respectively. Right now I only have two different decks, so I don’t want to go any higher than 1 or any lower than 0.

Now we’re cooking with propane! So where do we go next?

some simple cues

movie_file trans_time blk_lvl invert
Banana.tif 1 0 0
Butterfly1.tif 2 0.12 1
Butterfly5.tif 5 0.2 0
Mettler.2.jpg 10 0.05 0
OilDrums.jpg 0.5 0.25 1
Starfish.tif 1 0 1

In this simple examination of this challenge I’m going to use a table to store our cues. In a larger system I’d probably use python storage (which is really a dictionary), but for the sake of keeping it simple let’s start with just a table. Our simple cues are organized above, and we can put all of those values into a table DAT. You’ll notice that for now I’m only worrying about file name and not path – all of these files come from the same directory so we can treat them mostly the same way. We’ll also notice that I’m thinking of my transition times in terms of seconds. All of this can, of course, be much more complicated. The trick is to sort out a very simple example first to identify pressure points and challenges before you dig yourself into a hole.
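Just to make that remark about storage concrete, here’s a minimal sketch of what the same cues could look like as a Python dictionary held in storage. The key names and layout here are just assumptions for illustration – the table DAT we’re about to build is what the rest of this example actually uses.

python
# a hypothetical dictionary version of a few of our cues
cues = {
    1: { 'movie_file': 'Banana.tif',     'trans_time': 1, 'blk_lvl': 0,    'invert': 0 },
    2: { 'movie_file': 'Butterfly1.tif', 'trans_time': 2, 'blk_lvl': 0.12, 'invert': 1 },
    3: { 'movie_file': 'Butterfly5.tif', 'trans_time': 5, 'blk_lvl': 0.2,  'invert': 0 },
}

# stash the dictionary in this component's storage...
parent().store( 'cues', cues )

# ... and later pull a single value back out
trans_time = parent().fetch( 'cues' )[ 2 ][ 'trans_time' ]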

Okay, let’s add a table DAT to our network and copy all of our cues over.

table_dat.PNG

Now that we have all of our pieces organized it is time to think through the logic of how we make this all work. For now let’s use a button, a count CHOP, and a CHOP Execute DAT. We need to make sure our button is set to be momentary, and we also need to make sure our count CHOP is set to loop – starting at 1 and ending at 6. That matches our row indices from our table DAT.

move-through-cues.PNG

This is great Matt, but why python?

Well, we could do a lot of this with a complex set of CHOPs and selects but these kinds of states tend to be better handled, logically at least, through written code. Python will let us explicitly describe exactly what happens, and in what order those things happen. That’s no small thing, and while it might be a little rocky to wrap your head around using Python in Touch at first, it’s well worth it in the end. So what do we write in our CHOP Execute?

a little bit of logic | python

# me - this DAT
# 
# channel - the Channel object which has changed
# sampleIndex - the index of the changed sample
# val - the numeric value of the changed sample
# prev - the previous sample value
# 
# Make sure the corresponding toggle is enabled in the CHOP Execute DAT.

# the constant driving our speed CHOP, and the channel that feeds our cross TOP's index
attr       = op( 'constant_attr' )
deck       = op( 'null_deck' )[ 'trans' ]

# the A and B deck movie players and their level TOPs
deckA      = op( 'moviefilein_a' )
levelA     = op( 'level_a' )
deckB      = op( 'moviefilein_b' )
levelB     = op( 'level_b' )

# our cue table, and a path template pointing at the sample Map folder
cues       = op( 'null_cues' )
path       = app.samplesFolder + '/Map/{file}'

def onOffToOn(channel, sampleIndex, val, prev): 
    return

def whileOn(channel, sampleIndex, val, prev): 
    return

def onOnToOff(channel, sampleIndex, val, prev):
    return

def whileOff(channel, sampleIndex, val, prev): 
    return

def onValueChange(channel, sampleIndex, val, prev): 
    
    # we're in deckB, change A 
    if deck > 0.5: 
        deckA.par.file         = path.format( file = cues[ int( val ), 'movie_file' ] ) 
        levelA.par.blacklevel  = cues[ int( val ), 'blk_lvl' ] 
        levelA.par.invert      = cues[ int( val ), 'invert' ] 
        attr.par.value0        = float( 1 / cues[ int( val ), 'trans_time' ] ) * - 1

    else: 
        deckB.par.file         = path.format( file = cues[ int( val ), 'movie_file' ] )
        levelB.par.blacklevel  = cues[ int( val ), 'blk_lvl' ] 
        levelB.par.invert      = cues[ int( val ), 'invert' ] 
        attr.par.value0        = 1 / cues[ int( val ), 'trans_time' ]
    
    return

Uhhhhhhh… wait. What?

Okay. First off we just define a set of variables that we’re going to use. This makes our code a little easier to write, and easier to read. Next all of the action really happens in our onValueChange function.

We’re going to do all of this in a little logical if statement. If this thing, do that thing… in all the other cases, do something else.

First we check to see what our deck position is… which means that we check to see which output we’re currently seeing more of. If our cross TOP’s index is greater than 0.5 we know that we’re closer to 1, which also means we’re seeing more of deck B than deck A. That means that we want to make changes in deck A before we start transitioning. First we change our file, change all of our settings, then finally set a value in our constant CHOP. But why 1 / that value? And why multiplied by -1?

A default network runs at 60 fps. A speed CHOP fed by a constant with a value of 1 will rise by a count of 1 over 60 frames. Said another way, an input value of 1 to our speed in a default network will increase the output by a count of one every second. If we divide that number in half we go twice as slow – a value of 0.5 will increase by a count of 1 every 2 seconds. Using 1 / our table value lets us think in seconds rather than in fractions while we’re writing our cues. Yeah, but what about being multiplied by -1?! Well, if we want to get back to the 0 index in our cross TOP we need a negative value feeding our speed CHOP. Multiplying by -1 here means that we don’t need to think about the order of cues in our table DAT, and instead our bits of Python will keep us on the rails. Our else statement does all of the same things, but to our B deck. It also uses a positive value to feed our speed CHOP – since we need an increasing value.
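To put some quick numbers on that, here’s a little scratch-pad sketch of the values our cue table ends up feeding the constant CHOP – nothing TouchDesigner specific, just the arithmetic:

python
# trans_time of 1 second  -> 1 / 1   = 1.0  (the cross TOP travels 0 to 1 in 1 second)
# trans_time of 2 seconds -> 1 / 2   = 0.5  (the same travel takes 2 seconds)
# trans_time of 0.5 secs  -> 1 / 0.5 = 2.0  (the same travel takes half a second)

trans_time = 2
to_deck_b  = 1 / trans_time            # positive value pushes the cross TOP toward 1
to_deck_a  = ( 1 / trans_time ) * -1   # negative value pulls it back toward 0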

There you have it, a simple cuing system.

simple cues.gif

This is great Matt, but what if I want to tween settings on that level TOP? Or any other set of complicated things?! Well, I’d bet that at this point you’ve got enough to get you started. You might use a column to indicate if you’re transitioning to a totally new cue or just to new values in the same source image. You could also choose to put your parameter values in CHOPs instead so you could manipulate them with other CHOPs before exporting them to your decks.

What if I don’t want linear transitions?! A speed is just a linear ramp! That’s okay. You might choose to use a lookup CHOP and a more complicated curve. You could even make several types of curves with animation COMPs and switch between them. Or you could use a lag CHOP to change your attack and release slopes. Or you could use a trigger CHOP, or a filter CHOP. There are lots of ways to shape curves with math, now it’s up to you to figure out exactly what you’re after.

Happy programming!

pull it apart

Pull apart the example in 088
Pull apart the example in 099

Python in TouchDesigner | The Channel Class | TouchDesigner

The Channel Class Wiki Documentation

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know as it might shape the way we think of channels and samples.

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with a channel called ‘chan1’. We know we can write a python reference for this CHOP like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]

python_refs.PNG

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch which position in that list of values we want to grab – instead Touch is moving through that list with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

multi_sample_chop.gif

Okay, what’s really hiding from us is that CHOP references have a default value that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

In actuality, the real reference syntax is:
op( stringNameToOperator )[ channelNameOrIndex ][ arrayPosition ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

  • valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).
  • index (Read Only) The numeric index of the channel.
  • name (Read Only) The name of the channel.
  • owner (Read Only) The OP to which this object belongs.
  • vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

python
op( 'null1' )[0].valid
op( 'null1' )[0].index
op( 'null1' )[0].name
op( 'null1' )[0].owner
op( 'null1' )[0].exports
op( 'null1' )[0].vals

I’m going to plug that table DAT into an eval DAT to evaluate the python expressions so I can see what’s going on here. What I get back is:

True
0
chan1
/project1/base_the_channel_class/null1
[]
0.0 0.25 0.5 0.75 1.0

If we were to look at those side by side it would be:

Python In Python Out
op( ‘null1’ )[0].valid True
op( ‘null1’ )[0].index 0
op( ‘null1’ )[0].name chan1
op( ‘null1’ )[0].owner /project1/base_the_channel_class/null1
op( ‘null1’ )[0].exports []
op( ‘null1’ )[0].vals 0.0 0.25 0.5 0.75 1.0

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – Every time the value is true (1), I want the circle to be green, when it’s false (0) I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!

python
def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = [ 1.0, 0.0, 0.0 ]
    onColor         = [ 0.0, 1.0, 0.0 ]
    targetCircle    = 'circle{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = onColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = onColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = onColor[2]

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = offColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = offColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = offColor[2]
    return

channel_execute_par.gif

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:

python
def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = 'circle_off'
    onColor         = 'circle_on'
    targetCircle    = 'select{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = onColor

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = offColor
    return

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice: if you’re into what’s going on here, take it all apart; otherwise you can let it ride.

channel_execute_select.gif

Okay… so, what I want is a little container that displays a channel’s name, an indicator if the value is > 0 or < 0, another green / red indicator that corresponds to the >< values, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

multi_sample_more_members.gif

You can pull mine apart to see how I made it work here: base_the_channel_class.

Happy Programming!


BUT WAIT! THERE’S MORE!

Ivan DesSol asks some interesting questions:

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

multi_sample_ref.PNG

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun, games, and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head that’s okay – let’s take a look at it this way.

Let’s say we have a list like this:

fruit = [ 'apple', 'orange', 'kiwi', 'grape' ]

We know that we can retrieve values from our list with an index:

print( fruit[ 0 ] ) | returns "apple"
print( fruit[ 1 ] ) | returns "orange"
print( fruit[ 2 ] ) | returns "kiwi"
print( fruit[ 3 ] ) | returns "grape"

If, however, we try:

print( fruit[ 4 ] )

Now we should see an out of range error… because there is nothing at index 4 in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an explicit index, and how Touch is interpreting our previous reference, give different values:

reference value
op( ‘null7’ )[ ‘chan1’ ] 0.6331658363342285
op( ‘null7’ )[ ‘chan1’ ][ me.time.frame % 200 ] 0.6381909251213074

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation – we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages 0 still represents an index position. So what we’re actually seeing here is an off by one error.

Now that we know what the problem is it’s easy to fix:

op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]

Now we’re right as rain:

reference value
op( ‘null7’ )[ ‘chan1’ ] 0.6331658363342285
op( ‘null7’ )[ ‘chan1’ ][ me.time.frame % 200 ] 0.6381909251213074
op( ‘null7’ )[ ‘chan1’ ][ ( me.time.frame - 1 ) % 200 ] 0.6331658363342285

Hope that helps!

textport for performance | TouchDesigner

consoleText

I love a good challenge, and today on the TouchDesigner slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea about how you might make that happen. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.

'''
author | matthew ragan
web | matthewragan.com

The idea behind the display class comes from a question on the TouchDesigner 
slack channel about how to handle displaying text for performance. While you
might choose to display the console, the challenge here is that you'll end 
up with another window open on your output machine. Ideally, you only draw
a single openGL context - this results in the best possible performance
from TouchDesigner. With that in mind, how might you approach wanting to use
the textport in an artistic way for your project? 

This example tackles that challenge with a set of methods designed to 
print to a texture intended for display, as well as to the text port.
'''

class disp():

    def __init__( self ):
        ''' our init method does a bit of set-up and starting organization.

        We need a few things to be available to us in this method. It's handy
        to have the target table, the target text TOP, a message with a placeholder
        and some information for our start and end rows to be displayed.
        '''
        self.TextTarget = op( 'table_target' )
        self.DisplayText = op( 'text_for_display' )
        self.MessagePrompt = ">>> {msg}"
        self.StartRow = 0
        self.EndRow = 11
        self.TextTarget.clear()

        return

    def Display( self, inputMsg="hello world" ):
        ''' The display method will populate a table which is used to feed a text TOP

        The big idea here is that we need to fill in a table DAT that will then feed
        a text TOP. Why a table DAT instead of a text TOP? Well, you can do whatever you
        want. My idea here is that a table DAT is a little easier to think of in terms 
        of single rows that can fill up a text TOP. We can also do fancy things like use
        a select to make sure that our text is always displayed even if our start and end
        rows get away from us. A text DAT will just keep filling our text TOP, and
        we might end up with squashed text we can't see. 

        In this method we format our MessagePrompt with the supplied text. We also do a 
        quick check to see if our text will fit in the display. If we have more rows than
        can be displayed, we update our class variables to so that our last lines will
        remain visible.

        Finally, we return our formattedMsg - why? This is handy if we want to use another
        method to actually print our message to the textport as well.
        '''
        formattedMsg = self.MessagePrompt.format( msg = inputMsg )

        self.TextTarget.appendRow( [ formattedMsg ] )

        if self.TextTarget.numRows > 11:
            self.StartRow += 1
            self.EndRow += 1

        else:
            pass

        return formattedMsg

    def PrintAndDisplay( self, inputMsg="hello world" ):
        ''' The PrintAndDisplay method will both add the message to the text TOP, and 
        print the same message to the console.

        You may be wondering why we return the formatted message - the trick here is that
        we can take advantage of the work that arleady happens in the Display method.
        Why write that whole bit again, if we can just use the same logic and work 
        we've already done, and just add a final step of printing to the textport as well.
        '''

        forConsole = self.Display( inputMsg )
        print( forConsole )
        return

    def ClearDisplay( self ):
        ''' Clear display dumps out the contents of our target table.

        This method seems silly, but the idea is that you're likely to want
        a fast and clean way of cleaning out the contents of your display window.
        This method does only that simple little task. This will also reset our 
        start and end bounds to make sure that we're getting the correct selection
        for our text TOP.
        '''
        self.TextTarget.clear()
        self.__init__()
        return

Okay, so what exactly are we doing here?!

The big picture is that we want a way to be able to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to just a text DAT, allows us to do some simple clean up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have we bump both members up by one… moving them along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.
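If you’re wondering what calling this looks like from elsewhere in a network, here’s a rough sketch. I’m assuming the extension lives on a base COMP named base_display with its extension promoted – both of those names are placeholders, so adjust them to match your own setup:

python
display = op( 'base_display' )

# push a message to the text TOP only
display.Display( 'loading media bin 2' )

# push a message to the text TOP and echo it to the textport
display.PrintAndDisplay( 'connection to render node lost' )

# dump the contents of the display and reset the start / end rows
display.ClearDisplay()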

Want to download the tox and take it for a test drive? You can find the source code here.

scriptDAT | Tips and Tricks | TouchDesigner

If you spend lots of time setting up parameters in your UI elements and want a faster way to use a set of presets to populate some parameters, then the Script DAT might be just what you’re looking for.

Let’s look at a fast simple example that might have you re-thinking how to quickly set up pars in a project. Keep in mind that this won’t work in every situation, but it might work for an awful lot of them and in ways that you might not have expected.

To get started let us imagine that we have a simple set-up where we have a UI element and a display element. We want a fast way to quickly update their parameters. For the sake of this example let’s imagine that we do not need any fancy scaling or changes on the fly. This is going to be used on a set of displays where we know exactly how they’re going to display. We might think about using storage to set and pull parameters, but you might be hesitant to use too much python for those bits and bobs. Okay, so exports it is… they’re a little more cumbersome to set up, but they are much faster – fine.

Sigh.

I guess we need to start setting up an export table, or a constant CHOP and dragging and dropping all over creation. Before you do that though, take a closer look that is the majesty of the Script DAT:

The Script DAT runs a script each time the DAT cooks and can build/modify the output table based on the optional input tables. The Script DAT is created with a docked (attached) DAT that contains three Python methods: cook, onPulse, and setupParameters. The cook method is run each time the Script DAT cooks. The setupParameters method is run whenever the Setup Parameter button on the Script page is pressed. The onPulse method is run whenever a custom pulse parameter is pushed.

Maybe we can use the Script DAT to make an export table for us with just a little bit of python.

We can start by putting a few things into storage. Let’s create a new dictionary but follow some simple rules:

  • The keys in this dictionary are going to be operator names or paths
  • Each operator is itself a key for another dictionary
  • The keys of that dictionary must be proper parameter names
  • The values associated with these keys need to be legal entries for parameters

Okay, with these rules in mind let’s see what we can do. Open up a new project, in project1 let’s create two new containers:

  • container_ui
  • container_led_display

Add a new text DAT and create a simple dictionary to put into storage, and let’s follow the rules we described above:

# it's important here to know that 
# pars need to match their correct
# parameter name in order for this to
# work correctly.

# in that same vein, the relative paths
# are based on the location of the script
# op that's being used

# finally, for this to work correctly, you'll
# need to re-run this dat in order to place
# new values into storage

attr = {
    "container_ui" :{ 
        "w" : 500,
        "h" : 1080,
        "alignorder" : 0
    },
    "container_led_display": {
        "w" : 400,
        "h" : 300,
        "alignorder" : 1
    },
    "..":{
        "align" : 1,
        "justifyv" : 1
    }
}

parent().store( 'set_up_attr', attr )
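Before moving on it’s worth a quick sanity check that the dictionary actually landed in storage. A couple of lines in the textport should hand back the values we just stored – here I’m assuming the text DAT above lives inside /project1, so that’s where the storage ends up:

python
# fetch the whole dictionary back out of storage on /project1
attr = op( '/project1' ).fetch( 'set_up_attr' )

# and read a single nested value
print( attr[ 'container_ui' ][ 'w' ] )   # prints 500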

Alright, so far so good. Now let’s add a Script DAT.

We’re going to use our Script DAT to look at our stored vals and create an export table on the fly for whatever is in the storage dictionary “attr” – easy.

Let’s edit our Script DAT to have the following contents:

# me - this DAT
# scriptOp - the OP which is cooking
#
# press 'Setup Parameters' in the OP to call this function to re-create the parameters.

# define where pars is coming from
pars = parent().fetch( 'set_up_attr' )

def setupParameters(scriptOp):
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def cook(scriptOp):
    scriptOp.clear()

    # insert header row
    scriptOp.insertRow( [ 'path', 'parameter', 'value', 'enable' ] )

    # loop through dictionary for pars
    for key, value in pars.items():
        for item_key, item_value in value.items():
            scriptOp.appendRow( [ key, item_key, item_value, 1 ] )

    # set up parent pars for height and width
    parent_height = max( [ pars[ 'container_ui' ][ 'h' ], pars[ 'container_led_display' ][ 'h' ] ] )
    parent_width = sum( [ pars[ 'container_ui' ][ 'w' ], pars[ 'container_led_display' ][ 'w' ] ] )
    scriptOp.appendRow( [ '..', 'w', parent_width, 1 ] )
    scriptOp.appendRow( [ '..', 'h', parent_height, 1 ] )

    return

Finally, let’s turn on the green export flag at the bottom of our Script DAT:

script_dat.PNG

And just like that we’ve set-up an auto-export system. Now every time we update our dictionary and re-run our script to put the contents into storage, we’ll automatically push those changes to an export table.

Looking for an example to pull apart? Head over to github and download a simple example to look over.

Maintaining Perspective with Multiple Cameras | TouchDesigner

File this away under “interesting theoretical concepts that I’ll never use… or will I?”

At some point while making realtime generative art for massive installations you may find that you’re beyond the capabilities of traditional realtime rendering in Touch. Let’s say, for example, that you need to render 12 hd outputs for a 3 x 4 array of screens – a resolution of 7680 x 3240 certainly can be done with a single render TOP, but delivering that final texture is a little more tricky.

I’m well aware that there are a number of possible solutions to this problem but before you find yourself composing that email to me about how to hack a way to a solution… what if it wasn’t 12 outputs, what if it was 120? 200? What if every output was 4k? The answer we’re really after here is how to draw a scene with consistent perspective across multiple machines… because at some point you’ll have to use multiple machines. So, what do we do?

Forget what we do… what does that even look like? I’m still so confused.

Okay, so let’s first look at some examples of what it looks like as reference.

In this example we can see one large canvas that spans multiple screens. This is great – it’s huge and beautiful. This example shows a large desktop, which is also great… but what if we’re after some real-time rendering? This is a great illustration of the problem we might encounter. What if these displays were all 1920 x 1080? It’s a 7 x 4 array, so that’s going to be a definite challenge for a complex scene on a single machine. At this point we probably can’t realistically produce a single pixel-to-pixel texture for this array on a single machine. Instead we’d have to have a system of distributed rendering machines. Okay, that’s pretty straightforward and we can do some hip flat rendering that’s all orthographic, no problem. What if we want perspective? If you want perspective in your real time rendering you need a way to conceptualize what the entire “screen” is, and then how to selectively render just a portion of that larger scene.

Huh?

Consider the beautiful work of Refik Anadol. I can’t speak to exactly what technique is being used here, but it’s a good illustration of the same challenge. How can you maintain the illusion of perspective if you need to render your generative art on multiple machines? That’s the real question we’re trying to answer… and now we can look at some ideas to help us better understand that challenge.

The process and methodology described below aim to solve that problem. For this example I’m going to work in a scale that’s unrealistic… but will allow those without a commercial license to play along from home. If you have a commercial license feel free to turn up the resolution as long as you keep mathematics involved in mind.

First things first, let’s build out a simple proof of concept that will make sure we understand this problem more completely.

Let’s imagine that we have a large composition that we need to cut up (for the sake of rendering) into 4 smaller slices. That might look something like this:

multi_perspective

Remember, this is just a proof of concept so we’re going to start with a very easy implementation first, before we start to dig into the more complex questions. An important lesson to consider when it comes to programming is to start by reducing the problem to its basic elements, then when you have a foundational understanding of the issue start to scale up – don’t worry, we’ll get there we just have to start small.

Okay, so we’ve got our 2 x 2 array that we want to render. Let’s see how we can set up some cameras to render just one of those squares a piece, but all from the same point of view.

Wait! Why do they need the same point of view? We’re after the same point of view so we can maintain perspective. It’d be easy enough to use four different cameras that were translated into positions to only see their section of the larger quad, but the results from lighting and perspective calculations wouldn’t match. You can use this multi-camera transformation technique if you’re doing orthographic rendering with emissive lighting, but not if you want to maintain perspective and use non-emissive lighting. It’s okay if you don’t believe me – I didn’t believe me either, and I had to set it up and test it a bunch of times before I really understood what’s happening.

What’s that going to look like? Well, for the perspective of a single camera that can see the whole scene – the effect we want to recreate eventually – we might see something like this:

full_camera_view.PNG

From the vantage point of a single camera, we want to be able to zero in on just a single quadrant in our scene, something like this:

single_quad.PNG

Eventually, we want to reassemble the view of four cameras to look like our original single camera view.

Matt, I still don’t get it. That’s okay. Keep reading, and if by the time we get done with this example it’s still not a useful technique you can stop reading. If, however, you want a means to do perspective based illusions across multiple machines for massive installations, keep reading to the end.

We’re going to set up our test by using a part of our camera COMP that you may not have used before. Specifically, when it comes to the view page, we’re going to use the Viewing Angle Method called “Focal Length and Aperture.” I wish I could tell you exactly what this means to TouchDesigner – spoiler, I can’t – what I can help you understand is how these values relate to one another in order to achieve our particular illusion.

We’re going to start by setting up a simple example. Add a geo to your scene, and replace the torus inside with a grid. Set your sizex parameter to be 16/9. If you want to follow along step for step, change your grid to be a polygon, and add a noise SOP with a period of 0.02 and an amplitude of 0.5. Connect that to a facet SOP with unique points and computed normals. Connect your chain of operators to a null SOP and make sure that your display and render flags are turned on for your null. You should have a simple network that looks like this:

SOPs.png
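If you’d rather script that set-up than drag operators around, here’s a rough sketch that builds the same SOP chain. The parameter names are my best guesses rather than something to treat as gospel, and I’m assuming your geo lives at /project1/geo1 – double check both against your own network:

python
# a sketch of building the grid > noise > facet > null chain inside our geo COMP
geo = op( '/project1/geo1' )          # assumption - adjust to your geo's path

grid = geo.create( gridSOP, 'grid1' )
grid.par.sizex = 16 / 9               # match our 16:9 scene window

noise = geo.create( noiseSOP, 'noise1' )
noise.inputConnectors[0].connect( grid )
noise.par.period = 0.02               # assumed parameter names - verify on the op
noise.par.amp    = 0.5

facet = geo.create( facetSOP, 'facet1' )
facet.inputConnectors[0].connect( noise )
facet.par.unique  = True              # unique points
facet.par.compnml = True              # compute normals

out = geo.create( nullSOP, 'null1' )
out.inputConnectors[0].connect( facet )
out.display = True                    # make sure the null is what gets displayed...
out.render  = True                    # ...and rendered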

Outside of your geo add two moviefile in TOPs. In the first you can use the supplied quad arrangement in the assets portion of the git repository that accompanies this post. It’s called multi_perspective.png. In your second moviefile in TOP select the FiledGuide.tif. Composite these two together with a composite TOP, and change the operand method to Add. Connect this to a null TOP, and finally assign this null to a phong material as the color map. When you’re done you should have something that looks like this:

texture_grid.png

Whew. Alright, now we can finally get to the interesting part. Let’s add a light and a camera to our network.

Now, in our camera comp let’s set the tz parameter to 10 units:

cam_tz.PNG

Next let’s move over to the view page of the camera COMP. Here we’re going to leave the projection as perspective, but we’re going to change the viewing angle method to “Focal Length and Aperture.” Next we’ll change our Focal Length to 10 and our aperture to 16/9:

full_view_cam

This is our camera that can see the entire scene. Let’s add a render TOP to our network so we can see what our camera sees:

full_view_cam_view

If we were to bypass our noise SOP the view of this piece of geometry would fit exactly within our view-port. From here forward it’s going to get a little interesting. We want to maintain the perspective calculations from this vantage point, but we want only a single quadrant at a time to fill our view-port. It’s almost like zooming in and cropping to only a sub-section of our view. How do we do that?

Let’s copy our first camera, and then make a few adjustments. I’m going to call my new camera cam_single_p1. Next we’re going to leave our Focal Length and Aperture settings just as they were. We are, however, going to change our window x/y parameters to be:

winx -0.25
winy 0.25

We’re also going to change our window size to be 0.5.

cam_single_p1

Let’s render that camera and see what we get:

single_cam_p1_rendered.PNG

Woah!! That works just like we wanted! Thinking through our other cameras, we can quickly see that the combination of our window size and offsets acts as a zoom and translation mechanism. Try adding 3 more cameras with the following winx/winy settings (or see the sketch after this list if you’d rather script them):

Cam2

  • winx 0.25
  • winy 0.25

Cam3

  • winx -0.25
  • winy -0.25

Cam4

  • winx 0.25
  • winy -0.25
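If setting those window values by hand feels tedious, here’s a small sketch that loops over the four cameras and fills in the same parameters. I’m assuming the cameras are named cam_single_p1 through cam_single_p4 – rename to taste:

python
# window x / y offsets for the four quadrants of our 2 x 2 proof of concept
offsets = [
    ( -0.25,  0.25 ),   # the values we gave cam_single_p1 above
    (  0.25,  0.25 ),
    ( -0.25, -0.25 ),
    (  0.25, -0.25 ),
]

for index, ( winx, winy ) in enumerate( offsets ):
    cam             = op( 'cam_single_p{}'.format( index + 1 ) )
    cam.par.winx    = winx
    cam.par.winy    = winy
    cam.par.winsize = 0.5   # 1 over the 2 windows that fit across our scene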

If you render these cameras you should see something like this:

all_cams.PNG

Okay. That’s all pretty slick, but how do all of these parameters relate?

What does it mean?!

The meat and potatoes of this technique is to define a view-port’s aspect ratio, the number of vertical slices that a single window represents, and then to specify where the offsets sit that represent the center of a given window.

Huh?!

Let’s think through our simple example. We made our rendered quad a 1.7778 x 1 rectangle – a rectangle with a 16:9 aspect ratio. This was the same value we used for our aperture. We also set the distance of our camera from our geo to be 10 units, which was the same value we used to define our focal length. The window size is a ratio of 1 over the number of vertical sections… in our case we had two vertical sections, so 1 over 2 is 0.5. Our xy window offsets represent the center of our windows in our sections. That’s a little harder to wrap our heads around, and a better way to think about it is the UV coordinates of the center of a given window into our scene. Let’s break that out a little more.

Window Size

  • 1 / number of vertical windows that can fit within our viewport

Focal Length

  • the distance of our camera to our scene window

Aperture

  • the aspect ratio of our scene window

Win X

  • ( ( U * 2 ) - 1 ) / 2

Win Y

  • -( ( ( V * 2 ) - 1 ) / 2 )

Here U and V are the normalized coordinates of a given window’s center within the full scene.
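Since we’ll end up doing this same arithmetic for every output later on, here’s a small sketch that rolls those formulas into one plain Python function – no operators involved, so you can sanity check the numbers before wiring anything up. Note that the pixel measurements here follow Photoshop’s convention of y increasing downward from the top of the scene:

python
def window_settings( center_x_px, center_y_px, scene_w, scene_h, window_w ):
    '''Convert a window center in pixel space into winx / winy / winsize values.'''

    # pixel coordinates to normalized UV coordinates
    u = center_x_px / scene_w
    v = center_y_px / scene_h

    # shift from a 0-1 range to offsets around a centered origin
    winx = ( ( u * 2 ) - 1 ) / 2
    winy = ( ( ( v * 2 ) - 1 ) / 2 ) * -1

    # 1 over the number of windows that fit across the full scene
    winsize = 1 / ( scene_w / window_w )

    return winx, winy, winsize

# quick check with our 2 x 2 example, pretending the full scene is 1280 x 720:
# the upper quadrant's center sits at ( 320, 180 ), and each window is 640 wide
print( window_settings( 320, 180, 1280, 720, 640 ) )
# ( -0.25, 0.25, 0.5 ) - the same winx / winy / winsize we gave cam_single_p1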

To really dig into the power of this technique we need to push beyond just a quad based set up we need a more abstract configuration of windows in our scene. Now that we understand the mechanics of our set up, let’s look at some arbitrary configuration of windows that might be spread across a large number of machines. What if our window arrangement looked something like the below:

multi_cam_perspective_output_map

What’s going on here?

Well,  let’s imagine that you’re working on a large format LED installation where you need to slice up your scene into uniform HD chunks that then feed an LED controller. In some cases those regions overlap even though only a single LED screen is outputting the content. Relationally, however, you still need to be able to control and designate the regions for the screens. Output 5 and 7 are a good example of these. The final shape of that screen is going to be the combined outline of the two displays, but the video feed to the LED controllers needs to be consistent HD cutouts. All of the dimensions in this example could probably be managed by doing a single rendering of the full scene then cropping to a given region – but at some point your ability to render the full scene and then use TOPs to crop out pixels is going to fail. This example has 7 outputs, but it’s not hard to imagine a project that had 20 or more – as reference, the need to understand this technique came out of working on an installation with 68 discrete outputs.

Okay, okay, okay… fine. So what are all these targets and numbers about?

We’ll remember in our first proof of concept example that we were able to take advantage of a simple 0.25 offset – that makes sense right?

If our geometry is placed with its bottom left corner at the origin (0,0), then the center of our first slice is going to be at (0.25, 0.25):

2016-12-13 13.35.19.jpg

That’s great Matt, but that doesn’t make any sense… following this logic, our slices would have been more like:

  • Slice1 ( 0.25, 0.75 )
  • Slice2 ( 0.75, 0.75 )
  • Slice3 ( 0.25, 0.25 )
  • Slice4 ( 0.75, 0.25 )

So what gives?!

Well, we have to remember that we set up our geometry to have its center at the origin. If we take this into account, what we see is more like:

2016-12-13 13.36.58.jpg

Looking at this, it should make sense why we used the translation coordinates that we did. The other interesting thing to understand in this look into our geometry is to consider the following.

If the full extent of our scene represents the bounds of a complete window, then we can begin to think about our winx and winy coords as being more akin to a UV – a normalized coordinate on our full window. We need to do some additional math to compensate for the translation of our origin, but that’s pretty straightforward.

If you’re still scratching your head, that’s okay. Let’s look at how to take our new window map and create a programmatic means of slicing up that full scene.

Thinking back to our proof of concept test we need a few things in place in order for this all to work as we expect:

  • We need the transforms for all our cameras to be the same – these are on the xform page: tx, ty, tz.
  • We also need several pieces for the view page:
    • Focal Length
    • Aperture
    • Winx
    • Winy
    • Winsize

Given that our cameras aren’t going to be doing any moving once we set up our calibration, we can safely use python expressions in our parameter fields. In terms of optimization, if an op does a lot of cooking it’s often better to use exports instead of expressions – expressions end up getting compiled on demand and evaluated when the op cooks. Since we want fixed cameras looking into a moving world, we can use expressions for our cameras – it’s also going to be less of a hassle to set up, which is great.

For starters let’s add our reference template into our scene. We can start by dragging in our texture, connecting a Null TOP, and then assigning this to the color of a constant Material.

reference_plate.PNG

Next let’s add a geometry to our scene. We can replace the torus inside with a rectangle that has our source window’s aspect ratio, which in this case is 16:9 or 1.778 : 1.

scene_window.PNG

Next I’m going to use a Null COMP to hold the transforms of our camera system. I’m going to set this to have a tz value of 5, and otherwise leave this alone.

Null_pars.PNG

Let’s also add an object CHOP to help us with determining the distance between the null and geometry – to be clear, we don’t need to do this with an object CHOP, we could do this with a math CHOP, or with Python.

In the Object CHOP I’m going to set our null as the target object, geo1 as the reference object, and I’m going to set this to compute distance.

object_chop.png

I’m also going to add some tables that hold some reference information for us. I want to know the width and height of our scene, as well as the width and height of a given cut-out.

ref_values.PNG

We’re also going to need a table with all of our pixel space coordinates:

output_coords.PNG

Now that we have all our primary ingredients ready, we can build out a system to convert our pixel space coordinates into a set of winx and winy translation values.

Let’s start by looking at this process in general. We can first start with our coords in pixel space:

Pixel Space

output x y
output1 640 360
output2 205 130
output3 237 518
output4 525 130
output5 752 593
output6 1013 188
output7 1012 483

These values represent the actual center of these windows in the full pixel scene. I started by making this template in Photoshop, and then measured the location of the center of each given viewport.

Once we have these values, we need to convert them into normalized values. In other words, how do these pixel coordinates translate to UV coordinates? This is a pretty straightforward calculation – the pixel value divided by its respective dimension for the full scene:

  • outputx / full_scene_x
  • outputy / full_scene_y

We can set up a quick eval DAT to do all of this for us:

convert_to_uv

The two python expressions that drive this in the table2 DAT are:

me.inputCell / op( 'table_scene' )[ 'scene_x', 1 ]
me.inputCell / op( 'table_scene' )[ 'scene_y', 1 ]

Our results from this can be found in the table below.

UV Space

output x y
output1 0.5 0.5
output2 0.16015625 0.18055555
output3 0.18515625 0.71944444
output4 0.41015625 0.18055555
output5 0.5875 0.82361111
output6 0.79140625 0.26111111
output7 0.790625 0.67083333

Now that we have the UV coords that represent the center of each window, we need to convert these values into a scale that takes into account that the center of our geometry is located at the origin. For our x values we multiply our value by 2, subtract 1, and divide by 2. For our y values we use the same operation and multiply by negative 1. We can use another eval DAT to do just this for us.

convert_to_winxy.PNG

The two python expressions that drive this in table3 are:

( ( me.inputCell * 2 ) - 1 ) / 2
( ( ( me.inputCell * 2 ) - 1 ) / 2 ) * -1

Now we have taken our original pixel coords and then converted them into our winx and winy transforms.

Converted into winx winy transforms

output x y
output1 0.0 -0.0
output2 -0.33984375 0.31944444
output3 -0.31484375 -0.21944444
output4 -0.08984375 0.319444444
output5 0.08750000 -0.323611111
output6 0.29140625 0.238888888
output7 0.290625 -0.170833333

With these values set, we just need to make sure that we compute our window size, aperture, and focal length. Looking back to the above, calculating these remaining values should be a snap.

Window Size

Window size is 1 over the number of windows that can fit into our full scene. This also means that our total scene’s width should be n x the width of a given window.

  • In our case a single window (measured in Photoshop) is 320 pixels, and our full scene is 1280 pixels.
  • 1280 / 320 = 4
  • 1 / 4 = 0.25
  • Window Size = 0.25

Focal Length

Focal Length is the distance between our point of view (in our case the null), and our full scene. We’ve used an object CHOP to compute this distance.

Aperture

Our aperture is the aspect ratio of our full scene.

  • 1280/720 = 1.778
  • Aperture = 1.778

I’m using an eval DAT to do the computation and organization of all of this:

cam_attr_calculations.PNG

The python for these looks like this:

1 / ( op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_render_attr' )[ 'width', 1 ] )
op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_scene' )[ 'scene_y', 1 ]
op( 'object1' )[ 'dist' ]

Python in touch is name dependent, so I’d recommend looking at this example network if you’re trying to replicate this effect.

Now we can set up our cameras to correctly crop out a given viewport of the entire scene. I’m going to rely on the digits of a camera to be correspondences to the output. So in my case output1 and camera1 should be the same thing. I’m also going to use the translation values of our null to set the location of the camera.

All of that said, our expressions for our camera should look like this:

camera_expressions.PNG
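As a rough sketch of what those per-camera expressions boil down to, a single camera’s parameters might look something like this. The support operator names here (null1, table_cam_attr, table_winxy) are assumptions for the sake of illustration – the sample file is the authority on what actually references what:

python
# transform page - every camera borrows the transform of our null COMP
# tx : op( 'null1' ).par.tx
# ty : op( 'null1' ).par.ty
# tz : op( 'null1' ).par.tz

# view page - the shared attributes come from our calculation tables,
# and winx / winy are looked up per output using the camera's digits
# focal length : op( 'table_cam_attr' )[ 'focal', 1 ]
# aperture     : op( 'table_cam_attr' )[ 'aperture', 1 ]
# window size  : op( 'table_cam_attr' )[ 'winsize', 1 ]
# window x     : op( 'table_winxy' )[ 'output' + str( me.digits ), 'x' ]
# window y     : op( 'table_winxy' )[ 'output' + str( me.digits ), 'y' ]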

Again, all of our expressions are name dependent, so I’d recommend looking over how I’ve organized this in the sample file in order to make sure you know exactly what’s referencing what.

Now we can copy-paste our camera 6 more times – I’m using digits in many of these expressions to match the camera digit to the output digit. Looking over my results it looks like we’re right on the money.

view_ports.PNG

To really appreciate what’s happening here, I’m going to turn off the rendering on our calibration plate, and turn on a sample piece of geometry.

view_ports_real_geo.PNG

In geo2 you can see the entire geometry, and in each of our viewports we can see that we’re only rendering the region of our geometry that falls into single view.

“That’s a mess… I don’t get it,” you might well be saying. That’s okay. Let’s rearrange our TOPs to mirror more closely what’s in our template:

view_ports_real_geo_rearranged.PNG

Hopefully, having done this, we can better see how these various pieces work together.

At this point we now have a means of rendering a complex scene across multiple machines (or GPUs if you’re able to use affinity on Quadro cards), and maintain perspective. That unlocks a whole new avenue for realtime rendering that breaks you away from the limitations of single machine configurations or reliance on baked media for distributed realtime rendering.

Download this example from github


It’s always lovely to get an email from Derivative headquarters.

In this case I just got a lovely ping from Malcolm to let me know that the crop parameters on the render TOP can be used for the same functions described above.

Let’s look back at our first example to understand how that might work. In this case I’m going to use the same initial camera that we set up – our cam_full_scene COMP. I’m going to use this one again since I know that it’s correctly configured to capture the entire width and height of our reference plate. Next I’m going to add a render TOP, and under the CROP page I’m going to change my crop right and crop bottom pars to 0.5. For the sake of understanding the concept I’m going to leave this in a fractional unit space, but we could just as easily determine these values as absolute pixel measures. The result of this looks like this:

render_crop1.PNG

Next up, rather than using another render TOP I’m going to use a render pass – there’s lots of good reasons to use the render pass but one of the most important considerations here is that it’s a very efficient rendering operation. We do, however, need to make a few other adjustments. We need to target our render2 as our render TOP on the Render Pass page, and we also need to toggle on clear to camera color, and clear depth buffer:

render_crop2.PNG

On our render pass we’ll need to use the following crop parameters:

render_crop2_2.PNG

As we add additional render pass TOPs we need to target the previous render pass – a given render or render pass TOP can only have a single render pass assigned to it.

Our crop parameters should look like this:

renderpass3

  • crop left 0.0
  • crop right 0.5
  • crop bottom 0.0
  • crop top 0.5

renderpass4

  • crop left 0.5
  • crop right 1.0
  • crop bottom 0.0
  • crop top 0.5
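If you’d rather not type those crop values in by hand, here’s a quick sketch of setting them in a loop. I’m assuming the operators are named renderpass3 and renderpass4 as above, and that the crop parameter names match what’s on the Crop page of your build – treat this as a starting point and verify the names against your own TOPs:

python
# crop regions per render pass, as fractional ( left, right, bottom, top )
crops = {
    'renderpass3': ( 0.0, 0.5, 0.0, 0.5 ),
    'renderpass4': ( 0.5, 1.0, 0.0, 0.5 ),
}

for name, ( left, right, bottom, top ) in crops.items():
    pass_top = op( name )
    pass_top.par.cropleft   = left    # assumed parameter names - check the Crop page
    pass_top.par.cropright  = right
    pass_top.par.cropbottom = bottom
    pass_top.par.croptop    = top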

All in all when we’re done we should have something that looks like this:

render_crop_full.PNG

Here the result is the same, the methodology going into it all is just a little different. Like all things TouchDesigner, there are multiple means of solving the same challenge and the “right” one ultimately comes down to the choice that’s best for your particular installation.

Happy Programming everyone.

* The git repository and support files have been updated to reflect this additional material.