Tag Archives: TouchDesigner

TouchDesigner | Reusable Code Segmentation with Python

reusable-code-segmentation.PNG

Thinking about how to approach re-usability isn’t a new topic here. In fact, there’s been plenty of discussion about how to re-use components saved as tox files, how to build out modular pieces, and how to think about using Extensions for building components you want to re-use with special functions.

That’s wonderful and exciting, but any of us who have built deployed projects have quickly started to think about how to build a standard project that any given installation is just a variation of… a flavor, if you will, of a standard project build. Much of that customization can be handled by a proper configuration process, but there are some outliers in every project… that special method that breaks our beautiful project paradigm with some feature that’s only useful for a single client or application.

What if we could separate our functions into multiple classes – those that are universal to every project we build, and another that’s specific to just the single job we’re working on? Could that help us find a way to preserve a canonical framework with beautiful abstractions while also making space for developing the one-off solutions? What if we needed a solution to handle the above in conjunction with sending messages over a network?  Finally, what if we paired this with some thoughts about how we handle switch-case statements in Python? Could we find a way to help solve this problem more elegantly so we could work smarter, not harder?

Well, that’s exactly what we’re going to take a look at here.

First, a little disclaimer: this approach might not be right for everyone, and there’s a strong assumption here that you’ve got some solid Python under your belt before you tackle this process / working style. If you need to tool up a little bit before you dig in, that’s okay. Take a look at the Python posts to help get situated, then come back to really dig in.


Getting Set-up

In order to see this approach really sing we need to do a few things to get set up. We’ll need a few operators in place, so before we dig into the Python let’s get our network in order.

First let’s create a new base:

base.PNG

Next, inside of our new base let’s set up a typical AB Deck of TOPs with a constant CHOP to swap between them:

typical-ab-deck.PNG

Above we have two moviefilein TOPs connected to a switch TOP that’s terminated in a null TOP. We also have a constant CHOP terminated in a null whose chan1 value is exported to the index parameter of our switch TOP.

Let’s also use the new Layout TOP in another TOP chain:

layout-top.PNG

Here we have a single layout TOP that’s set up with an export table. If you’ve never used DAT exports before you might quickly check out the article on the wiki to see how that works. The dime tour of that idea is that we use a table DAT to export values to another operator. This is a powerful approach for setting parameters, and certainly worth knowing more about / exploring.

Whew. Okay, now it’s time to set up our extensions. Let’s start by creating three text DATs: one called messageParserEXT, one called generalEXT, and one called jobEXT.

parser-general-job.PNG


The Message Parser

A quick note about our parser. The idea here is that a control machine is going to pass along a message to a set of other machines also running on the network. We’re omitting the process of sending and receiving a JSON blob over UDP, but that would be the idea. The control machine passes a JSON message over the network to render nodes that in turn need to decode the message and perform some action. We want a generalized approach to sending those blobs, and we want both the values and the control messages to be embedded in that JSON blob. In our very simple example our JSON blob has only two keys, messagekind and vals:

message = {
        'messagekind' : 'some_method_name',
        'vals' : 'some_value'
}

In this example, I want the messagekind key to be the same as a method name in our General or Specific classes.

Pero, like why?!

Before we get too far ahead of ourselves, let’s first copy and paste the code below into our messageParserEXT text DAT, add our General and Specific classes, and finish setting up our extensions.
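
Here’s a minimal sketch of what that parser might look like. The Process_message() method name matches what we’ll call later from our panel execute DATs; the rest of the details are just one reasonable way to flesh this out, not gospel.

class MessageParser:
    '''A helper class for routing JSON style message blobs to methods.'''

    def __init__(self, my_op):
        self.My_op = my_op
        return

    def Process_message(self, message):
        # pull the method name out of the blob
        messagekind = message.get('messagekind', '')

        # hasattr() lets us fail elegantly - if there's no matching method
        # we log the problem rather than throwing an error
        if hasattr(self, messagekind):
            getattr(self, messagekind)(message)
        else:
            print('No method found for | {}'.format(messagekind))
        return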

The General Code Bits

In our generalEXT we’re going to create a General class. This works hand in hand with our parser. The parser is going to be our helper class to handle how we pass around commands. The General class is going to handle anything that we need to have persist between projects. The examples here are not representative of the kind of code you’d have in your project; instead, they’re just here to help us see what’s happening in this approach.
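
A sketch of that might look like the block below. Change_switch() and Image_order() are the two methods we’ll call later on; the operator names (constant1, for example) are assumptions based on the network we just built, so adjust them to match your own.

# grab the MessageParser class from its own text DAT so we can inherit from it
MessageParser = op('messageParserEXT').module.MessageParser

class General(MessageParser):
    '''Methods that persist from project to project.'''

    def __init__(self, my_op):
        MessageParser.__init__(self, my_op)
        return

    def Change_switch(self, message):
        # pull the vals key out of the blob and use it to drive the
        # constant CHOP whose chan1 is exported to our switch TOP
        vals = message.get('vals')
        op('constant1').par.value0 = vals
        return

    def Image_order(self, message):
        # a stub - in the full example this would update the export
        # table that drives our layout TOP
        vals = message.get('vals')
        print('Image_order received | {}'.format(vals))
        return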

The Specific Code Bits

Here in our Specific class we have the operations that are specific to this single job – or maybe they’re experimental features that we’re not ready to roll into our General class just yet. Regardless, these are methods that don’t yet have a place in our canonical code base. For now let’s copy this code block into our jobEXT text DAT.
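
Something like the sketch below is all we need for now – the method here is only a placeholder for whatever one-off feature your job actually calls for. When we configure the extension on our base in a moment, the class we’ll point to is Specific, with an expression something like op('./jobEXT').module.Specific(me).

# grab the General class from its own text DAT so we can inherit from it
General = op('generalEXT').module.General

class Specific(General):
    '''Job specific or experimental methods that haven't yet earned a
    place in the canonical code base.'''

    def __init__(self, my_op):
        General.__init__(self, my_op)
        return

    def Client_only_feature(self, message):
        # a placeholder for that special one-off method
        vals = message.get('vals')
        print('Client_only_feature received | {}'.format(vals))
        return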

At this point we’re just about ready to pull apart what on earth is happening. First let’s make sure our extension is correctly set-up. Let’s go back up a level and configure our base component to have the correct path to our newly created extension:

 

reusable-ext-settings.PNG

Wait wait wait… that’s only one extension? What gives? Part of what we’re seeing here is inheritance. Our Specific class inherits from our General class, which inherits from our MessageParser. If you’re scratching your head, consider that a null TOP is also a TOP is also an OP. In the same way we’re taking advantage of Python’s object-oriented nature so we can treat a Specific class as a special kind of General operation that’s related to sending messages between our objects. All of this leads me to believe that we should really talk about object oriented programming… but that’s for another post.

Alright… ALMOST THERE! Finally, let’s head back inside of our base and create three buttons. Let’s also create a panel execute DAT for each button:

buttons.PNG

Our first panel execute DAT needs to be set up to watch the state panel value, and to run on Value Change:

change-switch.PNG

Inside of our panel execute DAT our code looks like:

# me - this DAT
# panelValue - the PanelValue object that changed
#
# Make sure the corresponding toggle is enabled in the Panel Execute DAT.
def onOffToOn(panelValue):
    return
def whileOn(panelValue):
    return
def onOnToOff(panelValue):
    return
def whileOff(panelValue):
    return
def onValueChange(panelValue):
    message = {
        'messagekind' : 'Change_switch',
        'vals' : panelValue } 
    parent().Process_message(message)
    return

If we make our button viewer active, and click our button, we should see our constant1 CHOP update, and our switch TOP change:

switch-gif.gif

AHHHHHHHHH!

WHAT JUST HAPPENED?!


The Black Magic

The secret here is that our messagekind key in our panel execute DAT matches an existing method name in our General class. Our Process_message() method accepts a dictionary, then extracts the messagekind key. Next it checks to see if that string matches an existing method in either our General or Specific classes. If it matches, it then calls that method and passes along the same JSON message blob (which happens to contain our vals) to the correct method for execution.

In this example the messagekind key was Change_switch. The parser recognized that Change_switch was a valid method for our parent() object, and then called that method and passed along the message JSON blob. If we take a look at the Change_switch() method we can see that it extracts the vals key from the JSON blob, then changes the constant CHOP’s value0 parameter to match the incoming val.

This kind of approach lets you separate out your experimental or job-specific methods from your tried and true methods, making it easier in the long run to move from job to job without having to crawl through your extensions to see what can be tossed or what needs to be kept. What’s better still is that this imposes minimal restrictions on how we work – I don’t need to call a separate extension, or create complex branching if-else trees to get the results I want. You’ll also see that in the MessageParser we have a system for managing elegant failure with our simple hasattr() check – this step ensures that we log that something went wrong, but don’t just throw an error. You’d probably want to also print the key for the method that wasn’t successfully called, but that’s up to you in terms of how you want to approach this challenge.

Next see if you can successfully format a message to call the Image_order() method with another panel execute.

What happens if you call a method that doesn’t exist? Don’t forget to check your text port for this one.

If you’re really getting stuck you might check the link to the repo below so you can see exactly how I’ve set this process up.

If you got this far, here are some other questions to ponder:

  • How would you use this in production?
  • What problems does this solve for you… does it solve any problems?
  • Are there other places you could apply this same idea in your projects?

At the end of the day this kind of methodology is really looking to help us stop writing the same code bits and bobs, and instead to figure out how to build soft modules for our code so we can work smarter not harder.

With any luck this helps you do just that.

Happy Programming.


Take a look at the sample Repo for this example on Github:
touchdesigner-reusable-code-segmentation-python

TouchDesigner | Switch Statements in Python

python-switch-statements.PNG

Hang onto your socks programmers, we’re about to dive deep. What are we up to here today? Well, we’re going to look into switch statement alternatives in Python (if you don’t know what a switch statement is don’t worry, we’ll cover that bit), how you might use that in a practical real-world situation, and why that’s even an idea worth considering. With that in mind let’s dig in and start to pull apart what switch statements are, and why you should care.

From 20,000 feet, switch-case statements are an approach to handling different situations by way of a look-up table rather than with a series of if-else statements. If you’re furrowing your brow, consider situations where you’ve encountered a complex if-else statement where one change breaks everything, and untangling it takes far longer than you might want. Also consider what happens if you want to extend that if-else ladder into something more complicated… maybe you want to call different functions or methods based on input conditions, maybe you need to control a remote machine and suddenly you’re scratching your head as you ponder how on earth you’re going to handle complex logic statements across a network. Maybe you’re just after a better code-segmentation solution. Or maybe you’ve run into a function so long you’re starting to lose cycles to long execution times. These are just a few of the situations you might find yourself in, and a switch statement might just be the right tool to help – except that there are no switch-case statements in Python.

What gives?!

While there aren’t any switch-case statements, we can use dictionary mappings to get to a similar result… a result so powerful we’re really in for a treat. Before we get there though, we need to look at the situation we’re trying to avoid.

So what exactly is that situation? Let’s consider a problem where we want to only call one function and then let that code block handle all of the various permutations of our actions. That might look like our worst case solution below.

Worst
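
Here’s a sketch of what that worst case might look like – four math operations all buried inside a single switcher() function:

def switcher(operation, val1, val2):
    # every operation lives inside this one function as an if-elif ladder
    if operation == 'add':
        result = val1 + val2
    elif operation == 'subtract':
        result = val1 - val2
    elif operation == 'multiply':
        result = val1 * val2
    elif operation == 'divide':
        result = val1 / val2
    else:
        result = None
    return result

print(switcher('add', 2, 3))        # 5
print(switcher('divide', 10, 2))    # 5.0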

To get started, what do we have above? We have a single function called switcher() that takes three arguments – the name of the function we want to call, and two values. In this example we have four different math operations, and we want to be able to access any of the four as well as pass in two values and get a result just by calling a single function. That doesn’t seem so hideous on the face of it, so why is this the worst approach?

This example probably isn’t so terrible, but what it does do is bury all the functional mathematical portions of our code inside of a single function. It means we can’t add and test a new element without possibly breaking our whole functional code block, we can only access these operations from within switcher(), and if we decide to add additional operations in the future our code block will just continue to accrue lines of code. It’s a naive approach (naive in the programming sense – as in the first brute force solution you might think of), but it doesn’t give us much room for modularity or growth that doesn’t also come with some unfortunate side effects.

Okay… fine… so what’s a good solution then?


Good
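
A sketch of that segmented approach might look something like this – each operation lives in its own function, and switcher() just routes between them with an if-else ladder:

def Add(val1, val2):
    return val1 + val2

def Subtract(val1, val2):
    return val1 - val2

def Multiply(val1, val2):
    return val1 * val2

def Divide(val1, val2):
    return val1 / val2

def switcher(operation, val1, val2):
    # the work is now segmented into functions we can test on their own,
    # but we're still walking an if-else ladder to find the right one
    if operation == 'add':
        result = Add(val1, val2)
    elif operation == 'subtract':
        result = Subtract(val1, val2)
    elif operation == 'multiply':
        result = Multiply(val1, val2)
    elif operation == 'divide':
        result = Divide(val1, val2)
    else:
        result = None
    return result

print(switcher('multiply', 2, 3))   # 6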

A good solution segments our functions into their own blocks. This allows us to develop functions outside of our switcher() function, call them independently, and have a little more flexible modularity. You might well be thinking that this seems like a LOT more lines… can we really say this is better?! Sure. The additional lines are worth it if we also get some more handles on what we’re doing. It also means we probably save some serious debugging time by being able to isolate where a problem is happening. In our worst case approach we’re stuck with a single function; if it breaks, none of our operations work… and if our logic got sufficiently complex we might be sifting through a whole heap of code before we can really track down what’s happening. Here at least there’s a better chance that a problem is going to be isolated to a single function block – that alone is a HUGE help.

All that said, we’re still not really getting to switch-case statements… we’re still stuck in if-else hell where we’ll have to evaluate our incoming string against potentially all of the possible options before we actually execute our actual code block. At four functions this isn’t so bad, but if we had hundreds we might really be kicking ourselves.

So how can we do better?


Better
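
Here’s a sketch of that dictionary based approach – the same segmented functions as before, with the if-else ladder swapped out for a look-up:

def Add(val1, val2): return val1 + val2
def Subtract(val1, val2): return val1 - val2
def Multiply(val1, val2): return val1 * val2
def Divide(val1, val2): return val1 / val2

def switcher(operation, val1, val2):
    # the dictionary is our stand-in switch statement - strings map
    # straight to function objects
    functions = {
        'add': Add,
        'subtract': Subtract,
        'multiply': Multiply,
        'mult': Multiply,     # a short-hand alias pointing at the same function
        'divide': Divide
    }

    # fetch the function on the fly, then call it with our values
    active_function = functions.get(operation)

    if active_function is None:
        return None
    return active_function(val1, val2)

print(switcher('mult', 4, 5))   # 20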

Better is to remember that the contents of a Python dictionary can be any data type – in fact they can even be function names, or Python objects. How does that help us? Well, it means we can look up what function we want to call on the fly, call it, and even pass in variables. In the example above our switcher() function holds a dictionary of all the possible functions at our disposal – when we call our switcher we pass in the name of the function along with the variables that will in turn get passed to the function. Above, our active_function variable becomes the function that’s fetched from our dictionary, which we in turn pass our incoming variables along to.

That’s great in a lot of ways, but especially in that it gets us away from long complicated if-else trees. We can also use this as a mechanism for handling short-hand names for our methods, or multiple assignments – we might want two different keys to access the same function (maybe “mult” and “multiply” both call the same function, for example).

So far this is far and away a better approach, so how might we make this better still?


Best
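
One sketch of that idea – vals becomes a list of arbitrary length, Add() and Subtract() loop through everything they’re handed, and Multiply() and Divide() are limited to two values for the sake of the example:

def Add(vals):
    result = 0
    for val in vals:
        result += val
    return result

def Subtract(vals):
    result = vals[0]
    for val in vals[1:]:
        result -= val
    return result

def Multiply(vals):
    # limited to two values for the sake of this example
    if len(vals) != 2:
        return None
    return vals[0] * vals[1]

def Divide(vals):
    # guard against a bad number of values or a divide by zero
    if len(vals) != 2 or vals[1] == 0:
        return None
    return vals[0] / vals[1]

def switcher(operation, vals):
    functions = {
        'add': Add,
        'subtract': Subtract,
        'multiply': Multiply,
        'divide': Divide
    }
    active_function = functions.get(operation)
    return active_function(vals) if active_function else None

print(switcher('add', [1, 2, 3, 4]))    # 10
print(switcher('multiply', [2, 3, 4]))  # None - too many values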

We might take this one step further and start to consider how we might address accepting an arbitrary number of vals. Above we have a simple way to tackle this – probably not what you’d end up with in production, but something that should hopefully get you thinking. Here the variable vals becomes a list that can hold any number of values. In the case of both our Add() and Subtract() functions we loop through all of the values – adding each val, or subtracting each val respectively. In the case of our Multiply() and Divide() functions we limit these operations to only two values for the sake of our example. What’s interesting here is that we can start to think about error handling based on the array of values that’s coming into our function.


The above is great, of course, but it’s really just the beginning of the puzzle. Where this really starts to become interesting is how you might think of integrating this approach in your python extensions.

Or if vals is a dictionary in its own right rather than a simple list.

Or if you can send a command like this over the network.

Or if you can start to think about how to build out blocks of code that are specific to a single job, and universal blocks that apply to all of your projects.

Next we’ll start to pull apart some of those very ideas and see where this concept really gets exciting and creates space for building tools that persist right alongside the tools that you have to build for a single job.

In the meantime, experiment with some Python style switch statements to see if you can get a handle on what’s happening here, and how you might take better advantage of this method.

Happy programming!


References

Looking for another take on this approach from a more pure Python perspective? Check out this post on Jaxenter.com.

TouchDesigner | Delay Scripts

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon or reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need some-something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:
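
A quick sketch of that tempting but troublesome approach:

import time

print('starting operation')

# this blocks TouchDesigner's main thread - nothing else can cook
# until the full second has elapsed
time.sleep(1)

print('oh, hello there')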

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

time.sleep

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method. There’s lots of interesting nuance to this method, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:
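
Something like this, assuming a default 60 fps timeline so that 60 frames works out to one second:

# queue up the print for one second from now without blocking the main thread
run("print('oh, hello there')", delayFrames=60)

print('this prints right away')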

If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?
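
Here’s a small sketch of passing an argument into a delayed script:

# anything passed after the script string shows up inside the delayed
# script as args[0], args[1], and so on
run("print('the delayed value was', args[0])", 'banana', delayFrames=60)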

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:
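
A sketch of that idea, assuming a constant CHOP called constant1 sitting next to our text DAT:

# stagger a series of delayed scripts - each one sets the constant CHOP
# to a new value one second after the last
for index in range(10):
    run("op('constant1').par.value0 = args[0]", index, delayFrames=60 * index)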

What you should see is that your constant CHOP increments every second:

for-loop-delay

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

Happy Programming.

 

TouchDesigner | Finding Dominant Color

Programming is a strange practice. It’s not uncommon that in order to make what’s really interesting, or what you promised the client, or what’s driving a part of your project you have to build another tool.

You want to detect motion, so you need to build out a means of comparing frames, and then determining where the most change has occurred. You want to make visuals that react to audio, but first you need to build out the process for finding meaningful patterns in the audio. And on and on and on.

And so it goes that I’ve been thinking about finding dominant color in an image. There are lots of ways to do this, and one approach is to use a technique called KMeans clustering. This approach isn’t without its faults, but it is interesting and relatively straightforward to implement. The catch is that it’s not fast enough for a realtime application – at least not if you’re using Python. So what can we do? Well, we can still use KMeans clustering, but we need to understand how to use multi-threading in python so we don’t block our main thread in TouchDesigner.

The project / tool / example below does just that – it’s a mechanism for finding dominant color in an image with an approach that uses a background thread for processing that request.


TouchDesigner Dominant Color

An approach for finding dominant color in an image using KMeans clustering with scikit learn and openCV. The approach here is built for realtime applications using TouchDesigner and python multi-threading.

TouchDesigner Version

099
Build 2018.22800

Python Dependencies

  • numpy
  • scipy
  • sklearn
  • cv2

Overview

base_dominant_color

A tool for finding Dominant Color with openCV.

Here we find an attempt at locating dominant colors from a source image with openCV and KMeans clustering. The large idea is to sample colors from a source image, build averages from clustered samples, and return a best estimation of dominant color. While this works well, it’s not perfect, and in this class you’ll find a number of helper methods to resolve some of the shortcomings of this process.

Procedurally, you’ll find that the process starts by saving out a small resolution version of the sampled file. This is then handed over to openCV for some preliminary analysis before being again handed over to sklearn (scikit-learn) for the KMeans portion of the process. While there is a built-in function for KMeans sorting in openCV, the sklearn method is a little less cumbersome and has better reference documentation for building functionality. After the clustering process each resulting sample is processed to find its luminance. Luminance values outside of the set bounds are discarded before assembling a final array of pixel values to be used.

It’s worth noting that this method relies on a number of additional Python libraries. These can all be pip installed, and the recommended build approach here would be to use Python35. In the developer’s experience this produces the fewest errors and issues – and boy did the developer stumble along the way here.

Another consideration you’ll find below is that this extension supports a multi-threaded approach to finding results.

Parameters

Dominant Color

  • Image Process Status – (string) The thread process status.
  • Temp Image Cache – (folder) A directory location for a temp image file.
  • Source Image – (TouchDesigner TOP) A TOP (still) used for color analysis.
  • Clusters – (int) The number of requested clusters.
  • Luminance Bounds – (int, tuple) Luminance bounds, min and max, expressed as values between 0 and 1.
  • Clusters within Bounds – (int) The number of clusters within the Luminance Bounds.
  • Smooth Ramp – (toggle) Texture interpolation on output image.
  • Ramp Width – (int) Number of pixels in the output Ramp.
  • Output Image – (menu) A drop-down menu for selecting a ramp or only the returned clusters.
  • Find Colors – (pulse) Issues the command to find dominant colors.

Python

  • Python Externals – (path) A path to the directory with python external libraries.
  • Check Imports – (pulse) A pulse button to check if sklearn was correctly imported.

Using this Module

To use this module there are a few essential elements to keep in mind.

Getting Python in Order

If you haven’t worked with external Python Libraries inside of Touch yet, please take a moment to familiarize yourself with the process. You can read more about it on the Derivative Wiki – Importing Modules

Before you can run this module you’ll need to ensure that your Python environment is correctly set up. I’d recommend that you install Python 3.5+ as that matches the Python installation in Touch. In building out this tool I ran into some wobbly pieces that largely centered around installing sklearn using Python 3.6 – so take it from someone who’s already run into some issues: you’ll encounter the fewest challenges / configuration issues if you start there. Sklearn (the primary external library used by this module) requires both scipy and numpy – if you have pip installed, the process is straightforward. From a command prompt you can run each of these commands consecutively:

pip install numpy
pip install scipy
pip install sklearn

Once you’ve installed the libraries above, you can confirm that they’re available in python by invoking python in your command prompt, and then importing the libraries one by one. Testing to make sure you’ve correctly installed your libraries in a Python only environment first, will help ensure that any debugging you need to do in TouchDesigner is more straightforward.

python-externals-confirmation

Working with TouchDesigner

Python | Importing Modules

If you haven’t imported external libraries in TouchDesigner before there’s an additional step you’ll need to take care of – adding your external site-packages path to TouchDesigner. You can do this with a regular text DAT and by modifying the example below:

import sys
mypath = "C:/Python35/Lib/site-packages/mymodule"
if mypath not in sys.path:
    sys.path.append(mypath)

Copy and paste the above into your text DAT, and modify mypath to be a string that points to your Python externals site-packages directory.

python-page-dominant-color

If that sounds a little out of your depth, you can use a helper feature on the Dominant Color module. On the Python page, navigate to your Python Externals directory. It should likely be a path like: C:\Program Files\Python35\Lib\site-packages

Your path may be different, especially if when you installed Python you didn’t use the checkbox to install for all users. After navigating to your externals directory, pulse the Check Imports parameter. If you don’t see a pop-up window then sklearn was successfully imported. If you do see a pop-up window then something is not quite right, and you’ll need to do a bit of leg-work to get your Python pieces in order before you can use the module.

Using the Dominant Color

With all of your Python elements in order, you’re ready to start using this module.

dominant-colors-parameters

The process for finding dominant color uses a KMeans clustering algorithm for grouping similar values. Luckily we don’t need to know all of the statistics that go into that mechanism in order to take full advantage of the approach, but it is important to know that we need to be mindful of a few elements. For this to work efficiently, we’ll need to save our image out to an external file, so you need to make sure that this module has a cache for saving temporary images. The process will verify that the directory you’ve pointed it to exists before saving out a file, and will create a directory if one doesn’t yet exist. That’s mostly sanity checking to ensure that you don’t have to lose time trying to figure out why your file isn’t saving.

Given that this process happens in another thread, it’s also important to consider that this functions based on a still image, not on a moving one. While it would be slick to have a fast operation for finding KMeans clusters in video, that’s not what this tool does. Instead the assumption here is that you’re using a single frame of reference content, not video. You point this module to a target source by dropping a TOP onto the Source Image parameter.

Next you’ll need to define the number of clusters you want to look for. Here the term clusters is akin to the target number of dominant colors you’re looking to find – the top 3, the top 10, the top 20? It’s up to you, but keep in mind that more clusters take longer to produce a result. You’re also likely to want to bound your results with some luminance measure – for example, you probably don’t want colors that are too dark, or too light. The luminance bounds parameters are for luminance measures that are normalized as 0 to 1. Clusters within Bounds, then, tells you how many clusters were returned from the process that fell within your specified regions. This is, essentially, a way to know how many swatches work within the brightness ranges you’ve set.

The output ramp from this process can be interpolated and smooth, or Nearest Pixel swatches. You can also choose to output a ramp that’s any length. You might, for example, want a gradient that’s spread over 100 or 1000 pixels rather than just the discrete samples. You can set the number of output pixels with the ramp width parameter.

On the other side of that equation, you might just want only the samples that came out of the process. In the Output Image parameter, if you choose clusters from the drop-down menu you’ll get only the valid samples that fell within your specified luminance bounds.

Finally, to run the operation pulse Find Colors. As an operational note, this process would normally block / lock up TouchDesigner. To avoid that unsavory circumstance, this module runs the KMeans clustering process in another thread. It’s slightly slower than if it ran in the main thread, but the benefit is that Touch will continue running. You’ll notice that the Image Process Status parameter displays Processing while the separate thread is running. Once the result has been returned you’ll see Ready displayed in the parameter.

Download from Github

https://github.com/raganmd/touchdesigner-dominant-color


Notes from Other Programmers

I don’t use CONDA, but for those of you that do, you can install sklearn with the following command:
conda install scikit-learn

TouchDesigner | Multi-Threading

Multi-threading is no easy task to wrap your head around, and there are plenty of pit-falls when it comes to using it in Touch. Below we have three simple examples of seeing how that works. It’s not the most thrilling topic in the world, until it’s something that you need – desperately – then it might just save your project.

You can grab the whole repo here: touchdesigner-multi-threading

Three Examples

text_pyThreads
A simple example of creating a function that runs in another thread. As noted in the forum reference post, it’s important to consider what operations can potentially create race conditions in Touch. The suggested consideration here is to avoid the use of any operations that will interact with a Touch object. Looking more closely at this example we can see that we’re using only Pythonic approaches to editing an external file. In our first example we use a simple text file approach to ensure that we have the simplest possible exploration of the concept.
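
A minimal sketch of that idea might look like the block below – note that the worker only touches a plain text file, never a Touch object. The file path here is just a stand-in.

import threading

def write_log(file_path, message):
    # plain pythonic file access only - no Touch objects in this thread,
    # which keeps us clear of race conditions with the main thread
    with open(file_path, 'a') as log_file:
        log_file.write(message + '\n')
    return

# start the worker and let the main thread keep cooking
my_thread = threading.Thread(
    target=write_log,
    args=('example_log.txt', 'hello from another thread'))
my_thread.start()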


text_pyThreads_openCV
In the openCV example we look at how one might consider taking the approach of working with image processing through the openCV library. While this example only creates a random red circle, with a little imagination we might see how this would be useful for doing an external image processing pass – finding image features, identifying colors, etc. Running this as a for loop makes it easy to see how this process can block TouchDesigner’s main thread, and why it would be useful to have a means of executing this function in a way that minimizes impact on the running project.


text_pyThreads_queue
While the example pyThreads_openCV is an excellent start, that doesn’t help us if we need to know when an outside operation has completed. The use of Queue helps resolve this issue. A Queue object can be placed into storage and act as an interchange between threads. It’s marked as a thread safe operation, and in our case is used to help track when our worker function is “Processing” or “Ready”. You’ll notice that the complication here is that we have a less than ideal need to use an execute DAT to run a frame start script to check for our completed status each frame. This is less than ideal, but a reasonable solution for an otherwise blocking operation. Notice that the execute DAT will disable the operation of the frame start script after it’s “Ready”. This kind of approach helps to ensure that the execute DAT only runs the frame start script when necessary and not every frame.
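
A rough sketch of that pattern is below – a Queue parked in storage acts as the hand-off point, a worker thread pushes its status and result into it, and a frame start script polls it. The names here are assumptions rather than the repo’s exact code.

import threading
import queue

def long_running_task(status_queue):
    # the heavy lifting happens here without touching any Touch objects
    result = sum(i * i for i in range(1000000))
    # Queue is thread safe, so this is how we hand the result back
    status_queue.put(('Ready', result))
    return

# kick off the work - this part would live in a text DAT we run once
status_queue = queue.Queue()
parent().store('status_queue', status_queue)
parent().store('task_status', 'Processing')
threading.Thread(target=long_running_task, args=(status_queue,)).start()

# in an execute DAT, a frame start callback might poll for completion
def onFrameStart(frame):
    q = parent().fetch('status_queue', None)
    if q is not None and not q.empty():
        status, result = q.get()
        parent().store('task_status', status)
        print('worker finished with', result)
    return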

Forum Example

Forum Example 1 python_threading_sample.toe

An example from the forum used to sort out the essential pieces of working with multiple threads, queues, and how to approach this issue without crashing Touch. Many thanks to the original authors for their work helping to shed some light on this murky part of working in TouchDesigner.

presets and cue building | TouchDesigner 099

I’ve been thinking about state machines and cuing systems lately. Specifically, that there aren’t many good resources that I’ve found so far that talk new artist programmers through how to think about or wrestle with these ideas. Like many Touch programmers I’ve tried lots of different ways of thinking about this problem, and just today I saw someone post on the Facebook help group.

from the facebook help group:

Hi, i’m working arround Matthews AME394 Simple VJ-Setup Tutorial. No Questions, but how can i do nearly the same with different blending times between the moduls. I tried a lot with getting different values out of a table DAT into the length parameter of a timerCHOP. But cannot figur out the right steps to get my goal. Any helps? this i need in a theater situation with different scenes to blend one after another with scenebuttons or only one button and a countCHOP or something else.

This challenge is so very familiar, and while there are lots of ways to solve this problem, sometimes the hardest part is having an idea of where to start. Today what I want to look at is just that – where do we start? This isn’t the best solution, or the only solution – it’s just a starting point solution. It’s a pass at the most basic parts of this equation to help us get started in thinking about what the real problems are, how we want to tackle them, and how we can go about exposing the real issues we need to solve for.

So where do we start? In this simple little state machine we’re going to start with a table full of states. For the sake of simplicity I’m going to keep this as simple as possible… though it might well uncover the more interesting and more challenging pieces that lie ahead.

I’m going to start with the idea that I’ve got a piece of content (an image or a movie) that I want to play. I want to apply some post process effects (in this case just black level and image inversion changes), and I want to have different transition times between these fixed states. Here the only transition I’m worrying about is one that goes from one chain of operations to another. I’m also going to stick with images for now.

So what do we need in our network to get started?!

We’re going to borrow from an idea that often gets used in these kinds of challenges, and we’re going to think of this as operating with two decks – an A deck, and a B deck. Our deck is essentially a chain of operators that allow for all of the possibilities that we might want to explore in our application. In this case I’m only working with a level TOP, but you can imagine that we might use all sorts of operations to make for interesting composition choices.

Alright, so we’re going to lay out a quick easy deck:

moviefilein > level > fit 

adeck.PNG

Next we’re going to repeat this whole chain, then connect both of our fit TOPs to a cross TOP:

ab_deck.PNG

If you’re scratching your head at this fit TOP in line, that’s okay. For us, the fit TOP is going to act as our safety. This makes sure that no matter what the resolution of the incoming file might be, both decks always have matching proportions. We’d probably want a little more thought in how this would work for an event or a show, but for now this is enough to help ensure that we don’t experience any unexpected resolution shifts during our transitions.

Next we’re going to add a simple tweening system to our network to control how we blend between states. In this case I’m going to use a constant, a speed, and a null. I need to make sure that my speed is set to clamp, and that my min and max values are 0 and 1 respectively. Right now I only have two different decks, so I don’t want to go any higher than 1 or any lower than 0.

Now we’re cooking with propane! So where do we go next?

some simple cues

movie_file       trans_time   blk_lvl   invert
Banana.tif       1            0         0
Butterfly1.tif   2            0.12      1
Butterfly5.tif   5            0.2       0
Mettler.2.jpg    10           0.05      0
OilDrums.jpg     0.5          0.25      1
Starfish.tif     1            0         1

In this simple examination of this challenge I’m going to use a table to store our cues. In a larger system I’d probably use python storage (which is really a dictionary), but for the sake of keeping it simple let’s start with just a table. Our simple cues are organized above, and we can put all of those values into a table DAT. You’ll notice that for now I’m only worrying about file name and not path – all of these files come from the same directory so we can treat them mostly the same way. We’ll also notice that I’m thinking of my transition times in terms of seconds. All of this can, of course, be much more complicated. The trick is to sort out a very simple example first to identify pressure points and challenges before you dig yourself into a hole.

Okay, let’s add a table DAT to our network and copy all of our cues over.

table_dat.PNG

Now that we have all of our pieces organized it is time to think through the logic of how we make this all work. For now let’s use a button, a count CHOP, and a CHOP Execute DAT. We need to make sure our button is set to be momentary, and we also need to make sure our count CHOP is set to loop – starting at 1 and ending at 6. That matches our row indices from our table DAT.

move-through-cues.PNG

This is great Matt, but why python?

Well, we could do a lot of this with a complex set of CHOPs and selects but these kinds of states tend to be better handled, logically at least, through written code. Python will let us explicitly describe exactly what happens, and in what order those things happen. That’s no small thing, and while it might be a little rocky to wrap your head around using Python in Touch at first, it’s well worth it in the end. So what do we write in our CHOP Execute?

a little bit of logic | python
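
Here’s a sketch of what that CHOP Execute might contain. All of the operator names here are assumptions, so match them to your own network:

# a sketch of the CHOP execute contents - operator names are assumptions
cues = op('table_cues')                 # the table DAT holding our cues
blend = op('null_blend')                # the null CHOP driving the cross TOP's index
speed_constant = op('constant_speed')   # the constant CHOP feeding our speed CHOP

deck_a_movie = op('moviefilein_a')
deck_a_level = op('level_a')
deck_b_movie = op('moviefilein_b')
deck_b_level = op('level_b')

def onValueChange(channel, sampleIndex, val, prev):
    # the count CHOP hands us the row index of the next cue
    cue = int(val)

    movie_file = cues[cue, 'movie_file'].val
    trans_time = float(cues[cue, 'trans_time'].val)
    blk_lvl = float(cues[cue, 'blk_lvl'].val)
    invert = float(cues[cue, 'invert'].val)

    if blend['chan1'].eval() > 0.5:
        # we're seeing more of deck B, so load the next cue into deck A
        deck_a_movie.par.file = movie_file
        deck_a_level.par.blacklevel = blk_lvl
        deck_a_level.par.invert = invert
        # a negative value drives the blend back toward 0 over trans_time seconds
        speed_constant.par.value0 = (1 / trans_time) * -1
    else:
        # we're seeing more of deck A, so load the next cue into deck B
        deck_b_movie.par.file = movie_file
        deck_b_level.par.blacklevel = blk_lvl
        deck_b_level.par.invert = invert
        speed_constant.par.value0 = 1 / trans_time
    return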

Uhhhhhhh… wait. What?

Okay. First off we just define a set of variables that we’re going to use. This makes our code a little easier to write, and easier to read. Next all of the action really happens in our onValueChange function.

We’re going to do all of this in a little logical if statement. If this thing, do that thing… in all the other cases, do something else.

First we check to see what our deck position is… which means that we check to see which output we’re currently seeing more of. If our cross TOP’s index is greater than 0.5 we know that we’re closer to 1, which also means we’re seeing more of deck B than deck A. That means that we want to make changes in deck A before we start transitioning. First we change our file, change all of our settings, then finally set a value in our constant CHOP. But why 1 / that value? And why multiplied by -1?

A default network runs at 60 fps. A speed CHOP fed by a constant with a value of 1 will rise by a count of 1 over 60 frames. Said another way, an input value of 1 to our speed in a default network will increase the count by one every second. If we divide that number in half we go twice as slow: a value of 0.5 will increase the count by 1 every 2 seconds. 1 / our table value lets us think in seconds rather than in fractions while we’re writing our cues. Yeah, but what about being multiplied by -1?! Well, if we want to get back to the 0 index in our cross TOP we need a negative value feeding our speed CHOP. Multiplying by -1 here means that we don’t need to think about the order of cues in our table DAT, and instead our bits of Python will keep us on the rails. Our else statement does all of the same things, but to our B deck. It also uses a positive value to feed our speed CHOP – since we need an increasing value.

There you have it, a simple cuing system.

simple cues.gif

This is great Matt, but what if I want to tween settings on that level TOP? Or any other set of complicated things?! Well, I’d bet that at this point you’ve got enough to get you started. You might use a column to indicate if you’re transitioning to a totally new cue or just to new values in the same source image. You could also choose to put your parameter values in CHOPs instead so you could manipulate them with other CHOPs before exporting them to your decks.

What if I don’t want linear transitions?! A speed is just a linear ramp! That’s okay. You might choose to use a lookup CHOP and a more complicated curve. You could even make several types of curves with animation COMPs and switch between them. Or you could use a lag CHOP to change your attack and release slopes. Or you could use a trigger CHOP, or a filter CHOP. There are lots of ways to shape curves with math, now it’s up to you to figure out exactly what you’re after.

Happy programming!

pull it apart

Pull apart the example in 088
Pull apart the example in 099

textport for performance | TouchDesigner

consoleText

I love a good challenge, and today on the TouchDesigner slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea about how you might make that happen. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.
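
Here’s a sketch of the shape of that extension. The operator names and the exact bookkeeping are assumptions, but the important pieces are all here: a table DAT for logging, StartRow and EndRow members that a select DAT references, and one method calling another.

class TextportEXT:
    '''A sketch of a display logger for performance.'''

    def __init__(self, my_op):
        self.My_op = my_op
        self.Log = op('table_log')   # the table DAT that collects entries

        # a select DAT references these members (parent().StartRow and
        # parent().EndRow) to decide which rows stay visible
        self.StartRow = 0
        self.EndRow = 10
        return

    def Display(self, message):
        # every new entry lands in its own row
        self.Log.appendRow([str(message)])

        # once we spill past our visible window, slide both bounds along
        # together so the newest row always stays on screen
        if self.Log.numRows > self.EndRow:
            self.StartRow += 1
            self.EndRow += 1
        return

    def Print_and_display(self, message):
        # calling another method from inside our own class
        print(message)
        self.Display(message)
        return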

Okay, so what exactly are we doing here?!

The big picture is that we want a way to be able to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to just a text DAT, allows us to do some simple clean-up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have we can update both members by one… moving them along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.

Want to download the tox and take it for a test drive? You can find the source code here.

scriptDAT | Tips and Tricks | TouchDesigner

If you spend lots of time setting up parameters in your UI elements and want a faster way to use a set of presets to populate some parameters, then the Script DAT might be just what you’re looking for.

Let’s look at a fast simple example that might have you re-thinking how to quickly set up pars in a project. Keep in mind that this won’t work in every situation, but it might work for an awful lot of them and in ways that you might not have expected.

To get started let us imagine that we have a simple set-up where we have a UI element and a display element. We want a fast way to quickly update their parameters. For the sake of this example let’s imagine that we do not need any fancy scaling or changes on the fly. This is going to be used on a set of displays where we know exactly how they’re going to display. We might think about using storage to set and pull parameters, but you might be hesitant to use too much python for those bits and bobs. Okay, so exports it is… they’re a little more cumbersome to set up, but they are much faster – fine.

Sigh.

I guess we need to start setting up an export table, or a constant CHOP, and dragging and dropping all over creation. Before you do that though, take a closer look at the majesty of the Script DAT:

The Script DAT runs a script each time the DAT cooks and can build/modify the output table based on the optional input tables. The Script DAT is created with a docked (attached) DAT that contains three Python methods: cook, onPulse, and setupParameters. The cook method is run each time the Script DAT cooks. The setupParameters method is run whenever the Setup Parameter button on the Script page is pressed. The onPulse method is run whenever a custom pulse parameter is pushed.

Maybe we can use the Script DAT to make an export table for us with just a little bit of python.

We can start by putting a few things into storage. Let’s create a new dictionary but follow some simple rules:

  • The keys in this dictionary are going to be operator names or paths
  • Each operator is itself a key for another dictionary
  • The keys of that dictionary must be proper parameter names
  • The values associated with these keys need to be legal entries for parameters

Okay, with these rules in mind let’s see what we can do. Open up a new project; in project1 let’s create two new containers:

  • container_ui
  • continer_led_display

Add a new text DAT and create a simple dictionary to put into storage, and let’s follow the rules we described above:
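
Something like the block below follows the rules above – the parameter names here are just the containers’ width and height, so swap in whatever parameters you actually care about:

# operator names as keys, parameter name / value pairs as the contents
attr = {
    'container_ui': {
        'w': 400,
        'h': 1080
    },
    'continer_led_display': {
        'w': 1920,
        'h': 1080
    }
}

# park the dictionary in storage on our parent COMP
parent().store('attr', attr)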

Alright, so far so good. Now let’s add a Script DAT.

We’re going to use our Script DAT to look at our stored vals and create an export table on the fly for whatever is in the storage dictionary “attr” – easy.

Let’s edit our Script DAT to have the following contents:
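
A sketch of what those contents might look like is below – the cook method rebuilds a simple path / parameter / value export table from whatever is sitting in storage. Double check the DAT Export article on the wiki for the exact table layout your build expects:

def cook(scriptOp):
    scriptOp.clear()

    # pull our stored dictionary of operators and parameter values
    attr = parent().fetch('attr', {})

    # one row per export: operator path, parameter name, value
    scriptOp.appendRow(['path', 'parameter', 'value'])

    for op_path, pars in attr.items():
        for par_name, par_val in pars.items():
            scriptOp.appendRow([op_path, par_name, par_val])
    return

def setupParameters(scriptOp):
    return

def onPulse(par):
    return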

Finally, let’s turn on the green export flag at the bottom of our Script DAT:

script_dat.PNG

And just like that we’ve set up an auto-export system. Now every time we update our dictionary and run our script to put the contents into storage, we’ll automatically push those changes to an export table.

Looking for an example to pull apart? Head over to GitHub and download a simple example to look over.

Maintaining Perspective with Multiple Cameras | TouchDesigner

File this away under “interesting theoretical concepts that I’ll never use… or will I?”

At some point while making realtime generative art for massive installations you may find that you’re beyond the capabilities of traditional realtime rendering in Touch. Let’s say, for example, that you need to render 12 HD outputs for a 3 x 4 array of screens – a resolution of 7680 x 3240 can certainly be done with a single render TOP, but delivering that final texture is a little more tricky.

I’m well aware that there are a number of possible solutions to this problem, but before you find yourself composing that email to me about how to hack a way to a solution… what if it wasn’t 12 outputs, what if it was 120? 200? What if every output was 4K? The answer we’re really after here is how to draw a scene with consistent perspective across multiple machines… because at some point you’ll have to use multiple machines. So, what do we do?

Forget what we do… what does that even look like? I’m still so confused.

Okay, so let’s first look at some examples of what it looks like as reference.

In this example we can see one large canvas that spans multiple screens. This is great – it’s huge and beautiful. This example shows a large desktop, which is also great… but what if we’re after some real-time rendering? This is a great illustration of the problem we might encounter. What if these displays were all 1920 x 1080? It’s a 7 x 4 array, so that’s going to be a definite challenge for a complex scene on a single machine. At this point we probably can’t realistically produce a single pixel-to-pixel texture for this array on a single machine. Instead we’d have to have a system of distributed rendering machines. Okay, that’s pretty straightforward and we can do some hip flat rendering that’s all orthographic, no problem. What if we want perspective? If you want perspective in your real-time rendering you need a way to conceptualize what the entire “screen” is, and then how to selectively render just a portion of that larger scene.

Huh?

Consider the beautiful work of Refik Anadol. I can’t speak to exactly what technique is being used here, but it’s a good illustration of the same challenge. How can you maintain the illusion of perspective if you need to render your generative art on multiple machines? That’s the real question we’re trying to answer… and now we can look at some ideas to help us better understand that challenge.

The process and methodology described below aim to solve that problem. For this example I’m going to work at a scale that’s unrealistic… but will allow those without a commercial license to play along from home. If you have a commercial license feel free to turn up the resolution, as long as you keep the mathematics involved in mind.

First things first, let’s build out a simple proof of concept that will make sure we understand this problem more completely.

Let’s imagine that we have a large composition that we need to cut up (for the sake of rendering) into 4 smaller slices. That might look something like this:

multi_perspective

Remember, this is just a proof of concept so we’re going to start with a very easy implementation first, before we start to dig into the more complex questions. An important lesson to consider when it comes to programming is to start by reducing the problem to its basic elements, then when you have a foundational understanding of the issue start to scale up – don’t worry, we’ll get there, we just have to start small.

Okay, so we’ve got our 2 x 2 array that we want to render. Let’s see how we can set up some cameras to render just one of those squares a piece, but all from the same point of view.

Wait! Why do they need the same point of view? We’re after the same point of view so we can maintain perspective. It’d be easy enough to use four different cameras that were translated into positions to only see their section of the larger quad, but the results from lighting and perspective calculations wouldn’t match. You can use this multi-camera transformation technique if you’re doing orthographic rendering with emissive lighting, but not if you want to maintain perspective and use non-emissive lighting. It’s okay if you don’t believe me – I didn’t believe me either, and I had to set it up and test it a bunch of times before I really understood what’s happening.

What’s that going to look like? Well, for the perspective of a single camera that can see the whole scene – the effect we want to recreate eventually – we might see something like this:

full_camera_view.PNG

From the vantage point of a single camera, we want to be able to zero in on just a single quadrant in our scene, something like this:

single_quad.PNG

Eventually, we want to reassemble the view of four cameras to look like our original single camera view.

Matt, I still don’t get it. That’s okay. Keep reading, and if by the time we get done with this example it’s still not a useful technique you can stop reading. If, however, you want a means to do perspective based illusions across multiple machines for massive installations, keep reading to the end.

We’re going to set up our test by using a part of our camera COMP that you may not have used before. Specifically, when it comes to the View page, we’re going to use the Viewing Angle Method called “Focal Length and Aperture.” I wish I could tell you exactly what this means to TouchDesigner – spoiler, I can’t – what I can help you understand is how these values relate to one another in order to achieve our particular illusion.

We’re going to start by setting up a simple example. Add a geo to your scene, and replace the torus inside with a grid. Set your sizex parameter to be 16/9. If you want to follow along step for step, change your grid to be a polygon, and add a noise SOP with a period of 0.02 and an amplitude of 0.5. Connect that to a facet SOP with unique points and computed normals. Connect your chain of operators to a null SOP and make sure that the display and render flags are turned on for your null. You should have a simple network that looks like this:

SOPs.png

Outside of your geo add two moviefilein TOPs. In the first you can use the supplied quad arrangement in the assets portion of the git repository that accompanies this post. It’s called multi_perspective.png. In your second moviefilein TOP select the FiledGuide.tif. Composite these two together with a composite TOP, and change the operand method to Add. Connect this to a null TOP, and finally assign this null to a phong material as the color map. When you’re done you should have something that looks like this:

texture_grid.png

Whew. Alright, now we can finally get to the interesting part. Let’s add a light and a camera to our network.

Now, in our camera comp let’s set the tz parameter to 10 units:

cam_tz.PNG

Next let’s move over to the View page of the camera COMP. Here we’re going to leave the projection as perspective, but we’re going to change the viewing angle method to “Focal Length and Aperture.” Next we’ll change our Focal Length to 10 and our aperture to 16/9:

full_view_cam

This is our camera that can see the entire scene. Let’s add a render TOP to our network so we can see what our camera sees:

full_view_cam_view

If we were to bypass our noise SOP the view of this piece of geometry would fit exactly within our view-port. From here forward it’s going to get a little interesting. We want to maintain the perspective calculations from this vantage point, but we want only a single quadrant at a time to fill our view-port. It’s almost like zooming in and cropping to only a sub-section of our view. How do we do that?

Let’s copy our first camera, and then make a few adjustments. I’m going to call my new camera cam_single_p1. Next we’re going to leave our Focal Length and Aperture settings just as they were. We are, however, going to change our window x/y parameters to be:

winx -0.25
winy 0.25

We’re also going to change our window size to be 0.5.

cam_single_p1

Let’s render that camera and see what we get:

single_cam_p1_rendered.PNG

Woah!! That works just like we wanted! Thinking through our other cameras, we can quickly see that the combination of our window size and offsets acts as a zoom and translation mechanism. Try adding 3 more cameras with the following winx/winy settings (there’s a scripted version of this step just after the list, if you’d rather not set them by hand):

Cam2

  • winx 0.25
  • winy 0.25

Cam3

  • winx -0.25
  • winy -0.25

Cam4

  • winx 0.25
  • winy -0.25
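If you’d rather not dial those window values in by hand, a few lines of Python in a Text DAT will do it. Consider this a sketch based on my network: it assumes you’ve already made the three extra camera copies and named them as below, and that the View page parameter names are winx, winy, and winsize (worth double checking against the parameter dialog in your build):

# window offsets for the four quadrants as ( winx, winy )
quadrants = {
    'cam_single_p1' : ( -0.25,  0.25 ),   # top left
    'cam_single_p2' : (  0.25,  0.25 ),   # top right
    'cam_single_p3' : ( -0.25, -0.25 ),   # bottom left
    'cam_single_p4' : (  0.25, -0.25 ),   # bottom right
}

for name, ( winx, winy ) in quadrants.items():
    cam = op( name )            # cameras live in the same network as this DAT
    cam.par.winx = winx
    cam.par.winy = winy
    cam.par.winsize = 0.5       # 1 / the number of windows across the scene

Run that once and all four cameras look into their own quadrant while sharing the same transform and lens settings.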

If you render these cameras you should see something like this:

all_cams.PNG

Okay. That’s all pretty slick, but how do all of these parameters relate?

What does it mean?!

The meat and potatoes of this technique is to define a view-port’s aspect ratio, the number of window-sized slices that fit across the full scene, and then to specify where the offsets sit that mark the center of a given window.

Huh?!

Let’s think through our simple example. We made our rendered quad a 1.7778 x 1 rectangle – a rectangle with a 16:9 aspect ratio. This was the same value we used for our aperture. We also set the distance of our camera from our geo to be 10 units, which was the same value we used to define our focal length. The window size is a ratio of 1 over the number of vertical sections… in our case we had two vertical sections, so 1 over 2 is 0.5. Our x/y window offsets represent the center of our windows in our sections. That’s a little harder to wrap our heads around, and a better way to think about it is as the UV coordinates of the center of a given window into our scene. Let’s break that out a little more (there’s also a short code sketch after the list below if that helps):

Window Size

  • 1 / number of vertical windows that can fit within our viewport

Focal Length

  • the distance of our camera to our scene window

Aperture

  • the aspect ratio of our scene window

Win X

  • ( ( U * 2 ) - 1 ) / 2

Win Y

  • -( ( ( V * 2 ) - 1 ) / 2 )

Here U and V are the normalized coordinates of the center of a given window, measured from the top-left of the full scene (the same way we’ll measure pixel positions in Photoshop later on); the minus sign on Win Y flips that top-down measurement into the camera’s y-up window space.
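If it’s easier to see those relationships as code, here’s the same math as a small plain-Python sketch (nothing TouchDesigner-specific, and no network names assumed). Focal length is left out since it’s simply the measured camera-to-scene distance:

def window_pars( u, v, windows_across, scene_w, scene_h ):
    # convert the normalized center ( u, v ) of a window into
    # the camera COMP's window parameters
    winsize  = 1.0 / windows_across           # Window Size
    aperture = scene_w / scene_h              # Aperture
    winx     = ( ( u * 2 ) - 1 ) / 2          # Win X
    winy     = -( ( ( v * 2 ) - 1 ) / 2 )     # Win Y, since v is measured top-down
    return winx, winy, winsize, aperture

# the top left quadrant of our simple 2 x 2 test
print( window_pars( 0.25, 0.25, 2, 16, 9 ) )  # (-0.25, 0.25, 0.5, 1.777...)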

To really dig into the power of this technique we need to push beyond just a quad-based set up; we need a more abstract configuration of windows in our scene. Now that we understand the mechanics of our set up, let’s look at some arbitrary configuration of windows that might be spread across a large number of machines. What if our window arrangement looked something like the below:

multi_cam_perspective_output_map

What’s going on here?

Well, let’s imagine that you’re working on a large format LED installation where you need to slice up your scene into uniform HD chunks that then feed an LED controller. In some cases those regions overlap even though only a single LED screen is outputting the content. Relationally, however, you still need to be able to control and designate the regions for the screens. Outputs 5 and 7 are a good example of this. The final shape of that screen is going to be the combined outline of the two displays, but the video feed to the LED controllers needs to be consistent HD cutouts. All of the dimensions in this example could probably be managed by doing a single rendering of the full scene and then cropping to a given region – but at some point your ability to render the full scene and then use TOPs to crop out pixels is going to fail. This example has 7 outputs, but it’s not hard to imagine a project that has 20 or more – for reference, the need to understand this technique came out of working on an installation with 68 discrete outputs.

Okay, okay, okay… fine. So what are all these targets and numbers about?

We’ll remember in our first proof of concept example that we were able to take advantage of a simple 0.25 offset – that makes sense, right?

If our geometry is placed with its bottom left corner at the origin (0,0), then the center of our first slice is going to be at (0.25, 0.25):

2016-12-13 13.35.19.jpg

That’s great Matt, but that doesn’t make any sense… following this logic, our slices would have been more like:

  • Slice1 ( 0.25, 0.75 )
  • Slice2 ( 0.75, 0.75 )
  • Slice3 ( 0.25, 0.25 )
  • Slice4 ( 0.75, 0.25 )

So what gives?!

Well, we have to remember that we set up our geometry to have its center at the origin. If we take this into account, what we see is more like:

2016-12-13 13.36.58.jpg

Looking at this, it should make sense why we used the translation coordinates that we did. The other interesting thing to notice in this view of our geometry is the following.

If the full extent of our scene represents the bounds of a complete window, then we can begin to think about our winx and winy coords as being more akin to a UV – a normalized coordinate on our full window. We need to do some additional math to compensate for the translation of our origin, but that’s pretty straightforward.
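To make that concrete, the additional math is just a half-unit shift. Slice1’s center in a bottom-left-origin, y-up UV space is (0.25, 0.75); subtract 0.5 from each coordinate and you land in our origin-centered window space. Algebraically, ( ( U * 2 ) - 1 ) / 2 is exactly U - 0.5:

# slice1's center in a y-up UV space with a bottom left origin
u, v = 0.25, 0.75

winx = u - 0.5    # -0.25
winy = v - 0.5    #  0.25 ( no flip needed when v is measured bottom-up )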

If you’re still scratching your head, that’s okay. Let’s look at how to take our new window map and create a programmatic means of slicing up that full scene.

Thinking back to our proof of concept test we need a few things in place in order for this all to work as we expect:

  • We need the transforms for all our cameras to be the same – these are on the xform page: tx, ty, tz.
  • We also need several pieces for the view page:
    • Focal Length
    • Aperture
    • Winx
    • Winy
    • Winsize

Given that our cameras aren’t going to be doing any moving once we set up our calibration, we can safely use python expressions in our parameter fields. In terms of optimization, if an op does a lot of cooking it’s often better to use exports instead of expressions – expressions end up getting compiled on demand and evaluated every time the op cooks. Since we want fixed cameras looking into a moving world, we can use expressions for our cameras – it’s also going to be less of a hassle to set up, which is great.

For starters let’s add our reference template into our scene. We can start by dragging in our texture, connecting a Null TOP, and then assigning this to the color of a constant Material.

reference_plate.PNG

Next let’s add a geometry to our scene. We can replace the torus inside with a rectangle that has our source window’s aspect ratio, which in this case is 16:9 or 1.778 : 1.

scene_window.PNG

Next I’m going to use a Null COMP to hold the transforms of our camera system. I’m going to set this to have a tz value of 5, and otherwise leave this alone.

Null_pars.PNG

Let’s also add an object CHOP to help us with determining the distance between the null and geometry – to be clear, we don’t need to do this with an object CHOP, we could do this with a math CHOP, or with Python.

In the Object CHOP I’m going to set our null as the target object, geo1 as the reference object, and I’m going to set this to compute distance.

object_chop.png
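If you’d rather skip the object CHOP entirely, the Python version of this measurement is a one-liner. This sketch assumes the null is named null1 and that the null and geo1 are only offset along z, which is the case in this network:

# distance from our point of view ( the null ) to the scene geometry,
# assuming the two only differ along z
abs( op( 'null1' ).par.tz.eval() - op( 'geo1' ).par.tz.eval() )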

I’m also going to add some tables that hold some reference information for us. I want to know the width and height of our scene, as well as the width and height of a given cut-out.

ref_values.PNG

We’re also going to need a table with all of our pixel space coordinates:

output_coords.PNG

Now that we have all our primary ingredients ready, we can build out a system to convert our pixel space coordinates into a set of winx and winy translation values.

Let’s start by looking at this process in general. We can first start with our coords in pixel space:

Pixel Space

output x y
output1 640 360
output2 205 130
output3 237 518
output4 525 130
output5 752 593
output6 1013 188
output7 1012 483

These values represent the actual center of these windows in the full pixel scene. I started by making this template in Photoshop, and then measured the location of the center of each given viewport.

Once we have these values, we need to convert them into normalized values. In other words, how do these pixel coordinates translate to UV coordinates? This is a pretty straightforward calculation – the pixel value divided by its respective dimension for the full scene:

  • outputx / full_scene_x
  • outputy / full_scene_y

We can set up a quick eval DAT to do all of this for us:

convert_to_uv

The two python expressions that drive this in the table2 DAT are:

me.inputCell / op( 'table_scene' )[ 'scene_x', 1 ]
me.inputCell / op( 'table_scene' )[ 'scene_y', 1 ]

Our results from this can be found in the table below.

UV Space

output x y
output1 0.5 0.5
output2 0.16015625 0.18055555
output3 0.18515625 0.71944444
output4 0.41015625 0.18055555
output5 0.5875 0.82361111
output6 0.79140625 0.26111111
output7 0.790625 0.67083333

Now that we have the UV coords that represent the center of each window, we need to convert these values into a scale that takes into account that the center of our geometry is located at the origin. For our x values we multiply our value by 2, subtract 1, and divide by 2. For our y values we use the same operation and multiply by negative 1. We can use another eval DAT to do just this for us.

convert_to_winxy.PNG

The two python expressions that drive this in table3 are:

( ( me.inputCell * 2 ) - 1 ) / 2
( ( ( me.inputCell * 2 ) - 1 ) / 2 ) * -1

Now we have taken our original pixel coords and converted them into our winx and winy transforms.

Converted into winx / winy transforms

output x y
output1 0.0 -0.0
output2 -0.33984375 0.31944444
output3 -0.31484375 -0.21944444
output4 -0.08984375 0.319444444
output5 0.08750000 -0.323611111
output6 0.29140625 0.238888888
output7 0.290625 -0.170833333

With these values set, we just need to make sure that we compute our window size, aperture, and focal_length. Looking back to the above, calculating these remaining values should be a snap.

Window Size

Window size is 1 over the number of windows that can fit into our full scene. This also means that our total scene’s width should be n times the width of a given window.

  • In our case a single window (measured in Photoshop) is 320 pixels, and our full scene is 1280 pixels.
  • 1280 / 320 = 4
  • 1 / 4 = 0.25
  • Window Size = 0.25

Focal Length

Focal Length is the distance between our point of view (in our case the null), and our full scene. We’ve used an object CHOP to compute this distance.

Aperture

Our aperture is the aspect ratio of our full scene.

  • 1280/720 = 1.778
  • Aperture = 1.778

I’m using an eval DAT to do the computation and organization of all of this:

cam_attr_calculations.PNG

The python for these looks like this:

1 / ( op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_render_attr' )[ 'width', 1 ] )
op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_scene' )[ 'scene_y', 1 ]
op( 'object1' )[ 'dist' ]

Python in touch is name dependent, so I’d recommend looking at this example network if you’re trying to replicate this effect.
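If the chain of eval DATs feels hard to follow in the abstract, here’s the whole conversion collapsed into one plain-Python sketch. This isn’t what’s in the example network; it’s just the same math in one place, using the numbers from the tables above:

# full scene and cut-out dimensions, measured in Photoshop
scene_w, scene_h = 1280, 720
window_w = 320

winsize  = 1 / ( scene_w / window_w )     # 0.25
aperture = scene_w / scene_h              # 1.778
# focal length is the null to geo distance, i.e. the object CHOP's dist channel

# pixel space centers of each output window
outputs = {
    'output1' : ( 640, 360 ),
    'output2' : ( 205, 130 ),
    'output3' : ( 237, 518 ),
    'output4' : ( 525, 130 ),
    'output5' : ( 752, 593 ),
    'output6' : ( 1013, 188 ),
    'output7' : ( 1012, 483 ),
}

for name, ( px, py ) in outputs.items():
    u, v = px / scene_w, py / scene_h     # pixel space to UV space
    winx = ( ( u * 2 ) - 1 ) / 2          # re-center around the origin
    winy = -( ( ( v * 2 ) - 1 ) / 2 )     # flip, since pixel y runs top-down
    print( name, round( winx, 8 ), round( winy, 8 ) )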

Now we can set up our cameras to correctly crop out a given viewport of the entire scene. I’m going to rely on the digits of a camera to correspond to the output, so in my case output1 and camera1 should be the same thing. I’m also going to use the translation values of our null to set the location of the camera.

All of that said, our expressions for our camera should look like this:

camera_expressions.PNG

Again, all of our expressions are name dependent, so I’d recommend looking over how I’ve organized this in the sample file in order to make sure you know exactly what’s referencing what.
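For the curious, the expressions in my camera parameters look roughly like the list below. Treat this as a sketch rather than gospel: the op names used here (table3 for the winx/winy table, a table_cam_attr eval DAT for the shared camera attributes, and null1 for the transform null) are stand-ins that need to match whatever you’ve actually named things in your network.

tx        op( 'null1' ).par.tx
ty        op( 'null1' ).par.ty
tz        op( 'null1' ).par.tz
focal     op( 'object1' )[ 'dist' ]
aperture  op( 'table_cam_attr' )[ 'aperture', 1 ]
winsize   op( 'table_cam_attr' )[ 'winsize', 1 ]
winx      op( 'table3' )[ 'output' + str( me.digits ), 'x' ]
winy      op( 'table3' )[ 'output' + str( me.digits ), 'y' ]

The me.digits trick is what lets camera1 automatically pull the row for output1, camera2 the row for output2, and so on.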

Now we can copy-paste our camera 6 more times – I’m using digits in many of these expressions to match the camera digit to the output digit. Looking over my results, it looks like we’re right on the money.

view_ports.PNG

To really appreciate what’s happening here, I’m going to turn off the rendering on our calibration plate, and turn on a sample piece of geometry.

view_ports_real_geo.PNG

In geo2 you can see the entire geometry, and in each of our viewports we can see that we’re only rendering the region of our geometry that falls into single view.

“That’s a mess… I don’t get it,” you might well be saying. That’s okay. Let’s rearrange our TOPs to mirror more closely what’s in our template:

view_ports_real_geo_rearranged.PNG

Hopefully, with this arrangement it’s easier to see how these various pieces work together.

At this point we have a means of rendering a complex scene across multiple machines (or GPUs, if you’re able to use affinity on Quadro cards) while maintaining perspective. That unlocks a whole new avenue for realtime rendering, breaking you away from the limitations of single-machine configurations and from a reliance on baked media for distributed realtime rendering.

Download this example from github


It’s always lovely to get an email from Derivative headquarters.

In this case I just got a lovely ping from Malcolm to let me know that the crop parameters on the render TOP can be used for the same functions described above.

Let’s look back at our first example to understand how that might work. In this case I’m going to use the same initial camera that we set up – our cam_full_scene COMP. I’m going to reuse this one since I know that it’s correctly configured to capture the entire width and height of our reference plate. Next I’m going to add a render TOP, and on the Crop page I’m going to change my crop right and crop bottom pars to 0.5. For the sake of understanding the concept I’m going to leave this in a fractional unit space, but we could just as easily specify these values as absolute pixel measures. The result looks like this:

render_crop1.PNG

Next up, rather than using another render TOP I’m going to use a render pass – there are lots of good reasons to use the render pass, but one of the most important considerations here is that it’s a very efficient rendering operation. We do, however, need to make a few other adjustments. We need to target our render2 as the render TOP on the Render Pass page, and we also need to toggle on clear to camera color and clear depth buffer:

render_crop2.PNG

On our render pass we’ll need to use the following crop parameters:

render_crop2_2.PNG

As we add additional render pass TOPs we need to target the previous render pass – a given render or render pass TOP can only have a single render pass assigned to it.

Our crop parameters should look like this (and there’s a scripted version of this chain just after the list below):

renderpass3

  • crop left 0.0
  • crop right 0.5
  • crop bottom 0.0
  • crop top 0.5

renderpass4

  • crop left 0.5
  • crop right 1.0
  • crop bottom 0.0
  • crop top 0.5
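If you’re setting up more than a handful of these passes, it can be worth scripting the crop values rather than typing them in. The sketch below leans on a couple of assumptions: the op names (render2 plus renderpass2 through renderpass4, where renderpass2 is the pass whose values only appear in the screenshot above), and the crop parameter names, which I believe are cropleft, cropright, cropbottom, and croptop, but which are worth verifying against your own Render and Render Pass TOPs:

# quadrant crops as ( left, right, bottom, top ) in fractional units
crops = {
    'render2'     : ( 0.0, 0.5, 0.5, 1.0 ),   # top left
    'renderpass2' : ( 0.5, 1.0, 0.5, 1.0 ),   # top right
    'renderpass3' : ( 0.0, 0.5, 0.0, 0.5 ),   # bottom left
    'renderpass4' : ( 0.5, 1.0, 0.0, 0.5 ),   # bottom right
}

for name, ( left, right, bottom, top ) in crops.items():
    t = op( name )
    t.par.cropleft   = left
    t.par.cropright  = right
    t.par.cropbottom = bottom
    t.par.croptop    = top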

All in all when we’re done we should have something that looks like this:

render_crop_full.PNG

Here the result is the same; the methodology going into it all is just a little different. Like all things TouchDesigner, there are multiple means of solving the same challenge, and the “right” one ultimately comes down to the choice that’s best for your particular installation.

Happy Programming everyone.

* The git repository and support files have been updated to reflect this additional material.