Category Archives: Software

TouchDesigner | Switch Statements in Python

python-switch-statements.PNG

Hang onto your socks programmers, we’re about to dive deep. What are we up to here today? Well, we’re going to look into switch statement alternatives in Python (if you don’t know what a switch statement is don’t worry we’ll cover that bit), how you might use that in a practical real-world situation, and why that’s even an idea worth considering. With that in mind let’s dig in and start to pull apart what Switch Statements are, and why you should care.

From 20,000 feet, switch-case statements are an approach to handling different situations by way of a look-up table rather than with a series of if-else statements. If you’re furrowing your brow, consider situations when you may have encountered complex if-else statements where one change breaks everything… and leaves you debugging for far longer than you might want. Also consider what happens if you want to extend that if-else ladder into something more complicated… maybe you want to call different functions or methods based on input conditions, maybe you need to control a remote machine and suddenly you’re scratching your head as you ponder how on earth you’re going to handle complex logic statements across a network. Maybe you’re just after a better code-segmentation solution. Or maybe you’ve run into a function so long you’re starting to lose cycles to long execution times. These are just a few of the situations you might find yourself in, and a switch statement might just be the right tool to help – except that there are no switch-case statements in Python.

What gives?!

While there aren’t any switch-case statements, we can use dictionary mappings to get to a similar result… a result so powerful we’re really in for a treat. Before we get there though, we need to look at the situation we’re trying to avoid.

So what exactly is that situation? Let’s consider a problem where we want to only call one function and then let that code block handle all of the various permutations of our actions. That might look like our worst case solution below.

Worst
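A minimal sketch of that worst-case approach might look something like this – the switcher() name and the four math operations come from the write-up below, while the exact body is an illustration:

def switcher(operation, val1, val2):
    # every math operation is buried inside this one function
    if operation == 'add':
        result = val1 + val2
    elif operation == 'subtract':
        result = val1 - val2
    elif operation == 'multiply':
        result = val1 * val2
    elif operation == 'divide':
        result = val1 / val2
    else:
        result = None
    return result

print(switcher('add', 1, 2))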

To get started, what do we have above? We have a single function called switcher() that takes three arguments – the name of the function we want to call, and two values. In this example we have four different math operations, and we want to be able to access any of the four as well as pass in two values and get a result just by calling a single function. That doesn’t seem so hideous on the face of it, so why is this the worst approach?

This example probably isn’t so terrible, but what it does do is bury all the functional mathematical portions of our code inside of a single function. It means we can’t add and test a new element without possibly breaking our whole functional code block, we can only access these operations from within switcher(), and if we decide to add additional operations in the future our code block will just continue to accrue lines of code. It’s a naive approach (naive in the programming sense – as in the first brute force solution you might think of), and it doesn’t give us much room for modularity or growth that doesn’t also come with some unfortunate side effects.

Okay… fine… so what’s a good solution then?


Good
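Sketched in the same spirit, a good version breaks each operation out into its own function, with switcher() still dispatching via if-else:

def Add(val1, val2):
    return val1 + val2

def Subtract(val1, val2):
    return val1 - val2

def Multiply(val1, val2):
    return val1 * val2

def Divide(val1, val2):
    return val1 / val2

def switcher(operation, val1, val2):
    # still an if-else ladder, but each functional block now
    # lives outside the switcher and can be called directly
    if operation == 'add':
        return Add(val1, val2)
    elif operation == 'subtract':
        return Subtract(val1, val2)
    elif operation == 'multiply':
        return Multiply(val1, val2)
    elif operation == 'divide':
        return Divide(val1, val2)
    else:
        return None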

A good solution segments our functions into their own blocks. This allows us to develop functions outside of our switcher() function, call them independently, and have a little more flexible modularity. You might well be thinking that this seems like a LOT more lines… can we really say this is better?! Sure. The additional lines are worth it if we also get some more handles on what we’re doing. It also means we probably save some serious debugging time by being able to isolate where a problem is happening. In our worst case approach we’re stuck with a single function that, if it breaks, takes all of our operations down with it… and if our logic got sufficiently complex we might be sifting through a whole heap of code before we can really track down what’s happening. Here at least there’s a better chance that a problem is going to be isolated to a single function block – that alone is a HUGE help.

All that said, we’re still not really getting to switch-case statements… we’re still stuck in if-else hell where we’ll have to evaluate our incoming string against potentially all of the possible options before we actually execute our code block. At four functions this isn’t so bad, but if we had hundreds we might really be kicking ourselves.

So how can we do better?


Better
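A sketch of that dictionary-based dispatch, reusing the four functions from above:

def switcher(operation, val1, val2):
    # the dictionary maps strings to function objects - including
    # short-hand keys that point at the same function
    operations = {
        'add': Add,
        'subtract': Subtract,
        'multiply': Multiply,
        'mult': Multiply,
        'divide': Divide,
    }
    active_function = operations.get(operation)
    if active_function is not None:
        return active_function(val1, val2)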

Better is to remember that the contents of a python dictionary can be any data type – in fact they can even be function names, or Python objects. How does that help us? Well, it means we can look up what function we want to call on the fly, call it, and even pass in variables. In the example above our switcher() function holds a dictionary of all the possible functions at our disposal – when we call our switcher we pass in the name of the function with the variables that will in turn get passed to the function. Above our active_function variable becomes the function that’s fetched from our dictionary, which we in turn pass our incoming variables along to.

That’s great in a lot of ways, but especially in that it gets us away from long complicated if-else trees. We can also use this as a mechanism for handling short-hand names for our methods, or multiple assignments – we might want two different keys to access the same function (maybe “mult” and “Multiply” both call the same function, for example).

So far this is far and away a better approach, so how might we make this better still?


Best
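One hedged sketch of that idea, assuming vals arrives as a list:

def Add(vals):
    # loop through all of the values, adding each one
    result = 0
    for val in vals:
        result += val
    return result

def Subtract(vals):
    # start from the first value and subtract each of the rest
    result = vals[0]
    for val in vals[1:]:
        result -= val
    return result

def Multiply(vals):
    # limited to two values for the sake of the example
    return vals[0] * vals[1] if len(vals) == 2 else None

def Divide(vals):
    return vals[0] / vals[1] if len(vals) == 2 else None

def switcher(operation, vals):
    operations = {
        'add': Add,
        'subtract': Subtract,
        'multiply': Multiply,
        'divide': Divide,
    }
    active_function = operations.get(operation)
    if active_function is not None:
        return active_function(vals)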

We might take this one step further and start to consider how we might address accepting an arbitrary number of vals. Above we have a simple way to tackle this – probably not what you’d end up with in production, but something that should hopefully get you thinking. Here the variable vals becomes a list that can be any number of values. In the case of both our Add() and Subtract() functions we loop through all of the values – adding each val, or subtracting each val respectively. In the case of our Multiply() and Divide() functions we limit these operations to only two values for the sake of our example. What’s interesting here is that we can start to think about error handling based on the array of values that’s coming into our function.


The above is great, of course, but it’s really just the beginning of the puzzle. Where this really starts to become interesting is how you might think of integrating this approach in your python extensions.

Or if vals is a dictionary in its own right rather than a simple list.

Or if you can send a command like this over the network.

Or if you can start to think about how to build out blocks of code that are specific to a single job, and universal blocks that apply to all of your projects.

Next we’ll start to pull apart some of those very ideas and see where this concept really gets exciting and creates spaces for building tools that persist right alongside the tools that you have to build for a single job.

In the meantime, experiment with some Python style switch statements to see if you can get a handle on what’s happening here, and how you might take better advantage of this method.

Happy programming!


References

Looking for another take on this approach from a more pure Python perspective? Check out this post on Jaxenter.com.

TouchDesigner | Delay Scripts

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon or reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:
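A sketch of that tempting (and trouble-making) approach:

import time

print('starting')
time.sleep(1)
print('oh, hello there')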

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

time.sleep

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method. There’s lots of interesting nuance to this method, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:
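A sketch of that non-blocking version – at a default 60 fps, 60 frames works out to roughly one second:

run("print('oh, hello there')", delayFrames=60)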

If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?
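A sketch of how that works – any positional arguments handed to run() show up inside the delayed script as args:

# the string runs later, so args[0] stands in for the
# value we hand to run() right now
run("print('my delayed value is', args[0])", 'TouchDesigner', delayFrames=60)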

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:
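A sketch of that loop, assuming a constant CHOP named constant1 – each pass schedules its run one second further into the future:

for i in range(10):
    run("op('constant1').par.value0 = args[0]", i, delayFrames=60 * (i + 1))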

What you should see is that your constant CHOP increments every second:

for-loop-delay

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

Happy Programming.


TouchDesigner | Finding Dominant Color

Programming is a strange practice. It’s not uncommon that in order to make what’s really interesting, or what you promised the client, or what’s driving a part of your project you have to build another tool.

You want to detect motion, so you need to build out a means of comparing frames, and then determining where the most change has occurred. You want to make visuals that react to audio, but first you need to build out the process for finding meaningful patterns in the audio. And on and on and on.

And so it goes that I’ve been thinking about finding dominant color in an image. There are lots of ways to do this, and one approach is to use a technique called KMeans clustering. This approach isn’t without its faults, but it is interesting and relatively straightforward to implement. The catch is that it’s not fast enough for a realtime application – at least not if you’re using Python. So what can we do? Well, we can still use KMeans clustering, but we need to understand how to use multi-threading in python so we don’t block our main thread in TouchDesigner.
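A rough sketch of that threading pattern – find_dominant_color() here is just a stand-in for the clustering work, and the results dictionary gets polled from the main thread:

import threading

results = {'status': 'Ready', 'colors': None}

def find_dominant_color(image_path, clusters):
    # stand-in for the openCV + sklearn work described below
    return []

def worker(image_path, clusters):
    # runs off the main thread - never touch TouchDesigner
    # operators from in here, only plain Python data
    results['colors'] = find_dominant_color(image_path, clusters)
    results['status'] = 'Ready'

def start_analysis(image_path, clusters=5):
    results['status'] = 'Processing'
    thread = threading.Thread(target=worker, args=(image_path, clusters))
    thread.start()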

The project / tool / example below does just that – it’s a mechanism for finding dominant color in an image with an approach that uses a background thread for processing that request.


TouchDesigner Dominant Color

An approach for finding dominant color in an image using KMeans clustering with scikit learn and openCV. The approach here is built for realtime applications using TouchDesigner and python multi-threading.

TouchDesigner Version

099
Build 2018.22800

Python Dependencies

  • numpy
  • scipy
  • sklearn
  • cv2

Overview

base_dominant_color

A tool for finding Dominant Color with openCV.

Here we find an attempt at locating dominant colors from a source image with openCV and KMeans clustering. The large idea is to sample colors from a source image, build averages from clustered samples, and return a best estimation of dominant color. While this works well, it’s not perfect, and in this class you’ll find a number of helper methods to resolve some of the shortcomings of this process.

Procedurally, you’ll find that the process starts by saving out a small resolution version of the sampled file. This is then handed over to openCV for some preliminary analysis before being again handed over to sklearn (sci-kit learn) for the KMeans portion of the process. While there is a built-in function for KMeans sorting in openCV, the sklearn method is a little less cumbersome and has better reference documentation for building functionality. After the clustering process each resulting sample is processed to find its luminance. Luminance values outside of the set bounds are discarded before assembling a final array of pixel values to be used.
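A minimal sketch of that pipeline – the function name, default bounds, and the luminance coefficients here are assumptions rather than the module’s exact internals:

import cv2
from sklearn.cluster import KMeans

def find_dominant_color(image_path, n_clusters=5, bounds=(0.1, 0.9)):
    # read the small temp image and flatten it to a list of RGB pixels
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    pixels = image.reshape(-1, 3)

    # KMeans groups similar colors - the cluster centers become
    # our candidate dominant colors, normalized 0-1
    centers = KMeans(n_clusters=n_clusters).fit(pixels).cluster_centers_ / 255.0

    # discard any cluster whose luminance falls outside the bounds
    in_bounds = []
    for color in centers:
        luminance = 0.2126 * color[0] + 0.7152 * color[1] + 0.0722 * color[2]
        if bounds[0] <= luminance <= bounds[1]:
            in_bounds.append(color)
    return in_bounds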

It’s worth noting that this method relies on a number of additional python libraries. These can all be pip installed, and the recommended build approach here would be to use Python35. In the developer’s experience this produces the least number of errors and issues – and boy did the developer stumble along the way here.

Other considerations you’ll find below are that this extension supports a multi-threaded approach to finding results.

Parameters

Dominant Color

  • Image Process Status – (string) The thread process status.
  • Temp Image Cache – (folder) A directory location for a temp image file.
  • Source Image – (TouchDesigner TOP) A TOP (still) used for color analysis.
  • Clusters – (int) The number of requested clusters.
  • Luminance Bounds – (int, tuple) Luminance bounds, min and max expressed as values between 0 and 1.
  • Clusters within Bounds – (int) The number of clusters within the Luminance Bounds.
  • Smooth Ramp – (toggle) Texture interpolation on output image.
  • Ramp Width – (int) Number of pixels in the output Ramp.
  • Output Image – (menu) A drop-down menu for selecting a ramp or only the returned clusters.
  • Find Colors – (pulse) Issues the command to find dominant colors.

Python

  • Python Externals – (path) A path to the directory with python external libraries.
  • Check Imports – (pulse) A pulse button to check if sklearn was correctly imported.

Using this Module

To use this module there are a few essential elements to keep in mind.

Getting Python in Order

If you haven’t worked with external Python Libraries inside of Touch yet, please take a moment to familiarize yourself with the process. You can read more about it on the Derivative Wiki – Importing Modules

Before you can run this module you’ll need to ensure that your Python environment is correctly set up. I’d recommend that you install Python 3.5+ as that matches the Python installation in Touch. In building out this tool I ran into some wobbly pieces that largely centered around installing sklearn using Python 3.6 – so take it from someone who’s already run into some issues, you’ll encounter the fewest challenges / configuration issues if you start there. Sklearn (the primary external library used by this module) requires both scipy and numpy – if you have pip installed the process is straightforward. From a command prompt you can run each of these commands consecutively:

pip install numpy
pip install scipy
pip install sklearn

Once you’ve installed the libraries above, you can confirm that they’re available in python by invoking python in your command prompt, and then importing the libraries one by one. Testing to make sure you’ve correctly installed your libraries in a Python-only environment first will help ensure that any debugging you need to do in TouchDesigner is more straightforward.

python-externals-confirmation

Working with TouchDesigner

Python | Importing Modules

If you haven’t imported external libraries in TouchDesigner before there’s an additional step you’ll need to take care of – adding your external site-packages path to TouchDesigner. You can do this with a regular text DAT and by modifying the example below:

import sys
mypath = "C:/Python35/Lib/site-packages/mymodule"
if mypath not in sys.path:
    sys.path.append(mypath)

Copy and paste the above into your text DAT, and modify mypath to be a string that points to your Python externals site-packages directory.

python-page-dominant-color

If that sounds a little out of your depth, you can use a helper feature on the Dominant Color module. On the Python page, navigate to your Python Externals directory. It should likely be a path like: C:\Program Files\Python35\Lib\site-packages

Your path may be different, especially if when you installed Python you didn’t use the checkbox to install for all users. After navigating to your externals directory, pulse the Check Imports parameter. If you don’t see a pop-up window then sklearn was successfully imported. If you do see a pop-up window then something is not quite right, and you’ll need to do a bit of leg-work to get your Python pieces in order before you can use the module.

Using the Dominant Color

With all of your Python elements in order, you’re ready to start using this module.

dominant-colors-parameters

The process for finding dominant color uses a KMeans clustering algorithm for grouping similar values. Luckily we don’t need to know all of the statistics that goes into that mechanism in order to take full advantage of the approach, but it is important to know that we need to be mindful of a few elements. For this to work efficiently, we’ll need to save our image out to an external file, so you need to make sure that this module has a cache for saving temporary images. The process will verify that the directory you’ve pointed it to exists before saving out a file, and will create a directory if one doesn’t yet exist. That’s mostly sanity checking to ensure that you don’t have to lose time trying to figure out why your file isn’t saving.

Given that this process happens in another thread, it’s also important to consider that this functions based on a still image, not on a moving one. While it would be slick to have a fast operation for finding KMeans clusters in video, that’s not what this tool does. Instead the assumption here is that you’re using a single frame of reference content, not video. You point this module to a target source by dropping a TOP onto the Source Image parameter.

Next you’ll need to define the number of clusters you want to look for. Here the term clusters is akin to the target number of dominant colors you’re looking to find – the top 3, the top 10, the top 20? It’s up to you, but keep in mind that more clusters take longer to produce a result. You’re also likely to want to bound your results with some luminance measure – for example, you probably don’t want colors that are too dark, or too light. The luminance bounds parameters are for luminance measures that are normalized as 0 to 1. Clusters within bounds, then, tells you how many clusters were returned from the process that fell within your specified regions. This is, essentially, a way to know how many swatches work within the brightness ranges you’ve set.

The output ramp from this process can be interpolated and smooth, or Nearest Pixel swatches. You can also choose to output a ramp that’s any length. You might, for example, want a gradient that’s spread over 100 or 1000 pixels rather than just the discrete samples. You can set the number of output pixels with the ramp width parameter.

On the other side of that equation, you might just want only the samples that came out of the process. In the Output Image parameter, if you choose clusters from the drop-down menu you’ll get only the valid samples that fell within your specified luminance bounds.

Finally, to run the operation pulse Find Colors. As an operational note, this process would normally block / lock-up TouchDesigner. To avoid that unsavory circumstance, this module runs the KMeans clustering process in another thread. It’s slightly slower than if it ran in the main thread, but the benefit is that Touch will continue running. You’ll notice that the Image Process Status parameter displays Processing while the separate thread is running. Once the result has been returned you’ll see Ready displayed in the parameter.

Download from Github

https://github.com/raganmd/touchdesigner-dominant-color

Notes from Other Programmers

I don’t use CONDA, but for those of you that do, you can install sklearn with the following command:
conda install scikit-learn

TouchDesigner | Process Management

A look at how you can launch another process from TouchDesigner with Python, and how you can kill that process.

Grab all the files from the repo here.

Overview

At some point you’ll need to split up the work of a single project into multiple processes. This happens for lots of reasons – maybe you want to break your control interface out from your output elements, or maybe you want to start up another tool you’ve built – you name it, there are lots of reasons you might want to launch another process, and if you haven’t found a reason to yet… chances are you will soon.

The good news is that we can do this with a little bit of python. We need to import a few extra libraries, and we need to do a little leg work – but once we get a handle on those things we have a straightforward process on our hands.

Getting Started

First things first, start by downloading or cloning the whole repo. We’ll start by opening the process-management.toe file. You might imagine that this is the toe file that you’re launching processes from, or you might think of this as your main control toe file. You’ll notice that there’s also a toe file called other-app.toe. This is the file we’re going to launch from within TouchDesigner. At this point feel free to open up that file – you’ll see that it starts in perform mode and says that it’s some other process. Perfect. You should also notice that it says “my role is,” but nothing else. Don’t worry, it’s this way on purpose.

Process-management.toe

In this toe file you’ll see three buttons:

  • Launch Process
  • Quit Process
  • Quit Process ID None

Launch Process

This button will run the script in text_start_process.

So, what’s happening here? First we need to import a few other libraries that we’re going to use – os and subprocess. From there we need to identify the application we’re going to use, and the file we’re going to launch. Said another way, we want to know what program we’re going to open our toe file with. You’ll see that we’re doing something a little tricksy here. In Touch, the app class has a member called binFolder – this tells us the location of the Touch binary files, which happens to include our executable file. Rather than hard coding a path to our binary we can instead use the path provided by Touch – this has lots of advantages and should mean that your code is less likely to break from machine to machine.

Similarly, you’ll see that we locate our other-process.toe by looking at our project folder (where process-management.toe is saved) and locating our other-process.toe file relative to our main project file.

So far so good. You should also see that we’re setting an environment variable with os.environ. This is an interesting place where we can actually set variables for a Touch process at start. Why do this? Well, you may find that you have a single toe file that you want to configure differently for any number of reasons. If you’re using a single toe file configuration, you might want to launch your main file to default as a controller, another instance of the same file in an output configuration, and maybe another instance of the same app to handle some other process. If that’s not interesting to you, you can comment out that line – but it might at least be worth thinking about before you add that pound sign.

Next we use a subprocess.Popen() call to start our process – by providing the application and the file as arguments in a list. We can also grab our process ID (we’ll use that later) while we’re here.

Finally we’ll build a little dictionary of our attributes and put that all in storage. I’m using a dictionary in this example since you might find that you need to launch multiple instances, and having a nice way to keep them separate is handy.
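Pulled together, a sketch of that start script might look like this – the executable name and the ROLE value are assumptions, so check them against your own install and the repo:

import os
import subprocess

# find the TouchDesigner executable via the binary folder Touch reports
touch_exe = '{}/TouchDesigner099.exe'.format(app.binFolder)

# locate the other toe file relative to this project file
other_toe = '{}/other-process.toe'.format(project.folder)

# set an environment variable the new process can read at start-up
os.environ['ROLE'] = 'render1'

# start the new process, keeping the subprocess object and its ID
process = subprocess.Popen([touch_exe, other_toe])

# a small dictionary makes it easy to track multiple instances
process_dict = {'process': process, 'pid': process.pid}
parent().store('processes', process_dict)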

Okay. To see this work, let’s make that button viewer active and click it – tada! At this point you should see another TouchDesigner process launch and this time around the name that was entered for our ROLE environment variable shows up in our second touch process: “some other process, my role is render1”

Good to know is that if you try to save your file now you’ll get an error. That’s because we’ve put a subprocess object into storage and it can’t persist between closing and opening. Your file will be saved, but our little dictionary in storage will be lost.

Quit Process

This little button kills our other-app.toe instance.

Much simpler than our last script, this one first grabs our dictionary that’s in storage, grabs the subprocess object, and then kills it. Next we unstore all of the bits in storage so we can save our toe file without any warning messages.
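A sketch of that quit script, assuming the storage key used above:

# fetch the stored dictionary, kill the process, then clean up storage
process_dict = parent().fetch('processes')
process_dict['process'].kill()
parent().unstore('processes')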

Quit Process ID

Okay – so what happens if you want to just kill a process by its ID, not by storing the whole subprocess object? You can do that.

In this case we can use the os module to issue a kill call with our process id. If we look at the os documentation we’ll see that we need a pid and a sig – which is why we’re also importing signal.
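A sketch of that approach, again assuming the pid was stashed in storage:

import os
import signal

# kill the process by its ID rather than through the subprocess object
process_dict = parent().fetch('processes')
os.kill(process_dict['pid'], signal.SIGTERM)
parent().unstore('processes')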

Take Aways

This may or may not be useful in your current workflow, but it’s handy to know that there are ways to launch and quit another toe file. Better yet, this same idea doesn’t have to be limited to use with Touch. You might use this to launch or control any other application your heart desires.

TouchDesigner | Deferred Lighting – Cone Lights

With a start on point lights, one of the next questions you might ask is “what about cone lights?” Well, it just so happens that there’s a way to approach a deferred pipeline for cone lights just like with point lights. This example still has a bit to go with a few lingering mis-behaviors, but it is a start for those interested in looking at complex lighting solutions for their realtime scenes.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.


TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

This approach is very similar to point lights, with the additional challenge of needing to think about lights as directional. We’ll see that the first and last stages of this process are consistent with our Point Light example, but in the middle we need to make some changes. We can again get started with color buffers.

Color Buffers

color buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

  • position – renderselect_postition
  • normals – renderselect_normal
  • color – renderselect_color
  • uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

light attributes

Here we’ll begin to see a divergence from our previous approach.

We are still going to compute and pack data for the position, color, and falloff for our point lights like in our previous example. The difference now is that we also need to compute a look-at position for each of our lights. In addition to our falloff data we’ll need to also consider the cone angle and delta of our lights. For the time being cone angle is working, but cone delta is broken – pardon my learning in public here.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our null SOP null_lightpos. We convert this to CHOP data and use the attributes from this null (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene. In this case we’re using a null since we want to position the look-at points at some other position than our lights themselves. Notice that our circle has one transform SOP to describe light position, and another transform SOP to describe look-at position. In the next stage we’ll use our null_light_pos CHOP and our null_light_lookat CHOP for the lighting calculations – we’ll also end up using the results of our object CHOP null_cone_rot to be able to describe the rotation of our lights when rendering them as instances.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

  • position – null_light_pos
  • color – null_light_color
  • falloff – null_light_falloff
  • light cone – null_light_cone

This means that sample 0 from each of these four CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

The additional light cone attribute here is used to describe the radius of the cone and the degree of softness at the edges (again pardon the fact that this isn’t yet working).

Combining Buffers

combining buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

If you look at the final pieces of our for loop you’ll find that much of this process is borrowed from the example Malcolm wrote (Thanks Malcolm!). This starting point serves as a baseline to help us get started from the position of how other lights are handled in Touch.

Representing Lights

representing lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of if our calculations and data packing is working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Our divergence here is that rather than using spheres, we’re instead using cones to represent our lights. In a future iteration the width of the cone base should scale along with our cone angle, but for now let’s celebrate the fact that we have a way to see where our lights are coming from. You’ll notice that the rotate attributes generated from the object CHOP are used to describe the rotation of the instances. Ultimately, we probably don’t need these representations, but they sure are handy when we’re trying to get a sense of what’s happening inside of our shader.

Post Processing for Final Output

post process

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces and take off the sharp edges we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

final product

TouchDesigner | Deferred Lighting – Point Lights

A bit ago I wanted to get a handle on how one might approach real time rendering with LOTS of lights. The typical openGL pipeline has some limitations here, but there’s a lot of interesting potential with Deferred Lighting (also referred to as deferred shading). Making that leap, however, is no easy task and I asked Mike Walczyk for some help getting started. There’s a great starting point for this idea on the derivative forum, but I wanted a 099 approach and wanted to pull it apart to better understand what was happening. With that in mind, this is a first pass at looking through using point lights in a deferred pipeline, and what those various stages look like.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.


TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

Color Buffers

example-lights-point-color-buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

  • position – renderselect_postition
  • normals – renderselect_normal
  • color – renderselect_color
  • uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

example-lights-point-attributes

Next we’re going to compute and pack data for the position, color, and falloff for our point lights.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our Circle SOP circle1. We convert this to CHOP data and use the attributes from this circle (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

  • position – null_light_pos
  • color – null_light_color
  • falloff – null_light_falloff

This means that sample 0 from each of these three CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

Combining Buffers

example-lights-point-combining-buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

Representing Lights

example-lights-point-represetning-lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of if our calculations and data packing is working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Post Processing for Final Output

example-lights-point-post-process

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces and take off the sharp edges we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

piont-lights

Python in TouchDesigner | The Channel Class | TouchDesigner

The Channel Class Wiki Documentation

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know as it might shape the way we think of channels and samples.

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with channel called ‘chan1’. We know we can write a python reference for this chop like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]

python_refs.PNG

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch where in that list of values we want to grab an index – instead Touch is moving through that list with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

multi_sample_chop.gif

Okay, what’s really hiding from us is that CHOP references have a default value that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

In actuality, the real reference syntax is:
op( stringNameToOperator )[ channelNameOrIndex ][ arrayPosition ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

  • valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).
  • index (Read Only) The numeric index of the channel.
  • name (Read Only) The name of the channel.
  • owner (Read Only) The OP to which this object belongs.
  • vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

op( 'null1' )[0].valid
op( 'null1' )[0].index
op( 'null1' )[0].name
op( 'null1' )[0].owner
op( 'null1' )[0].exports
op( 'null1' )[0].vals

I’m going to plug that table DAT into an eval DAT to evaluate the python expressions so I can see what’s going on here. What I get back is:

True
0
chan1
/project1/base_the_channel_class/null1
[]
0.0 0.25 0.5 0.75 1.0

If we were to look at those side by side it would be:

Python In Python Out
op( ‘null1’ )[0].valid True
op( ‘null1’ )[0].index 0
op( ‘null1’ )[0].name chan1
op( ‘null1’ )[0].owner /project1/base_the_channel_class/null1
op( ‘null1’ )[0].exports []
op( ‘null1’ )[0].vals 0.0 0.25 0.5 0.75 1.0

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – Every time the value is true (1), I want the circle to be green, when it’s false (0) I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = [ 1.0, 0.0, 0.0 ]
    onColor         = [ 0.0, 1.0, 0.0 ]
    targetCircle    = 'circle{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = onColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = onColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = onColor[2]

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = offColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = offColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = offColor[2]
    return

channel_execute_par.gif

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = 'circle_off'
    onColor         = 'circle_on'
    targetCircle    = 'select{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = onColor

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = offColor
    return

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice, if you’re into what’s going on here take it all apart, otherwise you can let it ride.

channel_execute_select.gif

Okay… so, what I want is a little container that displays a channel’s name, an indicator if the value is > 0 or < 0, another green / red indicator that corresponds to the >< values, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

multi_sample_more_members.gif

You can pull mine apart to see how I made it work here: base_the_channel_class.

Happy Programming!


BUT WAIT! THERE’S MORE!

Ivan DesSol asks some interesting questions:

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

multi_sample_ref.PNG

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun, games, and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head that’s okay – let’s take a look at it this way.

Let’s say we have a list like this:

fruit = [ 'apple', 'orange', 'kiwi', 'grape' ]

We know that we can retrieve values from our list with an index:

print( fruit[ 0 ] ) | returns "apple"
print( fruit[ 1 ] ) | returns "orange"
print( fruit[ 2 ] ) | returns "kiwi"
print( fruit[ 3 ] ) | returns "grape"

If, however, we try:

print( fruit[ 4 ] )

Now we should see an out of range error… because there is nothing in the 4th position in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an index, and how Touch is interpreting our previous reference, are different:

reference value
op( ‘null7’ )[ ‘chan1’ ] 0.6331658363342285
op( ‘null7’ )[ ‘chan1’ ][ me.time.frame % 200 ] 0.6381909251213074

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation – we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages 0 still represents an index position. So what we’re actually seeing here is an off-by-one error.

Now that we know what the problem is it’s easy to fix:

op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]

Now we’re right as rain:

reference value
op( ‘null7’ )[ ‘chan1’ ] 0.6331658363342285
op( ‘null7’ )[ ‘chan1’ ][ me.time.frame % 200 ] 0.6381909251213074
op( ‘null7’ )[ ‘chan1’ ][ ( me.time.frame – 1 ) % 200 ] 0.6331658363342285

Hope that helps!

textport for performance | TouchDesigner

consoleText

I love a good challenge, and today on the TouchDesigner slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea about how you might make that happen. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.
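A minimal sketch of that extension – the class name, the table_log operator, and the row limit are assumptions standing in for the real tox:

class TextportExt:
    def __init__(self, ownerComp):
        self.Owner = ownerComp
        # the table DAT that collects one logged row per entry
        self.Log = ownerComp.op('table_log')
        # bounds used by a select DAT so the newest rows stay visible
        self.StartRow = 0
        self.EndRow = 10

    def Display(self, message):
        self.Log.appendRow([message])
        # once we pass the row boundary, move both members along
        # at the same time to keep the newest row in view
        if self.Log.numRows > self.EndRow:
            self.StartRow += 1
            self.EndRow += 1

    def PrintAndDisplay(self, message):
        # call our other method from within the class, then print too
        self.Display(message)
        print(message)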

Okay, so what exactly are we doing here?!

The big picture is that we want a way to be able to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to just a text DAT, allows us to do some simple clean up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have we can increment both members by one… moving them along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.

Want to download the tox and take it for a test drive? You can find the source code here.

Building a Calibration UI | Reusing Palette Components – The Stoner | TouchDesigner

Here’s our second stop in a series about planning out part of a long term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

stoner

As many of you have already found there’s a wealth of interesting examples and useful tools tucked away in the palette in TouchDesigner. If you’re unfamiliar with this feature, it’s located on the left hand side of the interface when you open Touch, and you can quickly summon it into existence with the small drawer and arrow icon:

pallet

Tucked away at the bottom of the tools list is the stoner. If you’ve never used the stoner it’s a killer tool for all your grid warping needs. It allows for keystoning and grid warping, with a healthy set of elements that make for fast and easy alterations to a given grid. You can bump points with the keyboard, you can use the mouse to scroll around, there are options for linear curves, bezier curves, perspective mapping, and bilinear mapping. It is an all around wonderful tool. The major catch is that using the tox as is runs you about 0.2 milliseconds when we’re not looking at the UI, and about 0.5 milliseconds when we are looking at the UI. That’s not bad, in fact that’s downright snappy in the scheme of things, but it’s going to have limitations when it comes to scale, and using multiple stoners at the same time.

stoner

That’s slick. But what if there was a way to get almost the same results at a cost of 0 milliseconds for photos, and only 0.05 milliseconds when working with video? As it turns out, there’s a gem of a feature in the stoner that allows us to get just this kind of performance, and we’re going to take a look at how that works as well as how to take advantage of that feature.

stoner_fast

Let’s start by taking a closer look at the stoner itself. We can see now that there’s a second outlet on our op. Let’s plug in a null to both outlets and see what we’re getting.

stoner_nulls

Well hello there, what is this all about?!

Our second output is a 32 bit texture made up of only red and green channels. Looking closer we can see that it’s a gradient of green in the top left corner, and red in the bottom right corner. If we pause here for a moment we can look at how we might generate a ramp like this with a GLSL Top.

glsl_vuvst

If you’re following along at home, let’s start by adding a GLSL Top to our network. Next we’ll edit the pixel shader.

out vec4 fragColor;

void main()
{
    fragColor = vec4( vUV.st, 0.0, 1.0 );
}

So what do we have here exactly? For starters we have an explicit declaration of our out vec4 (in very simple terms – the texture that we want to pass out of the main loop), and a main loop where we assign values to our output texture.

What’s a vec4?

In GLSL vectors are a data type. We use vectors for all sorts of operations, and as a datatype they’re very useful to us as we often want variables with several components. Keeping in mind that GLSL is used in pixeltown (one of the largest boroughs on your GPU), it’s helpful to be able to think of variables that carry multiple values – like say information about a red, green, blue, and alpha value for a given pixel. In fact, that’s just what our vec4 is doing for us here, it represents the RGBA values we want to associate with a given pixel.

vUV is an input variable that we can use to locate the texture coordinate of a pixel. This value changes for every pixel, which is part of the reason it’s so useful to us. So what is this whole vec4( vUV.st, 0.0, 1.0 ) business? In GL we can fill in the values of a vec4 with a vec2 – vUV.st is our uv coordinate as a vec2. In essence what we’ve done is say that we want to use the uv coordinates to stand in for our red and green values, blue will always be 0, and our alpha will always be 1. It’s okay if that’s a bit wonky to wrap your head around at the moment. If you’re still scratching your head you can read more at the links below.

Read about more GLSL Data Types

Read about writing your own GLSL TOP

Okay, so we’ve got this silly gradient, but what is it good for?!

Let’s move around our stoner a little bit to see what else changes here.

pushingpoints

That’s still not very sexy – I know, but let’s hold on for just one second. We first need to pause for a moment and think about what this might be good for. In fact, there’s a lovely operator that this plays very nicely with. The remap TOP. Say what now? The remap TOP can be used to warp input1 based on a map in input2. Still scratching your head? That’s okay. Let’s plug in a few other ops so we can see this in action. We’re going to rearrange our ops here just a little and add a remap TOP to the mix.

remapTOP.PNG

Here we can see that the red / green map is used on the second input of our remap TOP, and our movie file is used on the first input.

Okay. But why is this anything exciting?

Richard Burns just recently wrote about remapping, and he very succinctly nails down exactly why this is so powerful:

It’s commonly used by people who use the stoner component as it means they can do their mapping using the stoners render pipeline and then simply remove the whole mapping tool from the system leaving only the remap texture in place.

If you haven’t read his post yet it’s well worth a read, and you can find it here.

Just like Richard mentions we can use this new feature to essentially completely remove or disable the stoner in our project once we’ve made maps for all of our texture warping. This is how we’ll get our cook time down to just 0.05 milliseconds.

Let’s look at how we can use the stoner to do just this.

For starters we need to add some empty bases to our network. To keep things simple for now I’m just going to add them to the same part of the network where my stoner lives. I’m going to call them base_calibration1 and base_calibration2.

calibration_bases

Next we’re going to take a closer look at the stoner’s custom parameters. On the Stoner page we can see that there’s now a place to put a path for a project.

stoner_path

Let’s start by putting in the path to our base_calibration1 component. Once we hit enter we should see that our base_calibration1 has a new set of inputs and outputs:

base_capture1_added

Let’s take a quick look inside our component to see what was added.

inside_base1.PNG

Ah ha! Here we’ve got a set of tables that will allow the stoner UI to update correctly, and we’ve got a locked remap texture!

So, what do we do with all of this?

Let’s push around the corners of our texture in the stoner and hook up a few nulls to see what’s happening here.

working_with_calibration1

You may need to toggle the “always refresh” parameter on the stoner to get your destination project to update correctly. Later on we’ll look at how to work around this problem.

So far so good. Here we can see that our base_calibration1 has been updated with the changes we made to the stoner. What happens if we change the project path now to be base_calibration2? We should see that inputs and outputs are added to our base. We should also be able to make some changes to the stoner UI and see two different calibrations.

working_with_calibration2.PNG

Voila! That’s pretty slick. Better yet if we change the path in the stoner project parameter we’ll see that the UI updates to reflect the state we left our stoner in. In essence, this means that you can use a single stoner to calibrate multiple projectors without needing multiple stoners in your network. In fact, we can even bypass or delete the stoner from our project once we’re happy with the results.

no_stoner

There are, of course, a few changes that we’ll make to integrate this into our project’s pipeline, but understanding how this works will be instrumental in what we build next. Before we move ahead take some time to look through how this works, read through Richard’s post as well as some of the other documentation. Like Richard mentions, this approach to locking calibration data can be used in lots of different configurations and means that you can remove a huge chunk of overhead from your projects.

Next we’ll take the lessons we’ve learned here combined with the project requirements we laid out earlier to start building out our complete UI and calibration pipeline.