TouchDesigner | Delay Scripts

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how, Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon or reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:
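
A minimal sketch of that blocking approach might look like this – don’t worry, we’ll see in a moment why it’s a bad idea:

import time

# pause for one second, then print to the textport
time.sleep(1)
print('oh, hello there')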

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

time.sleep

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method at first. There’s lots of interesting nuance here, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:
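
Here’s a hedged sketch of that non-blocking version – assuming a 60 fps timeline, so 60 frames is roughly one second:

# run() schedules the script to execute later instead of blocking
run("print('oh, hello there')", delayFrames=60)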

If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?
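
A minimal sketch – any positional arguments placed after the script string become available inside the delayed script as args:

# 'hello' arrives in the delayed script as args[0]
run("print(args[0])", 'hello', delayFrames=60)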

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:
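
Here’s a hedged sketch of that idea – the operator name constant1 is an assumption, and this again assumes a 60 fps timeline:

# each pass through the loop schedules a run one second later than the
# last, handing the loop index along as args[0]
for index in range(10):
    run("op('constant1').par.value0 = args[0]", index, delayFrames=index * 60)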

What you should see is that your constant CHOP increments every second:

for-loop-delay

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

Happy Programming.

 

TouchDesigner | Finding Dominant Color

Programming is a strange practice. It’s not uncommon that in order to make what’s really interesting, or what you promised the client, or what’s driving a part of your project, you have to build another tool.

You want to detect motion, so you need to build out a means of comparing frames, and then determining where the most change has occurred. You want to make visuals that react to audio, but first you need to build out the process for finding meaningful patterns in the audio. And on and on and on.

And so it goes that I’ve been thinking about finding dominant color in an image. There are lots of ways to do this, and one approach is to use a technique called KMeans clustering. This approach isn’t without its faults, but it is interesting and relatively straightforward to implement. The catch is that it’s not fast enough for a realtime application – at least not if you’re using Python. So what can we do? Well, we can still use KMeans clustering, but we need to understand how to use multi-threading in python so we don’t block our main thread in TouchDesigner.

The project / tool / example below does just that – it’s a mechanism for finding dominant color in an image with an approach that uses a background thread for processing that request.


TouchDesigner Dominant Color

An approach for finding dominant color in an image using KMeans clustering with scikit-learn and openCV. The approach here is built for realtime applications using TouchDesigner and Python multi-threading.

TouchDesigner Version

099
Build 2018.22800

Python Dependencies

  • numpy
  • scipy
  • sklearn
  • cv2

Overview

base_dominant_color

A tool for finding Dominant Color with openCV.

Here we find an attempt at locating dominant colors from a source image with openCV and KMeans clustering. The big idea is to sample colors from a source image, build averages from clustered samples, and return a best estimation of dominant color. While this works well, it’s not perfect, and in this class you’ll find a number of helper methods to resolve some of the shortcomings of this process.

Procedurally, you’ll find that the process starts by saving out a small resolution version of the sampled file. This is then handed over to openCV for some preliminary analysis before being again handed over to sklearn (scikit-learn) for the KMeans portion of the process. While there is a built-in function for KMeans sorting in openCV, the sklearn method is a little less cumbersome and has better reference documentation for building functionality. After the clustering process each resulting sample is processed to find its luminance. Luminance values outside of the set bounds are discarded before assembling a final array of pixel values to be used.
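
As a rough sketch of that pipeline – the function name and default cluster count here are illustrative placeholders, not the module’s actual source:

import cv2
from sklearn.cluster import KMeans

def find_dominant_colors(image_path, n_clusters=5):
    # read the small temp image and reorder openCV's BGR channels to RGB
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    # flatten the image into a simple list of pixels for clustering
    pixels = image.reshape(-1, 3)

    # the cluster centers serve as our best estimate of dominant color
    kmeans = KMeans(n_clusters=n_clusters).fit(pixels)
    return kmeans.cluster_centers_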

It’s worth noting that this method relies on a number of additional python libraries. These can all be pip installed, and the recommended build approach here would be to use Python35. In the developer’s experience this produces the fewest errors and issues – and boy did the developer stumble along the way here.

Another consideration you’ll find below is that this extension supports a multi-threaded approach to finding results.

Parameters

Dominant Color

  • Image Process Status – (string) The thread process status.
  • Temp Image Cache – (folder) A directory location for a temp image file.
  • Source Image – (TouchDesigner TOP) A TOP (still) used for color analysis.
  • Clusters – (int) The number of requested clusters.
  • Luminance Bounds – (int, tuple) Luminance bounds, min and max, expressed as values between 0 and 1.
  • Clusters within Bounds – (int) The number of clusters within the Luminance Bounds.
  • Smooth Ramp – (toggle) Texture interpolation on output image.
  • Ramp Width – (int) Number of pixels in the output Ramp.
  • Output Image – (menu) A drop-down menu for selecting a ramp or only the returned clusters.
  • Find Colors – (pulse) Issues the command to find dominant colors.

Python

  • Python Externals – (path) A path to the directory with python external libraries.
  • Check Imports – (pulse) A pulse button to check if sklearn was correctly imported.

Using this Module

To use this module there are a few essential elements to keep in mind.

Getting Python in Order

If you haven’t worked with external Python Libraries inside of Touch yet, please take a moment to familiarize yourself with the process. You can read more about it on the Derivative Wiki – Importing Modules

Before you can run this module you’ll need to ensure that your Python environment is correctly set up. I’d recommend that you install Python 3.5+ as that matches the Python installation in Touch. In building out this tool I ran into some wobbly pieces that largely centered around installing sklearn using Python 3.6 – so take it from someone who’s already run into some issues: you’ll encounter the fewest challenges / configuration issues if you start there. Sklearn (the primary external library used by this module) requires both scipy and numpy – if you have pip installed the process is straightforward. From a command prompt you can run each of these commands consecutively:

pip install numpy
pip install scipy
pip install sklearn

Once you’ve installed the libraries above, you can confirm that they’re available in python by invoking python in your command prompt, and then importing the libraries one by one. Testing to make sure you’ve correctly installed your libraries in a Python-only environment first will help ensure that any debugging you need to do in TouchDesigner is more straightforward.
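
That check looks something like this at the interactive prompt – a failed install will announce itself with a traceback:

>>> import numpy
>>> import scipy
>>> import sklearn
>>> import cv2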

python-externals-confirmation

Working with TouchDesigner

Python | Importing Modules

If you haven’t imported external libraries in TouchDesigner before there’s an additional step you’ll need to take care of – adding your external site-packages path to TouchDesigner. You can do this with a regular text DAT and by modifying the example below:

import sys
mypath = "C:/Python35/Lib/site-packages/mymodule"
if mypath not in sys.path:
    sys.path.append(mypath)

Copy and paste the above into your text DAT, and modify mypath to be a string that points to your Python externals site-packages directory.

python-page-dominant-color

If that sounds a little out of your depth, you can use a helper feature on the Dominant Color module. On the Python page, navigate to your Python Externals directory. It should likely be a path like: C:\Program Files\Python35\Lib\site-packages

Your path may be different, especially if when you installed Python you didn’t use the checkbox to install for all users. After navigating to your externals directory, pulse the Check Imports parameter. If you don’t see a pop-up window then sklearn was successfully imported. If you do see a pop-up window then something is not quite right, and you’ll need to do a bit of leg-work to get your Python pieces in order before you can use the module.

Using the Dominant Color

With all of your Python elements in order, you’re ready to start using this module.

dominant-colors-parameters

The process for finding dominant color uses a KMeans clustering algorithm for grouping similar values. Luckily we don’t need to know all of the statistics that go into that mechanism in order to take full advantage of the approach, but it is important to know that we need to be mindful of a few elements. For this to work efficiently, we’ll need to save our image out to an external file, so you need to make sure that this module has a cache for saving temporary images. The process will verify that the directory you’ve pointed it to exists before saving out a file, and will create a directory if one doesn’t yet exist. That’s mostly sanity checking to ensure that you don’t have to lose time trying to figure out why your file isn’t saving.

Given that this process happens in another thread, it’s also important to consider that this functions based on a still image, not on a moving one. While it would be slick to have a fast operation for finding KMeans clusters in video, that’s not what this tool does. Instead the assumption here is that you’re using a single frame of reference content, not video. You point this module to a target source by dropping a TOP onto the Source Image parameter.

Next you’ll need to define the number of clusters you want to look for. Here the term clusters is akin to the target number of dominant colors you’re looking to find – the top 3, the top 10, the top 20? It’s up to you, but keep in mind that more clusters take longer to produce a result. You’re also likely to want to bound your results with some luminance measure – for example, you probably don’t want colors that are too dark, or too light. The luminance bounds parameters are for luminance measures that are normalized as 0 to 1. Clusters within bounds, then, tells you how many clusters were returned from the process that fell within your specified regions. This is, essentially, a way to know how many swatches work within the brightness ranges you’ve set.
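
To make the luminance idea concrete, here’s a hedged sketch of the kind of test involved – the helper name and default bounds are illustrations, and the weights are the standard relative luminance coefficients rather than necessarily the module’s exact math:

def within_luminance_bounds(color, low=0.2, high=0.8):
    # normalize 8 bit color values to a 0-1 range
    r, g, b = [channel / 255.0 for channel in color]

    # standard relative luminance weighting for RGB
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return low <= luminance <= high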

The output ramp from this process can be interpolated and smooth, or Nearest Pixel swatches. You can also choose to output a ramp that’s any length. You might, for example, want a gradient that’s spread over 100 or 1000 pixels rather than just the discrete samples. You can set the number of output pixels with the ramp width parameter.

On the other side of that equation, you might just want only the samples that came out of the process. In the Output Image parameter, if you choose clusters from the drop-down menu you’ll get only the valid samples that fell within your specified luminance bounds.

Finally, to run the operation pulse Find Colors. As an operational note, this process would normally block / lock up TouchDesigner. To avoid that unsavory circumstance, this module runs the KMeans clustering process in another thread. It’s slightly slower than if it ran in the main thread, but the benefit is that Touch will continue running. You’ll notice that the Image Process Status parameter displays Processing while the separate thread is running. Once the result has been returned you’ll see Ready displayed in the parameter.

Notes from Other Programmers

I don’t use CONDA, but for those of you that do, you can install sklearn with the following command:
conda install scikit-learn

TouchDesigner | Process Management

A look at how you can launch another process from TouchDesigner with Python, and how you can kill that process.

Grab all the files from the repo here.

Overview

At some point you’ll need to split up the work of a single project into multiple processes. This happens for lots of reasons – maybe you want to break your control interface out from your output elements, or maybe you want to start up another tool you’ve built – you name it, there are lots of reasons you might want to launch another process, and if you haven’t found a reason to you… chances are you will soon.

The good news is that we can do this with a little bit of python. We need to import a few extra libraries, and we need to do a little leg work – but once we get a handle on those things we have a straightforward process on our hands.

Getting Started

First things first, start by downloading or cloning the whole repo. We’ll start by opening the process-management.toe file. You might imagine that this is the toe file that you’re launching processes from, or you might think of this as your main control toe file. You’ll notice that there’s also a toe file called other-app.toe. This is the file we’re going to launch from within TouchDesigner. At this point feel free to open up that file – you’ll see that it starts in perform mode and says that it’s some other process. Perfect. You should also notice that it says “my role is,” but nothing else. Don’t worry, it’s this way on purpose.

Process-management.toe

In this toe file you’ll see three buttons:

  • Launch Process
  • Quit Process
  • Quit Process ID None

Launch Process

This button will run the script in text_start_process.

So, what’s happening here? First we need to import a few other libraries that we’re going to use – os and subprocess. From there we need to identify the application we’re going to use, and the file we’re going to launch. Said another way, we want to know what program we’re going to open our toe file with. You’ll see that we’re doing something a little tricksy here. In Touch, the app class has a member called binFolder – this tells us the location of the Touch binary files, which happens to include our executable file. Rather than hard coding a path to our binary we can instead use the path provided by Touch – this has lots of advantages and should mean that your code is less likely to break from machine to machine.

Similarly, you’ll see that we locate our other-process.toe by looking at our project folder (where process-management.toe is saved) and finding our other-process.toe file relative to our main project file.

So far so good. You should also see that we’re setting an environment variable with os.environ. This is an interesting place where we can actually set variables for a Touch process at start. Why do this? Well, you may find that you have a single toe file that you want to configure differently for any number of reasons. If you’re using a single toe file configuration, you might want to launch your main file to default as a controller, another instance of the same file in an output configuration, and maybe another instance of the same app to handle some other process. If that’s not interesting to you, you can comment out that line – but it might at least be worth thinking about before you add that pound sign.

Next we use a subprocess.Popen() call to start our process – by providing the executable and the file as arguments in a list. We can also grab our process ID (we’ll use that later) while we’re here.

Finally we’ll build a little dictionary of our attributes and put that all in storage. I’m using a dictionary in this example since you might find that you need to launch multiple instances, and having a nice way to keep them separate is handy.
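
Put together, a sketch of the launch script might look like this – the executable name, the ROLE value, and the storage key are assumptions based on the description above:

import os
import subprocess

# app.binFolder points at the TouchDesigner binaries, so we never
# have to hard code a path to the executable
td_exe = '{}/TouchDesigner099.exe'.format(app.binFolder)
toe_file = '{}/other-app.toe'.format(project.folder)

# environment variables set here are visible to the child process at start
os.environ['ROLE'] = 'render1'

# launch the new process and grab its process ID
process = subprocess.Popen([td_exe, toe_file])

# a dictionary in storage keeps multiple launched processes easy to separate
parent().store('processes', {'process': process, 'pid': process.pid})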

Okay. To see this work, let’s make that button viewer active and click it – tada! At this point you should see another TouchDesigner process launch and this time around the name that was entered for our ROLE environment variable shows up in our second touch process: “some other process, my role is render1”

Good to know: if you try to save your file now you’ll get an error. That’s because we’ve put a subprocess object into storage and it can’t persist between closing and opening. Your file will be saved, but our little dictionary in storage will be lost.

Quit Process

This little button kills our other-app.toe instance.

Much simpler than our last script, this one first grabs our dictionary that’s in storage, grabs the subprocess object, and then kills it. Next we unstore all of the bits in storage so we can save our toe file without any warning messages.
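
A sketch of that script, reusing the hypothetical storage key from above:

# grab the stored dictionary, kill the subprocess, then clean up storage
stored = parent().fetch('processes')
stored['process'].kill()
parent().unstore('processes')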

Quit Process ID

Okay – so what happens if you want to just kill a process by its ID, not by storing the whole subprocess object? You can do that.

In this case we can use the os module to issue a kill call with our process id. If we look at the os documentation we’ll see that we need a pid and a sig – which is why we’re also importing signal.
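
That might look something like this sketch:

import os
import signal

# kill the process by its ID alone - no subprocess object required
stored = parent().fetch('processes')
os.kill(stored['pid'], signal.SIGTERM)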

Takeaways

This may or may not be useful in your current workflow, but it’s handy to know that there are ways to launch and quit another toe file. Better yet, this same idea doesn’t have to be limited to use with Touch. You might use this to launch or control any other application your heart desires.

TouchDesigner | Deferred Lighting – Cone Lights

With a start on point lights, one of the next questions you might ask is “what about cone lights?” Well, it just so happens that there’s a way to approach a deferred pipeline for cone lights just like with point lights. This example still has a bit to go, with a few lingering misbehaviors, but it is a start for those interested in looking at complex lighting solutions for their realtime scenes.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.


TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

This approach is very similar to point lights, with the additional challenge of needing to think about lights as directional. We’ll see that the first and last stages of this process are consistent with our Point Light example, but in the middle we need to make some changes. We can again get started with color buffers.

Color Buffers

color buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

  • position – renderselect_postition
  • normals – renderselect_normal
  • color – renderselect_color
  • uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

light attributes

Here we’ll begin to see a divergence from our previous approach.

We are still going to compute and pack data for the position, color, and falloff for our point lights like in our previous example. The difference now is that we also need to compute a look-at position for each of our lights. In addition to our falloff data we’ll need to also consider the cone angle and delta of our lights. For the time being cone angle is working, but cone delta is broken – pardon my learning in public here.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our null SOP null_lightpos. We convert this to CHOP data and use the attributes from this null (number of points) to ensure that the rest of our CHOP data matches the number of samples / lights we have in our scene. In this case we’re using a null since we want to position the look-at points at some other position than our lights themselves. Notice that our circle has one transform SOP to describe light position, and another transform SOP to describe look-at position. In the next stage we’ll use our null_light_pos CHOP and our null_light_lookat CHOP for the lighting calculations – we’ll also end up using the results of our object CHOP null_cone_rot to be able to describe the rotation of our lights when rendering them as instances.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

  • position – null_light_pos
  • color – null_light_color
  • falloff – null_light_falloff
  • light cone – null_light_cone

This means that sample 0 from each of these four CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

The additional light cone attribute here is used to describe the radius of the cone and the degree of softness at the edges (again pardon the fact that this isn’t yet working).

Combining Buffers

combining buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

If you look at the final pieces of our for loop you’ll find that much of this process is borrowed from the example Malcolm wrote (Thanks Malcolm!). This starting point serves as a baseline to help us get started from the position of how other lights are handled in Touch.

Representing Lights

representing lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Our divergence here is that rather than using spheres, we’re instead using cones to represent our lights. In a future iteration the width of the cone base should scale along with our cone angle, but for now let’s celebrate the fact that we have a way to see where our lights are coming from. You’ll notice that the rotate attributes generated from the object CHOP are used to describe the rotation of the instances. Ultimately, we probably don’t need these representations, but they sure are handy when we’re trying to get a sense of what’s happening inside of our shader.

Post Processing for Final Output

post process

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

final product

TouchDesigner | Deferred Lighting – Point Lights

A bit ago I wanted to get a handle on how one might approach real time rendering with LOTS of lights. The typical openGL pipeline has some limitations here, but there’s a lot of interesting potential with Deferred Lighting (also referred to as deferred shading). Making that leap, however, is no easy task and I asked Mike Walczyk for some help getting started. There’s a great starting point for this idea on the Derivative forum, but I wanted a 099 approach and wanted to pull it apart to better understand what was happening. With that in mind, this is a first pass at looking through using point lights in a deferred pipeline, and what those various stages look like.

You can find a repo for all of this work and experimentation here: TouchDesigner Deferred Lighting.


TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

Color Buffers

example-lights-point-color-buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

  • position – renderselect_postition
  • normals – renderselect_normal
  • color – renderselect_color
  • uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

example-lights-point-attributes

Next we’re going to compute and pack data for the position, color, and falloff for our point lights.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our Circle SOP circle1. We convert this to CHOP data and use the attributes from this circle (number of points) to ensure that the rest of our CHOP data matches the number of samples / lights we have in our scene.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

  • position – null_light_pos
  • color – null_light_color
  • falloff – null_light_falloff

This means that sample 0 from each of these three CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

Combining Buffers

example-lights-point-combining-buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in openGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

Representing Lights

example-lights-point-represetning-lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Post Processing for Final Output

example-lights-point-post-process

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

piont-lights

Python in TouchDesigner | The Channel Class | TouchDesigner

The Channel Class Wiki Documentation

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know as it might shape the way we think of channels and samples.

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with a channel called ‘chan1’. We know we can write a python reference for this chop like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]

python_refs.PNG

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch which position in that list of values we want to grab – instead Touch is moving through that list with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

multi_sample_chop.gif

Okay, what’s really hiding from us is that CHOP references have a default value that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

In actuality, the real reference syntax is:
op( stringNameToOperator )[ channelNameOrIndex ][ arrayPosition ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

  • valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).
  • index (Read Only) The numeric index of the channel.
  • name (Read Only) The name of the channel.
  • owner (Read Only) The OP to which this object belongs.
  • vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

op( 'null1' )[0].valid
op( 'null1' )[0].index
op( 'null1' )[0].name
op( 'null1' )[0].owner
op( 'null1' )[0].exports
op( 'null1' )[0].vals

I’m going to plug that table DAT into an eval DAT to evaluate the python expressions so I can see what’s going on here. What I get back is:

True
0
chan1
/project1/base_the_channel_class/null1
[]
0.0 0.25 0.5 0.75 1.0

If we were to look at those side by side it would be:

Python In | Python Out
op( 'null1' )[0].valid | True
op( 'null1' )[0].index | 0
op( 'null1' )[0].name | chan1
op( 'null1' )[0].owner | /project1/base_the_channel_class/null1
op( 'null1' )[0].exports | []
op( 'null1' )[0].vals | 0.0 0.25 0.5 0.75 1.0

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – Every time the value is true (1), I want the circle to be green, when it’s false (0) I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = [ 1.0, 0.0, 0.0 ]
    onColor         = [ 0.0, 1.0, 0.0 ]
    targetCircle    = 'circle{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = onColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = onColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = onColor[2]

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorr   = offColor[0]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorg   = offColor[1]
        op( targetCircle.format( digit = channel.index + 1 ) ).par.fillcolorb   = offColor[2]
    return

channel_execute_par.gif

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:

def onValueChange(channel, sampleIndex, val, prev):
    
    # set up some variables
    offColor        = 'circle_off'
    onColor         = 'circle_on'
    targetCircle    = 'select{digit}'

    # describe what happens when val is true
    if val:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = onColor

    # describe what happens when val is false
    else:
        op( targetCircle.format( digit = channel.index + 1 ) ).par.top      = offColor
    return

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice, if you’re into what’s going on here take it all apart, otherwise you can let it ride.

channel_execute_select.gif

Okay… so, what I want is a little container that displays a channel’s name, an indicator if the value is > 0 or < 0, another green / red indicator that corresponds to the >< values, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

multi_sample_more_members.gif

You can pull mine apart to see how I made it work here: base_the_channel_class.

Happy Programming!


BUT WAIT! THERE’S MORE!

Ivan DesSol asks some interesting questions:

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel, let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

multi_sample_ref.PNG

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun, games, and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head that’s okay – let’s take a look at it this way.

Let’s say we have a list like this:

fruit = [ 'apple', 'orange', 'kiwi', 'grape' ]

We know that we can retrieve values from our list with an index:

print( fruit[ 0 ] )  # returns "apple"
print( fruit[ 1 ] )  # returns "orange"
print( fruit[ 2 ] )  # returns "kiwi"
print( fruit[ 3 ] )  # returns "grape"

If, however, we try:

print( fruit[ 4 ] )

Now we should see an out of range error… because there is nothing in the 4th position in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an index, and how Touch is interpreting our previous reference, are different:

reference | value
op( 'null7' )[ 'chan1' ] | 0.6331658363342285
op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ] | 0.6381909251213074

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation – we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages 0 still represents an index position. So what we’re actually seeing here is an off by one error.

Now that we know what the problem is it’s easy to fix:

op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ]

Now we’re right as rain:

reference | value
op( 'null7' )[ 'chan1' ] | 0.6331658363342285
op( 'null7' )[ 'chan1' ][ me.time.frame ] | 0.6381909251213074
op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ] | 0.6331658363342285

Hope that helps!

textport for performance | TouchDesigner

consoleText

I love a good challenge, and today on the TouchDesigner slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea about how you might make that happen. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.
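
Here’s a hedged sketch of the extension’s shape – the class name, the table DAT name, and the ten row boundary are all assumptions standing in for the real source:

class TextportExt:

    def __init__(self, my_op):
        self.MyOp = my_op
        self.Log = my_op.op('table_log')

        # boundaries for the select DAT that trims what we display
        self.StartRow = 0
        self.EndRow = 10

    def Display(self, message):
        # each entry gets its own row, which makes it easy to limit
        # the number of displayed rows
        self.Log.appendRow([message])

        # once we hit the boundary, move both members along together
        # so the newest row stays visible
        if self.Log.numRows > self.EndRow:
            self.StartRow += 1
            self.EndRow += 1

    def PrintAndDisplay(self, message):
        # print to the textport, then call our other method - one
        # class method calling another
        print(message)
        self.Display(message)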

Okay, so what exactly are we doing here?!

The big picture is that we want a way to be able to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to just a text DAT, allows us to do some simple clean up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have we can update our members plus one… moving them both along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.

Want to download the tox and take it for a test drive? You can find the source code here.

Building a Calibration UI | Reusing Palette Components – The Stoner | TouchDesigner

Here’s our second stop in a series about planning out part of a long term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five-sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

stoner

As many of you have already found, there’s a wealth of interesting examples and useful tools tucked away in the palette in TouchDesigner. If you’re unfamiliar with this feature, it’s located on the left hand side of the interface when you open Touch, and you can quickly summon it into existence with the small drawer and arrow icon:

pallet

Tucked away at the bottom of the tools list is the stoner. If you’ve never used the stoner it’s a killer tool for all your grid warping needs. It allows for keystoning and grid warping, with a healthy set of elements that make for fast and easy alterations to a given grid. You can bump points with the keyboard, you can use the mouse to scroll around, and there are options for linear curves, bezier curves, perspective mapping, and bilinear mapping. It is an all around wonderful tool. The major catch is that using the tox as is runs you about 0.2 milliseconds when we’re not looking at the UI, and about 0.5 milliseconds when we are looking at the UI. That’s not bad – in fact that’s downright snappy in the scheme of things – but it’s going to have limitations when it comes to scale, and using multiple stoners at the same time.

stoner

That’s slick. But what if there was a way to get almost the same results at a cost of 0 milliseconds for photos, and only 0.05 milliseconds when working with video? As it turns out, there’s a gem of a feature in the stoner that allows us to get just this kind of performance, and we’re going to take a look at how that works as well as how to take advantage of that feature.

stoner_fast

Let’s start by taking a closer look at the stoner itself. We can see now that there’s a second outlet on our op. Let’s plug in a null to both outlets and see what we’re getting.

stoner_nulls

Well hello there, what is this all about?!

Our second output is a 32 bit texture made up of only red and green channels. Looking closer we can see that it’s a gradient of green in the top left corner, and red in the bottom right corner. If we pause here for a moment we can look at how we might generate a ramp like this with a GLSL Top.

glsl_vuvst

If you’re following along at home, let’s start by adding a GLSL Top to our network. Next we’ll edit the pixel shader.

out vec4 fragColor;

void main()
{
    // use the uv coordinate for the red and green channels;
    // blue is always 0.0 and alpha is always 1.0
    fragColor = vec4( vUV.st, 0.0, 1.0 );
}

So what do we have here exactly? For starters we have an explicit declaration of our out vec4 (in very simple terms – the texture that we want to pass out of the main loop), and a main loop where we assign values to our output texture.

What’s a vec4?

In GLSL vectors are a data type. We use vectors for all sorts of operations, and as a datatype they’re very useful to us as we often want variables with several components. Keeping in mind that GLSL is used in pixeltown (one of the largest boroughs on your GPU), it’s helpful to be able to think of variables that carry multiple values – like say information about the red, green, blue, and alpha values for a given pixel. In fact, that’s just what our vec4 is doing for us here – it represents the RGBA values we want to associate with a given pixel.

vUV is an input variable that we can use to locate the texture coordinate of a pixel. This value changes for every pixel, which is part of the reason it’s so useful to us. So what is this whole vec4( vUV.st, 0.0, 1.0 ) business? In GL we can fill in the values of a vec4 with a vec2 – vUV.st is our uv coordinate as a vec2. In essence what we’ve done is say that we want to use the uv coordinates to stand in for our red and green values, blue will always be 0, and our alpha will always be 1. It’s okay if that’s a bit wonky to wrap your head around at the moment. If you’re still scratching your head you can read more at the links below.

Read about more GLSL Data Types

Read about writing your own GLSL TOP

Okay, so we’ve got this silly gradient, but what is it good for?!

Let’s move around our stoner a little bit to see what else changes here.

pushingpoints

That’s still not very sexy – I know, but let’s hold on for just one second. We first need to pause for a moment and think about what this might be good for. In fact, there’s a lovely operator that this plays very nicely with: the remap TOP. Say what now? The remap TOP can be used to warp input1 based on a map in input2. Still scratching your head? That’s okay. Let’s plug in a few other ops so we can see this in action. We’re going to rearrange our ops here just a little and add a remap TOP to the mix.

remapTOP.PNG

Here we can see that the red / green map is used on the second input of our remap TOP, and our movie file is used on the first input.

Okay. But why is this anything exciting?

Richard Burns just recently wrote about remapping, and he very succinctly nails down exactly why this is so powerful:

It’s commonly used by people who use the stoner component as it means they can do their mapping using the stoners render pipeline and then simply remove the whole mapping tool from the system leaving only the remap texture in place.

If you haven’t read his post yet it’s well worth a read, and you can find it here.

Just like Richard mentions we can use this new feature to essentially completely remove or disable the stoner in our project once we’ve made maps for all of our texture warping. This is how we’ll get our cook time down to just 0.05 milliseconds.

Let’s look at how we can use the stoner to do just this.

For starters we need to add some empty bases to our network. To keep things simple for now I’m just going to add them to the same part of the network where my stoner lives. I’m going to call them base_calibration1 and base_calibration2.

calibration_bases

Next we’re going to take a closer look at the stoner’s custom parameters. On the Stoner page we can see that there’s now a place to put a path for a project.

stoner_path

Let’s start by putting in the path to our base_calibration1 component. Once we hit enter we should see that our base_calibration1 has new set of inputs and outputs:

base_capture1_added

Let’s take a quick look inside our component to see what was added.

inside_base1.PNG

Ah ha! Here we’ve got a set of tables that will allow the stoner UI to update correctly, and we’ve got a locked remap texture!

So, what do we do with all of this?

Let’s push around the corners of our texture in the stoner and hook up a few nulls to see what’s happening here.

working_with_calibration1

You may need to toggle the “always refresh” parameter on the stoner to get your destination project to update correctly. Later on we’ll look at how to work around this problem.

So far so good. Here we can see that our base_calibration1 has been updated with the changes we made to the stoner. What happens if we change the project path now to be base_calibration2? We should see that inputs and outputs are added to our base. We should also be able to make some changes to the stoner UI and see two different calibrations.

working_with_calibration2.PNG

Voila! That’s pretty slick. Better yet if we change the path in the stoner project parameter we’ll see that the UI updates to reflect the state we left our stoner in. In essence, this means that you can use a single stoner to calibrate multiple projectors without needing multiple stoners in your network. In fact, we can even bypass or delete the stoner from our project once we’re happy with the results.

no_stoner

There are, of course, a few changes that we’ll make to integrate this into our project’s pipeline, but understanding how this works will be instrumental in what we build next. Before we move ahead take some time to look through how this works, and read through Richard’s post as well as some of the other documentation. Like Richard mentions, this approach to locking calibration data can be used in lots of different configurations and means that you can remove a huge chunk of overhead from your projects.

Next we’ll take the lessons we’ve learned here combined with the project requirements we laid out earlier to start building out our complete UI and calibration pipeline.

Building a Calibration UI | Software Architecture | TouchDesigner

Here’s our first stop in a series about planning out part of a long term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five-sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

Big Ideas

Organization Matters

How you build a thing matters. Ask any architect, chef, crafts person, quilter, and on and on and on. The principles and ideas that drive how you’re building your network matter, and as a full disclaimer I am a misery to collaborate with when it comes to messy TouchDesigner networks and messy directories. “As long as it works, it doesn’t matter Matt!” is the mantra I often hear as push-back to my requests for organized work. There’s a lot of truth in that, but if you ever have to work with another person (or with your future self) then the organization of your work will matter. Cluttered, messy, or disorganized structures will make a world of misery for you and your collaborators. You can do whatever you want at the end of the day, but my strong recommendation is that you create some guiding principles for your organizational structures, network layout, documentation, and pending todo items on a project.

Here are some of my simple suggestions:

Turn off node resizing for TOPs and COMPs, and don’t resize your nodes by hand:

What? Why? Those things might make perfect sense to you, and you might like them a lot – and that’s great. I hate them in my networks because they insert irregularities into the flow of the network that spoof importance. When I look at a network and there is a single node that’s larger, smaller, or resized in any way that’s different from the surrounding nodes, it implies to me that it has some significance or importance. It might well be that you just needed a closer look for a second and then forgot to change its size again, but to the person looking at your network for the first time it reads as significant – even if it’s not. Time and again I’ve seen that kind of resizing throw off me and many of the programmers that I respect.

If you need to use size as a defining characteristic in your networks, then use Python to make nodes consistent sizes – something like the sketch below. It all goes back to being organized and consistent. Help your future self, or the other people you’re working with, avoid as much confusion as possible… chances are the project is already confusing enough.

resizing
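If you want a concrete starting point, here’s a minimal sketch of that idea – the ‘/project1’ path and the 160 x 130 size are assumptions, so swap in whatever standard you’ve settled on:

# a minimal sketch – run from a text DAT; the path and sizes
# here are assumptions, use your own standards
target = op('/project1')

for n in target.children:
    n.nodeWidth = 160
    n.nodeHeight = 130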

Align your ops:

I like the left bottom corner of my ops aligned with the network grid. I like to start new ops on new grid lines, and I like a full grid line of vertical space between ops. I also like to arrange ops (when possible) so that wires don’t cross unnecessarily. OCD much Matt? Sure, you can say that if you like. For me, the truth of the matter is that I like tidy networks where I can quickly see the flow of operations left to right. Too much space and things feel spread out all over creation; too little space and it can be difficult to see a network’s operation when you’re zoomed out. For me, this is the right kind of balance. It might well be different for you, and that’s wonderful. My only encouragement here is to be purposeful. Whatever you do, make sure it’s a choice you’re making, not just the happenstance layout that emerged from the creative process. Be messy, be wild, place nodes every which way… but tidy up before you save and commit your work for the day.

alignment
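If you’d rather enforce that spacing with a script than by hand, a sketch like this one can snap everything to the grid – the 20-unit spacing is an assumption, so match it to your own layout:

# a minimal sketch for snapping node positions to a grid –
# the 20-unit spacing is an assumption
GRID = 20

for n in op('/project1').children:
    n.nodeX = round(n.nodeX / GRID) * GRID
    n.nodeY = round(n.nodeY / GRID) * GRID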

Color code sparingly

I love me some color coding, but make sure it’s done purposefully and give yourself a key. This is another situation where color coding is often a great idea on the face of it, and then it quickly changes into something you can hardly keep track of. That’s okay, and my experience has taught me that less is better in this regard.

Leave those op names alone and only append after the op name

It took a lot to get me on board for this, but it’s now something that I’m wholeheartedly behind. Leaving the original operator name in place and only appending after it is a tremendous help when it comes to quickly glancing at a network. It helps the viewer know quickly what an operator is doing without having to do any additional inspection.

“I don’t get it Matt.”

That’s okay. Let’s look at this example:

names_nightmare

In 2 seconds or less can you tell if the first operator with the banana is a select or a moviefilein TOP? Can you tell if the composited image is a comp TOP or a multiply TOP? Can you tell if the TOP labeled “done” is a null or an out TOP?

Compare that with:

names

To be fair, the names aren’t nearly as exciting, but they do make it much easier to understand what’s happening at a glance.

Like it or not, you’re an engineer now

As much as we all might like to fancy ourselves Artists with capital letter As and a filigree flourish, the truth is that right now you’re an engineer. That’s not to say that you’re not an artist too, you are; but some problems don’t need artful solutions, they need thoughtfully engineered solutions based on a complex understanding of computation and computer science. The more you work in TouchDesigner, the more you’ll find that you’re as much engineer as you are artist – that’s not only okay, it’s an important realization to make about your own work. It can be a bifurcating moment to feel a conflict between the art and the mechanics of a given network – “this approach is beautiful and it works, but costs 20 milliseconds” is probably not a viable solution. It’s especially not viable for something like a calibration pipeline that needs to have as little computational overhead as possible. The work that goes into an efficient process isn’t always glamorous, or much to show off; instead, it makes room for the art with a capital A to take more cycles and be more expressive.

Lucky for us, there’s room to be both engineer and artist in our work. Remember to embrace both of those roles, even when faced with the frustration of complex logic problems, or when faced with the questions of aesthetics.

Externalizing Files

So we should externalize files? Yes. For the love of god, yes. If you believe in a world where you don’t tear your hair out, yes. In case you’re still wondering, the answer is yes – you should always look for ways to externalize files.

Why? That seems silly.

Once upon a time I didn’t externalize any components, and my projects lived as complete toe files and the world was beautiful. Then I crashed my project because I did something silly, and it took me hours to track down where the problem was… even opening the crashautosave.toe left me slowly moving through networks within networks trying to figure out what I had done where to make everything crash. Worse yet, because it was just a single toe file there was no way to unit test any of the smaller modules. Open, change, save, open, crash… repeat. If you haven’t had this experience yet, it’s only a matter of time. You too will crash one day, and mightily. You’ll wonder why you ever wanted to work with computers in the first place, and why you don’t just open up a bar on a sandy beach somewhere.

Snark aside, externalizing components in your network has several benefits. For starters, it moves you away from having a single toe file black box that’s difficult to test or debug. Separating out smaller modules allows for more portability between the applications that you build, and it means that you can test just that module. It also means that you can update just that module and not your entire toe file.
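To make that concrete, here’s a sketch of externalizing a single component with Python – the component name and the modules folder are placeholders for your own layout:

# a minimal sketch – 'base_calibration' and the modules folder
# are placeholders for your own component and directory layout
comp = op('base_calibration')

# save the component out to disk as its own tox file
comp.save('modules/base_calibration.tox')

# point the COMP at that external file so it can be reloaded
# and versioned independently of the toe file
comp.par.externaltox = 'modules/base_calibration.tox'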

Externalizing modules, extensions, and the like also lets you more easily compare files over time if you’re using a piece of version control software – don’t worry, we’ll talk about git in just a second. It also makes it easier to work with text files in something like Sublime (my favorite) or Visual Studio Code. That might not seem like a big deal now, but the more you work with extensions and Python, the more you’ll want robust text editing tools at your fingertips.

Externalizing files also reduces bloat in your toe file. If you’ve ever locked a texture in your toe file you know all too well that your slim file size can quickly balloon. External files also make it easier to save and try multiple configurations. Since we’re going to work on building a calibration UI, we might want to be able to save a calibration configuration and then try another calibration. The same goes for configuration files. Rather than a configuration that’s locked into your toe file, an external file can make it easy to load lots of different configurations or other settings.

It might well seem like I’m belaboring this point, and I’m doing that on purpose. Working with external files is more work, takes more organization, takes better planning, and requires a more meticulous approach – but if you want to grow as a developer and programmer, you’ll wrestle with this challenge sooner or later. I’d encourage you to do that sooner rather than later.

git

What is git? Direct from the git website:

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Git is easy to learn and has a tiny footprint with lightning fast performance. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.

But what does that mean?! Unlike working with art, where a tool like Dropbox would be helpful, when we’re developing software it’s often more helpful to have some specific means of saving a file (probably with the same name), along with a message about what our changes have done. Let’s say we’re working on our calibration UI, and we’ve made a change that now allows us to save out our calibration configuration. That’s great, but it’s come at the cost of changing a piece of how our UI works. We know for sure that the old way works 100% of the time, and we’re pretty sure that our new way works at least 80% of the time. Git allows us to check in the new version of our software, commit it to our repository, and make a note of what we did.

We’re smart programmers, so we’ve also started to externalize our files. This means that we’ve separated out parts of our network responsible for different tasks. With git we’re now able to check in only the single tox that’s changed while we’ve been working. Better yet, we can even work with another person. Developer 1 might be working on one part of our project, while Developer 2 is working on another. Both developers are saving their respective tox files, and git allows us to track that progress and merge work seamlessly. If we do end up editing the same files, we end up with a merge conflict… which means what, Matt? A merge conflict means just that: two sets of changes are competing for the same place in our repository, and we need to review them in order to decide which changes to keep.

Git also allows us to work in multiple branches on our project.

Huh?

Let’s imagine that we find a bug, we might want to create a branch in our code to deal with resolving this bug.

Why?

Well, let us imagine a situation familiar to us all. We find a bug and squash it mightily, only to discover that our fix has created another problem… maybe a much bigger problem. The little bug that we killed we could have worked with, but now we have a show stopper bug running amok that we don’t know how to stop, and the show starts in 45 minutes. Troubleshooting in another branch allows you to make fixes and then integrate them back into your project when you’re confident that they don’t create other problems.
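Here’s a minimal sketch of that commit-and-branch flow from the command line – the file, message, and branch names are all placeholders:

# commit just the one tox that changed, with a note about why
git add tox/base_calibration.tox
git commit -m "save calibration configurations to disk"

# cut a branch to squash the bug away from your working code
git checkout -b fix/save-config-bug

# ...fix, test, and commit on the branch...

# fold the fix back in once you trust it
git checkout master
git merge fix/save-config-bug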

Best of all, git creates a history that lets us step back through each one of our commits. At any point we can return to a past state of our project. Taking advantage of modular design and externalized files makes this even more powerful, because it means that with a little work we can resurrect ideas that might otherwise have been lost or discarded.

That’s a big commercial for git so far, but no cautionary tales – and cautionary tales there are plenty. While git is a phenomenal tool, it can be a bit beastly to wrangle when you’re first starting – know that you’ll get frustrated, know that you’ll curse it, know that you’ll want to give up on using it, then count to 10 and give it another go. Like all things you’re unfamiliar with, take your time to get used to it as a tool set. You might choose to start by using a GUI tool, but at some point you should learn some of the commands to run directly from the command line. You’ll also want to avoid checking in any of your assets. These days I divide up my projects and separate my assets from my code. I have a few exceptions to this practice, but I try to keep them few and far between – really only allowing for template files or grids. Essentially, I try to make sure that it’s only files that are going to change infrequently.

If you’re going to use git, you’ll probably also use something like GitHub or Bitbucket. These are great services that host your code online so you can get to it from anywhere. These services also come with some great tools for tracking action items and bugs, and for maintaining documentation for your project. Take advantage of these tools.

learn more about git at https://git-scm.com/
start using git with github or bitbucket

See more comics from XKCD at https://xkcd.com/

Extensions Everywhere

I’ve talked plenty about extensions – how to use them, and what they’re good for. I’ve had some spirited discussions about when to use Extensions vs. Modules, and there’s a healthy conversation to be had there for those who want to really get into the weeds of it all. The place I’ve finally landed is this – I use extensions anytime I want to extend the capabilities of a given component. Case to case the degree of extension varies – sometimes I take fuller advantage of using classes than others, and that’s okay. To me, it’s okay if I don’t always take complete advantage of the difference between Extensions and Modules. Sticking with extensions means choosing an approach that works for 99% of the use cases where I need more flexibility in my programming. For me that means a little bit of stability from the Touch paradigm, where there are almost always at least a handful of different ways to solve the same problem. You might like Modules better, and that’s okay. In this case study, however, we’ll see the use of Extensions.

We’ll also see a move away from fragmenting our code across our network. It’s often tempting to use a button deep down in a UI somewhere to perform a logical operation. The only catch is that now that script is buried deep down in your project somewhere, and should it ever give you problems, good luck finding it. For the most part, on large projects I aim to keep the lion’s share of my operations focused in an extension. This takes a little more time to write, but it means that things are more organized, easier to diff between check-ins, that I can change my code in a single place, and that I’m writing functions that are callable from other parts of the network. In my experience so far, this little amount of extra work makes for a larger savings across the whole project.
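To make that concrete, here’s a minimal extension skeleton – the class and method names are hypothetical, not part of any shipped component:

# a minimal sketch – CalibrationExt and SaveConfig are
# hypothetical names for illustration
class CalibrationExt:
    """Centralizes the logic for our calibration pipeline."""

    def __init__(self, ownerComp):
        # the COMP this extension is attached to
        self.ownerComp = ownerComp

    def SaveConfig(self, path):
        """Write the current calibration state to an external file."""
        # methods that start with a capital letter can be promoted,
        # so other parts of the network can call
        # op('base_calibration').SaveConfig(...)
        pass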

More about Extensions
Python in TouchDesigner | Extensions
Understanding Extensions

documenting your code – what are docstrings?

I’m a broken record. I know, but really – leave yourself notes, voluminous ones. Better yet, when using extensions make sure to add some docstrings.

“What are docstrings?” you ask.

I’m so glad you did! Docstrings are a feature in several programming languages that allow you to embed comments or information into your code that can be inspected at run time. If you’re still scratching your head, that’s okay. Let’s look at an example – this one is from the Wikipedia page about docstrings:

"""
Assuming this is file mymodule.py, then this string, being the
first statement in the file, will become the "mymodule" module's
docstring when the file is imported.
"""

class MyClass(object):
    """The class's docstring"""

    def my_method(self):
        """The method's docstring"""

def my_function():
    """The function's docstring"""

“Great. But why are these useful?”

They’re useful in lots of ways, but for me they have some special importance. For starters, they help ensure that I’m documenting my code as I go. The more tips you leave yourself along the way, the happier you’ll be in the long run. It does make your code longer, and it does take more time. But, if you’re really invested in building something reusable that you can come back to and refine over time, these are solid investments that make the process smoother in the future.

You can get to these notes from anywhere. This internal documentation can be called from anywhere in the network, so if you’ve forgotten how many arguments your method takes, or what it’s supposed to return, you can get some quick answers without having to track down the extension. When you start to write lots of complex methods this can become a real help. It’s probably not something you need when you’re first working on the project, but if you’re collaborating with someone else or coming back to your project over time it’ll be well worth the time and effort.
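For example, using the mymodule file from the Wikipedia example above, you can read any of those docstrings at run time:

import mymodule

# module, class, and method docstrings are all inspectable
print(mymodule.__doc__)
print(mymodule.MyClass.my_method.__doc__)

# help() prints a formatted version – in TouchDesigner it lands
# in the textport
help(mymodule.my_function)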

Finally worth considering is that docstrings are a part of the larger Python style guide. Like it or not you’re an engineer, and at this point you’re turning into a budding Python programmer. Learning about the best practices of other programmers outside of the TouchDesigner silo helps broaden your perspective and exposes you to how other people work. For better or worse I spent a lot of time in the liberal arts, and one of the core tenets of the ideology that underpins the arts is that exposure to other ideas and perspectives is important in its own right. I’ve carried this with me through life, and I think even in the case of programming languages it’s worth considering. The conventions and practices of any community teach you about how they think of the world – how it’s organized, how it ought to work, what that group values, or what they don’t value. The Python community is one that we participate in, even if only mostly from our gray network of operators.

learn more about docstrings
see google’s python style guide for docstrings

Mapping out our Project

You know the saying – measure twice, cut once.

Before digging into a new project make sure you ask a lot of questions, map out as many details as possible, and make a plan.

Okay, so what about our project?

For starters

  • We know that we’ll have five projectors and one monitor that acts as a UI.
  • We know the resolution of all of our displays.
  • We know that we’ll externalize lots of files in our network.
  • We know that we’re going to use git as our version control system.
  • We know that we are going to work with other people.
  • We know that we need to document our work.
  • We know that we’ll need a way to control warping for the five different displays.
  • We know that we’ll need a few other controls – both for the mapping, and also to make space for the future UI elements to control playback.
  • We know we need to previs the layout.
  • We know that we’re looking to build out a complete application with all of the features that you might expect from that kind of build.

Whew! Okay, so where do we start? Before we start throwing down ops let’s think through a plan.

We’ll need a container that holds our UI, we’ll also need a container for each of our displays. We know that it’s best to just use a single perform window that spans multiple monitors, so we should make sure we plan for that when we’re developing our project.
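As a quick sanity check on that single spanning window, the math might look something like this – the 1920 x 1080 resolutions here are stand-ins, since we already know our real display resolutions:

# a rough sketch of the spanning perform-window math –
# 1920 x 1080 for every display is an assumption
PROJ_W, PROJ_H = 1920, 1080
UI_W = 1920
NUM_PROJECTORS = 5

total_w = UI_W + NUM_PROJECTORS * PROJ_W  # 11520
total_h = PROJ_H                          # 1080

print(total_w, total_h)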

We also need to think about how we’re going to set up rendering our scene so we can previs it without turning on all the projectors. That will mean a multi camera set up and some live rendering.

We should also think about how we might ingest video to display for calibration and test playback. On top of that we should also build out a test module to help the creative programmer understand the correct formatting for how to work in the framework we’ve created.

We should also plan to make some space in our build to house assets that are needed across our network, as well as a place to keep our calibration data. We’ll take advantage of the stoner, so we better make some time to pull that apart and better understand how it works.

We’ll use a few sets of extensions, so we should also make sure that we know how those work before we dig in too deep.

Finally, we should make sure we know how git works, since we’re going to use that along the way.

We’re bound to run into some other things we need to suss out as we go, but this should give us a solid starting point in terms of understanding the project from a distance.

Next we’ll take a closer look at the stoner to make sure we understand this component we’re going to use, then we’ll dig in and start building.