Category Archives: media design

TouchDesigner | Finding Dominant Color

Programming is a strange practice. It's not uncommon that in order to make what's really interesting, or what you promised the client, or what's driving a part of your project, you have to build another tool.

You want to detect motion, so you need to build out a means of comparing frames, and then determining where the most change has occurred. You want to make visuals that react to audio, but first you need to build out the process for finding meaningful patterns in the audio. And on and on and on.

And so it goes that I've been thinking about finding dominant color in an image. There are lots of ways to do this, and one approach is to use a technique called KMeans clustering. This approach isn't without its faults, but it is interesting and relatively straightforward to implement. The catch is that it's not fast enough for a realtime application – at least not if you're using Python. So what can we do? Well, we can still use KMeans clustering, but we need to understand how to use multi-threading in Python so we don't block our main thread in TouchDesigner.

The project / tool / example below does just that – it’s a mechanism for finding dominant color in an image with an approach that uses a background thread for processing that request.


TouchDesigner Dominant Color

An approach for finding dominant color in an image using KMeans clustering with scikit-learn and openCV. The approach here is built for realtime applications using TouchDesigner and Python multi-threading.

TouchDesigner Version

099
Build 2018.22800

Python Dependencies

  • numpy
  • scipy
  • sklearn
  • cv2

Overview

A tool for finding Dominant Color with openCV.

Here we find an attempt at locating dominant colors from a source image with openCV and KMeans clustering. The big idea is to sample colors from a source image, build averages from clustered samples, and return a best estimate of dominant color. While this works well, it's not perfect, and in this class you'll find a number of helper methods to resolve some of the shortcomings of this process.

Procedurally, you'll find that the process starts by saving out a small resolution version of the sampled file. This is then handed over to openCV for some preliminary analysis before being handed over again to sklearn (scikit-learn) for the KMeans portion of the process. While there is a built-in function for KMeans sorting in openCV, the sklearn method is a little less cumbersome and has better reference documentation for building functionality. After the clustering process each resulting sample is processed to find its luminance. Luminance values outside of the set bounds are discarded before assembling a final array of pixel values to be used.
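If you're curious how those pieces hang together, here's a condensed sketch of the idea – this is not the module's actual source, and the function name and defaults are just for illustration:

import cv2
from sklearn.cluster import KMeans

def dominant_colors(image_path, n_clusters=5, bounds=(0.1, 0.9)):
    # read the small temp image and convert openCV's BGR to RGB
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # flatten to a list of pixels, normalize, and cluster
    pixels = img.reshape(-1, 3) / 255.0
    kmeans = KMeans(n_clusters=n_clusters).fit(pixels)

    # discard cluster centers whose luminance falls outside the bounds
    in_bounds = []
    for r, g, b in kmeans.cluster_centers_:
        luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
        if bounds[0] <= luminance <= bounds[1]:
            in_bounds.append((r, g, b))
    return in_bounds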

It's worth noting that this method relies on a number of additional Python libraries. These can all be pip installed, and the recommended build approach here would be to use Python35. In the developer's experience this produces the fewest errors and issues – and boy did the developer stumble along the way here.

Another consideration you'll find below is that this extension supports a multi-threaded approach to finding results.

Parameters

Dominant Color

  • Image Process Status – (string) The thread process status.
  • Temp Image Cache – (folder) A directory location for a temp image file.
  • Source Image – (TouchDesigner TOP) A TOP (still) used for color analysis.
  • Clusters – (int) The number of requested clusters.
  • Luminance Bounds – (int, tuple) Luminance bounds, min and max expressed as values between 0 and 1.
  • Clusters within Bounds – (int) The number of clusters within the Luminance Bounds.
  • Smooth Ramp – (toggle) Texture interpolation on output image.
  • Ramp Width – (int) Number of pixels in the output Ramp.
  • Output Image – (menu) A drop-down menu for selecting a ramp or only the returned clusters.
  • Find Colors – (pulse) Issues the command to find dominant colors.

Python

  • Python Externals – (path) A path to the directory with python external libraries.
  • Check Imports – (pulse) A pulse button to check if sklearn was correctly imported.

Using this Module

To use this module there are a few essential elements to keep in mind.

Getting Python in Order

If you haven’t worked with external Python Libraries inside of Touch yet, please take a moment to familiarize yourself with the process. You can read more about it on the Derivative Wiki – Importing Modules

Before you can run this module you'll need to ensure that your Python environment is correctly set up. I'd recommend that you install Python 3.5+ as that matches the Python installation in Touch. In building out this tool I ran into some wobbly pieces that largely centered around installing sklearn using Python 3.6 – so take it from someone who's already run into some issues: you'll encounter the fewest challenges / configuration issues if you start there. Sklearn (the primary external library used by this module) requires both scipy and numpy – if you have pip installed the process is straightforward. From a command prompt you can run each of these commands consecutively:

pip install numpy
pip install scipy
pip install sklearn

Once you've installed the libraries above, you can confirm that they're available by invoking python in your command prompt and then importing the libraries one by one. Testing to make sure you've correctly installed your libraries in a Python-only environment first will help ensure that any debugging you need to do in TouchDesigner is more straightforward.
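That check might look something like this – if all three imports come back without an error, you're in good shape:

python
>>> import numpy
>>> import scipy
>>> import sklearn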

Working with TouchDesigner

Python | Importing Modules

If you haven’t imported external libraries in TouchDesigner before there’s an additional step you’ll need to take care of – adding your external site-packages path to TouchDesigner. You can do this with a regular text DAT and by modifying the example below:

import sys

# the path to your external site-packages directory
mypath = "C:/Python35/Lib/site-packages/mymodule"

# only add the path if it isn't already part of sys.path
if mypath not in sys.path:
    sys.path.append(mypath)

Copy and paste the above into your text DAT, and modify mypath to be a string that points to your Python externals site-packages directory.

If that sounds a little out of your depth, you can use a helper feature on the Dominant Color module. On the Python page, navigate to your Python Externals directory. It will likely be a path like: C:\Program Files\Python35\Lib\site-packages

Your path may be different, especially if you didn't use the checkbox to install for all users when you installed Python. After navigating to your externals directory, pulse the Check Imports parameter. If you don't see a pop-up window then sklearn was successfully imported. If you do see a pop-up window then something is not quite right, and you'll need to do a bit of leg-work to get your Python pieces in order before you can use the module.

Using the Dominant Color

With all of your Python elements in order, you’re ready to start using this module.

The process for finding dominant color uses a KMeans clustering algorithm for grouping similar values. Luckily we don't need to know all of the statistics that go into that mechanism in order to take full advantage of the approach, but it is important to know that we need to be mindful of a few elements. For this to work efficiently, we'll need to save our image out to an external file, which means you need to make sure that this module has a cache for saving temporary images. The process will verify that the directory you've pointed it to exists before saving out a file, and will create a directory if one doesn't yet exist. That's mostly sanity checking to ensure that you don't lose time trying to figure out why your file isn't saving.
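That sanity check only takes a couple of lines of Python – something in the spirit of this sketch (the function name here is illustrative, not the module's actual call):

import os

def ensure_cache(cache_dir):
    # create the temp image directory if it doesn't already exist
    if not os.path.isdir(cache_dir):
        os.makedirs(cache_dir)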

Given that this process happens in another thread, it's also important to consider that it operates on a still image, not a moving one. While it would be slick to have a fast operation for finding KMeans clusters in video, that's not what this tool does. Instead the assumption here is that you're using a single frame of reference content, not video. You point this module to a target source by dropping a TOP onto the Source Image parameter.

Next you'll need to define the number of clusters you want to look for. Here the term clusters is akin to the target number of dominant colors you're looking to find – the top 3, the top 10, the top 20? It's up to you, but keep in mind that more clusters take longer to produce a result. You're also likely to want to bound your results with some luminance measure – for example, you probably don't want colors that are too dark, or too light. The luminance bounds parameters are for luminance measures that are normalized as 0 to 1. Clusters within Bounds, then, tells you how many clusters were returned from the process that fell within your specified region. This is, essentially, a way to know how many swatches work within the brightness ranges you've set.

The output ramp from this process can be interpolated and smooth, or made of nearest-pixel swatches. You can also choose to output a ramp that's any length. You might, for example, want a gradient that's spread over 100 or 1000 pixels rather than just the discrete samples. You can set the number of output pixels with the Ramp Width parameter.

On the other side of that equation, you might want only the samples that came out of the process. In the Output Image parameter, if you choose clusters from the drop-down menu you'll get only the valid samples that fell within your specified luminance bounds.

Finally, to run the operation pulse Find Colors. As an operational note, this process would normally block / lock up TouchDesigner. To avoid that unsavory circumstance, this module runs the KMeans clustering process in another thread. It's slightly slower than if it ran in the main thread, but the benefit is that Touch will continue running. You'll notice that the Image Process Status parameter displays Processing while the separate thread is running. Once the result has been returned you'll see Ready displayed in the parameter.
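The pattern at work is roughly the one sketched below – a worker function wrapped in a thread, with the result handed back through a list that the main thread polls. The names are illustrative, and dominant_colors() stands in for the KMeans routine sketched earlier:

import threading

results = []  # the worker deposits finished color lists here

def worker(image_path, n_clusters):
    # the slow KMeans work happens off the main thread
    results.append(dominant_colors(image_path, n_clusters))

def FindColors(image_path, n_clusters=5):
    # start the background thread so Touch keeps drawing frames
    thread = threading.Thread(target=worker, args=(image_path, n_clusters))
    thread.start()

One caveat if you roll your own version of this: touching operators from a background thread isn't safe in TouchDesigner, so keep the worker to pure Python and do your op reads and writes from the main thread.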

Download from Github

https://github.com/raganmd/touchdesigner-dominant-color

Notes from Other Programmers

I don’t use CONDA, but for those of you that do, you can install sklearn with the following command:
conda install scikit-learn

presets and cue building – a beyond basics checklist | TouchDesigner 099

from the facebook help group

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

https://www.facebook.com/groups/touchdesignerhelp/permalink/835733779925957/

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

  • Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don't. Your mileage may vary here, so like any general advice please think through the implications of your decisions before committing to an idea to implement.
  • The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit. 
  • I'll be wrong about some things. The scope of anyone's knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don't work, or that something I suggest won't work actually does. As with all advice, it's okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what's the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you're up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn't matter where you do it, just give yourself an outline to follow – otherwise you'll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren't always sexy, but they're more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

  • What existing tools do you like – what’s their data structure / solution?
  • How is your data organized – arrays, dictionaries, etc.
  • Do you have a readme to refer back to when you extend your project in the future?
  • Do you have a way to add entries?
  • Do you have a way to recall entries?
  • Do you have a way to update entries?
  • Do you have a way to copy entries?
  • Do you have a validation process in-line to ensure your entries are valid?
  • Do you have a means of externalizing your cues and other config data?

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you're always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you're driving your component in relationship to time provided by your controller.

How you work with time in your TOXes, and in your project in general, can't be overstated as something to think carefully about. Whatever you decide in regards to time, just make sure it's a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you're really after here, so it's a good start to really dig into what you're expecting your project to accomplish. If you're just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are upsides and downsides to using this method, so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of Python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use Script CHOPs or SOPs? Make sure that you're being careful with how you're driving their parameters. Python offers an amazing extensible scripting language for Touch, but it's worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you'll find a large array of Movie File In TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

[image: icon_library.PNG]

[image: icon_library_example.PNG]

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of a dependency – a given visuals TOX wouldn't operate correctly in a network that didn't also have our assets TOX. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that's all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven't started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you're going to find. In the past I've used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it's on or off. When the module is turned on it sets off a series of scripts to ensure that things are correctly set up, and when it's turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
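As a sketch, that kind of extension might look like the class below – the op names inside are hypothetical, and yours will differ:

class ModuleExt:
    def __init__(self, ownerComp):
        self.ownerComp = ownerComp

    def Play(self, state):
        if state:
            # set up - cue media and start timers for this module
            self.ownerComp.op('moviefilein1').par.cuepulse.pulse()
            self.ownerComp.op('timer1').par.start.pulse()
        else:
            # clean up and reset so we're ready for the next Play() call
            self.ownerComp.op('timer1').par.initialize.pulse()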

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That's all well and good Matt, but I don't get it – why should my TOXes all have a fixed number of custom pars? Let's consider building a data structure for cues. Let's say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:
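# illustrative only - imagine every tox brings its own par names
cues = {
    'cue1': {'tox': 'fx/blur.tox',
             'pars': {'Blursize': 0.5, 'Passes': 3}},
    'cue2': {'tox': 'fx/feedback.tox',
             'pars': {'Feedbacklevel': 0.8, 'Hueoffset': 0.2}},
    'cue3': {'tox': 'fx/particles.tox',
             'pars': {'Birthrate': 200, 'Life': 2.5, 'Wind': 0.1}},
}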

That's a bummer. Looking at this we can tell right away that there might be problems brewing at the circle k – what happens if we mess up our tox loading / targeting and our custom pars can't get assigned? In this set-up we'll just fail during execution and get an error… and our TOX won't load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy Python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a namespace with the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we'd likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:
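# illustrative only - every tox shares the same fixed par namespace
cues = {
    'cue1': {'tox': 'fx/blur.tox',
             'pars': {'Float1': 0.5, 'Float2': 3, 'Float3': 0}},
    'cue2': {'tox': 'fx/feedback.tox',
             'pars': {'Float1': 0.8, 'Float2': 0.2, 'Float3': 0}},
    'cue3': {'tox': 'fx/particles.tox',
             'pars': {'Float1': 200, 'Float2': 2.5, 'Float3': 0.1}},
}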

Okay, so that's prettier… so what? If we look back at our lesson on dictionary for loops we'll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially, because we're able to store the par name as the key and the target value as the value in our dictionary, we're just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
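For what it's worth, the loading loop for that uniform structure can be tiny – here's a sketch using setattr to address pars by the string keys in our dictionary (the op and dictionary names are the hypothetical ones from above):

target = op('fx_container')
for name, value in cues['cue1']['pars'].items():
    # address each custom par by the key stored in our dictionary
    setattr(target.par, name, value)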

Cultivate Realistic Expectations

I don't know that I've ever met a community of people with such high standards of performance as TouchDesigner developers. In general we're a group that wants 60 fps FOREVER (really we want 90, but for now we'll settle), and when things slow down or we see frame drops, be prepared for someone to tell you that you're doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you'll never drop below 60, it's better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black while your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated as well, or point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

Richard's killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I'd always suggest keeping the UI on another machine or in a separate instance. It's handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc in. I've been very careful to take care of the little details in the ui too such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that's a slider, sometimes even they don't need them.

I'm with Richard 100% here on all fronts. That said, be mindful of why and when you're splitting up your processes. It might be tempting to do all of your video handling in one process that gets passed to a process only for rendering 3D, before going to a process that's for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There's a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

  • “What problem am I solving by adding this level of additional complexity?”
  • “Is there another way to solve this problem without an additional process?”
  • “What are the possible problems / issues this might cause?”
  • “Can I test this in a small way before re-factoring the whole project?”

Don't Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process it's important to know what you're loading at start, and what's only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren't going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they're ready to be called without any frame drops. Or you might think about a way to automate your start-up so you can test to make sure you have all your assets (especially if your dev computer isn't the same as your performance / installation machine).
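That force-cook function can be modest – something like this sketch, where 'shared_assets' stands in for whatever base holds your assets:

def precook_assets():
    # force cook every TOP in our shared assets bin so textures
    # are resident in memory before playback starts
    for child in op('shared_assets').findChildren(type=TOP):
        child.cook(force=True)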

Logging and Errors

It's not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it'll be important to see where things went wrong. Sometimes the default logging methods aren't enough, or they happen too fast. A good logging methodology and format can help with that. You're welcome to make your own; you're also welcome to use and modify the one I made.
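If you do roll your own, even a few lines go a long way. A bare-bones sketch might look like this – the target text DAT name here is up to you:

import datetime

def log(message, level='INFO'):
    # append a timestamped line to a text DAT called 'log_text'
    stamp = datetime.datetime.now().isoformat()
    op('log_text').write('{} | {} | {}\n'.format(stamp, level, message))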

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it's at. Simple, complete proof-of-concept tests that aren't baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren't always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

Advanced Instancing | Puzzle Pieces | TouchDesigner

Part 1

Core Concepts

  • Instancing geometry
  • Replicators and Clones
  • The Sort SOP
  • Images composed out of component pieces
  • Real time rendering


Part 2

Core Concepts

  • Instanced geometry from pixel data
  • Texture Arrays – the 3D Texture TOP
  • The Sort SOP
  • Images composed out of component pieces
  • Real time rendering

Advanced Instancing | Instancing with the Animation COMP | TouchDesigner

Once upon a time this had an audio track, and then suddenly it didn't. A fix is coming by the end of the week. Sorry for the delay.

Core Concepts

  • Instancing geometry
  • Working with the Animation COMP
  • Building Animation Channels
  • The Shuffle CHOP
  • Real time rendering

Understanding Referencing | TouchDesigner

Referencing is one of the most powerful tools at the programmer's disposal in TouchDesigner. Referencing creates a direct link between two or more floats or integers. This allows you to link operators that are outside of their respective families – normally you can only connect CHOPs to CHOPs and TOPs to TOPs, but referencing allows you to create connections between nearly any operators. There are a number of ways to create these links with references or expressions. In many of the other posts that I've written I often write about using expressions and references, but haven't taken much time to talk in depth about what this is and how it all works. Let's change that.

I started thinking about this when I saw a post on the Derivative forum from a new user struggling to understand what I'd written in some earlier tutorials. Expressions are something that I continue to learn more about, and they open up all sorts of opportunities for faster, more streamlined, and more elegant programming. Let's start by looking at the typical kinds of referencing that you might do on any project. Specifically, let's look at how we might connect one family of operators to another. In this example we'll look at connecting a CHOP to a TOP, and all of the different ways we might do that.

[image: all reference types]

In the image above we are actually referencing the same CHOP in four different ways. We can start this by first talking about how we connect two operators from different families. In this example I'm going to use a Noise CHOP and a Circle TOP. I want to use the pseudo-random noise from the Noise CHOP to drive the vertical position of my circle in my Circle TOP. There are two major ways that we can make this connection: dragging and dropping the CHOP onto the TOP, or writing an expression that connects the two of them. I often opt for writing the expression – I do this because I think it's good practice, and it has helped me better understand the syntax and structure of using expressions in references. We'll take a look at both of these methods.

Let's start with the drag and drop method. To use the drag and drop method we need the source operator to be viewer active (there are some exceptions, but it's a good rule of thumb that your source probably needs to be viewer active to do this). We can make an operator viewer active by clicking on the + symbol in the bottom right corner, or by holding down the alt key (holding alt will make all operators viewer active). You can tell an operator is viewer active because the color-coded border disappears, and usually a portion of it is highlighted when you mouse over the operator.

[image: viewer active]

Let's take a moment to better understand the anatomy of an operator while we have this example handy. On the upper left corner of the operator we have a few different toggle switches – viewer, clone immune, bypass, and lock. Along the very bottom of our operator we have its name (you can make changes to this field), any flags associated with the operator family, and the viewer active toggle. Having a solid sense of the anatomy of your operators becomes increasingly important the longer you work with TouchDesigner.

Alright, now that we know how to toggle our viewer active mode on and off, and know a little more about our operators' anatomy, let's look at how to build a reference. Let's make a Noise CHOP in our network as well as a Circle TOP. With the Noise CHOP viewer active, click on the name "chan1" in the viewer, and drag it to the Y parameter of the Circle TOP (I've made my circle a little smaller, to make this easier to see):

[image: export]

As you do this you should see a drop down menu appear; let's select "Export CHOP" from the list. You should now see the Y position (or center 2) changed to a green color. You should also see some text show up as well. Here's a closer look at just the parameter we've changed in the circle TOP:

[image: export CHOP]

Looking closer we can see that the text reads: noise_active:chan1. Great, but what does that mean?! Well, if we take a closer look at our Noise CHOP we can see that I changed the name of that operator to "noise_active" – we also see that the name of our noise channel is "chan1". If we were to abstract what we're seeing in the export language we might write something like this:

source_operator_name:source_channel_name

Exporting is a fine way to connect operators, but it's not my personal favorite. I say this because exporting creates a locked relationship. Once you've done this you can't change the text in the target operator. Exporting creates a much more permanent relationship between your operators. To remove the export you'll need to right click on the parameter field and select "remove export." Surely there's a better way to connect operators?!

In fact, there are still three more ways to connect operators. Taking a closer look at the drop down menu that appears when we use the drag and drop method, we see that there are several methods:

[image: current value]

All of these are ways that we can connect two operators together, and many of them yield the same results, so what gives?

  • Export CHOP – we've already seen this method, and we know that one of its limitations is that it creates a fixed relationship between two operators. This is excellent for creating something more finished, given its locked-in nature.
  • Text – Text exports the pathway to a particular channel.
  • Current CHOP Value – this exports the value of the operator in question at the precise moment that you drag and drop. Rather than a continually updating value this is just a single float or integer.
  • Relative CHOP Reference – the relative reference exports a python expression that points to the operator being referenced. A relative reference makes for easy cutting and pasting so long as the network hierarchy relationships remain constant between operators.
  • CHOP Reference (sometimes called Absolute Reference) – the absolute reference also creates a python expression pointing to an operator. The difference here is that it includes the precise pathway to the operator in question making cutting and pasting a bit more frustrating.

For now we’ll take a pass on the “Text” and “Current CHOP Value” options as these have more limited uses. Let’s now take a close look at our Relative and Absolute Reference options.

Relative Referencing

Let’s go ahead and make another circle in our network, and this time let’s create a relative reference between our noise CHOP and our circle TOP.

[image: relative reference]

Taking a closer look at our expression we can see that it reads:

op("noise1")["chan1"]

Okay, what does this mean? Let's start by looking at the syntax of this expression. First we can see that we're looking for an operator. We know this because our expression starts with op(). Next comes the name of the operator in quotation marks. As an important note, Python doesn't care if you use double quotes or single quotes so long as they match. This means that "noise1" and 'noise1' are both equal and produce the same results; 'noise1" or "noise1' however will not work. Finally we see the name of the channel in question in brackets and in quotes – ["chan1"]. This means that our syntax looks something like operator("exact_name_of_operator")["desired_channel"]. Okay, let's look at another example to make sure we have a firm understanding of how relative referencing works.

Let's make a Constant CHOP. Let's change the name of the constant to "fruits" and name the first three channels "apple", "pear", and "lemon". You should have something that looks like this:

[image: fruit constant]

Alright, now let's add a Circle TOP to our network. This time, instead of using the drag and drop method we'll write out the Python expression to create a reference to our constant. We'll start by referencing our apple channel. This means our expression is going to be:

op('fruits')['apple']

You can write this expression directly in the target parameter field of the target operator. When you're done you should have something like this:

[image: fruits top]

If you drag the slider in the Constant CHOP to the right, you should now see the circle move up in the viewer. So we've successfully connected our circle TOP to the apple channel – why is this any better than just exporting? Well, let's say that for whatever reason you change your mind while you're programming and decide that instead you'd prefer for the circle TOP to be connected to the "pear" channel. Written as an expression we can make that change simply by deleting "apple" and replacing it with "pear" or "lemon". Our expressions then would be:

op('fruits')['pear']
op('fruits')['lemon']

Additionally, if we've written a reference as an expression we can write some math directly into our reference. We might, for example, only want half of the value coming out of the apple channel. In this case we'd write the expression:

op('fruits')['apple'] * 0.5

This would divide every value in half, changing our scaling from 0 – 1 to 0 – 0.5 instead. We can also use this method to multiply a channel by another channel. For example, maybe we want to create a relationship between two different channels from our fruits constant. We might write the expression:

op('fruits')['apple'] * op('fruits')['lemon']

You could just as easily do this with a Math CHOP, but you might find that just writing the expression is faster, simpler, or more tidy.

Before we move on, there are two more modifiers that we need to know when writing relative references:

./
../

What on earth are these all about? Well, these are handy directory pointers. At some point you will surely end up wanting to reference an operator that is in another part of your network – a control panel, a material, a slider, you name it – if you program in Touch long enough, you're gonna need these. So what do they mean:

./ – this modifier means the network inside of me
../ – this modifier means the network above me

If you’re scratching your head, that’s okay. Let’s look at an example. Let’s say that we have a Geometry component, and inside of it we have placed a material – a constant that’s red. A relative reference for that material would be “./constant1”. This means, look inside of me for the material called “constant1”.

[image: constant1]

So how does ../ work then? Imagine that you'd like the alpha of the constant in our geo to be connected to the noise in our parent network. Here we can write a reference that looks like this:

op('../noise1')['chan1']

Here’s what that would look like:

[image: dot dot slash]

Absolute References

Now that we understand what relative references are, what are absolute References? Unlike relative references, absolute references require the entire network path to an operator. In the case of our noise and circle example, that means that our reference looks like this:

op("references/noise1")["chan1"]

[image: absolute]

Absolute references require that you know exactly where in your network you're referencing an operator from, because you have to use the entire network path. That sounds like a pain, so why use them? Well, let's imagine that you're building a complex program and you're trying to be as tidy and organized as possible. You might build a large chunk of your user interface in a single location. This means that all of the sliders, buttons, and menus that are being called all live in the same container. In this case, using an absolute reference makes good programming sense. Relative references will leave you constantly trying to figure out how many ../ to use when referencing your user interface. Absolute calls don't require this, as they point to a very specific place in the network. You can even simplify this by making sure that all of your buttons and sliders are joined with a merge CHOP.

To get a better sense of how this works, download the example .toe file and look at the last example that’s driven by sliders that control the level TOP. There’s lots more to learn about expressions, but practicing your references will help you begin to understand the syntax and logic of how they work.

Download this example toe to learn and explore some more – referencing toe

Let’s Make this Table Data Move | TouchDesigner

Working with live streaming data is about as good as it gets when it comes to programming – this is especially true if you're working on a project that looks to create a recognizable relationship between live data and media. What, then, is a person to do when placed in a situation without access to a live source of data? Whatever the project, the best way to tackle this problem is to find a source of prerecorded information. If you're working on something like motion tracking, using a pre-recorded video is an excellent solution to this problem. What about sensors that aren't image based? What if I'm dealing with a series of floats? What happens if those floats just come to me in a table? How can I take a series of recorded data points that live in a text file and make them move? That's exactly one of the problems that came up for me recently, and I've got a handy trick that will make it easy to work with a data set in a table as though it were streaming into your TouchDesigner network live.

To get started, we need a file with some data in it. I made a quick spreadsheet with two columns. One starts at 0.01 and goes up to 1, while the other column starts at .99 and counts down to 0. If you're following along, you can download that text file here (tabledata). In broad terms, what we're going to make is a simple network of operators that moves through a table, pulling one row of data at a time, and then converting that table information into CHOP data. We can see where we're headed if we look at our whole completed network:

[image: whole network]

So what's happening here? In the DAT called "data" we have a table of recorded values. Next I use a select to remove the header row from the data, and another select to move through the rows of data. Using another table, a transpose, and a merge gives us a table that's easy to convert into a CHOP. Now that we have a general sense of what's happening in this network, let's dig in and get to work.

We’ll start by adding a Table DAT to an empty network. Rather than entering data by hand, we can instead just point TouchDesigner to a file that we want it to use. In the Table DAT’s parameters dialogue we’ll click on the plus button to the right of the “file” field and then locate the file that we’re looking to use.

[image: tableDAT]

In order to see our table data we need to click on the "Reload File" button so that our table will be populated with the information from the file that we're using. Next we're going to use a few Select DATs to manipulate the contents of our table. We're going to use the first select to remove the header row of our table. To set this up, we'll set our select to extract rows by index, starting at 1.

[image: select1]

You'll also notice that specifying that we're extracting rows by index turns on an End Row Index value that's driven by an expression (me.inputs[0].numRows - 1). We're going to use the logic from this expression a little later on, so tuck that into the back of your mind for just a moment.

Next we'll use another Select DAT to move through the rows of our table. In adding another select, let's again set it up to extract rows by index. This time, however, we're going to change the value of the start row and end row index to be the same. Doing this, you should notice that we get only one row of our table. Try changing the values of these parameters – as long as both fields contain the same number you'll see only one row of information. We'll animate this in just a moment, taking advantage of that behavior.

[image: select2]

The next operator that we’ll add to this network is a Transpose DAT. A transpose will change rows into columns, giving us a result that’s two rows, rather than two columns of data.

[image: transpose]

While these changing values are ultimately what I'm after, I would also like my values to have names. To do this I'm going to add another Table DAT, creating two rows: xPos and yPos. I'm going to use a Merge DAT to combine these two tables – to make this work properly we'll need to set the merge to append columns. When we're done we should have something that looks like this:

[image: merge]

Alright, now that we have our DAT string set up, let's animate this table and look at how to get some CHOP data out of these DATs. First let's start by adding a Constant CHOP to our network. Let's give our first channel a useful name, and then call our absolute frame count (me.time.absFrame).

[image: constant]

Why use absolute frame? I'd like a steadily increasing integer that can be used to drive our progression through the rows of our table. Our absolute frame is an excellent candidate for this need – except that I don't want to exceed the maximum number of rows in my table. To do this let's add a Limit CHOP. First up, I'll need to set this operator to Loop; I'll also want to set this operator to start at 0 (Minimum).

[image: limit]

For the maximum value, I want to use the total number of rows in our second table (the table that contains only data, without a header). I could hard-code this by entering 200 into the Maximum parameter of our Limit, but then I'd have to change this number whenever my table changes. A better solution would be to use an expression to pull this number from the table in question – which is exactly what that expression we saw earlier does. The expression we want to use for our Maximum parameter is then: op('select1').numRows.

[image: limit2]

Now it's the moment we've been waiting for. Let's make that table move! To do this we'll use the row counter to drive our location in our table – we'll write some relative references in our select2 DAT to make this happen. In the Start Row and End Row Index values let's use the reference op('limit1')['row'] to drive the change in our table.

[image: limit3]

The last step here is to add a DAT to CHOP to our network. We’ll add this at the end of our network, and drag the target DAT onto the CHOP.

[image: dat to CHOP]

There we have it. We've just taken a static table full of data and turned it into channel data that changes over time. For extra credit, add a Trail CHOP after the DAT to CHOP to see what your data looks like.

[image: trail]

Inspired by Rutt Etra | TouchDesigner

[image: Fall of the House of Escher]

Back when I was working on The Fall of the House of Escher I became obsessed with live z-displacement in the media. That particular show was wrestling with questions of quantum physics, time, reality, and all manner of perceptual problems, and so it made a lot of sense to love the look and suggested meaning inherent in the displacement of an image in a third dimension. This particular technique is perhaps most well known to us because of the Rutt Etra Video Synthesizer. This video synth was an attempt to begin manipulating video in the same manner as sound was being manipulated in the 1970s, and was a truly groundbreaking examination of our relationship to video – live video especially. One of the most famous elements of the Rutt Etra synth was z displacement, the change in the depth of an image based on its luminance.

You can get a better sense of what this kind of effect looks like by playing with the Rutt Etra-izer online. This simple tool lets you play with one slice of what the original video synth did. Additionally, if you're a Mac user you might want to check out what v002 has to share when it comes to plug-ins, as well as some thoughts from Bill Etra. You can find more from v002 here: Rutt Etra v002

So that's all well and good, but what am I after? Well, in the pursuit of better understanding how to program with TouchDesigner, as well as how to explore some of the interesting ideas from the 1970s, I wanted to know how to replicate this kind of look in a TouchDesigner network. There are plenty of other people who have done this already, and that's awesome. I, however, happen to subscribe to the kind of art philosophy that asks students to copy the work of others – to practice their hands at someone else's technique, to take what works and leave what doesn't. Art schools have often required students to copy the work of masters in order to foster a better appreciation and understanding of a particular form, and I think there's a lot to that in programming. So today, we'll look at how to make this kind of effect in TouchDesigner and then ask how we might manipulate this idea in ways that differ from how the original method was intended to work.

[image: Rutt Etra almost]

To begin, this idea started when I was looking through the wiki page about Generative Design. Specifically, the sample network talking about image manipulation really got my attention. Buried in this example network is something that's Rutt Etra flavored, and it's from this example that I started to pull out some ideas to play with. As we work our way through this example it'll be easy to get lost, frustrated, or confused. Before that happens to you, take a moment to think about what we're trying to do. Ultimately, we want to take an image (video later, but for now an image) and add together the RGBA values of an individual pixel, then use that number to transform said pixel in the z dimension. In other words, we're really after creating a faux depth map out of an image. It's also important to remember that using an image that's 1920 x 1080 means we're talking about 2,073,600 pixels. That's a lot of points, so we're going to start by creating a grid that simplifies those dimensions – partially for our own sanity, partially for the sake of the processing involved, and partially to remain true to the aesthetic of the original idea. Once we replicate the original, then we can start to talk about where we can start to play. That's enough disclaimers for us to get started, so let's do some programming.

Let’s start by looking at the whole network:

From this vantage point we can see that we’re going to make a network that uses a little bit of everything – some texture operators (TOPs), some channel operators (CHOPs), and some surface operators (SOPs).

Starting with Texture Operators

First things first, let’s make a new container and dive inside. To our empty container let’s start by adding an In TOP as well as a Movie In TOP. We’re going to start by connecting the Movie In to the second input on the In TOP. Why? The Movie In TOP is going to allow us to pass a video stream into this container. We may, however, want some image to show up while we’re working or when we don’t have a video stream coming in; this is where that second input comes in handy. This gives us a default image to use when we don’t have a stream coming into the container. Here’s what you should have so far:

[image: in]

Next, we want to downsample some of this image. This step is actually helping us plan ahead – we're going to use some expressions to set the parameters of some of our other operators, and this step is going to help us with that. To do this down-sampling, we're going to use the Resolution TOP. After that we'll end our TOP string in a Null TOP. All of that should look like this:

[image: Rutt-TOPs]

Before we move on, let's make one change to the Resolution TOP in our string. Open up the parameters window for your Resolution TOP and on the Common page change the Output Resolution parameter to Quarter:

[image: resolution]

Channel Operators

There are a number of different transformations that we'll need to do with the data from our image, and we'll start this process by first moving to Channel operators. If we think back to where we started this process with an intent to transform luminance into depth, then we know that one of the operations we need to complete is to add together the RGBA values of our pixels in order to have a single number we can use to manipulate a surface. To get us started we first need to convert our image into channel data that we can manipulate. To do this we're going to add a TOP to CHOP to our network. This operator is going to allow us to look at our TOP as if it were all channel data. We'll see what that looks like in a moment. First, however, let's make a few changes to our operator. On the Image page for the TOP to CHOP, make sure that you're set to "Next Frame" as the download type. On the Crop page you'll also want to make sure that you're set to the full image:

Next we need to assign a TOP so we have a texture that we're converting into channel data. You can do this by dragging the TOP onto this CHOP, or you can enter the name of the target TOP on the Image page. Your TOP to CHOP should now look something like this:

[image: top to chop]

The viewer on this operator can be taxing on your system, so at this point I'd recommend that you click the small bulls-eye in the upper left hand corner of your CHOP to turn off the viewer. This will save you some system resources as you're working in your network and keep you from seeing too much lag while you're editing this network.

The next series of channel operators is going to allow us to make some changes to our data. First we’ll use a Shuffle CHOP to reorganize our samples into sets of channels, then we’ll use some Math CHOPs to add together our pixel values and to give us some control over the z-displacement, and finally we’ll use a Rename CHOP to make it easy to use this data.

Let's start this process by connecting our TOP to CHOP to a Shuffle CHOP and setting our method to be Sequence Channels by Name:

[image: shuffle]

Next we'll add our Math CHOP and set the Combine Channels operation to Add; let's also set the multiply value to 1/3:

[image: math]

Next we're going to add a slider, and another Math CHOP. Why? Great question. For starters, I want to be able to control the strength of this effect with a slider. At some point I'm going to want to be able to drive the strength of this effect, and a slider is a great way for us to do that. Why another Math CHOP? Another excellent question. While we could just use one Math CHOP and apply our slider there, that also means that there's no way to isolate the effect of the slider. There's some redundancy here, but there's also a little more flexibility in the isolation of applying different alterations to our data set. Is this the best way to code a final component? Maybe not, but it is a fine way to work with something that's still being developed. Alright, let's add our slider and second Math CHOP:

[image: slider math]

Next we need to write a simple expression in our math2 operator in order to be able to use the slider as an input method. On the Multi-Add page we’re going to use the out value from the slider as the value that the incoming channel information is multiplied by – if we think about the structure of our slider’s output we’ll remember that it’s a normalized value ranging from 0 – 1, and we can think of this as being the same as 0 – 100%. Alright, here’s our simple reference expression:

op( 'slider1/out1' )[ 'v1' ]

In plain English this expression reads: give me the number value of the channel called ‘v1’ from the operator called ‘out1’ that’s inside of ‘slider1’. If you click on the + sign on the bottom right of your slider (making it viewer active), you can now move it left and right to see the change in the number value of your math2 CHOP.

simple reference expression
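If you’d like to sanity-check this reference, you can print the same value from the textport – assuming your slider is named slider1:

# cast the channel to a float to see its current value
print( float( op( 'slider1/out1' )[ 'v1' ] ) )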

Before we move away from this portion of our network we need to add one final operator, a Rename CHOP. The Rename CHOP allows us to rename the channels inside of an operator. Later we’ll want to use this number to replace a value in a surface operator chain – in order to do that easily we need to rename this value. We’re using tz here because that’s the name of the z-translation channel that a SOP to CHOP produces, which will let us match up our two streams by name later on. In the To field of the Rename CHOP type tz:

rename
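For the script-minded, the same setting looks like this in Python – the internal parameter name (‘renameto’) is an assumption based on the Rename CHOP’s parameter labels:

# assumption: the Rename CHOP is named 'rename1'
op( 'rename1' ).par.renameto = 'tz'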


Surface Operators

Now that we have started the process of converting our texture into channel data, we need to think about what we’re going to do with that information. We’re going to start by adding a Grid SOP to our network. We’d like this operator to pull some of its dimensions from our video – this will make sure that we’re dealing with a surface that has the correct aspect ratio. In our Grid SOP we’re going to use some simple expressions to pull information from our null1 TOP (remember that null1 is at the end of our TOP chain). First we’ll use the height and width of our texture to set the number of rows and columns – we can use dot notation to ask for the height and width of our TOP with the following expressions for rows and cols:

op( 'null1' ).width
op( 'null1' ).height

Next we can internally reference the number of rows and columns for our height and width with the expressions:

me.par.cols
me.par.rows

We’ll also want to make sure that we’ve set our grid to be a Polygon with Rows as the connectivity type.
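These parameter assignments can also be wired up from a script – a hedged sketch, assuming your Grid SOP is named grid1 and that the internal parameter names (rows, cols) match their labels:

grid = op( 'grid1' )

# drive the grid's rows and cols from the texture's dimensions
grid.par.rows.expr = "op( 'null1' ).height"
grid.par.cols.expr = "op( 'null1' ).width"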

Now we’ll need to get ready for our next step by adding a CHOP to SOP. This operator is going to allow us to create some geometry out of our CHOP data.

chopto

What gives? Well, in order for this operator to work properly, we need to feed it some CHOP data.


Channel Operators Again

Now we’re finally making progress, even if it doesn’t feel like it just yet. For starters, we have our texture operators converted into channel data, and we have a piece of geometry that we can alter based on the dimensions of the input texture. We’re ready to combine our channel data with our surface data; we just have some final housekeeping to do.

Let’s start by first converting our Grid SOP to a CHOP with a SOP to CHOP. This gives us each point’s position as a set of tx, ty, and tz channels.

sopto

Like we did before, we’re going to use a Shuffle CHOP, but this time we’re going to set it to Sequence All Channels:

shuffle2

Next we’ll use a Math CHOP to find the absolute value of our shuffle – we can do this by setting the Channel Pre Op to be Positive:

absolute value

Next we’ll use an Analyze CHOP to find the maximum value coming out of our Math CHOP.

analyze

Now we’re going to normalize our data against itself by adding another Math CHOP to our network. We’ll connect its input to our sopto1 CHOP, and in this case we’ll use the Multi-Add page to multiply by 1 / the maximum from our Analyze CHOP – this scales all of our position values into a -1 to 1 range. We can do this with a simple reference expression:

1 / op( 'analyze1' )[ 'tx' ]

All of that should look something like this:

MORE math
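As with our luminance math, it can help to see this normalization in plain numpy – again, a stand-in sketch rather than TouchDesigner code:

import numpy as np

# stand-in for one of the grid's position channels
tx = np.random.uniform( -5, 5, 100 )

max_val = np.abs( tx ).max()          # Math CHOP (Positive) + Analyze CHOP (Maximum)
normalized = tx * ( 1.0 / max_val )   # Multi-Add page: multiply by 1 / max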

We can finally start putting all of this together with one more Math CHOP. This final Math CHOP is going to combine our SOP string and our TOP string. You’ll want to make sure that the SOP string is in the first input on the Math CHOP, with the TOP string coming in underneath. Next make sure that Combine Channels is set to Add, and that Match By is set to Channel Name. Because we renamed our luminance channel tz, it will only be added to the grid’s tz channel, leaving tx and ty untouched.

SOP TOP

Now let’s end this set of operations in a Null, and move back to where we left off with our surface operators. Back in our CHOP to SOP, let’s set our CHOP reference to be our last CHOP null (in my case it’s null2). All of that should finally look like this:

chop to 1

At long last we’ve finally transformed a grid in the z direction with the information from an input Texture. Ba da bing bang boom:

grid transformed


Rendering

Dear flying spaghetti monster help us… this has been a lot of work, but so far it’s not very fancy. What gives? Well, if we really want to make something interesting out of this, we need to render it – that is, we need to change it from being just some geometric information back into some pixels. If we think back to what we learned while we were playing with Instancing, rendering isn’t too hard, we just need a few things to make it work (some geometry, a camera, and a light source… this time we’ll skip that last one).

Let’s start by adding a camera, and a geometry component to our network.

geo camera

For our render to work properly, we need our chopto SOP to be inside of our Geo. At this point you can use your favorite piece of profanity and get ready to remake this network OR you can HOLD YOUR HORSES and think about how we might solve this problem. My favorite way to address this kind of issue is to jump inside of the Geo component, add an In SOP, and set it to render and display. This means that we can pass our geometry into this component without needing to encapsulate all of the geometry inside.

insop

Now let’s connect our chopto to the inlet on our Geo COMP:

geo connect

Ideally we want the image from our original texture to be used when rendering our z transformation. To do this, let’s jump back inside of our Geo COMP and add a Constant Material. A Constant, unlike a Phong, isn’t shaded – in other words, it doesn’t render with shadows. While this isn’t great in some respects, the payoff is that it’s much cheaper to render, which makes it a fine material while we’re getting started. We’ll also want to make sure that our Constant is using our original TOP as a Color Map. In the Color Map field we can tell this operator to look for the operator called in1 in the directory above with the following call:

../in1

constant mat

Next we need to apply this material to our Geo. To do that, let’s jump out of the Geo COMP and head to the Render page of the parameters window. We can tell our Geo to look for the material called constant1 that’s inside of the Geo COMP like this:

./constant1

geoconstant
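Both of these assignments can also be made from a script – a hedged sketch, assuming the Geo COMP is named geo1 and that the internal parameter names (‘colormap’, ‘material’) match their labels:

# point the Constant MAT's Color Map at in1, one directory up
op( 'geo1/constant1' ).par.colormap = '../in1'

# assign the material on the Geo COMP's Render page
op( 'geo1' ).par.material = './constant1'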

Holy macaroni, we’re almost there. The last step we need to take is to add a Render TOP to our network:

renderTOP
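If your Render TOP doesn’t find your camera and geometry on its own, you can point it at them explicitly – assuming the default names cam1 and geo1, and that the internal parameter names match their labels:

op( 'render1' ).par.camera = 'cam1'
op( 'render1' ).par.geometry = 'geo1'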

At long last we have finally replicated the z-translation aesthetic that we set out to emulate. From here you might consider changing the orientation of the geo comp, or the camera comp for a more interesting angle on your work.


Play Time

That’s all well and good, but how can we turn it up a little? Well, now that we have a portion of this built, we can start to think about what the video stream going into this operation looks like, as well as how we modify the video coming out of it. Let’s look at one example of what we can do with the video coming out.

I’ve made a few changes to our stream above (added some simple moving animation, and changed the resolution op) to have something a little more interesting to play with, but it’s still not quite right. One thing I want to look at is giving the lines a little more of a neon look and feel. To do this I’m going to start by adding a Blur TOP to my network:

blur

Next I’m going to add an Add TOP and plug both of my TOPs into it, adding the blur back to the original render:

addblur

Finally, I’m going to add a Constant TOP set to black, and a Composite TOP. I’ll composite my Add TOP and my Constant TOP together to end with a final composition:

composite

Now it’s your turn to play. What happens when you play with the signal processing after your render? What happens when you alter the video stream heading into this component? Also, don’t forget that we built a slider that controls the strength of the effect – play and make something fun.

rutrut


Looking to take a closer look at what makes this process work? Download the tox file and see what makes this thing tick: rut