
TouchDesigner | Reusable Code Segmentation with Python

reusable-code-segmentation.PNG

Thinking about how to approach re-usability isn't a new topic here; in fact, there's been plenty of discussion about how to re-use components saved as tox files, how to build out modular pieces, and how to think about using Extensions for building components you want to re-use with special functions.

That's wonderful and exciting, but those of us who have built deployed projects quickly start to think about how to build a standard project that any given installation is just a variation of… a flavor, if you will, of a standard project build. Much of that customization can be handled by a proper configuration process, but there are outliers in every project… that special method that breaks our beautiful project paradigm with some feature that's only useful for a single client or application.

What if we could separate our functions into multiple classes – those that are universal to every project we build, and another that’s specific to just the single job we’re working on? Could that help us find a way to preserve a canonical framework with beautiful abstractions while also making space for developing the one-off solutions? What if we needed a solution to handle the above in conjunction with sending messages over a network?  Finally, what if we paired this with some thoughts about how we handle switch-case statements in Python? Could we find a way to help solve this problem more elegantly so we could work smarter, not harder?

Well, that’s exactly what we’re going to take a look at here.

First a little disclaimer, this approach might not be right for everyone, and there’s a strong assumption here that you’ve got some solid Python under your belt before you tackle this process / working style. If you need to tool up a little bit before you dig in, that’s okay. Take a look at the Python posts to help get situated then come back to really dig in.


Getting Set-up

In order to see this approach really sing we need to do a few things to get set up. We'll need a few operators in place to see how this works, so before we dig into the Python let's get our network in order.

First let’s create a new base:

base.PNG

Next, inside of our new base let’s set up a typical AB Deck of TOPs with a constant CHOP to swap between them:

typical-ab-deck.PNG

Above we have two moviefilein TOPs connected to a switch TOP that's terminated in a null TOP. We also have a constant CHOP terminated in a null whose chan1 value is exported to the index parameter of our switch TOP.

Let’s also use the new Layout TOP in another TOP chain:

layout-top.PNG

Here we have a single layout TOP that's set up with an export table. If you've never used DAT Exports before you might quickly check out the article on the wiki to see how that works. The dime tour of that idea is that we use a table DAT to export vals to another operator. This is a powerful approach for setting parameters, and certainly worth knowing more about / exploring.

Whew. Okay, now it's time to set up our extensions. Let's start by creating three text DATs: one called messageParserEXT, one called generalEXT, and one called jobEXT.

parser-general-job.PNG


The Message Parser

A quick note about our parser. The idea here is that a control machine is going to pass along a message to a set of other machines also running on the network. We're omitting the process of sending and receiving a JSON blob over UDP, but that would be the idea. The control machine passes a JSON message over the network to render nodes that in turn need to decode the message and perform some action. We want a generalized approach to sending those blobs, and we want both the values and the control messages to be embedded in that JSON blob. In our very simple example our JSON blob has only two keys, messagekind and vals:

message = {
        'messagekind' : 'some_method_name',
        'vals' : 'some_value'
}

In this example, I want the messagekind key to be the same as a method name in our General or Specific classes.

Pero, like why?!

Before we get too far ahead of ourselves, let's first copy and paste the code below into our messageParserEXT text DAT, add our General and Specific classes, and finish setting up our Extensions.
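
The full parser lives in the repo linked at the end of this post, but a minimal sketch of the idea, using the Process_message() name we call later from our panel execute DATs and the hasattr() check described below, might look something like this (the real extension's init and logging details will differ):

class MessageParser:
    '''Helper class that routes incoming message blobs to methods.'''

    def Process_message(self, message):
        # pull the method name out of the blob
        messagekind = message.get('messagekind')

        # only call the method if it actually exists on this object -
        # this is the hasattr() check that lets us fail gracefully
        if hasattr(self, messagekind):
            getattr(self, messagekind)(message)
        else:
            print('no method found for message:', messagekind)
        return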

The General Code Bits

In our generalEXT we're going to create a General class. This works hand in hand with our parser. The parser is going to be our helper class to handle how we pass around commands. The General class is going to handle anything that we need to have persist between projects. The examples here are not representative of the kind of code you'd have in your project; instead they're just here to help us see what's happening in this approach.
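
Again, the actual code is in the repo. As a stripped-down sketch of the kind of method the General class holds (based on the Change_switch() behavior we walk through below, with constant1 standing in for whatever your constant CHOP is actually named), consider:

class General(MessageParser):
    '''Methods that persist from project to project.'''

    def Change_switch(self, message):
        # pull the value out of the blob; if it arrives as a panelValue
        # object you may need its .val member rather than the object itself
        vals = message['vals']

        # push it to our constant CHOP, which drives the index of our switch TOP
        op('constant1').par.value0 = vals
        return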

The Specific Code Bits

Here in our Specific class we have the operations that are specific to this single job – or maybe they're experimental features that we're not ready to roll into our General class just yet. Regardless, these are methods that don't yet have a place in our canonical code base. For now let's copy this code block into our jobEXT text DAT.
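
Purely for illustration, a hypothetical skeleton of the Specific class might be as simple as the stub below; the Image_order() method referenced later is a stand-in for whatever job-only behavior you need, and the real implementation lives in the repo:

class Specific(General):
    '''Job-specific or experimental methods that haven't yet earned
    a place in the canonical code base.'''

    def Image_order(self, message):
        # hypothetical job-only method - for example, re-ordering the
        # inputs to our layout TOP by rewriting its export table
        vals = message['vals']
        # do something job specific with vals here
        return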

At this point we’re just about ready to pull apart what on earth is happening. First let’s make sure our extension is correctly set-up. Let’s go back up a level and configure our base component to have the correct path to our newly created extension:

 

reusable-ext-settings.PNG

Wait wait wait… that's only one extension? What gives? Part of what we're seeing here is inheritance. Our Specific class inherits from our General class, which inherits from our MessageParser. If you're scratching your head, consider that a null TOP is also a TOP is also an OP. In the same way we're taking advantage of Python's object-oriented nature so we can treat a Specific class as a special kind of General operation that's related to sending messages between our objects. All of this leads me to believe that we should really talk about object-oriented programming… but that's for another post.

Alright… ALMOST THERE! Finally, let's head back inside of our base and create three buttons. Let's also create a panel execute for each button:

buttons.PNG

Our first panel execute DAT needs to be set up to watch the state panel value, and to run on Value Change:

change-switch.PNG

Inside of our panel execute DAT our code looks like:

# me - this DAT
# panelValue - the PanelValue object that changed
#
# Make sure the corresponding toggle is enabled in the Panel Execute DAT.
def onOffToOn(panelValue):
    return
def whileOn(panelValue):
    return
def onOnToOff(panelValue):
    return
def whileOff(panelValue):
    return
def onValueChange(panelValue):
    message = {
        'messagekind' : 'Change_switch',
        'vals' : panelValue } 
    parent().Process_message(message)
    return

If we make our button viewer active, and click our button we should see our constant1 CHOP update, and our switch TOP change:

switch-gif.gif

AHHHHHHHHH!

WHAT JUST HAPPENED?!


The Black Magic

The secret here is that our messagekind key in our panel execute DAT matches an existing method name in our General class. Our Process_message() method accepts a dictionary then extracts the key for messagekind. Next it checks to see if that string matches an existing method in either our General or Specific classes. If it matches, it then calls that method, and passes along the same JSON message blob (which happens to contain our vals) to the correct method for execution.

In this example the messagekind key was Change_switch. The parser recognized that Change_switch was a valid method for our parent() object, and then called that method and passed along the message JSON blob. If we take a look at the Change_switch() method we can see that it extracts the vals key from the JSON blob, then changes the constant CHOP's value0 parameter to match the incoming val.

This kind of approach lets you separate out your experimental or job-specific methods from your tried and true methods, making it easier in the long run to move from job to job without having to crawl through your extensions to see what can be tossed and what needs to be kept. What's better still is that this imposes minimal restrictions on how we work – I don't need to call a separate extension or create complex branching if-else trees to get the results I want. You'll also see that in the MessageParser we have a system for managing elegant failure with our simple if hasattr() check – this step ensures that we log that something went wrong rather than just throwing an error. You'd probably want to also print the key for the method that wasn't successfully called, but that's up to you in terms of how you want to approach this challenge.

Next see if you can successfully format a message to call the Image_order() method with another panel execute.

What happens if you call a method that doesn’t exist? Don’t forget to check your text port for this one.

If you’re really getting stuck you might check the link to the repo below so you can see exactly how I’ve set this process up.

If you got this far, here are some other questions to ponder:

  • How would you use this in production?
  • What problems does this solve for you… does it solve any problems?
  • Are there other places you could apply this same idea in your projects?

At the end of the day this kind of methodology is really looking to help us stop writing the same code bits and bobs, and instead to figure out how to build soft modules for our code so we can work smarter not harder.

With any luck this helps you do just that.

Happy Programming.


Take a look at the sample Repo for this example on Github:
touchdesigner-reusable-code-segmentation-python

TouchDesigner | Email | Volumetric Lights

dest.PNG

Hello Matthew,
I've been following your videos about TouchDesigner, great work! Really appreciate the content and learning resources you've made.

We are experimenting with TouchDesigner, so that we could perhaps use it in our dynamic / interactive installations.

Recently I have been researching usage of TD with dmx/artnet lighting and have come to a certain problem, which I just cannot solve easily. I've been looking all over the internet, read two books about TD and still cannot figure this out. So I wanted to get in touch with you and perhaps ask for advice (if you have time of course 🙂 )

Let me introduce the problem:

Imagine you have a 3D lighting sculpture, where there are DMX light sources all over a certain 3D space.

The lights are not uniformly spaced relative to each other (no cubic shape or something like that), they are randomly placed in the space.

Now, I want to take a certain shape (for example a sphere) and map it as a lighting effect to the lighting fixtures. The sphere would for example have its radius increased / decreased in time, and the application should "map" which light source should light up when the sphere "crosses" it in space.

I would then somehow sample the color of the points and use that information and feed it to a DMX chop after some other operations…

It’s kinda difficult to explain, but hopefully I got it right.. 🙂

Do you know of any tricks or components I could use, so that I could “blend” 3d geometry with points in space in order to control lighting?

I'm certainly able to work out how DMX works and all the other stuff, I just don't know how to achieve the effect in 3D.

(In 2D, it would be really simple. For example for a LED screen, it's pretty straightforward, I would just draw a circle or whatever on a TOP and then sample it..)

Thanks a lot,
I would appreciate any tips or advice, really.. 🙂

Best regards,


Great question!

A sphere is a pretty easy place to start, and there are a few ways we can tackle this.

The big picture idea is to sort out how you can compute the distance from your grid points to the center of your sphere. By combining this with the diameter of your sphere we can then determine if a point is inside or outside of that object.

We could do this in SOP space and use a group SOP – this is the most straightforward to visualize, but also the least efficient – the grouping and transformation operations on SOPs are pretty expensive, and while this is a cool technique, you bottle-neck pretty quickly with this approach.

To do this in CHOPs what we need is to first compute the difference between our grid points and our sphere – we can do this with a math CHOP set to subtract. From there we'll use another math CHOP to compute the length of our vector. In essence, this tells us how far away any given point is from the center of our sphere. From here we have a few options – we might use a delete CHOP to remove samples that are outside of our radius, or we might use a logic CHOP to tell us if we're inside or outside of our sphere.
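
If it helps to see the math outside of CHOPs, here is the same inside / outside test written as plain Python (not the CHOP network described above, just the vector subtraction, the length calculation, and the comparison against the radius made concrete):

import math

def point_in_sphere(point, center, radius):
    # difference between the point and the sphere's center (the subtract step)
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    dz = point[2] - center[2]

    # length of that vector - how far the point is from the sphere's center
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)

    # inside if the distance is less than the radius (the logic step)
    return dist < radius

# a point 1 unit from the origin, tested against a sphere of radius 2
print(point_in_sphere((1, 0, 0), (0, 0, 0), 2))   # True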

From there we should be able to pretty quickly see results.

2D-concept.gif

 

3D-concept.gif

Attached is a set of examples made in 099:

  • base_SOPs – this illustrates how this works in SOP space using groups
  • base_concept – here you can see how the idea works out with just a flat regular distribution of points. It’s easier to really pull apart the mechanics of this idea starting with a regular distribution first as it’s much easier to debug.
  • base_volume – the same ideas but applied to a 3D volume.
  • base_random – here you can see this process applied to a pseudo-random distribution of points. This is almost the same network as we looked at in base_concept, with a few adjustments to compensate for the different point density.

base_dist_test

 

presets and cue building – a beyond basics checklist | TouchDesigner 099

from the facebook help group

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

https://www.facebook.com/groups/touchdesignerhelp/permalink/835733779925957/

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

  • Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don't. Your mileage may vary here, so like any general advice please think through the implications of your decisions before committing to an idea to implement.
  • The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit. 
  • I'll be wrong about some things. The scope of anyone's knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don't work, or that something I said wouldn't work actually does. As with all advice, it's okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what's the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you're up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn't matter where you do it, just give yourself an outline to follow – otherwise you'll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren't always sexy, but they're more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

  • What existing tools do you like – what’s their data structure / solution?
  • How is your data organized – arrays, dictionaries, etc.
  • Do you have a readme to refer back to when you extend your project in the future?
  • Do you have a way to add entries?
  • Do you have a way to recall entries?
  • Do you have a way to update entries?
  • Do you have a way to copy entries?
  • Do you have a validation process in-line to ensure your entries are valid?
  • Do you have a means of externalizing your cues and other config data?

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking foot-prints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to time provided by your controller.

How you work with time in your TOXes, and in your project in general, can't be overstated as something to think carefully about. Whatever you decide in regards to time, just make sure it's a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you're really after here, so it's a good start to really dig into what you're expecting your project to accomplish. If you're just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.
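
As a quick sanity check against that 0.05ms target, a small script like the sketch below can audit a component. It walks the children of a COMP and flags anything with a non-trivial cook time; the path here is hypothetical, and cookTime reports the duration of an op's most recent cook in milliseconds:

# hypothetical audit script - point target at the TOX you want to inspect
target = op('/project1/myTox')

for child in target.findChildren(depth=1):
    # flag anything whose last cook took longer than our budget
    if child.cookTime > 0.05:
        print(child.path, child.cookTime)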

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are up and downsides to using this method so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you're being careful with how you're driving their parameters. Python offers an amazing extensible scripting language for Touch, but it's worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of moviefile in TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

icon_library.PNG

icon_library_example.PNG

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of a dependency – a given visuals TOX wouldn't operate correctly in a network that didn't have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that's all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven't started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you're going to find. In the past I've used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it's on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it's turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
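
A skeletal version of that kind of per-TOX extension might look something like the sketch below; the class name and the allowCooking toggle are placeholders for whatever set-up and clean-up your module actually needs:

class ModuleEXT:
    '''Hypothetical per-module extension with a Play(state) method.'''

    def __init__(self, ownerComp):
        self.ownerComp = ownerComp

    def Play(self, state):
        if state:
            # turning on - run set-up scripts, un-bypass ops, allow cooking
            self.ownerComp.allowCooking = True
        else:
            # turning off - clean up and reset for the next Play() call
            self.ownerComp.allowCooking = False
        return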

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That's all well and good Matt, but I don't get it – why should my TOXes all have a fixed number of custom pars? Let's consider building a data structure for cues. Let's say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:
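
The original example isn't embedded here, but imagine something along these lines, with every tox carrying its own par names and its own par count (all of the names below are hypothetical):

cues = {
    'cue1': {
        'tox': 'tox_waves.tox',
        'pars': {'Speed': 0.5, 'Color': 0.2, 'Scale': 2.0}
    },
    'cue2': {
        'tox': 'tox_particles.tox',
        'pars': {'Count': 4000, 'Gravity': -9.8, 'Lifespan': 3.2, 'Seed': 11}
    }
}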

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the circle k – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a namespace with the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we'd likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:
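
Again a hypothetical sketch, but with a shared, fixed set of par names the structure flattens out nicely:

cues = {
    'cue1': {
        'tox': 'tox_waves.tox',
        'pars': {'Float1': 0.5, 'Float2': 0.2, 'Float3': 2.0, 'Toggle1': 1}
    },
    'cue2': {
        'tox': 'tox_particles.tox',
        'pars': {'Float1': 0.1, 'Float2': 0.8, 'Float3': 3.2, 'Toggle1': 0}
    }
}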

Okay, so that's prettier… So what? If we look back at our lesson on dictionary for loops we'll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially, we're able to store the par name as the key and the target value as the value in our dictionary, and we're just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
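
To make the pars() point concrete, the loading loop for the dictionary sketched above could be as short as this (the container path is hypothetical):

# hypothetical loader - push one cue's pars onto a loaded tox
target = op('/project1/fx_container')
cue = cues['cue1']['pars']

for par_name, val in cue.items():
    # the par name doubles as the dictionary key, so assignment is one line
    target.pars(par_name)[0].val = val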

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you’ll never drop below 60, it’s better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing  your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black when your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated as well, or point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

Richard's killer post about optimization in Touch has an excellent recommendation – keep your UI separate. This suggestion is HUGE, and it does far more good than you might initially imagine.

I'd always suggest keeping the UI on another machine or in a separate instance. It's handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc in. I've been very careful to take care of the little details in the ui too such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that's a slider, sometimes even they don't need them.

I'm with Richard 100% here on all fronts. That said, be mindful of why and when you're splitting up your processes. It might be tempting to do all of your video handling in one process, which gets passed to another process only for rendering 3D, before going to a process that's for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There's a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

  • “What problem am I solving by adding this level of additional complexity?”
  • “Is there another way to solve this problem without an additional process?”
  • “What are the possible problems / issues this might cause?”
  • “Can I test this in a small way before re-factoring the whole project?”

Don't Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process it’s important to know what you’re loading at start, and what’s only getting loaded once you need it in touch. Starting in perform mode, there are a number of bits that aren’t going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they’re ready to be called without any frame drops. Or you might think about a way to automate your start up so you can test to make sure you have all your assets (especially if your dev computer isn’t the same as your performance / installation machine).
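
As a minimal sketch, that force-cook pass might look something like this, assuming a bin of shared assets at a hypothetical path:

# hypothetical start-up helper - force cook everything in a shared assets bin
assets = op('/project1/shared_assets')

for child in assets.findChildren(depth=1):
    # force=True cooks the op even if nothing upstream has changed, so
    # textures are already resident before the first cue asks for them
    child.cook(force=True)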

Logging and Errors

It's not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it'll be important to see where things went wrong. Sometimes the default logging methods aren't enough, or they happen too fast. A good logging methodology and format can help with that. You're welcome to make your own, you're also welcome to use and modify the one I made.

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it's at. Simple, complete proof-of-concept tests that aren't baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren't always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

Maintaining Perspective with Multiple Cameras | TouchDesigner

File this away under “interesting theoretical concepts that I’ll never use… or will I?”

At some point while making realtime generative art for massive installations you may find that you’re beyond the capabilities of traditional realtime rendering in Touch. Let’s say, for example, that you need to render 12 hd outputs for a 3 x 4 array of screens – a resolution of 7680 x 3240 certainly can be done with a single render TOP, but delivering that final texture is a little more tricky.

I'm well aware that there are a number of possible solutions to this problem, but before you find yourself composing that email to me about how to hack a way to a solution… what if it wasn't 12 outputs, what if it was 120? 200? What if every output was 4k? The answer we're really after here is how to draw a scene with consistent perspective across multiple machines… because at some point you'll have to use multiple machines. So, what do we do?

Forget what we do… what does that even look like? I’m still so confused.

Okay, so let’s first look at some examples of what it looks like as reference.

In this example we can see one large canvas that spans multiple screens. This is great – it's huge and beautiful. This example shows a large desktop, which is also great… but what if we're after some real-time rendering? This is a great illustration of the problem we might encounter. What if these displays were all 1920 x 1080? It's a 7 x 4 array, so that's going to be a definite challenge for a complex scene on a single machine. At this point we probably can't realistically produce a single pixel to pixel texture for this array on a single machine. Instead we'd have to have a system of distributed rendering machines. Okay, that's pretty straightforward and we can do some hip flat rendering that's all orthographic no problem. What if we want perspective? If you want perspective in your real time rendering you need a way to conceptualize what the entire "screen" is, and then how to selectively render just a portion of that larger scene.

Huh?

Consider the beautiful work of Refik Anadol. I can’t speak to exactly what technique is being used here, but it’s a good illustration of the same challenge. How can you maintain the illusion of perspective if you need to render your generative art on multiple machines? That’s the real question we’re trying to answer… and now we can look at some ideas to help us better understand that challenge.

The process and methodology described below aim to solve that problem. For this example I'm going to work in a scale that's unrealistic… but will allow those without a commercial license to play along from home. If you have a commercial license feel free to turn up the resolution as long as you keep the mathematics involved in mind.

First things first, let’s build out a simple proof of concept that will make sure we understand this problem more completely.

Let’s imagine that we have a large composition that we need to cut up (for the sake of rendering) into 4 smaller slices. That might look something like this:

multi_perspective

Remember, this is just a proof of concept so we’re going to start with a very easy implementation first, before we start to dig into the more complex questions. An important lesson to consider when it comes to programming is to start by reducing the problem to its basic elements, then when you have a foundational understanding of the issue start to scale up – don’t worry, we’ll get there we just have to start small.

Okay, so we’ve got our 2 x 2 array that we want to render. Let’s see how we can set up some cameras to render just one of those squares a piece, but all from the same point of view.

Wait! Why do they need the same point of view? We're after the same point of view so we can maintain perspective. It'd be easy enough to use four different cameras that were translated into positions to only see their section of the larger quad, but the results from lighting and perspective calculations wouldn't match. You can use this multi-camera transformation technique if you're doing orthographic rendering with emissive lighting, but not if you want to maintain perspective and use non-emissive lighting. It's okay if you don't believe me – I didn't believe me either, and I had to set it up and test it a bunch of times before I really understood what's happening.

What’s that going to look like? Well, for the perspective of a single camera that can see the whole scene – the effect we want to recreate eventually – we might see something like this:

full_camera_view.PNG

From the vantage point of a single camera, we want to be able to zero in on just a single quadrant in our scene, something like this:

single_quad.PNG

Eventually, we want to reassemble the view of four cameras to look like our original single camera view.

Matt, I still don’t get it. That’s okay. Keep reading, and if by the time we get done with this example it’s still not a useful technique you can stop reading. If, however, you want a means to do perspective based illusions across multiple machines for massive installations, keep reading to the end.

We're going to set up our test by using a part of our camera COMP that you may not have used before. Specifically, when it comes to the view page, we're going to use the Viewing Angle Method called "Focal Length and Aperture." I wish I could tell you exactly what this means to TouchDesigner – spoiler, I can't – what I can help you understand is how these values relate to one another in order to achieve our particular illusion.

We're going to start by setting up a simple example. Add a geo to your scene, and replace the torus inside with a grid. Set your sizex parameter to be 16/9. If you want to follow along step for step, change your grid to be a polygon, and add a noise SOP with a period of 0.02 and an amplitude of 0.5. Connect that to a facet SOP with unique points and computed normals. Connect your chain of operators to a null SOP and make sure that your display and render flag are turned on for your null. You should have a simple network that looks like this:

SOPs.png

Outside of your geo add two moviefile in TOPs. In the first you can use the supplied quad arrangement in the assets portion of the git repository that accompanies this post. It's called multi_perspective.png. In your second moviefile in TOP select the FiledGuide.tif. Composite these two together with a composite TOP, and change the operand method to Add. Connect this to a null TOP, and finally assign this null to a phong material as the color map. When you're done you should have something that looks like this:

texture_grid.png

Whew. Alright, now we can finally get to the interesting part. Let’s add a light and a camera to our network.

Now, in our camera comp let’s set the tz parameter to 10 units:

cam_tz.PNG

Next let's move over to the view page of the camera COMP. Here we're going to leave the projection as perspective, but we're going to change the viewing angle method to "Focal Length and Aperture." Next we'll change our Focal Length to 10 and our aperture to 16/9:

full_view_cam

This is our camera that can see the entire scene. Let’s add a render TOP to our network so we can see what our camera sees:

full_view_cam_view

If we were to bypass our noise SOP the view of this piece of geometry would fit exactly within our view-port. From here forward it’s going to get a little interesting. We want to maintain the perspective calculations from this vantage point, but we want only a single quadrant at a time to fill our view-port. It’s almost like zooming in and cropping to only a sub-section of our view. How do we do that?

Let’s copy our first camera, and then make a few adjustments. I’m going to call my new camera cam_single_p1. Next we’re going to leave our Focal Length and Aperture settings just as they were. We are, however, going to change our window x/y parameters to be:

winx -0.25
winy 0.25

We’re also going to change our window size to be 0.5.

cam_single_p1

Let’s render that camera and see what we get:

single_cam_p1_rendered.PNG

Woah!! That works just like we wanted! Thinking through our other cameras, we can quickly see that the combination of our window size and offsets acts as a zoom and translation mechanism. Try adding 3 more cameras with the following winx / winy settings:

Cam2

  • winx 0.25
  • winy 0.25

Cam3

  • winx -0.25
  • winy -0.25

Cam4

  • winx 0.25
  • winy -0.25

If you render these cameras you should see something like this:

all_cams.PNG

Okay. That’s all pretty slick, but how do all of these parameters relate?

What does it mean?!

The meat and potatoes of this technique is to define a view-port’s aspect ratio, the number of vertical slices that a single window represents, and then to specify where the offsets sit that represent the center of a given window.

Huh?!

Let’s think through our simple example. We made our rendered quad a 1.7778 x 1 rectangle – a rectangle with a 16:9 aspect ratio. This was the same value we used for our aperture. We also set the distance of our camera from our geo to be 10 units, which was the same value we used to define our focal length. The window size is a ratio of 1 over the number of vertical sections… in our case we had two vertical sections, so 1 over 2 is 0.5. Our xy window offsets represent the center of our windows in our sections. That’s a little harder to wrap our heads around, and a better way to think about it is the UV coordinates of the center of a given window into our scene. Let’s break that out a little more.

Window Size

  • 1 / number of vertical windows that can fit within our viewport

Focal Length

  • the distance of our camera to our scene window

Aperture

  • the aspect ratio of our scene window

Win X

  • ( ( U * 2 ) - 1 ) / 2

Win Y

  • -( ( ( V * 2 ) - 1 ) / 2 )

Here U and V are the normalized coordinates of the center of a given window within the full scene.
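
Plugging our 2 x 2 proof of concept into that breakdown, the numbers work out like this (just the arithmetic written out as Python; u and v follow the same convention as the pixel template measurements later, with v measured from the top of the scene):

# full scene is a 16:9 quad split into a 2 x 2 grid of windows
aperture     = 16 / 9    # aspect ratio of the full scene window
focal_length = 10        # distance from our camera to the scene
win_size     = 1 / 2     # two vertical sections -> 0.5

def win_offsets(u, v):
    # u, v are the normalized coords of a window's center in the full scene
    winx = ((u * 2) - 1) / 2
    winy = -(((v * 2) - 1) / 2)
    return winx, winy

# the window whose center sits at u 0.25, v 0.25
print(win_offsets(0.25, 0.25))   # (-0.25, 0.25) - the offsets we gave cam_single_p1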

To really dig into the power of this technique we need to push beyond just a quad-based set-up; we need a more abstract configuration of windows in our scene. Now that we understand the mechanics of our set up, let's look at some arbitrary configuration of windows that might be spread across a large number of machines. What if our window arrangement looked something like the below:

multi_cam_perspective_output_map

What’s going on here?

Well,  let’s imagine that you’re working on a large format LED installation where you need to slice up your scene into uniform HD chunks that then feed an LED controller. In some cases those regions overlap even though only a single LED screen is outputting the content. Relationally, however, you still need to be able to control and designate the regions for the screens. Output 5 and 7 are a good example of these. The final shape of that screen is going to be the combined outline of the two displays, but the video feed to the LED controllers needs to be consistent HD cutouts. All of the dimensions in this example could probably be managed by doing a single rendering of the full scene then cropping to a given region – but at some point your ability to render the full scene and then use TOPs to crop out pixels is going to fail. This example has 7 outputs, but it’s not hard to imagine a project that had 20 or more – as reference, the need to understand this technique came out of working on an installation with 68 discrete outputs.

Okay, okay, okay… fine. So what are all these targets and numbers about?

We’ll remember in our first proof of concept example that we were able to take advantage of a simple 0.25 offset – that makes sense right?

If our geometry is placed with its bottom left corner at the origin (0,0), then the center of our first slice is going to be at (0.25, 0.25):

2016-12-13 13.35.19.jpg

That’s great Matt, but that doesn’t make any sense… following this logic, our slices would have been more like:

  • Slice1 ( 0.25, 0.75 )
  • Slice2 ( 0.75, 0.75 )
  • Slice3 ( 0.25, 0.25 )
  • Slice4 ( 0.75, 0.25 )

So what gives?!

Well, we have to remember that we set up our geometry to have its center at the origin. If we take this into account, what we see is more like:

2016-12-13 13.36.58.jpg

Looking at this, it should make sense why we used the translation coordinates that we did. The other interesting thing to understand in this look into our geometry is to consider the following.

If the full extent of our scene represents the bounds of a complete window, then we can begin to think about our winx and winy coords as being more akin to a UV – a normalized coordinate on our full window. We need to do some additional math to compensate for the translation of our origin, but that’s pretty straightforward.

If you’re still scratching your head, that’s okay. Let’s look at how to take our new window map and create a programmatic means of slicing up that full scene.

Thinking back to our proof of concept test we need a few things in place in order for this all to work as we expect:

  • We need the transforms for all our cameras to be the same – these are on the xform page: tx, ty, tz.
  • We also need several pieces for the view page:
    • Focal Length
    • Aperture
    • Winx
    • Winy
    • Winsize

Given that our cameras aren't going to be doing any moving once we set up our calibration, we can safely use python expressions in our parameter fields. In terms of optimization, if an op does a lot of cooking it's often better to use exports instead of expressions – expressions end up getting compiled on demand and evaluated when the op cooks. Since we want fixed cameras looking into a moving world, we can use expressions for our cameras – it's also going to be less of a hassle to set up, which is great.

For starters let’s add our reference template into our scene. We can start by dragging in our texture, connecting a Null TOP, and then assigning this to the color of a constant Material.

reference_plate.PNG

Next let’s add a geometry to our scene. We can replace the torus inside with a rectangle that has our source window’s aspect ratio, which in this case is 16:9 or 1.778 : 1.

scene_window.PNG

Next I’m going to use a Null COMP to hold the transforms of our camera system. I’m going to set this to have a tz value of 5, and otherwise leave this alone.

Null_pars.PNG

Let’s also add an object CHOP to help us with determining the distance between the null and geometry – to be clear, we don’t need to do this with an object CHOP, we could do this with a math CHOP, or with Python.

In the Object CHOP I’m going to set our null as the target object, geo1 as the reference object, and I’m going to set this to compute distance.

object_chop.png

I’m also going to add some tables that hold some reference information for us. I want to know the width and height of our scene, as well as the width and height of a given cut-out.

ref_values.PNG

We’re also going to need a table with all of our pixel space coordinates:

output_coords.PNG

Now that we have all our primary ingredients ready, we can build out a system to convert our pixel space coordinates into a set of winx and winy translation values.

Let’s start by looking at this process in general. We can first start with our coords in pixel space:

Pixel Space

output x y
output1 640 360
output2 205 130
output3 237 518
output4 525 130
output5 752 593
output6 1013 188
output7 1012 483

These values represent the actual center of these windows in the full pixel scene. I started by making this template in Photoshop, and then measured the location of the center of each given viewport.

Once we have these values, we need to convert them into normalized values. In other words, how do these pixel coordinates translate to UV coordinates? This is a pretty straightforward calculation – the pixel value divided by its respective dimension for the full scene:

  • outputx / full_scene_x
  • outputy / full_scene_y

We can set up a quick eval DAT to do all of this for us:

convert_to_uv

The two python expressions that drive this in the table2 DAT are:

me.inputCell / op( 'table_scene' )[ 'scene_x', 1 ]
me.inputCell / op( 'table_scene' )[ 'scene_y', 1 ]

Our results from this can be found in the table below.

UV Space

output x y
output1 0.5 0.5
output2 0.16015625 0.18055555
output3 0.18515625 0.71944444
output4 0.41015625 0.18055555
output5 0.5875 0.82361111
output6 0.79140625 0.26111111
output7 0.790625 0.67083333

Now that we have the UV coords that represent the center of each window, we need to convert these values into a scale that takes into account that the center of our geometry is located at the origin. For our x values we multiply our value by 2, subtract 1, and divide by 2. For our y values we use the same operation and multiply by negative 1. We can use another eval DAT to do just this for us.

convert_to_winxy.PNG

The two python expressions that drive this in table3 are:

( ( me.inputCell * 2 ) - 1 ) / 2
( ( ( me.inputCell * 2 ) - 1 ) / 2 ) * -1

Now we have taken our original pixel coords and then converted them into our winx and winy transforms.

Converted into winx / winy transforms

output x y
output1 0.0 -0.0
output2 -0.33984375 0.31944444
output3 -0.31484375 -0.21944444
output4 -0.08984375 0.319444444
output5 0.08750000 -0.323611111
output6 0.29140625 0.238888888
output7 0.290625 -0.170833333
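
The whole chain, pixel center to UV to window offsets, condenses into a single function. This is the same math as the two eval DATs above, just gathered in one place for reference:

def pixel_to_win(px, py, scene_x, scene_y):
    # normalize the pixel coords into UV space
    u = px / scene_x
    v = py / scene_y

    # re-center around the origin to get the window offsets
    winx = ((u * 2) - 1) / 2
    winy = (((v * 2) - 1) / 2) * -1
    return winx, winy

# output2 from the tables above: center at (205, 130) in a 1280 x 720 scene
print(pixel_to_win(205, 130, 1280, 720))   # (-0.33984375, 0.3194444...)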

With these values set, we just need to make sure that we compute our window size, aperture, and focal_length. Looking back to the above, calculating these remaining values should be a snap.

Window Size

Window size is 1 over the number of windows that can fit into our full scene. This also means that our total scene’s width should be n x the width of a given window.

  • In our case a single window (measured in Photoshop) is 320 pixels, and our full scene is 1280 pixels.
  • 1280 / 320 = 4
  • 1 / 4 = 0.25
  • Window Size = 0.25

Focal Length

Focal Length is the distance between our point of view (in our case the null), and our full scene. We’ve used an object CHOP to compute this distance.

Aperture

Our aperture is the aspect ratio of our full scene.

  • 1280/720 = 1.778
  • Aperture = 1.778

I’m using an eval DAT to do the computation and organization of all of this:

cam_attr_calculations.PNG

The python for these looks like this:

1 / ( op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_render_attr' )[ 'width', 1 ] )
op( 'table_scene' )[ 'scene_x', 1 ] / op( 'table_scene' )[ 'scene_y', 1 ]
op( 'object1' )[ 'dist' ]

Python in touch is name dependent, so I’d recommend looking at this example network if you’re trying to replicate this effect.

Now we can set up our cameras to correctly crop out a given viewport of the entire scene. I'm going to rely on the digits of a camera to correspond to its output. So in my case output1 and camera1 should be the same thing. I'm also going to use the translation values of our null to set the location of the camera.

All of that said, our expressions for our camera should look like this:

camera_expressions.PNG

Again, all of our expressions are name dependent, so I’d recommend looking over how I’ve organized this in the sample file in order to make sure you know exactly what’s referencing what.

Now we can copy / paste our camera 6 more times – I'm using digits in many of these expressions to match the camera digit to the output digit. Looking over my results it looks like we're right on the money.

view_ports.PNG

To really appreciate what’s happening here, I’m going to turn off the rendering on our calibration plate, and turn on a sample piece of geometry.

view_ports_real_geo.PNG

In geo2 you can see the entire geometry, and in each of our viewports we can see that we’re only rendering the region of our geometry that falls into single view.

"That's a mess… I don't get it," you might well be saying. That's okay. Let's rearrange our TOPs to mirror more closely what's in our template:

view_ports_real_geo_rearranged.PNG

Hopefully, having done this, we can better see how these various pieces work together.

At this point we now have a means of rendering a complex scene across multiple machines (or GPUs if you’re able to use affinity on Quadro cards), and maintain perspective. That unlocks a whole new avenue for realtime rendering that breaks you away from the limitations of single machine configurations or reliance on baked media for distributed realtime rendering.

Download this example from github


It’s always lovely to get an email from Derivative headquarters.

In this case I just got a lovely ping from Malcolm to let me know that the crop parameters on the render TOP can be used for the same functions described above.

Let's look back at our first example to understand how that might work. In this case I'm going to use the same initial camera that we set up – our cam_full_scene COMP. I'm going to use this one since I know that it's already correctly configured to capture the entire width and height of our reference plate. Next I'm going to add a render TOP, and under the Crop page I'm going to change my crop right and crop bottom pars to 0.5. For the sake of understanding the concept I'm going to leave this in a fractional unit space, but we could just as easily determine these values as absolute pixel measures. The result looks like this:

render_crop1.PNG

Next up, rather than using another render TOP I'm going to use a render pass – there are lots of good reasons to use the render pass, but one of the most important considerations here is that it's a very efficient rendering operation. We do, however, need to make a few other adjustments. We need to target our render2 as our render TOP on the Render Pass page, and we also need to toggle on clear to camera color, and clear depth buffer:

render_crop2.PNG

On our render pass we’ll need to use the following crop parameters:

render_crop2_2.PNG

As we add additional render pass tops we need to target the previous render pass – a given render or render pass TOP can only have a single render pass assigned to it.

Our crop parameters should look like this:

renderpass3

  • crop left 0.0
  • crop right 0.5
  • crop bottom 0.0
  • crop top 0.5

renderpass4

  • crop left 0.5
  • crop right 1.0
  • crop bottom 0.0
  • crop top 0.5

All in all when we’re done we should have something that looks like this:

render_crop_full.PNG

Here the result is the same, the methodology going into it all is just a little different. Like all things TouchDesigner, there are multiple means of solving the same challenge and the "right" one ultimately comes down to the choice that's best for your particular installation.

Happy Programming everyone.

* The git repository and support files have been updated to reflect this additional material.

Building a Calibration UI | Software Architecture | TouchDesigner

Here’s our first stop in a series about planning out part of a long term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

Big Ideas

Organization Matters

How you build a thing matters. Ask any architect, chef, crafts person, quilter, and on and on and on. The principles and ideas that drive how you’re building your network matter, and as a full disclaimer I am a misery to collaborate with when it comes to messy TouchDesigner networks and messy directories. “As long as it works, it doesn’t matter Matt!” Is the mantra I often hear as a push back to my requests for organized work. There’s a lot of truth in that, but if you ever have to work with another person (or work with your future self) then the organization of your work will matter. Cluttered, messy, or disorganized structures will make a world of misery for you and your collaborators. You can do whatever you want at the end of the day, but my strong recommendation is that you create some guiding principles for your organization structures, network layout, documentation, and pending todo items on a project.

Here are some of my simple suggestions:

Turn off node resizing for TOPs and COMPs, and don’t resize your nodes by hand:

What? Why? Those things might make perfect sense to you, and you might like them a lot – and that’s great. I hate them in my networks because they insert irregularities into the flow of the network that spoof importance. When I look at a network and there is a single node that’s larger, smaller, or resized in any way that’s different from the surrounding nodes, it implies to me that it has some significance or importance. It might well be that you just needed a closer look for a second and then forgot to change its size again, but to the person looking at your code for the first time it reads as significant – even if it’s not. Time and again I’ve seen that kind of resizing throw off both me and many of the programmers that I respect.

If you need to use size as a defining characteristic in your networks, then use Python to make nodes consistent sizes. It all goes back to being organized and consistent. Help your future self, or the other people you’re working with, avoid as much confusion as possible… chances are the project is already confusing enough.
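Here’s a minimal sketch of what that could look like – this assumes you run it from a DAT sitting in the network you want to tidy, and the width and height values below are just placeholders, so swap in whatever standard you like:

# set every TOP and COMP in this network to a consistent size
# nodeWidth and nodeHeight are members of every operator
standard_width = 160
standard_height = 130

for each_op in parent().children:
    if each_op.family in ( 'TOP' , 'COMP' ):
        each_op.nodeWidth = standard_width
        each_op.nodeHeight = standard_height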

resizing

Align your ops:

I like the left bottom corner of my ops aligned with the network grid. I like to start new ops on new grid lines, and I like a full grid line of vertical space between ops. I also like to arrange ops (when possible) so that wires don’t cross unnecessarily. OCD much Matt? Sure, you can say that if you like. For me, the truth of the matter is that I like tidy networks where I can quickly see the flow of operations left to right. Too much space and things feel spread out all over creation, too little space and it can be difficult to see a network’s operation when you’re zoomed out too far. For me, this is the right kind of balance. It might well be different for you, and that’s wonderful. My only encouragement here is to be purposeful. Whatever you do, make sure it’s a choice you’re making not just the happenstance layout that emerged from the creative process. Be messy, be wild, place nodes every which way … but tidy up before you save and commit your work for the day.

alignment

Color code sparingly

I love me some color coding, but make sure it’s done purposefully and give yourself a key. This is another situation where color coding is often a great idea on the face of it, and then it quickly changes into something you can hardly keep track of. That’s okay, and my experience has taught me that less is better in this regard.

Leave those op names alone and only append after the op name

It took a lot to get me on board for this, but it’s now something that I’m wholeheartedly behind. Leaving the original operator name in place and only appending after it is a tremendous help when it comes to quickly glancing at a network. It helps the viewer know quickly what an operator is doing without having to do any additional inspection.

“I don’t get it Matt.”

That’s okay. Let’s look at this example:

names_nightmare

In 2 seconds or less can you tell if the first operator with the banana is a select or a moviefilein TOP? Can you tell if the composited image is a comp TOP or a multiply TOP? Can you tell if the TOP labeled “done” is a null or an out TOP?

Compare that with:

names

To be fair, the names aren’t nearly as exciting, but they do make it much easier to understand what’s happening at a glance.

Like it or not, you’re an engineer now

As much as we all might like to fancy ourselves Artists with capital letter As and a filigree flourish, the truth is that right now you’re an engineer.  That’s not to say that you’re not an artist too, you are; but some problems don’t need artful solutions, they need thoughtfully engineered solutions based on a complex understanding of computation and computer science. The more you work in TouchDesigner, the more you’ll find that you’re as much engineer as you are artist – that’s not only okay, that’s an important realization to make about your own work. It can be a bifurcating moment to feel at conflict with the art, and the mechanics of a given network – “this approach is beautiful and it works but costs 20 milliseconds” is probably not a viable solution. It’s especially not a viable solution for something like a calibration pipeline that needs to have as little computational overhead as possible. The work that goes into an efficient process isn’t always glamorous, or much to show off; instead, it makes room for the art with a capital A to take more cycles and be more expressive.

Lucky for us, there’s room to be both engineer and artist in our work. Remember to embrace both of those roles, even when faced with the frustration of complex logic problems, or when faced with the questions of aesthetics.

Externalizing Files

So we should externalize files? Yes. For the love of god, yes. If you believe in a world where you don’t tear your hair out, yes. In case you still are wondering, the answer is yes – you should always look for ways to externalize files.

Why? That seems silly.

Once upon a time I didn’t externalize any components, and my projects lived as complete toe files and the world was beautiful. Then I crashed my project because I did something silly, and it took me hours to track down where the problem was… even opening the crashautosave.toe left me slowly moving through networks in networks trying to figure out what I had done where to make everything crash. Worse yet, because it was just a single toe file there was no way to unit test any of the smaller modules. Open, change, save, open, crash… repeat. If you haven’t had this experience yet, it’s only a matter of time. You too will crash one day, and mightily. You’ll wonder why you ever wanted to work with computers in the first place, and why you don’t just open up a bar on a sandy beach somewhere.

Snark aside, externalizing components in your network has several benefits. For starters, it moves you away from having a single toe file black box that’s difficult to test or debug. Separating out smaller modules allows for more portability between the applications that you build, and it means that you can test just that module. It also means that you can update just that module and not your entire toe file.

Externalizing modules, extensions, and the like also lets you more easily compare files over time if you’re using a piece of version control software – don’t worry, we’ll talk about git in just a second. It also makes it easier to work with text files in something like Sublime (my favorite) or Visual Studio Code. That might not seem like a big deal now, but the more you work with extensions and Python, the more you’ll want robust text editing tools at your fingertips.

Externalizing files also reduces bloat in your toe file. If you’ve ever locked a texture in your toe file you know all too well that your slim file size can quickly balloon. External files also make it easier to save and try multiple configurations. Since we’re going to work on building a calibration UI, we might want to be able to save a calibration configuration and then try another calibration. The same goes for configuration files. Rather than a configuration that’s locked into your toe file, an external file can make it easy to load lots of different configurations or other settings.
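As a quick sketch of what that looks like in practice, you can save a component out as a tox and point its External .tox parameter back at that file. The names here are assumptions – a base called calibration and a modules folder that already exists in the project directory:

# save the component out to disk as a tox
calibration = op( 'calibration' )
calibration.save( 'modules/calibration.tox' )

# point the component's External .tox parameter at the saved file
# so its contents can be reloaded from disk
calibration.par.externaltox = 'modules/calibration.tox'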

It might well seem like I’m belaboring this point, and I’m doing that on purpose. Working with external files is more work, takes more organization, takes better planning, and requires a more meticulous approach – but if you want to grow as a developer and programmer, you’ll wrestle with this challenge sooner or later. I’d encourage you to do that sooner rather than later.

git

What is git? Direct from the git website:

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Git is easy to learn and has a tiny footprint with lightning fast performance. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.

But what does that mean?! Unlike working with art where a tool like Dropbox would be helpful, when we’re developing software it’s often more helpful to have some specific means of specifying when a file needs to be saved (probably with the same name), along with a message about what our changes have done. Let’s say we’re working on our calibration UI, and we’ve made a change that now allows us to save out our calibration configuration. That’s great, but it’s come at the cost of changing a piece of how our UI works. We know for sure that the old way works 100% of the time, and we’re pretty sure that our new way works at least 80% of the time. Git allows us to check in the new version of our software, commit it to our working directory structure, and make a note of what we did.

We’re smart programmers, so we’ve also started to externalize our files. This means that we’ve separated out parts of our network responsible for different tasks. With git we’re now able to check in only the single tox that’s changed while we’ve been working. Better yet, we can even work with another person. Developer 1 might be working on one part of our project, while Developer 2 is working on another. Both developers are saving their respective tox files, and git allows us to track that progress and merge work seamlessly. If we do end up editing the same files, we end up with a merge conflict… which means what, Matt? A merge conflict means just that: two sets of changes are competing for the same place in our files, and we need to review them in order to decide which changes to keep.

Git also allows us to work in multiple branches on our project.

Huh?

Let’s imagine that we find a bug, we might want to create a branch in our code to deal with resolving this bug.

Why?

Well, let us imagine a situation familiar to us all. We find a bug, squash it mightily. Only to discover that our fix has created another problem… maybe a much bigger problem. The little bug that we killed we could have worked with, but now we have a show stopper bug running amok that we don’t know how to stop, and the show starts in 45 minutes. Troubleshooting in another branch allows you to make fixes and then integrate them back into your project when you’re confident that they don’t create other problems.

Best of all, git creates a history that lets us step back through each one of our commits. At any point we can return to a past state of our project. Taking advantage of modular design and externalized files makes this even more powerful because it means that with a little work we can resurrect ideas that might otherwise have been lost or discarded.

That’s a big commercial for git so far, but no cautionary tales – and cautionary tales there are plenty. While git is a phenomenal tool, it can be a bit beastly to wrangle when you’re first starting – know that you’ll get frustrated, know that you’ll curse it, know that you’ll want to give up on using it, then count to 10 and give it another go. Like all things you’re unfamiliar with, take your time to get used to it as a tool set. You might choose to start by using a GUI tool, but at some point you should learn some of the commands to run directly from the command line. You’ll also want to avoid checking in any of your assets. These days I divide up my projects and separate my assets from my code. I have a few exceptions for this practice, but I try to keep them few and far between – only really allowing for template files, or grids. Essentially, I try to make sure that it’s only files that are going to change infrequently.

If you’re going to use git, you’ll probably also use something like github or bitbucket. These are great services and host your code online so you can get to it from anywhere. These services also come with some great tools for making action items, bug tracking, and maintaining documentation for your project. Take advantage of these tools.

learn more about git at https://git-scm.com/
start using git with github or bitbucket

See more comics from XKCD at https://xkcd.com/

Extensions Everywhere

I’ve talked plenty about extensions, how to use them, and what they’re good for. I’ve had some spirited discussions about when to use Extensions vs. Modules, and there’s a healthy conversation to be had there for those who want to really get into the weeds of it all. The place I’ve finally landed is this – I use extensions anytime I want to extend the capabilities of a given component. Case to case the degree of extension varies – sometimes I take fuller advantage of using classes than others, and that’s okay. To me, it’s okay if I don’t always take complete advantage of the difference between Extensions and Modules. Sticking with extensions means choosing an approach that works for 99% of the use cases where I need more flexibility in my programming. For me that means a little bit of stability from the Touch paradigm where there are almost always at least a handful of different ways to solve the same problem. You might like Modules better, and that’s okay. In this case study, however, we’ll see the use of Extensions.

We’ll also see a move away from fragmenting our code across our network. It’s often tempting to use a button deep down in a UI somewhere to perform a logical operation. The only catch is that now that script is buried deep down in your project somewhere, and should it ever give you problems, good luck finding it. For the most part, on large projects I aim to keep the lion’s share of my operations focused in an extension. This takes a little more time to write, but it means that things are more organized, easier to diff between check-ins, that I can change my code in a single place, and that I’m writing functions that are callable from other parts of the network. In my experience so far, this little amount of extra work makes for a larger savings across the whole project.

More about Extensions
Python in TouchDesigner | Extensions
Understanding Extensions

documenting your code – what are docstrings?

I’m a broken record. I know, but really – leave yourself notes, voluminous ones. Better yet, when using extensions make sure to add some docstrings.

“What are docstrings?” you ask.

I’m so glad you did! Docstrings are a feature in several programming languages that allow you to embed comments or information into your code that can be inspected at run time. If you’re still scratching your head that’s okay. Let’s look at an example, this one is from the Wikipedia page about docstrings:

"""
Assuming this is file mymodule.py, then this string, being the
first statement in the file, will become the "mymodule" module's
docstring when the file is imported.
"""

class MyClass(object):
    """The class's docstring"""

    def my_method(self):
        """The method's docstring"""

def my_function():
    """The function's docstring"""

“Great. But why are these useful?”

They’re useful in lots of ways, but for me they have some special importance. For starters, they help ensure that I’m documenting my code as I go. The more tips you leave yourself along the way, the happier you’ll be in the long run. It does make your code longer, and it does take more time. But, if you’re really invested in building something reusable that you can come back to and refine over time, these are solid investments to make that process smoother in the future.

You can get to these notes from anywhere. This internal documentation can be called from anywhere in the network, so if you’ve forgotten how many arguments your method takes, or what it is supposed to return, you can get some quick answers without having to track down the extension. When you start to write lots of complex methods this can become a real help. It’s probably not something you need when you’re first working on the project, but if you’re collaborating with someone else or coming back to your project over time it’ll be well worth the time and effort.
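To make that concrete, here’s a minimal sketch using the Wikipedia example above – the same calls work from the textport against your own extension classes:

# docstrings live on the objects themselves, so you can read them at run time
print( MyClass.my_method.__doc__ )      # The method's docstring
print( my_function.__doc__ )            # The function's docstring

# help() prints the class, its methods, and their docstrings all at once
help( MyClass )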

Finally worth considering is that docstrings are a part of the larger Python style guide. Like it or not you’re an engineer, and at this point you’re turning into a budding Python programmer. Learning about the best practices of other programmers outside of the TouchDesigner silo helps broaden your perspective and exposes you to how other people work. For better or worse I spent a lot of time in the liberal arts, and one of the core tenets of the ideology that underpins the arts is that exposure to other ideas and perspectives is important in its own right. I’ve carried this with me through life, and I think even in the case of programming languages it’s worth considering. The conventions and practices of any community teach you about how they think of the world – how it’s organized, how it ought to work, what that group values in the world, or what they don’t value. The Python community is one that we participate in, even if only mostly from our gray network of operators.

learn more about docstrings
see google’s python style guide for docstrings

Mapping out our Project

You know the saying – measure twice, cut once.

Before digging in to a new project make sure you ask a lot of questions, map out as many details as possible, and make a plan.

Okay, so what about our project?

For starters

  • We know that we’ll have five projectors and one monitor that acts as a UI.
  • We know the resolution of all of our displays.
  • We know that we’ll externalize lots of files in our network.
  • We know that we’re going to use git as our version control system.
  • We know that we are going to work with other people.
  • We know that we need to document our work.
  • We know that we’ll need a way to control warping for the five different displays.
  • We know that we’ll need a few other controls – both for the mapping, and also to make space for the future UI elements to control playback.
  • We know we need to previs the layout.
  • We know that we’re looking to build out a complete application with all of the features that you might expect from that kind of build.

Whew! Okay, so where do we start? Before we start throwing down ops let’s think through a plan.

We’ll need a container that holds our UI, we’ll also need a container for each of our displays. We know that it’s best to just use a single perform window that spans multiple monitors, so we should make sure we plan for that when we’re developing our project.

We also need to think about how we’re going to set up rendering our scene so we can previs it without turning on all the projectors. That will mean a multi camera set up and some live rendering.

We should also think about how we might ingest video to display for calibration and test playback. On top of that we should also build out a test module to help the creative programmer understand the correct formatting for how to work in the framework we’ve created.

We should also plan to make some space in our build to house assets that are needed across our network, as well as a place to keep our calibration data. We’ll take advantage of the stoner, so we’d better make some time to pull that apart and better understand how it works.

We’ll use a few sets of extensions, so we should also make sure that we know how those work before we dig in too deep.

Finally, we should make sure we know how git works, since we’re going to use that along the way.

We’re bound to run into some other things we need to suss out as we go, but this should give us a solid starting point in terms of understanding the project from a distance.

Next we’ll take a closer look at the stoner to make sure we understand this component we’re going to use, then we’ll dig in and start building.

TouchDesigner | Email | Shrink Instance

Original Email – Mon, Jul 13, 2015 at 9:43 PM

Hey Matt!
So I’ve tried banging my brain around 10 different ways to where I’m sure it now resembles a sphereSOP – noiseSOP but I’m still coming up a little short. Naturally, I thought about your THP 494 Shape lesson but I was not able to figure out what I’m missing so I’m coming straight to teacher for some guidance.

I’m trying to recreate the attached image. I’m using a GridSOP, MetaballSOP – MagnetSOP which sort of works but I’m missing some very important part of the process. Can you have a look at this and give me a hint as to what I’m missing?

You certainly do not have to correct the work unless you want to but a point in the right direction (haha) would really help me sleep tonight!


Reply – Tue, Jul 14, 2015 at 12:23 AM,

That was a good brain teaser.

I’m including two different approaches to solve this problem. The first looks at using your magnet approach, and the second is more GPU focused.

Screenshot_072115_120220_AM

Following your model with the magnet, you had just about nailed it – all of the information you needed was in that SOP to CHOP, it was just a matter of reformatting it. In the magnet base, I grabbed the tz channel of the SOP to CHOP, scaled it, renamed it scale, merged it into your instance CHOP, then applied it to the scale parameter of your instances. Pretty right on with your existing approach. The drawback here is that the magnet SOP is very expensive – nearly 7 milliseconds by itself – bottlenecking your performance at about 30 fps. This also keeps you pretty limited in terms of the number of points you can work with – CPU bottlenecks can be tricky to work around.

MetaBall

So, I started to think about how I would solve this problem on the GPU, and remembered that an array of pixels is just a different kind of grid. The second approach translates a circle TOP, and then converts that to CHOP information, merges this with a SOP to CHOP (using the xyz data from a grid), and then instances from there. I was looking at over 1400 instances without a problem. The challenge you’ll encounter, however, is when you try to replicate that many source textures. I did a quick test, and things slowed down when I had that many texture instances drawn using the newer texture instance approach. I was, however, able to get performance back up to 60 FPS if I loaded a 3D Texture Array TOP, and then turned off its active parameter. Markus uses this trick often.

circleTOP

Anyway, that’s as far as I got in the little bit of time that I carved out. The next steps (in terms of mimicking your source image) would be to get the displacement right, pushing instances up and down to make an opening in the center of the array.

circleTOPdisplacement

Alright, well I put another hour into this because I got really interested in the idea of displacement. I think this still needs a little more work to really dial it in, but it’s a solid starting point for sure.

Screenshot_072115_120617_AM

Hope this helps.

Best,
Matthew

scaleDisplacementExample

Look at the example file on GitHub – shrinkInstance_locked

TouchDesigner | Understanding Extensions

genGeoClass

So you’ve made a killer component that you love using, but you suddenly find yourself wondering how best to re-use it in future projects. You could make a killer control panel for it, or create a more generalized method for passing in values with CHOPs or DATs. You could just resign yourself to some more complex scripting – reaching deep into your component to set parameters one at a time. You could hard code it; you’ll probably be making some job specific changes to your custom component anyway, so what’s a little more hard coding? The 50000 series now features custom parameters, or you could use variables, or storage. Any one of these options might be right for your component, or maybe they’re just not quite right. Maybe what you really need is a little better reach with Python, but without as much head scratching specificity. If you find yourself feeling this way, then extensions are about to make your TouchDesigner programming life so, so much better.

Using extensions requires a bit of leg work on your part as a programmer; it also means that you’ll want to do this for components that you’ll find yourself reusing time and again – after all, if you’re going to take some time to really think about how you want a reusable piece of code to work in a larger system, it only makes sense to do this with something you know will be useful again. That is to say, this approach isn’t right for every circumstance, but the circumstances it is right for will really make a difference for you. We’re going to look at a rather impractical example to give us a lay of the land when it comes to using extensions – that’s okay. In fact, as you’re learning how to apply this approach to your workflow it’s worth practicing several times to make sure you have a handle on the ins and outs of the process.

Before we get too much further, what exactly is this extension business? If you’re to the point with TouchDesigner where you’re comfortable using Python to manipulate your networks, you’ll no doubt have come to rely on a number of methods – anything with a . followed by a call. For example:

  • op(‘moviefilein1’).width – returns the width of the file
  • op(‘moviefilein1’).height – returns the height of the file
  • op(‘table1’).numRows – returns the number of Rows
  • op(‘table1’).numCols – returns the number of Columns

In each of these examples, it’s the piece after the dot that extends how you might think of working with an operator. Custom extensions mean that you, the programmer, are now free to create your own set of extensions for a given component. The classic example for this kind of custom component and custom extension set for TouchDesigner would be a movie player. Once you build a movie player that cross fades between two videos, wouldn’t it be lovely to use something like op(‘videoPlayer’).PlayNext() or op(‘videoPlayer’).Loop()? The big idea here is that you should be free to do just that when working with a component, and custom extensions are a big part of that puzzle. This keeps us from reaching deep into the component to change parameters, and instead allows us to write modular code with layers of abstraction. If you’re still not convinced that this is a powerful feature, consider that when you start a car you’re not in the business of specifying how precisely the starter motor sequences each electrical signal to help the car turn over, or which spark plugs fire in which order – you issue a command car.start() with the expectation that the start() function holds all of the necessary information about how the vehicle actually starts. While this example might be reductive, it helps to illustrate the importance of abstraction. It’s impractical for you, the driver, to always be caught up in starting sequences in order to drive a car (one might make an argument against this, but for now let’s roll with the fact that many drivers don’t understand the magic behind a car starting when they turn the key); this embedded function allows the driver to focus on higher order operations – navigation, manipulation, etc. At the end of the day, that’s really what we’re after here – how do we add a layer of abstraction that simplifies how we manipulate a given component.

That’s all well and good, but let’s get to the practical application of these concepts. In this case, before we actually start to see this in action, we need to have a working component to start working with. We are going to imagine that we want to build a generative component that’s got a faceted torus that we use in lots of live sets. We want to be able to change a number of different elements for this Torus – its texture, background, rotation, deformation, to name a few. Let’s begin by putting together a simple render network to build this component, and then we can look at how extensions complement the work we’ve already done.

First let’s add an empty Base COMP to our network.

emptyBase

Inside of our new base let’s add a Camera, Geo, and Light COMP, as well as a Render TOP connected to an Out TOP. We’re building a simple rendering network, which should feel like old hat.

simpleRender

Let’s add a movie file in TOP, and a Composite TOP to our network. We’ll composite our render over our movie file in, so we have a background. In the image below only the changed parameters for the Composite TOP are shown.

simpleRenderWithComposit

Next let’s look inside of our geo COMP, and make a few changes. First let’s change our torus to be a polygon rather than a mesh. We’ll also turn off the display and render flags for the torus (don’t worry, we’ll turn them on further down the chain).

torus

Next we’ll add a noise SOP.

Noise SOP

Next we’ll add a facet SOP, turning on unique points and compute normals.

facetSOP

Finally, let’s add a null SOP. On the null, let’s turn on the display and render flags. When it’s all said and done we should have something that looks like this.

noiseTorusChain

Let’s move up one layer out of our geo, back into the base we started in. Here let’s add a phong Material and apply it to our geo. Let’s also add a movie file in TOP connected to a null TOP, and set it as the color map for our phong.

colorMapAndMaterial

While we’re still thinking about our material, let’s make a few changes. Let’s set our diffuse color to white, our specular color to a light gray, and turn up our shininess to 255.

phongNonDefaults

Shown are the non default parameters for the Phong Material.

Let’s also make a few changes to our light COMP. I’m after a kind of shiny faceted torus, so let’s change our light to a cone light, place it overhead and to the right of our geometry, and set it to look at our geo.

Shown are the non default parameters for the Light Component.

I’ve gone ahead and changed the file in my movie file in TOP to a different default image so I can see the whole torus. In the end you should have a network that looks something like this.

textureTorus

Thinking ahead, I know that I’m going to want to have the freedom of changing a few parameters for this texture. I’d like to be able to control if it’s monochrome, as well as a few of the levels of the image. Let’s add a monochrome TOP and a level TOP between the movie file in and the null TOP.

postProcess

We’re almost ready to start thinking about extensions, but first we need to build a control network to operate our component. Let’s start by adding a constant CHOP and calling it attrAssign. Here I’m going to start thinking about what I want to control in this component. I want to drive the rotation of the x, y, and z axes for our geo, I want to control the amplitude of the noise, the saturation of our image, the black level, brightness, and opacity. I’m going to think of those parameters as:

  • rx
  • ry
  • rz
  • noiseAmp
  • monoVal
  • blkLvl
  • bright
  • opacity

I’ll start out my constant CHOP with those channel names, and some starting values.

attrAssign
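If you’d rather build that constant CHOP with a script than type each channel by hand, a sketch like this would do it – the starting values here are just placeholders, and I’m assuming the operator is named attrAssign as above:

# channel names and starting values for our control constant CHOP
attr = op( 'attrAssign' )

attr.par.name0 , attr.par.value0 = 'rx' , 0
attr.par.name1 , attr.par.value1 = 'ry' , 0
attr.par.name2 , attr.par.value2 = 'rz' , 0
attr.par.name3 , attr.par.value3 = 'noiseAmp' , 0
attr.par.name4 , attr.par.value4 = 'monoVal' , 0
attr.par.name5 , attr.par.value5 = 'blkLvl' , 0
attr.par.name6 , attr.par.value6 = 'bright' , 1
attr.par.name7 , attr.par.value7 = 'opacity' , 1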

For this particular component, I want to be able to set values, and have it smartly figure out transitions rather than needing to constantly feed it a set of smoothly changing values. There are a couple of different ways we might set this up, but I’m going to take the route of using a speed CHOP for one set of operations, and a filter CHOP to smooth everything out. First I want to isolate the rx ry and rz channels, and we can do that with a select CHOP. Next we’ll connect that to a speed CHOP. We can merge this changed set of channels back into the stream with a replace CHOP – replacing our static rx ry rz channels with the dynamic ones.

selectSpeedReplace

Finally, we can smooth out our data with a Filter CHOP, and end our chain of operations in a null CHOP.

controlChops

Our last step here is to export or reference each of our control parameters. Our rotation channels should be referenced by our Geo1 for rx, ry, and rz. The Noise SOP in Geo1 should be connected to the channel noiseAmp, and our image controls should be connected to their respective parameters – Monochrome, Black Level, Brightness, and Opacity. In the end, you should end up with a complete network that looks something like this.

complete BaseCOMP
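If you take the referencing route rather than exporting, the parameter expressions end up looking something like this. The names here are assumptions – I’m pretending the null CHOP at the end of the control chain is called null1, and that the post-process TOPs kept their default names of mono1 and level1 – so rename to match your own network:

# geo1 > Xform page > Rotate X / Y / Z
op( 'null1' )[ 'rx' ]
op( 'null1' )[ 'ry' ]
op( 'null1' )[ 'rz' ]

# geo1/noise1 > Amplitude (note the relative path from inside geo1)
op( '../null1' )[ 'noiseAmp' ]

# mono1 > Monochrome, and level1 > Black Level / Brightness / Opacity
op( 'null1' )[ 'monoVal' ]
op( 'null1' )[ 'blkLvl' ]
op( 'null1' )[ 'bright' ]
op( 'null1' )[ 'opacity' ]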

Alright, we now finally have a basic generative component set up, and we’re ready to start thinking about how we want our extensions to work with this bad boy. Let’s start with the simplest ideas here, and work our way up to something more complex. For starters we need to add a text DAT to our network. I’m going to call mine genGeoClass.

genGeo

Let’s add our first class to our genGeoClass text DAT. Our class is going to contain all of our functions that we want to use with our component. There are a few things we need to keep in mind for this process. First, white space is going to be important – tabs matter, and this is a great place to really learn that the hard way. Naming also matters. We’re eventually going to promote our extensions (more on that later on down), and in order for that to work correctly our functions need to begin with capital letters. That will make more sense as we see it in practice, but for now it’s important to have that tucked away in your mind.

Let’s begin by adding a simple print command. First we define our class, and place our functions inside of the class. When we’re writing a class in Python we need to explicitly place self in our definitions. There are a number of very good reasons for this, and I’d encourage you to read up on the topic if you’re curious:

Why ‘self’ is used explicitly
Why the explicit self has to stay

For our purposes, let’s get our class started with the following:

class GenGeo:

    def SimplePrint( self ):
 
        print( 'Hello World' )
        
        return

Before we can see this in action, we need to set up our base COMP to use extensions. Let’s back out of our base, and take a look at our component parameters.

baseExtensions

Here I’ve set the module reference to be the operator called genGeoClass inside of base1. We can also see that we’re specifically referencing the GenGeo() class that we just wrote. I’ve also gone ahead and turned on promote extensions. Make sure you click “Re-Init Extensions” at the bottom of the parameters page, and then we can see our extension in action.

Next let’s add a text DAT to the same directory as our base1. Here we’ll use the following piece of code to call the SimplePrint() function we just defined:

op( 'base1' ).SimplePrint()

Let’s open our text port, and run our text DAT.

SimpleText

That should feel like a little slice of magic. If you choose not to promote your extensions, the syntax for calling a function looks something like this:

op( 'base1' ).ext.GenGeo.SimplePrint()

Okay, this has been a lot of work so far to only print out “Hello World.” How can we make this a little more interesting? I’m so glad you asked. Here’s a set of functions that I’ve already written. We can copy and paste these into our genGeoClass text DAT, and now we suddenly have a whole new host of functions we can call that perform some meta operations for us.

class GenGeo:

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

To better understand what all of these do let’s look at a quick cheat sheet that I made:

# Test Print Statement
op( 'base1' ).SimplePrint()

# Set Rows and Columns
op( 'base1' ).TorusPar( 20 , 20 )

# Reset Rows and Columns to 10 x 20
op( 'base1' ).TorusParReset()

# Set the texture of a movie file in TOP
op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )

# Reset the Texture of movie file in TOP
op( 'base1' ).TextureReset()

# Set the Rotation Speed for the x y and / or z axis
op( 'base1' ).Rot( 10 , 15 , 20 )

# Reset the Rotation speed to 0, and the rotation values to 0
op( 'base1' ).RotReset()

# Set the Amplitude parameter of the Noise SOP for the Torus
op( 'base1' ).TorusNoise( 0.8 )

# Make the texture Monochrome
op( 'base1' ).Mono( 1.0 )

# Control the Black Level, Brightness, and Opacity of the Texture
# that's applied to the Torus
op( 'base1' ).Levels( 0.25 , 1.5 , 0.8 )

# Reset all post process effects
op( 'base1' ).PostProcessReset()

# Turn off Background Image - 0 will turn the Background back on
op( 'base1' ).Background( 1 )

This is wonderful, but there’s one last thing for us to consider. Wouldn’t it be great if we had some initialization values in here, so at start-up or when we made a new instance of this comp we defaulted to a reliable base state? That would be lovely, and we can set that with an __init__ definition. Let’s add the following to our class:

    def __init__( self ):
 
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1

        return

That means our whole class should now look like this:

class GenGeo:

    def __init__( self ):
        print( 'Gen Init' )
        attr = op('attrAssign')

        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'

        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def SimplePrint( self ):
        print( 'Hello World' )
        return

    def TorusPar( self , rows , columns ):
        op('geo1/torus1').par.rows = rows
        op('geo1/torus1').par.cols = columns
        return

    def TorusParReset( self ):
        op('geo1/torus1').par.rows = 10
        op('geo1/torus1').par.cols = 20 
        return

    def Texture( self , file ): 
        op('moviefilein1').par.file = file
        return

    def TextureReset( self ):
        op('moviefilein1').par.file = app.samplesFolder + '/Map/TestPattern.jpg'
        return

    def Rot( self , rx , ry , rz ):
        attr = op('attrAssign')
 
        attr.par.value0 = rx
        attr.par.value1 = ry
        attr.par.value2 = rz
        return

    def RotReset( self ):
        attr = op('attrAssign')
        speed = op('speed1')
        filterCHOP = op('filter1')
        attr.par.value0 = 0
        attr.par.value1 = 0
        attr.par.value2 = 0
        speed.par.resetpulse.pulse()
        filterCHOP.par.resetpulse.pulse()
        return

    def TorusNoise( self , noiseAmp ):
        op( 'attrAssign' ).par.value3 = noiseAmp
        return

    def Mono( self , monoVal ):
        op( 'attrAssign' ).par.value4 = monoVal
        return

    def Levels( self , blkLvl , bright , opacity ):
        attr = op('attrAssign')
        attr.par.value5 = blkLvl
        attr.par.value6 = bright
        attr.par.value7 = opacity
        return

    def PostProcessReset( self ):
        attr = op('attrAssign')
        attr.par.value4 = 0
        attr.par.value5 = 0
        attr.par.value6 = 1
        attr.par.value7 = 1
        return

    def Background( self , onOff ):
        op('comp1').bypass = onOff
        return

Alright, so why do we care? Well, this application of extensions frees us to think differently about this component. Let’s say that I want to make a few changes to this component’s behavior. First I want to set a new image to be the texture for the torus, next I want to change the rotation speed on the x and y axis, and finally I want to turn up the noise SOP. Previously, I might think about this by writing a series of scripts that looked something like:

op( 'base1/attrAssign' ).par.value0 = 20
op( 'base1/attrAssign' ).par.value1 = 30
op( 'base1/attrAssign' ).par.value3 = 0.8
op( 'base1/moviefilein1' ).par.file = 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg'

Instead, I can now write that like this:

op( 'base1' ).Texture( 'https://farm4.staticflickr.com/3696/10353390565_1fa6dbf704_o.jpg' )
op( 'base1' ).Rot( 20 , 30 , 0 )
op( 'base1' ).TorusNoise( 0.8 )

That might not seem like a huge difference here in our example network, but as we build larger and more complex components, this suddenly becomes hugely powerful as a working approach.

extensionsInAction

Check out the example file on GitHub if you get stuck along the way, or want to see exactly how I made this work.

Geometric Landscapes | TouchDesigner

rolling landscape

Working on a new piece to premiere in Mexico I spent a lot of time experimenting with creating landscapes and backgrounds. Searching for a way into this exploration I wanted to play with the idea of instancing objects in 3D space, and the illusion of moving and shifting planes in space. This is already a popular visual style, and I wanted to try my hand at exploring what it might look like to make something like this in TouchDesigner.

I’ve talked about instancing before, and so this challenge seemed like something that would be both fun and interesting to play with. I also wanted something that mixed material methods – shaded, flat, and wire frame in appearance. Let’s take a look at how we can make something interesting happen using these ideas as a starting point.

Rendering is going to make or break us when thinking about how to set up this project. With that in mind it’s important to remember that a typical rendering set-up needs something to be rendered (some geometry), a perspective from which to draw the object(s) (a camera), and a light source (a light – we don’t always need a light, but as a rule of thumb it’s good to think that we need one). Our Geometry, Light, and Camera are all components, while our render operator is a Texture Operator (TOP). As a point of reference, here’s what a generic rendering set-up looks like:

classic render setup

We can tell our Render TOP to look at multiple Geometry Components in the same network, or we can nest our active surfaces inside of a single GEO. Much of this depends on what you’re looking to create. The most important thing to consider is that surface operators (SOPs) are computed on the CPU unless placed inside of a Geometry COMP – placed inside of a Geo they’re computed on the GPU instead. This makes a huge difference in your performance, and as a best practice it’s good to place any rendered geometry inside of a Geo.

Now let’s take a look at what our rendering set-up is going to look like for making our geometric landscape:

rolling landscape render set-up

Here we can see that we have a similar set-up on the left – a light, a geo, a camera, and a Render TOP. On the right I’ve got the geometry viewer open so we can see a little more about the relationship in the scene of the camera, light, and geometry. We can see that the camera is set above our geometry looking down, we can also see that we have a cone light set-up with a wide angle and a wide delta.

Now that we have a general sense of what we’re making let’s dig-in and make something interesting.

Let’s start with an empty network, and begin by adding a Geo to our network.

geo

To get started we’ll need to dive inside of our geo to start making some changes. You can do this by double clicking on the Geo, using the quick key “i” (shortcut for inside), or by scroll wheel zooming into your geo. Inside our Geo we’ll see a torus that has its render and display flags set (the small blue and purple circles on the surface operator).

Inside of our geo let’s start by first deleting the Torus SOP. Next let’s add a Grid SOP. Our grid is going to act as the key generative element for us inside of this network – it’s going to give our surface its wire texture, the shading on the surface, and the location of where our spheres get placed. Our Grid is the central piece of what we’re making, and we’ll see how in just a bit. Once we’ve added a grid to our network, we need to make a few changes to some of its properties. First we want to make sure that it’s set to be a Polygon for Primitive type; we want to make sure that our orientation is set to ZX Plane; finally we want to change the size to 20 x 20.

grid setup

Next we’ll connect our Grid SOP to a Noise SOP. We’re going to use the Noise SOP to drive some of the shifting locations of the points in our grid. Before we move on, let’s make one quick change. On the Transform page in the Noise SOP’s parameters we can see that the translate z parameter is set to change with the second count of our project – me.time.seconds. This is excellent, and it keeps our noise animated over time, but it also means that we’re only working with 600 samples (in a default TouchDesigner network) because me.time.seconds is locked to our timeline. If, instead, we want noise that doesn’t have a hiccup every 10 seconds, we can use the call me.time.absSeconds. This uses the absolute number of seconds that TouchDesigner has been running to drive the transformations in the noise SOP. It’s a small change, but makes for a nicer look (at least in my opinion).

noise SOP

Next we’ll add a Material SOP to our network. Our Material SOP is going to allow us to assign a material to our grid. We’ll do this by also adding a Wireframe Material to our network. Before we assign our wireframe to our material we should see something like this:

before assignment

To assign the wireframe to the material, we’re just going to drag and drop the MAT onto the SOP.

assign MAT

Finally, we’re going to end this string by adding a Null SOP, making sure to turn on the render and display flags.

wire null

At this point we’ve made the Wireframe outlined elements of our grid. We’re now going to use the same data stream that we’ve already programmed to help us create a another layer of texture, and to create the locations for our spheres. Let’s start by adding our spheres to the network. To do this we’re going to do some instancing. When we’re instancing we need some location information for where to generate the copies of our source geometry. To get this information we’re going to use a SOP to CHOP to convert our SOP information into CHOP data.

SOPto

Now before we move on we need to change gears for just a moment. What we’re going to do next is to add another Geo inside of our current Geo – in Russian Nesting Doll Fashion. Why? Well, we’re going to do this in order to take advantage of the Geo’s ability to instance. Why not use the Copy SOP? I love me some Copy SOP action, but in this case the use of instancing is more efficient for this particular activity.

So, let’s add another Geo to our network:

new geo

Next we’re going to replace the torus inside of this Geo with a sphere. We’re also going to add a material inside of the geo. Easy, right? I’m going to set my sphere to be pretty small ( 0.09, 0.09, 0.09 ), and I’m also going to make sure that I connect my sphere to a Null (in case I want to make any other changes), and then turn on the Null’s display and render flags. Finally I’m going to add a constant Material. When it’s all said and done you should have something like this:

inside the sphere

Excellent. Now when we back out of this nested Geo we should see just a single sphere. What?! Well, now we can set the Geo to instance – my favorite part.

small sphere

Let’s start by taking a look at the parameters of our Geo. Specifically, we want to look at the Instance page. Here we first need to turn on Instancing. Next we’re going to tell our Geo to look at the CHOP called sopto1. Finally we’ll set the TX, TY, and TZ parameters to correspond to the channels called tx, ty, and tz. If this seems like crazy talk, that’s okay – check out the picture below and it should make more sense:

instance page

We also want to head to the Render page of the Geo, and set this geo to use the material ./constant1 (this means: use the material inside of this geo called constant1 – ./ is a directory pointer indicating where to look for the thing in question, in this case a constant).

constant
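For reference, the same Instance page and Render page settings could be made with a few lines of Python. Fair warning – the parameter names below (instancing, instanceop, instancetx and friends, and material) are what recent builds use, and I’m calling the nested geo geo2 for the sake of the example, so check the names in your own network and build before leaning on this:

# turn on instancing and point the geo at our SOP to CHOP data
instanced_geo = op( 'geo2' )
instanced_geo.par.instancing = 1
instanced_geo.par.instanceop = 'sopto1'

# map the tx, ty, and tz channels to the instance translate parameters
instanced_geo.par.instancetx = 'tx'
instanced_geo.par.instancety = 'ty'
instanced_geo.par.instancetz = 'tz'

# use the constant MAT that lives inside this geo
instanced_geo.par.material = './constant1'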

Alright, now we can finally see some of our handy work – you should now see a sphere instanced at each of the vertices of the grid that we’re transforming with noise.

Spheres

Now let’s kick it up a notch by adding another Geo to our network.

last geo

We’re going to treat this Geo slightly differently. Inside let’s add an In SOP and a Phong Material. On our In SOP let’s make sure that the display and render flags are turned on, and for your Phong choose a nice dark color – I’m choosing a deep crimson.

geo3

The In SOP allows us to pass in the geometry that we’ve already made, acting as a kind of short-cut for us. When we go back to our Geo we just need to make sure that we’ve set our Render material to be ./phong1 – like with our constant this is the pathway to and the name of the material we want to use.

geo3mat

Alright, now you should have a network that looks something like this:

complete geo network

Now we’re ready to move out of our Geo and get ready to render our scene. Zooming out of our Geo we should see a network that looks something like this:

work space

In order to render our scene we need to revisit what we talked about at the beginning of the post – we need to add a Camera, a Light, and a Render TOP. Let’s go ahead and add these to our network.

rendersetupGeo

What gives?! This doesn’t look right at all. Well, part of what’s happening is that we don’t have our camera and light positioned correctly to render the scene. To make this easier to understand, let’s change up our work space so we can use the geometry viewer (one of my favorite tools). Let’s start by dividing the workspace into two windows. We can do this by using the split work space icon in the menu bar:

menu bar

I like the vertical split, but you can choose whatever arrangement works for you. When you initially click on this split you’ll see two views of your current network location:

split step1

In order to see the geometry viewer we need to change the pane type selection for one of our windows. Clicking on the small drop down triangle will reveal a menu of network views. Let’s select Geometry Viewer from the list.

pane type

You should now see your network on one side, and the geometry from our Geo comp on the other side.

geo viewer

Now we’re cooking with gas. Alright, let’s make our lives just a little easier and change the scale of our light and camera to make them easier to see. Click on your light COMP; if your parameters window has disappeared you can bring it back by pressing “p” on your keyboard. In the scale parameter, let’s turn that up to 10, 10, 10.

light scale

Great, now we can see part of the reason that our scene isn’t rendering the way we want it to. Our light is currently set up as a point light that’s positioned at the edge of our geometry. Let’s make some changes to our light’s properties so we get something closer to what we want. Let’s start by changing the location of our light, I’m going to place mine at 0, 10, 0.

lightplacement

Now let’s take a look at the Light page of the parameters window. Here I’m going to change my light to a Cone, and alter the cone size, delta, and rolloff. Experiment with different settings here to find something that you like.

light properties

Before you get too much experimenting done, however, you’ll probably notice that the cone of our light isn’t pointing towards the surface of our geometry. We can fix this by heading back to the Xform page of the parameters window, and setting the rotation values to -90, 0, 0.

light ROT

With our light starting to take shape, let’s get our camera in order. Over at our camera, let’s make the same initial change we made to the light and turn the scale up to 10, 10, 10 – this is going to make it much easier for us to find our camera.

cameraScale

With the scale turned up we can see that our camera needs to be translated back and up, and rotated downwards so it’s looking at the geometry. After a little bit of adjusting I think I like my camera at:

TX    0
TY    6.2
TZ    18.9
RX    -16.8
RY    0
RZ    0

cameraTrans
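If you’d rather script the placement than drag parameters around, the same light and camera settings look something like this – cam1 and light1 are the default operator names, and the values are just the ones I landed on above:

# place the cone light above the geometry, pointing down
light = op( 'light1' )
light.par.ty = 10
light.par.rx = -90

# pull the camera back and up, then tilt it down at the scene
cam = op( 'cam1' )
cam.par.ty = 6.2
cam.par.tz = 18.9
cam.par.rx = -16.8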

Boom! Alright, let’s close our geometry viewer and take a closer look at our Render TOP to see what we’ve made.

finalRender

Nice work. This is a good looking start, and now that you know how it’s made you can start to really have fun – camera placement, light placement, noise amplitude, you name it, go crazy, make something fun or weird, or just silly.

TouchDesigner | Replicators, and Buttons, and Tables, oh my

replicator cue buttons

I want 30 buttons. Who doesn’t want 30 buttons in their life? Control panels are useful for all sorts of operations in TouchDesigner, but they can also be daunting if you’re not accustomed to how they work. In the past when I’ve wanted lots of buttons and sliders I’ve done all of my lay-out work the hard way… like the hardest way possible, one button or slider at a time. This is great practice, and for anyone who is compulsively organized this activity can be both maddening and deeply satisfying at the same time. If you feel best when you’re meticulously aligning your buttons and sliders in perfect harmony, this might be the read for you. If, however, you like buttons (lots of them), and you want to be able to use one of the most powerful components in TouchDesigner, then Replicators might just be the tool you’ve been looking for – even if you didn’t know it.

Replicators, big surprise, make copies of things. On the surface of it, this might not seem like the most thrilling of operators, but let’s say that you want to automate a process that might be tedious if you set out to do it by hand. For example, let’s say you’re designing a tool or console that needs 30 buttons of the same type, but you’d like them to follow a template and have unique names. You could do this one button at a time, but if you ever change something inside of the button (like maybe you want the colors to change depending on their select state), making that change to each button might be a huge hassle. Replicators can help us make that process fast, easy, and just plain awesome.

First thing, let’s make a container and dive inside of it to start making our buttons. Next let’s make a table. I’m going to use my table DAT to tell the replicator how to make the copies of my button. As you start to think about making a table imagine that each row (or column) is going to be used to create our copies. For example, I’ve created a table with thirty rows, Cue 1 – Cue 30.

Screenshot_030614_011703_AM

Next let’s make a button COMP and rename it button0 (don’t worry, this will make sense in a few minutes).

Screenshot_030614_012057_AM

Alright, now we’re gonna do some detailed work to set-up our button so some magic will happen later. First up we’re going to make a small change to the Align Order of our button. The Align Order parameter of a button establishes the order in which your buttons are arranged in the control panel when you use the Align Parameter (if this doesn’t make sense right now, hang in there and it will soon). We’re going to use a simple python expression to specify this parameter. Expand the field for the Align Order and type the python expression:

me.digits

What’s this all about? This expression is calling for the number that’s in the name of our button. Remember that we renamed our button to button0. Using this expression we’ve told TouchDesigner that the alignment order parameter of this button should be 0 (in many programming languages you start counting at 0, so this makes this button the first in a list).

Screenshot_030614_012854_AM

Alright, now let’s start making some magic happen. Let’s go inside of our button and make a few changes to the background text. If this is your first time looking inside of the button component take a moment to get your bearings – we have a table DAT that’s changing the color of the button based on how we interact with this component, we have some text that we’re using for the background of the button, and we have a few CHOPs so we can see what the state of our button is when it’s toggled or pressed.

Screenshot_030614_013526_AM

Next we’re going to take a closer look at just the text TOP called bg. Looking first at the text page of the parameters dialogue gives us a sense of what’s happening here, specifically we can see that the text field is where we can change what’s displayed on this button. First up we’re going to tell this TOP that we want to pull some data from a DAT. In the DAT field type the path:

../table1

Next change the DAT Row parameter to the following Python expression:

me.parent().digits

Finally, clear the text parameter so it’s blank. With any luck your button should now read, “Cue 1,” or whatever you happened to type in the first row of your table.

Screenshot_030614_014601_AM

So what’s happening here?! First, we’re telling this TOP that we’re going to use some information from a table. Specifically, we’re going to use a table whose name is table1, and its location is in the parent network (that’s what ../ means). If we were to take our network path ../table1 and write it as a sentence it might read something like this – “hey TouchDesigner, I want you to use a table DAT for this TOP; to find it look in the network above me for something named table1.” Next we have an expression that’s telling this TOP what row to pull text from. We just used a similar expression when we were setting our alignment order, and the same idea applies here – me.parent().digits as a sentence might read “hey TouchDesigner, look at my parent (the component that I’m inside of) and find the number at the end of its name.” Remember that we named this component button0, and that 0 (the digits from that name) corresponds to the first row in our table.
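If it helps to see the same lookup spelled out as a single Python expression, something like the line below (a sketch of the idea, not the actual parameter set-up) pulls the matching cell out of the table when evaluated from inside button0:

# from inside button0: grab the row matching the button's digits, column 0
op('../table1')[me.parent().digits, 0]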

Now it’s time to blow things up. Let’s add a replicator component to our network.

Screenshot_030614_015654_AM

In the parameters for our replicator first specify that table1 is the template table. Next let’s change the Operator Prefix to button to match the convention that we already started. Last but not least, set your Master Operator to be your button0.

Screenshot_030614_020047_AM

If we set up everything correctly we should now have a total of 30 buttons arranged in a grid in our network. They should also each have a unique name that corresponds to the table that we used to make this array.

Screenshot_030614_020310_AM

Last but not least, let’s back up and see what this looks like from the container view. At this point you should be sorely disappointed, because our control panel looks all wrong – in fact it looks like there’s only one button. What gives?

Screenshot_030614_020604_AM

The catch here is that we still need to do a little bit of housekeeping to get to what we’re after. In the Container’s parameters on the Layout page make sure the Align parameter is set to “Layout Grid Columns” or “Layout Grid Rows,” depending on which view better suits your needs. You might also want to play with the Align Margin if you’d like your buttons to have a little breathing room.

Screenshot_030614_020954_AM

Bingo-bango, you’ve just made some replicator mojo happen. With your new grid of buttons you’re ready to do all sorts of fun things. Remember that each of these buttons is based on the template of the master operator that we specified in the replicator – in our case that’s button0. This means that if you change that operator (maybe you change the color, or add an expression inside of the button), clicking “Recreate All Operators” on the replicator remakes your buttons with those changes applied to every one of them.
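One small convenience worth noting: that “Recreate All Operators” button is just a pulse parameter, so you can also fire it from a script, say whenever your cue table changes. A rough sketch, assuming the replicator kept its default name:

# re-run the replicator after editing the master button or the template table
# ('replicator1' is the default name - double check the parameter name in your build)
op('replicator1').par.recreateall.pulse()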

Happy replicating.


A big thank you to Richard Burns and Space Monkeys for sharing this brief tutorial that inspired me to do some more looking and playing with Replicators:

TouchDesigner | The Underlying Geometry

One of the benefits of working with TouchDesigner is the ability to work in 3D. 3D objects are in the family of operators called SOPs – Surface Operators. One of the aesthetic directions that I wanted to explore was the feeling of looking into a long box. The world inside of this box would be characterized by examining artifacts as either particles or waves with a vaguely dual-slit kind of suggestion. With that as a starting point I headed into making the container for these worlds of particles and waves.


Before making any 3D content it’s important to know how TouchDesigner processes these objects in order to display them. On their own, Surface Operators can’t be displayed as a rendered texture. In TouchDesigner’s idiom textures are two-dimensional surfaces, and it follows that the operators that live in that category are called TOPs, Texture Operators. Operators from different families can’t be directly connected with patch cords. In order to pass the information from a SOP to a TOP one must use a Render TOP. The Render TOP must be connected to three COMPs (Components) in order to create an image that can be displayed: a Geometry COMP (something to be rendered), a Light COMP (something to illuminate the scene), and a Camera COMP (the perspective from which the object is to be rendered). In this respect TD pulls from conventions familiar to anyone who has worked with Adobe’s After Effects.
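For reference, that same minimum render setup can be sketched out in Python. This is only an illustration of the three required pieces, with names that are purely illustrative, assuming the script runs inside an otherwise empty Base COMP:

# the three COMPs a Render TOP needs: something to render,
# something to light the scene, and something to look through
geo = parent().create(geometryCOMP, 'geo1')
cam = parent().create(cameraCOMP, 'cam1')
light = parent().create(lightCOMP, 'light1')

render = parent().create(renderTOP, 'render1')
render.par.geometry = geo.name
render.par.camera = cam.name
render.par.lights = light.name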

Knowing the component pieces required in order to successfully render a 3D object, it’s easier to understand how I started to create the underlying geometry. The Geometry COMP is essentially a container object (with some special attributes) that holds the SOPs responsible for passing a surface to the Render TOP. The default Geometry COMP contains a Torus SOP as its geometry.

We can learn a little about how the COMP is working by taking a look inside of the Geometry object. 


Here the things to pay close attention to are the two flags on the torus object. You’ll notice in the bottom right corner there are a purple and a blue circle that are illuminated. The purple circle is the “Render Flag” and tells TouchDesigner to render the object, and the blue circle is the “Display Flag,” which tells TouchDesigner that this is the object that should be displayed in the Geometry COMP.
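Those two flags are also exposed to Python, which can be handy if you ever need to toggle geometry on and off programmatically. A small sketch, assuming the default torus inside a Geometry COMP named geo1:

# flip the render and display flags on the torus SOP
torus = op('geo1/torus1')
torus.render = True    # purple circle: include this SOP in renders
torus.display = True   # blue circle: show this SOP in the COMP's viewer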

Let’s take a look at the network that I created.

Now let’s dissect how my geometry network is actually working. At first glance we can see that multiple objects are being combined into a single piece of geometry that’s ultimately being passed out of this Geometry COMP. 

If we look closer we’ll see that the SOP network looks like this:

Grid – Noise – Transform – Alpha Noise (here the bypass flag is turned on)

Grid creates a plane that’s made out of polygons. This is different from a rectangle, which is composed of only four points. In order to create a surface that can deform I needed a SOP with points in the middle of it. The grid is attached to a Noise SOP that’s animating the surface. Noise is attached to a Transform SOP that allows me to change the position of this individual plane. The last stop in this chain is another Noise SOP. Originally I was experimenting with varying the transparency of the surface. Ultimately, I decided to move away from this look. Rather than cutting this out of the chain, I simply turned on the Bypass Flag, which turns off this single SOP. This whole chain is repeated eight more times (for a total of nine grids).
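That Bypass Flag can also be flipped from Python, which makes it easy to A/B a look without deleting anything. A quick sketch, assuming the alpha-noise SOP picked up the default name noise2:

# bypass the alpha-noise SOP instead of removing it from the chain
op('noise2').bypass = True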

These nine planes are then connected so that the rest of the network looks like this:

Merge – Transform – Facet – Texture – Null – Out

Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move the object as a whole in space. Facet is a handy operator that lets you compute the normals of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring; ultimately I ended up turning on the bypass flag for this SOP as well. A Null, like in other environments, is really just a placeholder kind of object. In the idiomatic structure of TouchDesigner, the Null is operationally an object that one places at the end of an operator string. This is considered a best practice for a number of reasons, and high on the list is that it makes changing the string easy: TouchDesigner allows the programmer to insert operations between objects, and by always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t strictly necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.
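To see why ending in a Null pays off, here’s a rough sketch of inserting a new SOP just ahead of the Null from Python without touching anything downstream. The operator names here are assumptions for the sake of the example:

# drop a Twist SOP in front of null1; everything referencing null1 keeps working
geo = op('geo1')                  # the Geometry COMP from this network
null = geo.op('null1')
upstream = null.inputs[0]         # whatever currently feeds the null
twist = geo.create(twistSOP, 'twist1')
twist.inputConnectors[0].connect(upstream)
null.inputConnectors[0].connect(twist)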
Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move object as a whole in space. Facet is a handy operator that allows you to compute the normals’ of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring, ultimately  ended up turning on the bypass flag for this SOP. A null, like in other environments, is really just a place holder kind of object. In the idiomatic structure of TouchDesigner, the Null is operationally an object that one places at the end of operation string. This is considered a best practice for a number of reasons. High on the list of reasons to end a string in a Null is because this allows easy access for making changes to a string. TouchDesigner allows the programmer to insert operations between objects. By always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.