Category Archives: Media Design

TouchDesigner | GitHub and External Toxes

git push git paid

Working with GitHub and TouchDesigner can be a real adventure – and the questions I hear most often are why and how. A while ago I did a little write-up about working with git and Touch, but I realize that’s still a bit intimidating for folks. This year at the TouchDesigner Summit, Zoe Sandoval and I taught a workshop about modular architectures and working with TouchDesigner. One of the pieces we didn’t have time to really pull apart was working with git, so before we shared the videos from the workshop we thought it might be helpful to give some perspective on working with git and where it might save you on a project.


Individual Videos

TouchDesigner | Case Study | Custom Parameters and Cues

I recently had the good fortune of being able to collaborate with my partner, Zoe Sandoval, on their MFA thesis project at UC Santa Cruz – { remnants } of a { ritual }. Thesis work is strange, and even the best of us who have managed countless projects will struggle to find balance when our own work is on the line – there is always the temptation to add more, do more, extend the piece a little further, or add another facet for the curious investigator. Zoe had an enormous lift in front of them, and I wanted to help streamline some of the pieces that already had functioning approaches, but would have benefited from some additional attention and optimization. Specifically, how cues and states operated was an especially important area of focus. I worked closely with the artist to capture their needs around building cues / states and translate that into a straightforward approach that had room to grow as we made discoveries and iterated during the last weeks leading up to opening.

The Big Picture

{ remnants } of a { ritual } is an immersive installation composed of projection, lighting, sound, and tangible media. Built largely with TouchDesigner, the installation required a coordinated approach for holistically transforming the space with discrete looks. The projection system included four channels of video (two walls, and a blended floor image); lighting involved one overhead DMX-controlled instrument (driven by an ENTTEC USB Pro) and four IoT Philips Hue lights (driven by network commands – you can find a reusable approach on GitHub); sound was comprised of two channels driven by another machine running QLab, which required network commands sent as OSC. The states of each of these endpoints, the duration of the transition, and the duration of the cue were all elements that needed to be both recorded and recalled to create a seamless environmental experience.

Below we’re going to step through some of the larger considerations that led to the solution that was finally used for this installation. Before we get there, though, it’s helpful to have a larger picture of what we actually needed to make. Here’s a quick run-down of some of the big ideas:

  • a way to convert a set of parameters to a Python dictionary – using custom parameters rather than building a custom UI is a fast way to create a standardized set of controls without the complexity of lots of UI building in Touch.
  • a reusable way to use storage in TouchDesigner to keep a local copy of the parameters for fast reference – typically in these situations we want fast access to our stored data, and that largely looks like Python storage; more than just dumping pieces into storage, we want to make sure that we’re thoughtfully managing a data structure with a considered and generalized approach.
  • a way to write those collections of parameters to file – JSON in this case. This ensures that our preset data doesn’t live in our toe file and is more easily transportable or editable without having TouchDesigner open. Saving cues to file means that we don’t have to save the project when we make changes, and it also means that we have a portable version of our states / cues. This has lots of valuable applications, and is generally something you end up wanting in lots of situations.
  • a way to read those JSON files and put their values back into storage – it’s one thing to write these values to file, but it’s equally important to have a way to get the contents of our file back into storage.
  • a way to set the parameters on a COMP with the data structure we’ve been moving around – it’s great that we’ve captured all of these parameters, but the real work is deciding what to do with that data once we have it.

Cuing needs

One of the most challenging, and most important, steps in the process of considering a cuing system is to identify the granularity and scope of your intended control. To this end, I worked closely with the artist to understand both their design intentions and their needed degrees of control. For example, the composition of the projection meant that the blended floor projection was treated as a single input image source; similarly, the walls were a single image that spanned multiple projectors. In these cases, rather than managing all four projectors individually, it was sufficient to think in terms of the whole compositions that were being used. In addition to the images, it was important to the artist to be able to control the opacity of the images (in the case of creating a fade-in / out) as well as some image adjustments (black level, brightness, contrast, HSV offset). Lighting and sound had their own sets of controls – and we also needed to capture a name for the cue that was easily identifiable.

As lovely as it would be to suggest that we knew all of these control handles ahead of time, the truth is that we discovered which controls were necessary through a series of iterative passes – each time adding or removing controls that were either necessary or too granular. Another temptation in these instances is to imagine that you’ll be able to figure out your cuing / control needs on your feet – while that may be the case in some situations, it’s tremendously valuable to instead do a bit of planning about what you’ll want to control or adjust. You certainly can’t predict everything, and it’s a fool’s errand to imagine that you’re going to use a waterfall model for these kinds of projects. A more reasonable approach is to make a plan, test your ideas, make adjustments, test, rinse, repeat. An agile approach emphasizes smaller incremental changes that accumulate over time – this requires a little more patience, and a willingness to refactor more frequently, but has huge advantages when wrestling with complex ideas.

Custom Pars

In the past I would have set myself to the task of handling all of these controls in custom-built UI elements. If I were creating an interface for a client and had sufficient time to address all of the UI / UX elements I might have taken that approach here, but since there was considerable time pressure it was instead easier (and faster) to work with custom parameters. Built-in operators have their own set of parameters, and Touch now allows users to customize Component operators with all of the same parameters you find on other ops. This customization technique can be used to build control handles that might otherwise not need complete UI elements, and can be further extended by using the Parameter COMP – effectively creating a UI element out of the work you’ve already done while specifying the custom parameters. The other worthwhile consideration to call out here is your ability to essentially copy parameters from other operators. Consider the black level, contrast, and brightness pars included above. One approach would be to create each par individually and set its min, max, and default values. It would, however, be much faster if we could just copy the attributes from the existing Level TOP. Luckily we can do just that with a small trick.

We start by creating a Base COMP (or any Component operator), right-clicking on the operator, and selecting Customize Component.

This opens the Customize Component dialogue where we can make alterations to our COMP. Start by adding a new page to your COMP and notice how this now shows up on the component’s parameter pages:

For now let’s also add a level TOP so we can see how this works. From your level TOP click and drag a parameter over to the customize component dialogue – dragging specifically to the Parameter column on the customize component dialogue:

This process faithfully captures the source parameter’s attributes – type, min, max, and default values – without you needing to input them manually. In this style of workflow the approach is to first build your operator networks so you know what ops you will want to control. Rather than designing the UI and later adding the operator chain, you instead start with the operator chain and only expose the parameters you’ll need / want to control. In this process you may find that you need more or fewer control handles, and this style of working easily accommodates that kind of change.

Capturing Pars

Creating a component with all of your parameters is part of the battle; meaningfully extracting those values is another kettle of fish. When possible it’s always handy to take advantage of the features of a programming language or environment. In this case, I wanted to do two things – first, I wanted to be able to stash cues locally in the project for fast retrieval; second, I wanted a way to write those cues to disk so they weren’t embedded in a toe or tox file. I like JSON as a data format for these kinds of pieces, and the Python equivalent of JSON is the dictionary. Fast access in TD means storage. So here we have an outline for our big-picture ideas – capture the custom parameters, stash them locally in storage, and write them to disk.

One approach here would be to artisanally capture each parameter in my hipster data structure – and while we could do that, time spent fighting with these types of ideas has taught me that a more generalized approach will likely be more useful, even if it takes a little longer to get right. So what does that look like exactly?

To get started, let’s create a simple set of custom parameters on a base COMP. I’m going to use the trick we learned above to move over a handful of parameters from a level TOP: Black Level, Contrast, and Opacity:

To create a dictionary out of these pars I could write something very specific to my approach that might look something like this snippet:

At first glance that may seem like an excellent solution, but as time goes on this approach will let us down in a number of ways. I won’t bother to detail all of them, but it is worth capturing a few of the biggest offenders here. This approach is not easily expanded – if I want to add more pars, I have to add them directly to the method itself. For a handful this might be fine, but past ten it will start to get very messy. This approach also requires duplicate work – I have to manually verify that each key name matches its parameter name (we don’t have to keep them matched, but we’ll see later how this saves us a good chunk of work), and if I misspell a word here I’ll be very sorry for it later. The scope of this approach is also very narrow – very. The target operator variable is set inside of the method, meaning that this approach will only ever work for this operator, at this level in the network. All of that and more means that while I can use this approach, I’m going to be sorry for it in the long run.

Instead of the rather arduous process above, we might consider a more generalized approach to solving this problem. Lucky for us, we can use the pars() method. The pars() method is very handy for this kind of work, the major catch being that pars() will return all of the parameters on a given operator. That’s all well and good, but what I really wanted here was to capture only the custom parameters on a specific page, and to be able to ignore some parameters (I didn’t, for example, need / want to capture the “save cue” parameter). What might this kind of approach look like? Let’s take a look at the snippet below.

Abstract Reusable code segment
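A minimal sketch of that generalized function – the names and docstring style are illustrative, and it leans on TouchDesigner’s pars() method along with each par’s page and isCustom members:

```python
def pars_to_dict(target_op, page_name, preset_name, ignore_pars=None):
    """Convert one custom page of parameters into a Python dictionary.

    Args
    ---------
    target_op (OP)
    > the operator whose custom parameters we want to capture

    page_name (str)
    > the name of the custom page to convert

    preset_name (str)
    > the key this new preset should be filed under

    ignore_pars (list)
    > names of any utility pars to skip - a "Savecue" button, for example

    Returns
    ---------
    preset (dict)
    > {preset_name: {par_name: par_value, ...}}
    """
    ignore_pars = ignore_pars or []
    captured = {}

    # pars('*') returns every parameter on the op, so we filter down to
    # just the custom pars on the page we care about
    for par in target_op.pars('*'):
        if not par.isCustom:
            continue
        if par.page.name != page_name:
            continue
        if par.name in ignore_pars:
            continue
        captured[par.name] = par.eval()

    return {preset_name: captured}
```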

What exactly is happening here? First off, this one is full of documentation so our future selves will know what’s happening – in fact this is probably more docstring than code. The big-picture idea is that rather than thinking about this problem one parameter at a time, we instead want to think about entire custom pages of parameters. Chances are we want to re-use this, so it’s been made fairly general – we pass in an operator, the name of the page we want to convert to a Python dictionary, the name of our newly made preset, and a list of any parameters we might want to skip over. Once we pass all of those pieces into our function, what we get back is a dictionary full of those parameters.

Capture to Storage

Simply converting the page of parameters to a dictionary doesn’t do much for us – while it is a very neat trick, it’s really about what we do with these values once we have them in a data structure. In our case, I want to put them into storage. Why storage? We certainly could put these values into a table – though there are some gotchas there. Table values in TouchDesigner are always stored as strings – we might think of this as text. That matters because computers are notoriously picky about data, and need to differentiate between words, whole numbers, and numbers with decimal values. Programmers refer to words as strings, whole numbers as integers or ints, and numbers with decimal values as floats. Keeping all of our parameter values in a table DAT means they’re all converted to strings. Touch mostly does an excellent job of making this invisible to you, but when it goes wrong it tends to go wrong in ways that can be difficult to debug. Using storage places our values in a Python dictionary where our data types are preserved – not converted to strings. If you’re only working with a handful of cues and a handful of parameters this probably doesn’t matter – but with 20 or more parameters it doesn’t take many cues before working in native data types makes a big difference. For reference, an early iteration of the cuing system for this project would have needed the equivalent of a table DAT with over 1000 rows to accommodate the stored parameters. These things add up quickly, more quickly than you first imagine they might.

Okay, so what’s an example of a simple and reusable function we might use to get a dictionary into storage:
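A sketch along those lines, using TouchDesigner’s store() and fetch() methods – the 'presets' storage key is an assumption, not a requirement:

```python
def dict_to_storage(store_op, preset_dict):
    """Merge a preset dictionary into an operator's storage.

    Using fetch() with a default means this works on the very first save,
    and values stay in their native Python types rather than strings.
    """
    presets = store_op.fetch('presets', {})  # 'presets' key is illustrative
    presets.update(preset_dict)
    store_op.store('presets', presets)
    return presets
```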

Write to file

Similar to the above, we likely want a simple way to write our stored cues to disk in the same format we’re using internally. Python dictionaries and JSON are nearly interchangeable data structures and for our needs we can think of them as being the same thing. We do need to import the JSON module to get this to work correctly, but otherwise this is a straightforward function to write.
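A sketch of that function – it assumes the same illustrative 'presets' storage key as above:

```python
import json

def presets_to_file(store_op, file_path):
    """Write the presets held in storage out to disk as JSON."""
    presets = store_op.fetch('presets', {})  # 'presets' key is illustrative
    with open(file_path, 'w') as f:
        json.dump(presets, f, indent=4)
```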

What you end up with will look like this:
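The resulting JSON might look something like this – the cue names and values are purely illustrative:

```json
{
    "cue01": {
        "Blacklevel": 0.0,
        "Contrast": 1.0,
        "Opacity": 0.5
    },
    "cue02": {
        "Blacklevel": 0.1,
        "Contrast": 1.2,
        "Opacity": 1.0
    }
}
```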

Reading from file

We’re close now to having a complete approach for working with cues / states. Our next puzzle piece is a way to read our JSON from disk and replace what we have in storage with the file’s contents.

What you end up with here might look like this:
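A sketch of that read-and-replace function, again assuming the illustrative 'presets' storage key:

```python
import json

def presets_from_file(store_op, file_path):
    """Read a JSON preset file and replace what's in storage with its contents."""
    with open(file_path) as f:
        presets = json.load(f)
    store_op.store('presets', presets)  # 'presets' key is illustrative
    return presets
```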

Loading pars – does it work?

This part is the most tricky. Here the big idea is to create a duplicate operator that has all of the same custom parameters as our preset maker. Why? That would mean that all of the parameter names match – which makes loading parameters significantly easier and more straightforward. The other trick here is to leave out any of the pars on our ignore list – thinking back, this is to ensure that we don’t use any of the parameters that we don’t want / need outside of recording them. We can start this process by making a copy of the operator that’s being used to capture our parameters, and then deleting the pars we don’t need. Next we need to write a little script to handle moving around all of the values. That should look something like this:
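A sketch of that loading script – it assumes the preset dictionary’s keys match the parameter names on the duplicate COMP, which is exactly what the matching-names trick above buys us:

```python
def dict_to_pars(target_op, preset):
    """Push a stored preset back onto a COMP's custom parameters.

    Because every key was captured from a parameter name, each key lines
    up with a parameter on the duplicate COMP -- no manual mapping needed.
    """
    for par_name, value in preset.items():
        par = getattr(target_op.par, par_name, None)
        if par is not None:  # quietly skip keys with no matching par
            par.val = value
```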

Making a Module

All of this is a good bit of work, and if you’ve been following along, you probably now have a handful of text DATs doing all of this work. For the sake of keeping tidy, we can instead put all of this into a single DAT that we can use as a python module. Wrapping all of these pieces together will give us something like this:

If you want to see how this works and pull it apart from there, you can pull an example TOE file from this repo.

TD JSON – another alternative

There’s another alternative to this approach – the new TDJSON elements that now ship with TouchDesigner. You can read about them on Derivative’s wiki here. These tools are a promising alternative, and you can achieve very similar results with them. In particular we might use something like pageToJSONDict() to do what we’re after. That might look something like this:

That’s slick, but what we get back is almost 75 lines worth of JSON. This feels a little overkill to me for what we’re after here – there’s lots of potential here, but it might be a little more than we need for this actual implementation. Maybe not though, depending on how you want to change this process, it might be just perfect.

Safety Rails

There are some pieces missing in the approach above that I ended up including in the actual implementation for the artist – I’m not going to dig into all of these pieces, but it is worth calling attention to some of the other elements that were included. The biggest pieces that needed to be addressed were how we handled failure and duplicates, provided confirmation of an operation, or warned the user about possibly unintended operations.

The artist, for example, wanted to both have the UI flash and to get a message printed to the textport when a preset was successfully saved. The artist also wanted to make sure that a preset wasn’t automatically overwritten – instead they wanted to see a pop-up message warning that a preset was going to be overwritten, allowing the user to confirm or cancel that operation.

That may seem unnecessary for a tool you build for yourself… until it’s 2am and you haven’t slept, or you’re working fast, or there’s a crit in 10 minutes and you want to make one more adjustment, and and and, or or or. Handling these edge cases can not only add peace of mind, but also ensure you keep your project on the straight and narrow.

Additionally, how you handle failure in these situations is also important to plan – we never want these pieces to fail, but having a graceful solution for how to handle these moments is tremendously important to both work through and plan. If nothing else, that means elegantly handling the failure and printing a message – better still if you give yourself a clue about what went wrong. A few breadcrumbs can go a long way towards helping you find the trail you lost. In my opinion, failing code gets a bad rap – it’s something we grumble over, not something we celebrate. The truth of the matter, however, is that failures are how we make projects better. Being able to identify where things went wrong is how you can improve on your process. It’s a little thing, but if you can shift (even if only slightly) how you feel about a failing process, it will allow you some room to embrace an iterative process more easily.


Managing states / cues is tricky. It’s easy to consider this a rather trivial problem, and it isn’t until you really take time to think carefully about what you’re trying to achieve that you uncover the degree of complexity in the questions around how you manage the flow of information in your network. You won’t get it right the first time, but chances are you didn’t ride a bike without a few falls, and you probably didn’t learn to play that instrument without getting a few scales wrong. It’s okay to get it wrong, it’s okay to refactor your code, it’s okay to go back to the drawing board as you search for what’s right – that’s part of the process, and it’s part of what will ultimately make for a better implementation.

No matter what, hang in there… keep practicing, leave yourself breadcrumbs – you’ll get there, even if it takes you longer than you want.

Happy programming.

Zoe Sandoval’s { remnants } of a { ritual }

You can see { remnants } of a { ritual } and the work of the DANM MFA Cohort through May 12th at UC Santa Cruz.

textport for performance | TouchDesigner


I love a good challenge, and today on the TouchDesigner Slack channel there was an interesting question about how you might go about getting the contents of the textport into a texture to display. That’s a great question, and I can imagine a circumstance where that might be a fun and interesting addition to a set. Sadly, I have no idea how you might make that happen directly. I looked through the wiki a bit to see if there were any leads, and it’s difficult to see if there’s actually a good way to grab the contents of the textport.

What do we do then?!

Well, it just so happens that this might be another great place to look at how to take advantage of using extensions in TouchDesigner. Here our extension is going to do some double duty for us. The big picture idea is that we’ll want to be able to use a single call to either display a string, or print and display a string. If you only wanted to print it you could just use print(), so we’ll leave that one out of the mix for now.

Let’s take a look at the extension and then pull apart what’s happening inside.
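The broad strokes of such an extension might look like this sketch – the internal table DAT name, the row limit, and the method names here are all assumptions:

```python
class TextportExt:
    """Sketch of an extension that logs messages to a table DAT for display."""

    def __init__(self, ownerComp):
        self.ownerComp = ownerComp
        self.Log = ownerComp.op('table_log')  # assumed internal table DAT
        self.MaxRows = 10                     # how many rows stay visible
        self.StartRow = 0
        self.EndRow = self.MaxRows

    def Display(self, message):
        """Append a message, sliding the visible window once we pass the limit."""
        self.Log.appendRow([message])
        if self.Log.numRows > self.MaxRows:
            self.StartRow += 1
            self.EndRow += 1

    def PrintAndDisplay(self, message):
        """Print to the textport, then reuse Display() for the on-screen copy."""
        print(message)
        self.Display(message)
```

A select DAT inside the COMP can then pull its start and end rows from these members before the table is converted for a text TOP.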

Okay, so what exactly are we doing here?!

The big picture is that we want a way to be able to log something to a text object that can be displayed. In this case I chose a table DAT. The reasoning here is that a table DAT, before being converted to a text DAT, allows us to do some simple clean-up and line adjustments. Each new entry is posted in a row – which makes for an easy way to limit the number of displayed rows. We can do this with a select DAT – which is where we use our StartRow and EndRow members.

Why exactly do we use these? Well, this helps ensure that we can keep our newest row displayed. A text TOP can accept a text DAT of any length, but at some point the text will spill off the bottom – unless you use adaptive sizing. The catch there is that at some point the text will become impossible to read. A top and bottom boundary ensures that we can always have some portion of our text displayed. We use a simple logical test in our Display() method to see if we’ve hit that boundary yet, and if we have we increment both members by one… moving them both along at the same time.

You may also notice that we have a separate method to display and print… why not just do this in a single method? Well, that’s a great question. We could just use a single method for this with another argument. That’s probably a better way to tackle this challenge, but I wanted to use this opportunity to show how we might call another method from within our class. This can be helpful in a number of different situations, and while this application is a little too simple to really take advantage of that technique, it gives you a peek into how it might work.

Want to download the tox and take it for a test drive? You can find the source code here.

Media Design | Building Projection Mapping

One of the courses I’m taking in my first year at ASU is a course called Media Design Applications. This course is centered around the use of various media design techniques in specific relation to their application in a theatrical setting. One of the techniques that we discussed in class is architectural projection mapping. This form has quickly become popular for its forced perspective and its opportunities for complex illusion. The underlying principle of projection mapping is to highlight and take advantage of physical form in order to create the illusion that the entire surface is, itself, a screen. There are a variety of techniques to achieve this illusion, some based entirely in software and others based in the process of generating the artwork itself. This is an essential and powerful tool for the media designer, as it opens up a wide range of possibilities for the creation of theatrical illusion. Before I start to talk about the process, here’s the project description:

Project Description:

Component 2 – Geometry, Surface and Illusion

Unfortunately – or possibly fortunately – media designers in the theatre rarely get a nice, rectangular, white screen to shoot at from a perpendicular, on-center angle. In this section, we will explore methods for dealing with odd angles, weird shapes, and non-ideal surfaces, as well as exploring special effects that are possible through the right marriage of projection, surface and angle. For this project, you may choose a building, sculpture or other built object in the ASU environment, then map its geometry using the techniques shown in class and create content utilizing that geometry to best effect. Final presentations of this project will be in the evening on campus.

I started this process by scouting a location. After wandering around campus several times, one of the buildings that I kept coming back to was an energy solutions building by a company called NRG. One of the larger draws of this building happens to be its location. Positioned directly across from one of the campus dormitories, it seemed like an ideal location that would have a built-in audience. While there’s no guarantee that there will be many students left on campus at this point, it never hurts to plan for the best.

The face of the building that points towards the dormitories is comprised of abstract raised polygons arranged in narrow panels. These panels come in two varieties creating a geometric and modern look for the building. One of the productions I’m working on next year has several design elements that are grounded in abstract geometric worlds, and this seemed like a prime opportunity to continue exploring what kind of animation works well in this idiom.

In the Asset or In the System

The debate that is often central to this kind of work is whether to build a system (or program) for creating the aesthetic, or to instead create the work as fixed media artifacts. In other words, do you build something that is at its core flexible and extendable (though problematic, finicky, and unabashedly high maintenance) or do you build something rigid and fixed (though highly reliable, hardware independent, and reproducible)? Different artists prefer different methods, and there are many who argue that one is obviously better than the other. The truth of the matter, however, is that in most cases the right approach is really a function of multiple parameters: who’s the client, what’s the venue, what’s the production schedule, what resources are available, is the project interactive, and so on. The theoretical debate here is truly interesting, and in some sense calls into question what skill set is most appropriate for the artist who intends to pursue this practice. The print analogy might be: do you focus on designing within the limitations of the tools that you have, or do you commit to building a better printing press so that you can realize the design that exists only as an abstract thought?

Recent Arizona State MFA graduate Boyd Branch shared these thoughts about this very topic:

I don’t know if there is much of a debate between system building and design for production. Quite simply – every production demands aesthetics. The aesthetic is always the most important. The system is only useful in as much as it generates the appropriate aesthetic experience. It doesn’t matter how reliable, interesting, or functional a system is if it isn’t supplying an aesthetic relevant to production. A “flexible and extendable” system is only useful if the aesthetic of flexibility and extendibility is ostensibly the most relevant aesthetic. Interactivity is an aesthetic choice for performance and only relevant when ontology or autonomy are the dramatic themes. For theatre in particular, the system inevitably becomes a character, and unless that character is well defined dramatically, it has no business inserting itself into production.

The debate if any is internal for the designer and presented as a range of options for the producer/director. That debate is a negotiation between time and resources. A designer may be able to envision a system that can achieve an effect- but without sufficient experience with that system and the ability to provide a significant degree of reliability, such a system should not be proposed without articulating how the dramatic themes will inevitably shift to questions about technology.

Sometimes an aesthetic is demanded that requires experimentation on the part of the designer. A designer has to be knowledgeable enough about their skill set to know how to explain the cost involved in achieving that aesthetic. And if that cost is reliability, then it is incumbent on the designer to iterate that cost and explain how the production will hinge on the unpredictability of that system.

An unreliable system, however, is frankly rarely good for any production unless unreliability is the theme. If a production requires a particular aesthetic experience that seems to be only achievable with the creation of a new tool, then it must be recognized that that tool and the presence of that tool embody the major dramatic themes of the production.

Avant garde theatre is one of the best environments for exploring the aesthetics of system building – but it is also the theatre that has the smallest budgets…

For this particular assignment we were charged with the approach of building everything in the asset itself. That is, building a fixed piece of video artwork that could then be deformed and adjusted with playback software (MadMapper).

AfterEffects as playground

Given the nature of this assignment it made sense that Adobe After Effects would be the tool of choice. AE is an essential tool for any media designer, especially as there are times when pre-rendered media is simply the best place to start. I spent a lot of time thinking about the direction that I wanted to move in terms of overall aesthetic for this particular assignment, and I found that again I was thinking about abstract geometric worlds, and the use of lighting in 3D environments to explore some of those ideas. As I’ve been thinking about the production I’m working on in the fall, it has seemed increasingly important to take advantage of open-ended assignments in order to explore the ideas and directions that feel right for that show. I’m really beginning to see that the cornerstone of successful independent learning comes from deliberate planning – what can I explore now that will help me on the next project? To that end, what kind of work do I want to be making in two years, and how can I set myself up to be successful? Planning and scheduling may be one of the most under-stressed skills of any professional, and I certainly think that applies to artists.

In thinking about abstract animation with After Effects I knew that I wanted to explore four different visual worlds: flat abstract art, lines and movement, 3D lighting and the illusion of perspective, and glitch art. Each of these has its own appeal, and each also has a place in the work that I’m thinking about for the fall.

Worlds Apart

Flat and abstract

In thinking about making flat abstract art I started, as I typically do, by doing lots of visual research. One of the more interesting effects that I stumbled on was achieved by using AE’s Radio Waves plug-in in conjunction with a keyframed mask. YouTube user MotionDesignCommun has a great tutorial about how to achieve this particular visual effect. Overall the media takes on a smoke-like, morphing kind of look that’s both captivating and graceful. A quick warning about this effect: it is definitely a render hog. The 30 seconds of this effect used for this project took nearly 7 hours to render. As a disclaimer, I did render the video at a projector-native 1920 x 1200. All of that aside, this effect was a great way to explore using morphing masks in a way that felt far more organic than I would have originally thought.

Lines and Flat Movement

I also wanted to play with the traditional big-building projection mapping effect of drawn-in lines and moving shapes. In hindsight I think the effect of the lines drawing in took too long, and ultimately lost some of the visual impact that I was looking for. I also explored playing with transforming shapes here, and that actually was far more interesting, though it happened too quickly. My approach for this effect was largely centered around the use of masks in AE. Masks, layers, and encapsulated effects were really what drove this particular exploration. Ultimately I think spending more time writing an expression to generate the kind of look that I’m after would be a better use of my time. If I were to go back I think I could successfully craft the right formula to make the process of creating this animation easier, but it really took the effort of creating the first round of animation to help me find that place. One of the hard but important lessons that I’ve learned from programming is that sometimes you simply have to do something the hard way / long way a couple of times so that you really understand the process and procedural steps. Once you have a solid idea of what you’re trying to make, it becomes much easier to write the expression as an easier way to achieve the effect you’re after.

3D Lighting and the illusion of Perspective

Another projection designer’s magic trick that I wanted to play with was the idea of creating perspective with digital lighting of a 3D environment. By replicating the geometry that you’re projecting onto, you the designer can create interesting illusions that seem impossible. In my case I started in After Effects by positioning planes in 3D space at steep angles, and then masking them so that they appeared to mimic the geometry of the actual building. To make this easier for myself, I worked in a single column at a time. I then pre-composed individual columns so that they could easily be duplicated. The duplicated columns only needed small changes rather than requiring me to build each of the 86 triangles from scratch.

Glitch Art a la After Effects

In another course I took this semester a group of students focused on using glitch art as a base for some of their artistic exploration. While they specifically focused on the alteration of i-frames and p-frames, I wanted to look at the kind of effect that can be purposefully created in AE, and then later modified. YouTube user VinhSon Nguyen has a great tutorial on creating a simple glitch effect in AE. More interesting than the act of making his glitch effect is his approach. Rather than just adding adjustment layers and making direct changes, Nguyen focuses on how to connect the attributes of your effect to null objects, essentially making an effect that can be universally applied to any kind of artwork that you want to drop into your comp. This approach to working in AE was interesting as it started from the assumption that the effect one is making should be easily applied to other works.

Put it all Together

With each section as its own comp the last step in the process was to create a final comp that transitioned between the different worlds, applied some global lighting looks and effects, and added a mask to prevent unwanted projector spill. This was also a great place to do some fine tuning, and to see how the different comps transitioned into one another. 

The Final Rendering 


AE: Create a Glitch Effect

AE: Morphing by MotionDesignCommun

Creative Cow – Creating a 3D Cube

MadMapper AfterEffects Tutorial for Building Projection

TouchDesigner | Sculpture

In the ever growing list of tools that I’m experimenting with, Derivative’s TouchDesigner is a tool that time and again keeps coming up as something that’s worth learning, experimenting with, and developing competencies around its workflow. TD is a nodal environment called a network. Inside of the network, nodes can be connected directly with patch cords or by exporting parameters.

Nodes, also called Ops (Operators), are split into families specific to the characteristics of their behavior: CHOPs (Channel Operators), TOPs (Texture Operators), SOPs (Surface Operators), MATs (Materials), and DATs (Data Operators). Nodes from within the same family can pass data directly to one another through patch cords (similar to MaxMSP and Isadora). The output of nearly every node can also be passed into other nodes by exporting parameter values. This divides the passing of data into two distinct methods: one that’s centered on like-to-like connections and one that’s about moving from like to different.

TouchDesigner’s nodes are most powerful when they’re connected. Like Max, single nodes do little by themselves. Also like Max, the flexibility of TD is its ability to build nearly anything, and with that comes the fact that little is already built. Similar to Isadora is the native ability to build user interfaces as a part of the very fabric of building a program / user experience.

One of the projects that I’m working on this semester is for a sculpture course. This course, called New Systems, is intended to address the link between media and sculpture. One of the areas that I’m interested in exploring is collecting data from a circus apparatus and using that to drive a visualization in performance. I’m most interested in the direct link between how an apparatus is behaving and how that data can be interpreted in other ways. To that end, this semester I set to the work of building an apparatus and determining how to parse that data. In my case I decided to use this opportunity to experiment with TouchDesigner as a means of driving the media. While I was successful in welding together a square from stainless steel, after some consultation with my peers in my sculpture course it was determined that this structure was probably not safe to perform on. Originally I had planned to use a contact mic to capture some data from my interaction with the apparatus, and after a little bit of thinking and consultation with my adviser (Jake Pinholster) I decided that gyroscope data might be more useful.

My current plan is to move away from this being a performance apparatus and instead think of it as an installed sculptural piece that serves as a projection surface. For data I’ll be using an iPod Touch running Hexler’s Touch OSC. Touch OSC passes data as UDP packets over a wired or wireless network using Open Sound Control (OSC). One of the many things that Touch OSC can do is pass the accelerometer data from an iOS device out to other applications. In my case Touch OSC is passing this information to TouchDesigner. TD is then used to pull this information and drive some of the media.

One of the challenges that my adviser posed in this process was to create three scenes that the media moved through. For the sake of experimentation I applied this challenge to the idea of working with containers in TouchDesigner. Containers are a method of encapsulation in TD; they’re a generic kind of object that can hold just about any kind of system. In my case I have three containers that are equivalent to different scenes. The timeline moves the viewer through the different containers by cross-fading between them. Each container holds its own 3D environment that’s rendered in real time and linked to the live OSC inputs coming from an iPod Touch.

The best way to detail the process of programming this installation is to divide it up into the component pieces that make it all work. The structure of this network is defined by three hierarchical levels: the container, control, and final output; the individual composited scene; and the underlying geometry.

Want to work your way through these ideas one chunk at a time? Visit their individual Posts here:

The Underlying Geometry

The Individual Composited Scene

The Container, Control, and Final Output

Want to work your way through the whole process? Keep on scrolling.

The Underlying Geometry

One of the benefits of working with TouchDesigner is the ability to work in 3D. 3D objects are in the family of operators called SOPs РSurface Operators. One of the aesthetic directions that I wanted to explore was the feeling of looking into a long box. The world inside of this box would be characterized by examining artifacts as either particles or waves with a vaguely dual-slit kind of suggestion. With that as a starting point I headed into making the container for these worlds of particles and waves.

Before making any 3D content it’s important to know how TouchDesigner processes these objects in order to display them. On their own, Surface Operators can’t be displayed as a rendered texture. In TouchDesigner’s idiom textures are two-dimensional surfaces, and it follows that the objects that live in that category are called TOPs, Texture Operators. Operators from different families can’t be directly connected with patch cords. In order to pass the information from a SOP to a TOP one must use a TOP called a Render. The Render TOP must be connected to three COMPs (Components) in order to create an image that can be displayed: a Geometry COMP (something to be rendered), a Light COMP (something to illuminate the scene), and a Camera COMP (the perspective from which the object is to be rendered). In this respect TD pulls from conventions familiar to anyone who has worked with Adobe’s After Effects.

Knowing the component pieces required in order to successfully render a 3D object, it’s easier to understand how I started to create the underlying geometry. The Geometry COMP is essentially a container object (with some special attributes) that holds the SOPs responsible for passing a surface to the Render TOP. The default Geometry COMP contains a torus as its geometry.

We can learn a little about how the COMP is working by taking a look inside of the Geometry object. 

Here the things to pay close attention to are the two flags on the torus object. You’ll notice in the bottom right corner there is a purple and a blue circle that are illuminated. The purple circle is a “Render Flag” and tells TouchDesigner to render the object, and the blue circle is a “Display Flag” which tells TouchDesigner that this is the object that should be displayed in the Geometry COMP.

Let’s take a look at the network that I created.

Now let’s dissect how my geometry network is actually working. At first glance we can see that multiple objects are being combined into a single piece of geometry that’s ultimately being passed out of this Geometry COMP.

If we look closer we’ll see that the SOP network looks like this:

Grid – Noise – Transform – Alpha Noise (here the bypass flag is turned on)

Grid creates a plane that’s made out of polygons. This is different from a rectangle that’s composed of only four points. In order to create a surface that can deform, I needed a SOP with points in the middle of it. The grid is attached to a Noise SOP that’s animating the surface. Noise is attached to a Transform SOP that allows me to change the position of this individual plane. The last stop in this chain is another Noise SOP. Originally I was experimenting with varying the transparency of the surface. Ultimately, I decided to move away from this look. Rather than cutting this out of the chain, I simply turned on the Bypass Flag, which turns off this single SOP. This whole chain is repeated eight times (for a total of eight grids).
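Conceptually, the Grid – Noise portion of that chain is just a plane of many points, each displaced over time. Here's a rough pure-Python sketch of that idea (this is not TouchDesigner code; a sine function stands in for the Noise SOP, and the frequencies are made up for illustration):

```python
import math

def displaced_grid(rows, cols, time, amplitude=0.5):
    # A plane built from many points, each displaced on z by a noise-like
    # function - the pure-Python analog of Grid feeding into Noise.
    points = []
    for i in range(rows):
        for j in range(cols):
            z = amplitude * math.sin(i * 0.7 + j * 1.3 + time)
            points.append((j, i, z))
    return points
```

Animating `time` each frame ripples the surface, which is exactly why a grid with interior points deforms where a four-point rectangle cannot.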

These planes are then connected so that the rest of the network looks like this:

Merge – Transform – Facet – Texture – Null – Out

Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move the object as a whole in space. Facet is a handy operator that allows you to compute the normals of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring; ultimately I ended up turning on the bypass flag for this SOP. A Null, like in other environments, is really just a placeholder kind of object. In the idiomatic structure of TouchDesigner, the Null is operationally an object that one places at the end of an operator string. This is considered a best practice for a number of reasons. High on the list is that it allows easy access for making changes to a string: TouchDesigner allows the programmer to insert operators between objects, and by always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.

The Individual Composited Scene

There are always large questions to answer when thinking about creating an interactive work: Who is it for? What does it look like? What are you trying to communicate? How much instruction do you provide, how little instruction do you provide? And on and on. As I started to think about how this piece was going to work as an installation rather than as a performance apparatus, I started by thinking about what kind of data I could use to drive the visual elements of this work. One of the sensors that I knew I could easily incorporate into my current sculptural configuration was an iPod Touch. The Touch has an on-board gyroscope and accelerometer. After a conversation with my adviser (Jake Pinholster) we decided that this would be a direction of exploration worth pulling apart, and from there I went back to TouchDesigner to start thinking about how I wanted to incorporate live data into the piece I was making.

When dealing with a challenge like building an interactive sculptural system that has at least three different visualizations, it can be difficult to think about where to start. Different programmers are bound to have different approaches to addressing this question. My approach was to start by thinking about what kind of input data I had to work with. Because I was dealing with a sensor that relayed spatial information, this also helped me think about how to represent that data. Next I thought about what different kinds of ways I wanted to present this information, and finally I addressed how to play back this experience for users. Some of my more esoteric and existential questions (why am I making this? what does it mean? what does it represent?) were addressed through the methodical programming process, and others were sussed out over contemplative cups of coffee. As much as I wish that these projects could have a straight line of execution, a checklist even, I’m discovering more and more that the act of creating and programming is often a winding path with happy (and unhappy) discoveries along the way.

My first step on this journey, however, was to address what kind of inputs I had to use. Hexler has an excellent app for sending UDP messages over wireless connections called Touch OSC. OSC, or Open Sound Control, is a communications protocol that uses UDP messages to send data over wired and wireless networks. It’s functionally similar to MIDI and has some additional flexibilities and constraints. In the case of Touch OSC, one of the parameters that you can enable on your iOS device is sending xyz data from the accelerometer. Getting Touch OSC up and running does require a few steps to get the ball rolling. First, both the computer that’s receiving and the device that’s broadcasting need to be on the same network. Your broadcasting device will need the IP address of the receiving computer, and a specified port to send the data to (how to find your IP address on a Mac, and on a PC). Once this information is set on your broadcasting device, it’s time to add a Channel Operator to your TouchDesigner network.
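Under the hood, each of those UDP packets is a small binary OSC message. As a rough illustration (not TouchDesigner code), here's a minimal standard-library sketch that unpacks a float-only OSC message such as accelerometer data; the `/accxyz` address in the test below is an assumption based on the OSC 1.0 encoding rules, not something taken from this project:

```python
import struct

def _read_padded_string(data, offset):
    # OSC strings are null-terminated and padded out to a 4-byte boundary.
    end = data.index(b"\x00", offset)
    text = data[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4  # skip the alignment padding
    return text, offset

def parse_osc_message(data):
    # Parse a single OSC message whose arguments are all float32 ("f" tags).
    address, offset = _read_padded_string(data, 0)
    type_tags, offset = _read_padded_string(data, offset)
    args = []
    for tag in type_tags.lstrip(","):
        if tag == "f":
            (value,) = struct.unpack_from(">f", data, offset)
            args.append(value)
            offset += 4
    return address, args
```

In practice the OSC In CHOP does all of this for you; the sketch is only meant to demystify what's inside each packet crossing the network.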

In TouchDesigner, there is a CHOP called “OSC In.” This CHOP will allow you to receive OSC data over a wireless network. Once you’ve added the CHOP to your TD network you’ll have to specify the port that Touch OSC is broadcasting to, and then you should be in business. In my case, once this was set up I could instantly see a stream of accelerometer data coming from my iPod Touch. In order to use these values, however, I needed to take some additional steps. The raw OSC data from Touch OSC comes in as a range of values from -1 to 1. Additionally, the data comes in from one CHOP. My flow of operators looks like:

OSC In – Select – Lag – Math – Null

OSC In is the data input. The Select CHOP allows you to select a single channel out of a bundle of channels. In this case I used this to separate my X, Y, and Z inputs into different streams. The Lag CHOP helps to smooth out the attack and decay rates of input data. In my case this ensured that the final values used to control another object were kept from being too jittery. The Math CHOP is tremendously powerful; in my case I wanted to map the values of my raw data [ -1 to 1 ] to a larger range of values, say 0 to 200. Finally I ended my string in a Null. A Null in this case is very useful in case I need to add any other operators into my string.
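Outside of TouchDesigner, the Lag and Math steps in that chain boil down to two small operations. Here's a pure-Python sketch of both (the smoothing constant and ranges are illustrative defaults, not the exact values used in the installation):

```python
def remap(value, in_low=-1.0, in_high=1.0, out_low=0.0, out_high=200.0):
    # Mirrors the Math CHOP's range mapping: scale one span onto another.
    normalized = (value - in_low) / (in_high - in_low)
    return out_low + normalized * (out_high - out_low)

class Lag:
    # One-pole smoothing, a rough stand-in for the Lag CHOP's behavior.
    def __init__(self, smoothing=0.1):
        self.smoothing = smoothing  # smaller = smoother, slower response
        self.value = None
    def filter(self, sample):
        if self.value is None:
            self.value = sample  # first sample passes through unchanged
        else:
            self.value += self.smoothing * (sample - self.value)
        return self.value
```

Feeding each incoming accelerometer sample through `Lag.filter` and then `remap` reproduces the jitter-free, rescaled values the CHOP chain produces.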

Before thinking about how to use these values, it’s important to take a moment to revisit how geometry is rendered in TouchDesigner. The Geometry COMPs that are used to create the objects to be displayed can’t be visualized without using a Render TOP. The Render TOP requires three components in order to generate an image that can be seen: a source geometry, a light, and a camera. The Geometry COMP provides the location of surfaces, the light provides the necessary information about how the object is being lit, and the Camera COMP controls the perspective that the object is being rendered from. This is similar to an approach that one might use when creating 3D content in After Effects – an object to be rendered, a light so the object can be seen, and a camera to control the perspective the audience sees of the object. Because rendering means combining multiple COMPs, that structure can inform how we use live data.

With some scaled values processed and ready to export, I was ready to think about how these values could influence the viewer’s perspective of the geometry. One of my initial thoughts was to render a cube that a user could look inside of. As the observer changed the orientation of the sensor, the virtual environment would also change in kind. While it’s possible to do this by rotating and translating the geometry itself, I decided to focus on the orientation of the camera instead. This has a few advantages. One important advantage is the ability to tell a camera to look directly at a specified geometry. This means that in translating the camera (left or right, up or down, in or out) the camera stays focused on the center of the target geometry. This makes changing perspective much simpler.
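The look-at behavior described here is just vector math: wherever the camera moves, its forward direction is recomputed toward the target. A minimal sketch of that calculation (plain Python, not the Camera COMP's actual implementation):

```python
import math

def look_at_direction(camera, target):
    # Unit vector pointing from the camera position toward the target,
    # recomputed whenever the camera moves so it always faces the geometry.
    d = [t - c for c, t in zip(camera, target)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]
```

Translating the camera left or right changes this vector, but it always aims at the target's center, which is why driving camera position from the sensor data feels stable.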

Initially I was thinking of rendering the entire 3D scene as a single geometry. In doing this, however, I experienced some challenges in thinking about the placement of lights, the overall organization of the geometry, and applying texture to the surfaces. By using a Phong shader one can apply texture maps to the 3D Geometry COMPs that have been created. By separating the interior and exterior pieces of the geometry and then compositing them after rendering, I was able to apply different shaders to each geometry.

The portion of my network responsible for compositing the geometry looks like this:

Render 1, Render 2, Constant (black solid) – Composite – Transform – Null – Out

Render 1, Render 2, and the Constant are the three source surfaces. Render 1 is the box, Render 2 is the merged set of waves, and the Constant is a black background. Another approach to this would be to set one of the camera backgrounds as black. These three flow into a Composite TOP. Next is the Transform TOP (this allowed for some small adjustments that needed to be made in order to help align the projection with the sculpture). Originally I made this string with a Null as the final output of this component. I would eventually find that I needed an Out to pass this scene into another display module.

I used the same techniques as above for the other two scenes Рstarting with establishing my data stream, generating the geometry, rendering out layers to be composited and then passed out to the visual stream.

Are these pictures too small? You can see higher quality versions by looking at this Flickr Gallery: Graduate School Documentation

The Container, Control, and Final Output

In thinking about how to meet the objectives that I had for this piece, one of my central questions was how to make sure that I could move through three cued scenes – either with manual or automatic triggers. I knew that I had three different aesthetic environments that I wanted to move through. I explored several different options, and the one that ultimately made sense to me, given my current level of proficiency in TouchDesigner (at this point I had only been programming in Touch for a total of three weeks), was to use a cross-fading approach. Here’s what the whole network looks like:

In thinking about how to ensure that I was being efficient, I decided to encapsulate my three different scenes in their own respective containers. You’ll notice on the left hand side that there are three containers – each holding its own 3D environment. These are joined through crossfading TOPs and a final composite (for a mask) until ending in a Null that was used as the display canvas.

I spent a lot of time thinking about how this piece was going to be both interactive and autonomous. It needed to be interactive in that the user was able to see how their interaction with an object was driving the visual media; it needed to be autonomous in its ability to transition between scenes and then loop back to the beginning of the network. I don’t think I’ve totally cracked the nut that is the right balance of interactivity and self-directed programming, but it feels like I did make strides towards addressing this question. My solution to these issues was to allow the interaction with the projection to be centered around the control of perspective, but to drive the transitions through the scenes with time-line triggers.

Unlike some other interactive programming environments, TouchDesigner has a timeline built into the fabric of the control system. The Timeline is based in frames, and the programmer can specify the number of frames per second as well as the total number of frames for a given project. My Timeline triggering system was the following string of CHOPs:

Timeline – Trigger – Null

Timeline reports out the current frame number. The Trigger CHOP can be set to trigger at a given threshold (or in my case a frame number). This in turn is passed to a Null and exported to a Crossfade TOP as a rate for the crossfade. The crossfades are daisy-chained together before finally being attached to the Null that’s output to the projector.
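Conceptually, the Trigger CHOP turns a frame count into a 0-to-1 ramp, and each crossfade blends two streams by that value. Here's a rough pure-Python sketch of both ideas (the 30-frame attack length is an assumed value for illustration):

```python
def trigger_ramp(frame, threshold, attack_frames=30):
    # Once the timeline passes the threshold frame, ramp from 0.0 to 1.0
    # over attack_frames - roughly what the Trigger CHOP feeds the crossfade.
    if frame < threshold:
        return 0.0
    return min(1.0, (frame - threshold) / attack_frames)

def crossfade(a, b, mix):
    # Linear blend between two scenes' values, driven by the ramp above.
    return a * (1.0 - mix) + b * mix
```

Daisy-chaining the crossfades simply means the output of one blend becomes the first input of the next, each with its own frame threshold.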

With the system working I also needed to make a mask for the final projection to ensure that I wasn’t displaying any empty grid onto the floor of the gallery where this was being installed. I would typically make a mask for something like this in Photoshop, but decided to try making this all in the TouchDesigner programming environment. My TOP operator string for this looked like:

Constant – Transform – Blur – Composite

I started by creating a black Constant that’s then passed to a Transform so that it can be positioned into place. This is then passed to a Blur to soften the edges, and finally to a Composite to create a mask that contains a left, right, top, and bottom side. In hindsight I realize that I could have used a single Constant passed to four Transform TOPs to be a little more tidy. The mask as a composited object is then composited with the final render stream before being passed to the Null that’s connected to the projector.
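The mask itself reduces to a per-pixel question: how opaque should the black border be at a given distance from the frame's edge? A small pure-Python sketch of that logic, with a linear ramp standing in for the Blur TOP's soft edge (the border and feather widths are illustrative, not the installation's values):

```python
def mask_alpha(x, y, width, height, border, feather):
    # Opacity of the black mask at a pixel: fully opaque (1.0) in the
    # border region, fading linearly to transparent (0.0) toward the middle.
    edge_distance = min(x, y, width - 1 - x, height - 1 - y)
    if edge_distance < border:
        return 1.0
    if edge_distance < border + feather:
        return 1.0 - (edge_distance - border) / feather
    return 0.0
```

Compositing this alpha over the render stream is what keeps projector spill off the gallery floor while leaving the image's center untouched.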

In the end I’m fairly happy with this project. It’s been a steep learning curve, but well worth the hassle, angst, and late nights. It’s no small thing to have made a piece of interactive, media-driven sculpture in a programming environment that’s totally new to me. For as hard as all of this work has proven to be, I have to remind myself that I’m actively doing the work that I came to graduate school to do. Every day I realize that I’ve been changed by my time in the desert, and by my time with the gifted and brilliant artists and friends that I’ve found here.

Are these pictures too small? You can see larger versions of them here:

Media Design | Photo Styles Recreation

One of the courses I’m taking this semester is a Media Design course. ASU structures its courses into three classifications: A, B, and C Sessions. A Session courses run during the first half of the semester (the first quarter), B Session courses run the second half of the semester (the second quarter), and C Session courses run the full length of the semester. This course is a B Session course, and is just getting ramped up. The first project is structured around a need that designers frequently face: building assets that are specific to a known period of time. Copy art is one of the many skills that a good media designer needs tucked up his/her sleeve, and this assignment makes a strong case for learning that process. The project directions and results can be found below:

Project Directions:

For this project, please download the zipped folder of 4 images from the BB Assignments section. In this folder, you will find two images of daguerreotypes and two images shot with Kodak Ektachrome film. Please follow the steps below to complete the project.

  • Examine the provided images closely. Research additional images that are also created in this format. Try to identify what features are intrinsic to the image. How do these processes affect what subjects can / should be captured with this media?
  • Identify another medium that you will reproduce. Kodachrome? PixelVision? Silent movie stock? Repeat the above process for this additional medium.
  • Keeping in mind what you’ve learned about the interactions between subject and format, shoot one to three photographs that you will transform into faux versions of these three media.
  • You may need to spend some time researching Photoshop tutorials online.
  • In the case of your self-chosen third medium, please track your process, introducing why / how you chose this medium, how / where you researched it, why you think it would be useful, and the steps that you have taken in the transformation (create a mini-tutorial).

Recreate a Daguerreotype

Recreate an Ektachrome

Recreating the GameBoy Camera with Photoshop and After Effects 

Here’s the look I’m trying to emulate:

After looking at the footage closely, here’s what I was looking to make sure that I emulated:

  • Image Size: 320 x 280
  • Color
  • Limitation of the sensor / look and feel of the footage
  • Frame Rate

Here’s the quick and dirty break down of the process:

  • Use After Effects to export an image sequence
  • Open Photoshop and create a new Photoshop action (start recording)
  • Convert the image to Grayscale
  • Posterize the image with Levels
  • Use the Mezzotint filter – short lines
  • Use the Mosaic filter – 2 pixels
  • Start the batch process and export the images to another source folder
  • Import image sequence to After Effects
  • Set the frame rate to 10
  • Output the Final
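The grayscale and posterize steps above amount to quantizing each pixel's luminance into a handful of shades. Here's a rough pure-Python sketch of just those two steps (it skips the Mezzotint and Mosaic filters, and the four-shade 0/85/170/255 palette is an assumption about the look, not a value from the Photoshop action):

```python
def gameboy_quantize(pixels):
    # For each RGB pixel: convert to luminance, then snap to one of four
    # shades - the grayscale + posterize steps from the batch process.
    shades = [0, 85, 170, 255]
    out = []
    for row in pixels:
        new_row = []
        for r, g, b in row:
            gray = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luminance
            level = min(3, int(gray // 64))           # bucket into 4 levels
            new_row.append(shades[level])
        out.append(new_row)
    return out
```

Running something like this over every frame of an image sequence is the scripted equivalent of recording a Photoshop action and batch-processing the exported frames.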

Want to follow along? Here’s a quick tutorial about using After Effects and Photoshop to achieve this effect:

Here’s where the process gets us: