
TouchDesigner | 3D solutions for a 2D world

One of the fascinating pieces of working in TouchDesigner is the ability to use 3D tools to solve 2D problems. For the last seven months or so I've been working on Daniel Fine's thesis project – Wonder Dome. Dome projection is a wild ride, and one of the many challenges we've encountered is thinking about how to place media on the dome without the time-intensive process of pre-rendering all of the content specifically for this projection environment. To address some of these issues we started working with Los Angeles-based Vortex Immersion Media, whose lead programmer is TouchDesigner specialist Jeff Smith of Eve Vapor. Part of the wonderful opportunity we've had in working with Vortex is getting an early look at Jeff's custom-built dome mapping tool. Built exclusively in TouchDesigner, it's an incredibly powerful piece of software designed to make the dome warping and blending process straightforward.

The next step in the process for us was to consider how we were going to place content on the interior surface of the dome. The dome mapping tool that we're using takes a square raster as its input, which can be visualized as a polar array. If you're at all interested in dome projection, start by looking at Paul Bourke's site – the wealth of information there has proven invaluable to the Wonder Dome team as we've wrestled with dome projection challenges. This square image is beautifully mapped to the interior surface of the dome, making placing content a matter of considering where on the array you might want a piece of artwork to live.

There are a number of methods for addressing this challenge, and my instinct was to build a simple TouchDesigner network that would allow us to see, place, and manipulate media in real time while we were making the show. Here’s what my simple asset placement component looks like:

asset placement

This component-based approach makes it easy for the design and production team to see what something looks like in the dome in real time, and to place it quickly and easily. Additionally, this component is built so that adding animation to still assets is simple and straightforward.

Let’s start by taking a look at the network that drives this object, and cover the conceptual structure behind its operation.

Screenshot_022814_125736_AM

In this network we have a series of sliders that are controlling some key aspects of the media – orientation along a circular path, distance from the center, and zoom. These sliders also pass out values to a display component to make it easy to take note of the values needed for programming animation.

We also have a render chain that’s doing a few interesting things. First we’re taking a piece of source media, and using that to texture a piece of geometry with the same aspect ratio as our source. Next we’re placing that rectangle in 3D space and locking its movement to a predefined circular pathway. Finally we’re rendering this from the perspective of a camera looking down on the object as though it were on a table.
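To make that math concrete, here's a rough sketch (in plain Python, outside of TouchDesigner) of what those rotation, distance, and zoom values ultimately resolve to; the function name and the 0–1 slider range are assumptions for illustration only.

import math

def place_on_circle(rotation, distance):
    """Convert a 0-1 rotation value and a distance-from-center value
    into x/y coordinates along the circular pathway."""
    angle = rotation * 2 * math.pi
    return distance * math.cos(angle), distance * math.sin(angle)

# zoom can then scale the textured rectangle uniformly,
# e.g. scale_x = scale_y = zoom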

Here I'm using a circle SOP to create the circle that will be the pathway my Geo COMP rotates around. I ended this network in a null so that if we need to make any changes I won't have to change the export settings for this pathway.

Screenshot_022714_095918_AM

You’ll also notice that we’re looking at the parameters for the circle where I’ve turned on the bulls-eye so we’re only seeing the parameters that I’ve changed. I’ve made this a small NURBS curve to give me a simple circle.
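If you prefer to set this up from a script rather than the parameter dialog, a sketch might look like the lines below; the internal parameter names (type, radx, rady) and the 'nurbs' menu token are assumptions to verify against your build.

circle = op('circle1')       # the path circle shown above
circle.par.type = 'nurbs'    # assumed token for the NURBS primitive type
circle.par.radx = 0.5        # small radius; the exact value is arbitrary here
circle.par.rady = 0.5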

The next thing I want to think about is setting up a surface to be manipulated in 3D space. This could be a rectangle or a grid. I’m using a rectangle in this particular case, as I don’t need any fancy deformation to be applied to this object. In order to see anything made in 3D space we need to render those objects. The render process looks like a simple chain of component operators: a geo COMP, a camera COMP, a light COMP, and a render TOP. In order to render something in 3D space we need a camera (a perspective that we’re viewing the object from), a piece of geometry (something to render), and a light (something to illuminate the object).
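As a point of reference, pointing the Render TOP at the rest of that chain can also be expressed in Python; the operator names here (render1, cam1, geo1) are just examples.

render = op('render1')
render.par.camera = 'cam1'      # the perspective we're viewing the object from
render.par.geometry = 'geo1'    # the object being rendered
# No light COMP is set here because the material in this network is a
# constant MAT, which ignores lighting; otherwise you'd also point
# render.par.lights at a light COMP.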

Screenshot_022814_125903_AM

We can see in my network below that I’ve used an in TOP so that I can feed this container from the parent portion of the network. I’ve also given this a default image so that I can always see something in my container. You might also notice that while I have a camera and a geo, I don’t have a light COMP. This is because I’m using a material type that doesn’t require any lighting. More about that in a moment. We can also see that my circle is being referenced by the Geo, and that the in TOP is also being referenced by the Geo. To better understand what’s happening here we need to dive into the Geo COMP.

Screenshot_022814_125926_AM

Inside of the Geo COMP we can see a few interesting things at work. One thing you'll notice is that I have a constant MAT and an info CHOP inside of this object. Both of these operators reference the in TOP that's in the parent network. My constant references the in TOP to establish what will be applied to the Geo as a material. My info CHOP gives me quick access to several of the attributes of my source file, including the resolution of the source media. I can use this information to determine the aspect ratio of the source, and then make sure that my rectangle is sized to match. Using this process I don't have to rely on a particular aspect ratio for my source material; I can pass this container any rectangular image, and it will size itself appropriately.
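For the curious, a sketch of that sizing logic might look like this; the info CHOP's resx/resy channels are what I'm reading here, but the rectangle SOP parameter names (sizex, sizey) and the operator names are assumptions to check against your own network.

# size the rectangle to match the aspect ratio of whatever image comes in
aspect = op('info1')['resx'].eval() / op('info1')['resy'].eval()
op('rectangle1').par.sizex = aspect   # width follows the source aspect ratio
op('rectangle1').par.sizey = 1.0      # height held constant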

Initially I just had three sliders that controlled the placement of my media in this environment. Then I started thinking about what I would really need during our technical rehearsals. It occurred to me that I would want the option to place the media on the surface of the dome from a position other than behind the media server. To address this need I built a simple TouchOSC interface to replicate my three sliders, captured that OSC information with TouchDesigner, and then passed that stream of floats into my container. From here I suddenly had to do some serious thinking about what I wanted this object to do. Ideally, I wanted to be able to control the container either from the media server or from a remote access panel (TouchOSC). I also wanted the ability to record the position information that was being passed so I could use it later. This meant that I needed to think about how I was going to capture and recall the same information from three possible sources.

To do this I first started by packaging my data with merge CHOPs. I also took this opportunity to rename my channels. For example, my OSC data was renamed osc_rot, osc_dist, and osc_zoom – the rotation, distance, and zoom sliders from my TouchOSC panel. I repeated this process for the sliders, and for the table that I was using. I also knew that I wanted to rename my stream and pass it all to a null CHOP before exporting it across the network. To keep my network a little more tidy I used a base to encapsulate all of the patching, selecting, and switching that needed to happen for this algorithm to work properly.

Screenshot_022814_010413_AM

Inside of the base COMP we can see that I'm taking my three in CHOPs, selecting the appropriate channel, passing this to a switch (so I can control which value is driving the rendering portion of my network), and then back out again. You may also notice that I'm passing the switch values to a null, and then exporting that to an opViewer TOP. The opViewer TOP creates a rendered image of the channel operator at work. Why would I do this? Well, I wanted a confidence monitor for my patch-bay. The base COMP allows you to assign a TOP to its display. Doing this meant that I could see into a portion of the base without having to actually be inside of this component.
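A minimal sketch of the switching piece, assuming hypothetical operator names: the panel CHOP in the radio-button container reports which button is on, and that value can drive the switch CHOP's Index parameter as an expression.

# drive the switch CHOP from the radio buttons so the selected source
# (sliders, OSC, or table) is the one feeding the render chain
op('switch1').par.index.expr = "op('buttons/panel1')['radio']"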

Screenshot_022814_010506_AM

With all of the patching set up, I needed to build an interface that would control all of these changes. I also needed a way to capture the values coming out of TouchOSC, store them in a table, and then recall them later.

Screenshot_022814_010616_AM

The solution here was to build a few buttons to drive this interface. To drive the switch CHOP in my base component, I used three buttons encapsulated inside of a container COMP and set to operate as radio buttons. I then used a panel CHOP in the container to export which button was currently toggled into the on position. Next I added a button COMP to record the values set from TouchOSC. Using a CHOP to DAT I was able to capture the float values streaming into my network, and I knew that what I wanted was to be able to copy a set of these values to a table. To do this I used a panel execute DAT. This class of DAT looks at the panel properties of a specified container (buttons and sliders also qualify here), and runs a script when the specified conditions in the DAT are met. This is the portion of the network that gave me the most headaches. Understanding how these DATs work, and the best method of working with them, took some experimentation. To troubleshoot this, I started by writing my script in a text DAT and running it manually. Once I had a script that was doing what I wanted, I set to the task of better understanding the panel execute DAT. For those interested in Python scripting in TouchDesigner, here's the simple method that I used:

m = op('chopto1')
n = op('table1')
n.copy(m)

Here the operator chopto1 is the DAT that is capturing the OSC stream. The operator table1 is an empty table that I want to copy values to; it's the destination for my data. The copy method is called on the destination, and is passed the source that you want to pull from.

Screenshot_022814_010809_AM

Finally ready to work with the panel execute DAT, I discovered that all of my headaches were caused by misplacing the script. To get the DAT to operate properly I just had to make sure that my intended script sat inside the relevant callback, between the function definition and the return call.
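Put together, the panel execute DAT ends up looking something like the sketch below; depending on your TouchDesigner build the callback may be named offToOn or onOffToOn, so treat the exact signature as an assumption. The operator names match the example above.

def offToOn(panelValue):
    m = op('chopto1')   # the DAT capturing the OSC stream
    n = op('table1')    # the destination table
    n.copy(m)           # copy the current values into the table
    return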

Screenshot_022814_010926_AM

One last helpful hint / tip that I can offer from working on this component is how to specify the order of your buttons in a container. One handy feature in the container parameters page is the ability to have TouchDesigner automatically array your buttons rather than placing them yourself. The catch: how do you specify the order the buttons should appear in? If you look at the parameter page for the buttons themselves, you'll notice that they have an aptly named parameter called "Alignment Order." This sets their alignment order in the parent control panel.

If I’ve learned nothing else, I have learned that sometimes it’s the simplest things that are the easiest to miss.

Multiple Windows | TouchDesigner

For an upcoming project that I'm working on, our show control system needs to be able to send video content to three different projectors. The lesson I've learned time and again with TouchDesigner is to first start by looking through their online documentation to learn what my options are, and to get my bearings. A quick search of their support wiki landed me on the page about Multiple Monitors.

To get started I decided to roll with the multiple window component method – this seemed like it would be flexible and easy to address out of the gate. Before I was ready for this step I had to get a few other things in order in my network. Ultimately, the need that I'm working to fill is distortion and blending for the interior surface of a dome using three projectors that need to warp and edge blend in real time. First up on my way to solving that problem was looking at using a cube map in order to address some of this challenge. In this first network we can see six faces of a cube map composited together, exported to a phong shader, and then applied to a dome surface which is then rendered in real time from three different perspectives.
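The tail end of that test, the textured dome itself, can be sketched in a couple of lines of Python; the phong MAT's colormap parameter name and all of the operator names here are assumptions rather than the names in my network.

op('phong1').par.colormap = 'cubemap_comp'   # the composited six-face texture
op('dome_geo').par.material = 'phong1'       # apply the shader to the dome surface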

Screenshot_121913_105543_PM

A general overview of the kind of technique I'm talking about can be found here. The real meat and potatoes of what I was after in this concept testing was in this part of the network:

Screenshot_121913_105619_PM

Here I have three camera components driving three different Render TOPs, which in turn pass to three Null TOPs named P1, P2, and P3 – projectors 1 through 3. As this was a test of the concepts of multiple monitor outs, you'll notice that there isn't much difference between the three camera perspectives, and that I haven't added any edge blending or masking elements to the three renders. Those pieces are certainly on their way, but for the sake of this network I was focused on getting multiple windows out of this project.
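Since the three renders only differ by camera, they can also be wired up in a quick loop; the operator names (cam1–cam3, render1–render3, dome_geo) are examples rather than the names in the screenshots.

# point each Render TOP at its own camera, all sharing the dome geometry
for i in range(1, 4):
    render = op('render{}'.format(i))
    render.par.camera = 'cam{}'.format(i)
    render.par.geometry = 'dome_geo'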

If we jump out of this Container Comp we can see that I’ve added three Window Components and a Button to my network. Rather than routing content into these window elements, I’ve instead opted to just export the contents to the window comps.

Screenshot_121913_105514_PM

If we take a closer look at the parameters of the Window Comp we can see what's going on here in a little more detail:

Screenshot_121913_111605_PM

Here I've changed the Operator path to point to my null TOP inside of my container COMP; the path is "/project1/P1". The general translation of this pathway would be "/the_name_of_container/the_name_of_the_operator". Setting the Operator path to your target operator will display the specified null when the window is opened, but it will not display the contents of the null in the node itself. If you'd like to see a preview of the render on the window node, you'll also need to change the node pathway on the Common Page of the Window Comp. Here we can see what that looks like:

Screenshot_121913_111619_PM

Finally, I wanted to be able to test using a single button to open and close all three windows. When our media server is up and running I’d like to be able to open all three windows with a single click rather than opening them one window comp at a time. In order to test this idea, I added a single button component to my network. By exporting the state of this button to the “Open” parameter of the window on the Window Page I’m able to toggle all three windows on and off with a single button.
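A scripted version of that single-button behavior might look like the sketch below; I'm assuming the Window COMP exposes an Open toggle named winopen (newer builds use separate open/close pulse parameters instead), and the window names are examples.

# open (or close) all three projector windows at once
state = 1   # 1 to open, 0 to close
for name in ('window1', 'window2', 'window3'):
    op(name).par.winopen = state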

Cue Building for Non-Linear Productions

The newly devised piece that I've been working on here at ASU finally opened this last weekend. Named "The Fall of the House of Escher," the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day-like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times, and on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will go unseen.

This presents a tremendous challenge to any designer dealing with traditionally linear storytelling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways through a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating then was a measure of moving from room to room. This ideological approach was made easier with the addition of a convention for the "choice" moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – in that establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton's Watchout was largely out of the picture. It was for this reason that I settled on using Troikatronix Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods; methods that would prove to be essential in this process.

In building this navigation method it was first important to establish the locations in the house, and create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.
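Stripped of anything Isadora-specific, the navigation idea reduces to an adjacency map of rooms plus a pointer to the current location; the room names in this little sketch are hypothetical placeholders rather than the rooms from the production.

rooms = {
    'foyer':        ['parlor', 'choice_space'],
    'parlor':       ['foyer', 'study'],
    'study':        ['parlor', 'choice_space'],
    'choice_space': ['foyer', 'parlor', 'study'],
}

current = 'foyer'

def travel(destination):
    """Move to a destination only if it touches the current room."""
    global current
    if destination in rooms[current]:
        current = destination
    return current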

Fall_of_the_House_of_Escher_SHOW_DEMO.izz 2

The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, this challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to complex questions of how to solve a problem like non-linear storytelling.