
Isadora | Button Basics

In a previous post I talked about how to get started in Isadora with some basics about slider operation. I also want to cover a little bit about using buttons with Izzy. 

Buttons are very handy interface controls. Before we get started, it’s important to cover a few considerations about how buttons work. When working with a physical button, like an arcade button on a MIDI controller, the action of pressing the button completes a circuit, and releasing the button breaks the circuit. In Isadora, we can control what happens when we press a button. Specifically, we can control what values are transmitted when the button isn’t being pressed, what values are transmitted while it is, and how the button behaves (does it toggle, or is the signal momentary?). Thinking about how a button behaves will help as you start to build an interface, simple or complex.

Let’s start by experimenting with a simple implementation of this process. We’ll create a white rectangle that fills our stage, connect our shape to a projector, and finally use a button to control the intensity of the projector. 

Start by creating a new scene, and adding a “Shapes” actor and a “Projector” actor. Connect the shapes’ video outlet to the projector’s video inlet.

Next change the width and height dimensions of the shape to be 100 and 100 respectively. Remember that Isadora doesn’t use pixel values, but instead works in terms of percentages. In this case a value of 100 for the height indicates that the shape should be 100% of the stage’s height; the same applies for the width value of 100.

We should now have a white box that covers the height and width of the stage so that we only see white.

Now we’ll use a button to control a change in the stage from white to black. Remember that in order to start adding control elements we first need to reveal the control panel. You can do this by selecting it from the drop-down menu, using Command-Shift-C to see only the control panel, or using Control-Shift-S to see a split of the control panel and the programming space. If you’ve turned on the grid for your programming space you’ll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You’ll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

With the control panel visible, add a button. 

Once you’ve added your button to the control panel, you can change the size of the button by clicking and dragging the small white square on the bottom right of the button. 

Next let’s look at the options for the button. We can see what parameters we can control by double clicking on the button. When you do this you should see a pop-up window with the following attributes:

  • Control Title – what the control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – the font used for this control
  • Font Size – self-explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel
  • Button Text – the text displayed on the button
  • Control ID – the numerical identification number of this control
  • Off Value – the numeric value sent when the button is in the off position
  • On Value – the numeric value sent when the button is in the on position
  • Mode (Momentary or Toggle) – the mode for the button. Momentary indicates that the on value is only transmitted while the button is being pressed. Toggle indicates that the value will toggle between the on and off values with each click.
  • Don’t Send Off – prevents the button from sending the off value
  • Invert – inverts the on and off values

There are a few other options here, but they mostly have to deal with the appearance of the button. When you start thinking about how you want your control panel to look to an operator, these last parameters will be very helpful. 

For right now let’s leave the default parameters for the button’s options. Next connect the button’s control ID to the inlet on the Projector labeled “intensity.”

As we’re working on the control panel, edit mode is currently enabled, which will prevent us from being able to actually click the button with the mouse. To check the controls we have two options:

  • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu.
  • We can use the option key to bypass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to click the button in the control panel. You should see the white box flash on, and back off again as you press and release the mouse button. Right now as we click the button we’re sending a value of 100 to the intensity parameter of the Projector actor. This makes the shape opaque only so long as you’re pressing the button. 

Double click the button in the control panel and check the box for “Invert.” We’ve now inverted the message being sent from the button to the Projector. When you click the button you should now see the opposite: a white screen that flashes black, then returns to white.

Double click on the button in the control panel, uncheck the box for “Invert.” Change the “Mode” of the button from “Momentary” to “Toggle.” Now as you click the button you should notice that it stays depressed until you click it again. This allows you to toggle between the on and off states of the button. 
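
If it helps to see the two modes side by side, here’s a quick Python sketch of how momentary and toggle buttons translate press and release events into on and off values. This is my own conceptual model for illustration, not anything Isadora exposes:

    class Button:
        """A conceptual model of Isadora's button control."""
        def __init__(self, off_value=0, on_value=100, mode="momentary"):
            self.off_value = off_value
            self.on_value = on_value
            self.mode = mode
            self.on = False

        def press(self):
            if self.mode == "momentary":
                self.on = True            # on only while held
            else:
                self.on = not self.on     # toggle: each click flips the state
            return self.value()

        def release(self):
            if self.mode == "momentary":
                self.on = False           # releasing "breaks the circuit"
            return self.value()

        def value(self):
            return self.on_value if self.on else self.off_value

    momentary = Button(mode="momentary")
    print(momentary.press(), momentary.release())   # 100 0
    toggle = Button(mode="toggle")
    print(toggle.press(), toggle.release())         # 100 100: stays on until the next click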

This, obviously, is only the beginning of how to work with buttons. You might use a button to control the playback of a movie, which media is on the stage, the position of media on a stage, to jump between scenes, or to change any number of parameters in your patch. Knowing the basics of how buttons behave will help ensure that you can start to build a solid control panel that you can use during a live performance.

A Variety of Approaches | Lessons from Grad School


I’m learning a lot in grad school. Some of the lessons that I’m learning are consistent with my goals and aspirations, some are lessons about realigning my expectations with reality, and some are unexpected discoveries about the nature of a discipline’s approach. As an interdisciplinary student my coursework is a purposeful patchwork from multiple departments and schools. This approach means that I’m fortunate to see the world through multiple lenses, and it also means that at times I’m a servant to many masters. In my case, I’ve seen the approaches of the school of Art (in my second semester I took a media and sculpture course), AME (Arts, Media + Engineering), and the school of Theatre and Film.

In thinking and talking about why we make art/sculpture/programs it seems like I’ve continually run into similar questions, rooted in the desire to find meaning, direction, or justification for the art. While one might think of this as a more ideological exercise than a useful discussion, I think there’s value in wrestling with questions of motivation and function. “Why” and “for what” help to focus the creator in the process of finding the path for a particular project. To that end I think there are six statements that I’ve heard time and again in talking with other makers, performers, designers, and the like.

Six Statements of focus:

  • The act of creation is about
  • The aesthetic experience is
  • The function of the object/art/program is
  • The proof is in
  • Value is derived from
  • The meaning of the object/art/program

How a discipline finishes the above statements can help to illustrate how its practitioners are encouraged to think of the world, and of their contribution to their particular field. As a disclaimer, I don’t think any of the following observations are good or bad. These are my observations about how new and developing artists in these respective fields are encouraged to think about their work, and the process of making their work.

The Artist / Sculptor’s Method

  • The act of creation is about the exploration.
  • The aesthetic experience is both in the artist’s method and in the viewer’s observation.
  • The function of the object/art/program is inconsequential; the suggestion of a function is just as powerful.
  • The proof is in the critique of the work by an outside artist who is successful.
  • Value is derived from the act of creating something new; if the art is successful or not is in some ways inconsequential so long as the artist is being pushed to deepen his/her methods and unique style.
  • The meaning of the object/art/program can be explicit, implied, or absent; this is the maker’s choice, and they are in no way bound to create a piece that has specific meaning. 

In many ways this approach is about consciously making Art with a capital A, while trying to imagine that you’re only creating art with ironic italics. There’s something of an identity crisis in this approach that almost feeds off the expectation that an audience may willingly accept impenetrable art as a sign that it must be intellectually advanced. Discussions in this environment tend to start from a place of process rather than working backwards from the intended experience. For example, my class often spent more time talking about what we were currently engaged in doing, rather than exploring what we wanted the audience to experience in seeing our work. Here it feels like the answers are hidden, and that part of the artist experience is finding solutions on your own. Ironically there’s a very Randian kind of perspective to this field: a kind of rugged individualism that covets the secrets to other people’s magic tricks. There is also a quiet acceptance that good work may take a lot of time, or it may take very little. Sometimes the artist just has to spend 14 hours sanding, and that’s just a part of the work. There is some kind of hipster-zen clarity about the world that can be read as detachment or general disregard for the world.

The Programmer’s Method

    • The act of creation is about novelty and newness.
    • The aesthetic experience is secondary to the methodology in the programming.
    • The function of the object/art/program, even if inconsequential, must be based on logical rules.
    • The proof is in the procedural methodology; further, the proof is in the object / program’s reliable operation.
    • Value is derived from efficiencies and brevity (of the code).
    • The meaning of the object/art/program is allowed to be absent, or so abstract as to be invisible.

The programmer’s approach is built on rules. The starting point for a creative work might be an interest in continuing to explore a particular procedure, or curiosity about how to accomplish a particular end. Some works are born out of the necessity of a project or contract. More than anything, I’ve noticed that this perspective is always grounded in the procedural steps for accomplishing a particular task. An effective program requires an understanding of the pieces necessary to accomplish a particular end. It also often requires a bit of creative problem solving in order to ensure that one isn’t stopped by hurdles.

Depending on the project, the programmer may or may not start with the aesthetic of the finished product. In many cases, before the programmer can start to address how a particular system looks, s/he first must think about how to ensure that the system is consistently producing the intended results. Unlike the Artist’s method, the programmer relies on the experience of others who have faced similar problems. Before reinventing the wheel, the programmer first tries to establish how someone else has solved the same problem – what was the most elegant solution requiring the fewest system resources? What trade-offs need to be made in order to ensure consistent, stable operation? More importantly, the programmer lives in a world characterized as a race. Lots of other programmers are all working to solve the same problem, for the same pay-day. “Perfect” comes in a distant second to “done,” and while the goal is always to have elegant solutions, having a solution always trumps not having one.

The Media (Theatrical) Designer’s Method

    • The act of creation is about conveying a message or feeling.
    • The aesthetic experience is primary to the work, and should have a purposeful relationship to the world of the production.
    • The function of the object/art/program is to help tell the story of the production or performance.
    • The proof is in the observer’s and the actor’s relationship with the media.
    • Value is derived from the purposeful connection or disconnection of the art / program / work to the world of the play or performance it exists inside.
    • The meaning of the object/art/program can be abstract or didactic so long as it is purposeful.

The media designer is in an interesting position in the theatre. Somewhere between lights, set, and sound is the realm of the media designer. Designers for the theatre are often bound by the world of the play and how their work supports the larger thematic and idiomatic conventions of the script. More importantly, the media designer’s work must live in the same world as the performance. The media may be comprised of contrasting images or ideas; it might be in aesthetic dissonance with the world or it may be in harmony, but it always lives in the same place as the performance. This work must also consciously consider the role and placement of the audience, the relationship between the media and the performers, and the amount of liveness required for a particular performance.

Between the artist and the programmer, the media designer sometimes relies on the magic of implied causation (when the actor performs a particular gesture a technician presses a button to cue the shift in the media giving the audience the illusion of a direct relationship between the actor and the media), but may also need to create a system of direct causation (the actor or dancer is actually the impetus for changes in the media). Like the programmer, the media designer is also in a sort of race. The countdown to opening night is always an element of the design process. While “done” still trumps “perfect” this question takes on a different kind of dynamic for the media designer. “Done” might be something that happens during the second or third night of tech, and ideally “perfect” happens before opening. 

Isadora | Slider Basics

One of the most exciting (and also most challenging) parts of working with Isadora is thinking about how an operator is going to use your patch during a show. ASU’s program focuses on the importance of programming a show with the expectation that the person running your system may, or may not, have much experience. During the tech rehearsal process one of the media designer’s responsibilities is to train the operator in basic operation and troubleshooting techniques.

While there are a wide variety of methods for controlling your system I want to take a moment to cover how you can use the Control Panel features of Isadora to create a simple custom interface. I’m also going to take a moment to talk about the different kinds of controls, how they work, and things you want to keep in mind as you’re using them. 

To get started, there are a few different ways to reveal the control panel. You can select it from the drop-down menu, use Command-Shift-C to see only the control panel, or use Control-Shift-S to see a split of the control panel and the programming space. If you’ve turned on the grid for your programming space you’ll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You’ll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

As you create new scenes, Isadora will start by connecting all scenes to the same control panel. There are a few different schools of thought in terms of best practice in the use of control panels. Using a single control panel for every scene means only building a single interface. As long as you’re only dealing with a limited number of simple cues this is a fine direction to head, and may be the easiest method in terms of programming. This approach can, however, get complicated very quickly if you’re triggering more than one actor per scene. In this scenario the programmer could lose track of where a button or slider is connected. This might cause unexpected playback results or could just be a source of headaches. For more complicated playback situations, you may instead elect to have separate control panels for each scene. Depending on your programming needs this may be the best way to ensure that your controls are only linked to a single scene.

To accomplish this, you’ll need to split your control panel. Isadora gives you several visual cues to determine how a scene and control panel are linked. When you glance at your scene list you’ll notice that the bar underneath is either continuous (a single control panel) or broken (a split control panel).  

To split the control panel, click between the two scenes that you wish to separate. When you see your cursor separating the two scenes, right click to get a contextual menu with the option to split the control panel. You should now see that the line between the two scenes is broken.

Let’s start by looking at a simple slider. To add a slider to your control panel start by double clicking in the control panel work space. Next type in “slider” and select it when it appears in the drop down menu. 

It’s important to note that there is a difference between the 2D slider, and the regular slider. For now, we just want the “slider” control. We can learn a little more about what our slider is doing by double clicking on it. 

You should see a pop up window with lots of information about our slider:

  • Control Title – what this control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – what’s the font used for this control
  • Font Size – self explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel. As a note, for this to work properly, you’ll also need to enable the “Display Value” check-box (a big thank you to Matthew Haber for catching my error here)
  • Control ID – the numerical identification number of this control
  • Minimum – the value the slider sends at its bottom (or left) position.
  • Maximum – the value the slider sends at its top (or right) position.
  • Step – the counting increments for this control (a short sketch below models how these three values interact).
  • Display Value – Shows the current value being sent in the control panel itself.
  • Display Format – The number of floating points displayed.
  • Color – The color of the inside of the slider.

There are a few other options here, but they’re largely aesthetic, so I’m going to skip them for now.
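
Before moving on, here’s a rough Python model of how the Minimum, Maximum, and Step values interact. This is my own illustration, not anything Isadora exposes:

    def slider_value(position, minimum=0.0, maximum=100.0, step=1.0):
        """position is the slider's travel from 0.0 (bottom/left) to 1.0 (top/right)."""
        raw = minimum + position * (maximum - minimum)
        snapped = minimum + round((raw - minimum) / step) * step   # snap to the step size
        return min(max(snapped, minimum), maximum)                 # clamp to the range

    print(slider_value(0.0))            # 0.0   -> the minimum
    print(slider_value(1.0))            # 100.0 -> the maximum
    print(slider_value(0.33, step=25))  # 25.0  -> snapped to a 25-unit step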

Let’s start by working with the default values for the slider and see how it communicates with the patch itself.

First, we will add a trigger value to the programming space so we can see how values are transferred from the control panel to the programming environment.

Next we can connect the Control ID from the control panel to the value inlet on the Trigger Value actor. We can do this by clicking on the Control ID, and dragging the red line to the “value” input.

You should now see a number next to the value input that corresponds to the slider’s control ID.

As we’re working on the control panel, edit mode is currently enabled, which will prevent us from being able to actually move the slider with the mouse. To check the controls we have two options:

    • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu. 
  • We can use the option key to bypass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to move the slider in the control panel. You’ll notice that the value linked to the slider also changes. 

You’ll also notice that the output from the trigger value has not changed. This is because we’re only adjusting the value, but not activating the trigger. Let’s activate the trigger at the same time we’re moving the slider.

To do this we attach the Control ID to the trigger inlet on the Trigger Value actor. This will ensure that the actor triggers at the same time that the value is changed. Now when we move the slider we can see that the output value also changes.

Now that we know how to send slider data to a trigger value, we can look at something a little more interesting. We’ll start by adding a Shapes actor and connecting it to a Projector actor.

Next connect your vertical slider to the “vert pos” (Vertical Position) inlet on the shapes actor. 

Create a new slider in the control panel. Grab the small box on the bottom right corner and drag the slider to the right until you have created a horizontal slider. 

Connect your horizontal slider to the “horz pos” (Horizontal Position) inlet on the Shapes actor.


Next we need to adjust the scaling values of the Shapes actor. An actor’s inlets and outlets can often be scaled to a set range. In order to properly use our slider we’ll need to adjust the inlet’s scaled values on the Shapes actor. To do this, click on the name of the attribute whose scaled values you’d like to adjust. Start by clicking on “horz pos.” We can see in the pop-up menu that the minimum value is currently set to −200, and the maximum value is set to 200. These values are larger than we need.


Isadora uses a coordinate system that assumes that the middle of the stage is the origin, 0,0. Further, Isadora thinks in terms of percentages rather than pixels. In the case of our horizontal slider, a positive value of 50 represents half of the total stage width, which puts us at the rightmost edge of the stage. In the case of shapes it’s also important to note that the shape’s position is relative to its center: a positive value of 50 still leaves half of our shape on the screen, no matter the dimensions of the shape.

Set the scaled values of the horizontal position to −65 and 65. Now when we drag our slider (remember to hold down the option key) we are able to move our box from all the way off the stage on the left to all the way off the stage on the right.
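
If you’re curious about the math happening under the hood, here’s a hedged Python sketch of the two ideas in play: Isadora’s center-origin percentage coordinates, and the linear remap a scaled inlet performs on the slider’s 0–100 output. The stage resolution here is just an example value:

    def percent_to_pixels(h_pct, v_pct, stage_w=1920, stage_h=1080):
        """Isadora stage coordinates: (0, 0) is the center, +/-50 is an edge."""
        x = stage_w / 2 + (h_pct / 100.0) * stage_w
        y = stage_h / 2 + (v_pct / 100.0) * stage_h
        return x, y

    def scaled_inlet(value, in_min=0, in_max=100, out_min=-65, out_max=65):
        """The linear remap a scaled inlet applies to the slider's output."""
        t = (value - in_min) / (in_max - in_min)
        return out_min + t * (out_max - out_min)

    print(percent_to_pixels(50, 0))                              # (1920.0, 540.0): right edge
    print(scaled_inlet(0), scaled_inlet(50), scaled_inlet(100))  # -65.0 0.0 65.0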

Another type of slider that might be useful in this type of situation is the 2D slider. Create a new scene and split the control panel so we can see how this input control works. In the new scene add a Shapes actor and a Projector actor, and connect them. Now add a 2D slider to the control panel.

Double click on the 2D slider so you can see a little more about how this particular control input works. Similar to the “slider” control, you can title the slider, adjust the width, height, font, and so on. You’ll notice that there’s an X Control ID and a Y Control ID.

Next we’ll click okay, and link Control ID 1 (the X control) to the “horz pos” inlet on the Shapes Actor. Now link the Control ID 2 (the Y control) to the “vert pos” inlet on the Shapes Actor. Check to make sure that the horizontal and vertical inlets on the Shapes Actor are properly scaled (last time we set them to −65 and 65). 

Now the single 2D slider behaves in the same way as the two sliders we set-up in the exercise above.

This, obviously, is only the beginning of how to work with Sliders and 2D Sliders. You might use a slider to control the playback position of a movie, or the position of a movie on the stage. You might use a slider to control position, zoom, rotation, width, height, really just about any kind of numerical attribute for an actor. The key things to keep in mind in this process are:

    • Knowing the range of values that your slider is transmitting
    • Knowing the scaled range of values that your actor’s inlet is mapping those values to
    • Knowing the control ID of each control
    • Knowing how to connect your control panel items to Actors in your patch

Media Design | Building Projection Mapping

One of the courses I’m taking in my first year at ASU is a course called Media Design Applications. This course is centered around the use of various media design techniques in specific relation to their application in a theatrical setting. One of the techniques that we discussed in class is architectural projection mapping. This form has quickly become popular for its forced perspective and its opportunities for complex illusion. The underlying principle of projection mapping is to highlight and take advantage of physical form in order to create the illusion that the entire surface is, itself, a screen. There are a variety of techniques to achieve this illusion, some based entirely in software and others based in the process of generating the artwork itself. This is an essential and powerful tool for the media designer as it opens up a wide range of possibilities for the creation of theatrical illusion. Before I start to talk about the process, here’s the project description:

Project Description:

Component 2 – Geometry, Surface and Illusion 

Unfortunately – or possibly fortunately – media designers in the theatre rarely get a nice, rectangular, white screen to shoot at from a perpendicular, on-center angle. In this section, we will explore methods for dealing with odd angles, weird shapes, and non-ideal surfaces, as well as exploring special effects that are possible through the right marriage of projection, surface and angle. For this project, you may choose a building, sculpture or other built object in the ASU environment, then map its geometry using the techniques shown in class and create content utilizing that geometry to best effect. Final presentations of this project will be in the evening on campus.

I started this process by scouting a location. After wandering around campus several times, one of the buildings that I kept coming back to was an energy solutions building by a company called NRG. One of the larger draws of this building happens to be its location. Positioned directly across from one of the campus dormitories, it seemed like an ideal location that would have a built-in audience. While there’s no guarantee that there will be many students left on campus at this point, it never hurts to plan for the best.

The face of the building that points towards the dormitories is comprised of abstract raised polygons arranged in narrow panels. These panels come in two varieties creating a geometric and modern look for the building. One of the productions I’m working on next year has several design elements that are grounded in abstract geometric worlds, and this seemed like a prime opportunity to continue exploring what kind of animation works well in this idiom.

In the Asset or In the System

The debate that is often central to this kind of work is centered around whether to build a system (or program) for creating the aesthetic, or to instead create the work as fixed media artifacts. In other words, do you build something that is at its core flexible and extendable (though problematic, finicky, and unabashedly high maintenance), or do you build something rigid and fixed (though highly reliable, hardware independent, and reproducible)? Different artists prefer different methods, and there are many who argue that one is obviously better than the other. The truth of the matter, however, is that in most cases the right approach is really a function of multiple parameters: who’s the client, what’s the venue, what’s the production schedule, what resources are available, is the project interactive, and so on. The theoretical debate here is truly interesting, and in some sense calls into question what skill set is most appropriate for the artist who intends on pursuing this practice. The print analogy might be: do you focus on designing within the limitations of the tools that you have, or do you commit to building a better printing press so that you can realize the design that exists only as an abstract thought?

Recent Arizona State MFA graduate Boyd Branch shared these thoughts about this very topic:

I don’t know if there is much of a debate between system building and design for production. Quite simply, every production demands aesthetics. The aesthetic is always the most important. The system is only useful in as much as it generates the appropriate aesthetic experience. It doesn’t matter how reliable, interesting, or functional a system is if it isn’t supplying an aesthetic relevant to production. A “flexible and extendable” system is only useful if the aesthetic of flexibility and extendibility is ostensibly the most relevant aesthetic. Interactivity is an aesthetic choice for performance and only relevant when ontology or autonomy are the dramatic themes. For theatre in particular, the system inevitably becomes a character, and unless that character is well defined dramatically, it has no business inserting itself into production.

The debate if any is internal for the designer and presented as a range of options for the producer/director. That debate is a negotiation between time and resources. A designer may be able to envision a system that can achieve an effect- but without sufficient experience with that system and the ability to provide a significant degree of reliability, such a system should not be proposed without articulating how the dramatic themes will inevitably shift to questions about technology.

Sometimes an aesthetic is demanded that requires experimentation on the part of the designer. A designer has to be knowledgeable enough about their skill set to know how to explain the cost involved in achieving that aesthetic. And if that cost is reliability, then it is incumbent on the designer to articulate that cost and explain how the production will hinge on the unpredictability of that system.

An unreliable system, however, is frankly rarely good for any production unless unreliability is the theme. If a production requires a particular aesthetic experience that seems to be achievable only with the creation of a new tool, then it must be recognized that that tool and the presence of that tool embody the major dramatic themes of the production.

Avant garde theatre is one of the best environments for exploring the aesthetics of system building – but it is also the theatre that has the smallest budgets…

For this particular assignment we were charged with the approach of building everything in the asset itself. That is, building a fixed piece of video artwork that could then be deformed and adjusted with playback software (MadMapper).

AfterEffects as playground

Given the nature of this assignment it made sense that Adobe After Effects would be the tool of choice. AE is an essential tool for any media designer, especially as there are times when pre-rendered media is simply the best place to start. I spent a lot of time thinking about the direction that I wanted to move in terms of overall aesthetic for this particular assignment, and I found that again I was thinking about abstract geometric worlds, and the use of lighting in 3D environments in order to explore some of those ideas. As I’ve been thinking about the production I’m working on in the fall, it’s seemed increasingly important to take advantage of open-ended assignments in order to explore the ideas and directions that feel right for that show. I’m really beginning to see that the cornerstone of successful independent learning comes from deliberate planning – what can I explore now that will help me on the next project? To that end, what kind of work do I want to be making in two years, and how can I set myself up to be successful? Planning and scheduling may be one of the most under-stressed skills of any professional, and I certainly think that applies to artists.

In thinking about abstract animation with After Effects I knew that I wanted to explore four different visual worlds: flat abstract art, lines and movement, 3D lighting and the illusion of perspective, and glitch art. Each of these has its own appeal, and each also has a place in the work that I’m thinking about for the fall.

Worlds Apart

Flat and abstract

In thinking about making flat abstract art I started, as I typically do, by doing lots of visual research. One of the more interesting effects that I stumbled on was achieved by using AE’s Radio Waves plug-in in conjunction with a keyframed mask. YouTube user MotionDesignCommun has a great tutorial about how to achieve this particular visual effect. Overall the media takes on a smoke-like, morphing kind of look that’s both captivating and graceful. A quick warning about this effect: it is definitely a render hog. The 30 seconds of this effect used for this project took nearly 7 hours to render. As a disclaimer, I did render the video at a projector-native 1920 x 1200. All of that aside, this effect was a great way to explore using morphing masks in a way that felt far more organic than I would have originally thought.

Lines and Flat Movement

I also wanted to play with the traditional big-building projection mapping effect of drawn-in lines and moving shapes. In hindsight I think the effect of the lines drawing in took too long, and ultimately lost some of the visual impact that I was looking for. I also explored playing with transforming shapes here; that was actually far more interesting, though it happened too quickly. My approach for this effect was largely centered around the use of masks in AE. Masks, layers, and encapsulated effects were really what drove this particular exploration. Ultimately I think spending more time to write an expression to generate the kind of look that I’m after would be a better use of my time. If I were to go back I think I could successfully craft the right formula to make the process of creating this animation easier, but it really took the effort of creating the first round of animation to help me find that place. One of the hard but important lessons that I’ve learned from programming is that sometimes you simply have to do something the hard way / long way a couple of times so that you really understand the process and procedural steps. Once you have a solid idea of what you’re trying to make, it becomes much easier to write the expression as an easier way to achieve the effect you’re after.

3D Lighting and the illusion of Perspective

Another projection designer’s magic trick that I wanted to play with was the idea of creating perspective with digital lighting of a 3D environment. By replicating the geometry that you’re projecting onto, you, the designer, can create interesting illusions that seem impossible. In my case I started in After Effects by positioning planes in 3D space at steep angles, and then masking them so that they appeared to mimic the geometry of the actual building. To make this easier for myself, I worked on a single column at a time. I then pre-composed individual columns so that they could easily be duplicated. The duplicated columns only needed small changes rather than requiring me to build each of the 86 triangles from scratch.

Glitch Art a la After Effects

In another course I took this semester a group of students focused on using glitch art as a base for some of their artistic exploration. While they specifically focused on the alteration of i-frames and p-frames, I wanted to look at the kind of effect that can be purposefully created in AE, and then later modified. YouTube user VinhSon Nguyen has a great tutorial on creating a simple glitch effect in AE. More interesting than the act of making his glitch effect is his approach. Rather than just adding in adjustment layers and making direct changes, Nguyen focuses on how to connect the attributes of your effect to null objects, essentially making an effect that can be universally applied to any kind of artwork that you want to drop into your comp. This approach to working in AE was interesting as it seemed to start from the assumption that the effect one is making is something that should be easily applied to other works.

Put it all Together

With each section as its own comp the last step in the process was to create a final comp that transitioned between the different worlds, applied some global lighting looks and effects, and added a mask to prevent unwanted projector spill. This was also a great place to do some fine tuning, and to see how the different comps transitioned into one another. 

The Final Rendering 


Resources

AE: Create a Glitch Effect

AE: Morphing by MotionDesignCommun

Creative Cow – Creating a 3D Cube

MadMapper AfterEffects Tutorial for Building Projection

Isadora | Live-Camera Input as a Mask

Back in March I had an opportunity to see a production called Kindur put on by the Italian company Compagnia TPO. One of the most beautiful and compelling effects that they utilized during the show was using a live camera to create a mask that revealed a hidden color field. The technique of using a live feed in this way allows a programmer to work with smaller-resolution input video while still achieving a very fluid and beautiful effect.

This effect is relatively easy to generate by using just a few actors. An overview of the whole Isadora Scene looks like this:

To start this process we’ll begin with a Video In Watcher actor. The video-in will be the live feed from our camera, and will ultimately be the mask that we’re using to obscure and reveal our underlying layer of imagery. This video-in actor connects to a Difference actor, which looks for the difference between two sequential frames in the video stream. This is then in turn passed to a Motion Blur actor. The motion blur actor will allow you to specify the amount of accumulated blur effect as well as the decay (disappearance rate) of the effect. To soften the edges, the image stream is next passed to a Gaussian Blur actor. This stream is then passed to an Add Alpha Channel actor by passing the live feed into the mask inlet on the actor. The underlying geometry is passed in through the video inlet on the Add Alpha Channel actor. Finally, the outlet of the Add Alpha Channel actor is passed out to a projector.
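
For anyone who wants to prototype the same chain outside of Isadora, here’s a rough Python/OpenCV sketch that mirrors it: frame difference, then an accumulating blur with decay, then a Gaussian blur, then using the result as an alpha mask. The decay rate, blur size, and file name are placeholder values of my own:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                      # live camera feed
    field = cv2.imread("hidden_color_field.jpg")   # placeholder: the layer to reveal
    ok, prev = cap.read()
    accum = np.zeros(prev.shape[:2], dtype=np.float32)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Difference actor: change between sequential frames
        diff = cv2.absdiff(frame, prev)
        prev = frame
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Motion Blur actor: accumulate movement, then let it decay
        accum = np.maximum(gray, accum * 0.95)
        # Gaussian Blur actor: soften the mask's edges
        mask = cv2.GaussianBlur(accum, (21, 21), 0) / 255.0
        # Add Alpha Channel actor: the mask reveals the underlying layer
        layer = cv2.resize(field, (frame.shape[1], frame.shape[0]))
        cv2.imshow("reveal", (layer * mask[..., None]).astype(np.uint8))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break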

As a matter of best-practice I like to use a Performance Monitor actor when I’m creating a scene in order to keep an eye on the FPS count. This can also be useful when trying to diagnose what’s causing a system to slow down during playback. 

This effect works equally well over still images or video, and is certainly something that’s fun to experiment with. Like all things in live systems, your mileage may vary – motion blur and Gaussian blur can quickly become resource expensive, and it’s worth turning down your capture settings to help combat a system slow-down.

Isadora | Network Control

For an upcoming show one of the many problems that I’ll need to solve is how to work with multiple machines and multiple operating systems over a network. My current plan for addressing the needs of this production is to use one machine to drive the interactive media, and to slave two computers for cued media playback. This will allow me to distribute the media playback over several machines while driving the whole system from a single machine. Specifically, I’ll use one Mac Pro to work with live data while slaving two Windows 7 PCs for traditionally cued media playback. The Mac Pro will drive a Barco, while the PCs each drive a Sanyo projector. This should give me the best of both worlds in some respects: distributed playback, similar to WatchOut’s approach, while also allowing for more complex visual manipulation of live-captured video.

To make all of this work, however, it’s important to address how to get two different machines running Isadora on two different operating systems to talk with one another. To accomplish this I’m going to use an OSC Transmit actor on the Master machine, and an OSC listener on the slaved machines. 

On the Master machine the set-up looks like this:

Trigger Value – OSC Transmit

The transmit actor needs the IP address of the slaved machines, as well as a port to broadcast to. The setup below is for talking to just one other machine. In my final setup I’ll create a specialized user actor that holds two OSC Transmit actors (one for each machine) that can be copied into each scene.

On the slaved machines the setup takes two additional steps. First, it’s important to determine what port you want to receive messages on. You can do that by going to Isadora Preferences and selecting the Midi/Net tab. Here you can specify what port you want Isadora to listen to. Next it’s important to catch the data stream. You can do this by opening up the Communications tab and selecting Stream Setup. From here make sure that you select “Open Sound Control” and check the box “Auto-Detect Input.” At this point you should see the Master machine broadcasting with a channel name, an address, and a data stream. Once this is set up, the actor patch for receiving messages over the network looks like this:

OSC Listener – Whatever Actor you Want

In my case I’ll largely use just jump++ actors to transition between scenes, each with their own movie. 
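
As a side note, anything that speaks OSC can stand in for the Master machine. Here’s a minimal Python sketch using the python-osc library; the IP, port, and channel number are placeholders you’d match to your own Midi/Net and Stream Setup values:

    from pythonosc.udp_client import SimpleUDPClient

    # IP of the slaved machine and the port set in its Midi/Net preferences
    client = SimpleUDPClient("192.168.1.101", 1234)

    # The address must match what the slaved machine detects in Stream Setup;
    # Isadora conventionally uses addresses of the form /isadora/<channel>
    client.send_message("/isadora/1", 1.0)   # e.g. fire a trigger on channel 1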

You can, of course, do much more complicated things with this set-up all depending on your programming or play-back needs. 

TouchDesigner | Sculpture

In the ever-growing list of tools that I’m experimenting with, Derivative’s TouchDesigner is a tool that time and again keeps coming up as something that’s worth learning, experimenting with, and developing competencies around. TD is a nodal environment called a network. Inside of the network, nodes can be directly connected with patch cords or linked by exporting parameters.

Nodes, also called OPs (Operators), are split into families specific to the characteristics of their behavior: CHOPs (Channel Operators), TOPs (Texture Operators), SOPs (Surface Operators), MATs (Materials), and DATs (Data Operators). Nodes from within the same family can pass data directly to one another through patch cords (similar to MaxMSP and Isadora). The output of nearly every node can also be passed into other nodes by exporting parameter values. This divides the passing of data into two distinct processes: one that’s centered on like-to-like connections, and one that’s about moving from like to different.

TouchDesigner’s nodes are the most powerful when they’re connected. Like Max, single nodes do little by themselves. Also like Max, the flexibility of TD is its ability to build nearly anything, and with that comes the fact that little is already built. Similar to Isadora is the native ability to build user interfaces as a part of the very fabric of building a program / user experience.

One of the projects that I’m working on this semester is for a sculpture course. This course, called New Systems, is intended to address the link between media and sculpture. One of the areas that I’m interested in exploring is collecting data from a circus apparatus and using it to drive a visualization in performance. I’m most interested in the direct link between how an apparatus is behaving and how that data can be interpreted in other ways. To that end, this semester I set to the work of building an apparatus and determining how to parse its data. In my case I decided to use this opportunity to experiment with TouchDesigner as a means of driving the media. While I was successful in welding together a square from stainless steel, after some consultation with my peers in my sculpture course it was determined that this structure was probably not safe to perform on. Originally I had planned to use a contact mic to capture some data from my interaction with the apparatus, but after a little bit of thinking and consultation with my adviser (Jake Pinholster) I decided that gyroscope data might be more useful.

My current plan is to move away from this being a performance apparatus and instead think of it as an installed sculptural piece that serves as a projection surface. For data I’ll be using an iPod Touch running Hexler’s Touch OSC. Touch OSC passes data using UDP packets to communicate over a wired or wireless network using Open Sound Control (OSC). One of the many things that Touch OSC can do is pass the accelerometer data from an iOS device out to other applications. In my case Touch OSC is passing this information to TouchDesigner. TD is then used to pull this information and drive some of the media.

One of the challenges that my adviser posed in this process was to create three scenes that the media moved through. For the sake of experimentation I applied this challenge to the idea of working with containers in TouchDesigner. Containers are a method of encapsulation in TD; they’re a generic kind of object that can hold just about any kind of system. In my case I have three containers that are equivalent to different scenes. The timeline moves the viewer through the different containers by cross-fading between them. Each container holds its own 3D environment that’s rendered in real time and linked to the live OSC input coming from an iPod Touch.

The best way to detail the process of programming this installation is to divide it up into the component pieces that make it all work. The structure of this network is defined by three hierarchical levels: the container, control, and final output; the individual composited scenes; and the underlying geometry.


Want to work your way through these ideas one chunk at a time? Visit their individual Posts here:

The Underlying Geometry

The Individual Composited Scene

The Container, Control, and Final Output

Want to work your way through the whole process? Keep on scrolling.


The Underlying Geometry

One of the benefits of working with TouchDesigner is the ability to work in 3D. 3D objects are in the family of operators called SOPs – Surface Operators. One of the aesthetic directions that I wanted to explore was the feeling of looking into a long box. The world inside of this box would be characterized by examining artifacts as either particles or waves with a vaguely dual-slit kind of suggestion. With that as a starting point I headed into making the container for these worlds of particles and waves.

Before making any 3D content it’s important to know how TouchDesigner processes these objects in order to display them. On their own, Surface Operators can’t be displayed as a rendered texture. In TouchDesigner’s idiom textures are two-dimensional surfaces, and it follows that the objects that live in that category are called TOPs, Texture Operators. Operators from different families can’t be directly connected with patch cords. In order to pass the information from a SOP to a TOP one must use a TOP called a Render. The Render TOP must be connected to three COMPs (Components) in order to create an image that can be displayed. The Render TOP requires a Geometry COMP (something to be rendered), a Light COMP (something to illuminate the scene), and a Camera COMP (the perspective from which the object is to be rendered). In this respect TD pulls from conventions familiar to anyone who has worked with Adobe’s After Effects.
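
For those who prefer to see it concretely, that same minimal render dependency can be built from TouchDesigner’s textport with a few lines of Python. This is a sketch of my own, assuming an empty container at /project1; the operator names are arbitrary:

    # Run from TouchDesigner's textport; assumes an empty container at /project1
    container = op('/project1')

    geo = container.create(geometryCOMP, 'geo1')       # something to render
    light = container.create(lightCOMP, 'light1')      # something to light it
    cam = container.create(cameraCOMP, 'cam1')         # a perspective to render from
    render = container.create(renderTOP, 'render1')    # the resulting texture

    # Point the Render TOP at its three required COMPs
    render.par.geometry = geo.name
    render.par.camera = cam.name
    render.par.lights = light.name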

Knowing the component pieces required to successfully render a 3D object, it’s easier to understand how I started to create the underlying geometry. The Geometry COMP is essentially a container object (with some special attributes) that holds the SOPs responsible for passing a surface to the Render TOP. The default Geometry COMP contains a torus as its geometry.

We can learn a little about how the COMP is working by taking a look inside of the Geometry object. 

Here the things to pay close attention to are the two flags on the torus object. You’ll notice in the bottom right corner there is a purple and a blue circle that are illuminated. The purple circle is a “Render Flag” and tells TouchDesigner to render the object, and the blue circle is a “Display Flag” which tells TouchDesigner that this is the object that should be displayed in the Geometry COMP.

Let’s take a look at the network that I created.

Now let’s dissect how my geometry network is actually working. At first glance we can see that multiple objects are being combined into a single piece of geometry that’s ultimately being passed out of this Geometry COMP. 

If we look closer we’ll see that here that the SOP network looks like this:

Grid – Noise – Transform – Alpha Noise (here the bypass flag is turned on)

Grid creates a plane that’s built out of polygons. This is different from a rectangle that’s composed of only four points. In order to create a surface that can deform, I needed a surface with points in the middle of it. The grid is attached to a Noise SOP that’s animating the surface. Noise is attached to a Transform SOP that allows me to change the position of this individual plane. The last stop in this chain is another Noise SOP. Originally I was experimenting with varying the transparency of the surface. Ultimately, I decided to move away from this look. Rather than cutting this out of the chain, I simply turned on the Bypass Flag, which turns off this single SOP. This whole chain is repeated eight more times (for a total of nine grids).

These nine planes are then connected so that the rest of the network looks like this:

Merge – Transform – Facet – Texture – Null – Out

Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move the object as a whole in space. Facet is a handy operator that allows you to compute the normals of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring; ultimately I ended up turning on the bypass flag for this SOP. A Null, like in other environments, is really just a placeholder kind of object. In the idiomatic structure of TouchDesigner, the Null is operationally an object that one places at the end of an operation string. This is considered a best practice for a number of reasons. High on the list is that it allows easy access for making changes to a string: TouchDesigner allows the programmer to insert operations between objects, and by always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.
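
For the curious, here’s roughly what building that kind of repeated strand looks like if you script it with TouchDesigner’s Python instead of copying nodes by hand. This is a hedged sketch rather than my original network; the names and z-offset are arbitrary:

    # Run from the textport; builds nine grid-noise-transform strands inside geo1
    geo = op('/project1/geo1')
    merge = geo.create(mergeSOP, 'merge1')

    for i in range(9):
        grid = geo.create(gridSOP, 'grid{}'.format(i))
        noise = geo.create(noiseSOP, 'noise{}'.format(i))      # animates the surface
        xform = geo.create(transformSOP, 'xform{}'.format(i))  # positions this plane
        noise.inputConnectors[0].connect(grid)
        xform.inputConnectors[0].connect(noise)
        xform.par.tz = -i * 0.5            # arbitrary spacing back into z
        merge.inputConnectors[i].connect(xform)

    out = geo.create(outSOP, 'out1')
    out.inputConnectors[0].connect(merge)
    out.render = True                      # the purple render flag
    out.display = True                     # the blue display flag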


The Individual Composited Scene

There are always large questions to answer when thinking about creating an interactive work: Who is it for? What does it look like? What are you trying to communicate? How much instruction do you provide, how little instruction do you provide? And on and on. As I started to think about how this piece was going to work as an installation rather than as a performance apparatus, I started by thinking about what kind of data I could use to drive the visual elements of this work. One of the sensors that I knew I could easily incorporate into my current sculptural configuration was an iPod Touch. The Touch has an on-board gyroscope and accelerometer. After a conversation with my adviser (Jake Pinholster) we decided that this would be a direction of exploration worth pulling apart, and from there I went back to TouchDesigner to start thinking about how I wanted to incorporate live data into the piece I was making.

When dealing with a challenge like building an interactive sculptural system that has at least three different visualizations, it can be challenging to think about where to start. Different programmers are bound to have different approaches to addressing this question. My approach was to start by thinking about what kind of input data I had to work with. Because I was dealing with a sensor that relayed spatial information, this also helped me think about how to represent that data. Next I thought about what different kinds of ways I wanted to present this information, and finally I addressed how to play back this experience for users. Some of my more esoteric and existential questions (why am I making this? what does it mean? what does it represent?) were addressed through the methodical programming process, and others were sussed out over contemplative cups of coffee. As much as I wish that these projects could have a straight line of execution, a checklist even, I’m discovering more and more that the act of creating and programming is often a winding path with happy (and unhappy) discoveries along the way.

My first step on this journey, however, was to address what kind of inputs I had to use. Hexler has an excellent app for sending UDP messages over wireless connections called Touch OSC. OSC, or Open Sound Control, is a communications protocol that uses UDP messages to send data over wired and wireless networks. It’s functionally similar to MIDI, with some additional flexibilities and constraints. In the case of Touch OSC, one of the parameters that you can enable on your iOS device is sending xyz data from the accelerometer. Getting Touch OSC up and running does require a few steps to get the ball rolling. First, both the computer that’s receiving and the device that’s broadcasting need to be on the same network. Your broadcasting device will need the IP address of the receiving computer, and a specified port to send the data to (how to find your IP address on a Mac, and on a PC). Once this information is set on your broadcasting device, it’s time to add a Channel Operator to your TouchDesigner network.
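
Before the data ever reaches TouchDesigner, it can be useful to confirm that Touch OSC is actually broadcasting. Here’s a small listener sketch using the python-osc library; Touch OSC sends the accelerometer on the /accxyz address, and the port here is just an example that needs to match the outgoing port set on your device:

    from pythonosc import dispatcher, osc_server

    def on_accel(address, x, y, z):
        print("accelerometer:", round(x, 3), round(y, 3), round(z, 3))

    d = dispatcher.Dispatcher()
    d.map("/accxyz", on_accel)   # Touch OSC's accelerometer address

    # The port must match the outgoing port configured in Touch OSC
    server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), d)
    server.serve_forever()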

In TouchDesigner, there is a CHOP called “OSC In.” This CHOP will allow you to receive OSC data over a wireless network. Once you’ve added the CHOP to your TD network you’ll have to specify the port that Touch OSC is broadcasting to, and then you should be in business. In my case, once this was set up I could instantly see a stream of accelerometer data coming from my iPod Touch. In order to use these values, however, I needed to take some additional steps. The raw OSC data from Touch OSC comes in as a range of values from -1 to 1. Additionally, the data comes in from one CHOP. My flow of operators looks like this:

OSC In – Select – Lag – Math – Null

OSC In is the data input. The Select CHOP allows you to select a single channel out of a bundle of channels. In this case I used this to separate my X, Y, and Z inputs into different streams. The Lag CHOP helps to smooth out the attack and decay rates of input data. In my case this ensured that the final values used to control another object were kept from being too jittery. The Math CHOP is tremendously powerful; in my case I wanted to be able to map the values of my raw data [-1 to 1] to a larger range of values, say 0 to 200. Finally I ended my string in a Null. A Null in this case is very useful in case I need to add any other operators into my string.
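
In plain Python terms, the Lag and Math CHOPs in that chain are doing something like the following. This is my own analogue for illustration, not TD code:

    def lag(prev, new, smoothing=0.9):
        """Lag CHOP analogue: ease toward the new value instead of jumping."""
        return prev + (1.0 - smoothing) * (new - prev)

    def math_map(value, in_lo=-1.0, in_hi=1.0, out_lo=0.0, out_hi=200.0):
        """Math CHOP analogue: remap from one range to another."""
        t = (value - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    smoothed = 0.0
    for raw in [0.8, 0.9, -0.2, -0.3]:      # jittery accelerometer samples
        smoothed = lag(smoothed, raw)
        print(round(math_map(smoothed), 2))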

Before thinking about how to use these values, it’s important to take a moment to revisit how geometry is rendered in TouchDesigner. The Geometry COMPs that are used to create the objects to be displayed can’t be visualized without using a Render TOP. The Render TOP requires three components in order to generate an image that can be seen: a source geometry, a light, and a camera. The Geometry COMP provides the location of surfaces, the light provides the necessary information about how the object is being lit, and the Camera COMP controls the perspective that the object is being rendered from. This is similar to an approach that one might use when creating 3D content in After Effects – an object to be rendered, a light so the object can be seen, and a camera to control the perspective the audience sees of the object. Because rendering means combining multiple COMPs, that can inform how we use live data.

With some scaled values process and ready to export I was ready to think about how these values could influence the viewers perspective of the geometry. One of my initial thoughts was to render a cube that a user could look inside of. As the observer changed the orientation of the sensor, the virtual environment would also change in kind. While it’s possible to do this by rotating and translating the geometry itself, I instead decided to focus on the orientation of the camera instead. This has a few advantages. One important advantage is the ability to tell a camera to look directly at a specified geometry. This means that in translating the camera (left or right, up or down, in or out) the camera stays focused on the center of the target geometry. This makes changing perspective much simpler.  

Initially I was thinking of rendering the entire 3D scene as a single geometry. In doing this, however, I was experiencing some challenges when thinking about the placement of lights and the overall organization of the geometry, and in applying texture to the surfaces. By using a Phong shader one can apply texture maps to the 3D geometry COMPs that have been created. By separating the interior and exterior pieces of the geometry and then compositing them after rendering I was able to apply different shaders to each geometry.

The portion of my network responsible for compositing the geometry looks like this:

Render 1, Render 2, Constant (black solid) – Composit – Transform – Null – Out

 Render 1, Render 2, and the Constant are the three source surfaces. Render 1 is the box, Render 2 is the merged set of waves, and the Constant is a black background. Another approach to this would be to set one of the camera background’s as black. These three flow into a Composit COMP. Next is the Transition COMP (this allowed for some small adjustments that needed to be made in order to help align the projection with the sculpture. Originally I made this string with a Null as the final output of this Component. I would eventually find that I needed an Out to pass this scene into another display module. 

I used the same techniques as above for the other two scenes – starting with establishing my data stream, generating the geometry, rendering out layers to be composited and then passed out to the visual stream.

Are these pictures too small? You can see higher quality versions by looking at this Flickr Gallery: Graduate School Documentation


TouchDesigner | The Container, Control, and Final Output

In thinking about how to meet the objectives that I had for this piece, one of my central questions was how to make sure that I could move through three cued scenes – either with manual or automatic triggers. I knew that I had three different aesthetic environments that I wanted to move through. I explored several different options, and the one that ultimately made sense to me given my proficiency at the time (I had only been programming in TouchDesigner for a total of three weeks) was to use a crossfading approach. Here’s what the whole network looks like:

In thinking about how to ensure that I was being efficient, I decided to encapsulate my three different scenes in their own respective containers. You’ll notice on the left-hand side that there are three containers – each holding its own 3D environment. These are joined through crossfading TOPs, then run through a final composite (for a mask) before ending in a Null that was used as the display canvas.

I spent a lot of time thinking about how this piece was going to be both interactive and autonomous. It needed to be interactive in that the user was able to see how their interaction with an object was driving the visual media; it needed to be autonomous in its ability to transition between scenes and then loop back to the beginning of the network. I don’t think I’ve totally cracked the nut that is the right balance of interactivity and self-directed programming, but it feels like I did make strides towards addressing this question. My solution was to center the interaction with the projection on the control of perspective, but to drive the transitions through the scenes with timeline triggers.

Unlike some other interactive programming environments, TouchDesigner has a timeline built into the fabric of the control system. The Timeline is based in frames, and the programmer can specify the number of frames per second as well as the total number of frames for a given project. My Timeline triggering system was the following string of CHOPs:

Timeline – Trigger – Null

Timeline reports out the current frame number. The Trigger CHOP can be set to fire at a given threshold (in my case, a frame number). This in turn is passed to a Null and exported to a Crossfade TOP as the rate for the crossfade. The Crossfades are daisy-chained together before finally being attached to the Null that’s output to the projector.
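The mechanics are easier to see in plain code. Here’s a minimal Python sketch of the idea – not TouchDesigner’s scripting API – where the frame threshold and fade length are made-up values for illustration:

```python
# Sketch: deriving a crossfade value (0.0 - 1.0) from the current frame.
# FADE_START and FADE_LENGTH are illustrative, not values from the project.

FADE_START = 600   # frame at which the trigger fires
FADE_LENGTH = 120  # frames the crossfade takes (e.g. 2 seconds at 60 fps)

def crossfade_rate(frame: int) -> float:
    """Return 0.0 before the trigger, 1.0 after the fade completes,
    and a linear ramp in between - the value fed to a crossfade as its rate.
    In the actual network, several of these are daisy-chained, one per scene
    transition, each with its own trigger frame."""
    if frame <= FADE_START:
        return 0.0
    if frame >= FADE_START + FADE_LENGTH:
        return 1.0
    return (frame - FADE_START) / FADE_LENGTH

# Example: sampling a few frames around the trigger point.
for f in (0, 600, 630, 660, 690, 720, 800):
    print(f, round(crossfade_rate(f), 2))
```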

With the system working I also needed to make a mask for the final projection to ensure that I wasn’t displaying any empty grid onto the floor of the gallery where this was being installed. I would typically make a mask for something like this in Photoshop, but decided to try making this all in the TouchDesigner programming environment. My TOP operator string for this looked like:

Constant – Transform – Blur – Composite

I started by creating a black constant that’s then passed to a transform so that it can be positioned into place. This is then passed to a blur to soften the edges, and finally to a composite to create a mask that contains a left, right, top, and bottom side. In hindsight I realize that I could have used a single constant passed to four Transform TOPs to be a little tidier. The mask as a composited object is then composited with the final render stream before being passed to the Null that’s connected to the projector.
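For a sense of what this string is computing, here’s a rough standalone Python sketch using numpy and scipy (neither of which the original network touches); the resolution, border width, and blur amount are invented for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: a four-sided soft mask, analogous to Constant - Transform - Blur - Composite.
W, H, BORDER, BLUR = 1280, 720, 80, 20  # illustrative values

mask = np.ones((H, W), dtype=np.float32)  # 1.0 = show content, 0.0 = black
mask[:BORDER, :] = 0.0   # top edge
mask[-BORDER:, :] = 0.0  # bottom edge
mask[:, :BORDER] = 0.0   # left edge
mask[:, -BORDER:] = 0.0  # right edge

soft_mask = gaussian_filter(mask, sigma=BLUR)  # the Blur step: soften the edges

# The final composite: multiply the render stream by the mask before output.
render = np.random.rand(H, W).astype(np.float32)  # stand-in for the render stream
masked = render * soft_mask
```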

In the end I’m fairly happy with this project. It’s been a steep learning curve, but well worth the hassle, angst, and late nights. It’s no small thing to have made a piece of interactive, media-driven sculpture in a programming environment that’s totally new to me. For as hard as all of this work has proven to be, I have to remind myself that I’m actively doing the work that I came to Graduate School to do. Every day I realize that I’ve been changed by my time in the desert, and by my time with the gifted and brilliant artists and friends that I’ve found here.

Are these pictures too small? You can see larger versions of them here:


TouchDesigner | The Individual Composited Scene

There are always large questions to answer when thinking about creating an interactive work: Who is it for? What does it look like? What are you trying to communicate? How much instruction do you provide, and how little? And on and on. As I started to think about how this piece was going to work as an installation rather than as a performance apparatus, I started by thinking about what kind of data I could use to drive the visual elements of this work. One of the sensors that I knew I could easily incorporate into my current sculptural configuration was an iPod Touch. The Touch has an on-board gyroscope and accelerometer. After a conversation with my adviser (Jake Pinholster) we decided that this would be a direction of exploration worth pulling apart, and from there I went back to TouchDesigner to start thinking about how I wanted to incorporate live data into the piece I was making.

When dealing with a challenge like building an interactive sculptural system with at least three different visualizations, it can be hard to know where to start. Different programmers are bound to have different approaches to this question. My approach was to start by thinking about what kind of input data I had to work with. Because I was dealing with a sensor that relayed spatial information, this also helped me think about how to represent that data. Next I thought about the different ways I wanted to present this information, and finally I addressed how to play back this experience for users. Some of my more esoteric and existential questions (why am I making this? what does it mean? what does it represent?) were addressed through the methodical programming process, and others were sussed out over contemplative cups of coffee. As much as I wish that these projects could have a straight line of execution, a checklist even, I’m discovering more and more that the act of creating and programming is often a winding path with happy (and unhappy) discoveries along the way.

My first step on this journey, however, was to address what kind of inputs I had to use. Hexler has an excellent app called Touch OSC for sending UDP messages over wireless connections. OSC, or Open Sound Control, is a communications protocol that uses UDP messages to send data over wired and wireless networks. It’s functionally similar to MIDI, with some additional flexibilities and constraints. In the case of Touch OSC, one of the parameters you can enable on your iOS device sends xyz data from the accelerometer. Getting Touch OSC up and running requires a few steps. First, both the computer that’s receiving and the device that’s broadcasting need to be on the same network. Your broadcasting device will need the IP address of the receiving computer, and a specified port to send the data to (how to find your IP address on a Mac, and on a PC). Once this information is set on your broadcasting device, it’s time to add a Channel Operator to your TouchDesigner network.
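If you want to sanity-check the broadcast before TouchDesigner enters the picture, here’s a minimal standalone sketch using the python-osc library (my own aside, not part of the original patch). The /accxyz address and port 8000 are assumptions based on Touch OSC’s typical accelerometer setup, so verify both against your device’s settings:

```python
from pythonosc import dispatcher, osc_server

# Sketch: listening for Touch OSC accelerometer data outside of TouchDesigner.
# Port 8000 and the /accxyz address are assumed defaults - check your app config.

def on_accel(address, x, y, z):
    # Each message carries the accelerometer's x, y, z values (roughly -1 to 1).
    print(f"{address}: x={x:.3f} y={y:.3f} z={z:.3f}")

d = dispatcher.Dispatcher()
d.map("/accxyz", on_accel)

server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), d)
print("Listening for OSC on port 8000...")
server.serve_forever()
```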

In TouchDesigner, there is a CHOP called “OSC In.” This CHOP allows you to receive OSC data over a wireless network. Once you’ve added the CHOP to your TD network you’ll have to specify the port that Touch OSC is broadcasting to, and then you should be in business. In my case, once this was set up I could instantly see a stream of accelerometer data coming from my iPod Touch. In order to use these values, however, I needed to take some additional steps. The raw OSC data from Touch OSC comes in as a range from -1 to 1, and all of the channels arrive in a single CHOP. My flow of operators looks like:

OSC In – Select – Lag – Math – Null

OSC In is the data input. The Select CHOP allows you to pick a single channel out of a bundle of channels; in this case I used it to separate my X, Y, and Z inputs into different streams. The Lag CHOP helps to smooth out the attack and decay rates of the input data; in my case this kept the final values used to control another object from being too jittery. The Math CHOP is tremendously powerful; in my case I wanted to map the values of my raw data [ -1 to 1 ] to a larger range of values, say 0 to 200. Finally, I ended my string in a Null, which is useful in case I need to add any other operators into the string later.
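In plain Python terms, the Lag and Math steps amount to exponential smoothing followed by a linear remap. This is just a sketch of the arithmetic, not TouchDesigner code, and the smoothing factor is an invented stand-in for the Lag CHOP’s attack and decay times:

```python
# Sketch of what the Lag and Math CHOPs are doing to a single channel.

def lag(previous: float, new: float, smoothing: float = 0.1) -> float:
    """Ease toward the new value rather than jumping to it (the Lag idea)."""
    return previous + smoothing * (new - previous)

def remap(value, in_lo=-1.0, in_hi=1.0, out_lo=0.0, out_hi=200.0):
    """Map a value from [-1, 1] into [0, 200] (the Math CHOP's range remap)."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

smoothed = 0.0
for raw_x in (-1.0, -0.4, 0.1, 0.9):  # stand-in accelerometer samples
    smoothed = lag(smoothed, raw_x)
    print(round(remap(smoothed), 1))
```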

Before thinking about how to use these values, it’s important to take a moment to revisit how geometry is rendered in TouchDesigner. The Geometry COMPs that are used to create the objects to be displayed can’t be visualized without using a Render TOP. The Render TOP requires three components in order to generate a visible image: a source geometry, a light, and a camera. The Geometry COMP provides the location of surfaces, the Light COMP provides the necessary information about how the object is being lit, and the Camera COMP controls the perspective that the object is being rendered from. This is similar to the approach one might use when creating 3D content in After Effects – an object to be rendered, a light so the object can be seen, and a camera to control the perspective the audience sees of the object. Because rendering means combining multiple COMPs, that structure can inform how we use live data.

With some scaled values processed and ready to export, I was ready to think about how these values could influence the viewer’s perspective of the geometry. One of my initial thoughts was to render a cube that a user could look inside of. As the observer changed the orientation of the sensor, the virtual environment would change in kind. While it’s possible to do this by rotating and translating the geometry itself, I decided to focus on the orientation of the camera instead. This has a few advantages. One important advantage is the ability to tell a camera to look directly at a specified geometry. This means that in translating the camera (left or right, up or down, in or out) the camera stays focused on the center of the target geometry. This makes changing perspective much simpler.
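To see why look-at simplifies things, here’s a hedged numpy sketch of the underlying math – roughly the kind of computation a look-at constraint performs, with the eye position standing in for sensor-driven translation:

```python
import numpy as np

# Sketch: move the camera anywhere and recompute its orientation so it
# always faces the geometry's center - the "look at" idea.

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Return forward/right/up basis vectors for a camera at `eye`
    aimed at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return forward, right, true_up

# Translate the camera with scaled sensor values; the orientation keeps
# the cube's center framed no matter where the camera moves.
eye = np.array([2.0, 1.0, 5.0])  # e.g. driven by remapped accelerometer data
print(look_at(eye))
```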

Initially I was thinking of rendering the entire 3D scene as a single geometry. In doing this, however, I ran into challenges with the placement of lights, the overall organization of the geometry, and applying textures to the surfaces. By using a Phong shader one can apply texture maps to the 3D Geometry COMPs that have been created. By separating the interior and exterior pieces of the geometry and then compositing them after rendering, I was able to apply different shaders to each geometry.

The portion of my network responsible for compositing the geometry looks like this:

Render 1, Render 2, Constant (black solid) – Composite – Transform – Null – Out

Render 1, Render 2, and the Constant are the three source surfaces. Render 1 is the box, Render 2 is the merged set of waves, and the Constant is a black background. Another approach would be to set one of the cameras’ backgrounds to black. These three flow into a Composite TOP. Next is the Transform TOP (this allowed for some small adjustments that needed to be made in order to help align the projection with the sculpture). Originally I made this string with a Null as the final output of this component. I would eventually find that I needed an Out to pass this scene into another display module.
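As a rough sketch of what the compositing step does – standalone numpy, not the actual TOPs, with the resolution and layer contents as stand-ins – layering two rendered frames over a black constant looks something like this:

```python
import numpy as np

# Sketch: two render layers composited over a black constant.
H, W = 720, 1280  # illustrative resolution

def over(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Standard 'over' compositing with straight (unpremultiplied) alpha."""
    a = top[..., 3:4]
    rgb = top[..., :3] * a + bottom[..., :3] * (1.0 - a)
    return np.concatenate([rgb, np.maximum(a, bottom[..., 3:4])], axis=-1)

constant = np.zeros((H, W, 4), dtype=np.float32)  # black background
constant[..., 3] = 1.0                            # fully opaque

render1 = np.random.rand(H, W, 4).astype(np.float32)  # stand-in: the box render
render2 = np.random.rand(H, W, 4).astype(np.float32)  # stand-in: the waves render

scene = over(render1, over(render2, constant))    # box over waves over black
```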

I used the same techniques as above for the other two scenes – starting with establishing my data stream, generating the geometry, rendering out layers to be composited, and then passing the result out to the visual stream.

Are these pictures too small? You can see higher quality versions by looking at this Flickr Gallery: Graduate School Documentation

Soot and Spit | Particles in Isadora

Holy challenges, Batman. It seems like I’m constantly being humbled by the learning curve of graduate school. This spring one of ASU’s productions is Charles Mee’s Soot and Spit.

Soot and Spit is grounded in the work of James Castle, an artist who was deaf and possibly autistic. One of the most powerful outlets for expression in Castle’s life was making art. He made countless works over the course of his life, and one of the mediums that he used was a mixture of soot and spit. With this as a contextual anchor, the lead designer, Boyd Branch, was interested in exploring the possibility of using particles as a part of his final design.

One of my charges in working on this production was to explore how to work with particles in Isadora (our planned playback system). I started this process by doing a little digging on the web for examples, and the most useful resource that I found as a starting point was an example file from Mark Coniglio (Isadora’s creator). Here Mark has a very helpful breakdown of several different kinds of typical operations in Isadora, including a particle system. Looking at the Particle System actor can feel a little daunting. In my case, the typical approach of toggling and noodling with values to look for changes wasn’t really producing any valuable results. It wasn’t until I took a close look at Mark’s example patch that I was finally able to make some headway.

We can start by looking at the 3D particle actor and working through a few important considerations to keep in mind when working with 3D particles in Isadora. One thing to remember is that when you’re creating particles, the rendering system needs multiple attributes for each particle that you’re generating (location in x, y, and z, velocity, scale, rotation, orientation, color, lifespan, and so on). To borrow an idiomatic convention from MaxMSP, you have to bang on these attributes for every particle that you create. There are a variety of methods for generating your bang, but for the sake of seeing some consistent particle generation I started by using a pulse generator. Pulse generators in Isadora are expressed in hertz (cycles per second), and when we’re working with our particle system we’ll frequently want a pulse generator attached at the front end of our triggers. To that end, we really want a single pulse generator driving as much of our particle generation as possible. This ensures that all of our data about particle generation is synchronized, and keeps our system overhead as low as possible.

Let’s get this party started by making some conceptual plans about how we want to experiment with particles. I started by thinking of the particles as being emitted from a single source and being affected by gravity in a typical manner, i.e. falling towards the ground. 
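Before we build that in Isadora, here’s a small Python sketch of the emit-per-pulse idea – every value in it (frequency, velocity ranges, lifespan, gravity) is invented for illustration, and Isadora’s particle system handles this bookkeeping for you internally:

```python
import random

# Sketch: the "bang per particle" idea. Every pulse creates one particle,
# and every attribute (origin, velocity, lifespan) must be supplied at
# that moment. All constants below are illustrative.

PULSE_HZ = 30          # pulse generator frequency
GRAVITY = -9.8         # particles fall toward the ground
DT = 1.0 / PULSE_HZ

particles = []

def on_pulse(origin=(0.0, 5.0, 0.0)):
    """Fired once per pulse: emit one particle with a randomized velocity."""
    particles.append({
        "pos": list(origin),
        "vel": [random.uniform(-5, 5) for _ in range(3)],  # the Var X/Y/Z idea
        "life": 2.0,  # seconds until the particle expires
    })

def update():
    """Advance all live particles; gravity pulls them downward."""
    for p in particles:
        p["vel"][1] += GRAVITY * DT
        p["pos"] = [c + v * DT for c, v in zip(p["pos"], p["vel"])]
        p["life"] -= DT
    particles[:] = [p for p in particles if p["life"] > 0]

for _ in range(90):  # simulate three seconds of pulses
    on_pulse()
    update()
print(len(particles), "particles alive")
```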

Here’s my basic particle emitter set-up for this approach:

Let’s take a look at what we need to get started. As I mentioned before, the first thing we need is a pulse generator. Let’s add one and look at where it’s connected:

Here we can see that the pulse generator is hooked up to a custom user actor that I’ve called “Particle Feeder,” and to the “Add Obj” attribute on the 3D particle actor. This approach makes sure that we’re only using a single pulse generator to bang on our particle system – pushing both attribute changes and add-object triggers.

Next let’s look at the Particle Feeder actor that I made to make this process easier:

In just a moment we’ll take a look inside of this user actor, but before we dive in let’s examine how we’re feeding the particle generator information. Frequency is the input for the pulse generator; this sets how quickly we’re generating particles. Var X, Y, and Z are used to generate a random range of velocities for our particles between an upper and lower limit. This makes sure that our particles aren’t uniform in how they move through the space – if we don’t have any variation here, our particles will all behave the same way. Finally we have our emitter’s location: Origin X, Y, and Z. It’s important to remember that the particle system exists in 3D space, so we need three attributes to define its location. On the right side of the actor we can see that we’re passing out random values between our min and max values for X, Y, and Z, as well as the X, Y, and Z origin data.

Inside of this custom actor we see this:


At first glance we can see that we have four blocks of interest in this actor. First off, it’s important to notice that our Frequency input is passed to all of our modules. The first three modules are copies of one another (one each for X, Y, and Z). We can see here that our pulse generator is banging on a random number generation actor; that random value (from 0 to 100) is then passed to a Limit-Scale Value actor. The Limit-Scale Value actor takes an input value in a specified range and scales it to another range. In our case it’s taking values between 0 and 100 and scaling them to be between -5 and 5. The resulting value is then passed out of this macro to its corresponding output. Our bottom block pushes out data about our emitter’s location. It’s important to remember that we need to pass out the origin location for each particle that’s generated. This is why the location information is passed through a trigger value that’s being triggered by our system’s pulse generator.
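The Limit-Scale step is simple arithmetic. As a plain-Python sketch, using the same illustrative 0–100 and -5 to 5 ranges:

```python
import random

# Sketch of the Limit-Scale Value step inside the Particle Feeder:
# a random 0-100 value becomes a velocity component between -5 and 5.

def limit_scale(value, in_lo=0.0, in_hi=100.0, out_lo=-5.0, out_hi=5.0):
    """Clamp to the input range, then scale into the output range."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# One pulse: generate a randomized velocity component for each axis.
vel_x, vel_y, vel_z = (limit_scale(random.uniform(0, 100)) for _ in range(3))
print(vel_x, vel_y, vel_z)
```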

If we jump back out of our user actor, we can see how our input parameters are then passed to the 3D particle actor:

Ultimately, you’ll need to do your own experimenting with particle systems in order to get a firm handle on how they work. I found it useful to use custom actors to tidy up the patch and make sense of what was actually happening. I think the best way to work with particles is to get something up and running, and then to start by changing single attributes to see what kind of impact your change is making. If you’re not seeing any changes you may try passing your value through a trigger that’s attached to your pulse generator – remember that some attributes need to be passed to each particle that’s generated. 

Are some of these pictures too small to read? You can see larger versions on flickr by looking in this album: Grad School Documentation


One of the great joys of sharing your work is the opportunity to learn from others. John Collingswood (for more about John check out dbini industries and Taikabox) pointed out on Facebook that one of the very handy things you can do in Isadora is to constrain values by setting the range of an input parameter. For example, I could forgo the min-max system set up with my user actor and instead scale and constrain random values on the 3D particle actor’s input. When you click on the name of an input on an actor you get a small pop-up window which allows you to specify that input’s range and starting values. This means that you could connect a wave generator (with the wave pattern set to random) to an input on a 3D particle actor and then control the range of scaled values within the 3D particle actor itself. That would look something like this: