Tag Archives: production

Cue Building for Non-Linear Productions

The newly devised piece that I’ve been working on here at ASU finally opened this last weekend. Titled “The Fall of the House of Escher,” the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, and one that’s largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader chooses what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of its inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the show. Specifically, we repeat several portions of the production in a Groundhog Day-like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section twice; on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices means that there are twelve possible pathways through the play. Remarkably, the production only runs for six shows, meaning that at least half of the pathways through the house will go unseen.

This presents a tremendous challenge for any designer dealing with traditionally linear storytelling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and by the fact that the script was continually in development until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways through a system of interactive inputs and pre-built content. The approach that I finally settled on was to think of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a matter of moving from room to room. This ideological approach was made easier by adding a convention for the “choice” moments in the play, when the audience chooses what direction to go. Having a space that was outside the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton’s Watchout was largely out of the picture. It was for this reason that I settled on using TroikaTronix Isadora for this particular project. Isadora also offered tremendous flexibility, Quartz Composer integration, and non-traditional playback methods; methods that would prove to be essential in this process.

Fall_of_the_House_of_Escher_SHOW_DEMO.izz

In building this navigation method it was first important to establish the locations in the house, and to create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to let me quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in this production, and this is only a small example of the technique.
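
To make the idea concrete, here is a hypothetical sketch (in Python, purely as an illustration – the show itself used a custom Isadora actor, not code like this) of the house-as-map concept: every room knows its neighbors and owns its cues, and navigation is simply a move along an edge of the map. All room and cue names below are invented:

```python
# Hypothetical room map: each room lists the rooms it connects to and the
# media cues that "live" in it. Names are invented for illustration only.
HOUSE_MAP = {
    "foyer":   {"exits": ["library", "stairs"], "cues": ["foyer_rest", "foyer_loop_2"]},
    "library": {"exits": ["foyer", "study"],    "cues": ["library_rest"]},
    "stairs":  {"exits": ["foyer", "choice"],   "cues": ["stairs_rest"]},
    "choice":  {"exits": ["library", "study"],  "cues": ["choice_prompt"]},
    "study":   {"exits": ["choice"],            "cues": ["study_rest", "study_ending"]},
}

def move(current, destination):
    """Return the destination room if it is adjacent, else stay put."""
    if destination in HOUSE_MAP[current]["exits"]:
        return destination
    return current
```

Because every cue belongs to exactly one room, the operator (or a control panel) only ever sees the cues for the current room, no matter which of the twelve permutations the performance is following.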

Fall_of_the_House_of_Escher_SHOW_DEMO.izz 2

The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, the challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated was ultimately the solution to the complex question of how to solve a problem like non-linear storytelling.

Escher Image and Animation Research

In thinking about what the media and animation for The Fall of the House of Escher might look like, I’ve been sifting through the internet this summer looking for images that abstractly represent the universe and the behavior of particles and waves. Some of the more interesting work that I’ve found uses simple geometric shapes and particle systems to evoke a sense of scale, distance, and perspective. The work of the motion graphics designer Mr. Div is a prime example of someone who makes work that is both simple and strikingly captivating.

The gif to the right is my attempt at copying his piece “Tri-Heart.” I think the value of copy-art as a practice can’t be overstated. Recreating a work you’ve seen from scratch teaches you more than simply following a tutorial: you’re forced to wrestle with questions of how and why, and to solve problems that don’t necessarily have clear solutions. While I don’t think this animation, specifically, is going to find its way into Escher, there are qualities of it that I really like and that feel distinctly quantum.

On the other end of the spectrum, in the “just follow along with a tutorial” category, is a fascinating how-to created by MinutePhysics. Their quick After Effects tutorial covers how to create a simple simulation of the formation of the universe. While it’s not scientifically accurate, the aesthetic conveys the look and feel of much more complex simulations of the universe’s formation. Both of their videos are worth a watch, and the result of the tutorial can be seen here. Again, I don’t know that this exact animation will be something I use, but it has elements that are inspiring and interesting.

In many respects there is a daunting amount of media in this production – interactive elements, large-scale animations, moments of world-shifting perspective, and the expression of the abstract inner space of the atom. There’s a lifetime of work in this production, and it goes up in October. There’s lots to do.

Lessons from the Road

You need a tech rider.

Better yet, you need a tech rider with diagrams, specific dimensions, and clear expectations.

In early June I was traveling with my partner, Lauren Breunig, to an aerial acrobatics festival in Denver, Colorado. Lauren is an incredibly beautiful and talented aerialist. One of the apparatuses that she performs on is what she calls “sliding trapeze.” This is essentially a trapeze bar with fabric loops instead of ropes.

Earlier this year Lauren was invited to perform at the Aerial Acrobatics Arts Festival of Denver in their “innovative” category. As an aerialist Lauren has already performed in many venues across the country, both on her invented apparatus and on more traditional circus equipment. In all of these cases she’s had to submit information about her apparatus, clearance requirements, and possible safety concerns.

So when it came time to answer some questions about rigging for the festival, it seemed like old hat. One of the many things that Lauren had to submit was the height requirement for her bar, given that a truss was being suspended somewhere between 27 and 29 feet above the floor of the stage. In her case the height of the truss was less critical than the height of her bar. The minimum distance from the floor to the rigging points is 15.5 feet. At this height her apparatus is high enough off the ground that she can safely perform all of her choreography, and it is also the limit of the height at which she can jump to her bar unaided. Where this gets tricky is in how one makes up the difference between the required rigging points and the height of the truss. The festival initially indicated that they would drop steel cable to make up the difference between required heights and the height of the truss, making it seem as though the performers only needed to worry about bringing their apparatus.
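
The arithmetic behind these numbers is worth making explicit: the drop needed between the truss and the apparatus’s rigging point is simply the difference of the two heights. A small sketch using the figures above (treat it as an illustration of the bookkeeping, not as rigging advice):

```python
def drop_length(truss_height_ft, rig_point_ft=15.5):
    """Cable or span-set length needed between the truss and the rigging point."""
    return truss_height_ft - rig_point_ft

# For the 27-29 foot truss range the festival quoted, with a 15.5 ft
# minimum rigging point, the required drop is:
for truss in (27.0, 29.0):
    print(truss, "ft truss ->", drop_length(truss), "ft of drop")  # 11.5 and 13.5 ft
```

Having these numbers written down ahead of time is exactly the kind of detail a fuller tech rider would have captured.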

When we dropped off Lauren’s equipment we discovered that the realities of the rigging were slightly different from what the email correspondence had indicated. The truss had been set at a height of 27 feet, but the festival was no longer planning on dropping any cables for performers. Additionally, they told us that they had only limited access to span sets and other equipment for making up the height difference. Luckily Lauren had packed some additional span sets, and had thought through some solutions using webbing (easily available from REI) to make up any discrepancies that might come up. This also, unfortunately, made her second-guess the specs she had originally sent to the festival, and left her wondering if she had accurately determined the correct heights for her apparatus.

Memory Measurements

Having rigged and re-rigged this apparatus in numerous venues, Lauren had a strong sense of how her equipment worked with ceilings less than 20 feet. It also meant that she didn’t have any fixed heights, and instead had lots of numbers bouncing around her head – one venue was rigged at 15.5 feet, but the ceiling was really at 17 feet; at another the beams were at 22 or 23 feet, and the apparatus had been rigged at heights between 15.5 and 17 feet; and so on. Additionally, she typically rigs her own equipment, and is therefore able to make specific adjustments based on what she’s seeing and feeling in a given space. For the festival, this wasn’t a possibility. So, after the miscommunication about the rigging situation, and suddenly feeling insecure about the measurements she had sent ahead, we found ourselves talking through memories of other venues and trying to determine what height she actually needed.

Reverse engineering heights

We started by talking through previously rigged situations – how high were the beams, how long is the apparatus, how far off the ground was she. Part of the challenge here was that this particular apparatus hangs at two different lengths because the fabric loops stretch: without a load it sits at a different distance from the floor than with a load. While this isn’t a huge difference, it’s enough to prevent her from being able to jump to her bar if it’s rigged too high, or to put her in danger of smashing her feet if it’s rigged too low. While there were several things we knew, it was difficult to arrive at a hard and fast number with so many variables that were unknown or only known as a range.

Drawing it out

Ultimately what helped the most was sitting down and drawing out some of the distances and heights. While this was far from perfect, it finally gave us some reference points to point to rather than just broadly talk through. A diagram goes a long way toward providing a concrete representation of what you’re talking about, and it’s worth remembering the real value in this process: it meant that we were suddenly able to distinguish the things that we knew, only remembered, or guessed. This process, however, still didn’t solve all of the problems Lauren was facing. We still had questions about the wiggle room in our half-remembered figures, and about making sure that she would be rigged at a height that was both safe and visually impressive. Finally, after an hour of drawing, talking, and drawing again, we got to a place where we were reasonably confident about how she might proceed the next day. In thinking about this process, I realized that we could have made our lives a lot easier if we had done a little more homework before coming to the festival.

What she really needed

A Diagram

A complete drawing of the distances, apparatus, performer, rigging range, and artist-provided equipment would have made a lot of this easier. While the rigging process went without a hitch once she was in the theater, being able to send a drawing of what her apparatus looked like and how it needed to be rigged would have put us at ease and ensured that all parties were on the same page. A picture codifies concepts that might otherwise be difficult to communicate, and in our case this would have been a huge help.

A Fuller tech rider

While Lauren did send a tech rider with her submission, it occurred to us that a fuller tech rider would have helped the festival, and it would have helped us. When dealing with an apparatus that she has to jump to reach, it would have been helpful to know exactly how high she can jump. There’s also a sweet spot that’s not too high for this apparatus, but where Lauren still needs a boost to reach the bar; this would have been another helpful range to have already known. While we have a reasonable amount of rigging materials, there’s also some equipment that we don’t have. Specifying what we plan to provide, or can provide with adequate notice, would have been a helpful inclusion in the conversation she was having with the festival. In hindsight, some of the statements that should have been added to her rider include:

  • the artist can jump for heights of
  • the artist needs assistance for heights
  • the artist will provide rigging for
  • the artist requires confirmation by

What does this have to do with projectors?

Let’s face it, tech riders are not the most exciting part of the production world. That said, by failing to specify what you need and what you are planning to provide, it’s easy to suddenly find yourself in a compromising position. While the consequences are different for an aerialist than for a projectionist, the resulting slow-down in the tech process, or the need to reconfigure some portion of the performance, are very real concerns. The closer you are to a process or installation, the more difficult it becomes to really see all of the moving parts. Our exposure to any complicated process creates blind spots in the areas that we’ve automated, set up once, or take for granted simply because they seem factual and straightforward. These are the privileges, and pitfalls, of working with the same equipment or apparatus for extended periods of time – we become blind to our assumptions about our process. Truly, this is the only way to work with a complicated system: at some point, some portion of the process becomes automated in our minds or in our practice in order to facilitate higher-order problem solving. Once my projectors are hung and focused, I don’t think about the lensing when I’m trying to solve a programming problem.

While this may well be the case when you’re on your home turf, it’s another thing entirely to set up shop somewhere new. When thinking about a new venue, it becomes imperative to look at your process with eyes divorced from your regular practice, and instead to think about how someone with unfamiliar eyes might look at your work. That isn’t to say that those eyes don’t have any experience, just that they’re fresh to your system / apparatus. In this way it might be useful to think of the tech rider as a kind of pre-flight checklist. Pilots have long known that there are simply too many things to remember when looking over a plane before take-off. Instead, they rely on checklists to ensure that everything gets examined. Even experienced pilots rely on these checklists, and even obvious items get added to the list.

Knowing your equipment

Similarly, it’s not enough to just “know” your equipment. While intuition can be very useful, it’s also desperately important to have documentation of your actual specifications – what are the actual components of your machine, what are your software version numbers, how much power do you need, etc. There are always invisible parts of our equipment that are easy to take for granted, and it’s these elements that are truly important to think about when you’re setting up in a new venue. Total certainty may well be a pipe-dream, but it isn’t impractical to take a few additional steps to ensure that you’re ready to tackle any problems that may arise.

Packing your Bags

The real magic of this comes down to packing your bags. A solid rider and an inventory of your system will cover most of your bases, but good packing is going to save you. Finding room for that extra roll of gaff tape, or that extra power strip, or that USB mouse may mean that it takes you longer to pack or that you travel one bag heavier, but it will also mean a saved trip once you’re at the theatre. Including an inventory in your bags may seem like a pain, but it means that you have a quick reference for what you brought with you. It also means that when you’re in the heat of strike you know exactly what goes where. Diagrams and lists may not be the sexiest part of the work we do, but they mean saved time and fewer headaches. At the end of the day, a few saved hours may mean a few more precious hours of sleep, or better yet a chance to grab a drink after a long day.

Isadora | Live-Camera Input as a Mask

Back in March I had an opportunity to see a production called Kindur put on by the Italian Company Compagnia TPO. One of the most beautiful and compelling effects that they utilized during the show was to use a live-camera to create a mask that revealed a hidden color field. The technique of using a live feed in this way allows a programmer to work with smaller resolution input video while still achieving a very fluid and beautiful effect. 

This effect is relatively easy to generate by using just a few actors. An overview of the whole Isadora Scene looks like this:

To start this process we’ll begin with a Video-In Watcher actor. The video-in will be the live feed from our camera, and will ultimately be the mask that we’re using to obscure and reveal our underlying layer of imagery. This video-in actor connects to a Difference actor, which looks for the difference between two sequential frames in the video stream. This is then passed to a Motion Blur actor, which lets you specify the amount of accumulated blur as well as the decay (disappearance rate) of the effect. To soften the edges, the image stream is next passed to a Gaussian Blur actor. This stream is then passed into the mask inlet of an Add Alpha Channel actor, while the underlying imagery is passed into the video inlet of the same actor. Finally, the outlet of the Add Alpha Channel actor is passed out to a projector.
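
For readers who want to experiment with the same chain outside of Isadora, here is a rough numpy-only sketch of the Difference → Motion Blur → Gaussian Blur → Add Alpha Channel pipeline described above. It works on grayscale frames (values 0-255), and a simple box blur stands in for the Gaussian Blur actor; everything here is an illustrative approximation, not Isadora’s actual implementation:

```python
import numpy as np

class FrameMasker:
    """Rough stand-in for the actor chain described above, on grayscale frames."""

    def __init__(self, decay=0.9):
        self.decay = decay   # like the Motion Blur actor's decay rate
        self.prev = None     # previous camera frame
        self.accum = None    # accumulated motion buffer

    def _box_blur(self, img, k=5):
        # Box blur as a cheap stand-in for the Gaussian Blur actor.
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def mask(self, frame, underlying):
        """Return `underlying` revealed where the camera saw motion."""
        frame = frame.astype(float)
        if self.prev is None:                       # first frame: nothing to diff
            self.prev = frame
            self.accum = np.zeros_like(frame)
        diff = np.abs(frame - self.prev)            # Difference actor
        self.prev = frame
        self.accum = np.maximum(self.accum * self.decay, diff)  # Motion Blur
        alpha = self._box_blur(self.accum) / 255.0  # Gaussian Blur stand-in
        return underlying * np.clip(alpha, 0.0, 1.0)  # Add Alpha Channel
```

The `decay` parameter plays the role of the Motion Blur actor’s decay rate: lower values make the revealed trail vanish faster behind the performer.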

As a matter of best-practice I like to use a Performance Monitor actor when I’m creating a scene in order to keep an eye on the FPS count. This can also be useful when trying to diagnose what’s causing a system to slow down during playback. 

This effect works equally well over still images or video, and is certainly something that’s fun to experiment with. Like all things in live systems, your mileage may vary – motion blur and Gaussian blur can quickly become resource-expensive, and it’s worth turning down your capture settings to help combat a system slow-down.

Isadora | Network Control

For an upcoming show, one of the many problems I’ll need to solve is how to work with multiple machines and multiple operating systems over a network. My current plan is to use one machine to drive the interactive media, and to slave two computers for cued media playback. This will allow me to distribute the media playback over several machines while driving the whole system from a single machine. Specifically, I plan to use one Mac Pro to work with live data while slaving two Windows 7 PCs for traditionally cued media playback. The Mac Pro will drive a Barco projector, while the PCs each drive a Sanyo projector. This should give me the best of both worlds in some respects: distributed playback, similar to Watchout’s approach, while also allowing for more complex visual manipulation of live-captured video.

To make all of this work, however, it’s important to address how to get two machines running Isadora on two different operating systems to talk to one another. To accomplish this I’m going to use an OSC Transmit actor on the Master machine, and an OSC Listener actor on the slaved machines.

On the Master machine the set-up looks like this:

Trigger Value – OSC Transmit

The transmit actor needs the IP address of the slaved machines, as well as a port to broadcast to. The setup below is for talking to just one other machine. In my final setup I’ll create a specialized user actor that holds two OSC Transmit actors (one for each machine) that can be copied into each scene.

On the Slaved machines the setup takes two additional steps. First, determine what port you want to receive messages on. You can do that by going to Isadora Preferences and selecting the Midi/Net tab; here you can specify what port you want Isadora to listen to. Next, catch the data stream. You can do this by opening up the Communications tab and selecting Stream Setup. From here make sure that you select “Open Sound Control” and check the box “Auto-Detect Input.” At this point you should see the Master machine broadcasting with a channel name, an address, and a data stream. Once this is set up, the actor patch for receiving messages over the network looks like this:

OSC Listener – Whatever Actor you Want

In my case I’ll largely use just jump++ actors to transition between scenes, each with their own movie. 
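
For the curious, what the OSC Transmit actor is actually putting on the wire is quite simple. Here is a minimal stdlib-only Python sketch of encoding and sending an OSC message per the OSC 1.0 specification (address and typetag strings are null-terminated and padded to 4-byte boundaries, arguments are big-endian). This is an illustration of the protocol, not something Isadora requires you to write, and the IP, port, and address in the comment are invented examples:

```python
import socket
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad a string to a multiple of 4 bytes (OSC 1.0)."""
    s += b"\x00"
    while len(s) % 4:
        s += b"\x00"
    return s

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float argument."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

def send_trigger(ip: str, port: int, address: str, value: float) -> None:
    """Fire one UDP datagram, like a Trigger Value -> OSC Transmit pair."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message(address, value), (ip, port))
    sock.close()

# Hypothetical usage: send_trigger("10.0.0.12", 1234, "/isadora/1", 1.0)
```

Seeing the packet structure also demystifies the Stream Setup step on the slaved machine: “Auto-Detect Input” is simply listening on the chosen port for datagrams shaped like this.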

You can, of course, do much more complicated things with this set-up all depending on your programming or play-back needs. 

Soot and Spit | Particles in Isadora

Holy challenges, Batman. It seems like I’m constantly being humbled by the learning curve of graduate school. This spring one of ASU’s productions is Charles Mee’s Soot and Spit.

Soot and Spit is grounded in the work of James Castle, an artist who was deaf and possibly autistic. One of the most powerful outlets for expression in Castle’s life was making art. He made countless works over the course of his life, and one of the mediums that he used was a mixture of soot and spit. With this as a contextual anchor the lead designer, Boyd Branch, was interested in exploring the possibility of using particles as a part of his final design.  

One of my charges in working on this production was to explore how to work with particles in Isadora (our planned playback system). I started this process by doing a little digging on the web for examples, and the most useful starting point I found was an example file from Mark Coniglio (Isadora’s creator). In it, Mark has a very helpful breakdown of several different kinds of typical operations in Isadora, including a particle system. Looking at the 3D particle actor can feel a little daunting. In my case, the typical approach of toggling and noodling with values to look for changes wasn’t really producing any valuable results. It wasn’t until I took a close look at Mark’s example patch that I was finally able to make some headway.

We can start by looking at the 3D particle actor and working through a few important considerations to keep in mind when working with 3D particles in Isadora. One thing to remember is that when you’re creating particles, the rendering system needs multiple attributes for each particle that you generate (location in x, y, and z, velocity, scale, rotation, orientation, color, lifespan, and so on). To borrow an idiomatic convention from MaxMSP, you have to bang on these attributes for every particle that you create. There are a variety of methods for generating your bang, but for the sake of seeing some consistent particle generation I started with a pulse generator. Pulse generators in Isadora are expressed in hertz (cycles per second), and when we’re working with our particle system we’ll frequently want a pulse generator attached at the front end of our triggers. To that end, we really want a single pulse generator to drive as much of our particle generation as possible. This ensures that all of our data about particle generation is synchronized, and keeps our system overhead as low as possible.

Let’s get this party started by making some conceptual plans about how we want to experiment with particles. I started by thinking of the particles as being emitted from a single source and being affected by gravity in a typical manner, i.e. falling towards the ground. 

Here’s my basic particle emitter patch for this approach:

Let’s take a look at the things we need to get started. As I mentioned before, the first step is setting up a pulse generator. Let’s add one and look at where it’s connected:

Here we can see that the pulse generator is hooked up to a custom user actor that I’ve called “Particle Feeder,” and to the “Add Obj” attribute in the 3D particle Actor. This approach is making sure that we’re only using a single pulse generator to bang on our particle system – pushing attribute changes and add object changes.

Next let’s look at the Particle Feeder actor that I made to make this process easier:

In just a moment we’ll take a look inside this user actor, but before we dive in let’s examine how we’re feeding the particle generator information. Frequency is the input for the pulse generator; this is how quickly we’re generating particles. Var X, Y, and Z are used to generate a random range of velocities for our particles between an upper and lower limit. This makes sure that our particles aren’t uniform in how they move through the space – if we don’t have any variation here, our particles will all behave the same way. Finally we have the emitter’s location: Origin X, Y, and Z. It’s important to remember that the particle system exists in 3D space, so we need three attributes to define its location. On the right side of the actor we can see that we’re passing out random values between our min and max values for X, Y, and Z, as well as X, Y, and Z origin data.

Inside of this custom actor we see this:

At first glance we can see that we have four blocks of interest in this actor. First off, notice that our Frequency input is passed to all of the modules. The first three modules are copies of one another (one each for X, Y, and Z). Here our pulse generator bangs on a random number generator actor; that random value (from 0 to 100) is then passed to a Limit-Scale Value actor. The Limit-Scale Value actor takes an input value in a specified range and scales it to another range – in our case, taking values between 0 and 100 and scaling them to be between -5 and 5. The resulting value is then passed out of this macro to its corresponding output. The bottom block pushes out data about our emitter location. It’s important to remember that we need to pass out the origin location for each particle that’s generated; this is why the location information is passed through a Trigger Value actor that’s being triggered by our system’s pulse generator.
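
The math inside the Limit-Scale Value actor is just a linear rescale, and the whole per-particle velocity block can be sketched in a few lines. This Python sketch (an illustration only, using the same 0-100 input range and -5 to 5 output range as the patch above) shows one “bang” worth of velocity data:

```python
import random

def limit_scale(value, in_lo=0.0, in_hi=100.0, out_lo=-5.0, out_hi=5.0):
    """Linear rescale from one range to another, like the Limit-Scale Value actor."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def particle_velocity():
    """One 'bang': fresh random X, Y, Z velocities in the -5 to 5 range."""
    return tuple(limit_scale(random.uniform(0, 100)) for _ in range(3))
```

Calling `particle_velocity()` once per pulse mirrors the way the pulse generator bangs the random/scale chain once per particle, so no two particles leave the emitter with identical motion.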

If we jump back out of our user actor can see how our input parameters are then passed to the 3D particle actor:

Ultimately, you’ll need to do your own experimenting with particle systems in order to get a firm handle on how they work. I found it useful to use custom actors to tidy up the patch and make sense of what was actually happening. I think the best way to work with particles is to get something up and running, and then start changing single attributes to see what kind of impact each change makes. If you’re not seeing any changes, try passing your value through a trigger that’s attached to your pulse generator – remember that some attributes need to be passed to each particle that’s generated.

Are some of these pictures too small to read? You can see larger versions on flickr by looking in this album: Grad School Documentation

One of the great joys of sharing your work is the opportunity to learn from others. John Collingswood (for more about John check out dbini industries and Taikabox) pointed out on Facebook that one of the very handy things you can do in Isadora is to constrain values by setting the range of an input parameter. For example, I could forgo the min-max system set up with my user actor and instead scale and constrain random values at the 3D particle input. When you click on the name of an input on an actor you get a small pop-up window which allows you to specify that input’s range and starting values. This means that you could connect a wave generator (with the wave pattern set to random) to an input on a 3D particle actor and then control the range of scaled values within the 3D particle actor. That would look something like this: