
Isadora | Live-Camera Input as a Mask

Back in March I had an opportunity to see a production called Kindur put on by the Italian company Compagnia TPO. One of the most beautiful and compelling effects they used during the show was a live camera feed that created a mask, revealing a hidden color field. Using a live feed in this way allows a programmer to work with lower-resolution input video while still achieving a very fluid and beautiful effect.

This effect is relatively easy to generate by using just a few actors. An overview of the whole Isadora Scene looks like this:

To start this process we’ll use a Video In Watcher actor. The video-in will be the live feed from our camera, and will ultimately be the mask that we use to obscure and reveal our underlying layer of imagery. This actor connects to a Difference actor, which looks for the difference between two sequential frames in the video stream. That result is in turn passed to a Motion Blur actor, which allows you to specify the amount of accumulated blur as well as the decay (disappearance rate) of the effect. To soften the edges, the image stream is next passed through a Gaussian Blur actor. The blurred stream then feeds the mask inlet of an Add Alpha Channel actor, while the underlying imagery is passed into the video inlet of the same actor. Finally, the outlet of the Add Alpha Channel actor is passed out to a projector.
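To make the flow of data through that actor chain concrete, here's a minimal sketch of the same pipeline in plain Python. This is not Isadora code — the function names are hypothetical stand-ins for the actors, frames are simplified to short 1-D lists of grayscale floats, and the "gaussian" is a tiny 3-tap kernel — but the order of operations mirrors the patch:

```python
# Sketch of the chain: Difference -> Motion Blur -> Gaussian Blur -> Add Alpha.
# Frames are 1-D lists of floats in 0.0-1.0; function names are hypothetical.

def difference(frame_a, frame_b):
    """Difference actor: per-pixel absolute difference of two frames."""
    return [abs(a - b) for a, b in zip(frame_a, frame_b)]

def motion_blur(accum, diff, decay=0.9):
    """Motion Blur actor: accumulate new motion while old motion decays."""
    return [max(d, a * decay) for a, d in zip(accum, diff)]

def gaussian_blur(signal, kernel=(0.25, 0.5, 0.25)):
    """Gaussian Blur actor: soften edges with a small 3-tap kernel."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(signal))]

def add_alpha(video, mask):
    """Add Alpha Channel actor: the mask reveals the underlying imagery."""
    return [v * m for v, m in zip(video, mask)]

# Two consecutive camera frames: something moved at index 2.
prev, cur = [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]
accum = [0.0] * 4
mask = gaussian_blur(motion_blur(accum, difference(prev, cur)))
hidden_field = [1.0, 1.0, 1.0, 1.0]   # the underlying color field
out = add_alpha(hidden_field, mask)
```

The movement at one pixel ends up revealing a soft halo of the hidden field around it, which is exactly the fluid look the Kindur effect trades on.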

As a matter of best practice I like to use a Performance Monitor actor when I’m creating a scene in order to keep an eye on the FPS count. This can also be useful when trying to diagnose what’s causing a system to slow down during playback.

This effect works equally well over still images or video, and is certainly something that’s fun to experiment with. Like all things in live systems, your mileage may vary – motion blur and gaussian blur can quickly become resource-expensive, and it’s worth turning down your capture settings to help combat a system slow-down.

Isadora | Network Control

For an upcoming show, one of the many problems I’ll need to solve is how to work with multiple machines and multiple operating systems over a network. My current plan is to use one machine to drive the interactive media, and to slave two computers for cued media playback. This allows me to distribute media playback over several machines while driving the whole system from a single master. Specifically, one Mac Pro will handle live data while two Windows 7 PCs handle traditionally cued media playback. The Mac Pro will drive a Barco projector, while the PCs each drive a Sanyo projector. This should give me the best of both worlds in some respects: distributed playback, similar to WatchOut’s approach, while also allowing for more complex visual manipulation of live-captured video.

To make all of this work, however, it’s important to address how to get two machines running Isadora on two different operating systems to talk with one another. To accomplish this I’m going to use an OSC Transmit actor on the master machine, and an OSC Listener actor on the slaved machines.

On the Master machine the set-up looks like this:

Trigger Value – OSC Transmit

The transmit actor needs the IP address of the slaved machines, as well as a port to broadcast to. The setup below is for talking to just one other machine. In my final setup I’ll create a specialized user actor that holds two OSC Transmit actors (one for each machine) that can be copied into each scene.
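Under the hood, what the OSC Transmit actor sends is a standard Open Sound Control packet over UDP. As a rough sketch of what travels across the network, here's a minimal OSC message encoder in Python using only the standard library – the `/isadora/1` address and the IP/port in the comments are placeholders, not something the post specifies:

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode a one-integer OSC message: address, type tags, big-endian int."""
    return (_pad(address.encode("ascii"))
            + _pad(b",i")                 # type tag string: one int32 argument
            + struct.pack(">i", value))   # the argument itself

# e.g. telling a slaved machine to jump to cue 5 (address is a placeholder):
packet = osc_message("/isadora/1", 5)

# Broadcasting to a slaved machine would then be a plain UDP send:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.1.101", 1234))  # slaved machine's IP and port
```

This is why the Transmit actor only needs an IP address and a port – OSC itself is just small, self-describing UDP packets.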

On the slaved machines the setup takes two additional steps. First, determine what port you want to receive messages on. You can do that by going to Isadora Preferences and selecting the Midi/Net tab; here you can specify what port you want Isadora to listen to. Next you need to catch the data stream. You can do this by opening the Communications menu and selecting Stream Setup. From there, make sure you select “Open Sound Control” and click the box “Auto-Detect Input.” At this point you should see the master machine broadcasting with a channel name, an address, and a data stream. Once this is set up, the actor patch for receiving messages over the network looks like this:

OSC Listener – Whatever Actor you Want

In my case I’ll largely use just jump++ actors to transition between scenes, each with their own movie. 
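The receiving side is the mirror image: decode the incoming packet and route its address to whatever should fire. As a hedged sketch (again plain Python, not Isadora – the address is a placeholder, and the dictionary of callbacks stands in for wiring an OSC Listener to a Jump++ actor):

```python
import struct

def parse_osc(packet: bytes):
    """Minimal decode of a single-int OSC message, as an OSC Listener would."""
    address = packet[:packet.index(b"\x00")].decode("ascii")
    value = struct.unpack(">i", packet[-4:])[0]   # the one ",i" argument
    return address, value

# address -> callback, standing in for OSC Listener -> Jump++ wiring
jumps = []
handlers = {"/isadora/1": jumps.append}           # "jump to scene n"

def on_message(packet: bytes):
    address, value = parse_osc(packet)
    if address in handlers:
        handlers[address](value)

# a packet as the master machine might send it (placeholder address, cue 5):
on_message(b"/isadora/1\x00\x00,i\x00\x00\x00\x00\x00\x05")
```

In the real patch, the channel number you saw in Stream Setup plays the role of the dictionary key here: it decides which listener a given message reaches.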

You can, of course, do much more complicated things with this set-up all depending on your programming or play-back needs. 

Soot and Spit | Particles in Isadora

Holy challenges, Batman. It seems like I’m constantly being humbled by the learning curve of graduate school. This spring one of ASU’s productions is Charles Mee’s Soot and Spit.

Soot and Spit is grounded in the work of James Castle, an artist who was deaf and possibly autistic. One of the most powerful outlets for expression in Castle’s life was making art. He made countless works over the course of his life, and one of the mediums that he used was a mixture of soot and spit. With this as a contextual anchor, the lead designer, Boyd Branch, was interested in exploring the possibility of using particles as a part of his final design.

One of my charges in working on this production was to explore how to work with particles in Isadora (our planned playback system). I started this process by doing a little digging on the web for examples, and the most useful starting point I found was an example file from Mark Coniglio (Isadora’s creator). There Mark has a very helpful breakdown of several different kinds of typical operations in Isadora, including a particle system. Looking at the particle system actor can feel a little daunting. In my case, the typical approach of toggling and noodling with values to look for changes wasn’t really producing any valuable results. It wasn’t until I took a close look at Mark’s example patch that I was finally able to make some headway.

We can start by looking at the 3D particle actor and working through a few important considerations to keep in mind when working with 3D particles in Isadora. One thing to remember is that when you’re creating particles, the rendering system needs multiple attributes for each particle that you generate (location in x, y, and z, velocity, scale, rotation, orientation, color, lifespan, and so on). To borrow an idiomatic convention from MaxMSP, you have to bang on these attributes for every particle that you create. There are a variety of methods for generating your bang, but for the sake of seeing some consistent particle generation I started with a pulse generator. Pulse generators in Isadora are expressed in hertz (cycles per second), and when we’re working with our particle system we’ll frequently want a pulse generator attached at the front end of our triggers. To that end, we really want a single pulse generator to drive as much of our particle generation as possible. This ensures that all of our data about particle generation is synchronized, and keeps our system overhead as low as possible.
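The one-bang-per-particle idea can be sketched outside Isadora. Here's a hypothetical Python version of a pulse generator running at a given frequency, where every bang spawns one particle and pulls fresh random attributes for it – a loose analogue, not how Isadora's engine is actually implemented:

```python
import random

def pulse(freq_hz, dt, phase):
    """Pulse Generator sketch: how many bangs fire during a dt-second frame."""
    phase += freq_hz * dt         # accumulate cycles
    fired = int(phase)            # whole cycles completed this frame
    return fired, phase - fired   # carry the fractional remainder forward

# Run a 2 Hz pulse for two seconds of 0.25 s frames: each bang spawns one
# particle and bangs out a fresh random attribute (here, an x velocity).
phase, spawned = 0.0, []
for _ in range(8):
    fired, phase = pulse(2.0, 0.25, phase)
    for _ in range(fired):
        spawned.append({"vel_x": random.uniform(-5.0, 5.0)})
```

Driving everything from the single `phase` accumulator is the point: one clock, so particle creation and attribute generation stay in lockstep.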

Let’s get this party started by making some conceptual plans about how we want to experiment with particles. I started by thinking of the particles as being emitted from a single source and being affected by gravity in a typical manner, i.e. falling towards the ground. 

Here’s my basic particle emitter patch for this kind of setup:

Let’s take a look at the things we need to get started. As I mentioned before, the first step is getting a pulse generator set up. Let’s begin by adding a pulse generator and looking at where it’s connected:

Here we can see that the pulse generator is hooked up to a custom user actor that I’ve called “Particle Feeder,” and to the “Add Obj” attribute in the 3D particle actor. This approach makes sure that we’re only using a single pulse generator to bang on our particle system – pushing both attribute changes and add-object changes.

Next let’s look at the Particle Feeder actor that I made to make this process easier:

In just a moment we’ll take a look inside of this user actor, but before we dive in let’s examine how we’re feeding the particle generator some information. Frequency is the input for the pulse generator; this is how quickly we’re generating particles. Var X, Y, and Z are used to generate a random range of velocities for our particles between an upper and lower limit. This makes sure that our particles aren’t uniform in how they move through the space; if we don’t have any variation here our particles will all behave the same way. Finally we have the emitter’s location: Origin X, Y, and Z. It’s important to remember that the particle system exists in 3D space, so we need three attributes to define its location. On the right side of the actor we can see that we’re passing out random values between our min and max values for X, Y, and Z, as well as the X, Y, and Z origin data.

Inside of this custom actor we see this:


At first glance we can see that we have four blocks of interest in this actor. First off, notice that our Frequency input is passed to all of the modules. The first three modules are copies of one another (one each for X, Y, and Z). Here our pulse generator is banging on a random number generation actor; that random value (from 0 to 100) is then passed to a Limit-Scale Value actor. The Limit-Scale Value actor takes an input value in a specified range and scales it to another range – in our case, taking values between 0 and 100 and scaling them to fall between -5 and 5. The resulting value is then passed out of this macro to its corresponding output. Our bottom block pushes out data about our emitter location. It’s important to remember that we need to pass out the origin location for each particle that’s generated, which is why the location information is passed through a trigger value that’s fired by our system’s pulse generator.
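The random-then-scale block is simple enough to state in a few lines. Here's a hypothetical Python equivalent of the Limit-Scale Value actor's behavior (clamp to the input range, then map linearly onto the output range), with a bang that mimics one module of the Particle Feeder:

```python
import random

def limit_scale(value, in_min, in_max, out_min, out_max):
    """Limit-Scale Value sketch: clamp, then map one range onto another."""
    value = max(in_min, min(in_max, value))         # limit to the input range
    t = (value - in_min) / (in_max - in_min)        # normalize to 0..1
    return out_min + t * (out_max - out_min)        # scale to the output range

def bang_velocity():
    """One bang: a random 0-100 value scaled to a -5..5 velocity component."""
    return limit_scale(random.uniform(0, 100), 0, 100, -5, 5)
```

Each of the three X/Y/Z modules is just this pair of operations, re-banged for every particle.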

If we jump back out of our user actor, we can see how our input parameters are then passed to the 3D particle actor:

Ultimately, you’ll need to do your own experimenting with particle systems in order to get a firm handle on how they work. I found it useful to use custom actors to tidy up the patch and make sense of what was actually happening. I think the best way to work with particles is to get something up and running, and then to start by changing single attributes to see what kind of impact your change is making. If you’re not seeing any changes you may try passing your value through a trigger that’s attached to your pulse generator – remember that some attributes need to be passed to each particle that’s generated. 
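If it helps to see all the pieces working together, here's a minimal, self-contained particle system in Python – spawn at an origin with randomized velocities, pull particles down with gravity, and cull them by lifespan. It's a conceptual sketch of what the 3D particle actor does for you, not Isadora's actual implementation, and every name in it is made up:

```python
import random

GRAVITY = -9.8   # pulls on the y axis, "falling toward the ground"

def spawn(origin, var):
    """One particle per bang: origin location plus a random +/- var velocity."""
    return {
        "pos": list(origin),
        "vel": [random.uniform(-v, v) for v in var],
        "age": 0.0,
    }

def step(particles, dt, lifespan=2.0):
    """Advance every particle one frame; gravity on y; old particles die."""
    alive = []
    for p in particles:
        p["vel"][1] += GRAVITY * dt
        p["pos"] = [x + v * dt for x, v in zip(p["pos"], p["vel"])]
        p["age"] += dt
        if p["age"] < lifespan:
            alive.append(p)
    return alive

# Emit ten particles from a point above the ground, then run one second
# of simulation at 30 frames per second.
particles = [spawn((0.0, 5.0, 0.0), (5.0, 5.0, 5.0)) for _ in range(10)]
for _ in range(30):
    particles = step(particles, 1.0 / 30.0)
```

Changing one value at a time here – the velocity variance, the lifespan, the gravity constant – is exactly the kind of single-attribute experiment that makes the Isadora actor's inputs start to make sense.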

Are some of these pictures too small to read? You can see larger versions on flickr by looking in this album: Grad School Documentation


One of the great joys of sharing your work is the opportunity to learn from others. John Collingswood (for more about John check out dbini industries and Taikabox) pointed out on Facebook that one of the very handy things you can do in Isadora is to constrain values by setting the range of an input parameter. For example, I could forgo the min-max setup in my user actor and instead scale and constrain random values at the 3D particle actor’s input. When you click on the name of an input on an actor you get a small pop-up window that allows you to specify that input’s range and starting values. This means that you could connect a wave generator (with the wave pattern set to random) to an input on a 3D particle actor and then control the range of scaled values with the 3D particle actor itself. That would look something like this: