Tag Archives: Isadora

Shuffling Words Around | Isadora

About a month ago I was playing about in Isadora and discovered the Text/ure actor. Unlike some of the other text display actors, this one hides a secret. This actor lets you paste in a whole block of text and then display it one line at a time. Why do that? Well, that’s a fine question, and at the time I didn’t have a good reason to use this technique, but it seemed interesting and I tucked it into the back of my mind. Fast forward a few months, and today on the Facebook group – Isadora User Group (London) – I see the following call for help:


And that’s when I remembered the secret power of our friend the Text/ure actor. Taking Raphael’s case as an example let’s look at how we might solve this problem in Izzy.

First off we need to start by formatting our list of words. For the sake of simplicity I’m going to use a list of 10 words instead of 100 – the method is the same for any size list, but 10 will be easier for us to work with in this example. Start off by firing up your favorite plain text editing program. I’m going to use TextWrangler as my tool of choice on a Mac; if you’re on a PC, I’d suggest looking into Notepad++.

In TextWrangler I’m going to make a quick list of words, making sure that there is a carriage return between each one – in other words I want each word to live on its own line. So far my sample text looks like this:


Boring, I know, but we’re still just getting started.
Next I’m going to open up Isadora and create a new patch. To my programming space I’m going to add the Text/ure actor:


So far this looks like any other actor with inlets on the left, and outputs on the right. If we look closely, however, we’ll see a parameter called “line” that should catch our attention. Now for the magic. If we double click on the actor in the blue space to the right of our inlets, we suddenly get a pop up window where we can edit text.



Next let’s copy and paste our words into this pop up window. Once all of your text has been added, click “OK.”

Edit_Text_and_Untitled 2

Great. Now we have our text loaded into the Text/ure actor, but we can’t see anything yet. Before we move on, let’s connect this actor to a projector and turn on a preview so we can get a sense of what’s happening. To do this start by adding a Projector actor, then connecting the video outlet of the Text/ure actor to the video inlet of the Projector.

Untitled 2

Next show stages – you can do this from the menu bar, or you can use the keyboard shortcut Command-G. If you’re already connected to another display then your image should show up on your other display device. If you’d like to only see a preview window you can force preview with the keyboard shortcut Command-Shift-F.


Alright, now we’re getting somewhere. If we want to change what text is showing up we change the line number on the Text/ure actor.


Alright. So now to the question of shuffling through these different words. In Raphael’s original post, he was looking to not only be able to select different words, but also to have a shuffling method (and I’m going to assume that he doesn’t want to repeat). To crack this nut we’re going to use the shuffle actor, and some logic.

Let’s start by adding a shuffle actor to our patch, and let’s take a moment to look at how it works.

Untitled 3

Our Shuffle actor has a few parameters that are going to be especially important for us – min, max, shuffle, and next. Min, like the label says, is the lowest value in the shuffle stack; Max is the highest value. Shuffle will reset our counter and reshuffle our stack. The next trigger will give us the next number in the stack. On the outlet side of our actor we see Remaining and Value. Value is the shuffled number that we’re working with; Remaining is how many numbers are left. If we think of this as a deck of cards then we can start to imagine what’s happening here. On the left, shuffle is like actually shuffling the deck. Next is the same as dealing the next card. On the right, Value would be the face value of the card dealt, while Remaining is how many cards are left in the deck.

Alright already, why is this important?! Well, it’s important because once we get to the end of our shuffled stack we can’t deal any more cards until we re-shuffle the deck. We can avoid this problem by adding a comparator actor to our patch. The comparator is a logical operation that compares two values, and then tells you when the result is a match (true) and when it isn’t (false).

Untitled 4

To get our logic working the way we want let’s start by connecting the Shuffle’s Remaining value to the value2 of the Comparator. Next we’ll connect the true trigger from the Comparator back to the Shuffle inlet on the Shuffle actor.

Untitled 5

Great, now we’ve made a small feedback loop that automatically reshuffles our deck when we have used all of the values in our range. Now we can connect the Value outlet of the Shuffle Actor to the Line input of the Text/ure actor:
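The deck-of-cards logic of the Shuffle actor and its Comparator feedback loop can be sketched in plain Python. This is just an illustration of the idea, not Isadora’s implementation – the `Shuffler` class and its names are my own invention:

```python
import random

class Shuffler:
    """A sketch of the Shuffle actor: deal shuffled values one at a time,
    and reshuffle automatically when nothing remains (the Comparator's job)."""

    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum
        self.deck = []

    def shuffle(self):
        # Like the "shuffle" trigger: rebuild and reshuffle the stack.
        self.deck = list(range(self.minimum, self.maximum + 1))
        random.shuffle(self.deck)

    def next(self):
        # Like the "next" trigger, guarded by the Comparator: when
        # "remaining" hits zero, reshuffle before dealing the next card.
        if not self.deck:
            self.shuffle()
        return self.deck.pop()

shuffler = Shuffler(1, 10)
# Two full passes through our 10 words: no repeats within a pass.
lines = [shuffler.next() for _ in range(20)]
```

Each batch of ten draws is a complete permutation of 1–10, so a word never repeats until the whole list has been shown.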

Untitled 6

There we have it. Now the logic of our shuffle and comparator will allow us to keep running through our list of words that are in turn sent to our projector.


TouchOSC | Serious Show Control

TouchOSC Buffet

I know. You love your iDevice / Android. You love your phone, your tablet, your phablet, your you name it. You love them. Better yet, you’ve discovered Hexler’s TouchOSC and the thought of controlling your show / set / performance set you on fire – literally. You were beside yourself with glee and quickly set yourself to the task of triggering things remotely. Let’s be honest, it’s awesome. It’s beyond awesome, in some respects there are few things cooler than being able to build a second interface for a programming environment that works on your favorite touch screen device. But before we get too proud of ourselves, let’s have a moment of honesty. True honesty. You’ve dabbled and scrambled, but have you ever really sat down to fully understand all of the different kinds of buttons and switches in TouchOSC? I mean really looked at them and thought about what they’re good for? I know, because I hadn’t either. But it’s time, and it’s time in a big way.

TouchOSC_Editor_-_Untitled_1_touchosc

First things first, make sure you download a copy of the TouchOSC editor for your OS of choice. If you happen to be using Windows, save yourself a huge hassle and make sure that you download the version of the editor (32 or 64 bit) that corresponds to the version of Java that you’re running – otherwise you’ll find yourself unable to open the editor and be grumpy. Great. Now in the editor create a new control setup. Make sure you’re configured to work on the device that you’re planning on playing with / use the most. In my case I’m working with an iPad layout. I prefer to use my iPad in a horizontal orientation most of the time for show control. I’m also only working with one page for now, and happy to leave it as named “1”. You’ll notice for now that the box next to auto for OSC is checked – we’ll leave that for now. Alright, now we’re going to do something that most of us have never done.

In your empty control panel, right click and add one of every different kind of object. Yep. Add one of each so we can look at how they all work. You might choose to skip the repetition of vertical and horizontal sliders – that’s a-okay, but make sure you pick one of them. By the end you should have something that looks like this:


That’s certainly not sexy, but it’s going to teach you more than you can imagine. Next upload that interface to your device of choice. If you’ve never uploaded a new interface to TouchOSC you’ll need to know the IP address of your computer. If you’re using a Windows machine you can use the command prompt and ipconfig to find this information; on a Mac use the network pane in System Preferences. Next you’ll want to transfer this layout to your device. Hexler has a wonderful piece of documentation about how to get this done, and you can read it here.

Next take a moment to read through what all of those lovely widgets do and how they talk to your programming environment. After that the real fun starts. Start listening to your network with your programming environment of choice, and look at what kinds of messages you get from all of these different kinds of buttons and sliders.


If you’re working with Troikatronix Isadora you can start to see the signal flow coming from your layout by first going to the communications drop down menu, and then selecting “Stream Setup.”


Next you’ll want to select “auto detect input.”


Now you can start moving sliders and toggling buttons so you can record their address. Make sure that you select “Renumber Ports” and click “OK” on the bottom of the page.


Now you’re ready to use the OSC listener actor to use those inputs to drive any number of elements inside of your scene. You’ll notice that the “channel number” on the listener actor corresponds to the port number on the Stream-Setup dialog.



Now you’re off to the races. Do something exciting, or not, but play with all of the buttons, sliders, and gizmos so you know how they all work. For extra credit, remember that you can also send messages back to your layout. To do this you need to use the OSC Transmit actor. You’ll also need to know the IP address of your device, and the port that you’re transmitting to – you can find all of this information on the same page in TouchOSC where you initially set your target IP address.

Quartz Composer

If you’re working with Apple’s Quartz Composer you have to do a little more work to get your OSC stream up and running. For starters, you’ll need to know a little bit more about your OSC messages. You’ll notice in the TouchOSC editor that each object has an address that’s associated with it. For example, my fader is “/1/fader1”. To TouchOSC this reads as Fader 1 on Page 1 of your layout. In order to read information from this slider in Quartz, we’ll need to know this address, along with the type of information that we’re sending. In the case of a slider we’re sending a float (a number with fractional parts). To get started let’s open up a new Quartz Composer patch, and add an “OSC Receiver” from the library.


Now let’s take a closer look at that node. If we look at the inspector, and at the settings page we can see lots of useful information.


Here we can see what port we’re set to receive information on by default, as well as an example of how we need to format the keys from our layout so Quartz can properly receive the signals coming over the network. Let’s set up our fader to be properly decoded by Quartz. First we’ll need to remove the test key. Next we’ll want to add a new key by typing in the dialog box /1/fader1 and designating this as a float. Finally, click the plus sign to add this key to the receiver. I usually add something else to my Quartz patch to make sure that I’m passing values through my network; this is optional.
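Under the hood, that `/1/fader1` float arrives as a small binary OSC packet: a padded address string, a type-tag string (“,f” for one float), then a big-endian 32-bit float. As a hedged sketch (my own plain-Python illustration, not what Quartz actually runs), decoding such a message looks like this:

```python
import struct

def decode_osc_float(packet):
    """Decode an OSC message carrying a single float, e.g. a fader move.

    A simplified, hand-rolled sketch that only handles the ",f" case
    discussed above.
    """
    # The address is a null-terminated string padded to a 4-byte boundary.
    end = packet.index(b'\x00')
    address = packet[:end].decode()
    offset = (end + 4) & ~3            # skip the address padding
    # The type-tag string ",f" announces one float argument.
    if packet[offset:offset + 2] != b',f':
        raise ValueError('expected a single float argument')
    tag_end = packet.index(b'\x00', offset)
    value_offset = (tag_end + 4) & ~3  # skip the type-tag padding
    (value,) = struct.unpack('>f', packet[value_offset:value_offset + 4])
    return address, value

# A fader at half travel arrives as 20 bytes: padded address, ",f", float.
sample = b'/1/fader1\x00\x00\x00' + b',f\x00\x00' + struct.pack('>f', 0.5)
```

Seeing the raw layout makes it clearer why the key you type into the OSC Receiver has to match the TouchOSC address exactly.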


There you go. Now, to add the other buttons and sliders from your layout, you’ll need to similarly add keys for each of the buttons, sliders, and gizmos. Remember that you can find this information in your TouchOSC editor for this layout:


Now you’re cooking with gas. Experiment with your layout, and how you can use these buttons and sliders to drive your Quartz Composer creations. For extra credit remember that you can also transmit back to your device. In this case you’ll need to use the “OSC Sender” object in Quartz. You’ll need to know the IP address of your target device, as well as the port that you’re communicating on. Have fun, and make something interesting… or if nothing else, learn something along the way.


If you’re working with Derivative’s TouchDesigner you have two options for receiving OSC information – CHOPs or DATs. Both of these methods work well but might have different uses for different situations.

Screenshot_032514_083023_PM

Let’s start by looking at the OSC In CHOP. First we need to specify what port we want to listen to. Take a look at the TouchOSC layout to make sure the ports and addresses match. Then start moving sliders and buttons to see them appear in TouchDesigner.


To use these to drive various parts of your network you can either use a select CHOP or reference these channels with expressions.

Next let’s look at what we see coming from the OSC In DAT.

Screenshot_032514_083545_PM

Here instead of a changing signal we see a table with a message header and then our float values coming in as static messages. Depending on the circumstance, one or the other of these methods is going to help you drive your network.

For extra credit use the OSC Out CHOP to push values back to your TouchOSC layout – remember that you’ll need the IP address of your device, the port you’re receiving messages on, and you’ll need to format the header of the message correctly so that it corresponds to the right slider or button. Have fun learning and playing with your layout and all of the different kinds of controls.

Sending and Receiving TouchOSC Values in Isadora

Sometime last year (in 2012) I came across Graham Thorne’s instructional videos about how to transmit data from Isadora to TouchOSC. Here’s Part 1 and Part 2 – if this is something that interests you, I’d highly recommend that you watch these two videos first.

While I didn’t have a TouchOSC project at the time that I was working on, it got me thinking about how interfaces communicate information about what’s happening in a patch, and how that information communicates to the operator or user of a given system. This year I’m working on the thesis project of Daniel Fine (an MFA student here at ASU), and one of the many challenges we’re bound to face is how to visualize and interact with a system that’s spread across multiple computers and operating systems, controlling a variety of different systems in the installation / performance space.

To that end, I thought that it would be important to start thinking about how to send control data to a TouchOSC interface, and how to then ensure that we can see relationships between different control values in a given control panel. That all sounds well and good, but it’s awfully vague. A concrete exploration of this kind of concept was what I needed to start planning, in order to more fully wrap my head around how the idea could be more fully exploited in a performance setting.

In order to do this I decided that I wanted to accomplish a simple task with a TouchOSC control panel. On a simple panel layout I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it’s unlikely that I’d need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

First let’s look at the whole patch:

The Whole Patch

The set up for this starts by configuring Isadora to receive data from TouchOSC. If you’re new to this process start by reading this post (or at least through the Stream Set-Up section) to learn about how to start receiving data in Isadora from TouchOSC. Next we’re going to use a few simple actors to make this happen. We’re going to use the following actors:

  • OSC Listener
  • Limit Scale Value (there are other ways to scale values, I just like this method as a way to clearly see what values you’re changing and in what way)
  • OSC Transmitter

OSC Listener

Once you have your connections between TouchOSC and Isadora set up you’ll want to make sure that you’ve isolated a single slider. We can do this by using the OSC Listener actor. The OSC Listener reports the data coming from the channel that you specify in the input inlet on the actor. The listener then sends out the transmitted values from the value outlet.

Limit Scale Value

We have two sliders that we’re working with – Channels 1 and 2 respectively (they also have names, but we’ll get to that later). We first want to look at Channel 1. We’re going to set the OSC Listener actor to channel 1 and then connect the value output to the value inlet on a Limit-Scale Value Actor. The Limit-Scale Value Actor allows you to change, scale, or otherwise remap floats or integers to a new range of values. This is important because TouchOSC uses normalized values (values from 0-1) for the range of the sliders in its control panels. In order to create an inverse relationship between two sliders we want to remap 0-1 to output as values from 1-0. We can do that by entering the following values in the inlets on the actor:

  • limit min: 0
  • limit max: 1
  • out min: 1
  • out max: 0
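The remapping those four values describe is a simple linear scale. As a plain-Python sketch of what the Limit-Scale Value actor does (my own illustration, not Isadora’s code):

```python
def limit_scale(value, limit_min, limit_max, out_min, out_max):
    """Clamp value to [limit_min, limit_max], then remap it linearly
    onto [out_min, out_max]. Flipping the output range inverts the value."""
    value = max(limit_min, min(limit_max, value))
    normalized = (value - limit_min) / (limit_max - limit_min)
    return out_min + normalized * (out_max - out_min)

# With limit 0-1 and output 1-0, the fader position is inverted:
# 0.0 -> 1.0, 0.25 -> 0.75, 1.0 -> 0.0
```

Because the output range runs from 1 down to 0, pushing one fader up always drives the other one down by the same amount.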

OSC Transmit

The remaining step is to connect the output from our Limit-Scale Value Actor to an OSC Transmit Actor. The OSC Transmit Actor, like its name suggests, transmits data wrapped in the OSC protocol. In order to fully understand how this data is being transmitted we need to know a few things about this actor. In looking at its components we can see that it is made up of the following inlets:

  • UDP Addr – UDP address. This is the IP address of the computer or device that you’re talking to. You can determine what this address is in TouchOSC by looking at the info panel. Imagine that this is the street name for a house that you’re sending a letter to.
  • Port – this is the port on the device that you’re sending data to. It’s important that you know what port you’re trying to talk to on a given device so that your message can be parsed. If the UDP Address is the street name, the port number is akin to the house number that you’re trying to send a letter to.
  • Address – The address in the case of TouchOSC is the individual target / name of an asset that you want to change. Each of the sliders and buttons on a TouchOSC panel has a name (for example – /1/fader1); the address is how you tell Isadora which slider you want to change. You can determine these names by looking closely at your Stream Set-up when you’re connecting your device to Isadora. To follow with our letter sending metaphor above, the Address is the name of the person you’re sending the letter to.
  • Use Type – this allows us to toggle the sending mechanism on and off.
  • Value – this is the value that we’re transmitting to our other device.

To use the OSC Transmit actor we need to fill in all of the appropriate fields with the information from our mobile device. You’ll need to specify the UDP Address, the Port number, the Address, and connect the value out from our Limit-Scale Value actor to the value inlet of the OSC Transmit Actor.
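Those four inlets map directly onto what goes over the wire: a UDP datagram carrying an OSC-formatted message. A minimal hand-rolled sketch of that send (my own illustration of the protocol, not the actor’s implementation; a real project would normally use an OSC library):

```python
import socket
import struct

def osc_pad(text):
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    data = text.encode() + b'\x00'
    return data + b'\x00' * (-len(data) % 4)

def send_osc_float(udp_addr, port, address, value):
    """Roughly what the OSC Transmit actor does for a single float:
    padded address, the ",f" type tag, and a big-endian 32-bit float,
    fired at udp_addr:port over UDP."""
    message = osc_pad(address) + osc_pad(',f') + struct.pack('>f', value)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(message, (udp_addr, port))
    sock.close()
    return message

# e.g. send_osc_float('192.168.1.5', 9000, '/1/fader2', 0.75)
msg = send_osc_float('127.0.0.1', 9000, '/1/fader2', 0.5)
```

The UDP Addr and Port decide where the datagram goes; the Address inside the message decides which fader TouchOSC moves when it arrives.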

In this test I started by having fader1 drive fader2. Once I got this working, I then repeated all of the steps above, for the other fader – if you look closely at the full patch this will make more sense. The resulting interaction can be seen in the gifs below.


Custom Quartz Compositions in Isadora

The What and Why

The more I work with Isadora, the more I feel like there isn’t anything it can’t do. As a programming environment for live performance it’s a fast way to build, create, and modify visual environments. One of the most interesting avenues for exploration in this regard is working with Quartz Composer. Quartz is a part of Apple’s integrated graphics technologies for developers and is built to render both 2D and 3D content by using the system’s GPU. This, for the most part, means that Quartz is fast. On top of being fast, it allows you access to GPU accelerated rendering making for visualizations that would be difficult if you were only relying on CPU strength.

Quartz has been interesting to me largely for its quick access to a GPU-accelerated, high-performance rendering environment capable of 2D, 3D, and transparency. What’s not to love? As it turns out, there’s a lot to be challenged by in Quartz. Like all programming environments it’s rife with its own idiosyncrasies, idioms, and approaches to the rendering process. It’s also a fair dose of fun once you start to get your bearings.

Why does all of this matter? If you purchase the Isadora Core Video upgrade you have access to all of the Core Imaging processing plugins native to OS X. In addition to that you’re now able to use Quartz Composer patches as Actors in Isadora. This makes it possible to build a custom Quartz Composer patch and use it within the Isadora environment. Essentially this opens up a whole new set of possibilities for creating visual environments, effects, and interactivity for the production or installation that you might be working on.

Enough Already, Let’s Build Something

There are lots of things to keep in mind as you start this process, and perhaps one of the most useful guidelines I can offer is to be patient. Invariably there will be things that go wrong or misbehave. It’s the nature of the beast; paying close attention to the details of the process is going to make or break you when it all comes down to it in the end.

We’re going to build a simple 3D Sphere in Quartz then prep it for control from Isadora. Easy.

Working in Quartz

First things first, you’ll need to get Quartz Composer. If you don’t already have it, check out I Love QC’s video about how to download it:

The next thing we’re going to do is to fire up QC. When prompted to choose a template, select the basic composition, and then click “Choose.”


One of the first things we need to talk about is what you’re seeing in the Quartz environment. The grid-like window that you start with is your patch editor. Here you connect nodes in order to create or animate your scene.


You should also see a window that’s filled with black. This is your “viewer” window. Here you’ll see what you’re creating in the patch editor.


Additionally you can open up two more windows, by clicking the corresponding icons in the patch editor. First find the button for Patch Library, and click that to open up a list of nodes available for use within the network.

The Patch Library holds all of the objects that are available for use within the Quartz editor. While you can scroll through all of the possible objects when you’re programming, it’s often more efficient to use the search box at the bottom of the library.


Next open up the patch inspector.
The patch inspector lets you see and edit the settings and parameters for a given object.

Untitled_-_Editor 3

Let’s start by making a sphere. In the Patch Library search for “sphere” and add it to your patch. Out of the gate we’ll notice that this sucks. Rather, this doesn’t currently look interesting, or like a sphere for that matter. What we’re currently seeing is a sphere rendered without any lighting effects. This means that we’re only seeing the outline of the sphere on the black background.


This brings us to one of the programming conventions in Quartz. In Quartz we have to place objects inside of other components in order to tell QC that we want a parent’s properties to propagate to the child component.

To see what that means let’s add a “lighting” patch to our network. Congratulations, nothing happened. In order to see the lighting object’s properties change the sphere, we need to place the sphere inside of that object. Select the sphere object in the editor, Command-X to cut, double click on the Lighting object, then Command-V to paste.


This is better, but only slightly.

Untitled_-_Viewer_and_Untitled_-_Editor 2

Let’s start by changing the size properties of our sphere. Open the Patch Inspector and click on the Sphere object in the editor. Now we can see a list of properties for our Sphere. Let’s start by adjusting the diameter of our sphere. I’m going to change my diameter to .25.

Sphere_and_Untitled_-_Viewer_and_Untitled_-_Editor 2

Next, select “settings” from the drop down menu in the Patch Inspector. Here I’m going to turn up the number of sub divisions of my sphere to give it a smoother appearance.

Sphere_and_Untitled_-_Viewer_and_Untitled_-_Editor 3

With our sphere looking pretty decent I want to add some subtle animation to give it a little more personality. We can do this by adding an LFO (low-frequency oscillator). We’ll use our LFO to give our sphere a little up and down motion. In the Patch Library search for LFO and add it to your editor next to your sphere.

Untitled_-_Editor 4

Next click the “Result” outlet on the “Wave Generator (LFO)” and connect it to the “Y Position” inlet on the sphere.

Wonderful… but this is going to make me sea sick.

Next we’re going to make some adjustments to the LFO. With your patch inspector open, click on the LFO. Let’s make the following changes:

  • Period to 2
  • Phase to 0
  • Amplitude to .01
  • Offset to 0
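Those four settings describe a sine wave, so it can help to see them as a formula. A hedged sketch (assuming the LFO is set to its sine waveform; the function name is my own):

```python
import math

def lfo(t, period=2.0, phase=0.0, amplitude=0.01, offset=0.0):
    """The Y-position wobble over time t (seconds): a sine wave with
    period 2, phase 0, amplitude .01, and offset 0, as set above."""
    return offset + amplitude * math.sin(2 * math.pi * t / period + phase)

# Over one 2-second period the sphere drifts from 0 up to +0.01,
# back through 0 to -0.01, and home again - a gentle bounce.
```

A small amplitude and a long-ish period is what keeps the motion subtle rather than seasick-making.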


Now you should have a sphere that’s very gently bouncing in the space.

Next let’s return to the parent lighting patch to make some changes to the lighting in this environment. We can get back to the parent either by clicking on the button “edit parent” or by clicking on the position in the string of objects where we want to go.

Untitled_-_Editor 5

In the root patch let’s click on the lighting object and change some parameters:

  • Material Specularity to 0.1
  • Material Shininess to 128
  • Light 1 Attenuation to .2
  • Light 1 X Position to -0.25
  • Light 1 Y Position to 0.5
  • Light 1 Z Position to 1


Excellent. We should now have a sphere gently bobbing in space with a light located just barely to the left, up, and away (as a note these are locations in relation to our perspective looking at the sphere).

At this point we could leave this as it is and open it in Isadora, but it wouldn’t be very exciting. In order for Isadora to have access to make changes to a QC patch we have to “publish” the inputs that we want to change. In other words, we have to choose what parameters we want to have access to in Isadora before we save and close our QC patch.

I’m thinking that I’d like this sphere to have a few different qualities that can be changed from Isadora. I want to be able to:

  • Change the Lighting Color (Hue, Saturation, and Luminosity as separate controls)
  • Change the position of the light
  • Change the Sphere Size

In QC in order to pass a value to an object, the parameter in question needs to be published from the Root patch. This will make more sense in a second, but for now let’s dive into making some parameters available to Isadora. First up we’re going to add a HSL to our patch editor. This is going to give us the ability to control color as Hue, Saturation, and Luminosity as individual parameters.

Connect the Color outlet of the HSL to the Light 1 Color inlet on the lighting object.

Untitled_-_Editor 6

Now let’s do some publishing. Let’s start by right clicking on the HSL object. From the pop-up menu select “Publish Inputs” and one at a time publish Hue, Saturation, and Luminosity. You’ll know a parameter is being published if it’s got a green indicator light.


Next publish the X, Y, and Z position inputs for the lighting object. This time make sure you change the names Light X Pos, Light Y Pos, and Light Z Pos as you publish the parameters.


At this point we’ve published our color values, and our position values, but only for the light. I still want to be able to change the diameter of the sphere from Isadora. To do this we need to publish the diameter parameter from the “sphere” object, then again from the lighting object.

First double click on the lighting object to dive inside of it. Now publish the diameter parameter on the sphere, and make sure to name it “Sphere Diameter.” When you return to the root patch you’ll notice that you can now see the “Sphere Diameter” parameter.


We now need to publish this parameter one more time so that Isadora will be able to make changes to this variable.

Here we need to pause to talk about good housekeeping. Like all things in life, the more organized you can keep your patches, the happier you will be in the long run. To this end we’re going to do a little input splitting, organizing, and commenting. Let’s start by right clicking anywhere in your patch and selecting “Add Note.” When you double click on this sticky note you’ll be able to edit the text inside of it. I’m going to call my first note “Lighting Qualities.”

Next I’m going to go back to my HSL, right click on the patch, select “Input Splitter,” and select Hue. You’ll notice that you now have a separate input for Hue that’s now separate from the HSL. Repeat this process for Saturation and Luminosity. I’m going to do the same thing to my lighting position variables that are published. Next I’m going to make another note called “Sphere Qualities” and then split my sphere diameter and drag it to be inside of this note. When I’m done my patch looks like this:

Untitled_-_Editor 8

Right now this seems like a lot of extra work. For something this simple, it sure is. The practice, however, is important to consider. In splitting out the published inputs, and organizing them in notes we can (at a glance) see what variables are published, and what they’re driving. Commenting and organizing your patches ultimately makes the process of working with them in the future all the easier.

With all of our hard work done, let’s save our Quartz patch.

Working in Isadora

Before we fire up Isadora it’s important to know where it looks to load quartz patches. Isadora is going to look in the Compositions folder that’s located in the Library, System Library, and User Library directories. You can tell Isadora to only look in any combination of these three at start up. Make sure that you copy your new quartz composition into one of those three directories (I’d recommend giving your custom quartz comps a unique color or folder to make them easier to find in the future).

With your QC patch in place, and Isadora fired up, let’s add our custom patch to a scene. Double click anywhere in the programming space and search for the name of your patch. I called mine Simple Sphere. We can now see that we have our composition with all of the variables we published in QC.


We can see what our composition looks like by adding a CI projector and connecting the image output from our QC actor to the image inlet on the projector. Let’s also make sure that we set the CI projector to keep the aspect ratio of our image.


When you do this you should see nothing. What gives?!
If you look back at your custom actor you’ll notice that the diameter of the sphere is currently set to 0. Change that parameter to 0.1, or any other size you choose.

Untitled___Stage_1_and_Untitled 2

You should now see a dim floating sphere. What gives?!
If you look at the position of your light you’ll notice that it’s currently set to 0,0,0, roughly the same location as the sphere. Let’s move our light so we can see our sphere:

  • Light 1 X Position to -0.25
  • Light 1 Y Position to 0.5
  • Light 1 Z Position to 1

Untitled___Stage_1_and_Untitled 3

If you’re connected to an external monitor or projector you’ll also want to make sure that you set the render properties to match the resolution of your output device:

Untitled 3

There you have it. You’ve now built a custom quartz patch that you can drive from Isadora.

Isadora | Button Basics

In a previous post I talked about how to get started in Isadora with some basics about slider operation. I also want to cover a little bit about using buttons with Izzy. 

Buttons are very handy interface controls. Before we get started, it’s important to cover a few considerations about how buttons work. When working with a physical button, like an arcade button on a MIDI controller, the action of pressing the button completes a circuit. When you release the button, you also break the circuit. In Isadora, we can control what happens when we press a button. Specifically, we can control what values are being transmitted when the button isn’t being pressed, when it is being pressed, and how the button behaves (does it toggle, or is the signal momentary). Thinking about how a button behaves will help as you start to build an interface, simple or complex.

Let’s start by experimenting with a simple implementation of this process. We’ll create a white rectangle that fills our stage, connect our shape to a projector, and finally use a button to control the intensity of the projector. 

Start by creating a new scene, and adding a “Shapes” actor and a “Projector” actor. Connect the shapes’ video outlet to the projector’s video inlet.

Next change the width and height dimensions of the shape to be 100 and 100 respectively. Remember that Isadora doesn’t use pixel values, but instead works in terms of percentages. In this case a value of 100 for the height indicates that the shape should be 100% of the stage’s height; the same applies for the width value of 100. 

We should now have a white box that covers the height and width of the stage so that we only see white.

Now we’ll use a button to control a change in the stage from white to black. Remember that in order to start adding control elements we first need to reveal the control panel. You can do this by selecting it from the drop down menu, using Command-Shift-C to see only the control panel, or using Control-Shift-S to see a split of the control panel and the programming space. If you’ve turned on the Grid for your programming space you’ll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You’ll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

With the control panel visible, add a button. 

Once you’ve added your button to the control panel, you can change the size of the button by clicking and dragging the small white square on the bottom right of the button. 

Next let’s look at the options for the button. We can see what parameters we can control by double clicking on the button. When you do this you should see a pop up window with the following attributes:

  • Control Title – what the control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – the font used for this control
  • Font Size – self explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel
  • Button Text – the text displayed on the button
  • Control ID – the numerical identification number of this control
  • Off Value – the numeric value sent when the button is in the off position
  • On Value – the numeric value sent when the button is in the on position
  • Mode (Momentary or Toggle) – the mode for the button. Momentary indicates that the on value is only transmitted while the button is being pressed. Toggle indicates that the value will toggle between on and off with each click.
  • Don’t Send Off – prevents the button from sending the off value
  • Invert – inverts the on and off values

There are a few other options here, but they mostly have to deal with the appearance of the button. When you start thinking about how you want your control panel to look to an operator, these last parameters will be very helpful. 
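To make those options concrete, here’s a small Python sketch of a button following the rules above (a hypothetical model for illustration only, not Isadora’s actual implementation):

```python
# Toy model of an Isadora-style button: it transmits its "on value" or
# "off value" depending on its mode, invert, and don't-send-off settings.

class Button:
    def __init__(self, off_value=0, on_value=100, mode="momentary",
                 invert=False, send_off=True):
        self.off_value = off_value
        self.on_value = on_value
        self.mode = mode          # "momentary" or "toggle"
        self.invert = invert
        self.send_off = send_off
        self.state = False        # current logical on/off state

    def _emit(self):
        on, off = self.on_value, self.off_value
        if self.invert:
            on, off = off, on     # invert swaps the two values
        if self.state:
            return on
        return off if self.send_off else None   # "Don't Send Off"

    def press(self):
        # Toggle flips state on each press; momentary is on while held.
        self.state = not self.state if self.mode == "toggle" else True
        return self._emit()

    def release(self):
        if self.mode == "momentary":
            self.state = False    # toggle ignores the release
        return self._emit()

b = Button(mode="momentary")
print(b.press(), b.release())             # 100 0
t = Button(mode="toggle")
print(t.press(), t.release(), t.press())  # 100 100 0
```

Tracing the toggle case shows why the button stays "depressed": releasing the mouse doesn’t change its state, only the next press does.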

For right now let’s leave the default parameters for the button’s options. Next connect the button’s control ID to the inlet on the Projector labeled “intensity.”

As we’re working on the control panel our edit mode is currently enabled which will prevent us from being able to actually click the button with the mouse. To check the controls we have two options:

  • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu.
  • We can use the option key to by-pass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to click the button in the control panel. You should see the white box flash on, and back off again as you press and release the mouse button. Right now as we click the button we’re sending a value of 100 to the intensity parameter of the Projector actor. This makes the shape opaque only so long as you’re pressing the button. 

Double click the button in the control panel and check the box for “Invert.” We’ve now inverted the message being sent from the button to the Projector. When you click the button you should now see the opposite: a white screen that flashes black, and then returns to white. 

Double click on the button in the control panel, uncheck the box for “Invert.” Change the “Mode” of the button from “Momentary” to “Toggle.” Now as you click the button you should notice that it stays depressed until you click it again. This allows you to toggle between the on and off states of the button. 

This, obviously, is only the beginning of how to work with buttons. You might use a button to control the playback of a movie, which media is on the stage, the position of media on a stage, to jump between scenes, or to change any number of parameters in your patch. Knowing the basics of how buttons behave will help ensure that you can start to build a solid control panel that you can use during a live performance. 

Isadora | Slider Basics

One of the most exciting (and also most challenging) parts of working with Isadora is thinking about how an operator is going to use your patch during a show. ASU’s program focuses on the importance of programming a show with the expectation that the person running your system may, or may not, have much experience. During the tech rehearsal process one of the Media Designer’s responsibilities is to train the operator in basic operation and troubleshooting techniques. 

While there are a wide variety of methods for controlling your system I want to take a moment to cover how you can use the Control Panel features of Isadora to create a simple custom interface. I’m also going to take a moment to talk about the different kinds of controls, how they work, and things you want to keep in mind as you’re using them. 

To get started, there are a few different ways to reveal the control panel. You can select it from the drop down menu, use Command-Shift-C to see only the control panel, or use Control-Shift-S to see a split of the control panel and the programming space. If you’ve turned on the Grid for your programming space you’ll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You’ll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

As you create new scenes, Isadora will start by connecting all scenes to the same control panel. There are a few different schools of thought in terms of best practice in the use of control panels. Using a single control panel for every scene means only building a single interface. As long as you’re only dealing with a limited number of simple cues this is a fine direction to head, and may be the easiest method in terms of programming. This approach can, however, get complicated very quickly if you’re triggering more than one actor per scene. In this scenario the programmer could lose track of where a button or slider is connected. This might cause unexpected playback results or could just be a source of headaches. For more complicated playback situations, you may instead elect to have separate control panels for each scene. Depending on your programming needs this may be the best way to ensure that your controls are only linked to a single scene. 

To accomplish this, you’ll need to split your control panel. Isadora gives you several visual cues to determine how a scene and control panel are linked. When you glance at your scene list you’ll notice that the bar underneath is either continuous (a single control panel) or broken (a split control panel).  

To split the control panel click between the two scenes that you wish to separate. When you see your cursor separating the two scenes, right click to get a contextual menu with the option to split the control panel. You should see that the line between the two scenes is now broken. 

Let’s start by looking at a simple slider. To add a slider to your control panel start by double clicking in the control panel work space. Next type in “slider” and select it when it appears in the drop down menu. 

It’s important to note that there is a difference between the 2D slider, and the regular slider. For now, we just want the “slider” control. We can learn a little more about what our slider is doing by double clicking on it. 

You should see a pop up window with lots of information about our slider:

  • Control Title – what this control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – the font used for this control
  • Font Size – self explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel. As a note, for this to work properly, you’ll also need to enable the “Display Value” check-box (a big thank you to Matthew Haber for catching my error here)
  • Control ID – the numerical identification number of this control
  • Minimum – Sliders work on the principle that at the bottom, or left, position the control sends the number indicated in this box.
  • Maximum – Likewise, at the top, or right, position the control sends the number indicated in this box.
  • Step – The counting increments for this control.
  • Display Value – Shows the current value being sent in the control panel itself.
  • Display Format – The number of floating points displayed.
  • Color – The color of the inside of the slider.

There are a few other options here, but they’re largely aesthetic, so I’m going to skip them for now.
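As a rough illustration of how Minimum, Maximum, and Step interact, here’s a short Python sketch (my own model of the behavior described above, not Isadora’s source):

```python
# Hypothetical sketch: map a normalized slider handle position (0.0-1.0)
# to the value the slider sends, honoring Minimum, Maximum, and Step.
def slider_value(position, minimum=0.0, maximum=100.0, step=1.0):
    raw = minimum + position * (maximum - minimum)
    if step > 0:
        # Quantize to the nearest step increment, anchored at the minimum.
        raw = minimum + round((raw - minimum) / step) * step
    # Never send a value outside the configured range.
    return min(max(raw, minimum), maximum)

print(slider_value(0.5))                    # 50.0 with the defaults
print(slider_value(0.5, -65, 65, step=5))   # 0.0
print(slider_value(1.0, 0, 100, step=10))   # 100.0
```

The Step setting is why a slider sometimes feels “chunky”: the handle moves continuously, but the transmitted value snaps to increments.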

Let’s start by working with the default values for the slider and see how it communicates with the patch itself. 

First, we will add a trigger value to the programming space so we can see how values are transferred from the control panel to the programming environment.

Next we can connect the Control ID from the control panel to the Value inlet on the Trigger Value actor. We can do this by clicking on the Control ID, and dragging the red line to the “value” input.

You should now see a number next to the value input that corresponds to the slider’s control ID.

As we’re working on the control panel our edit mode is currently enabled which will prevent us from being able to actually move the slider with the mouse. To check the controls we have two options:

  • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu. 
  • We can use the option key to by-pass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to move the slider in the control panel. You’ll notice that the value linked to the slider also changes. 

You’ll also notice that the output from the trigger value has not changed. This is because we’re only adjusting the value, but not activating the trigger. Let’s activate the trigger at the same time we’re moving the slider.

To do this we attach the Control ID to the trigger inlet on the Trigger Value actor. This will ensure that the actor triggers at the same time that the value is changed. Now when we move the slider we can see that the output value also changes.

Now that we know how to send slider data to a trigger value we can look at something a little more interesting. We are going to start by adding a Shapes actor and connecting that to a Projector actor.

Next connect your vertical slider to the “vert pos” (Vertical Position) inlet on the shapes actor. 

Create a new slider in the control panel. Grab the small box on the bottom right corner and drag the slider to the right until you have created a horizontal slider. 

Connect your horizontal slider to the “horz pos” (Horizontal Position) inlet on the shapes actor.

Next we need to adjust the scaling values of the shape actor. An actor’s inlets and outlets can often be scaled to a set range. In order to properly use our slider we’ll need to adjust the inlet scaled values on the Shapes actor. To do this click on the name of the attribute whose scaled values you’d like to adjust. Start by clicking on “horz pos.” We can see in the pop-up menu that the minimum value is currently set to −200, and the maximum value is set to 200. These values are too high. 

Isadora uses a coordinate system that assumes that the middle of the stage is the origin 0,0. Further, Isadora thinks in terms of percentages rather than pixels. In the case of our horizontal slider, a positive value of 50 represents half of the total stage width, which puts us at the rightmost edge of the stage. In the case of shapes it’s also important to note that the shape’s position is relative to its center. A positive value of 50 still leaves half of our shape on the screen, no matter the dimensions of the shape.

Set the scaled values of the horizontal position to −65 and 65. Now when we drag our slider (remember to hold down the option key) we are able to move our box from all the way off the stage on the left to all the way off the stage on the right.
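If it helps to see the coordinate math spelled out, here’s a small Python sketch of the percentage-based stage coordinates as I understand them (the function and pixel conversion are my own illustration, not part of Isadora):

```python
# Sketch of center-origin, percentage-based stage coordinates: 0,0 is
# the middle of the stage, and positions are percentages of stage size.
def percent_to_pixels(horz_pos, vert_pos, stage_w, stage_h):
    # +50 horizontal lands the point on the right edge of the stage.
    x = stage_w / 2 + (horz_pos / 100.0) * stage_w
    y = stage_h / 2 + (vert_pos / 100.0) * stage_h
    return x, y

print(percent_to_pixels(0, 0, 1024, 768))    # center of the stage
print(percent_to_pixels(50, 0, 1024, 768))   # right edge, vertical center
```

Because a shape is positioned by its center, a “horz pos” of 50 parks the shape’s center on the edge, which is why ±65 is a comfortable range for moving it fully off stage.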

Another type of slider that might be useful in this type of situation is the 2D slider. Create a new scene and split the control panel so we can see how this input control works. In the new scene add a Shapes actor and a Projector actor, and connect them. Now add a 2D slider to the control panel.

Double click on the 2D slider so you can see a little more about how this particular control input works. Similar to the “slider” control you can see that you can title the slider, adjust the width, height, font, and so on. You’ll notice that there’s an X Control ID and a Y Control ID. 

Next we’ll click okay, and link Control ID 1 (the X control) to the “horz pos” inlet on the Shapes Actor. Now link the Control ID 2 (the Y control) to the “vert pos” inlet on the Shapes Actor. Check to make sure that the horizontal and vertical inlets on the Shapes Actor are properly scaled (last time we set them to −65 and 65). 

Now the single 2D slider behaves in the same way as the two sliders we set-up in the exercise above.

This, obviously, is only the beginning of how to work with Sliders and 2D Sliders. You might use a slider to control the playback position of a movie, or the position of a movie on the stage. You might use a slider to control position, zoom, rotation, width, height, really just about any kind of numerical attribute for an actor. The key things to keep in mind in this process are:

  • Knowing the range of values that your slider is transmitting
  • Knowing the scaled range of values that your actor is transposing values to
  • Knowing the control ID
  • Knowing how to connect your control panel items to actors in your patch

Isadora | Live-Camera Input as a Mask

Back in March I had an opportunity to see a production called Kindur put on by the Italian Company Compagnia TPO. One of the most beautiful and compelling effects that they utilized during the show was to use a live-camera to create a mask that revealed a hidden color field. The technique of using a live feed in this way allows a programmer to work with smaller resolution input video while still achieving a very fluid and beautiful effect. 

This effect is relatively easy to generate by using just a few actors. An overview of the whole Isadora Scene looks like this:

To start this process we’ll begin with a Video-In Watcher actor. The video-in will be the live feed from our camera, and will ultimately be the mask that we’re using to obscure and reveal our underlying layer of imagery. This video-in actor connects to a Difference actor, which looks for the difference between two sequential frames in the video stream. This is then in turn passed to a Motion Blur actor. The motion blur actor will allow you to specify the amount of accumulated blur effect as well as the decay (disappearance rate) of the effect. To soften the edges, the image stream is next passed to a Gaussian Blur actor. This stream is then passed to an Add Alpha Channel actor by feeding the live feed into the mask inlet on the actor. The underlying geometry is passed in through the video inlet on the Add Alpha Channel actor. Finally, the outlet of the Add Alpha Channel actor is passed out to a projector. 
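For readers who want to see the core idea outside of Isadora, here’s a rough numpy sketch of the chain (difference, then mask; the blur stages are omitted for brevity, and this is only an approximation of what the actors do):

```python
import numpy as np

# Frame differencing turns motion into brightness, and that brightness
# then acts as an alpha mask revealing the hidden image underneath.

def frame_difference(prev, curr):
    # Absolute per-pixel difference, like the Difference actor.
    return np.abs(curr.astype(int) - prev.astype(int)).astype(np.uint8)

def apply_mask(image, mask):
    # Scale the image by the mask, like feeding the mask inlet of an
    # Add Alpha Channel actor: bright (moving) areas reveal the image.
    return (image.astype(float) * (mask / 255.0)).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1:3, 1:3] = 255                            # "motion" in the center
hidden = np.full((4, 4), 200, dtype=np.uint8)   # underlying color field

mask = frame_difference(prev, curr)
out = apply_mask(hidden, mask)
print(out)   # 200 where there was motion, 0 elsewhere
```

The motion blur and gaussian blur stages in the actual patch soften and sustain the mask so the reveal feels fluid rather than flickery.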

As a matter of best-practice I like to use a Performance Monitor actor when I’m creating a scene in order to keep an eye on the FPS count. This can also be useful when trying to diagnose what’s causing a system to slow down during playback. 

This effect works equally well over still images or video, and is certainly something that’s fun to experiment with. Like all things in live systems, your mileage may vary – motion blur and gaussian blur can quickly become resource expensive, and it’s worth turning down your capture settings to help combat a system slow-down.

Isadora | Network Control

For an upcoming show one of the many problems that I’ll need to solve is how to work with multiple machines and multiple operating systems over a network. My current plan for addressing the needs of this production will be to use one machine to drive the interactive media, and then to slave two computers for cued media playback. This will allow me to distribute the media playback over several machines while driving the whole system from a single machine. My current plan is to use one Mac Pro to work with live data while slaving two Windows 7 PCs for traditionally cued media playback. The Mac Pro will drive a Barco, while the PCs each drive a Sanyo projector. This should give me the best of both worlds in some respects: distributed playback, similar to WatchOut’s approach to media playback, while also allowing for more complex visual manipulation of live-captured video. 

To make all of this work, however, it’s important to address how to get two different machines running Isadora on two different operating systems to talk with one another. To accomplish this I’m going to use an OSC Transmit actor on the Master machine, and an OSC listener on the slaved machines. 

On the Master machine the set-up looks like this:

Trigger Value – OSC Transmit

The transmit actor needs the IP address of the slaved machines, as well as a port to broadcast to. The setup below is for talking to just one other machine. In my final setup I’ll create a specialized user actor that holds two OSC Transmit actors (one for each machine) that can be copied into each scene.

On the slaved machines the setup takes two additional steps. First off it’s important to determine what port you want to receive messages from. You can do that by going to Isadora Preferences and selecting the Midi/Net tab. Here you can specify what port you want Isadora to listen to. At this point it’s important to catch the data stream. You can do this by opening up the Communications tab and selecting Stream Setup. From here make sure that you select “Open Sound Control” and click the box “Auto-Detect Input.” At this point you should see the Master machine broadcasting with a channel name, an address, and a data stream. Once this is set up the actor patch for receiving messages over the network looks like this:

OSC Listener – Whatever Actor you Want

In my case I’ll largely use just jump++ actors to transition between scenes, each with their own movie. 
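If you’re curious what the OSC Transmit actor is actually putting on the wire, here’s a minimal stdlib-only Python encoder for an OSC 1.0 message (the address and port below are made-up examples; Isadora handles all of this for you):

```python
import struct

# OSC 1.0 wire format: a null-terminated address pattern padded to a
# multiple of 4 bytes, a type tag string (",f" = one float), then the
# arguments as big-endian binary.

def osc_pad(b: bytes) -> bytes:
    # Null-terminate and pad to a 4-byte boundary, per the OSC spec.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +               # type tag: a single float32
            struct.pack(">f", value))      # big-endian float32 argument

msg = osc_message("/isadora/1", 0.5)
print(msg)

# Sending it is one UDP datagram (untested sketch; substitute the
# slave's IP and the port set in Isadora's Midi/Net preferences):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("192.168.1.20", 1234))
```

Seeing the format makes the Stream Setup dialog less mysterious: the channel name, address, and data stream it auto-detects are exactly these fields.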

You can, of course, do much more complicated things with this set-up all depending on your programming or play-back needs. 

Soot and Spit | Particles in Isadora

Holy challenges, Batman. It seems like I’m constantly being humbled by the learning curve of graduate school. This spring one of ASU’s productions is Charles Mee’s Soot and Spit.

Soot and Spit is grounded in the work of James Castle, an artist who was deaf and possibly autistic. One of the most powerful outlets for expression in Castle’s life was making art. He made countless works over the course of his life, and one of the mediums that he used was a mixture of soot and spit. With this as a contextual anchor the lead designer, Boyd Branch, was interested in exploring the possibility of using particles as a part of his final design.  

One of my charges in working on this production was to explore how to work with particles in Isadora (our planned playback system). I started this process by doing a little digging on the web for examples, and the most useful resource that I found as a starting point was an example file from Mark Coniglio (Isadora’s creator). Here Mark has a very helpful breakdown of several different kinds of typical operations in Isadora, including a particle system. Looking at the particle system actor can feel a little daunting. In my case, the typical approach of toggling and noodling with values to look for changes wasn’t really producing any valuable results. It wasn’t until I took a close look at Mark’s example patch that I was finally able to make some headway.

We can start by looking at the 3D particle actor and working through a few important considerations to keep in mind when working with 3D particles in Isadora. One thing to remember is that when you’re creating particles, the rendering system needs multiple attributes for each particle that you’re generating (location in x, y, and z, velocity, scale, rotation, orientation, color, lifespan, and so on). To borrow an idiomatic convention from MaxMSP, you have to bang on these attributes for every particle that you create. There are a variety of methods for generating your bang, but for the sake of seeing some consistent particle generation I started by using a pulse generator. Pulse generators in Isadora are expressed in hertz (cycles per second), and when we’re working with our particle system we’ll frequently want a pulse generator to be attached at the front end of our triggers. To that end, we really want a single pulse generator to be driving as much of our particle generation as possible. This is to ensure all of our data about particle generation is synchronized, and to keep our system overhead as low as possible. 

Let’s get this party started by making some conceptual plans about how we want to experiment with particles. I started by thinking of the particles as being emitted from a single source and being affected by gravity in a typical manner, i.e. falling towards the ground. 
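Before building the Izzy patch, the falling-particle idea itself can be sketched in a few lines of plain Python (a toy model for intuition only; the attribute names are my own, not the 3D Particles actor’s):

```python
import random

# Toy particle emitter: particles spawn at an origin with slightly
# randomized velocities and fall under gravity.

GRAVITY = -9.8

def spawn_particle(origin=(0.0, 0.0, 0.0), var=1.0):
    # Each new particle needs its own full set of attributes --
    # this is the "bang per particle" idea described above.
    return {
        "pos": list(origin),
        "vel": [random.uniform(-var, var) for _ in range(3)],
        "age": 0.0,
    }

def step(particles, dt=1/30, lifespan=2.0):
    for p in particles:
        p["vel"][1] += GRAVITY * dt          # gravity pulls on y
        for i in range(3):
            p["pos"][i] += p["vel"][i] * dt  # simple Euler integration
        p["age"] += dt
    # Cull particles that have outlived their lifespan.
    return [p for p in particles if p["age"] < lifespan]

particles = [spawn_particle() for _ in range(5)]
for _ in range(30):                          # one second at 30 fps
    particles = step(particles)
print(len(particles), particles[0]["pos"][1])
```

After a second of simulated time every particle has drifted below its origin, which is the “affected by gravity in a typical manner” behavior we’re after.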

Here’s my basic particle emitter set-up for this kind of setup: 

Let’s take a look at the things we need to get started. As I mentioned before, we first need a pulse generator. Let’s add one and look at where it’s connected:

Here we can see that the pulse generator is hooked up to a custom user actor that I’ve called “Particle Feeder,” and to the “Add Obj” attribute in the 3D particle Actor. This approach is making sure that we’re only using a single pulse generator to bang on our particle system – pushing attribute changes and add object changes.

Next let’s look at the Particle Feeder actor that I made to make this process easier:

In just a moment we’ll take a look inside of this user actor, but before we dive inside let’s examine how we’re feeding the particle generator some information. Frequency is the input for the pulse generator; this is how quickly we’re generating particles. Var X, Y, and Z are used to generate a random range of velocities for our particles between an upper and lower limit. This makes sure that our particles aren’t uniform in how they’re moving in the space. If we don’t have any variation here our particles will all behave the same way. Finally we have a location for our emitter: Origin X, Y, and Z. It’s important to remember that the particle system exists in 3D space, so we need three attributes to define its location. On the right side of the actor we can see that we’re passing out random values between our min and max values for X, Y, and Z, as well as X, Y, and Z origin data. 

Inside of this custom actor we see this:

At first glance we can see that we have four blocks of interest for this actor. First off it’s important to notice that our Frequency input is passed to all of our modules. The first three modules are copies of one another (one each for X, Y, and Z). We can see here that our pulse generator is banging on a random number generation actor; that random value (from 0 to 100) is then passed to a Limit-Scale Value actor. The limit scale actor takes an input value in a specified range and scales it to another range. In our case it’s taking values between 0 and 100 and scaling them to be between -5 and 5. The resulting value is then passed out of this macro to its corresponding output. Our bottom block pushes out data about our emitter location. It’s important to remember that we need to pass out the origin location for each particle that’s generated. This is why the location information is passed through a trigger value that’s being triggered by our system’s pulse generator.
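The Limit-Scale Value behavior described above boils down to a linear remap, which can be sketched like this (a plain-Python stand-in for the actor, for illustration):

```python
import random

# Take a value in one range and map it linearly into another range,
# clamping out-of-range inputs first -- the Limit-Scale Value idea.
def limit_scale(value, in_min, in_max, out_min, out_max):
    value = min(max(value, in_min), in_max)     # clamp to input range
    t = (value - in_min) / (in_max - in_min)    # normalize to 0..1
    return out_min + t * (out_max - out_min)

# A random 0-100 value scaled into a -5..5 velocity, as in the macro:
raw = random.uniform(0, 100)
print(limit_scale(raw, 0, 100, -5, 5))

print(limit_scale(50, 0, 100, -5, 5))    # midpoint maps to 0.0
```

Chaining the random generator through this remap is what gives each particle a velocity somewhere in the -5 to 5 range instead of 0 to 100.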

If we jump back out of our user actor we can see how our input parameters are then passed to the 3D particle actor:

Ultimately, you’ll need to do your own experimenting with particle systems in order to get a firm handle on how they work. I found it useful to use custom actors to tidy up the patch and make sense of what was actually happening. I think the best way to work with particles is to get something up and running, and then to start by changing single attributes to see what kind of impact your change is making. If you’re not seeing any changes you may try passing your value through a trigger that’s attached to your pulse generator – remember that some attributes need to be passed to each particle that’s generated. 

Are some of these pictures too small to read? You can see larger versions on flickr by looking in this album: Grad School Documentation

One of the great joys of sharing your work is the opportunity to learn from others. John Collingswood (for more about John check out dbini industries and Taikabox) pointed out on Facebook that one of the very handy things you can do in Isadora is to constrain values by setting the range of an input parameter. For example, I could forgo the min-max system set-up with my user actor and instead scale and constrain random values in the 3D particle input. When you click on the name of an input on an actor you get a small pop-up window which allows you to specify parameters for that input’s range and starting values. This means that you could connect a wave generator (with the wave pattern set to random) to an input on a 3D particle actor and then control the range of scaled values with the 3D particle actor. That would look something like this:


Phase 2 | Halfway House

Media design is an interesting beast in the theatre. Designers are called upon to create digital scenery, interactive installations, abstract imagery, immersive environments, ghost like apparitions, and a whole litany of other illusions or optical candy. The media designer is part system engineer, part installation specialist, and part content creator. This kind of design straddles a very unique part of the theatrical experience as it sits somewhere between the concrete and the ephemeral. We’re often asked to create site specific work that relates to the geometry and architecture of the play, and at the same time challenged to explore what can be expressed through sound and light. 

One of the compelling components of ASU’s School of Theatre and Film (SoTF) is its commitment to staging new works. In addition to producing works that are tried and true, ASU also encourages its students to create works for the stage. As a part of this commitment  the department has developed a three phase program to serve the process of developing a work for full main-stage production. 
  • Phase 1 – Phase one sits between a staged reading and a workshop production of a play. This phase allows the team to focus on sorting out the nuts and bolts of the piece – what is the play / work really addressing, and what are the obstacles that need to be addressed before it moves on to the next stage of production. 
  • Phase 2 – Phase two is a workshop production environment. With a small budget and a design team, the production team creates a staged version of the work that operates within strict design constraints. Here the lighting plot is fixed, scenic elements are limited, and media has access to two fixed projectors focused on two fixed screens. This phase is less about the technical aspects of the production, and more focused on getting the work up in front of an audience so that the writer and director have a chance to get some sense of what direction to move next.
  • Phase 3 – Phase three is a full main-stage production of a work. Here the production has a full design team, a larger budget, and far fewer constraints on the implementation of the production. 
While productions can skip one of the stages, ideally they are produced in at least one phase (either one or two) before being put up as a phase three show. 
This semester I was selected to be the media designer on call for the two original works slotted in as Phase 2 productions: Los Santos, and The Halfway House. These two new works are both written by current ASU playwrights, who are invested in receiving some critical and informative feedback about their work. The beginning part of this process begins with production meetings where directors pitch their visions of the production and start the brainstorming / creating process with the designers. Ultimately, Los Santos decided against using any media for their production. Halfway House, however, did decide that it wanted some media driven moments in their production. 
My role in this process was to work with the director to find the moments where media could be utilized in the production, film and edit the content, and program the playback system for the short run of the production. After reading through the play a few times I met with Laurelann Porter, the director, to talk about how media could be used for this show. Important to the design process was understanding the limitations of the production. In the case of the Phase 2 productions, the projectors and screens are fixed. This limitation is in part a function of reducing the amount of tech-time, as well as limiting the complications imposed by a set and lighting when doing complex projection. Looking at the script I thought the best use of media would be to enhance some of the transition moments in the production. Several of the transitions in the show involve moments where there is action taking place “elsewhere” (this is the language used by the playwright). These moments seemed perfect for media to help illustrate. In meeting with the director we identified the major moments that would benefit from some media presence, and started brainstorming from there.
A large part of the production process is planning and organization. In the case of lighting, sound, and media, designers are tasked with identifying the moments when their mediums will be used, and creating a cue sheet. Cue sheets are essentially a set of discretely identified moments that allow a stage manager to give directions about how the show runs. Media, lights, and sound all have their own board operators (actual humans), and the stage manager gives them directions about when to start or stop a given cue. Creating a cue sheet with this fact in mind helps to ensure that a designer has a working understanding of how to plan the moments that are being created. My process of reading the script looked like this:
  • 1st time through – for the story and arc of the action
  • 2nd time through – identify possible moments for media
  • 3rd time through – refine the moments and start to create a working cue sheet
  • 4th time through – further refinement, label cues, look for problematic moments
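The cue sheet that comes out of those reading passes is really just an ordered list of labeled moments. As a rough illustration, here is how one might be represented in Python; the cue labels, trigger descriptions, and file names below are hypothetical, not the actual Halfway House cues:

```python
# A minimal, hypothetical cue sheet: each entry pairs a cue label with
# the moment that triggers it and the media file it plays.
cues = [
    {"cue": "V1", "trigger": "Top of show - lights fade", "file": "preshow_loop.mov"},
    {"cue": "V2", "trigger": "Transition to 'elsewhere', scene 2", "file": "transition_01.mov"},
    {"cue": "V3", "trigger": "Transition to 'elsewhere', scene 4", "file": "transition_02.mov"},
]

def print_cue_sheet(cue_list):
    """Format the cue sheet roughly the way a stage manager would call it."""
    for entry in cue_list:
        print(f"{entry['cue']:>3} | {entry['trigger']:<38} | {entry['file']}")

print_cue_sheet(cues)
```

The point of the structure is that every cue has exactly one label the stage manager can call over headset, which is what makes the later programming step straightforward.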
After talking with the director and identifying which moments were going to be mediated material, it was time to create a shooting list and plan how to use a single afternoon with the actors to record all of the necessary footage for the show. We had one afternoon with the actors to film the transition moments. I worked with the director to determine a shooting order (to make sure that we used the actors' time efficiently), and to identify locations and moments that needed to be captured. From here it was a matter of showing up, setting up, and recording. This transitioned smoothly into the editing process, which was a matter of cutting and touching up the footage for the desired look.

The School of Theatre and Film currently has two show control systems at our disposal: Dataton's Watchout 4 and TroikaTronix's Isadora. Given the timing of the Phase 2 productions, I knew that the Isadora machine would be available to me for show control. Like MaxMSP, Isadora is a node-based visual programming environment. Importantly, Isadora is truly designed with performance in mind, and has a few features that make it easier to use in a theatrical production environment.

Typically a theatrical production requires additional steps for media that are similar to the lighting process: lensing and plotting, for example. For the Phase 2 productions, the shows use a standard lighting and media plot that doesn't change. This means there's little additional work in terms of projector placement, focusing, masking, and the like that I have to do as a designer. For a larger production I would need to create a system diagram that outlines the placement of computers, projectors, cable, and other system requirements. Additionally, I would need to do the geometry to figure out where to place the projectors to ensure that I had a wide enough throw to cover my desired surfaces, and I would need to work with the lighting designer to determine where on the lighting plot there was room for this equipment. This element of drafting, planning, and system design can easily be taken for granted by new designers, but it's easily one of the most important steps in the process, as it has an effect on how the show looks and runs. With all of the physical components in place and the media assets created, the designer now looks at programming the playback system. In the case of Isadora this also means designing an interface for the operator.
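The throw geometry mentioned above comes down to one relationship: a projector's throw ratio is its distance to the screen divided by the image width it produces. A quick sketch of that arithmetic (the 1.5:1 ratio and 10 ft screen here are illustrative numbers, not the actual Phase 2 specs):

```python
def throw_distance(image_width_ft, throw_ratio):
    """Throw ratio = distance / image width, so distance = ratio * width."""
    return throw_ratio * image_width_ft

def image_width(distance_ft, throw_ratio):
    """Invert the same relationship to find the image width at a given hang distance."""
    return distance_ft / throw_ratio

# Example: a projector with a 1.5:1 throw ratio must hang 15 ft back
# to fill a 10 ft wide screen.
print(throw_distance(10, 1.5))  # 15.0
print(image_width(15, 1.5))     # 10.0
```

In practice you work this both directions: sometimes the screen size is fixed and you solve for distance, and sometimes the only available hanging position is fixed and you solve for how big the image can be.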
One of the pressing realities of designing media for a theatrical installation is the need to create a playback system knowing that someone unfamiliar with the programming environment will be operating the computer driving the media. ASU's operators are typically undergraduate students who may or may not be technical theatre majors. In some cases an operator may be very familiar with a given programming interface, while others may never have run media for a show. Theatre in an educational institution is a wonderful place for students to learn lots of new tools and get their feet wet with a number of different technologies. In this respect I think it's incumbent upon the designer to create a patch with an interface that's as accessible as possible for a new operator. In my case, each moment in the show where there is media playing (a cue) has a corresponding button that triggers the start, playback, and stop for the given video.
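The one-button-per-cue idea can be sketched outside of Isadora as a simple dispatch table. This is only an illustration of the interface logic, not how the actual Isadora patch is built (in Isadora this mapping is wired up with actors, and the button labels and file names below are hypothetical):

```python
# Each operator-facing button maps to exactly one action: start one clip.
# Everything else (routing, fades, stopping the previous clip) is hidden
# behind that single trigger.
def play(filename):
    print(f"PLAY {filename}")

buttons = {
    "GO V1": lambda: play("transition_01.mov"),
    "GO V2": lambda: play("transition_02.mov"),
}

def press(button_label):
    """The operator only ever presses a labeled button."""
    buttons[button_label]()

press("GO V1")  # prints: PLAY transition_01.mov
```

The design choice is that the operator's mental model stays identical to the stage manager's call: hear "media cue V1, go," press the button labeled V1.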

Media is notoriously finicky in live performance. It can be difficult to program, washed out by stage lights, perform poorly if it's not encoded properly, or run into any host of other possible problems. In the case of Halfway House, the process went very smoothly. The largest problem had more to do with an equipment failure that pushed back equipment installation than with the editing or programming process. While this is a simple execution of using media in a production, it was valuable for a number of the individuals involved in the process: the director, lighting designer, sound designer, and stage manager, to name only a few. There are large questions in the theatre world about the role of media in production. Is it just fancy set dressing? How is it actively contributing to telling the story of the show? Is it worth the cost? Does it have a place in an idiom largely built around the concept of live bodies? And the list goes on. I don't think that this implementation serves to address any of those questions, but for the production team it did start the process of demystifying the work of including media in a production, and that's not nothing.

Tools Used
Programming and Playback – Isadora | TroikaTronix
Projector – InFocus HD projector
Video Editing – Adobe After Effects, Adobe Premiere
Image Editing – Adobe Photoshop
Filming / Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects