Category Archives: Isadora

Shuffling Words Around | Isadora

About a month ago I was playing about in Isadora and discovered the Text/ure actor. Unlike some of the other text display actors, this one hides a secret: it lets you paste in a whole block of text and then display that text one line at a time. Why do that? Well, that's a fine question, and at the time I didn't have a good reason to use this technique, but it seemed interesting and I tucked it into the back of my mind. Fast forward to today, when a post on the Facebook group Isadora User Group (London) asked for help shuffling through a long list of words and displaying them one at a time.


And that's when I remembered the secret power of our friend the Text/ure actor. Taking Raphael's case as an example, let's look at how we might solve this problem in Izzy.

First off, we need to format our list of words. For the sake of simplicity I'm going to use a list of 10 words instead of 100 – the method is the same for any size list, but 10 will be easier to work with in this example. Start by firing up your favorite plain text editor. I'm going to use TextWrangler as my tool of choice on a Mac; if you're on a PC I'd suggest looking into Notepad++.

In TextWrangler I'm going to make a quick list of words, making sure that there is a carriage return between each one – in other words, I want each word to live on its own line. So far my sample text is just ten words, one per line.


Boring, I know, but we’re still just getting started.
Next I'm going to open up Isadora and create a new patch. To my programming space I'm going to add the Text/ure actor.


So far this looks like any other actor with inlets on the left, and outputs on the right. If we look closely, however, we’ll see a parameter called “line” that should catch our attention. Now for the magic. If we double click on the actor in the blue space to the right of our inlets, we suddenly get a pop up window where we can edit text.

 


Next let's copy and paste our words into this pop-up window. Once all of your text has been added, click "OK."


Great. Now we have our text loaded into the Text/ure actor, but we can’t see anything yet. Before we move on, let’s connect this actor to a projector and turn on a preview so we can get a sense of what’s happening. To do this start by adding a Projector actor, then connecting the video outlet of the Text/ure actor to the video inlet of the Projector.


Next, show stages – you can do this from the menu bar, or with the keyboard shortcut Command-G. If you're already connected to another display then your image should show up on that display device. If you'd like to see only a preview window you can force a stage preview with the keyboard shortcut Command-Shift-F.


Alright, now we’re getting somewhere. If we want to change what text is showing up we change the line number on the Text/ure actor.


Alright. So now to the question of shuffling through these different words. In Raphael's original post, he was looking not only to be able to select different words, but also to have a shuffling method (and I'm going to assume that he doesn't want words to repeat). To crack this nut we're going to use the Shuffle actor and some logic.

Let’s start by adding a shuffle actor to our patch, and let’s take a moment to look at how it works.


Our Shuffle actor has a few parameters that are going to be especially important for us – min, max, shuffle, and next. Min, like the label says, is the lowest value in the shuffle stack; max is the highest value. Shuffle will reset our counter and reshuffle our stack. The next trigger will give us the next number in the stack. On the outlet side of our actor we see remaining and value. Value is the shuffled number that we're working with; remaining is how many numbers are left. If we think of this as a deck of cards then we can start to imagine what's happening here. On the left, shuffle is like actually shuffling the deck. Next is the same as dealing the next card. On the right, value would be the face value of the card dealt, while remaining is how many cards are left in the deck.

Alright already, why is this important?! Well, it's important because once we get to the end of our shuffled stack we can't deal any more cards until we re-shuffle the deck. We can avoid this problem by adding a Comparator actor to our patch. The Comparator is a logical operator that compares two values and tells you when they match (true) and when they don't (false).


To get our logic working the way we want let’s start by connecting the Shuffle’s Remaining value to the value2 of the Comparator. Next we’ll connect the true trigger from the Comparator back to the Shuffle inlet on the Shuffle actor.


Great, now we've made a small feedback loop that automatically reshuffles our deck when we have used all of the values in our range. Now we can connect the value outlet of the Shuffle actor to the line input of the Text/ure actor.

There we have it. Now the logic of our shuffle and comparator will allow us to keep running through our list of words, which are in turn sent to our projector.
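If it helps to see the shuffle-and-reshuffle logic outside of Izzy, here is a minimal Python sketch of the same idea. It is only an illustration of the deck metaphor described above, not how Isadora implements the Shuffle actor, and the word list is an invented stand-in for your own.

```python
import random

class ShuffleStack:
    """Rough analogue of the Shuffle actor: deal the values min..max in a random
    order, and reshuffle automatically when nothing is left, which is the job
    our Comparator feedback loop does in the patch."""

    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum
        self.stack = []
        self.reshuffle()

    def reshuffle(self):
        self.stack = list(range(self.minimum, self.maximum + 1))
        random.shuffle(self.stack)

    def next(self):
        if not self.stack:           # remaining == 0, so the comparator fires...
            self.reshuffle()         # ...and re-triggers the shuffle inlet
        return self.stack.pop()      # the "value" outlet: our next line number


words = ["apple", "banana", "cherry", "drum", "echo",
         "fox", "grape", "harp", "igloo", "jump"]    # a stand-in ten-word list
stack = ShuffleStack(1, len(words))

for _ in range(25):                  # deal past the end to show the reshuffle
    line = stack.next()
    print(line, words[line - 1])     # line number -> word, like Text/ure's line input
```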


TouchOSC | Serious Show Control

I know. You love your iDevice / Android. You love your phone, your tablet, your phablet, your you name it. You love them. Better yet, you've discovered Hexler's TouchOSC and the thought of controlling your show / set / performance set you on fire – literally. You were beside yourself with glee and quickly set yourself to the task of triggering things remotely. Let's be honest, it's awesome. It's beyond awesome; in some respects there are few things cooler than being able to build a second interface for a programming environment that works on your favorite touch screen device. But before we get too proud of ourselves, let's have a moment of honesty. True honesty. You've dabbled and scrambled, but have you ever really sat down to fully understand all of the different kinds of buttons and switches in TouchOSC? I mean really looked at them and thought about what they're good for? I know, because I hadn't either. But it's time, and it's time in a big way.

First things first, make sure you download a copy of the TouchOSC editor for your OS of choice. If you happen to be using Windows, save yourself a huge hassle and make sure that you download the version of the editor (32 or 64 bit) that corresponds to the version of Java that you're running – otherwise you'll find yourself unable to open the editor and be grumpy. Great. Now in the editor create a new control setup. Make sure you're configured to work on the device that you're planning on using the most. In my case I'm working with an iPad layout, and I prefer to use my iPad in a horizontal orientation most of the time for show control. I'm also only working with one page for now, and happy to leave it named "1". You'll notice that the box next to auto for OSC is checked – we'll leave that alone for now. Alright, now we're going to do something that most of us have never done.

In your empty control panel, right click and add one of every different kind of object. Yep. Add one of each so we can look at how they all work. You might choose to skip the repetition of vertical and horizontal sliders – that's a-okay, but make sure you pick one of them. By the end you should have one of every kind of object scattered across your layout.


That's certainly not sexy, but it's going to teach you more than you can imagine. Next, upload that interface to your device of choice. If you've never uploaded a new interface to TouchOSC you'll need to know the IP address of your computer. If you're using a Windows machine you can use the command prompt and ipconfig to find this information; on a Mac use the Network pane in System Preferences. Next you'll want to transfer this layout to your device. Hexler has a wonderful piece of documentation about how to get this done, and you can read it here.

Next take a moment to read through what all of those lovely widgets do and how they talk to your programming environment. After that the real fun starts. Start listening to your network with your programming environment of choice, and look at what kinds of messages you get from all of these different kinds of buttons and sliders.
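If the programming environment you reach for happens to be plain Python, a quick way to eavesdrop on the layout is the python-osc library. This is only a sketch for inspecting traffic; the listening port (8000) is an assumption, so match it to whatever outgoing port you've set in TouchOSC.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def print_message(address, *args):
    # TouchOSC sends an address like /1/fader1 plus one or more values.
    print(f"{address} -> {args}")


dispatcher = Dispatcher()
dispatcher.set_default_handler(print_message)   # catch every control on the layout

# 8000 is a placeholder; use the outgoing port configured on your device.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
print("Listening for TouchOSC messages on port 8000...")
server.serve_forever()
```

Move every control on the layout once and watch what arrives: the sliders send single floats, the XY pad sends a pair, and the addresses tell you exactly which widget you touched.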

Isadora

If you’re working with Troikatronix Isadora you can start to see the signal flow coming from your layout by first going to the communications drop down menu, and then selecting “Stream Setup.”


Next you’ll want to select “auto detect input.”


Now you can start moving sliders and toggling buttons so you can record their address. Make sure that you select “Renumber Ports” and click “OK” on the bottom of the page.


Now you're ready to use the OSC Listener actor to let those inputs drive any number of elements inside of your scene. You'll notice that the "channel number" on the listener actor corresponds to the port number in the Stream Setup dialog.


 

Now you’re off to the races. Do something exciting, or not, but play with all of the buttons, sliders, and gizmos so you know how they all work. For extra credit, remember that you can also send messages back to your layout. To do this you need to use the OSC Transmit actor. You’ll also need to know the IP address of your device, and the port that you’re transmitting to – you can find all of this information on the same page in TouchOSC where you initially set your target IP address.

Quartz Composer

If you're working with Apple's Quartz Composer you have to do a little more work to get your OSC stream up and running. For starters, you'll need to know a little bit more about your OSC messages. You'll notice in the TouchOSC editor that each object has an address associated with it. For example, my fader is "/1/fader1". To TouchOSC this reads as fader 1 on page 1 of your layout. In order to read information from this slider in Quartz, we'll need to know this address, along with the type of information that we're sending. In the case of a slider we're sending a float (a number with fractional parts). To get started let's open up a new Quartz Composer patch, and add an "OSC Receiver" from the library.


Now let’s take a closer look at that node. If we look at the inspector, and at the settings page we can see lots of useful information.


Here we can see what port we're set to receive information on by default, as well as an example of how we need to format the keys from our layout so Quartz can properly receive the signals coming over the network. Let's set up our fader to be properly decoded by Quartz. First we'll need to remove the test key. Next we'll want to add a new key by typing /1/fader1 in the dialog box and designating it as a float. Finally, click the plus sign to add this key to the receiver. I usually add something else to my Quartz patch to make sure that I'm passing values through my network, but this is optional.
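If you'd like to confirm the key is wired up without reaching for your device, you can also fire a test value at the composition from a few lines of Python using the python-osc library. The address /1/fader1 comes from the layout above; the IP and port here are placeholders, so use whatever port the OSC Receiver's settings page shows.

```python
import time
from pythonosc.udp_client import SimpleUDPClient

# Placeholder target: localhost, on whatever port the OSC Receiver is listening to.
client = SimpleUDPClient("127.0.0.1", 8000)

# Sweep the fader from 0.0 to 1.0 so the change is easy to spot in the viewer.
for i in range(101):
    client.send_message("/1/fader1", i / 100.0)   # a single float, matching the key's type
    time.sleep(0.02)
```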


There you go. Now, to add the other buttons and sliders from your layout, you'll need to similarly add keys for each of the buttons, sliders, and gizmos. Remember that you can find this information in your TouchOSC editor for this layout.


Now you're cooking with gas. Experiment with your layout, and with how you can use these buttons and sliders to drive your Quartz Composer creations. For extra credit, remember that you can also transmit back to your device. In this case you'll need to use the "OSC Sender" object in Quartz. You'll need to know the IP address of your target device, as well as the port that you're communicating on. Have fun, and make something interesting… or if nothing else, learn something along the way.

TouchDesigner

If you're working with Derivative's TouchDesigner you have two options for receiving OSC information – CHOPs or DATs. Both of these methods work well, but each lends itself to different situations.

Let's start by looking at the OSC In CHOP. First we need to specify what port we want to listen to. Take a look at the TouchOSC layout to make sure the ports and addresses match. Then start moving sliders and buttons to see them appear in TouchDesigner.


To use these to drive various parts of your network you can either use a Select CHOP or reference the channels with expressions.
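As a rough example of the expression route, you can reference an OSC In CHOP's channels directly in a parameter expression. The operator name oscin1 and the channel name below are assumptions; channel names typically mirror the OSC address with the leading slash dropped, so check the CHOP viewer for what yours are actually called.

```python
# In a parameter expression (for example, a level or a transform value):
op('oscin1')['1/fader1']

# The same channel remapped to -1 to 1, since TouchOSC faders send normalized 0-1 values:
op('oscin1')['1/fader1'] * 2 - 1
```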

Next let’s look at what we see coming from the OSC In DAT.

Here, instead of changing channel values, we see a table with a message header and our float values arriving as individual messages. Depending on the circumstance, one or the other of these methods is going to help you drive your network.

For extra credit, use the OSC Out CHOP to push values back to your TouchOSC layout – remember that you'll need the IP address of your device, the port it's receiving messages on, and you'll need to format the address of the message correctly so that it corresponds to the right slider or button. Have fun learning and playing with all of the different kinds of controls in your layout.

Sending and Receiving TouchOSC Values in Isadora

Sometime last year (in 2012) I came across Graham Thorne's instructional videos about how to transmit data from Isadora to TouchOSC. Here's Part 1 and Part 2 – if this is something that interests you, I'd highly recommend that you watch these two videos first.

While I didn't have a TouchOSC project at the time, it got me thinking about how interfaces communicate information about what's happening in a patch, and how that information is communicated to the operator or user of a given system. This year I'm working on the thesis project of Daniel Fine (an MFA student here at ASU), and one of the many challenges we're bound to face is how to visualize and interact with a system that's spread across multiple computers and operating systems, and that controls a variety of different systems in the installation / performance space.

To that end, I thought that it would be important to start thinking about how to send control data to a TouchOSC interface, and how to then ensure that we can see relationships between different control values in a given control panel. That all sounds well and good, but it's awfully vague. A concrete exploration of this kind of concept was what I needed in order to more fully wrap my head around how the idea could be exploited in a performance setting.

In order to do this I decided that I wanted to accomplish a simple task with a TouchOSC control panel. On a simple panel layout I wanted the position of Slider 1 to inversely change the position of Slider 2, and vice versa. In this way, moving Slider 1 up moves Slider 2 down, and moving Slider 2 up moves Slider 1 down. In a performance setting it's unlikely that I'd need something this simple, but for the sake of testing an idea this seemed like it would give me the kind of information that I might need.

First, let's look at the whole patch.


The setup for this starts by configuring Isadora to receive data from TouchOSC. If you're new to this process, start by reading this post (or at least the Stream Set-Up section) to learn about how to start receiving data in Isadora from TouchOSC. We're going to use the following actors to make this happen:

  • OSC Listener
  • Limit Scale Value (there are other ways to scale values, I just like this method as a way to clearly see what values you’re changing and in what way)
  • OSC Transmitter

Once you have your connections between TouchOSC and Isadora set up, you'll want to make sure that you've isolated a single slider. We can do this by using the OSC Listener actor. The OSC Listener reports the data coming from the channel that you specify in the channel inlet on the actor, and sends the transmitted values out of its value outlet.

We have two sliders that we're working with – channels 1 and 2 respectively (they also have names, but we'll get to that later). We first want to look at channel 1. We're going to set the OSC Listener actor to channel 1 and then connect its value output to the value inlet on a Limit-Scale Value actor. The Limit-Scale Value actor allows you to change, scale, or otherwise remap floats or integers to a new range of values. This is important because TouchOSC uses normalized values (values from 0-1) for the range of the sliders in its control panels. In order to create an inverse relationship between two sliders we want to remap 0-1 so that it outputs values from 1-0. We can do that by entering the following values in the inlets on the actor:

  • limit min: 0
  • limit max: 1
  • out min: 1
  • out max: 0

The remaining step is to connect the output from our Limit-Scale Value actor to an OSC Transmit actor. The OSC Transmit actor, like its name suggests, transmits data wrapped in the OSC protocol. In order to fully understand how this data is being transmitted we need to know a few things about this actor. Looking at its components, we can see that it is made up of the following inlets:

  • UDP Addr – UDP address. This is the IP address of the computer or device that you're talking to. You can determine what this address is in TouchOSC by looking at the info panel. Imagine that this is the street name for a house that you're sending a letter to.
  • Port – this is the port on the device that you’re sending data to. It’s important that you know what port you’re trying to talk to on a given device so that your message can be parsed. If the UDP Address is the street name, the port number is akin to the house number that you’re trying to send a letter to.
  • Address – The address in the case of TouchOSC is the individual target / name of an asset that you want to change. Each of the sliders and buttons on a TouchOSC panel has a name (for example – /1/fader1); the address is how you tell Isadora which slider you want to change. You can determine these names by looking closely at your Stream Setup when you're connecting your device to Isadora. To follow with our letter-sending metaphor above, the address is the name of the person you're sending the letter to.
  • Use Type – this allows us to toggle the sending mechanism on and off.
  • Value – this is the value that we’re transmitting to our other device.

To use the OSC Transmit actor we need to fill in all of the appropriate fields with the information from our mobile device. You'll need to specify the UDP address, the port number, and the address, and connect the value out from our Limit-Scale Value actor to the value inlet of the OSC Transmit actor.
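To make the letter metaphor concrete, here is a hedged Python sketch (using the python-osc library) of the same round trip the Isadora actors perform: listen for fader 1, invert it, and transmit the result back to fader 2 on the device. The IP addresses and port numbers are placeholders; substitute the values from your device's info panel and from Isadora's Midi / Net preferences.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

DEVICE_IP = "192.168.1.50"   # placeholder: the UDP address shown in TouchOSC's info panel
DEVICE_PORT = 9000           # placeholder: the port the device listens on
LISTEN_PORT = 8000           # placeholder: the port TouchOSC sends to

client = SimpleUDPClient(DEVICE_IP, DEVICE_PORT)


def invert_fader(address, value):
    # The same math as the Limit-Scale Value actor: remap 0-1 onto 1-0...
    inverted = 1.0 - value
    # ...then send it to the other slider's address, as the OSC Transmit actor does.
    client.send_message("/1/fader2", inverted)


dispatcher = Dispatcher()
dispatcher.map("/1/fader1", invert_fader)   # only respond to fader 1

server = BlockingOSCUDPServer(("0.0.0.0", LISTEN_PORT), dispatcher)
server.serve_forever()
```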

In this test I started by having fader1 drive fader2. Once I got this working, I then repeated all of the steps above for the other fader – if you look closely at the full patch this will make more sense. The resulting interaction: moving one fader now drives the other in the opposite direction.

 

Cue Building for Non-Linear Productions

The newly devised piece that I've been working on here at ASU finally opened this last weekend. Named "The Fall of the House of Escher," the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of M.C. Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of the inspiring materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside of the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times, and on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will be unseen.

This presents a tremendous challenge to any designer dealing with traditionally linear storytelling technologies – lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and by the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways through a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a matter of moving from room to room. This approach was made easier with the addition of a convention for the "choice" moments in the play, when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for an easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – in that establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton's Watchout was largely out of the picture. It was for this reason that I settled on using TroikaTronix Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz integration, and non-traditional playback methods; methods that would prove to be essential in this process.

In building this navigation method it was first important to establish the locations in the house, and to create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.
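Isadora patches don't translate neatly to text, but the map underneath them does. Something like the sketch below, with invented room names standing in for the show's actual locations, is essentially what the paper maps and the custom navigation actor encoded: which locations touch which, and where we are at any given moment.

```python
# Hypothetical room map: each location lists the locations it connects to.
HOUSE = {
    "foyer":   ["study", "parlor"],
    "study":   ["foyer", "library"],
    "parlor":  ["foyer", "choice"],
    "library": ["study", "choice"],
    "choice":  ["study", "parlor", "library"],   # the space outside the normal rooms
}

current_room = "foyer"


def travel(destination):
    """Move to a new location, but only if the map says the rooms connect."""
    global current_room
    if destination not in HOUSE[current_room]:
        raise ValueError(f"No path from {current_room} to {destination}")
    current_room = destination
    print(f"Now in: {current_room}")   # in the show, this is where a scene jump would fire


travel("parlor")
travel("choice")
```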


The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, this challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to complex questions of how to solve a problem like non-linear storytelling.

Custom Quartz Compositions in Isadora

The What and Why

The more I work with Isadora, the more I feel like there isn't anything it can't do. As a programming environment for live performance it's a fast way to build, create, and modify visual environments. One of the most interesting avenues for exploration in this regard is working with Quartz Composer. Quartz is a part of Apple's integrated graphics technologies for developers and is built to render both 2D and 3D content using the system's GPU. This, for the most part, means that Quartz is fast. On top of being fast, it gives you access to GPU-accelerated rendering, making for visualizations that would be difficult if you were only relying on CPU strength.

Quartz has been interesting to me largely because it offers quick access to a GPU-accelerated, high-performance rendering environment capable of 2D, 3D, and transparency. What's not to love? As it turns out, there's a lot to be challenged by in Quartz. Like all programming environments it's rife with its own idiosyncrasies, idioms, and approaches to the rendering process. It's also a fair dose of fun once you start to get your bearings.

Why does all of this matter? If you purchase the Isadora Core Video upgrade you have access to all of the Core Image processing plugins native to OS X. In addition to that, you're now able to use Quartz Composer patches as actors in Isadora. This makes it possible to build a custom Quartz Composer patch and use it within the Isadora environment. Essentially this opens up a whole new set of possibilities for creating visual environments, effects, and interactivity for the production or installation that you might be working on.

Enough Already, Let’s Build Something

There are lots of things to keep in mind as you start this process, and perhaps one of the most useful guidelines I can offer is to be patient. Invariably there will be things that go wrong or misbehave. It's the nature of the beast; paying close attention to the details of the process is going to make or break you when it all comes down to it in the end.

We’re going to build a simple 3D Sphere in Quartz then prep it for control from Isadora. Easy.

Working in Quartz

First things first, you'll need Quartz Composer. If you don't already have it installed, check out I Love QC's video about how to download it:

The next thing we’re going to do is to fire up QC. When prompted to choose a template, select the basic composition, and then click “Choose.”


One of the first things we need to talk about is what you're seeing in the Quartz environment. The grid-like window that you start with is your patch editor. Here you connect nodes in order to create or animate your scene.


You should also see a window that’s filled with black. This is your “viewer” window. Here you’ll see what you’re creating in the patch editor.


Additionally, you can open up two more windows by clicking the corresponding icons in the patch editor. First find the button for the Patch Library, and click it to open up a list of nodes available for use within the network.

The Patch Library holds all of the objects that are available for use within the Quartz editor. While you can scroll through all of the possible objects when you’re programming, it’s often more efficient to use the search box at the bottom of the library.


Next, open up the Patch Inspector. The Patch Inspector lets you see and edit the settings and parameters for a given object.


Let's start by making a sphere. In the Patch Library search for "sphere" and add it to your patch. Out of the gate we'll notice that this sucks. Rather, it doesn't currently look interesting, or like a sphere for that matter. What we're currently seeing is a sphere rendered without any lighting effects. This means that we're only seeing the outline of the sphere on the black background.


This brings us to one of the programming conventions in Quartz. In Quartz we have to place objects inside of other components in order to tell QC that we want a parent’s properties to propagate to the child component.

To see what that means let’s add a “lighting” patch to our network. Congratulations, nothing happened. In order to see the lighting object’s properties change the sphere, we need to place the sphere inside of that object. Select the sphere object in the editor, Command-X to cut, double click on the Lighting object, then Command-V to paste.


This is better, but only slightly.


Let’s start by changing the size properties of our sphere. Open the Patch Inspector and click on the Sphere object in the editor. Now we can see a list of properties for our Sphere. Let’s start by adjusting the diameter of our sphere. I’m going to change my diameter to .25.


Next, select "Settings" from the drop-down menu in the Patch Inspector. Here I'm going to turn up the number of subdivisions of my sphere to give it a smoother appearance.


With our sphere looking pretty decent, I want to add some subtle animation to give it a little more personality. We can do this by adding an LFO (low-frequency oscillator). We'll use our LFO to give our sphere a little up-and-down motion. In the Patch Library search for LFO and add it to your editor next to your sphere.


Next click the “Result” outlet on the “Wave Generator (LFO)” and connect it to the “Y Position” inlet on the sphere.

Wonderful… but this is going to make me sea sick.

Next we're going to make some adjustments to the LFO. With your Patch Inspector open, click on the LFO and make the following changes:

Period to 2
Phase to 0
Amplitude to .01
Offset to 0
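If it helps to see what those numbers are doing, the Wave Generator's output is close to the expression in the sketch below. This is a generic sine LFO, not Quartz's documented internals: with a period of 2 seconds and an amplitude of .01, the sphere's Y position drifts roughly a hundredth of a unit above and below its resting point every two seconds.

```python
import math

def lfo(t, period=2.0, phase=0.0, amplitude=0.01, offset=0.0):
    # A generic sine LFO: one full cycle every `period` seconds,
    # swinging +/- `amplitude` around `offset`.
    return offset + amplitude * math.sin(2 * math.pi * (t / period) + phase)

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(t, round(lfo(t), 4))   # roughly 0, 0.01, 0, -0.01, 0 over one cycle
```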


Now you should have a sphere that’s very gently bouncing in the space.

Next let’s return to the parent lighting patch to make some changes to the lighting in this environment. We can get back to the parent either by clicking on the button “edit parent” or by clicking on the position in the string of objects where we want to go.


In the root patch let’s click on the lighting object and change some parameters:

Material Specularity to 0.1
Material Shininess to 128
Light 1 Attenuation to .2
Light 1 X Position to -0.25
Light 1 Y Position to 0.5
Light 1 Z Position to 1


Excellent. We should now have a sphere gently bobbing in space with a light located just barely to the left, up, and away (as a note these are locations in relation to our perspective looking at the sphere).

At this point we could leave this as it is and open it in Isadora, but it wouldn’t be very exciting. In order for Isadora to have access to make changes to a QC patch we have to “publish” the inputs that we want to change. In other words, we have to choose what parameters we want to have access to in Isadora before we save and close our QC patch.

I’m thinking that I’d like this sphere to have a few different qualities that can be changed from Isadora. I want to be able to:

  • Change the Lighting Color (Hue, Saturation, and Luminosity as separate controls)
  • Change the position of the light
  • Change the Sphere Size

In QC, in order to pass a value to an object, the parameter in question needs to be published from the root patch. This will make more sense in a second, but for now let's dive into making some parameters available to Isadora. First up, we're going to add an HSL Color patch to our editor. This is going to give us the ability to control color as hue, saturation, and luminosity as individual parameters.

Connect the Color outlet of the HSL to the Light 1 Color inlet on the lighting object.


Now let’s do some publishing. Let’s start by right clicking on the HSL object. From the pop-up menu select “Publish Inputs” and one at a time publish Hue, Saturation, and Luminosity. You’ll know a parameter is being published if it’s got a green indicator light.


Next, publish the X, Y, and Z position inputs for the lighting object. This time make sure you change the names to Light X Pos, Light Y Pos, and Light Z Pos as you publish the parameters.


At this point we've published our color values and our position values, but only for the light. I still want to be able to change the diameter of the sphere from Isadora. To do this we need to publish the diameter parameter from the "sphere" object, then again from the lighting object.

First double click on the lighting object to dive inside of it. Now publish the diameter parameter on the sphere, and make sure to name it “Sphere Diameter.” When you return to the root patch you’ll notice that you can now see the “Sphere Diameter” parameter.


We now need to publish this parameter one more time so that Isadora will be able to make changes to this variable.

Here we need to pause to talk about good housekeeping. Like all things in life, the more organized you can keep your patches, the happier you will be in the long run. To this end we're going to do a little input splitting, organizing, and commenting. Let's start by right clicking anywhere in your patch and selecting "Add Note." When you double click on this sticky note you'll be able to edit the text inside of it. I'm going to call my first note "Lighting Qualities."

Next I'm going to go back to my HSL, right click on the patch, select "Input Splitter," and choose Hue. You'll notice that you now have an input for Hue that's separate from the HSL. Repeat this process for Saturation and Luminosity. I'm going to do the same thing to the lighting position variables that are published. Next I'm going to make another note called "Sphere Qualities," split my sphere diameter input, and drag it inside of that note. When I'm done, my published inputs are all split out and grouped under their notes.


Right now this seems like a lot of extra work. For something this simple, it sure is. The practice, however, is important to consider. In splitting out the published inputs, and organizing them in notes we can (at a glance) see what variables are published, and what they’re driving. Commenting and organizing your patches ultimately makes the process of working with them in the future all the easier.

With all of our hard work done, let’s save our Quartz patch.

Working in Isadora

Before we fire up Isadora it’s important to know where it looks to load quartz patches. Isadora is going to look in the Compositions folder that’s located in the Library, System Library, and User Library directories. You can tell Isadora to only look in any combination of these three at start up. Make sure that you copy your new quartz composition into one of those three directories (I’d recommend giving your custom quartz comps a unique color or folder to make them easier to find in the future).

With your QC patch in place and Isadora fired up, let's add our custom patch to a scene. Double click anywhere in the programming space and search for the name of your patch. I called mine Simple Sphere. We can now see that we have our composition with all of the variables we published in QC.


We can see what our composition looks like by adding a CI Projector and connecting the image output from our QC actor to the image inlet on the projector. Let's also make sure that we set the CI Projector to keep the aspect ratio of our image.


When you do this you should see nothing. What gives?!
If you look back at your custom actor you'll notice that the diameter of the sphere is currently set to 0. Change that parameter to 0.1, or any other size you choose.


You should now see a dim floating sphere. What gives?!
If you look at the position of your light you’ll notice that it’s currently set to 0,0,0, roughly the same location as the sphere. Let’s move our light so we can see our sphere:

Light 1 X Position to -0.25
Light 1 Y Position to 0.5
Light 1 Z Position to 1


If you’re connected to an external monitor or projector you’ll also want to make sure that you set the render properties to match the resolution of your output device:


There you have it. You’ve now built a custom quartz patch that you can drive from Isadora.

Understanding Accelerometer Data | Isadora

This summer, as I'm thinking about future production work for the coming years, I've started to consider what kind of live data I want to be able to use in the context of live performance. To that end, one of the more interesting sensors worth examining is the accelerometer. While there are lots of ways to work with these sensors, a simple way to get started is to use an iPhone or iPod touch to broadcast its data over a wireless network. In a project that I tackled in the spring of 2013 I used this technique when working with TouchDesigner on a piece of installation art.

In continuing to think about (and work with) accelerometer data it’s important to look at what information you’re actually getting from the sensor. I’ve started exploring this process by creating a simple Isadora patch to visualize this flow of information, and to document how the data corresponds to the motion of an iPod touch or iPhone.

TouchOSC to Isadora

To start we first need to capture the OSC data with Isadora. I'm using TouchOSC for this process. With TouchOSC installed on your mobile device, you first need to make sure that you've identified your IP address and chosen a port number that corresponds to Isadora's incoming OSC port number (found in Isadora's preferences under Midi / Net). Next we need to ensure that TouchOSC is broadcasting the accelerometer data. You can do this by first tapping on the "Settings" icon in the upper corner of the TouchOSC panel.

From here select "Options."

Under options we need to make sure that “Accelerometer (/xyz)” is enabled.

Back in Isadora, provided that our IP address and ports are correct, we now only need to set up the communications stream properties. To do this we'll start by clicking on the "Communications" menu item and selecting "Stream Setup" from the drop-down list.

First we need to make sure that we’ve selected the “Open Sound Control” stream, then clicked the “Auto-Detect Input” box. At this point you should now see three live data points under the column labeled “data.” In order to be able to work with this data in the Isadora patching window we need to make sure that we’ve clicked the box to enable the port, and changed the number from 0 to 1. Once this is all set we can click on the “ok” button and start placing some actors.

The Big Picture

First let's take a look at the full patch and talk through what our next steps are going to be.

In order to have a fuller sense of what the accelerometer data is doing, we’re going to create a simple visualization. We’ll do this by creating a small oval for each data stream, and map their change to the vertical axis, while mapping a constant change to the horizontal axis. We’ll then set up a simple screen refresh so we can see a rough graph of this data over time that will automatically be cleared each time our visualization wraps around the screen.

This approach should allow us to see how the three data streams from the accelerometer correlate to one another, as well as giving us a chance to consider how we might work with this kind of data flow inside of an Isadora patch. We'll also have a chance to look at how to purposefully erase the stage background in Isadora, and how to create a simple system to automate that process.

Changes over Time

With our communications stream set up, and TouchOSC broadcasting to Isadora, we can now think about how to visually represent this information over time. We're going to start this process with the following flow of actors:

OSC Listener – Smoother – Shapes – Projector

Let's first look at the OSC Listener and the Smoother actors. Once we add the OSC Listener we need to change the "channel" value to "1." You may remember that in our communication stream we were seeing the data from TouchOSC labeled as "/accxyz" with live values represented as (float, float, float). As we add listener actors we'll notice that those floats correspond to channels 1, 2, and 3. This allows us to isolate the list of values without having to do any additional parsing of data. You'll also notice that I've attached the listener to a Smoother actor. The accelerometer in your iPod touch or iPhone is extremely sensitive, so you may end up with some unwanted noise in your data stream. We're going to use the Smoother to even out that noise. Connect the value outlet of the OSC Listener actor to the "value in" input on the Smoother actor. I've changed the smoothing value to .9 and set the frequency to 100 Hz. Your mileage may vary here, so keep in mind that you may need to adjust these values once you've got your whole patch up and running.
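If you're curious what a smoother like this is doing under the hood, the usual trick is an exponential moving average along the lines of the sketch below. I'm not claiming this is Isadora's exact formula; it's just the general idea of how a smoothing value of .9 tames a jittery stream while still following its overall shape.

```python
def smooth(samples, smoothing=0.9):
    """Exponential smoothing: each new reading only nudges the running value
    by (1 - smoothing), so brief spikes get averaged away."""
    value = samples[0]
    out = []
    for raw in samples:
        value = smoothing * value + (1.0 - smoothing) * raw
        out.append(value)
    return out


noisy = [0.0, 0.9, -0.8, 1.1, -1.0, 0.95, -0.9, 1.05]   # invented, jittery accelerometer-ish data
print([round(v, 3) for v in smooth(noisy)])
```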

In order to keep the values in a range that we can display on our stage, we're going to apply a little bit of scaling to the value out from the Smoother actor. To do this, click on the text "value out" on the Smoother actor, and change the limit min value to -2.5 and the limit max value to 2.5. This will constrain and scale our values slightly and ensure that any change we see from the accelerometer is visible on the stage.

Next we'll connect the "value out" of the Smoother actor to the "vert pos" (vertical position) inlet on a Shapes actor. On the Shapes actor you'll also want to change the size and the kind of shape. I've set mine to be an oval with a height of 2 and a width of 2. Next, connect the video output of the Shapes actor to the video input of a Projector actor and we should finally be able to see something for all of our hard work.

Now in the center of the stage we should see an oval that moves up and down in relation to the movement of our iPod touch.  I’ve added the video from my camera so you can see the movement of the iPod touch in relation to the movement of the oval.

While we've made some good progress, we're still only seeing one axis of data from the accelerometer, and on top of that our oval is stuck in the center of the screen. Next we'll tackle these problems so that we can see all three axes, and watch their values change over time.

First, let's start by taking the actor string we've just created and duplicating it twice. Select the whole string:

OSC Listener – Smoother – Shapes – Projector

copy it, and paste it twice. This should give us three of these strings. Next change the channel numbers on the second and third strings to be 2 and 3 respectively.

At this point you may also wish to give each string its own color so you can differentiate the axes of the accelerometer.

Let's add some horizontal motion to our ovals. We'll do this by adding a Wave Generator actor. Change the wave type to sawtooth, and the frequency to 0.1 Hz. Next connect the value outlet to the "horz pos" (horizontal position) inlets of the three Shapes actors.

You should now see the ovals representing all three axes moving across the screen. At this point we've connected our iPod touch's accelerometer to Isadora, given each axis its own shape, mapped the change in accelerometer data to vertical position, and mapped the horizontal position of our ovals to a single wave generator so we can see the change of accelerometer data over time.

Our next steps here should help us see this information a little more clearly. Before we start we first need to know a little bit more about one of the actors in Isadora. The Stage Background actor allows us to play with some interesting feedback effects in Isadora. Useful for us today is the value for erasing the screen. By leaving this value turned off we will be able to give our ovals trails. That said, we don't want to totally turn off screen erasing, otherwise we'll quickly lose track of our ovals as they wrap from one side of the screen to the other. In order to solve this problem we'll use a few Comparator actors, a wave generator, and a few Trigger Value actors.

I'm sure there's a more elegant approach to this challenge; this, however, was my 30-minute solution. First, I wanted to be able to see the trails of the ovals we've just created. I also wanted these trails to disappear as the ovals wrapped from the right side of the stage to the left. While you could do this by adding another wave generator, the simplest solution (in my opinion) is to use the same wave generator that's driving the horizontal movement of the ovals. Essentially, the nuts and bolts of my approach is this: use the wave generator driving the horizontal movement of the ovals to toggle the screen erase function from off to on and back to off.

To do this we’ll complete the following steps:

  • First create a comparator actor and connect the value outlet from our wave generator to the value 1 inlet on the comparator actor. Next ensure that the comparison method is changed to lt (less than). Now we’ll change the second value of the comparator to 0.1. This means that when our wave generator is publishing a value of less than 0.1 to our comparator we’ll see a “true” trigger.
  • Next we’ll create an envelope generator. Connect our comparator’s “true” outlet to the “trigger” inlet of the envelope generator. Change the envelope generator to have two segments. Rate 1 and 2 should be changed to .25 seconds.
  • Now create another comparator actor. Connect the “output” value from the envelope generator to the “value 1” inlet of the new comparator actor. Leave the comparison method to eq (equal), and leave the “value 2” at 0.
  • Create two Trigger Value actors. One trigger value should be set to 0, while the other should be set to 1. Connect the comparator actor's true and false outlets to the trigger values 0 and 1 respectively.
  • Finally, create a stage background actor. Connect both outlets from the trigger values to the “erase” inlet on the stage background actor.
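Put another way, that chain of actors boils down to the little state machine below. It's a paraphrase in Python rather than anything Isadora-specific: the same saw wave that pushes the ovals across the stage also decides, for roughly half a second after each wrap, that the stage should erase.

```python
ERASE_HOLD = 0.5      # two envelope segments at .25 seconds each
WRAP_THRESHOLD = 0.1  # first comparator: a saw value below 0.1 means we just wrapped

erase_until = -1.0    # the moment the envelope will finish


def erase_state(saw_value, now):
    """Return 1 (erase the stage) briefly after the saw wraps around, else 0."""
    global erase_until
    if saw_value < WRAP_THRESHOLD:         # the first comparator fires "true"...
        erase_until = now + ERASE_HOLD     # ...which kicks off the envelope generator
    return 1 if now < erase_until else 0   # the second comparator picks a trigger value


# Simulate the 0.1 Hz saw wave that drives "horz pos" for a bit of patch time:
for step in range(0, 120, 5):
    t = step / 10.0
    saw = (t * 0.1) % 1.0
    print(f"t={t:4.1f}  saw={saw:.2f}  erase={erase_state(saw, t)}")
```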

All of that gives us a stage that automatically erases itself each time the ovals wrap from the right side back to the left.

From here you can start to play with your accelerometer and you’ll be able to see a visual representation of the values coming out of the sensor.

Isadora | Button Basics

In a previous post I talked about how to get started in Isadora with some basics about slider operation. I also want to cover a little bit about using buttons with Izzy. 

Buttons are very handy interface controls. Before we get started, it's important to cover a few considerations about how buttons work. When working with a physical button, like an arcade button on a MIDI controller, the action of pressing the button completes a circuit; when you release the button, you break the circuit. In Isadora, we can control what happens when we press a button. Specifically, we can control what values are transmitted when the button isn't being pressed, what values are transmitted when it is being pressed, and how the button behaves (does it toggle, or is the signal momentary). Thinking about how a button behaves will help as you start to build an interface, simple or complex.

Let’s start by experimenting with a simple implementation of this process. We’ll create a white rectangle that fills our stage, connect our shape to a projector, and finally use a button to control the intensity of the projector. 

Start by creating a new scene, and adding a “Shapes” actor and a “Projector” actor. Connect the shapes’ video outlet to the projector’s video inlet.

Next, change the width and height dimensions of the shape to 100 and 100 respectively. Remember that Isadora doesn't use pixel values, but instead works in terms of percentages. In this case a value of 100 for the height indicates that the shape should be 100% of the stage's height, and the same applies for the width value of 100.

We should now have a white box that covers the height and width of the stage so that we only see white.

Now we'll use a button to control a change in the stage from white to black. Remember that in order to start adding control elements we first need to reveal the control panel. You can do this by selecting it from the drop-down menu, using Command-Shift-C to see only the control panel, or using Control-Shift-S to see a split of the control panel and the programming space. If you've turned on the grid for your programming space you'll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You'll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

With the control panel visible, add a button. 

Once you’ve added your button to the control panel, you can change the size of the button by clicking and dragging the small white square on the bottom right of the button. 

Next, let's look at the options for the button. We can see what parameters we can control by double clicking on the button. When you do this you should see a pop-up window with the following attributes:

  • Control Title – what the control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – the font used for this control
  • Font Size – self-explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel
  • Button Text – the text displayed on the button
  • Control ID – the numerical identification number of this control
  • Off Value – the numeric value sent when the button is in the off position
  • On Value – the numeric value sent when the button is in the on position
  • Mode (Momentary or Toggle) – the mode for the button. Momentary indicates that the on value is only transmitted while the button is being pressed. Toggle indicates that the value will toggle between the on and off values with each click.
  • Don’t Send Off – prevents the button from sending the off value
  • Invert – inverts the on and off values

There are a few other options here, but they mostly have to deal with the appearance of the button. When you start thinking about how you want your control panel to look to an operator, these last parameters will be very helpful. 

For right now let’s leave the default parameters for the button’s options. Next connect the button’s control ID to the inlet on the Projector labeled “intensity.”

As we’re working on the control panel our edit mode is currently enabled which will prevent us from being able to actually click the button with the mouse. To check the controls we have two options:

  • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu.
  • We can use the option key to bypass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to click the button in the control panel. You should see the white box flash on, and back off again as you press and release the mouse button. Right now as we click the button we’re sending a value of 100 to the intensity parameter of the Projector actor. This makes the shape opaque only so long as you’re pressing the button. 

Double click the button in the control panel and check the box for "Invert." We've now inverted the message being sent from the button to the Projector. When you click the button you should now see the opposite: a white screen that flashes black, and then returns to white.

Double click on the button in the control panel, uncheck the box for “Invert.” Change the “Mode” of the button from “Momentary” to “Toggle.” Now as you click the button you should notice that it stays depressed until you click it again. This allows you to toggle between the on and off states of the button. 
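If the difference between the two modes is still fuzzy, here is a tiny sketch of the behavior in Python. It isn't Isadora's implementation, just the idea: a momentary button only reports its on value while it's held down, while a toggle flips state on each press and ignores the release.

```python
class Button:
    def __init__(self, mode="momentary", off_value=0, on_value=100):
        self.mode = mode
        self.off_value = off_value
        self.on_value = on_value
        self.state = False

    def press(self):
        if self.mode == "toggle":
            self.state = not self.state    # each click flips the stored state
        else:
            self.state = True              # momentary: on only while held
        return self.on_value if self.state else self.off_value

    def release(self):
        if self.mode == "momentary":
            self.state = False             # letting go sends the off value again
        return self.on_value if self.state else self.off_value


momentary = Button("momentary")
print(momentary.press(), momentary.release())   # 100 0  -> flashes on, then back off

toggle = Button("toggle")
print(toggle.press(), toggle.release())         # 100 100 -> stays on after the click
print(toggle.press(), toggle.release())         # 0 0     -> the next click turns it off
```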

This, obviously, is only the beginning of how to work with buttons. You might use a button to control the playback of a movie, what media is on the stage, the position of media on a stage, a jump between scenes, or any number of parameters in your patch. Knowing the basics of how buttons behave will help ensure that you can build a solid control panel that you can use during a live performance.

Isadora | Slider Basics

One of the most exciting (and also most challenging) parts of working with Isadora is thinking about how an operator is going to use your patch during a show. ASU's program focuses on the importance of programming a show with the expectation that the person running your system may, or may not, have much experience. During the tech rehearsal process, one of the media designer's responsibilities is to train the operator in basic operation and troubleshooting techniques.

While there are a wide variety of methods for controlling your system I want to take a moment to cover how you can use the Control Panel features of Isadora to create a simple custom interface. I’m also going to take a moment to talk about the different kinds of controls, how they work, and things you want to keep in mind as you’re using them. 

To get started, there are a few different ways to reveal the control panel. You can select it from the drop-down menu, use Command-Shift-C to see only the control panel, or use Control-Shift-S to see a split of the control panel and the programming space. If you've turned on the grid for your programming space you'll be able to see a distinct difference between the control panel space (on the left) and the programming space (on the right). You'll also notice that with your control panel active your actor selection bins have been replaced by control panel operators.

As you create new scenes, Isadora will start by connecting all scenes to the same control panel. There are a few different schools of thought in terms of best practice in the use of control panels. Using a single control panel for every scene means only building a single interface. As long as you’re only dealing with a limited number of simple cues this is a fine direction to head, and may be the easiest method in terms of programming. This approach can, however, get complicated very quickly if you’re triggering more than one actor per scene. In this scenario the programmer could lose track of where a button or slider is connected. This might cause unexpected playback results or could just be a source of headaches. For more complicated playback situations, you may instead elect to have separate control panels for each scene. Depending on your programming needs this may be the best way to ensure that your controls are only linked to a single scene. 

To accomplish this, you’ll need to split your control panel. Isadora gives you several visual cues to determine how a scene and control panel are linked. When you glance at your scene list you’ll notice that the bar underneath is either continuous (a single control panel) or broken (a split control panel).  

To split the control panel click between the two scenes that you wish to separate. When you see your cursor separating the two scenes right click to get a contextual menu with the option to split the control panel. You should then see that the line between the two scenes is broken. 

Let’s start by looking at a simple slider. To add a slider to your control panel start by double clicking in the control panel work space. Next type in “slider” and select it when it appears in the drop down menu. 

It’s important to note that there is a difference between the 2D slider, and the regular slider. For now, we just want the “slider” control. We can learn a little more about what our slider is doing by double clicking on it. 

You should see a pop up window with lots of information about our slider:

  • Control Title – what this control is named
  • Width – how wide is this control (in pixels)
  • Height – how tall is this control (in pixels)
  • Font – what’s the font used for this control
  • Font Size – self-explanatory
  • Show Value of Linked Properties – this allows data from the patch itself to feed back into the control panel. As a note, for this to work properly, you’ll also need to enable the “Display Value” check-box (a big thank you to Matthew Haber for catching my error here)
  • Control ID – the numerical identification number of this control
  • Minimum – Sliders work on the principle that at the bottom (or left) position the control will send the number indicated in this box.
  • Maximum – At the top (or right) position the control will send the number indicated in this box.
  • Step – The counting increments for this control.
  • Display Value – Shows the current value being sent in the control panel itself.
  • Display Format – The number of decimal places displayed.
  • Color – The color of the inside of the slider.

There are a few other options here, but they’re largely aesthetic, so I’m going to skip them for now.

Let’s start by working with the default values for the slider and see how it communicates with the patch itself. 

First, we will add a trigger value to the programming space so we can see how values are transferred from the control panel to the programming environment.

Next we can connect the Control ID from the control panel to the Value inlet on the Trigger Value Actor. We can do this by clicking on the Control ID, and dragging the red line to the “value” input.

You should now see a number next to the value input that corresponds to the slider’s control ID.

As we’re working on the control panel our edit mode is currently enabled which will prevent us from being able to actually move the slider with the mouse. To check the controls we have two options:

  • We can disable edit mode by right clicking on the control panel work space and selecting “Disable Edit Mode” from the contextual menu.
  • We can use the option key to bypass the above process.

If you’re doing some extensive testing of your control panel I’d recommend that you disable edit mode. On the other hand, if you’re only testing a single slider or button, I’d recommend using the option key as a much more efficient alternative.  

Holding down the option key, you should now be able to move the slider in the control panel. You’ll notice that the value linked to the slider also changes. 

You’ll also notice that the output from the trigger value has not changed. This is because we’re only adjusting the value, but not activating the trigger. Let’s activate the trigger at the same time we’re moving the slider.

To do this we attach the Control ID to the trigger inlet on the Trigger Value Actor. This will ensure that the actor triggers at the same time that the value is changed. Now when we move the slider we can see that the output value also changes.

Now that we know how to send slider data to a trigger value we can look at something a little more interesting. We are going to start by adding a Shapes actor and connecting it to a Projector actor.

Next connect your vertical slider to the “vert pos” (Vertical Position) inlet on the shapes actor. 

Create a new slider in the control panel. Grab the small box on the bottom right corner and drag the slider to the right until you have created a horizontal slider. 

Connect your horizontal slider to the “horz pos” (Horizontal Position) inlet on the Shapes actor.


Next we need to adjust the scaling values of the Shapes actor. An actor’s inlets and outlets can often be scaled to a set range. In order to properly use our slider we’ll need to adjust the inlet scaled values on the Shapes actor. To do this click on the name of the attribute whose scaled values you’d like to adjust. Start by clicking on “horz pos.” We can see in the pop-up menu that the minimum value is currently set to −200, and the maximum value is set to 200. These values are larger than we need. 


Isadora uses a coordinate system that assumes that the middle of the stage is the origin (0,0). Further, Isadora thinks in terms of percentages rather than pixels. In the case of our horizontal slider, a positive value of 50 represents half of the total stage length, which puts us at the rightmost edge of the stage. In the case of shapes it’s also important to note that the shape’s position is relative to its center. A positive value of 50 still leaves half of our shape on the screen, no matter the dimensions of the shape.

Set the scaled values of the horizontal position to −65 and 65. Now when we drag our slider (remember to hold down the option key) we are able to move our box from all the way off the stage on the left to all the way off the stage on the right.
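If you want to sanity-check the math, here’s a quick Python sketch of the linear mapping that scaling performs, using this example’s numbers (the assumption here is that the slider’s default output range is 0–100 and that Isadora scales it linearly onto the −65 to 65 range we just set):

```python
def scale(value, in_min=0.0, in_max=100.0, out_min=-65.0, out_max=65.0):
    """Linearly map a slider value onto an inlet's scaled range."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

print(scale(0))    # -65.0 -> the box sits fully off the left edge of the stage
print(scale(50))   #   0.0 -> the box is centered on the stage
print(scale(100))  #  65.0 -> the box sits fully off the right edge
```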

Another type of slider that might be useful in this type of situation is the 2D slider. Create a new scene and split the control panel so we can see how this input control works. In the new scene add a Shapes Actor and a Projector Actor, and connect them. Now add a 2D slider to the control panel.

Double click on the 2D slider so you can see a little more about how this particular control input works. Similar to the “slider” control you can see that you can title the slider, adjust the width, height, font, and so on. You’ll notice that there’s an X Control ID and a Y Control ID. 

Next we’ll click OK and link Control ID 1 (the X control) to the “horz pos” inlet on the Shapes Actor. Now link Control ID 2 (the Y control) to the “vert pos” inlet on the Shapes Actor. Check to make sure that the horizontal and vertical inlets on the Shapes Actor are properly scaled (last time we set them to −65 and 65). 

Now the single 2D slider behaves in the same way as the two sliders we set up in the exercise above.

This, obviously, is only the beginning of how to work with Sliders and 2D Sliders. You might use a slider to control the playback position of a movie, or the position of a movie on the stage. You might use a slider to control position, zoom, rotation, width, height, really just about any kind of numerical attribute for an actor. The key things to keep in mind in this process are:

    • Knowing the range of values that your slider is transmitting
    • Knowing the scaled range of values that your actor maps those values onto
    • Knowing the control ID
    • Knowing how to connect your control panel items to Actors in your patch

Isadora | Live-Camera Input as a Mask

Back in March I had an opportunity to see a production called Kindur put on by the Italian company Compagnia TPO. One of the most beautiful and compelling effects they utilized during the show was using a live camera to create a mask that revealed a hidden color field. The technique of using a live feed in this way allows a programmer to work with smaller resolution input video while still achieving a very fluid and beautiful effect. 

This effect is relatively easy to generate by using just a few actors. An overview of the whole Isadora Scene looks like this:

To start this process we’ll begin with a Video-In Watcher actor. The video-in will be the live feed from our camera, and will ultimately be the mask that we’re using to obscure and reveal our underlying layer of imagery. This video-in actor connects to a Difference actor, which looks for the difference between two sequential frames in the video stream. This is in turn passed to a Motion Blur actor. The motion blur actor will allow you to specify the amount of accumulated blur effect as well as the decay (disappearance rate) of the effect. To soften the edges, the image stream is next passed to a Gaussian Blur actor. This stream is then passed into the mask inlet of an Add Alpha Channel actor, while the underlying geometry is passed in through the video inlet. Finally, the outlet of the Add Alpha Channel actor is passed out to a projector. 
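If it’s easier to read the chain as a series of image operations, here’s a rough OpenCV sketch of the same idea (this is an approximation of what the actors are doing, not Isadora’s implementation; the file name, blur kernel, and decay value are placeholders):

```python
import cv2
import numpy as np

# Rough approximation of the actor chain: Difference -> Motion Blur ->
# Gaussian Blur -> Add Alpha Channel -> Projector. Parameter values are
# placeholders, not Isadora's defaults.

cap = cv2.VideoCapture(0)                   # live feed (Video-In Watcher)
underlay = cv2.imread("color_field.jpg")    # the hidden layer to reveal
DECAY = 0.90                                # how quickly the motion trail fades

prev, accum = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is None:
        prev = gray
        accum = np.zeros_like(gray, dtype=np.float32)
        continue

    diff = cv2.absdiff(gray, prev)                               # Difference
    prev = gray
    accum = np.maximum(accum * DECAY, diff.astype(np.float32))   # Motion Blur
    mask = cv2.GaussianBlur(accum, (21, 21), 0)                  # Gaussian Blur

    alpha = np.clip(mask / 255.0, 0.0, 1.0)[..., None]           # Add Alpha Channel
    bg = cv2.resize(underlay, (frame.shape[1], frame.shape[0]))
    out = (bg * alpha).astype(np.uint8)                          # masked reveal

    cv2.imshow("stage", out)                                     # Projector
    if cv2.waitKey(1) == 27:                                     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```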

As a matter of best-practice I like to use a Performance Monitor actor when I’m creating a scene in order to keep an eye on the FPS count. This can also be useful when trying to diagnose what’s causing a system to slow down during playback. 

This effect works equally well over still images or video, and is certainly something that’s fun to experiment with. Like all things in live systems, your mileage may vary – motion blur and Gaussian blur can quickly become resource expensive, and it’s worth turning down your capture settings to help combat a system slow-down.

Isadora | Network Control

For an upcoming show one of the many problems that I’ll need to solve is how to work with multiple machines and multiple operating systems over a network. My current plan for addressing the needs of this production is to use one machine to drive the interactive media, and then to slave two computers for cued media playback. This will allow me to distribute the media playback over several machines while driving the whole system from a single machine. Specifically, I plan to use one Mac Pro to work with live data while slaving two Windows 7 PCs for traditionally cued media playback. The Mac Pro will drive a Barco, while the PCs each drive a Sanyo projector. This should give me the best of both worlds in some respects: distributed playback, similar to WatchOut’s approach to media playback, while also allowing for more complex visual manipulation of live-captured video. 

To make all of this work, however, it’s important to address how to get two different machines running Isadora on two different operating systems to talk with one another. To accomplish this I’m going to use an OSC Transmit actor on the Master machine, and an OSC listener on the slaved machines. 

On the Master machine the set-up looks like this:

Trigger Value – OSC Transmit

The transmit actor needs the IP address of the slaved machines, as well as a port to broadcast to. The setup below is for talking to just one other machine. In my final setup I’ll create a specialized user actor that holds two OSC Transmit actors (one for each machine) that can be copied into each scene.
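If you want to see what that message looks like outside of Isadora, here’s a minimal python-osc sketch of the sending side (the IP address, port, and OSC address below are placeholders you’d swap for your own slaved machine’s settings):

```python
# Minimal OSC sender sketch using the python-osc library (pip install python-osc).
# The IP, port, and address are placeholders - match them to the slaved machine
# and the port Isadora is listening on.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.101", 1234)  # slaved machine's IP and port
client.send_message("/isadora/1", 1)             # a simple trigger value
```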

On the slaved machines the setup takes two additional steps. First off, it’s important to determine what port you want to receive messages on. You can do that by going to Isadora Preferences and selecting the Midi/Net tab. Here you can specify what port you want Isadora to listen to. Next you need to catch the data stream. You can do this by opening up the Communications menu and selecting Stream Setup. From here make sure that you select “Open Sound Control” and click the box “Auto-Detect Input.” At this point you should see the Master machine broadcasting with a channel name, an address, and a data stream. Once this is set up, the actor patch for receiving messages over the network looks like this:

OSC Listener – Whatever Actor you Want
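The listening side, sketched with the same library, looks something like this (the port and address pattern are again placeholders; inside Isadora the Stream Setup auto-detect does this work for you):

```python
# Minimal OSC receiver sketch using python-osc - the counterpart to the sender
# above. The port and address pattern are placeholders.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_message(address, *args):
    # Stand-in for "whatever actor you want" - here we just print what arrives.
    print(address, args)

dispatcher = Dispatcher()
dispatcher.map("/isadora/*", handle_message)

server = BlockingOSCUDPServer(("0.0.0.0", 1234), dispatcher)
server.serve_forever()
```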

In my case I’ll largely use just jump++ actors to transition between scenes, each with their own movie. 

You can, of course, do much more complicated things with this setup, all depending on your programming or playback needs.