
Live Camera as Mask | TouchDesigner

Back in May I wrote a quick piece about how to use a camera as a mask in Isadora. This is a very powerful technique for working with live cameras, and no matter how many times I see it I'm still struck by how fun it is. It isn't difficult to program, and one of the questions I wanted to answer this summer was how to create a network in TouchDesigner that could accomplish the same process. Before we get started it's important to make sure that you have a camera (a web-cam is fine) connected to your computer. While you can create this same effect using any recorded footage, it's certainly more compelling and playful when you're working with a live camera. You'll also want to make sure that you know how the Feedback TOP works in TouchDesigner. If you're new to this programming environment you might take a moment to read through how to work with the Feedback TOP here.

Getting Started

We’ll start by creating a new project, and by selecting all of the standard template operators and deleting them. We can stay inside of the default container that TouchDesigner starts us off with as we create our network. Let’s start by taking a look at what we’re going to do, and the operators that are going to be involved.

Unlike other tutorials, this time we're going to work almost exclusively with Texture Operators (TOPs). The effect we're looking to create uses the feed from a live camera as a mask to hide, or reveal, another layer that could be either a video or a photo (in our case we'll work with a photo today, though it's the same process and the same TOP when working with video). To do this we first need to remove a portion of our video stream, then create a little bit of motion blur with the Feedback TOP, next composite this mask with our background layer, and finish by creating a final composite over a black background layer.

Some like it Hot

Without a Kinect we're really just approximating separation. Luckily, if you can control your light there are some ways to work around this. Essentially what we're looking to create is an image where the hottest (or brightest) portion is the subject we want to separate from the background. In another post I'll talk about some more complicated methods; for now let's look at what we can do with just a camera and some light.

We’ll start by creating a string of TOPs like this:

Movie In – Monochrome – Threshold – Feedback – Blur – Level – Composite

You'll want to make sure that you've connected the Threshold TOP to the second inlet on the Composite TOP, and set the Target TOP parameter of the Feedback operator to the Composite operator in this first portion of the string.

Remember to adjust the opacity on the Level operator to a value between 0.8 and 0.95. You will also need to spend some time adjusting the parameters of the Blur operator to fine-tune the aesthetic that you're after.
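To see why that opacity range matters, here's a minimal sketch (in plain Python, outside of TouchDesigner) of the math behind the feedback trail. Each frame the fed-back image is re-multiplied by the Level operator's opacity, so a trail decays exponentially. The 1/255 visibility floor is my own assumption (one 8-bit step), not a TouchDesigner value.

```python
# Sketch: why the Level operator's opacity stays between 0.8 and 0.95.
# Each frame the fed-back image is multiplied by the opacity again,
# so a trail decays exponentially: brightness after n frames = opacity ** n.

def frames_to_fade(opacity, floor=1 / 255):
    """Frames until a full-white trail drops below one 8-bit step."""
    frames = 0
    brightness = 1.0
    while brightness >= floor:
        brightness *= opacity
        frames += 1
    return frames

print(frames_to_fade(0.80))  # a short, snappy trail
print(frames_to_fade(0.95))  # a long, smoky trail
```

At 0.8 the trail is gone in under a second at 30 fps; at 0.95 it lingers for several seconds, which is why small changes in this one parameter change the feel of the effect so much.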


The Threshold TOP is going to be your best friend in this string. This operator helps you control how much background you throw away. By adjusting the threshold value you're controlling which pixel values get passed as white and which values are thrown away and converted to alpha. This means that as long as you can keep light on your subject, and mostly off of your background, you'll be able to isolate the subject from unwanted background pixels. This will take some adjusting, but it's well worth your time and attention. If you need a little more fine-grained control here, you can insert a Level TOP to adjust your image before it gets to the Threshold TOP.
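The per-pixel behavior is simple enough to sketch in a few lines of plain Python. This is a simplified one-channel grayscale model of what the Threshold TOP does, not TouchDesigner code; the pixel values are made up for illustration.

```python
# Sketch: per-pixel behavior of a luminance threshold, as in the
# Threshold TOP. Pixels at or above the threshold pass as opaque white;
# everything else becomes transparent (alpha 0).

def threshold_mask(pixels, threshold):
    """Map grayscale values (0.0-1.0) to (value, alpha) pairs."""
    return [(1.0, 1.0) if p >= threshold else (0.0, 0.0) for p in pixels]

# A brightly lit subject (0.9, 0.8) against a dim background (0.2, 0.1):
frame = [0.2, 0.9, 0.8, 0.1]
print(threshold_mask(frame, 0.5))
# subject pixels survive as white; background pixels drop to alpha 0
```

Raising or lowering the threshold argument is exactly the adjustment described above: it decides how much of the dimmer background gets thrown away.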

Composite Composite Composite

The final component of this network is to composite all of our images. First we'll need to add a Movie In TOP as well as a Constant TOP for our backgrounds. Next we need to add two more Composite TOPs and finally a Null. It might be useful to think of this as a string of three Composite TOPs ending in a Null, with some additional operators at each stage. First, the composite of our live camera and feedback string is going to be combined with our Movie In TOP. In the Composite TOP's parameters make sure that the Operand method is Multiply. This replaces the white values from our previous string with the pixel values from the Movie In TOP. Next we're going to composite this string with a constant. In this case I'm using a constant black background. Depending on the venue or needs of a production you might well choose another color; you can do this by adjusting the parameters of the Constant TOP. Finally we'll end the whole string with a Null.
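The two composite steps can also be sketched in the same simplified one-channel Python model: Multiply replaces the mask's white pixels with the image underneath, and the final composite fills the transparent areas with the Constant TOP's background color. Again, this is just an illustration of the pixel math, not TouchDesigner code.

```python
# Sketch: the two composite stages of the network, one channel per pixel.

def multiply(mask, image):
    """Multiply a (value, alpha) mask by image values: white reveals."""
    return [(v * i, a) for (v, a), i in zip(mask, image)]

def over_background(layer, bg=0.0):
    """Composite a (value, alpha) layer over an opaque background color."""
    return [v * a + bg * (1.0 - a) for v, a in layer]

mask = [(1.0, 1.0), (0.0, 0.0)]   # a white subject pixel, a masked-out pixel
image = [0.6, 0.6]                # the Movie In TOP's pixel values
print(over_background(multiply(mask, image)))
# the subject pixel shows the image; the masked pixel falls to black
```

Swapping `bg=0.0` for another value is the same move as recoloring the Constant TOP for a different venue.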

We've now used a live feed as a mask to reveal another video or image. Next you might consider where in these strings you could add other operators to achieve different effects or moods. Happy programming.

House of Escher | Media Design

In December of 2012 I was approached at an ASU School of Theatre and Film party and asked if I would be interested in working on a project that would begin the following semester, and premiere a new work in the Fall of 2013. As this is exactly the kind of opportunity that I came to ASU to pursue, I eagerly agreed to be a part of the project.

Some Background

ASU's School of Theatre and Film (soon to also include Dance) has a very interesting graduate school program for performers and designers. Operating on a cohort-based system, the school admits a group of performers, directors, and designers (Scenic, Costume, and Lighting) every three years. One of the other graduate programs at the school, the one in which I'm enrolled, can enroll students each year. My program, Interdisciplinary Digital Media and Performance (IDM), straddles both the School of Arts, Media and Engineering and the School of Theatre and Film. Students in my program have pursued a variety of paths, and one skill that's often included in those various paths is media and projection design for stage productions. Just today as I was watching the live web-cast of the Xbox One announcement, I was thinking to myself, "some designer planned, created, and programmed the media for this event… huh, I could be doing something like that someday."

The latest cohort of actors, designers, and directors started in the Fall of 2011, which means that the group is due to graduate in the Spring of 2014. In both the second and third years of the cohort's program they work to create a newly devised piece that's performed in one of the theatres on campus at ASU. Occasionally, this group also needs a media designer, and it's their new show for 2014 that I was asked to be a part of.

The Fall of the House of Escher

Our devising process started with some source material that we used as the preliminary research to start our discussion about what show we wanted to make. Our source materials were Edgar Allan Poe's The Fall of the House of Usher, M.C. Escher, and Quantum Mechanics. With these three pillars as our starting point we dove into questions of how to tackle these issues, tell an interesting story, and meet the creative needs of the group.

One of our first decisions focused on the structure of the show that we wanted to create. After a significant amount of discussion we finally settled on a Choose Your Own Adventure (CYOA) kind of structure. This partially arose as a means of exploring how to more fully integrate the audience experience with the live performance. While it brought significant design limitations and challenges, it was ultimately the methodology the group decided to pursue.

Shortly after this we also settled on a story as a framework for our production. Much of our exploratory conversation revolved around the original Poe work, and it was soon clear that the arc of The Fall of the House of Usher would be central to the story we set out to tell. The wrinkle in this simple idea came as our conversations time and again came back to how Poe and Quantum Mechanics connect with one another. As we talked about parallel universes and the problems of uncertainty, we decided to take those very conversations as a cue for what direction to head with the production. While one version of the CYOA model takes patrons on the traditional track of Poe's gothic story, audience members are also free to send our narrator down different dark paths to explore what else might be lurking in the Ushers' uncanny home. Looking at the photo below you can see where the audience has an opportunity to choose a new direction, and how that impacts the rest of the show.

While this was a fine starting point, we also realized that giving the audience an opportunity to explore only one avenue of possibility in the house felt a little flat. To address that point we discussed a repeated journey through the house in a Groundhog Day-esque style. Each run of the show sends the audience through the CYOA section three times, allowing them the opportunity to see the other dark corners of the house and learn more about the strange inhabitants of the home. I did a little bit of map-making and charted all of the possible paths for our production; that is, all of the possible permutations of the three-legged journey through the house. The resulting map means that there are twelve different possible variations for the production. A challenge, to be sure.

Media and the House

So what's media's role in this production? The house is characterized by its Escher-patterned qualities. Impossible architecture and tricks of lighting and perspective create a place that is uncanny and patterned, but also somehow strangely captivating. Just when it seems like the house has shared all of its secrets there are little quantum blips and pulses that help us remember that things are somehow not right, until ultimately the house collapses.

Our host (who spends his/her time slipping between the slices of the various paths the audience tumbles through) is caught as a destabilized field of particles, only sometimes coalesced. The culminating scene is set in a place beyond the normal, a world of quantum weirdness, small as the inside of an atom and vast as the universe itself. It's a world of particles and waves, a tumbling peek inside the macro and micro realities of our world that are either too big or too small for us to understand on a daily basis.

Media's role is to help make these worlds, and to help tell a story grounded in Poe's original but transformed by a madcap group of graduate students fighting their way out of their own quantum entanglement.

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing issues about how today's research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today's obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren't just booze-filled fun-times, but something a little more insidiously inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Are you too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you're not so easily taken advantage of.

With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it's easy to claim that one is impartial and objective at all times, monitoring one's physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program to establish a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject's heart rate increases beyond the set threshold, the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.
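The heart-rate-to-transparency mapping can be sketched as a small function. To be clear, this is a hypothetical reconstruction in Python, not the actual Isadora patch: the function name, the linear ramp, and the 40-BPM ramp width are all my own assumptions for illustration.

```python
# Sketch (assumed, not the real patch): map heart rate to the opacity of
# the clothing layer. 1.0 = fully clothed, 0.0 = fully transparent.

def clothing_opacity(bpm, threshold, ramp=40.0):
    """Below the calibrated threshold the layer stays opaque; above it,
    opacity falls linearly, hitting 0.0 at threshold + ramp BPM."""
    if bpm <= threshold:
        return 1.0
    return max(0.0, 1.0 - (bpm - threshold) / ramp)

# A subject calibrated with an arousal threshold of 80 BPM:
print(clothing_opacity(70, threshold=80))   # calm: fully opaque
print(clothing_opacity(100, threshold=80))  # aroused: half transparent
```

The per-subject calibration step described above corresponds to choosing the `threshold` value from the measured resting rate.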

So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you're interested in making your own heart rate monitor look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I'm at my best. In order to work in a programming space that allowed me to code faster I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control is a messaging protocol that's being used more and more often in theatrical contexts, and it seemed like this project might be a perfect time to learn a little bit more about OSC. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
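For the curious, the wire format OSC uses for a message like this is simple enough to build by hand. The sketch below encodes a single-int message per the OSC 1.0 specification, in Python rather than Processing; the address `/heartrate` is my own placeholder, since the post doesn't name the actual address the amended sketch used.

```python
import struct

# Sketch: the OSC 1.0 wire format for a one-int message, like the
# heart-rate values broadcast out of Processing. Strings are
# null-terminated and padded to 4-byte boundaries; ints are big-endian.

def osc_pad(data):
    """Null-pad a byte string to the next multiple of 4 bytes."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address, value):
    """Encode an OSC message with a single int32 argument."""
    return osc_pad(address.encode()) + osc_pad(b",i") + struct.pack(">i", value)

packet = osc_message("/heartrate", 72)   # "/heartrate" is a placeholder
print(len(packet), packet)
```

In practice a library like Schlegel's (or python-osc on the Python side) handles this encoding for you; the point is just that an OSC message is a cheap, language-agnostic way to move a number between two programs.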

Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that for the deadlines that I needed to meet I was just going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me in terms of my affection for a visual programming language. I also like the idea of using Max because I’d like the De-objectifier to be able to stand on its own without any other software and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice. 
My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there's a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode without the heart rate data changing the opacity levels of any images. Once the switch is set to the on position the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Here Boyd described the aesthetic style for these videos as that of a start-up with almost too much money. It's paying your brother-in-law, who wanted to learn Premiere Pro, to make the videos. It's a look that's infomercial snake-oil slick.

Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process, and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like "oh, that's what you like huh?" or "you better not be looking at him / her." The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a "you try it, I dare you" kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard from participants to the question "what are you doing?" was "Finding out I'm a bad person." It didn't surprise users that their heart rate was changed by the images presented to them; it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return-to-resting-rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like "I think I just like this guy's picture better."
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno

Programming for Arduino – Arduino

Program to Read Serial data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel OSC Library for Processing 
Programming Initial Tests – MaxMSP
Programming and Playback- Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects