
Phase 2 | Halfway House

Media design is an interesting beast in the theatre. Designers are called upon to create digital scenery, interactive installations, abstract imagery, immersive environments, ghost-like apparitions, and a whole litany of other illusions and optical candy. The media designer is part systems engineer, part installation specialist, and part content creator. This kind of design straddles a unique part of the theatrical experience, sitting somewhere between the concrete and the ephemeral. We're often asked to create site-specific work that relates to the geometry and architecture of the play, and at the same time challenged to explore what can be expressed through sound and light.

One of the compelling components of ASU's School of Theatre and Film (SoTF) is its commitment to staging new works. In addition to producing works that are tried and true, ASU also encourages its students to create works for the stage. As a part of this commitment, the department has developed a three-phase program to serve the process of developing a work for full main-stage production.
  • Phase 1 – Phase one sits between a staged reading and a workshop production of a play. This phase allows the team to focus on sorting out the nuts and bolts of the piece – what is the play or work really addressing, and what obstacles need to be addressed before it moves on to the next stage of production.
  • Phase 2 – Phase two is a workshop production environment. With a small budget and a design team, the production team creates a staged version of the work that operates within strict design constraints. Here the lighting plot is fixed, scenic elements are limited, and media has access to two fixed projectors focused on two fixed screens. This phase is less about the technical aspects of the production and more about getting the work up in front of an audience so that the writer and director have a chance to get some sense of what direction to move next.
  • Phase 3 – Phase three is a full main-stage production of a work. Here the production has a full design team, a larger budget, and far fewer constraints on its implementation.
While productions can skip one of the stages, ideally a work is produced in at least one phase (either one or two) before being put up as a phase three show.
This semester I was selected to be the media designer on call for the two original works slotted in as Phase 2 productions: Los Santos and The Halfway House. These two new works are both written by current ASU playwrights, who are invested in receiving critical and informative feedback about their work. This process begins with production meetings where directors pitch their visions of the production and start the brainstorming and creating process with the designers. Ultimately, Los Santos decided against using any media for its production. Halfway House, however, did decide that it wanted some media-driven moments.
My role in this process was to work with the director to find the moments where media could be utilized in the production, film and edit the content, and program the playback system for the short run of the production. After reading through the play a few times I met with Laurelann Porter, the director, to talk about how media could be used for this show. Important to the design process was understanding the limitations of the production. In the case of the Phase 2 productions, the projectors and screens are fixed. This limitation is partly a function of reducing the amount of tech time, and partly a way of limiting the complications imposed by set and lighting when doing complex projection. Looking at the script, I thought the best use of media would be to enhance some of the transition moments in the production. Several of the transitions in the show involve action taking place "elsewhere" (the language used by the playwright); these moments seemed perfect for media to help illustrate. In meeting with the director we identified the major moments that would benefit from some media presence, and started brainstorming from there.
A large part of the production process is planning and organization. Lighting, sound, and media designers are each tasked with identifying the moments when their mediums will be used and creating a cue sheet. Cue sheets are essentially a set of discretely identified moments that allow a stage manager to give directions about how the show runs. Media, lights, and sound all have their own board operators (actual humans), and the stage manager gives them directions about when to start or stop a given cue (a sample excerpt is shown after the list below). Creating a cue sheet with this fact in mind helps to ensure that a designer has a working understanding of how to plan the moments that are being created. My process of reading the script looked like this:
  • 1st time through – for the story and arc of the action
  • 2nd time through – identify possible moments for media
  • 3rd time through – refine the moments and start to create a working cue sheet
  • 4th time through – further refinement, label cues, look for problematic moments
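To make that concrete, here's a hypothetical cue sheet excerpt. The cue numbers, pages, and descriptions below are illustrative, not the actual Halfway House paperwork:

    Cue | Page | Trigger                                | Action
    M1  | 12   | Lights shift into scene 3 transition   | Start "elsewhere" video playback
    M2  | 13   | Actors set for scene 4                 | Fade video out, projector to black

Each media cue gets a unique label (M1, M2, and so on) so the stage manager can call it unambiguously alongside the lighting and sound cues.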
After talking with the director and identifying which moments were going to contain mediated material, it was time to create a shooting list and plan how to use a single afternoon with the actors to record all of the necessary footage for the show. I worked with the director to determine a shooting order (to make sure that we used the actors' time efficiently), and to identify the locations and moments that needed to be captured. From there it was a matter of showing up, setting up, and recording, which transitioned smoothly into an editing process of cutting and touching up the footage for the desired look.

The School of Theatre and Film currently has two show control systems at its disposal: Dataton's Watchout 4 and TroikaTronix's Isadora. Given the timing of the Phase 2 productions, I knew that the Isadora machine was going to be available to me for show control. Like MaxMSP, Isadora is a node-based visual programming environment. Importantly, Isadora is truly designed with performance in mind, and has a few features that make it easier to use in a theatrical production environment.

Typically a theatrical production requires additional steps for media that are similar to the lighting process – lensing and plotting, for example. For the Phase 2 productions, the shows use a standard lighting and media plot that doesn't change. This means that there's little additional work in terms of projector placement, focusing, masking, and the like that I have to do as a designer. For a larger production I would need to create a system diagram that outlines the placement of computers, projectors, cable, and other system requirements. Additionally, I would need to do the geometry to figure out where to place the projectors to ensure that my image had a wide enough throw to cover my desired surfaces, and I would need to work with the lighting designer to determine where on the lighting plot there was room for this equipment. This element of drafting, planning, and system design can be taken for granted by new designers, but it's one of the most important steps in the process, as it affects how the show looks and runs. With all of the physical components in place and the media assets created, the designer then looks at programming the playback system. In the case of Isadora this also means designing an interface for the operator.
One of the pressing realities of designing media for a theatrical installation is the need to create a playback system knowing that someone unfamiliar with the programming environment will be operating the computer driving the media. ASU's operators are typically undergraduate students who may or may not be technical theatre majors. In some cases an operator may be very familiar with a given programming interface, while others may never have run media for a show. Theatre in an educational institution is a wonderful place for students to learn lots of new tools and get their feet wet with a number of different technologies. In this respect I think it's incumbent upon the designer to create a patch with an interface that's as accessible as possible for a new operator. In my case, each moment in the show where there is media playing (a cue) has a corresponding button that triggers the start, playback, and stop for the given video.

Media is notoriously finicky in live performance. It can be difficult to program, get washed out by stage lights, perform poorly if it's not encoded properly, or run into any of a host of other possible problems. In the case of Halfway House, the process went very smoothly. The largest problem had more to do with an equipment failure that pushed back installation than with the editing or programming process. While this is a simple execution of using media in a production, it was valuable for a number of the individuals involved in the process – the director, lighting designer, sound designer, and stage manager, to name only a few. There are large questions in the theatre world about the role of media in production – is it just fancy set dressing? How is it actively contributing to telling the story of the show? Is it worth the cost? Does it have a place in an idiom largely built around the concept of live bodies? And the list goes on. I don't think this implementation serves to address any of those questions, but for the production team it did start the process of demystifying the work of including media in a production, and that's not nothing.

Tools Used
Programming and Playback – Isadora | TroikaTronix
Projector – InFocus HD projector
Video Editing – Adobe After Effects, Adobe Premiere
Image Editing – Adobe Photoshop
Filming / Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects

Emerge | Commons

This year I was fortunate to have the opportunity to contribute to the performance schedule of Emerge, ASU's conference about art, science, and the future. This is the second year that Emerge has happened at ASU, with the final night being a culminating festival of performance and art. In the Fall of 2012 I worked with a group of artists to put together a proposal for creating a performance in Neeb Plaza on ASU's campus. This courtyard, nestled between Neeb Hall, the Art building, and the Design building, houses a new student-generated installation called X-Space each year. Looking to solicit the creation of new works, the Herberger Institute put out a call for artists interested in organizing a performance in X-Space. Called X-Act, the call asked applicants to consider how they would use the space and engage the campus. Early in January my team found out that our proposal, Commons, had been selected. One of the stipulations of the grant was that we would have a showing during the final showcase of Emerge. With this news in mind, our team started the process of creating the installation we had proposed.

One of the elements that our team was committed to realizing was finding a way to integrate projection into the performance in this very geometrically interesting space. I started by measuring the physical dimensions of the space in order to determine the distance required for the projectors that I had available for this project. Using a bit of math one can calculate the throw distance of a projector; alternatively, it's easy to use Projector Central's Projection Calculator to lock down the approximate distances you might need. With the numbers in front of me I was able to start making a plan about potential projector placement, as well as my options for the performance given the constraint of the size of image that I could create. With the limitations of distance roughly mapped out, I headed to the space after dark to do some initial tests. The hard truth about the amount of ambient light in the plaza, and the limits of the InFocus projectors, meant that I needed to shy away from projecting large in favor of being brighter. The compromise between brightness and size was to map the front surfaces of X-Space. To accomplish this, I needed to connect two projectors with a Matrox TripleHead. This piece of equipment allows for multi-monitor work where the computer sees the two projectors as though they were a single canvas.
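The throw math itself is simple: throw distance is the projector's throw ratio multiplied by the width of the image you want. A minimal sketch of that calculation as a static-mode Processing sketch – the 1.5:1 throw ratio below is an illustrative number, not the actual InFocus spec:

    // Minimal throw-distance calculation.
    // throwRatio comes from the projector's spec sheet: distance / image width.
    float throwRatio = 1.5;    // illustrative value, not the actual InFocus spec
    float imageWidth = 4.0;    // desired image width in meters

    float throwDistance = throwRatio * imageWidth;
    println("Lens needs to sit " + throwDistance + " m from the surface");  // 6.0 m here

Run in reverse, the same relationship tells you the largest image you can make from a fixed position – which is exactly the constraint I was working against in the plaza.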

It took about four hours to pull the necessary equipment, install it, and focus the projectors.
Once I had the projectors up and in place I was finally able to start mapping the surfaces. I had decided early on that I was going to use a piece of software called Modul8 to control my media playback. Modul8 is a VJ software package that's robust and easy to use. Unlike other pieces of software, Modul8 is more like an instrument than an autonomous agent that can run independently. While there are a number of functions that you can automate inside of the software, it's largely built around the idea of live-mixing the media that you're using. In terms of automation, Modul8 allows the operator to use audio input to control a number of playback triggers. For this project the team used a track by DJ Earworm for audio, largely motivated by the desires of the group recruited for the dance performance. One of the additional benefits of Modul8 is its ability to send media out over Syphon. This means that this piece of playback software can be easily integrated with the mapping tool MadMapper. Here, knowing the media system (projectors, hardware, and software) I was using was as important as the conceptual idea behind the performance itself.
Media Diagram


After getting the hardware installed I started mapping the surfaces of X-Space, creating individual quads and masks for each plane. All in all it took me about three hours to create the maps and masks for the architecture. At this point I was finally able to start experimenting with what kind of media I wanted to use, and how I wanted to arrange it in the space. I had budgeted about 16 hours to get this project up and running; implementing the plan I had created ended up taking about 16.5 hours. This meant that I had one night where I worked on this installation until 3:15 AM, and another night where I was working until just before midnight. We also had a rather unfortunate miscommunication with the Emerge planning staff about this installation and the importance of having a security guard available to monitor the site overnight. Installation started on Thursday evening, and each of the team members took a shift overnight to monitor the outdoor equipment. Luckily we ended up with security for the second night, and didn't have to pull any more all-nighters.


Finally, while this project looked beautiful in the empty space, there was a miscommunication about audience placement and how stanchions were going to be used at the actual event. While the team had discussed the importance of roping off the performance space, that request was lost on the actual event planners. Consequently the audience largely obstructed the projections as they used the actual stage space as seating. Additionally, the space was filled with stage lighting and projectors rented for another performance, which only served to wash out Commons' media and distract audience members. While this was certainly not a failure, it did leave a lot to be desired given the time, planning, and sleepless nights that implementation required. It's just another lesson learned, even if learned the hard way.



Tools Used
Programming and Playback – Modul8
Mapping – MadMapper
Multi-Monitor Control – Matrox TripleHead2Go Digital
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop 
Documentation – iPhone 4S, Canon 7D
Editing Documentation – Adobe Premiere, Adobe After Effects





Commons
X-Act Proposal

Ethan Jackson | MSD New Production Innovation | College of Design
Chelsea Pace | MFA Performance | School of Theatre and Film
Kris Pourzal | MFA Dance | School of Dance
Matthew Ragan | MFA Interdisciplinary Digital Media and Performance | School of Theatre and Film & Arts, Media + Engineering

Activating a campus of this size is a challenge. With so many majors across so many schools, it would be impossible to activate the entirety of the campus with only arts students, or any small group of students for that matter. We propose Commons.

We are excited to propose a 100-person ensemble composed mostly of non-performers. This ensemble would be assembled by the team over the next several months by contacting graduate students and undergraduates from various departments and schools across ASU, both inside and outside of the Herberger Institute. The intention is that the sample of students would be a proportional and accurate representation of the population of the Tempe campus student body.


This project is ambitious, and we are not ignorant of the challenges presented by gathering an ensemble of this size. The difficulty is doubled when you consider that we intend to bring mostly non-performers into the ensemble. The groundwork for the process of contacting graduate students to enlist undergraduates from across campus is already being laid through contacts in Preparing Future Faculty and the Graduate and Professional Student Association.

The piece inherently activates the campus by reaching out across so many disciplines and getting people together, working together, and making art. The choreography will be created by the team and also crowd-sourced from the assembled ensemble, and the music (potentially) will be a remix of music sourced by the group.

Not to be confused with a flash mob, Commons will be a collaboration among all 100 performers. Created as an ensemble, the performers will truly have ownership over the piece; more than just regurgitating choreography, the piece will be brought to life by the population of Arizona State University.


The piece will use the X-Space, the cement plaza to the south, and the wall of the building west of the X-Space. When the audience enters the cement courtyard immediately south of the space, the floor will be lit with interactive projections triggered by the movement of the crowd. After the audience has gathered, the ensemble will emerge from the X-Space installation and begin a choreographed sequence. Theatrical lights, projections, and sound will be utilized to create an immersive environment for both the audience and the performers.

As the choreography builds and more performers are added, a live video feed will begin and will be projected several stories high onto the textured wall of the building to the west of the courtyard. The projection will be live video of the performance and of the audience.


The piece is approximately 30 minutes in duration and would allow various groups of students who are professionally distanced from performing to express themselves in a performative and expressive way. The piece ends with the performers exiting through the crowd and out onto campus, where they will continue to perform choreography for 10 minutes in a space that is significant to their experience at ASU.

Rather than making something with HIDA students that only HIDA students see and perform in, Commons will truly activate the campus to come together, make something, and take it out into their communities across campus. 

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing questions about how today's research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today's obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren't just booze-filled fun times, but something a little more insidiously inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Are you too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you're not so easily taken advantage of.


With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it's easy to claim that one is impartial and objective at all times, monitoring one's physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program to establish a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject's heart rate increases beyond the set threshold, the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.


So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you're interested in making your own heart rate monitor, look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I'm at my best. In order to work in a programming space that allowed me to code faster, I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control (OSC) is a messaging protocol that's being used more and more often in theatrical contexts, and this project seemed like a perfect time to learn a little bit more about it. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
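A minimal sketch of that broadcast step using Schlegel's oscP5 library might look like the following. The "/heartrate" address pattern and the port numbers are my illustrative choices, not fixed values from the project, and the real sketch reads the BPM from the Arduino over serial:

    // Minimal oscP5 broadcast sketch; "/heartrate" and the ports are illustrative.
    import oscP5.*;
    import netP5.*;

    OscP5 oscP5;
    NetAddress receiver;
    int bpm = 0;  // in the real sketch this value arrives from the Arduino over serial

    void setup() {
      oscP5 = new OscP5(this, 9000);                  // listen locally on port 9000
      receiver = new NetAddress("127.0.0.1", 12000);  // playback app (e.g. Isadora) on 12000
    }

    void draw() {
      OscMessage msg = new OscMessage("/heartrate");
      msg.add(bpm);                // pack the current BPM as an int argument
      oscP5.send(msg, receiver);   // re-broadcast every frame to the listening application
    }

On the other end, any OSC-aware application listening on the matching port can pick up the stream and use it however it likes.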


Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that, given the deadlines I needed to meet, I was simply going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me as a visual programming language. I also like the idea of using Max because I'd like the De-objectifier to be able to stand on its own without any other software, and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice.
My Isadora patch includes three scenes. The first scene runs as a pre-show state: here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration: here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there's a scene that includes the various models. The model scene has an on/off toggle that allows the operator to enter this mode with the heart rate data not changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.
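At its core, that last scene is a mapping from heart rate to the opacity of the clothing layer. Here's a rough sketch of the logic in Processing terms – this illustrates the math, not the actual Isadora patch, and the 40 BPM fade range is an illustrative value:

    // Sketch of the heart-rate-to-opacity logic; not the actual Isadora patch.
    // threshold comes from the calibration scene; ramp is an illustrative value.
    float heartRateToOpacity(float bpm, float threshold, boolean live) {
      if (!live) return 255;   // toggle off: clothing layer stays fully opaque
      float ramp = 40;         // BPM span over which the layer fades to transparent
      float opacity = map(bpm, threshold, threshold + ramp, 255, 0);
      return constrain(opacity, 0, 255);  // clamp so values outside the ramp don't invert
    }

    void setup() {
      println(heartRateToOpacity(70, 65, true));   // slightly above threshold: mostly opaque
      println(heartRateToOpacity(110, 65, true));  // well above threshold: fully transparent
    }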

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Boyd described the aesthetic style for these videos as that of a start-up with almost too much money – it's paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It's a look that's infomercial snake-oil slick.




Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one partner would begin the process and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like "oh, that's what you like, huh?" or "you better not be looking at him / her." The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a "you try it, I dare you" kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response I frequently heard to the question "what are you doing?" was "Finding out I'm a bad person." It didn't surprise users that their heart rate was changed by the images presented to them, but it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return-to-resting-rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to a resting rate by saying things like "I think I just like this guy's picture better."
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno
Programming for Arduino – Arduino
Program to Read Serial Data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel's OSC Library for Processing
Programming Initial Tests – MaxMSP
Programming and Playback – Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects