TouchDesigner | Stoner Tricks

There was a great question that recently popped up on the Forum about using the Stoner component from the Palette.

Every time I use the stoner tool I delete these ops first thing.

For some reason the locked TOP doesn’t seem to have anything to do with the real output, yet it saves that data with the TOE and it’s easy for the toe to be huge for no reason.

I had 3 4K stoners and the file was 150 mb. Remove these ops and the toe goes to 140KB

Stoner throws errors when moving points after deleting these, however doesn’t seem to impact the functionality of the stoner, it still outputs the correct UV and warp texture. It still persists the data after a save.

Can someone explain why those ops are even there? it looks only like it’s saving the demo image before you start using it.

Read the whole thread

Long story short, what looks like a ramp is actually a displacement map. The idea here is that you can actually get all of the benefits of the stoner’s displacement, without running the whole component. Unless your mapping is changing dynamically, you can instead use this texture to drive a remap TOP which in turn handles your distortion. Richard Burns wrote a lovely little piece about this on Visualesque.
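To make the displacement-map idea concrete: the texture's red and green channels act as normalized UV coordinates that tell a remap operation where in the source image to look for each output pixel. Here is a minimal pure-Python sketch of that lookup (nearest-neighbor, toy data – it illustrates the concept, not the Remap TOP's actual implementation):

```python
def remap(source, uv_map):
    """Apply a UV displacement map: each (u, v) pair in uv_map is a
    normalized lookup coordinate into `source` (a list of rows)."""
    h, w = len(source), len(source[0])
    out = []
    for row in uv_map:
        out_row = []
        for u, v in row:
            # u, v in [0, 1] -> nearest source pixel
            x = min(int(u * (w - 1) + 0.5), w - 1)
            y = min(int(v * (h - 1) + 0.5), h - 1)
            out_row.append(source[y][x])
        out.append(out_row)
    return out

# An identity map returns the source unchanged; a warped map
# pulls pixels from wherever the Stoner's calibration put them.
src = [[1, 2], [3, 4]]
identity = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 1.0), (1.0, 1.0)]]
print(remap(src, identity))  # -> [[1, 2], [3, 4]]
```

Because the warp lives entirely in that texture, the lookup is all that has to run at playback time – which is exactly why you can drop the rest of the component.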

I wrote about what these ops are good for in a post a few years ago while working on a short installation in Argentina – Building a Calibration UI. Sadly, I never got to the second part of that post to dig into how we could actually use this feature of the stoner. Fast forward a few years: while collaborating with Zoe Sandoval on their thesis project (which featured four channels of projection) – { remnants } of a { ritual } – I used a very similar approach, leveraging the Stoner's flexibility to drive multiple displacement maps from a single UI.

So… how do we actually use it?!

Well, I finally had some time to knock out a walk-through of how to make this work in your projects, some Python to help you get it moving and organized quickly, and ways to keep your calibration data out of your project file. Hope this sheds some light on some of the ways you can better take advantage of the Stoner.
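The "calibration data out of your project file" idea comes down to serializing the warp data to an external file and reloading it on start, so the .toe never has to store it. Here is a hypothetical plain-Python sketch of that pattern (in TouchDesigner you would be writing the Stoner's points out to a DAT or a JSON file on disk; the function and file names here are made up for illustration):

```python
import json
import os
import tempfile

def save_calibration(path, points):
    """Write calibration points (list of [x, y] pairs) to a JSON
    sidecar file so the project file itself stays small."""
    with open(path, "w") as f:
        json.dump({"points": points}, f)

def load_calibration(path, default=None):
    """Reload points on startup; fall back to a default grid if the
    sidecar file doesn't exist yet."""
    if not os.path.exists(path):
        return default or []
    with open(path) as f:
        return json.load(f)["points"]

# round-trip a few warp points through an external file
path = os.path.join(tempfile.gettempdir(), "stoner_calibration.json")
save_calibration(path, [[0.0, 0.0], [0.5, 0.1], [1.0, 0.0]])
print(load_calibration(path))
```

The same pattern also makes calibration portable between machines: copy the sidecar file, not the whole project.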

Check out a sample project here.

YouTube Playlist

Individual Vids

Media Design | Building Projection Mapping

One of the courses I'm taking in my first year at ASU is called Media Design Applications. This course is centered on the use of various media design techniques in specific relation to their application in a theatrical setting. One of the techniques we discussed in class is architectural projection mapping. This form has quickly become popular for its forced perspective and its opportunity for complex illusion. The underlying principle of projection mapping is to highlight and take advantage of physical form in order to create the illusion that the entire surface is, itself, a screen. There are a variety of techniques for achieving this illusion, some based entirely in software and others based in the process of generating the artwork itself. This is an essential and powerful tool for the media designer, as it opens up a wide range of possibilities for the creation of theatrical illusion. Before I start to talk about the process, here's the project description:

Project Description:

Component 2 – Geometry, Surface and Illusion 

Unfortunately – or possibly fortunately – media designers in the theatre rarely get a nice, rectangular, white screen to shoot at from a perpendicular, on-center angle. In this section, we will explore methods for dealing with odd angles, weird shapes, and non-ideal surfaces, as well as exploring special effects that are possible through the right marriage of projection, surface and angle. For this project, you may choose a building, sculpture or other built object in the ASU environment, then map its geometry using the techniques shown in class and create content utilizing that geometry to best effect. Final presentations of this project will be in the evening on campus.

I started this process by scouting a location. After wandering around campus several times, one of the buildings I kept coming back to was an energy solutions building by a company called NRG. One of the larger draws of this building is its location. Positioned directly across from one of the campus dormitories, it seemed like an ideal site with a built-in audience. While there's no guarantee that there will be many students left on campus at this point, it never hurts to plan for the best.

The face of the building that points towards the dormitories is composed of abstract raised polygons arranged in narrow panels. These panels come in two varieties, creating a geometric and modern look for the building. One of the productions I'm working on next year has several design elements grounded in abstract geometric worlds, and this seemed like a prime opportunity to continue exploring what kind of animation works well in this idiom.

In the Asset or In the System

The debate often central to this kind of work is whether to build a system (or program) for creating the aesthetic, or to instead create the work as fixed media artifacts. In other words, do you build something that is at its core flexible and extendable (though problematic, finicky, and unabashedly high maintenance), or do you build something rigid and fixed (though highly reliable, hardware independent, and reproducible)? Different artists prefer different methods, and there are many who argue that one is obviously better than the other. The truth of the matter, however, is that in most cases the right approach is really a function of multiple parameters: who's the client, what's the venue, what's the production schedule, what resources are available, is the project interactive, and so on. The theoretical debate here is truly interesting, and in some sense calls into question what skill set is most appropriate for the artist who intends to pursue this practice. The print analogy might be: do you focus on designing within the limitations of the tools that you have, or do you commit to building a better printing press so that you can realize the design that exists only as an abstract thought?

Recent Arizona State MFA graduate Boyd Branch shared these thoughts about this very topic:

I don’t know if there is much of a debate between system building and design for production. Quite simply – every production demands aesthetics. The aesthetic is always the most important. The system is only useful in as much as it generates the appropriate aesthetic experience. It doesn’t matter how reliable, interesting, or functional a system is if it isn’t supplying an aesthetic relevant to production. A “flexible and extendable” system is only useful if the aesthetic of flexibility and extendibility is ostensibly the most relevant aesthetic. Interactivity is an aesthetic choice for performance and only relevant when ontology or autonomy are the dramatic themes. For theatre in particular, the system inevitably becomes a character, and unless that character is well defined dramatically, it has no business inserting itself into production.

The debate if any is internal for the designer and presented as a range of options for the producer/director. That debate is a negotiation between time and resources. A designer may be able to envision a system that can achieve an effect- but without sufficient experience with that system and the ability to provide a significant degree of reliability, such a system should not be proposed without articulating how the dramatic themes will inevitably shift to questions about technology.

Sometimes an aesthetic is demanded that requires experimentation on the part of the designer. A designer has to be knowledgeable enough about their skill set to know how to explain the cost involved in achieving that aesthetic. And if that cost is reliability, then it is incumbent on the designer to articulate that cost and explain how the production will hinge on the unpredictability of that system.

An unreliable system, however, is frankly rarely good for any production unless unreliability is the theme. If a production requires a particular aesthetic experience that seems to be only achievable with the creation of a new tool, then it must be recognized that that tool and the presence of that tool embody the major dramatic themes of production.

Avant garde theatre is one of the best environments for exploring the aesthetics of system building – but it is also the theatre that has the smallest budgets…

For this particular assignment we were charged with the approach of building everything in the asset itself. That is, building a fixed piece of video artwork that could then be deformed and adjusted with playback software (MadMapper).

AfterEffects as playground

Given the nature of this assignment it made sense that Adobe After Effects would be the tool of choice. AE is an essential tool for any media designer, especially as there are times when pre-rendered media is simply the best place to start. I spent a lot of time thinking about the direction I wanted to move in terms of overall aesthetic for this particular assignment, and I found that again I was thinking about abstract geometric worlds, and the use of lighting in 3D environments in order to explore some of those ideas. As I've been thinking about the production I'm working on in the fall, it's seemed increasingly important to take advantage of open-ended assignments in order to explore the ideas and directions that feel right for that show. I'm really beginning to see that the cornerstone of successful independent learning comes from deliberate planning – what can I explore now that will help me on the next project? To that end, what kind of work do I want to be making in two years, and how can I set myself up to be successful? Planning and scheduling may be one of the most under-stressed skills of any professional, and I certainly think that applies to artists.

In thinking about abstract animation with After Effects I knew that I wanted to explore four different visual worlds: flat abstract art, lines and movement, 3D lighting and the illusion of perspective, and glitch art. Each of these has its own appeal, and each also has a place in the work that I'm thinking about for the fall.

Worlds Apart

Flat and abstract

In thinking about making flat abstract art I started, as I typically do, by doing lots of visual research. One of the more interesting effects that I stumbled on was achieved by using AE's Radio Waves plug-in in conjunction with a keyframed mask. YouTube user MotionDesignCommun has a great tutorial about how to achieve this particular visual effect. Overall the media takes on a smoke-like, morphing kind of look that's both captivating and graceful. A quick warning about this effect: it is definitely a render hog. The 30 seconds of this effect used for this project took nearly 7 hours to render. As a disclaimer, I did render the video at a projector-native 1920 x 1200. All of that aside, this effect was a great way to explore using morphing masks in a way that felt far more organic than I would have originally thought.

Lines and Flat Movement

I also wanted to play with the traditional big-building projection mapping effect of drawn-in lines and moving shapes. In hindsight I think the effect of the lines drawing in took too long, and ultimately lost some of the visual impact that I was looking for. I also explored playing with transforming shapes here, and that was actually far more interesting but happened too quickly. My approach for this effect was largely centered around the use of masks in AE. Masks, layers, and encapsulated effects were really what drove this particular exploration. Ultimately I think spending more time to write an expression to generate the kind of look that I'm after would be a better use of my time. If I were to go back I think I could successfully craft the right formula to make the process of creating this animation easier, but it really took the effort of creating the first round of animation to help me find that place. One of the hard but important lessons that I've learned from programming is that sometimes you simply have to do something the hard way / long way a couple of times so that you really understand the process and procedural steps. Once you have a solid idea of what you're trying to make, it becomes much easier to write the expression as an easier way to achieve the effect you're after.

3D Lighting and the illusion of Perspective

Another projection designer's magic trick that I wanted to play with was the idea of creating perspective with digital lighting of a 3D environment. By replicating the geometry that you're projecting onto, you the designer can create interesting illusions that seem impossible. In my case I started in After Effects by positioning planes in 3D space at steep angles, and then masking them so that they appeared to mimic the geometry of the actual building. To make this easier for myself, I worked a single column at a time. I then pre-composed individual columns so that they could easily be duplicated. The duplicated columns only needed small changes rather than requiring me to build each of the 86 triangles from scratch.

Glitch Art a la After Effects

In another course I took this semester, a group of students focused on using glitch art as a base for some of their artistic exploration. While they specifically focused on the alteration of i-frames and p-frames, I wanted to look at the kind of effect that can be purposefully created in AE and then later modified. YouTube user VinhSon Nguyen has a great tutorial on creating a simple glitch effect in AE. More interesting than the act of making his glitch effect is his approach. Rather than just adding in adjustment layers and making direct changes, Nguyen focuses on how to connect the attributes of your effect to null objects, essentially making an effect that can be universally applied to any kind of artwork that you want to drop into your comp. This approach to working in AE was interesting, as it seemed to start from the assumption that the effect one is making should be easily applied to other works.

Put it all Together

With each section as its own comp the last step in the process was to create a final comp that transitioned between the different worlds, applied some global lighting looks and effects, and added a mask to prevent unwanted projector spill. This was also a great place to do some fine tuning, and to see how the different comps transitioned into one another. 

The Final Rendering 


AE: Create a Glitch Effect

AE: Morphing by MotionDesignCommun

Creative Cow – Creating a 3D Cube

MadMapper AfterEffects Tutorial for Building Projection

Emerge | Commons

This year I was fortunate to have the opportunity to contribute to the performance schedule of ASU's conference about art, science, and the future. This is the second year that Emerge has happened at ASU, with the final night being a culminating festival of performance and art. In the fall of 2012 I worked with a group of artists to put together a proposal for creating a performance in Neeb Plaza on ASU's campus. This courtyard, nestled between Neeb Hall, the Art building, and Design, houses a new student-generated installation called X-Space each year. Looking to solicit the creation of new works, the Herberger Institute put out a call for artists interested in organizing a performance in X-Space. Called X-Act, the call asked applicants to consider how they would use the space and engage the campus. Early in January my team found out that our proposal, Commons, was selected. One of the stipulations of the grant was that we would have a showing during the final showcase of Emerge. With this news in mind, our team started the process of creating the installation we had proposed.

One of the elements that our team was committed to realizing was finding a way to integrate projection into the performance on this very geometrically interesting space. I started by measuring the physical dimensions of the space in order to determine the distance required for the projectors that I had available for this project. Using a bit of math one can calculate the throw distance of a projector. Alternatively it’s also easy to use Projector Central’s Projection Calculator in order to lock down approximate distances that you might need. With the numbers in front of me I was able to start making a plan about potential projector placement, as well as my options for the performance given the constraint of the size of image that I could create. With the limitations of distance roughly mapped out I headed to the space after dark to do some initial tests. The hard truth about the amount of ambient light in the plaza, and the limits of the InFocus projectors meant that I needed to shy away from projecting large in favor of being brighter. The compromise of brightness and size was to map the front surfaces of X-Space. To accomplish this, I needed to connect two projectors with a Matrox TripleHead. This piece of equipment allows for multi-monitor work where the computer sees the two projectors as though they were a single canvas. 
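The "bit of math" for throw distance is just the projector's throw ratio multiplied by the desired image width. A quick sketch of both directions of that calculation (the 1.5:1 ratio below is a made-up example, not the spec of the InFocus units used on this project):

```python
def throw_distance(throw_ratio, image_width):
    """Distance from lens to surface for a given image width,
    using distance = throw_ratio * width (same units in, same out)."""
    return throw_ratio * image_width

def image_width(throw_ratio, distance):
    """Invert the relationship: how wide an image a projector
    produces from a given distance."""
    return distance / throw_ratio

# e.g. a hypothetical 1.5:1 lens filling a 4 m wide surface
print(throw_distance(1.5, 4.0))  # -> 6.0 (metres back from the wall)
print(image_width(1.5, 9.0))     # -> 6.0 (metres of image from 9 m back)
```

Projector Central's calculator is doing essentially this, plus per-model lens data and zoom ranges.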

It took about 4 hours to pull all of the necessary equipment, install it, and focus the projectors. Once I had the projectors up and in place I was finally able to start mapping the surfaces. I had decided early on that I was going to use a piece of software called Modul8 to control my media playback. Modul8 is a VJ software package that's robust and easy to use. Unlike other pieces of software, Modul8 is more like an instrument than an autonomous agent that can run independently. While there are a bunch of functions that you can automate inside of the software, it's largely built around the idea of live-mixing the media that you're using. In terms of automation, Modul8 allows the operator to use audio input to control a number of playback triggers. For this project the team used a track by DJ Earworm for audio, largely motivated by the desires of the group recruited for the dance performance. One of the additional benefits of Modul8 is its ability to send media out over Syphon. This means that this piece of playback software can be easily integrated with the mapping tool MadMapper. Here it was as important to know what media system (projectors, hardware, and software) I was using as the conceptual idea around the performance itself.
Media Diagram

After getting the hardware installed I started mapping the surfaces of X-Space, creating individual quads and masks for each plane. All in all it took me about three hours to create the maps and masks for the architecture. At this point I was finally able to start experimenting with what kind of media I wanted to use, and how I wanted to arrange it in the space. All in all I budgeted about 16 hours to get this project up and running. Implementing the plan I had created ended up taking about 16.5 hours. This meant that I had one night where I worked on this installation until 3:15 AM, and another night where I was working until just before midnight. We also had a rather unfortunate miscommunication with the Emerge planning staff about this installation, and the importance of having a security guard available to monitor the site overnight. Installation started on Thursday evening, and each of the team members took a shift overnight to monitor the outdoor equipment. Luckily we ended up with security for the second night, and didn't have to pull any more all-nighters.

Finally, while this project looked beautiful on the empty space, there was a miscommunication about audience placement and how stanchions were going to be used at the actual event. While the team had discussed the importance of roping off the performance space, that request was lost on the actual event planners. Consequently the audience largely obstructed the projections as they used the actual stage space as seating. Additionally, the space was filled with stage lighting and projectors rented for another performance, which only served to wash out Commons' media and distract audience members. While this was certainly not a failure, it did leave a lot to be desired given the time, planning, and sleepless nights that implementation required. It's just another lesson learned, even if learned the hard way.

Tools Used
Programming and Playback- Modul8
Mapping – MadMapper
Multi-Monitor Control – Matrox TripleHead2Go Digital
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop 
Documentation – iPhone 4S, Canon 7D
Editing Documentation – Adobe Premiere, Adobe After Effects

X-Act Proposal

Ethan Jackson | MSD New Production Innovation | College of Design
Chelsea Pace | MFA Performance | School of Theatre and Film
Kris Pourzal | MFA Dance / School of Dance
Matthew Ragan | MFA Interdisciplinary Digital Media and Performance | School of Theatre and Film & Arts, Media + Engineering

Activating a campus of this size is a challenge. With so many majors across so many schools, it would be impossible to activate the entirety of the campus with only arts students, or any small group of students for that matter. We propose Commons.

We are excited to propose a 100-person ensemble comprised mostly of non-performers. This ensemble would be assembled by the team over the next several months by contacting graduate students and undergraduates from various departments and schools across ASU, both inside and outside of the Herberger Institute. The intention is that the sample of students would be a proportional and accurate representation of the population of the Tempe campus student body.

This project is ambitious, and we are not ignorant of the challenges presented by gathering an ensemble of this size. The difficulty is doubled when you consider that we intend to bring mostly non-performers into the ensemble. The groundwork for the process of contacting graduate students to enlist undergraduates from across campus is already being laid through contacts in Preparing Future Faculty and the Graduate and Professional Student Association.

The piece inherently activates the campus by reaching out across so many disciplines and getting people together, working together, and making art. The choreography will be created by the team and also crowd-sourced from the assembled ensemble, and the music (potentially) will be a remix of music sourced by the group.

Not to be confused with a flash mob, Commons will be a collaboration with all 100 performers. Created as an ensemble, the performers will truly have ownership over the piece, and rather than just regurgitating choreography, the piece will be brought to life by the population of Arizona State University.

The piece will use the X-Space, the cement plaza to the south, and the wall of the building west of the X-Space. When the audience enters the cement courtyard immediately south of the space, the floor will be lit with interactive projections triggered by the movement of the crowd. After the audience has gathered, the ensembles will emerge from the X-Space installation and begin a choreographed sequence. Theatrical lights, projections, and sound will be utilized to create an immersive environment for both the audience and the performers.

As the choreography builds and more performers are added, a live video feed will begin and will be projected several stories high onto the textured wall of the building to the west of the courtyard. The projection will be live video of the performance and of the audience.

The piece is approximately 30 minutes in duration and would allow various groups of students who are professionally distanced from performing to express themselves in a performative and expressive way. The piece ends with the performers exiting through the crowd and out onto campus, where they will continue to perform choreography for 10 minutes in a space that is significant to their experience at ASU.

Rather than making something with HIDA students that only HIDA students see and perform in, Commons will truly activate the campus to come together, make something, and take it out into their communities across campus. 

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing how today's research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today's obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren't just booze-filled fun-times, but something a little more insidiously inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you're not so easily taken advantage of.

With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it's easy to claim that one is impartial and objective at all times, monitoring one's physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program, establishing a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject's heart rate increases beyond the set threshold, the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.
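The station's core mapping – heart rate above a calibrated threshold driving clothing-layer transparency – can be sketched as a simple linear ramp. The function name and the 40 BPM fade span below are illustrative assumptions, not the values used in the actual Isadora patch:

```python
def clothing_opacity(bpm, threshold, span=40.0):
    """Map heart rate to the opacity of the clothing layer:
    fully opaque at or below the calibrated threshold, fading
    linearly to transparent as the rate climbs `span` BPM past it."""
    if bpm <= threshold:
        return 1.0
    return max(0.0, 1.0 - (bpm - threshold) / span)

# e.g. a subject calibrated to an 80 BPM arousal threshold
print(clothing_opacity(70, 80))   # -> 1.0 (below threshold, fully opaque)
print(clothing_opacity(100, 80))  # -> 0.5 (halfway through the ramp)
print(clothing_opacity(130, 80))  # -> 0.0 (fully transparent)
```

Clamping at both ends keeps the image stable when the subject's rate settles back down or spikes far past the ramp.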

So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you're interested in making your own heart rate monitor, look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I'm at my best. In order to work in a programming environment that allowed me to code faster, I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control is a messaging protocol that's being used more and more often in theatrical contexts, and this project seemed like a perfect time to learn a little bit more about OSC. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
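For a sense of what actually travels over the wire, an OSC message is a simple padded binary format: a null-terminated, 4-byte-aligned address string, a type-tag string, then the arguments. The sketch below hand-packs a one-integer message using only Python's standard library (the /heartrate address is a hypothetical example – the actual addresses in the amended sketch may differ, and the Schlegel library handles all of this for you in Processing):

```python
import struct

def osc_message(address, value):
    """Pack a minimal OSC message with one int32 argument:
    null-terminated, 4-byte-aligned address and type-tag strings,
    followed by a big-endian int32 payload."""
    def pad(s):
        # null-terminate, then pad with zeros to a 4-byte boundary
        b = s.encode() + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",i") + struct.pack(">i", value)

msg = osc_message("/heartrate", 72)
print(len(msg) % 4)  # -> 0 (every OSC message is 4-byte aligned)
```

A packet like this could be handed straight to a UDP socket, which is all "broadcasting over OSC" really is.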

Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that for the deadlines that I needed to meet I was just going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP is quickly growing on me in terms of my affection for a visual programming language. I also like the idea of using Max because I’d like the De-objectifier to be able to stand on its own without any other software and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice. 
My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there's a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode with the heart rate data not changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Boyd described the aesthetic style for these videos as a start-up with almost too much money. It's paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It's a look that's infomercial snake-oil slick.

Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process, and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like “oh, that's what you like huh?” or “you better not be looking at him / her.” The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a “you try it, I dare you” kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard from participants to the question “what are you doing?” was “Finding out I'm a bad person.” It didn't surprise users that their heart rate was changed by the images presented to them, but it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return-to-resting-rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like “I think I just like this guy's picture better.”
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno

Programming for Arduino – Arduino

Program to Read Serial data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel OSC Library for Processing 
Programming Initial Tests – MaxMSP
Programming and Playback- Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects