
TouchDesigner | The Underlying Geometry

One of the benefits of working with TouchDesigner is the ability to work in 3D. 3D objects are in the family of operators called SOPs – Surface Operators. One of the aesthetic directions that I wanted to explore was the feeling of looking into a long box. The world inside of this box would be characterized by examining artifacts as either particles or waves with a vaguely dual-slit kind of suggestion. With that as a starting point I headed into making the container for these worlds of particles and waves.


Before making any 3D content it’s important to know how TouchDesigner processes these objects in order to display them. On their own, Surface Operators can’t be displayed as a rendered texture. In TouchDesigner’s idiom textures are two-dimensional surfaces, and it follows that the objects that live in that category are called TOPs, Texture Operators. Operators from different families can’t be directly connected with patch cords. In order to pass the information from a SOP to a TOP one must use a TOP called Render. The Render TOP must be connected to three COMPs (Components) in order to create an image that can be displayed: a Geometry COMP (something to be rendered), a Light COMP (something to illuminate the scene), and a Camera COMP (the perspective from which the object is to be rendered). In this respect TD pulls from conventions familiar to anyone who has worked with Adobe’s After Effects.
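If it’s easier to see that relationship as a script rather than as nodes, here’s a minimal sketch of the same arrangement written for TouchDesigner’s Python textport. Treat the class names (geometryCOMP, cameraCOMP, lightCOMP, renderTOP) and the parameter names as my best guesses at the API rather than anything pulled out of this project file.

```python
# Minimal render-network sketch for TouchDesigner's textport.
# Class and parameter names below are assumptions; adjust to your build.

container = op('/project1')                        # network to build inside

geo    = container.create(geometryCOMP, 'geo1')    # something to be rendered
cam    = container.create(cameraCOMP,   'cam1')    # the rendering perspective
light  = container.create(lightCOMP,    'light1')  # something to light the scene
render = container.create(renderTOP,    'render1') # bridges the SOP family to the TOP family

# Point the Render TOP at the three COMPs it needs.
render.par.geometry = geo.name
render.par.camera   = cam.name
render.par.lights   = light.name
```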

Knowing the component pieces required to successfully render a 3D object, it’s easier to understand how I started to create the underlying geometry. The Geometry COMP is essentially a container object (with some special attributes) that holds the SOPs responsible for passing a surface to the Render TOP. The default Geometry COMP contains a torus as its geometry.

We can learn a little about how the COMP is working by taking a look inside of the Geometry object. 


Here the things to pay close attention to are the two flags on the torus object. You’ll notice in the bottom right corner there is a purple and a blue circle that are illuminated. The purple circle is a “Render Flag” and tells TouchDesigner to render the object, and the blue circle is a “Display Flag” which tells TouchDesigner that this is the object that should be displayed in the Geometry COMP.
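As a side note, those flags can also be flipped from a script. As far as I know the OP class exposes them as render and display attributes; the path below is a placeholder for wherever your Geometry COMP actually lives.

```python
# Toggle the Render and Display flags on the torus inside the Geometry COMP.
torus = op('/project1/geo1/torus1')
torus.render  = True   # purple flag: include this SOP when rendering
torus.display = True   # blue flag: show this SOP in the Geometry COMP's viewer
```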

Let’s take a look at the network that I created.

Now let’s dissect how my geometry network is actually working. At first glance we can see that multiple objects are being combined into a single piece of geometry that’s ultimately being passed out of this Geometry COMP. 

If we look closer we’ll see that the SOP network looks like this:

Grid – Noise – Transform – Alpha Noise (here the bypass flag is turned on)

Grid creates a plane built out of polygons. This is different from a rectangle, which is composed of only four points. In order to create a surface that could deform, I needed a SOP with points in the middle of it. The grid is attached to a Noise SOP that animates the surface. Noise is attached to a Transform SOP that allows me to change the position of this individual plane. The last stop in this chain is another Noise SOP. Originally I was experimenting with varying the transparency of the surface. Ultimately, I decided to move away from this look. Rather than cutting this out of the chain, I simply turned on the Bypass Flag, which turns off this single SOP. This whole chain is then repeated eight more times (for a total of nine grids).
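For anyone who would rather script this than copy and paste nodes, here’s a hedged sketch of how that repeated chain could be built from the textport. The SOP class names (gridSOP, noiseSOP, transformSOP), the connector calls, and the spacing value are assumptions, not something lifted from my project.

```python
# Sketch: build the Grid - Noise - Transform - Noise chain once per plane,
# with the final (alpha) Noise bypassed. Class names and spacing are assumptions.

geo = op('/project1/geo1')

def build_plane(index):
    grid  = geo.create(gridSOP,      'grid{}'.format(index))
    noise = geo.create(noiseSOP,     'noise{}'.format(index))
    xform = geo.create(transformSOP, 'transform{}'.format(index))
    alpha = geo.create(noiseSOP,     'alphanoise{}'.format(index))

    # Wire the chain left to right.
    noise.inputConnectors[0].connect(grid)
    xform.inputConnectors[0].connect(noise)
    alpha.inputConnectors[0].connect(xform)

    # Push each plane deeper into the "long box" (hypothetical spacing value).
    xform.par.tz = index * -1.0

    # I abandoned the alpha treatment, so this last SOP is bypassed.
    alpha.bypass = True
    return alpha

planes = [build_plane(i) for i in range(9)]   # nine grids in total
```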

These nine planes are then connected so that the rest of the network looks like this:

Merge – Transform – Facet – Texture – Null – Out

Merge takes all of the inputs and puts them together into a single piece of geometry. Transform allows me to move the object as a whole in space. Facet is a handy operator that computes the normals of a geometry, which is useful for creating some more dynamic shading. Texture was useful for another direction that I was exploring; ultimately I ended up turning on the bypass flag for this SOP as well. A Null, like in other environments, is really just a placeholder kind of object. In the idiomatic structure of TouchDesigner, the Null is an object that one places at the end of an operator string. This is considered a best practice for a number of reasons. High on the list is that a Null makes later changes easy: TouchDesigner allows the programmer to insert operators between objects, and by always ending a string in a Null it becomes very easy to make changes to the stream without having to worry about re-exporting parameters. Finally all of this ends in an Out. While the Out isn’t strictly necessary for this string, at one point I wasn’t sure if I was going to pass this geometry into another component. Ending in the Out ensured that I would have that flexibility if I needed it.
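Sketched the same way, the back half of the network (and the end-every-string-in-a-Null habit) looks roughly like this. Again, the class names and wiring calls are assumptions, and planes is the list returned by the build_plane() sketch above.

```python
# Sketch: merge the plane chains, then end the string in a Null and an Out.

geo = op('/project1/geo1')

merge = geo.create(mergeSOP,     'merge1')
xform = geo.create(transformSOP, 'transform_all')
facet = geo.create(facetSOP,     'facet1')
tex   = geo.create(textureSOP,   'texture1')
null  = geo.create(nullSOP,      'null1')
out   = geo.create(outSOP,       'out1')

# Merge accepts any number of inputs; connect each plane's output to it.
for plane in planes:
    plane.outputConnectors[0].connect(merge)

xform.inputConnectors[0].connect(merge)
facet.inputConnectors[0].connect(xform)
tex.inputConnectors[0].connect(facet)
null.inputConnectors[0].connect(tex)
out.inputConnectors[0].connect(null)

tex.bypass = True                         # the texture experiment is switched off
null.render, null.display = True, True    # render / display the finished string
```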

Neuro | The De-objectifier

Last semester Boyd Branch offered a class called the Theatre of Science that was aimed at exploring how we represent science in various modes of expression. Boyd especially wanted to call attention to the complexity of addressing how today’s research science might be applied in future consumable products. As a part of this process his class helped to craft two potential performance scenarios based on our discussions, readings, and findings. One of these was Neuro, the bar of the future. Taking a cue from today’s obsession with mixology (also called bartending), we aimed to imagine a future where the drinks you ordered weren’t just booze-filled fun-times, but something a little more insipidly inspiring. What if you could order a drink that made you a better person? What if you could order a drink that helped you erase your human frailties? Are you too greedy? Have a specialty cocktail of neuro-chemicals and vitamins to help make you generous. Too loving or giving? Have something to toughen you up a little so you’re not so easily taken advantage of.


With this imagined bar of the future in mind, we also wanted to consider what kind of diagnostic systems might need to be in place in order to help customers decide what drink might be right for them. Out of my conversations with Boyd we came up with a station called the De-Objectifier. The goal of the De-Objectifier is to help patrons see what kind of involuntary systems are at play at any given moment in their bodies. The focus of this station is heart rate and its relationship to arousal states in the subject. While it’s easy to claim that one is impartial and objective at all times, monitoring one’s physiology might suggest otherwise. Here the purpose of the station is to show patrons how their own internal systems make being objective harder than it may initially seem. A subject is asked to wear a heart monitor. The data from the heart monitor is used to calibrate a program, establishing a resting heart rate and an arousal threshold for the individual. The subject is then asked to view photographs of various models. As the subject’s heart rate increases beyond the set threshold the clothing on the model becomes increasingly transparent. At the same time an admonishing message is displayed in front of the subject. The goal is to maintain a low level of arousal and, by extension, to master one physiological aspect linked to objectivity.
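The mapping at the heart of the station is simple arithmetic. Here’s a small Python sketch of the idea; the numbers are illustrative rather than the calibration values we actually used, and the real logic lives in Isadora rather than in code.

```python
# Sketch of the De-Objectifier mapping: a calibrated resting rate and
# threshold turn a live BPM reading into the opacity of the clothing layer.
# All numbers below are illustrative placeholders.

def clothing_opacity(bpm, resting=65.0, margin=15.0, span=30.0):
    """Return opacity from 1.0 (fully clothed) down to 0.0 (fully transparent)."""
    threshold = resting + margin            # arousal threshold set at calibration
    if bpm <= threshold:
        return 1.0                          # below threshold: nothing changes
    excess = min(bpm - threshold, span)     # clamp so opacity bottoms out at 0.0
    return 1.0 - excess / span

for bpm in (62, 82, 95, 115):
    print(bpm, 'bpm ->', round(clothing_opacity(bpm), 2))
```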


So how does the De-objectifier work?! The De-objectifier is built on a combination of tools and code that work together to create the experience for the user. The heart monitor itself is built from a pulse sensor and an Arduino Uno. (If you’re interested in making your own heart rate monitor look here.) The original developers of this product made a very simple Processing sketch that allows you to visualize the heart rate data passed out of the Uno. While I am slowly learning how to program in Processing, it is certainly not an environment where I’m at my best. In order to work in a programming space that allowed me to code faster, I decided that I needed a way to pass the data out of the Processing sketch to another program. Open Sound Control is a messaging protocol that’s being used more and more often in theatrical contexts, and it seemed like this project might be a perfect time to learn a little bit more about OSC. To pass data over OSC I amended the heart rate Processing sketch and used the Processing OSC library written by Andreas Schlegel to broadcast the data to another application.
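My bridge lived in Processing with Schlegel’s library, but the same idea is easy to sketch in Python with the pyserial and python-osc packages. The port name, baud rate, OSC address, and destination below are placeholders, not the values from my sketch.

```python
# Sketch: read heart rate values from the Arduino's serial port and
# rebroadcast them as OSC messages for another application to consume.
# Assumes the Arduino sketch prints one integer BPM per line.

import serial                                       # pip install pyserial
from pythonosc.udp_client import SimpleUDPClient    # pip install python-osc

ser = serial.Serial('/dev/tty.usbmodem1411', 115200, timeout=1)
osc = SimpleUDPClient('127.0.0.1', 9000)            # the listening application

while True:
    line = ser.readline().decode('ascii', errors='ignore').strip()
    if line.isdigit():
        osc.send_message('/heartrate/bpm', int(line))
```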


Ultimately, I settled on using Isadora. While I started in MaxMSP, I realized that for the deadlines I needed to meet I was simply going to be able to program faster in Isadora than in Max. This was a hard choice, especially as MaxMSP, as a visual programming language, is quickly growing on me. I also like the idea of using Max because I’d like the De-objectifier to be able to stand on its own without any other software, and I think that Max would be the right choice for developing a standalone app. That said, the realities of my deadlines for deliverables meant that Isadora was the right choice.
My Isadora patch includes three scenes. The first scene runs as a pre-show state. Here a motion-graphics-filled movie plays on a loop as an advertisement to potential customers. The second scene is for tool calibration. Here the operator can monitor the pulse sensor input from the Arduino and set the baseline and threshold levels for playback. Finally there’s a scene that includes the various models. The model scene has an on-off toggle that allows the operator to enter this mode without the heart rate data changing the opacity levels of any images. Once the switch is set to the on position, the data from the heart rate sensor is allowed to have a real-time effect on the opacity of the topmost layer in the scene.

Each installation also has an accompanying infomercial-like trailer and video vignettes that provide individuals with feedback about their performance. Boyd described the aesthetic style for these videos as a start-up with almost too much money: it’s paying your brother-in-law who wanted to learn Premiere Pro to make the videos. It’s a look that’s infomercial snake-oil slick.




Reactions from Participants – General Comments / Observations

  • Couples at the De-Objectifier were some of the best participants to observe. Frequently one would begin the process and at some point become embarrassed during the experience. Interestingly, the person wearing the heart rate monitor often exhibited few visible signs of anxiety. The direct user was often fixated on the screen, wearing a gaze of concentration and disconnection. The non-sensored partner would often attempt to goad the participant by using phrases like “oh, that’s what you like huh?” or “you better not be looking at him / her.” The direct user would often not visibly respond to these cues, instead focusing on changing their heart rate. Couples nearly always convinced their partner to also engage in the experience, almost in a “you try it, I dare you” kind of way.
  • Groups of friends were equally interesting. In these situations one person would start the experience and a friend would approach and ask about what was happening. A response that I frequently heard from participants to the question “what are you doing?” was “Finding out I’m a bad person.” It didn’t surprise users that their heart rate was changed by the images presented to them, but it did surprise many of them to see how long it took to return to a resting heart rate as the experience went on.
  • By and large, participants had the fastest return to resting rate times for the images with admonishing messages about sex. Participants took the longest to recover to resting rates when exposed to admonishing messages about race. Here participants were likely to offer excuses for their inability to return to resting rate by saying things like “I think I just like this guy’s picture better.”
  • Families were also very interesting to watch. Mothers were the most likely family member to go first with the experience, and were the most patient when being goaded by family members. Fathers were the least likely to participate in the actual experience.
  • Generally participants were surprised to see that actual heart rate data was being reported. Many thought that data was being manipulated by the operator.

Tools Used

Heart Rate – Pulse Sensor and Arduino Uno
Programming for Arduino – Arduino
Program to Read Serial Data – Processing
Message Protocol – Open Sound Control
OSC Processing Library – Andreas Schlegel OSC Library for Processing
Programming Initial Tests – MaxMSP
Programming and Playback – Isadora
Video Editing – Adobe After Effects
Image Editing – Adobe Photoshop
Documentation – iPhone 4S, Canon 7D, Zoom H4n
Editing Documentation – Adobe Premiere, Adobe After Effects

Delicious Max/MSP Tutorial 4: Vocoder

This week I was gutsy: I did two MaxMSP tutorials. I know, brave. Sam’s tutorials on YouTube continue to be a fascinating way to learn Max, as well as yielding some interesting projects. This second installment of the week is about building a vocoder. The audio effect, now commonplace, is still incredibly rewarding, especially when run through a mic rather than a recorded sample. There is a strange pleasure in getting to hear the immediate effects of this on your voice, which is further compounded by the ability to add multiple ksliders (keyboards) to the mix. Below is the tutorial I followed along with yesterday, and a resulting bit of fun that I had as a byproduct.

A silly patch made for a dancer’s birthday using the technique outlined by Sam in his tutorial above.

Delicious Max/MSP Tutorial 2: Step Sequencer

Another MaxMSP tutorial from dude837 today in the afternoon. Today I worked through the step sequencer tutorial in the video below. This feels like slow going, and maybe a little strange since I keep jumping around in this set of tutorials. This is a rough road. Maybe it’s not rough so much as it’s slow at times. I guess that’s the challenge of learning anything new: it always has times when it’s agonizingly slow, and times when ideas and concepts come fast and furious. The patience that learning requires never ceases to amaze me. Perhaps that’s what feels so agonizing about school when we’re young – it’s a constant battle to master concepts, a slow road that never ends. Learning to enjoy the difficult parts of a journey is tough business. Anyway, enough of that tripe. On to another tutorial.



Sound Trigger | MaxMSP

Programming is often about solving problems, sometimes problems that you didn’t know you actually had to deal with. This past week the Media Installations course that I’m taking spent some time discussing issues of synchronization between computers for installations, especially in situations where the latency of wired or wireless connections creates a problem. When 88 computers all need to be “listening” in order to know when to start playback, how can you solve that problem?
Part of the discussion in the class centered around using the built-in microphones on modern laptops as a possible solution. Here the idea was that if every computer had its microphone turned on, the detection of a sound (say a clap) would act as a trigger for all the machines. Unless you are dealing with distances where the speed of sound becomes a hurdle for accuracy (sound travels at roughly 343 meters per second, so even a 10-meter spread between machines is only about 29 milliseconds of offset, less than a single frame of video at 30 fps), this seemed like a great solution. So I built a patch to do just that.
This Max patch uses the internal microphone to listen to the environment and send out a trigger message (a “bang” in Max-speak) when a set sonic threshold is crossed. As an added bonus, this patch also turns off the detection system once the trigger has fired. Generally, it has seemed to me that a fine way to keep things from going wrong is to streamline your program so that it’s running as efficiently as possible.
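For the curious, the same idea can be sketched outside of Max in a few lines of Python with the sounddevice library: watch the microphone’s level, fire a single “bang” when it crosses a threshold, and then stop listening. The threshold and block size here are guesses you would tune by ear.

```python
# Sketch of the sound trigger outside Max: listen on the default microphone,
# fire once when the RMS level crosses a threshold, then stop detecting.

import numpy as np
import sounddevice as sd          # pip install sounddevice

THRESHOLD = 0.2                   # RMS amplitude, 0.0 - 1.0, tune by ear
triggered = False

def on_audio(indata, frames, time, status):
    global triggered
    if triggered:                 # detection is "turned off" after the trigger
        return
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > THRESHOLD:
        triggered = True
        print('bang')             # start playback, send OSC, etc.

with sd.InputStream(channels=1, blocksize=1024, callback=on_audio):
    while not triggered:
        sd.sleep(100)
```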


Tools Used
Programming – MaxMSP
Countdown Video – Adobe After Effects
Screen Cast – ScreenFlow
Video Editing – Adobe Premiere

Sparrow Song | Drawing with Light

One of the effects that I’ve used in two productions now is one where lines appear to draw themselves in over time in a video. This effect is fairly easy to generate in After Effects, and I wanted to take a quick moment to detail how it actually works.

This process can start many ways. For Sparrow Song it started by connecting a laptop directly to the projectors being used and using Photoshop to map light directly onto the set. You can see in the photo to the right that each surface that’s intended to be a building has some kind of drawn-on look. In Photoshop each of these buildings exists as an independent layer. This makes it easy to isolate effects or changes to individual buildings in the animation process.

Here’s a quick tutorial about how I animated the layers to create the desired effect:


Now that I’ve done this several times it finally feels like a fairly straightforward process – even if it can be a rather time-consuming one.

Here’s an example of what the rendered video looks like to the playback system.

Here’s an album of documentation photos from the closing show.


Tools Used
Digital Drawing Input – Wacom intuos4
Mapping and Artwork – Adobe Photoshop
Animation and Color – Adobe After Effects
Photos – Canon EOS 7D
Photo Processing – Adobe LightRoom 4

Mapping and Melting

This semester I have two courses with conceptual frameworks that overlap. I’m taking a Media Installations course, and a course called New Systems Sculpture. Media Installations is an exploration of a number of different mediated environments, especially art installations in nature. This course deals with the issues of building playback systems that may be interactive, or may be self-contained. New Systems is taught through the art department, and is focused on sculpture and how video plays as an element in the creation of sculptural environments.

This week in the Media Installations course each person was charged with mapping some environment or object with video. While the Wikipedia entry on the subject is woefully lacking, the concept is becoming increasingly mainstream. The idea is to use a projector to paint surfaces with light in a very precise way. In the case of this course, the challenge was to cover several objects with simultaneous video.

I spent some time thinking about what I wanted to map and what I wanted to project, and was especially interested in using a zoetrope kind of effect in the process. I kept thinking back to late nights playing with Yooouuutuuube, and I wanted to create something that was loosely based on that kind of idea. To get started I found a video that I thought might be especially visually interesting for this process. My friend and colleague Mike Caulfield has a tremendous band called The Russian Apartments. The music he writes is both haunting and inspiring, an electronica ballad and call to action. It’s beautiful, and whenever I listen to a track it stays with me throughout the day. Their videos are equally interesting, and Gods, a video from 2011, kept coming back to me.


To start I took the video and created a composition in After Effects with 16 versions of the video playing simultaneously. I then offset each video by 10 frames from the previous one. The effect is a kind of video lag that elongates time for the observer, creating strange transitions in both space and color. I then took this source video and mapped it to individual surfaces, creating a mapped set of objects all playing the same video with the delay. You can see the effect in the video below:
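If you wanted to reproduce that offset outside of After Effects, a rough moviepy sketch of the same arithmetic (each copy starts ten frames later than the one before, arranged in a 4 x 4 grid) might look like this. The filename is a placeholder, and the import path may differ between moviepy versions.

```python
# Sketch: 16 copies of the same clip, each offset 10 frames from the last,
# tiled into a 4 x 4 grid. Source filename is a placeholder.

from moviepy.editor import VideoFileClip, clips_array

src = VideoFileClip('gods.mp4')
offset = 10 / src.fps                      # ten frames, expressed in seconds

# Trim every copy to the same length so the grid plays cleanly.
length = src.duration - 15 * offset
copies = [src.subclip(i * offset, i * offset + length) for i in range(16)]

grid = clips_array([copies[r * 4:(r + 1) * 4] for r in range(4)])
grid.write_videofile('gods_offset_grid.mp4')
```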




Meanwhile in my sculpture course we were asked to create a video of process art: continuous, no cuts, no edits. I spent a lot of time thinking about what to record, and could not escape some primal urge to record the destruction of some object. In that vein I thought it would be interesting to use a slow drip of acetone on styrofoam. Further, I wanted to light this with a projector. I decided to approach this by reusing the mapping project that I had already created, instead framing the observation of this action from a closer vantage point. Interestingly, Gods has several scenes where textures crumble and melt in a way similar to the acetone’s effect on the styrofoam. It wasn’t until after I was done shooting that I saw this similarity. You can see the process video below:




Tools Used:

Mapping – MadMapper
Media Generation – Adobe After Effects
Projector – InFocus IN2116 DLP Projector

Media Installations | Simple Sync

Assignment Description

Simple Synch Youtube Rips

Look at the synchronization utility I made in Max, and see if you can get at least two computers to synch up some kind of video playback. More computers could be even more interesting.

My Patch

Initially I had a lot of ideas about where to go with this assignment. Unfortunately, I’m brand new to Max 6, and the learning curve is a little steep. That said, my first thought was a four-computer arrangement. These four laptops would be arranged in an “x” formation, with a laptop at each point of the “x.” Computers that were across from one another would stream the video from their built-in cameras to the opposite screen. The resulting experience for the observer would be looking at a screen that was always displaying the viewer’s back. This configuration required that the video stream be passed over TCP/IP to the other computer. While this was a great start, ultimately I had to move in a different direction – I ended up experiencing a fair amount of difficulty passing a video signal over TCP/IP. I’m sure that this can be done, but I wasn’t having any luck and needed to move on.



In moving forward, however, I stumbled upon dude837’s video reverb patch. After following along with his tutorial I was able to successfully replicate his patch and the effect that it created. I then loaded the patch onto two computers and made a few small changes. Specifically, I created one patch with a udpsend object and another with a udpreceive object. The receiving patch watches the network for an incoming message that initializes the webcam and starts the effect; the control computer starts and stops the effect on both machines. Hit the video below to see how this works, and download the Max patches if you’d like to experiment yourself.
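If you want the same start/stop trigger outside of Max, the bones of it fit in a few lines of Python with the standard socket module. Note that Max’s udpsend/udpreceive objects actually speak OSC under the hood, so this plain-UDP sketch only shows the trigger idea; the port and message text are placeholders.

```python
# Sketch of the udpsend / udpreceive pairing with plain Python sockets.
# One machine broadcasts a trigger; the other waits for it and then
# starts its webcam effect. Port and message text are placeholders.

import socket

PORT = 7400

def send_trigger(host, message=b'start'):
    """Control computer: tell a playback machine to start."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message, (host, PORT))

def wait_for_trigger():
    """Playback computer: block until the trigger arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(('', PORT))
        data, addr = s.recvfrom(1024)
        print('received', data, 'from', addr)   # start the webcam / effect here

# Example: send_trigger('192.168.1.12') from the control machine.
```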


Video


My Video Reverb Patches

     Video Reverb Send Maxpat
     Video Reverb Receive Maxpat