
WonderDome | Workshop Weekend 1

WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis: a fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection, a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases the boundaries between performer and audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid-October of 2013 the team of designers and our performer had our first workshop weekend, where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16-foot-diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that's working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16-foot-diameter dome to build, we opted to purchase only the hub pieces for this structure and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once it was assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their service period that we were able to have for free. Our only hiccup here was that the parachute was multi-colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about the challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines that are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short-throw lensing. Our three HD projectors at a 16-foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges at the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer build both the content and the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software we’re test driving as a development platform is Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs and with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that he needed some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backward through a media stack, and to zoom and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be overstated, and it was thrilling to have something so quickly accessible.
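For anyone curious about the plumbing, here's a rough sketch of what listening for TouchOSC messages can look like, using the python-osc library. The addresses and port below are hypothetical placeholders, not our actual WonderDome mappings – in our case the OSC In operators inside TouchDesigner handled all of this – but it shows the shape of what a TouchOSC button sends over the network.

```python
# Minimal sketch of receiving TouchOSC messages with python-osc.
# The OSC addresses and port are hypothetical placeholders.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def next_cue(address, *args):
    print("advance the media stack")

def previous_cue(address, *args):
    print("step back through the media stack")

dispatcher = Dispatcher()
dispatcher.map("/remote/next", next_cue)          # a button in the TouchOSC layout
dispatcher.map("/remote/previous", previous_cue)  # another button

# Listen on all interfaces, on whatever port TouchOSC is sending to.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```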

Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackling this. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog here)

Broad Strokes

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.


Thoughts from the Participants:

From Adam Vachon

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that an Ultra Wide Band RFID Real Time Location System may be the answer, but such systems are expensive. That said, I am now in communications with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.

How do you plan to overcome some of these challenges?

Boot Camping my MacBook Pro will help in the short term computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.


From Julie Rada

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge, but as I try to internally catalogue what resources or skills I am utilizing in this process, so far one of the major ones is patience. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, but they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, I knew directing audience attention in an immersive space would be a challenge, mostly due to the fundamental shape of the space and the audience relationship. Working with such limitations for media and lights is extremely difficult in regard to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.

Notes about performer skills:

Soft skills: knowing that it isn’t about you, patience, sense of humor
Practical skills: puppeteering, possibly the ability to run some cues from a handheld device

Visualizing OSC Data | TouchDesigner

After looking at how to work with accelerometer data in Isadora in an earlier post, I thought it might also be worth looking at how to approach the same challenge working with Derivative’s TouchDesigner. In the Spring of 2013, for an installation piece, I used TouchDesigner to create a sculpture with a reactive projection component. While I learned a lot about working with TouchOSC in the process, I didn’t spend much time really digging into understanding what kind of data I was getting out of my sensor – in this case an iPod Touch running TouchOSC, broadcasting data over a wireless network. This type of sensor is one that I hope to use in future live performances, and so spending some time really digging into what it detects is an area of interest for me.

TouchOSC to TouchDesigner

To start we need to first capture the OSC data with TouchDesigner. I’m using TouchOSC for this process. With TouchOSC installed on your mobile device you first need to make sure that you’ve identified your IP address and chosen a port number that corresponds to the port number you’ve selected in TouchDesigner. Next we need to ensure that TouchOSC is broadcasting the accelerometer data. You can do this by first tapping on the “Settings” icon in the upper corner of the TouchOSC panel. From here select “Options.” Under Options we need to make sure that “Accelerometer (/xyz)” is enabled.

Back in TouchDesigner we’ll need to do a little work to get started. You’ll need to create a new network, and toss out all of the template OPs so we can get started fresh. You can choose to create a new container or to use the one that’s present in the template file. Once we’re in this first container we’ll start by creating an OSC In Channel Operator (CHOP). I’ve set my network port to 5001 (the same as in TouchOSC). Once your port numbers are set you should see three bars in the CHOP reporting out the data from TouchOSC. We can see with a quick glance here that we have three channels of information: accxyz1, accxyz2, and accxyz3. Once we dive into our network we’ll come back to this in order to control our visualization.
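As a quick sanity check once data is arriving, you can also read those channels from a script inside TouchDesigner. This is a small sketch assuming the default operator name 'oscin1' – adjust the name to match whatever you called your OSC In CHOP.

```python
# Run from a Text DAT or the Textport in TouchDesigner once data is
# arriving: print the three accelerometer channels from the OSC In CHOP.
# 'oscin1' is an assumed name - use whatever you named your OSC In CHOP.
osc = op('oscin1')

for name in ('accxyz1', 'accxyz2', 'accxyz3'):
    print(name, osc[name].eval())   # current sample value for this channel
```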

The Big Picture

First let’s take a look at the full patch and talk through what our next steps are going to be. In order to have a fuller sense of what the accelerometer data is doing, we’re going to create a simple visualization. We’ll do this by creating a small sphere for each data stream, and map their change to the vertical axis, while mapping a constant change to the horizontal axis. We’ll then set up a simple feedback effect so we can see a rough graph of this data over time that will automatically be cleared each time our visualization wraps around the screen. This approach should allow us to see how the three data channels from the accelerometer correlate to one another, as well as giving us a chance to consider how we might work with this kind of data flow inside of a TouchDesigner network. Start to finish we’ll work with all of the different classes of operators (Components, Textures, Channels, Surfaces, Materials, and Data), as well as look at connecting operators and exporting parameters.

Getting our OSC Data Ready

While the process of programming often bounces back and forth between modules, I’m going to start us out by looking at how I’ve parsed the OSC data so that it can be useful for us as we transition to other operators in our network. The flow of operators looks like this:

OSC In – Select – Lag – Math – Null

The same OSC In is passed to two more copies of the “Select – Lag – Math – Null” string, giving us one string per accelerometer channel. While there are plenty of other ways to accomplish this same flow of data, this will allow us to make changes to the mapping of data coming out of individual channels (if we happen to need that option). We start with the OSC In, and then pass this value to a Select CHOP. The Select CHOP helps us to pull a single stream of data out of our OSC In CHOP. In adding the Select CHOP make sure that in your parameters window (shortcut key “p”) you’ve selected the appropriate channel. You’ll notice that in the example below, I’ve selected “accxyz1” as the first select operator.

Next I’ve added a Lag CHOP. This operator will allow us to control the rate at which the data is being processed from our sensor. In essence, this allows us to smooth out the noise from the accelerometer, in effect making it a little less sensitive. Your mileage may vary here, but for my current configuration I’ve set the Lag values to 0.5 and 0.5. After you get your system up and running you may well want to return to these parameters, but for the time being this is a fine place to start.

The Math CHOP allows us to remap the incoming values to a different range. This is another place that will require some individual adjustments once you get your whole network set up. To get started we can begin by setting the From Range to -2 and 2. The To Range should be set to -1.5 and 1.5. While you’re not likely to see values of -2 or 2 coming from your accelerometer when you get started, this will make sure that there isn’t any data that’s outside of our measurement range.

We’ll end our string with a Null. It’s considered good TouchDesigner programming practice to always end your strings with a Null. This ensures that if you want to change your string, add operators, or in any way alter your work, you don’t need to re-export any data. So long as you have ended your work with a Null, you can always make changes upstream of this object. While it feels cumbersome when you’re first starting in TouchDesigner, it’s hands down a practice that will save you time and headaches as you make increasingly complicated networks.

Last, but not least, don’t forget to add a Text DAT to the front end of your string. This operator doesn’t do anything in terms of function, but it does allow you a space to write comments to yourself (or whoever is going to be working with your network). Making a few notes about how your network is working, and your underlying thoughts in setting up your chain of operators, will help refresh your memory when you come back to a portion of your network after focusing on other areas.
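If it helps to see the arithmetic behind the Lag and Math CHOPs, here's a rough plain-Python sketch of the same idea – smoothing a stream of samples and remapping them from one range to another. This is only an illustration of the math, not TouchDesigner code, and the lag function is a simplification of what the Lag CHOP actually does.

```python
# Rough plain-Python illustration of what the Lag and Math CHOPs do to a
# single accelerometer channel. Not TouchDesigner code - just the idea.

def lag(previous, current, amount=0.5):
    """Smooth incoming samples; a larger 'amount' means a slower response."""
    return previous + (current - previous) * (1.0 - amount)

def remap(value, from_lo=-2.0, from_hi=2.0, to_lo=-1.5, to_hi=1.5):
    """Linearly map a value from one range to another (the Math CHOP's Range page)."""
    normalized = (value - from_lo) / (from_hi - from_lo)
    return to_lo + normalized * (to_hi - to_lo)

smoothed = 0.0
for sample in (-1.8, -0.4, 0.2, 1.9):   # pretend accelerometer readings
    smoothed = lag(smoothed, sample)
    print(round(remap(smoothed), 3))
```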

Working with Surface Operators

Anytime we’re drawing some geometry in TouchDesigner we need to use Surface Operators (SOPs). While we could do this same process with an image by only using Texture Operators (TOPs), that would mean relying on something other than TouchDesigner to create our artwork. For this particular exercise, I think it’s worth thinking about how we might use TouchDesigner exclusively to create the visualization we’re going to see. Let’s take a look at the whole SOP network before we dive into how to put this all together.

As a general practice we can see out of the gate that we’re going to use a single sphere and then create three copies of that single geometry, which we’ll then drive by exporting our TouchOSC CHOPs to change the behavior of our spheres. We’ll end the whole process by passing our SOPs into a Geometry component and applying a Phong shader.

Let’s start by first creating a single Sphere SOP. Next we’ll connect that to a Transform SOP, and then pass our sphere into a Geo Component. To do this, we’ll need to make a few changes to the standard Geo Component. Start by creating a new Geo Component and double click on the Geo. Once inside, delete the placeholder Torus and add an In SOP. Make sure that you turn on both the blue display flag and the purple render flag on the In SOP. By replacing the Torus with an In we’ll now be able to pass SOP strings into the Geo Component from the outside. Exit the Geo by hitting the “u” key (u for up), or by using the mouse wheel to zoom out of the component. You should now notice that there’s an inlet on the left side of our Geo Component. You can now connect the Transform SOP to the Geo, and you should see a picture of the sphere in the window.
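If you'd rather script the flag step than click it, something along these lines should work from a Text DAT or the Textport – the names 'geo1' and 'in1' are assumed defaults, so adjust them to match your network.

```python
# A scripted version of the flag step, assuming the default operator
# names 'geo1' and 'in1'.
in_sop = op('geo1/in1')
in_sop.display = True   # blue display flag
in_sop.render = True    # purple render flag
```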

Next create a Phong Material (MAT). Our Phong is going to allow us to add some color to our Geo. Drag the Phong on top of the Geo – you should see an arrow with a plus sign appear to tell you that you’re applying this Phong shader to this Geo Component. In the options for the Phong you can now select any color you’d like. With one whole string completed, you can select the string from Transform through Phong, then copy and paste the string two more times for a total of three spheres. Make sure that you connect the Sphere SOP to the two additional strings, and you should be in business.

Next we’re going to export the CHOPs from our accelerometer channels to change the location of our spheres. To make this tutorial readable, I’m going to forgo detailing how to export CHOPs. If this isn’t something you’ve done before, you can look for more information about exporting CHOPs in TouchDesigner by first looking at Derivative’s official documentation here.

Export the Null CHOPs for accxyz1-3 to the Transform SOPs of the three respective Spheres. Make sure that you export the Nulls to the Y position field of the Transform SOPs. All three spheres should now be moving up and down based upon the change in accelerometer data for each channel.
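If you prefer typing references to dragging exports, a CHOP reference expression in each Transform SOP's Translate Y parameter does the same job. The operator and channel names below assume the defaults used in this walkthrough; the same approach also works for the size parameters in the next step.

```python
# Typed into the Translate Y parameter of each Transform SOP (in
# expression mode), a CHOP reference behaves just like a dragged export.
# Operator and channel names are assumed defaults - rename to match yours.
op('null1')['accxyz1']   # transform1 -> ty
op('null2')['accxyz2']   # transform2 -> ty
op('null3')['accxyz3']   # transform3 -> ty
```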

We’re also going to want to be able to change the size of our spheres based upon the overall look of our visualization. To do this we’re going to set up a simple string of CHOPs that we’ll then export to all three Transform SOPs. Start by creating a Constant CHOP and connect it to a Null CHOP. The Constant outputs a constant number in a channel and will allow us to change the dimensions of our spheres collectively. Next export the Null from this string to all three size attributes (x, y, z) of all three Transform SOPs. In the parameters of the Constant change the value of the channel to 0.025.

Finally, don’t forget to add a Text DAT. Again, this is a great place to add some notes to yourself, or notes for your operator about what you’ve done and what’s important to consider when coming back to work with this string later.

Changes over Time

Moving up and down is only a part of this challenge. While what we’ve made so far is interesting, it still lacks a few key elements. For starters, we want to be able to see this change over time. In our case that means seeing the spheres move from left to right across the screen. Here’s a look at the string of operators that’s going to help us make that happen:

Wave – Math – Null 
and
Wave – Math – Trigger – Null

We’re going to use a single Wave CHOP to drive the lateral movement of our spheres, and to act as the trigger to erase our video feedback that we’ll make once we’ve set our geometry to render.

The first thing we need to do is to create a Wave CHOP. This operator will create waves at regular intervals, which we can control with the attributes of this operator. First we want to change the type of wave to a ramp. A ramp will start at 0 and increase to a specified value, before returning to 0. Next we’ll set the Period and the Amplitude to 30.

Next we need to look at the Channel tab of this operator’s parameters. On the Channel tab we need to set the End value to 30 seconds. The values that we’re currently changing control the time it takes for the wave to cycle. You may ultimately find that you’d like this to happen faster or slower, but we can use these values at least as a place to start. We also need to set the End and REnd values to 1800 (60 frames per second * 30 seconds = 1800 frames). This will ensure that we have enough time for our animation to actually wrap from right to left.

The first Math CHOP that we’re going to change is on the top string. This operation is going to scale our wave values to make sure that we’re starting just off the screen on the left, and traveling to just off the screen on the right. In the From Range insert the values 0 and 30. In the To Range change the values to -2.1 and 2.1. Connect this to the Null. Next export the Null CHOP to the X position field of all three Transform SOPs.
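Reusing the remap sketch from earlier, we can sanity-check those numbers before wiring anything up:

```python
# Quick check of the wave-to-X mapping using the remap() sketch from above.
for wave_value in (0, 15, 30):
    x = remap(wave_value, from_lo=0, from_hi=30, to_lo=-2.1, to_hi=2.1)
    print(wave_value, round(x, 2))
# 0 -> -2.1 (just off screen left), 15 -> 0.0 (center), 30 -> 2.1 (just off screen right)
```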

One of the things we can do to plan ahead, is to build a trigger based on this same wave that will erase our video feedback that’s going to be generated in the rendering process. Create a new Math CHOP and connect it to the same Wave CHOP. In the From Range change the values to 0 and 30, and in the To Range change the values to 0 and 1.

Next we’re going to add a Trigger CHOP to our second string. A trigger can be used for any number of purposes, and in our case it’s going to help ensure that we have a clean slate for each time our spheres wrap from the right side of the screen back to the left. With the Math CHOP connected to the trigger, the only change we should need to make is to ensure that the Trigger Threshold is set to 0. Connect the trigger to a Null. We’ll come back to this CHOP once we’re building the rendering system for our visualization.

Rendering

Rendering a 3D object is fairly straightforward provided that you keep a few things in mind. The process of rendering an object requires three components: at least one Geometry (the thing being rendered), a Camera (the perspective from which it’s being rendered), and a Light (how the object is being illuminated). We’ve already got three Geo components, now we just need to add a light and a camera. Next we’ll add a Render TOP, and we should see our three spheres being drawn in the Render TOP window. If you want to know a little more about how this all works, take a moment to read through the process of rendering in a little more detail. Let’s take a look at our network to get a sense of what’s going to happen in the rendering process.

There are a couple of things happening here, but the long and short of it is that we’re going to create a little bit of video feedback, blur our image, add a black background, and create a composite out of all of it.
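Before we build those strings, here's a minimal scripted sketch of the camera / light / Render TOP setup described above. The operator names and the wildcard in the Geometry parameter are assumptions, and dropping the operators in by hand works just as well.

```python
# A sketch of the basic render setup, assuming default-style names.
# Run from a Text DAT inside the container that holds geo1, geo2, and geo3.
container = parent()

cam    = container.create(cameraCOMP, 'cam1')
light  = container.create(lightCOMP, 'light1')
render = container.create(renderTOP, 'render1')

render.par.camera   = 'cam1'
render.par.lights   = 'light1'
render.par.geometry = 'geo*'   # wildcard picks up geo1, geo2, and geo3
```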

We’re going to start by making these two strings:

Render – Feedback – Level – Composite – Blur
and
Render – Level – Over – Over – Out

Here’s a closer look at the left side of the strings of operators that are rendering our network (as a note, our Render TOP is off the screen to the left):

Once our string is set up, make sure that your Feedback is set to target Comp1. Essentially what we’re creating here is a layer of rendered video that is the persistent image of the spheres passing from left to right. Additionally, at this point we want to export the Trigger CHOP to the Bypass parameter of the Feedback TOP. When the Bypass value is greater than 0 it turns off the feedback effect. This means that when our trigger goes off the screen will clear, and then the Bypass value will be set back to 0.
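As with the position exports, the Bypass parameter can also take a typed CHOP reference instead of a dragged export. The name below is a stand-in for whatever you called the Null downstream of your Trigger CHOP.

```python
# Typed into the Bypass parameter of the Feedback TOP (in expression mode).
# 'null_trigger' is a placeholder name - use the Null that follows your
# Trigger CHOP. Indexing by 0 grabs its first (and only) channel.
op('null_trigger')[0]
```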

Now let’s look at the right side of our rendering network:

Here we can see our over TOPs acting as composite agents. The final Over TOP is combined with a Constant TOP acting as a black background. Finally all of this is passed to an Out TOP so it can be passed out of our container.

That’s it! Now we’ve built a simple (ha!) visualization for the three channels of data coming out of an iPod Touch accelerometer passed over a wireless network. The next steps here are to start to play. What relationships are interesting, or not interesting? How might this be used in a more compelling or interesting way? With a programming environment like TouchDesigner the sky is really the limit, it’s just a matter of stretching your wings.