In the Fall of 2011, I decided to pursue a master’s degree. This decision came after seven years of living and working in New England for a variety of educational programs. After completing my undergraduate program I had taken a position with an educational outreach program in Keene, New Hampshire serving low-income, first-generation college students. After five years I transitioned to working for a faculty development office focused on technology and curriculum integration and development. During my time in the Northeast I also trained and performed as a circus artist, photographer, and videographer. In many respects this life felt fractured – a work / life paradigm defined by fractures and separations rather than unities and confluences. Where was the art in my technology, and where was the technology in my art? Certainly one might argue that videography and photography provided some intersection in my development as an artist, but I wanted more. I wanted to be part of the growing industry of developer / programmer / artists I found spread across the web who were, like the agents of previous art movements, changing the way western culture saw the world.
In particular, one of the most poignant examples of the intersection of art and performance that made a lasting impression on me was Amon Tobin’s ISAM tour. For this show Tobin moved away from presentations structured as DJ sets to performances that were audio-visual experiences. This transition came as a collaboration with a number of large-scale design studios, both for the fabrication of the physical environment and for the media system driving visuals for the performance. This was a concrete expression of the kind of work I wanted to be a part of – an intersection of video production, animation, and live visuals. Uninitiated to the practicalities and challenges of this work, I quickly dreamt of unifications with other forms – live media and dance, circus, theatre, et al.
My interest in these intersections eventually pushed me to find the Interdisciplinary Digital Media and Performance program at Arizona State University. Situated between the School of Film, Dance and Theatre and the School of Arts, Media and Engineering, it was one of the few programs in the country focused on cultivating and training artist / programmers for this exciting field.
What follows is an examination of my three years in an interdisciplinary program as a case study for understanding current and future trends in the design, instruction, and implementation of new media. Towards that end, this document will focus on the formative projects that have shaped my experience and focus, the challenges that have helped define my area of specialty, and the larger ideological frames that I continue to encounter and interrogate as a part of my artistic and technical practice.
Some Disclosures
Before beginning this particular examination and reflection it is important to make some explicit disclosures about my history, perspective, methods, and intention. As a thirty-three-year-old white male, I come from a largely privileged and culturally central paradigmatic perspective. Raised in a family of meager means, my childhood was, in many respects, informed by the challenges faced by working-class families. My parents would eventually divorce, but through my childhood and early teenage years there was always a large focus on education, self-reflection, and societal contribution. My first exposure to computers came when my father brought home a Radio Shack TRS-80 from a garage sale.
Built around the BASIC programming language, my formative experiences with computers as tools were founded on the principle of learning to code by painstakingly copying programs line by line from a provided booklet. As a young child much of my experience here consisted of watching my father carry out this Sisyphean task, but it left me with a lasting impression of the nature of building operations line by line.
As I grew into my teens my parents’ looming divorce necessarily created shifts in our family. To scratch my technology itch I spent time with our next-door neighbors, one of whom was a retired Air Force officer with a penchant for tinkering with computers. Forever in the process of learning, Russell would endlessly talk me through the inner workings of MS-DOS, file management, and computers as tools for computation and not just entertainment. I was, like any pre-teen, primarily interested in games more than any other application. Russell was infinitely patient with me, always allowing me some indulgences while also pushing me to learn more about the application of computational principles. After my parents’ divorce my mother doubled down on the importance of technology in our lives – both for me as a student, and for her as she picked up various part-time jobs requiring basic computer skills.
This early grounding of the personal computer as tool, entertainment, and expressive medium left me with an embedded value structure. Given the current socio-cultural dependence on computers it’s hard to remember a world where this was the deviation and not the norm; however, for a teen in the 90s who was in band, the school chamber choir, school plays, the photography darkroom, and lit club, but who also loved computers, it was a strange time. Through high school and into college my love of computers and their power never changed, and I subsequently looked for their application and use in all that I studied and did.
All of this serves as a disclosure to some of the invisible biases that have shaped my experiences, and are likely to influence the reflection that follows. While I am often very personally cognizant of the disruptive and sometimes-destructive nature of distributed computation as a part of our cultural fabric, my personal history would largely leave me characterized as techno-utopian. This is further reinforced by my gender and ethnicity. That is to say that it should come as no surprise that a white male whose formative cultural years were in the 80s and 90s likes computers – and in fact, that the cultural and logical biases of these machines and their operating systems are largely invisible to him.
Year One 2012-2013
My first year at ASU in the Interdisciplinary Digital Media and Performance program was largely characterized by a whirlwind of projects and experiences. My larger intention was to use my time in the following way: year one would be a broad sampling of possible avenues for future focus; year two would be a time to narrow my focus and to begin to develop a specialty; and my final year would further narrow my scope with an emphasis on the skills developed over the previous two years. This seemed like a logical and driven approach to the use of my time at ASU, allowing both for broad exploration and for the development of concrete skills and specialties. The aggressive approach of my first year meant that I contributed to a total of twelve projects and productions. This was not an insubstantial endeavor, and though exhausting it left me with a broad sampling of technologies and areas for exploration. It also meant that I was quickly exposed to both the theoretical discourse of this field and the underlying technologies employed in its execution. Of special note are three projects that brought this first year into sharp focus: ¡Bocón!, Neuro, and Concrete Matters at Bragg’s Pie Factory.
¡Bocón! was a main stage production for the ASU School of Film, Dance, and Theatre season. A theatre for youth production, ¡Bocón! told the story of a young boy’s perilous escape from a war-torn community, the search for his voice, and the hope of refuge in the United States. The production’s design was characterized by a mélange of cultural forms, puppets, masks, live music, and fully embodied movement of the performers. Running the gamut from nightmarish to comedic moments, the production was staged in the round on the Galvin stage in the Nelson Fine Arts Center. Daniel Fine asked me to assist his media design on the production, and we quickly set ourselves to the task of thinking hard about how to fill the space with projection to both support and influence the narrative and design of the production. This particular production would use nine projectors to create a screen that wrapped around the audience and blanketed the floor of the playing space. The imagery for this production was non-realistic in nature, relying on the use of illustration, animation, and abstract forms.
As my first assisted design at ASU, this was equal parts terrifying and thrilling. Working with Dan was tremendously rewarding and gave me the opportunity to talk deeply with another designer about how to cultivate meaning, develop and support moments in the production, design the playback system, install equipment, and make content for a live production on this scale. This would be my first major exposure to the ins and outs of projection mapping, lensing, projector placement, content playback programming, installation, content creation for non-traditional screen sizes, display methods, communication protocols, and asset management. If there was a metaphorical deep end of the pool, I had jumped in – with ankle weights on.
Another large focus for this production was on creating moments of interactivity between the media and performers. Initially Dan and I experimented with various methods of developing and creating these moments. This would often mean starting with questions of sensing – how does the computer see the performance space and the performers? Perhaps one of the most important lessons I learned in working on ¡Bocón! was the importance of research; for this production that manifested around questions of equipment and software for sensing and performer tracking. While our research was very fruitful, we also quickly ran up against the limitations of our budget in relation to the equipment required for our initial designs. Undeterred, we looked for other means to simulate the effect that was too costly to implement with hardware.
The research we ultimately presented to the director was inconsistent with the direction of the production, and so the interactivity elements were cut from the final design. Like so much of life in the theatre, it was imperative that we pursued our idea with as much passion and fervor as possible even if it wasn’t going to be implemented in production. Though tremendously disappointing, removing this element from the production freed us to focus on other important elements of the design.
In hindsight, ¡Bocón! was perhaps the best start I could have had in experiencing media design for the stage. The design was overly ambitious, unwieldy, and often maddening – but I was never alone in wrestling with these ideas. As a part of a team of designers and artists we worked together to solve these problems, and find creative solutions even when they were not immediately apparent. Again and again I learned to return to the question, “but is it good / right for the show?” Even when this meant cutting beautiful sequences of video, our focus always returned to the central question of how what we were making related to the production.
This was also the beginning of a longer-term creative collaboration with Dan Fine and Steve Christensen, artists whose work and observations continually inspire me to be better at what I do. Additionally, this pulled into sharp focus exactly how much was left to learn and what was in front of me for the next two and a half years.
Boyd Branch’s applied project, Neuro, began in the Fall as a theatrical devising course centered on the Theatre of Science. Boyd was focused on the intersections between science and theatre, and how one might use theatrical techniques to explore scientific concepts. During the Fall of 2012 the project explored different ideas that would then be staged as a full production in the Spring semester. Through this process participants brought in various pieces of scientific research each week, and Boyd led discussions and improvisations around the topics selected. At the end of the Fall semester two scenarios were presented in a small public showing: Algae Nation and Neuro. Algae Nation was imagined as a tech expo running from the present day to an imagined fifty-year future at regular intervals. The ideas explored ranged from the highly probable to the absurd as each subsequent iteration of the expo raised the stakes of human obsession. Neuro imagined the bar of the future, where human neurochemistry could be manipulated in order to elicit particular emotional states and behaviors. Are you too cold and unfeeling? Neuro has a cocktail to make you more compassionate and loving. Out of these two staged readings Neuro was selected to be developed further in the Spring.
Borrowing from some of the ideas explored in Algae Nation, Neuro developed into a bar with four interactive stations where patrons could learn about their invisible neurochemical biases. I developed a station called the De-Objectifier. The De-Objectifier was designed to monitor a participant’s heart rate while they observed the image of a model on a screen. As a participant’s heart rate increased, the model’s clothing became increasingly transparent; conversely, as the participant’s heart rate decreased, the model’s clothing became more opaque. An accompanying admonishing message also appeared alongside the model on the screen. This particular installation played on the mostly invisible autonomic systems that are always present in our bodies. While this was reductively exploited to admonish participants for objectifying the models, it also stood as a clear reminder that much of the functional nature of the body is invisible to us – the corpus that we inhabit is forever in flux, and those physical changes play a part in one’s experienced emotional state.
The De-Objectifier was my first interactive installation. An LED and attached photo sensor connected to an Arduino measured changes in brightness to determine heart rate. This reading was then passed over USB to a Processing sketch running on a laptop, which in turn passed the signal internally via Open Sound Control (OSC) to Isadora. The custom application in Isadora had both a control panel for use by the operator, in this case me, and a front-facing display for the user / participant. Where ¡Bocón! had focused on a purely presentational system for the media, the De-Objectifier was deeply dependent on interactivity. Participants often required coaching to remember that they could lower their heart rate with steady, even breaths, or reassurances to help put them at ease about their emotional experience of being judged by a faceless computational system.
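A minimal sketch of that serial-to-OSC bridge illustrates the pipeline. The original used a Processing sketch; here Python, with the pyserial and python-osc libraries, stands in for it, and the port name and the /heartrate address are hypothetical:

    # serial_to_osc.py - forward Arduino heart rate readings to Isadora via OSC.
    # Assumes the Arduino prints one integer reading per line over USB serial,
    # and Isadora listens for OSC on localhost port 1234. The serial port name
    # and the /heartrate address are hypothetical stand-ins.
    import serial                                      # pyserial
    from pythonosc.udp_client import SimpleUDPClient   # python-osc

    arduino = serial.Serial('/dev/tty.usbmodem1411', 9600)
    isadora = SimpleUDPClient('127.0.0.1', 1234)

    while True:
        line = arduino.readline().decode(errors='ignore').strip()
        try:
            bpm = int(line)
        except ValueError:
            continue  # skip empty or malformed readings
        isadora.send_message('/heartrate', bpm)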
There were also a huge number of lessons learned in the execution of this project. Over the course of two iterations – first on campus at the Emerge festival, and then at the SPARK festival at the Mesa Arts Center – the system design moved from being centrally administered to being distributed across multiple laptops. The challenges of installation management and environmental show control were enormous, and required repeated evaluation and restructuring. While I was responsible for the De-Objectifier, I also assisted in the installation and management of the entire system. The challenges of signal and power management came into sharp focus on this project, leaving me with a lasting appreciation for careful planning early in any project.
Neuro was a remarkable first-year experience. Seeing an installation / production from ideation through implementation over two venues meant that I experienced, firsthand, the stresses and rewards of creating something entirely new. Wildly different from my experience with ¡Bocón!, Neuro was an event directly centered on engaging and interacting with audience members. The experience of being largely self-directed for a stand-alone installation helped to emphasize just how much attention to detail and focus is required when building an interactive installation. This also helped to draw attention to the necessity of strong coding skills. While which language or environment to learn was still something I hadn’t decided / discovered, it was suddenly clear that over the next two years I would not just be building visual compositions, but whole tools and applications. The mental shift precipitated by this project was in beginning to understand the complex distribution of assets and code that make any project work – what is the system responsible for doing in real time, what’s played back from a file, how can we tell, how do you build something reliable but also surprising? Neuro was the beginning of many of these questions, the very questions that continue to follow me.
One of my last projects during my first year in the program was an art installation for a course titled New Systems Sculpture. While this course was largely designed to help art students learn how to edit video, I was able to negotiate with the instructor to create a final project examining the intersection of projection and the body as sculpture. Rather than playing back fixed media, I planned to focus on generating live media and exploring interactivity in the piece. Originally, I wanted to focus on creating a new circus apparatus to use in live performance. My naiveté gave way as I quickly learned that the limits of my welding abilities would hinder me from this particular goal. Instead, I transformed the piece I had made into a sculpture and began the process of building some generative media. One of the parallel challenges I was exploring was learning Derivative’s TouchDesigner, a programming and development environment for media applications. Ultimately, this work would become an interactive sculptural projection installation programmed in TouchDesigner that used the accelerometer in an iPod Touch to drive the projection through a wireless communication protocol.
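To give a sense of how such a wireless link can work, here is a sketch of an OSC receiver for accelerometer messages in Python. The /accxyz address, port, and value mapping are assumptions for illustration (in the style of apps like TouchOSC), not a record of the original patch:

    # accel_listen.py - receive accelerometer OSC messages and map tilt to a
    # rotation parameter. Address, port, and value ranges are hypothetical.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_accel(address, x, y, z):
        # map the x-axis tilt (roughly -1..1) to a 0..360 degree rotation
        rotation = (x + 1.0) * 180.0
        print(address, round(rotation, 1))

    dispatcher = Dispatcher()
    dispatcher.map('/accxyz', on_accel)
    BlockingOSCUDPServer(('0.0.0.0', 8000), dispatcher).serve_forever()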
What I didn’t anticipate was just how ambitious this project would be. I was largely on my own in this endeavor. During the sculpture class I was trained in one afternoon on all of the equipment to cut and weld steel. The only instruction I could find for learning TouchDesigner was long-form videos recorded from workshops held in Montreal. So I set out to teach myself, spending long hours wrestling with software and with steel. During this project there were countless times when it seemed like I had made all of the wrong choices. This was one of the most difficult projects of this first year – learning a new fabrication skill and new computational methods simultaneously. Perhaps one of the hard-learned lessons on this project was the patience necessary when encountering challenges far beyond the scope of one’s understanding. In the first year of graduate school I had quickly run into a field that was, in colloquial terms, un-googleable. That is to say the topics were, and in many cases still are, far enough outside the common lexicon of archived knowledge that finding answers required a deeper understanding of the encountered problem.
I learned, again and again, that knowing what questions to ask and how to interrogate one’s underlying presuppositions about a topic were skills far more useful than simple answers. While I initially found this maddening, and often find that it frustrates my students, I began to come to terms with the fact that how I was thinking about a problem would shape the answer I looked for. This might best be illustrated with a reductive imagined conversation with myself about what to eat for lunch:
Self A – “How do I make a peanut butter sandwich?”
Self B – “Why do you want to make a peanut butter sandwich?”
Self A – “Because I have peanut butter and bread in the pantry, and I’m hungry.”
Self B – “Do you really want a peanut butter sandwich, or are you just hungry?”
Self A – “I guess I’m just hungry.”
Self B – “So you really want to know what to make yourself to eat – that’s different from wanting to know how to make a peanut butter sandwich.”
The real presupposition I was coming to terms with was my deep-seated belief that there was a singular “correct” way to solve any given problem. Instead, I was learning that there were hundreds of solutions to any given problem, and that any given solution depended on the other variables involved.
Unlike ¡Bocón! and Neuro, the installation at Bragg’s was a stand-alone interactive art piece. Its meaning was largely in what users brought to the gallery. That is to say that the observers were actively part of the meaning-making process as they engaged with the work. Rather than having a specific narrative thrust, this piece was instead contemplative and self-contained. It somehow seemed fitting to make a solo piece for a gallery at the end of that first semester. This is also when I committed to learning TouchDesigner as a programming environment. It felt like an untapped resource, and a community of knowledge that I could meaningfully contribute to. While the learning curve was steep, I also began to find traction in something that set me apart as a student, as an artist, and as a scholar. Little did I know how deep the rabbit hole was going to go.
Complete List of Production Contributions
POVV | Media Installation Assistant. Tempe, AZ.
55 in Concrete | Lighting Installation. Tempe, AZ.
A Circus School of Arizona Halloween Extravaganza | Private Event. Stage Manager. Scottsdale, AZ.
Espionage en Couture | Benefit fundraiser. Circus Performance. Scottsdale, AZ.
¡Bocón! | Media Design Associate. Tempe, AZ.
Theatre of Science presents: The Oxytocin Pleasure Dome, and AlgaeNation led by Boyd Branch | Performer, Collaborator. Tempe, AZ. (Development)
Sparrow Song | Media Artist, Binary Theatre. Tempe, AZ.
X-Act | Media Designer. Tempe, AZ.
Theatre of Science Presents: Neuro | Media Assistant. ASU Tempe, AZ; Mesa Arts Center, Mesa, AZ.
Half-way House | Media Designer. Phase 2 Production, ASU Tempe, AZ.
Soot and Spit | Isadora Programmer. ASU Tempe, AZ.
Concrete Matters – Gallery Opening at Bragg’s Pie Factory | TouchDesigner programming for an interactive sculpture. Phoenix, AZ.
Year Two 2013-2014
Year one left me with a broad sense of the field I had now entered, and year two pushed me closer to a specialty and focus. While my list of production contributions narrowed, my involvement with each production also deepened. During this year I would design two newly premiered productions in the Lyceum theatre, The Fall of the House of Escher and Before You Ruin It. While these productions were both in the same space, they were otherwise very different from one another. Escher opened the season, and came out of a semester-long devising process with the acting, directing, and design cohort of 2014. This was their final production together, and their primary devising source texts were drawn from quantum physics, the writing of Edgar Allan Poe, and the art of M.C. Escher. A choose-your-own-adventure story meets recursive loop, The Fall of the House of Escher was comprised of a complex branching series of possible outcomes.
The design of this production relied on the use of TROIKATRONIX Isadora as the playback engine to navigate this complex set of possible outcomes. During the ideation and devising portion of this production the team placed a large emphasis on interactivity and live control for the media. In order to accomplish this vision I employed a combination of traditional playback assets and custom-built Quartz Composer compositions. Quartz Composer is a visual programming environment for OpenGL (Open Graphics Library) as well as a number of other Application Programming Interfaces (APIs) that take advantage of hardware acceleration for the computation of visuals. By leveraging the power of the Graphics Processing Unit (GPU) in this production, we were able to make immediate changes to the effects on stage without the need to re-render video assets. The intention was also to use these elements to drive the interactive moments in the production by creating media that was responsive to the performers.
While much of the interactivity was ultimately cut from the production due to the limitations of the rehearsal schedule, the design of a responsive and flexible system persisted. In this particular production, then, the design of the media was an act both of traditional composition and of algorithmic composition. This experience instilled in me both a fascination and a passion for programming performance systems whose contents can be manipulated live. The parameterization of a media system allows the designer to work with traditional assets and also to have the freedom to make changes in the performance space with immediate feedback. This also allows for the construction of responsive systems driven by data and user / performer input. Reductively, one might think of this as the difference between playing cinema as media in a performance space and playing a video game as media in the performance space. While these may share many of the same attributes, they carry with them inherently different ideologies about how the media should respond.
In contrast to Escher, Before You Ruin It was a production built entirely on traditional linear playback. Centered on the story of Infocom, a software studio responsible for several hit text adventure games in the 1980s, Before You Ruin It was a play whose story resonates with the same start-up tech dramas that pepper Silicon Valley. The irony of a play about videogames with a media design grounded in traditional playback rather than interactivity was not lost on me. While the initial design for the production was reserved in scale, the implementation in production grew substantially. Programmed for performance with Dataton’s Watchout, this production was largely traditionalist in its approach to making content. Built with images and video composed in Adobe’s Photoshop and After Effects, the process of creating the media for this production was largely centered on solid visual techniques and compositional form. The aesthetic for the production was grounded in the visual language of early-80s computer monochrome.
The technical challenges of Before You Ruin It would prove to be wrestling with the established programming biases of Watchout. The code underlying any program or application carries with it an explicit set of limitations and ideological frames about how one should work in that particular paradigm. Microsoft Word is not a good image editor. It contains a number of tools for editing photos, but it’s a poor replacement for Photoshop. Photoshop, on the other hand, is a poor text editor. There is a text tool, and there are a number of operations one can complete with text, but it’s a poor environment in which to write a novel. This simplistic example helps illustrate the existence of inherent biases in applications. While it is possible to do a number of live visual effects in Watchout (compositing, masking, etc.), it is not necessarily the right tool for complex combinations of these effects. This hard and frustrating lesson was important to learn, but it also provided a number of questions that would drive my work in building other pieces of playback software.
The Spring of 2014 would see the world premiere of Daniel Fine’s thesis project and entrepreneurial venture – Wonder Dome. Conceptualized as an immersive performance and interactive space, Wonder Dome was designed to interrogate a number of theoretical ideas as well as practical limitations in current technology. I contributed to this project as system designer, programmer, and playback application developer. The enormity of this project was a terrifying challenge. While the technical obstacles are too numerous to enumerate, it may well be sufficient to say that in approaching Wonder Dome the team was looking to create an entirely self-contained theatre and studio space with no industry-standard methods of playback. Every system was designed from the ground up, running almost entirely on custom software. Wonder Dome did not have a sound or lighting console, and the lion’s share of the media was run off of a single server.
While Concrete Matters at Bragg’s had been my first love affair with TouchDesigner, this production would test my resolve. After a lengthy research period, we finally settled on using TouchDesigner as the main programming environment for developing a new playback system for Wonder Dome. One of the many system requirements established early in the development process was the need for the media system to be capable of image warp and blend between three projectors in real time. The media design of the production needed to be free from the constraint of an additional distortion pass when rendering, and ideally the system needed to be flexible enough to build some elements in the playback software rather than as pre-made video. In December we had the good fortune of beginning a working relationship with Vortex Immersion out of Los Angeles, CA. Their extensive work with dome environments was tremendously helpful and proved to make an enormous difference in our ability to make solid progress. Vortex had already solved the problem of multi-projector alignment, blend, and warp with a custom piece of software also developed in TouchDesigner. After a visit to their studios they kindly partnered with the Wonder Dome team, and allowed us to use their existing software.
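To give a sense of what the blend half of that problem involves, here is a sketch of one common edge-blending curve: a brightness ramp across each projector’s overlap region, gamma-corrected so the doubled light reads as uniform. This is a generic illustration of the technique, not Vortex’s actual implementation:

    # edge_blend.py - brightness ramp for a projector overlap region.
    # A sketch of one widely used blending curve, not the Vortex software.
    def blend_weight(x, p=2.0, gamma=2.2):
        """x: 0..1 position across the overlap; returns this projector's weight."""
        if x < 0.5:
            w = 0.5 * (2.0 * x) ** p
        else:
            w = 1.0 - 0.5 * (2.0 * (1.0 - x)) ** p
        return w ** (1.0 / gamma)  # compensate for the display's gamma curve

    # The two overlapping projectors' ramps mirror one another, so their
    # linear-light contributions sum to a constant across the seam:
    for i in range(5):
        x = i / 4.0
        print(round(x, 2), round(blend_weight(x), 3), round(blend_weight(1 - x), 3))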
After five months of research, programming, and development I was quickly coming to the realization that while it would be possible to build a playback application or a blending and warping tool, it was unlikely that I would be able to do both by the time the production premiered. The partnership with Vortex was not only a success in terms of building bridges between industry and education, but it allowed me to begin focusing on how the media would be generated and cued for the show. In addition to building the playback software for the production, I led the process of building the physical computer that would serve as the primary media server. From putting together the hardware to building the software, I was deep down the rabbit hole in terms of learning what an experimental media system really was. In addition to learning the intricacies of TouchDesigner as a programming environment, I also started to learn Python as a scripting language. Any visual programming language can only afford the programmer so much flexibility; at some point, one simply needs a text-based programming language. I had resisted this realization, and Wonder Dome brought into striking clarity exactly why I needed to truly learn a scripting language.
Wonder Dome would have 25 performances over the course of five days, with more than 700 attendees. During the run there were no system failures or cancelled shows, and once out of the technical rehearsal process the production ran on a regular schedule without any serious system maintenance required. In hindsight, it’s difficult to understand how we managed to complete this project. Our remarkable team of collaborators had produced a fun and successful production for children and families, far outreaching many of our own expectations of what we thought possible.
As Wonder Dome came to a close, both Dan and I were contacted about summer design work and decided to work together on both projects. Mantarraya was a part of the Proyecta projection festival in Puebla, Mexico, and Terra Tractus was a massive site-specific meditative audio, video, and lighting experience. Leveraging what we had learned from Wonder Dome, we approached both projects from the perspective of building new media applications to drive their visual elements. My role in both of these projects was largely centered on system design and media-application programming. Having built a high-end machine for Wonder Dome, we settled on using TouchDesigner to run these two large-scale systems. In both instances we pushed to find interactive and responsive moments in the media system to create a bridge between the physical and digital worlds represented in these productions.
These productions left me confronting exactly how complicated real-time rendering and system design can be when it’s done well. I also began to realize how much I had actually learned in the process of working on these productions, and how much it would have changed my experience as a student if I had been exposed to some of these topics in a classroom setting with lower stakes. The frustrations I had experienced with Isadora and Watchout were slowly coming into focus from the perspective of programmer rather than user. While still only dimly, I was beginning to see that the limitations built into an application were the function of what a programmer (or team of programmers) could build so that the user experience felt intuitive and didn’t require complex technical knowledge. This murky vision and general feeling would find clarity in my first project during the Fall of 2014.
Complete List of Production Contributions
The Fall of the House of Escher | Media Designer. ASU Tempe, AZ.
Asylum | Co-Media Designer. ASU Tempe, AZ.
Echo | Media Design Consultant. Tempe, AZ.
Before You Ruin It | Media Designer. ASU Tempe, AZ.
Wonder Dome | Media and Systems Designer, TouchDesigner Programmer. Mesa Arts Center, AZ.
Mantarraya | Media and Systems Designer. Puebla, Mexico.
TERRA TRACTUS The Earth Moves | Media Associate and Programmer. Branford, CT.
Vortex Immersion | TouchDesigner Programmer. San Diego, CA.
Year Three 2014-2015
The Fall of my final year in the Interdisciplinary Digital Media and Performance program would prove to be full of unique and exciting challenges. The year-long thesis project I had settled on tackling was to build a media playback system with re-use in mind, design a production with this new piece of software, and teach a course centered on the principles of programming and design for generative media systems. In late July I had been asked to teach a course for Arts, Media and Engineering called Compositional and Computational Principles for Media Arts. This course had previously been taught with Processing (https://processing.org/), but the director of AME was willing to let me teach the course with TouchDesigner. Suddenly, I was in a position where the applied project I had planned to stretch over the course of a year was now happening in a single semester. While it was tempting to try and stack all of these ideas together, I instead used the opportunity to prototype the course I was planning for the Spring, and to learn exactly which concepts resonated with students and which concepts needed additional reinforcement.
Unlike previous production schedules, which had often felt like a mad scramble, the design of romeoandjulietVOID felt almost surreally focused. I regularly attended rehearsals, talked deeply with the director about the media for the production, and approached the software from the perspective of build, revise, build. I continually ran into obstacles that forced me to rethink my approach, and to research exactly how the computational elements of the application I was building functioned. One of the longest-running areas of research was truly understanding how video memory operated, was allocated, and was subsequently released by an application. I moved from writing simple scripts to writing whole functions, and then to modular implementations. One telling example of this particular endeavor lay in finding a method for releasing video memory from cue to cue in the production. This process initially started as 40 lines of code, which I reduced to 20 by writing a function. This was further reduced to just two lines of code with a modular implementation for calling my function from anywhere within the application. With an early focus on building the actual application, I was able to focus nearly entirely on the media’s design before the production’s technical rehearsals began.
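Schematically, the refactor looked something like the sketch below. The Movie class and its release() call are hypothetical stand-ins for TouchDesigner’s playback operators and unload mechanics; the shape of the progression, from repeated inline block to function to two-line module call, is the point:

    # memory_utils.py - sketch of the cue-to-cue cleanup refactor.
    # Movie and release() are stand-ins for real playback operators.
    class Movie:
        def __init__(self, name):
            self.name = name
            self.loaded = True

        def release(self):
            self.loaded = False  # stand-in for freeing video memory

    def unload_movies(movies):
        """The repeated inline block, reduced to one reusable function."""
        for m in movies:
            if m.loaded:
                m.release()

    # Housed in a module, every cue script then shrinks to two lines:
    cue_assets = [Movie('opening_loop'), Movie('storm_sequence')]
    unload_movies(cue_assets)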
While I ultimately learned that one of the cornerstone methods I was employing in the implementation of this application was flawed, I was able to circumvent any problems and prototype a solution to the issue in a version that arrived too late to be implemented in performance. More important to me was the sudden realization that I was finally able to diagnose a problem, determine its origin, and develop and debug a solution. While this may well seem trite, it was a powerfully reassuring moment to have a knowledge of the computational framework of the software and the nature of the hardware intimate and broad enough to facilitate a moment of programmer-clarity. “Ah-ha!” moments up until now had often been veiled in some mystery – “Ah-ha! This works, though I don’t completely understand why,” is a very different emotional / cognitive sensation from “Ah-ha! This works, and I know exactly why – in fact, I designed it to work this exact way.”
The lessons learned in the development of the show control system for romeoandjulietVOID were paramount to my teaching in the Fall and in the Spring. While it’s easy to say that in every production one learns something valuable, rjVOID was the moment where I shifted from being a programmer who scripted when necessary to a programmer comfortable finding script-based solutions for problems that I would previously have attempted to circuitously circumvent. The production and the Fall course were both successful, and left me with a very different perspective on the application of creative coding as well as the challenge of learning how to code.
The Fall course, Compositional and Computational Principles for Media Arts, was focused on both the creative elements of coding and the logical, systems-based approach needed to create a successful program. Students in the course learned the basic elements of object-oriented programming, modular programming, designing applications with specific aesthetic intents, human interface building, parsing sensor information, and finally building complete applications. The course was also intentionally designed to house all course assignment descriptions and demonstrations online. At the end of this page you’ll find a complete course listing for Assignments and Learning Resources. Each learning resource is coded to identify its core concepts so students could, at a glance, quickly find the resource that was right for them. Additionally, complete examples from each demonstration / lesson are packaged in a course code pack that’s publicly available on GitHub.
This established rhythm for instruction, documentation, and deployment has carried through to THP 494 & 598 Generative Media for Live Performance. As a smaller and more focused course, this group has been able to cover significantly more material than the Fall course of 50 students. Covering material ranging from the basics of navigating the programming environment to complex methods for instancing geometry and textures for live rendering, the Spring course has continually impressed me with its dedication and willingness to learn hard and fast. This intentionally designed course also carries an online listing of course examples, and a code pack for students who are stuck on a particular lesson. Beyond simple aesthetic exploration, this course has focused on building systems of visuals that respond to data or sensor-driven inputs. These live generative systems are inexorably tied to the liveness of a given moment, their very function dependent upon being made live. While this is certainly a small sampling of possible applications for live performance, it is also the area with the least amount of legible documentation. Nearly all existing examples rely on some previously established knowledge of a given programming language; this course, while fast-paced, has given students both a visual programming paradigm and a scripting language. Perhaps one of the proudest moments I’ve had as an instructor has come when students have been able to take what we’ve learned in class and use it to decode the complex official Python and TouchDesigner documentation all on their own. While time-to-skill acquisition is different for every student, my personal experience has taught me that having an instructor facilitate that process is far less frustrating than learning entirely independently.
Of course, the Spring has been more than just these three projects; both the Ars Robotica project and Beneath have been sizable endeavors. In both cases, my work has been highly focused on the design and implementation of both application and aesthetics. Ars Robotica is a joint project with the School of Earth and Space Exploration to explore and develop methods for creating life-like movement in humanoid robotics. In Robotica it was important to tackle questions of sensing, parsing, formatting, and transmission. Additionally, I needed to build an interface that would supply the user with enough visual feedback to understand how their motion was being translated by the robot. Of additional paramount importance was the ability to capture movement from the sensor, save it, and replay it without a person present to drive the robotics system. While these operations seem rudimentary in any fully built application, devising the appropriate method when building an application from scratch requires a not insignificant investment of planning and consideration. Building full systems is a complex endeavor, and challengingly, it also happens to be an undertaking that is often difficult to articulate to non-programmers.
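The capture-and-replay requirement, for example, reduces to recording timestamped sensor frames and honoring those timestamps on playback. A minimal sketch in generic Python follows; read_sensor() and the frame format are hypothetical, not the project’s actual interface:

    # record_replay.py - capture timestamped sensor frames, then replay them.
    # read_sensor() is a hypothetical stand-in for the motion sensor.
    import json
    import random
    import time

    def read_sensor():
        return [random.random() for _ in range(3)]  # placeholder joint values

    def record(seconds, path, rate=30):
        frames, start = [], time.time()
        while time.time() - start < seconds:
            frames.append((time.time() - start, read_sensor()))
            time.sleep(1.0 / rate)
        with open(path, 'w') as f:
            json.dump(frames, f)

    def replay(path, send):
        with open(path) as f:
            frames = json.load(f)
        start = time.time()
        for t, values in frames:
            time.sleep(max(0.0, t - (time.time() - start)))  # honor timestamps
            send(values)  # e.g. transmit to the robot control system

    record(1.0, 'take.json')
    replay('take.json', print)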
This has been most present in the tremendously exciting and challenging project at the Marston Theatre, Beneath. Diving into how modern seismology research has changed our understanding of the world and of the very cycles that drive it, Beneath looks to use the high-definition 3D system in the School of Earth and Space Exploration’s flagship theater. The exciting challenge of this project is to build a real-time 3D navigable show control system that leverages the skills and abilities of traditional 2D artists in 3D, as well as the ability to visualize current seismic data. While the Marston currently uses a SkyScan system of nine control and rendering machines, in a week of initial research we were able to develop a system for accurately generating real-time 3D content and playing 2D content in simulated depth with only a single media server. The initial research, however, is far from the final development of a full-featured playback system. The challenges present in this particular environment are monumental, requiring careful planning and consideration in order to build a useful tool for programming and performing a live production.
There is, of course, more to say about this final year, but perhaps the most useful reflections manifest as observations relevant to questions of programmatic structure, practical design, and good curriculum.
Complete List of Production Contributions
romeoandjulietVOID | Media Designer, TouchDesigner Programmer. Tempe, AZ.
The Veteran’s Project | Media Installation Programmer, Tempe, AZ.
The Hour We Knew Nothing of Each Other | Media System Consultant, Generative Content Builder. Tempe, AZ.
arsRobotica Project | Media Designer, TouchDesigner Programmer. Tempe, AZ.
Beneath | Assistant Media Designer, TouchDesigner Programmer, System Designer. Tempe, AZ.
Orange Theatre Company Spring Mixer | VJ, New System Designer / Engineer, TouchDesigner Programmer. Phoenix, AZ.
Larger Observations
Computational Thinking is Hard
The Google for Education page, which functions as outreach curriculum from the internet giant, lists Computational Thinking as being composed of four specific techniques: decomposition, pattern recognition, pattern generalization and abstraction, and algorithm design. Wikipedia more generally defines Computational Thinking as “a process that generalizes a solution to open ended problems.” The Center for Computational Thinking at Carnegie Mellon has several bulleted definitions on its landing page:
- Computational thinking is a way of solving problems, designing systems, and understanding human behavior that draws on concepts fundamental to computer science. To flourish in today’s world, computational thinking has to be a fundamental part of the way people think and understand the world.
- Computational thinking means creating and making use of different levels of abstraction, to understand and solve problems more effectively.
- Computational thinking means thinking algorithmically and with the ability to apply mathematical concepts such as induction to develop more efficient, fair, and secure solutions.
- Computational thinking means understanding the consequences of scale, not only for reasons of efficiency but also for economic and social reasons.
The same landing page also asserts that:
Computer science is having a revolutionary impact on scientific research and discovery. Simply put, it is nearly impossible to do scholarly research in any scientific or engineering discipline without an ability to think computationally. The impact of computing extends far beyond science, however, affecting all aspects of our lives. To flourish in today’s world, everyone needs computational thinking.
In the theatre, and arguably in most of the arts, computational thinking and methodologies are essential to the frameworks of modern new-media forms. Images, sound, lighting, and even fabrication are now primarily manipulated in the domain of digital tools. In fact, there are few physical tools left that do not, in some manner, incorporate the presence of a microprocessor. This transition requires that current artists and technicians be familiar with the underlying concepts, principles, and precepts that define how the encoded data is manipulated. One might be tempted to reductively state the obvious, that: “living in the modern world requires a familiarity with computers.” While this statement isn’t false, it diminishes the importance of seeing computational frameworks as a means of understanding and solving problems.
Live events carry with them a cornucopia of challenges and considerations which must be approached thoughtfully and meaningfully. One of the primary tools for addressing these challenges is the computer, and beyond understanding the principles of mechanical operation one needs also to understand the principles that govern a given system, function, or application. A clear example here might be considering how one plays a pre-made video during a live event. First, several physical system considerations must be made:
- What is the video playing back on or through: a television, monitor, projector, et cetera?
- What are the physical dimensions of the display?
- Where is the display device located, and what sight-line considerations need to be made in order to ensure that the media is visible to the majority of event spectators?
- Do any special compositional considerations need to be made in the placement or dressing of the display device?
- What is the required cabling to connect to this device, both for video signal and for power?
- Does the distance between the display device and the playback device exceed the maximum distance for signal transmission? What is the system-appropriate solution to this problem?
- How will this cabling be run? Are any considerations necessary to prevent trip hazards, or possible disconnections during the event?
Once these questions are adequately addressed, one must also begin to consider some computational questions:
- What is the playback device, and what is the human-computer interface associated with it? A Mac, a Windows PC, a stand-alone media player (e.g., a DVD player)?
- What is the playback software associated with this device?
- What are the digital dimensions of the display device? Is the playback software capable of displaying at this resolution? Are multiple computers / playback devices required?
- What is the software-preferred codec for media encoding?
- What is the appropriate encoding tool for ensuring the media is properly prepared for playback?
- How is the media to be stored and accessed during the event?
- Is any additional communication with other departments required – sound, lighting, scenic, etc.? What are the appropriate considerations for communication between departments?
Examining even this partial list of required considerations for the seemingly simple operation of playing a single video during a performance, it becomes clear why computational thinking is an essential skill for the media artist.
Computational thinking, however, is hard. It is often tempting to use cognitive elisions in the planning process, allowing past experience or assumption to fill in the gaps of seemingly obvious problems. “Then we connect the computer to the projector,” assumes the availability of a computer, the presence of all the requisite hardware and software, the proper preparation of media files, the presence of appropriate connectors and cabling, as well as many other elements that require specific planning. Instead, one should completely disassemble the problem in order to fully understand how the pieces work together, and where the direct intervention of a designed solution is necessary.
Pushing past the “simple” charge of playing back pre-made content, when one begins to address the challenges of live generated media an additional set of considerations must be made. Responsive mediated environments are driven not only by pre-made content but by logic systems programmed into computers. This endeavor requires that the designer / programmer be fluent in at least one (though practically speaking, multiple) programming languages and development environments. GLSL, the C-based shading language used with OpenGL (Open Graphics Library), is an excellent example of a language often exploited in the context of responsive mediated environments. As a means of communicating directly with the graphics processing unit of a computer, it is an exceedingly fast way of manipulating pixel information; it is also less human-readable than other programming languages. For example, to draw a green line from one corner of a display to the opposite corner over a gradient transitioning from black to white, a fragment shader along these lines is necessary:
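    // A sketch of such a fragment shader; the uResolution uniform (the
    // display size in pixels) is assumed to be supplied by the host program.
    uniform vec2 uResolution;

    void main() {
        vec2 st = gl_FragCoord.xy / uResolution;  // normalized 0..1 coords
        vec3 color = vec3(st.x);                  // black-to-white gradient
        if (abs(st.y - st.x) < 0.005) {           // the corner-to-corner diagonal
            color = vec3(0.0, 1.0, 0.0);          // drawn in green
        }
        gl_FragColor = vec4(color, 1.0);
    }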

Systems Thinking
While the mechanisms of drawing images for display – both as playback and as real-time generative rendering – are some of the most rewarding elements involved, one of the important pieces of engineering is centered in questions of system control. Here turn-key industry-standard systems work to free the designer / artist from the challenge and headache of the minutiae of system programming. While that can be a blessing for a traditional production, new, devised, site-specific, or experimental works frequently come into conflict with fixed ideological architectures about how an event should be controlled. If one is liberated from fixed structures for system operation, one is free to ask, “if the control of an event could be anything, what should it be?” For the designer / programmer, the context of the performative event often helps to provide a scaffolding for addressing this question, and one might begin to draw general categorical divisions based on questions of system behavior:
Should the designed system operate autonomously according to a set of predefined states and behaviors? Here one might consider the engineering of a stand-alone application whose sole purpose is autonomous action. The functions built into such an application might be adaptive, or may simply have the appearance of adaptation. Stand-alone systems range in complexity from direct playback to environmental and/or sensor-driven activation of predefined or adaptive states.
Should the system operate according to a set of predefined states, commonly called cues, based on the actions of an operator? Many pieces of show-control software exist to address this specific need for playback operation. Here each moment, or state, in a given show is predefined as a set of parameters in an application: for example, a file to play back, a given duration, a location, a quality of transition. A cue-based approach has complete and crystallized moments that are initialized by an operator, and is often the choice for live events based on rehearsed or precisely planned material.
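A toy sketch of that cue structure in Python illustrates the pattern; the field names and files here are hypothetical, standing in for the parameters of any particular show-control product:

    # cues.py - a toy cue list: each cue is a crystallized set of parameters
    # triggered by an operator's "go". Field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Cue:
        file: str        # media file to play back
        duration: float  # seconds
        screen: str      # which output the media plays on
        transition: str  # quality of the transition into this cue

    cue_list = [
        Cue('preshow_loop.mov', 0.0, 'main', 'cut'),
        Cue('act1_opening.mov', 90.0, 'main', 'crossfade'),
        Cue('storm.mov', 45.0, 'floor', 'fade-in'),
    ]

    current = -1

    def go():
        """Advance to the next predefined state, as an operator would."""
        global current
        current = min(current + 1, len(cue_list) - 1)
        cue = cue_list[current]
        print(f'GO {current}: {cue.transition} to {cue.file} on {cue.screen}')

    go()
    go()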
Should the system act as a computational and rendering engine based on user input? An application designed to be utilized as an actively mutable and responsive agent in performance may share more similarities with the concept of an instrument than with system control. Software as responsive tool often relies on pre-existing material or code that is then coaxed and driven by an artist / operator / programmer. Here the lines of distinction quickly begin to blur, as these kinds of tools run the range from purpose-built applications to stand-alone tool sets. A responsive system intended for live performance has concrete manifestations in applications built for DJs and VJs. These particular performance disciplines rely on the flexibility of software to respond in real time to the user / programmer / artist. Using the metaphor of the instrument, it is easy to imagine that the performing artist might be strictly a musician, or a musician who also builds instruments. This particular type of system works best in scenarios with an emphasis on improvisation, where the artist / programmer is actively engaged in manipulating the software as a part of the performance.
These amorphous categorical divisions help to provide a starting point for conversations about the engineering requirements for any given performance system. The interrogation of infrastructural needs yields answers about approach that might otherwise go unaddressed. While questions of technical requirements may seem out of place in the otherwise conceptual framework of aesthetics, one should consider that how a system is designed and engineered is deeply connected to the possibilities of its expression. That is to say that there is meaning-making embedded in the code; while largely invisible at first glance, every application carries the biases, values, and priorities of its programmer(s). How a program is designed and implemented carries an invisible ideological lens, through which the media it delivers is focused.
While many live events have clear leanings in the nature of their structure, a growing number of hybrid site-specific installation-performances cross the categorical boundaries outlined above. Many of the productions devised and envisioned during the past three years exist as examples of this new kind of performance / event / installation. Embedded in these projects are questions about how to solve the engineering challenges of an event which presents characteristics of all three categorical divisions. What is it to cue an autonomous system fed by an online repository of material uploaded in real time? What are the considerations, structures, methodologies, protocols, and communication infrastructure required to facilitate this kind of work with high performance and reliability? How does one approach issues of aesthetic unity across disparate devised moments generated during rehearsal? Finally, how does one optimize such a system for live performance?
All of this, however, assumes that the translation from concept to application is perfect. No matter how talented the programmer, one cannot dismiss the reality that writing code is an act of translation. The messy, semantic, metaphor-rich, effusive language present in the ideation phase must at some point be codified and crystallized into an ordered set of operations to be compiled and executed. How, then, does one wrestle with the human penchant for semantic and language-based cognitive orderings that are at odds with the act of writing code? This very frustration propelled me to take a measured and purposeful approach in writing for and teaching other programmer / artists.
Teaching the Dark Arts
“…there is no way to learn the really dark part of the dark art except by doing.” Mary Franck
Learning to be an effective programmer is nearly impossible when it is only practiced in the abstract. Understanding how the conceptual and logical framework of a computational method functions requires practice. Simply stated, “if you want to learn to program, then program something.” That, of course, is easy to idiomatically posture with absolute authority, and it is perhaps one of the most daunting blockades for new learners. Once resolved, one suddenly faces a seemingly unending barrage of questions: What language? What tools? What learning resources? What projects? Where does one look for help? How do I decode this new terminology? Do I really have to Google everything?
My experience in learning to program has been shaped by a number of environmental and situational forces, and the programming environment that has proven most accessible to me is Derivative’s TouchDesigner. While TouchDesigner often carries a stigma of having a steep learning curve, its visual programming environment allows the programmer to see nearly every procedural step being executed by the computer. For me, as a learner and as an instructor, the ability to visually debug code has been a cornerstone of making faster progress and explaining complex concepts. The programming environment one works in is a kind of idiom – it carries a particular metaphor for understanding problems and making meaning, it has a bias about how it sees the world, and it establishes a set of value structures embedded in its operations. While there are a number of visual programming environments, TouchDesigner’s biases resonated most strongly with me. As an instructor, this particular idiom also functions well as a flexible and extensible learning environment. A number of operations are quickly accessible to new learners, and once past the initial learning curve, students frequently experience a burst of skill acquisition with the freedom to quickly map out complex ideas. Intermediate and advanced students quickly find that the power of a scripting language gives them access to a nearly limitless range of possible functions.
With an established programming environment, I structured courses to build incrementally upon the discussions, demonstrations, and projects completed each week. The benefit of teaching a tool new to (nearly) all students is the freedom of a zero baseline for assumed knowledge. Starting from the beginning, while daunting, allowed for the establishment of consistent practices and methodologies, and facilitated opportunities to focus on fundamental concepts. While the metaphor of scales and etudes is a bit exhausted, it is a fitting analogy: consistent practice of core skills and concepts helps to establish practiced patterns – patterns that can then be leveraged in more complex arrangements of ideas. The structure of the course material was also largely iterative in nature; each assignment relied on the concepts previously explored, while also pushing students to continually refine their original ideas. While the structure of the courses listed below exists as a concrete record of the materials, a simplified structuring of the course topics is as follows:
- Data flow
- Data structures
- Building simple signal flow – flow + structures
- Interaction (user / sensor) – we don’t touch the code
- Building for real time rendering
- Building for reuse – understanding the concept of modules
- Scripting – functions that don’t yet exist (a sketch follows this list)
- Predicting the future and setting speed limits
- Building full applications
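To ground the scripting topic above, here is a minimal sketch of the kind of pattern practiced in class: wrapping a repeated parameter change in a reusable Python function inside TouchDesigner. The operator names (level1 through level4) are hypothetical, and the network is assumed to already contain those Level TOPs; this illustrates the pattern rather than recording a specific class example.

```python
# A minimal sketch of "scripting - functions that don't yet exist," assuming a
# TouchDesigner network that already contains Level TOPs named level1-level4
# (hypothetical names). The function wraps a repeated parameter change so it
# can be reused instead of retyped.

def set_opacity(op_name, value):
    """Clamp a value to 0-1 and assign it to a Level TOP's Opacity parameter."""
    value = max(0.0, min(1.0, value))
    op(op_name).par.opacity = value

# the same function drives many operators - no hard-coded repetition
for i in range(1, 5):
    set_opacity('level{}'.format(i), i * 0.25)
```

Patterns like this one serve as the bridge between the data-flow topics at the top of the list and the full applications at the bottom.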
While the 300 level course focused on a slow and steady reinforcement of these topics, the 400 / 500 level course has consistently established these topics as present in every example. Every class meeting involves building and programming in some capacity; though we certainly encounter sessions with a more abstract focus or theoretical framing, it has been imperative to me that students program every day they are in class. Setting aside time for improvisation and experimentation within a given topic or demonstration has also been an essential part of in-class time. Students are regularly encouraged to ask one another questions, as well as to talk the class through how they addressed a particular problem or challenge. While building community in a smaller course has been significantly easier than in the larger one, in both classrooms students consistently worked with and consulted one another. Though difficult to schedule, one-on-one time with every student is a personal value of mine as an instructor, and in both courses I spent time circulating in the class to answer questions and explain concepts.
By student request, and in order to reduce the occurrence of repeated concept explanations, as many of the course lectures as possible were recorded and posted online for asynchronous access. This practice also allowed for direct concept referencing in assignment construction: rather than simply reminding students of the concept being addressed in a given assignment, I was able to link to (or embed) the relevant topics from class alongside the assignment criteria. All of the course assignments involved creative or designed expressions in a visual medium, but only concrete, measurable objectives were taken into consideration in evaluating a submission. My biased priority has been to first produce competent programmers who can translate ideas from concept to code. Though this is met with some resistance from other academics, it is my position that in learning to program, functionality should come first. It was my experience that without fundamental computational frameworks it was nearly impossible to actualize concepts or ideas. Throughout my experience learning to be a programmer, I continually return to the linguistic metaphor embedded in the naming of a programming paradigm as a “language.” One learns to program in a “language” that has a particular syntax and grammar, a set of conventions and embedded ideas, even a penchant for a particular type of expression. It should then come as no surprise that new language acquisition requires establishing parallels in ideas and forms with the learner’s first language.
While Rushkoff’s battle cry of Program or Be Programmed continually finds new opponents eager to make the opposite assertion, there is an important paradigmatic shift suggested by the presence of this very argument. In the cultural transition of computers from objects for the technophile alone to devices carried in the pockets of a majority of adults, an undeniable reality exists in an increasingly collective reliance on computational means of making meaning and communicating. Whether or not coding is the new literacy is a story that will be told by tomorrow’s historians, but it is undeniably an integral element of today’s cultural fabric. This is especially evident in the growing number of creative coding projects shaping the conversation about new media forms. The ability to code is the ability to reshape how we see the world; those are precisely the skills I want to learn, and the skills I want to share.
AME 394 | Compositional and Computational Principles for Media Arts
In much of contemporary media practice there is a tight coupling of compositional form, content, and underlying computational mechanisms. This integration holds the potential to yield new modes of expression and wholly new art experiences, as is evident in emerging forms of real time generative art, network-based art, game-based art, and interactive performance. As both practitioners and participants, we must develop a critical understanding of the relevant compositional and computational principles that frame this work. In this course, students will develop a working understanding of fundamental compositional and computational principles, and apply that understanding through the realization of exploratory media artworks.
Student Learning Outcomes:
Upon successful completion of this course, students will be able to:
- identify opportunities to use modular programming methods in their own work.
- experiment with dynamic media structures.
- compose interactive TouchDesigner networks that use sound and video.
- communicate with other programs or computers over a network.
Projects and Assignments
- Project 1 – Composition
- Op Snippets 2 – Building a Nervous System
- Project 2 – Manipulation
- Op Snippets 3 – Buttons and Sliders Everywhere
- Project 3 – Control
- Project 4 – Final Project First Draft
- Final Proposal
- Final Project
Learning Resources & In Class Examples
- Parameter Expressions
- Local Variables
- Storage
- Interface Elements
- Interface Building
- Table Referencing
- Panel CHOP
- Button Customization
- Slider Customization
- Building 2D Sliders
- Nesting Containers
- Slider with Feedback
- Scripting
- CHOP Execute DATs
- Buttons with Color
- Sliders with Color
- Open Viewer
- Buttons to Run Scripts
- Panel Execute DAT
- View Script
- Open Viewer Command
- Panel Values
- me.digits
- Name – me.name
- Digits – me.digits
- Container Align Order
- Radio Buttons
- Hierarchy
- Perform Mode & Open in Perform Mode
- Configuration for Perform Mode
- Open in Perform Mode
- Simple VJ Set-Up
- Select TOPs
- A|B Switching
- Real Time Rendering
- Logic Testing – If Else Statements
- Setting Parameters with Scripts
- Panel Execute DATs
- Replicators
- Replication
- Table Referencing
- Clones
- Interface Building
- Audio Analysis
- Audio Analysis – Based on Mary Franck’s Rouge
- Interface Building
- Table Referencing
- Panel Execute DATs
- File Path Referencing
- Multi-Process Communication
- Touch In and Touch Out
- OSC In and OSC Out (see the sketch below)
- Shared Memory In and Out
Download the Course Code pack from GitHub
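As a concrete companion to the OSC In and OSC Out topics above, here is a minimal sketch of sending a slider value to another program or machine. It assumes a network containing a slider COMP named slider1 and an OSC Out DAT named oscout1 (both hypothetical names), with the OSC Out DAT already configured with the receiver’s network address and port.

```python
# A minimal sketch of multi-process communication over OSC, assuming a slider
# COMP named 'slider1' and a configured OSC Out DAT named 'oscout1'
# (hypothetical names).

# read the slider's horizontal panel value (0-1)
slider_value = op('slider1').panel.u.val

# send it to the receiving process via the OSC Out DAT's sendOSC method;
# an OSC In CHOP or DAT on the other end listens on the matching address
op('oscout1').sendOSC('/slider/u', [slider_value])
```

The same pattern scales from two processes on one machine to multiple machines on a network, which is why it recurs across both courses.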
THP 494 & 598 | Generative Media for Live Performance
Today’s live performance technologies increasingly rely on the use of inter/re-active and generative tools. This approach to creating visual content by controlling lighting, video, or physical systems requires that the artist cultivate a deep understanding of the computational principles and methods used to manipulate data as a primary substance in the creative process. As both practitioner and pioneer, the artist must endeavor both to understand the approaches of other engineers and programmers and to engage in the practice of developing software for general and specific use cases. It is not enough to rely solely on existing frameworks and architectures when developing new work, especially if the approach is unconventional. In this course, students will cultivate an approach to modular programming, complete tool development, and generative, process-focused aesthetics.
Student Learning Outcomes:
Upon successful completion of this course, students will be able to:
- compose inter/re-active and generative media systems.
- perform basic and intermediate scripting tasks with Python 3.
- identify opportunities for programming efficiently, with an emphasis on purposefully assigning tasks to either GPU or CPU.
- communicate with other programs or computers over a network.
Learning Resources & In Class Examples
- Image Selector – Container Method
- Interface Building
- Table Referencing
- Replicators
- Clones
- CHOP Execute DATs
- Scripting
- Parameter Assignment
- Image Selector – Instance Method
- Rendering Real Time Geometry
- Instancing Geometry
- Texture Instancing
- Interface Building
- Table Referencing
- Replicators
- Clones
- Render Picking
- DAT Execute DATs
- Scripting
- Logical Testing – If Else Statements
- Parameter Assignment
- Playing with Feedback
- The Composite TOP
- The Feedback TOP
- Scaling CHOP values
- Encapsulation
- Storage
- Panel Execute DATs
- Importing Python Libraries
- Instancing – A Closer Look
- Realtime Rendering networks
- Instancing – Geometry
- Instancing – RGBA replacement
- Pixel Sampling with Python (see the sketch after this list)
- SOP to DAT
- TOP to CHOP
- CHOP to DAT
- DAT organization
- Render Pass TOP
- Make it with Data
- Part 1
- Component Building
- Real time rendering
- TOP Networks
- Feedback TOPs
- Ramp TOP
- Ramp Keys
- Eval DAT
- Python scripts to sample pixel values
- Part 2
- Controls for Components
- Horizontal Sliders
- Tables to hold slider parameters
- Clones
- me.digits / me.parent().digits
- Scaling slider values to drive parameters
- Part 3
- Using Data to drive parameters
- Rule based art making
- Replication
- Execute DAT
- Keeping 60 FPS with multiple complex containers
- Scripting the Lock Flag
- Part 4
- The Select COMP
- The Table COMP
- The Panel Execute DAT
- The Table COMP to load presets
- The DAT Execute DAT
- Data Experiments
- Part 1
- Using Table Data
- Table Data to drive parameters
- Prototype to component process
- The Eval DAT
- Replication
- Meaning Making in programming
- Part 2
- Adaptation – re-using conceptual ideas and prototypes
- Animating table data – hold samples in a CHOP
- The Panel Execute DAT
- The Table COMP to load presets
- Speed and Lookup CHOPs
- The importance of re-use, and why / how we make more abstract components
- Your programming is a representation of how you see the world.
- Part 1
- A little about Modules, Local Variables, and Storage
- Local Variables
- Modules
- Storage
- Generative Design | Noise and Shape
- Shape
- Drawing with pseudo random numbers (Noise)
- Moving between data types – SOP to CHOP, CHOP to SOP
- Noise CHOP and SOP
- Rendering
- Feedback Networks
- Sampling images for pixel values – ex: op('noise1').sample(x=0, y=0)[0]
- Noise
- Instancing (with Psychedelic Jamboree colors – or with a Ramp)
- Noise CHOP
- Cross CHOP
- Orthographic Camera
- Feedback
- Shape
- Python Lists (see the storage sketch after this list)
- List structure
- Building Lists
- For Loops and list Making
- Storage
- Putting Lists into storage
- Python Dictionaries
- Dictionary structure
- Building Dictionaries
- For Loops and Dictionary Making
- Storage
- Putting Dictionaries into storage
- Replicators – Replicating Text TOPs (see the expression sketch after this list)
- Basic Replicator Networks
- Convert DAT
- Transpose DAT
- me.digits when replicating
- Simple Instancing
- Basic Networks for Instancing Geometry
- Moving between data types – SOP to CHOP, CHOP to SOP
- Noise CHOP and SOP
- Rendering
- Texture Instancing
- Combining Replicating and Instancing
- Pattern Matching
- Creating data-dependent networks with expressions – aka stop hard-coding every parameter
- Render Pass
- Feedback
- The Table COMP
- How to feed the Table COMP
- Table COMP structure and principles
- Evaluate DAT
- Panel Execute DAT
- Table COMP customization
- Getting values / actions out of the Table COMP
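The pixel sampling topics above reduce to a single idiom: read a pixel from a TOP and use its value to drive a parameter elsewhere in the network. The sketch below builds on the class example of op('noise1').sample(); the Transform TOP named transform1 is a hypothetical addition for illustration.

```python
# A minimal sketch of pixel sampling with Python. sample() returns the pixel's
# channel values; index 0 is red. Assumes TOPs named 'noise1' and 'transform1'
# exist in the network ('transform1' is a hypothetical name).

red = op('noise1').sample(x=0, y=0)[0]   # red value of a single pixel, 0-1

# use the sampled value to drive a parameter elsewhere in the network
op('transform1').par.rotate = red * 360
```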
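The Python Lists, Python Dictionaries, and Storage topics combine naturally into one exercise: build a structure with a for loop, then place it into storage so other scripts can fetch it later. A minimal sketch, assuming the script runs from a DAT inside a component (for example, a Base COMP), so that parent() refers to that component:

```python
# A minimal sketch of building lists and dictionaries with for loops and
# placing them into storage. Channel names here are hypothetical examples.

# build a list with a for loop
channel_names = []
for i in range(4):
    channel_names.append('chan{}'.format(i + 1))

# build a dictionary keyed by those names
defaults = {}
for name in channel_names:
    defaults[name] = 0.0

# put both structures into storage for later retrieval
parent().store('channel_names', channel_names)
parent().store('defaults', defaults)

# any other script can now fetch them back
names = parent().fetch('channel_names')
```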
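Finally, the replicator topics (table referencing, me.digits when replicating) hinge on one expression idiom: each replicant uses the digits in its own name to look up its row in the table that created it. A minimal sketch of a parameter expression on a master Text TOP, assuming a replicator fed by a table named table1 (hypothetical name):

```python
# parameter expression on the master Text TOP's 'text' parameter; each
# replicant pulls the table row matching the digits in its own name.
# whether the row index is me.digits or me.digits - 1 depends on whether
# table1 includes a header row.
op('table1')[me.digits - 1, 0]
```

This is the mechanism behind “stop hard-coding every parameter”: one expression, cloned across replicants, replaces dozens of hand-set values.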