robotcowboy began as a platform for musical performance with wearable computers and an exploration into embodiment and energy with technology. It is not a single musical act, but an ongoing experiment in what I term “Human-Computer Performance”.
As the next phase of the project begins with version 2.0, I am summing up my motivations and plans for the future here.
robotcowboy is dead. Long live robotcowboy.
Coming from a punk and new wave rock background (see my college band 7inchWave), I experimented with computer-based composition and performance, yet was unhappy with how much my music-making approach and performance style had to be altered. I needed to be able to move, to engage the audience, to remove the computer as a separate element on stage. It seemed to me the advent of the laptop had brought portable digital music to musicians, yet these musicians were adapting themselves and their workflow to the machine. I decided to do the opposite and build a custom system.
I developed the robotcowboy wearable platform in 2006-7 as my Master of Science thesis project (see pdf). Ubuntu Linux and Pure Data running on a Xybernaut MAV wearable computer provided the computational muscle and patching workflow, while IO was provided through a USB hub, sound card, and MIDI interface. A gamepad and a Casio DG-20 Digital Guitar became the primary instrumental interfaces, and the robotcowboy helmet (an old idea from 7inchWave) came to life as a wearable visual display.
In building the system around my own workflow, I developed a philosophy and approach to performance with robotcowboy, summed up in a manifesto of sorts:
10 tenets of robotcowboy
- the cyborg embodies the computation: all of the gear must be worn on the body
- plug and play: plug in the system, turn it on, it works
- human frailty: there must be room to fail
- human energy: live sweat must be felt when listening to every song
- good with the bad: the human is enabling/disabling, the machine is enabling/disabling
- a real prototype: new ideas and gear must be tested in a real environment, risks must be taken
- the non-recording artist: the performance is the commodity, recordings should be live
- WY SEE IWYG: sound and action should be proportional
- WY HEAR IWYG: everything should be reproducible live
- creative freedom: open source software should be used as much as possible
robotcowboy played festivals, conferences, clubs, galleries, bars, and house parties throughout Europe and the US from 2007-2011. A two-month, 40-date US tour (Consoles Afire 08) in the beginning of 2008 provided the real-world, experiential trials I felt the project needed in order to be considered successful. From the outset, I did not plan on making an academic prototype only seen in a published paper, but one of software, wires, electricity and sweat presented to new and unsuspecting audiences each night.
Tracks from the Consoles Afire 08 tour:
Problems and hiccups would always occur and, through live experimentation, I made them part of the set. It has always been important for me to combine man and machine in such a way as to highlight the strengths and weaknesses of both. My human half is frail and makes mistakes, yet brings live, unpredictable energy. My computer half has the potential to generate and process any sound and instrumentation, yet requires constant power and can crash hard in unexpected ways. Whenever the machine locked up (a relatively frequent occurrence), I would simply fall over dead until it rebooted. This was a constant challenge, yet brought excitement to the show for me as a performer, even if my audience misunderstood me. A robotcowboy show might not go smoothly by the standards of the polished digital world, but it was always an experience.
In 2009 I began working with my electric guitar in robotcowboy and ended up with a hybrid set: one part minimal helmeted performance, the other cyborg new wave surf. This mix never quite worked, and I was caught between the desire for experimentation and my love of banging on loud strings. Looking back, I feel the earlier songs with the digital guitar combined these themes more successfully; I was never really happy with the hybrid set, as it clearly showed my inability to choose a direction.
Tracks from the SubOptimal Summer Tour 09:
Unfortunately, after the initial burst of creation from 2006-8, I found the act solidifying. My experiments slowed as I began dealing with making the system more robust. Handmade gear requires constant maintenance, and I have rebuilt the helmet at least three times. Aside from a proposal for a special performance (3 Experiments in Energy), what was originally envisioned as a platform for performance became a static musical set.
The system worked, yet with a 500 MHz Pentium III, 256 MB of RAM, and barely a video card to speak of, my computational capabilities were limited. It could handle stereo in and out at about 12-16 ms latency, but adding an extra reverb or running anything more than simple 8-bit blitted graphics would result in stuttering audio. I treated this as a virtue as long as possible, clarity in minimal design, yet eventually these limits became stumbling blocks and I became less inclined to experiment.
In the meantime, mobile computing has advanced by leaps and bounds, and it no longer takes a custom system to work with audio-visual content in realtime. My hope for a world full of mobile music machines and software seems to be coming true, and new pioneers (notably Onyx Ashanti) are utilizing these devices and building their own in a similar fashion toward a similar end.
How is robotcowboy relevant now, almost 6 years after its birth? The technology is proven at this point … where do I take my experiments now?
Currently, I am working on an MFA at the Carnegie Mellon School of Art, exploring the motivations I had in developing robotcowboy as well as critiquing its weaknesses. Now, more than ever, I see that I must expand beyond the system as concept and into the realm of storytelling and experimentation with new performance paradigms. The system is no longer interesting in its own right as a working proof of concept, but must become the platform I originally envisioned for my own voice in a world that must “give the past a slip”. The songs matter, the message matters, the performance matters.
robotcowboy is dead. Long live robotcowboy.
Onward and upward. Moving forward. Onto the stars …
At the arthackday at 319 Scholes in Brooklyn, NY, on Jan 28th, 2012, robotcowboy 2.0 debuted … well, at least an alpha demo. Over the course of the two-day hacking/meetup event, I rewrote the robotcowboy software setup as an iPad application in order to replace the wearable with an iPad 2, which is orders of magnitude more capable. I wore it on a new belt setup complete with powered USB hub and sound card and performed a single, short song at the closing party.
Thanks to OpenFrameworks and the OF addons I developed over the summer (ofxPd and ofxLua), I’ve started work on what is essentially RjDj on steroids with interactive performance in mind. The robotcowboy app has an internal pd audio engine (via libpd) and a lua scripting environment that communicate via OSC (internally or externally), and each scene (or song) consists of pd patches and lua scripts. The workflow I want is to be able to patch the audio while live coding logic and visuals in lua (as in GAmuza, built on ofxLua), hopefully removing some of the complexity of the old setup so that I can focus more on creation. I realize there are plenty of creative coding environments out there (Processing, PD, Fluxus, etc.), but, as when I first started robotcowboy, existing software doesn’t fit my needs, as it is largely focused on desktop/laptop usage.
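Since the pd patches and lua scripts in a scene talk to each other over OSC, it may help to see what an OSC message actually looks like on the wire. Here is a minimal encoder sketch in Python (the `/robotcowboy/...` address and its argument are made up for illustration, not the app’s actual namespace):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message supporting int32, float32, and string args."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# hypothetical scene-control message from a lua script to a pd patch:
msg = osc_message("/robotcowboy/scene/volume", 0.8)
```

A real deployment would of course hand this off to liblo, ofxOsc, or a similar library rather than hand-rolling the encoding, but the byte layout above is what travels between the lua and pd halves of a scene.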
Check out the project on GitHub: https://github.com/danomatika/robotcowboy
Although what I made at arthackday is just a demo of the app, the basic functionality is there. The most important point for me is that it has started. I have been mulling over the future of robotcowboy for over a year now, and it feels immensely good to finally “break ground” on one aspect of the project’s refresh.
The iPad 2 has turned out to be a very capable upgrade from the old Xybernaut wearable computer, as my demo went off without a hitch. Thanks to the tablet form factor, I’m looking forward to experimenting with new interactions outside of the belt (e.g. Daito Manabe’s dot.), and I feel this change in itself will help keep me from stagnating.
Over the next 6 months or so I will be working on the app and making stepping-stone experiments and performances, hopefully about one a month. I will be working out the lua scripting environment and workflow as well as adding MIDI IO (go iOS + USB MIDI!). The OSC communication is designed to allow forwarding between separate copies of the app, so there will probably be some element of network performance, at least in synchronized visuals. Another aspect to add is scene management and transfer, but that will come later on. Thanks to OpenFrameworks, I will be able to build the app for Mac/Win/Linux and plan on releasing it to the iOS App Store once I feel it’s ready.
Along with the system update, I will be rebranding the project and website … perhaps something a little more 2001 than 1982. Expect to see a new suit soon enough. Are you a web/graphic designer? Any costume designers? Let me know if you want to help …
Beyond the technical and design aspects, I have a large number of interesting ideas and concepts to explore through songs and performances. I don’t want to list any in detail until I have explored them, but they range from a nuclear reactor simulation controlled via guitar to a live score played by a rappelling performer … not to mention a few fast and loud rock songs. These will then be combined into a series of concept albums/live performances for my MFA thesis project in the spring of 2013.
The future is bright. Ride on bit boys!