Make It Rain v0.1
“Make It Rain” is citation:obsolete’s update of Steve Reich’s “It’s Gonna Rain”: it uses contemporary tech (smartphones, tablets, wifi/bluetooth, etc.), contemporary music (specifically, Fat Joe feat. Lil Wayne’s “Make It Rain”), and trends in contemporary art practice (interactivity, relational aesthetics, etc.) to reconsider both Reich’s original work in its own historical specificity and the issues that work raises in relation to our current historical moment. I’ve written about it here.
The first beta run of the piece was yesterday in my Philosophy of Music class. (The piece originated as an attempt to develop some sort of music-making activity for a class full of philosophy students who may have zero experience/training in music.) Some really interesting things happened, and I think we all learned a lot. We (citation:obsolete, my music/sound collaboration with christian.ryan) definitely got some good ideas about how to move forward with the piece. And the best part is that these ideas came up in conversation with my students, even and especially the ones with no musical experience but lots of network/IT experience.
I asked the students to bring in whatever devices they wanted to use: anything that could stream music from SoundCloud and broadcast it for us to hear. Most students brought phones, tablet computers, and laptops. A few brought bluetooth speakers. You can read more about the specific instructions I gave them here.
Anyway, after experimenting for 30ish minutes, we had lots of results to discuss.
It’s clear that the phasing Reich got as a result of analog/mechanical processes can’t be produced just by letting digital processes run on their own; or, more likely, the phasing is so extremely gradual that it’s below our threshold of perceptibility. (I’m thinking about how my digital alarm clock gradually slows down; it’s currently about 3 minutes behind the iPhone that’s docked to it, and to which I usually sync the clock’s time. So there is some slow-down/entropy happening, but at a very, very gradual rate, far too slow for me to observe in, say, 10 minutes.) Treating the tech as merely a playback device didn’t produce the desired phasing effects.
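For a rough sense of why that digital drift stays below the threshold of perceptibility, here’s a back-of-the-envelope sketch. The 3-minute offset is from the alarm clock anecdote above; the roughly-six-months-since-last-sync figure is my assumption, invented for illustration:

```python
# Back-of-the-envelope: how much phase offset would my slow alarm clock
# accumulate during a 10-minute class experiment?
# Assumption (not stated in the post): the 3-minute offset built up
# over roughly 6 months (182 days) since the last sync.

SECONDS_PER_DAY = 86_400

offset_s = 3 * 60                         # 180 s behind the iPhone
elapsed_days = 182                        # assumed ~6 months since last sync
drift_per_day = offset_s / elapsed_days   # ~1 second of drift per day

# Phase offset accumulated during a single 10-minute listening session:
ten_min_fraction = (10 * 60) / SECONDS_PER_DAY
drift_in_10_min = drift_per_day * ten_min_fraction

print(f"{drift_per_day:.2f} s/day -> {drift_in_10_min*1000:.1f} ms in 10 minutes")
# prints "0.99 s/day -> 6.9 ms in 10 minutes"
```

A few milliseconds of offset over 10 minutes is nowhere near the audible, evolving phase relationships Reich got from two tape machines slipping against each other, which is consistent with what we heard in class.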
However, if we treated the tech as mobile devices, this DID produce lots of interesting phasing effects. For example, two students hooked up their phones to a pair of bluetooth speakers (one student per speaker; each speaker unit had a bass and a treble, or a ‘left’ and a ‘right’), put the speakers together in the middle of a long hallway, and then gradually walked towards opposite ends of the hallway, eventually turning and walking down other corridors, in a loop around the building. Here’s a rough recording of what happened (start at about 1:45):
As the students (and their phones, which were transmitting to the speakers, which then played the “Make It Rain” loop) walked farther and farther from the speakers, the bluetooth connection between phone and speaker decreased in quality. Sometimes the loop would be delayed, sometimes the loop would skip, etc. Basically, the speakers were programmed to deal with transmission errors in specific ways (delay/buffering to catch up, skipping dropped ‘packets’ of data, etc.). The movement or mobility of the playback devices (the phones) generated these transmission errors by testing the limits of the connection/stream/signal/etc.
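The two recovery behaviors described above pull playback apart in different ways, which is where the phasing comes from. Here’s a toy model; the chunk length, stall time, and loss rate are invented numbers, not anything measured from the actual speakers:

```python
import random

# Toy model (my sketch, not the speakers' actual firmware): the loop is
# streamed as fixed-length chunks, and some chunks are lost as a phone
# walks out of Bluetooth range. Two recovery policies, as in the post:
#   "rebuffer" - pause and wait for a retransmit (playback falls behind)
#   "skip"     - drop the lost chunk and keep going (playback stays on
#                time, but with audible gaps)

REBUFFER_STALL_MS = 60   # assumed stall per lost chunk while rebuffering

def playback_offset(loss_pattern, policy):
    """Return (delay_ms, gaps): how far behind real time playback ends
    up, and how many chunks were silently skipped."""
    delay_ms, gaps = 0, 0
    for lost in loss_pattern:
        if not lost:
            continue
        if policy == "rebuffer":
            delay_ms += REBUFFER_STALL_MS   # falls further behind
        else:                               # policy == "skip"
            gaps += 1                       # audible dropout instead
    return delay_ms, gaps

random.seed(1)
losses = [random.random() < 0.05 for _ in range(3000)]  # ~5% loss

print(playback_offset(losses, "rebuffer"))  # growing delay, zero gaps
print(playback_offset(losses, "skip"))      # zero delay, many gaps
```

Two speakers handling the same lossy stream with these policies end up offset against each other, and the offset changes as the students move, so the phase relationship is literally being performed by walking.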
We also took a class trip up and down the stairs in our building. The building has pretty decent wifi, but the signal isn’t as strong and reliable in the stairwell (the stairs are stone, the staircase itself is pretty walled off from the rest of the building, etc.). Our devices could get some signal, but not a full, entirely reliable one. So again there was the problem of ‘dropped packets,’ lost data, slow transmission, and MOVEMENT. We began on the third floor; we decided which loop/variation to all play; hit play, and then gradually made our way down two flights of stairs to the foyer, each playing the loop on our devices as we descended the staircase. This took about 2 minutes of the 4 minute loop. The trip downstairs introduced lots of phasing into the playback, so that for the last 2 minutes we could listen to interesting sonic relationships develop. We put all our devices on the table in the middle of the foyer, and then wandered around them to pay attention to the different relationships among different devices. We then repeated the process, making our way back upstairs to the third floor while playing a different loop/variation. Here’s a recording of one of our trips through the stairwell.
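What you end up hearing in the foyer is the spread of offsets across all the devices, since every pair of devices is phased against each other by the difference of their accumulated delays. A small sketch of that idea, with invented stall counts and durations (and an assumed seven devices, since the post thanks six students):

```python
import random

# Sketch of the stairwell run: every device starts the same 4-minute
# loop in sync, then picks up its own random streaming stalls during
# the ~2-minute descent as the wifi cuts in and out. All stall numbers
# here are invented for illustration.

random.seed(7)

def offset_after_descent(max_stalls=8, max_stall_ms=400):
    """Total playback delay (ms) one device accumulates on the stairs."""
    n_stalls = random.randint(0, max_stalls)
    return sum(random.randint(50, max_stall_ms) for _ in range(n_stalls))

n_devices = 7  # assumed: six students plus me
offsets = [offset_after_descent() for _ in range(n_devices)]
print(sorted(offsets))  # ms behind a device that never stalled

# The audible phasing between any two devices is the *difference* of
# their offsets, and there's one such relationship per pair of devices.
pairwise = [abs(a - b) for i, a in enumerate(offsets) for b in offsets[i+1:]]
print(len(pairwise))  # 21 distinct phase relationships among 7 devices
```

That combinatorial explosion of pairwise relationships is why wandering around the table of devices was interesting: each listening position foregrounds a different pair.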
So what’s interesting here is that the phasing, the musically interesting phenomena, emerge from exploiting what’s specific to mobile tech as mobile tech–its mobility. These aren’t just playback devices–they’re portable, mobile, streaming ones. So it’s not just mobility itself that’s distinctive, but networked mobility–we’re all walking around, but we’re jacked in to the same network (school wifi) or parallel networks (AT&T and Sprint, for example). My educated guess is that this networked mobility is a manifestation or symptom of general relations of epistemological/ideological, material, capital, and subject-production. But that’s for theorizing later (or, if someone else wants to jump in here and do some theory while I use my creative brain, that would be awesome). If that’s the case, then the question is: How do the musical relationships and effects that result from our playing with networked mobility speak to these broader philosophical issues? How do the material, technological, and social relations of production crystallized in networked mobility (smartphones, wireless networks, soundcloud and other apps, etc.) manifest in or as specific musical/sonic phenomena? Or, more simply: how do the “relations of production” make the musical/sonic features of “Make It Rain” different than the musical/sonic features of “It’s Gonna Rain”?
I’ll post links to more recordings/videos as students post them to the course tumblr. Thanks again to Johnny Cook, Chad Glenn, Zach Jones, Hannah Levinson, TJ Picard, and Ryan Shullaw for their creativity, their ideas, and their work on this.
This reminds me more of Cage than Reich in how much it concerns the conditions for listening (the conditions presented by the network-as-composition of the devices, as opposed to the very strict listening conditions the devices otherwise present: listen to a Lil Wayne song/loop and don’t do this sort of thing, phase brands together instead), chance composition (the phasing itself dependent on the failure to transmit packets of data), and even the acoustic environment, which takes on bizarre new dimensions in this experiment: not just how spaces reverberate sounds, but also how they conduct wifi.
I like this a lot.