STREAMING TEXT: THE ADRIFT EXPERIENCE

The following is a description of my experience with streaming text. It took place in an artistic context that was designed to engage with and test the potential of the networked system and its narrative space. It made use of sound, images, text and movement in the hope of combining them in a way that is not possible in other media. The project was called Adrift.

An evolving Internet performance event, Adrift premiered at the Ars Electronica Festival in Linz, Austria in '97, and was performed thereafter on a monthly basis through April '98. Adrift was developed by a core group of three artists: myself; architect Marek Walczak, who created the vrml (virtual reality modelling language), or 3-D graphics; and composer and multi-instrumentalist Jesse Gilbert, an expert in RealAudio technology. Jesse worked with an ensemble of instrumentalists and with pre-recorded sound to create the sound score.

We worked from different locations. Initially I was in Linz, Austria; Walczak in the East Village, NYC; and Gilbert in the West Village. The idea at this early stage was that I, in my role as writer, would send texts to my collaborators in America. They would respond -- Walczak with vrml, Gilbert with sound -- and our individual contributions would come together and be made available to Internet users and local audiences. In Linz I had a small room, a projector, a screen and an audio system (and, as an unofficial performance project, a small audience). It worked.

That was September 1997, and RealAudio had yet to complete development of its streaming technologies for text and video. Adrift worked because our team included three excellent programmers -- Mark James, who worked with us through the Ars Electronica Festival, and Jonathan Feinberg and Martin Wattenberg, who joined us thereafter.

My part, inputting text to the work, was accomplished with a java application developed by James and later entirely reworked by Wattenberg. Initially it included text files that could be edited prior to the performance and a writing space where new texts could be written. Special tags allowed me to colour-code words, which were used as signals to my colleagues: a red word indicated that I wanted a response from Walczak (vrml); a yellow word, a response from Gilbert (sound). They were of course free to disregard my requests.

Later, when Wattenberg redesigned my application, I was able to do a number of additional things: I could colour paragraphs and vary the colours within paragraphs. I could position texts: left, right, center. And I could determine the method of transition, whether the text would fade in, scroll in, or appear abruptly. I could prepare all this in advance of the performance or on the fly.

With the redesign also came a place for audience input -- anyone with a computer could input text and send it to me for inclusion -- and a button that alerted Gilbert to the fact that I wanted to speak into his soundscore. As with the earlier application, there was a button to call up pre-existing texts, a place to write new texts, a preview button so that I could see what my texts looked like before sending them, and a send button that streamed them off to our Boston server, where they became available to Walczak and Gilbert. My application was a little marvel.

Another little marvel -- a java applet on the Boston server -- merged Walczak's vrml (3-D) data with my text, making both available to users on a single browser page.
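To give a concrete sense of how such tagging might work, here is a minimal java sketch of the kind of text event an application like this could stream to a server. It is an illustration only: the class, the tag vocabulary and the wire format are assumptions, not the project's actual code.

// A hypothetical sketch, not Adrift's actual code: one streamed text
// event carrying the colour, position and transition settings
// described above.
public class TextEvent {
    enum Position { LEFT, CENTER, RIGHT }
    enum Transition { FADE, SCROLL, APPEAR }

    private final String colour;         // also a signal: "red" = vrml, "yellow" = sound
    private final Position position;     // where the text lands on screen
    private final Transition transition; // how it arrives: fade, scroll or appear
    private final String body;           // the text itself

    TextEvent(String colour, Position position, Transition transition, String body) {
        this.colour = colour;
        this.position = position;
        this.transition = transition;
        this.body = body;
    }

    // Serialise the event as a single tagged line, ready to be streamed
    // to the server where collaborators and the merging applet pick it up.
    String toWireFormat() {
        return String.format("<text colour=\"%s\" pos=\"%s\" fx=\"%s\">%s</text>",
                colour,
                position.name().toLowerCase(),
                transition.name().toLowerCase(),
                body);
    }

    public static void main(String[] args) {
        TextEvent cue = new TextEvent("red", Position.LEFT, Transition.FADE,
                "a line offered to Walczak for a vrml response");
        System.out.println(cue.toWireFormat()); // prints one tagged line the server can relay
    }
}

Whatever the actual format was, the essential design point survives the hypothetical details: a single colour attribute did double duty, styling the text for the audience and cueing a collaborator to respond.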
When the performance began, the same applet, responding to a signal from