Dancing on the Grid: using e-Science tools to extend choreographic research

Helen Bailey, Michelle Bachler, Simon Buckingham Shum, Anja Le Blanc, Sita Popat, Andrew Rowley, Martin Turner

Abstract

This paper considers the role and impact of new and emerging e-Science tools on practice-led research in dance. Specifically, it draws on findings from the e-Dance project. This 2-year project brings together an interdisciplinary team combining research in choreography, next-generation videoconferencing and human–computer interaction analysis, incorporating hypermedia and nonlinear annotation for recording and documentation.

1. Introduction

Dance has been one of the art forms at the forefront of the early adoption of new technologies, both professionally and academically. Dance is fundamentally concerned with the live body moving in space and time. New and emerging technologies have presented a serious challenge to traditional definitions of these categories and therefore constitute fruitful territory both creatively and critically for the dance artist–scholar. Performance practitioner and theorist, Etchells (1999) suggested that ‘… theatre must take account of how technology […] has rewritten and is rewriting bodies, changing our understanding of narratives and places, changing our relationships to culture, changing our understandings of presence.’

This paper considers the ways in which the e-Dance (http://www.ahessc.ac.uk/e-dance) project is using existing e-Science tools and methods, as well as developing new ones, as a means of extending choreographic knowledge and understanding. It will focus on (i) the methodological developments made in terms of collaborative interdisciplinary practices between dance and e-Science, (ii) the technical and software developments that have been made to support this research, (iii) the creative, compositional developments in practice-led research as a result of this new technologically enabled environment, and (iv) the new possibilities for hypermedia capture, documentation and (re)presentation of practice-led research.

e-Dance is a 2-year interdisciplinary practice-led research project bringing together practitioners and academics from the fields of dance and e-Science. It is collaborative in nature, involving researchers from four UK universities, as well as artists from the professional independent dance sector. The research is funded by three UK Research Councils forming the Arts and Humanities e-Science Initiative: the Arts and Humanities Research Council; the Engineering and Physical Sciences Research Council; and the Joint Information Systems Committee.

e-Dance repurposes Memetic (http://www.memetic-vre.net), an online meeting capture environment that integrates knowledge mapping within the Access Grid (AG) videoconferencing system, as a context for telepresent, distributed performance, and Compendium (http://compendium.open.ac.uk/institute) for hypermedia documentation of this practice as research (figure 1). This provides a rich, structured data repository, both for choreographic reflection in/on process and for supporting the subsequent construction of hypermedia research narratives. Through this convergence in network technology and the visualization of spatio-temporal structures and discourse, the project addresses the following intersecting questions. Firstly, what unique opportunities does the AG/Memetic environment provide for developing new approaches to choreographic process/composition and for capturing/modelling practice-led research? Secondly, how can choreographic knowledge and sensibility enable e-Science practice to make its applications more usable within the performance/arts practice-led research context?

Figure 1

Replay of a multi-site project meeting. The agenda item and the current contribution to the discussion are tracked using the timestamps of icons in Compendium (top right). Interactive event timelines (lower frame) enable navigation by speaker and topic. This screenshot shows the single-screen ‘desktop’ interface to Memetic for participants who do not have a full, multi-projector AG node.

The key feature of the e-Dance project is the creative and critical engagement with e-Science and specifically AG. It focuses on the Grid in terms of its visual communicational capacity, and from an arts perspective it is particularly concerned with meaning production in this visual, telecommunicational context. This focus on meaning bridges the disciplinary divide through user interface design and ‘sensemaking’ (Weick 1995) on the e-Science side, and spectator/participant engagement and interpretation from the perspective of the arts (Kozel 2007). This paper argues that, in this context, e-Science is moving beyond the purely instrumental function of merely making traditional research processes more efficient, into a new critically engaged territory of fundamentally shaping the form and content of research processes and products.

2. Collaborative interdisciplinary practices between dance and e-Science

Despite numerous attempts over the past decade or so to use Internet communications for distributed dance-making processes, point-to-point communications have proved more suited to the transmission of video telematics for performance (Naugle 2002; Birringer 2004; Popat 2006; Kozel 2007). Yet these cannot support the broader networks for performance practice that AG using multi-cast can offer (Sermon 2006). Although there exists a large body of multi-modal online performance research, prior to the start of this project, little research had been undertaken into the implications of AG and associated technologies for dance. This project builds on the work of a few pioneering arts researchers: choreographers Shapiro and Smith, visual artist Kelli Dipple and performance researchers Beth and Jimmy Miklavcic, all of whom presented AG-distributed art works at Super Computing Global 2006. In the UK, Bailey & Turner's (2006) Stereobodies project, part of the JISC-funded Virtual Research Environments project, CSAGE: Collaborative Stereoscopic Access Grid Environment, explored the Grid as a distributed compositional and performance space, challenging the notion of a single performance location and integrating stereoscopic video technology to fracture the two-/three-dimensional visual frame. Multidisciplinary arts improvisations at the Locating Grid Technologies (2006) workshop series at the University of Bristol, UK, revisited elements of previous works, playing with arrangements of video windows and acknowledging multiple performance locations.

While these projects began to question the AG interface for performance, none of them explored the full extent of AG as a visual communications medium for practice-led choreographic research processes. The e-Dance project has therefore sought to facilitate and interrogate distributed choreographic processes and consider the ways in which these processes can be enhanced and documented through the existing software and further software development. The project team has focused on establishing a common set of methodologies and working practices across the disciplines involved. In order to achieve this, it has been necessary to engage choreographers, dancers, visual interface designers and computer programmers in intensive practical research workshops where they are in constant dialogue for a week at a time. Research activity has been structured around a series of ‘research intensives’ that have brought the project team together into the studio setting (figure 2). This experimental, practice-led laboratory has provided the context in which to explore Memetic as a creative environment, to develop strategies for capturing and documenting the creative process and through a dialogical, iterative cycle, to evaluate technical and software developments in a meaningful, user-led environment.

Figure 2

Dance artists Catherine Bennett and James Hewison perform ‘Space: Placed’, an AG performance event during an e-Dance research intensive at the University of Bedfordshire, Bedford, UK, in April 2008. Photo by Martin Turner.

The interaction between members of the project team is clearly framed by the process of creating dance, so the discussion is focused on specific practical tasks that require collaborative input from all participants. The need to establish a common vocabulary has led the group to revisit fundamental key concepts such as space, time, location, process, presence and embodiment and to consider their disciplinary inflections and specificities. As new media artist Kac (2000) stated, ‘In telepresence art digital systems such as computers … and networks, ultimately point to the role of culture in creating both individual and collective experiences. Cultural parameters such as language, behavioural conventions, ethical frameworks, and ideological positions are always at work in art and science’.

3. Technical and software developments

The development of the ‘Grid’ and ‘e-Science’ was driven initially by requirements for an infrastructure to deliver distributed computation and storage. The primary demand for associated communication tools centred on creating virtual meeting environments, where verbal discussion was the focus and movement was limited and often of little importance. Significantly, the e-Dance project applies an aesthetic frame to the Grid; therefore, the interface that literally frames the content and exchange between participants is in need of further consideration and design in order to offer flexible usage for arts practice, where physical expression is a key aspect.

(a) Video communication

An initial and significant task for the project has been the development of a tool for communication between performance spaces. The work in this area has focused on AG technology, where multiple spaces using multi-cast can transmit and receive multiple video and audio streams.

One of the first issues the project addressed was the quality of the video streams. AG traditionally uses the H.261 codec, which restricts the size of the video to 352 pixels wide×288 pixels high with a frame rate of up to 20 frames s−1. While this is sufficient for general meetings between people, it is inadequate for performance, where the quality of the video can be critical to the aesthetic impact. Two solutions were developed, depending on the available bandwidth: where high bandwidth and high computational power are available, the JPEG video codec is used; otherwise, H.261AS (the AS stands for ‘arbitrary size’ or ‘any size’) is employed. This second codec allowed us to use the full size of the cameras we had chosen at the full frame rate, as well as to perform what is known as conditional replenishment: only the parts of the video that have changed are encoded and transmitted, reducing both the network bandwidth and the processing power required. A further solution being developed is the ability to dynamically reduce the frame rates and image sizes of individual cameras, either user-driven or automated by the transmitter, thus removing much of the encoding and decoding load.
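
To make the idea of conditional replenishment concrete, the following is a minimal sketch in Python: only macroblocks that differ noticeably from the previous frame are selected for encoding and transmission, so a largely static studio shot costs very little while a dancer crossing the frame transmits many blocks. The block size, change threshold and frame layout are illustrative assumptions, not the project's H.261AS implementation.

```python
# Sketch of conditional replenishment: encode only macroblocks that changed
# since the previous frame. Block size and threshold are assumed values.
import numpy as np

BLOCK = 16          # macroblock size in pixels (assumed)
THRESHOLD = 8.0     # mean absolute difference counted as "changed" (assumed)

def changed_blocks(prev_frame: np.ndarray, curr_frame: np.ndarray):
    """Yield (row, col, block) for each macroblock that differs noticeably."""
    height, width = curr_frame.shape[:2]
    for y in range(0, height - BLOCK + 1, BLOCK):
        for x in range(0, width - BLOCK + 1, BLOCK):
            prev_block = prev_frame[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
            curr_block = curr_frame[y:y + BLOCK, x:x + BLOCK].astype(np.float32)
            if np.abs(curr_block - prev_block).mean() > THRESHOLD:
                yield y, x, curr_frame[y:y + BLOCK, x:x + BLOCK]

def encode_frame(prev_frame, curr_frame):
    """Package only the changed blocks; a static scene costs almost nothing."""
    return [(y, x, block.copy())
            for y, x, block in changed_blocks(prev_frame, curr_frame)]
```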

(b) Video effects

Traditional videoconferencing displays participants in one of two ways: either a whole screen is used for the video and switched depending on who is talking, or the videos are shown without frames in a grid, each in a fixed position on the screen. AG displays video in framed windows that can be moved arbitrarily around the screen, but can be resized only by fixed scalings of the original video: half size, same size and double size (figure 1). For performance, it is useful to have windows without frames (figures 2 and 3) that can be resized arbitrarily, although removing the frames raises the issue of how the windows are then moved. This has been resolved by using the mouse buttons over the video content itself. All windows can then be arbitrarily resized, or sized to a large range of presets, and positioned accurately anywhere.
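
The sketch below illustrates one way such frameless windows can be controlled by dragging on the video content itself: one drag gesture repositions the window, another rescales it, and preset sizes can be snapped to directly. The button assignments, preset scales and minimum size are assumptions for illustration; the e-Dance tool's actual bindings may differ.

```python
# Sketch of moving/resizing a frameless video window via drags on the content.
PRESET_SCALES = (0.5, 1.0, 2.0, 4.0)   # assumed preset sizes relative to the source

class FramelessVideoWindow:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self._drag_origin = None

    def press(self, mouse_x, mouse_y):
        """Remember where inside the window the drag started."""
        self._drag_origin = (mouse_x - self.x, mouse_y - self.y)

    def drag_move(self, mouse_x, mouse_y):
        """Dragging anywhere on the video repositions the window."""
        if self._drag_origin is not None:
            offset_x, offset_y = self._drag_origin
            self.x, self.y = mouse_x - offset_x, mouse_y - offset_y

    def drag_resize(self, mouse_x, mouse_y, keep_aspect=True):
        """A second drag gesture rescales the window arbitrarily."""
        new_width = max(32, mouse_x - self.x)
        new_height = max(32, mouse_y - self.y)
        if keep_aspect:
            new_height = int(new_width * self.height / self.width)
        self.width, self.height = new_width, new_height

    def snap_to_preset(self, native_width, native_height, scale):
        """Jump to one of a range of preset sizes relative to the native video."""
        assert scale in PRESET_SCALES
        self.width = int(native_width * scale)
        self.height = int(native_height * scale)
```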

Figure 3

Dance artists Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison perform ‘Trace Duet’, an AG performance event during an e-Dance research intensive at the University of Bedfordshire, Bedford, UK, in August 2008. Photos by Michelle Bachler.

Transparency and window blending are useful options for performance (figure 3), but they are not simple features to implement. This has been achieved by avoiding hardware acceleration and instead using the native operating system to draw to the screen. This results in less efficient operation, but the performance remains within range of the hardware-accelerated version.
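
At its core, software compositing of this kind reduces to alpha-blending one video window over another. The following minimal sketch shows the arithmetic involved; the frame shapes and opacity value are assumptions for illustration, not the project's rendering code.

```python
# Sketch of blending an upper video window over a lower one in software,
# as one might when hardware-accelerated transparency is unavailable.
import numpy as np

def blend(lower: np.ndarray, upper: np.ndarray, opacity: float) -> np.ndarray:
    """Alpha-blend two same-sized RGB frames (pixel values 0-255)."""
    lower_f = lower.astype(np.float32)
    upper_f = upper.astype(np.float32)
    out = (1.0 - opacity) * lower_f + opacity * upper_f
    return out.clip(0, 255).astype(np.uint8)

# e.g. overlay a half-transparent recorded stream on top of a live stream:
# composite = blend(live_frame, recorded_frame, opacity=0.5)
```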

(c) Recording and replaying

Another mode of manipulating time and space in performance is the combination of live video with recorded video, both from external sources and from recordings made earlier in the process. Fortunately, the recording of AG streams is a problem that was resolved in the Memetic project. Memetic was designed to record streams from an AG meeting, along with time-stamped Compendium annotations, and then replay them at a later date. One of the main issues with this software was that it was designed to work from a central server. This requires that both the location being recorded and the location where playback occurs have a network connection to the outside world. We were aware that not all performance and rehearsal spaces have this connectivity, and so we integrated the Memetic recording and playback engine into the communication tool described above. This also had the advantage of simplifying the interface, since users now have to access only one piece of software to perform the communication, recording and playback tasks. The current iteration of the software is capable of uploading the recorded streams to a central server once the recording is complete, and future versions will also allow the download of streams from the central server.
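
The record-locally-then-upload pattern can be sketched as follows: packets are appended to a local file with capture timestamps during the session (so replay can reproduce the original timing), and the file is pushed to a central server once connectivity is available. The file format and upload endpoint are assumptions for illustration, not the Memetic or e-Dance on-disk formats or APIs.

```python
# Sketch of local stream recording with deferred upload to a central server.
import struct
import time
import urllib.request

class LocalStreamRecorder:
    def __init__(self, path):
        self.path = path
        self._file = open(path, "ab")

    def write_packet(self, packet: bytes):
        """Store each packet with a millisecond timestamp for later replay."""
        timestamp_ms = int(time.time() * 1000)
        header = struct.pack("!QI", timestamp_ms, len(packet))
        self._file.write(header + packet)

    def close(self):
        self._file.close()

    def upload(self, server_url: str):
        """Push the completed recording to a server (hypothetical endpoint)."""
        with open(self.path, "rb") as recording:
            request = urllib.request.Request(server_url,
                                             data=recording.read(),
                                             method="PUT")
            urllib.request.urlopen(request)
```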

(d) User interface for planning and performance

With the experience gained from the project research intensives, we have recently been designing a user interface to support the choreographic research process and to execute tasks during dance performance events. We have decided on an interface that at first sight looks similar to the presentation design tools found in many office applications. However, the tool will also take on useful aspects of other performance-related tools, such as Arkaos VJ (http://www.arkaos.net), where the keyboard can be used to trigger clips and effects, Adobe Premiere (http://www.adobe.com/products/premiere), which is used for editing videos, and Isadora (http://www.troikatronix.com/isadora.html), where complex video effects and layouts can be planned. These tools, while useful, all have their limitations with respect to networked choreographic research and performance. For example, Arkaos VJ is designed to be used in real time; while it does allow a session to be recorded, it does not allow a performance to be built up incrementally with the individual items remaining editable. It also allows only a single scene to be recorded, rather than a series of interconnected scenes. Adobe Premiere is designed for editing individual video clips; multiple clips can be edited together, but the final product is a single continuous video. Isadora is designed to build up a series of controls over digital video; the emphasis here, as with Arkaos VJ, is on real-time manipulation. The layout of objects in Isadora, while visual, is not related to the layout of the objects on the screen; to position a video window, the user must enter a series of coordinates.

The final interface therefore has some similarities with presentation software; however, instead of editing slides, the user edits scenes. It also differs in important ways: for example, presentation software tends to have an ‘edit’ mode and a ‘slide show’ mode, and is usually edited on the same monitor on which the slide show appears. We have made the assumption that the user has at least two monitors, one operating as the control monitor and the second displaying the ‘live’ projection that the audience sees. We want users to be able to edit the scenes of the performance while the current scene is showing to the audience. The user may also want to trigger actions on a scene with a mouse click while not having the mouse pointer visible on the live screen. We have also had to design tools into our software that would not normally be found in generic presentation software. In addition to the ability to show live and recorded video images, we include other tools that are specific to the performance arena, such as masking tools. While developing the user interface, we discovered that, although presentation tools allow the user to draw objects, they lack the ability to draw holes in an object in order to partially reveal another beneath it. The development of this software is ongoing and we aim to make it as expandable as possible. This reflects the dynamic nature of the performance world, where new ideas are always emerging.
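
A rough data model for such a scene-based interface might look like the sketch below: a performance is an ordered list of scenes, each scene places live or recorded video windows and masks (masks may contain holes that partially reveal what lies beneath), and keyboard triggers fire actions while a scene is shown on the live monitor. The names and fields are assumptions for illustration, not the e-Dance software's actual schema.

```python
# Sketch of a scene-based performance data model with triggers and masks.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class VideoWindow:
    source: str                      # e.g. "camera:2" or "recording:duet-take3"
    rect: Tuple[int, int, int, int]  # x, y, width, height on the live screen
    opacity: float = 1.0

@dataclass
class Mask:
    rect: Tuple[int, int, int, int]
    holes: List[Tuple[int, int, int, int]] = field(default_factory=list)
    # each hole partially reveals whatever lies beneath the mask

@dataclass
class Scene:
    name: str
    windows: List[VideoWindow] = field(default_factory=list)
    masks: List[Mask] = field(default_factory=list)
    triggers: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def key_pressed(self, key: str):
        """Fire a trigger (e.g. start a recorded stream) without a visible
        mouse pointer on the live projection."""
        if key in self.triggers:
            self.triggers[key]()

performance: List[Scene] = []   # edited on the control monitor while one
                                # scene is displayed on the live monitor
```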

(e) Hardware

Videoconferencing equipment can be expensive and difficult to use in a performance environment, and the images from traditional videoconferencing cameras can often be viewed only on the equipment to which they are connected. For these reasons, we decided to use Mini-DV camcorders, which are commonly used in the performance world. These cameras can be connected to the computer using IEEE 1394 FireWire Digital Video (DV) cabling; this has an advantage over USB webcams, as FireWire DV cables can be up to 200 m in length, allowing much more freedom of movement (by comparison, USB is restricted to 5 m without repeaters). These cameras also usually have an LCD screen, which makes the set-up of scenes much easier. Finally, the cameras transmit video with a resolution of 720 pixels wide×576 pixels high at 25 frames s−1. This transmission rate is fixed and does not depend on the computer's processing power, as it does with USB. As mentioned above, we experienced problems with high processor usage owing to the number of high-quality video streams being used. After tests, we recommend a modern multi-core machine for running the e-Dance software suite; a dual-core laptop is sufficient.
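
A back-of-envelope calculation (our own illustration, not a project benchmark) indicates why several such streams tax a single processor: once decoded, each DV camera delivers 720×576 pixels at 25 frames s−1, and the per-frame work scales with the number of streams being encoded, decoded and composited.

```python
# Rough arithmetic for the decoded pixel rate of one DV camera stream.
width, height, fps = 720, 576, 25
bytes_per_pixel = 3                      # assuming RGB after decoding
pixels_per_second = width * height * fps
raw_rate_mb = pixels_per_second * bytes_per_pixel / 1e6

print(f"{pixels_per_second:,} pixels/s per stream")            # 10,368,000
print(f"~{raw_rate_mb:.0f} MB/s of decoded video per stream")  # ~31 MB/s
print(f"~{4 * raw_rate_mb:.0f} MB/s for four simultaneous streams")
```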

There are many pieces of hardware developed for office work that can now be applied to the dance performance context. One such item that we have used is the digital notepad, which allows the user to write or draw on a pad of real paper with real ink, while at the same time capturing the data digitally and potentially converting it into text. These devices also allow real-time interaction with a digital version of the pad on the computer screen, so that the user can draw directly on the screen. Using this along with the transparency effects allows the user to draw or write on the video windows (figure 3).

4. Creative, compositional developments in practice-led research

Performance within the AG/Memetic environment is conceptualized and practised as a live phenomenon. In other words, performers and spectators are co-present in physical spaces and simultaneously share multiple, virtual locations. Within an AG performance node, performers engage in live performance that can be relayed to them and to other remote locations through streamed, audio–video media (figure 4). Several video cameras can be used to provide a multi-perspective view of the dancer's body and the performance space in each AG node synchronously. Memetic's replay facilities enable the streamed media to be recorded and redistributed to remote locations synchronously or asynchronously. This provides a network topology that radically departs from those previously encountered in the telematic dance context in terms of both aesthetic complexity and technical functionality.

Figure 4

AG improvisations by dance artists Catherine Bennett, James Hewison and Amalia Garcia during an e-Dance research intensive at the University of Manchester, UK, in February 2008. Photos by Helen Bailey.

(a) Creative possibilities in distributed collaboration

As a distributed, collaborative environment, AG offers new creative possibilities through these multiple sites of performance and spectatorship. This requires the review of choreographic understandings of embodied spatio-temporal relationships. The multi-perspective nature of the environment throws into question the traditional relationship between choreographer and dancer: when working in a distributed yet collaborative environment, how can participants take account of the paradoxical subjective position of being alone/separate, yet together? Our research intensives have explored this through establishing dialogic telepresent performance contexts that foreground this paradoxical situation in terms of embodied experience.

For example, in the choreographed work Dislocate/Relocate: Composite Bodies (figure 5), four dancers were distributed across two AG nodes. In both nodes, each dancer had one camera trained on either their upper or lower body and a second providing a wide shot of both the projected and co-present performers. The streamed video of the fragment of each dancer's body was then placed in a grid, which was projected simultaneously in both AG nodes. Two composite virtual bodies were generated from the video streams of the four fragmented bodies. These new composite bodies were reconstructed through the intersection of the windows across a horizontal axis at the torso. A duet was then created, which allowed the two composite bodies to virtually dance together. In terms of compositional process, this required both the dancers and the choreographer to enter into a telepresent, non-verbal, movement-based dialogue in which the movement content and compositional structure emerged and were dependent on this networked context. The first phase of the duet focused on creating two new ‘coherent’ or singular bodies, with continuous space–time relationships established between the fragmented body parts. The second phase of the duet abandoned the idea of generating the illusion of embodied coherence and instead explored the foregrounding of a critical corporeality that embraced temporal discontinuity and spatial dislocation. This example of real-time spatial montage and the remediation and recontextualizing of this filmic concept is made possible by Grid technology. It is particularly interesting in the sense that it provides, in Bolter & Grusin's (1999) terminology, both a participant and spectatorial encounter with hypermediacy: a ‘style of visual representation whose goal is to remind the viewer of the medium’. In other words, it critically engages both participant and observer in issues concerning our corporeal relationship to mediatized environments and the embodied experience of telepresence and virtuality.
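
As we read the description above, the window layout behind the composite bodies can be sketched roughly as follows: each composite stacks one dancer's upper-body stream directly above another dancer's lower-body stream, meeting at a shared horizontal axis at the torso. The coordinates, window sizes and pairings below are illustrative assumptions rather than the work's actual configuration.

```python
# Sketch of a two-composite-body layout built from four body-fragment streams.
WINDOW_W, WINDOW_H = 352, 288          # assumed stream window size
TORSO_Y = 300                          # assumed screen height of the shared axis

def composite_body(upper_stream, lower_stream, left_x):
    """Return two window placements forming one composite virtual body."""
    return [
        {"stream": upper_stream, "x": left_x, "y": TORSO_Y - WINDOW_H},
        {"stream": lower_stream, "x": left_x, "y": TORSO_Y},
    ]

layout = (
    composite_body("dancer1:upper", "dancer3:lower", left_x=100) +
    composite_body("dancer2:upper", "dancer4:lower", left_x=600)
)
```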

Figure 5

Dance artists Catherine Bennett, River Carmalt, Amalia Garcia and James Hewison performing ‘Dislocate/Relocate Composite Bodies’, an AG performance event during an e-Dance research intensive at the University of Bedfordshire, Bedford, UK, in August 2008. Photos by Michelle Bachler.

(b) Layering live and pre-recorded material

The project examines how Grid-based hypermedia and semantic annotation tools can be deployed as a means of capturing and rendering the visual discursive/dialogic practices inherent in the choreographic process. The visual nature of the interface employed by these e-Science documentation tools is currently being adapted to support multi-layered, nonlinear representations of process, more aligned with the characteristics of the creative process itself. This record of the process is not simply a static archival document, but also a dynamic source of material that can be redeployed: for example, as a site for forensic archaeological investigation, as a score for further developmental commentaries generated through choreographic practice, or as pre-recorded audio/visual content for reuse in hybrid distributed performance. Layering of these multiple options enables us to challenge and manipulate understandings of time and space in performance by juxtaposing here and there, then and now. High-quality video and sophisticated control tools are essential for this ‘blurring’ to function effectively in performative terms. Figure 6 illustrates the practical exploration of these ideas.

Figure 6

Dance artist/researcher Helen Bailey performing an AG improvisation during an e-Dance research intensive at the University of Manchester, UK, in February 2008. Photo by Anja Le Blanc.

During this research intensive, the dancer adopted an improvisatory relationship to the AG environment. In other words, the mediatized context constructed in the AG node became a visual instrument that the dancer had to ‘learn to play’. Cameras in the AG node were distributed around the performance space to provide a multi-perspective view of the body. Movement was improvised in a real-time relationship to the spatial montage created by the various video representations of her body. In this way, the dancer simultaneously observes and constructs her virtual, motional, embodied identity through the multiple windows. The recorded video streams that were generated as a result of these improvisations were then replayed and added to the existing projected montage of live video stream windows. The dancer used the pre-recorded video streams as a visual score from which to generate a further movement commentary, thus layering the pre-recorded and the live into a spatio-temporal, nonlinear montage.

5. Hypermedia capture, documentation and (re)presentation of practice-led research

As a practice-led discipline, both the dance pieces produced and the way they are produced constitute core elements in choreographic research. Our interest is in how the same creative tools we provide for conceiving, replaying, discussing and annotating research processes/performances might also transition into reflective tools, generating as a by-product a hypermedia archive on which the choreographic artist/researcher can reflect. In turn, this would inform the composition not only of conventional research communications (such as this prose paper) but also, more interestingly, of hypermedia presentations crafting narrative paths through the archive. Moreover, as described in the example above (figure 6), these can transition back into media assets for use in performances. To support reflection, we are introducing different forms of ‘knowledge cartography’ (Okada et al. 2008) for mapping issues, dialogue and argumentation. In Memetic, the nodes in these networks (summarizing key ideas) are optionally indexed against video clips from AG sessions (Buckingham Shum et al. 2006; figure 1).

(a) Prelinguistic creativity versus analytical knowledge cartography

Writing about the often difficult, prelinguistic, creative process in both the arts and sciences, Claxton (2006) observed: ‘… a softer, slower kind of groping for a way of articulating something that is currently, tantalizingly, beyond our linguistic grasp … the intuitive feeling of rightness (or wrongness) that guides the process. This sense of rightness—the same immediate, unjustifiable feeling of ‘Yes, that's it’ that guides the process of focusing—seems to be essential to the kind of creativity I am exploring. A choreographer may not know what it is she is looking for until she tries out one more move and gets the “Yes, that's it” response’.

Promising though Compendium seemed for mapping discussions, our pilot efforts to embed it in a studio context have been more sobering. Firstly, the choreographic researcher's need to walk, dance and gesture freely does not fit well with a keyboard-based tool. Speech recognition and transcript parsing may provide a way to semi-automate ‘hands-free’ map creation in the future. Secondly, the potential of knowledge mapping to sharpen thinking derives very much from analytical, linguistic reflection on the conceptual structure of ideas. This is ideal for the work of crafting scholarly reasoning, but is often antithetical to the embodied, non-verbal, creative process we have now witnessed first-hand in the dance studio.

Given these physical and cognitive gulfs, we have focused efforts on customizing Compendium to support the more reflective elements of choreographic research: in planning before going into the studio and in reflecting on the resulting video material. Figure 7 shows how the choreographer is now able to annotate video spatially and temporally by overlaying nodes directly onto videos. These mark significant moments in one or more locations, and time frames as reflected in the timelines. Nodes may be assigned codes to enable qualitative data analysis, and may contain additional nodes, conceptual structures, links to documents/websites or, indeed, additional videos. A given node may be embedded in multiple videos, facilitating navigation between thematically connected clips in the archive, and may also be the subject of personal or collective reflection in a dialogue map (as shown in figure 1).
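
The essence of this spatio-temporal annotation can be sketched as a simple data structure: a node marks a screen region and a time span within one or more videos, can carry qualitative-analysis codes, and can itself hold further nodes and links. The field names below are assumptions for illustration, not Compendium's actual data model.

```python
# Sketch of spatio-temporal video annotation nodes shared across an archive.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VideoAnchor:
    video_id: str
    start_seconds: float
    end_seconds: float
    region: Tuple[int, int, int, int]   # x, y, width, height overlaid on the frame

@dataclass
class AnnotationNode:
    label: str
    codes: List[str] = field(default_factory=list)             # qualitative analysis codes
    anchors: List[VideoAnchor] = field(default_factory=list)   # one node, many videos
    children: List["AnnotationNode"] = field(default_factory=list)
    links: List[str] = field(default_factory=list)             # documents, websites, other videos

def videos_sharing_node(archive: List[AnnotationNode], label: str) -> List[str]:
    """Navigate between thematically connected clips via a shared node."""
    return [anchor.video_id
            for node in archive if node.label == label
            for anchor in node.anchors]
```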

Figure 7

Spatio-temporal annotation of video recordings in the Compendium visual hypermedia tool.

To summarize, we are exploring digital media to enable choreographers to reflect on the processes and products of their work—before and after the real-time pressures of working in the dance studio. In principle, it would also be possible to create annotated recordings in the studio, but we have not trialled this at the time of writing, and are uncertain whether this would be a distraction. The ability to construct layers of meaning across a video archive, possibly reflecting multiple perspectives in dialogue or argument with each other, opens interesting new possibilities for choreographic teaching, research and practice (or, we propose, any discipline whose discourse involves interpreting video records of practice). e-Science infrastructure of this sort makes it possible to capture, reflect on and disseminate processual data in ways not previously available to the dance researcher or artist.

6. Conclusion

At the time of writing, the e-Dance project has completed five research intensives and has plans for a similar number in the coming year. The focus for the first half of the project has been concerned with the development of an AG performance environment that provides a novel context for dance researchers to develop new compositional approaches to networked and distributed dance performance. The second half of the project is now investigating knowledge mapping tools to assist reflection on the meanings of the resulting materials, converting a video archive into a hypermedia network with layers of annotation and scholarly discourse providing narrative paths through the material.

The emphasis on process in this research is a key element, and the immersive nature of the research intensives has enabled choreographic and technical developments to progress in close association. Choreographic tasks both investigate and challenge the possibilities offered by existing technologies, while the nature of embodied knowledge calls into question current principles and methods of digital data collection. Simultaneously, the fracturing and fragmentation of space and time inherent in these remote communications and data storage/playback systems require a fundamental revision of both the dancer's embodied experience and the choreographer's craft.

Footnotes

  • One contribution of 16 to a Theme Issue ‘Crossing boundaries: computational science, e-Science and global e-Infrastructure II. Selected papers from the UK e-Science All Hands Meeting 2008’.

References
