Exploring Digital Futures @ APGRD

Exploring Digital Futures at the APGRD is an eleven-month scoping project (John Fell Fund, 2025-26), which examines the role of audiovisual recordings in performance archives and reconsiders the APGRD’s own collection of around 500 VHS tapes, CDs, DVDs, and digital files.

The project responds to issues and ideas raised in our Archiving Performance in the Digital Present events and builds on the initial outcomes of the DAT@APGRD project (2024-25), which took the first steps towards making our audiovisual collection more discoverable and accessible.

Exploring Digital Futures (EDF) goes further by reconceptualising recordings of performances as structured, computational research objects.

Initial questions

The project began with a series of foundational questions:

  • Could digital tools be utilised to open up recordings, rendering them not only more discoverable and more accessible, but also searchable and explorable?
  • Could we introduce the ability to search within individual recordings and across a collection?
  • Could we encode (through automated and curated processes) different layers that make up a performance (speech; sound; bodies; lighting; costume; set; structure etc.) so that they can be analysed computationally, in new ways, and at scale?
  • Could we present the recordings in a manner that challenges the notion of the ‘neutral’ recording as a transparent document and positions them as artefacts which encode complex layers of technical, curatorial, and interpretative decision making?

Scoping and testing

In collaboration with Oxford’s Visual Geometry Group (VGG), we tested the reliability, efficiency, and usefulness of AI-assisted technologies such as automated speech recognition (WhisperX) and multimodal indexing (WISE) with a selection of digitised VHS tapes from the APGRD’s collections. 
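To give a concrete sense of this step, the sketch below shows a typical WhisperX transcription-and-alignment pass in Python. It is a minimal illustration under stated assumptions, not our production pipeline, and the filename is a hypothetical stand-in for a digitised tape.

import whisperx

device = "cuda"  # or "cpu" on machines without a GPU (with compute_type="int8")
model = whisperx.load_model("large-v2", device, compute_type="float16")

# Transcribe the audio track of a (hypothetical) digitised VHS transfer.
audio = whisperx.load_audio("digitised_tape.mp4")
result = model.transcribe(audio, batch_size=16)

# Align the transcript to the waveform for precise timestamps,
# which can later be anchored to the video footage.
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

for segment in result["segments"]:
    print(f"{segment['start']:8.2f}-{segment['end']:8.2f}  {segment['text']}")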

At the same time, inspired by projects such as DraCor (the Drama Corpora Project), we explored the feasibility of adapting the principles of TEI (Text Encoding Initiative) for time-based media.   

Adopting this hybrid approach, the project developed MedEIA - the Media Encoding Initiative for Archives. Within this framework, machine-generated outputs are synchronised with video footage, corrected and further curated by scholars, and standardised in an extended TEI schema customised to capture spatial, auditory, embodied, and scenographic dimensions.
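By way of illustration, the sketch below serialises corrected speech segments into a TEI-style encoding with timestamps. The element and attribute names are simplified assumptions made for this example; the MedEIA schema itself is richer and still in development.

import xml.etree.ElementTree as ET

# Corrected, machine-generated speech segments (invented example data).
segments = [
    {"start": 12.48, "end": 15.91, "who": "Antigone", "text": "I will bury him."},
    {"start": 16.20, "end": 19.05, "who": "Ismene", "text": "Against the law?"},
]

# Serialise each segment as a TEI-style utterance (<u>) carrying
# simplified start/end attributes in seconds from the start of the footage.
body = ET.Element("body")
for seg in segments:
    u = ET.SubElement(body, "u", who=seg["who"],
                      start=f"{seg['start']:.2f}", end=f"{seg['end']:.2f}")
    u.text = seg["text"]

ET.indent(body)  # pretty-print (Python 3.9+)
print(ET.tostring(body, encoding="unicode"))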

The MedEIA framework

A bespoke authoring interface has been prototyped to enable record-creators with no training in TEI-XML to create encodings: it presents the TEI elements as editable ‘event’ categories linked to the footage by precise timestamps. The results enable comparative exploration, while foregrounding mediation, provenance, and interpretation.

Image:
Screenshot of an authoring interface to mark up transcripts of video files
The authoring interface
Encodings can be made by people with no training in TEI-XML: TEI elements are presented as editable ‘event’ categories relating to Speech, Structure, Stage, Set, Lighting, Audible, and Objects.
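Internally, each such event can be thought of as a small, typed record. The sketch below shows one possible shape in Python; the field names are our illustrative assumptions, not the interface's actual data model.

from dataclasses import dataclass, field
from enum import Enum

# The seven editable 'event' categories presented by the interface.
class Category(Enum):
    SPEECH = "Speech"
    STRUCTURE = "Structure"
    STAGE = "Stage"
    SET = "Set"
    LIGHTING = "Lighting"
    AUDIBLE = "Audible"
    OBJECTS = "Objects"

@dataclass
class Event:
    category: Category
    start: float                # seconds from the start of the recording
    end: float
    label: str                  # e.g. "Chorus enters stage left"
    attributes: dict = field(default_factory=dict)

entrance = Event(Category.STAGE, 112.0, 118.5, "Chorus enters stage left")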

For researchers, a prototype search interface has been created to support visitors to the APGRD’s archive Study Room in performing searches across all videos held in the collection: audio searches can detect particular sounds such as drums, screams, bells, and whistles, while visual searches identify particular objects, colours, expressions, or individual faces.
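WISE itself is a purpose-built tool from VGG. Purely to illustrate the principle behind such searches, the toy sketch below ranks frames by cosine similarity between embedding vectors; all data here is randomly generated, so it stands in for real frame embeddings rather than reproducing WISE's API.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real frame (or frame-region) embeddings: in practice these
# come from a multimodal model; here they are random unit vectors.
frames = rng.normal(size=(10_000, 512))
frames /= np.linalg.norm(frames, axis=1, keepdims=True)

# Embedding of the query (e.g. a selected face or object region).
query = rng.normal(size=512)
query /= np.linalg.norm(query)

scores = frames @ query              # cosine similarity against every frame
top5 = np.argsort(scores)[::-1][:5]  # indices of the five closest frames
print(top5, scores[top5])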

Image:
Screenshot of an online video playback interface performing a visual search for a face
Researcher's view: performing a visual search
Users can select a section of a frame in the video recording and search both the rest of the video and across the collection by visual similarity, face matching, or object matching.

Image:
A screenshot of an interface searching for axes and weapons across videos
Researcher's view: results displayed for the query AXE
Direct search of video recordings in the media collection, using the multimodal search tool WISE.

For any recording that has also been encoded, a further set of search functions is opened up, along with new ways to explore and navigate the recording.

A timeline generated from the encoding provides a visualisation of the elements of each performance - stage; set; number of bodies onstage; exits; entrances; actions; speech utterances; amplitude; scenes; acts. Each element can be explored individually or in combination. An overview allows for segments of the recording to be explored in isolation.
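As a rough sketch of the idea, the snippet below plots a handful of invented, colour-coded event spans against time with matplotlib; the real timeline is generated from the encoding and synchronised with playback.

import matplotlib.pyplot as plt

# Invented example spans: (start, duration) in seconds, one row per element.
tracks = {
    "Speech":   [(12.5, 3.4), (18.0, 6.1), (30.2, 4.0)],
    "Lighting": [(0.0, 25.0), (25.0, 15.0)],
    "Stage":    [(10.0, 2.0), (40.0, 3.0)],  # e.g. entrances and exits
}

fig, ax = plt.subplots(figsize=(8, 2.5))
for row, spans in enumerate(tracks.values()):
    # One horizontal band of colour-coded spans per performance element.
    ax.broken_barh(spans, (row - 0.4, 0.8), facecolors=f"C{row}")
ax.set_yticks(range(len(tracks)), labels=list(tracks))
ax.set_xlabel("time (s)")
plt.show()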

Image:
Screenshot of an online interface providing a visualisation of various aspects of a recording of a performance
Researcher's view: exploring the timeline
Encoded videos allow the researcher to explore a customisable visualisation of transcription data on a timeline, showing individual colour-coded performance events synchronised with the video recording.

Text searches can be made within the transcript of a single recording or across the collection to find individual words or phrases; while a visual search allows the researcher to draw a box around areas of interest in a single frame (be it a prop, a piece of set, a face, costume etc.) and search for similar occurrences within the collection.

A future development will see the encoded transcriptions made available for queries via an API (Application Programming Interface), allowing the data to be interrogated computationally by a variety of software applications. This feature will be restricted to aspects of the transcripts that fall outside copyright or for which permissions have been granted by copyright-holders.
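Such an API might, for example, expose timestamped transcript search over HTTP. The hypothetical sketch below (using FastAPI) is only meant to suggest the shape of the thing; the endpoint names, identifiers, and response format are yet to be designed.

from fastapi import FastAPI

app = FastAPI()

# Hypothetical in-memory store: recording ID -> timestamped segments.
TRANSCRIPTS = {
    "aod-antigone-2017": [
        {"start": 12.48, "end": 15.91, "text": "example transcript line"},
    ],
}

@app.get("/recordings/{rec_id}/search")
def search_transcript(rec_id: str, q: str):
    """Return segments of one recording whose text contains the query."""
    segments = TRANSCRIPTS.get(rec_id, [])
    return [s for s in segments if q.lower() in s["text"].lower()]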

Your input

The search interface cannot be made publicly available online due to copyright restrictions. We hope to secure the permissions needed to enable a public showcase of the prototype, operating on a limited, curated collection of recordings. This will allow researchers to explore the MedEIA app and see what they can expect to use when visiting us, and allow us to gather more feedback for refining the system.

Similarly, the customisation of TEI for time-based media which we have begun will require community collaboration in order to create a shareable standard that other repositories and collections might wish to adopt.

These collaborations will be incorporated into any further developments of the EDF project.

With this in mind, we would love to be in contact with other performance archives, institutes, repositories, and individuals regarding their own AV collections. If you could spare a few minutes, we have a very brief online questionnaire here.

Acknowledgements

We are grateful for the generosity of the University's John Fell Fund and for the expertise and enthusiasm of our collaborators at VGG, in particular Giles Bergel, Abhishek Dutta, and Andrew Zisserman. We also wish to thank our colleagues at the University of Cape Town's ReTAGS project (Reimagining Tragedy in the Global South, led by Mark Fleishman) for permission to access not only hundreds of hours of rehearsal footage but also performance recordings and unpublished scripts of their work, Antigone (not quite/quiet). Similarly, the Actors of Dionysus (AOD) and Digital Theatre+ have generously granted us permission to demonstrate our prototype using clips from the recording of the AOD's 2017 Antigone.