Jake Elwes (/ˈɛl.wɪs/) is a British media artist, hacker and researcher whose practice explores artificial intelligence (AI), queer theory and technical biases.[1] They are known for using AI to create art in mediums such as video, performance and installation.[2] Elwes considers themselves neuroqueer,[3] and their work on queering technology addresses issues caused by the normative biases of artificial intelligence.[4][1]
Education and early life
Elwes was born in London to the British contemporary painter Luke Elwes and Anneke, daughter of Hans Dumoulin. Elwes is a great-grandchild of Army officer James Hennessy and of portrait painter Simon Elwes RA, son of Victorian opera singer Gervase Elwes.[5][6]
Elwes studied at the Slade School of Fine Art from 2013 to 2017, where they began using computer code as a medium.[2] In 2016 they attended the School of Machines, Making & Make-Believe in Berlin with artist and educator Gene Kogan.[2] Elwes was introduced to drag performance by their collaborator Joe Parslow,[7] who holds a PhD in drag performance. Drag performance has since become instrumental to Elwes' work.[1]
Career
Elwes' work with artificial intelligence has been cited as offering a hopeful strategy for making AI more playful and diverse.[8]
Elwes' work has been exhibited in numerous international art museums and galleries and was featured in a BBC documentary on the history of video art.[9] They were a 2021 finalist for the Lumen Prize[10] and received an Honorary Mention at the 2022 Prix Ars Electronica in the Interactive Art + category.[11] They also curated and presented the opening provocation "The New Real - Artistic and Queer Visions of AI Futures" to the UK government with two drag artists at the AI UK conference in 2024.[12]
The Zizi Project
The Zizi Project is a series of works exploring the interaction of drag performance and AI. It currently comprises several artworks.
Zizi - Queering the Dataset (2019)
Knowing that facial recognition technology statistically struggles to recognise Black women and transgender people, Elwes set out to "queer the dataset" using an open-source generative adversarial network (GAN, a type of machine learning model and an early form of generative artificial intelligence). Elwes added 1,000 photos of drag kings and drag queens to the 70,000 faces of the standardised Flickr-Faces-HQ (FFHQ) facial recognition dataset on which the GAN was trained, then generated new simulacra faces, known as deepfakes.[1] “We queer that data so it shifts all of the weights in this neural network from a space of normativity into a space of queerness and otherness. Suddenly all of the faces start to break down and you see mascara dissolve into lipstick and blue eye shadow turn into a pink wig,” said Elwes in a 2023 interview with Artnet.[42]
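As an illustration of the general technique rather than Elwes' actual pipeline, the sketch below shows how a face GAN pretrained on a large dataset such as FFHQ might be fine-tuned on a small additional folder of drag photographs so that its outputs drift toward the new data; the checkpoint name, file paths and small DCGAN-style architecture are all illustrative assumptions.

```python
# Hypothetical sketch: fine-tuning a pretrained face GAN on a small drag dataset,
# in the spirit of "queering the dataset". Checkpoint, paths and network sizes
# are illustrative assumptions, not the artist's actual setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim = 128

# Minimal DCGAN-style generator and discriminator for 64x64 RGB images.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
).to(device)
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),
).to(device)

# Assumed checkpoint pretrained on a large face dataset (e.g. FFHQ).
state = torch.load("pretrained_faces_gan.pt", map_location=device)
generator.load_state_dict(state["generator"])
discriminator.load_state_dict(state["discriminator"])

# Small additional dataset of drag performers (illustrative path).
tfm = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])
drag_faces = datasets.ImageFolder("data/drag_faces", transform=tfm)
loader = DataLoader(drag_faces, batch_size=32, shuffle=True, drop_last=True)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

for epoch in range(10):  # a few epochs are enough to pull the model toward the new data
    for real, _ in loader:
        real = real.to(device)
        z = torch.randn(real.size(0), latent_dim, 1, 1, device=device)
        fake = generator(z)

        # Discriminator step: real drag photos vs. generated faces.
        d_loss = bce(discriminator(real), torch.ones(real.size(0), 1, device=device)) + \
                 bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1, device=device))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: push generated faces toward the (now queered) data distribution.
        g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1, device=device))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Fine-tuning on the smaller dataset is what shifts the network's weights away from the original distribution, the effect Elwes describes as faces "breaking down" between normative and queer features.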
Zizi & Me (2020–2023)
Zizi & Me is an ongoing multimedia collaboration between drag queen Me The Drag Queen and a deepfake A.I. clone of Me The Drag Queen. Using neural networks trained on filmed footage, the project creates a virtual body that can mimic reference movements.[43][44] The first act, which features a digital lip-sync duet to Anything You Can Do (I Can Do Better), satirises the idea of A.I. being mistaken for a human, using drag performance and cabaret to critique societal narratives about A.I. and its role in shaping identity. The project is part of The Zizi Project by Jake Elwes, which explores the intersection of drag performance and A.I.[45][46][47]
The Zizi Show - A Deepfake Drag Cabaret (2020)
The Zizi Show is a deepfake drag act built with artificial intelligence, presented both live and as an interactive online artwork. It explores queer culture alongside the algorithms, philosophy and ethics of AI.[48] The Zizi Show was the inaugural exhibition in the digital gallery of the V&A's Photography Centre from 2023 to 2024.[42][49]
Zizi in Motion: A Deepfake Drag Utopia (Movement by Wet Mess) (2023)
“Zizi in Motion” is a multichannel silent video installation featuring AI-generated deepfake performances that are dynamically re-animated through the movements of London drag artist Wet Mess. The movements of Wet Mess cause the AI-generated visuals to glitch and distort, showcasing the interaction between drag performance and artificial intelligence. The work explores the potential for queer communities to ethically and creatively reclaim and repurpose deepfake technology, using it to celebrate queer bodies and identities.[50][51][52]
Art in the Cage of Digital Reproduction (2024)
In an act of protest on 26 November 2024, Elwes facilitated indirect public access to OpenAI’s Sora text-to-video model, exposing an early-access token through a Hugging Face frontend under the account "PR Puppets".[53] The accompanying statement issued a call to 'denormalize the exploitation of artists by major AI companies for training data, R&D, and publicity'. The incident attracted international press coverage questioning whether artists shape the future of generative AI or merely serve as data and credibility providers for tech giants.[54][55][56]
Elwes also coordinated a collection of mini-essays, titled "Art in the Cage of Digital Reproduction", with responses and reflections from the signatories and guest writers.[57]
Installations exploring interpretation and feedback loops between neural networks
Elwes has created works based on the interpretations and misinterpretations between different neural networks and training datasets, including A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) (2023), Closed Loop (2017) and Auto-Encoded Buddha (2016).
A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) (2023)
A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) is a three-channel video artwork in which one AI interprets Susan Sontag’s essay into images and another AI then reinterprets those images back into language. The piece highlights how AI-generated art can misinterpret its source and introduce bias.[58][59]
Closed Loop (2017)
Closed Loop is a two-channel video in which two neural networks engage in a continuous feedback loop: one generates images from the other's text output, and the other generates text from the resulting images. The work explores how AI models misinterpret and evolve in a surreal, self-perpetuating conversation, without human input.[60][61]
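The feedback-loop structure can be sketched with present-day off-the-shelf models by alternating a text-to-image generator with an image-captioning model and feeding each output back in as the next input; the Hugging Face checkpoints named below are assumptions for illustration, and the 2017 work used different, earlier networks.

```python
# Hypothetical sketch of a closed loop between a text-to-image model and an
# image-captioning model; checkpoints are illustrative, not those used in the work.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text -> image.
txt2img = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"
).to(device)

# Image -> text.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

text = "a bird standing in a tidal marsh"  # arbitrary seed phrase
for step in range(20):
    # Generate an image from the current text.
    image = txt2img(text).images[0]
    # Caption the generated image; the caption becomes the next prompt.
    inputs = processor(images=image, return_tensors="pt").to(device)
    out = captioner.generate(**inputs, max_new_tokens=30)
    text = processor.decode(out[0], skip_special_tokens=True)
    print(step, text)  # watch the "conversation" drift without human input
```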
Auto-Encoded Buddha (2016)
Auto-Encoded Buddha is a mixed-media piece in which an AI trained on 5,000 images of the Buddha attempts to generate an image of a Buddha statue. The AI struggles to represent the Buddha accurately, highlighting the limitations of early generative neural networks. The work is a tribute to Nam June Paik’s TV Buddha (1974).[62][2][63]
CUSP (2019)
In their video work CUSP (2019), Elwes places marsh birds generated using artificial intelligence into a tidal landscape. These digitally generated, constantly shifting birds are recorded in dialogue with native birds, and the video is accompanied by a soundscape of artificially generated birdsong.[64]
Latent Space (2017)
Latent Space is one of the earliest examples of generative AI in art. The video artwork uses a neural network trained on 14.2 million images from the ImageNet database to explore “latent space,” the mathematical representation in which the AI maps learned image categories, such as trees or birds, to specific regions. Once trained, the network places all images of trees in one region and all images of birds in another; by reverse-engineering the network, it becomes possible to generate synthetic images from coordinates within this space.
The video illustrates how the AI creates novel images by moving not directly between recognisable categories but through the transitional spaces between them, highlighting the network’s ability to generate unique and unexpected visual forms. The project draws on research from Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space (2016) and the ImageNet database (2009), with special acknowledgment to Anh Nguyen and the Evolving AI Lab for their contributions.[65][66][67]
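A minimal sketch of the underlying idea, latent-space interpolation, is shown below; the small stand-in generator and the "tree"/"bird" latent codes are placeholders rather than the Plug & Play Generative Networks and ImageNet classes used in the actual work.

```python
# Illustrative sketch of latent-space interpolation: blend two latent codes and
# decode the in-between points. The generator here is a placeholder, not the
# network used in Latent Space (2017).
import torch
import torch.nn as nn

latent_dim = 128

# Stand-in generator mapping a latent code to a 64x64 RGB image.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)
generator.eval()

# Two latent codes, e.g. one decoding to a "tree"-like image and one to a "bird"-like image.
z_tree = torch.randn(1, latent_dim, 1, 1)
z_bird = torch.randn(1, latent_dim, 1, 1)

# Walk through the transitional region between the two codes.
frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=30):
        z = (1 - t) * z_tree + t * z_bird  # linear interpolation in latent space
        frames.append(generator(z))        # each frame is an in-between image

print(len(frames), frames[0].shape)        # 30 frames of shape (1, 3, 64, 64)
```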