Since November 19, visitors to the Museum of Modern Art (MoMA) in New York have been able to see Refik Anadol's exhibition "Unsupervised." In the Gund Lobby at the museum's entrance, they are greeted by a 24-by-24-foot multimedia wall showing three new digital works by Refik Anadol Studio that use AI to interpret and transform more than 200 years of art from MoMA's collection.
Refik Anadol, born in Istanbul in 1985 and now based in Los Angeles, California, is a media artist, filmmaker, and pioneer in the aesthetics of artificial intelligence. He is also a lecturer in the Department of Design Media Arts at UCLA, where he received his second MFA.
Working at the intersection of art, science, and technology, he addresses the challenges and possibilities that ubiquitous computing has imposed on humanity, and what it means to be human in the AI era. Anadol's audiovisual performances have been presented at landmark sites, museums, and festivals around the world and have received numerous awards and prizes.
Refik Anadol Studio brings together designers, architects, data scientists, and researchers from diverse professional and personal backgrounds, embracing the principles of inclusivity and equity at every stage of production. The team collects data from digital archives and publicly available resources and processes these datasets with machine-learning classification models.
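To give a sense of what such a classification pass over an image archive might look like, here is a minimal sketch in Python using an off-the-shelf pretrained model. The studio's actual pipeline is not public; the model choice, file path, and labels below are illustrative assumptions only.

```python
# Hypothetical sketch: tagging archive images with a pretrained classifier
# before they are curated into a training dataset. Not the studio's code.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def top_label(path: str) -> int:
    """Return the index of the most likely ImageNet class for one image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1))
```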
The Unsupervised – Machine Hallucinations – MoMA project
Unsupervised is part of Machine Hallucinations, a project Refik Anadol Studio began in 2016 and is still pursuing, which explores data aesthetics built from collective visual memories. The studio treats AI as a collaborator with human consciousness, using DCGAN, PGAN, and StyleGAN algorithms trained on large datasets to reveal unrecognized layers of our external realities.
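The algorithms named above all belong to the family of generative adversarial networks, in which a random latent vector is decoded into an image. As a rough illustration only, a minimal DCGAN-style generator in PyTorch might look like the following; the layer sizes and latent dimension are assumptions and this is not the studio's model.

```python
# Minimal DCGAN-style generator: a random latent vector z is upsampled
# step by step into an RGB image. Purely illustrative.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128, base=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # z (z_dim x 1 x 1) -> 4x4 feature map
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),
            # 32x32 -> 64x64 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# "Hallucinating" an image: sample a random point in latent space and decode it.
g = Generator()
with torch.no_grad():
    image = g(torch.randn(1, 128))   # tensor of shape (1, 3, 64, 64)
```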
For six months, the software created by the studio team, with the help of NVIDIA Research engineers, was fed 380,000 ultra-high-resolution images drawn from 138,151 works in the MoMA collection, by artists including Pablo Picasso, Salvador Dalí, and Gertrudes Altschul, as well as Toru Iwatani's Pac-Man video game. Among other tools, the team used NVIDIA's StyleGAN2-ADA, training the model for three weeks on an NVIDIA DGX Station A100. It then explored the resulting latent space with its Latent Space Browser software, which it has been developing since 2017.
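"Exploring a latent space" in this context means moving between points in the generator's input space and decoding each point into an image. The sketch below shows one common way to do this, interpolating between two random latent vectors; it is a generic illustration under assumed names (the `generator` callable and latent size are placeholders), not the studio's Latent Space Browser.

```python
# Illustrative latent-space walk: interpolate between two random latent
# vectors and decode each step with a trained generator (placeholder here).
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z0
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def walk(generator, steps=60, z_dim=512):
    """Yield one synthesized frame per interpolation step between two endpoints."""
    z_start, z_end = np.random.randn(z_dim), np.random.randn(z_dim)
    for i in range(steps):
        z = slerp(z_start, z_end, i / (steps - 1))
        yield generator(z)   # one decoded image per latent point
```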
In November 2021, Unsupervised was released on the new media platform Feral File. Some of the NFTs in the collection have sold for thousands of dollars, with one reaching $200,000.
The exhibition at MoMA
The images have a resolution of 1024 x 1024 pixels, are generated in real time, and change continuously based on acoustics, changes in light, audience movement captured by a camera mounted on the lobby ceiling, and weather conditions recorded by a station in a nearby building. These inputs drive forces that act on different levers in the software, which in turn shape the constantly evolving imagery and sound.
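One simple way to picture this coupling between live signals and imagery is a latent-space walk whose step size and direction are nudged by the incoming measurements. The sketch below is a hedged assumption of how such a mapping could work; the input names, scaling factors, and normalization are invented for illustration and do not describe the studio's control software.

```python
# Hypothetical mapping from live inputs (audience motion, sound level, wind)
# to the latent vector a generator decodes each frame. Illustrative only.
import numpy as np

def next_latent(z, motion, loudness, wind_speed, rng, z_dim=512):
    """Blend the current latent point with a drift vector shaped by the inputs."""
    drift = rng.standard_normal(z_dim)
    # Stronger audience motion and louder sound push the walk further per frame.
    step = 0.02 + 0.1 * motion + 0.05 * loudness + 0.01 * wind_speed
    z_new = z + step * drift
    # Rescale to a typical latent norm so frames stay in-distribution.
    return z_new / np.linalg.norm(z_new) * np.sqrt(z_dim)

rng = np.random.default_rng(0)
z = rng.standard_normal(512)
z = next_latent(z, motion=0.4, loudness=0.2, wind_speed=0.1, rng=rng)
```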
For Michelle Kuo, a curator at MoMA who co-organized the exhibition:
“AI is often used to classify, process and generate realistic representations of the world. In contrast, Unsupervised is visionary: it explores fantasy, hallucination, and irrationality, creating an alternative understanding of art-making itself.”
Refik Anadol comments:
“We don’t see anything real, it’s from the AI’s imagination. The AI in this case creates this pigment that doesn’t dry, a pigment that is always moving, always changing, and constantly evolving and creating new patterns.”
This exhibition, which according to MoMA’s senior curator Paola Antonelli, “underscores its support for artists who experiment with new technologies as tools to expand their vocabulary, their impact and their ability to help society understand and manage change,” will run through March 5.
Translated from « Refik Anadol : Unsupervised », quand l’IA s’invite au Musée d’Art Moderne de New York