AI: A Black Box(?)
An exploration of the Uncanny Valley through the work of artist and researcher Terence Broad.
A black box is a concept borrowed from engineering and science. It refers to a device or system whose inputs and outputs can be observed, but whose internal workings are unknown or cannot be fully grasped.
In 2015, Google engineer Alexander Mordvintsev created DeepDream, a program that analyzes visual imagery in order to reveal how a neural network processes images. While its hallucinogenic outputs were welcomed for their artistic qualities, they also prompted a deeper interest in understanding how deep neural networks work. A year earlier, machine learning researcher Ian Goodfellow and his colleagues had introduced Generative Adversarial Networks (GANs), which soon spread through the arts and research communities.
Terence Broad is an artist and researcher working between Newcastle and London who manipulates deep generative models with tools and methods built to reverse and amplify the uncanny valley. (The concept was introduced in 1970 by the Japanese roboticist and professor Masahiro Mori to describe the human emotional response to androids and robots that resemble humans: affinity grows with likeness up to a point, then flips into rejection when the resemblance becomes too close.) His work also seeks novelty by training GANs without data and explores the gaze of the computer.
[Fig.1] Video (un)stable equilibrium (2019), Credits: Terence Broad. Courtesy of the artist. This work won the 2019 ICCV Computer Vision Art Gallery prize in Seoul.
Broad investigates new ways in which machine learning can foster creativity, and his approach, which could be considered somewhat unorthodox, is highly research-driven and explorative. The series of works (un)stable equilibrium materializes the artist's desire to find a way of training a neural network so that it generates completely new results that do not resemble any dataset. This research, presented at the NeurIPS creativity workshop, led him to develop "quite an intimate understanding of what was going on", "a tacit understanding of the models and algorithms", as he put it.
“I am not particularly interested in artworks autonomously made by machines, and artworks that are generated simply by using the most recent state of the art AI model quickly get boring for me.” - Terence Broad.
Being Foiled is a body of work that reverses and amplifies Mori's uncanny valley by generating images that become less and less realistic until they are (almost) abstract. As Broad stated: "[...] Where normally the uncanny valley is a phenomenon encountered when trying to create more realistic representations of people, this procedure encountered it in the other direction, by deliberately producing representations that diverged more and more away from realism."
[Fig.2] Being Foiled (2020). Credits: Terence Broad. Courtesy of the artist.
[Fig. 3] Being Foiled (2020). Credits: Terence Broad. Courtesy of the artist.
Broad described the process of reversing the uncanny valley in a paper entitled Amplifying The Uncanny, which was presented at xCoAx 2020.
Part of his doctoral research develops methods for analyzing the internal components of a GAN and their behavior. Broad uses the resulting network bending technique in several works, including one shown in a recent online exhibition hosted on Feral File. In "Reflections in the Water", curated by Luba Elliott, the artist presented a portrait of himself that fades into evanescence, using custom software and the aforementioned technique.
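The core idea behind network bending is to insert deterministic transformations into the internal layers of a trained generative model at inference time, altering the output without retraining. The toy sketch below is not Broad's actual tooling, only a minimal illustration in plain NumPy: a two-layer stand-in for a generator, with random (hypothetical) weights, whose intermediate activation is optionally scaled and permuted before reaching the final layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "generator": latent vector -> hidden activation -> output.
# Weights are random placeholders, not a trained model.
W1 = rng.standard_normal((8, 4))  # layer 1: latent (4) -> hidden (8)
W2 = rng.standard_normal((3, 8))  # layer 2: hidden (8) -> output (3)

def generate(z, bend=None):
    """Run the generator; optionally 'bend' the hidden activation."""
    h = np.tanh(W1 @ z)        # intermediate activation
    if bend is not None:
        h = bend(h)            # transformation injected mid-network
    return W2 @ h

z = rng.standard_normal(4)
plain = generate(z)
# Example bend: scale the activation and permute its channels.
bent = generate(z, bend=lambda h: np.roll(1.5 * h, 2))
# Same latent, same weights, different output once the layer is bent.
```

The same pattern applies to a real GAN generator: hooks intercept a chosen layer's activations and apply a chosen transform, which is what produces the controlled distortions seen in works like the Feral File portrait.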
Photo (Cover) by Sam Moqadam / Unsplash