exhibition

Humans and Neural Networks: Who’s the Creator?

People develop neural networks, and neural networks transform people. They influence how we learn, work, create, and think. Are neural networks our allies or rivals? Can they create? Can they empathize? What lies ahead for them — and for us?

Seeking answers to these questions, 11 artists explored the intersection of technology and art in a collaborative art lab by Yandex and the Tretyakov Gallery.

Their works were exhibited in April and May 2025 at the Tretyakov Gallery on Krymsky Val. Now, you can view them here on the site.

CURATORS

WORKS

Artifact

Author
Kami Usu

Modern technology is shaping a new layer of reality—one in which the digital and physical merge into a single environment. This project explores algorithmic patterns as autonomous artifacts that extend beyond screens and interfaces, acquiring material form.

Drawing on Harun Farocki’s theory of operational images and Friedrich Kittler’s writings on optical media, the artist reimagines the algorithms behind computer vision and their visual language. Once seen solely as technical instruments for calibration and analysis—as a functional interface between machine and human—these patterns are now approached as objects of artistic inquiry. Algorithmic aesthetics take physical form, and the pattern becomes a digital artifact.

The work functions as an experiment in algorithmic topology—a structure generated not by nature, but by artificial intelligence. These forms have no direct analogue in the living world, yet follow an internal logic shaped by computational processes. The project not only renders visible the agency of algorithms but also proposes a new aesthetic that expands the boundaries of human perception.

This piece explores algorithmic topology — a structure generated by artificial intelligence. The forms on screen have no direct counterpart in the natural world, yet they follow an internal logic defined by computational processes. The project gives shape to the invisible, forging a new aesthetic and expanding the boundaries of human perception.


GPU choir

Author
Alexander Vasilenko

This installation makes visible and accessible the process of neural network training and text generation, which is typically hidden within server infrastructure. On a small-diameter podium, several GPUs (graphics processing units) perform continuous neural network training. Their operation produces fan noise and heat, turning computation into a physical and auditory experience.

Directional microphones capture the sound of the GPUs at work. These signals are processed to isolate harmonics and overtones from the ambient noise. The resulting generative soundscape plays through speakers positioned around the installation.

Temperature sensors monitor each GPU and send the data to a microcontroller, which converts it into MIDI signals. These signals, in turn, control the audio processor, modifying the sound’s spectral qualities in response to shifts in heat.
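The chain from heat to sound can be sketched in a few lines. This is not the installation’s actual firmware, only a minimal Python sketch of the same idea, assuming the GPUs are readable via nvidia-ml-py (pynvml) and the audio processor listens on a MIDI port opened with mido; the control-change number is an arbitrary placeholder.

```python
import time

import mido     # MIDI output to the audio processor
import pynvml   # NVIDIA Management Library bindings for temperature readings

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]
port = mido.open_output()   # default MIDI output port

def temp_to_cc(temp_c, lo=30, hi=90):
    """Map a GPU temperature in degrees Celsius to a 0-127 MIDI CC value."""
    t = max(lo, min(hi, temp_c))
    return int(round((t - lo) / (hi - lo) * 127))

while True:
    for channel, handle in enumerate(handles):
        temp = pynvml.nvmlDeviceGetTemperature(handle,
                                               pynvml.NVML_TEMPERATURE_GPU)
        # One control-change message per GPU; CC 74 is a placeholder for
        # whichever spectral parameter the audio processor maps it to.
        port.send(mido.Message('control_change', channel=channel,
                               control=74, value=temp_to_cc(temp)))
    time.sleep(1.0)
```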

The work treats machine learning and text generation as a set of measurable physical processes, transforming raw computational activity into an audiovisual experience.

This audiovisual installation reveals a process usually hidden from view: a neural network being trained on five GPUs. The sound of their operation, combined with GPU temperature data converted into MIDI signals, forms a generative soundscape that plays through the speakers.


Recognition

Author
Elena Filaretova

The human brain is wired to find familiar shapes, faces, and figures in abstract forms and patterns. This phenomenon, known as visual pareidolia, forms the basis of “Recognition” — an installation by Elena Filaretova.

For the past several years, visual pareidolia, the process by which we perceive familiar images in random or unstructured visual stimuli, has been central to the artist’s practice. In her work, Filaretova constructs a space where the brain instantly identifies human or animal features in silhouettes and patterns. In this way, each viewer forms a unique interpretation shaped by their own visual memory and experience.

The artist created 55 portraits in which facial features are only barely implied, hovering at the edge between chaos and recognition. The installation invites interaction: a camera and screen allow viewers to engage directly with the work. When a viewer directs the camera toward one of the portraits, a neural network attempts to detect and interpret the face — offering descriptions of emotions, personality traits, or imagined thoughts.
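A rough sketch of that viewing loop, not the installation’s own code: it assumes OpenCV’s bundled Haar cascade as the face detector, with a hypothetical describe_face() standing in for the model that interprets emotions and imagined thoughts.

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade, a stand-in for the detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def describe_face(face_img):
    """Hypothetical placeholder for the neural network that offers
    descriptions of emotions, personality traits, or imagined thoughts."""
    return 'a face hovering between chaos and recognition'

cap = cv2.VideoCapture(0)   # the camera the viewer points at a portrait
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(describe_face(frame[y:y + h, x:x + w]))
    cv2.imshow('Recognition', frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```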

Where is the threshold between a random arrangement of lines and a real face? Why do we perceive emotions in images that may contain none? Does the neural network see as we do? Here, the algorithm becomes a second observer. Its perception differs from our own, yet it constructs its own layer of reality, interpreting the image through the logic of machine learning.

This interplay between human and algorithm opens a dialogue between subjective and objective perception. The artist invites us not only to seek out faces, but to compare our interpretations with those of the machine. Perhaps the neural network will notice something the viewer overlooked — or perhaps its response will feel absurd, disconnected.

“Recognition” is not only an exploration of the brain’s capacity for pattern recognition, but also a reflection on the boundaries between imagination and reality — between human perception and algorithmic interpretation.

This piece centers on visual pareidolia — the brain’s tendency to recognize familiar patterns in abstract shapes.

The artist created portraits with only the faintest hints of facial features, hovering between chaos and the human form. When viewed through a camera, a neural network tries to “see” the face and describe its emotions, personality, and inner world. The AI’s interpretation invites dialogue: for some, it offers surprising insight; for others, it may seem debatable — or even absurd.


BACTERIA AS A SERVICE

Author
Anna Martynenko

This project explores the boundary between the living and the non-living in technology by examining the role of random noise in neural networks.

Generative neural networks rely on pseudorandom noise, a form of artificial randomness, to introduce elements of creativity, uniqueness, and unpredictability that feel almost lifelike. In her project, the artist creates a source of truly random noise generated by living bacteria collected from a water body in St. Petersburg.

The bacteria are placed in nutrient-rich activated sludge, forming a closed ecosystem modeled after a Winogradsky column. Carbon felt, serving as anode and cathode, is embedded within the system to measure its electrical activity. This current is analyzed and translated into movement by a kinetic display made of rotating panels, producing a unique visual matrix. Every minute, the system collects a new data set, generating a pattern shaped by a living, organic signal.
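The once-a-minute sampling loop might look roughly like this. It is only a sketch: read_voltage() is a hypothetical stand-in for the sensor across the carbon felt, and an 8 by 8 grid of rotating panels is assumed.

```python
import random
import time

ROWS, COLS = 8, 8   # assumed dimensions of the kinetic display

def read_voltage():
    """Hypothetical stand-in for the measurement of the bacteria's
    electrical activity; here it simply returns noise."""
    return random.random()

def sample_matrix():
    """Take one reading per panel and threshold it into a flip/stay bit."""
    samples = [[read_voltage() for _ in range(COLS)] for _ in range(ROWS)]
    mean = sum(sum(row) for row in samples) / (ROWS * COLS)
    return [[1 if v > mean else 0 for v in row] for row in samples]

while True:
    matrix = sample_matrix()   # a new pattern shaped by the living signal
    for row in matrix:
        print(''.join('#' if bit else '.' for bit in row))
    print()
    time.sleep(60)             # the system collects a new data set every minute
```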

This signal is streamed online via Yandex Cloud, making it openly accessible to anyone who wishes to use it — a gesture the artist describes as “bacteria as a service”.

Some of the generated patterns are captured and reproduced in graphite — a material that echoes the carbon felt used in the system, which serves as a conductor for microbial electron transfer.

Reflecting on what it means for something to be alive within technological systems, the artist compares the neural network to the Tin Woodman—a machine yearning for a living heart to truly love and feel. In this installation, the bacteria serve as that heart: a biological core within a mechanical organism.

Generative neural networks rely on pseudo-random noise to add uniqueness and creative variation to their outputs. Here, the artist turns to bacteria as a source of randomness: their electrical activity forms a visual matrix that updates every minute. The signal is streamed online via Yandex Cloud, open for anyone to use (bacteria as a service).


Sensitive topics

Author
Kristina Pashkova

The artist invites viewers to see AI not as something imposed “from above” by large corporations, but as something that has emerged “from the people” — and, in essence, belongs to them. Wooden components from antique peasant weaving looms, incorporated into the installation, reinforce this idea. They also reference the historical connection between weaving and computing: Jacquard looms, with their punched-card systems, are direct ancestors of modern computers. A model of an early computer punched card is featured in the installation’s tactile display, a nod to the similar perforated cards once used in textile production to create complex patterns.

The woven works on display were handwoven by the artist on a semi-automatic digital Jacquard loom (TC2). Viewing the loom from the perspective of the weaver — who forms a kind of relationship with the machine — the artist comes to see AI not as a tool, but as a partner. Her meditations on ethics, empathy, and care — whether directed toward oneself, other beings, or those not made of flesh and blood but still in need of training and attention — are concentrated in a data-labeling test for neural networks. Based on materials from Yandex, the test invites viewers to assume the role of an AI trainer, choosing which responses a neural network should consider most appropriate.

The artist thanks the Svody Art Production Center, the GES-2 House of Culture, where the woven works were created, and the aluminum profile manufacturer Soberizavod.

This installation combines Jacquard textiles, readymades, tactile models, and an interactive quiz. The artist wove each piece by hand on a semi-automated TC2 digital Jacquard loom. The machine breaks the image down into black and white squares, each corresponding to one pixel and one thread. Based on each pixel’s color, it raises or lowers the warp threads, while the weft (the horizontal thread) is passed through by hand. A kind of dance between human and machine.
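The pixel-to-thread step can be illustrated with a short sketch, assuming Pillow is available and ‘motif.png’ is a hypothetical source image; the TC2’s own file formats and control software are not modeled here.

```python
from PIL import Image

img = Image.open('motif.png').convert('L')   # hypothetical motif, greyscale

def lift_plan(image, threshold=128):
    """Reduce the image to black and white squares: for every pixel, decide
    whether the corresponding warp thread is raised (1) or lowered (0)."""
    w, h = image.size
    pixels = image.load()
    return [[1 if pixels[x, y] < threshold else 0 for x in range(w)]
            for y in range(h)]

plan = lift_plan(img)
# Each row is one pick: the machine holds this pattern of raised and
# lowered warp threads while the weaver passes the weft by hand.
for row in plan[:10]:
    print(''.join('#' if up else '.' for up in row))
```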

Next to the work is a text on ethics, empathy, and love. As they read it, visitors take on the role of an AI trainer — answering questions and deciding which responses a neural network should consider appropriate.


The number of touches

Author
Mariia Fedorova

The artwork explores the difference between how humans and artificial intelligence learn. The artist reflects on how prepared we are to engage with AI as a true partner. For humans, learning is rooted in multisensory engagement with the physical environment, while machine learning is based on recognizing patterns within data. In 2025, a new era is expected to emerge in AI: the era of autonomous agents, systems designed to act independently and carry out tasks across industries with minimal human input. AI is no longer merely a tool; it becomes a partner. The artist asks: where might first contact with such a colleague begin?

The installation consists of two futuristic pedestals and a central glowing object. The pedestals vaguely resemble ergonomic computer furniture and act as activators for the central form, which symbolizes AI. The viewer engages the installation with their body, taking a position that matches the designated hand and foot placements on each pedestal. Interaction requires multiple points of contact with the metal plates — this is how the viewer “touches” the AI. The central object tracks the number of touches registered in the past hour and across the full duration of the exhibition. The data appears on individual screens (one per pedestal). When fewer than 20 touches occur within an hour, the object glows blue — a signal of unreadiness to connect. If the count exceeds 20, it glows red and begins to “respond”: a personal message appears on the screen, addressed to the visitor.
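The counting logic reads as a simple threshold rule. Below is a minimal sketch, with record_touch() and current_glow() as hypothetical helpers standing in for the installation’s real controller.

```python
import time
from collections import deque

touches = deque()   # timestamps of registered touches
total_touches = 0   # running count for the full duration of the exhibition
THRESHOLD = 20      # touches per hour needed before the object "responds"

def record_touch():
    """Call whenever the hand and foot plates register full contact."""
    global total_touches
    touches.append(time.time())
    total_touches += 1

def touches_last_hour(now=None):
    now = now if now is not None else time.time()
    while touches and now - touches[0] > 3600:
        touches.popleft()   # drop readings older than one hour
    return len(touches)

def current_glow():
    """Blue signals unreadiness to connect; red means the object 'responds'."""
    return 'red' if touches_last_hour() > THRESHOLD else 'blue'
```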

This piece explores the difference between how humans and AI learn. If the metal plate receives fewer than 20 touches in an hour, it turns blue — signaling it’s not ready to interact. But once that threshold is crossed, it lights up red and starts to “respond”: a personalized message appears on the nearby screen.


The Secret Life

Author
Daria Neretina

In this project, the artist, in collaboration with a developer, conducted an experiment: a neural network trained on the works of Anna Golubkina from the Tretyakov Gallery collection generates new botanical images in her style. Rather than copying, the AI reimagines — like an artist inspired by a master of the past. The work raises pressing questions about authorship in the age of generative technologies and the nature of artistic creation.

The AI training process exists in a legal gray area: where does influence end and originality begin? Like artists once did, the neural network learns to see the world through form and texture, turning digital noise into the familiar outlines of daisies, clover, and thistle. These machine-born images take material form through porcelain prints and laser engravings on glass.

An old card catalog and writing desk appear in the installation. They are symbols of memory, set against an AI that does not preserve information, but transforms it. We see only the result, not the process, just as we rarely notice a sprout until it breaks the surface. This project becomes a dialogue with the past, in which artificial intelligence emerges as a co-author — revealing new dimensions of perception and generating art where tradition meets the algorithm.

This piece uses a diffusion neural network trained on herbarium scans from sculptor Anna Golubkina’s archive, held by the Tretyakov Gallery. The AI generated images of new, fictional plants, which were etched onto glass and placed inside card catalog drawers. The artist also used intermediate “freeze-frames” of the neural network’s generative process as illustrations on porcelain.
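How intermediate “freeze-frames” fall out of a diffusion model’s denoising loop can be shown with a toy sketch. This is not the project’s actual model: predict_noise() is a hypothetical stand-in for the network trained on the herbarium scans, and the schedule and image size are arbitrary.

```python
import numpy as np

T = 50                               # number of denoising steps
betas = np.linspace(1e-4, 0.02, T)   # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Hypothetical stand-in for the trained network that predicts the
    noise present in x at step t."""
    return np.zeros_like(x)

x = np.random.randn(256, 256)        # start from pure digital noise
freeze_frames = []

for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Standard DDPM reverse update: subtract the predicted noise...
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    # ...then re-inject a little randomness on all but the final step.
    if t > 0:
        x = x + np.sqrt(betas[t]) * np.random.randn(*x.shape)
    if t % 10 == 0:
        freeze_frames.append(x.copy())   # intermediate states kept as images
```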


A.G.I.

Author
Yan Posadsky

Archetypov Gordey Ignatovich (1856–1943) was a visionary Russian artist, one far ahead of his time. The son of a watchmaker from Nizhny Novgorod, he developed a fascination with both mechanics and art at an early age. As a student at the Imperial Academy of Arts, Archetypov stood out for his unconventional ideas. In the 1880s, he began experimenting with abstraction, drawing inspiration from mathematics and probability theory.

His series “Patterns of the Mind” (1890–1905) featured intricate geometric structures that blended Impressionism with early echoes of Suprematism. In the early 20th century, he created the cycle “Machines of the Future”, anticipating the visual logic of modern computing. Archetypov died in 1943, largely unrecognized. His legacy was only revisited at the turn of the century.

The project “A.G.I.” presents a monumental painting in a heavy black frame, attributed to an unknown artist and allegedly acquired by Pavel Tretyakov in the late 19th century. In fact, every element — from the artist’s name to the sketch and final composition — was generated by neural networks. The acronym A.G.I. refers to Artificial General Intelligence, a field of AI research focused on building systems with human-like reasoning and the capacity for self-learning.

The image was created using a dataset of 130 manually selected paintings from the Tretyakov Gallery collection, primarily landscapes and abstractions. The sketch includes a double exposure with the eyes of Pavel Tretyakov, taken from Ilya Repin’s 1876 portrait. The frame was also generated by AI and produced on a 3D printer. It combines references to historical museum frames with visible elements of the printing process: tree-like support structures and ornamental fractals that echo the botanical motifs of Art Nouveau. The project probes the line between fiction and reality, raising questions about authorship, the role of technology in art-making, historical authenticity, and how contemporary tools reshape our perception of the past.

A monumental painting by an unknown artist, allegedly acquired by Pavel Tretyakov in the late 19th century. But everything about it is fiction. The artist’s name, biography, concept, sketch, signature, and even the heavy black frame were all generated by neural networks trained on the Tretyakov Gallery’s collection.


Exoboros

Author
Nika Peshekhonova

The installation presents a constructed environment in which physical and virtual elements reveal the mechanisms of interaction between humans and artificial intelligence.

At its center stands a fountain — a metaphor for recursion and circulation, and a symbol of the current limitations of visual neural networks, particularly their inability to comprehend complex context. This is clearly illustrated by their ongoing failure to accurately visualize Marcel Duchamp’s “Fountain”, a work often cited as the foundational moment in conceptual art.

Scattered around the fountain are disposable utensils and tools rendered in the aesthetic of H. R. Giger, as if left behind by accident. These were created using Yandex’s visual neural network, image-to-3D conversion algorithms, and 3D printing. They reflect the accelerating automation of artistic production and the appropriation of visual languages, raising questions around fragmented and distributed authorship in the age of AI.

The interactive component includes a voice assistant powered by Yandex Assistant API and Yandex SpeechKit, fine-tuned on video game narratives. A motion capture camera records the viewer’s movements and translates them into a digital avatar — without any explicit request or consent. This process highlights a defining dilemma of the digital age: our interactions with neural networks construct virtual identities, often without our awareness.
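The motion-capture step described above can be approximated with off-the-shelf pose estimation. This sketch uses MediaPipe Pose purely as a stand-in; the installation’s actual pipeline and the Yandex services it relies on are not modeled here.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)   # camera recording the viewer's movements
with mp_pose.Pose() as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        avatar = frame * 0   # black canvas standing in for the avatar scene
        if results.pose_landmarks:
            # Project the viewer's joints onto the canvas as a stick figure.
            drawing.draw_landmarks(avatar, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        cv2.imshow('digital avatar', avatar)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```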

These avatars reflect versions of the self we may never have imagined, formed in contexts where traditional ethical norms and privacy boundaries tend to dissolve. The agency of these virtual characters — their behavioral mechanics scripted independently of human intent — points to the growing autonomy of digital entities.

*Exoboros: Exo (as in "exoskeleton") refers to technological shells that both support and constrain the human body and mind, echoing Yuk Hui's concept of exosomatics. Boros (from Ouroboros) evokes the image of a closed loop, self-consumption, and recursion. In the context of this work, Exoboros becomes a metaphor for the endless cycle of human–AI interaction, in which each becomes "sustenance" for the other.

This installation blends physical and virtual elements to expose how humans and AI interact — each feeding off the other. At its center is a fountain referencing Marcel Duchamp’s iconic piece. Around it lie seemingly forgotten objects, co-created with an image-generation neural network. A camera tracks each visitor’s movement and transforms them into a digital avatar on screen.


Dance, not duel

Author
Mariia (Yashnikova) Tkachenko, Ksyusha Zemskova

Neural networks and social technologies appear as similar structures — both are grounded in communication and power relations. This project builds on the principles of a Siamese neural network: a model that compares the feature vectors of two inputs to evaluate their semantic similarity or difference. By analogy, the artists introduce two characters, Masha and Ksyusha, as a way to explore possibilities for communication, mutual understanding, and what Roland Barthes termed “living-together”.
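The Siamese principle the artists borrow is easy to sketch: one shared encoder embeds both inputs, and a similarity measure compares the resulting feature vectors. A minimal PyTorch sketch follows; layer sizes and input dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Both inputs pass through the *same* encoder; similarity is then
    computed between the two feature vectors."""
    def __init__(self, in_dim=128, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        # Cosine similarity: close to +1 means semantically similar.
        return F.cosine_similarity(za, zb, dim=-1)

# Toy usage: compare the "behavioral vectors" of the two characters.
net = SiameseNet()
masha, ksyusha = torch.randn(1, 128), torch.randn(1, 128)
print(net(masha, ksyusha).item())
```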

In the app “Which Character Are You?”, viewers are invited to align themselves with one of two behavioral strategies. These strategies are based on the artists’ own everyday and dance-based patterns: extremes that they admire in each other, but that also spark conflict. The movements are framed within a video game simulation, where choosing a character is typically a step toward competition. Here, the aim is different: to move together in a shared dance.

Dance shifts the focus away from ideology or psychology, redirecting it toward the shared experience of transformation and fluidity. Within this space, the characters are not only friends but supports — physical anchors, objects in motion, material for the imagination. In the video work, communication unfolds through attentive observation: responding to another’s movement by aligning with the rhythms and trajectories of their body.

A candy wrapper or a trash bin becomes another point of connection. They are part of the background of daily routines—traces of human presence in the urban landscape, byproducts of the economy. As Boris Groys writes, garbage is part of the real world — and without it, reality would no longer be real.

Masha and Ksyusha are doubles who perform, on behalf of the artists, strategies for moving toward one another.

The artists draw on the logic of a Siamese neural network — a system designed to compare feature vectors of two objects and identify their semantic similarities or differences. After answering 10 questions, each viewer generates a digital character that enters a game simulation. But instead of battling for dominance, the characters meet for a shared dance.


3D-TOUR

Take a look at the exhibition in virtual space

DATA.RELIC Sculpture

Over the course of a month, visitors to the “Humans and Neural Networks: Who’s the Creator?” exhibition shared their views by choosing between humans and AI in various aspects of life. Their answers shaped the final form of the DATA.RELIC sculpture, now on display at the Yandex Museum.