highlike

Memo Akten and Katie Peyton Hofstadter

Embodied Simulation

‘Embodied Simulation’ is a multiscreen video and sound installation that aims to provoke and nurture strong connections to the global ecosystems of which we are a part. The work combines artificial intelligence with dance and research from neuroscience to create an immersive, embodied experience, extending the viewer’s bodily perception beyond the skin, and into the environment.
The cognitive phenomenon of embodied simulation (an evolved and refined version of the ‘mirror neurons’ theory) refers to the way we feel and embody the movements of others, as if they were happening in our own bodies. The brain of an observer unconsciously mirrors the movements of others, all the way through to planning and simulating the execution of those movements in its own body. This can even be seen in situations such as embodying and ‘feeling’ the movement of fabric blowing in the wind. As Vittorio Gallese writes, “By means of a shared neural state realized in two different bodies that nevertheless obey to the same functional rules, the ‘objectual other’ becomes ‘another self’.”

QUBIT AI: Valentin Rye

Around The Milky Way

FILE 2024 | Aesthetic Synthetics
International Electronic Language Festival

Valentin Rye – Around The Milky Way

In the distant future, several species inhabit the Milky Way. This scenario inspired the video, aiming to portray various forms of life and evoke a plausible atmosphere. Using images he created, the artist experimented with Stable Video Diffusion, a new video generation method. The objective was to expand the limits of this neural network and improve its output, resulting in the conception of this video.

Bio

Valentin Rye is a self-taught machine whisperer based in Copenhagen, deeply passionate about art, composition and the possibilities of technology. He has been involved in AI and neural-network image manipulation since the release of DeepDream in 2015. While his day job in IT leaves little room for creativity, he indulges in the digital arts in his free time, exploring graphics, web design, video, animation and image experimentation.

QUBIT AI: Valentin Rye

Space Odyssey 2002

FILE 2024 | Aesthetic Synthetics
International Electronic Language Festival
Valentin Rye – A Space Odyssey 2002 – Denmark

This video offers a surrealist take on futuristic sci-fi concepts from the 60s and 70s. Imagine a future colonization of Mars where interior design is given an avant-garde twist. It was created by brainstorming visual ideas, generating countless images, refining the best ones and assembling them into clips. After careful selection, these clips have been organized into a cohesive timeline, accompanied by atmospheric music to enhance the overall experience.

Bio

Valentin Rye is a self-taught machine whisperer based in Copenhagen, deeply passionate about art, composition and the possibilities of technology. He has been involved in AI and neural-network image manipulation since the release of DeepDream in 2015. While his day job in IT leaves little room for creativity, he indulges in the digital arts in his free time, exploring graphics, web design, video, animation and image experimentation.

Pangenerator

The abacus
THE ABACUS is probably the first 1:1 interactive physical representation of a real, functioning deep learning network, presented in the form of a light sculpture. The main purpose of the installation is to materialise and demystify the inherently ephemeral nature of the artificial neural networks on which our lives are becoming increasingly reliant. As part of a new permanent exhibition devoted to the Future, the installation aims to engage and educate the audience in artistically compelling ways, as a manifestation of the goals of the art-and-science movement.

Frederik de Wilde

Ai Beetles
This next-nature, post-camouflage AI beetle is invisible to the electronic eye. The patterns are generated with neural networks and evolutionary algorithms to fool and mislead artificial-intelligence-enabled systems. My aim is to develop a contemporary razzle-dazzle style capable of messing up labelling and metadata systems. It looks like a beetle to us, but is seen as, e.g., a honeycomb by an AI.
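The evolutionary search described above can be sketched as a simple (1+1) hill-climb that keeps a mutated pattern only when it lowers a classifier's confidence. This is only a minimal illustration of the technique: `classifier_confidence` below is a toy stand-in, not the vision system the artwork actually targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier_confidence(image):
    # Stand-in for a real vision model's "beetle" confidence score.
    # Here: a toy score that prefers smooth images, so high-frequency
    # patterns drive it down (purely illustrative).
    diffs = (np.abs(np.diff(image, axis=0)).mean()
             + np.abs(np.diff(image, axis=1)).mean())
    return float(np.exp(-5.0 * diffs))

def evolve_pattern(steps=200, size=32, mutation=0.1):
    """(1+1) evolutionary search: keep a mutated pattern only if it
    lowers the classifier's confidence in the true label."""
    pattern = rng.random((size, size))
    score = classifier_confidence(pattern)
    for _ in range(steps):
        child = np.clip(pattern + mutation * rng.standard_normal(pattern.shape), 0, 1)
        child_score = classifier_confidence(child)
        if child_score < score:  # less recognizable -> keep the mutation
            pattern, score = child, child_score
    return pattern, score

pattern, score = evolve_pattern()
print(f"final confidence: {score:.3f}")
```

In a real adversarial pipeline the fitness function would query the target model itself, but the accept-if-worse-for-the-classifier loop is the same.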

Refik Anadol

Machine Memoirs
Machine Memoirs is an exploration of celestial structures through the mind of a machine. This immersive installation aims to combine past explorations and dream of what may exist just beyond our reach. Using machine intelligence to narrate the “unknown,” and a generative neural network trained on images of the Earth, Moon, Mars and the Galaxy, taken from ISS, Chandra, Kepler, Voyager, and Hubble observations, this installation imagines an alternate universe, perhaps providing further texture to the fabric of our own.

Ouchhh

Poetic AI
Ouchhh created an artificial intelligence for the piece: hundreds of books and articles [approx. 20 million lines of text], written by scientists who changed the destiny of the world and wrote history, were fed to a recurrent neural network during training, together with a t-SNE visualization of the corpus. The trained network was later used to generate novel text in the exhibition. With 136 projectors shining to create a veritable oneiric experience, the ‘POETIC AI’ digital installation uses artificial intelligence in the visual creation process: the forms, light, and movement are generated by an algorithm that creates a unique and contemplative digital work, an AI dancing in the dark, trying to show us connections we could never see otherwise.
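The mapping step described above, projecting a large text corpus into a 2-D visualization, can be sketched as follows. This is a toy under stated assumptions: a bag-of-letters embedding and an SVD (PCA) projection stand in for the learned embeddings and t-SNE the installation used.

```python
import numpy as np

# Toy corpus standing in for the ~20 million lines used in the installation.
lines = [
    "energy can neither be created nor destroyed",
    "the entropy of the universe tends to a maximum",
    "species evolve by means of natural selection",
    "the speed of light is constant in all frames",
]

def embed(line):
    # Letter-frequency vector: a crude stand-in for the learned
    # embeddings a real pipeline would produce.
    v = np.zeros(26)
    for ch in line.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1
    return v / max(v.sum(), 1)

X = np.array([embed(l) for l in lines])
X = X - X.mean(axis=0)                  # center before projecting
# 2-D projection via SVD (PCA); the installation used t-SNE instead,
# which preserves local neighborhoods rather than global variance.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
coords = X @ Vt[:2].T
print(coords.shape)                     # (4, 2)
```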

quadrature

Credo

A radio telescope scans the skies in search of signs of extraterrestrial life.
The received raw signals serve as input data for a neural network, which was trained on human theories and ideas of aliens. Now it tries desperately to apply this knowledge and to discover possible messages from other civilizations in the noise of the universe. Mysterious noises resound as the artificial intelligence penetrates deeper and deeper into the alien data, where it finally finds the ultimate proof. The sound installation revolves around one of the oldest questions of mankind – one that can never be disproved: Are we alone in the universe?

Studio A N F

Computer Visions 2
After decades of trying to construct an apparatus that can think, we may finally be witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces not only knows what a human face looks like; it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also successfully trained such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts and thoughts from a person’s mind. This is where the inflection likely happens, and as a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success. Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts[…]

Jake Elwes

Cusp
A familiar childhood location on the Essex marshes is reframed by inserting images randomly generated by a neural network (a generative adversarial network, or GAN) into this tidal landscape. Initially trained on a photographic dataset, the machine proceeds to learn the embedded qualities of different marsh birds, in the process revealing forms that fluctuate between species, with unanticipated variations emerging without reference to human systems of classification. Birds have been actively selected from among the images conceived by the neural network, and then combined into a single animation that migrates from bird to bird, accompanied by a soundscape of artificially generated bird song. The final work records these generated forms as they are projected, using a portable perspex screen, across the mudflats in Landermere Creek.
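The animation's migration from bird to bird is a standard GAN technique: interpolating between latent vectors and decoding each intermediate point. A minimal sketch, with a hypothetical `generator` (any fixed smooth map from latent vectors to images) standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z):
    # Stand-in for a trained GAN generator: a fixed smooth map from a
    # 16-d latent vector to an 8x8 "image" (purely illustrative).
    W = np.sin(np.outer(np.arange(64), np.arange(z.size)))
    return (W @ z).reshape(8, 8)

def interpolate(z_a, z_b, steps=10):
    """Walk the latent space from one bird to the next; decoding each
    intermediate vector yields the in-between forms seen in the film."""
    return [generator((1 - t) * z_a + t * z_b)
            for t in np.linspace(0.0, 1.0, steps)]

z_a, z_b = rng.standard_normal(16), rng.standard_normal(16)
frames = interpolate(z_a, z_b)
print(len(frames), frames[0].shape)   # 10 (8, 8)
```

Because the generator is smooth, nearby latent points decode to similar images, which is why the intermediate frames fluctuate between species rather than cutting abruptly.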

Stephanie Dinkins

Conversations with Bina 48
Artist Stephanie Dinkins and Bina48, one of the world’s most advanced social robots, test the question of whether a human and a robot can build a rapport through a series of ongoing videotaped conversations. This art project explores the possibility of a long-term relationship between a person and an autonomous robot that is based on emotional interaction, potentially revealing important aspects of human-robot interaction and the human condition. The relationship is being built with Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second), an intelligent computer built by the Terasem Movement Foundation that is said to be capable of independent thought and emotion.

localStyle (Marlena Novak & Jay Alan Yim) in collaboration with Malcolm MacIver

Scale
‘scale’ is an interspecies art project: an audience-interactive installation that involves nocturnal electric fish from the Amazon River Basin. Twelve different species of these fish comprise a choir whose sonified electrical fields provide the source tones for an immersive audiovisual environment. The fish are housed in individual tanks configured in a custom-built sculptural arc of aluminum frames placed around a central podium. The electrical field from each fish is translated into sound, and is thus heard — unprocessed or with digital effects added, with immediate control over volume via a touchscreen panel — through a 12-channel surround sound system, and with LED arrays under each tank for visual feedback. All software is custom-designed. Audience members interact as deejays with the system. Amongst the goals of the project is our desire to foster wider public awareness of these remarkable creatures, their importance to the field of neurological research, and the fragility of their native ecosystem.

The project leaders comprise visual/conceptual artist Marlena Novak, composer/sound designer Jay Alan Yim, and neural engineer Malcolm MacIver. MacIver’s research focuses on sensory processing and locomotion in electric fish and translating this research into bio-inspired technologies for sensing and underwater propulsion through advanced fish robots. Novak and Yim, collaborating as ‘localStyle’, make intermedia works that explore perceptual themes, addressing both physical and psychological thresholds in the context of behavior, society/politics, and aesthetics.
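The fish-to-sound translation with touchscreen volume control can be sketched as a normalize-and-gain stage. This is a sketch under stated assumptions: `fish_field` is a synthetic sine stand-in for a real electric-organ discharge recording, and the installation's actual signal chain is custom software not reproduced here.

```python
import numpy as np

SAMPLE_RATE = 44_100

def fish_field(freq_hz, seconds=0.5):
    # Stand-in for one fish's electric-organ discharge: wave-type
    # species emit a quasi-sinusoidal field at a stable frequency.
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def sonify(signal, volume):
    """Translate a field recording into audio: normalize to unit peak,
    then apply the fader value a visitor sets on the touchscreen (0-1)."""
    peak = np.abs(signal).max() or 1.0
    return np.clip(volume, 0.0, 1.0) * signal / peak

audio = sonify(fish_field(800.0), volume=0.5)
print(round(float(np.abs(audio).max()), 2))   # 0.5
```

Per-channel gain like this is what lets each of the twelve tanks be mixed live, deejay-style, into the surround field.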

Doug Rosman

Self-contained II
A neural network, trained to see the world as variations of the artist’s body, enacts a process of algorithmic interpretation that contends with a body as a subject of multiplicity. After training on over 30,000 images of the artist, this neural network synthesizes surreal humanoid figures unconstrained by physics, biology and time; figures that are simultaneously one and many. The choice of costumes and the movements performed by the artist to generate the training images were specifically formulated to optimize the legibility of the artist within this computational system. self-contained explores the algorithmic shaping of our bodies, attempting to answer the question: how does one represent themselves in a data set? Building on the first iteration of the series, the synthetic figures in self-contained II proliferate to the point of literally exploding. Through the arc of self-contained II, this body that grows, multiplies, and dissolves never ceases to be more than a single body.

Max Cooper

Morphosis
Morphosis uses artificial neural networks to create morphing images across scales. The system explores how natural structures, from the tiniest to the most vast, share aesthetic properties, as recognized by the trained network and recreated in a continuous flowing sequence via these connections. It is a study of the seemingly infinite nature of space and natural physical structure, which can loop back on itself to give endless visual exploration and variation.

Void

Abysmal
Abysmal means bottomless; resembling an abyss in depth; unfathomable. Perception is the procedure of acquiring, interpreting, selecting, and organizing sensory information, and it presumes sensing. In people, perception is aided by sensory organs; in AI, a perception mechanism puts the data acquired by the sensors together in a meaningful manner. Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them. Inspired by the brain, deep neural networks (DNNs) are thought to learn abstract representations through their hierarchical architecture. The work mostly shows the ‘hidden’ transformations happening inside a network: summing and multiplying things, adding some non-linearities, creating common basic structures and patterns inside data. Together these create highly non-linear functions that map ‘un-knowledge’ to ‘knowledge’.
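The ‘hidden’ transformations described above, multiply, sum, apply a non-linearity, stacked into a hierarchy, can be sketched in a few lines. The weights here are random rather than trained, so this shows only the structure of the computation, not learned ‘knowledge’.

```python
import numpy as np

def dense_layer(x, W, b):
    # One hidden transformation: multiply, sum, add a non-linearity (ReLU).
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(42)
x = rng.standard_normal(4)               # raw sensory input
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

# Stacking layers composes simple sums and products into a highly
# non-linear function from input to abstract features.
h = dense_layer(x, W1, b1)
y = dense_layer(h, W2, b2)
print(h.shape, y.shape)                  # (8,) (3,)
```

Each layer alone is nearly linear; it is the composition of many such layers that yields the highly non-linear mappings the text refers to.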

Guy Ben-Ary

CellF
“CellF is the world’s first neural synthesiser. Its “brain” is made of biological neural networks that grow in a Petri dish and control in real time its “body”, which is made of an array of analogue modular synthesisers that work in synergy with it and play with human musicians. It is a completely autonomous instrument that consists of a neural network, bio-engineered from my own cells, that controls a custom-built synthesiser. There is no programming or computers involved, only biological matter and analogue circuits; a ‘wet-analogue’ instrument.” Guy Ben-Ary

Behnaz Farahi

Synapse
Synapse is a 3D-printed helmet which moves and illuminates according to brain activity[…] The main intention of this project is to explore the possibilities of multi-material 3D printing in order to produce a shape-changing structure around the body as a second skin. Additionally, the project seeks to explore direct control of the movement with neural commands from the brain, so that we can effectively control the environment around us through our thoughts. The environment therefore becomes an extension of our bodies. This project aims to play with the intimacy of our bodies and the environment to the point that the distinction between them becomes blurred, as both have ‘become’ a single entity. The helmet’s motion is controlled by electroencephalography (EEG) signals from the brain. A NeuroSky EEG chip and a Mindflex headset have been modified and redesigned in order to create a seamless blend between technology and design.

Haru Ji and Graham Wakefield

Infranet
Infranet is a generative artwork realized through a population of artificial lifeforms with evolutionary neural networks, thriving upon open geospatial data of the infrastructure of the city as their sustenance and canvas. Born curious, these agents form spatial networks through which associations spread in complex contagions. In this city as organism, the data grounds an unbounded, decentralized, open-ended, and unsupervised system. Non-human beings flourish in this environment by learning, discovering, communicating, self-governing, and evolving. The invitation is to witness, through immersive visualization and sonification of this complex adaptive system, how a new morphologic landscape emerges as a possible but speculative city.

Greg Dunn and Brian Edwards

Self-Reflected

Dr. Greg Dunn (artist and neuroscientist) and Dr. Brian Edwards (artist and applied physicist) created Self Reflected to elucidate the nature of human consciousness, bridging the connection between the mysterious three-pound macroscopic brain and the microscopic behavior of neurons. Self Reflected offers the brain an unprecedented insight into itself, revealing, through a technique called reflective microetching, the enormous scope of beautiful and delicately balanced neural choreographies designed to reflect what is occurring in our own minds as we observe this work of art. Self Reflected was created to remind us that the most marvelous machine in the known universe is at the core of our being and is the root of our shared humanity.

Laura Jade

B R A I N L I G H T
“The catalyst for this research project was my flourishing intrigue and desire to harness my own brain as the creator of an interactive art experience where no physical touch was required, only the power of my own thoughts. To experience a unique visualisation of brain activity and to share it with others, I have created a large freestanding brain sculpture made of laser-cut Perspex, hand-etched with neural networks that glow when light is passed through them.” Laura Jade

Greg Dunn

brain art
To capture their strikingly chaotic and spontaneous forms, the neurons in Self Reflected are painted using a technique wherein ink is blown around on a canvas using jets of air. The resulting ink splatters naturally form fractal-like neural patterns, and although the artist learns to control the general boundaries of the technique, it remains at its heart a chaotic, abstract-expressionist process.

GUY BEN-ARY, PHILIP GAMBLEN AND STEVE POTTER

Silent Barrage

Silent Barrage has a “biological brain” that telematically connects with its “body” in a way that is familiar to humans: the brain processes sense data that it receives, and then brain and body formulate expressions through movement and mark making. But this familiarity is hidden within a sophisticated conceptual and scientific framework that is gradually decoded by the viewer. The brain consists of a neural network of embryonic rat neurons, growing in a Petri dish in a lab in Atlanta, Georgia, which exhibits the uncontrolled activity of nerve tissue that is typical of cultured nerve cells. This neural network is connected to neural interfacing electrodes that write to and read from the neurons. The thirty-six robotic pole-shaped objects of the body, meanwhile, live in whatever exhibition space is their temporary home. They have sensors that detect the presence of viewers who come in. It is from this environment that data is transmitted over the Internet, to be read by the electrodes and thus to stimulate, train or calm parts of the brain, depending on which area of the neuronal net has been addressed.

Olle Cornéer and Martin Lübcke

Public Epidemic Nº 1 (Bacterial Orchestra)
FILE FESTIVAL

The work builds on “Bacterial Orchestra” (2006), a self-organizing evolutionary musical organism; here each cell lives on an Apple iPhone (it can be ported to any mobile phone, but the iPhone was chosen because it is popular and the centralized App Store makes it easy for the epidemic to spread). That way, hundreds of people can gather with their mobiles and together create a musical organism. It evolves organically in the same way as “Bacterial Orchestra”, but it is also much more infectious. The installation and the ideas behind it draw on areas such as chaos theory, self-organizing systems and neural networks. The goal? A worldwide sound pandemic, of course.

STELARC

drawing with robot arm
“With gene mapping, gender reassignment, prosthetic limbs and neural implants, what a body is and how a body operates becomes problematic. We generate Fractal Flesh and Phantom Flesh, extended operational systems and virtual task environments. Meat and metal mesh into unexpected and alternate anatomical architectures that perform remotely beyond the boundaries of the skin and beyond the local space it inhabits. The monstrous is no longer the alien other. We inhabit an age of Circulating Flesh. Organs are extracted from one body and inserted into other bodies. Limbs that are amputated from a dead body can be reattached and reanimated on a living body. A face from a donor stitched to the skull of the recipient becomes a Third Face. A skin cell from an impotent male can be recoded into a sperm cell. And more interestingly a skin cell from a female body might be recoded into a sperm cell. Turbine hearts circulate blood without pulsing. In the near future you might rest your head on your loved one’s chest. They are warm to the touch, they are breathing, they are certainly alive. But they will have no heartbeat. A cadaver can be preserved forever through plastination whilst simultaneously a comatose body can be sustained indefinitely on a life-support system. Dead bodies need not decompose, near-dead bodies need not die. Most people will no longer die biological deaths. They will die when their life-support systems are switched off. The dead, the near-dead, the not-yet-born and the partially living exist simultaneously. And cryogenically preserved bodies await reanimation at some imagined future. We live in an age of the Cadaver, the Comatose and the Chimera. Liminal spaces proliferate. Engineering organs, growing them from stem cells or bio-printing them will result in an abundance of organs. An excess of organs. Of organs awaiting bodies. Of Organs Without Bodies.” STELARC