highlike

Pangenerator

The Abacus
THE ABACUS is probably the first 1:1 interactive physical representation of a real, functioning deep learning network, presented in the form of a light sculpture. The main purpose of the installation is to materialise and demystify the inherently ephemeral nature of the artificial neural networks on which our lives are becoming increasingly reliant. As part of a new permanent exhibition devoted to the Future, the installation aims to engage and educate the audience in artistically compelling ways, as a manifestation of the goals of the art and science movement.

Frederik de Wilde

AI Beetles
This Next Nature post-camouflage AI beetle is invisible to the electronic eye. The patterns are generated with neural networks and evolutionary algorithms to fool and mislead artificial-intelligence-enabled systems. My aim is to develop a contemporary razzle-dazzle style capable of messing up labelling and metadata systems. It looks like a beetle to us, but is seen by an AI as, for example, a honeycomb.
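A minimal sketch of the general approach described here, under stated assumptions: an evolutionary search that mutates candidate patterns until a classifier stops recognising them as a beetle. The `beetle_confidence` scorer below is a placeholder objective, not the artist's actual model or pipeline.

```python
import numpy as np

# Hypothetical stand-in for a classifier's confidence that a pattern is
# recognised as "beetle"; in practice this would be a pretrained network.
def beetle_confidence(pattern: np.ndarray) -> float:
    return float(np.abs(pattern).mean())  # placeholder objective

def evolve_camouflage(shape=(64, 64), population=32, generations=200,
                      mutation_scale=0.05, seed=0):
    """Evolutionary search for a pattern the classifier scores LOW on,
    i.e. a texture the 'electronic eye' fails to label as a beetle."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(population, *shape))
    for _ in range(generations):
        scores = np.array([beetle_confidence(p) for p in pop])
        parents = pop[np.argsort(scores)[: population // 2]]   # keep the least recognisable half
        children = parents + rng.normal(0.0, mutation_scale, size=parents.shape)
        pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    return pop[np.argmin([beetle_confidence(p) for p in pop])]

camouflage = evolve_camouflage()
```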

Ouchhh

Poetic AI
Ouchhh created an artificial intelligence and a t-SNE visualization of hundreds of books and articles (approximately 20 million lines of text) written by scientists who changed the destiny of the world and wrote history; these texts were fed to a recurrent neural network during training, and the trained network was later used to generate novel text in the exhibition. With 136 projectors turning it into a veritable oneiric experience, the ‘POETIC AI’ digital installation uses artificial intelligence in the visual creation process: the forms, light, and movement are generated by an algorithm that creates a unique and contemplative digital work, an AI dancing in the dark, trying to show us connections we could never see otherwise.
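As a rough illustration of this kind of pipeline (train a recurrent network on a text corpus, then sample new text from it), here is a minimal character-level model in PyTorch. The tiny corpus, network size and sampling loop are illustrative assumptions, not Ouchhh's implementation.

```python
import torch
import torch.nn as nn

corpus = "energy cannot be created or destroyed, only transformed..."  # stand-in for ~20M lines of scientific text
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        z, h = self.rnn(self.embed(x), h)
        return self.head(z), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)

# toy training loop: predict each next character from the previous ones
for step in range(200):
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)),
                                       data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# sample "novel text" one character at a time
idx, h, out = data[:, :1], None, []
for _ in range(120):
    logits, h = model(idx, h)
    idx = torch.multinomial(torch.softmax(logits[:, -1], dim=-1), 1)
    out.append(chars[idx.item()])
print("".join(out))
```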

quadrature

Credo

A radio telescope scans the skies in search of signs of extraterrestrial life.
The received raw signals serve as input data for a neural network that was trained on human theories and ideas about aliens. Now it tries desperately to apply this knowledge and to discover possible messages from other civilizations in the noise of the universe. Mysterious noises resound as the artificial intelligence penetrates deeper and deeper into the alien data, where it finally finds the ultimate proof. The sound installation revolves around one of the oldest questions of mankind – one that can never be disproved: Are we alone in the universe?
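One way to picture the pipeline described here, purely as a hedged sketch: slide a window over the raw signal and let a small network score each window for "message-likeness". The detector below is untrained and its architecture is an assumption; it only mirrors the described flow of raw telescope data into a neural network.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical detector: a small 1-D convolutional network scoring windows
# of a raw radio signal. In the installation the network is described as
# trained on human theories and depictions of alien contact.
detector = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid(),
)

signal = np.random.randn(1_000_000).astype(np.float32)   # stand-in for telescope noise
window = 8192
scores = []
with torch.no_grad():
    for start in range(0, len(signal) - window, window):
        chunk = torch.from_numpy(signal[start:start + window]).view(1, 1, -1)
        scores.append(float(detector(chunk)))             # "probability" of a message

print("most message-like window starts at sample", int(np.argmax(scores)) * window)
```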

Studio A N F

Computer Visions 2
After decades of trying to construct an apparatus that can think, we may finally be witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces does not only know what a human face looks like; it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also been successfully training such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts and thoughts from a person’s mind. This is likely where the inflection happens, and it is a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success. Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts […]

Jake Elwes

Cusp
A familiar childhood location on the Essex marshes is reframed by inserting images randomly generated by a neural network (a generative adversarial network, or GAN) into this tidal landscape. Initially trained on a photographic dataset, the machine proceeds to learn the embedded qualities of different marsh birds, in the process revealing forms that fluctuate between species, with unanticipated variations emerging without reference to human systems of classification. Birds have been actively selected from among the images conceived by the neural network, and then combined into a single animation that migrates from bird to bird, accompanied by a soundscape of artificially generated birdsong. The final work records these generated forms as they are projected, using a portable Perspex screen, across the mudflats in Landermere Creek.
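The "migration from bird to bird" suggests interpolation through the latent space of a trained generator. Below is a hedged sketch of that idea using an untrained placeholder generator with the same interface (latent vector in, image out); the real model, resolution and training data are Elwes' own and are not reproduced here.

```python
import torch
import torch.nn as nn

latent_dim = 128

# Stand-in generator: in the artwork this would be a GAN generator trained
# on photographs of marsh birds; here it is an untrained placeholder.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64 * 64 * 3),
    nn.Tanh(),
)

def migrate(z_a, z_b, frames=60):
    """Linearly interpolate between two latent 'birds' and render each step."""
    images = []
    for t in torch.linspace(0.0, 1.0, frames):
        z = (1 - t) * z_a + t * z_b
        with torch.no_grad():
            images.append(generator(z).view(3, 64, 64))
    return images

z_a, z_b = torch.randn(latent_dim), torch.randn(latent_dim)
animation = migrate(z_a, z_b)   # frames that drift from one generated bird to another
```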

Doug Rosman

Self-contained II
A neural network, trained to see the world as variations of the artist’s body, enacts a process of algorithmic interpretation that contends with a body as a subject of multiplicity. After training on over 30,000 images of the artist, this neural network synthesizes surreal humanoid figures unconstrained by physics, biology and time; figures that are simultaneously one and many. The costumes and the movements performed by the artist to generate the training images were specifically formulated to optimize the legibility of the artist within this computational system. self-contained explores the algorithmic shaping of our bodies, attempting to answer the question: how does one represent themselves in a data set? Building on the first iteration of the series, the synthetic figures in self-contained II proliferate to the point of literally exploding. Through the arc of self-contained II, this body that grows, multiplies, and dissolves never ceases to be more than a single body.

Max Cooper

Morphosis
Morphosis uses artificial neural networks to create morphing images of scale. The system explores how natural structures, from the most tiny to the most huge, share aesthetic properties, as recognized by the trained network and recreated in a continuous flowing sequence via these connections. It is a study of the seemingly infinite nature of space and natural physical structure, which can loop back on itself to give endless visual exploration and variation.

Void

Abysmal
Abysmal means bottomless; resembling an abyss in depth; unfathomable. Perception is a procedure of acquiring, interpreting, selecting, and organizing sensory information. Perception presumes sensing. In people, perception is aided by sensory organs. In the field of AI, the perception mechanism puts the data acquired by the sensors together in a meaningful manner. Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them. Inspired by the brain, deep neural networks (DNNs) are thought to learn abstract representations through their hierarchical architecture. The work mostly shows the ‘hidden’ transformations happening in a network: summing and multiplying things, adding some non-linearities, creating common basic structures and patterns inside the data. It creates highly non-linear functions that map ‘un-knowledge’ to ‘knowledge’.
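Those 'hidden' transformations, multiply, sum, then apply a non-linearity, layer after layer, can be sketched in a few lines. The sizes and random weights below are arbitrary assumptions used only to illustrate the stacked structure the text describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One 'hidden' transformation: multiply, sum, add a non-linearity (ReLU)."""
    return np.maximum(0.0, x @ w + b)

# A tiny randomly-initialised network: input -> two hidden layers -> output.
x = rng.normal(size=(1, 8))                       # raw sensory data
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 4)), np.zeros(4)

h1 = layer(x, w1, b1)        # basic structures / patterns inside the data
h2 = layer(h1, w2, b2)       # increasingly abstract representations
y = h2 @ w3 + b3             # the network's mapping of 'un-knowledge' to 'knowledge'
```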

Guy Ben-Ary

CellF
“CellF is the world’s first neural synthesiser. Its ‘brain’ is made of biological neural networks that grow in a Petri dish and control, in real time, its ‘body’, which is made of an array of analogue modular synthesizers that work in synergy with it and play with human musicians. It is a completely autonomous instrument that consists of a neural network, bio-engineered from my own cells, that controls a custom-built synthesizer. There is no programming or computers involved, only biological matter and analogue circuits; a ‘wet-analogue’ instrument.” Guy Ben-Ary

Laura Jade

B R A I N L I G H T
“The catalyst for this research project was my flourishing intrigue and desire to harness my own brain as the creator of an interactive art experience where no physical touch was required except the power of my own thoughts. To experience a unique visualisation of brain activity and to share it with others, I have created a large freestanding brain sculpture made of laser-cut Perspex, hand-etched with neural networks that glow when light is passed through them.” Laura Jade

Guy Ben-Ary, Philip Gamblen and Steve Potter

Silent Barrage

Silent Barrage has a “biological brain” that telematically connects with its “body” in a way that is familiar to humans: the brain processes sense data that it receives, and then brain and body formulate expressions through movement and mark making. But this familiarity is hidden within a sophisticated conceptual and scientific framework that is gradually decoded by the viewer. The brain consists of a neural network of embryonic rat neurons, growing in a Petri dish in a lab in Atlanta, Georgia, which exhibits the uncontrolled activity of nerve tissue that is typical of cultured nerve cells. This neural network is connected to neural interfacing electrodes that write to and read from the neurons. The thirty-six robotic pole-shaped objects of the body, meanwhile, live in whatever exhibition space is their temporary home. They have sensors that detect the presence of viewers who come in. It is from this environment that data is transmitted over the Internet, to be read by the electrodes and thus to stimulate, train or calm parts of the brain, depending on which area of the neuronal net has been addressed.
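The closed loop described, gallery sensors feeding the cultured neurons over the network and recorded neural activity driving the thirty-six poles, can be pictured with a sketch like the one below. Every function here is a hypothetical placeholder standing in for hardware and lab infrastructure; none of it is the project's actual code or API.

```python
import random
import time

NUM_POLES = 36  # robotic pole-shaped objects in the exhibition space

def read_visitor_sensors():
    """Hypothetical placeholder: per-pole presence readings from the gallery."""
    return [random.random() for _ in range(NUM_POLES)]

def stimulate_culture(channel, strength):
    """Hypothetical placeholder: write a stimulation pulse to one electrode."""
    pass

def read_spike_counts():
    """Hypothetical placeholder: read firing activity back from the electrodes."""
    return [random.randint(0, 20) for _ in range(NUM_POLES)]

for _ in range(100):                          # bounded here; the installation runs continuously
    presence = read_visitor_sensors()         # viewers detected near the poles
    for channel, strength in enumerate(presence):
        stimulate_culture(channel, strength)  # sensor data stimulates, trains or calms the culture
    spikes = read_spike_counts()              # the culture's response
    pole_activity = [min(1.0, s / 20.0) for s in spikes]  # neural activity drives the poles' movement
    time.sleep(0.1)
```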

Olle Cornéer and Martin Lübcke

Public Epidemic Nº 1 (Bacterial Orchestra)
FILE FESTIVAL

“Public Epidemic Nº 1” builds on “Bacterial Orchestra” (2006), a self-organizing evolutionary musical organism; here each cell lives on an Apple iPhone (it can be ported to any mobile phone, but the iPhone was chosen because it is popular and the centralized App Store makes it easy for the epidemic to spread). That way, hundreds of people can gather with their mobiles and together create a musical organism. It will evolve organically in the same way as “Bacterial Orchestra”, but it will also be much more infectious. The installation and the ideas behind it can be traced to different areas such as chaos theory, self-organizing systems and neural networks. The goal? A worldwide sound pandemic, of course.

Refik Anadol

Machine Memoirs
Machine Memoirs is an exploration of celestial structures through the mind of a machine. This immersive installation aims to combine past explorations and dream of what may exist just beyond our reach. Using machine intelligence to narrate the “unknown,” and a generative neural network trained on images of the Earth, Moon, Mars and the Galaxy, taken from ISS, Chandra, Kepler, Voyager, and Hubble observations, this installation imagines an alternate universe, perhaps providing further texture to the fabric of our own.

Haru Ji and Graham Wakefield

Infranet
Infranet is a generative artwork realized through a population of artificial lifeforms with evolutionary neural networks, thriving upon open geospatial data of the infrastructure of the city as their sustenance and canvas. Born curious, these agents form spatial networks through which associations spread in complex contagions. In this city as organism, the data grounds an unbounded, decentralized, open-ended, and unsupervised system. Non-human beings flourish in this environment by learning, discovering, communicating, self-governing, and evolving. The invitation is to witness, through immersive visualization and sonification of this complex adaptive system, how a new morphologic landscape emerges as a possible but speculative city.
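A hedged, toy-scale sketch of the kind of system described: agents with small neural controllers whose weights evolve while they "feed" on a data grid. The grid, controller shape and fitness rule are assumptions for illustration, not Infranet's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = rng.random((64, 64))          # stand-in for open geospatial infrastructure data
POP, STEPS, GENS = 24, 50, 30

def new_weights():
    """Random weights for a tiny two-layer neural controller."""
    return {"w1": rng.normal(size=(9, 8)), "w2": rng.normal(size=(8, 2))}

def act(weights, sense):
    """Neural controller: local data sample -> movement (dx, dy) in [-1, 1]."""
    hidden = np.tanh(sense @ weights["w1"])
    return np.tanh(hidden @ weights["w2"])

def lifetime(weights):
    """Let one agent roam the grid; fitness is the data 'sustenance' it gathers."""
    x, y, eaten = 32, 32, 0.0
    for _ in range(STEPS):
        patch = GRID[max(0, x - 1):x + 2, max(0, y - 1):y + 2]
        sense = np.resize(patch, 9)                  # 3x3 neighbourhood as input
        dx, dy = act(weights, sense)
        x = int(np.clip(x + round(dx), 0, 63))
        y = int(np.clip(y + round(dy), 0, 63))
        eaten += GRID[x, y]
    return eaten

population = [new_weights() for _ in range(POP)]
for _ in range(GENS):
    scores = [lifetime(w) for w in population]
    best = population[int(np.argmax(scores))]
    # keep the fittest agent and repopulate with mutated copies of it
    population = [best] + [
        {k: v + rng.normal(0, 0.1, v.shape) for k, v in best.items()}
        for _ in range(POP - 1)
    ]
```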