highlike

FILE LED SHOW 2013

1024 ARCHITECTURE

FILE FESTIVAL

The project consists of a podium with a microphone installed on the sidewalk of Paulista Avenue, where people could interact through the conversion of their voices into musical notes. The 1024 architecture group developed a brand-new algorithm for this project, which changes the graphic behavior by means of sound. Depending on the note sung, several parameters of the program, such as colors, shapes, and density, will change.

1024 ARCHITECTURE

FILE LED Show

The art-technology exhibition of FILE São Paulo 2013 brings the new interactive work by the famous 1024 architecture group to be presented on the gigantic LED panel of FIESP building on Paulista Avenue. People could change the images on the panel through their voices or by humming a song.
The 1024 architecture group created the “interactive LED” digital graphic project for Paulista Avenue, presented from July 22 through August 18, 2013. The project consists of a podium with a microphone installed on the sidewalk, where people could interact through the conversion of their voices into musical notes. The group developed a brand-new algorithm for this project, which changes the graphic behavior by means of sound. Depending on the note sung, several parameters of the program, such as colors, shapes (squares, circles, stripes, etc.), density, and rhythms, will change.
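
As a rough illustration of this kind of mapping (my own sketch, not 1024 architecture's actual algorithm), a detected vocal frequency can be converted to the nearest musical note and then into color, shape, and density parameters; the note_from_frequency helper and the particular mappings below are hypothetical.

```python
# A minimal sketch, assuming a pitch tracker (YIN, autocorrelation, ...)
# supplies the sung frequency; the mappings are illustrative only.
import math

def note_from_frequency(freq_hz: float) -> int:
    """Convert a detected frequency to the nearest MIDI note number."""
    return int(round(69 + 12 * math.log2(freq_hz / 440.0)))

def visual_params(freq_hz: float) -> dict:
    note = note_from_frequency(freq_hz)
    return {
        "hue":     (note % 12) / 12.0,          # pitch class -> colour wheel
        "shape":   ["square", "circle", "stripe"][note % 3],
        "density": min(1.0, (note - 36) / 48),  # higher notes -> denser pattern
    }

print(visual_params(261.63))  # e.g. middle C
```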

QUBIT AI: Banz & Bowinkel

Bots
FILE 2024 | Installations
International Electronic Language Festival

Bots presents a computer-controlled society through a series of algorithmically controlled humanoid avatars that appear on physical carpets using augmented reality (AR). Real-time performances synthesize human behavioral patterns into a formalized digital social study. Omnipresent, combined with our devices and incorporated into virtual environments, the work reminds us of our own digitalized world, in which we are surrounded by invisible bots.

Bio

Giulia Bowinkel (born 1983) and Friedemann Banz (born 1980) live in Berlin and have worked together under the name Banz & Bowinkel since 2009. In 2007 they graduated from the Art Academy with Albert Oehlen and started making art with computers. Their work encompasses computer-generated imagery, animation, augmented reality, virtual realities and installations.

QUBIT AI: Michael Sadowski (aka derealizer)

Stealth Technology of Ancient, Cosmic Pantheons

FILE 2024 | Aesthetic Synthetics
International Electronic Language Festival
Michael Sadowski (aka derealizer) – Stealth Technology of Ancient, Cosmic Pantheons – Austria

An abstract painting in motion, with colors and shapes exploding and transforming to the rhythm of the music. The dynamic element is not trapped in a static image, but can unfold in time and space.

Bio

Using Stable Diffusion, a visual synthesizer, the artist turns fantasies into videos using just a PC, similar to the invention of printing 600 years ago. Exploring the interplay between software algorithms that create visual worlds and the artist’s mind guiding this process is incredibly exciting. Unlike traditional cinema, there is no ‘reality’ or humans involved, making it a satisfying medium for creating visual art.

Credits

Visuals: Michael Sadowski
Music: Stealth Technology of Ancient, Cosmic Pantheons by The Intangible

QUBIT AI: Michael Sadowski (aka derealizer)

Distortions of The Past

FILE 2024 | Aesthetic Synthetics
International Electronic Language Festival
Michael Sadowski (aka derealizer) – Distortions of The Past – Austria

Fractal elements that resemble cosmic structures evoke the illusion of traveling through a fractal universe. Rules, in the form of prompts, and chance interact with each other to create a visual fantasy.

Bio

Using Stable Diffusion, a visual synthesizer, the artist turns fantasies into videos using just a PC, similar to the invention of printing 600 years ago. Exploring the interplay between software algorithms that create visual worlds and the artist’s mind guiding this process is incredibly exciting. Unlike traditional cinema, there is no ‘reality’ or humans involved, making it a satisfying medium for creating visual art.

Credits

Visuals: Michael Sadowski
Music: Distortions of the Past by Dreamstate Logic

QUBIT AI: Infratonal

Useless Hands

FILE 2024 | Aesthetic Synthetics
International Electronic Language Festival
Infratonal – Useless Hands – France

When our hands become useless, what will we choose to do with them? We can use AI to visualize the unthinkable, the strangely familiar yet indescribable forms and structures. Generative AI could be used as an amplifier of our ability to explore abstraction and surrealism rather than a simple mirror of our usual perceptions.

Bio

Infratonal is an artistic project led by Louk Amidou, a Paris-based multidisciplinary artist who works at the intersection of digital arts, electronic music and interaction design. He uses algorithms to create hybrid visual and sound pieces meant to be performed through human gesture, as intangible instruments. He questions the nature of the artwork in the age of AI and the relationship between the artist and the algorithm.

QUBIT AI: Michael Sadowski (aka derealizer)

Magic Drops

FILE 2024 | Interator – Sound Synthetics
International Electronic Language Festival
Michael Sadowski (aka derealizer) – Magic Drops – Austria

Moving abstract structures and grimacing masks, colors that change and pulse to the rhythm of the music create a psychedelic experience that embodies the spirit of Techno.

Bio

Using Stable Diffusion, a visual synthesizer, the artist turns fantasies into videos using just a PC, similar to the invention of printing 600 years ago. Exploring the interplay between software algorithms that create visual worlds and the artist’s mind guiding this process is incredibly exciting. Unlike traditional cinema, there is no ‘reality’ or humans involved, making it a satisfying medium for creating visual art.

Credits

Music: Chris Robert

QUBIT AI: Michael Sadowski (aka derealizer)

In Love

FILE 2024 | Interator – Sound Synthetics
International Electronic Language Festival
Michael Sadowski (aka derealizer) – In Love – Austria

Fractal structures move to the sound of progressive house as the virtual camera navigates through this fractal world. To intensify the psychedelic quality, a second layer contrasts with the movement, resulting in a joyful madness of colors.

Bio

Using Stable Diffusion, a visual synthesizer, the artist turns fantasies into videos using just a PC, similar to the invention of printing 600 years ago. Exploring the interplay between software algorithms that create visual worlds and the artist’s mind guiding this process is incredibly exciting. Unlike traditional cinema, there is no ‘reality’ or humans involved, making it a satisfying medium for creating visual art.

Credits

Music: Y do I

VTOL

ADAD
This installation is a mechanism that serves as a kind of interface between planetary processes and an audience. It consists of 12 transparent piezocrystals, grown especially for the project, and 12 motorized hammers that strike them. The installation is connected to the internet. Its core algorithm is controlled by data from a meteorological site which shows lightning strikes in real time (on average, 10~200 lightning flashes occur on the planet every minute). Each time the installation receives information about a lightning strike, a hammer strikes one of the crystals, resulting in a small electrical discharge produced by the crystal under mechanical stress. Each of these charges activates a powerful lamp and sound effects.
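
A hedged sketch of the control loop is given below; the feed URL and message format are hypothetical stand-ins for the meteorological site's live lightning data, and trigger_hammer is a placeholder for the motor-control command.

```python
# Illustrative only: poll a (hypothetical) lightning feed and strike one of
# the 12 crystals for every new reported flash.
import random
import time
import requests

LIGHTNING_FEED = "https://example.org/lightning/latest"   # hypothetical endpoint
NUM_HAMMERS = 12

def trigger_hammer(index: int) -> None:
    # placeholder for the command sent to the motorized hammer `index`
    print(f"strike crystal {index}")

seen = set()
while True:
    strikes = requests.get(LIGHTNING_FEED, timeout=10).json()  # assumed: list of strike ids
    for strike_id in strikes:
        if strike_id not in seen:
            seen.add(strike_id)
            trigger_hammer(random.randrange(NUM_HAMMERS))
    time.sleep(1)   # worldwide, on the order of dozens of flashes per minute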

FILE LED SHOW Neuroscientific-Installation

 

FILE FESTIVAL

FILE LED SHOW

saccade
OUCHHH STUDIO
Neuroscientific-Installation
We were invited to São Paulo for our vertical light and sound installation, which transforms the facade of one of São Paulo's most important works of architecture, the FIESP LED building.
We started this project from neuroscience, turning the saccade, the simultaneous movement of both eyes between two or more phases of fixation in the same direction, into algorithms. We transform the high-resolution LED screen into a media canvas, turning the facade into living architecture.

Neri Oxman

MAN-NAHĀTA
Computational growth across material and urban scales offers a framework for design through self-organization, enabling the generation of vast, diverse forms exhibiting characteristics like those that emerge through the biological growth processes found in Nature. In this project, we construct an oriented volume spanned by surface normals of the shape at every point. The value of the oriented volume drives the iterative deformation of the shape. Depending on the parameterization of this process, we can obtain distinctly different growing forms. Importantly, the emergence of these forms is driven only by the time evolution of a geometric operator acting on the shapes iteratively, thereby connecting geometry and growth through an algorithm. To form the Man-Nahata landscape, the buildings of the urban landscape are transformed through repeated morphological closing operations, where the field of influence follows a gradient from the center to the outskirts of a circular region.
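
For illustration, here is a minimal sketch (my own approximation, not the studio's actual pipeline) of repeated morphological closing whose strength follows a radial gradient from the centre to the outskirts, applied to a stand-in urban height field.

```python
# Toy version of gradient-weighted, iterative morphological closing.
import numpy as np
from scipy.ndimage import grey_closing

heights = np.random.rand(128, 128)          # stand-in building-height map
yy, xx = np.mgrid[0:128, 0:128]
radius = np.hypot(yy - 64, xx - 64) / 64.0  # 0 at the centre, ~1 at the edge

for _ in range(5):                          # iterative growth steps
    closed_small = grey_closing(heights, size=3)
    closed_large = grey_closing(heights, size=9)
    # blend: strong closing near the centre, weaker towards the outskirts
    heights = (1 - radius) * closed_large + radius * closed_small
```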

The Collective

2°C
2°C is a unique AI-generated art installation imagined through the mind of a machine. Utilising machine learning algorithms trained on thousands of archival images of the geometric structures of man-made cities and of naturally occurring organic coral forms, the AI uses this learned data to visualise an otherwise unseen coral city. 2°C is about coral bleaching, a phenomenon mainly caused by the rising sea temperatures brought about by climate change. To prevent the massive, irreversible impacts of ocean warming on coral reefs and their services, it is crucial to limit the global average temperature increase to below 2°C above pre-industrial levels.

Lawrence Lek

AIDOL
Lawrence Lek presents AIDOL, the sequel to his 2017 CGI film Geomancer. With a score written and orchestrated by the artist, this computer-generated fantasy tells the story of a fading superstar, Diva, who hires an aspiring AI songwriter for a comeback performance at the 2065 eSports Olympic finals. In a smoke-and-mirrors realm of fantastical architecture, sentient drones and snow-covered jungles, AIDOL revolves around the long and complex struggle between humans and artificial intelligence. Fame, in all its allure and emptiness, confronts the greater contradictions of a post-AI world, a world in which originality is sometimes just an algorithmic trick and in which machines have the capacity to love and to suffer.

Peter Macapia

Dirty Geometry: Cloud
Dirty Geometry: Cloud is a six-panel folded screen generated using algorithmic computation with random variables. Between the drawing and the surface lies an infra-thin space perpetually reconstituted; one hue replaces another as the light shifts outside; becoming more intense, now

Jeremy Rotsztain

BECHA-KPACHA
BECHA-KPACHA is an algorithmic music video for the electronic musician COH. The song’s title (pronounced Vesna Krasna) was taken from an old Russian poem and roughly translates as “Spring the beautiful,” though it can also mean “Spring the red.” The animation references traditional Russian folk patterns, commonly known as Hohloma. In these patterns, colorful plant leaves expand and twist around one another while fruit grows alongside. These patterns were the starting point for this sound-responsive animation.

UVA UNITED VISUAL ARTISTS

Blueprint
Blueprint encompasses the relationship and parallels between art and science, creating compositions through the mathematical principles of logic that underpin life. Examining analogies between DNA and computer code, UVA developed the Blueprint series: works that connect genetics and code as blueprints of artificial and natural systems. As the work changes slowly over time, the patterns fluctuate between different degrees of complexity. Blueprint uses the basic concepts of evolution to create an ever-changing image. As cells literally pass their genes on to their neighbours, colour flows like paint across the canvas. Generating a unique colourful composition every minute, Blueprint presents the unlimited outcome that can arise from a single algorithm, a single set of rules.

Random International

Presence and Erasure
Presence and Erasure is a portrait machine that explores the reality of automated facial recognition and how people relate to their self-image, instinctively and emotionally. Within a given spatial domain, the artwork constantly scans for faces in the vicinity and photographs them. When the artwork’s algorithm detects a certain quality within a photograph, this image is temporarily printed at large scale by exposing a photochromic surface to light impulses. Each automated portrait remains for little more than a minute, before gradually dissolving into blankness. RANDOM INTERNATIONAL began to combine transient mark-making with automated portraiture early on in their practice, in 2008. Presence and Erasure marks the latest development in this body of work and assumes a minimal, industrial aesthetic that references their earliest studies on this theme. The physical impact of facial recognition and machine vision is emphasised by the exposure of the printing process itself, contrasted against the aesthetic of the high resolution portraits generated. RANDOM INTERNATIONAL intend this as a counter to the perception of surveillance footage as always being low quality, aiming to create a deeper reflection on the nature of surveillance today as well as the resounding cognitive and emotional dissonances.

Aidan Meller

AI-DA
Ai-Da is the world’s first ultra-realistic artist. She draws using cameras in her eyes, her AI algorithms and her robotic arm. Created in February 2019, she had her first solo exhibition at the University of Oxford, “Unsecured Futures”, where her art encouraged viewers to reflect on our rapidly changing world. Since then she has travelled and exhibited work internationally, and had her first exhibition in a major museum, the Design Museum, in 2021. She continues to create art that challenges our notions of creativity in a post-humanist era.

Tacit Group

61/6 speakers
Tacit Group is an audiovisual performance group formed in 2008 to create work focused on the algorithmic and the audiovisual. Their algorithmic art focuses on process rather than result. They create mathematical code, systems built on principles and rules, and improvise stage performances using those systems. During the performance, the systems are revealed visually and sonically, so that the audience can hear with their eyes, the way one sees Edvard Munch’s “The Scream” (Norway, 1863-1944). Visuals are an integral part of Tacit Group’s work as composers and media artists. They hope that by showing not only the finished piece but also the process of discussion or play that generates it, they will engage their viewers more intensely and break down the conventional division between performers and audience members. None of their works is ever finished. They continually update the underlying systems and draw inspiration from their practice of computer programming. As artists of our time, Tacit Group explores the artistic possibilities of technology.

Troika

AVA

‘Ava’ is Troika’s first sculptural manifestation of their exploration of algorithms. ‘Ava’ is the physical result of emergence and self-organisation, brought about by ‘growing’ a sculpture through the use of a computer algorithm that imitates the emergence of life, by which complexity arises from the simplest of things. As such, the sculpture probes the nature of becoming, existence and our striving to understand and replicate the complexities of life. In a landscape where our personal data is a raw material, and where we, humans, have become subordinate spectators of algorithms and a computerised infrastructure, we ask how much or how little we are capable of influencing our surrounding reality, how much is predetermined, and how much is down to chance.

fabrica

Recognition

Recognition, winner of IK Prize 2016 for digital innovation, is an artificial intelligence program that compares up-to-the-minute photojournalism with British art from the Tate collection. Over three months from 2 September to 27 November, Recognition will create an ever-expanding virtual gallery: a time capsule of the world represented in diverse types of images, past and present. A display at Tate Britain accompanies the online project, offering visitors the chance to interrupt the machine’s selection process. The results of this experiment – to see if an artificial intelligence can learn from the many personal responses humans have when looking at images – will be presented on this site at the end of the project. Recognition is a project by Fabrica for Tate; in partnership with Microsoft, content provider Reuters, artificial intelligence algorithm by Jolibrain.

Driessens & Verstappen

Breed
Breed (1995-2007) is a computer program that uses artificial evolution to grow very detailed sculptures. The purpose of each growth is to generate by cell division from a single cell a detailed form that can be materialised. On the basis of selection and mutation a code is gradually developed that best fulfils this “fitness” criterion and thus yields a workable form. The designs were initially made in plywood. Currently the objects can be made in nylon and in stainless steel by using 3D printing techniques. This automates the whole process from design to execution: the industrial production of unique artefacts.
Computers are powerful machines to harness artificial evolution to create visual images. To achieve this we need to design genetic algorithms and evolutionary programs. Evolutionary programs allow artefacts to be “bred”, rather than designing them by hand. Through a process of mutation and selection, each new generation is increasingly well adapted to the desired “fitness” criteria. Breed is an example of such software that uses Artificial Evolution to generate detailed sculptures. The algorithm that we designed is based on two different processes: cell-division and genetic evolution.
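
A toy version of this kind of artificial evolution is sketched below (illustrative only, not Driessens & Verstappen's Breed code): a genome is a small rule table that decides how each cell subdivides, and selection plus mutation keeps the genomes whose grown forms best satisfy a simple stand-in "fitness" criterion.

```python
# Minimal artificial-evolution sketch: grow-by-cell-division + select + mutate.
import random

def grow(genome, steps=4):
    cells = {(0, 0)}                              # start from a single cell
    for _ in range(steps):
        new = set()
        for (x, y) in cells:
            rule = genome[(x + y) % len(genome)]  # rule chosen by position
            new.add((x, y))
            if rule & 1: new.add((x + 1, y))      # divide to the right
            if rule & 2: new.add((x, y + 1))      # divide upward
        cells = new
    return cells

def fitness(genome):
    return len(grow(genome))                      # toy criterion: larger form

population = [[random.randint(0, 3) for _ in range(8)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # selection
    population = [
        [g if random.random() > 0.1 else random.randint(0, 3)   # 10% mutation
         for g in random.choice(parents)]
        for _ in range(20)
    ]
print("best form has", fitness(max(population, key=fitness)), "cells")
```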

Shinseungback Kimyonghun

Cloud Face
Humans see figures in clouds: animals, faces and even god. This kind of perception also appears in machine vision: face-detection algorithms sometimes find faces where there are none. ‘Cloud Face’ is a collection of cloud images that are recognized as human faces by a face-detection algorithm. It is the result of machine vision’s errors, but the images often look like faces to human eyes too. Yet humans know these are not actual faces; humans rather imagine faces in the clouds. Here, the error of machines and the imagination of humans meet.
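
A minimal sketch of the underlying workflow follows; it is my assumption of the process, with OpenCV's stock Haar-cascade detector and the file names standing in for whatever detector and inputs the artists actually used.

```python
# Run a standard face detector over a cloud photograph and keep the image
# whenever the algorithm "finds" a face in the sky (a false positive).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("cloud.jpg")                     # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

if len(faces) > 0:                                # a "cloud face" was detected
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("cloud_face.jpg", img)
```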

Thom Kubli

Brazil Now
BRAZIL NOW is a composition that addresses increasing militarization and surveillance within urban areas. Its geographical and acoustic reference is São Paulo, the largest megacity in Latin America. The piece is based on field recordings that capture the symptoms of a Latin American variant of turbo-capitalism with its distinctive acoustic features. Eruptive public demonstrations on the streets are often accompanied by loud, carnivalesque elements. These are controlled by a militarized infrastructure, openly demonstrating a readiness to deploy violence. The sonic documents are analyzed by machine learning algorithms searching for acoustic memes, textures, and rhythms that could be symptomatic of predominant social forces. The algorithmic results are then used as the basis for a score and its interpretation by a musical ensemble. The piece drafts a phantasmatic auditory landscape built on the algorithmic evaluation of urban conflict zones.

Thijs Biersteker

Pollutive Ends
With the art installation Pollutive Ends, the artist Thijs Biersteker shows the impact of a single cigarette butt on our environment and waters. The impact is made visible by hypnotically moving small elements of real polluted water right in front of the visitors’ eyes through an intricate tube system. The algorithmically driven pumping system calculates the number of visitors in the museum, the likelihood that they smoke, and the amount of pollution they would generate.
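
The calculation the installation performs can be sketched as simple arithmetic; all figures below are placeholder assumptions of mine, not Biersteker's calibration.

```python
# Back-of-the-envelope sketch: scale the pump rate with estimated pollution.
def pump_rate(visitors_in_museum: int,
              smoking_likelihood: float = 0.2,       # assumed share of smokers
              litres_polluted_per_butt: float = 8.0  # assumed impact per butt
              ) -> float:
    """Litres of 'polluted water' to circulate through the tube system."""
    butts = visitors_in_museum * smoking_likelihood
    return butts * litres_polluted_per_butt

print(pump_rate(120))  # e.g. 120 visitors -> 192.0 litres
```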

NAHO MATSUDA

Every thing every time
The piece presents a running commentary on the activity of the city in which it is situated. The “poetry” is created by an algorithm that randomly selects and rearranges data collected from a number of site-specific points, and is published on a website. The Pis pull the poetry from the website over 4G, then pass the letter addresses via Ethernet to the Arduinos, and on to the motors using the I2C network protocol. This act then triggers the algorithm to generate a new poem.
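
A minimal sketch of that chain, under stated assumptions: the URL, the I2C addresses and the register layout below are hypothetical, and the Arduino firmware that turns a received character into motor movement is not shown.

```python
# A Raspberry Pi fetches the current poem over the network and forwards one
# letter per display module to Arduinos on the I2C bus.
import requests
from smbus2 import SMBus

POEM_URL = "https://example.org/current-poem"   # hypothetical endpoint
MODULE_ADDRESSES = [0x10, 0x11, 0x12, 0x13]     # hypothetical Arduino I2C addresses

poem = requests.get(POEM_URL, timeout=10).text.upper()

with SMBus(1) as bus:                           # I2C bus 1 on a Raspberry Pi
    for address, letter in zip(MODULE_ADDRESSES, poem):
        bus.write_byte_data(address, 0x00, ord(letter))  # register 0x00: char to show
```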

Thomas Depas

Princess of Parallelograms
What will happen when our imagination itself is externalized in machines? Artificial intelligence constructs its own world-truth that is beyond our sensory perception. Generative Adversarial Networks (GANs) use algorithms to synthesize and generate images in a completely new way. These images have almost uncanny aesthetic characteristics, seeming to emerge from an ocean of data, a kind of pixel soup, rather as if we were observing the emergence of artificial thought. The machine learns to understand the “essence” of a thing, be it an animal, the face of a celebrity or a body of text. It is then able to generate new images of this thing, including faces of celebrities who do not exist, mutant animals, or new texts. Eventually, AI will be capable of instantaneously and dynamically emulating all representations. The era of the optical machine and the capture of reality will then be at an end, supplanted by the era of machines that generate their own reality.
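
As a generic illustration of the adversarial setup named above (a toy GAN sketch of my own, not the model behind this work), the following trains a tiny generator and discriminator against each other on stand-in vectors.

```python
# Minimal GAN training loop in PyTorch; data_dim vectors stand in for images.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # stand-in for a real dataset (e.g. flattened image crops)
    return torch.randn(n, data_dim) * 0.5 + 1.0

for step in range(1000):
    # 1) train the discriminator to tell real samples from generated ones
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) train the generator to fool the discriminator
    fake = G(torch.randn(32, latent_dim))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```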

Ouchhh

Poetic AI
Ouchhh created an artificial intelligence: hundreds of books and articles [approx. 20 million lines of text] written by scientists who changed the destiny of the world, and wrote history, were fed to a recurrent neural network during training and visualized with t-SNE. The network was later used to generate novel text in the exhibition. With 136 projectors shining to create a veritable oneiric experience, the ‘POETIC AI’ digital installation uses artificial intelligence in the visual creation process: the forms, light, and movement are generated by an algorithm that creates a unique and contemplative digital work, an AI dancing in the dark, trying to show us connections we could never see otherwise.
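
A rough sketch of one plausible pipeline stage (my assumption; Ouchhh's actual tooling is not documented here): embed lines of text as TF-IDF vectors and project them to 2D with t-SNE so that related passages cluster and can drive a visual layout.

```python
# Toy t-SNE projection of a handful of stand-in text lines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

lines = [
    "the motion of bodies is governed by forces",
    "energy can be neither created nor destroyed",
    "species evolve through natural selection",
]  # stand-ins for the ~20 million lines described above

vectors = TfidfVectorizer().fit_transform(lines).toarray()
xy = TSNE(n_components=2, perplexity=2, init="random").fit_transform(vectors)
print(xy)  # 2D coordinates a renderer could map to light and movement
```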

Refik Anadol

Quantum memories
Quantum Memories is Refik Anadol Studio’s epic-scale investigation of the intersection between Google AI Quantum Supremacy experiments, machine learning, and the aesthetics of probability. The technological and digital advancements of the past century could well be defined by humanity’s eagerness to make machines go to places that humans could not go, including the spaces inside our minds and the non-spaces of our un- or sub-conscious acts. Quantum Memories utilizes cutting-edge, publicly available quantum computation research data and algorithms from Google AI to explore the possibility of a parallel world by processing approximately 200 million nature and landscape images through artificial intelligence. These algorithms allow us to speculate on alternative modalities inside the most sophisticated computer available, and to create new quantum noise–generated datasets as building blocks of these modalities. The 3D visual piece is accompanied by an audio experience that is also based on quantum noise–generated data, offering an immersive experience that further challenges the notion of mutual exclusivity. The project is both inspired by and a speculation on the Many-Worlds Interpretation in quantum physics – a theory that holds that there are many parallel worlds that exist at the same space and time as our own.

Tundra

Nomad
Inspired by the concept of the 21st-century digital nomad and based on various pieces and algorithms from TUNDRA’s previous highly acclaimed audio-visual installations, premiered across the globe from the USA to China, NOMAD brings the polar atmosphere of different TUNDRA site-specific installations into a randomly changing sequence of visual themes and patterns triggered by live-performed sound.

Oleg Soroko

Procedural cloth V//002
“I’m continuing experiments with procedurally generated structures. This time I’m implementing algorithm assembled in Houdini over female body. All meshes generated in Houdini. Then uploaded to sketchfab.  All images made from sketchfab model. You can check it in 3d in any browser by the link in my profile.” Oleg Soroko

Studio A N F

Computer Visions 2
After more decades of trying to construct an apparatus that can think, we may be finally witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces does not only know what a human face looks like, it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also been successfully training such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts, thoughts from a person’s mind. This is where the inflection likely happens, as a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success. Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts[…]

Refik Anadol

WDCH Dreams
The Los Angeles Philharmonic collaborated with media artist Refik Anadol to celebrate our history and explore our future. Using machine learning algorithms, Anadol and his team have developed a unique machine intelligence approach to the LA Phil digital archives – 45 terabytes of data. The results are stunning visualizations for WDCH Dreams, a project that was both a week-long public art installation projected onto the building’s exterior skin (Sept 28 – Oct 6, 2018) and a season-long immersive exhibition inside the building, in the Ira Gershwin Gallery.

Doug Rosman

Self-contained II
A neural network, trained to see the world as variations of the artist’s body, enacts a process of algorithmic interpretation that contends with a body as a subject of multiplicity. After training on over 30,000 images of the artist, this neural network synthesizes surreal humanoid figures unconstrained by physics, biology and time; figures that are simultaneously one and many. The choice of costumes and the movements performed by the artist to generate the training images were specifically formulated to optimize the legibility of the artist within this computational system. self-contained explores the algorithmic shaping of our bodies, attempting to answer the question: how does one represent themselves in a data set? Building on the first iteration of the series, the synthetic figures in self-contained II proliferate to the point of literally exploding. Through the arc of self-contained II, this body that grows, multiplies, and dissolves never ceases to be more than a single body.

Ong Kian-Peng

Particle Waves
“Particle Waves” is a kinetic sound sculpture comprising a 4×3 grid of 12 individual kinetic bowls. Each bowl contains tiny metal beads of various sizes, which create noise as the bowl rotates at various angles. The noise from the individual bowls collectively forms a soundscape, reminding us of waves and oceans. The bowls are arranged in a 4×3 grid and controlled as a whole by a microcontroller running a wave algorithm. This creates a continuous wave-like kinetic motion over the grid, at the same time creating a spatialized soundscape. The installation is a continuing exploration of the correlation between sound and nature.
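
A toy version of such a wave algorithm is sketched below; the speed, wavelength and tilt-range parameters are my assumptions, not the artist's firmware, and the motor commands are left as a comment.

```python
# Travelling-wave phase pattern over a 4x3 grid of bowls, one tilt per bowl.
import math, time

ROWS, COLS = 3, 4
SPEED, WAVELENGTH = 1.5, 3.0   # assumed tuning parameters

def bowl_angles(t):
    """Return a 3x4 grid of tilt angles (degrees) at time t."""
    grid = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            phase = 2 * math.pi * (c / WAVELENGTH) - SPEED * t
            row.append(15.0 * math.sin(phase + 0.5 * r))  # +/-15 degree sweep
        grid.append(row)
    return grid

for step in range(200):          # one tick every 50 ms
    angles = bowl_angles(time.time())
    # a microcontroller would now send each angle to its bowl's motor driver
    time.sleep(0.05)
```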

Charlie Behrens

Algorithmic Architecture
This short film is intended to encourage a creative audience to seek out Kevin Slavin’s talk Those Algorithms Which Govern Our Lives. It employs an effect which takes place in Google Earth when its 3D street photography and 2D satellite imagery don’t register correctly. This glitch is applied as a metaphor for the way that our 21st century supercities are physically changing to suit the needs of computer algorithms rather than human employees.

Michael Sedbon

CMD
Here are two artificial ecosystems sharing a light source. Access to this light source is granted through a market. Each colony of photosynthetic bacteria can claim access to light thanks to credits earned for its oxygen production. The rules driving the market are optimized through a genetic algorithm. This artificial intelligence tests different populations of financial systems on these two sets of Cyanobacteria. In this way, the photosynthetic cells and the computer experiment with different political systems granting access to the resource. The system oscillates between collaborative and competitive states. The genetic algorithm treats the rules of these proto-societies as genes. By breeding populations of societies, new generations of markets arise, and the sum of microscopic series of events determines the status of the system at a macroscopic scale.
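
A toy reading of the credit market is sketched below (my own simplification, not Sedbon's system); the bid fraction and the oxygen response curve are exactly the kind of parameters the genetic algorithm described above would tune.

```python
# Two colonies earn credits for oxygen produced and bid them for light.
import random

def oxygen_produced(light_share):
    # assumed response curve: more light -> more oxygen, with noise
    return light_share * random.uniform(0.8, 1.2)

credits = {"colony_a": 0.0, "colony_b": 0.0}
light_share = {"colony_a": 0.5, "colony_b": 0.5}

for cycle in range(100):
    # 1) colonies earn credits in proportion to their oxygen output
    for c in credits:
        credits[c] += oxygen_produced(light_share[c])
    # 2) each colony bids a fixed fraction of its credits for light
    bids = {c: 0.3 * credits[c] for c in credits}
    for c in credits:
        credits[c] -= bids[c]
    # 3) the next cycle's light is divided in proportion to the bids
    total = sum(bids.values()) or 1.0
    light_share = {c: bids[c] / total for c in bids}

print(light_share)
```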

VTOL

Until I die
This installation operates on unique batteries that generate electricity using my blood. The electric current produced by the batteries powers a small electronic algorithmic synth module. This module creates a generative sound composition that plays via a small speaker. The blood used in the installation was stored up gradually over 18 months. The conservation included a number of manipulations to preserve the blood’s chemical composition, color, homogeneity and sterility, to avoid bacterial contamination. The total amount of blood conserved was around 4.5 liters; it was then diluted to yield 7 liters, the amount required for the installation. The blood was diluted with distilled water and preservatives such as sodium citrate, antibiotics, antifungal agents, glucose, glycerol, etc. The last portion of blood (200 ml) was drawn from my arm during the performance presentation, shortly before the launch of the installation.

Anne-Sarah Le Meur & Jean-Jacques Birgé

Omni-Vermille
Omni-Vermille is based on computer-generated real-time 3D images. The programmed code allows light spots to oscillate against a dark background. The colors sometimes move dynamically, sometimes calmly across the projection surface; sometimes they evoke plasticity, sometimes depth. This continuous metamorphosis endows the contents of the images with a sensual, even lively quality. The metamorphosis designed by algorithms opens up a new time-based morphology of colors and forms for painting. The play of colors is accompanied by a stereophonic sound composition by Jean-Jacques Birgé (*1952, France). The sounds follow the shapes of the colors, only to stand out again the next moment: the combination of sound and image results entirely from the laws of random simultaneity.

Vtol

Oil
The main idea of this project is to present exhibition visitors with the chance to destroy any object that might happen to be on their person, in order to transform it into a unique sound composition. The installation consists of five hydraulic presses, capable of crushing practically any object (a mobile telephone, pair of glasses, headphones or whatever). In the process of destruction, a special microphone records the sounds made as the object undergoes deformation, and in just a few minutes, a computer algorithm transforms them into a 20-minute album.

Refik Anadol

Machine Hallucination
Refik Anadol’s most recent synesthetic reality experiments deeply engage with these centuries-old questions and attempt at revealing new connections between visual narrative, archival instinct and collective consciousness. The project focuses on latent cinematic experiences derived from representations of urban memories as they are re-imagined by machine intelligence. For Artechouse’s New York location, Anadol presents a data universe of New York City in 1025 latent dimensions that he creates by deploying machine learning algorithms on over 100 million photographic memories of New York City found publicly in social networks. Machine Hallucination thus generates a novel form of synesthetic storytelling through its multilayered manipulation of a vast visual archive beyond the conventional limits of the camera and the existing cinematographic techniques. The resulting artwork is a 30-minute experimental cinema, presented in 16K resolution, that visualizes the story of New York through the city’s collective memories that constitute its deeply-hidden consciousness.

ADAM FERRISS

“Adam Ferriss is one of those technologically-minded creatives who is able to put his ever-growing knowledge of code and processing to use building aesthetically wondrous digital art for the rest of us to enjoy. His images make me feel like I’ve just taken some psychedelics and stepped into one of those crazy houses you get in funfairs, where there are giant optical illusions on every wall and the floor keeps moving under your feet, except these are made using algorithms and coding frameworks […]”

Raven Kwok

1194D^3
Initially started in 2013 as a tweak of 115C8, one of Kwok’s Algorithmic Creatures based on finite subdivision, 1194D is an experiment on multiple geometric creatures co-existing within a tetrahedron-based grid environment. In 2017, the project was improved and revised into an immersive triple-screen audiovisual installation, 1194D^3, for the .zip Future Rhapsody art exhibition curated by Wu Juehui & Yan Yan at Today Art Museum in Beijing, China. The entire visual is programmed and generated using Processing. All stages are later composed and exported using Premiere.

Pedro Veneroso

Tempo: cor (FILE Festival 2019)
‘Tempo: cor’ (Time: color) consists of an immersive installation that seeks to modify our experience of time by converting hours into color. A set of chromatic clocks, each set to a different GMT time zone, projects, in a semicircle, the current time in its mathematical and chromatic representations. The conversion between these two forms of time representation is based on an algorithm composed of sinusoidal functions that modulate the RGB colors as a function of the current time, gradually modifying the intensities of blue, green and red throughout the day: at midday yellow predominates, while at four in the afternoon the hour is red; midnight is blue, six o’clock in the morning is green. Side by side, the colors projected by the clocks merge, creating an immersive experience of a continuous and circular time, between the different time zones, that crosses the entire chromatic spectrum. This installation is part of a series of works in which I investigate the relationships between human notations and codes and our experience of space-time, seeking to change the ways we understand it; in this case, visitors immerse themselves in a spatial experience of time that provokes the questioning of notations and perceptions we usually consider axiomatic. Will changing the way we represent time change the way we experience it?
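
A minimal sketch of such a time-to-color mapping follows; the raised-cosine form and the peak hours are my assumptions, chosen only to approximate the colors described (noon yellowish, 4 pm reddish, midnight bluish, 6 am greenish), not Veneroso's exact functions.

```python
# Map the time of day to an RGB colour with 24-hour sinusoids, one clock
# per GMT time zone as in the installation.
import math
from datetime import datetime, timezone, timedelta

def channel(hour, peak_hour):
    """Raised cosine with a 24-hour period, peaking at peak_hour."""
    return 0.5 * (1 + math.cos(2 * math.pi * (hour - peak_hour) / 24))

def time_to_rgb(hour: float) -> tuple:
    r = channel(hour, 16)   # assumed peak hours for each channel
    g = channel(hour, 6)
    b = channel(hour, 0)
    return tuple(round(255 * v) for v in (r, g, b))

for offset in range(-3, 4):
    now = datetime.now(timezone(timedelta(hours=offset)))
    hour = now.hour + now.minute / 60
    print(f"GMT{offset:+d}", time_to_rgb(hour))
```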