Julien Prévieux

Where Is My (Deep) Mind?
In Where Is My (Deep) Mind?, four performers embody different machine-learning experiments. At once experimenters and experimental subjects, the actors present a range of machine-learning processes, from the recognition of sporting movements to buying and selling negotiation techniques. Codified gestures and words, transferred to machines that know nothing of their cultural context, produce a stream of slips and unexpected errors: behavioural counterfeits with a comic edge.

Universal Everything

Hype Cycle: Machine Learning
Set in a spacious, well-worn dance studio, a dancer teaches a series of robots how to move. As the robots’ abilities develop from shaky mimicry to composed mastery, a physical dialogue emerges between man and machine: mimicking, balancing, challenging, competing, outmanoeuvring.

Chris Cheung

No Longer Write – Mochiji
Powered by Generative Adversarial Networks (GANs), the collected works of ancient Chinese calligraphers, including Wang Xizhi, Dong Qichang, Rao Jie, Su Shi, Huang Tingjian and Wang Yangming, serve as input data for deep learning. The strokes, scripts and styles of the masters are blended and visualized in “Mochiji”, a Chinese literary work paying tribute to Wang Xizhi. Wang is famous for his tireless pursuit of Chinese calligraphy: he practiced beside a pond for so long that the water he used for washing his brushes turned it into an ink pond (mochi). The artwork provides a platform for participants to write and record their handwriting. Once a participant has finished writing a randomly assigned script from “Mochiji”, the input process is complete and the deep learning process begins. The newly collected scripts are displayed on the screen like ink floating on the pond, slowly merging with other collected data to present a newly learnt script. The ink pond imitates the process of machine learning, which observes, compares and filters inputs through layers of image and text to form a modern edition of “Mochiji”.
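The GAN idea the piece rests on can be sketched in miniature: a generator learns to produce samples that a discriminator cannot tell apart from real data. The toy below, a minimal sketch rather than the studio's actual system, replaces calligraphy images with a 1-D Gaussian "real" distribution; all names and parameter values are illustrative.

```python
# Toy GAN: generator g(z) = a + b*z tries to match data ~ N(3, 1),
# while discriminator d(x) = sigmoid(w*x + c) scores "realness".
# Gradients are derived by hand for this 1-D linear/logistic case.
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_toy_gan(steps=3000, lr=0.05):
    a, b = 0.0, 1.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        x_real = random.gauss(3.0, 1.0)
        z = random.gauss(0.0, 1.0)
        x_fake = a + b * z
        d_r = sigmoid(w * x_real + c)
        d_f = sigmoid(w * x_fake + c)
        # Discriminator ascent on log d(real) + log(1 - d(fake))
        w += lr * ((1 - d_r) * x_real - d_f * x_fake)
        c += lr * ((1 - d_r) - d_f)
        # Generator ascent on log d(fake) (non-saturating GAN loss)
        d_f = sigmoid(w * (a + b * z) + c)
        a += lr * (1 - d_f) * w
        b += lr * (1 - d_f) * w * z
    return a, b

a, b = train_toy_gan()  # the generator mean drifts toward the real mean
```

In the installation the same adversarial principle operates on images of brushstrokes rather than scalars, with convolutional networks in place of these linear maps.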


Thom Kubli

Brazil Now
BRAZIL NOW is a composition that addresses increasing militarization and surveillance within urban areas. Its geographical and acoustic reference is São Paulo, the largest megacity in Latin America. The piece is based on field recordings that capture the symptoms of a Latin American variant of turbo-capitalism with its distinctive acoustic features. Eruptive public demonstrations on the streets are often accompanied by loud, carnivalesque elements; these are policed by a militarized infrastructure that openly demonstrates its readiness to deploy violence. The sonic documents are analyzed by machine learning algorithms searching for acoustic memes, textures, and rhythms that could be symptomatic of predominant social forces. The algorithmic results are then used as the basis for a score and its interpretation by a musical ensemble. The piece drafts a phantasmatic auditory landscape built on the algorithmic evaluation of urban conflict zones.
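A typical first stage of the kind of analysis described, before any algorithm can search recordings for memes, textures, and rhythms, is to split the audio into short frames and extract simple descriptors. The sketch below is a generic illustration of that step (the text does not specify which features the piece actually uses); the sine-wave input stands in for a field recording.

```python
# Frame-level audio features: RMS energy and zero-crossing rate,
# two classic descriptors that downstream clustering or
# classification could group into recurring "acoustic memes".
import math

def frame_features(signal, frame_len=256):
    """Return a (rms_energy, zero_crossing_rate) pair per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        zcr = sum(
            1 for i in range(1, frame_len) if frame[i - 1] * frame[i] < 0
        ) / frame_len
        feats.append((rms, zcr))
    return feats

# Synthetic "recording": a quiet low tone followed by a loud high tone.
sr = 8000
low = [0.2 * math.sin(2 * math.pi * 100 * t / sr) for t in range(2048)]
high = [0.9 * math.sin(2 * math.pi * 1200 * t / sr) for t in range(2048)]
feats = frame_features(low + high)
# Later frames show both higher energy and a higher crossing rate.
```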

Refik Anadol

Quantum Memories
Quantum Memories is Refik Anadol Studio’s epic-scale investigation of the intersection between Google AI’s quantum supremacy experiments, machine learning, and the aesthetics of probability. The technological and digital advancements of the past century could as well be defined by humanity’s eagerness to make machines go where humans cannot, including the spaces inside our minds and the non-spaces of our un- or sub-conscious acts. Quantum Memories utilizes Google AI’s most cutting-edge publicly available quantum computation research data and algorithms to explore the possibility of a parallel world, processing approximately 200 million nature and landscape images through artificial intelligence. These algorithms allow us to speculate on alternative modalities inside the most sophisticated computer available, and to create new quantum noise–generated datasets as the building blocks of these modalities. The 3D visual piece is accompanied by an audio experience that is likewise based on quantum noise–generated data, offering an immersive experience that further challenges the notion of mutual exclusivity. The project is both inspired by and a speculation on the Many-Worlds Interpretation of quantum physics, a theory that holds that many parallel worlds exist at the same space and time as our own.

Nathan Shipley

Dalí Lives
Using an artificial intelligence (AI)-based face-swap technique, known as a “deepfake” in the technical community, the new “Dalí Lives” experience employs machine learning to put a likeness of Dalí’s face on a target actor, resulting in an uncanny resurrection of the moustachioed master. When the experience opens, visitors will for the first time be able to interact with an engaging, lifelike Salvador Dalí on a series of screens throughout the Dalí Museum.

Refik Anadol

WDCH Dreams
The Los Angeles Philharmonic collaborated with media artist Refik Anadol to celebrate our history and explore our future. Using machine learning algorithms, Anadol and his team have developed a unique machine intelligence approach to the LA Phil digital archives – 45 terabytes of data. The results are stunning visualizations for WDCH Dreams, a project that was both a week-long public art installation projected onto the building’s exterior skin (Sept 28 – Oct 6, 2018) and a season-long immersive exhibition inside the building, in the Ira Gershwin Gallery.

Mika Tajima

New Humans
In New Humans, emergent gatherings of synthetic humans rise from the surface of a black ferrofluid pool. Appearing to morph like a supernatural life form, these dynamic clusters of magnetic liquid produced by machine learning processes are images of communities of synthetic people – hybrid profiles modeled from actual DNA, fitness, and dating profile data sets sourced from public and leaked caches. The work questions how we can radically conceptualize the “user profile” to embody a self whose bounds are indefinable and multiple. Materials: generative algorithm using machine learning (GAN, t-SNE) and fluid simulation (Navier-Stokes), contour generation (OpenCV), user profile data caches (DNA, fitness, and dating), software production (Processing), ferrofluid, custom electromagnet matrix, custom PCB control system, computer, steel, wood, aluminum.
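The materials list cites Navier-Stokes fluid simulation. A core step in the widely used "stable fluids" family of solvers is an implicit diffusion solve; the Gauss-Seidel relaxation below is a minimal, self-contained sketch of that step on a small periodic grid (the work's actual solver is not described, so grid size and coefficients here are illustrative).

```python
# Implicit diffusion of a 2-D density field with periodic boundaries,
# solved by Gauss-Seidel relaxation. Each cell relaxes toward the
# average of its four neighbours, weighted by the diffusion rate.
def diffuse(grid, diff=0.2, iters=20):
    n = len(grid)
    x = [row[:] for row in grid]  # solution, seeded with the input
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                neighbours = (
                    x[(i - 1) % n][j] + x[(i + 1) % n][j]
                    + x[i][(j - 1) % n] + x[i][(j + 1) % n]
                )
                x[i][j] = (grid[i][j] + diff * neighbours) / (1 + 4 * diff)
    return x

# A single spike of "ferrofluid" density spreads to its neighbours
# while the total amount of fluid is (approximately) conserved.
field = [[0.0] * 8 for _ in range(8)]
field[4][4] = 1.0
smoothed = diffuse(field)
```

The implicit formulation is what makes such solvers unconditionally stable, which is why real-time fluid visuals favour it.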

Ben Cullen Williams

Living Archive

The Living Archive is an experiment between Studio Wayne McGregor and Google Arts and Culture – a tool for choreography, powered by machine learning, that generates original movement inspired by Wayne’s 25-year archive.

Chris Salter

n-Polytope: Behaviors in Light and Sound after Iannis Xenakis
N_Polytope: Behaviors in Light and Sound After Iannis Xenakis is a spectacular light and sound performance-installation combining cutting-edge lighting, lasers, sound, sensing and machine learning software, inspired by composer Iannis Xenakis’s radical 1960s and 1970s works named Polytopes (from the Greek ‘poly’, many, and ‘topos’, space). Large-scale, immersive architectural environments that made the indeterminate, chaotic patterns and behaviour of natural phenomena experiential through the temporal dynamics of light and the spatial dynamics of sound, the Polytopes were far ahead of their time yet remain relatively unknown to this day. N_Polytope is an attempt both to re-imagine Xenakis’ work with probabilistic/stochastic systems using new techniques, and to explore how these techniques can exemplify our own historical moment of extreme instability.
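The probabilistic/stochastic systems Xenakis pioneered can be illustrated with a toy score generator: event onsets drawn from a Poisson process (exponential inter-arrival times) and pitches drawn at random, echoing the distribution-driven event densities of works like Achorripsis. This is a generic sketch of the technique, not the installation's software; the duration, density, and pitch set are illustrative.

```python
# Stochastic score: onset times follow a Poisson process with a given
# event density (events per second); pitches are sampled uniformly.
import random

random.seed(7)

def stochastic_score(duration=30.0, density=2.0,
                     pitches=(60, 62, 65, 67, 70)):
    """Return (onset_time, midi_pitch) events over `duration` seconds."""
    events, t = [], 0.0
    while True:
        t += random.expovariate(density)  # exponential inter-arrival
        if t >= duration:
            return events
        events.append((round(t, 3), random.choice(pitches)))

score = stochastic_score()  # ~ density * duration events on average
```

Swapping the uniform pitch choice for other distributions (Gaussian registers, Markov chains over intervals) gives the varied textures such systems are known for.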

Alex May and Anna Dumitriu

ArchaeaBot: A Post Climate Change, Post Singularity Life-form
“ArchaeaBot: A Post Singularity and Post Climate Change Life-form” takes the form of an underwater robotic installation that explores what ‘life’ might mean in a post-singularity, post-climate-change future. The project is based on new research about archaea (the oldest life forms on Earth), combined with the latest innovations in machine learning and artificial intelligence, creating the ‘ultimate’ species for the end of the world as we know it.

FIELD

System Aesthetics
The works in this series are part of an extensive research project by FIELD, exploring the most relevant machine learning algorithms in code-based illustrations […] We have started a deeper exploration of the less accessible information that is out there, such as scientific papers and open source code publications, to develop an understanding of these algorithms’ inner workings, and translate it into visual metaphors that can contribute to a public debate.

ELEVENPLAY x RZM

Discrete Figures
‘Discrete Figures’ unites the performing arts and mathematics in a dramatic exploration of the relationship between the human body and computer generated movement (simulated bodies) born from mathematical analysis. As an additional layer of complexity, the performance piece utilizes drones, A.I., and machine learning in the quest for a new palette of movement to foster undiscovered modes of expressive dance that transcend the limits of conventional human subjectivity and emotional expression.

Jonathan O’Hear, Martin Rautenstrauch & Timothy O’Hear

DAI – the Dancing Artificial Intelligence
DAI is an Artificial Intelligence artist. What this means is that it* thinks; it doesn’t follow a script or act randomly. In its first physical form, DAI is a performer and is inviting you to view its movement creation process. During the process DAI has been exploring its body and its environment, searching for ways to overcome some of the limitations that the physical world has imposed upon its virtual aspirations. This project is a reaction to the rapidly growing importance of artificial intelligence (AI) in our lives. Simple versions of AI are already everywhere, and today we are at a turning point where the first machines capable of learning through experience, like us, are making their appearance. This raises all kinds of ethical and moral issues and we want to be involved in this debate in our own way.

HUANG YI & KUKA

The work fulfills Huang’s childhood dream of having a robot dance partner and required development from scratch. After learning the mechanics of the industrial KUKA robot, he conceptualized the movements and programmed the machine to create the partner he wanted. He says of the experience, “Dancing face to face with a robot is like looking at my own face in a mirror… I think I have found the key to spin human emotions into robots.” It was developed into a full-length piece with two additional dancers as part of 3-Legged Dog Art & Technology Center’s Artist Residency program and their 3LD/3D+ program.

Mushon Zer-Aviv

The Normalizing Machine
The Normalizing Machine is an interactive installation presented as experimental research in machine learning. It aims to identify and analyze the image of social normalcy. Each participant is asked to point out who looks most normal from a line-up of previously recorded participants. The machine analyzes the participants’ decisions and adds them to its aggregated algorithmic image of normalcy.
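One simple way to pool pairwise "who looks more normal" judgments of the kind the installation collects into a running ranking is an Elo-style update, sketched below. The installation's actual aggregation algorithm is not described in this text, and the participant IDs are made up.

```python
# Elo-style aggregation of pairwise judgments: each time one profile
# is chosen over another, their scores shift by an amount that depends
# on how expected the outcome was. Symmetric updates keep the total
# score constant.
def elo_update(scores, chosen, other, k=32):
    """Shift scores after `chosen` was judged more normal than `other`."""
    expected = 1.0 / (1 + 10 ** ((scores[other] - scores[chosen]) / 400))
    scores[chosen] += k * (1 - expected)
    scores[other] -= k * (1 - expected)

scores = {"p1": 1000.0, "p2": 1000.0, "p3": 1000.0}
for winner, loser in [("p1", "p2"), ("p1", "p3"), ("p2", "p3")]:
    elo_update(scores, winner, loser)
# p1 (chosen twice) now ranks above p2, which ranks above p3.
```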

Rhizomatiks Research ELEVENPLAY Kyle McDonald

discrete figures 2019

Human performers meet computer-generated bodies; calculated visualisations of movement meet flitting drones. Artificial intelligence and self-learning machines make a previously unseen palette of movement designs appear, designs that far transcend the boundaries of human articulateness and allow a deep glimpse into the abstract world of data processing. The Rhizomatiks Research team, led by Japanese artist, programmer, interaction designer and DJ Daito Manabe, gathers collective power with a number of experts, among them the five ELEVENPLAY dancers of choreographer MIKIKO and coding artist Kyle McDonald. The result is breathtaking, beautifully implemented – in short: visually stunning.

Refik Anadol

Machine Hallucination
Refik Anadol’s most recent synesthetic reality experiments engage deeply with these centuries-old questions and attempt to reveal new connections between visual narrative, archival instinct and collective consciousness. The project focuses on latent cinematic experiences derived from representations of urban memories as they are re-imagined by machine intelligence. For Artechouse’s New York location, Anadol presents a data universe of New York City in 1025 latent dimensions, created by deploying machine learning algorithms on over 100 million photographic memories of New York City found publicly on social networks. Machine Hallucination thus generates a novel form of synesthetic storytelling through its multilayered manipulation of a vast visual archive, beyond the conventional limits of the camera and existing cinematographic techniques. The resulting artwork is a 30-minute experimental cinema piece, presented in 16K resolution, that visualizes the story of New York through the city’s collective memories, which constitute its deeply hidden consciousness.