highlike

Pangenerator

The abacus
THE ABACUS is probably the first 1:1 interactive physical representation of a real, functioning deep learning network, realised in the form of a light sculpture. The main purpose of the installation is to materialise and demystify the inherently ephemeral nature of the artificial neural networks on which our lives increasingly rely. As part of a new permanent exhibition devoted to the Future, the installation aims to engage and educate the audience in artistically compelling ways, a manifestation of the goals of the art-and-science movement.

Chris Cheung

No Longer Write – Mochiji
Powered by artificial intelligence’s Generative Adversarial Networks (GANs), the collected works of ancient Chinese calligraphers, including Wang Xizhi, Dong Qichang, Rao Jie, Su Shi, Huang Tingjian and Wang Yangming, serve as input data for deep learning. The strokes, scripts and styles of the masters are blended and visualized in “Mochiji”, a Chinese literary work paying tribute to Wang Xizhi. Wang is famous for his tireless pursuit of Chinese calligraphy: he kept practicing by a pond and eventually turned the pond in which he washed his brushes into an ink pond (Mochi). The artwork provides a platform for participants to write and record their handwriting. Once a participant finishes writing a randomly assigned script from “Mochiji”, the input process is complete and the deep learning process begins. The newly collected scripts are displayed on the screen like floating ink on the pond and slowly merge with other collected data to present a newly learnt script. The ink pond imitates the process of machine learning, which observes, compares and filters inputs through layers of image and text to form a modern edition of “Mochiji”.
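The adversarial setup the piece relies on can be sketched in miniature: a generator proposes samples, a discriminator judges whether they look real, and each improves against the other. The toy numpy example below is only an illustration of that dynamic, not the artwork's actual model (which learns from calligraphy images); it pits a linear generator against a logistic discriminator on one-dimensional data standing in for stroke features.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data stands in for features of the masters' strokes.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    n = 32
    x = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    g = a * z + b                      # fake samples

    # Discriminator ascent on log D(x) + log(1 - D(g)).
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    w += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on the non-saturating objective log D(G(z)).
    d_fake = sigmoid(w * g + c)
    dg = (1 - d_fake) * w              # gradient of log D(g) w.r.t. g
    a += lr * np.mean(dg * z)
    b += lr * np.mean(dg)

print(f"generator output mean ≈ {b:.2f} (real mean 3.0)")
```

After training, the generator's output distribution drifts toward the real one, the same pressure that lets the installation's GAN merge participants' handwriting with the masters' styles.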

 

Julien Prévieux

Where Is My (Deep) Mind?
In Where Is My (Deep) Mind?, four performers embody different machine learning experiments. At once experimenters and experimental subjects, the actors present a range of machine learning processes, from the recognition of athletic movements to buying and selling negotiation techniques. Codified gestures and speech, transferred to machines that know nothing of the cultural context, produce a stream of slip-ups and unexpected errors: behavioural counterfeits with comic overtones.

Nathan Shipley

Dali Lives
Using an artificial intelligence (AI)-based face-swap technique, known as “deepfake” in the technical community, the new “Dalí Lives” experience employs machine learning to put a likeness of Dalí’s face on a target actor, resulting in an uncanny resurrection of the moustachioed master. When the experience opens, visitors will for the first time be able to interact with an engaging, lifelike Salvador Dalí on a series of screens throughout the Dalí Museum.

Rhizomatiks Research ELEVENPLAY Kyle McDonald

discrete figures 2019

Human performers meet computer-generated bodies; calculated visualisations of movement meet flitting drones. Artificial intelligence and self-learning machines conjure a previously unseen palette of movement designs, designs that far transcend the boundaries of human articulation and allow a deep glimpse into the abstract world of data processing. The Rhizomatiks Research team, led by Japanese artist, programmer, interaction designer and DJ Daito Manabe, joins forces with a number of experts, among them the five ELEVENPLAY dancers of choreographer MIKIKO as well as coding artist Kyle McDonald. The result is breathtaking, beautifully implemented, in short: visually stunning.

Refik Anadol

Machine Hallucination
Refik Anadol’s most recent synesthetic reality experiments deeply engage with these centuries-old questions and attempt to reveal new connections between visual narrative, archival instinct and collective consciousness. The project focuses on latent cinematic experiences derived from representations of urban memories as they are re-imagined by machine intelligence. For Artechouse’s New York location, Anadol presents a data universe of New York City in 1025 latent dimensions, created by deploying machine learning algorithms on over 100 million photographic memories of New York City found publicly on social networks. Machine Hallucination thus generates a novel form of synesthetic storytelling through its multilayered manipulation of a vast visual archive, beyond the conventional limits of the camera and existing cinematographic techniques. The resulting artwork is a 30-minute work of experimental cinema, presented in 16K resolution, that visualizes the story of New York through the city’s collective memories, which constitute its deeply hidden consciousness.
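The idea of "latent dimensions" can be illustrated at toy scale: a model compresses a huge image archive into a coordinate space whose axes capture the archive's main directions of variation. The numpy sketch below uses plain PCA rather than the deep models behind the artwork, and all sizes are invented for illustration; it projects fake "photo features" into a small latent space and measures how much of the collection's variance those few coordinates retain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an archive: 500 "photographs", each a 64-dim feature vector
# that secretly varies along only a few underlying directions.
latent_true = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 64))
photos = latent_true @ mixing + 0.05 * rng.normal(size=(500, 64))

# PCA: centre the data, then keep the top-k right singular vectors.
centered = photos - photos.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 4
latent = centered @ vt[:k].T           # each photo as k latent coordinates

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"{k} latent dimensions retain {explained:.1%} of the variance")
```

Scaled up from 4 coordinates to 1025 and from linear projections to learned ones, this is the kind of compressed space in which such a work navigates its archive of urban memories.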

FIELD

System Aesthetics
The works in this series are part of an extensive research project by FIELD, exploring the most relevant machine learning algorithms in code-based illustrations […] We have started a deeper exploration of the less accessible information that is out there, such as scientific papers and open source code publications, to develop an understanding of these algorithms’ inner workings, and translate it into visual metaphors that can contribute to a public debate.