RHIZOMATIKS RESEARCH ELEVENPLAY KYLE MCDONALD
Human performers meet computer-generated bodies; calculated visualizations of movement meet whirring drones. Artificial intelligence and self-learning machines conjure up an unprecedented palette of movement designs, designs that far transcend the limits of human articulation and permit a deep look into the abstract world of data processing. The Rhizomatiks Research team, led by the Japanese artist, programmer, interaction designer and DJ Daito Manabe, pools its collective power with a number of experts, among them the five ELEVENPLAY dancers of choreographer MIKIKO and the programming artist Kyle McDonald. The result is a breathtaking, magnificently executed spectacle; in short: visually stunning.
Driessens & Verstappen
Breed (1995-2007) is a computer program that uses artificial evolution to grow very detailed sculptures. The purpose of each growth is to generate by cell division from a single cell a detailed form that can be materialised. On the basis of selection and mutation a code is gradually developed that best fulfils this “fitness” criterion and thus yields a workable form. The designs were initially made in plywood. Currently the objects can be made in nylon and in stainless steel by using 3D printing techniques. This automates the whole process from design to execution: the industrial production of unique artefacts.
Computers are powerful machines for harnessing artificial evolution to create visual images. To achieve this we need to design genetic algorithms and evolutionary programs. Evolutionary programs allow artefacts to be “bred” rather than designed by hand. Through a process of mutation and selection, each new generation is increasingly well adapted to the desired “fitness” criteria. Breed is an example of such software: it uses artificial evolution to generate detailed sculptures. The algorithm we designed is based on two different processes: cell division and genetic evolution.
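Breed's actual code is not public, but the mutate-and-select loop described above can be sketched in miniature. This is a toy illustration only: the hypothetical `symmetry` fitness stands in for Breed's real criterion of a detailed, materialisable form, and all constants (population size, mutation rate) are assumptions.

```python
import random

def evolve(fitness, length=16, population=32, generations=60, seed=1):
    """Minimal elitist evolution: keep the best genome,
    breed mutated copies, select the fittest, repeat."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # Mutation: each offspring flips each bit with 5% probability.
        offspring = [
            [b ^ (rng.random() < 0.05) for b in best]
            for _ in range(population)
        ]
        # Selection: the fittest variant (or the parent) survives.
        best = max(offspring + [best], key=fitness)
    return best

# Hypothetical fitness criterion: reward mirror-symmetric genomes.
def symmetry(genome):
    return sum(a == b for a, b in zip(genome, reversed(genome)))

winner = evolve(symmetry)
print(symmetry(winner))  # rises toward the maximum of 16 over the run
```

Breed's cell-division process replaces the bitstring with a growing 3D structure, but the gradual development of a code that best fulfils the fitness criterion follows the same loop.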
PETER WILLIAM HOLDEN
Like many of my generation, I grew up suckling on a cathode-ray tube and bathing in radio waves. All of this is represented within my work, in a collage of movement, light and sound. With my current work I am exploring ways to dissolve the boundaries between cinematography and sculpture. My recent investigations of this theme have involved the usage of computers combined with mechanical elements to create mandala like installations. These installations are my medium and I use them to create ephemeral animations. This ephemeral choreography of movement is the focal point of my work. I believe this fascination with moving images and the transformation of objects stems from my youth, when the first home computers of the 1980s gave me a glimpse into the wonderful world of applied mathematics. On those computers it was possible, with simple code, to generate fantastic abstract patterns and sounds, and that encounter forever shattered the boundary in my mind between the abstract and the real. Dance also plays a significant role in my work; I was drawn to electronic music. Electro, with its synthesized sound, introduced me to break-dancing, and my soul was captured by the beauty of choreographed physical movement.
Paul Robertson is an Australian animator and digital artist known for the pixel art he has used in short films and video games. He is best known for Scott Pilgrim vs. the World: The Game and the more recent release Mercenary Kings. Apart from his seasoned career as a game designer and film-maker, Robertson has recently gained attention on Tumblr with these GIFs. His taste for flashing neon colors, geometric shapes, Japanese character animation, and 1990s computer imagery marks his work as heavily influenced by the Seapunk/Vaporwave aesthetic.
The Pangolin Scales Project demonstrates a 1,024-channel BCI (brain-computer interface) that is able to extract information from the human brain with unprecedented resolution. The extracted information is used to control the Pangolin Scale Dress interactively via 64 outputs. The dress is also inspired by the pangolin: cute, harmless animals sometimes known as scaly anteaters. They have large, protective keratin scales covering their skin (they are the only known mammals with this feature) and live in hollow trees or burrows. Pangolins are considered an endangered species, and some have theorized that the recent coronavirus may have emerged from the consumption of pangolin meat. Wipprecht’s main challenge in the project’s development was not to overload the dress with additional weight. She teamed up with 3D printing experts Shapeways and Igor Knezevic to create an ‘exo-skeleton’-like dress frame (3 mm) that was light enough to be worn but sturdy enough to hold all the mechanics in place.
Digital image projection, software, real-time internet-based data, and sound
Installation shot at St. Saviour Church, London
Tromarama is an art collective founded in 2006 by Febie Babyrose, Herbert Hans and Ruddy Hatumena. Engaging with the notion of hyperreality in the digital age, their projects explore the interrelationship between the virtual and the physical world. Their works often combine video, installation, computer programming and public participation, depicting the influence of digital media on society’s perception of its surroundings. They live and work between Jakarta and Bandung.
HYE YEON NAM
“Please smile” is an exhibit involving five robotic skeleton arms that change their gestures depending on a viewer’s facial expressions. It consists of a microcontroller, a camera, a computer, five external power supplies, and five plastic skeleton arms, each with four motors. It combines elements of mechanical engineering and computer vision to serve artistic expression through a robot.
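As a loose illustration of the expression-to-gesture mapping, the sketch below turns a smile score into target angles for each motor of each arm. Everything here is an assumption for illustration, not a detail of Nam's implementation: the score range, the 90-degree sweep, and the staggered reaction of the arms.

```python
def arm_gestures(smile_score, arms=5, motors_per_arm=4):
    """Map a facial-expression score in [0, 1] (0 = neutral, 1 = broad
    smile) to target angles, in degrees, for each motor of each arm.
    Arms raise one after another as the score rises."""
    if not 0.0 <= smile_score <= 1.0:
        raise ValueError("smile_score must be in [0, 1]")
    gestures = []
    for arm in range(arms):
        # Each arm reacts later than its neighbour, so motion
        # spreads across the piece as the smile widens.
        local = max(0.0, min(1.0, smile_score * arms - arm))
        gestures.append([round(90 * local, 1)] * motors_per_arm)
    return gestures

print(arm_gestures(0.5))  # first arms fully raised, last ones still down
```

In the real piece the score would come from a computer vision model reading the camera feed, and the angles would be sent to the motor controllers.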
SMSMS-SMS Mediated Sublime
CIMs-Collective Intelligence Machines
“In 2000, I began to connect some of these computers to the mobile phone network (SMSMS-SMS Mediated Sublime, and CIMs-Collective Intelligence Machines). This enabled me to make interactive and multiple installations, connecting various locations.
In this case the flow of images was made visible by large-scale video-projections and the members of the audience were able to modify their characteristics in real time, by sending new inputs to the system from their own phones. This was done in a similar way to certain applications used in electronic democracy. What I had in mind was art which was generative, interactive and public.”
Created by mathematician, digital artist and Emmy award-winning supervisor of computer-generated effects Andy Lomas, Morphogenetic Creations is a collection of works that explore the nature of complex forms that can be produced by digital simulation of growth systems. These pieces start with a simple initial form which is incrementally developed over time by adding iterative layers of complexity to the structure.

The aim is to create structures emergently: exploring generic similarities between many different forms in nature rather than recreating any particular organism. In the process he is exploring universal archetypal forms that can come from growth processes rather than top-down externally engineered design.

Programmed using C++ with CUDA, the series uses a system of growth by deposition: small particles of matter are repeatedly deposited onto a growing structure to build incrementally over time. Rules are used to determine how new particles are created, and how they move before being deposited. Small changes to these rules can have dramatic effects on the final structure, in effect changing the environment in which the form is grown.

To create these works, Andy uses the GPU as a compute device rather than as a display device. All the data is held in memory on the GPU and various kernel functions are called to do things like apply forces to the cells, make cells split, and render the cells using ray-tracing. The simulations and rendering for each of the different animated structures within this piece take about 12 hours to run, Andy explains. By the end of the simulations there are over 50,000,000 cells in each structure.

The Cellular Forms use a more biological model, representing a simplified system of cellular growth. Structures are created out of interconnected cells, with rules for the forces between cells, as well as rules for how cells accumulate internal nutrients.
When the nutrient level in a cell exceeds a given threshold the cell splits into two, with both the parent and daughter cells reconnecting to their immediate neighbours. Many different complex organic structures are seen to arise from subtle variations on these rules, creating forms with strong reminiscences of plants, corals, internal organs and micro-organisms.
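The nutrient-threshold split rule in the last sentence can be sketched in miniature. Lomas's system runs in C++/CUDA over millions of interconnected 3D cells; the ring of scalar "cells" below is only an illustration of the rule itself, and the threshold, nutrient rate, and step count are all invented for the example.

```python
import random

THRESHOLD = 1.0  # nutrient level at which a cell divides

def grow(steps, seed=0):
    """Grow a ring of cells: each step, every cell accumulates a random
    amount of nutrient; a cell that crosses THRESHOLD splits in two,
    with the daughter inserted beside it (reconnecting to neighbours)."""
    rng = random.Random(seed)
    cells = [0.0, 0.0, 0.0]  # nutrient level per cell
    for _ in range(steps):
        i = 0
        while i < len(cells):
            cells[i] += rng.uniform(0.0, 0.3)
            if cells[i] > THRESHOLD:
                # Split: parent and daughter share the nutrient.
                half = cells[i] / 2
                cells[i] = half
                cells.insert(i + 1, half)
                i += 1  # skip the freshly inserted daughter
            i += 1
    return cells

population = grow(40)
print(len(population))  # the population grows roughly exponentially
```

In the real Cellular Forms, subtle variations on rules like this one, plus inter-cell forces, are what produce the plant-, coral- and organ-like structures.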
Universe of Water Particles
Universe of Water Particles is a waterfall created in a computer-simulated environment. A virtual rock is first sculpted, and computer-generated water composed of hundreds of thousands of water particles is then poured over it. The computer calculates the movement of these particles to produce an accurate simulation of a waterfall flowing according to physical laws. Then 0.1 percent of the particles are selected, and lines are drawn in relation to them. The sinuosity of the lines depends on the overall interaction between the water particles and forms the magnificent waterfall seen on the screen.
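The pipeline of simulating a large particle population and then tracing only 0.1 percent of it as lines can be sketched as follows. The "physics" here is a crude stand-in for the artwork's simulation, and all constants (gravity step, turbulence, counts) are illustrative assumptions.

```python
import random

def simulate_waterfall(n_particles, steps, trace_fraction=0.001, seed=7):
    """Drop particles under gravity; only a small selected fraction
    has its full trajectory recorded as a line, as in the artwork."""
    rng = random.Random(seed)
    traced = set(rng.sample(range(n_particles),
                            int(n_particles * trace_fraction)))
    lines = {i: [] for i in traced}
    for i in range(n_particles):
        x, y = rng.uniform(0.0, 1.0), 1.0   # start at the top of the rock
        vx, vy = 0.0, 0.0
        for _ in range(steps):
            vy -= 0.01                       # gravity
            vx += rng.uniform(-0.002, 0.002) # turbulence stand-in
            x, y = x + vx, y + vy
            if i in traced:
                lines[i].append((x, y))
    return lines

lines = simulate_waterfall(n_particles=10_000, steps=20)
print(len(lines))  # 0.1% of 10,000 particles -> 10 traced lines
```

Rendering only the traced subset is what turns an opaque mass of particles into the sinuous line work seen on screen.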
Valentin Spiess of iart explains how the system works. “Behind the canvas there are more than 10,000 extendable telescopic cylinders, each topped by a sphere with a colored LED. When the person to be portrayed enters the selfie booth, five pictures are taken, which a computer assembles into a 3D rendering. Once the rendering is finished, the information is sent to the system, which adjusts the telescopic cylinders so that they move into the right position to make the chosen face appear.” The system was built so that it can be reused. Like a kind of mobile Mount Rushmore, for everyone.
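The step from 3D rendering to cylinder positions amounts to mapping a depth value to an extension length per cylinder. The toy sketch below assumes a normalized depth map and a 300 mm travel; both are invented for illustration, not specifications of the iart system.

```python
def cylinder_positions(depth_map, max_extension_mm=300):
    """Map a normalized face depth map (values in [0, 1], one entry
    per cylinder) to extension lengths in millimetres for the
    telescopic cylinders behind the canvas."""
    return [
        [round(d * max_extension_mm) for d in row]
        for row in depth_map
    ]

# A toy 3x3 "face": the nose (centre) protrudes the furthest.
face = [
    [0.2, 0.3, 0.2],
    [0.3, 1.0, 0.3],
    [0.2, 0.4, 0.2],
]
print(cylinder_positions(face))  # the centre cylinder extends fully
```

The real installation does this for more than 10,000 cylinders, effectively using them as a physical depth buffer.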
Random String of Emotions
Emotion recognition software analyzes our emotions by deconstructing our facial expressions into the temporal segments that produce the expression, called Action Units (AUs; developed by Paul Ekman), and breaking them down into percentages of six basic emotions: happy, sad, angry, surprised, scared, and disgusted. In this video the artist uses this decoding system to turn the process around. Here, instead of detecting AUs, a computer is used to generate a random string of AUs. In this way complex and perhaps even nonexistent emotional expressions are discovered. These randomly formed expressions, played in random order, are then analyzed again by professional emotion recognition software.
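The generation side, producing a random string of AUs rather than detecting them, might look like the sketch below. The AU numbers and names are standard FACS labels; the intensity range and the choice of three AUs per expression are assumptions made for the example.

```python
import random

# A few of Ekman's Action Units (standard FACS labels).
ACTION_UNITS = {
    1: "inner brow raiser", 4: "brow lowerer", 6: "cheek raiser",
    9: "nose wrinkler", 12: "lip corner puller", 15: "lip corner depressor",
}

def random_expression(rng, n=3):
    """Generate a random string of Action Units with intensities:
    expressions nobody asked a face to make."""
    aus = rng.sample(sorted(ACTION_UNITS), n)
    return [(au, round(rng.uniform(0.2, 1.0), 2)) for au in aus]

rng = random.Random(42)
for au, intensity in random_expression(rng):
    print(f"AU{au} ({ACTION_UNITS[au]}): intensity {intensity}")
```

Feeding such randomly composed expressions back into recognition software is what lets the piece probe combinations outside the six-emotion vocabulary.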
Catala’s work investigates emotions and empathy in our technological, information-driven society. His computer-generated Emobot appears at once familiar and strange as it utters simple statements about its feelings. The visually mesmerizing figure might serve as an avatar onto which we project the expression of strong feelings when they are delivered digitally, an increasingly prevalent cultural phenomenon. Listening to a not-quite-human presence express its vulnerability may have less impact on us than hearing a real person communicate similar emotions. Nevertheless, the deliberate and repetitive manner in which Emobot articulates powerful sentiments affords us ample opportunity to reflect on the existential quality of their meaning.
“Aposematic Jacket” is a wearable computer for self-defense. The lenses on the jacket broadcast the warning signal “I can record you” to deter potential attacks. When the wearer feels threatened and presses a button, the jacket records the scene in 360 degrees and transmits the images to the web.
Sougwen Chung is an internationally renowned multidisciplinary artist who uses hand-drawn and computer-generated marks to examine the closeness between person-to-person and person-to-machine communication. She is a former researcher at the MIT Media Lab and is currently an artist in residence at Bell Labs and the New Museum of Contemporary Art in New York. Her speculative critical practice spans installation, sculpture, still image, drawing and performance. Drawing Operations Unit: Generation 1 is the first stage of an ongoing study of human-robot interaction as artistic collaboration.
Pianographique is a series of collaborations between the real-time visual artist Cori O’Lan and Maki Namekawa. The visualizations are not videos running more or less in sync with the music, nor is it the musician playing along to prefabricated material; they are created together in the moment of the performance. As with most of Cori O’Lan’s visualizations, all graphic elements are derived directly from the acoustic material, that is, from the sound of the music. For this purpose the piano is picked up with microphones, and these signals are then converted by the computer into a wide range of information about frequency, pitch, volume, dynamics and so on. This information is in turn used to control the graphics computer, creating graphic elements or modifying them in manifold ways. Because these processes take place in real time, there is a direct and expressive connection between the music and its visual interpretation. The visualization is not really “created” by the computer but much more by the music itself; the computer is rather the instrument, the brush, that is played by the music.
Conversations with Bina
Artist Stephanie Dinkins and Bina48, one of the world’s most advanced social robots, test this question through a series of ongoing videotaped conversations. This art project examines the possibility of a long-term relationship between a person and an autonomous robot built on emotional interaction, potentially revealing important aspects of human-robot interaction and of the human condition. The relationship is developed with Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second), an intelligent computer from the Terasem Movement Foundation that is said to be capable of independent thought and emotion.
Maria Guta and Adrian Ganea
Performance & live computer generated simulation
A postmodern fairytale, Cyberia takes place somewhere in a cold, distant East, stretching between an endless imaginary realm and a vast physical space. It is a westward journey towards a promised future with no arrival and no return. There is no here or there, only a twilight zone between a departure point and a simulated destination. Between digital video projections and a physical setting, using the mechanics of a video-game engine with a motion-capture suit, Cyberia is the simulation of an endless pre-climax state in which a performer and a CG avatar dance as one to the rhythms of an imaginary West. In a world oversaturated with digital data, mysticism and the paranormal are as popular as ever. Emerging technologies are increasingly incorporated into a form of postmodern spiritualism, as Arthur C. Clarke points out: “Any sufficiently advanced technology is indistinguishable from magic.”
Fanscreen comprises several panels made up of computer fans linked together to form a larger ‘screen’. These panels are arranged side by side in a large circle that the viewer can enter. Each fan is programmed independently, acting as an individual “pixel” of the larger screen. Moving in synchrony, the programmed fans produce a kind of “film” of various abstract movements.
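The fans-as-pixels idea boils down to mapping each pixel of an animation frame to a per-fan speed. The sketch below assumes a greyscale frame and a 2,000 RPM ceiling; both are illustrative, not details of the actual piece.

```python
def frame_to_fan_speeds(frame, max_rpm=2000):
    """Treat each fan as one pixel: map a greyscale frame (0-255)
    to per-fan target speeds, so the array 'plays' the frame."""
    return [
        [round(pixel / 255 * max_rpm) for pixel in row]
        for row in frame
    ]

# One frame of an abstract animation on a 2x4 panel of fans.
frame = [
    [0, 64, 128, 255],
    [255, 128, 64, 0],
]
print(frame_to_fan_speeds(frame))
```

Stepping through frames in lockstep across all panels is what produces the synchronized "film" of abstract movement.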
The mechanical mirrors are made of various materials but share the same behavior and interaction; any person standing in front of one of these pieces is instantly reflected on its surface. The mechanical mirrors all have video cameras, motors and computers on board and produce a soothing sound as the viewer interacts with them. Troll Mirror was commissioned by Target and is made of pairs of pink and blue troll dolls. Each troll-doll pair can rotate so that either the pink or the blue troll faces the front. The result is a colorful reflection of the viewer’s outline with playful, colorful transitions.
Tage Ohne Stunden
In his eight-minute video installation “Tage ohne Stunden” (Days Without Hours), the Swiss computer artist Yves Netzhammer conjures up an artificial world. The viewer’s field of vision is part of the work: the viewer is drawn into surreal film sequences in which the figures move like puppets, drawn by software that gives them a smooth, artificial surface.
Quantum Memories is Refik Anadol Studio’s epic-scale investigation of the intersection between Google AI’s quantum supremacy experiments, machine learning, and the aesthetics of probability. The technological and digital advancements of the past century could well be defined by humanity’s eagerness to make machines go to places that humans could not, including the spaces inside our minds and the non-spaces of our un- or subconscious acts. Quantum Memories utilizes Google AI’s most cutting-edge publicly available quantum computation research data and algorithms to explore the possibility of a parallel world, processing approximately 200 million nature and landscape images through artificial intelligence. These algorithms allow us to speculate about alternative modalities inside the most sophisticated computer available, and to create new quantum noise-generated datasets as building blocks of these modalities. The 3D visual piece is accompanied by an audio experience that is also based on quantum noise-generated data, offering an immersive experience that further challenges the notion of mutual exclusivity. The project is both inspired by and a speculation on the many-worlds interpretation of quantum physics, the theory that many parallel worlds exist at the same space and time as our own.
‘SnP’, 2018, recycled plastic, injection moulded
“Widrig’s art breaks down the boundaries between disciplines; borrowing tools traditionally associated with one industry and using them in other fields, in often unanticipated and exciting ways. Widrig uses computer simulation processes and advanced technologies adopted from the special effects business to create sculptural 3D-printed craftwork—digital designs materialize into intricate sculptures in glass or recycled plastic and furniture pieces with impeccable undulated thin surfaces,” Devid Gualandris
“*(asterisk) is an installation comprised of an armillary sphere apparatus rotating an apple in 360 degrees and four cameras omnidirectionally scanning the surface of the apple in real-time. Computers calculate the similarity between fragmentary images of the present apple and apples I’ve eaten before, as if they were my memory of apples. The computations and compared apple-fragment images are shown on four displays respectively.” Noriyuki Suzuki
Waldian is a standalone, wall-hanging sound and light fixture capable of playing a near-infinite number of melodic permutations over a predetermined musical scale, complemented by emerging light patterns from twelve separate LEDs spread across the sculpture. In technical terms, Waldian contains two oscillators, an envelope generator and a voltage-controlled amplifier, all controlled by impulses from a network of logic gates akin to those of early computers. These impulses are essentially the nerves of the electronic ecosystem, deciding over pitch and amplitude changes as well as creating bursts of light to highlight the entrance of each note. Finally, there is a tube overdrive stage that creates harmonics and subharmonics based on how far apart the two oscillators are in frequency. Most parameters are customizable, such as the aforementioned pitch, amplitude and overdrive, but the responsiveness and envelope of the light bursts can also be adjusted, directly affecting the appearance of the light patterns.
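The core of that signal chain, two oscillators feeding an envelope-controlled amplifier, can be approximated in a few lines. The sample rate, decay constant and detuned frequencies below are illustrative assumptions; the logic-gate network and tube overdrive stage are omitted.

```python
import math

RATE = 8000  # samples per second (low, for a quick sketch)

def waldian_voice(freq_a, freq_b, seconds=0.25):
    """Two sine oscillators summed, then shaped by an exponential
    envelope acting as the voltage-controlled amplifier."""
    n = int(RATE * seconds)
    samples = []
    for i in range(n):
        t = i / RATE
        osc = math.sin(2 * math.pi * freq_a * t) \
            + math.sin(2 * math.pi * freq_b * t)
        envelope = math.exp(-6 * t)           # fast decay, plucked feel
        samples.append(0.5 * osc * envelope)  # the "VCA"
    return samples

note = waldian_voice(220.0, 331.0)  # detuned pair -> audible beating
print(max(abs(s) for s in note))
```

In the sculpture, impulses from the logic-gate network would trigger each such note and the matching LED burst; here a single call stands in for one trigger.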
The new media artist group Intermedia Chef explores dynamic artwork generated by computer-controlled sound as a form of data visualization. Their sound installation translates sound waves into physical movement. Moreover, they aim to turn the energy of the sound wave into a kinetic form of visual art.
If “Nervelevers” is anything to go by, Squarepusher’s upcoming album, Be Up A Hello, will be the closest thing we’ve had to vintage Squarepusher in years. This will be welcome news for many fans. Much like the best of Squarepusher’s catalogue, there’s a brilliant live quality to “Nervelevers.” His music often doesn’t sound like a single producer staring into a computer, but more like an incredibly tight jazz band, totally in sync. The track might not feature his virtuosic bass playing, but you can picture him slapping his bass guitar during its frantic acid line. You’re pulled through a chaotic wormhole, with only a brief respite when the glitched jungle drums break down to an almost hip-hop stagger. It’s fast, unpredictable, and most importantly, fun. Only a handful of artists can make music this complex feel like such a good time.
Nicolas Sassoon and Rick Silva
SIGNALS is a collaborative project by artists Nicolas Sassoon and Rick Silva that focuses on immersive audio-visual renderings of altered seascapes. Sassoon and Silva share an ongoing theme in their individual practices; the depiction of wilderness and natural forms through computer imaging. Created by merging their respective fields of visual research, SIGNALS features oceanic panoramas inhabited by unnatural substances and enigmatic structures. The project draws from sources such as oceanographic surveys, climate studies and science-fiction to create 3D generated video works and installations that reflect on contamination, mutation and future ecologies.
The piece is the latest instalment of their ongoing Spectra series, a merging of physical and virtual sculptures that take inspiration from space, technology, and our relationships to them, to provide elegant and sensory experiences using sound, light, and reflection. Spectra-3’s design and movement are inspired by the radio telescopes of the Very Large Array (VLA) located on the Plains of San Agustin in New Mexico. The piece combines computer-aided design with real-time input from the public’s movements to inform its physical actions as it rotates on motors, augmenting the space with the enchanting hues and patterns of reflected light and spatialized sound.
your unerasable text
“your unerasable text” is an interactive installation dealing with the topics of data storage and the elimination of data. The installation can be placed in an exhibition, but ideally it is shown in a window on public space, where passers-by can use it 24 hours a day. The participant is asked to send a text message to the number written on a sign next to the installation: “send your unerasable textmessage to +43 664 1788374”. The receiving mobile phone transfers the message to a computer, which lays it out automatically. It is then printed on a sheet of DIN A6 paper, which falls directly onto a paper shredder. There the message remains readable for a few moments before being destroyed. The shredded paper forms a growing heap on the floor, reminiscent of a generative graphic.
Procedural cloth V//002
“I’m continuing experiments with procedurally generated structures. This time I’m implementing an algorithm assembled in Houdini over a female body. All meshes were generated in Houdini, then uploaded to Sketchfab. All images were made from the Sketchfab model. You can check it in 3D in any browser via the link in my profile.” Oleg Soroko
Ben Katz & Jared Di Carlo
The Rubik’s Contraption
“That was a Rubik’s cube being solved in 0.38 seconds. The time is from the moment the keypress is registered on the computer, to when the last face is flipped. It includes image capture and computation time, as well as actually moving the cube. The motion time is ~335 ms, and the remaining time image acquisition and computation. For reference, the current world record is/was 0.637 seconds. The machine can definitely go faster, but the tuning process is really time consuming since debugging needs to be done with the high speed camera, and mistakes often break the cube or blow up FETs. Looking at the high-speed video, each 90 degree move takes ~10 ms, but the machine is actually only doing a move every ~15 ms. For the time being, Jared and I have both lost interest in playing the tuning game, but we might come back to it eventually and shave off another 100 ms or so.” Ben Katz
Studio A N F
Computer Visions 2
After decades of trying to construct an apparatus that can think, we may be finally witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces does not only know what a human face looks like, it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also been successfully training such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts, and thoughts from a person’s mind. This is where the inflection likely happens, as a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success. Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts […]
Upload not complete
The work magnifies the process of virtual-real fusion: the process of uploading human consciousness to digital space. When visual perception has been lost, can people still recognize the body through touch, the feel of wind, and the sounds and vibrations all around them? Created in cooperation with the Taiwanese Non-Visual Aesthetic Education Association, the piece asks experiencers to use their non-visual senses to experience media art in a digital space where the computer fully understands each experiencer’s location, allowing them to listen, move, touch objects, feel vibrations and come to know the space.
Nadi Generative Art
Nadi is a digital display of the kinetics and energetics of the body movements involved in yoga. The visuals are created by investigating the flow of data, using the human body as a vehicle. With the support of computer vision technologies, a visual trail is formed by tracking the body’s movements during yogic postures. Inspired by Indian yogic science, we have visually depicted aspects of light, matter and energy in our forms. The generative nature of the visuals comes from the digital juxtaposition of the poses that the body generates.
dravb consists of an 8×8 LED matrix and two proximity sensors. It uses two ESP8266 microcontrollers as ADCs to map hand movement to the matrix, but could also be used for musical purposes. I wanted it to have the look and feel of an old analog computer, with a clunky interface and dubious visual feedback.
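The two-sensors-to-matrix mapping might be sketched like this. The 400 mm sensing range and the assignment of one sensor to the column and the other to the row are assumptions about dravb's behaviour, made for illustration.

```python
def sensor_to_pixel(left_mm, right_mm, max_range_mm=400):
    """Map two proximity-sensor distances to one lit cell on the
    8x8 LED matrix: one sensor drives the column, the other the row."""
    def axis(reading_mm):
        clamped = max(0, min(max_range_mm, reading_mm))
        return min(7, int(clamped / max_range_mm * 8))
    return axis(left_mm), axis(right_mm)

print(sensor_to_pixel(100, 350))  # -> (2, 7)
```

On the device itself, the ESP8266s would feed readings like these into the matrix driver continuously, which is what gives the interface its clunky, analog-computer feel.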
gods and Pilgrims
New media artist Vvzela Kook works in various audiovisual media, including performance, theatre, computer graphics and drawing, to explore contemporary performing arts, such as the possibility that dance and computer-generated arts could co-exist. Kook’s video works combine technology with her artistic practice to reproduce and convert urban cityscapes into an integrated virtual experience. The condensed textures in her works connect with multiple sensory levels in our perception and reintroduce the unexplored potential of video as a medium.
Eyeris is a cultural prosthetic that renders the user dependent on human touch for sight. While many of today’s digital devices extend our abilities to connect with each other, their disabling side can be seen in our loss of tangible human interaction. I made this piece to explore the importance of human interdependency in a society living under the myth of autonomy, driven by the technological symbiosis between human and computer. Eyeris is a mechanically operated electronic device powered by digital input, deliberately over-engineered to call attention to the social and behavioural conditioning imposed on us through the less discreet technological devices that we assimilate on a daily basis.
UVA UNITED VISUAL ARTISTS
Blueprint embraces the relationship and parallels between art and science, creating compositions through the mathematical principles of logic that underpin life. Exploring analogies between DNA and computer code, UVA have created the Blueprint series; works that pair genetics and code as the blueprints of artificial and natural systems. As the work slowly changes over time, patterns fluctuate between varying degrees of complexity. Blueprint uses the basic concepts of evolution to create an ever-transitioning image. With cells literally transferring their genes to their adjoining others, colour flows like paint across the canvas. Drawing up a unique colourful composition every minute, Blueprint presents the unlimited outcome that results from a single algorithm; a single set of rules.
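The mechanic described above, cells passing their genes to adjoining cells so that colour flows across the canvas, can be sketched as a tiny cellular model. This is a minimal illustration of the concept, not UVA's actual code; grid size and transfer probability are invented.

```python
import random

# Minimal sketch of the idea described above (not UVA's software):
# a grid of cells, each carrying a "gene" (here an 8-bit colour value).
# Each generation, a cell may copy the gene of a random 4-neighbour,
# so colour spreads across the grid from a single set of rules.

SIZE = 8  # illustrative grid size

def step(grid, rng):
    """One generation: each cell may inherit its colour from a neighbour."""
    new = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            if rng.random() < 0.5:  # gene-transfer probability (arbitrary)
                dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                new[y][x] = grid[(y + dy) % SIZE][(x + dx) % SIZE]
    return new

rng = random.Random(1)
initial = [[rng.randrange(256) for _ in range(SIZE)] for _ in range(SIZE)]
grid = [row[:] for row in initial]
for _ in range(10):
    grid = step(grid, rng)
```

Because genes are only ever copied, never mutated here, every colour in the evolved grid already existed in the first generation; the composition changes while the palette is conserved.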
Where the City Can’t See
Directed by speculative architect Liam Young and written by fiction author Tim Maughan, ‘Where the City Can’t See’ is the world’s first narrative fiction film shot entirely with laser scanners, designed in collaboration with Alexey Marfin. The computer vision systems of driverless cars, Google Maps, urban management systems and CCTV surveillance are now fundamentally reshaping urban experience and the cultures of our cities. Set in the Chinese-owned and -controlled Detroit Economic Zone (DEZ) and shot using the same scanning technologies used in autonomous vehicles, we see this near-future city through the eyes of the robots that manage it. Exploring the subcultures that emerge from these new technologies, the film follows a group of young car factory workers across a single night, as they drift through the smart city point clouds in a driverless taxi, searching for a place they know exists but that the map doesn’t show.
Conversations with Bina 48
Artist Stephanie Dinkins and Bina48, one of the world’s most advanced social robots, test this question through a series of ongoing videotaped conversations. This art project explores the possibility of a long-term relationship between a person and an autonomous robot, one based on emotional interaction that potentially reveals important aspects of human-robot interaction and the human condition. The relationship is being built with Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second), an intelligent computer built by the Terasem Movement Foundation that is said to be capable of independent thought and emotion.
REVITAL COHEN & TUUR VAN BALEN
A number of life-support machines are connected to each other, circulating liquids and air in an attempt to mimic a biological structure.
The Immortal investigates human dependence on electronics, the desire to make machines replicate organisms and our perception of anatomy as reflected by biomedical engineering.
A web of tubes and electric cords is interwoven in closed circuits through a Heart-Lung Machine, a Dialysis Machine, an Infant Incubator, a Mechanical Ventilator and an Intraoperative Cell Salvage Machine. The organ-replacement machines operate in orchestrated loops, keeping each other alive through the circulation of electrical impulses, oxygen and artificial blood.
Salted water acts as a blood replacement: throughout the artificial circulatory system minerals are added and filtered out again, the blood is oxygenated via contact with the oxygen cycle, and an ECG device monitors the system’s heartbeat. As the fluid pumps around the room in a meditative pulse, the sound of mechanical breath and the slow humming of motors resonate in the body through a comforting yet disquieting soundscape.
Life-support machines are extraordinary devices: computers designed to activate our bodies when anatomy fails, hidden away in hospital wards. Although they are designed as the ultimate utilitarian appliances, they are extremely meaningful and carry a complex social, cultural and ethical subtext. While life-prolonging technologies are invented as emergency measures to combat or delay death, my interest lies in considering these devices as a human enhancement strategy. This work is a continuation of my investigation of the patient as a cyborg, questioning the relationship between medicine and techno-fantasies about mechanical bodies, hyper-abilities and posthumanism.
Transfiguration (2020) is a reworking of the Universal Everything studio classic from 2011, The Transfiguration. The Transfiguration was first shown at the studio’s first major solo exhibition Super-Computer Romantics at La Gaite Lyrique, Paris. Now completely remade using the latest procedural visual effects software, the updated CGI artwork brings new life to the ever-evolving walking figure, with a new foley-based soundtrack by Simon Pyke.
In New Humans, emergent gatherings of synthetic humans rise from the surface of a black ferrofluid pool. Appearing to morph like a supernatural life form, these dynamic clusters of magnetic liquid, produced by machine-learning processes, are images of communities of synthetic people: hybrid profiles modelled from actual DNA, fitness and dating-profile data sets sourced from public and leaked caches. The work questions how we can radically reconceptualise the “user profile” to embody a self whose bounds are indefinable and multiple. Generative algorithm using machine learning (GAN, t-SNE) and fluid simulation (Navier-Stokes), contour generation (OpenCV), user-profile data caches (DNA, fitness and dating), software production (Processing), ferrofluid, custom electromagnet matrix, custom PCB control system, computer, steel, wood, aluminium.
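One stage of the pipeline listed above, contour generation, the role OpenCV plays in the installation, can be sketched without any libraries: given a 2D scalar field (such as a simulated fluid density), the contour is the set of cells above a threshold that border a cell below it. This is a hedged, pure-Python illustration; the field and threshold here are made up, not the work's data.

```python
# Pure-Python sketch of contour extraction from a 2D scalar field,
# the step OpenCV's contour finding performs in the actual work.
# The test field below is an invented 3x3 "blob" in a 5x5 grid.

def boundary_cells(field, threshold):
    """Cells at or above threshold with at least one 4-neighbour below it."""
    h, w = len(field), len(field[0])
    inside = [[field[y][x] >= threshold for x in range(w)] for y in range(h)]
    contour = set()
    for y in range(h):
        for x in range(w):
            if not inside[y][x]:
                continue
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nx, ny = x + dx, y + dy
                # Out-of-bounds or below-threshold neighbour => boundary cell
                if not (0 <= nx < w and 0 <= ny < h) or not inside[ny][nx]:
                    contour.add((x, y))
                    break
    return contour

# A square blob: cells (1..3, 1..3) are "dense", everything else empty.
field = [[1.0 if 1 <= x <= 3 and 1 <= y <= 3 else 0.0 for x in range(5)]
         for y in range(5)]
edge = boundary_cells(field, 0.5)
```

For the blob above, the centre cell is fully surrounded and drops out, leaving only the eight-cell ring as the contour.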
Grid 4×4 is an autonomous apparatus that creates an endless series of geometric forms. The vertices of the form are located on discs, each of them rotatable by a motor. Computer software plays tenderly with the rotation of the discs and generates an endless variety of geometric and chaotic patterns.
Christa Sommerer and Laurent Mignonneau
Interactive Plant Growing
Interactive Plant Growing is an installation that deals with the principle of the growth of virtual plant organisms and their change and modification in real time in 3D virtual space. These modifications of pre-defined “artificially living plant organisms” are mainly based on the principle of development and evolution in time. The artificial growing of program-based plants expresses the desire to discover the principle of life as defined by the transformations and morphogenesis of certain organisms. Interactive Plant Growing connects the real-time growing of virtual plants in the 3D space of the computer to real living plants, which can be touched or approached by human viewers.
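Rule-based growth of virtual plants of the kind described above is classically done with L-systems, where a string of drawing commands is rewritten generation by generation. The sketch below is an illustration of that general technique, not Sommerer and Mignonneau's actual algorithm; the single branching rule is a textbook Lindenmayer example.

```python
# Minimal L-system sketch of rule-based virtual plant growth
# (illustrative only, not the artists' code). "F" draws a segment,
# "+"/"-" turn, and "[" / "]" push/pop the drawing state, so each
# rewrite adds another level of branching.

RULES = {"F": "F[+F]F[-F]F"}  # one classic branching rule

def grow(axiom, generations):
    """Rewrite every symbol of the string in parallel, `generations` times."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

plant = grow("F", 2)  # two generations of growth from a single stem
```

A turtle-graphics interpreter would then walk the final string to draw the plant; interactivity, as in the installation, amounts to triggering further rewrites when a viewer touches the real plant.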
Give my Creation… Life!
Give my creation… Life! is a project linking art, science and technology. It is based on generating energy from the beating of a heart, with the aim of granting autonomy to a machine. In pursuing this subversive goal, multiple issues have been addressed, such as extending the life of a removed organ, feeding it nutrients artificially, and using it as a source of natural energy, among others.
The Other in You
The Other in You, developed as a new way to experience dance, realises a novel kind of dance audience experience. To create the work we assembled cutting-edge computer graphics, a haptic feedback device that expresses the dance directly to the body, 16-channel spatial sound, and research on virtual reality techniques. How can we relate to others, who are supposed to be distant from us? Do we really know what it is to “see”? The Other in You is an attempt to revive the notion of our body in relation to an object, a notion which had been forgotten in the act of watching. Virtual reality technology enables us to bring the act of watching, once detached from the body, back to where it belongs. And as a result, it reconstructs the notion of “seeing“.
Rhizomatiks Research ELEVENPLAY Kyle McDonald
discrete figures 2019
Human performers meet computer-generated bodies, calculated visualisations of movement meet flitting drones! Artificial intelligence and self-learning machines make this previously unseen palette of movement designs appear, designs that far transcend the boundaries of human articulateness, allowing for a deep glimpse into the abstract world of data processing. The Rhizomatiks Research team, led by Japanese artist, programmer, interaction designer and DJ Daito Manabe, gathers collective power with a number of experts, among them the five ELEVENPLAY dancers of choreographer MIKIKO as well as coding artist Kyle McDonald. The result is a breathtaking, beautifully realised work; in short: visually stunning.
The Legible City
In The Legible City the visitor is able to ride a stationary bicycle through a simulated representation of a city that is constituted by computer-generated three-dimensional letters that form words and sentences along the sides of the streets. Using the ground plans of actual cities – Manhattan, Amsterdam and Karlsruhe – the existing architecture of these cities is completely replaced by textual formations written and compiled by Dirk Groeneveld. Travelling through these cities of words is consequently a journey of reading; choosing the path one takes is a choice of texts as well as their spontaneous juxtapositions and conjunctions of meaning.
Pianographique is a series of collaborations between real-time visual artist Cori O’Lan and pianist Maki Namekawa. The visualisations are not videos played more or less synchronously to the music, nor is the musician playing along to prefabricated material: image and sound are created jointly in the moment of the performance. As with most of Cori O’Lan’s visualisations, all graphic elements are derived directly from the acoustic material, i.e. the sound of the music. For this purpose the piano is picked up with microphones, and these signals are transformed by the computer into a multitude of information about frequency, pitch, volume, dynamics and so on. This information, in turn, is used to control the graphics computer, creating graphical elements or modifying them in many ways. Since these processes take place in real time, there is a direct and expressive connection between the music and its visual interpretation. The visualisation is actually not “created” by the computer but much more by the music itself; the computer is rather the instrument, the brush, operated and played by the music.
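The analysis step described above, turning a microphone signal into frequency and volume information, can be sketched with a plain discrete Fourier transform. This is a hedged illustration of the general technique, not the performance software (which is not public); the sample rate, window length and test tone are all invented for the example.

```python
import math

# Sketch of audio analysis for visuals: estimate the dominant pitch and
# a crude loudness value from one window of samples, here a synthetic
# 440 Hz sine wave. All parameters are illustrative.

RATE = 8000  # samples per second (assumed)
N = 512      # analysis window length (assumed)

def magnitude_spectrum(samples):
    """Magnitudes of the first N/2 DFT bins of a real-valued signal."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n)
                  for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

signal = [math.sin(2 * math.pi * 440 * i / RATE) for i in range(N)]
mags = magnitude_spectrum(signal)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * RATE / N                 # dominant-pitch estimate
volume = sum(abs(s) for s in signal) / N      # crude loudness measure
```

In a real-time system an FFT would replace this O(n²) DFT, and the resulting pitch/loudness stream would drive the graphics parameters.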
Computer-controlled five-channel video/sound installation with five video projectors, an eight-channel sound system and slide projectors […] As an image, a pair of lovers often suggests a fortress of exclusion. With the sexual liberation of recent decades, the word now has more to do with bodily coupling than with the sublimity of “true love”. AIDS has added a new dimension of caution to this pairing. The life-size dancers in Lovers are no longer alive. Projected onto the black walls of a square room, the naked figures have a spectral quality. Their movements are simple and repetitive. They pace back and forth, walk and run with animal grace. Their actions grow familiar over time, so that it comes as a surprise when two of the translucent bodies come together in a virtual embrace. These supposed lovers, more overlapping than touching, are not physically intertwined.
RAFAEL LOZANO HEMMER
A circular display that simulates the turbulence at the surface of the Sun using mathematical equations. The piece reacts to the presence of the public by varying the speed and type of animation displayed. If no one is in front of the piece the turbulence slows down and eventually turns off. As the built-in camera detects people more solar flares are generated and the fake Sun shows more perturbation and activity. At 140 cm diameter, Flatsun is exactly a billion times smaller than the real Sun. The piece consists of custom-made panels with 60,000 red and yellow LED lights, a computer with 8 processing cores, a camera with a pinhole lens and a mechanically engineered aluminium, steel and glass structure that pivots for maintenance. A single knob lets the collector set the brightness of the piece and turn it on and off.
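The presence-driven behaviour described above can be sketched as a single state variable: activity rises while the camera detects people and decays toward zero (the display switching off) when no one is there. This is an invented illustration of the described behaviour, not Lozano-Hemmer's code; the rate constants are arbitrary.

```python
# Hedged sketch of Flatsun's reactive behaviour (not the artist's code):
# one activity level in [0, 1], ramped up by detected presence and
# decayed in its absence. Rates below are invented for illustration.

def update_activity(activity, people_detected, dt=1.0):
    """Advance the simulated Sun's activity by one time step."""
    if people_detected:
        return min(1.0, activity + 0.2 * dt)   # flares ramp up quickly
    return max(0.0, activity - 0.05 * dt)      # turbulence slowly subsides

level = 0.0
for _ in range(5):      # five seconds with viewers in front of the piece
    level = update_activity(level, True)
for _ in range(30):     # half a minute with no one present
    level = update_activity(level, False)
```

The activity level would then scale the speed and flare rate of the turbulence simulation, down to a fully dark disc when it reaches zero.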
This short film is intended to encourage a creative audience to seek out Kevin Slavin’s talk Those Algorithms Which Govern Our Lives. It employs an effect which takes place in Google Earth when its 3D street photography and 2D satellite imagery don’t register correctly. This glitch is applied as a metaphor for the way that our 21st century supercities are physically changing to suit the needs of computer algorithms rather than human employees.