highlike

Rafael Lozano-Hemmer

Saturation Sampler
Rafael Lozano-Hemmer’s work Saturation Sampler uses AI computer vision to track onlookers and extract the most saturated color palettes from their bodies and clothes, creating a gridded composition from the footage in which viewers catch glimpses of their reflections in the pixelated field. With the widest color gamut available and an unparalleled 160-degree viewing angle, Luma Canvas delivers a viewing experience unlike any other. The direct emissive nature of the display’s LEDs creates a visceral and material encounter with Lozano-Hemmer’s interactive work, meaningfully situating his digital practice within the physical realm.
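As a rough illustration of the kind of color extraction described above, a minimal Python sketch using OpenCV might pull the most saturated palette from a single camera frame like this. It is an assumption-laden stand-in, not Lozano-Hemmer's implementation; the frame path and palette size are placeholders.

```python
# Minimal sketch: extract the most saturated colors from a video frame.
# Illustrative approximation only, not Lozano-Hemmer's implementation.
import cv2
import numpy as np

def saturated_palette(frame_bgr, n_colors=9, top_fraction=0.05):
    """Return the n most saturated colors (BGR) among the most saturated pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].ravel()
    pixels = frame_bgr.reshape(-1, 3).astype(np.float32)

    # Keep only the top few percent most saturated pixels.
    cutoff = np.quantile(saturation, 1.0 - top_fraction)
    vivid = pixels[saturation >= cutoff]

    # Cluster them into a small palette with k-means.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(vivid, n_colors, None, criteria, 3,
                               cv2.KMEANS_PP_CENTERS)
    return centers.astype(np.uint8)

frame = cv2.imread("onlooker.jpg")  # placeholder frame
print(saturated_palette(frame, n_colors=9))
```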

HYE YEON NAM

Please Smile
File Festival
“Please Smile” is an exhibit involving five robotic skeleton arms that change their gestures depending on a viewer’s facial expressions. It consists of a microcontroller, a camera, a computer, five external power supplies, and five plastic skeleton arms, each with four motors. It incorporates elements of mechanical engineering and computer vision to serve artistic expression with a robot.
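A minimal sketch of the sensing side of such a setup, assuming an OpenCV smile detector and a one-byte serial command to the microcontroller, could look like the loop below. The port name and command protocol are invented for illustration; this is not the artist's code.

```python
# Sketch of the sensing loop: detect a smile and send a gesture command
# to the microcontroller driving the skeleton arms. Port name and the
# one-byte command protocol are assumptions for illustration.
import cv2
import serial

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")
arm_controller = serial.Serial("/dev/ttyUSB0", 9600)  # hypothetical port

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    command = b"N"                                     # neutral gesture
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            command = b"S"                             # smiling gesture
    arm_controller.write(command)
```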

Studio A N F

Computer Visions 2
After decades of trying to construct an apparatus that can think, we may finally be witnessing the fruits of those efforts: machines that know. That is to say, not only machines that can measure and look up information, but ones that seem to have a qualitative understanding of the world. A neural network trained on faces does not only know what a human face looks like; it has a sense of what a face is. Although the algorithms that produce such para-neuronal formations are relatively simple, we do not fully understand how they work. A variety of research labs have also successfully trained such nets on functional magnetic resonance imaging (fMRI) scans of living brains, enabling them to effectively extract images, concepts, and thoughts from a person’s mind. This is where the inflection likely happens, as a double one: a technology whose workings are not well understood, qualitatively analyzing an equally unclear natural formation with a degree of success. Andreas N. Fischer’s work Computer Visions II seems to be waiting just beyond this cusp, where two kinds of knowing beings meet in a psychotherapeutic session of sorts […]

Timeblur Studio

Nadi Generative Art
Nadi is a digital display of the kinetics and energetics of the body movements involved in yoga. The visuals are created by investigating the flow of data, using the human body as a vehicle. With the support of computer vision technologies, a visual trail is formed by tracking the body’s movements during yogic postures. Inspired by Indian yogic science, we have visually depicted aspects of light, matter and energy in our forms. The generative nature of the visuals comes from the digital juxtaposition of the successive poses the body creates.
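A hedged sketch of the tracking-and-trail idea, assuming MediaPipe Pose and OpenCV rather than Timeblur's own pipeline, could look like this:

```python
# Sketch: track a wrist with MediaPipe Pose and accumulate a visual trail,
# a simplified stand-in for Nadi's body-movement trails (not Timeblur's code).
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
trail = []                                  # past wrist positions

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            lm = result.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
            h, w, _ = frame.shape
            trail.append((int(lm.x * w), int(lm.y * h)))
        for i in range(1, len(trail)):      # draw the trail left by the pose
            cv2.line(frame, trail[i - 1], trail[i], (255, 255, 255), 2)
        cv2.imshow("nadi-sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:     # Esc to quit
            break
```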

Liam Young

Where the City Can’t See
Directed by speculative architect Liam Young and written by fiction author Tim Maughan, ‘Where the City Can’t See’ is the world’s first narrative fiction film shot entirely with laser scanners, designed in collaboration with Alexey Marfin. The computer vision systems of driverless cars, Google Maps, urban management systems and CCTV surveillance are now fundamentally reshaping urban experience and the cultures of our cities. Set in the Chinese-owned and -controlled Detroit Economic Zone (DEZ) and shot using the same scanning technologies used in autonomous vehicles, the film sees this near-future city through the eyes of the robots that manage it. Exploring the subcultures that emerge from these new technologies, the film follows a group of young car-factory workers across a single night as they drift through the smart city’s point clouds in a driverless taxi, searching for a place they know exists but that the map doesn’t show.

Marta Revuelta

AI Facial Profiling, Levels of Paranoia

Inspired by recent psychometric research papers that claimed to use AI to detect a person’s criminal potential based only on a photo of their face, and taking the world of firearms as a starting point, we present a “physiognomic machine”: a computer vision and pattern recognition system that detects an individual’s ability to handle firearms and predicts their potential danger from a biometric analysis of their face. The device is based on a camera-weapon that captures faces, a machine with artificial intelligence, and a mechanical system that classifies the profiled persons into two categories: those who present a high risk of being a threat and those who present a lower risk.

ANDREW HIERONYMI

move
File Festival
MOVE is an interactive installation divided into six distinct modules: JUMP, AVOID, CHASE, THROW, HIDE and COLLECT. Each module offers a single-user interaction based on a verb corresponding to the action the participant is invited to perform. Each verb corresponds to a common procedure acted out by avatars during videogame play. Each module offers an interaction with abstracted shapes (circles, rectangles) behaving according to simplified rules of physics (collision, friction). Each module is consistently color-coded, with red used for the graphical element that poses the core challenge. Each module increases in difficulty in a similar linear manner. What makes MOVE unusual is that, unlike most computer vision or sensor-based games such as EyeToy or Dance Dance Revolution, the participant IS the avatar: they are not seeing a representation of themselves or an indirect result of their actions on a separate screen, but instead interact directly with the projected graphical constituents of the game. Because those graphical elements are non-representational, they do not allow for projection into a fictional space. The combination of abstracted shapes and direct interaction reinforces the player’s focus on the action itself (JUMP, AVOID, CHASE, THROW, HIDE or COLLECT) instead of an ulterior goal.
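One way to picture the “participant is the avatar” interaction is the rough Python sketch below, assuming OpenCV background subtraction for the silhouette and a single red circle as the challenge element; it is an illustrative approximation, not Hieronymi's installation code.

```python
# Sketch of one MOVE-style interaction (not Hieronymi's implementation):
# the camera image of the participant is the avatar, and we simply test
# whether their silhouette touches the red circle that poses the challenge.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()
circle_center, circle_radius = (320, 240), 40        # the red challenge element

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                    # participant silhouette
    circle_mask = np.zeros(mask.shape, np.uint8)
    cv2.circle(circle_mask, circle_center, circle_radius, 255, -1)
    hit = cv2.countNonZero(cv2.bitwise_and(mask, circle_mask)) > 0

    color = (0, 0, 255)                               # red, as in MOVE's coding
    cv2.circle(frame, circle_center, circle_radius, color, -1)
    cv2.putText(frame, "HIT" if hit else "AVOID", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
    cv2.imshow("move-sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
```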

Espadaysantacruz studio

Interactive Chalk Cars
“Interactive Chalk Cars” is an installation based on a traditional children’s game that was originally played on the streets. It uses new digital technologies to revisit a non-technological game. By using computer vision algorithms and projection mapping, it brings together the real and the virtual. In doing this, we try to combine two playing modes that are usually opposed: the individual video game and the outdoor social game.
FILE FESTIVAL

JEREMY BAILEY

ДЖЕРЕМИ БЭЙЛИ
제레미 베일리
ג’רמי ביילי
ジェレミー·ベイリー
Important Portraits

Powered by humor and computer vision, his work wryly critiques the uneasy relationship between technology and the body while playfully engaging the protocols of digital media.

KAROLINA SOBECKA

カロリナ・ソアベッカ
Каролина Собечка
sniff
File festival

Karolina Sobecka is a Polish artist who works with animation, design, interactivity, computer games and other media. Her work often engages public space and explores ways in which we interact with the world we create. Sniff is an interactive projection in a storefront window. When motion is detected on the sidewalk in front of the display, a virtual dog appears and responds to the person’s behavior and gestures. The passerby’s movements are tracked by a computer vision system, and the dog behaves differently depending on how he is engaged. As with a real dog, big, swift actions are interpreted as threatening, while slow, gentle actions directed toward him are read as friendly. He tracks and remembers the attitude of the viewer and forms a relationship with them over time based on the history of interaction. Depending on the nature of the relationship, he may bark, growl, roll over or even play fetch.
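The relationship logic can be sketched, purely as an illustration and not as Sobecka's implementation, with a small state model whose thresholds are invented:

```python
# Sketch of the relationship logic described above (not Sobecka's code):
# the virtual dog keeps a running "trust" value that rises with slow,
# gentle movement and falls with big, swift movement, and picks a
# behaviour from it. Thresholds are illustrative guesses.
class VirtualDog:
    def __init__(self):
        self.trust = 0.0                      # remembered attitude, -1..1

    def observe(self, motion_speed):
        """motion_speed: average pixel speed of the tracked passerby."""
        if motion_speed > 40:                 # big, swift gesture -> threatening
            self.trust -= 0.1
        elif motion_speed > 0:                # slow approach -> friendly
            self.trust += 0.05
        self.trust = max(-1.0, min(1.0, self.trust))

    def behaviour(self):
        if self.trust < -0.3:
            return "growl"
        if self.trust < 0.3:
            return "bark"
        if self.trust < 0.7:
            return "roll over"
        return "play fetch"

dog = VirtualDog()
for speed in [60, 55, 10, 5, 5, 5]:           # a passerby calming down
    dog.observe(speed)
    print(dog.behaviour())
```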

RYAN HABBYSHAW

carnival aquarium

The interactive aquarium is installed in storefront windows in six cities across the US. Using computer vision, the seascape reacts to the motion of a viewer: seaweed sways and fish scatter. Users can then dial in with any mobile device and create a fish using their voice. As they connect, the sounds they make are analyzed in real time to generate a fish dynamically. They can then release that fish into the aquarium and use their keypad to eat pieces of fish food that transform and morph the fish into new creatures. The application remembers the state of each fish, so callers who return later can evolve theirs even further.
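A minimal sketch of the voice-to-fish mapping, with invented parameter ranges and a synthetic audio buffer standing in for the caller's voice, might look like this:

```python
# Sketch of the voice-to-fish mapping described above (not Habbyshaw's code):
# estimate loudness and dominant pitch from an audio buffer and map them to
# fish parameters. The mapping ranges are invented for illustration.
import numpy as np

def fish_from_voice(samples, sample_rate):
    """Map an audio snippet to simple fish parameters."""
    loudness = float(np.sqrt(np.mean(samples ** 2)))          # RMS volume
    spectrum = np.abs(np.fft.rfft(samples))
    pitch_hz = float(np.fft.rfftfreq(len(samples), 1 / sample_rate)[spectrum.argmax()])
    return {
        "size": 20 + 100 * min(loudness, 1.0),                # louder -> bigger
        "hue": pitch_hz % 360,                                # pitch -> color
        "fins": 2 + int(pitch_hz // 200) % 5,                 # pitch -> fin count
    }

# Example: a 440 Hz "voice" at moderate volume.
rate = 16000
t = np.linspace(0, 1, rate, endpoint=False)
voice = 0.3 * np.sin(2 * np.pi * 440 * t)
print(fish_from_voice(voice, rate))
```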

Driessens & Verstappen

Breed
Breed (1995-2007) is a computer program that uses artificial evolution to grow very detailed sculptures. The purpose of each growth is to generate, by cell division from a single cell, a detailed form that can be materialised. On the basis of selection and mutation, a code is gradually developed that best fulfils this “fitness” criterion and thus yields a workable form. The designs were initially made in plywood. Currently the objects can be made in nylon and in stainless steel using 3D printing techniques. This automates the whole process from design to execution: the industrial production of unique artefacts.
Computers are powerful machines for harnessing artificial evolution to create visual images. To achieve this we need to design genetic algorithms and evolutionary programs. Evolutionary programs allow artefacts to be “bred” rather than designed by hand. Through a process of mutation and selection, each new generation is increasingly well adapted to the desired “fitness” criteria. Breed is an example of such software, using artificial evolution to generate detailed sculptures. The algorithm we designed is based on two different processes: cell division and genetic evolution.
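The mutation-and-selection loop can be sketched in a few lines. The toy version below grows only a cell count from a single cell; it is not the Breed software, whose genomes encode full three-dimensional morphogenesis, but it shows the structure of such an evolutionary program.

```python
# Minimal sketch of an evolutionary loop of the kind described above (not the
# Breed software itself): a genome of division rules grows a form from a single
# cell, and mutation plus selection push it toward a fitness criterion.
import random

GENOME_LEN, TARGET_CELLS = 8, 100

def grow(genome):
    """Grow a cell count from one cell: at each step, cells divide or not."""
    cells = 1
    for divides in genome:
        if divides:
            cells *= 2
    return cells

def fitness(genome):
    return -abs(grow(genome) - TARGET_CELLS)       # closer to target is better

def mutate(genome, rate=0.2):
    return [not g if random.random() < rate else g for g in genome]

population = [[random.random() < 0.5 for _ in range(GENOME_LEN)]
              for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]    # mutation

best = max(population, key=fitness)
print(grow(best), "cells, genome:", best)
```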

Oleg Soroko

Procedural cloth V//002
“I’m continuing experiments with procedurally generated structures. This time I’m implementing an algorithm assembled in Houdini over a female body. All meshes were generated in Houdini, then uploaded to Sketchfab. All images were made from the Sketchfab model. You can check it in 3D in any browser via the link in my profile.” Oleg Soroko

jeanine jannetje

reawaken
Reawaken is a kinetic sculpture with 55 robotic arms, powered by 55 servo motors. The lowering of the arms produces an abstract print on paper. Technology mirrors humanity, and vice versa. In addition to creating beauty, technology is there to meet our needs. We, and our needs, have evolved to a point where we are so integrated that we consume technology on autopilot. We live in a time of mass production in which our everyday devices increasingly mimic each other: a smartphone is a small tablet, a tablet a small computer, and a computer a small television. The question of what this does to our imagination, together with increasingly invisible technological progress such as algorithms and artificial intelligence, has been my starting point for Reawaken.

CHANG YEN TZU

Self Luminous 2 – Unbalance
Self-luminous 2 is an experimental handmade instrument shown as a performance. It is part of a series I have been working on since 2013, which finally took shape in 2014. I am looking for an intimate and personal instrument that reflects on the relation between digital sound and light as message. In computer language, light on is 1 and light off is 0; with more than two lamps, the lights can form a code that becomes readable through its meanings. When I press a button or turn a knob, a message is sent to Pure Data, and the sound is triggered live by Pure Data.
The sound data, such as frequency and volume, are analysed and sent to a second Arduino to control the light. Light, in this case, is an intuitive element for human beings; in that respect it is very close to sound, which affects our biological body directly. The lights are visualised and can be translated into messages. The message might become readable by coincidence through its link to the code. The light is bright enough to leave the audience with persistence of vision. During the performance, the sound is reproduced by code and part of it is improvised.
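The analysis stage, sound volume and frequency mapped to a light level, can be sketched as follows. The actual piece uses Pure Data and Arduino, so this Python version, with an invented serial port and one-byte brightness protocol, only illustrates the idea.

```python
# Hedged sketch of the analysis stage described above: the piece itself uses
# Pure Data and Arduino, but the same idea (volume and frequency of the live
# sound mapped to a light level) can be written out as follows. The serial
# port and one-byte brightness protocol are assumptions.
import numpy as np
import serial

light_arduino = serial.Serial("/dev/ttyACM0", 115200)   # hypothetical port

def forward_sound_to_light(block, sample_rate):
    """Map a block of audio samples to a 0-255 light brightness."""
    volume = float(np.sqrt(np.mean(block ** 2)))          # RMS volume
    spectrum = np.abs(np.fft.rfft(block))
    freq = np.fft.rfftfreq(len(block), 1 / sample_rate)[spectrum.argmax()]
    brightness = int(255 * volume + freq / 40)            # louder/higher -> brighter
    light_arduino.write(bytes([min(255, max(0, brightness))]))

# Example with a synthetic 200 Hz tone block.
rate = 44100
t = np.linspace(0, 0.05, int(rate * 0.05), endpoint=False)
forward_sound_to_light(0.5 * np.sin(2 * np.pi * 200 * t), rate)
```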

Julian Oliver

Föhnseher
Föhnseher rises from the scrap heap of analog TV. Unlike other televisions, Föhnseher captures and displays images downloaded by people on surrounding local wireless networks. Other people’s phones, laptops and tablet computers all become broadcast stations for this device, replacing the forgotten television towers of old.

MIAO XIAOCHUN

МЯО СЯОЧУНЬ
缪晓春
مياو شياو تشون

The large-scale nine-panel installation, Microcosm, is based on Hieronymus Bosch’s 15th-century masterpiece, The Garden of Earthly Delights. Microcosm is an imaginative reinvention of the sumptuous landscape of sin, salvation, and tawdry visions of those who never made it to paradise. The structure and narrative pattern of Bosch’s triptych, such as the architecture of heaven, earth and hell, as well as the basic forms of Bosch’s pictures, have been preserved in Miao Xiaochun’s work. But new digital means and computer technologies have allowed Miao Xiaochun to explore a contemporary visual vocabulary. He abolishes the traditional fixed single-point perspective aesthetic, instead favoring the Chinese tradition of multiple points of view in a single landscape.

alexander mcqueen

الكسندر ماكوين
亚历山大·麦昆
알렉산더 맥퀸
אלכסנדר מקווין
アレキサンダーマックイーン
Александра Маккуина
Android Couture

Presented on the cusp of the new millennium, Alexander McQueen’s Autumn/Winter 1999 collection for famed French fashion house Givenchy captured the new fascination with personalized digital technology in popular culture. At the culmination of the show, two models appeared outfitted in molded Perspex bodices studded with flashing LED lights and glowing leggings patterned like computer chips. The creation of a digital aesthetic and its intimate application to the body—an android-like amalgamation of the physical and digital—anticipated the “wearables” trend and the formation of the digital self. Known for his exquisite tailoring, meticulous detailing, and ambitious collections, McQueen also represented one of the remaining visionaries of haute couture extravagance.

Ricardo Barreto and Maria Hsu Rocha

Martela
FILE FESTIVAL
Tactila is an art form whose medium is the sense of touch (tact), which is independent from all the other senses and has its own intelligence, imagination, memory, perception, and sensation. It is well known that vision and sound hold hegemony in the arts and in other disciplines. Tactila takes place in time and can therefore be recorded and given various forms of notation for subsequent executions. That is why its development became possible only now, thanks to mechatronic and robotic systems that are compatible with machine languages.
The creation of tactile works involves a (tact) composition, which can be made through handmade notation and played on a keyboard, or composed directly on the computer of the tactile machine (robot).
Tactile machines can present numerous tactile possibilities through points, vectors, and textures with varying rhythms and intensities, and can be run across different extents and locations of our body.


The first tactile machine is called “Martela”. It is a tactile robot composed of 27 motors subdivided into three 3 × 3 squares, i.e., each square has 9 motors. Each motor corresponds to a matrix point, so we have 27 tactile units that can touch the user’s body with varying intensities.
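A tactile score for such a machine could be notated, purely as a hypothetical illustration, as a sequence of timed 3 × 3 × 3 intensity frames; the playback interface below is invented, not the artists' software.

```python
# Sketch of how a tactile score for a 27-motor machine like Martela might
# be notated and stepped through: three 3x3 squares per time step, each cell
# an intensity from 0 to 1. The notation and playback interface are invented
# for illustration.
import time
import numpy as np

def play(score, step_seconds=0.5, send=print):
    """score: array of shape (steps, 3, 3, 3) -> 27 intensities per step."""
    for frame in score:
        for motor_index, intensity in enumerate(frame.ravel()):
            send(f"motor {motor_index:02d} -> intensity {intensity:.2f}")
        time.sleep(step_seconds)

# A two-step composition: a light touch everywhere, then a strong point
# at the centre of the middle square.
score = np.zeros((2, 3, 3, 3))
score[0, :, :, :] = 0.2
score[1, 1, 1, 1] = 1.0
play(score)
```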

GIUSEPPE RANDAZZO

Джузеппе Рандаццо
stone fields (Using computer algorithms)

This project started as a search for an optimal algorithm for packing 3D objects over a surface, but evolved into something rather different. I love the work of Richard Long, from which this project takes its cue. The way he fills lonely landscapes with archaic stone patterns, and his heroic artistic practice and monumental vision, stand in strong contrast to this computational approach, which, ironically, allows virtual stones to be created and arranged in a non-physical, mental way: a ‘lazy’ version, so to speak. The virtual stones, created through several fractal subdivision strategies, find their proper positions within the circle by means of a trial-and-error hierarchical algorithm. A mix of attractors and scalar fields (some with Perlin noise) drives the density and size of the stones. The code is a C++ console application that outputs an OBJ 3D file.
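The placement stage can be sketched in two dimensions. The original is a C++ application producing 3D stones, so the Python approximation below, with an invented radial scalar field standing in for the attractors and Perlin noise, only illustrates the trial-and-error packing.

```python
# Hedged 2D sketch of the placement stage: the original generates 3D stones by
# fractal subdivision, but the trial-and-error packing with a scalar field
# driving stone size can be illustrated like this.
import math
import random

def stone_field(n_stones=300, field_radius=1.0, attempts=2000):
    """Pack discs ('stones') inside a circle; size shrinks toward the rim."""
    stones = []                                   # (x, y, radius)
    for _ in range(n_stones):
        for _ in range(attempts):                 # trial-and-error placement
            a, d = random.uniform(0, 2 * math.pi), math.sqrt(random.random())
            x, y = d * field_radius * math.cos(a), d * field_radius * math.sin(a)
            # Scalar field: stones get smaller toward the edge, with jitter.
            r = 0.03 * (1.0 - d) + random.uniform(0.002, 0.008)
            if d * field_radius + r > field_radius:
                continue                          # stone would poke outside
            if all((x - sx) ** 2 + (y - sy) ** 2 > (r + sr) ** 2
                   for sx, sy, sr in stones):
                stones.append((x, y, r))
                break
    return stones

print(len(stone_field()), "stones placed")
```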