LUCY MCRAE
Biometric Mirror
source: lucymcrae.net
What if Artificial Intelligence gets it wrong?
Biometric Mirror questions the accuracy and assumptions of facial recognition algorithms. Enter a sci-fi beauty salon and let an AI scan your biometric data and reveal a mathematically ‘perfect’ version of your own face. But whose version of perfection is it really?
Biometric Mirror is an immersive installation that blends the act of casually glancing at one’s reflection with modern algorithmic perspectives on facial perfection. The artwork explores the accuracy and flaws of artificial intelligence, the ‘uncanny valley’ of algorithmic perfection, and its potential ‘Black Mirror’ outcomes.
Part of Science Gallery Melbourne’s PERFECTION exhibition program, this AI mirror analyses an individual’s character traits based solely on their face, in order to investigate how people respond to AI analysis. The mirror compares its onlooker to a database of faces that have been assessed on 14 characteristics, before issuing them a statement that summarises their ‘attractiveness’ and their emotional state. In theory the algorithm is correct, but the information it produces likely isn’t – because how can it be if it’s based on subjective information? The work is a collaboration with scientists from the Research Centre for Social Natural User Interfaces, where the system was developed by Dr Niels Wouters.
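None of these write-ups publish the installation’s code, so the pipeline can only be imagined. The following is a minimal Python sketch of the flow described above, under stated assumptions: the trait names, the similarity measure, and the analyse_face and mirror_statement helpers are hypothetical stand-ins for the actual classifier and the crowd-assessed face database.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sketch of the Biometric Mirror flow described above: a face is
# scored against a reference database of faces that were rated by people on a
# fixed set of characteristics, and the mirror then issues a short statement.
# Names, traits and weighting are assumptions, not the installation's code.

TRAITS = [
    "attractiveness", "aggressiveness", "emotional_stability",
    "happiness", "weirdness",  # ...the write-ups mention 14 characteristics
]

@dataclass
class RatedFace:
    embedding: list[float]     # some numeric representation of the face
    ratings: dict[str, float]  # crowd-sourced scores per trait, 0.0 to 1.0

def similarity(a: list[float], b: list[float]) -> float:
    """Toy similarity: inverse of the mean absolute difference."""
    return 1.0 / (1.0 + mean(abs(x - y) for x, y in zip(a, b)))

def analyse_face(embedding: list[float], database: list[RatedFace]) -> dict[str, float]:
    """Estimate each trait as a similarity-weighted average of the subjective
    ratings attached to the reference faces."""
    weights = [similarity(embedding, face.embedding) for face in database]
    total = sum(weights)
    return {
        trait: sum(w * face.ratings.get(trait, 0.5) for w, face in zip(weights, database)) / total
        for trait in TRAITS
    }

def mirror_statement(scores: dict[str, float]) -> str:
    """Summarise 'attractiveness' and emotional state, as the mirror does."""
    attractive = scores["attractiveness"] > 0.5
    stable = scores["emotional_stability"] > 0.5
    return (f"You appear {'more' if attractive else 'less'} attractive than average, "
            f"and emotionally {'stable' if stable else 'unstable'}.")
```

Whatever the real model looks like, the sketch keeps the installation’s point intact: the output can only ever be as objective as the subjective ratings it is derived from.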
“Creating platforms to discuss the cultural implications of emerging technologies, like artificial intelligence, means that we can expose any assumptions in a public space.”
As beauty brands continue to embed AI technology into their offerings, real people are starting to value the advice of machine learning. But upon what ideals is our ‘beauty’ being judged?
DAZED BEAUTY – THE ILLUSION OF PERFECTION: DISTURBING TRUTH ABOUT AI BEAUTY
Biometric Mirror is a moment to start thinking about the transparency of algorithms, about giving and withdrawing consent, and about the current trend of perceiving algorithms (and AI) as the holy grail that will ultimately improve society.
Science Gallery’s PERFECTION trailer explores the ritualisation of artificial intelligence via a ceremony performed by digital shamans on a test subject. This experiment looks at the uncontrollable imperfections that occur when working with the human body in contrast with the controlled and programmable systems of artificial intelligence. How do we reassess and imagine new algorithmic paradigms that encompass imperfection, accident and messiness?
source: thisispaper.com
Photography: Jesse Marlow
source: socialnui.unimelb.edu.au
Biometric Mirror exposes the possibilities of artificial intelligence and facial analysis in public space. The aim is to investigate the attitudes that emerge as people are presented with different perspectives on their own, anonymised biometric data, distilled from a single photograph of their face. It sheds light on the specific data that people oppose or approve of, the sentiments it evokes, and the underlying reasoning. Biometric Mirror also presents an opportunity to reflect on whether the plausible future of artificial intelligence is a future we want to see take shape.
Big data and artificial intelligence are some of today’s most popular buzzwords. Both promise to deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analysis, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. And the common view is that these developments will have a positive impact on society.
Within the realm of artificial intelligence (AI), facial analysis is gaining popularity. Today, CCTV cameras and advertising screens increasingly link with analysis systems that can detect the emotions, age, gender and demographic information of people passing by. This has been shown to increase advertising effectiveness in retail environments, since campaigns can now be tailored to specific audience profiles and situations. But facial analysis models are also being developed to predict your aggression level, sexual preference, life expectancy and likelihood of being a terrorist (or an academic) simply by monitoring surveillance camera footage or analysing a single photograph. Some of these developments have gained widespread media coverage for their innovative nature, but their ethical and social impact is often only an afterthought.
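As a purely illustrative aside, the screen-side tailoring mentioned above might be wired up roughly as follows; the AudienceProfile fields, the campaign rules and the pick_campaign helper are assumptions made for this sketch, not any vendor’s API.

```python
from dataclasses import dataclass

# Hypothetical sketch of audience-tailored advertising as described above: a
# camera-side analysis system emits a coarse profile of a passer-by, and the
# advertising screen picks a matching campaign. All names and rules here are
# illustrative assumptions, not a real product's interface.

@dataclass
class AudienceProfile:
    age_estimate: int
    gender_estimate: str   # e.g. "female", "male", "unknown"
    dominant_emotion: str  # e.g. "happy", "neutral", "sad"

CAMPAIGNS = {
    "energy_drink": lambda p: p.age_estimate < 30,
    "luxury_watch": lambda p: p.age_estimate >= 45 and p.dominant_emotion == "happy",
    "house_brand":  lambda p: True,  # fallback shown to everyone else
}

def pick_campaign(profile: AudienceProfile) -> str:
    """Return the first campaign whose targeting rule matches the profile."""
    for name, rule in CAMPAIGNS.items():
        if rule(profile):
            return name
    return "house_brand"

# Example: a young passer-by with a neutral expression gets the youth campaign.
print(pick_campaign(AudienceProfile(27, "unknown", "neutral")))  # -> energy_drink
```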
Current technological developments approach the ethical boundaries of the artificial intelligence age. Facial recognition and analysis in public space raise concerns because people are photographed without prior consent, and their photos disappear into a commercial operator’s infrastructure. It remains unclear how the data is processed, how it is tailored for specific purposes, and how it is retained or disposed of. People also have no opportunity to review or amend their facial recognition data. Perhaps most worryingly, artificial intelligence systems may make decisions or deliver feedback based on that data, regardless of its accuracy or completeness. While facial recognition and analysis may be harmless for tailored advertising in retail environments or for unlocking your phone, they quickly push ethical boundaries when the broader purpose is to monitor society more closely.
COLLABORATION
This project is a collaboration between The University of Melbourne’s Microsoft Research Centre for Social Natural User Interfaces (SocialNUI) and Science Gallery Melbourne.