In this episode
Automated facial recognition systems: the bad, the very bad, and what you can do to resist them (if you like). A special collaborative episode with Ella Hillström.
Transcript
Geraint
Keep communities safe. Keep industry and commerce secure. Help combat crime and fraud. Promote justice. Protect victims. Protect the public. Protect fundamental freedoms and human rights. All we need to achieve this is … your face.
Chris
Hello, and welcome to Vulnerable By Design with me, Chris Onrust. Today we have a special collaborative episode with Ella Hillström — hi, Ella!
Ella
Hi!
Chris
… who is a researcher in social anthropology based in Stockholm. What are we talking about? Automated facial recognition systems. The bad, the very bad, and what you can do to resist them.
Let’s start with some questions.
Ella
Have you ever used your face to: unlock your phone? Go through customs? Get access to a building? Pay for a veggie burger?
Chris
Nice.
Ella
And have you ever, simply by walking down the street, taking a bus or going to work or school, had your face captured and run through a giant database to match that cheeky visage of yours to your personal info, your name and address—linking it to a tidy list of locations you previously visited, plus all the things you’ve ever posted online? More difficult to answer, isn’t it?
Because most likely you wouldn’t even know when that’s been done to you. Today, in many places worldwide, your likeness is regularly being captured without anyone telling you and without you really having a say in the matter.
Chris
Pfff.
Let’s talk some stats. Last year, the website Comparitech surveyed the 100 most populous countries on earth and found that 70% of their governments were using facial recognition technologies on a large scale. That could include facial recognition linked to on-street cameras, passport processing, or even as a precondition for accessing services.
For example, in the United Arab Emirates, facial recognition is used—supposedly—to speed up processes. In China facial recognition technologies are used invasively, including to publicly try to humiliate people who are caught on CCTV leaving their homes inappropriately dressed. Read: wearing their pyjamas. And in Russia facial recognition is now being used to detain people who protest against the ongoing war on Ukraine, or who seek to evade the partial military draft.
Ella
Some more stats. Of the surveyed countries, facial recognition was also used by police forces (in around 70% of them), in banks (around 80%), and in airports (60%). And it’s used in stores, on buses, trains and metro systems in around 20 to 30% of the countries. Spain, for example, logs the faces of all of the roughly 20 million visitors who pass through Madrid South bus terminal and checks them against a criminal database. And in Kazakhstan and China, your face is a valid payment method for the bus fare.
Workplaces use facial recognition to monitor workers. And in nearly one in five of the countries in the survey, facial recognition is used on children in schools to monitor their attendance, or even whether the little tots are paying attention in class—which may affect their grades. Do you remember being in school? Is this how you would have liked to be treated?
Only six of the 100 countries surveyed showed no evidence of using facial recognition at all. So in case you’d like to keep your face to yourself, these include Cuba, Haiti, Syria, South Sudan and Madagascar.
Chris
Prefer to talk money? Globally, in 2020, the market for facial recognition software was estimated to be worth roughly €3.7 billion. That’s about the size of the entire GDP of Suriname for that year. It looks like those who can are throwing around a lot of dosh just to be able to identify you, track you, and monitor your location or even your facial expression. But how does facial recognition actually work?
Ella
Unsurprisingly, facial recognition all starts with your face. And with ‘your face’, we’re generally talking about human faces. Although I think something similar has also been tried on dogs. Yep, dogs.
How each system works will of course vary. But broadly, a facial recognition system takes an incoming image—sometimes called a ‘query image’, because it’s used to query who’s in the frame. This image then gets processed. First, to detect whether there’s a face in the picture at all. I mean, it could also be just plants, or food? My food does occasionally get identified as people. And next, if the system does detect a face, it processes the image further to extract so-called ‘facial features’.
Chris
More specifically, it will try to identify things such as: Where are the eyes? What’s the shape of the nose? How about the chin then? And how are all of these facial features positioned relative to one another? Once it’s figured this out, the system will produce a vector, which is something like a mathematical map of the key features of a particular face. This map is sometimes also called a ‘face print’, similar to the idea of a fingerprint.
Ella
Good. Once the system has your face print, then the serious identification business starts. The system will dive into a potentially massive database of other face prints that have already been extracted from other images. And ask: Does the face print of this new query image I’ve got right here match one or more of the face prints already in the database? If not, then tough luck. But if a match does turn up, it will generally be expressed not as a definite gotcha!, but as a degree of similarity between two face prints. So it might respond: Match found! It’s Chris (70% similarity).
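(For a rough idea of what that matching step can look like, here is a minimal sketch in Python. It assumes face prints are plain numeric vectors and uses cosine similarity as the comparison measure; the names, vectors and threshold are made up for illustration, not taken from any real system.)

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face-print vectors: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face_print(query, database, threshold=0.70):
    """Compare a query face print against every face print in the database.

    Returns (name, similarity) for the best match if it clears the threshold,
    or None if nothing is similar enough ('no match found').
    """
    best_name, best_score = None, -1.0
    for name, reference in database.items():
        score = cosine_similarity(query, reference)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# Toy usage with made-up three-dimensional 'face prints'.
database = {
    "Chris": np.array([0.90, 0.10, 0.30]),
    "Ella": np.array([0.20, 0.80, 0.50]),
}
query = np.array([0.85, 0.15, 0.35])
print(match_face_print(query, database))  # e.g. ('Chris', 0.99...)
```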
Chris
Well, I can hear you thinking: Where do all of these images come from? I am very glad you asked. For the input image, basically, all bets are off. It could have come from anywhere. Your face must have been digitally captured at some point or other. Perhaps a video on TikTok? Local CCTV systems? Or maybe you dropped off a package at the neighbours’ and got caught on their doorbell snoop cam?
And as for the reference database—that, my friends, might well be the facial recognition industry’s dirty secret. Because while some of those pictures come from official government databases, such as passports or driver’s licenses, much of what is in these reference databases is just scraped from the internet.
For example, Clearview AI is one of the biggest players in the facial recognition field. Clearview AI has built up a database of over 10 billion images of people’s faces. That is more than one picture for every single living person on earth. Clearview says that the images it fetched were ‘publicly available’. In plain speak, ‘publicly available’ just means that they were scraped from lots and lots of websites, including Instagram, Facebook, YouTube, Twitter, and the mobile payment app Venmo—happily violating the terms of service of most of these platforms.
Plus, when it comes to Europe, Clearview AI also happily ignored the EU’s General Data Protection Regulation—or GDPR for short—which by default explicitly forbids gathering personal data, including biometric facial data, from EU citizens without their knowledge or consent. So yeah, that’s the space of players we are dealing with here.
Ella
By and large, facial recognition systems themselves are discriminatory—you know, the racist, sexist kind. How so? Well, let’s look at error rates: the ways in which face recognition systems get things wrong. Here are three known limitations. First, non-detection. The system might simply not recognize your face as a face. Which, problematically, can easily happen when you’ve got melanin-rich skin. Who’s looking forward to self-driving cars, eh?
Second, misidentification. Which is when the system says that you’re somebody else, or that somebody else is you. And finally, what’s called ‘mislabeling’. Which is where a system is wrong about facial traits—for example, whether your eyes are open or closed—because it’s only been trained on a very limited range of facial features.
Chris
Studies show that systems were much more likely to misidentify you if you are a Black woman. And in general, the systems performed worse on people with dark skin, compared to light-skinned people. Worse on women, compared to men. And worse on older people and children, compared to people in other age groups. So let’s just call that what it is: tech-enabled racist, sexist, ageist discrimination.
Ella
Errors have consequences. Faulty matches and mismatches can mean that you can get stopped and questioned about things you’ve had no involvement with whatsoever. You might even get arrested if a facial recognition system says that you look quite like someone who may have done something or other. Because the computer is always right, isn’t it?
But of course, it’s not just about the errors. As Dr. Joy Buolamwini, founder of the Algorithmic Justice League, says: facial recognition tech threatens civil rights and liberties even when it works as intended. If you wanted to increase discrimination in society, facial recognition systems are your friends. These systems are often first used on people who are somehow seen as ‘dangerous’ in society. Perhaps you’re considered illegal for seeking refuge? Perhaps you’ve been labeled a ‘criminal’? Perhaps simply because you’re poor?
Chris
Sociologist Simone Browne even traces the rise of biometrics—the measuring and calculating of things about people’s bodies—all the way back to the mass-scale transatlantic abduction and enslavement operations in which enslavers would mark people’s bodies to identify, capture and control them. Could there be a direct line from those abhorrent practices to the electronic bio-monitoring that we’re seeing today?1
It looks like with facial recognition tech, we are well on our way to what author and activist Cory Doctorow calls the ‘shitty technology adoption curve’. The shitty technology adoption curve is, in Doctorow’s words:
“… when you have a manifestly terrible idea for an oppressive technology, you can’t ram it down the throats of rich, powerful people who get to say ‘No’.”
Instead:
“You have to find people whose complaints no one will listen to.”
Which, in Doctorow’s diagnosis, means that the worst tech ideas are first trialed on people who are in prison, people seeking asylum, people with a mental health crisis. And from there, its use gets expanded to children and manual workers. And ultimately, the tech will have become so ‘normal’ that even wealthy, powerful people voluntarily welcome it into their lives.
Think: willingly setting up remote-monitored, 24/7 home cameras and snoop speakers, leaving you continually surveilled.
The UK now plans to monitor foreign offenders with facial recognition snoop watches. Mind you, they’re trialling this not just on offenders. No, on foreign offenders. How could anyone possibly object? [Sarcasm]
Ella
With the shitty technology adoption curve, though, you can clearly see the endpoint. Perpetual, blanket surveillance of anyone who dares to go out into public space, or who dares to be present online. As journalist Kashmir Hill captures the vision, if facial recognition becomes ubiquitous, then:
“Searching someone by face could become as easy as googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable, and his or her home address would be only a few clicks away.”
Chris
“It would herald the end of public anonymity.”
Welcome to the future.
Ella and Chris
Or, as we can say, with gratitude to pastor Martin Niemöller:
First they used it on the prisoners, and I did not speak out — Because I wasn’t a prisoner.
Then they used it on the foreigners, and I did not speak out — Because I was not a foreigner.
Then they used it on our children and I did not speak out — Because I was no longer a child.
Then they used it on the workers, and I did not speak out — Because it was not my job.
And then they used it on me — and there was no one left to speak for me.
Ella
If you’re not so keen on being a living, smiling, walking barcode, then what can you do? If, by some miracle, you had access to either the query image, the algorithms used, or the full reference database used for the face matching, then you could, in theory, use some anti-facial-recognition software to disrupt the face recognition process. However, in this non-miracle world called ‘reality’, such opportunities are quite unlikely to arise.
Chris
Now, one thing that you can do, however, is to try to make sure that your facial barcode doesn’t get picked up whenever you take part in public life. Already in April 2010—over a decade ago!—researcher Adam Harvey showed that, with the right sort of makeup and hair styling, you can make it less likely that a facial recognition system will actually pick up your face. How would you do that?
Ella
The basic idea is that you make your face look less face-y. Many face recognition algorithms expect that, in human faces, certain regions will be brighter and others darker. For example, the nose bridge and the upper cheeks protrude, and so will appear lighter than the eye region, which sits deeper in the face.
Chris
In 2010, Harvey gave some style tips to evade what is known as the Viola-Jones Haar cascade face detection algorithm—developed in 2001 by Paul Viola and Michael Jones, building on the idea of Haar-like features, named after the wavelets introduced by mathematician Alfréd Haar in 1909. Some of the tips Harvey gave were:
- Go for an asymmetrical look, because most face detection algorithms expect symmetry between the left and the right side of a face.
- Use hair, clothing and accessories to obscure the elliptical shape of your head.
- And use contrasting makeup—so light colours on dark skin, or dark colours on light skin—to create unusual patterns and shapes on your face.
- Preferably partially obscure your nose bridge and one or both of your eyes.
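(To make that a bit more concrete, here is a minimal sketch of the kind of Viola-Jones Haar cascade detection Harvey’s tips were aimed at, using OpenCV’s bundled frontal-face cascade. The image file name and the detection parameters are placeholders and common defaults, not anything specific to the systems discussed in this episode.)

```python
import cv2  # pip install opencv-python

# Load OpenCV's bundled Viola-Jones (Haar cascade) frontal-face detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# 'street_photo.jpg' is a placeholder image; the cascade works on greyscale.
image = cv2.imread("street_photo.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The cascade slides windows over the image at several scales, looking for the
# light/dark contrast patterns (Haar-like features) typical of frontal faces.
# CV Dazzle-style makeup and hair work by breaking exactly these expected patterns.
faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

print(f"Faces detected: {len(faces)}")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```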
Ella
Harvey called their proof of concept ‘CV Dazzle’, because you’re trying to dazzle and overwhelm the computer vision algorithm that’s trying to pick up your face. And in recent years, the idea got so popular that a group of Londoners started hosting what they called ‘Dazzle Clubs’: basically walking tours through the city donning CV Dazzle camouflage, hoping to taunt London’s many, many surveillance cameras.
Chris
But beware: newer, better algorithms get developed all the time. So you really have to tailor your camouflage to the algorithm that you’re trying to evade. For instance, early on in the ongoing COVID-19 pandemic, some facial recognition algorithms were initially thrown off, because they could no longer capture people’s full faces when everyone was wearing face masks. But two years on, with many countries going into their seventh, eighth, or ninth wave of infection, these algorithms have been duly retrained so that they can find matches even when you are masked.
Ella
Worse yet, the newest recognition technologies capture not just your face, but your entire body. More predictable, less easy to hide.
Now, you may think that wearing bombastic looks wherever you go is a bit too much work, every time you’re just walking the dog or going to the dentist. Also, you may find it a bit isolating? Individualistic? Each person fending for themselves.
Chris
Adam Harvey, who developed CV Dazzle, acknowledges this point. They say:
“It is important to question the systems, attempt to break them, and explore them on your own terms. More effort is needed to counter mass surveillance from both individuals and groups.”
Ella
In other words: Yes, makeup and hairstyling might be a bit individualistic. But individual actions can actually spark group uprisings, if everyone works together.
So what might some of that collective action to avoid omnipresent facial tracking look like? You might think of camera disruption. A well-placed laser light, a piece of cloth, or stickers can do wonders in blocking a lens. This tactic was used in 2019 by protesters in Hong Kong, who put on a masterclass in digital security.
Or how about the good old social, legal route? People can collectively say: We don’t want this. We don’t want it in our school. We don’t want it in our city. We don’t want it in our workplace. We don’t want it.
On a global scale, Belgium and Luxembourg already have outright facial recognition bans in place. Morocco had a temporary moratorium on the technology, though it let it expire. Morocco, you were doing so well! Several cities across the world are on their way to implementing similar bans.
Yes, yes, I know. Facial recognition tech is supposed to be fast-track, quick and convenient, especially if you’re rich. But might it be time for a nice collective stop sign to these supposed conveniences? For otherwise, there’s just one question, really.
Ella and Chris
When they came for the prisoners, when they used it on the workers, when they dumped it on the children: Did you speak out?
Chris
Thank you for listening to Vulnerable By Design this week! If you would like to hear more, or get in touch, you’ll find all of our episodes and more information on vulnerablebydesign.net.
Ella
I am Ella Hillström …
Chris
… and I am Chris Onrust. Thank you for listening and bye for now.
Episode links
- TikTok Captures Your Face, by Jeannie Paterson and Niels Wouters, Pursuit, 26 July 2021.
- Fears for Children’s Privacy as Delhi Schools Install Facial Recognition, by Rina Chandran, 2 March 2021.
- Chinese City Uses Surveillance Tech to Shame Citizens for Wearing Pyjamas, by Daniel Van Boom, CNET, 22 January 2022.
- Facial Recognition Technology (FRT): 100 Countries Analyzed, by Paul Bischoff, Comparitech, 8 June 2021.
- 20 Facial Recognition Statistics to Scan Through in 2022, by Nick Galov, WebTribunal.
- Move over Humans, This Startup Is Making Facial Recognition for Pets, by Karen Chiu, South China Morning Post, 12 July 2019.
- Viola Jones Algorithm and Haar Cascade Classifier, by Mrinal Tyagi, 16 July 2021.
- Clearview AI, Principles.
- The Secretive Company That Might End Privacy as We Know It, by Kashmir Hill, The New York Times, 18 January 2020.
- When the Robot Doesn’t See Dark Skin, by Joy Buolamwini, The New York Times, 21 June 2018.
- How Surveillance Has Always Reinforced Racism, interview with Simone Browne by Sidney Fussell, Wired, 19 June 2022.
- Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use, by Drew Harwell, The Washington Post, 19 December 2019.
- Dark Matters: On the Surveillance of Blackness, by Simone Browne, 2015.
- Facial Recognition Smartwatches to Be Used to Monitor Foreign Offenders in UK, by Nicola Kelly, The Guardian, 5 August 2022.
- SoK: Anti-Facial Recognition Technology, by Emily Wenger, Shawn Shan, Haitao Zheng, and Ben Y. Zhao, arXiv, 8 December 2021.
- CV Dazzle, by Adam Harvey.
- The Dazzle Club, 2019-2021.
- The Right to Hide? Anti-Surveillance Camouflage and the Aestheticization of Resistance, by Torin Monahan, 3 April 2015.
- Resisting Smartness, Are.na collection by Shannon Mattern.
- Facial Recognition Is No Match for Face Masks, but Things Are Changing Fast, by Khari Johnson, VentureBeat, 8 April 2020.
- How Hong Kong Protesters Evade Authorities with Tech, by The Wall Street Journal, 16 September 2019.
Simone Browne argues that there is indeed such a direct link between contemporary biometrics and the historical enslavement of people. Read: Dark Matters: On the Surveillance of Blackness (2015), published by Duke University Press.