Many Chinese cities, among them Ningbo, are investing heavily in AI and facial recognition technology. Uses range from border control — at Shanghai’s international airport and the border crossing with Macau — to the trivial: shaming jaywalkers.
In Ningbo, cameras oversee the intersections and use facial recognition to shame offenders by putting their faces up on large displays for all to see, and presumably to mutter “tsk-tsk” at. So it came as a shock to Dong Mingzhu, the chairwoman of China’s largest air conditioner firm, to see her own face on the wall of shame when she’d done nothing wrong. The AI had picked up her face from an ad on the side of a passing bus.
False positives in detecting jaywalkers are mostly harmless and maybe even amusing, for now. But the city of Shenzhen has a deal in the works with cellphone service providers to identify the offenders personally and send them a text message, and eventually a fine, directly to their cell phone. One can imagine this getting Orwellian pretty fast.
Facial recognition has been explored for decades, and it is now reaching a tipping point where the technology is starting to have real consequences for people, and not just in the ways dystopian sci-fi has portrayed. Whether it’s racist, inaccurate, or easily spoofed, getting computers to pick out faces correctly has been fraught with problems from the beginning. With more and more companies and governments using it, and having increasing impact on the public, the stakes are getting higher.
How it Works
Your face is like a snowflake: delicate and unique. While some people struggle to tell faces apart, software can take accurate measurements of many of a face’s dimensions, including the distance from eye to eye, from forehead to chin, and other relative measurements. Put these together and you get a signature of metrics that can identify someone. From there, image analysis on photos or video can find faces by locating zones within a particular range of colors, compute the metrics, compare the results against a database, and return a match.
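To make the “signature of metrics” idea concrete, here’s a minimal sketch (mine, not from any particular product) that turns a handful of landmark coordinates into a distance-based signature and matches it against a toy database by nearest neighbor. The landmark names and positions are assumptions standing in for whatever an upstream face detector would report.

```python
import numpy as np

# Hypothetical landmark positions (x, y) for one face, as an upstream
# detector might report them. The names and values are made up.
landmarks = {
    "left_eye":  np.array([112.0, 140.0]),
    "right_eye": np.array([178.0, 141.0]),
    "nose_tip":  np.array([145.0, 185.0]),
    "chin":      np.array([146.0, 250.0]),
}

def signature(points):
    """Pairwise distances between landmarks, normalized by the eye-to-eye
    distance so the signature is roughly scale-invariant."""
    names = sorted(points)
    dists = [np.linalg.norm(points[a] - points[b])
             for i, a in enumerate(names) for b in names[i + 1:]]
    eye_dist = np.linalg.norm(points["left_eye"] - points["right_eye"])
    return np.array(dists) / eye_dist

# A toy "database" of known signatures (random stand-ins, plus one near-match).
rng = np.random.default_rng(0)
database = {"alice": rng.normal(1.5, 0.3, 6), "bob": rng.normal(1.5, 0.3, 6)}
database["carol"] = signature(landmarks) + rng.normal(0, 0.01, 6)

# Identify the probe face by nearest-neighbor search over the database.
probe = signature(landmarks)
best = min(database, key=lambda name: np.linalg.norm(database[name] - probe))
print("best match:", best)
```

Real systems use far richer features (learned embeddings rather than a few hand-picked distances), but the match-against-a-database step works on the same principle.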
Governments, which have access to driver’s license photos, booking photos, and other large collections of images, can put together immense sets of metrics and refine their algorithms using subsets as training data. Companies like Facebook also have huge datasets at their fingertips: photos already matched to the people in them, plus your friend list tied into the facial recognition, giving a high likelihood of accurately identifying the other faces in the photos.
In this way, Facebook may even have a leg up on governments; it has metadata about network links that increases the chances of positive identification. It also has more accurate and more recent data. But possibly most important, its training data includes not just mugshots taken from the same angles under the same lighting conditions, but real situations with varying light levels, moods, angles, obstructions, and so on, so it can train its models on much richer data. While law enforcement may have access to lots of mugshots, matching them up to video camera feeds to extract faces is a lot more difficult than CSI:Miami would have you believe.
The great thing is you can go play with yourself using OpenCV and your own camera. OpenCV has a face detection algorithm, and the tutorial walks you through all the math and complexity of identifying faces in photographs.
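If you want to try it, something along these lines will do: a rough sketch (not the tutorial’s exact code) that draws boxes around detected faces from a webcam using the frontal-face Haar cascade that ships with OpenCV. The camera index and the quit key are arbitrary choices.

```python
import cv2

# The frontal-face Haar cascade ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection works on grayscale; tune scaleFactor/minNeighbors to taste.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Note that this only detects faces; recognizing whose face it is requires the extra matching step described above.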
To Err is AI
Facial recognition is imperfect. In addition to picking up jaywalking bus advertisements, facial recognition made headlines recently when Microsoft’s Face API couldn’t recognize the faces of people of color as well as it could those of white people; it turned out the training data didn’t include enough dark-skinned people. And in a study by the ACLU, Amazon’s Rekognition software falsely matched 28 members of Congress with mugshots in its database.
The training set isn’t the only problem for recognition. Your mood also has an impact. Your angry face standing at the back of the line for the TSA looks different from your relieved face when you are about to exit the DMV. The angle of the shot must be accounted for when calculating the metrics, and subtle variations in lighting can change a few pixels just enough to get a different value. Someone who knows they are being recorded may be able to change their facial features enough to fool the algorithm as well. Facial recognition for the purpose of access has very different parameters from facial recognition for identification without consent. Accounting for all of these variables and slight changes is extremely difficult, if not impossible, and in a situation where subtle differences are barely distinguishable from noise, the likelihood of error is high.
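As a toy illustration of that noise problem (my own sketch, not a real recognition pipeline), take a stored 128-number face signature, perturb it the way changed lighting, angle, or expression might, and watch a fixed similarity threshold start rejecting the right person. The dimensions, noise levels, and threshold here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
enrolled = rng.normal(size=128)   # stored "signature" for one person
enrolled /= np.linalg.norm(enrolled)
THRESHOLD = 0.9                   # cosine similarity required to declare a match

for sigma in (0.02, 0.05, 0.1, 0.2, 0.4):
    # Perturb the signature, as changed lighting/angle/expression might.
    probes = enrolled + rng.normal(scale=sigma, size=(1000, 128))
    probes /= np.linalg.norm(probes, axis=1, keepdims=True)
    similarity = probes @ enrolled
    miss_rate = np.mean(similarity < THRESHOLD)
    print(f"noise {sigma:.2f}: missed the right person {miss_rate:.1%} of the time")
```

Even in this idealized setup, modest noise pushes the miss rate from near zero to near certainty; loosening the threshold to compensate just trades misses for false matches.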
Increasing the size of the database can also increase the likelihood of confusion. If the database has multiple people with similar metrics, slight variations in the source image can result in slightly different metrics, which then identify the wrong person. Think of all the times when a TV-show officer has pulled out a book of mugshots and asked someone to try to identify a perp. If you aren’t in the book, you are pretty safe. But if the book is compiled by an entity that has everyone’s picture, you’re in the book whether you like it or not.
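One way to see why “everyone is in the book” matters: even a tiny per-comparison false-match rate compounds when a probe face is checked against a huge gallery. A rough back-of-the-envelope sketch, with a purely illustrative per-comparison rate:

```python
# Probability that a single probe face falsely matches at least one entry
# when compared against a gallery of N people. The per-comparison
# false-match rate is made up for illustration.
per_comparison_fmr = 1e-6   # one false match per million comparisons

for gallery_size in (1_000, 100_000, 10_000_000):
    p_any_false_match = 1 - (1 - per_comparison_fmr) ** gallery_size
    print(f"gallery of {gallery_size:>10,}: "
          f"{p_any_false_match:.1%} chance of a false match somewhere")
```

With a thousand people enrolled, a false hit is unlikely; with ten million, it is nearly guaranteed.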
Some facial recognition systems are trained to identify characteristics without identifying the person. For example, some advertising companies are looking at customizing ads for passersby based on what the system infers to be their sex or age. Unfortunately, this assumption of gender can be offensive and reinforce stereotypes, in a world where it’s already tough enough when people do it. A static billboard is one thing, but a billboard that judges you and offers you products based on your appearance may not last long. If we ever reach a day where sex has to be confirmed by an algorithm before an impatient person can be granted access to a bathroom, I will be among the first to foil the cameras.
The Consequences
Which bathroom to use isn’t as bad as it gets, though. Imagine if self-driving cars were to use aspects of facial recognition to solve the trolley problem. Someone would have to define, in code or in law, which characteristics of a human are more valuable than others, possibly leading to a split-second decision by a computer that one person’s life is worth 3/5 of another person’s.
When facial recognition inevitably gets it wrong and misidentifies someone, or when their face is copied and used maliciously, it can have horrible and long-lasting effects in the same way that identity theft can ruin someone’s credit for years. China’s cameras are being linked to a new social credit system in which people who make small mistakes are penalized in a way that publicly shames them and affects their ability to live and work. When the system gets it wrong, and it already does, it will ruin innocent people.
Faces are being used to grant access to phones, computers, and buildings. Facial recognition is being rolled out in China as part of a social score system that penalizes people for traffic offenses. Straight out of the movie Minority Report, facial feature recognition is being used to customize advertisements in public. In an application we’ve all seen coming for a long time because of its use on TV, facial recognition is being used in law enforcement to identify wanted suspects. The TSA is also getting on board, and has been testing fingerprints and facial recognition to verify the identity of travelers.
As the uses of facial recognition grow, so will the number of innocent people falsely accused; it’s simply a matter of percentages. If a system recognizes people correctly 99.5% of the time, then for every 10,000 traffic incidents it analyzes, whether per day or per week, 50 innocent people will receive fines. Some 70,000,000 people flow through Shanghai’s airport per year; 99.5% accuracy means nearly 1,000 false identifications per day.
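For what it’s worth, the arithmetic behind those figures is simple to check. A quick sketch using the same numbers as above:

```python
accuracy = 0.995
error_rate = 1 - accuracy           # 0.5% of identifications are wrong

# Jaywalking-style scenario: 10,000 incidents analyzed.
incidents = 10_000
print("false identifications per batch:", round(incidents * error_rate))   # 50

# Airport scenario: 70 million travelers per year.
travelers_per_year = 70_000_000
false_per_year = travelers_per_year * error_rate
print("false identifications per day:", round(false_per_year / 365))       # ~959
```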
Fooling Facial Recognition
Imagine that you wish to opt out of a facial recognition system. With carefully applied makeup, it’s possible to obscure the face in such a way as to convince the neural network that there isn’t a face at all. Other accessories, like sunglasses, can be effective. Failing that, wearing a point light source on the face, such as a headlamp, can saturate the camera. Even the San Francisco uniform, a hoodie, can obscure the face enough to prevent recognition.
When access is the goal, a common trick is to hold up a printed photo of the person the system expects to see. Windows 10 was vulnerable to this for a while, but the problem has since been fixed. Computers are getting smarter about collecting 3D data to make spoofing more difficult, but it’s still relatively easy to do.
Of course, you shouldn’t be using your face as a password anyway; using something that is public to get access to something that is private is bad security practice. Besides being easy to spoof, your face can be used by law enforcement to get access to your phone or computer if it uses Face ID to unlock. With so many facial recognition databases growing in size every day, their value to hackers is increasing, and the public release of those databases has the power to make everybody’s face useless as a password forever.
Conclusions
Facial recognition changes our idea of what can be considered private. While US law has maintained for a long time that your presence in a public place is considered public, the reality was that publicly available information was different from publicly accessible information, and one’s daily habits could generally be considered private, with the exception of celebrities. Now we are almost in an age where everyone can be scrutinized and put on display as thoroughly as a celebrity by the paparazzi, and just like the paparazzi the public won’t be satisfied by the mundane but will instead be looking for the freak in all of us. Someone with a camera outside a bar will be able to identify everyone that goes in and how frequently, and publish that information to people who will be critical, unforgiving, and who will use that information against them.
Thankfully, there are a lot of people who are concerned, and they are having an impact. Google has pledged to refrain from selling facial recognition products. Amazon has admitted to working with law enforcement to sell its Rekognition software, but that arrangement is under scrutiny by some members of Congress. Microsoft recently published a detailed and insightful position on facial recognition that’s very much worth a read. Even the Chinese AI unicorns are concerned.
Whether you’re playing with facial recognition at home, crossing borders, or just crossing the street, cameras are watching you. What is happening with those images is impressive and intimidating, and just like all new technologies is prone to error and ripe for abuse. What we’d like to see is more transparency in implementation and the right to opt out, and these issues will doubtless play themselves out in the world’s governments and courtrooms for years to come.
“Someone with a camera outside a bar will be able to identify everyone that goes in and how frequently, and publish that information to people who will be critical, unforgiving, and who will use that information against them.”
How about personal cameras? No social impact there.
A few very true statements here. When I saw my first presentations of facial recognition software in the early 1990s by some Israeli company (extremely expensive licenses) I was impressed by how good the hit/fail ratio actually was.
I am NOT impressed by the almost insignificant progress since then. There are SO MANY false positives; most systems (even the professional ones I have access to) both give false positives AND fail to recognize one and the same person in two consecutive frames of video footage due to ever so small contrast issues.
On the other hand: Training is hard. I am working in the field, and creating training data that does not overshoot and at the same time provides enough variance to catch even the smallest deviations is almost impossible.
When it comes to different racial (sorry if that term should be politically incorrect, I am an old man and don’t understand today’s language rules) traits, the COLOR of the skin is not important, since most systems work on low-pass-filtered gradients (else they would have even more problems with lighting conditions anyway). It is, in fact, the VERY different relative landmark ratios that you find on Asian, Caucasian, African, American etc. faces. Once you begin mixing them into ONE model, you actually lose precision. So what you end up with is a skin-tone-dependent multi-model setup where you try to fit the face-recognition model to the assumed “racial category”. Needless to say, this is quite error-prone.
>”I am NOT impressed by the almost insignificant progress there as of today. ”
The traditional facial recognition algorithms that rely on statistical inference face an uphill battle due to diminishing returns. Practically all general-purpose image recognition algorithms hit a wall around the 95% mark, where false positive/negative results skyrocket when you try to shave off that last 5%.
Roughly speaking as a general principle, to improve accuracy, you need to feed the algorithm approximately twice the data and have it compute twice as many correlations to shave half the error away. That’s because it relies on simple probabilities rather than understanding what it is actually looking at.
The machine has syntax but no semantics – data symbols without first-hand meaning, correlation without context – which is the point of the Chinese Room argument: it’s not intelligent, it’s just very complex. This route to AI will never result in a true AI, and here’s where the shortcomings become apparent.
Of course that doesn’t mean you can’t use this sort of inference to recognize images. It’s just that you might need another layer of reasoning on top, sort of like a cortex of a kind…
Your statement makes sense.
In regard to racial profiling, I recall reading somewhere that any “feature” commonly used to identify “race”,
let’s say for instance black curly hair or “oriental eyes”, is also distributed among other races in a lesser percentage.
e.g. I had a gf years ago who had “oriental eyes” even though her ancestry was German.
I have a brother-in-law who was questioned by security at an Italian airport, because he had a “Mafia mustache”!
B^)
Over the years I’ve had a few questions because of my skin tone when compared to other “whites” (even in the dark confines of the UK, I have a healthy tan year-round). Statistical outliers FTW!
One wonders how big the datasets will have to be before the realisation dawns that the distribution curves of features are both broad and typically overlapping.
All true, Sheldon. As a Native American, I can easily pass for white, yet I have a fondness for sushi.
Go figure.
Kano discusses it in one of his anthropology books about chimps. Late 80s or early 90s.
It is true across all primate species. When you divide a species into subgroups, the variation between the subgroups is always smaller than the average difference between individuals. This is discussed in the context of sexual dimorphism; it exists in primates, but even this, the largest subgroup difference, is smaller than the difference between individuals for every trait.
So for example, men might be taller than women, but the average height difference between two humans is larger than the average difference between men and women. This is true for all the physical measurements, including hip/waist/chest ratios.
When it comes to the differences between the supposed races, not only are the traits distributed according to a normal distribution, not clumped into delimited groups, but the geographic distribution also varies for every trait. There is an anthropological tool called a cline map, where you take a single trait and map it out geographically. If you take cline maps for a bunch of different traits and overlay them, you find that while each trait appears to divide various villages and regions into different groups, you get a different grouping for every single trait that you map out. So even locally, there are not clear genetic groupings. This is why people who want to push the theory of races pick out “markers,” or individual traits, so that they can pretend there is a grouping. It isn’t that other traits don’t also have groupings, it is just that the other traits contradict the grouping the person doing the test or study desires. So they cherry-pick some “markers” that are the traits that randomly matched their stereotypes.
Skin color in particular maps to a continuum of tones from the darkest to the lightest; even very old anthropology texts are clear about there being no real categorical groupings for skin colors.
As you point out, epicanthic folds (https://en.wikipedia.org/wiki/Epicanthic_fold) are not unique to Asians.
You might want to use that term rather than “oriental eyes”
Before posting I changed to that from “slanted eyes”!
B^)
I still prefer 16:9 eyes
…Why? Should have just stuck with that one. Geez dude.
“A Face Only A Mother Could Love”
“A face for radio”
“A face for radio” and a voice for karaoke..
A face for radio, and a voice for the HAD comment section. ;)
“The great thing is you can go play with yourself using OpenCV and your own camera.”
Nice to know I/We have the option… But personally, I’d add an ‘It’ before yourself, just to ease the full impact of the whole sentence?
Don’t judge.
What’s to stop anyone with a grudge against someone from wearing a face mask of the person they have a grudge against, and doing “naughty things” repetitively?
They have already thought of that.
https://www.scmp.com/tech/article/2169945/china-tests-facial-recognition-border-crossing-hong-kong-zhuhai-macau-bridge
>An infrared thermal imaging camera is able to ensure that the person being inspected is a living person by measuring the body temperature, as opposed to, for example, a paper cut-out with a printout of a face. The thermal imaging doubles up as health quarantine control by measuring forehead temperatures to screen individuals with fever, according to the Shenzhen company.
Yet it still took that lady’s picture from an ad on the side of a bus. I don’t think this is ever going to be sufficiently accurate regardless of the number of tricks and tweaks used, mainly because nobody needs this. Why are we doing this to ourselves as a society? We’re perfectly safe and efficient enough already. This kind of thing is going to make hell out of the human condition.
When I read the passage you quoted, I believe them; an infrared thermal imaging camera is able to do a bunch of stuff, when combined with a system that does all that other stuff.
But often, instead of talking about what the system in question really can do, PR efforts will involve talking about the theoretical capabilities of a technology, in sentences or paragraphs proximate to discussion of the actual device. This makes it sound like they said that their device isn’t fooled by paper cut-outs, when all they really said is that, theoretically, they could make it do that, eventually.
Another fun phrase from your link, “there are no technical barriers.” Of course not; there are merely implementation details of various difficulties, speed bumps large and small, no actual barriers. It sounds to the unwary as if they’ve already worked out the kinks, but the statement only literally means, “Nothing is impossible.”
Think of the “do not fly list.” Once you end up on it you can never get off. I can’t change my face so if some bad data ends up in the police database I won’t be able to show my face in public. Same goes for DNA databases. Once bad data goes in you’re screwed.
I know this will be censored that makes you no better than Red China – consider that
“Unfortunately, this assumption of gender can be offensive and reinforce stereotypes in a world where it’s already tough enough when people do it”
You too HAD?? FFS
“I know this will be censored that makes you no better than Red China – consider that”
Okay I’ll consider it… yep, you’re an idiot.
Empty
it’s Lysenkoism 2.0 with a strong dose of Kafka thrown in for good measure. One can only be amused by it.
As you can see here, people bitching about pc culture are always about a hundred times more common and more unpleasant than the actual pc issues they’re protesting. Just find something nice to contribute or move on.
Some people don’t want a surveillance camera deciding whether they’re a boy or a girl and plastering that on a billboard along with our petty crimes such as spitting out gum. Apparently dinguses like us are designing nonsensical Douglas Adams-style gadgets such as judgemental AI cameras these days for dubious and frustratingly illogical purposes. Sounds perfectly reasonable to me that people might not like that.
We should try to make a habit of asking ourselves why we’re building and programming things like this. Just because we can or because they’re cool isn’t good enough. This is one hell of a terrible invention, and once people get desensitized to this we’ll all be stuck with these shenanigans forever. Significant swathes of our treasured human experience are basically impossible once these ubiquitous robots are constantly scrutinizing our actions and traits.
The thing about “PC culture” is that it’s based on moral panic. When what is acceptable depends on the unwritten rules of popular opinion, people aren’t quite sure what is permissible and what is not, which makes them err on the safe side and apply stronger standards so they wouldn’t fall on the wrong side of the crowd. When everybody is doing the same, it results in a gradual shift in opinion and becomes more and more oppressive.
It becomes a fertile ground for moral entrepreneurs who don’t necessarily even believe in their own rules, but they’ve found that by being loud and acting like they’re pointing out great moral/ethical issues, they can win the support of the panicking crowds who are looking for people to tell them how they should be behaving.
e.g. people often address me as Mr and Mrs in emails because my real name is ambiguous, which has absolutely no impact on anything whatsoever, but I could start acting really upset about it and troll myself some bargaining points or a free coffee that way, since everyone’s afraid of “misgendering” nowadays. I choose not to, because it would be a lousy thing to do.
All sounds fine and good, but the backlash against “PC culture” is also itself a moral panic. And you see way more of what y’all are doing here–complaining about hypothetical offended people who hypothetically insist on politically correct terms–than you see actual people who are offended by being misgendered or whatever. I have never in my entire LIFE encountered somebody who got upset when I misgendered them–and I have done it before, I’m not super PC-sensitive, pobody’s nerfect–yet I see somebody getting all in a tizzy almost every single day in response to these phantom politically correct boogiemen. The people I know with that kind of identity have always been very understanding when I don’t get it. You can’t just assume they’re all like the ones that get overblown in the media; those are fringe cases that the media is exploiting for a big juicy story.
So where’s all the SJWs everyone’s complaining about? It’s just you people shadowboxing! The original article just had a brief aside about it that was in no way inflammatory, saying maybe it would suck if a machine was tasked with determining whether you were a man or a woman, got it wrong, and ended up making you unable to use that service. Seems reasonable; that would be pretty annoying, wouldn’t it?
A moral panic consists of herd-like fear that society will be damaged by the actions of a supposed immoral group that often doesn’t even exist or is completely overblown, like the satanic panic thing in the eighties. People freaking out about the impending oppression of PC culture certainly fit that. I just don’t buy it, I don’t think we’re going to be brought into Orwellian dystopia by PC people.
Google up “CV Dazzle”
It is an ugly looking future
Big Tech is all about turning people into products and resources to be harvested. The people working for these firms do not have our best interests at heart, I view those workers the same way I view the Maoist Red Guard or a KGB agent – a threat to society. They are not good people.
Agreed. Something needs to be done about it–someday there’s going to be a technological solution to prevent any chance of human freedom and it will profoundly break us. I don’t think people are nearly freaked out enough about this stuff.
Try a GIS for “hyperface” instead. Instead of trying to conceal a face a la dazzle, it tries to overwhelm, or at least pollute the data being “looked at”.
Overwhelm? As in one’s so ugly the camera’s “overwhelmed” and bursts into flame?
Nice, this is a lot more practical than CV Dazzle.
Okay, I’ve tried to G**gle “GIS hyperface” and am not sure what you mean by it.
(I don’t think you mean women’s shoes by Nike B^)
Do you have a link to share?
GIS: Google image search (https://www.google.com/search?q=hyperface&tbm=isch)
“GIS” is used to mean “Google Image Search”. Do one of those with the query ‘hyperface’.
“The great thing is you can go play with yourself using OpenCV and your own camera.”
Seems like that goes well beyond facial recognition!
A colleague on an anti-Trump demonstration in the UK recently tells me that some people had painted their faces with brightly colored geometric designs such as squares and triangles, not for psychedelic reasons but because these apparently confuse the facial recognition systems they assumed were being pointed at them to identify and record potential troublemakers. I have no idea if this works or not.
That works against Haar cascade based systems.
My guess is, British law enforcement planted the idea so that when they look at the fuzzy, distant images from different angles it is easy to track who is who. They only need to get one clear shot of each face with the pattern showing, and then they have identification even in the low quality angles.
This reminds me of an episode of The Orville called “Majority Rule”. In that episode, the crew visits a planet where everyone has a social score and can be denied services (such as eating in a restaurant) or even arrested just for having a low score.
What’s being done in China with facial recognition is sickeningly reminiscent of the dystopian nightmare shown in that episode.
I mean, it’s China. Amnesty International has been calling out China on human rights violations for years, with no real progress.
China is already working on a “social credit” system. This automates one of the inputs.
https://www.bbc.com/news/world-asia-china-34592186 “China ‘social credit’: Beijing sets up huge system”
>By 2020, everyone in China will be enrolled in a vast national database that compiles fiscal and government information, including minor traffic violations, and distils it into a single number ranking each citizen.
>One of the most high-profile projects is by Sesame Credit, the financial wing of Alibaba. With 400 million users, Alibaba is the world’s biggest online shopping platform. It’s using its unique database of consumer information to compile individual “social credit” scores.
>Users are encouraged to flaunt their good credit scores to friends, and even potential mates. China’s biggest matchmaking service, Baihe, has teamed up with Sesame to promote clients with good credit scores, giving them prominent spots on the company’s website.
I vote we just put on our Guy Fawkes masks now and never take them off.
Well, in a place that forbids masks, how about face painting the mask?
That won’t work. It turns out your body and its movements can be recognised just as easily, are much harder to disguise, and are also easier to track with lower-grade cameras.
Plus what Ren says. Facial cover is prohibited in many places, often introduced by making use of societal displeasure with veiled Muslim women.
Is there a lack of dystopian sci-fi in China or something?
No, it’s just called “dystopian sci”
Damn that’s brutal. It was supposed to be a warning, not a fucking instruction manual!
I really hope this stuff doesn’t spread but I’m convinced it’s already started.
They just call it a “reality show” there.
The social justice warriors like the author are the West’s equivalent of this mad drive towards conformity and “safe spaces” at the expense of personal liberty and social responsibility.
Stick to reporting on the tech. If you leave your political opinion out, I won’t have to post mine.
Utter bullshit. This is about the same as comparing everyone who disagrees with you to Hitler. You’re not fooling anybody.
The people moaning and complaining about “ess jay dubyoos” are many times more irritating and numerous than these supposed menaces to society they complain about. Never met one single radical sjw in real life, but I’ve met about a thousand of you whiny turds. Just let go already.
Instant Godwin, way to go. You never seeing a sjw might just be caused by you being one.
Anyways, politics on the internet is useless. Especially on HaD. In the comments or in the articles. So as Cartse says, let’s stick to tech.
I don’t think you read the comment carefully. It’s not a Godwin, it’s a reference to Godwin. You obviously can’t discuss Godwin’s law without bringing him up. Saying people who disagree with you are Maoists isn’t functionally any different than Godwin’s law; the villain of choice has only been swapped out. Do you get it yet?
Every year I celebrate Martin Luther King Jr. Day.
SJW’s are real.
Other real-world examples: Elizabeth Freeman, Frederick Douglass, Lucretia Mott, James Madison, Harriet Tubman, Booker T. Washington, Emmeline Pankhurst, Mohandas Gandhi, Maria L. de Hernández, Thich Quang Duc, Elizabeth Peratrovich, Gordon Hirabayashi, Marie Foster, Nelson Mandela, Jackie Forster, Cesar Chavez, Dolores Huerta, Harish Iyer, Malala Yousafzai.
List not exhaustive.
“…One can imagine this getting Orwellian pretty fast. …” What do you mean? That is happening in China, how could it get more Orwellian than that?
Oh trust and believe it can always get worse. What could be worse than a dystopian regime? A dystopian regime with an army of unsleeping robots that can watch and judge more people simultaneously than God and Santa Claus combined.
If China uses this to the benefit of their own power and gets away with it, others will quickly follow suite. It’s not just their problem. This tech is reproducible and if it’s lucrative, somebody will eventually try it here. We’re still in the early days of finding out just how insanely dangerous information and data tech can be. It will get worse, that much is basically guaranteed.
It’s amazing..