We’re all pretty familiar with AI’s ability to create realistic-looking images of people who don’t exist, but here’s an unusual application of that technology: masking a person’s identity without altering the substance of the image itself. The photo’s content and “purpose” (for lack of a better term) remain unchanged, while it becomes impossible to identify the actual person in it. This invites some interesting privacy-related applications.
The paper for Face Anonymization Made Simple has all the details, but the method boils down to using diffusion models to take an input image, automatically pick out the identity-related features, and alter them in a way that looks more or less natural. In this context, “identity-related features” essentially means the key parts of a human face. Other elements of the photo (background, expression, pose, clothing) are left unchanged. The concept has been explored before, but the researchers show that their method is both simpler and better-performing than previous approaches, while remaining versatile.
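The authors’ actual pipeline is a single end-to-end diffusion model, but for a rough sense of how the pieces of such a system fit together, here is a minimal sketch assembled from off-the-shelf parts: facial landmarks to mask the identity-bearing features, and a diffusion inpainting model to repaint only those regions. To be clear, this illustrates the general shape of the idea, not the paper’s method; the model name, prompt, and file names are placeholder choices.

```python
# A minimal sketch of the general idea, NOT the paper's method: locate the
# identity-bearing facial features via landmarks, mask them, and let a
# diffusion inpainting model repaint just those regions. Requires the
# face_recognition, Pillow, numpy, and diffusers packages; "portrait.jpg"
# and the prompt are placeholders.
import face_recognition
import numpy as np
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("portrait.jpg").convert("RGB")

# Find landmark points for the first detected face.
landmarks = face_recognition.face_landmarks(np.array(image))[0]

# Build a mask covering only the identity-related features; everything
# outside the mask (background, hair, pose, clothing) is left untouched.
mask = Image.new("L", image.size, 0)
draw = ImageDraw.Draw(mask)
for feature in ("left_eye", "right_eye", "nose_bridge", "nose_tip",
                "top_lip", "bottom_lip"):
    draw.polygon(landmarks[feature], fill=255)

# Repaint the masked regions. The pipeline works at the model's native
# resolution, so the output may come back resized.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting")
result = pipe(prompt="a photorealistic human face",
              image=image, mask_image=mask).images[0]
result.save("anonymized.jpg")
```

The real method is considerably more refined, but the division of labor is the same: find what identifies the person, alter only that, and leave everything else alone.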
Diffusion models are the essence of AI image generators like Stable Diffusion. The fact that they can be run locally on personal hardware has opened the doors to all kinds of interesting experimentation, like this haunted mirror and other interactive experiments. Forget tweaking dull sliders like “brightness” and “contrast” for an image. How about altering the level of “moss”, “fire”, or “cookie” instead?
Is the photo on the left in the Title Photo “real”?
The eyes creep me out; they made me think that was the AI-altered photo.
Looks (original) like Alyssa Milano…
And the altered version looks slightly Asian (to me, at least)…
The first name on the paper is Han-Wei Kung, there’s your answer.
I think there is an odd reflection partially obscuring her left iris, that’s what makes it look off.
I think that you identified the problem I had with the photo.
And who the heck is Melissa Alono?
It didn’t seem to affect the ear in the second example. Ears are highly unique; some articles on the net say ear identification is similar to facial recognition or even fingerprints.
It is apropos that in applications for certain U.S. identity cards, an ear is required to be visible in the submitted photo (maybe even a specific ear, but I confess I’m not going to research that right now).
I was thinking the same thing, seems hair is untouched as well.
Though with how close the supposedly anonymised version is to the original, it doesn’t seem to me like it would be that tricky to figure out the original person given any context for the image. So it’s more like casually and subtly blurring them out. The height, body type, and positional relationships between all the facial features that have been changed are still going to be pretty unique, and leaving any detail like the ears unchanged makes it fairly trivial, unless you have no context and so are trying to match against all 6 billion active Facebook users or whatever…
Human face uglyfier – AI always makes things moar creepy.
Seems like a pretty excellent way to protect privacy online. I’ll be glad when it’s a plugin for a video editing program.
Even a script within Gimp to make calls to this technique would be sweet!
You fools! You’ve photographed their stunt doubles!
I imagine the Spaceballs universe as having better AI than we do.
As someone with a volunteering side-hustle, I’ve been thinking about something like this for a while now! We need images for social media, but maintaining the subjects’ rights and dignity at the same time is often not easy. If we are transparent about it, we might use anonymized pictures to protect our subjects’ rights. I’m thrilled to present this to the others working on this project!
That seems to be a legitimately good use of this.
Last time I had to do it (also for events), I just pasted cat head PNG images over people’s heads. Works, makes for fun pictures, and drives the point home too.
Of course, such a tool has two ways of working: anonymizing people, or putting your face in place of the original one…
This fails somewhat where people are wearing distinctive clothes, but generally very neat idea.
The world is nothing but lies and AI has made it worse. The AI tells people lies and it’s called a ‘hallucination’. AI generates fake videos of real people doing something they never did, fake audio recordings of real people saying something they never said. And when AI is not being used to tell lies about us it is being used to either police us or replace us. I want out of this dystopian timeline.
Dude you might want to see a therapist about this
They’re valid concerns, but even if they weren’t, you’re just thinking it’s funny to pick on distressed people. Gross.
Every point fred made is a very legitimate issue.
And you think he needs to see a therapist?
Maybe it’s you that is the problem?
“The world is nothing but lies and AI has made it worse.”
^ This is a lie
^^ No, this is a lie
^^^ No, this is a lie
[ad infinitum]
“It’s not a deepfake, it’s an anonymized image!” Case dismissed!
In the end, both sides will use this. I post an anonymized picture publicly, my mom’s AI replaces the random face with mine… Still better than posting my real face. But when she says “oh, your face looks exhausted” or similar, of course she is really looking at some face the AI made up for me.
We can only hope the AI isn’t keeping records of who used the tool, else someone could send it the “anonymized” photo and ask the AI who it sent the image to. No doubt that could be the future we are heading for.
I am wondering, if it were used on a video, is there any way to keep it consistent across frames? So that each face is anonymised to a consistent new model?
I suspect you’d have to use two separate tools – this one to generate the fake and a face replacement tool to then map the same selected fake to every frame of the video
That wouldn’t work, though – you don’t know how the original face is going to change over the video, so it might be totally impossible to map the fake in any realistic fashion. Imagine a person putting on glasses in the video, or someone getting punched in the nose. The remapping could end up needing to restructure the entire thing. You’d be better off with a single tool that looks at the whole thing, determining an anonymization that’s independent of the action in the video.
If it is still the same face, the face-mapping tech should be able to make it match as well as it ever does (which, like all these tools right now, is going to be really great only once you put in a heap of manual effort to fix all the screwups).
It is much the same tech used in mocap animation, just with a bit less precision, as you didn’t get to put tracking dots on your original subject (or have many cameras on them), so the first step is only guessing at how the original face moved. So when it comes to the forceful rearranging of the nose, I doubt any ‘smart’ tool would be convincing anytime soon – how many training images of injured people is it going to get, to understand what that red stuff is, etc.? I’d suspect such a system can handle glasses, though.
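For what it’s worth, the “pick one fake identity per person and reuse it every frame” idea from this thread sketches out roughly like this. The face_recognition and OpenCV calls below are real; generate_fake_face() and paste_face() are hypothetical stand-ins for the anonymizer and the face-swap step:

```python
# Sketch of per-person-consistent video anonymization, as discussed above.
# Faces are matched across frames by embedding distance, so every frame of
# the same person gets the same pre-generated replacement face.
import cv2
import face_recognition

def generate_fake_face(frame, box):
    """Placeholder: run an anonymizer once to invent a replacement face."""
    raise NotImplementedError

def paste_face(frame, fake, box):
    """Placeholder: map the stored fake face onto this frame's face box."""
    raise NotImplementedError

known_encodings = []   # one embedding per distinct person seen so far
fake_faces = {}        # person index -> their fixed replacement face

video = cv2.VideoCapture("input.mp4")
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    boxes = face_recognition.face_locations(rgb)
    for box, enc in zip(boxes, face_recognition.face_encodings(rgb, boxes)):
        matches = face_recognition.compare_faces(known_encodings, enc)
        if True in matches:
            idx = matches.index(True)      # seen this person before
        else:
            idx = len(known_encodings)     # new person: invent a face once
            known_encodings.append(enc)
            fake_faces[idx] = generate_fake_face(rgb, box)
        frame = paste_face(frame, fake_faces[idx], box)
    # ... write the modified frame out ...
```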
Now make it always do Richard D James’ face
(like the Windowlicker music video)
Now you have my attention
Well, will this be a royalty remover? Streaming services should love this for making royalty-free thumbnails. Maybe you can run whole movies through this, change the voice a little… /barf
The real scientific term for the inability to recognize faces is prosopagnosia. Like, for patients who have had strokes or have psychiatric disorders. As with all similar descriptors, people can have it to varying degrees without having the full-blown pathology.
I either subconsciously don’t care much what people look like, or I don’t pay attention much, or something, to the point that I often confuse people in movies with other characters. Even actresses and actors in general are all kinda the same. Tall blonde lady. Got it. Brooding scruffy jacked dude. Got it. I always figured it was because I was bored. I’d make the worst police witness ever!
Anyway, the before and after look darn near identical to me. I had to look forever to even be able to tell if they were different or if it was a “trick.” In any case, all the other contextual clues, like the guy with the ridiculous circus moustache, the insane hat that very few people even wear, and the used-car-salesman jacket, are much easier to identify than super subtle facial alterations.
It is a well known fact in AI circles that even the best AI can’t tell the difference between Lee Marvin and James Coburn.
I tried to think of a use case other than “because we can”. So far the only thing popping into my head is to replace the faces of other people in a photo you’re in. I’ve seen people post photos of themselves and obscure or blur the faces of everyone else in the photo. Maybe this will be less annoying than a photo that looks like it was attacked by a Sharpie or covered in happy faces.
I dunno, I’d imagine a lot of people would be more comfortable with photo releases in dicey situations if the images are anonymized.
That to me says: if you want to take a picture of yourself, learn how to use your camera right and just get all the background subjects pleasingly out of focus. Or pick a better time/place to take your darn photo…
There are already very good “inpainting” algorithms that can remove people from pictures. With AI these should only have gotten better.
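Classical (pre-AI) inpainting is in fact nearly a one-liner in OpenCV; a minimal sketch, assuming you already have a mask marking the person to remove (file names are placeholders):

```python
# Minimal classical inpainting with OpenCV: fills the masked region from
# surrounding pixels. The mask is a white silhouette of whatever should be
# removed, on a black background.
import cv2

image = cv2.imread("photo.jpg")
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)

# INPAINT_TELEA propagates surrounding texture into the masked area;
# inpaintRadius controls the neighborhood considered around each pixel.
result = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("person_removed.jpg", result)
```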
Well, I did not consider that keeping people in a scene might be important…
So it’s like MIDI for faces?
‘Purikura’ photo booths in arcades here in Japan have been doing this sort of thing (not using AI, but essentially the same end result) for many years; you just get to choose what gets filtered and how much.
It’s very popular with teenage girls….
Can it make everyone look like John Malkovich?