AI Face Anonymizer Masks Human Identity In Images

We’re all pretty familiar with AI’s ability to create realistic-looking images of people who don’t exist, but here’s an unusual application of that technology: masking people’s identities without altering the substance of the image itself. The result is that the content and “purpose” (for lack of a better term) of the photo remain unchanged, while identifying the actual person in it becomes impossible. This invites some interesting privacy-related applications.

Originals on the left, anonymized versions on the right. The substance of the images has not changed.

The paper for Face Anonymization Made Simple has all the details, but the method boils down to using diffusion models to take an input image, automatically pick out identity-related features, and alter them in a way that looks more or less natural. Here, identity-related features essentially means the key parts of a human face. Other elements of the photo (background, expression, pose, clothing) are left unchanged. The concept has been explored before, but the researchers show that this versatile method is both simpler and better-performing than previous approaches.
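To make the shape of that pipeline concrete, here is a toy sketch (not the paper’s actual code): `detect_face_region` and `resynthesize` are hypothetical stand-ins for the real face detector and diffusion model, and the image is just a grid of grayscale values. The point it illustrates is the key property described above: only the detected “identity” pixels are rewritten, and everything else passes through untouched.

```python
# Toy sketch of diffusion-based face anonymization. detect_face_region and
# resynthesize are stand-ins for a real detector and diffusion model; the
# point is that only detected pixels change, the background does not.

def detect_face_region(image, threshold=128):
    """Stand-in detector: flag bright pixels as 'identity-related'."""
    return [[px >= threshold for px in row] for row in image]

def resynthesize(px):
    """Stand-in for diffusion-based replacement: a deterministic tweak."""
    return (px * 31 + 7) % 256

def anonymize(image):
    """Rewrite only the masked pixels, leaving the rest of the image alone."""
    mask = detect_face_region(image)
    return [
        [resynthesize(px) if hit else px for px, hit in zip(row, mrow)]
        for row, mrow in zip(image, mask)
    ]

# 2x4 grayscale "image": dark background on the left, bright "face" on the right.
image = [
    [10, 20, 200, 210],
    [30, 40, 220, 230],
]
out = anonymize(image)
print(out)  # background pixels (10, 20, 30, 40) survive unchanged
```

In the real method the replacement is conditioned on the surrounding image, which is why the result still looks like a plausible face rather than noise; the toy version only demonstrates the masking structure.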

Diffusion models are the essence of AI image generators like Stable Diffusion. The fact that they can be run locally on personal hardware has opened the doors to all kinds of interesting experimentation, like this haunted mirror and other interactive experiments. Forget tweaking dull sliders like “brightness” and “contrast” for an image. How about altering the level of “moss”, “fire”, or “cookie” instead?

43 thoughts on “AI Face Anonymizer Masks Human Identity In Images”

  1. It didn’t seem to affect the ear in the second example. Ears are highly distinctive; some articles on the net say ear identification is comparable to facial recognition or even fingerprints.

    1. I was thinking the same thing, seems hair is untouched as well.

      Though with how close the supposedly anonymised version is to the original image, it doesn’t seem to me like it would be that tricky to figure out the original person given any context for the image. So it’s more like casually and subtly blurring them out. The height, body type, and positional relationships between all the facial features that have been changed are still going to be pretty unique, and leaving any detail like the ears unchanged makes it fairly trivial unless you have no context and are trying to match against all 6 billion active facebook users or whatever..

  2. As someone with a volunteering side-hustle, I’ve been thinking about something like this for a while now! We need images for social media, but at the same time maintaining the subjects’ rights and dignity is often not easy. If we are transparent about it, we might use anonymized pictures to protect our subjects’ rights. I’m thrilled to present this to the others working on this project!

  3. The world is nothing but lies and AI has made it worse. The AI tells people lies and it’s called a ‘hallucination’. AI generates fake videos of real people doing something they never did, fake audio recordings of real people saying something they never said. And when AI is not being used to tell lies about us it is being used to either police us or replace us. I want out of this dystopian timeline.

  4. In the end, both sides will use this. I post an anonymized picture publicly, my mom’s AI replaces the random face with mine… Still better than posting my real face. But when she says “oh, your face looks exhausted” or similar, of course she is actually looking at some random face, not mine.

    1. We can only hope the AI isn’t keeping records of who used the tool, else someone could send it the “anonymized” photo and ask the AI who it sent the image to. No doubt that could be the future we are heading for.

      1. That wouldn’t work, though – you don’t know how the original face is going to change over the video, so it might be totally impossible to map the fake in any realistic fashion. Imagine a person putting on glasses in the video, or someone getting punched in the nose. The remapping could end up needing to restructure the entire thing. You’d be better off with a single tool that looks at the whole thing, determining an anonymization that’s independent of the action in the video.

        1. If it is still the same face the face mapping tech should be able to make it match as well as it ever does (which like all these tools right now is going to be really great only once you put in a heap of manual effort to fix all the screwups).

          It is much the same tech used in mocap animation, just with a bit less precision as you didn’t get your original source image to put on all the tracking dots (or have many cameras on them) so the first step is only guessing at how the original face moved. So when it comes to the forceful rearranging of the nose I doubt any ‘smart’ tool would be convincing anytime soon – how many training images is it going to get of injured people to understand what that red stuff is etc. I’d suspect such a system can handle glasses though.

  5. Well, will this be a royalty remover? Streaming services should love this to make royalty free thumbnails. Maybe you can run whole movies through this, change the voice a little… /barf

  6. The scientific term for the inability to recognize faces is prosopagnosia. Like, for patients who have had strokes or have psychiatric disorders. As with all similar descriptors, people can have it to varying degrees without the full-blown pathology.

    I either subconsciously don’t care much what people look like or don’t pay much attention or something, to the point I often confuse people in movies for other characters. Even actresses and actors in general are all kinda the same. Tall blonde lady. Got it. Brooding scruffy jacked dude. Got it. I always figured it was because I was bored. I’d make the worst police witness ever!

    Anyway, the before and after look darn near identical to me. I had to look forever to even be able to tell if they were different or if it was a “trick.” In any case, all the other contextual clues, like the guy with a ridiculous circus moustache, an insane hat that very few people even wear, and a used-car-salesman jacket, are much easier to identify than super subtle facial alterations.

  7. I tried to think of a use case other than “because we can”. So far the only thing popping into my head is to replace the faces of other people in a photo you’re in. I’ve seen people post photos of themselves and obscure or blur the faces of everyone else in the photo. Maybe this will be less annoying than a photo that looked like it was attacked by a sharpie or covered by happy faces.

    1. That to me says if you want to take a picture of yourself learn how to use your camera right and just get all the background subjects pleasingly out of focus. Or pick a better time/place to take your darn photo…

    2. There are already very good “inpainting” algorithms that can remove people from pictures. With AI these should only have gotten better.
      Well, I did not consider that keeping people in a scene might be important…

  8. ‘Purikura’ photo booths in arcades here in Japan have been doing this sort of thing (not using AI, but essentially the same end result) for many years, just that you get to choose what and how filtered the results are.

    It’s very popular with teenage girls….
