Adobe Neural Net Detects Photoshop Shenanigans

Photoshop can take a bad picture and make it look better. But it can also take a picture of you smiling and turn it into a picture of you frowning. Altering images and video can of course be benign, but it can also serve nefarious purposes. Adobe teamed up with researchers at Berkeley to see if they could teach a computer to detect a very specific type of photo manipulation. As it turns out, they could.

The test images were retouched with a Photoshop feature called Face-Aware Liquify, which alters facial expressions. Slightly more than half of the people tested, barely better than a coin flip, could tell which picture was the original and which had been retouched. After sufficient training, however, Adobe’s neural network could solve the same puzzle correctly 99% of the time.

It might seem odd to focus on that specific type of edit, but Face-Aware Liquify is useful for making very subtle changes to a person’s face. Earlier research worked on detecting cruder manipulations.

It sounds as though the neural network could determine which of two photos was altered. That is an easier problem than identifying a single picture as altered without another photo to compare against, since knowing that exactly one of the pair is fake means the network only has to pick the more suspicious one. Standalone detection would be a lot more useful, but probably a lot more difficult as well.
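To make that distinction concrete, here is a minimal sketch of the two tasks. This is emphatically not Adobe’s model: the toy PyTorch CNN and every name in it are illustrative assumptions. The point is only that standalone detection needs a calibrated threshold, while the pairwise version reduces to comparing two scores.

```python
# Sketch only: a tiny scoring network, not Adobe's actual detector.
import torch
import torch.nn as nn

class FakeScorer(nn.Module):
    """Scores one RGB image; higher output = more likely manipulated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

scorer = FakeScorer()

# Standalone detection: threshold a single score. The threshold must be
# calibrated, and the model gets no hint of what "unaltered" looks like.
img = torch.rand(1, 3, 224, 224)            # stand-in for a real photo
is_fake = torch.sigmoid(scorer(img)) > 0.5

# Pairwise detection: we are told exactly one of the two is manipulated,
# so we only compare scores. No threshold, no calibration needed.
photo_a = torch.rand(1, 3, 224, 224)
photo_b = torch.rand(1, 3, 224, 224)
fake_index = int(scorer(photo_b) > scorer(photo_a))  # 0 = A, 1 = B
```

The pairwise comparison benefits from a strong prior (exactly one fake per pair), which is part of why the 99% figure doesn’t translate directly to scanning images in the wild.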

We suppose neural networks detecting fake photos is no more outlandish than asking them to judge our photography. We’ve even seen them correct for depth of field.

13 thoughts on “Adobe Neural Net Detects Photoshop Shenanigans”

    1. Nothing; the people who care whether or not something is fake could generally figure it out already with traditional fact-checking methods and a bit of good sense.

      The viewers of erotic deepfakes know and don’t care, and the target audience of political deepfakes already hates the person being faked, so they’re not going to look too critically into it.

      I suppose video platforms could use this sort of algorithm to detect and automatically remove faked videos, but given Facebook’s recent refusal to remove a video that they knew was altered for political purposes, I don’t see that being likely.

  1. Wow, the SJW crowd will explode on Adobe for this one. They wanted to literally kill the maker of MakeApp; just think what they will do when they believe they can’t chop their Insta photos any more.

    1. They’ll pay Adobe for the newer versions of Photoshop, which use this as an adversarial AI to learn how to make changes that aren’t detectable?

      With Adobe making both the editing system and the detection system, and playing them off each other, they’ve created an entirely new revenue stream (or two).

    2. Yet another example of “SJW” used to mean “someone I don’t like”. Note that using that label, instead of actually describing a group this would enrage, makes you look like a shallow whiner.

  2. There is already software capable of detecting altered images, like Tungstene from exomakina, but without a neural network (in fact, I don’t know if that’s true). I wonder if there are any differences between them in accuracy.

  3. Would be interesting to see how the humans would fare in recognizing the fakes after being shown a training set, like the AI (I guess they went in ‘naive’). I for one got much better at spotting deepfakes after seeing a few real>altered transitions.

    1. Yes, that would actually be interesting and could potentially have a positive impact on society. Learning to question things that don’t seem quite right in the small should encourage critical thinking in the large. One can dream…

  4. While it is extremely impressive that the algorithm can take two pictures and determine which one is fake, call me when you double-blind the algorithm and it can tell when neither or both of the pictures are fake. Knowing that one of the pictures is guaranteed fake does not really have utility in everyday use. I do understand that I am talking about the much harder problem of determining whether a picture is fake without the comparison, but that would be a completely different algorithm, not an evolution of this one.

  5. The humans were not really presented with the same data as the software was, so the results are as fake as the subject matter. Give me a fake image and I will use GIMP and G’MIC to pull it apart in different mathematical ways to see if it has been manipulated (a sketch of one such technique follows below).
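For anyone who wants to try that kind of teardown, one classic “mathematical way” to pull an image apart is error-level analysis: re-save the image as JPEG at a known quality and amplify the difference, since regions edited after the last save often recompress differently. The sketch below uses Pillow rather than GIMP or G’MIC (an assumption, just to keep the example self-contained and scriptable); the quality and scale values are arbitrary starting points.

```python
# Error-level analysis (ELA) sketch: highlight regions that recompress
# differently from the rest of the image. Pillow is assumed here; GIMP
# and G'MIC offer comparable operations interactively.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=20):
    original = Image.open(path).convert("RGB")
    # Round-trip the image through JPEG at a fixed, known quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Amplify the per-pixel difference so subtle errors become visible.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

# Hypothetical usage; "suspect.jpg" is a placeholder filename:
# error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

ELA is a blunt instrument: a clean result doesn’t prove an image is authentic, but it’s a quick first pass before reaching for heavier forensic tools.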
