The Cloak Of Invisibility Against Image Recognition

Adversarial attacks are nothing new to the world of deep networks used for image recognition, but as deep learning research grows, more flaws are uncovered. A team at KU Leuven in Belgium has demonstrated how simply holding a colored printout near the torso can render a person invisible to image recognition systems based on convolutional neural networks.

Convolutional Neural Networks, or CNNs, are a class of deep learning networks that reduce the number of computations required by building hierarchical patterns from smaller, simpler ones. They are becoming the norm for image recognition applications and are widely deployed in the field. In this new paper, the addition of a colored patch is shown to confuse the YOLOv2 object detector by adding noise that disrupts the calculations of the CNN. The patch is not random; it is generated using the optimization process described in the publication.
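To give a feel for how such a patch comes about, here is a minimal PyTorch sketch of the general idea: treat the patch pixels as trainable parameters and run gradient descent to suppress the detector's confidence. The detector and paste function below are dummy stand-ins (the paper targets YOLOv2 with its own placement logic and printability constraints), so the specifics here are purely illustrative:

```python
import torch

# Dummy stand-ins: the paper uses YOLOv2 with careful patch placement;
# these placeholders just keep the sketch runnable.
def detect_person_scores(images):
    # Placeholder "detector": returns one confidence score per image.
    return images.mean(dim=(1, 2, 3))

def paste_patch(images, patch, x=100, y=100):
    # Overlay the patch at a fixed torso-ish position in every image.
    images = images.clone()
    _, ph, pw = patch.shape
    images[:, :, y:y + ph, x:x + pw] = patch
    return images

patch = torch.rand(3, 64, 64, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([patch], lr=0.03)

for step in range(100):
    images = torch.rand(8, 3, 416, 416)             # dummy batch of frames
    patched = paste_patch(images, patch.clamp(0, 1))
    scores = detect_person_scores(patched)
    loss = scores.mean()                            # minimize detection score
    optimizer.zero_grad()
    loss.backward()                                 # gradients reach the patch
    optimizer.step()
```

In the paper's actual pipeline the loss also includes terms that keep the patch smooth and printable, and training runs over many poses and scales so the printed patch still works once it leaves the screen.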

This attack could be implemented by printing the disruptive pattern on a t-shirt, making the wearer invisible to surveillance system detection. You can read the paper [PDF] that outlines the generation of the adversarial patch. Image recognition camouflage that works on Google's Inception has been documented in the past, and we hope to see more such hacks in the future. It's a new world out there, where hacking is as colorful as ever.

15 thoughts on "The Cloak Of Invisibility Against Image Recognition"

    1. Wear a T-shirt of a picture of a person wearing a T-shirt with a picture of a person on it, with the person on that T-shirt wearing a picture of another person, and so on down to the limit of your silk screening resolution. See if you can send the image recognition into an endless loop.

  1. Can you accidentally trick a car into not recognizing you as a pedestrian? I'm all for sticking it to the machine, but I don't want the machine sticking [it] to me.

    1. And that would lead to a counter-detection surveillance system, which homes in on those fashion items and flags them for deep inspection. If a solution were mass-produced, it would no longer work as intended. And any solution must be designed to fool multiple systems; the above design only fools one.

  2. Some of the new systems in China use IR cameras plus regular cameras. They should be able to spot the warm body and not be fooled by a color picture or a picture of a face. They can also check for abnormal body temperature, identifying fever for disease control.

  3. This sort of defense would likely be defeated by programming the surveillance software to ignore information inside of square or rectangular edges. Of course, next: really odd blobs with the same sort of info inside.
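For the curious, the countermeasure suggested in the last comment, flagging high-contrast rectangular regions before they reach the detector, could be prototyped in a few lines of OpenCV. A minimal sketch, assuming the OpenCV 4.x API and purely illustrative thresholds:

```python
import cv2

def find_rectangular_patches(image_bgr, min_area=2000):
    """Flag roughly rectangular, high-contrast regions as candidate patches."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # edge thresholds are guesses
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        # Four corners and enough area: treat as a suspect printed patch.
        if len(approx) == 4 and cv2.contourArea(approx) > min_area:
            boxes.append(cv2.boundingRect(approx))
    return boxes
```

As the commenter notes, the obvious counter-counter is a patch with no clean rectangular border at all.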
