Render Yourself Invisible To AI With This Adversarial Sweater Of Doom

Ugly sweater season is rapidly approaching, at least here in the Northern Hemisphere. We’ve always been a bit baffled by the tradition of paying top dollar for a loud, obnoxious sweater that gets worn to exactly one social event a year. We don’t judge, of course, but that’s not to say we wouldn’t look a little more favorably on someone’s fashion choice if it were more like this AI-defeating adversarial ugly sweater.

The idea behind this research from the University of Maryland is not, of course, to inform fashion trends, nor is it to create a practical invisibility cloak. It’s really to probe machine learning systems for vulnerabilities by making small changes to the input while watching for changes in the output. In this case, the ML system was a YOLO-based vision system that has little trouble finding humans in an arbitrary image. The adversarial pattern was generated using a large set of training images, some of which contain the objects of interest, in this case humans. Each time a human is detected, the candidate pattern, which starts out random, is rendered over the image, and the result is reassessed to see how much the pattern lowers the person’s detection score. The pattern is then adjusted and the loop repeats, so it eventually improves to the point where it mostly prevents humans from being recognized. Much more detail is available in the research paper (PDF) if you want to dig into the guts of this.
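
If you want to play with the idea yourself, the core optimization loop is easy to sketch. Below is a minimal PyTorch-flavored sketch of the general adversarial-patch technique, not the paper’s actual pipeline: detector and paste_patch are hypothetical stand-ins for a person detector that returns per-detection confidence scores and a routine that renders the patch over each labeled person.

```python
import torch

def train_patch(detector, paste_patch, loader, steps=1000, lr=0.03):
    """Optimize a patch that suppresses person detections.

    `detector` and `paste_patch` are hypothetical stand-ins, not code
    from the paper: `detector(images)` should return person-confidence
    scores, and `paste_patch(images, boxes, patch)` should render the
    patch over each labeled person in the batch.
    """
    patch = torch.rand(3, 300, 300, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, person_boxes) in zip(range(steps), loader):
        patched = paste_patch(images, person_boxes, patch.clamp(0, 1))
        scores = detector(patched)  # confidence that each person is seen
        loss = scores.mean()        # lower scores mean fewer detections
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

The real pipeline adds refinements like random scaling, rotation, and lighting jitter during training so the pattern still works once it’s printed on fabric and worn; see the paper for the details.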

The pattern, which looks a little like a bad impressionist painting of people buying pumpkins at a market and bears some resemblance to one we’ve seen before in similar work, is said to hold up better than earlier attempts when viewed from different angles. It also makes a spiffy pullover, especially if you’d rather blend in at that Christmas party.

57 thoughts on “Render Yourself Invisible To AI With This Adversarial Sweater Of Doom”

  1. “vulnerabilities” eh, and they are ultimately trying to find them so they can be “fixed” no doubt.

    I have my own views, both on the subject of using such technologies as well as on developing and improving them.

    1. Some $600 safety goggles and a high-powered green DPSS laser to burn out those dystopian Orwellian AI tracking cameras is the only answer at this point.

      The slippery slope is not even a fallacy anymore: they are taking away our freedom step by step.

        1. For the party! After 54 hours and 38 minutes of excruciating torture I can say with certainty that I, of my own volition, love big brother with all my heart and soul, more than anything.

        2. You really think the guy saying “You will own nothing and you will be happy” is in club humanity? I’m not so sure. There’s a good reason they don’t call it the “Earth Economic Forum”…this isn’t their first world.

        1. I was at a fast food place when I saw my picture on a TV screen behind the counter and spotted the lipstick camera peeking at me overhead, and I happened to have a high-powered green laser pointer in my pocket. The effect is not instant, or I should say the instant effect is just the AGC (automatic gain control) compensating, but after nailing it for a minute or two I was able to make a pretty good dent in it as far as taking pixels out. The sensors are more resilient than you might think, but look at the number of YouTube videos where they record welding.

          1. So you’re fine damaging someone else’s property, while inside their building?

            I don’t like the loss of privacy that is becoming more prevalent in public spaces, but this is just vandalism.

        1. Exist without being watched all the time. We are evolved organisms. Part of our evolved nature is that believing we may be secretly watched stresses us. That’s a biological reality, and it’s dumb to ignore it.

          … and it’s actually true that people who find you with face detection/recognition may make decisions to your detriment, including decisions that may be either correct or incorrect from their point of view. For example, you can easily end up as a false positive in somebody’s Evil Detection algorithm. A few false positives mean nothing to them, but being one can mean a lot to you, depending on what reactions are available to them.

          There’s also the matter of inflexible enforcement of rules. The thing about being in charge of enforcing rules is that you end up being absolutist, because (1) any exceptions are chances to be unfair, and you don’t want to be seen as being unfair; and (2) you usually end up feeling a personal investment in the rules, and losing a lot of your ability to see when they’re not helping. At the same time, it’s almost impossible to make rules that *always* make sense, and if you did have such a set of rules it would probably be too complicated to remember or enforce reliably.

          One historical check on the damage from inflexibility has been that enforcers didn’t tend to actually know about violations unless they were complained about… which tended to select for violations that were actually hurting somebody. Enforcing the same set of rules absolutely and infallibly can easily be intolerable.

          1. I don’t think this is entirely correct. For the vast stretch of human history we lived in small groups where you knew everyone and what they were doing, essentially all the time. And they knew what you were doing.
            Our notions of privacy really only date to relatively recent times – even the larger cities of early historical and medieval times had limited anonymity. So notions of privacy are an aberration in our history and our evolution, perhaps two to three hundred years out of our hundreds of thousands.
            Don’t get me wrong, I like my privacy, and the difference now is that some toad in his mother’s basement in Tomsk can know as much about you as your neighbors do. But the notion that we had privacy until recently is incorrect.

          2. @AI

            We had total “global privacy” before, and actions weren’t recorded forever. If you said something bad about someone, the record would be oral, and oral history is gossip. So if you and the other party recall how something happened differently, it’s not possible to say who is right. People forget things, and if you did something shameful, there was always the possibility of moving somewhere else for a clean slate.

            We are moving to a world where everything you say or do can be recorded essentially forever. There is no moving anywhere, because everything is global. And if an AI says you are a terrorist because someone who looks like you did something, you are tagged essentially forever.

            The inflexibility pointed out by Sok Puppette reinforces this: lawmakers will be less inclined to admit the AI can make a mistake, because that would let everyone tagged by the AI claim “AI mistake” when convicted (remember the case of the exploding Galaxy batteries: a dozen or so exploding phones killed an entire phone model), and the makers of the AI would try to secretly deal with the false positives. After a while, the general population believes the AI cannot fail, so nobody would believe you if the AI tagged you as a thief.

            A quick search on Google for people thrown in prison over mistakes by facial recognition software is scary. Here in Brazil, a musician from an orchestra spent 363 days in prison because he was “recognized” as part of a group that stole a car. Even with video from his workplace showing him working at the time of the theft, he was convicted and spent almost a year jailed until his appeal was heard by a higher court and he was freed. And still it was a close vote: 4 to free him, 3 to keep him. Because the AI said it was him, and AI cannot be bribed, is impartial, and is not biased. Right?

          3. >But the notion that we had privacy until recently is incorrect.

            For the vast majority of human history, you could simply wander off alone into the bushes, because there wouldn’t be anyone around for miles to watch you. Until a couple hundred years ago, there simply weren’t enough people that you’d always run into someone: the population of the USA or the EU would have been literally the entire world back in the 14th century, and back in hunter-gatherer times, if you saw a chip of wood floating down the river, your neighbor was too close and you’d go kill them with a hatchet for invading your tribe’s territory.

            There were simply not enough people to police your behavior. That’s basically why we invented, or evolved, the notion of gods: to keep people in line while nobody’s watching.

      1. Indeed they are. It is going on around you every day, silently but relentlessly. Look at the camera systems in our cities, the overwatch of AI in almost every building under the auspices of “loss prevention” or “better customer service”. Always watching, recording, and dissecting our lives and us. Soon we will be no more than slaves of those who control the AI. And soon after, we will all be slaves of it.

  2. BTW, I hear the Apple augmented reality glasses are rumored to use an iris scan for identification.
    Makes me fear they (the industry/researchers) are busy everywhere developing a damn DNA scanner that can be mass-produced and fitted into smartphones, as the ultimate closing of the trap.

    1. I understand that humans use an extensive facial and clothing scan for identification. Humans print up identification cards to assist the recognition process and they even use verbal cues to assist in the recognition process. It’s all very shocking, the total lack of autonomy. It’s as if they depend on each other.

    2. Meh, sure it would be harder to spoof than some other biometric scanners like fingerprint or iris scanners, but the movie Gattaca showed that even a DNA sensor could theoretically be spoofed.

      But with the cheapness of fingerprint scanning tech and face unlock tech in every phone, why would they need to go to the extremes of scanning DNA in real time?

  3. I notice that it also missed three other unobscured people – the girl in the back right wearing the grey sweater is inexplicable. But the two guys on the left in the loud-patterned shirts – time for us all to start wearing camo and/or plaid shirts, I think. And the one in the hoodie with his face down – I almost missed him/her myself; old school tactic. Time for plaid hoodies?

    1. Take this picture and a similar stock photo of a lecture hall, and squint your eyes.

      It becomes immediately apparent why the computer is fooled – the shirt makes it look like there are people just adjacent but nobody at that exact spot. It’s like there’s a hole through the guy’s abdomen that resembles the irrelevant background information. The computer is looking for the tell-tale shape of a person, but finds what appears to be a gap between people.

    1. Anti-AI stuff sponsored by Facebook-AI? Sounds like the purple anti-robot camouflage in Robert Asprin’s Phule’s Company series. IIRC the company that made the robots also made the purple camo and designed the robots to react as though anyone wearing the purple camo was invisible.

  4. You see a pumpkin market, I see people milling about on the sort of hideous carpet they install in large hotel conference centers. With that in mind, the reason the AI vision system fails to identify (or make eye contact with, you might say) the wearer of this sweater becomes obvious: what happens in Vegas stays in Vegas.

  5. *Invisible to one extremely specific AI
    There’s probably no way to make this apply to image recognition universally. It’s mapping out one VERY specific accidental wrinkle in the neural net.

  6. Just out of curiosity I pointed my camera at the photo and, sure enough, a box appeared around the guy’s face. So not invisible to AI… Just unnoticed by the one flawed model they trained.
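
    For anyone who wants to repeat that test in software, here’s a minimal sketch using an off-the-shelf torchvision detector instead of a phone camera. “sweater.jpg” is a placeholder filename for a saved copy of the photo; the point is just to check whether a model other than the one the patch was trained against still reports a person.

    ```python
    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        FasterRCNN_ResNet50_FPN_Weights,
        fasterrcnn_resnet50_fpn,
    )

    # "sweater.jpg" is a placeholder path for a saved copy of the photo.
    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    img = read_image("sweater.jpg")  # uint8 CHW tensor
    with torch.no_grad():
        out = model([weights.transforms()(img)])[0]

    # COCO class 1 is "person"; list any confident person detections.
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label == 1 and score > 0.5:
            print([round(v) for v in box.tolist()], round(float(score), 2))
    ```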

  7. Yes and no. I’ve personally seen the damage a high-power laser can do to an image sensor. It burnt a neat line in moments, but only a line; if you want to actually prevent the camera from seeing and the AI from being able to recognise anything, you’d need to take out a reasonable amount of the sensor. This is made difficult by the nature of the lens and sensor: to do this you need to move around the field of view, which is difficult without getting up close and personal. Also, I don’t know what the lower limit for laser power would be to do this; the one I saw was a 5W show laser.

  8. We never truly had privacy, just the presumption of privacy and the expectation that others would respect it. Over the past few decades, there has been a lot of movement toward disrespecting others, not just their privacy.

    I like cameras, and I own a bunch of them. My security cameras are a great deterrent, keeping people on their better behavior. My dashcam is a great witness in court. Cameras can be useful tools, but any tool can also be misused. Out in public there is no right to privacy; you can only hope that others respect your presumption of privacy. Likewise, you shouldn’t be doing things in public that are best done in private…

    1. I agree, but the problem is that these “tools” are not in the hands of the people. The tools are wielded by the likes of facebook and google, who aggregate the output of millions of cameras/photos so they can gain massive benefit from the tools, to the detriment of the public at large.
