The good people at MIT’s Computer Science and Artificial Intelligence Laboratory [CSAIL] have found a way of tricking Google’s InceptionV3 image classifier into seeing a rifle where there actually is a turtle. This is achieved by presenting the classifier with what are called ‘adversarial examples’.
Adversarial examples are a proven concept for 2D stills. In 2014, [Goodfellow], [Shlens] and [Szegedy] added imperceptible noise to the image of a panda that was from then on classified as a gibbon. This method relies on the image being undisturbed and can be defeated by zooming, blurring or rotating the image.
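For a feel of how these still-image attacks work, here is a minimal sketch of the fast gradient sign method from that paper – assuming a hypothetical PyTorch classifier `model`, a normalized input tensor `image`, and its integer class tensor `true_label`:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.007):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A tiny, imperceptible step along the sign of the gradient is enough
    # to flip the classification (panda -> gibbon in the original paper).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Note that the perturbation is computed for one exact image – which is why zooming, blurring or rotating breaks it.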
The applicability to real-world shenanigans has been seriously limited, but this changes everything. This weaponized turtle is a color 3D print that is reliably misclassified by the algorithm from any point of view. Generating such misleading input requires some knowledge of the classifier. Image transformations such as rotation, scaling and skewing, but also color corrections and even print errors, are applied to the input, and the result is then optimized to reliably mislead the algorithm. The whole process is documented in [CSAIL]’s paper on the method.
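The core trick, which the paper calls Expectation Over Transformation, is to optimize the adversarial texture against a whole distribution of transformations rather than a single view. Here is a rough sketch of one optimization step, with `render`, `random_transform`, `model` and `target_label` as hypothetical stand-ins:

```python
import torch
import torch.nn.functional as F

def eot_step(texture, model, render, random_transform,
             target_label, n_samples=16, lr=0.01):
    """One gradient step pushing the texture toward the target class,
    averaged over random poses, lighting, color shifts and print noise."""
    texture = texture.clone().requires_grad_(True)
    loss = 0.0
    for _ in range(n_samples):
        view = random_transform(render(texture))  # one simulated photo of the object
        loss = loss + F.cross_entropy(model(view), target_label)
    (loss / n_samples).backward()
    # Because the loss is an average over transformations, the resulting
    # misclassification survives rotation, scaling and camera noise.
    return (texture - lr * texture.grad).clamp(0, 1).detach()
```

Repeat that step until the classifier confidently reports the target class from every sampled view, and you get a printed turtle that reads as a rifle from any angle.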
What this amounts to is camouflage from machine vision. Assuming that the method also works the other way around, the possibility of disguising guns (or anything else) as turtles has serious implications for automated security systems.
As this turtle targets the Inception algorithm, it should be able to fool the DIY image recognition talkbox that Hackaday’s own [Steven Dufresne] built.
Thanks to [Adam] for the tip.
It’s probably a prototype weapon we didn’t know about.
It’s Mecha-Tama.
It’s Gamera!
Starting a little early for a Friday, eh?
It’s 5 o’clock somewhere!
It is, here.
I’m wondering what will happen with the self-driving cars and trucks if somebody makes some camouflaged roads or something. Better to learn the lesson now than later, I guess…
Surely you’d be building camouflaged walls, which you move in front of security trucks which now operate without a driver, because that’s safer for all involved?
Sometimes I think it’s a good thing I’m not a terrorist.
I’m thinking of Wile E. Coyote painting a road tunnel on the face of a cliff.
++1
https://i.ytimg.com/vi/4iWvedIhWjM/hqdefault.jpg
Super Genius!
Even if self-driving cars make terror attacks easy and the number of people killed by them each year increases significantly, there could still be a very large net reduction in deaths because everyday accidents would be less frequent. But the odd thing about us humans is we tend to care more about HOW people die than how MANY people die.
So true. 2.4 million every year, but somehow the way a person dies changes whether it’s tragic, or relevant. Evil people, devoid of love and the Spirit.
Which do you think would be more effective to repel the self-drivers: covering your car in red octagons, or mirror chrome vinyl wrap so they think they’re about to have a head-on collision with themselves?
It will be illegal, just like putting an invisible film over your license plate to fool traffic cameras is illegal.
It’s not illegal until you get caught.
By that standard, high vis clothing should be illegal.
Funnily enough, that actually might create a problem. Something tells me that there’s no way Google could possibly see all the edge cases coming.
You’d have a tough time tricking the radar sensors that are a key part of self-driving platforms. Unless you’ve got a bunch of stealth aircraft panels…
Nothing, because they use multiple systems like radar and lidar, not just machine vision. This approach is absolutely necessary when you consider safety, as relying on a single system for navigation would mean complete failure from one piece of equipment. It’s kind of like how airplanes have multiple failure modes should any one component give out, thus making it possible to land instead of falling out of the sky like a rock.
Looks like a turtle-gun to me!
It’s a 3-D print?
So, how do we know it is NOT a rifle inside?
B^)
So, how do I make it so that automated security systems recognize me as “Head of Security”?
B^)
Easily…
https://www.theverge.com/2016/11/3/13507542/facial-recognition-glasses-trick-impersonate-fool
What kind of shells does that thing take?
C shells.
B^)
Tortoiseshell, of course. :-)
Slow gauge.
This is probably the best demonstration I’ve seen yet that neural network vision systems clearly haven’t copied a real brain. I’m seriously puzzled about what the network IS using to identify the toy turtle as a rifle. The “Cat = guacamole” one is even weirder – clearly the network isn’t using the most obvious feature of guacamole, namely, its greenness.
If I had to guess, I’d say it’s mistaking the tortoiseshell pattern for a burr walnut stock and the flippers for magazines.
Whatever it is, though, it gives you real nerves about this sort of thing happening to a toddler over a stuffed turtle…
https://youtu.be/A9l9wxGFl4k
Me thinks that turtle will be stuffed in more ways than one. Just teach the kid to hold the turtle up and yell BANG!
Except human vision is *also* trivially easy to fool; that’s the entire point of visual illusions.
Human vision and machine vision can both be deceived. We live fairly safely by discouraging deception of humans (three-card tricksters and poorly maintained traffic lights are discouraged). Should we now also discourage the deception of machine learning algorithms?
I for one think that, for non-trivial uses, we should discourage the use of machine learning algorithms that differ too much from our perceptions – it’s not us that are seeing things wrong – it is the AI.
The vision subsystem is just a part of our system though. That’s the difference between “AI” and AI – real AI wouldn’t detect the turtle as a gun as it wouldn’t fit the mental model of what a gun would have to look like.
IOW pattern matching isn’t intelligence.
Reminds me of the time Tesla autopilot mistook the side of a semi truck for empty sky and the driver died. Clearly these algorithms are missing a lot of contextual cues that are obvious to humans.
To be fair, the autopilot had already prompted the human to take over numerous times and was basically in this mode…
https://m.popkey.co/7e9ee5/47ZW6.gif
That’s what “it” said!
B^)
Then we’ll have the first computer to ever say, OH S**T!
The AI, from inside its box, manages to annoy the driver through the “cry wolf” algorithm until they ignore the actual danger ahead of them.
Musk was right all along, but he foolishly thinks he can still stop them.
Kind of a counter to all those AI fears, and yes hopes, because it shows at least a little of what AI really is, as well as how far it still needs to go.
I can see the SkyNet HKs (Human Killers) identifying fire hydrants as human and exterminating them.
What real image recognition will use is edge detection grouped into features to make objects based on their order and relative location.
Feed that into even a basic AI and it will get less confused.
I have more than once flinched at a leaf or a plastic bag that, not lying flat and lit a certain way, looked too much like a rock to ignore.
It doesn’t stop at visuals either. Auditory perception:
A crisp packet* that got screwed up and put in a bin… the room warms up due to the heat of a bunch of PCs whose PSUs are known to blow… The crisp packet* starts to creak its way to warmth: the heart attack sessions start…
Finally someone notices that none of the PCs have gone dead, stands around the bins listening, and finds the culprit.
*Just an example: screwing up a bunch of backlight filters into the plastic recycling bin gives loud (relative to the PCs) creaking, clicks and scrapes as they try to unfurl/unfold.
Doesn’t look like anything to me.
(hurry season 2)
Dolores? Is that you?
Hurry already!
Not hotdog.
Next from Maybelline – lipstick and mascara that makes facial recognition think you’re Godzilla.
It certainly doesn’t look like a real turtle, but it takes a close look to see that. (Spoiler: the shell is quite unnatural and kind of gun-shaped)
His head’s a bit tubular and the legs are at some weird angles too, and that’s besides the walnut-effect shell. If you saw this crawling about you’d definitely ring David Attenborough. The fact that machine vision is susceptible to optical illusions too isn’t surprising, or even news. It takes in a lot of information and compresses huge amounts of it to produce one of a small selection of outputs. Of course it’s going to make mistakes; probably there’ll never be a visual system that doesn’t. In this case they’ve had to go to quite extreme lengths – that’s no ordinary-looking turtle.
So instead of William Gibson/Bruce Sterling’s shirts that won’t be recorded (“Zero History”), we have patterns that can be reliably fooled into a fake interpretation by machine. That’s not much different than a “human” optical illusion but the real trouble starts when you start basing real physical/medical/legal consequences on faulty algorithms.
This is a rifle.
Some people might try to tell you this is a turtle
They might scream ‘Turtle, turtle, turtle,’ over and over and over again.
They might put ‘turtle’ in all caps.
You might even start to believe that this is a turtle.
But it’s not.
This is a rifle.
~~Google 2017~~