Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and we saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun at the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which it opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34 which reminded us of the factory scene from RoboCop.
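To give a feel for how that kind of person-tracking loop typically hangs together, here’s a minimal Python sketch — not [Leo]’s actual code — that pulls an off-the-shelf SSD MobileNet detector from TensorFlow Hub, picks out the most confident “person” detection, and turns the offset of its bounding box from the frame centre into pan/tilt error angles. The model URL, field-of-view figures, and score threshold are all assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative sketch only: load a pre-trained COCO detector from TF Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def aim_error(frame_rgb, fov_h_deg=60.0, fov_v_deg=40.0, min_score=0.5):
    """Return (pan_err, tilt_err) in degrees toward the best 'person' detection,
    or None if nobody is in frame. frame_rgb is an HxWx3 uint8 RGB image."""
    batch = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    boxes = result["detection_boxes"][0].numpy()      # normalised [ymin, xmin, ymax, xmax]
    classes = result["detection_classes"][0].numpy()  # COCO class ids; 1 == person
    scores = result["detection_scores"][0].numpy()    # sorted by confidence
    for box, cls, score in zip(boxes, classes, scores):
        if int(cls) == 1 and score >= min_score:
            cy = (box[0] + box[2]) / 2.0               # box centre, normalised 0..1
            cx = (box[1] + box[3]) / 2.0
            pan_err = (cx - 0.5) * fov_h_deg           # + means slew right
            tilt_err = (0.5 - cy) * fov_v_deg          # + means slew up
            return pan_err, tilt_err
    return None
```

Feed it frames from a webcam (OpenCV’s VideoCapture, say) and pipe the error angles into the turret’s motor controller, and you have the bones of the aiming loop.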

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.

17 thoughts on “Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips”

      1. 10/10 points for style!
        Any project that requires an athletic cup for protection is on the right track.
        Why not use ping pong balls? Lower KE and cheaper.
        I get using RS-485 instead of Ethernet because you didn’t want data packet collisions. But why stick with the expensive connectors?

  1. Some algorithm anticipating the motion of the target is clearly missing – maybe something like a Kalman filter. That way, the turret could still be accurate when attacking a moving target.
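For what it’s worth, the sort of thing the comment is describing could look like this constant-velocity Kalman filter on the target’s pan/tilt angles — a rough Python sketch, where the frame rate, noise matrices, and lead time are all made-up values:

```python
import numpy as np

class LeadFilter:
    """Constant-velocity Kalman filter on (pan, tilt) so the turret can aim
    a little ahead of where the target currently is."""
    def __init__(self, dt=1/30, q=1.0, r=0.5):
        self.x = np.zeros(4)                       # state: [pan, tilt, pan_rate, tilt_rate]
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt   # constant-velocity model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0  # we measure angles only
        self.Q = np.eye(4) * q                     # process noise (assumed)
        self.R = np.eye(2) * r                     # measurement noise (assumed)

    def update(self, z):
        """Predict one frame ahead, then fold in the measured (pan, tilt)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def aim_point(self, lead_time=0.1):
        """Extrapolate the track to cover projectile flight time."""
        return self.x[:2] + self.x[2:] * lead_time
```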

  2. Sorry, but thumbs down on this one.

    I don’t wish to detract from what is clearly an impressive engineering achievement – it looks like a great deal of work and talent has gone into this.

    However, I do feel uncomfortable seeing autonomous weapons (of any kind) being developed. It’s not that I’m worried about some Skynet-type situation; I’m concerned about the possibility of someone taking inspiration from this, mounting a real gun on it, and people getting killed.

    As a software developer, I would be horrified if someone took my work and used it to build a weapon. Perhaps I’m privileged to be able to choose who I work for, but I know there are many who feel the same.

    I’m fully aware that there are commercial products like this available, and I’m sure there is an argument along the lines of ‘but the bad guys already have AI-powered guns so we must build them too’. For me though, it feels like the very last thing we need at the moment is more efficient ways of shooting projectiles at our fellow humans.

    Clearly you are free to choose what you want to spend your time and effort working on. I choose not to build turret guns!

      1. So we’re not allowed to debate morality anymore? Instead of sidestepping the issue by screaming nonsense, why not reply with a better reason for building robots that kill people?

        1. Government took your taxes and killed millions of people.
          You’ve done nothing, just discussed imaginary things on the internet.
          Morality is wrong. End of debate.

    1. By this same logic I’d better never catch you working on any self-driving car software.

      ‘cuz ppl could do bad things with cars u guyz!!!’

      Or facial recognition software. Because bad governments around the world might then take it and use it to track dissidents and- oh wait…

    2. Turrets don’t have to use lethal ammunition, and they likely won’t ever. What they could be designed for is enforcement. People with wild animal problems could easily (in the future) buy a DeTurret (TM) (I just made that up!) to stop things like coyotes from attacking their livestock. It doesn’t have to be lethal, just use something like paintballs or pellets.

      Look, what you are saying makes perfect sense in a fairy tale. This is the real world, and you must look at the inevitable mistakes we are bound to make. If we were better, we wouldn’t have nuked an island of people. We wouldn’t have designed more efficient handheld machines to neutralize battlefield opponents from great distances. We wouldn’t have created toxic chemicals that cause immense suffering on purpose. I can go on, my dude.

      This is an issue of peace. We, (you and I), are peaceful people with no intent to harm anything or anyone. We prefer to teach and learn. Science is the answer! I’d say if we all could think like this, we would go extinct far quicker. No time to procreate when we have discoveries to make in the name of bettering humanity! It’s this equal balance of what EVERY human being believes which brings us closer to a real humanity. AI is just the culmination of all human thought, and with it comes indifference. It’s completely natural and it’s part of the cycle. All creatures are war-faring. We can’t stop that, it’s literally encoded in our DNA.

      In conclusion, I completely agree with you; however, it’s wasted effort if you intend to push your opinion so publicly. Nobody is going to listen to you. Do what any scientist/inventor/engineer would do: figure out a way to trick it into not working. Find a way to counter it. Asking the world to agree with you on one topic (and yes, it’s a big deal) isn’t going to happen. We can’t even get people to stop killing each other over stupid stuff like a parking spot. The solution is to outsmart the invention, to try and find a way to neutralize how it works.

    3. “As a software developer, I would be horrified if someone took my work and used it to build a weapon. Perhaps I’m privileged to be able to choose who I work for, but I know there are many who feel the same.”

      Similar to:

      https://en.wikipedia.org/wiki/G._H._Hardy

      “Hardy preferred his work to be considered pure mathematics, perhaps because of his detestation of war and the military uses to which mathematics had been applied. He made several statements similar to that in his Apology:

      I have never done anything “useful”. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world.

      However, aside from formulating the Hardy–Weinberg principle in population genetics, his famous work on integer partitions with his collaborator Ramanujan, known as the Hardy–Ramanujan asymptotic formula,

      __has been widely applied in physics to find quantum partition functions of atomic nuclei (first used by Niels Bohr)__

      and to derive thermodynamic functions of non-interacting Bose–Einstein systems. Though Hardy wanted his maths to be “pure” and devoid of any application, much of his work has found applications in other branches of science.”
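For reference, the asymptotic formula the quote is referring to is the Hardy–Ramanujan estimate for the number of integer partitions p(n):

\[
p(n) \sim \frac{1}{4n\sqrt{3}}\,\exp\!\left(\pi\sqrt{\frac{2n}{3}}\right) \quad \text{as } n \to \infty.
\]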


