A familiar spirit, or just a familiar, is a creature rumored to help people in the practice of magic. The moniker is perfect for Archimedes, the robot owl built by Alex Glow, which wields the Google AIY Vision Kit to react when it detects faces. A series of very interesting design choices is what really gives the creature life. Not all of those choices were on purpose, which is the core of her talk at the 2018 Hackaday Superconference.
You can watch the video of her talk, along with an interview with Alex after the break.
When the Unexpected Becomes the Plan
Alex's adventure with Archimedes is a good one. As we reported back in May, the project was built to bring to the Bay Area Maker Faire — originally envisioned as a way to hand out stickers — but it became so much more. Since the AIY kit is really just a cardboard box to hold the electronic guts, morphing it into an owl is much more fun, and a unique challenge.
Working in Onshape, Alex manually modeled a framework for the parts, then used the loft command to programmatically connect the edges into surfaces that give the wings a really nice look. However, the wings and the head were designed in different files, which resulted in an unexpectedly large head by comparison. It turns out that's actually a great way to leap the uncanny valley — think of universally loved cartoon characters and the way they have a huge head and large eyes.
Printing the beak led to some unfortunate support problems, but the solution is a great material hack in itself: boil a CD for about five minutes and it'll delaminate. Alex cut pieces from the boiled disc to use for the beak, and I like it better, since it mimics the difference between a feather-covered face and a real beak. Another material tip from this build is to use armature wire, normally the domain of sculptors, both as a tweakable skeleton for the 3D printed parts and as the shoulder harness she uses to wear Archimedes around.
More Owl Insight
Elliot Williams caught up with Alex afterward for a bit more insight on the project. You’ll find a lot of common ground with her experience, like focusing on the problems of the build rather than trying to get a finished robot (what is finished, anyway?), and building a hard case after too much damage from just shoving the project into a backpack while traveling around.
Right now Archimedes wields a Raspberry Pi Zero W and the AIY Vision Bonnet, which lets him react differently depending on whether recognized faces appear happy or sad. The beeps are recognizable, but beyond two or three different messages the point begins to be lost on humans. Alex is planning a future upgrade that would add a deeper communications element, using a smartphone app to decode a larger set of audible communications.
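For the curious, that happy/sad trick maps directly onto the per-face joy score reported by the Vision Kit's bundled face model. Here's a minimal sketch of the detection loop, modeled on the kit's own joy-detector demo (the 0.5 threshold and the react() stand-in are illustrative assumptions, not Alex's actual code):

from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

def react(joyful):
    # Stand-in for Archimedes' beeps and head movement.
    print('happy beep!' if joyful else 'sad beep...')

with PiCamera(sensor_mode=4, framerate=30) as camera:
    # Inference runs on the Vision Bonnet itself; frames never leave the device.
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            for face in face_detection.get_faces(result):
                react(face.joy_score > 0.5)  # hypothetical threshold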
“Amazon AIY kit”?
Did you mean:
Deeplens
Google AIY Vision Kit 1.0/1.1
Google AIY Vision Kit. :) It runs with no Internet, which is great for me because of privacy/security concerns, and it means he works at signal-heavy events like Maker Faire. His motors are randomly driven by a separate Arduino, but will soon be getting an upgrade!
That is pretty cool! What’s next? Is motion tracking going to be integrated into the face kit?
Oof, my bad. Fixed, thanks!
So, she's walking around with a camera on her shoulder streaming video to Amazon, not giving anyone who values their privacy and doesn't want large corporations scooping up yet more info a chance to opt out. Given that she's live streaming her feed, Amazon will even be able to correlate location data and timestamps with people's faces via cell tower, WiFi AP MAC, or geolocated IP.
She’s just as bad or even worse than the gargoyles from Snow Crash.
Thanks for your comment! I’m the maker. Due to some unfortunate issues with WP login, it looks like I’m unable to post the response I wrote, but rest assured, I share your concerns, and Archie does not livestream, nor are the photos accessible without physically dismantling him, nor do I have that system running in environments where privacy is expected (hackerspaces, Defcon, etc). Here is the full response I wrote: https://twitter.com/glowascii/status/1077312445599764480
Cheers :)
I offer my apologies then for my words! I immediately associate anything Google, Amazon, etc. with uploading everything to their 'cloud' and thus their respective massive data-harvesting machinations. The way this article is written does nothing to dispel that notion, and given the subject keywords, I think it leads many to the same conclusion as mine.
I heartily cheer you, though, for doing all the processing and computing locally, keeping privacy at the forefront and data away from unethical corps!
You really should watch the videos before making a comment like this. Alex specifically mentioned this issue and that the project is not uploading anything, not saving anything, and merely using camera data for in-the-moment reactions.
The Google AIY Vision Kit is doing TensorFlow in near real time on the MA2450 chip. Check out the previous HAD article: https://hackaday.com/2017/12/17/googles-aiy-vision-kit-augments-pi-with-vision-processor/
The vision AIY kit is awesome .. I made a palm-sized Edge AI classifier using an AIY vision kit (details of which I can’t really discuss due to an NDA).
In any case, it’s pretty easy to build your own classifier by cloning the TensorFlow For Poets repo, and placing your own training photos in the classification and test folders. Then you can easily compile the resulting frozen net for the AIY “bonnet”, and run it using a generic python script that will load the net into the AIY bonnet, turn on the camera, and monitor the classification results as they stream in.
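For anyone who wants to try it, the generic script ends up looking roughly like this. This is a from-memory sketch using the AIY Python API's ModelDescriptor/CameraInference interface; the .binaryproto filename, the 160x160 input size, and the 'final_result' tensor name (TensorFlow for Poets' default output node) are assumptions that depend on your retraining setup:

from picamera import PiCamera
from aiy.vision.inference import CameraInference, ModelDescriptor
from aiy.vision.models import utils

LABELS = ['class_a', 'class_b']  # hypothetical; order must match training

model = ModelDescriptor(
    name='my_classifier',
    input_shape=(1, 160, 160, 3),      # must match the retrained input size
    input_normalizer=(128.0, 128.0),   # MobileNet-style mean/std
    compute_graph=utils.load_compute_graph('my_model.binaryproto'))

with PiCamera(sensor_mode=4, framerate=30) as camera:
    with CameraInference(model) as inference:
        for result in inference.run():
            # One probability per class streams back from the bonnet.
            probs = tuple(result.tensors['final_result'].data)
            best = max(range(len(probs)), key=lambda i: probs[i])
            print('%s (%.2f)' % (LABELS[best], probs[best]))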
I will suggest that the inclusion of an Arduino in this project is probably unnecessary, since the AIY bonnet includes a microprocessor and some buffered GPIO for actuation .. so the idea of being able to actuate stuff was anticipated, and the ability is baked into the board's design.
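Case in point: driving a servo straight off the bonnet header is only a few lines with the aiy.pins interface from the kit docs (PIN_A and the one-second sweep are just example choices):

from time import sleep
from gpiozero import Servo
from aiy.pins import PIN_A

# The Vision Bonnet's own GPIO drives the servo; no Arduino required.
servo = Servo(PIN_A)

while True:
    servo.min()
    sleep(1)
    servo.max()
    sleep(1)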
In any case, I'm looking forward to a next gen AIY .. whether based on Google's Edge TPU, or an upgrade of the vision bonnet with a Myriad X.
In my book, the biggest thing the AIY has over the Intel Neural Compute Stick is the fact that it runs the camera signal through one of the Myriad's MIPI lanes .. so you don't need to send pictures to the device .. instead it "spies" on camera frames as they go by, and it runs its vision algorithms autonomously. This saves a phenomenal amount of processing power. In fact, the host hardly does anything at all, which is why a device like the Pi Zero W can host the vision AIY hardware so readily.
The worry is more what can Van Eck-style phreak it. Maybe shield the system if worried? I totally just went off on a tangent: the vision system could be used with a telescope to classify what it's tracking, and thinking further, it could watch waterfall displays from a webcam to classify signals… more processor-intense and something different. Another thought is vehicle identification if you have a stalker on the streets.
just get a cat.
Drew Fustini already has that covered.
I've known probably 30-40 cats, and none of them were familiars. It's tricky not to mistake brain parasites for familiarity where cats are concerned, but crazy lonely people do it all the time. I'd be more inclined to believe this animatronic owl has a soul than a disease-ridden felid.
I dunno, a wardriving cat has been on my list of projects for a while.
Cyborgs are just plain cool!
(but yeah. As diseases go, cats are pretty gross.)
not as bad as humans though.
That’s been done:
DefCon: Weaponizing your pets
https://www.youtube.com/watch?v=DMNSvHswljM
Wired:
https://www.wired.com/2014/08/how-to-use-your-cat-to-hack-your-neighbors-wi-fi/
Nermal is sitting outside on our deck wanting to be let in.
Maybe I’ll make him a robot friend.
Great project, well done.
Does this remind anyone else of The Creature From Cleveland Depths by Fritz Leiber?