Microcontroller And IMU Team Up For Simple Flight Sim Controls

Classes are over at Cornell, and that means one thing: the students in [Bruce Land]’s microcontroller design course have submitted their final projects, many of which, like this flight control system for Google Earth’s flight simulator, find their way to the Hackaday tips line.

We actually got this tip several days ago, but since it revealed to us the previously unknown fact that Google Earth has a flight simulator mode, we’ve been somewhat distracted. The sim is normally flown with mouse and keyboard, but [Sheila Balu] decided to give it a full set of flight controls to make it more realistic. The controls consist of a joystick with throttle, rudder pedals, and a small control panel with assorted switches. The whole thing is built of cardboard to keep costs down and to make the system easy to replicate. Interestingly, the joystick does not use the usual gimbal-mounted potentiometers to detect pitch and roll; instead, an IMU mounted on top of the stick reports its position. All the controls talk to a PIC32, which sends the inputs over a serial cable to a Python script on the PC running Google Earth; the script simulates the mouse and keyboard commands needed to fly the sim. The video below shows [Sheila] taking an F-16 out for a spin, but despite being a pilot herself since age 16, she was curiously unable to land the fighter jet safely in a suburban neighborhood.
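If you’d like to roll your own version of the PC-side glue, it doesn’t take much. Below is a minimal Python sketch, not [Sheila]’s actual script, that assumes the microcontroller sends plain “NAME:VALUE” lines over the serial link; the port name, message format, and the ‘g’ key binding for landing gear are all assumptions for illustration.

```python
# Minimal PC-side sketch: read control messages over serial and fake
# mouse/keyboard input for Google Earth's flight simulator.
import serial      # pyserial
import pyautogui   # simulates mouse and keyboard events

# Port name and baud rate are placeholders -- adjust for your setup.
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    name, _, value = line.partition(":")
    if name == "PITCH":
        # The sim maps vertical mouse motion to elevator input.
        pyautogui.moveRel(0, int(value))
    elif name == "ROLL":
        pyautogui.moveRel(int(value), 0)
    elif name == "GEAR" and value == "1":
        pyautogui.press("g")   # assumed key binding for landing gear
```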

[Bruce]’s course looks like a blast, and [Sheila] clearly enjoyed it. We’re looking forward to the project dump, which last year included this billy-goat balancing Stewart platform, and a robotic ice cream topping applicator.

Continue reading “Microcontroller And IMU Team Up For Simple Flight Sim Controls”

Google Discovers Google+ Servers Are Still Running

Google is pulling the plug on their social network, Google+. Users still have the better part of a year to say their goodbyes, but if the fledgling social network was a ghost town before, news of its imminent shutdown isn’t likely to liven the place up. A quick check of the site as of this writing reveals many users are already posting their farewell messages, and while there’s some rallying behind petitions to keep the lights on, the majority realize that once Google has fallen out of love with a project there’s little chance of a reprieve.

To say that this is a surprise would be disingenuous. We’d wager a lot of you already thought it was gone, honestly. It’s no secret that Google’s attempt at a “Facebook Killer” was anything but, and while there was a group of dedicated users to be sure, it never attained anywhere near the success of its competition.

According to a blog post from Google, the network’s anemic user base isn’t the only reason they’ve decided to wind down the service. A previously undisclosed security vulnerability also hastened its demise, a revelation which will particularly sting those who joined for the privacy-first design Google touted. While this fairly transparent postmortem allows us to answer what ended Google’s grand experiment in social networking, there’s still one question left unanswered: where are the soon-to-be-orphaned Google+ users supposed to go?

Continue reading “Google Discovers Google+ Servers Are Still Running”

Don’t Look Now, But Your Necklace Is Listening

There was a time when the average person was worried about the government or big corporations listening in on their every word. It was a quaint era, full of whimsy and superstition. Today, a good many of us are paying for the privilege to have constantly listening microphones in multiple rooms of our house, largely so we can avoid having to use our hands to turn the lights on and off. Amazing what a couple years and a strong advertising push can do.

So if we’re going to be funneling everything we say to one or more of our corporate overlords anyway, why not make it fun? For example, check out this speech-to-image necklace developed by [Stephanie Nemeth]. As you speak, the necklace listens in and finds (usually) relevant images to display. Conceptually this could be used as an assistive communication technology, but we’re cool with it being a meme display device for now.

Hardware-wise, the necklace is just a Raspberry Pi 3, a USB microphone, and a HyperPixel 4.0 touch screen. The Pi Zero would arguably be the better choice for hanging around your neck, but [Stephanie] notes that there are some compatibility issues with Node.js on the Zero’s ARMv6 processor. She details a workaround, but says there’s no guarantee it will work with her code.

The JavaScript software records audio from the microphone with SoX, and then runs that through the Google Cloud Speech-to-Text service to figure out what the wearer is saying. Finally, it does a Google image search on the captured words using the Custom Search JSON API to find pictures to show on the display. There’s a user-supplied list of words to ignore so it doesn’t try looking up images for function words (such as “and” or “however”), though presumably it can also be used to blacklist certain imagery you might not want popping up on your chest in mixed company.
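As a rough illustration of that last step, here’s a hedged Python sketch of the image lookup; the real project is written in Node.js, and the API key, search engine ID, and stop-word list below are placeholders you’d supply yourself.

```python
# Sketch: filter out ignored words, then ask the Custom Search JSON API
# for a single image matching whatever is left of the transcript.
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"            # placeholder
CX = "YOUR_CUSTOM_SEARCH_ENGINE_ID"        # placeholder
STOP_WORDS = {"and", "the", "but", "however"}  # user-supplied ignore list

def image_for(transcript):
    words = [w for w in transcript.lower().split() if w not in STOP_WORDS]
    if not words:
        return None
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": " ".join(words),
                "searchType": "image", "num": 1},
    )
    items = resp.json().get("items", [])
    return items[0]["link"] if items else None

print(image_for("look at that robot owl"))
```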

We’d be interested in seeing somebody implement this software on a Raspberry Pi powered digital frame to display artwork that changes based on what the people in the room are talking about. Like in Antitrust, but without Tim Robbins offing anyone.

ESP8266 Powered Tank With Voice Control

The high availability of (relatively) low cost modular components has made building hardware easier than ever. Depending on what you want to do, the hardware side of a project might be the hacker equivalent of building with LEGO. In fact, we wouldn’t be surprised if it literally involved building with LEGO. In any event, easy and quick hardware builds leave more time for developing creative software to run the show. The end result is that we’re starting to see very complex systems broken down into easy-to-replicate DIY builds that would have been nearly impossible just a few years ago.

[igorfonseca83] writes in to share with us his modular tank platform that uses the ESP8266 and a handful of software hacks to allow for voice control from the user’s mobile device. Presented as a step-by-step guide on Hackaday.io, this project is perfect for getting started in Internet-controlled robotics. Whether you just want to experiment with Google Assistant integration or use this as a blank slate to bootstrap a remotely controlled rover, this project has a lot to offer.

The chassis itself is a commercially available kit, and [igorfonseca83] uses an L298N dual-channel H-bridge module to control its two geared motors. A Wemos D1 serves as the brains of the operation, and three 18650 3.7 V batteries provide the juice to keep everything running. There’s plenty of expansion capability to add sensors and other gear, but for this project getting it rolling was the only concern.
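For a sense of how little code the motor side takes, here’s a minimal MicroPython sketch driving an L298N from an ESP8266 board like the Wemos D1. This is not [igorfonseca83]’s firmware, and the pin assignments are assumptions for illustration.

```python
# Sketch: differential drive through an L298N H-bridge on an ESP8266.
from machine import Pin, PWM

# Hypothetical wiring: direction pins plus one PWM enable pin per motor.
IN1, IN2 = Pin(5, Pin.OUT), Pin(4, Pin.OUT)     # left motor direction
IN3, IN4 = Pin(13, Pin.OUT), Pin(16, Pin.OUT)   # right motor direction
ENA = PWM(Pin(14), freq=1000)                   # left motor speed (duty 0-1023)
ENB = PWM(Pin(12), freq=1000)                   # right motor speed

def drive(left, right):
    """Signed speeds from -1023 to 1023; the sign selects direction."""
    IN1.value(left >= 0)
    IN2.value(left < 0)
    IN3.value(right >= 0)
    IN4.value(right < 0)
    ENA.duty(abs(left))
    ENB.duty(abs(right))

drive(700, 700)   # roll forward at roughly 70% throttle
```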

Software-wise, there are a number of pieces that work together to provide the Google Assistant control demonstrated in the video after the break. It starts by interfacing the ESP8266 board with Adafruit IO, which connects to IFTTT, and then finally Google Assistant. By setting up a few two-variable phrases in IFTTT that get triggered by voice commands in Google Assistant, you can push commands back down to the ESP8266 through Adafruit IO. It’s a somewhat convoluted setup, admittedly, but the fact that it involves very little programming makes it an interesting solution for anyone who doesn’t want to get bogged down in the minutiae of developing their own Internet control stack.
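On the device end, that whole chain boils down to subscribing to an MQTT feed on Adafruit IO and acting on whatever text arrives. Here’s a minimal MicroPython sketch of that subscription, with placeholder credentials and a hypothetical feed name; it isn’t the project’s actual code.

```python
# Sketch: listen for commands that IFTTT pushes into an Adafruit IO feed.
from umqtt.simple import MQTTClient   # MicroPython MQTT client

AIO_USER = "your_adafruit_io_username"      # placeholder
AIO_KEY = "your_adafruit_io_key"            # placeholder
FEED = AIO_USER + "/feeds/tank-commands"    # hypothetical feed name

def on_message(topic, msg):
    cmd = msg.decode().strip().lower()
    print("Command from Adafruit IO:", cmd)
    # Hand the command to the motor-driving code here,
    # e.g. forward/back/left/right/stop.

client = MQTTClient("esp8266-tank", "io.adafruit.com",
                    user=AIO_USER, password=AIO_KEY)
client.set_callback(on_message)
client.connect()
client.subscribe(FEED.encode())

while True:
    client.wait_msg()   # block until the next command arrives
```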

[igorfonseca83] is no stranger to building remotely controlled rovers. Last year we covered another of his creations which was commanded through a web browser and carried an Android phone to stream video of its adventures.

Continue reading “ESP8266 Powered Tank With Voice Control”

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit that takes the form of an animal. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on an attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally with no connection to Google’s cloud required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine if a face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.
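The joy detector itself is little more than a loop around the kit’s Python API. Here’s a rough sketch of that loop based on the kit’s bundled demo rather than [Alex]’s Archimedes code; the 0.8 “happiness” threshold is an arbitrary example.

```python
# Sketch: run face detection on the Vision Bonnet and react to happy faces.
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, framerate=30) as camera:
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            faces = face_detection.get_faces(result)
            if faces and max(f.joy_score for f in faces) > 0.8:
                print("Happy face detected -- dispense a sticker!")
```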

[Alex] aimed to go beyond the default app (and default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head. One eye is the expected Pi camera, the other eye is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where people could build a blinky LED badge, and a second tent focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”

Location Sharing With Google Home

With Google’s near-monopoly on the internet, it can be difficult to get around in cyberspace without encountering at least some aspect of this monolithic, data-gathering giant. It usually takes a concerted effort, but it is technically possible to do. While [Mat] is still using some Google products, he has at least figured out a way to get Google Home to work with location data without actually sharing that data with Google, which is a step in the right direction.

[Mat]’s goal was to use Google’s location sharing features through Google Home, but without the creepiness factor of Google knowing everything about his life, and also without the hassle of having to use Google Maps. He’s using a few things to pull this off, including a Node-RED server running on a Raspberry Pi Zero, a free account from If This Then That (IFTTT), Tasker with the AutoRemote plugin, and a Google Maps API key. With all of that put together, and some configuration of IFTTT, he can ask his Google Assistant (or Google Home) for location data, all without sharing that data with Google.
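The Maps API key comes in at the last step, turning the raw coordinates the phone reports into something the Assistant can speak aloud. Here’s a minimal Python sketch of that reverse-geocoding call; [Mat]’s actual flow does this inside Node-RED, and the key and coordinates below are placeholders.

```python
# Sketch: reverse-geocode a lat/long pair with the Google Maps Geocoding API.
import requests

API_KEY = "YOUR_MAPS_API_KEY"   # placeholder

def where_is(lat, lng):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": API_KEY},
    )
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else "unknown location"

print(where_is(52.5200, 13.4050))   # example coordinates
```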

This project is a great implementation of Google’s tools and a powerful use of IFTTT. And, as a bonus, it gets around some of the creepiness factor that Google tends to incorporate in their quest to know all the data.

Continue reading “Location Sharing With Google Home”

Two Factor Authentication With The ESP8266

Google Authenticator is a particularly popular smartphone application that can be used as a token for many two-factor authentication (2FA) systems by generating time-based one-time passwords (TOTP). With Google Authenticator, the combination of your user name and password along with the single-use code generated by the application allows you to securely authenticate yourself in a way that would be difficult for an attacker to replicate.

That sounds great, but what if you don’t have a smartphone? That’s the situation that [Lady Ada] recently found herself in, and rather than going the easy route and buying a hardware 2FA token that’s compatible with Google Authenticator, she decided to build one herself based on the ESP8266. With the hardware and source documented on her site, the makings of an open source Google Authenticator hardware token are available for anyone who’s interested.

Generated codes can also be viewed via serial.

For the hardware, all you need is the ESP8266 and a display. Naturally [Lady Ada] uses her own particular spin on both devices, which you can purchase if you want to create an identical device, but the concept will work the same on the generic hardware you’ve probably already got in the parts bin. Software-wise, the code is written in CircuitPython, a derivative of MicroPython which aims to make microcontroller development easier. If you haven’t tried MicroPython before, grab an ESP and give this a roll.

Conceptually, TOTP is relatively simple. You just need to know what time it is and compute an HMAC-SHA1 over the current time step using a shared secret. The time part is simple enough, as the ESP8266 can connect to the network and get the current time from NTP. The calculation of the TOTP is handled by the Python code once you’ve provided it with the “secret” pulled from the Google Authenticator application. It’s worth noting here that this means your 2FA secrets will be held in clear-text on the ESP8266’s flash, so try not to use this to secure any nuclear launch systems or anything, OK? Then again, if you ever lose it, the beauty of two-factor authentication is that you can invalidate the secret and generate a new one.
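To see just how little math is involved, here’s a minimal desktop-Python sketch of the TOTP calculation per RFC 6238; the CircuitPython running on the ESP8266 does essentially the same thing, and the base32 secret below is a made-up example.

```python
# Sketch: compute a 6-digit TOTP code from a base32 secret and the current time.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    key = base64.b32decode(secret_b32.replace(" ", "").upper())
    counter = int(time.time()) // interval                # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real one
```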

We’ve covered the ins and outs of 2FA applications before here at Hackaday if you’d like to know more about the concept, in addition to previous efforts to develop a hardware token for Google Authenticator.