Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it is actually a technology that has made it into the real world. While these devices often rely on advanced and expensive equipment to interpret brain waves properly, with the right machine learning system it's possible to build something like this mind-controlled flamethrower on a much smaller budget. (Video, embedded below.)

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical by combining the two technologies. Instead of turning to an EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don't forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
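
To get a feel for how the pieces might talk to each other, here's a minimal sketch of the glue logic on the Pi: read band-power packets from the Mindflex's serial tap, run them through a trained classifier, and signal the Arduino when the "fire" pattern shows up. The device paths, packet format, threshold, and model file are our assumptions for illustration, not [Nathaniel F]'s exact code.

```python
# Hypothetical glue code on the Raspberry Pi 4: read EEG band powers from
# the Mindflex, classify them, and tell the Arduino to fire.
import numpy as np
import serial
import tensorflow as tf

eeg = serial.Serial("/dev/ttyUSB0", 9600)        # Mindflex serial tap (assumed path)
arduino = serial.Serial("/dev/ttyACM0", 115200)  # flamethrower controller (assumed path)
model = tf.keras.models.load_model("fire_classifier.h5")  # assumed model file

def read_band_powers():
    """Parse one comma-separated packet of eight EEG band powers (assumed format)."""
    line = eeg.readline().decode(errors="ignore").strip()
    fields = line.split(",")
    if len(fields) != 8:
        return None
    try:
        return np.array([float(f) for f in fields], dtype=np.float32)
    except ValueError:
        return None

while True:
    sample = read_band_powers()
    if sample is None:
        continue
    norm = sample / max(float(sample.max()), 1.0)  # crude per-packet normalization
    p_fire = float(model.predict(norm[np.newaxis, :], verbose=0)[0, 0])
    if p_fire > 0.9:          # keep the threshold high; it is, after all, a flamethrower
        arduino.write(b"F")
```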

While the build took many hours of training to gather enough data for the neural network, it works as the proof of concept he was hoping for, though [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one, which converts a prosthetic arm into a mind-controlled music synthesizer.


Visual Raspberry Pi With Node-RED And TensorFlow

If you prefer to draw boxes instead of writing code, you may have tried IBM's Node-RED to create logic with drag-and-drop flows. A recent [TensorFlow] video features an interview between [Jason Mayes] and [Paul Van Eck] about using TensorFlow.js with Node-RED to visually create machine learning applications for the Raspberry Pi. You can see the video below.

The video doesn’t go into much detail since it is only ten minutes long. But it does show how easy it is to do things like identify images using an existing TensorFlow model. There is a more detailed tutorial available, as well as a corresponding video, which you can see below.


Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try and determine what a microcontroller is up to.

Energy consumption as the LED blinks.

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case [Mark] knew what the target Arduino Nano was doing, because he wrote the code that blinks its onboard LED.
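
Capturing that training data can be as simple as polling the INA219 over I2C and logging timestamped current readings. Here's a minimal capture sketch using Adafruit's CircuitPython INA219 driver on a Pi; the sample rate and file name are illustrative, not [Mark]'s exact setup.

```python
# Log timestamped current readings from an INA219 wired inline with the
# target MCU's supply. Requires the adafruit-circuitpython-ina219 package.
import time
import board
from adafruit_ina219 import INA219

i2c = board.I2C()
sensor = INA219(i2c)  # default I2C address 0x40

with open("current_log.csv", "w") as log:
    log.write("timestamp_s,current_mA\n")
    while True:
        # .current returns the shunt current in milliamps
        log.write(f"{time.monotonic():.4f},{sensor.current:.3f}\n")
        time.sleep(0.001)  # nominally ~1 kHz, plenty to catch an LED blinking
```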

This allowed him to create training data for TensorFlow, from which he ultimately built a model small enough to fit onto the Arduino Nano 33 BLE Sense that stands in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
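
The training side might look something like this: windows of current samples in, LED state out, followed by an 8-bit TensorFlow Lite conversion so the result squeezes into the Nano 33 BLE Sense's limited flash and RAM. Window size, architecture, and file names are our guesses, not [Mark]'s exact pipeline.

```python
# Train a tiny classifier on windows of current readings, then convert it
# to a quantized TensorFlow Lite model for deployment on the MCU.
import numpy as np
import tensorflow as tf

WINDOW = 64  # consecutive current samples per training example (assumed)

# X: (n, WINDOW) float32 current readings; y: (n,) 0 = LED off, 1 = LED on
X = np.load("windows.npy")
y = np.load("labels.npy")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, validation_split=0.2)

# Quantize to 8-bit so the model fits comfortably on the Nano 33 BLE Sense.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("current_model.tflite", "wb").write(converter.convert())
```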

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

Is That A Cat Or Not?

Pandemic-induced boredom takes people in many different ways. Some of us go for long walks, others learn to speak a new language, while yet others unleash their inner gaming streamer. [Niklas Fauth] has taken a break from his other projects by creating a very special project indeed: a cat detector! No longer shall you ponder whether the object or creature before you is a cat; now that existential question can be answered by a gadget.

This is more of a novelty project than one showcasing special new tech: he’s taken what looks to be the shell from a cheap infra-red thermometer and fitted it with a Raspberry Pi Zero with camera and a small screen. This in turn runs TensorFlow with the COCO-SSD object identification model. The device has a trigger, and when it’s pressed to photograph an image, the model is applied to detect whether the subject is a cat or not. The video posted to Twitter is below the break, and we can’t dispute its usefulness in the feline-spotting department.
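
For the curious, the trigger handler boils down to something like the sketch below: push one camera frame through a COCO-SSD style TensorFlow Lite model and check whether any detection is a cat. The model file, label index, and output tensor order are assumptions about this class of model, not [Niklas]'s actual code.

```python
# Classify one captured frame as cat / not-cat with a COCO-SSD TFLite model.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

CAT_CLASS_ID = 16  # 'cat' in the common COCO label map; verify for your model

interpreter = Interpreter(model_path="ssd_mobilenet_coco.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

def is_cat(path, threshold=0.5):
    h, w = inp["shape"][1:3]
    frame = np.asarray(Image.open(path).resize((w, h)), dtype=np.uint8)
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    # Typical SSD postprocess output order: boxes, class ids, scores, count
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    return any(int(c) == CAT_CLASS_ID and s > threshold
               for c, s in zip(classes, scores))

print("Cat!" if is_cat("frame.jpg") else "Not a cat.")
```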

[Niklas] has featured here more than once in the past. This is not his only pandemic project, either.


This Negative Reinforcement Keyboard May Shock You

We wouldn’t be where we are today without Mrs. Coldiron’s middle school typing class. Even though she may have wanted to, she never did use negative reinforcement to improve our typing speed or technique. We unruly teenagers might have learned to type a lot faster if those IBM Selectrics had been wired up for discipline like [3DPrintedLife]’s terrifying, tingle-inducing typist trainer keyboard (YouTube, embedded below).

This keyboard uses capsense modules and a neural network to detect whether the user is touch-typing or just hunting and pecking. If you’re doing it wrong, you’ll get a shock from the guts of a prank shock pen every time you peck the T or Y keys. Oh, and just for fun, there’s a 20 V LED bar across the top that is supposed to deter you from looking down at your hands with randomized, blindingly bright strobing light.

Twenty-four of the keys are connected in groups of three by finger usage — for example Q, A, and Z are wired to the same capsense module. These are all wired up to a Raspberry Pi Zero along with the light bar. [3DPrintedLife] was getting a lot of cross-talk between capsense modules, so they solved the problem in software by training a TensorFlow model with a ton of both proper and improper typing data.
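
In sketch form, that software fix might look like a small network fed with short windows of all eight capsense channels, trained to separate proper touch typing from hunting and pecking. The shapes, architecture, and file names here are illustrative rather than [3DPrintedLife]'s actual model.

```python
# Classify typing technique from windows of raw capsense readings, letting
# the network learn around the cross-talk between adjacent modules.
import numpy as np
import tensorflow as tf

CHANNELS = 8   # 24 keys grouped three to a capsense module
WINDOW = 32    # consecutive readings per training example (assumed)

X = np.load("capsense_windows.npy")   # shape (n, WINDOW, CHANNELS)
y = np.load("technique_labels.npy")   # 0 = hunt-and-peck, 1 = touch typing

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),  # per-finger temporal patterns
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=30, validation_split=0.2)
```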

We love the little meter on the touchscreen that shows at a glance how you’re doing in the touch typing department. As the meter inches leftward, you know you’re in for a shock. [3DPrintedLife] even built in some games that use pain to promote faster and more accurate typing. Check out the build video after the break, but don’t say we didn’t warn you about the strobing lights.

The secret to the shock pen is a tiny flyback transformer like the kind used in CRT televisions. Find a full-sized flyback transformer and you can build yourself a handheld high-voltage power supply.


Reachy The Open Source Robot Says Bonjour

Humanoid robots always attract attention, but anyone who tries to build one quickly learns respect for a form factor we take for granted because we were born with it. Pollen Robotics wants to help move the field forward with Reachy: a robot platform available both as a product and as a wealth of information shared online.

This French team has released open source robots before. We’ve looked at their Poppy robot and see a strong family resemblance with Reachy. Poppy was a very ambitious design with both arms and legs, but it could only ever walk with assistance. In contrast, Reachy focuses on just the upper body. One of the most interesting innovations is found in Reachy’s neck, a cleverly designed 3-DOF mechanism they call Orbita. Combined with two moving antennae at the top of the head, Reachy can emote a wide range of expressions despite not having much of a face. The remainder of Reachy’s joints are articulated with Dynamixel serial bus servos, though we see an optional Orbita-based hand attachment in the demo video (embedded below).

Reachy’s €19,990 price tag may be affordable relative to industrial robots, but it’s pretty steep for the home hacker. No need to fret, though: those of us with smaller bank accounts can still join the fun, because Pollen Robotics has open sourced a lot of Reachy details. Digging into this information, we see Reachy has a Google Coral for accelerating TensorFlow and a Raspberry Pi 4 for general computation. Mechanical designs are released via web-based Onshape CAD. Reachy’s software suite on GitHub is primarily focused on Python, which lets us experiment within a Jupyter notebook. Simulation can be done within the Unity 3D game engine, which can optionally be compiled to run in a browser like the simulation playground. Academic robotics researchers are not excluded from the fun either, as ROS1 integration is also available, though ROS2 support is still on the to-do list.
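
To give a flavor of that Jupyter-notebook workflow, here's what a quick session might look like. Fair warning: every name below (module, joint, and method alike) is a placeholder rather than Pollen Robotics' documented API, so consult their GitHub for the real calls.

```python
# Purely illustrative pseudocode for a Reachy notebook session; all names
# below are placeholders, not the documented package layout.
from reachy import Reachy  # placeholder import

reachy = Reachy()

# Nod "bonjour" with the 3-DOF Orbita neck, then lift the right elbow.
reachy.head.look_at(x=0.5, y=0.0, z=0.2, duration=1.0)  # placeholder call
reachy.right_arm.elbow_pitch.goal_position = -90.0      # degrees, placeholder
```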

Reachy might not be as sophisticated as some humanoid designs we’ve seen, and without a lower body there’s no way for it to dance. But we are very appreciative of a company willing to share knowledge with the world. May it spark new ideas for the future.

[via Engadget]


Into The Belly Of The Beast With Placemon

No, no, at first we thought it was a Pokemon too, but Placemon monitors your place, your home, your domicile. Instead of a purpose-built device, like a CO detector or a burglar alarm, this is a generalized monitor that streams data to a central processor where machine learning algorithms notify you if something is awry. In a way, it is like a guard dog who texts you if your place is unusually cold, on fire, unlawfully occupied, or underwater.

[anfractuosity] is trying to make a hacker-friendly version inspired by a scientific paper on general-purpose sensing, using less expensive components at the cost of some accuracy. For example, the paper suggests thermopile arrays, essentially low-resolution heat vision, but Placemon will make do with a thermometer, which seems like a prudent starting place.

The PCB is ready to start collecting sound, temperature, humidity, barometric pressure, illumination, and passive IR readings, then report that telemetry via an onboard ESP32 over WiFi. A box running TensorFlow receives the data from any number of locations and is being trained to recognize the sensor signatures of a few everyday household events. Training starts with events that are easy to repeat, like kitchen sounds and appliance operations. From there, [anfractuosity] hopes to be versed enough to teach it new sounds, so if a pet gets added to the mix, it doesn’t assume there is an avalanche every time Fluffy needs to go to the bathroom.
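
On the receiving side, the classification job might be sketched like this: windows of multichannel telemetry in, one of a handful of household events out. The channel count, event list, and file names are illustrative guesses at what [anfractuosity]'s training setup could look like.

```python
# Classify windows of multichannel Placemon telemetry into household events.
import numpy as np
import tensorflow as tf

CHANNELS = 6   # sound, temperature, humidity, pressure, light, PIR
WINDOW = 128   # readings per classification window (assumed)
EVENTS = ["quiet", "kitchen_sounds", "appliance_running"]  # easy-to-repeat starters

X = np.load("telemetry_windows.npy")  # shape (n, WINDOW, CHANNELS)
y = np.load("event_labels.npy")       # integer event ids

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(EVENTS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=25, validation_split=0.2)
```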

We have another outstanding example of sensing household events without directly interfacing with an appliance, and bringing a sensor suite to your car might be up your alley.