Thought Control Via Handwriting

Computers haven’t done much for the quality of our already poor handwriting. However, a man paralyzed by an accident can now feed input into a computer by simply thinking about handwriting, thanks to work by Stanford University researchers. Compared to more cumbersome systems based on eye motion or breath, the handwriting technique enables entry at up to 90 characters a minute.

Currently, the feat requires a lab’s worth of equipment, but it could be made practical for everyday use with some additional work and — hopefully — less invasive sensors. In particular, the current setup relies on two microelectrode arrays implanted in the precentral gyrus, the part of the brain that controls movement. When the subject thinks about writing, recognizable patterns appear in the collected data. The rest is just math and classification using a neural network.

If you want to try your hand at processing this kind of data and don’t have a set of electrodes to implant, you can download nearly eleven hours of data already recorded. The code is out there, too. What we’d really like to see is some easier way to grab the data to start with. That could be a real game-changer.
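Curious what the “math and classification” step might look like in practice? Here’s a minimal TensorFlow sketch that treats each imagined character as a window of binned multichannel activity. The shapes (192 electrode channels, 100 time bins, 31 character classes) are illustrative assumptions for the sketch, not the researchers’ exact pipeline:

```python
import tensorflow as tf

# Illustrative shapes: two 96-channel arrays give 192 electrode channels,
# binned into 100 time steps per imagined character, with 31 output classes.
# These numbers are assumptions, not the published pipeline.
TIME_STEPS, CHANNELS, CLASSES = 100, 192, 31

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, CHANNELS)),
    tf.keras.layers.LSTM(128),                       # summarize the time series
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(binned_spike_counts, character_labels, ...) once the downloaded
# recordings have been windowed and labeled.
```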

More traditional input methods using your mouth have been around for a long time. We’ve also looked at work that involves moving your head.

Mind-Controlled Flamethrower

Mind control might seem like something out of a sci-fi show, but like the tablet computer, universal translator, or virtual reality device, it’s actually a technology that has made it into the real world. While these systems often require advanced and expensive equipment to interpret brain waves properly, with the right machine learning setup it’s possible to do things like this mind-controlled flamethrower on a much smaller budget.

[Nathaniel F] was already experimenting with brain-computer interfaces and machine learning, and wanted to see if he could build something practical by combining the two technologies. Instead of turning to a lab-grade EEG machine to read brain patterns, he picked up a much less expensive Mindflex and paired it with a machine learning system running TensorFlow to make up for some of its shortcomings. The processing is done by a Raspberry Pi 4, which sends commands to an Arduino to fire the flamethrower when it detects the proper thought patterns. Don’t forget the flamethrower part of this build either: it was designed and built entirely by [Nathaniel F] as well, using gas and an arc lighter.
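As a rough sketch of how the Pi-to-Arduino handoff could work, here’s the general shape of a detection loop in Python. The model file, serial port, and read_mindflex_window() helper are all hypothetical stand-ins; [Nathaniel F]’s actual code may differ considerably:

```python
import time
import numpy as np
import serial                      # pyserial
import tensorflow as tf

# Hypothetical stand-ins: the model file, serial port, and window format
# are illustrative, not [Nathaniel F]'s actual code.
model = tf.keras.models.load_model("mindflex_model.h5")
arduino = serial.Serial("/dev/ttyACM0", 9600)

def read_mindflex_window():
    """Placeholder: return one window of EEG band-power readings."""
    return np.random.rand(1, 10).astype("float32")

while True:
    window = read_mindflex_window()
    fire_probability = float(model.predict(window, verbose=0)[0, 0])
    if fire_probability > 0.9:     # high bar: it is, after all, a flamethrower
        arduino.write(b"F")        # Arduino handles the gas and arc lighter
    time.sleep(0.1)
```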

The build took many hours of training to gather enough data for the neural network, and it works as the proof of concept he was hoping for, though [Nathaniel F] notes that it could be improved by replacing the outdated Mindflex with a better EEG. For now though, we appreciate seeing sci-fi in the real world in projects like this, or in other mind-controlled projects like this one, which converts a prosthetic arm into a mind-controlled music synthesizer.


Winners Of Hackaday’s Data Loggin’ Contest: Bluetooth Gardening, Counting Cups, And Predicting Rainfall

The votes for Hackaday’s Data Loggin’ Contest have been received, saved to SD, pushed out to MQTT, and graphed. Now it’s time to announce the three projects that made the most sense out of life’s random data and earned themselves a $100 gift certificate for Tindie, the Internet’s foremost purveyor of fine hand-crafted artisanal electronics.

First up, and winner of the Data Wizard category, is this whole-garden soil moisture monitor by [Joseph Eoff]. You might not realize it from the picture at the top of the page, but lurking underneath the mulch of that lovely garden are more than 20 Bluetooth soil sensors arranged in a grid pattern. All of the data is sucked up by a series of solar-powered ESP32 access points, and ultimately ends up on a Raspberry Pi by way of MQTT. Here, custom Python software generates a heatmap that indicates possible trouble spots in the garden. With its easy-to-understand visualization of what’s happening under the surface, this project perfectly captured the spirit of the category.
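For a flavor of the MQTT-to-heatmap plumbing, here’s a minimal Python sketch using paho-mqtt (1.x API) and matplotlib. The garden/<row>/<col> topic scheme, grid size, and broker hostname are our assumptions, not [Joseph Eoff]’s actual setup:

```python
import matplotlib.pyplot as plt
import numpy as np
import paho.mqtt.client as mqtt

# Assumed topic scheme: garden/<row>/<col> carrying a moisture percentage.
GRID = np.full((5, 5), np.nan)

def on_message(client, userdata, msg):
    _, row, col = msg.topic.split("/")
    GRID[int(row), int(col)] = float(msg.payload)

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("raspberrypi.local")       # assumed broker hostname
client.subscribe("garden/+/+")
client.loop_start()

# ...once readings have trickled in, render the moisture heatmap...
plt.imshow(GRID, cmap="YlGnBu", vmin=0, vmax=100)
plt.colorbar(label="Soil moisture (%)")
plt.show()
```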

Next up is the Nespresso Shield from [Steadman]. This clever gadget literally listens for the telltale sounds of the eponymous coffee maker doing its business to not only estimate your daily consumption, but warn you when the machine is running low on water. The clever non-invasive method of pulling data from a household appliance made this a strong entry for the Creative Genius category.
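We don’t know exactly how [Steadman]’s detector works under the hood, but counting pump runs from band-limited audio energy might look something like this sketch. The filename, frequency band, and threshold are all assumptions:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("coffee_maker.wav")   # hypothetical recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                   # mix down to mono

f, t, Sxx = spectrogram(audio.astype(float), fs=rate)

# Energy in a band where the pump is loud; 1-2 kHz is an assumption.
band = (f >= 1000) & (f <= 2000)
energy = Sxx[band].sum(axis=0)
pumping = energy > 10 * np.median(energy)        # crude adaptive threshold

# Each rising edge in the activity trace is one pump run.
brews = int(np.count_nonzero(pumping[1:] & ~pumping[:-1]))
print(f"Detected {brews} pump events")
```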

Last but certainly not least is this comprehensive IoT weather station that uses machine learning to predict rainfall. With crops and livestock at risk from sudden intense storms, [kutluhan_aktar] envisions this device as an early warning system for farmers. The documentation on this project, from setting up the GPRS-enabled ESP8266 weather station to creating the web interface and importing all the data into TensorFlow, is absolutely phenomenal. This project serves as an invaluable framework for similar DIY weather detection and prediction systems, which made it the perfect choice for our World Changer category.
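As a toy illustration of the TensorFlow step, here’s a minimal rain/no-rain classifier over a handful of station readings. The four features and the synthetic training data are stand-ins for the project’s real logs:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: temperature, humidity, pressure, wind speed.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)).astype("float32")
y = (X[:, 1] - X[:, 2] > 0).astype("float32")    # fake "rain soon" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)
print(model.predict(X[:1], verbose=0))           # rain probability, latest reading
```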

There may have only been three winners this time around, but the legendary skill and creativity of the Hackaday community was on full display for this contest. A browse through the rest of the submissions is highly recommended, and we’re sure the creators would love to hear your feedback and suggestions in the comments.


An RP2040 Board Designed For Machine Learning

Machine learning (ML) typically conjures up ideas of fancy code requiring oodles of storage and tons of processing power. However, there are some ML models that, once trained, can readily be run on much more spartan hardware – even a microcontroller! The RP2040, star of the Raspberry Pi Pico, is one such chip up to the task, and [Arducam] have announced a board aiming to employ it to those ends – the Pico4ML.

The board goes heavy on the hardware, equipping the RP2040 with plenty of tools useful for machine learning tasks. There’s a QVGA camera on board, as well as a tiny 0.96″ TFT display. The camera feed can even be streamed live to the screen if so desired. There’s also a microphone to capture audio and an IMU, already baked into the board. This puts object, speech, and gesture recognition well within the purview of the Pico4ML.

Running ML models on a board like the Pico4ML isn’t about chasing raw performance. Instead, it’s intended for applications where low power and portability are key. If you’ve got some ideas on what the Pico4ML could do and do well, sound off in the comments. We’d probably hook it up to a network so we could have it automatically place an order when we yell out for pizza. We’ve covered machine learning on microcontrollers before, too – with a great Remoticon talk on how to get started!
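If you’re wondering how a Keras model ends up small enough for a chip like the RP2040, the usual trick is a TensorFlow Lite conversion with quantization. A minimal sketch, assuming a small trained model (the tiny CNN here is just a placeholder):

```python
import tensorflow as tf

# Placeholder model: a tiny CNN over downscaled camera frames. In practice
# you'd convert whatever model you actually trained.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
# The .tflite blob then gets compiled into the firmware and executed with
# TensorFlow Lite for Microcontrollers on the RP2040.
```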

AI Upscaling And The Future Of Content Delivery

The rumor mill has recently been buzzing about Nintendo’s plans to introduce a new version of their extremely popular Switch console in time for the holidays. A faster CPU, more RAM, and an improved OLED display are all pretty much a given, as you’d expect for a mid-generation refresh. Those upgraded specifications will almost certainly come with an inflated price tag as well, but given the incredible demand for the current Switch, a $50 or even $100 bump is unlikely to dissuade many prospective buyers.

But according to a report from Bloomberg, the new Switch might have a bit more going on under the hood than you’d expect from the technologically conservative Nintendo. Their sources claim the new system will utilize an NVIDIA chipset capable of Deep Learning Super Sampling (DLSS), a feature which is currently only available on high-end GeForce RTX 20 and GeForce RTX 30 series GPUs. The technology, which has already been employed by several notable PC games over the last few years, uses machine learning to upscale rendered images in real-time. So rather than tasking the GPU with producing a native 4K image, the engine can render the game at a lower resolution and have DLSS make up the difference.

[Image: The current model Nintendo Switch]

The implications of this technology, especially for computationally limited devices, are immense. For the Switch, which doubles as a battery-powered handheld when removed from its dock, the use of DLSS could allow it to produce visuals similar to those of the far larger and more expensive Xbox and PlayStation systems it competes with. If Nintendo and NVIDIA can prove DLSS to be viable on something as small as the Switch, we’ll likely see the technology come to future smartphones and tablets to make up for their relatively limited GPUs.

But why stop there? If artificial intelligence systems like DLSS can scale up a video game, it stands to reason the same techniques could be applied to other forms of content. Rather than saturating your Internet connection with a 16K video stream, will TVs of the future simply make the best of what they have using a machine learning algorithm trained on popular shows and movies?
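DLSS itself is proprietary, but the underlying idea of learned upscaling is simple enough to sketch. Here’s a toy ESPCN-style super-resolution network in Keras, which learns to turn low-resolution frames into higher-resolution ones once trained on matching pairs; this is an illustration of the general technique, not NVIDIA’s implementation:

```python
import tensorflow as tf

def build_upscaler(scale=2, channels=3):
    """A toy ESPCN-style super-resolution net: a few convolutions followed
    by a depth-to-space (pixel shuffle) step that rearranges channels into
    a higher-resolution image."""
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(channels * scale**2, 3, padding="same")(x)
    outputs = tf.nn.depth_to_space(x, scale)
    return tf.keras.Model(inputs, outputs)

model = build_upscaler()
model.compile(optimizer="adam", loss="mse")
# Train on (low-res, high-res) frame pairs; at inference time the network
# fills in the missing detail, which is the core idea behind DLSS.
```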


Machine Learning Current Sensor Snoops On MCUs

Anyone who’s ever tried their hand at reverse engineering a piece of hardware has wished there was some kind of magic wand you could tap on a PCB to understand what it’s doing and why. We imagine that’s what put security researcher [Mark C] on the path to developing CurrentSense-TinyML, a fascinating proof of concept that uses machine learning and sensitive current measurements to try and determine what a microcontroller is up to.

[Image: Energy consumption as the LED blinks]

The idea is simple enough: just place an INA219 current sensor between the power supply and the microcontroller under observation, and record the resulting measurements as it goes about its business. Of course, in this case [Mark] knew what the target Arduino Nano was doing, because he wrote the code that blinks its onboard LED.
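If you wanted to capture similar training data yourself, reading an INA219 from a Raspberry Pi with Adafruit’s CircuitPython library is only a few lines. Note that this is our own sketch of the data collection step, with an assumed sample rate; [Mark]’s setup differs in the details:

```python
import time
import board                       # Adafruit Blinka on a Raspberry Pi
import adafruit_ina219

i2c = board.I2C()
sensor = adafruit_ina219.INA219(i2c)   # default I2C address 0x40

samples = []
start = time.monotonic()
while time.monotonic() - start < 10:   # ten seconds of training data
    samples.append(sensor.current)     # milliamps
    time.sleep(0.001)                  # roughly 1 kHz, an assumed rate

with open("current_log.csv", "w") as f:
    f.writelines(f"{ma}\n" for ma in samples)
```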

This allowed him to create training data for TensorFlow, which was ultimately optimized into a model that could fit onto the Arduino Nano 33 BLE Sense which stands in for our magic wand. The end result is that the model can accurately predict when the Nano has fired up its LED based on the amount of power it’s using. [Mark] has done a fantastic job of documenting the whole process, which also doubles as a great intro for putting machine learning to work on a microcontroller.
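The model side can be surprisingly small, too. Something like this 1D convolutional classifier over windows of current samples (the window length and two-class setup are our assumptions) would be a plausible starting point:

```python
import tensorflow as tf

WINDOW = 64   # assumed number of current samples per classification window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),   # learn local ripples
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(2, activation="softmax"),    # LED on vs. LED off
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# After training on labeled current windows, a TensorFlow Lite conversion
# shrinks the model enough to run on the Nano 33 BLE Sense.
```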

Now we already know what you’re thinking: obviously the current would go up when the LED was lit, so the machine learning aspect is completely unnecessary. That may be true in this limited context, but remember, this is just a proof of concept to base further work on. In the future, with more training data, this technique could potentially be used to identify a whole range of nuanced activities. You’d be able to see when the MCU was sitting idle, when it was writing to flash, or when it was reading from sensors. In fact, with a good enough model, it might even be possible to identify the individual sensors that are being polled.

These are early days, but we’re very interested in seeing where this research goes. It might not be magic, but if analyzing the current draw of a coffee maker can tell you how much everyone in the office is drinking, then maybe it can help us figure out what all these unlabeled ICs are doing.

Science Officer…Scan For Elephants!

If you watch many espionage or terrorism movies set in the present day, there’s usually a scene where some government employee enhances a satellite image to show a clear picture of the main villain’s face. Do modern spy satellites have that kind of resolution? We don’t know, and if we did we couldn’t tell you anyway. But we do know that even with unclassified resolution, scientists are using satellite imagery and machine learning to count things like elephant populations.

When you think about it, counting wildlife populations in their natural habitat is a hard problem. First, if you go in person, you disturb the target animals. Even a drone is probably going to upset timid wildlife. Then there is the problem of trying to cover a large area and figuring out whether the elephant you see today is the same one you saw yesterday. If you guess wrong, you will either undercount or overcount.

The Oxford scientists counting elephants used the Worldview-3 satellite, which collects up to 680,000 square kilometers of imagery every day. You aren’t disturbing any of the observed creatures, and since each shot covers a huge swath of territory, the problem of double counting all but vanishes.
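Conceptually, the counting step boils down to tiling the mosaic and running a detector over each tile. Here’s a deliberately simplified Python sketch; the tile size, threshold, and the tiny stand-in CNN are ours, and the Oxford team’s actual model is far more sophisticated:

```python
import numpy as np
import tensorflow as tf

TILE = 64   # assumed tile size in pixels

# Tiny stand-in classifier; the real model would be trained on labeled tiles.
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TILE, TILE, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

mosaic = np.zeros((1024, 1024, 3), dtype="float32")   # stand-in for real imagery
count = 0
for y in range(0, mosaic.shape[0] - TILE + 1, TILE):
    for x in range(0, mosaic.shape[1] - TILE + 1, TILE):
        tile = mosaic[y:y + TILE, x:x + TILE][np.newaxis]
        if classifier.predict(tile, verbose=0)[0, 0] > 0.5:
            count += 1
print(f"Estimated elephants: {count}")
```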
