Side-Channel Attack Shows Vulnerabilities Of Cryptocurrency Wallets

What’s in your crypto wallet? The simple answer should be fat stacks of Bitcoin or Ethereum and little more. But if you use a hardware cryptocurrency wallet, you may be carrying around a big fat vulnerability, too.

At the 35C3 conference last year, [Thomas Roth], [Josh Datko], and [Dmitry Nedospasov] presented a side-channel attack on a hardware crypto wallet. The wallet in question is a Ledger Blue, a smartphone-sized device that appears to have been discontinued by the manufacturer but is still available on the secondary market. The wallet sports a touch-screen interface for managing your crypto empire, and therein lies the weakness that these researchers exploited.

By using a HackRF SDR and a simple whip antenna, they found that the wallet radiated a distinctive and relatively strong signal at 169 MHz every time a virtual key was pressed to enter a PIN. Each burst started with a distinctive 11-bit data pattern; with the help of a logic analyzer, they determined that each packet contained the location of the key icon on the screen.
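For a feel of what that burst hunting looks like in software, here's a minimal sketch that finds keypress bursts in a recorded capture. It assumes complex64 IQ samples saved to a file (for example, from a GNU Radio file sink tuned to 169 MHz); the sample rate, smoothing window, and threshold are illustrative guesses, not values from the talk.

```python
# Locate keypress bursts in a baseband capture (hypothetical file name).
# Assumes complex64 IQ samples, e.g. recorded via a GNU Radio file sink.
import numpy as np

SAMPLE_RATE = 2_000_000  # 2 MS/s, an assumption

iq = np.fromfile("wallet_capture.iq", dtype=np.complex64)
envelope = np.abs(iq)

# Smooth the envelope so individual RF cycles don't trigger the detector.
kernel = np.ones(256) / 256
smoothed = np.convolve(envelope, kernel, mode="same")

# Anything well above the noise floor is a candidate burst.
threshold = smoothed.mean() + 4 * smoothed.std()
active = smoothed > threshold

# Find rising edges: samples where the envelope crosses the threshold.
edges = np.flatnonzero(~active[:-1] & active[1:])
for e in edges:
    print(f"burst at {e / SAMPLE_RATE * 1000:.2f} ms")
```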

Next step: put together a training set. They rigged up a simple automatic button-masher using a servo and some 3D-printed parts, and captured signals from the SDR for 100 presses of each key. The raw data was massaged a bit to prepare it for TensorFlow, and the trained network proved accurate enough to give any hardware wallet user pause – especially since they captured the data from two meters away with relatively simple and concealable gear.
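The presenters' actual network wasn't published in the talk summary, but a toy TensorFlow version of the idea — classify a fixed-length burst envelope into one of ten PIN keys — might look something like the sketch below. The layer sizes, input length, and placeholder training data are assumptions.

```python
# Toy burst classifier in the spirit of the attack; not the real network.
import numpy as np
import tensorflow as tf

NUM_KEYS = 10
SAMPLES_PER_BURST = 1024

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SAMPLES_PER_BURST, 1)),
    tf.keras.layers.Conv1D(16, 9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_KEYS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 100 captures per key, as in the talk; random placeholders stand in
# for the real burst envelopes here.
x = np.random.rand(NUM_KEYS * 100, SAMPLES_PER_BURST, 1).astype("float32")
y = np.repeat(np.arange(NUM_KEYS), 100)
model.fit(x, y, epochs=10, validation_split=0.2)
```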

Every lock contains the information needed to defeat it, requiring only a motivated attacker with the right tools and knowledge. We’ve covered other side-channel attacks before; sadly, they’ll probably only get easier as technologies like SDR and machine learning rapidly advance.

[via RTL-SDR.com]

Automate Sorting Your Trash With Some Healthy Machine Learning

Sorting trash into the right categories is pretty much a daily bother. Who hasn’t stood there in front of the two, three, five or more bins (depending on your area and country), pondering which one an item should go into? [Alvaro Ferrán Cifuentes]’s SeparAItor project is a proof-of-concept robot that uses a camera and a motorized sorting tray to identify whatever is tossed into the tray and sort it into the right bin.

The hardware consists of a sorting tray mounted on top of a Bluetooth-connected pan-and-tilt platform. The platform communicates with the rest of the system, which uses a camera and OpenCV to obtain the image data and a Keras-based Python back-end that implements the deep learning neural network.
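As a rough illustration of that pipeline, a single capture-and-classify pass with OpenCV and Keras could look like the sketch below. The model file, input size, and bin names are stand-ins, not the project's actual values.

```python
# One capture-and-classify pass: grab a frame, run the Keras model.
import cv2
import numpy as np
import tensorflow as tf

BINS = ["paper", "plastic", "glass", "organic"]      # hypothetical categories
model = tf.keras.models.load_model("separaitor.h5")  # hypothetical file

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    # Normalize to the network's expected input size and value range.
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...])[0]
    print("detected:", BINS[int(np.argmax(probs))])
```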

The system was trained on photos [Alvaro] took himself of the items to be sorted, as these most closely match real-life conditions. After the recognition results were good enough, the system was put together, with a motion-detection feature added to respond when a new item is tossed into the tray. The system then attempts to identify the item, categorize it, and instruct the platform to rotate to the correct orientation before tilting and dropping it into the appropriate bin. See the embedded video after the break for the system in action.
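The write-up doesn't spell out the motion-detection method, but simple frame differencing with OpenCV is the usual suspect; a minimal version might look like this, with tunable pixel and area thresholds that are guesses:

```python
# Minimal frame-differencing motion trigger with OpenCV.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    prev = gray
    # Count pixels that changed noticeably since the last frame.
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:
        print("new item detected - run classification")
cap.release()
```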

Believe it or not, this isn’t the first trash-sorting robot to grace the pages of Hackaday. Concepts like these, which rely on automation and machine vision, could potentially one day be deployed on a large scale to help reduce how much recyclable material ends up in landfills.

DIY Personal Assistant Robot Hears And Sees All

Who wouldn’t want a robot that can fetch them a glass of water? [Saral Tayal] didn’t just think that, he jumped right in and built his own personal assistant robot. This isn’t just some remote-controlled rover though. The robot actually listens to his voice and recognizes his face.

The body of the robot is the common “Rover 5” platform, to which [Saral] added a number of 3D-printed parts. A forklift-like sled gives the robot the ability to pick things up. Some of the parts are more about form than function – [Saral] loves NASA’s Spirit and Opportunity Mars rovers, so he added some simulated solar cells and other greebles.

The Logitech webcam up front is very functional — images are fed to machine learning models, while audio is processed to listen for commands. This robot can find and pick up 90 unique objects.

The robot’s brains are a Raspberry Pi. It uses TensorFlow for object recognition. Some of the models [Saral] is using are pretty large – so big that the Pi could only manage a couple of frames per second at 100% CPU utilization. A Google Coral coprocessor sped things up quite a bit, while only using about 30% of the Pi’s processor.
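For a concrete picture, here's a hedged sketch of what Edge TPU-accelerated detection looks like with Google's pycoral library. The model and label files are placeholders: a COCO-trained SSD, whose label map happens to cover 90 object classes, matching the count mentioned above. This is an illustration of the technique, not [Saral]'s actual code.

```python
# Detect objects in a webcam frame on the Coral Edge TPU via pycoral.
import cv2
from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("ssd_mobilenet_v2_coco_edgetpu.tflite")
interpreter.allocate_tensors()
labels = read_label_file("coco_labels.txt")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # model expects RGB
    # Resize the frame to the model's input size and run inference.
    _, scale = common.set_resized_input(
        interpreter, (rgb.shape[1], rgb.shape[0]),
        lambda size: cv2.resize(rgb, size))
    interpreter.invoke()
    for obj in detect.get_objects(interpreter, score_threshold=0.5,
                                  image_scale=scale):
        print(labels.get(obj.id, obj.id), f"{obj.score:.2f}")
```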

It takes several motors to control the robot’s tracks and sled. This is handled by two Roboclaw motor controllers, which are themselves commanded by the Pi.
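For the curious, commanding a Roboclaw from Python tends to look like the following, using BasicMicro's roboclaw_3 library. The serial port, packet-serial address, and speed values here are assumptions, not values from [Saral]'s code.

```python
# Drive the tracks through a Roboclaw motor controller (hedged sketch).
from roboclaw_3 import Roboclaw

rc = Roboclaw("/dev/ttyS0", 38400)  # Pi UART, default baud rate (assumed)
rc.Open()

ADDRESS = 0x80  # factory-default packet-serial address

# Drive both tracks forward at roughly half speed (0-127 range).
rc.ForwardM1(ADDRESS, 64)
rc.ForwardM2(ADDRESS, 64)
```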

We’ve seen quite a few mobile robot rovers over the years, but [Saral’s] ‘bot is one of the most functional designs out there. Even better is the fact that it is completely open source. You can find the code and 3D models on his GitHub repo.

Check out a video of the personal assistant rover in action after the break.


Google Launches AI Platform That Looks Remarkably Like A Raspberry Pi

Google has promised us new hardware products for machine learning at the edge, and now it’s finally out. The thing you’re going to take away from this is that Google built a Raspberry Pi with machine learning. This is Google’s Coral, an Edge TPU platform built around a custom-made ASIC designed to run machine learning algorithms ‘at the edge’. Here is the link to the board that looks like a Raspberry Pi.

This new hardware was launched ahead of the TensorFlow Dev Summit, and it revolves around machine learning and ‘AI’ in embedded applications, specifically power- and computationally-limited environments. This is ‘the edge’ in marketing speak, and we’ve already seen a few products designed from the ground up to run ML algorithms and inference in embedded applications. There are RISC-V microcontrollers with machine learning accelerators available now, and Nvidia has been working on this for years. Now Google is throwing its hat into the ring with a custom-designed ASIC that accelerates TensorFlow. It just so happens that the board looks like a Raspberry Pi.
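To make “accelerates TensorFlow” concrete: on Coral hardware you compile a quantized TensorFlow Lite model for the Edge TPU, then run it through the TensorFlow Lite interpreter with the Edge TPU delegate loaded. A hedged sketch, with placeholder file names:

```python
# Run an Edge TPU-compiled TFLite model through the TFLite interpreter.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # output of Google's edgetpu_compiler
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input of the right shape and dtype, then run inference.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```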


PrintRite Uses TensorFlow To Avoid Printing Catastrophes

TensorFlow is a popular machine learning package that, among other things, is particularly adept at image recognition. If you want to use a webcam to monitor cats on your lawn or alert you to visitors, TensorFlow can help you achieve this with a bunch of pre-baked libraries. [Eric] took a different tack with PrintRite – using TensorFlow to monitor his 3D printer and warn him of prints gone bad – or worse.

The project relies on training TensorFlow to recognize images of 3D prints gone bad. If layers are separated, or the nozzle is covered in melted goo, it’s probably a good idea to stop the print. Worst case, your printer could begin smoking or catch fire – in that case, [Eric] has the system configured to shut the printer off using a TP-Link Wi-Fi enabled power socket.
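The write-up doesn't say how the socket is driven, but with a TP-Link Kasa plug the shutoff step could be as simple as this python-kasa sketch; the IP address and failure threshold are placeholders:

```python
# Cut printer power through a TP-Link smart plug via python-kasa.
import asyncio
from kasa import SmartPlug

async def kill_printer(failure_probability: float) -> None:
    if failure_probability > 0.9:  # classifier thinks the print has failed
        plug = SmartPlug("192.168.1.50")
        await plug.update()   # fetch device state before commanding it
        await plug.turn_off()
        print("printer powered off")

asyncio.run(kill_printer(0.95))
```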

Currently, the project exists as a plugin for OctoPrint and relies on two Raspberry Pis – a Zero to handle the camera, and a 3B+ to handle OctoPrint and the TensorFlow software. It’s in an early stage of development and is likely not quite ready to replace human supervision. Still, this is a project that holds a lot of promise, and we’re eager to see further development in this area.

There’s a lot of development happening to improve the reliability of 3D printers – we’ve even seen a trick device for resuming failed prints.

Leigh Johnson’s Guide To Machine Vision On Raspberry Pi

We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)

The talk is a very concise 13 minutes, so Leigh flies through definitions of basic terms before quickly naming TensorFlow and Keras as the tools she used. The time she saved there was spent explaining what convolutional neural networks are and how they work, just enough to prepare the audience. But all of that is really just background; the meat of the talk is the self-contained examples that Leigh has put together and made available online. I love to see that, since it means you can go beyond just watching and try it out for yourself.
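If you want a taste before watching, a convolutional network of the kind her background material covers can be defined in Keras in a dozen lines; the input size and layer widths below are illustrative, not taken from her examples:

```python
# A minimal convolutional network in Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),          # small RGB image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local features
    tf.keras.layers.MaxPooling2D(),                    # downsample
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 example classes
])
model.summary()
```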

Ludwig Promises Easy Machine Learning From Uber

Machine learning has brought an old idea — neural networks — to bear on a range of previously difficult problems such as handwriting and speech recognition. Better software and hardware have made it feasible to apply sophisticated machine learning algorithms that would previously have been possible only on giant supercomputers. However, there’s still a learning curve for developing both models and software to use these trained models. Uber — you know, the guys that drive you home when you’ve had a bit too much — have what they are calling a “code-free deep learning toolbox” named Ludwig. The promise is that you can create, train, and use models to extract features from data without writing any code. You can find the project itself on GitHub.

The toolbox is built on top of TensorFlow, and they claim:

Ludwig is unique in its ability to help make deep learning easier to understand for non-experts and enable faster model improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling.
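The code-free path is a YAML model definition plus Ludwig's command-line tool, but the toolbox also exposes a small Python API. A hedged sketch against the API as it looked around this release (argument names have changed in later versions; the dataset and column names are made up):

```python
# Train and use a text classifier with Ludwig's Python API (era-specific).
from ludwig.api import LudwigModel

model_definition = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(model_definition)
train_stats = model.train(data_csv="reviews.csv")        # hypothetical dataset
predictions = model.predict(data_csv="new_reviews.csv")  # hypothetical dataset
```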
