A Raspberry Pi 5 Is Better Than Two Pi 4s

What’s as fast as two Raspberry Pi 4s? The brand-new Raspberry Pi 5, that’s what. And for only a $5 upcharge (with an asterisk), it’s going to be the new go-to board from the British House of Fruity Single-Board Computers. But aside from the brute speed, it also has a number of cool features that will make the board easier to use in many projects, and it’s going to be on sale in October. Raspberry Pi sent us one for review, and if you were just about to pick up a Pi 4 for a project that needs the speed, we’d say you might want to wait a couple of weeks until the Raspberry Pi 5 goes on sale.

Twice as Nice

On essentially every benchmark, the Raspberry Pi 5 comes in two to three times faster than the Pi 4. This is thanks to the new Broadcom BCM2712 system-on-chip (SoC), which runs four ARM Cortex-A76 cores at 2.4 GHz instead of the Pi 4’s Cortex-A72 cores at 1.8 GHz. That core upgrade accounts for most of the 2x – 3x advantage. (Although the Pi 4 was eminently overclockable in the CM4 package.)
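
If you want to eyeball that speedup yourself, a crude single-core check is easy to run on both boards. This is only a sketch: absolute numbers vary with Python version, thermals, and power supply, but the ratio between a Pi 4 and a Pi 5 should land in that 2x – 3x range. (We deliberately avoid hashing or AES here, since the Pi 5’s new crypto extensions would exaggerate the gap; more on those below.)

```python
import time

def single_core_score(n: int = 5_000_000) -> float:
    """Time a pure-Python integer loop; returns loop iterations per second."""
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i % 7  # arbitrary busywork the interpreter can't skip
    return n / (time.perf_counter() - start)

print(f"{single_core_score():,.0f} iterations/sec")
```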

The DRAM runs at double the clock speed. The video core is more efficient and pushes pixels about twice as fast. The new WiFi controller in the SoC allows about twice as much throughput to the same radio. Even the SD card interface is capable of running twice as fast, speeding up boot times to easily under 10 seconds – maybe closer to 8, but who’s counting?

Heck, while we’re on factors of two, there are now two MIPI camera/display lines, so you can do stereo imaging straight off the board, or run a camera and an external display simultaneously. And it’s capable of driving two 4K HDMI displays at 60 Hz.

There are only two exceptions to the overall factor-of-two improvements. First, the Gigabit Ethernet remains Gigabit Ethernet, so that’s a 1x. (We’re not sure who is running up against that constraint, but if it’s you, you’ll want an external network adapter.) Second, the new Broadcom SoC finally supports the ARM cryptography extensions, which make it 45x faster at AES, for instance. With TLS almost everywhere, this keeps crypto performance from becoming the bottleneck. Nice.
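
To see those crypto extensions at work, you can time raw AES throughput yourself. Here’s a minimal sketch using the Python cryptography package, which calls into OpenSSL and thus picks up the hardware instructions when they’re available; run it on a Pi 4 and a Pi 5 and compare:

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_throughput_mb_s(total_mb: int = 256) -> float:
    """Encrypt total_mb megabytes with AES-256-CTR and return MB/s."""
    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    chunk = os.urandom(1024 * 1024)  # 1 MB of random plaintext
    start = time.perf_counter()
    for _ in range(total_mb):
        encryptor.update(chunk)
    return total_mb / (time.perf_counter() - start)

print(f"AES-256-CTR: {aes_throughput_mb_s():.0f} MB/s")
```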

All in all, most everything performance-related has been doubled or halved as appropriate, and, completely in line with the only formal benchmarks we’ve seen so far, it feels about twice as fast all around in our informal tests. Compared with a Pi 400 that I use frequently in the basement workshop, the Pi 5 is a lot snappier.

The Robot That Lends The Deaf-Blind Community A Hand

The loss of one’s sense of hearing or vision is likely to be devastating in the way that it impacts daily life. Fortunately, many workarounds exist using one’s remaining senses — such as sign language — but what if not only your hearing is gone, but you are also blind? Here, too, a workaround exists in the form of tactile signing, which is akin to visual sign language, except that it uses the sense of touch. This generally requires someone who knows tactile sign language to translate from spoken or written language into tactile signs. Yet what if you’re deaf-blind and without human assistance? This is where a new robotic system could conceivably fill in.

The Tatum T1 in use, with a more human-like skin covering the robot. (Credit: Tatum Robotics)

Developed by Tatum Robotics, the Tatum T1 is a robotic hand and associated software intended to provide this translation function: it takes in natural language information — whether spoken, written, or in some digital format — and uses a number of translation steps to produce tactile sign language as output, whether in the ASL format, the BANZSL alphabet, or another. These tactile signs are then expressed using the robotic hand, and a connected arm as needed, ideally using ASL gloss to convey as much information as quickly as possible, not unlike with visual ASL.
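
To make the idea concrete, here’s a toy sketch of what one small stage of such a pipeline might look like: stepping through text one fingerspelled letter at a time and commanding a hand pose for each. Everything here (the pose table, the joint layout, the names) is invented for illustration and has nothing to do with Tatum’s actual software.

```python
from time import sleep

# Hypothetical joint angles in degrees: (thumb, index, middle, ring, pinky).
# These poses are made up for illustration, not real fingerspelling data.
FINGERSPELLING_POSES = {
    "a": (40, 170, 170, 170, 170),  # closed fist, thumb alongside
    "b": (150, 10, 10, 10, 10),     # flat hand, thumb tucked
    "c": (60, 60, 60, 60, 60),      # curved "C" shape
}

def sign(text: str, seconds_per_letter: float = 0.4) -> None:
    """Step through a word one fingerspelled letter at a time."""
    for letter in text.lower():
        pose = FINGERSPELLING_POSES.get(letter)
        if pose is None:
            continue  # a real system would cover the whole alphabet
        print(f"{letter}: set servos to {pose}")  # stand-in for hardware I/O
        sleep(seconds_per_letter)

sign("cab")
```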

All of this also answers the question of why one wouldn’t just use a simple braille cell on a hand: signing speed is essential to keep up with real-time communication, unlike when, say, reading a book or email. A robotic companion like this could provide deaf-blind individuals with a critical bridge to the world around them. Currently, the Tatum T1 is still in the testing phase, but hopefully before long it may be another tool for the tens of thousands of deaf-blind people in the US today.

NASA’s Parker Probe Gets Front Row Seat To CME

A little over a year ago, and about 150 million kilometers (93 million miles) from where you’re currently reading this, NASA’s Parker Solar Probe quietly made history by safely flying through one of the most powerful coronal mass ejections (CMEs) ever recorded. Now that researchers have had time to review the data, amateur space nerds like ourselves are finally getting details about the probe’s fiery flight.

Launched in August 2018, the Parker Solar Probe was built to get up close and personal with our local star. Just two months after liftoff, it had already beaten the record for closest approach to the Sun by a spacecraft. The probe, with its distinctive solar shield, has come within 8.5 million kilometers (5.3 million miles) of its surface, a record that it’s set to break as its highly elliptical orbit tightens.

The fury of a CME at close range.

As clearly visible in the video below, the Parker probe flew directly into the erupting CME on September 5th, 2022, and didn’t get fully clear of the plasma for a few days. During that time, researchers say it observed something that had previously only been theorized — the interaction between a CME and the swirling dust and debris that fills our solar system.

According to the Johns Hopkins Applied Physics Laboratory (APL), the blast that Parker flew through managed to displace this slurry of cosmic bric-a-brac out to approximately 9.6 million km (6 million miles), though the void it created was nearly instantly refilled. The researchers say that better understanding how a CME propagates through the interplanetary medium could help us better predict and track potentially dangerous space weather.

It’s been a busy year for the Parker Solar Probe. Back in June, we learned that data from the craft was improving our understanding of high-speed solar winds. With the spacecraft set to move closer and closer to the Sun over the next two years, we’re willing to bet this isn’t the last discovery to come from this fascinating mission.

Hackaday Links: September 24, 2023

Modern video games are almost always built on top of a game engine, and the two most popular are definitely Unreal Engine and Unity. Some bean counter at Unity decided they essentially wanted a bigger piece of the pie and rolled out new terms of use that would have game development houses paying per Unity install. This was a horrible blow to small indie game development houses, where the fees could end up eating something like 15% of revenue in an industry that’s already squeezed between the Apple Store and Steam. It caused an absolutely gigantic uproar in the game dev community, and now Unity is walking it back.

We noticed the change first because tons of “migrate from Unity to Godot” tutorials popped up in our YouTube stream. Godot is a free and open-source game engine, and while we’re no game devs, it looks to be at about the level of Blender five years ago – not quite as easy to use or polished as its closed-source equivalents, but just about poised to make the transition to full usability. While we’re sure Unreal Engine is happy enough to see Unity kick some more business their way, we’re crossing our fingers for the open-source underdog.

Amazon’s Kindle Direct Publishing allows independent authors to self-publish. And it’s apparently been awash in prose written by large language models. While it was fun for a while to look through self-published books for the shibboleth phrase “As an AI language model,” Amazon caught on pretty quickly. Of course, that only gets the lowest-hanging fruit. Books like the AI-written guidebook to mushrooms that recommends eating the Death Cap still manage to sneak through, as we mentioned two weeks ago.

Amazon’s solution? Limiting self-published books to three per day. I wrote a book once, and it took me the better part of a year; Amazon is letting through three per day. If that cap is going to meaningfully shrink the problem, then we’ve vastly underestimated the problem.

And it’s good news, bad news from space. The good news is that NASA’s OSIRIS-REx mission to return a sample from the asteroid Bennu successfully landed just a few hours ago. As we write this, they’ve sent a team driving around the Utah desert to pick up the capsule. The effort reminds us of retrieving high-altitude balloon capsules after a flight: you know roughly where it is, but you still have to get out there to fetch it. Only NASA has a helicopter to go out looking for the capsule and a lot more science to do before they can throw it in the back of their car.

On the bad news side, India’s Vikram and Pragyan lunar lander/rover pair wasn’t really expected to make it through the long lunar night, and had already executed all of its planned mission goals before going into deep sleep mode two weeks ago. But you’ve got to try to wake it up anyway, right? Well, the sun came up on Vikram on Friday, and the Indian space agency tweeted a stoic, “Efforts have been made to establish communication with the Vikram lander and Pragyan rover to ascertain their wake-up condition. As of now, no signals have been received from them. Efforts to establish contact will continue.” We’ve still got our fingers crossed, but at this point anything more would just be extra icing on the cake.

Humans And Balloon Hands Help Bots Make Breakfast

Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute’s vision with their innovatively trained breakfast bots.

Going way beyond pick-and-place tasks, TRI has so far taught robots how to do more than 60 different things using a new method for teaching dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they’re calling Large Behavior Models.

Instead of hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects: their weight, and their effect on other objects (like flipping a pancake). Then, a human shows it how to perform a task before the bot is let loose on an AI model. After a number of hours, say overnight, the bot has a new working behavior.
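
For the curious, the sampling loop at the heart of a diffusion policy is surprisingly short: start from pure noise and iteratively denoise it into an action trajectory, conditioned on what the robot currently observes. Below is a bare-bones numpy sketch of that loop. The stand-in noise predictor is, in the real system, a large trained network fed camera images and joint states, so treat this as an illustration of the math rather than TRI’s code.

```python
import numpy as np

# Standard DDPM noise schedule
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(actions: np.ndarray, t: int, observation) -> np.ndarray:
    """Stand-in for the trained, observation-conditioned noise model."""
    return 0.1 * actions  # placeholder that nudges samples toward zero

def sample_actions(observation, horizon: int = 16, action_dim: int = 7,
                   rng=np.random.default_rng(0)) -> np.ndarray:
    """Denoise pure noise into a (horizon x action_dim) action trajectory."""
    x = rng.standard_normal((horizon, action_dim))
    for t in reversed(range(T)):
        eps = predict_noise(x, t, observation)
        # DDPM posterior mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise on every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

trajectory = sample_actions(observation=None)
print(trajectory.shape)  # (16, 7): e.g., 7-DoF arm commands over 16 timesteps
```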

Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.

Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?

This Week In Security: WebP, Cavium, Gitlab, And Asahi Lina

Last week we covered the latest 0-day from NSO Group, BLASTPASS. There are more details about exactly how that works, and a bit of a worrying revelation for Android users. One of the vulnerabilities used was CVE-2023-41064, a buffer overflow in the ImageIO library. The details have not been confirmed, but the timing suggests that this is the same bug as CVE-2023-4863, a WebP 0-day flaw in Chrome that is known to be exploited in the wild.

The problem seems to be an out-of-bounds write in the BuildHuffmanTable() function of libwebp. And to understand that, we have to understand what libwebp does, and what a Huffman table has to do with it. The first is easy. WebP is Google’s pet image format, potentially replacing JPEG, PNG, and GIF. It supports lossy and lossless compression, and the compression format for lossless images uses Huffman coding among other techniques. And hence, we have a Huffman table, a building block of the image compression and decompression.
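
If Huffman coding is fuzzy, the core idea fits in a dozen lines: repeatedly merge the two least-frequent symbols into a subtree until a single tree remains, and each symbol’s code is its path from the root, so frequent symbols get the shortest codes. A quick Python illustration:

```python
import heapq

def huffman_codes(freqs: dict) -> dict:
    """Build a Huffman code table from a {symbol: frequency} mapping."""
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, low = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, high = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in low.items()}
        merged.update({s: "1" + c for s, c in high.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_codes({"e": 45, "t": 20, "a": 15, "o": 10, "z": 1}))
# Frequent symbols get short codes, e.g. "e" -> 1 bit, "z" -> 4 bits.
```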

What’s particularly fun about this compression technique is that the image includes not just the Huffman-compressed data, but also a table of statistical data needed for decompression. That table is rather large, so it gets Huffman compressed too. It turns out there can be multiple layers of this compression, which makes the vulnerability particularly challenging to reverse-engineer. The bug triggers when the pre-allocated buffer isn’t big enough to hold one of these decompressed Huffman tables, and the way to make that happen is to build maximum-size tables for the outer layers and then malform the last one. In this configuration, the decoder writes out of bounds before the final consistency check.
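
To make that bug class concrete (in spirit only — this is not libwebp’s actual decoder code), the dangerous pattern is allocating from an estimate and then filling from attacker-controlled data. A toy Python model, where the language throws instead of corrupting memory:

```python
def build_table(estimated_size: int, entries: list) -> list:
    """Toy model of an under-allocated table fill, not libwebp's real logic."""
    table = [None] * estimated_size      # buffer sized from an *estimate*
    for i, entry in enumerate(entries):  # entries come from the input file
        # Python raises IndexError past the end; the equivalent C code
        # silently writes out of bounds instead.
        table[i] = entry
    return table

build_table(4, ["a", "b", "c"])  # fine: the estimate holds
try:
    build_table(4, ["a", "b", "c", "d", "e"])  # more entries than estimated
except IndexError:
    print("estimate too small; C would have written out of bounds here")
```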

An interesting note is that as one of Google’s C libraries, this is an extensively fuzzed codebase. While fuzzing and code coverage are both great, neither is guaranteed to find vulnerabilities, particularly well-hidden ones like this one. And on that note, this vulnerability is present in Android, and the fix is likely going to wait until the October security update. And who knows where else this bug is lurking.

Hello, Halloween Hackfest!

Halloween is possibly the hackiest of holidays. Think about it: when else do you get to add animatronic eyes to everyday objects, or break out the CNC machine to cut into squashes? Labor Day? Nope. Proximity-sensing jump-scare devices for Christmas? We think not. But for Halloween, you can let your imagination run wild!

Jump Scare Tombstone by [Mark]
We’re happy to announce that DigiKey and Arduino have teamed up for this year’s Hackaday Halloween Contest. Bring us your best costume, your scariest spook, your insane home decorations, your wildest pumpkin, or your most kid-pleasing feat!

We’ll be rewarding the top three with a $150 gift certificate courtesy of DigiKey, plus some Arduino Halloween treats if you use a product from the Arduino Pro line to make your hair-raising fantasy happen.

We’ve also got five honorable mention categories to inspire you to further feats of fancy.

  • Costume: Halloween is primarily about getting into outrageous costumes and scoring candy. We don’t want to see the candy.
  • Pumpkin: Pumpkin carving could be as simple as taking a knife to a gourd, but that’s not what we’re after. Show us the most insane carving method, or the pumpkin so loaded with electronics that it makes Akihabara look empty in comparison.
  • Kid-Pleaser: Because a costume that makes a kid smile is what Halloween is really all about. Games, elaborate candy dispensers, or anything else that helps the little ones have a good time is fair game here too.
  • Hallowed Home: Do people come to your neighborhood just to see your haunted house? Do you spend more on light effects than on licorice? Then show us your masterpiece!
  • Spooky: If your Halloween build is simply scary, it belongs here.

Head on over to Hackaday.io for the full details. And get working on your haunts, costumes, and Rube Goldberg treat dispensers today.