NASA’s Parker Probe Gets Front Row Seat To CME

A little over a year ago, and about 150 million kilometers (93 million miles) from where you’re currently reading this, NASA’s Parker Solar Probe quietly made history by safely flying through one of the most powerful coronal mass ejections (CMEs) ever recorded. Now that researchers have had time to review the data, amateur space nerds like ourselves are finally getting details about the probe’s fiery flight.

Launched in August 2018, the Parker Solar Probe was built to get up close and personal with our local star. Just two months after liftoff, it had already beaten the record for closest approach to the Sun by a spacecraft. The probe, with its distinctive solar shield, has come within 8.5 million kilometers (5.3 million miles) of its surface, a record that it’s set to break as its highly elliptical orbit tightens.

The fury of a CME at close range.

As clearly visible in the video below, the Parker probe flew directly into the erupting CME on September 5th, 2022, and didn’t get fully clear of the plasma for a few days. During that time, researchers say it observed something that had previously only been theorized — the interaction between a CME and the swirling dust and debris that fills our solar system.

According to the Johns Hopkins Applied Physics Laboratory (APL), the blast that Parker flew through managed to displace this slurry of cosmic bric-a-brac out to approximately 9.6 million km (6 million miles), though the void it created was nearly instantly refilled. The researchers say that better understanding how a CME propagates through the interplanetary medium could help us better predict and track potentially dangerous space weather.

It’s been a busy year for the Parker Solar Probe. Back in June, we learned that data from the craft was improving our understanding of high-speed solar winds. With the spacecraft set to move closer and closer to the Sun over the next two years, we’re willing to bet this isn’t the last discovery to come from this fascinating mission.

Continue reading “NASA’s Parker Probe Gets Front Row Seat To CME”

Hackaday Links: September 24, 2023

Modern video games are almost always written on the backs of a game engine platform, and the two most popular are definitely Unreal Engine and Unity. Some bean counter at Unity decided they essentially wanted a bigger piece of the pie and rolled out new terms of use that would have game development houses paying per Unity install. This was a horrible blow to small indie game development houses, where the fees would end up eating up something like 15% of revenue in an industry that’s already squeezed between the Apple Store and Steam. It caused an absolutely gigantic uproar in the game dev community, and now Unity is walking it back.

We noticed the change first because tons of “migrate from Unity to Godot” tutorials popped up in our YouTube stream. Godot is a free and open-source game engine, and while we’re no game devs, it looks to be at about the level of Blender five years ago – not quite as easy to use or polished as its closed-source equivalents, but just about poised to make the transition to full usability. While we’re sure Unreal Engine is happy enough to see Unity kick some more business their way, we’re crossing our fingers for the open-source underdog.

Amazon’s Kindle Direct Publishing allows independent authors to self-publish. And it’s apparently been awash in prose written by large language models. While it was fun for a while to look through self-published books for the shibboleth phrase “As an AI language model,” Amazon caught on pretty quickly. Of course, that only gets the lowest-hanging fruit. Books like the AI-written guidebook to mushrooms that recommends eating the Death Cap still manage to sneak through, as we mentioned two weeks ago.

Amazon’s solution? Limiting self-published books to three per day. I wrote a book once, and it took me the better part of a year, yet Amazon is letting through three per day. If this limit is going to make a meaningful dent in the problem, then we’ve vastly underestimated its scale.

And it’s good news, bad news from space. The good news is that NASA’s OSIRIS-REx mission to return a sample from the asteroid Bennu successfully landed just a few hours ago. As we write this, they’ve sent a team driving around the Utah desert to pick up the capsule. The effort reminds us of retrieving high-altitude balloon capsules after a flight: you know roughly where it is, but you still have to get out there to fetch it. Only NASA has a helicopter to go out looking for the capsule and a lot more science to do before they can throw it in the back of their car.

On the bad news side, India’s Vikram and Pragyan lunar lander/rover pair wasn’t really expected to make it through the long lunar night and had successfully executed all of its planned mission goals before going into deep sleep mode two weeks ago. But you’ve got to try to wake it up anyway, right? Well, the sun came up on Vikram on Friday, and the Indian space agency tweeted a stoic, “Efforts have been made to establish communication with the Vikram lander and Pragyan rover to ascertain their wake-up condition. As of now, no signals have been received from them. Efforts to establish contact will continue.” We’ve still got our fingers crossed, but at this point it would just be extra icing on the cake.

Humans And Balloon Hands Help Bots Make Breakfast

Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute’s vision with their innovatively-trained breakfast bots.

Going way beyond pick and place tasks, TRI has, so far, taught robots how to do more than 60 different things using a new method to teach dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they’re calling Large Behavior Models.

Instead of hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects, their weight, and their effect on other objects (like flipping a pancake). Then a human demonstrates the task before the bot is handed off to an AI model. After a number of hours, say overnight, the bot has a new working behavior.

Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.

Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?

Continue reading “Humans And Balloon Hands Help Bots Make Breakfast”

This Week In Security: WebP, Cavium, Gitlab, And Asahi Lina

Last week we covered the latest 0-day from NSO group, BLASTPASS. There are more details about exactly how that works, and a bit of a worrying revelation for Android users. One of the vulnerabilities used was CVE-2023-41064, a buffer overflow in the ImageIO library. The details have not been confirmed, but the timing suggests that this is the same bug as CVE-2023-4863, a WebP 0-day flaw in Chrome that is known to be exploited in the wild.

The problem seems to be an out-of-bounds write in the BuildHuffmanTable() function of libwebp. To understand that, we have to understand what libwebp does, and what a Huffman table has to do with it. The first is easy: WebP is Google’s pet image format, potentially replacing JPEG, PNG, and GIF. It supports lossy and lossless compression, and the compression format for lossless images uses Huffman coding among other techniques. Hence we have a Huffman table, a building block of the image compression and decompression.
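Huffman coding assigns shorter bit patterns to more frequent symbols, and the table is essentially the mapping needed to undo that. Here’s a minimal sketch of the idea in Python — illustrative only, not libwebp’s canonical-code implementation:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreaker, {symbol: bits-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        # Prepend a branch bit to every symbol in each merged subtree.
        merged = {s: "0" + b for s, b in left.items()}
        merged.update({s: "1" + b for s, b in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    rev = {b: s for s, b in code.items()}  # codes are prefix-free
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)

msg = "abracadabra"
code = huffman_code(msg)
bits = encode(msg, code)
assert decode(bits, code) == msg
# Frequent symbols get codes no longer than rare ones: 'a' occurs 5 times, 'd' once.
assert len(code["a"]) <= len(code["d"])
```

The decoder only needs the symbol-to-bits table, which is why the format ships one alongside the compressed data.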

What’s particularly fun about this compression technique is that the image includes not just the Huffman-compressed data, but also a table of statistical data needed for decompression. That table is rather large, so it gets Huffman compressed too. It turns out there can be multiple layers of this compression format, which makes the vulnerability particularly challenging to reverse-engineer. The vulnerability strikes when the pre-allocated buffer isn’t big enough to hold one of these decompressed Huffman tables, and it turns out that the way to trigger that is to make maximum-size tables for the outer layers, and then malform the last one. In this configuration, the decoder can write out of bounds before the final consistency check.
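The failure mode described here — filling a pre-allocated table and only validating afterwards — is a classic bug class. A purely illustrative sketch (not libwebp’s actual code, which is C and considerably more involved):

```python
TABLE_BUFFER_SIZE = 8  # pre-allocated based on a size heuristic

def build_table(entries, buf):
    """Write table entries first, check consistency last -- the bug class."""
    for i, e in enumerate(entries):
        # In C, this write silently corrupts adjacent memory once
        # i >= len(buf); Python raises IndexError instead.
        buf[i] = e
    # The consistency check arrives only after the damage is done.
    if len(entries) > len(buf):
        raise ValueError("table too large")

buf = [0] * TABLE_BUFFER_SIZE
try:
    build_table(list(range(12)), buf)  # 12 entries into an 8-slot buffer
except IndexError:
    print("out-of-bounds write caught; C would have kept going")
```

The fix for this class of bug is to bound-check (or size the buffer from validated input) before any write happens, not after.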

An interesting note is that, as one of Google’s C libraries, this is an extensively fuzzed codebase. While fuzzing and code coverage are both great, neither is guaranteed to find vulnerabilities, particularly well-hidden ones like this one. And on that note, this vulnerability is present in Android, and the fix is likely going to wait until the October security update. And who knows where else this bug is lurking.

Continue reading “This Week In Security: WebP, Cavium, Gitlab, And Asahi Lina”

Hello, Halloween Hackfest!

Halloween is possibly the hackiest of holidays. Think about it: when else do you get to add animatronic eyes to everyday objects, or break out the CNC machine to cut into squashes? Labor Day? Nope. Proximity-sensing jump-scare devices for Christmas? We think not. But for Halloween, you can let your imagination run wild!

Jump Scare Tombstone by [Mark]
We’re happy to announce that DigiKey and Arduino have teamed up for this year’s Hackaday Halloween Contest. Bring us your best costume, your scariest spook, your insane home decorations, your wildest pumpkin, or your most kid-pleasing feat!

We’ll be rewarding the top three with a $150 gift certificate courtesy of DigiKey, plus some Arduino Halloween treats if you use a product from the Arduino Pro line to make your hair-raising fantasy happen.

We’ve also got five honorable mention categories to inspire you to further feats of fancy.

  • Costume: Halloween is primarily about getting into outrageous costumes and scoring candy. We don’t want to see the candy.
  • Pumpkin: Pumpkin carving could be as simple as taking a knife to a gourd, but that’s not what we’re after. Show us the most insane carving method, or the pumpkin so loaded with electronics that it makes Akihabara look empty in comparison.
  • Kid-Pleaser: Because making a kid smile is what Halloween is really all about. Games, elaborate candy dispensers, or anything else that helps the little ones have a good time is fair game here.
  • Hallowed Home: Do people come to your neighborhood just to see your haunted house? Do you spend more on light effects than on licorice? Then show us your masterpiece!
  • Spooky: If your Halloween build is simply scary, it belongs here.

Head on over to Hackaday.io for the full details. And get working on your haunts, costumes, and Rube Goldberg treat dispensers today.

Multispectral Imaging Shows Erased Evidence Of Ancient Star Catalogue

Ancient Greek astronomer Hipparchus worked to accurately catalog and record the coordinates of celestial objects. But while Hipparchus’ Star Catalogue is known to have existed, the document itself is lost to history. Even so, new evidence has come to light thanks to patient work and multispectral imaging.

Hipparchus’ Star Catalogue is the earliest known attempt to record the positions of celestial bodies (predating Claudius Ptolemy’s work in the second century, which scholars believe was probably substantially based on Hipparchus) but direct evidence of the document is slim. Continue reading “Multispectral Imaging Shows Erased Evidence Of Ancient Star Catalogue”

Helping Robots Learn By Letting Them Fail

The [MIT Technology Review] has just released its annual list of the top innovators under the age of 35, and there are some interesting people on this list of the annoyingly accomplished at a young age. Like [Lerrel Pinto], an associate professor of computer science at New York University. His work focuses on teaching robots how to do things in the home by letting them fail.

Continue reading “Helping Robots Learn By Letting Them Fail”