The Magic Of A Diode Sampler To Increase Oscilloscope Bandwidth

Making an oscilloscope is relatively easy, while making a very fast oscilloscope is hard. There’s a trick that converts a mundane instrument into a very fast one; it’s been around since the 1950s, and [CuriousMarc] has a video explaining it with an instrument from the 1960s. The diode sampler is the electronic equivalent of a stroboscope, capturing parts of multiple cycles of a waveform to build up a much-slowed-down representation of it on the screen. How it works is both extremely simple and exceptionally clever, with some genius-level high-speed tricks used to push it to the limit. We’ve put the video below the break.

[Marc] has a Keysight 100 MHz ’scope, and the sampler allows him to use it to show 4 GHz signals. Inside the instrument is a pair of sample-and-hold circuits using fast diodes as RF switches, triggered by very short pulses with extremely fast rise times. Clever tricks abound, such as using the diode pair to cancel out pulse leakage finding its way back to the source. To complete this black magic, an RF-tuned stub helps filter the pulses and remove the slower components.
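
To get a feel for the stroboscope principle, here’s a minimal numpy sketch of equivalent-time sampling: a repetitive 4 GHz sine is strobed at only 10 MHz, with each strobe slipping a few picoseconds further into the cycle, so the slow sample stream traces out a stretched-in-time copy of the fast waveform. The numbers are purely illustrative and not the parameters of [Marc]’s instrument.

```python
# Toy model of equivalent-time (stroboscopic) sampling, as used in diode samplers.
# Illustrative numbers only -- not the actual parameters of [Marc]'s instrument.
import numpy as np

f_sig = 4e9                  # repetitive input: a 4 GHz sine (period 250 ps)
T_sig = 1.0 / f_sig
f_samp = 10e6                # slow strobe; 1/f_samp is an exact multiple of T_sig here
dt = 5e-12                   # each strobe slips 5 ps further into the cycle

n = np.arange(400)
# Strobe n fires a whole number of signal periods later, plus n*dt of extra delay,
# so consecutive samples walk slowly across the fast waveform.
t_strobe = n / f_samp + n * dt
samples = np.sin(2 * np.pi * f_sig * t_strobe)

# Plot `samples` against the equivalent time axis n*dt to see the reconstructed sine.
t_equiv = n * dt
print(f"{len(n)} strobes at {f_samp/1e6:.0f} MHz reconstruct "
      f"{t_equiv[-1]/T_sig:.1f} cycles of the 4 GHz input.")
```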

It’s slightly amusing to note that the Keysight 100 MHz ’scope is now the “slow” one, while the early sampling ’scopes had their “fast” capabilities in that same range. The same technique is still used today; in fact, you probably have one on your bench.

The sampler he’s showing us is an accessory for another instrument we’ve previously shown you his work with.

Continue reading “The Magic Of A Diode Sampler To Increase Oscilloscope Bandwidth”

This Week In Security: Magic Packets, GPU.zip, And Enter The Sandman

Leading off the news this week is a report on “BlackTech”, an Advanced Persistent Threat (APT) group that appears to be based out of China and has been installing malicious firmware on routers around the world. This firmware has been found primarily on Cisco devices, and Cisco has released a statement clarifying their complete innocence and lack of liability in the matter.

It seems that this attack only works on older Cisco routers, and the pattern is to log in with stolen or guessed credentials, revert the firmware to an even older version, and then replace it with a malicious boot image. But the real fun here is the “magic packets”: a TCP or UDP packet filled with seemingly random data that triggers an action, like enabling that SSH backdoor service. That idea sounds remarkably similar to Fwknop, a project I worked on many years ago. It would be sort of surreal to find some of my code showing up in an APT.
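
For reference, the legitimate cousin of this idea, fwknop-style single packet authorization, boils down to a listener that stays completely silent unless a single UDP datagram carries a valid authenticator. Here’s a minimal sketch of the concept; the port, payload layout, and key handling are made up for illustration and are not fwknop’s actual wire format.

```python
# Conceptual single-packet-authorization listener, in the spirit of fwknop.
# The port, payload layout, and key handling are invented for this sketch;
# they are NOT fwknop's actual wire format.
import hmac, hashlib, socket, time

KEY = b"replace-with-a-real-shared-secret"   # hypothetical pre-shared key
PORT = 62201                                 # hypothetical knock port

def packet_is_valid(data: bytes) -> bool:
    """Expect an 8-byte big-endian timestamp followed by a 32-byte HMAC-SHA256."""
    if len(data) != 40:
        return False
    ts, mac = data[:8], data[8:]
    if abs(time.time() - int.from_bytes(ts, "big")) > 30:   # blunt replay attacks
        return False
    expected = hmac.new(KEY, ts, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
while True:
    data, addr = sock.recvfrom(1024)
    if packet_is_valid(data):
        # A real SPA daemon would poke a firewall rule open for addr[0] here.
        print(f"valid knock from {addr[0]}")
```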

Don’t Look Now, But Is Your GPU Leaking Pixels?

There’s a bit of debate about whose fault this one is, as well as how practical an attack it is, but the idea is certainly interesting. Compression has some interesting system side effects, and it’s possible for a program with access to some system analytics to work out the state of that compression. The first quirk being leveraged here is that GPU-accelerated applications like a web browser use compression to stream the screen view from the CPU to the GPU. But normally, that’s way too many pixels and colors to try to sort out just by watching the CPU and RAM power usage.

And that brings us to the second quirk, that in Chrome, one web page can load a second in an iframe, and then render CSS filters on top of the iframe. This filter ability is then used to convert the page to black and white tiles, and then transform the white tiles into a hard-to-compress pattern, while leaving the black ones alone. With that in place, it’s possible for the outer web page to slowly recreate the graphical view of the iframe, leaking information that is displayed on the page.
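
The kernel of the trick is mundane on its own: the compressed size of a tile, and therefore the bandwidth and energy spent shipping it to the GPU, depends heavily on its contents, and that difference is what gets measured. Here’s a rough stand-in using zlib; the GPU’s lossless framebuffer compression is proprietary, so treat this purely as an analogy.

```python
# Why a "hard-to-compress pattern" leaks: compressed size (and the work of moving
# the data) depends strongly on content. zlib stands in for the GPU's proprietary
# framebuffer compression -- this is an analogy, not the real codec.
import os, zlib

TILE = 64 * 64 * 4   # one 64x64 RGBA tile, in bytes

tiles = {
    "black": bytes(TILE),        # uniform tile: compresses to almost nothing
    "noisy": os.urandom(TILE),   # attacker-crafted pattern: barely compresses
}

for name, tile in tiles.items():
    compressed = zlib.compress(tile, level=6)
    print(f"{name} tile: {len(tile)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(tile):.1%})")
```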

And this explains why this isn’t the most practical of attacks: not only does it require opening a malicious page to host the attack, it also makes some very obvious graphical changes to the screen. Not to mention that it takes at least 30 minutes of data leakage to recreate a username displayed on the Wikipedia page. What it lacks in practicality, though, this approach makes up for in cleverness and creativity. The attack goes by the GPU.zip moniker, and the full PDF is available.

Continue reading “This Week In Security: Magic Packets, GPU.zip, And Enter The Sandman”

A Raspberry Pi 5 Is Better Than Two Pi 4s

What’s as fast as two Raspberry Pi 4s? The brand-new Raspberry Pi 5, that’s what. And for only a $5 upcharge (with an asterisk), it’s going to be the new go-to board from the British House of Fruity Single-Board Computers. But aside from the brute speed, it also has a number of cool features that will make the board easier to use in many projects, and it’s going to be on sale in October. Raspberry Pi sent us one for review, and if you were just about to pick up a Pi 4 for a project that needs the speed, we’d say that you might wait a couple of weeks until the Raspberry Pi 5 goes on sale.

Twice as Nice

On essentially every benchmark, the Raspberry Pi 5 comes in two to three times faster than the Pi 4. This is thanks to the new Broadcom BCM2712 system-on-chip (SOC) that runs four ARM A76s at 2.4 GHz instead of the Pi 4’s ARM A72s at 1.8 GHz. This gives the CPUs a roughly 2x – 3x advantage over the Pi 4. (Although the Pi 4 was eminently overclockable in the CM4 package.)

The DRAM runs at double the clock speed. The video core is more efficient and pushes pixels about twice as fast. The new WiFi controller in the SOC allows about twice as much throughput to the same radio. Even the SD card interface is capable of running twice as fast, speeding up boot times to easily under 10 sec – maybe closer to 8 sec, but who’s counting?

Heck, while we’re on factors of two, there are now two MIPI camera/display lines, so you can do stereo imaging straight off the board, or run a camera and external display simultaneously. And it’s capable of driving two 4k HDMI displays at 60 Hz.

There are only two exceptions to the overall factor-of-two improvements. First, the Gigabit Ethernet remains Gigabit Ethernet, so that’s a one-ex. (We’re not sure who is running up against that constraint, but if it’s you, you’ll want an external network adapter.) But second, the new Broadcom SOC finally supports the ARM cryptography extensions, which make it 45x faster at AES, for instance. With TLS almost everywhere, this keeps crypto performance from becoming the bottleneck. Nice.
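
If you want to check whether your own board exposes those extensions, the CPU feature flags are the place to look: a Pi 5 should list aes (plus sha1, sha2, and pmull), while the Pi 4’s A72 won’t. A quick Linux-only sketch:

```python
# Quick check for the ARMv8 cryptography extensions on a Linux ARM board.
# A Pi 5 should report "aes" (plus sha1/sha2/pmull); the Pi 4's A72 does not.
def cpu_features() -> set:
    feats = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.lower().startswith("features"):
                feats.update(line.split(":", 1)[1].split())
    return feats

if __name__ == "__main__":
    feats = cpu_features()
    for flag in ("aes", "pmull", "sha1", "sha2"):
        print(f"{flag:5s}: {'yes' if flag in feats else 'no'}")
```

Running openssl speed -evp aes-256-cbc on a Pi 4 and a Pi 5 back to back is the more satisfying way to see the gap that the 45x figure describes.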

All in all, most everything performance-related has been doubled, or halved where that’s the improvement. And completely in line with the only formal benchmarks we’ve seen so far, it feels about twice as fast all around in our informal tests. Compared with a Pi 400 that I use frequently in the basement workshop, the Pi 5 is a lot snappier.

Continue reading “A Raspberry Pi 5 Is Better Than Two Pi 4s”

The Robot That Lends The Deaf-Blind Community A Hand

The loss of one’s sense of hearing or vision is likely to be devastating in the way that it impacts daily life. Fortunately, many workarounds exist that use one’s remaining senses — such as sign language — but what if not only your sense of hearing is gone, but you are also blind? Fortunately, here too a workaround exists in the form of tactile signing, which is akin to visual sign language except that it uses one’s sense of touch. This generally requires someone who knows tactile sign language to translate from spoken or written forms into tactile signs. Yet what if you’re deaf-blind and without human assistance? This is where a new robotic system could conceivably fill in.

The Tatum T1 in use, with a more human-like skin covering the robot. (Credit: Tatum Robotics)

Developed by Tatum Robotics, the Tatum T1 is a robotic hand and associated software that’s intended to provide this translation function by taking in natural-language information, whether spoken, written, or in some digital format, and using a number of translation steps to create tactile sign language as output, whether that’s the ASL format, the BANZSL alphabet, or another. These tactile signs are then expressed using the robotic hand, and a connected arm as needed, ideally using ASL gloss to convey as much information as quickly as possible, not unlike with visual ASL.

This also answers the question of why one would not just use a simple braille cell on a hand, as the signing speed is essential to keep up with real-time communications, unlike when, say, reading a book or email. A robotic companion like this could provide deaf-blind individuals with a critical bridge to the world around them. Currently the Tatum T1 is still in the testing phase, but hopefully before long it may be another tool for the tens of thousands of deaf-blind people in the US today.
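
Tatum hasn’t published its software stack as far as we know, but the shape of the pipeline is easy to picture: text in, a gloss or fingerspelling layer in the middle, and a stream of hand poses out to the actuators. Here’s a purely hypothetical sketch of that last hop; the pose table and the send_pose() interface are invented for illustration and have nothing to do with Tatum’s actual hand.

```python
# Purely hypothetical sketch of a text -> fingerspelling -> hand-pose hop.
# The pose table and send_pose() interface are invented for illustration;
# they are not Tatum Robotics' actual API or hand kinematics.
import time

# Joint targets per letter (degrees for a handful of imaginary finger joints).
FINGERSPELL_POSES = {
    "a": [90, 90, 90, 90, 0],    # fist, thumb alongside
    "b": [0, 0, 0, 0, 90],       # flat fingers, thumb tucked
    "c": [45, 45, 45, 45, 45],   # curved "C" shape
    # ... the rest of the manual alphabet would follow
}

def send_pose(pose):
    """Stand-in for whatever actuator bus the real hand uses."""
    print("pose ->", pose)

def fingerspell(text, letters_per_second=2.0):
    """Spell text one letter at a time, skipping anything without a pose."""
    for ch in text.lower():
        pose = FINGERSPELL_POSES.get(ch)
        if pose is not None:
            send_pose(pose)
            time.sleep(1.0 / letters_per_second)

fingerspell("cab")
```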

NASA’s Parker Probe Gets Front Row Seat To CME

A little over a year ago, and about 150 million kilometers (93 million miles) from where you’re currently reading this, NASA’s Parker Solar Probe quietly made history by safely flying through one of the most powerful coronal mass ejections (CMEs) ever recorded. Now that researchers have had time to review the data, amateur space nerds like ourselves are finally getting details about the probe’s fiery flight.

Launched in August 2018, the Parker Solar Probe was built to get up close and personal with our local star. Just two months after liftoff, it had already beaten the record for closest approach to the Sun by a spacecraft. The probe, with its distinctive solar shield, has come within 8.5 million kilometers (5.3 million miles) of the Sun’s surface, a record that it’s set to break as its highly elliptical orbit tightens.

The fury of a CME at close range.

As clearly visible in the video below, the Parker probe flew directly into the erupting CME on September 5th, 2022, and didn’t get fully clear of the plasma for a few days. During that time, researchers say it observed something that had previously only been theorized — the interaction between a CME and the swirling dust and debris that fills our solar system.

According to the Johns Hopkins Applied Physics Laboratory (APL), the blast that Parker flew through managed to displace this slurry of cosmic bric-a-brac out to approximately 9.6 million km (6 million miles), though the void it created was nearly instantly refilled. The researchers say that a better understanding of how a CME propagates through the interplanetary medium could help us predict and track potentially dangerous space weather.

It’s been a busy year for the Parker Solar Probe. Back in June, we learned that data from the craft was improving our understanding of high-speed solar winds. With the spacecraft set to move closer and closer to the Sun over the next two years, we’re willing to bet this isn’t the last discovery to come from this fascinating mission.

Continue reading “NASA’s Parker Probe Gets Front Row Seat To CME”

Hackaday Links: September 24, 2023

Modern video games are almost always built on top of a game engine, and the two most popular are definitely Unreal Engine and Unity. Some bean counter at Unity decided they essentially wanted a bigger piece of the pie and rolled out new terms of use that would have game development houses paying per Unity install. This was a horrible blow to small indie studios, where the fees would end up eating something like 15% of revenue in an industry that’s already squeezed between the App Store and Steam. It caused an absolutely gigantic uproar in the game dev community, and now Unity is walking it back.

We noticed the change first because tons of “migrate from Unity to Godot” tutorials popped up in our YouTube stream. Godot is a free and open-source game engine, and while we’re no game devs, it looks to be at about the level of Blender five years ago – not quite as easy to use or polished as its closed-source equivalents, but just about poised to make the transition to full usability. While we’re sure Unreal Engine is happy enough to see Unity kick some more business their way, we’re crossing our fingers for the open-source underdog.

Amazon’s Kindle Direct Publishing allows independent authors to self-publish. And it’s apparently been awash in prose written by large language models. While it was fun for a while to look through self-published books for the shibboleth phrase “As an AI language model,” Amazon caught on pretty quickly. Of course, that only gets the lowest-hanging fruit. Books like the AI-written guidebook to mushrooms that recommends eating the Death Cap still manage to sneak through, as we mentioned two weeks ago.

Amazon’s solution? Limiting self-published books to three per day. I wrote a book once, and it took me the better part of a year, and Amazon is letting through three per day. If that limit is going to make any meaningful dent, then we’ve vastly underestimated the size of the problem.

And it’s good news, bad news from space. The good news is that NASA’s OSIRIS-REx mission to return a sample from the asteroid Bennu successfully landed just a few hours ago. As we write this, they’ve sent a team driving around the Utah desert to pick up the capsule. The effort reminds us of retrieving high-altitude balloon capsules after a flight: you know roughly where it is, but you still have to get out there to fetch it.  Only NASA has a helicopter to go out looking for the capsule and a lot more science to do before they can throw it in the back of their car.

On the bad news side, India’s Vikram and Pragyan lunar lander/rover pair wasn’t really expected to make it through the long lunar night and had successfully executed all of its planned mission goals before going into deep sleep mode two weeks ago. But you’ve got to try to wake it up anyway, right? Well, the sun came up on Vikram on Friday, and the Indian space agency tweeted a stoic, “Efforts have been made to establish communication with the Vikram lander and Pragyan rover to ascertain their wake-up condition. As of now, no signals have been received from them. Efforts to establish contact will continue.” We’ve still got our fingers crossed, but at this point it would just be extra icing on the cake.

Humans And Balloon Hands Help Bots Make Breakfast

Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute’s vision with their innovatively trained breakfast bots.

Going way beyond pick-and-place tasks, TRI has so far taught robots how to do more than 60 different things, using a new method to teach dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they’re calling Large Behavior Models.

Instead of hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects, their weight, and their effect on other objects (like flipping a pancake). Then, a human shows it how to perform a task before the bot is let loose on an AI model. After a number of hours, say overnight, the bot has a new working behavior.
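
The Diffusion Policy piece works much like image diffusion models, just over robot actions instead of pixels: start from a noisy action trajectory and iteratively denoise it, conditioned on what the robot currently sees and feels. Here’s a toy sketch of that inference loop; the noise-prediction “network” is a made-up stand-in, not TRI’s Large Behavior Model.

```python
# Toy sketch of diffusion-policy-style inference: iteratively denoise a random
# action trajectory, conditioned on an observation. The "network" below is a
# made-up stand-in, not TRI's Large Behavior Model.
import numpy as np

HORIZON, ACTION_DIM, STEPS = 16, 7, 50   # 16 future actions, 7-DoF arm, 50 denoise steps

def fake_noise_predictor(actions, obs, step):
    """Stand-in for the learned network: treats distance from a goal pose as 'noise'."""
    target = np.tile(obs["goal_pose"], (HORIZON, 1))
    return actions - target

def sample_action_trajectory(obs):
    actions = np.random.randn(HORIZON, ACTION_DIM)   # start from pure noise
    for step in reversed(range(STEPS)):
        eps = fake_noise_predictor(actions, obs, step)
        actions = actions - (1.0 / STEPS) * eps      # crude denoising update
    return actions

obs = {"goal_pose": np.zeros(ACTION_DIM)}            # dummy observation
plan = sample_action_trajectory(obs)
print("first planned action:", np.round(plan[0], 3))
```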

Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.

Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?

Continue reading “Humans And Balloon Hands Help Bots Make Breakfast”