Why Walking Tanks Never Became A Thing

The walking tank concept has always captured imaginations. Whether it’s the AT-AT walkers of Star Wars or the Dreadnoughts from Warhammer 40,000, fiction often portrays them as mighty and capable foes on the battlefield. These behemoths ideally combine the firepower and armor of traditional tanks with the versatility of a legged walking frame.

Despite their futuristic allure, walking tanks never found a practical military application. Let’s take a look at why tracks still rule, and why walking combat machines are going to remain firmly in the realm of fiction for the foreseeable future.


How Warehouse Robots Actually Work, As Explained By Amazon

Amazon has been using robots to manage and automate their warehouses for years. Here’s a short feature on their current robot, Hercules. This is absolutely Amazon tooting their own horn, but if you have been curious about what exactly such robots do and how they help a busy warehouse run better, it’s a good summary with some technical details.

Amazon claims to have over 750,000 robots across their network.

The main idea is that goods are stored on four-sided shelves called pods. Hercules can scoot underneath a pod to lift and move it, a little like a robotic forklift, except much smaller and more nimble. Interestingly, the system is designed to avoid rotating pods whenever possible. To change direction, Hercules sets the pod down, turns in place beneath it, then picks the pod back up.

The overall system is centralized, but Hercules itself navigates autonomously thanks to a depth-sensing camera and a grid of navigation markers present on the floor throughout the facility. Hercules can also wirelessly sense and communicate with nearby human-worn vests and other robots outside its line of sight.
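
To make that concrete, here’s a minimal sketch in JavaScript of how such a robot might follow a route of floor markers while keeping its pod from rotating. The PodBot class and its drive interface (lowerPod, rotateTo, raisePod, driveToMarker) are invented for illustration; Amazon hasn’t published Hercules’ actual software.

// A hypothetical pod robot: the drive/lift API here is invented
// for illustration, not taken from Amazon's actual software.
class PodBot {
  constructor(drive) {
    this.drive = drive;     // low-level motors, lift, and camera
    this.x = 0;
    this.y = 0;
    this.heading = "N";
    this.carryingPod = false;
  }

  headingToward(cell) {
    if (cell.x > this.x) return "E";
    if (cell.x < this.x) return "W";
    return cell.y > this.y ? "N" : "S";
  }

  // Change direction without rotating the pod: set it down, turn
  // alone underneath it, then lift it again.
  turnTo(newHeading) {
    if (this.heading === newHeading) return;
    if (this.carryingPod) this.drive.lowerPod();
    this.drive.rotateTo(newHeading);
    if (this.carryingPod) this.drive.raisePod();
    this.heading = newHeading;
  }

  // The centralized planner hands out a route as a list of grid
  // cells; the robot handles each hop itself, homing in on the
  // floor fiducial with its depth camera.
  followRoute(cells) {
    for (const cell of cells) {
      this.turnTo(this.headingToward(cell));
      this.drive.driveToMarker(cell);
      this.x = cell.x;
      this.y = cell.y;
    }
  }
}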

Essentially, instead of human workers walking up and down aisles of shelves to pick products, the product shelves come to the humans. This means the organization and layout of the shelves can be dynamic, higher density, and optimized for efficient robotic access. Shelves do not need to sit in fixed rows or aisles, conform to a human-readable categorical layout, or leave walking space between them.

Sometimes robots really are the right tool for the job, and our favorite product-retrieval robot remains [Cliff Stoll]’s crawlspace warehouse bot, a diminutive device made to access boxes of product — in [Cliff]’s case, Klein bottles — stored in an otherwise quite claustrophobic crawlspace.

Passive Desalination System Avoids Salt-Clogging

Saltwater is plentiful, but no good for drinking. Desalination is the obvious solution, but a big problem isn’t taking the salt out; it’s what to do with all the leftover salt. Excess salt accumulates, crystallizes, and clogs the system. Dealing with this means maintenance, which means higher costs, which ultimately limits scalability.

The good news is that engineers at MIT and in China have succeeded in creating a desalination system that avoids this problem by intrinsically flushing accumulated salt as it is created, keeping the system clean. And what’s more, the whole thing is both scalable and entirely passive. The required energy all comes from gravity and the sun’s heat.

To do this, the device is constructed in such a way that it mimics the thermohaline circulation of the ocean on a small scale. This is the process in which temperature and salinity differences create density gradients that drive a constant circulation and exchange. In the team’s system, this circulation ultimately flushes salt out of the system before it has a chance to collect.
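
The driving physics fits in a few lines. Using a standard linearized equation of state for seawater, you can see that the extra salinity picked up near an evaporation surface outweighs the buoyancy from solar heating, so the brine sinks and carries its salt away. The coefficients below are textbook approximations and the temperatures and salinities are made-up illustrative values, not the MIT team’s numbers.

// Linearized seawater equation of state; coefficients approximate.
const RHO0 = 1025;       // reference density, kg/m^3
const T0 = 10, S0 = 35;  // reference temperature (C), salinity (g/kg)
const ALPHA = 2.0e-4;    // thermal expansion coefficient, 1/K
const BETA = 7.6e-4;     // haline contraction coefficient, per g/kg

const density = (T, S) =>
  RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0));

const briny = density(35, 45);  // sun-warmed, salt-enriched layer
const below = density(20, 35);  // cooler, ordinary seawater beneath

// Positive result: the briny layer is denser despite being warmer,
// so it sinks out of the evaporation zone before salt crystallizes.
console.log((briny - below).toFixed(1) + " kg/m^3 denser");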

The entirely passive nature of the device, its scalability, and the fact that it could desalinate water for years without accumulating salt all mean an extremely low cost to operate. The operating principle makes sense, but of course it is careful engineering that shows it is actually possible. We have seen projects leveraging the passive heating and circulation of water before, but this is a whole new angle on letting the sun do the work.

Hypersonic Speech Jammer Works At A Distance

Speech jammers were a meme a little while back. Feed delayed audio of a person’s own voice back to their ears, and it becomes near-impossible for most people to speak, as our speech system runs on a continual feedback loop. [Benn Jordan] decided to rework that concept, replacing the headphones with a directed sound projector.

The key to the project is the use of hypersonic sound arrays. These use an ultrasonic carrier, high-frequency sound beyond the human range of hearing, to carry a lower-frequency audio signal; the nonlinearity of the air itself demodulates the carrier back into audible sound along the beam. The result is an audible signal that is highly directional. It’s like a “sound laser” that can be pointed directly at a person so that they hear it, and which becomes inaudible when pointed even slightly away.
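
As a rough sketch of the modulation idea, simple amplitude modulation onto a 40 kHz carrier looks like the snippet below; the air then demodulates the envelope back into sound. The carrier frequency and modulation depth are typical values for parametric speakers, not [Benn]’s actual specs, and real arrays pre-process the audio with fancier schemes to cut distortion.

// Naive amplitude modulation onto an ultrasonic carrier. Real
// parametric speakers pre-process the audio to reduce distortion.
const SAMPLE_RATE = 192000; // must comfortably exceed 2 x 40 kHz
const CARRIER_HZ = 40000;   // well above the range of human hearing

function modulate(audio, depth = 0.8) {
  // audio: Float32Array of samples in [-1, 1] at SAMPLE_RATE
  const out = new Float32Array(audio.length);
  for (let n = 0; n < audio.length; n++) {
    const carrier = Math.sin(2 * Math.PI * CARRIER_HZ * n / SAMPLE_RATE);
    out[n] = (1 + depth * audio[n]) * carrier;
  }
  return out;
}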

The arrays allow the delayed voice signal to be fired at a person’s head with a relatively narrow spatial spread. When an individual speaks into a microphone hooked up to the device, delayed audio is sent through the hypersonic array back to the speaker’s ears, garbling their speech as their brain gets confused by the feedback.
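
You can try the underlying effect with nothing more exotic than headphones and the Web Audio API. Here’s a minimal sketch of the classic headphone jammer, with the hypersonic array swapped out for an ordinary audio path; the 200 ms figure is a commonly cited delay, not [Benn]’s setting.

// Classic headphone speech jammer: your own voice, about 200 ms late.
// Needs a secure (https) page; wear headphones to avoid feedback howl.
async function startJammer() {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: false, noiseSuppression: false },
  });
  const mic = ctx.createMediaStreamSource(stream);
  const delay = ctx.createDelay(1.0); // maximum delay, in seconds
  delay.delayTime.value = 0.2;        // ~200 ms throws most speakers off
  mic.connect(delay).connect(ctx.destination);
}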

[Benn] demonstrated the device in public by offering random individuals $100 to read a paragraph out of a book. The speech jammer worked a treat, and [Benn] was able to keep his money… until one amazingly immune individual breezed through the test. Check out our prior coverage of speech jamming technology. Video after the break.



Google’s Augmented Reality Microscope Might Help Diagnose Cancer

Despite recent advances in diagnosing cancer, many cases are still diagnosed the traditional way: taking a biopsy and analyzing thin slices of the tissue under a microscope. Properly analyzing these tissue sample slides requires highly experienced and skilled pathologists, and remains subject to some level of bias. In 2018, Google announced a convolutional neural network (CNN) based system it calls the Augmented Reality Microscope (ARM), which uses deep learning and augmented reality (AR) to assist a pathologist with the diagnosis of a tissue sample. A 2022 study in the Journal of Pathology Informatics by David Jin and colleagues (CNBC article) details how well this system performs in ongoing tests.

For this particular study, the LYmph Node Assistant (LYNA) model was investigated, which, as the name suggests, targets detecting cancer metastases in lymph node biopsies. The basic ARM setup is described on the Google Health GitHub page, which contains all of the required software except the models themselves, which are available on request. The ARM system fits around an existing medical-grade microscope: a camera feeds image data to the CNN model, and any relevant outputs from the model are overlaid on the image the pathologist is observing (the AR part).
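
In a browser, that camera-to-model-to-overlay loop might look something like the sketch below using TensorFlow.js. The model URL and its per-pixel output format are placeholders, since the real models are only available on request, and this is not Google’s actual code.

// Rough sketch of the camera -> CNN -> overlay loop. The model path
// and output format are placeholders, not Google's actual interface.
import * as tf from "@tensorflow/tfjs";

async function overlayLoop(video, liveCanvas, maskCanvas) {
  const model = await tf.loadGraphModel("lyna/model.json"); // placeholder
  const ctx = liveCanvas.getContext("2d");
  while (true) {
    // Grab a frame from the microscope camera and normalize it.
    const frame = tf.tidy(() =>
      tf.browser.fromPixels(video).toFloat().div(255).expandDims(0));
    // Assume the model outputs a per-pixel tumor-likelihood map.
    const mask = tf.tidy(() => model.predict(frame).squeeze());
    ctx.drawImage(video, 0, 0);                  // the live image...
    await tf.browser.toPixels(mask, maskCanvas); // ...and the heatmap...
    ctx.globalAlpha = 0.4;
    ctx.drawImage(maskCanvas, 0, 0);             // ...composited on top
    ctx.globalAlpha = 1.0;
    tf.dispose([frame, mask]);
    await tf.nextFrame(); // yield to the browser between frames
  }
}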

Although the study authors noted that they saw potential in the technology, as with most CNN-based systems, a lot depends on how well the training data set was annotated. When a region of tissue containing cancerous growth was marked too broadly, the model could draw improper conclusions. This makes a lot of sense when one considers that this system essentially plays ‘cat or bread’, except with cancer.

These gotchas in recognizing legitimate cancer cases are why the study authors see it mostly as a useful assistive tool for pathologists. One of the authors, Dr. Niels Olson, notes that back when he was stationed at the naval base in Guam, as one of only two pathologists on the island he would have liked to have a system like ARM as an easy source of a second opinion.

(Heading image: Dr. Niels Olson uses the Augmented Reality Microscope. Credit: US Department of Defense)


Hackaday Links: October 1, 2023

We’ve devoted a fair amount of virtual ink here to throwing shade at self-driving vehicles, especially lately with all the robo-taxi fiascos that keep cropping up in the cities serving as testbeds. It’s hard not to, especially when an entire fleet of taxis spontaneously congregates at a single point, or when all it takes to create gridlock is a couple of traffic cones. We know that these are essentially beta tests whose whole point is to find and fix points of failure before widespread deployment, and that any failure is likely to be very public and very costly. But there’s someone else in the self-driving vehicle business with way, WAY more to lose if something goes wrong who still seems to be nailing it every day. Of course, we’re talking about NASA and the Perseverance rover, which just completed a record drive across Jezero crater on autopilot. The 759-meter jaunt was completely planned by the onboard AutoNav system, which used the rover’s cameras and sensors to pick its way through a boulder-strewn field. Granted, the trip took six sols to complete, which would probably earn a robo-taxi on Earth some negative reviews, and then there’s the whole thing about NASA having a much bigger pot of money to draw from than any start-up could ever dream of. Still, it’d be nice to see some of the tech on Perseverance filtering down to Earth.


RP2040 picture on left by Phiarc, CC BY-SA 4.0, via Wikimedia

Kaluma Puts JavaScript On The RP2040

With a simple firmware update, Kaluma puts a lightweight JavaScript runtime on the Raspberry Pi Pico (which uses the RP2040 microcontroller), providing handy modules for file systems, graphics, networking, and more. Code for a simple LED blink can then look like:

// index.js
const led = 25;        // the Pico's onboard LED is on GPIO 25
pinMode(led, OUTPUT);  // configure the pin as an output
setInterval(() => {
  digitalToggle(led);  // flip the LED state
}, 1000);              // every 1000 ms

Development can then be done with tools very familiar to JavaScript developers: npm for packages, and the (Node.js-based) Kaluma command-line interface for flashing new code to a USB-connected Pico. Take a look at the GitHub repository for the project, or browse some of the projects made with Kaluma.

Much like with MicroPython, there’s value to be had in putting implementations of high-level languages on microcontrollers. Each new language opens embedded programming to a whole new group of coders. And it’s not just languages making their way to the RP2040; wonderful projects like emulating the ZX Spectrum on an RP2040 happen too.

Thanks to [Shri Hari Ram] for the tip!