Astro Pi Mk II, The New Raspberry Pi Hardware Headed To The Space Station

Back in 2015, European Space Agency (ESA) astronaut Tim Peake brought a pair of specially equipped Raspberry Pi computers, nicknamed Izzy and Ed, onto the International Space Station and invited students back on Earth to develop software for them as part of the Astro Pi Challenge. To date, more than 50,000 young people have had their code run on one of the single-board computers, making them arguably the most popular, and surely the most traveled, Raspberry Pis in the solar system.

While Izzy and Ed are still going strong, the ESA has decided it’s about time these veteran Raspberries finally get the retirement they’re due. Set to make the journey to the ISS in December aboard a SpaceX Cargo Dragon, the new Astro Pi MK II hardware looks quite similar to the original 2015 version at first glance. But a peek inside its 6063-grade aluminium flight case reveals plenty of new and improved gear, including a Raspberry Pi 4 Model B with 8 GB RAM.

The beefier hardware will no doubt be appreciated by students looking to push the envelope. While the majority of Python programs submitted to the Astro Pi Challenge did little more than poll the current reading from the unit’s temperature or humidity sensors and scroll messages for the astronauts on the Astro Pi’s LED matrix, some of the more advanced projects were aimed at performing legitimate space research. From using the onboard camera to image the Earth and make weather predictions to attempting to map the planet’s magnetic field, code submitted by teams of older students will certainly benefit from the improved computational performance and expanded RAM of the newest Pi.
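In the spirit of those simpler submissions, a few lines against the Sense HAT’s Python API is all it takes; the sketch below is ours for illustration, not from any actual entry:

```python
from sense_hat import SenseHat

sense = SenseHat()

# Poll the environmental sensors on the Sense HAT.
temperature = sense.get_temperature()   # degrees Celsius
humidity = sense.get_humidity()         # percent relative humidity

# Scroll a status message across the 8x8 LED matrix for the crew.
sense.show_message(
    "Temp: {:.1f} C  Humidity: {:.1f} %".format(temperature, humidity),
    scroll_speed=0.05,
)
```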

As with the original Astro Pi, the ESA and the Raspberry Pi Foundation have shared plenty of technical details about these space-rated Linux boxes. After all, students are expected to develop and test their code on essentially the same hardware down here on Earth before it gets beamed up to the orbiting computers. So let’s take a quick look at the new hardware inside Astro Pi MK II, and what sort of research it should enable for students in 2022 and beyond.


Mastering Stop Motion Through Machine Learning

Stop motion animation is notoriously difficult to pull off well, in large part because it’s a mind-numbingly slow process. Each frame in the final video is a separate photograph, and for each one of those, the characters and props need to be moved the appropriate amount so that the final result looks smooth. You don’t even want to know how long Ben Wyatt spent working on Requiem for a Tuesday, though to be fair, it might still get done before the next Avatar.

But [Nick Bild] thinks his latest project might be able to improve on the classic technique with a dash of artificial intelligence provided by a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera, and using a hand pose detection model, waits until there’s no human hand in the frame. Once the coast is clear, it takes a shot and then goes back to waiting for the next hands-free opportunity. With the photographs being taken automatically, you’re free to focus on getting your characters moving around in a convincing way.
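To be clear, [Nick] runs his own hand pose model on the Jetson; the sketch below only illustrates the general gating logic, swapping in MediaPipe Hands and OpenCV as stand-ins and capturing one frame each time the animator’s hands leave the scene (camera index and filenames are ours):

```python
import cv2
import mediapipe as mp

# MediaPipe Hands stands in for the hand pose model; any detector that can
# answer "is there a hand in this frame?" would do.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

frame_count = 0
hand_was_present = True  # treat startup as "just adjusted" so the opening frame gets captured

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB, while OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    hand_present = results.multi_hand_landmarks is not None

    # Capture exactly one photo each time the hands leave the frame.
    if hand_was_present and not hand_present:
        cv2.imwrite(f"frame_{frame_count:05d}.png", frame)
        frame_count += 1

    hand_was_present = hand_present

cap.release()
```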

If it’s still not clicking for you, check out the video below. [Nick] first shows the raw unedited video, which primarily consists of him moving three LEGO figures around, and then the final product produced by his system. All the images of him fiddling with the scene have been automatically trimmed, leaving behind a short animated clip of the characters moving on their own.

Now don’t be fooled, it’s still going to take a while. By our count, it took two solid minutes of moving Minifigs around to produce just a few seconds of animation. So while we can say it’s a quicker pace than traditional stop motion production, it certainly isn’t fast.

Machine learning isn’t the only modern technology that can simplify stop motion production. We’ve seen a few examples of using 3D printed objects instead of manually-adjusted figures. It still takes a long time to print, and of course it eats up a ton of filament, but the mechanical precision of the printed scenes makes for a very clean final result.


Nokia LCD Goes Transparent For Hands-Free Reminders

These days everyone’s excited about transparent OLED panels, but where’s the love for the classic Nokia 5110 LCD? As the prolific [Nick Bild] demonstrates in his latest creation, all you’ve got to do is peel the backing off the late-90s-era display, and you’ve got yourself a see-through cyberpunk screen for a couple bucks.

View through the modified LCD.

In this case, [Nick] has attached the modified display to a pair of frames and used an Adafruit QT Py microcontroller to connect it to the ESP32-powered ESP-EYE development board and OV2640 camera module. This lets him detect QR codes within the wearer’s field of vision and run a TensorFlow Lite neural network right on the hardware. Power is provided by a 2000 mAh LiPo battery running through an Adafruit PowerBoost 500.
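Driving the bare panel itself is the easy part, since Adafruit maintains a CircuitPython driver for the 5110’s PCD8544 controller. Something like the following minimal sketch would put text on the now-transparent screen; the pin assignments and message text are placeholders of ours, not [Nick]’s actual wiring or firmware:

```python
import board
import busio
import digitalio
import adafruit_pcd8544

# SPI bus plus the control lines the PCD8544 controller needs.
spi = busio.SPI(board.SCK, MOSI=board.MOSI)
dc = digitalio.DigitalInOut(board.A1)     # data/command select (placeholder pin)
cs = digitalio.DigitalInOut(board.A2)     # chip select (placeholder pin)
reset = digitalio.DigitalInOut(board.A3)  # reset (placeholder pin)

display = adafruit_pcd8544.PCD8544(spi, dc, cs, reset)
display.bias = 4
display.contrast = 60

# Clear the 84x48 framebuffer and draw a reminder.
display.fill(0)
display.text("Wash hands", 0, 0, 1)
display.text("before exam", 0, 10, 1)
display.show()
```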

The project, intended to provide augmented reality reminders for medical professionals, uses the QR codes to look up patient and medication information. Right now the neural network is being used to detect when the wearer has washed their hands, but obviously the training model could be switched out for something different as needed. By combining these information sources, the wearable can do things like warn the physician if a patient is allergic to the medication they’re currently looking at.
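The lookup itself is the kind of glue logic that fits in a few lines. Here’s a hypothetical sketch of the idea, with made-up patient and medication records standing in for whatever data source a real deployment would actually query:

```python
# Hypothetical records; a real system would query an actual patient database.
PATIENTS = {
    "patient:1234": {"name": "J. Doe", "allergies": {"penicillin"}},
}
MEDICATIONS = {
    "med:amoxicillin": {"name": "Amoxicillin", "class": "penicillin"},
}

def check_interaction(patient_qr: str, med_qr: str) -> str:
    """Return the warning or confirmation to draw on the wearable display."""
    patient = PATIENTS.get(patient_qr)
    med = MEDICATIONS.get(med_qr)
    if patient is None or med is None:
        return "Unknown QR code"
    if med["class"] in patient["allergies"]:
        return f"WARNING: {patient['name']} allergic to {med['name']}"
    return f"{med['name']} OK for {patient['name']}"

print(check_interaction("patient:1234", "med:amoxicillin"))
```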

Relevant information and warnings are displayed on the Nokia LCD, which has been placed far enough away from the eye that the user can actually read the text; an important design consideration that [Zach Freedman] demonstrated with his (intentionally) illegible wearable display a few weeks back. That does make the design a bit…ungainly, but at least you don’t have to worry about hand-cutting your optics.

Spaghetti Detective Users Boiled By Security Gaffe

For readers that might not spend their free time watching spools of PLA slowly unwind, The Spaghetti Detective (TSD) is an open source project that aims to use computer vision and machine learning to identify when a 3D print has failed and resulted in a pile of plastic “spaghetti” on the build plate. Once users have installed the OctoPrint plugin, they need to point it to either a self-hosted server that’s running on a relatively powerful machine, or TSD’s paid cloud service that handles all the AI heavy lifting for a monthly fee.

Unfortunately, 73 of those cloud customers ended up getting a bit more than they bargained for when a configuration flub allowed strangers to take control of their printers. In a frank blog post, TSD founder Kenneth Jiang owns up to the August 19th mistake and explains exactly what happened, who was impacted, and how changes to the server-side code should prevent similar issues going forward.

Screenshot from TSD web interface
TSD allows users to remotely manage and monitor their printers.

For the record, it appears no permanent damage was done, and everyone who was potentially impacted by this issue has been notified. There was a fairly narrow window of opportunity for anyone to stumble upon the issue in the first place, meaning any bad actors would have had to be particularly quick on their keyboards to come up with some nefarious plot to sabotage any printers connected to TSD. That said, one user took to Reddit to show off the physical warning their printer spit out; the apparent handiwork of a fellow customer that discovered the glitch on their own.

According to Jiang, the issue stemmed from how TSD associates printers and users. When the server sees multiple connections coming from the same public IP, it’s assumed they’re physically connected to the same local network. This allows the server to link the OctoPrint plugin running on a Raspberry Pi to the user’s phone or computer. But on the night in question, an incorrectly configured load-balancing system stopped passing the source IP addresses to the server. This made TSD believe all of the printers and users who connected during this time period were on the same LAN, allowing anyone to connect with whatever machine they wished.
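TSD’s server is its own codebase, but the pitfall generalizes. Behind a load balancer, an application typically trusts a forwarded header such as X-Forwarded-For for the client’s real address; the sketch below is purely illustrative (not TSD’s actual code) and shows both the matching idea and how it collapses when that header goes missing:

```python
from collections import defaultdict

# Maps a public IP to every printer that recently phoned home from it.
printers_by_ip = defaultdict(list)

def client_ip(request_headers, peer_address):
    # Behind a load balancer, the peer address is the balancer itself, so the
    # application relies on a forwarded header for the real client address.
    forwarded = request_headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()
    # If the balancer stops sending the header, every client "comes from"
    # the balancer's address, and all printers land in one giant bucket.
    return peer_address

def register_printer(printer_id, request_headers, peer_address):
    printers_by_ip[client_ip(request_headers, peer_address)].append(printer_id)

def printers_visible_to(request_headers, peer_address):
    # A user assumed to be on the "same LAN" gets offered every printer
    # sharing their apparent public IP.
    return printers_by_ip[client_ip(request_headers, peer_address)]
```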

Changed TSD code from GitHub
New code pushed to the TSD repository limits how many devices can be associated with a single IP.

The mix-up only lasted about six hours, and so far, only the one user has actually reported their printer being remotely controlled by an outside party. After fixing the load-balancing configuration, the team also pushed an update to the TSD code that puts a cap on how many printers the server will associate with a given IP address. This seems like a reasonable enough precaution, though it’s not immediately obvious how the change will affect users who wish to add multiple printers to their account at the same time, such as in the case of a print farm.
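Continuing the illustrative sketch above, the mitigation amounts to a guard on that bucket: if an improbable number of printers appear to share one address, stop treating the IP as a trustworthy link. The threshold here is arbitrary, not the value TSD chose:

```python
MAX_PRINTERS_PER_IP = 10  # arbitrary illustrative limit

def linkable_printers(ip_address):
    candidates = printers_by_ip.get(ip_address, [])
    if len(candidates) > MAX_PRINTERS_PER_IP:
        # Too many devices behind one address almost certainly means the
        # source IP can't be trusted; refuse to auto-link anything.
        return []
    return candidates
```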

While no doubt an embarrassing misstep for the team at The Spaghetti Detective, we can at least appreciate how swiftly they dealt with the issue and their transparency in bringing the flaw to light. This is also an excellent example of how open source allows the community to independently evaluate the fixes applied by the developer in response to a discovered flaw. Jiang says the team will be launching a full security audit of their own as well, so expect more changes getting pushed to the repository in the near future.

We were impressed with TSD when we first covered it back in 2019, and glad to see the project has flourished since we last checked in. Trust is difficult to gain and easy to lose, but we hope the team’s handling of this issue shows they’re on top of things and willing to do right by their community even if it means getting some egg on their face from time to time.

Intel RealSense D435 Depth Camera

RealSense No Longer Makes Sense For Intel

We love depth-sensing cameras and every neat hack they’ve enabled, but this technological novelty has yet to break through to high-volume commercial success. So it was sad but not surprising when CRN reported that Intel has decided to wind down their RealSense product line.

As of this writing, one of the better confirmations for this report can be found on the RealSense SDK GitHub repository README. The good news is that core depth-sensing RealSense products will continue business as usual for the foreseeable future, balanced by the bad news that some interesting offshoots (facial authentication, motion tracking) will be declared “End of Life” immediately and phased out over the next six months.

This information tells us that while those living out on the bleeding edge will have to scramble, there is no immediate crisis for everyone else, whether they be researchers, hobbyists, or product planners. But it also means there will be no future RealSense cameras, kicking off many “What’s Next?” discussions in various communities, like this thread on ROS (Robot Operating System) Discourse.

Three popular alternatives offer distinctly different tradeoffs. The “Been Around The Block” name is Occipital, with their more expensive Structure Pro sensor. The “Old Name, New Face” option is Microsoft Azure Kinect, the latest non-gaming-focused successor to the gaming peripheral that started it all. And let’s not forget OAK-D, the “New Kid On The Block” that started with a crowdfunding campaign and built a user community by doing things like holding contests. Each of these will appeal to a different niche, and we’ll keep our eyes open going forward. Let’s see if any of them find the success that eluded the original Kinect, Google’s Tango, and now Intel’s RealSense.

[via Engadget]

Automate The Farm With Acorn

Farming has been undergoing quite a revolution in the past few years. Since World War 2, most industrial farming has relied on synthetic fertilizer, large machinery, and huge farms growing single crops. Now a growing number of successful farmers are bucking that trend with small farms growing many crops and using natural fertilizing methods that don’t require as much industrial input. Of course, even with these types of farms, some machinery is still nice to have, so this farmer has been developing an open-source automated farming robot.

The robot is known as Acorn and is the project of [taylor], who farms in California. The platform is powered by an 800-watt solar array feeding a set of supercapacitors for energy storage. It uses mountain bike wheels and tires fitted with electric hub motors, which give it four-wheel drive and four-wheel steering to make it capable even in muddy fields. The farming tools, as well as any computer vision and automation hardware, can be housed under the solar panels. This prototype uses an Nvidia Jetson module to handle the heavy lifting of machine learning and automation, with a Raspberry Pi to handle the basic operation of the robot, and it can navigate itself around a farm using highly precise GPS units.
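Acorn’s actual navigation stack lives in its own repository, but the core of GPS waypoint following is simple to sketch: compute the distance and bearing from the current fix to the next waypoint, then steer toward it. The toy version below uses a flat-earth approximation with made-up coordinates and is not Acorn’s code:

```python
import math

def distance_and_bearing(lat, lon, target_lat, target_lon):
    """Approximate distance (m) and compass bearing (deg) to a nearby waypoint."""
    # An equirectangular approximation is plenty for waypoints a few meters apart.
    meters_per_deg = 111_320.0
    dx = (target_lon - lon) * meters_per_deg * math.cos(math.radians(lat))  # east
    dy = (target_lat - lat) * meters_per_deg                                # north
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return distance, bearing

def steering_error(heading, bearing):
    """Signed heading error in degrees, wrapped to [-180, 180)."""
    return (bearing - heading + 180.0) % 360.0 - 180.0

# Example: robot heading due north, next waypoint slightly to the northeast.
dist, brg = distance_and_bearing(37.0001, -122.0001, 37.0002, -122.0000)
print(dist, brg, steering_error(0.0, brg))
```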

While the robot’s development is currently ongoing, [taylor] hopes to develop a community that will build their own versions and help develop the platform. Farming improvements like this are certainly needed as more and more farmers shift from unsustainable monocultures to more ecologically friendly methods involving multiple simultaneous crops, carbon sequestration, and off-season cover crops. It’s certainly a long row to hoe, but plenty of people are already plowing ahead.


Machine-Vision Archer Makes You The Target, If You Dare

We’ll state right up front that it’s a really, really bad idea to let a robotic archer shoot an apple off of your head. You absolutely should not repeat what you’ll see in the video below, and if you do, the results are all on you.

That said, [Kamal Carter]’s build is pretty darn cool. He wisely chose to use just about the weakest bows you can get: the kind with strings that are basically big, floppy elastic bands, firing arrows tipped with suction cups, so harmless that they’re intended for children to play with (and you just know the kids are going to shoot each other the minute you turn your back, no matter what you told them). Target acquisition is the job of an Intel RealSense depth camera, which finds targets and calculates the distance to them. An aluminum extrusion frame holds the bow and adjusts its elevation, while a long leadscrew and a servo draw and release the string.

With the running gear sorted, [Kamal] turned to high school physics for calculations such as the spring constant of the bow to determine the arrow’s initial velocity, and the ballistics formula to determine the angle needed to hit the target. And hit it he does — mostly. We’re actually surprised how many on-target shots he got. And yes, he did eventually get it to pull a [William Tell] apple trick — although we couldn’t help but notice from his, ahem, hand posture that he wasn’t exactly filled with self-confidence about where the arrow would end up.
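For the curious, a back-of-the-envelope version of that physics fits in a few lines: the energy stored in the drawn elastic sets the launch speed, and the flat-ground range equation gives the launch angle. The numbers below are made up for illustration rather than [Kamal]’s measurements, and drag is ignored entirely:

```python
import math

G = 9.81           # m/s^2
K = 120.0          # spring constant of the elastic "string", N/m (made-up)
DRAW = 0.30        # draw length, m (made-up)
ARROW_MASS = 0.02  # kg (made-up)

# Energy balance: 0.5*k*x^2 = 0.5*m*v^2  ->  v = x * sqrt(k/m)
v0 = DRAW * math.sqrt(K / ARROW_MASS)

def launch_angle(target_range):
    """Low launch angle (degrees) to hit a target at the same height."""
    # Flat-ground range equation: R = v^2 * sin(2*theta) / g
    s = G * target_range / v0**2
    if s > 1.0:
        raise ValueError("Target out of range for this bow")
    return math.degrees(0.5 * math.asin(s))

print(f"v0 = {v0:.1f} m/s, angle for a 3 m shot = {launch_angle(3.0):.1f} deg")
```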

[Kamal] says he drew inspiration both from [Mark Rober]’s dart-catching dartboard and [Shane Wighton]’s self-dunking basketball hoop for this build. We’d say his results put in him good standing with the skill-optional sports community.
