Autonomous Wheelchair Lets Jetson Do The Driving

Compared to their manual counterparts, electric wheelchairs are far less demanding to operate, as the user doesn’t need the upper body strength normally required to turn the wheels. But even a motorized wheelchair needs some kind of input from the user to control it, which may still pose a considerable challenge depending on the individual’s specific abilities.

Hoping to improve on the situation, [Kabilan KB] has developed a self-driving electric wheelchair that can navigate around obstacles by feeding the output of an Intel RealSense Depth Camera and LiDAR module into a Jetson Nano Developer Kit running OpenCV. To control the actual motors, the Jetson is connected to an Arduino which in turn is wired into a common L298N motor driver board.
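The post doesn’t dig into the code, but the basic obstacle avoidance loop is easy to picture: grab a depth frame, check whether anything is too close on the left or right, and tell the Arduino which way to steer. Here’s a minimal sketch of that idea, assuming the pyrealsense2 bindings for the camera and pyserial for the Arduino link; the single-character drive protocol is our invention, not [Kabilan]’s.

# Minimal sketch of a depth-based obstacle avoidance loop (not [Kabilan]'s published code).
# Assumes the pyrealsense2 bindings for the depth camera and pyserial for the Arduino link;
# the single-character drive commands ('F', 'L', 'R', 'S') are a made-up protocol.
import pyrealsense2 as rs
import serial

STOP_DISTANCE_M = 0.8  # anything closer than this counts as an obstacle

def min_distance(depth_frame, xs, ys):
    """Smallest valid depth reading (metres) over a sparse grid of sample pixels."""
    dists = [depth_frame.get_distance(x, y) for y in ys for x in xs]
    dists = [d for d in dists if d > 0]
    return min(dists) if dists else float("inf")

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Sample a horizontal band across the middle of the frame, split into left and right halves
        ys = range(200, 280, 20)
        left_clear = min_distance(depth, range(40, 320, 40), ys) > STOP_DISTANCE_M
        right_clear = min_distance(depth, range(320, 600, 40), ys) > STOP_DISTANCE_M

        if left_clear and right_clear:
            arduino.write(b"F")   # path ahead is clear, drive forward
        elif left_clear:
            arduino.write(b"L")   # obstacle on the right, veer left
        elif right_clear:
            arduino.write(b"R")   # obstacle on the left, veer right
        else:
            arduino.write(b"S")   # boxed in, stop
finally:
    pipeline.stop()
    arduino.close()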

As [Kabilan] explains on the NVIDIA Blog, he specifically chose off-the-shelf components and the most affordable electric wheelchair he could find to keep the total cost of the project as low as possible. An undergraduate from the Karunya Institute of Technology and Sciences in Coimbatore, India, he notes that this sort of assistive technology is usually only available to more affluent patients. With his cost-saving measures, he hopes to address that imbalance.

While automatic obstacle avoidance would already be a big help for many users, [Kabilan] imagines improved software taking things a step further. For example, a user could simply press a button to indicate which room of the house they want to move to, and the chair could drive itself there automatically. With increasingly powerful single-board computers and the state of open source self-driving technology steadily improving, it’s not hard to imagine a future where this kind of technology is commonplace.

Hackaday Prize 2023: An Agricultural Robot That Looks Ready For The Field

In the world of agriculture, not all enterprises are large arable cropland affairs upon which tractors do their work, traversing strip by strip under the hot sun. Many farms raise far more intensive crops on a much smaller scale, and across varying terrain. When it comes to automation, these farms offer their own special challenges, but with the benefit of a smaller machine reducing some of the engineering tasks. There’s an entry in this year’s Hackaday Prize which typifies this: [KP]’s Agrofelis robot, a small four-wheeled carrier platform designed to deliver autonomous help on smaller farms. It’s shown servicing a vineyard in probably one of the most bad-ass pictures you could think of, as the pesticide duster on its implement platform makes it look for all the world like a futuristic weapon.

A sturdy tubular frame houses the battery bank and brains, while motive power is provided by four bicycle-derived motorized wheels with disc brakes. Interestingly, this machine steers mechanically rather than using the skid steering found in so many such platforms. On top is a two-degree-of-freedom rotating mount which serves as the implement system — akin to a three-point linkage on a tractor. This is the basis of the bad-ass pesticide duster turret mentioned above. Running it all is an NVIDIA Jetson Nano, with input from a range of sensors including global positioning and LIDAR.

The attention to detail in this agricultural robot is clearly very high, and we could see machines like it becoming indispensable in the coming decades. Many tasks on a small farm are time-consuming and involve carrying or wheeling a small machine around performing the same task over and over. Something like this could take that load off the farmer. We’ve been there, and sure would appreciate something to do the job.

While we’re on the subject of farm robots, this one’s not the only Prize entry this year.

NVIDIA Jetson Powers Real-Time Iron Man HUD

If you could recreate any of the capabilities of Tony Stark’s Iron Man suit in real life, it would probably be the ability to fly, the super strength, or maybe even the palm-mounted lasers that can cut through whatever obstacle is in your path. But let’s be real, all that stuff is way too hard to try and pull off. Plus you’ll probably just end up accidentally killing yourself in the backyard.

But judging by the videos he’s been posting, [Kris Kersey] is doing one hell of a job creating a functional heads-up display (HUD) similar to the one Tony uses in the films. He’s even building it into a 3D printed Iron Man helmet, with the NVIDIA Jetson board that’s powering the show inside a chest-mounted “Arc Reactor”. He goes into a bit more detail about the project and his goals in an interview recently published on NVIDIA’s own blog.

Electric Skateboard Becomes Mobile Skate Park

While building a skate park might not appear to have much in common with software development, at the very least they both suffer from a familiar problem: scalability. Bigger skate parks need more ramps and features, and unlike certain kinds of software, there’s no real way to scale up a construction project like this efficiently other than simply building more features. [Kirk] noticed this, but managed to scale up a skate park in a way we’ve never thought of before: he built a mobile skateboard ramp that can turn any place into a skate park.

The mobile, approximately sidewalk-width platform is able to move around thanks to an electric skateboard as its foundation. It adds an NVIDIA Jetson Nano for control with a PS4 controller for input, although steering a skateboard with an actuator took a few prototypes to figure out, since skateboards are designed to be steered by shifting the rider’s weight. Since they’re already designed to carry a human amount of weight, though, the skateboard was at least able to tote the ramp around with relative ease. Another problem was lowering the ramp into position once it got to the desired area, but with an electrically controlled jack and a few rounds of debugging, it was eventually able to do this without much issue.
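The write-up doesn’t include the control code, but the input side is straightforward: read the PS4 controller’s sticks and turn them into throttle and steering values for whatever is driving the skateboard’s ESC and steering actuator. A minimal sketch using pygame’s joystick API might look like the following; the serial protocol and the axis numbers are assumptions, not details from [Kirk]’s build.

# Minimal sketch of mapping PS4 controller sticks to drive/steer commands (not [Kirk]'s code).
# Uses pygame's joystick API; the serial protocol to the skateboard ESC and steering
# actuator ("T<throttle> S<steer>") and the axis numbers are hypothetical.
import time
import pygame
import serial

link = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)
pad.init()

while True:
    pygame.event.pump()
    throttle = -pad.get_axis(1)   # left stick up/down, inverted so up = forward
    steer = pad.get_axis(2)       # right stick left/right (axis index may vary by driver)

    # Scale to integer commands and send them to whatever drives the motors and actuator
    link.write(f"T{int(throttle * 100)} S{int(steer * 100)}\n".encode())
    time.sleep(0.05)              # ~20 Hz update rate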

With all of that project development behind him, [Kirk] can finally realize his dream of having ramps scattered all across his neighborhood like in the classic video game Paperboy, without needing to build them all individually or ask for permission to place them around his neighbors’ homes. For any future iterations of this build, we might consider adding tank tracks to the electric skateboard for better off-road performance, like facilitating a jump across a patch of grass.

NVIDIA Unleashes The First Jetson AGX Orin Module

Back in March, NVIDIA introduced Jetson Orin, the next generation of their ARM single-board computers intended for edge computing applications. The new platform promised to deliver “server-class AI performance” on a board small enough to install in a robot or IoT device, with even the lowest tier of Orin modules offering roughly double the performance of the previous Jetson Xavier modules. Unfortunately, there was a bit of a catch — at the time, Orin was only available in development kit form.

But today, NVIDIA has announced the immediate availability of the Jetson AGX Orin 32GB production module for $999 USD. This is essentially the mid-range offering of the Orin line, which makes releasing it first a logical enough choice. Users who need the top-end performance of the 64GB variant will have to wait until November, but there’s still no hard release date for the smaller NX Orin SO-DIMM modules.

That’s a bit of a letdown for folks like us, since the two SO-DIMM modules are probably the most appealing for hackers and makers. At $399 and $599, their pricing makes them far more palatable for the individual experimenter, while their smaller size and more familiar interface should make them easier to implement into DIY builds. While the Jetson Nano is still an unbeatable bargain for those looking to dip their toes into the CUDA waters, we could certainly see folks investing in the far more powerful NX Orin boards for more complex projects.

While the AGX Orin modules might be a bit steep for the average tinkerer, their availability is still something to be excited about. Thanks to the common JetPack SDK framework shared by the Jetson family of boards, applications developed for these higher-end modules will largely remain compatible across the whole product line. Sure, the cheaper and older Jetson boards will run them slower, but as far as machine learning and AI applications go, they’ll still run circles around something like the Raspberry Pi.

Need A Snack From Across Town? Send Spot!

[Dave Niewinski] clearly knows a thing or two about robots, judging from his YouTube channel. Usually the projects involve robot arms mounted on some sort of wheeled platform, but this time it’s to the tune of some pretty famous yellow robot legs, in the shape of Spot from Boston Dynamics. The premise is simple — tell the robot what snacks you want, entirely by voice command, and off he goes to fetch them. But, we’re not talking about navigating to the fridge in the same room. We’re talking about trotting out the front door, down the street, and crossing roads to visit a favorite restaurant. Spot will order the snacks and bring them back, fully autonomously.

Spot’s depth cameras provide localized navigation and obstacle avoidance information, while a local AI vision system handles those pesky moving objects.

There are multiple things going on here, all of which are pretty big computational tasks. Firstly, there is no cloud-based voice control, a la Google Assistant or Alexa. The robot works on the premise of full autonomy, which means no internet connectivity for any aspect. All of the voice recognition, speech-to-text, and speech synthesis is performed locally using the NVIDIA Riva GPU-based AI speech SDK, running on the NVIDIA Jetson AGX Orin carried on Spot’s back. A front-facing webcam supplies the audio feed. The voice recognition application listens for the wake phrase, then turns the snack order into text for later replay when it gets to the destination.

Navigation is taken care of with a MicroStrain RTK GNSS module, which has all the needed robustness, such as dual antennas and inertial fallback for regions with a spotty signal. Navigation on its own is no use out in the real world, which is where Spot’s depth cameras come in. These enable local obstacle avoidance, as per the usual Spot behavior we’ve all seen before. But what about crossing the road without getting tens of thousands of dollars of someone else’s hardware crushed by a passing truck? Spot’s onboard streaming cameras are fed into NVIDIA’s DashCamNet AI model, which enables real-time recognition of moving obstacles such as cars, humans, and anything else that might be wandering around and get in the way.

All in all, a cool project showing the future potential of AI in robotics for important tasks, like fetching me a beer when I most need it, even if it comes from the local corner shop.
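Strip away the hardware and the voice side of the project boils down to a simple pattern: listen for a wake phrase, capture whatever follows as the order, and replay it at the counter. Here’s a tiny, runnable sketch of just that text-handling step; the transcript would come from the locally-running Riva speech-to-text on the Orin, and the “hey spot” wake phrase is our guess rather than whatever [Dave] actually uses.

# Runnable sketch of the wake-phrase / order-extraction step (not the project's actual code).
# The transcript string would come from the Riva ASR running on the Jetson AGX Orin;
# the wake phrase below is an assumption made for illustration.
from typing import Optional

WAKE_PHRASE = "hey spot"

def extract_order(transcript: str) -> Optional[str]:
    """Return the snack order that follows the wake phrase, or None if it wasn't heard."""
    text = transcript.lower().strip()
    if WAKE_PHRASE not in text:
        return None
    # Everything after the wake phrase is treated as the order, stored for replay later
    order = text.split(WAKE_PHRASE, 1)[1].strip(" ,.")
    return order or None

if __name__ == "__main__":
    print(extract_order("Hey Spot, get me a sandwich and a cola"))
    # prints: get me a sandwich and a cola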

We love robots around here. Robots can mow your lawn, navigate inside your house with a little help from invisible QR codes, even help out with growing your food. The long-promised robot-assisted future may now be looking more like the present.



MicroGPS Sees What You Overlook

GPS is an incredibly powerful tool that allows devices such as your smartphone to know roughly where they are, with an accuracy of around a meter in some cases. However, that’s still too coarse for many use cases, and the accuracy drops considerably indoors, which is why warehouse robots often rely on barcodes on the floor instead. In response, researchers [Linguang Zhang, Adam Finkelstein, and Szymon Rusinkiewicz] at Princeton have developed a system they refer to as MicroGPS, which uses pictures of the ground to determine location with sub-centimeter accuracy.

The system has a downward-facing monochrome camera with a light shield to control for exposure. Camera output feeds into an NVIDIA Jetson TX1 platform for processing. The idea is actually quite similar to that of an optical mouse, as they are often little more than a downward-facing low-resolution camera with some clever processing. Rather than capturing relative position like a mouse, though, the researchers are trying to capture absolute position. Imagine picking up your mouse, dropping it on a different spot on your mousepad, and having the cursor snap to a different part of the screen. To our eyes, which are quite far away from the surface, asphalt, tarmac, concrete, and carpet look quite uniform. But to a macro camera, there are cracks, fibers, and imperfections that are distinct and recognizable.

They sample the surface ahead of time, creating a globally consistent map of all the images stitched together. Then, while moving around, they extract features and use a voting method to filter out the numerous false positives. The system is robust enough to work on an outdoor road even a month after the initial dataset was created. They even put leaves on the ground to try and fool the system, but saw remarkably stable navigation.
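The core trick is simple to demonstrate: match local features from the current camera frame against a pre-built map of the ground, and let each match vote on a global position so that false matches get outvoted. Below is a toy, self-contained sketch of that idea using OpenCV’s ORB features on a synthetic “ground” image, rather than the SIFT-based pipeline the researchers actually use; the image sizes and the 50-pixel voting grid are made up for illustration.

# Toy sketch of the MicroGPS idea (not the authors' pipeline): build a feature database
# from a "ground map" image, then localize a query patch by matching features and letting
# each match vote for a global position. Uses OpenCV ORB instead of the paper's SIFT.
import cv2
import numpy as np

rng = np.random.default_rng(0)
ground_map = rng.integers(0, 255, (1200, 1200), dtype=np.uint8)  # stand-in for stitched ground imagery
ground_map = cv2.GaussianBlur(ground_map, (5, 5), 0)             # give the "texture" some structure

orb = cv2.ORB_create(nfeatures=5000)
map_kp, map_desc = orb.detectAndCompute(ground_map, None)

# Query: a small patch of ground the robot is currently looking at
x0, y0 = 725, 430
query = ground_map[y0:y0 + 200, x0:x0 + 200]
q_kp, q_desc = orb.detectAndCompute(query, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(q_desc, map_desc)

# Each match votes for where the patch's top-left corner would sit in the map;
# the densest cluster of votes wins, which suppresses the false matches.
votes = np.zeros((ground_map.shape[0] // 50, ground_map.shape[1] // 50), dtype=int)
for m in matches:
    qx, qy = q_kp[m.queryIdx].pt
    mx, my = map_kp[m.trainIdx].pt
    ox, oy = mx - qx, my - qy                      # implied patch origin in map coordinates
    if 0 <= oy < ground_map.shape[0] and 0 <= ox < ground_map.shape[1]:
        votes[int(oy) // 50, int(ox) // 50] += 1

cell = np.unravel_index(np.argmax(votes), votes.shape)
print("estimated origin ~", (cell[1] * 50, cell[0] * 50), "true origin", (x0, y0))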

Their paper, code, and dataset are all available online. We’re looking forward to sensor fusion systems that combine GPS, Wi-Fi triangulation, and MicroGPS to provide robust and accurate positioning.

Video after the break.
