Omnibot From The 80s Gets LED Matrix Eyes, Camera

[Ramin Assadollahi] has been busy rebuilding and improving an Omnibot 5402, and the last pieces of hardware he wanted to upgrade were LED matrix eyes and a high quality Raspberry Pi camera for computer vision. An Omnibot was something most technically-minded youngsters remember drooling over in the 80s, and when [Ramin] bought a couple of used units online, he went straight to the workbench to give the vintage machines some upgrades. After all, the Omnibot 5402 was pretty remarkable for its time, but it’s capable of much more with some modern hardware. One area that needed improvement was the eyes.

The eyes on the original Omnibot could light up, but that’s about all they were capable of. The first upgrade was installing two 8×8 LED matrix displays to form what [Ramin] calls Minimal Expressive Eyes (MEE), powered by a Raspberry Pi. With the help of a 3D-printed adapter and some clever layout, the LED matrix displays fit behind the eye plate, maintaining the original look while opening up loads of new output possibilities.
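The video doesn’t detail the driver hardware behind the matrices, but to give a sense of how little code expressive eyes demand, here’s a minimal sketch assuming two daisy-chained MAX7219-driven 8×8 modules on the Pi’s SPI bus, using the luma.led_matrix library:

```python
# A minimal sketch of animated matrix "eyes" on a Raspberry Pi, assuming two
# MAX7219-driven 8x8 modules on SPI (the build's actual driver isn't specified).
import time
from luma.core.interface.serial import spi, noop
from luma.core.render import canvas
from luma.led_matrix.device import max7219

serial = spi(port=0, device=0, gpio=noop())                   # hardware SPI
device = max7219(serial, cascaded=2, block_orientation=-90)   # two 8x8 blocks

def draw_eyes(pupil_x):
    """Draw an outlined eye with a 2x2 pupil at the given column in each block."""
    with canvas(device) as draw:
        for eye in (0, 8):                                    # left/right block x-offsets
            draw.rectangle((eye, 0, eye + 7, 7), outline="white")
            draw.rectangle((eye + pupil_x, 3, eye + pupil_x + 1, 4), fill="white")

while True:
    for x in (2, 3, 4, 3):                                    # glance side to side
        draw_eyes(x)
        time.sleep(0.4)
```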

Adding a high quality Raspberry Pi camera with a wide-angle lens was a bit more challenging and required an extra-long camera ribbon cable, but with the lens nestled just below the eyes, the camera has a good view and isn’t particularly noticeable when the eyes are lit up. With the rest of the hardware already upgraded, all that remains is software work, and we can’t wait to see the results.

Two short videos of the hardware are embedded below; be sure to give them a peek. And when you’re ready for more 80s robot-upgrading action, check out the Hero Jr.

Continue reading “Omnibot From The 80s Gets LED Matrix Eyes, Camera”

Cheat At Cornhole With A Bazillion-Dollar Robot

While the days of outdoor cookouts may be a few months away for most of us, that certainly leaves plenty of time to prepare for the moment. Some may spend that time perfecting recipes or doing various home improvement projects during their remaining isolation, while others are practicing their skills at the various games played at these events. Specifically, the group at [Dave’s Armory], who have trained a robot to help play the perfect game of cornhole. (Video, embedded below.)

While the robot in question, an industrial-grade KUKA KR-20, carries a hefty price tag of $32,000 USD, the software and control system that the group built are fairly accessible for most people. The computer vision is handled by an Nvidia Jetson board, a single-board computer with extra parallel computing abilities, which runs OpenCV. With this setup, a custom hand for holding the corn bags, and a decent amount of training, the software is easily able to identify the cornhole board and instruct the robot to play a perfect game.
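The team’s actual code is on GitHub (linked below), but to give a flavor of what such an OpenCV pipeline can look like, here’s a rough, hypothetical sketch: threshold on the board’s color, take the biggest blob as the board, then find the hole with a Hough circle transform. The color range and tuning parameters are guesses, not the team’s values:

```python
# Not the team's actual code -- just a sketch of one way OpenCV can locate a
# cornhole board and its hole in a camera frame.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                    # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold for the board color (range is a guess; tune for real lighting).
mask = cv2.inRange(hsv, (10, 80, 80), (30, 255, 255))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
board = max(contours, key=cv2.contourArea)         # assume the biggest blob is the board
x, y, w, h = cv2.boundingRect(board)

# Look for the hole as a circle within the board region.
gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=60)
if circles is not None:
    cx, cy, r = circles[0][0]
    print(f"hole center at ({x + cx:.0f}, {y + cy:.0f}), radius {r:.0f}px")
```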

While we don’t all have expensive industrial robots sitting around in our junk drawers, the use of OpenCV and an accessible computer makes this project a useful introduction for anyone interested in computer vision, and the group made the code public on their GitHub page. OpenCV can be used for plenty of things besides robotics, too, such as identifying weeds in a field or using a Raspberry Pi for facial recognition.

Continue reading “Cheat At Cornhole With A Bazillion-Dollar Robot”

Science Officer…Scan For Elephants!

If you watch many espionage or terrorism movies set in the present day, there’s usually a scene where some government employee enhances a satellite image to show a clear picture of the main villain’s face. Do modern spy satellites have that kind of resolution? We don’t know, and if we did we couldn’t tell you anyway. But we do know that even with unclassified resolution, scientists are using satellite imagery and machine learning to count things like elephant populations.

When you think about it, counting wildlife populations in their natural habitat is a hard problem. First, if you go in person, you disturb the target animals. Even a drone is probably going to upset timid wildlife. Then there is the problem of covering a large area and figuring out whether the elephant you see today is the same one you saw yesterday. Guess wrong and you will either undercount or overcount.

The Oxford scientists counting elephants used the Worldview-3 satellite, which can image up to 680,000 square kilometers every day. You aren’t disturbing any of the observed creatures, and since each shot covers a huge swath of territory, the problem of double counting all but vanishes.
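The heavy lifting is done by a detection model (the researchers trained a deep network on labeled imagery), but the scaffolding around it is simple enough to sketch: slice each enormous scene into tiles a detector can digest and sum the per-tile counts. The detector here is a stub standing in for the real trained model:

```python
# A sketch of satellite-scale counting scaffolding: slice one huge scene into
# fixed-size tiles and sum detections. The detector is a stub, not the
# researchers' actual model.
import numpy as np

def detect_elephants(tile: np.ndarray) -> list:
    """Placeholder for real model inference; returns detections in the tile."""
    return []

def tile_scene(scene: np.ndarray, size: int = 256):
    """Yield (row, col, tile) patches covering an H x W x C image."""
    h, w = scene.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, scene[r:r + size, c:c + size]

scene = np.zeros((4096, 4096, 3), dtype=np.uint8)   # stand-in for one satellite scene
count = sum(len(detect_elephants(tile)) for _, _, tile in tile_scene(scene))
print(f"{count} elephants counted in this scene")
```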

Continue reading “Science Officer…Scan For Elephants!”

Robotic Pool Cue Can Be Your Friend Or Your Foe

In his everlasting quest to replace physical skill with technology, [Shane] of [Stuff Made Here] has taken aim at the game of eight-ball pool. Using a combination of computer vision and mechatronics, he created a robotic pool system that can allow a physical game of pool over the internet, or just beat human players. See the video after the break.

Making a good pool shot requires three discrete steps. First, you need to identify the best shot, then figure out exactly how to strike the balls to achieve the desired result, and finally execute the shot accurately. [Shane’s] goal was to automate all three steps. For the physical part, he built a pool cue with a robotic tip which only requires the user to place it in approximately the right position, while a pneumatic piston mounted on a Stewart platform does the rest. A Stewart platform is a triangular plate mounted on six reciprocating rods, which gives it the required freedom of motion. The rods’ bases are attached to a set of cranks actuated by tension cables pulled by servos mounted at the rear end of the cue. An adjustable air system allows the power of the shot to be varied as required.
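The inverse kinematics of a Stewart platform are surprisingly compact: the required length of each rod is just the distance between its base anchor and its platform anchor once the desired pose is applied. A quick numpy sketch with made-up anchor geometry (not [Shane’s] actual cue dimensions):

```python
# Inverse kinematics of a generic Stewart platform: for a desired platform
# pose, each rod length is |T + R @ p_i - b_i|. Anchor coordinates below are
# illustrative, not the real cue geometry.
import numpy as np

def rot_xyz(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

# Six base anchors and six platform anchors (mm), spaced on circles.
angles = np.deg2rad([0, 60, 120, 180, 240, 300])
base = np.stack([40 * np.cos(angles), 40 * np.sin(angles), np.zeros(6)], axis=1)
plat = np.stack([25 * np.cos(angles), 25 * np.sin(angles), np.zeros(6)], axis=1)

def rod_lengths(t, rpy):
    """Rod lengths for platform translation t (mm) and orientation rpy (rad)."""
    R = rot_xyz(*rpy)
    return np.linalg.norm(t + plat @ R.T - base, axis=1)

print(rod_lengths(np.array([0, 0, 60]), np.deg2rad([2, -3, 0])))
```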

A camera mounted over the table is connected to computer vision software to gather the required position information. Fiducials on the corners of the table and on the cue tip allow the positions of the pockets, balls, and cue to be accurately determined, which in theory should let the robot take the perfect shot. Getting this to work in reality quickly turned into a very frustrating experience. After many hours of debugging, [Shane] tracked the error down to a tiny forgotten test function that was introducing 5-10 mm of position error, plus two of the six servos in the cue not performing up to spec. To determine the vertical positioning of the cue, an IMU and a fixed-height foot were added. [Shane] also added an overhead projector to overlay all the required information directly on the table.
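The video doesn’t say exactly which fiducial system is in play, but OpenCV’s ArUco markers (from the opencv-contrib package) are a typical choice for this kind of tracking. A minimal sketch using the OpenCV 4.7+ API:

```python
# A sketch of fiducial tracking with OpenCV's ArUco module (opencv-contrib).
# ArUco is an assumption here; the build may use a different marker system.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(dictionary, params)

cap = cv2.VideoCapture(0)                    # overhead camera
ok, frame = cap.read()
if ok:
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            center = quad[0].mean(axis=0)    # pixel centroid of the 4 corners
            print(f"marker {marker_id} at {center}")
cap.release()
```

Continue reading “Robotic Pool Cue Can Be Your Friend Or Your Foe”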

OpenCV Spreads Smart Camera Joy To See Ideas Come To Life

Do you have a great application for computer vision, but can’t spare the cost of the hardware needed to build it? Or perhaps you just need a deadline to pull you away from endless doomscrolling? Either way, the OpenCV team wants you to enter their OpenCV AI Competition 2021, and they’re willing to pitch in hardware to make it happen.

This competition is part of OpenCV’s 20th anniversary celebration, and the field of machine vision has changed a lot in those two decades. OpenCV started within Intel, harnessing the power of their high-end CPUs, but today the excitement is around specialized acceleration hardware for vision processing. That’s why OpenCV put their support and lent their name to the OpenCV AI Kit (OAK) Kickstarter we covered a few months ago. Since then, the hardware has been produced and is starting to arrive in project backers’ hands. (Barring pandemic-related shipping restrictions…)

This shiny new hardware is the competition’s focus. Phase one solicits team proposals for putting an OAK-D’s power to novel use. University teams may have up to ten members; general teams are limited to four. Each team’s geographic home will put them in one of six global regions. Proposals must be submitted by January 27th, 2021. By February 11th, judges will select the best twenty-five general and ten university team proposals from each region, and every member of those teams gets an OAK-D unit to turn their idea into reality by the phase two deadline of June 27th. That’s up to 1,200 OAK-D modules available to anyone who can convince the judges they have a great idea and are capable of bringing it to fruition. Is that you? Of course it is!

Teams will also receive additional resources, such as an allotment of cloud compute credits to train their models, and naturally all the tutorials and sample code released as part of the OAK Kickstarter. No explicit resource for project team organization is mentioned, but of course our own Hackaday.io is available to support you. Best of luck to everyone who enters, and we look forward to seeing all the projects this contest brings to life.

Alfred Jones Talks About The Challenges Of Designing Fully Self-Driving Vehicles

The leap to self-driving cars could be as game-changing as the one from horse power to engine power. If cars prove able to drive themselves better than humans do, the safety gains could be enormous: auto accidents were the #8 cause of death worldwide in 2016. And who doesn’t want to turn travel time into something either truly restful or genuinely productive?

But getting there is a big challenge, as Alfred Jones knows all too well. As Head of Mechanical Engineering at Lyft’s Level 5 self-driving division, he leads the team building the roof racks and other gear that give the vehicles their sensors and computational hardware. In his keynote talk at Hackaday Remoticon, Alfred Jones walks us through what each level of self-driving means, how the problem is being approached, and where the sticking points are found between what’s being tested now and a truly steering-wheel-free future.

Check out the video below, and take a deeper dive into the details of his talk.

Continue reading “Alfred Jones Talks About The Challenges Of Designing Fully Self-Driving Vehicles”

Really Useful Robot

[James Bruton] is an impressive roboticist, building all kinds of robots, from tracked exploration machines to Boston Dynamics-esque legged robots. However, many of those are proof-of-concept builds that explore machine learning, computer vision, or unique movements and characteristics. This latest build makes use of everything he’s learned from those projects, but strives to be useful on a day-to-day basis as well, kicking off a series on building a Really Useful Robot. (Video, embedded below.)

While the robot isn’t quite finished yet, his first video in the series explores the idea behind the build and the construction of the robot’s base. He wants this robot to be able to navigate its environment and also carry out instructions such as retrieving a small object from a table. For that it needs a sturdy base, built from large 3D-printed panels, with two brushless motors with encoders driving the custom wheels, along with a suspension built from casters and a special hinge. Also in the base is an Nvidia Jetson that runs the robot and handles heavy-lifting tasks such as image recognition.
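Wheel encoders on a differential-drive base like this make dead-reckoning odometry straightforward, which is the foundation any navigation work builds on. Here’s a sketch of the standard pose update; the wheel radius, track width, and encoder resolution are placeholders, not [James]’s actual numbers:

```python
# Classic differential-drive odometry from wheel encoder ticks. All physical
# constants below are placeholder values, not the real robot's dimensions.
import math

WHEEL_RADIUS = 0.08      # meters (placeholder)
TRACK_WIDTH = 0.40       # wheel-to-wheel distance, meters (placeholder)
TICKS_PER_REV = 4096     # encoder counts per wheel revolution (placeholder)
M_PER_TICK = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

def update_pose(x, y, theta, d_left_ticks, d_right_ticks):
    """Integrate one pair of encoder deltas into the (x, y, heading) pose."""
    dl = d_left_ticks * M_PER_TICK            # left wheel travel
    dr = d_right_ticks * M_PER_TICK           # right wheel travel
    dist = (dl + dr) / 2                      # forward travel of the base center
    dtheta = (dr - dl) / TRACK_WIDTH          # change in heading
    x += dist * math.cos(theta + dtheta / 2)  # midpoint integration
    y += dist * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
for dl, dr in [(120, 130), (125, 125), (130, 120)]:   # example encoder deltas
    pose = update_pose(*pose, dl, dr)
print(f"x={pose[0]:.3f} m, y={pose[1]:.3f} m, heading={math.degrees(pose[2]):.1f} deg")
```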

As of this writing, [James] has also released the second video in the series, which goes into detail about the robot’s mapping and navigation functions, and we’re excited to see the finished product. Of course, if you want to see some of [James]’s other projects, be sure to check out his tracked rover or his investigations into legged robots.

Continue reading “Really Useful Robot”