Automated Dice Tester Uses Machine Vision To Ensure A Fair Game

People take their tabletop games very, very seriously. [Andrew Lauritzen], though, has gone far above and beyond in pursuit of a fair game. The game in question is Star Wars: X-Wing, a strategy wargame where miniature pieces are moved according to rolls of the dice. [Andrew] suspected that commercially available dice were skewing the game, and the automated machine-vision dice tester shown in the video after the break was the result.

The rig is a very clever design that maximizes the size of the data set with as little motion as possible. The test chamber is a box with clear ends that can be flipped end-for-end by a motor; walls separate the chamber into four channels so multiple dice can be tested on each throw, and baffles within the channels ensure randomization. A webcam positioned below the chamber takes a snapshot of each “throw”, which is then analyzed in OpenCV. This scheme has the unfortunate side effect of viewing the dice from the table’s perspective, but [Andrew] dealt with that in true hacker fashion: he ignored it, since it didn’t affect the statistics he was interested in.
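As a rough illustration of the OpenCV side, the classic recipe for reading a die face is to threshold the image, find the blobs, and count or classify them. X-Wing dice carry symbols rather than pips, so treat this minimal sketch as the pip-counting analog; the region coordinates and area limits are placeholders, not values from [Andrew]’s rig:

```python
import cv2

def count_pips(frame, cell):
    """Count pips in one channel of the test chamber.

    `frame` is a BGR webcam snapshot; `cell` is an (x, y, w, h)
    region holding a single die. Assumes dark pips on light faces.
    """
    x, y, w, h = cell
    roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    roi = cv2.GaussianBlur(roi, (5, 5), 0)
    # Invert so pips become white blobs, then let Otsu pick the threshold.
    _, mask = cv2.threshold(roi, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Reject noise by area so only plausibly pip-sized blobs count.
    pips = [c for c in contours if 30 < cv2.contourArea(c) < 500]
    return len(pips)
```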

And speaking of statistics, he generated a LOT of them. The 62-page report of results from his study is an impressive piece of work, and it concludes that the dice aren’t fair thanks to manufacturing variability, and that players could exploit this fact to cheat. He recommends pooled sets of dice to eliminate such advantages during competitive play.
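If you’d like to put your own dice on trial without building the rig, the standard tool is Pearson’s chi-squared test, which compares observed face counts against the uniform counts a fair die should produce. A minimal sketch using SciPy, with made-up counts for an eight-sided X-Wing die rather than [Andrew]’s data:

```python
from scipy.stats import chisquare

# Observed counts for one eight-sided die after 1000 throws
# (illustrative numbers, not data from the report).
observed = [138, 112, 131, 127, 119, 125, 116, 132]

# Under the fair-die hypothesis, every face is equally likely;
# chisquare() assumes uniform expected counts by default.
stat, p = chisquare(observed)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Reject fairness at the 5% level.")
else:
    print("No evidence of bias at this sample size.")
```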

This isn’t the first automated dice roller we’ve seen around these parts. There was the tweeting dice-bot, the Dice-O-Matic, and all manner of electronic dice throwers. This one goes the extra mile to keep things fair, and we appreciate that.

Continue reading “Automated Dice Tester Uses Machine Vision To Ensure A Fair Game”

Making Autonomous Racing Drones Lean And Mean

Recently the MAVLab (Micro Air Vehicle Laboratory) at the Technical University of Delft in the Netherlands proudly announced an autonomous drone weighing a mere 72 grams. The best part? It’s designed to take part in drone races. What this means is that, using a single camera and onboard processing, this little drone with a diameter of 10 centimeters has to navigate the course while avoiding obstacles.

To achieve this goal, they took an Eachine Trashcan drone, replacing its camera with the open source JeVois smart machine vision camera and its autopilot with the open source Paparazzi UAV software. Naturally, scaling a racing drone down to this size came at an obvious cost: with lower-quality sensors, a lower-quality camera, and limited processing power compared to its bigger brothers, it has to rely heavily on algorithms that compensate for drift and other glitches while racing.

Currently the drone is mainly being tested on a four-gate race track in TU Delft’s Cyberzoo, where it can fly multiple laps at a leisurely two meters per second, using its gate-detection algorithms to zip from gate to gate. Because machine vision handles the gate detection, the drone can cope with gates that have been displaced from the positions indicated on the course map.
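MAVLab’s actual detector is more sophisticated, but the classic starting point for spotting a brightly colored race gate is a simple HSV color threshold followed by a blob search. A minimal OpenCV sketch along those lines, with made-up color bounds and size limits:

```python
import cv2
import numpy as np

# Illustrative HSV bounds for an orange gate; real thresholds depend
# on the gates and the lighting.
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

def find_gate(frame):
    """Return the horizontal offset of the gate from image center,
    or None if no gate-sized blob is visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    gate = max(contours, key=cv2.contourArea)
    if cv2.contourArea(gate) < 500:   # too small to be a gate
        return None
    x, y, w, h = cv2.boundingRect(gate)
    # Steering error: how far the gate center sits from image center.
    return (x + w / 2) - frame.shape[1] / 2
```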

While competitive with other, much larger autonomous racing drones, the system is still far removed from the performance of human-piloted racing drones. To close this gap, MAVLab’s [Christophe De Wagter] mentions that they’re looking at improving the algorithms, both on the machine vision side and in predictive control and state estimation. Ideally, these little drones should become far more nimble and quick than they are today.
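State estimation here means fusing noisy, drifting sensor readings into a usable estimate of the drone’s attitude and position. Stripped down to a single axis, the core idea can be as simple as a complementary filter. This is a toy illustration, not MAVLab’s code:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a 1-D complementary filter: trust the gyro for
    fast motion, and let the accelerometer slowly bleed off the
    gyro's accumulated drift. alpha sets the blend."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Typical use inside the control loop, at a fixed update rate:
# angle = complementary_filter(angle, gyro_dps, accel_deg, dt=0.002)
```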

See a video of the drone in action after the break.

Continue reading “Making Autonomous Racing Drones Lean And Mean”

Automate The Freight: Amazon’s Robotic Packaging Lines

In the “Automate the Freight” series, I’ve concentrated on stories that reflect my premise that the killer app for self-driving vehicles will not be private passenger cars, but will more likely be the mundane but necessary task of toting things from place to place. The economics of replacing thousands of salary-drawing, benefit-requiring humans in the logistics chain simply dwarf the profits to be made by selling individuals a convenient and safe commuting experience. Advances made in automating deliveries will eventually trickle down to the consumer market, but it’ll be the freight carriers that drive innovation.

While I’ve concentrated on self-driving freight vehicles, there are other aspects to automating the supply chain that I’ve touched on in this series, from UAV-delivered blood and medical supplies to the potential for automating the last hundred feet of home delivery with curb-to-door robots. But automation of the other end of the supply chain holds a lot of promise too, both for advancing technology and disrupting the entire logistics field. This time around: automated packaging lines, or how the stuff you buy online gets picked and wrapped for shipping without ever being touched by human hands.

Continue reading “Automate The Freight: Amazon’s Robotic Packaging Lines”

Camera Sees Electromagnetic Interference Using An SDR And Machine Vision

It’s one thing to know that your device is leaking electromagnetic interference (EMI), but if you really want to solve the problem, it helps to know where the emissions are coming from. This heat-mapping EMI probe will answer that question, with style. It uses a webcam to track an EMI probe and then overlays a heat map of the interference on the image itself.

Regular readers will note that the hardware end of [Charles Grassin]’s EMI mapper bears a strong resemblance to the EMC probe made from semi-rigid coax we featured recently. Built as a cheap DIY substitute for an expensive off-the-shelf probe set for electromagnetic testing, the probe was super simple: just a semi-rigid coax jumper with one SMA plug lopped off and the raw end looped back and soldered. Connected to an SDR dongle, the probe proved useful for tracking down noisy circuits.
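On the measurement side, reading relative RF power from an RTL-SDR dongle in Python takes only a few lines with the pyrtlsdr library. A minimal sketch follows; [Charles]’ script may well differ, and the readings are uncalibrated, relative power only:

```python
import numpy as np
from rtlsdr import RtlSdr  # pyrtlsdr; assumes an RTL-SDR dongle

sdr = RtlSdr()
sdr.sample_rate = 2.4e6   # Hz
sdr.center_freq = 100e6   # tune to the frequency of the noise
sdr.gain = 'auto'

def probe_power(n_samples=65536):
    """Relative RF power at the probe, in dB (uncalibrated)."""
    samples = sdr.read_samples(n_samples)   # complex IQ data
    return 10 * np.log10(np.mean(np.abs(samples) ** 2))
```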

[Charles]’ project takes that a step further by adding a camera that looks down on the device under test. OpenCV tracks the probe as it is moved manually over the DUT, aided by an augmented-reality display that shows coverage, while a Python script records the probe’s position along with the RF power measured at each point. The video below shows the capture process and what the data looks like when reassembled as an overlay on top of the device.
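Reassembling the recorded (position, power) samples into an overlay is classic OpenCV territory: splat each sample into an accumulator image, blur it, colorize it, and blend it with the camera frame. A sketch of the idea, not [Charles]’ exact code:

```python
import cv2
import numpy as np

def render_heatmap(frame, measurements):
    """Overlay RF power measurements on the camera image.

    `measurements` is a list of (x, y, power_db) tuples collected
    while tracking the probe.
    """
    h, w = frame.shape[:2]
    heat = np.zeros((h, w), np.float32)
    for x, y, p in measurements:
        # Stamp each reading as a filled disc at the probe position.
        cv2.circle(heat, (int(x), int(y)), 15, float(p), -1)
    heat = cv2.GaussianBlur(heat, (51, 51), 0)   # smooth between points
    heat = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX)
    colored = cv2.applyColorMap(heat.astype(np.uint8), cv2.COLORMAP_JET)
    return cv2.addWeighted(frame, 0.6, colored, 0.4, 0)
```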

Even if EMC testing isn’t your thing, this one seems like a lot of fun for the curious. [Charles] has kindly made the sources available on GitHub, so this is a great project to just knock out quickly and start mapping.

Continue reading “Camera Sees Electromagnetic Interference Using An SDR And Machine Vision”

Leigh Johnson’s Guide To Machine Vision On Raspberry Pi

We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)

The talk is a very concise 13 minutes, so Leigh flies through definitions of basic terms before quickly naming TensorFlow and Keras as the tools she used. The time she saved there was spent explaining what convolutional neural networks are and how they work, just enough to prepare the audience. But all of that is really just background; the meat of the talk is the self-contained examples that Leigh has put together and made available online. I love to see that, since it means you can go beyond just watching and try it out for yourself.
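For a taste of the Keras side, a minimal image classifier of the sort the talk covers takes only a dozen lines. This is a generic sketch, not Leigh’s model; her linked examples are the real starting point:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A tiny CNN: stacked convolution/pooling blocks feeding a classifier
# head. Input size and class count are arbitrary example values.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(96, 96, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(2, activation='softmax'),  # two example classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Continue reading “Leigh Johnson’s Guide To Machine Vision On Raspberry Pi”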

Robot Solves Rubik’s Cube With One Hand Tied Behind Its Back

For all those who have dismissed the Rubik’s Cube solving robots of the past as purpose-built rigs that hold the cube in a decidedly non-anthropomorphic manner: checkmate.

The video below shows not only that a robot can solve the classic puzzle with mechanical hands, but that it can do so with just one of them, and with only three fingers at that. The [Yamakawa] lab at the University of Tokyo built the high-speed manipulator to explore the kinds of fine motions that humans perform without even thinking about them. The hand, guided by a 500-fps machine vision system, uses two opposing fingers to grip the lower part of the cube while the third finger flicks the top face counterclockwise. The entire cube can also be rotated about its vertical axis or flipped 90° at a time. Piecing these moves together lets the hand solve the cube with impressive speed; extra points for the little, “How’s that, human?” flick at the end.
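In effect, the hand has just three primitives: flick the top face, rotate the cube about its vertical axis, and tip it 90°. A solver’s move list then has to be compiled down to those primitives. Here is a toy sketch of that translation with invented primitive names; the lab’s real planner is far more involved:

```python
# Toy mapping from solver moves to the hand's three primitives
# (names invented for illustration). A real planner also has to
# track how each whole-cube rotation re-labels the faces.
PRIMITIVES = {
    "U'": ["flick_top_ccw"],        # one counterclockwise flick
    "U":  ["flick_top_ccw"] * 3,    # three CCW flicks = one CW turn
    "y":  ["rotate_vertical"],      # spin the whole cube on its axis
    "x":  ["tip_90"],               # flip the cube 90 degrees
}

def compile_moves(solution):
    """Expand a move list into primitive hand actions, assuming the
    solver's output has already been rewritten so that every face
    turn is a top-face ('U') turn."""
    actions = []
    for move in solution:
        actions.extend(PRIMITIVES[move])
    return actions

print(compile_moves(["U", "y", "U'", "x"]))
```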

It might not be the fastest cube solver, or one that’s built right into the cube itself, but there’s something about the dexterity of this hand that we really appreciate.

Continue reading “Robot Solves Rubik’s Cube With One Hand Tied Behind Its Back”