Python Settles Bet About Best Strategy In Children’s Board Game

Simulating a tabletop game can be done for several reasons: to play the game digitally, to build a computer opponent, or to prove someone wrong. In [Everett]’s case, he used Python to prove which adult was right about basic strategy in a children’s game.

[Everett]’s 5-year-old loves a simple game called Hoot Owl Hoot!, in which players cooperatively work to move owls along a track to the safety of a nest. On each turn, a player draws a color card and moves an owl forward to the next space of that color; if that space is already occupied, the piece jumps ahead to the next open space of that color. The game has a bit more to it than that, but those are the important parts. After a few games, the adults in the room found themselves disagreeing about which strategy was optimal in this simple game.
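To make that movement rule concrete, here’s a minimal Python sketch of it. The track length and color list are invented for illustration and won’t match the real board:

```python
COLORS = ["red", "orange", "yellow", "green", "blue", "purple"]
NEST = 36  # hypothetical track length; the real board differs
BOARD = [COLORS[i % len(COLORS)] for i in range(NEST)]  # repeating color track

def land_on(positions, owl, color):
    """Find where `owl` lands: the next space of the drawn color,
    skipping any occupied spaces (the 'jump ahead' rule above)."""
    occupied = set(positions) - {positions[owl]}
    pos = positions[owl] + 1
    while pos < NEST and (BOARD[pos] != color or pos in occupied):
        pos += 1
    return pos  # pos == NEST means this owl has reached the nest
```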

It seemed to [Everett] that it was best to move pieces in the rear, keeping player pieces grouped together and maximizing the chance of free moves gained by jumping over occupied spaces. [Everett]’s wife countered that a “longest move” strategy was best: always select whichever piece would benefit the most (i.e. move the farthest) from any given card. Which approach wins games in the fewest moves? [Everett]’s small Python script simulates enough of the game to answer that over many iterations: the two strategies produce quite similar results, but “longest move” ultimately comes out on top.
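We don’t have [Everett]’s exact code in front of us, but a stripped-down comparison along those lines, building on the `land_on()` sketch above, might look like this:

```python
import random

def play_game(strategy, num_owls=3):
    """Play one simplified game; return how many card draws it took to win."""
    positions = list(range(num_owls))  # owls start on the first few spaces
    draws = 0
    while any(p < NEST for p in positions):
        color = random.choice(COLORS)  # shortcut: uniform draw, no finite deck
        draws += 1
        movable = [o for o in range(num_owls) if positions[o] < NEST]
        if strategy == "no_stragglers":
            # [Everett]'s approach: always move the rearmost owl.
            owl = min(movable, key=lambda o: positions[o])
        else:
            # "longest move": pick whichever owl gains the most spaces.
            owl = max(movable, key=lambda o: land_on(positions, o, color) - positions[o])
        positions[owl] = land_on(positions, owl, color)
    return draws

# Compare average game length over many simulated games.
for strategy in ("no_stragglers", "longest_move"):
    games = [play_game(strategy) for _ in range(10_000)]
    print(strategy, sum(games) / len(games))
```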

As far as simulations go, it’s no Tamagotchi Singularity, and [Everett] admits that the simulation isn’t a completely accurate one. But since its only purpose is to compare whether “no stragglers” or “longest move” wins in fewer moves, shortcuts like using random color generation in place of drawing colors from a finite deck shouldn’t skew the comparison much. Regardless, we can agree that board games can be fitting metaphors for the human condition.
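For the curious, that deck shortcut boils down to a one-line difference in the draw logic. Continuing with the `COLORS` list from the sketches above (the six-cards-per-color composition is invented for illustration):

```python
import random

# Shortcut used in the simulation: every draw is independent and uniform.
color = random.choice(COLORS)

# A finite deck, by contrast, makes later draws depend on earlier ones.
# Hypothetical composition: six cards of each color, shuffled per game.
deck = COLORS * 6
random.shuffle(deck)
color = deck.pop()
```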

Automated Dice Tester Uses Machine Vision To Ensure A Fair Game

People take their tabletop games very, very seriously. [Andrew Lauritzen], though, has gone far above and beyond in pursuit of a fair game. The game in question is Star Wars: X-Wing, a strategy wargame whose miniature pieces move according to rolls of the dice. [Andrew] suspected that commercially available dice were skewing the game, and the automated machine-vision dice tester shown in the video after the break was the result.

The rig is a very clever design that maximizes the data set with as little motion as possible. The test chamber is a box with clear ends that can be flipped end-for-end by a motor; walls separate the chamber into four channels to test multiple dice on each throw, and baffles within the channels ensure randomization. A webcam positioned below the chamber takes a snapshot of each “throw”, which is then analyzed in OpenCV. This scheme has the unfortunate side effect of viewing the dice from below, the table’s perspective, but [Andrew] dealt with that in true hacker fashion: he ignored it, since it didn’t affect the statistics he was interested in.
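We don’t know the details of [Andrew]’s OpenCV pipeline, but classifying the face showing in each channel could be done with simple template matching. The sketch below assumes hypothetical reference images and hand-measured channel boxes, and ignores die rotation (a real pipeline would also match rotated templates):

```python
import cv2

# Hypothetical reference images of the four X-Wing face symbols, captured once
# with the same camera. Template matching is not rotation invariant, so a real
# pipeline would also match rotated copies of each template.
FACES = {name: cv2.imread(f"{name}.png", cv2.IMREAD_GRAYSCALE)
         for name in ("hit", "crit", "focus", "blank")}

def classify_die(frame_gray, channel_box):
    """Classify the face showing in one channel of the test chamber."""
    x, y, w, h = channel_box
    roi = frame_gray[y:y + h, x:x + w]
    best_name, best_score = None, -1.0
    for name, template in FACES.items():
        # Normalized cross-correlation; the peak is the best match score.
        score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# One snapshot per flip of the chamber; four channels, four dice per throw.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Hypothetical channel bounding boxes, measured once from the fixed rig.
    channels = [(0, 0, 160, 480), (160, 0, 160, 480),
                (320, 0, 160, 480), (480, 0, 160, 480)]
    print([classify_die(gray, box) for box in channels])
```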

And speaking of statistics, he generated a LOT of them. The 62-page report of results from his study is an impressive piece of work, which basically concludes that the dice aren’t fair due to manufacturing variability, and that players could use this fact to cheat. He recommends pooled sets of dice to eliminate advantages during competitive play. 
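As a taste of the kind of analysis involved, a basic fairness check on tallied face counts is a chi-squared goodness-of-fit test. Here’s a sketch using SciPy with invented counts, not numbers from [Andrew]’s report:

```python
from scipy.stats import chisquare

# Hypothetical tallies for the eight faces of one X-Wing die after 8,000 throws.
observed = [1045, 987, 1012, 953, 1068, 942, 1021, 972]

# Null hypothesis: a fair die, meaning every face is equally likely.
stat, p_value = chisquare(observed)  # default expectation is uniform
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Face frequencies deviate more than chance alone would explain.")
else:
    print("No significant evidence of bias at the 5% level.")
```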

This isn’t the first automated dice roller we’ve seen around these parts. There was the tweeting dice-bot, the Dice-O-Matic, and all manner of electronic dice throwers. This one goes the extra mile to keep things fair, and we appreciate that.