An open-source canine training research tool has just been released by [Walker Arce] and [Jeffrey Stevens] at the University of Nebraska–Lincoln’s Canine Cognition and Human Interaction Lab (C-CHIL).
We didn’t realize that dog training research techniques were so high-tech. Operant conditioning, as opposed to Pavlovian conditioning, uses a positive reward (in this case, dog treats) to reinforce a desired behavior. Traditionally, operant conditioning involved dispensing the treat manually. Some devices with wireless remote controls do exist, but they are still manually operated and can give inconsistent results (too many or too few treats). With no existing method available to automate the process, this team decided to rectify the situation.
They took a commercial treat dispenser and retrofitted it with an interface board that taps into the dispenser’s IR sensors to confirm that the hopper has moved and treats were actually dispensed. The interface board connects to a Raspberry Pi, which serves as a full-featured platform for running the tests. In this demonstration it connects to an HDMI touch monitor, correlating touches from the dog’s nose with events onscreen. Future researchers won’t have to reinvent the wheel, just redesign the test itself, because [Walker] and [Jeffrey] have released all the firmware and hardware as open source on the lab’s GitHub repository.
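The dispense-verification idea boils down to counting beam-break events on the IR sensor line. Here is a minimal, hardware-independent sketch of that logic; it is an illustration of the technique, not the lab’s actual firmware, and the function name and sample values are our own assumptions:

```python
def treats_dispensed(samples):
    """Count treats by counting falling edges (beam breaks) in a
    stream of IR sensor reads: 1 = beam intact, 0 = beam broken.

    This simulates polling the dispenser's IR sensor; on real
    hardware the samples would come from a GPIO pin on the Pi.
    """
    count = 0
    prev = 1  # beam is unbroken when the dispenser is at rest
    for s in samples:
        if prev == 1 and s == 0:  # a treat just interrupted the beam
            count += 1
        prev = s
    return count

# Example: two treats pass the sensor (two separate beam breaks)
readings = [1, 1, 0, 0, 1, 1, 0, 1, 1]
print(treats_dispensed(readings))  # 2
```

Counting edges rather than low samples means a treat that lingers in front of the sensor is still counted exactly once, which is what lets the system flag too many or too few treats.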
In the short video clip below, watch the dog as he gets a treat when he taps the white dot with his snout. If you look closely, at one point the dog briefly moves the mouse pointer as well. We predict by next year the C-CHIL researchers will have this fellow drawing pictures and playing checkers.
Continue reading “Using Open Source To Train Your Dog”
It is said that Benjamin Franklin, while watching the first manned flight of a hot air balloon by the Montgolfier brothers in Paris in 1783, responded when questioned as to the practical value of such a thing, “Of what practical use is a new-born baby?” Dr. Franklin certainly had a knack for getting to the heart of an issue.
Much the same can be said for Spot, the extremely videogenic dog-like robot that Boston Dynamics has been teasing for years. It appears that the wait for a production version of the robot is at least partially over, and that Spot (once known as Spot Mini) will soon be available for purchase by “select partners” who “have a compelling use case or a development team that [Boston Dynamics] believe can do something really interesting with the robot,” according to VP of business development Michael Perry.
The qualification of potential purchasers will certainly limit the pool of early adopters, as will the price tag, which is said to be as much as a new car – and a nice one at that. It’s not likely that one will show up in a YouTube teardown video anytime soon, so until the day Dave Jones manages to find one in his magic Australian dumpster, we’ll have to entertain ourselves by trying to answer a simple question: of what practical use is a robotic dog?
Continue reading “Ask Hackaday: What Good Is A Robot Dog?”
If you think this thing looks good, you should see it move. [Martin Smith] hit a home run with this project, which was his Master’s Thesis. Fifteen servo motors let the bot move around, and since it was modeled after a small canine, the gait is very realistic. The tail is even functional, acting as a counterweight when the legs move.
The project was meticulously built in a 3D environment before undertaking any physical assembly. The mechanical parts are all either milled from aluminum or 3D printed. Two mBed boards mounted on its back allow it to interact with its environment. One of them handles image processing, the other drives the array of motors. And of course it doesn’t hurt that he built some Larson Scanners in as eyes.
Don’t miss the video after the break which shows off the entire project from planning to demonstration. We can’t help but be reminded of the rat-thing from Snow Crash.
Continue reading “Professional Looking Dog Robot Was Actually [Martin’s] Master’s Thesis”