SMORES Robot Finds Its Own Way To The Campfire

Robots that can dynamically reconfigure themselves to adapt to their environments offer a promising advantage over their less dynamic cousins. Researchers have been working through all the challenges of realizing that potential: hardware, software, and all the interactions in between. On the software end of the spectrum, a team at the University of Pennsylvania’s ModLab has been working on a robot that can autonomously choose the configuration that best fits the task at hand.

We recently did an overview of modular robots and noted that coordination and control are persistent challenges in this area. The robot in this particular demonstration is a hybrid: a fixed core module serving as central command, plus six of the lab’s dynamic SMORES-EP modules. The core module carries an RGB+depth camera for awareness of its surroundings, while a separate downward-looking camera watches the SMORES modules, giving the robot awareness of its own configuration.

Combining that data with a mix of open robot-research software and new machine-specific code, the team’s creation autonomously navigates an unfamiliar test environment. While it can adapt to specific terrain challenges like a wooden staircase, there are still limits on the situations it can handle. Kudos to the researchers for honestly showing and explaining how the robot can get stuck on a ground seam, rather than editing that gaffe out to cover it up.
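If you’re curious what that decision layer might look like in the abstract, here’s a minimal Python sketch of the perceive-then-reconfigure loop. To be clear, every name in it (the Obstacle class, CONFIG_LIBRARY, choose_configuration) is our own invention for illustration, not anything pulled from the ModLab code:

```python
# Minimal sketch of the perceive-then-reconfigure idea. All names here are
# hypothetical and invented for illustration; this is not the ModLab code.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # perceived terrain feature, e.g. "flat", "stairs", "tunnel"
    height_m: float  # rough size estimate from the depth camera

# Stand-in for a library of named module configurations keyed by terrain.
CONFIG_LIBRARY = {
    "flat":   "driving_car",
    "stairs": "stair_climber",
    "tunnel": "low_profile_proboscis",
}

def choose_configuration(obstacle: Obstacle) -> str:
    """Map a perceived terrain feature to a named configuration.

    Falls back to the driving configuration when nothing matches, which is
    roughly where failures like the ground seam live: a feature the
    perception stack never labels can't trigger a reconfiguration.
    """
    return CONFIG_LIBRARY.get(obstacle.kind, "driving_car")

print(choose_configuration(Obstacle(kind="stairs", height_m=0.17)))  # stair_climber
```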

While this robot isn’t the completely decentralized modular robot system some are aiming for, it would be a mistake to dismiss it on that criticism alone. At the very least, it is an instructive step on the journey, offering a tradeoff that’s useful on its own merits. And perhaps this hybrid approach will find application in a modular robot close to our hearts: Dtto, winner of our 2016 Hackaday Prize.

[via Science News]

Continue reading “SMORES Robot Finds Its Own Way To The Campfire”

[Vijay Kumar’s] TED talk on the state of quadcopter research

[Vijay Kumar] is a professor at the University of Pennsylvania and director of the GRASP Lab, where research centered on autonomous quadcopters is meeting with great success. If you were intrigued by the video demonstrations of the last few years, you won’t want to miss the TED talk [Dr. Kumar] recently gave on the program’s research. We touched on this the other week when we featured a swarm of the robots in a music video, but there’s a lot more to be learned about what this kind of swarm coordination means going forward.

We’re always wondering where this technology will go, since all of the experiments we’ve seen depend on an array of high-speed cameras to give positional feedback to each bot in the swarm. The image above is a screenshot taken about twelve minutes into the TED talk video (embedded after the break). Here [Dr. Kumar] addresses the issue of moving beyond those cameras. The quadcopter shown on the projection screen is one possible solution: it carries a Kinect depth camera and a laser rangefinder, and it is a mapping robot designed to enter an unknown structure and build a 3D model of the environment.
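Mapping like that boils down to a surprisingly simple core, which we can sketch in a few lines of Python: each range return carves out free space along the beam and marks the cell where it hit as occupied. This toy occupancy grid assumes a known sensor pose and is purely our own illustration, not the GRASP lab’s pipeline:

```python
# Toy 2D occupancy grid, assuming a known sensor pose; illustrative only,
# not the GRASP lab's mapping pipeline. Each range return marks cells along
# the ray as free and the cell at the hit as occupied.

import math
import numpy as np

CELL = 0.05                                 # metres per grid cell
GRID = np.zeros((100, 100), dtype=np.int8)  # 0 = unknown, 1 = free, 2 = occupied

def insert_ray(x, y, theta, dist):
    """Trace one range measurement from pose (x, y) along bearing theta."""
    for i in range(int(dist / CELL)):
        cx = int((x + i * CELL * math.cos(theta)) / CELL)
        cy = int((y + i * CELL * math.sin(theta)) / CELL)
        if 0 <= cx < GRID.shape[1] and 0 <= cy < GRID.shape[0]:
            GRID[cy, cx] = 1                # free space along the beam
    hx = int((x + dist * math.cos(theta)) / CELL)
    hy = int((y + dist * math.sin(theta)) / CELL)
    if 0 <= hx < GRID.shape[1] and 0 <= hy < GRID.shape[0]:
        GRID[hy, hx] = 2                    # obstacle at the end of the beam

# One simulated forward sweep from the middle of the map.
for bearing in np.linspace(-math.pi / 4, math.pi / 4, 90):
    insert_ray(2.5, 2.5, bearing, dist=1.2)
print((GRID == 2).sum(), "occupied cells after one sweep")
```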

The benefits of that capability are obvious, but it raises one other possibility in our minds. Since the robots are designed to function as an autonomous swarm, could they all be outfitted with cameras and serve as the positional-feedback grid for one another? Let us know what you think in the comments section.
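For a back-of-the-envelope sense of how that could work, two copters that know their own poses could each measure a camera bearing to a third and intersect the rays, the classic triangulation fix. The 2D sketch below (function name and all) is entirely our speculation, not something from the talk:

```python
# Back-of-the-envelope triangulation for the mutual-localization idea: two
# copters at known poses each measure a camera bearing to a third, and the
# rays are intersected. Pure 2D speculation on our part, not from the talk.

import math

def intersect_bearings(p1, b1, p2, b2):
    """Intersect two bearing rays (origin, angle); return the fix or None."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # rays (nearly) parallel: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom  # distance along the first ray
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Observers at (0, 0) and (4, 0) both sighting a target at (2, 2).
print(intersect_bearings((0, 0), math.atan2(2, 2), (4, 0), math.atan2(2, -2)))
```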

Continue reading “[Vijay Kumar’s] TED talk on the state of quadcopter research”