[Vijay Kumar] is a professor at the University of Pennsylvania and the director of the GRASP lab, where research centered on autonomous quadcopters has been met with great success. If you were intrigued by the video demonstrations seen over the last few years, you won’t want to miss the TED talk [Dr. Kumar] recently gave on the program’s research. We touched on this the other week when we featured a swarm of the robots in a music video, but there’s a lot more to be learned about what this type of swarm coordination means moving forward.
We’re always wondering where this technology will go, since all of the experiments we’ve seen depend on an array of high-speed cameras to give positional feedback to each bot in the swarm. The image above is a screenshot taken about twelve minutes into the TED talk video (embedded after the break). Here [Dr. Kumar] addresses the issue of moving beyond those cameras. The quadcopter shown on the projection screen is one possible solution. It carries a Kinect depth camera and a laser rangefinder. This is a mapping robot, designed to enter an unknown structure and build a 3D model of the environment as it goes.
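If you’re curious how a rangefinder-equipped bot turns raw distance readings into a map, here’s a minimal, illustrative sketch of the occupancy-grid idea commonly used for this kind of mapping. This is our own toy example (the grid size, cell size, and function names are made up, not the GRASP lab’s code), and it works in 2D for simplicity, but the concept extends to 3D:

```python
import math

# Toy occupancy-grid update from a simulated laser rangefinder beam.
# Cells the beam passes through are marked free; the cell where the
# beam terminates is marked occupied. 0.5 means "unknown".

GRID = 20          # grid is GRID x GRID cells
CELL = 0.25        # each cell is 0.25 m on a side (5 m x 5 m map)
grid = [[0.5] * GRID for _ in range(GRID)]

def update(grid, x, y, angle, dist, max_range=4.0):
    """March along one beam from (x, y) in metres, clearing the cells
    it passes through; if the beam hit something (dist < max_range),
    mark the terminal cell occupied."""
    step = CELL / 2
    for i in range(int(dist / step)):
        d = i * step
        cx = int((x + d * math.cos(angle)) / CELL)
        cy = int((y + d * math.sin(angle)) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] = 0.0          # free space
    if dist < max_range:
        cx = int((x + dist * math.cos(angle)) / CELL)
        cy = int((y + dist * math.sin(angle)) / CELL)
        if 0 <= cx < GRID and 0 <= cy < GRID:
            grid[cy][cx] = 1.0          # occupied

# One fake scan: robot at the map centre, a wall 2 m to the east.
update(grid, 2.5, 2.5, 0.0, 2.0)
```

A real system fires hundreds of these beams per scan while simultaneously estimating the robot’s own pose (the SLAM problem), but the core bookkeeping is just this: ray-trace, clear, mark.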
The benefits of this information are obvious, but it raises one other possibility in our minds. Since the robots are designed to function as an autonomous swarm, could they all be outfitted with cameras and serve as the positional-feedback grid for one another? Let us know what you think in the comments section.