At first we thought it was awesome, then we thought it was ridiculous, and now we’re pretty much settled on “ridiculawesome”.
BitDrones is a prototype human-computer interface that uses tiny quadcopters as pixels in a 3D immersive display. That’s the super-cool part. “PixelDrones” have an LED on top. “ShapeDrones” have a gauzy cage illuminated by color LEDs, turning each one into a life-size color voxel. (Cool!) Finally, a “DisplayDrone” has a touchscreen mounted to it. A computer tracks each drone’s location in the room, and they work together to create a walk-in 3D “display”. So far, so awesome.
It gets even better. Because the program that commands the drones knows where each drone is, it can tell when you’ve moved one around in space. That’s extremely cool, and it opens up the platform to new interactions. And the DisplayDrone is like a tiny flying cellphone, so you can video-chat hands-free with friends while it hovers around your room. Check out the video embedded below the break.
On the other hand, for a human-computer interface, we have to say that the examples they picked just aren’t a good fit. For instance, have a look in the video where the drones are used to represent files in a directory (42 seconds in). Either there are only three files in the directory, or they ran out of drones. And rotating through the three is a bit slow because the physical drones have to fly to new locations to select a file. We’re thinking of that directory with 200 photos that we just downloaded from our camera.
The section of the video where they use the three ShapeDrones to stand in for a 3D physical model (52 seconds in) is awesome again. The user positions the drones in space, where they hover, presumably moving around some part of a 3D model on a computer somewhere. We love it. But again they run into the difference between virtual and real: three voxels, even if they’re very elegant and in RGB color, aren’t enough for any serious modeling. Even a modest real-world project is going to take a fleet of these things, which would be fantastic but incredibly expensive.
Finally, epitomizing the simultaneous awesome and head-scratching of this project, they translate the pinch gesture (to enlarge or shrink an image) from the touchscreen to the real world (1:52 into the video). That’s super cool in principle, and we love the movement. But in the video, the demo guy nearly gets hit on the shoulder by the third pixel shrinking inwards toward the other two. We’re imagining a user with long hair and a model with many more copters. The horror!
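The project’s control code isn’t shown here, but as a back-of-the-envelope sketch, the real-world pinch gesture amounts to scaling each drone’s position about the group’s centroid. The function name and coordinate conventions below are our own invention, not anything from BitDrones:

```python
# Toy sketch (not the BitDrones code): apply a pinch "scale" gesture to
# drone positions by moving each one toward or away from the centroid.
# Positions are hypothetical (x, y, z) coordinates in meters.

def pinch_scale(positions, factor):
    """Scale each position about the group's centroid by `factor`.

    factor > 1 spreads the drones apart (enlarge);
    factor < 1 pulls them together (shrink) -- which is exactly how a
    shrinking voxel can end up heading for the demo guy's shoulder.
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    cz = sum(p[2] for p in positions) / n
    return [
        (cx + (x - cx) * factor,
         cy + (y - cy) * factor,
         cz + (z - cz) * factor)
        for (x, y, z) in positions
    ]

# Three hovering drones; a pinch-to-shrink halves their spread.
drones = [(0.0, 0.0, 1.5), (1.0, 0.0, 1.5), (0.5, 1.0, 1.5)]
print(pinch_scale(drones, 0.5))
```

Note the inherent hazard this makes obvious: unlike pixels, every scaled position is a physical waypoint that a quadcopter must actually fly through.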
In the end, to use an interaction-design term, the virtual and physical worlds have different affordances, and that’s a physical fact that the project either tries to blur or simply ignores. Quadcopters aren’t pixels: you can’t have a million of them, and even if you could, they’d be slow to reconfigure and harder to manipulate. That said, something beautiful has been designed here, and we’re absolutely sure that there’s an application out there that fits it.
So, for navigating files, we’ll stick with the mouse. But we totally want a room full of flying RGB cubes!