Double 3: Your Instant Physical Presence Anywhere, No Matter Where You Are

Telepresence is one of those futuristic buzzwords that has popped up a few times over the decades, promising the ability to attend a meeting in New York City and another in Tokyo an hour later, all without leaving the comfort of your home or office. This is the premise of Double Robotics' Double 3, the company's most recent entry in this market segment and a commercial counterpoint to more DIY offerings.

More than just a glorified tablet screen.

Looking like a tablet perched on top of a Segway, the Double 3 uses its dual built-in 13-megapixel cameras to give the operator a good look at the surroundings, while its six beamforming microphones should, in theory, pick up any conversation in a meeting or on the work floor.

Battery life is limited to four hours, and it takes two hours to recharge the built-in battery. Fortunately, one can just hop over to another freshly charged Double 3 if the battery runs out. Assuming the $3,999 price tag doesn't get in the way of building up a fleet of them, anyway.

Probably the most interesting aspect of the product is its self-driving feature, which is why it carries a whole range of sensors and cameras, including Intel RealSense D430 stereo-vision depth sensors. To process all of this sensor data, the system is equipped with an NVIDIA Jetson TX2 ARM board running Ubuntu Linux, which also renders the mixed-reality UI for the user, complete with waypoints and other information.

Double Robotics is currently accepting sign-ups for the private beta of the Double 3 API, which will give developers access to the sensor data and various autonomous features of the Double 3's hardware. Double Robotics co-founder [Marc DeVidts] told Hackaday that he is looking forward to seeing what people build with it. Hopefully this time people won't simply take the thing for a joyride, as happened with a predecessor of the Double 3.
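The API itself is still under wraps, but the depth hardware it sits on is off-the-shelf. As a rough idea of the kind of data developers might get to play with, here's a minimal sketch that pulls a depth frame from a RealSense camera using the standard pyrealsense2 bindings; it's purely illustrative and is not the Double 3 SDK.

```python
# Illustrative sketch only: reading depth frames from an Intel RealSense
# depth camera with the stock pyrealsense2 bindings. The actual Double 3
# API is in private beta, so this just shows the kind of data it exposes.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# D430-class modules stream 16-bit depth; 640x480 at 30 fps is a safe default.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance (in metres) to whatever sits at the centre of the frame,
    # the sort of measurement an obstacle-avoidance routine would use.
    centre_distance = depth.get_distance(depth.get_width() // 2,
                                         depth.get_height() // 2)
    print(f"Obstacle at {centre_distance:.2f} m dead ahead")
finally:
    pipeline.stop()
```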

Robots Invade Your Personal Space

If you have ever had to complete a task such as building a LEGO model over a remote connection, you will know that the challenge feels like an absurd grade-school group project. The person giving directions often has trouble describing what they are thinking, and the person doing the work has trouble interpreting what the instructor wants. “Turn the blue block over. No, only half way. Go back. Now turn it. No, the other way. NO! Not clockwise, downward. That’s upward! Geez. Are you even listening‽” Good times.

While you may not be in this situation every day, researchers at Japan’s Keio University have come up with an intuitive way for an instructor to physically interact with an instructee through a Moore/Swayze experience. A camera rides over the wearer’s shoulder in typical pirate-parrot placement, and two robotic arms are controlled by the instructor, who looks through stereoscopic cameras for a first-person view from across the globe. This natural way of interacting with the wearer’s environment allows muscle memory to pass from the instructor to the wearer.

For some of the other styles of telepresence, see this deep-sea bot and a cylindrical screen that looks like someone is beaming up directly from the holodeck.

Continue reading “Robots Invade Your Personal Space”

A Telepresence System That’s Starting To Feel Like A Holodeck

[Dr. Roel Vertegaal] has led a team of collaborators from [Queen’s University] to build TeleHuman 2 — a telepresence setup that aims to project your actual-size likeness in 3D.

Developed primarily for business videoconferencing, the setup requires a bit of space on both ends of the call. A ring of stereoscopic z-cameras captures the subject from all angles, and a corresponding ring of projectors on the other end displays the result. Those projectors are arranged in a similar halo above a human-sized, retro-reflective cylindrical screen that can be walked around, giving a view of the image from any angle, in real time, without a VR headset or glasses!

Continue reading “A Telepresence System That’s Starting To Feel Like A Holodeck”

Hackaday Prize Entry: Telepresence With The Black Mirror Project

The future is VR, or at least that’s what it was two years ago. Until then, there’s still plenty of time to experiment with virtual worlds, the Metaverse, and other high-concept sci-fi tropes from the 80s and 90s. Interactive telepresence is what the Black Mirror Project is all about. The plan is to build interactive software on top of the JanusVR platform for creating immersive VR experiences.

The Black Mirror project makes use of glTF runtime 3D asset delivery to create environments ranging from simple telepresence to the mind-bending realities the team unabashedly compares to [Neal Stephenson]’s Metaverse.
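glTF is essentially a JSON scene description with binary buffers for the geometry, which is what makes it light enough to stream to clients at runtime. As a rough illustration only (the file name is made up, and this uses the open-source trimesh library rather than anything from JanusVR or the Black Mirror codebase), here is how one might peek inside such an asset:

```python
# Hypothetical example: inspecting a glTF binary (.glb) asset with the
# open-source trimesh library. "meeting_room.glb" is a stand-in file name.
import trimesh

scene = trimesh.load("meeting_room.glb", force="scene")  # parse the scene graph
for name, geometry in scene.geometry.items():
    print(f"{name}: {len(geometry.vertices)} vertices, "
          f"{len(geometry.faces)} triangles")
```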

For their hardware implementation, the team is looking at UDOO X86 single-board computers, with SSDs for data storage as well as a bevy of sensors — gesture, light, accelerometer, magnetometer — supplying the computer with data. There’s an Intel RealSense camera in the build, and the display is unlike any other VR setup we’ve seen before. It’s a tensor display with multiple projection planes and variable backlighting that has a greater depth of field and wider field of view than almost any other display.

Telepresence Robot 2000 Leagues Under The Sea

Telepresence robots are now a reality: you can wheel around the office and talk to people, join a meeting, see stuff, and bump into your colleagues. But imagine if telepresence were applied to deep-sea exploration. Today we can all become oceanographers through the telepresence system created by Bob Ballard (known for locating the Titanic, discovering deep-sea geothermal vents, and more) and his team at the Inner Space Center. Put on your Submariner wristwatch, because it’s time for all of us to explore the ocean depths from the comfort of our home or office.

Continue reading “Telepresence Robot 2000 Leagues Under The Sea”

The Internet Of Interactive Cats

[Tuco] is a cat who shares the space of [Micah Elizabeth Scott]. He is a large tabby tomcat, and he is polydactyl, which is to say he has a congenital excess of toes. He is an extremely active and engaging creature and enjoys playing and interacting with her. We covet [Tuco].

Sadly for the rest of us who love cats, of course, unless we know [Micah] personally we’ll never have the opportunity to play with [Tuco]. She appreciates the cat-shaped void that leaves in our lives, and to help fill it she’s building a telepresence robot that lets us interact with him in real time.

Her idea is to make a flying robot equipped with a camera on a gimbal, but because mounting it on a multirotor platform would be a hazard, she’s instead making something closer to the aerial cameras you might be familiar with from sporting fixtures: a motorised platform suspended from the corners of her roof space on a set of nylon ropes, which can move at will by adjusting the length of each tether. One day the device may even be able to launch plastic bolts for [Tuco] to chase, along with other interactive features that let online users engage with him.
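The geometry behind “adjusting the length of each tether” is the same as any cable-driven parallel robot: each rope simply has to be paid out to the straight-line distance between its anchor and wherever you want the platform to sit. A toy sketch of that idea, with made-up room dimensions that are not [Micah]’s actual build:

```python
# Back-of-the-envelope sketch (not [Micah]'s firmware): a camera platform
# hung from four roof-corner anchors on winched ropes. The length each winch
# must pay out is the straight-line distance from its anchor to the target.
import math

# Hypothetical room: 6 m x 4 m floor, anchors 2.5 m up in each corner.
ANCHORS = [(0.0, 0.0, 2.5), (6.0, 0.0, 2.5), (6.0, 4.0, 2.5), (0.0, 4.0, 2.5)]

def tether_lengths(x, y, z):
    """Rope length from each anchor to the platform at (x, y, z), in metres."""
    return [math.dist((x, y, z), anchor) for anchor in ANCHORS]

# Park the camera 1.5 m off the floor, near the middle of the room.
for i, length in enumerate(tether_lengths(3.0, 2.0, 1.5)):
    print(f"winch {i}: pay out {length:.2f} m of rope")
```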

The video introducing the project, which we’ve placed below the break, shows progress so far: she has completed a prototype windlass mechanism and worked on reverse-engineering the gimbal mechanism for serial control. We’ll probably never meet [Tuco] in person, but we can’t wait to interact with him online.

Continue reading “The Internet Of Interactive Cats”

Keeping Humanity Safe From Robots At Disney

Almost every big corporation has a research and development organization, so it came as no surprise when we found a tip about Disney Research in the Hackaday Tip Line. And that the project in question turned out to involve human-safe haptic telepresence robots makes perfect sense, especially when your business is keeping the Happiest Place on Earth running smoothly.

That Disney wants to make sure their Animatronics are safe is good news, but the Disney project is about more than keeping guests healthy. The video after the break and the accompanying paper (PDF link) describe a telepresence robot with a unique hydrostatic transmission coupling it to the operator. The actuators are based on a rolling-diaphragm design that limits hydraulic pressure. In a human-safe system that’s exactly what you want.

The system is a hybrid hydraulic-pneumatic design; two actuators, one powered by water pressure and the other by air, oppose each other in each joint. The air-charged actuators behave like a mass-efficient spring that preloads the hydraulic actuator, which increases safety by allowing the system to be de-energized instantly by venting the air lines. What’s more, the whole system presents very low mechanical impedance, allowing haptic feedback to reach the operator through the system fluid. That provides enough sensitivity to handle an egg, thread a needle, or even bop a kid’s face with impunity.
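To get a feel for why venting the air side works as a kill switch, here is a heavily simplified force balance for one opposed pair of rolling-diaphragm actuators. The numbers and the model are ours, not from the paper: a single diaphragm can only push, so the joint’s ability to pull is limited by the air preload, and dumping that preload collapses it to nothing.

```python
# Toy model, not from the Disney paper: a water-driven rolling-diaphragm
# actuator opposed by an air-charged one acting as a preload spring.
# Net joint torque ~ moment_arm * area * (P_water - P_air), so the joint
# can only "pull" as hard as the air preload allows.
import math

DIAPHRAGM_DIAMETER = 0.028   # effective diameter in metres (made-up)
MOMENT_ARM = 0.04            # metres (made-up)
AREA = math.pi * (DIAPHRAGM_DIAMETER / 2) ** 2

def joint_torque(p_water, p_air):
    """Net torque in N*m for gauge pressures given in pascals."""
    return MOMENT_ARM * AREA * (p_water - p_air)

PRELOAD = 200e3  # 200 kPa gauge preload on the air side (made-up)

# Operator pulls: the water line can only drop to ~0 gauge, so the hardest
# pull is set by the preload...
print("max pull, preloaded:", joint_torque(0, PRELOAD))  # negative torque
# ...and venting the air lines removes that capability entirely.
print("max pull, vented:   ", joint_torque(0, 0))        # zero
```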

There are some great ideas here for robotics hackers, and you’ve got to admire the engineering that went into these actuators. For more research from the House of Mouse, check out this slightly creepy touch-sensitive smart watch, or this air-cannon haptic feedback generator.

Continue reading “Keeping Humanity Safe From Robots At Disney”