Kinect Gave Us A Preview Of The Future, Though Not The One It Intended

This holiday season, the video game industry hype machine is focused on building excitement for new PlayStation and Xbox consoles. Ten years ago, a similar chorus of hype reached a crescendo with the release of Xbox Kinect, promising to revolutionize how we play. That vision never panned out, but as [Daniel Cooper] of Engadget pointed out in a Kinect retrospective, it premiered consumer technologies that impacted fields far beyond gaming.

Kinect has since withdrawn from the gaming market because, as it turns out, gamers are quite content with handheld controllers. This year’s new controllers for PlayStation and Xbox would be immediately familiar to gamers from ten years ago. Even Nintendo, whose Wii is frequently credited as the motivation for Microsoft to develop the Kinect, has arguably taken a step back with the Joy-Cons of its Switch.

But the Kinect’s success at bringing a depth camera down to consumer price levels paved the way to explore many ideas that were previously impossible. The flurry of enthusiastic Kinect hacking proved there is a market for depth camera peripherals, leading to plug-and-play devices like the Intel RealSense line that make depth-sensing projects easier. The original PrimeSense technology has since been simplified and miniaturized into the Face ID module that unlocks Apple phones. Kinect itself found another job with Microsoft’s HoloLens AR headset. And let’s not forget the upcoming wave of autonomous cars and drones, many of which will see their worlds via depth sensors of some kind. Some might even be equipped with the latest sensor to wear the Kinect name.

Inside the Kinect was also one of the earliest microphone arrays sold to consumers, enabling it to figure out which direction a voice is coming from and isolate it from other noises in the room. Such technology was previously the exclusive domain of expensive corporate conference room speakerphones, but now it forms the core of inexpensive home assistants like the Amazon Echo Dot, raising the bar so much that hacks needed many more microphones just to stand out.
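
For the curious, the direction-finding trick boils down to measuring the tiny difference in when the same sound arrives at two or more microphones. Below is a minimal sketch of that idea in Python with NumPy; the microphone spacing, sample rate, and test signal are made-up values for illustration, not anything pulled from the Kinect’s actual array.

```python
import numpy as np

SAMPLE_RATE = 48_000       # Hz, assumed capture rate
MIC_SPACING = 0.1          # metres between two mics (made-up value)
SPEED_OF_SOUND = 343.0     # m/s at room temperature

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate a source bearing from two microphone signals.

    Plain cross-correlation finds the delay (in samples) between the
    channels; that time difference of arrival is then converted into
    an angle relative to broadside.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)      # delay in samples
    tau = lag / SAMPLE_RATE                       # delay in seconds
    # Clamp to the physically possible range before taking arcsin
    sin_theta = np.clip(SPEED_OF_SOUND * tau / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Toy test: the left mic hears the same noise burst 5 samples late
rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)
left, right = np.roll(noise, 5), noise
print(f"Estimated bearing: {estimate_bearing(left, right):.1f} degrees")
```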

With the technology now easily available elsewhere, the attrition of a discontinued device shows in the dwindling number of recent Kinect hacks on these pages. We still see a cool project every now and then, though. As the classic sensor bar itself recedes into history, other devices will take its place to give us depth sensing and smart audio. But for many of us, the Kinect was the ambitious videogame peripheral that gave us our first experience with those technologies.

Handheld 3D Scanning, Using Raspberry Pi 4 And Intel RealSense Camera

Raspberry Pi 4 (with USB 3.0) and Intel RealSense D415 depth sensing camera.

When the Raspberry Pi 4 came out, [Frank Zhao] saw the potential to make a realtime 3D scanner that was completely handheld and self-contained. The main sensor is an Intel RealSense D415 depth-sensing camera, which pairs two IR cameras and an RGB camera with the Raspberry Pi 4. The Pi runs a piece of software called RTAB-Map (intended for robotic applications) that uses the camera data to map the environment in 3D and localize the scanner within that 3D space. Everything gets recorded in realtime.

This handheld device can act as a 3D scanner because the data gathered by RTAB-Map consists of a point cloud of an area as well as depth information. When combined with the origin of the sensing unit (i.e. the location of the camera within that area), the point cloud can be exported as a mesh, with a texture derived from the camera footage applied on top. An example is shown below the break.
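
If you’re curious what the raw capture side looks like before RTAB-Map gets involved, here is a rough sketch using Intel’s pyrealsense2 Python bindings to grab a single frame from a D415 and dump it as a textured PLY point cloud. This isn’t [Frank Zhao]’s code, just an illustration of the kind of data the scanner is built on; the stream settings are typical defaults.

```python
import pyrealsense2 as rs

# Configure depth and color streams at typical D415 defaults
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    # Let auto-exposure settle, then keep the last set of frames
    for _ in range(30):
        frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    # Project the depth image into a 3D point cloud, texture it with
    # the color stream, and save the result as a PLY file
    pc = rs.pointcloud()
    pc.map_to(color_frame)
    points = pc.calculate(depth_frame)
    points.export_to_ply("single_frame.ply", color_frame)
    print("Wrote single_frame.ply")
finally:
    pipeline.stop()
```
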
Continue reading “Handheld 3D Scanning, Using Raspberry Pi 4 And Intel RealSense Camera”

Augmented Reality Aids In The Fight Against COVID-19

“Know your enemy” is the essence of one of the most famous quotes from [Sun Tzu]’s Art of War, and it’s as true now as it was 2,500 years ago. It also applies far beyond the martial arts, and as the world squares off for battle against COVID-19, it’s especially important to know the enemy: the novel coronavirus now dubbed SARS-CoV-2. And now, augmented reality technology is giving a boost to the search for fatal flaws in the virus that can be exploited to defeat it.

The video below is a fascinating mix of 3D models of viral structures, like the external spike glycoproteins that give coronaviruses their characteristic crown appearance, layered onto live video of [Tom Goddard], a programmer/analyst at the University of California San Francisco. The tool he’s using is called ChimeraX, a molecular visualization program developed by him and his colleagues. He actually refers to this setup as “mixed reality” rather than “augmented reality”, to stress the fact that AR tends to be an experience that only the user can fully appreciate, whereas this system allows him to act as a guide on a virtual tour of the smallest of structures.

Using a depth-sensing camera and a VR headset, [Tom] is able to manipulate 3D models of the SARS virus (we don’t yet have full 3D structure data for the novel coronavirus proteins) to show us exactly how SARS binds to its receptor, angiotensin-converting enzyme-2 (ACE-2), a protein expressed on the cell surfaces of many different tissue types. It’s fascinating to see how the binding domain of the spike reaches out to latch onto ACE-2 to begin the process of invading a cell; it’s also heartening to watch [Tom]’s simulation of how the immune system responds to and blocks that binding.

It looks like ChimeraX and similar AR systems are going to prove to be powerful tools in the fight against not just COVID-19 but all kinds of infectious diseases. Hats off to [Tom] and his team for making them available to researchers free of charge.

Continue reading “Augmented Reality Aids In The Fight Against COVID-19”

New Part Day: Mapping With RealSense Cameras For $200

Robot cars, DIY or otherwise, are hot right now. To do this right, you’re going to need cameras, LIDAR, or some other way of sensing the world. Intel is again getting into the fray with a RealSense tracking camera that handles simultaneous localization and mapping (SLAM) for robotics, drone, and augmented reality needs.

The tech specs for the Intel RealSense T265 are impressive for small robotics uses. It includes 6DoF tracking gathered by two cameras, each with a 170° FoV. Connection to a computer is through USB 2.0 or 3.0. If you want an idea of how seriously Intel is taking the ‘robotics, and other power- and weight-limited platforms’ market, here’s a sample of what is on the one-page spec sheet: the T265 uses only 1.5 Watts, weighs 55 grams, and measures 108 x 25 x 13 mm. There are also two M3 taps spaced 50 mm apart on the back, an astonishing spec to publish on a product landing page. The fact that the location and dimensions of the mounting holes are given such prominence says a lot about the robotics and prototyping crowd Intel has in mind.
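
If you’re wondering what that 6DoF tracking looks like from the software side, librealsense exposes the T265 as a pose stream. Here’s a minimal sketch using the pyrealsense2 Python bindings that simply prints the camera’s position as you wave it around; treat it as a starting point rather than a finished recipe.

```python
import pyrealsense2 as rs

# The T265 publishes its fused 6DoF output as a dedicated pose stream
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.pose)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        pose = frames.get_pose_frame()
        if not pose:
            continue
        data = pose.get_pose_data()
        # Translation is in metres relative to where tracking started;
        # tracker_confidence ranges from 0 (failed) to 3 (high)
        print(f"x={data.translation.x:+.3f}  "
              f"y={data.translation.y:+.3f}  "
              f"z={data.translation.z:+.3f}  "
              f"confidence={data.tracker_confidence}")
except KeyboardInterrupt:
    pass
finally:
    pipeline.stop()
```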

This new SLAM camera complements Intel’s other tracking camera offerings, including those we’ve seen at Maker Faires past. It’s a competitor to the new crop of solid state LIDAR modules we’ve seen pop up recently. It’s not a Kinect, but we’re years past using a first-gen Kinect for robotics applications. Now everything is custom chips and SLAM processing, and the RealSense T265 is the smallest platform yet to do it.

Hackaday Prize Entry: HaptiVision Creates A Net Of Vibration Motors

HaptiVision is a haptic feedback system for the blind that builds on a wide array of vibration belts and haptic vests. It’s a smart concept, giving the wearer a warning when an obstruction comes into sensor view.

The earliest research into haptic feedback wearables used ultrasonic sensors, and more recent developments used a Kinect. The project team for HaptiVision chose the Intel RealSense camera because of its svelte form factor. Part of the goal was to make the HaptiVision as discreet as possible, so fitting the whole rig under a shirt was part of the plan.

In addition to a RealSense camera, the team used an Intel Up board for the brains, mostly because it natively supports the RealSense camera. It takes a 640×480 IR snapshot and selectively triggers the 128 vibration motors to tell you what’s close. The motors are driven by 8 PCA9685-based PWM expander boards.
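
Conceptually, the depth-to-vibration mapping is straightforward: divide the depth image into a grid of 128 cells, find the nearest obstacle in each cell, and drive the matching motor harder the closer that obstacle is. Here’s a hedged sketch of that idea in Python using NumPy and Adafruit’s CircuitPython PCA9685 driver; the 16×8 grid layout, I2C addresses, and distance scaling are assumptions for illustration, not the HaptiVision team’s actual firmware.

```python
import numpy as np
import board
import busio
from adafruit_pca9685 import PCA9685

GRID_W, GRID_H = 16, 8           # 16 x 8 = 128 motors (assumed layout)
NEAR_MM, FAR_MM = 300, 3000      # full buzz at 30 cm, silent past 3 m

# Eight PCA9685 boards, each driving 16 motors (addresses assumed)
i2c = busio.I2C(board.SCL, board.SDA)
drivers = [PCA9685(i2c, address=0x40 + n) for n in range(8)]
for drv in drivers:
    drv.frequency = 200          # PWM frequency for the vibration motors

def update_motors(depth_mm: np.ndarray) -> None:
    """Map a 640x480 depth image (in millimetres) onto the motor grid."""
    h, w = depth_mm.shape
    cell_h, cell_w = h // GRID_H, w // GRID_W
    for row in range(GRID_H):
        for col in range(GRID_W):
            cell = depth_mm[row * cell_h:(row + 1) * cell_h,
                            col * cell_w:(col + 1) * cell_w]
            valid = cell[cell > 0]              # zero means no reading
            nearest = float(valid.min()) if valid.size else FAR_MM
            # Closer obstacle -> stronger vibration (0.0 .. 1.0)
            strength = np.clip((FAR_MM - nearest) / (FAR_MM - NEAR_MM), 0, 1)
            motor = row * GRID_W + col
            duty = int(strength * 0xFFFF)
            drivers[motor // 16].channels[motor % 16].duty_cycle = duty
```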

The project is based on David Antón Sánchez’s OpenVNAVI project, which also featured a 128-motor array. HaptiVision aims to create an easy-to-replicate haptic system. Everything is open source, and all of the wiring clips and motor mounts are 3D-printable.

Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor

Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team creating SNAP: Augmented Echolocation are using much stronger computational power to translate robotic vision into a 3D soundscape.

The SNAP team starts with an Intel RealSense R200. The first part of the processing happens in the camera itself, since it outputs a depth map, which takes the heavy lifting out of robotic vision. From there, an AAEON Up board, packaged with the RealSense, takes the depth map and associates sound with the objects in the field of view.

Binaural sound generation is a feat in itself and works on the principle that our brains process incoming sound from both ears to understand where a sound originates. Our eyes do the same thing. We are bilateral creatures, so using two ears or two eyes to understand our environment is already part of the human operating system.
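
As a rough illustration of that principle, the sketch below uses NumPy to place a tone off to one side of the listener by delaying and attenuating one channel relative to the other, mimicking the interaural time and level differences our brains use as cues. The head-size constant and the simple panning curve are crude assumptions, nothing like the full head-related transfer functions a system like SNAP would actually need.

```python
import numpy as np

SAMPLE_RATE = 44_100
HEAD_RADIUS = 0.09        # metres, rough average head radius (assumed)
SPEED_OF_SOUND = 343.0    # m/s

def place_tone(freq_hz: float, azimuth_deg: float, duration_s: float = 1.0):
    """Return a stereo (N, 2) array with a tone panned by ITD and ILD.

    Positive azimuth is to the listener's right: the right ear gets
    the sound slightly earlier and slightly louder than the left.
    """
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq_hz * t)

    az = np.radians(azimuth_deg)
    # Interaural time difference (Woodworth-style spherical-head estimate)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + np.sin(abs(az)))
    shift = int(itd * SAMPLE_RATE)                # delay in samples
    # Interaural level difference: attenuate the far ear by up to ~6 dB
    far_gain = 1.0 - 0.5 * abs(np.sin(az))

    near = tone
    far = far_gain * np.pad(tone, (shift, 0))[: len(tone)]   # delayed, quieter ear

    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# A 440 Hz tone placed 45 degrees to the listener's right
stereo = place_tone(440.0, azimuth_deg=45.0)
```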

In the video after the break, we see a demonstration where the wearer doesn’t need to move his head to realize what is happening in front of him. Instead of a single distance reading, where the wearer must systematically scan the area, the wearer simply has to face the right way.

Another Assistive Technology entry used the traditional ultrasonic distance sensor instead of robotic vision. There is even a version out there, called Bottlenose and covered in Cyberpunk Yourself, for augmented humans with magnet implants.

Continue reading “Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor”

Intel’s Vision For Single Board Computers Is To Have Better Vision

At the Bay Area Maker Faire last weekend, Intel was showing off a couple of sexy newcomers in the Single Board Computer (SBC) market. It’s easy to get trapped into thinking that SBCs are all about simple boards with a double-digit price tag like the Raspberry Pi. How can you compete with a $35 computer that has a huge market share and a gigantic community? You compete by appealing to a crowd not satisfied with these entry-level SBCs, and for that Intel appears to be targeting a much higher-end audience that needs computer vision along with the speed and horsepower to do something meaningful with it.

I caught up with Intel’s “Maker Czar”, Jay Melican, at Maker Faire Bay Area last weekend. A year ago, it was a Nintendo Power Glove-controlled quadcopter that caught my eye. This year I only had eyes for the two new computing modules on offer, the Joule and the Euclid. They both focus on connecting powerful processors to high-resolution cameras and using a full-blown Linux operating system for the image processing. But it feels like the Joule is meant more for your average hardware hacker, and the Euclid for software engineers who are pointing their skills at robots but don’t want to get bogged down in the first principles of hardware. Before you rage about this in the comments, let me explain.

Continue reading “Intel’s Vision For Single Board Computers Is To Have Better Vision”