Your Vacuum Cleaner Follows You

There are several projects you can imagine where it would be useful to have a robot follow you. For example, we’ve always wanted luggage that would trail us through the airport, and we’ve seen several coolers that will follow you around. [Madmax95] apparently dreams of having a medical cart follow a patient, and that’s a good one too. But how do you do it? [Max’s] method was to strip down a Roomba and build a work table and electronics on top of it. An Arduino controls the motors and communicates with a PC. The PC reads video from a Kinect camera on the robot and uses special tracking software to follow the patient.

We could easily imagine building everything in this project except the tracking. That part depends on a service called Nuitrack. There is a free version that only works for three minutes, but you’ll have to pay if you want to use it practically. Even so, it would still be cheaper than rolling your own if your time has any value.
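To get a feel for the division of labor, here is a minimal sketch of what the Arduino end of a follower like this could look like. Everything in it is our own guess, not [Max’s] code: we assume the PC streams a lateral offset and a distance over serial, and the Arduino turns that into differential drive speeds with made-up pins and gains.

```cpp
// Hypothetical Arduino-side sketch: the PC does the Kinect/Nuitrack tracking
// and streams "offset_cm distance_cm\n" pairs over serial; this simply turns
// them into differential drive speeds. Pins, gains, and protocol are assumptions.
#include <Arduino.h>

const int LEFT_PWM = 5, RIGHT_PWM = 6;    // assumed motor driver PWM inputs
const float FOLLOW_DISTANCE_CM = 100.0;   // try to stay about 1 m behind the person

void setup() {
  Serial.begin(115200);
  pinMode(LEFT_PWM, OUTPUT);
  pinMode(RIGHT_PWM, OUTPUT);
}

void loop() {
  if (!Serial.available()) return;

  // Lateral offset of the person (cm, + is right) and distance to them (cm)
  float offset = Serial.parseFloat();
  float distance = Serial.parseFloat();

  // Simple proportional control: drive forward until close enough, and steer
  // by speeding up one wheel while slowing the other.
  float forward = constrain(2.0 * (distance - FOLLOW_DISTANCE_CM), 0, 255);
  float steer   = constrain(1.5 * offset, -100, 100);

  analogWrite(LEFT_PWM,  constrain(forward + steer, 0, 255));
  analogWrite(RIGHT_PWM, constrain(forward - steer, 0, 255));
}
```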

Continue reading “Your Vacuum Cleaner Follows You”

Left: kids stomping spiders projected on a driveway. Right: the setup.

Make This Halloween A Spider-Stomping Good Time

We can count on one hand the number of times that we haven’t needed a coat on Halloween night around here. Even if it was fair and sunny the day before, you can count on Halloween being appropriately windy, cold, and spooky. Trick-or-treating only keeps a kid so warm, and we would have loved to happen upon a house with a spider-stomping sugar-burning good time of a game going on in the driveway.

[Kyle Maas] built this game a few years ago, and it has proved quite popular ever since. It’s so popular, in fact, that they have to have someone on duty with a vaudeville hook to yank spectators off the playing field. The point is to stomp as many spiders as you can in a set amount of time, though you only need to stomp one to win. It can handle one to four players, depending on the size of the projection, but [Kyle] says it’s kind of hard to track more than two at a time.

The setup is fairly simple, provided you can reliably affix your projector to something sturdy. [Kyle] used a Structure sensor for the 3D scanner, but you could easily use a Kinect instead. The calibration, on the other hand, was challenging. [Kyle] ended up using a DSP math trick known as the inverse bilinear transform so he could calibrate the system using the 3D scanner itself.
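The inverse bilinear transform is the handy bit here, so here is a sketch of the general technique (our own implementation, not [Kyle]’s code): given the four sensed corners of the projected play field, recover the normalized (u, v) game coordinates of any point the 3D scanner reports inside it.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

static Vec2 sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Corners a, b, c, d go around the quad, with a at (u,v) = (0,0), b at (1,0),
// c at (1,1), and d at (0,1). Returns (u,v) in [0,1]^2 for points inside.
Vec2 inverseBilinear(Vec2 p, Vec2 a, Vec2 b, Vec2 c, Vec2 d) {
  Vec2 e = sub(b, a), f = sub(d, a), h = sub(p, a);
  Vec2 g = { a.x - b.x + c.x - d.x, a.y - b.y + c.y - d.y };

  double k2 = cross(g, f);
  double k1 = cross(e, f) + cross(h, g);
  double k0 = cross(h, e);

  double v;
  if (std::fabs(k2) < 1e-9) {             // opposite edges are parallel: linear case
    v = -k0 / k1;
  } else {                                // general case: solve the quadratic in v
    double w = k1 * k1 - 4.0 * k0 * k2;
    if (w < 0.0) return { -1.0, -1.0 };   // point is outside the quad
    w = std::sqrt(w);
    double v1 = (-k1 - w) / (2.0 * k2);
    double v2 = (-k1 + w) / (2.0 * k2);
    v = (v1 >= 0.0 && v1 <= 1.0) ? v1 : v2;
  }
  double u = (h.x - f.x * v) / (e.x + g.x * v);
  return { u, v };
}
```

With that in hand, a stomp detected anywhere inside the projected quad maps straight back to where the spider sprite lives in game coordinates, no matter how skewed the projection is on the driveway.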

If you’re more into scaring the children, just rig up a coffin bell. Either way, don’t forget about our Halloween Hackfest contest, running now through Monday, October 11th. There are more details over on IO. While you’re there, why not check out the list of entries?

Automated Sentry Turret For Your Secret Lab

There are few things more frustrating, when you’re trying to get some serious hacking done, than intruders repeatedly showing up without permission. [All Parts Combined] has the solution for you, with a Kinect-based robotic sentry turret to keep them at bay.

The system consists of a Microsoft Kinect V2 connected to a PC, which runs an app to do all the processing, and outputs the targeting information to an Arduino over serial. The Arduino controls a simple 2-axis servo mount with an electric airsoft gun zip-tied to it. The trigger switch is replaced with a relay, also connected to the Arduino.
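The Arduino end of a rig like that can be very small. The sketch below is our own guess at it, not code from the project: it parses “pan tilt fire” values from the PC over serial, moves the two servos, and pulses a relay wired across the airsoft gun’s trigger switch. Pins and protocol are assumptions.

```cpp
#include <Arduino.h>
#include <Servo.h>

Servo pan, tilt;
const int RELAY_PIN = 7;   // assumed relay module input

void setup() {
  Serial.begin(9600);
  pan.attach(9);
  tilt.attach(10);
  pinMode(RELAY_PIN, OUTPUT);
}

void loop() {
  if (!Serial.available()) return;

  int panAngle  = Serial.parseInt();   // 0-180 degrees
  int tiltAngle = Serial.parseInt();
  int fire      = Serial.parseInt();   // 1 = pull the trigger

  pan.write(constrain(panAngle, 0, 180));
  tilt.write(constrain(tiltAngle, 0, 180));
  digitalWrite(RELAY_PIN, fire ? HIGH : LOW);
}
```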

The Kinect V2 comes with SDKs that really simplify tracking human movement and output the data in an easy-to-use format. [All Parts Combined] used the SDK in Unity, which allows him to choose which body parts to track. He added scripts that detect a few basic gestures, issue voice commands, and generate the serial commands for the Arduino. The servo angles are calculated with simple geometry, using the XY coordinates of the target received from the SDK and the known distance between the Kinect and the turret. When an intruder enters the Kinect’s field of view, the turret immediately starts aiming at the intruder’s heart, issues a “Hands Up!” command, and tells the intruder to leave. If the intruder doesn’t comply, it starts an audible countdown before firing. [All Parts Combined] also added a secret disarming gesture (double hand pistols), which turns the turret into an apologetic comrade. All it needs is a Portal-inspired enclosure.
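As a rough sketch of how such aiming geometry could work (not the project’s actual math), assume the turret sits a known distance behind the sensor along the depth axis, take the tracked joint’s camera-space position, and convert it to pan and tilt angles with a couple of atan2 calls:

```cpp
#include <cmath>

struct Angles { double pan; double tilt; };

const double kPi = 3.14159265358979;

// x: metres right of the sensor, y: metres above it, z: metres in front of it,
// all straight from the skeleton data. kinectToTurretM is the assumed offset
// between the sensor and the turret along the depth axis.
Angles aimAt(double x, double y, double z, double kinectToTurretM) {
  const double depth = z + kinectToTurretM;   // target depth measured from the turret
  const double pan  = std::atan2(x, depth) * 180.0 / kPi;
  const double tilt = std::atan2(y, std::hypot(x, depth)) * 180.0 / kPi;
  // Centre both servos at 90 degrees so zero error means "straight ahead".
  return { 90.0 + pan, 90.0 + tilt };
}
```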

It’s a fun project that illustrates how the Kinect can make complex computer vision tasks relatively simple. Unfortunately, the V2 is no longer in production, having been replaced by the more expensive, developer-focused Azure Kinect. We’ve covered several Kinect-based projects, including a 3D room scanner and a robotic basketball hoop.

Continue reading “Automated Sentry Turret For Your Secret Lab”

Leap Motion Controls Hands With No Glove

It isn’t uncommon to see a robot hand controlled with a glove that mimics the user’s motion. [All Parts Combined] has a different method. Using a Leap Motion controller, he can record hand motions with no glove at all and then play them back to the robot hand at will. You can see the project in the video below.

The project seems straightforward enough, but apparently, the Leap documentation isn’t the best. Since he worked it out, though, you might find the code useful.

An ESP8266 runs everything, although you could probably get by with less. The Leap provides more data than the hand has servos, so there was a bit of algorithm development required to map one onto the other.
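To show the kind of reduction that mapping needs (this is our own guess, not the project’s actual algorithm), you could boil each finger down to a single “curl” value: measure how far the distal bone’s direction has rotated away from the hand’s pointing direction and map that angle onto servo degrees.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double len(const Vec3 &a) { return std::sqrt(dot(a, a)); }

// handDir: the direction the hand points (wrist toward fingers); tipDir: the
// direction of the finger's distal bone, both as reported by the Leap SDK.
int fingerServoAngle(const Vec3 &handDir, const Vec3 &tipDir) {
  double c = dot(handDir, tipDir) / (len(handDir) * len(tipDir));
  if (c > 1.0) c = 1.0;
  if (c < -1.0) c = -1.0;
  double curl = std::acos(c);   // 0 = finger straight, pi = fully curled back
  return static_cast<int>(curl / 3.14159265358979 * 180.0 + 0.5);  // servo degrees
}
```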

We picked up a few tips about building flexible fingers using heated vinyl tubing. Never know when that’s going to come in handy — no pun intended. The cardboard construction isn’t going to be pretty, but a glove cover works well. You could probably 3D print something, too.

The Unity app will drive the hand live or play back one of five recorded routines. You can see how the recording and playback work in the video.
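The record-and-playback idea reduces to something quite small. Sketched here in C++ rather than the project’s Unity app, and entirely as an illustration: capture timestamped frames of servo angles while the Leap drives the hand live, then replay them with the original timing.

```cpp
#include <array>
#include <chrono>
#include <thread>
#include <vector>

struct Frame {
  double timeSec;               // seconds since the start of the recording
  std::array<int, 5> fingers;   // one servo angle per finger
};

using Routine = std::vector<Frame>;

// Replay a recorded routine, calling sendToHand() at the original timestamps.
void playback(const Routine &routine, void (*sendToHand)(const std::array<int, 5> &)) {
  auto start = std::chrono::steady_clock::now();
  for (const Frame &f : routine) {
    std::this_thread::sleep_until(start + std::chrono::duration<double>(f.timeSec));
    sendToHand(f.fingers);
  }
}
```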

This reminded us of another robot hand project, this one 3D printed. We’ve seen more traditional robot arms moving with a Leap before, too. Continue reading “Leap Motion Controls Hands With No Glove”

Kinect Gave Us A Preview Of The Future, Though Not The One It Intended

This holiday season, the video game industry hype machine is focused on building excitement for new PlayStation and Xbox consoles. Ten years ago, a similar chorus of hype reached a crescendo with the release of Xbox Kinect, promising to revolutionize how we play. That vision never panned out, but as [Daniel Cooper] of Engadget pointed out in a Kinect retrospective, it premiered consumer technologies that impacted fields far beyond gaming.

Kinect has since been withdrawn from the gaming market because, as it turns out, gamers are quite content with handheld controllers. This year’s new controllers for a PlayStation or Xbox would be immediately familiar to gamers from ten years ago. Even Nintendo, whose Wii is frequently credited as the motivation for Microsoft to develop the Kinect, has arguably taken a step back with the Joy-Cons of its Switch.

But the Kinect’s success at bringing a depth camera down to consumer price levels paved the way to explore many ideas that were previously impossible. The flurry of enthusiastic Kinect hacking proved there is a market for depth camera peripherals, leading to plug-and-play devices like the Intel RealSense that make depth-sensing projects easier. The original PrimeSense technology has since been simplified and miniaturized into the Face ID hardware unlocking Apple phones. Kinect itself found another job with Microsoft’s HoloLens AR headset. And let’s not forget the upcoming wave of autonomous cars and drones, many of which will see their worlds via depth sensors of some kind. Some might even be equipped with the latest sensor to wear the Kinect name.

Inside the Kinect was also one of the earliest microphone arrays sold to consumers, enabling it to figure out which direction a voice is coming from and to isolate it from other noises in the room. Such technology was previously the exclusive domain of expensive corporate conference room speakerphones, but now it forms the core of inexpensive home assistants like the Amazon Echo Dot. It has raised the bar so much that hacks need many more microphones just to stand out.
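The principle behind that trick, in miniature (a generic illustration, not the Kinect’s actual pipeline): with two microphones a known distance apart, find the delay that best lines up their signals, then convert that delay into an angle of arrival.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Returns bearing in degrees (0 = straight ahead) for a simple 2-mic array.
double bearingDegrees(const std::vector<double> &left,
                      const std::vector<double> &right,
                      double micSpacingM = 0.1, double sampleRate = 16000.0) {
  const double speedOfSound = 343.0;   // m/s
  const int maxLag = static_cast<int>(micSpacingM / speedOfSound * sampleRate) + 1;

  // Brute-force cross-correlation over the physically possible lags.
  int bestLag = 0;
  double best = -1e300;
  for (int lag = -maxLag; lag <= maxLag; ++lag) {
    double sum = 0;
    for (std::size_t i = 0; i < left.size(); ++i) {
      const long j = static_cast<long>(i) + lag;
      if (j >= 0 && j < static_cast<long>(right.size())) sum += left[i] * right[j];
    }
    if (sum > best) { best = sum; bestLag = lag; }
  }

  // Time difference of arrival -> angle via sin(theta) = c * dt / spacing.
  double s = speedOfSound * (bestLag / sampleRate) / micSpacingM;
  if (s > 1.0) s = 1.0;
  if (s < -1.0) s = -1.0;
  return std::asin(s) * 180.0 / 3.14159265358979;
}
```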

With the technology now easily available elsewhere, the attrition of a discontinued device is reflected in the dwindling number of recent Kinect hacks on these pages. We still see a cool project every now and then, though. As the classic sensor bar recedes into history, others will take its place to give us depth sensing and smart audio. But for many of us, the Kinect was the ambitious video game peripheral that gave us our first taste of both.

Third Time’s A Charm For This Basketball-Catching Robot

We all know that version one of a project is usually a stinker, at least in retrospect. Sure, it gets the basic idea into concrete form, but all it really does is set the stage for a version two. That’s better, but still not quite there. Version three is where the magic all comes together.

At least that’s how things transpired on [Shane Wighton]’s quest to build the perfect basketball robot. His first version was a passive backboard that redirected incoming shots based on its paraboloid shape. As cool as the math was that determined the board’s shape, it conspicuously lacked any complicated systems like motors and machine vision — you know, the fun stuff.  Version two had all these elaborations and grabbed off-target shots a lot better, but still, it had a limited working envelope.

Enter version three, seen in action in the video below. Taking a page from [Mark Rober]’s playbook, [Shane] built a wickedly overengineered CoreXY-style robot to cover his shop wall. Everything was built with the lightest possible materials to keep inertia to a minimum and ensure the target ends up in the right place as quickly as possible. [Shane] even figured out how to mount the motor that tilts the backboard on the frame rather than to the carriage. A Kinect does depth-detection duty on the incoming ball — or the builder’s head — and drains pretty much every shot it can reach.
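The CoreXY arrangement is what lets both motors stay on the frame: each motor contributes to both axes, and the usual mapping (this is just the standard CoreXY kinematics, not code pulled from [Shane]’s build) is a simple sum and difference.

```cpp
struct MotorSteps { long a; long b; };

// Convert a desired carriage move (in steps) into the two belt-motor moves.
MotorSteps coreXY(long deltaX, long deltaY) {
  return { deltaX + deltaY,    // motor A
           deltaX - deltaY };  // motor B
}

// And back again, e.g. to recover carriage position from motor counts:
// x = (a + b) / 2, y = (a - b) / 2.
```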

[Shane] has been doing some great work automating away the jobs of pro athletes. In addition to basketball, he has tackled both golf and baseball, bringing explosive power to each. We’re looking forward to versions two and three on both of those builds as well.

Continue reading “Third Time’s A Charm For This Basketball-Catching Robot”

New Kinect Sensor Switches Focus From Gamers To Developers

Microsoft’s Kinect may not have found success as a gaming peripheral, but recognizing that a depth sensor is too cool to leave for dead, development continued even after the Xbox gaming peripherals were discontinued. This week the latest iteration emerged in the form of the Azure Kinect DK. This is a developer’s kit focused on exploring new applications for the technology, not a gaming peripheral we had to hack before we could use it in our own projects.

Packaged into a peripheral that plugs into a PC via USB-C, it is more than the core depth sensor module announced last year but less than a full consumer product. Browsing its 10-page specification (PDF), with comparisons to the second-generation Kinect sensor bar, we see how this technology has evolved. Physical size and weight have dropped, as has power consumption. Auxiliary capabilities have improved with an expanded microphone array, an IMU that adds a gyroscope to the accelerometer, and an RGB camera upgraded to 4K resolution.

But the star of the show is a new continuous-wave time-of-flight depth sensor, presented at the 2018 IEEE ISSCC conference. (Full text requires IEEE membership, but a digest form is available via ResearchGate.) Among its many advancements, we expect the biggest impact to be its field of view. The default of 75 x 65 degrees is already better than its predecessors’ (64 x 45 for the first-generation Kinect, 70 x 60 for the second), but there is an option to trade resolution for coverage by switching to a wide-angle mode of 120 x 120 degrees, significantly wider than other depth cameras like Intel’s RealSense D400 series or Occipital’s Structure.

Another interesting feature is built-in synchronization. Many projects using multiple Kinect sensors ran into problems because the units interfered with each other. People hacked around the problem, of course, but now they don’t have to: commodity 3.5 mm jacks allow multiple Azure Kinect DKs to be daisy-chained together so they play nicely and take turns.
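For the curious, here is a minimal sketch of what that looks like with the Azure Kinect Sensor SDK’s C API. This is our own illustration, not taken from any particular project: open a device, pick the wide-angle depth mode mentioned above, and run it as a subordinate on the 3.5 mm sync chain. The 160 µs delay is just an example offset to keep the depth lasers from firing at the same instant.

```cpp
#include <k4a/k4a.h>
#include <stdio.h>

int main(void) {
  k4a_device_t device = NULL;
  if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED) {
    printf("No Azure Kinect found\n");
    return 1;
  }

  k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
  config.depth_mode = K4A_DEPTH_MODE_WFOV_2X2BINNED;         // the 120 x 120 degree mode
  config.camera_fps = K4A_FRAMES_PER_SECOND_30;
  config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;  // triggered over the sync jack
  config.subordinate_delay_off_master_usec = 160;            // stagger captures slightly

  if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED) {
    printf("Failed to start cameras\n");
    k4a_device_close(device);
    return 1;
  }

  // ... grab frames with k4a_device_get_capture() here ...

  k4a_device_stop_cameras(device);
  k4a_device_close(device);
  return 0;
}
```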

From its name, we were worried this product would require Microsoft’s Azure cloud service in some way and be crippled without it. Based on the information released so far, it appears developers have access to all the same data streams as with previous sensors. The Azure tie-in takes the form of optional SDKs that make it easier to do things like upload data for processing in Azure cloud-based recognition services.

And finally, the Azure Kinect DK’s price tag of $399 is significantly higher than that of a Kinect game peripheral, but this is a low-volume product aimed at developers. Perhaps high-volume consumer products built on this technology will cost less, but that remains to be seen. In the meantime, you have alternative tools for solving similar problems. For example, if you are building your own AR headset, you might use Intel’s latest RealSense camera for vision-based inside-out motion tracking.