Modern Dance Or Full-Body Keyboard? Why Not Both!

If you felt in your heart that Hackaday was a place that would forever be free from projects that require extensive choreography to pull off, we’re sorry to disappoint you. Because you’re going to need a level of coordination and gross motor skills that most of us probably lack if you’re going to type with this full-body, semaphore-powered keyboard.

This is another one of [Fletcher Heisler]’s alternative inputs projects, in the vein of his face-operated coding keyboard. The idea there was to be able to code with facial gestures while cradling a sleeping baby; this project is quite a bit more expressive. Pretty much all you need to know about the technical side of the project can be gleaned from the brilliant “Hello world!” segment at the start of the video below. [Fletcher] uses OpenCV and MediaPipe’s Pose library for pose estimation to decode the classic flag semaphore alphabet, which encodes characters in the angle of the signaler’s extended arms relative to their body. To extend the character set, [Fletcher] added a squat gesture for numbers, and a shift function controlled by opening and closing the hands. The jazz-hands thing is just a bonus.
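
For the curious, here's a rough idea of what that pipeline can look like in Python. This is a minimal sketch, not [Fletcher]'s actual code: it snaps each arm's angle to the nearest 45° and looks the pair up in a deliberately truncated semaphore table.

```python
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def arm_octant(shoulder, wrist):
    """Arm angle relative to straight down, snapped to 45-degree steps."""
    dx, dy = wrist.x - shoulder.x, wrist.y - shoulder.y
    angle = math.degrees(math.atan2(dx, dy))  # image y grows downward
    return round(angle / 45.0) % 8

# (left octant, right octant) -> letter; an illustrative subset only,
# not the full semaphore alphabet
SEMAPHORE = {(1, 0): "A", (2, 0): "B", (3, 0): "C", (0, 1): "E"}

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            key = (arm_octant(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                              lm[mp_pose.PoseLandmark.LEFT_WRIST]),
                   arm_octant(lm[mp_pose.PoseLandmark.RIGHT_SHOULDER],
                              lm[mp_pose.PoseLandmark.RIGHT_WRIST]))
            if key in SEMAPHORE:
                print(SEMAPHORE[key])
        cv2.imshow("semaphore", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
```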

Honestly, the hack here is mostly a brain hack — learning a complex series of gestures and stringing them together fluidly isn’t easy. [Fletcher] used a few earworms to help him master the character set and tune his code; the inevitable Rickroll was quite artistic, and watching him nail the [Johnny Cash] song was strangely satisfying. We also thoroughly enjoyed the group number at the end. Ooga chaka FTW.

Spray-On Keyboard Is As Light As It Gets

We’ve all seen those ‘nothing’ keyboards, where the keys themselves are not much more than projected lasers, and users are asked to ritually beat their poor fingertips into the table — which has little give and even less clack. Well, a team at the Korea Advanced Institute of Science and Technology has come up with a way to eschew the keyboard altogether.

Essentially, the user wears a thin, breathable mesh of gold-coated silver nanowires embedded in a polyurethane coating. The mesh is sprayed onto the forearms and hands on the spot, and it terminates in a small enclosure that is also worn on the skin. This contains a small Bluetooth unit that beams data back to a computer, a machine, or potentially another user wearing the same type of unit.

As the skin stretches and contorts, the mesh senses small electrical changes within. These changes become meaningful once AI is applied to map them to specific gestures and manual tasks. To do this, the team started by teaching the system to distinguish between the patterns produced by typing on a phone, typing on a regular keyboard, and holding and interacting with six differently-shaped simple objects.
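
The team's learning pipeline isn't published code, but the classification step is conceptually simple. Here's a hedged sketch with scikit-learn standing in for their pipeline and random numbers standing in for real nanomesh traces:

```python
# Illustrative sketch only: map windows of strain readings to gesture labels.
# The random data and the scikit-learn model are stand-ins; the KAIST team
# uses its own sensing hardware and learning pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, window_len = 600, 100              # placeholder: 100-sample windows
X = rng.normal(size=(n_windows, window_len))  # stand-in for real sensor traces
y = rng.integers(0, 6, size=n_windows)        # six object-interaction classes

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))  # ~chance on noise; real signals separate
```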

The team isn’t stopping there — they plan to try capturing a larger range of motion by using the nanomesh on multiple fingers. In addition to facilitating communication between humans and machines, this could leave a huge fingerprint on gaming and VR.

Twitch And Blink Your Way Through Typing With This Facial Keyboard

For those that haven’t experienced it, the early days of parenthood are challenging, to say the least. Trying to get anything accomplished with a raging case of sleep deprivation is hard enough, but the little bundle of joy who always seems to need to be in physical contact with you makes doing things with your hands nigh impossible. What’s the new parent to do when it comes time to be gainfully employed?

Finding himself in such a boat, [Fletcher]’s solution was to build a face-activated keyboard to work around his offspring’s needs. Before you ask: no, voice recognition software wouldn’t work, at least according to the sleepy little boss who protests noisy awakenings. The solution instead was to first try OpenCV and the dlib facial recognition library to watch [Fletcher] blinking out Morse code. While that sorta-kinda worked, one’s blinkers can’t long endure such a workout, so he moved on to an easier set of gestures. Mouthing Morse code covers most of the keyboard, while a combination of eye, eyebrow, and other facial twitches and tics cover the rest, with MediaPipe’s Face Mesh doing the heavy lifting in terms of landmark detection.
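
To give a flavor of how the blink-detection half of such a system can work, here's a minimal Python sketch using MediaPipe's Face Mesh. It is not CheekyKeys itself; the landmark indices and the 0.2 eye-aspect-ratio threshold are common choices we've assumed for illustration:

```python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_mesh
L_TOP, L_BOT, L_LEFT, L_RIGHT = 159, 145, 33, 133  # left-eye landmark indices

def eye_aspect_ratio(lm):
    """Ratio of eye opening to eye width; drops toward zero when closed."""
    vertical = abs(lm[L_TOP].y - lm[L_BOT].y)
    horizontal = abs(lm[L_LEFT].x - lm[L_RIGHT].x)
    return vertical / horizontal

cap = cv2.VideoCapture(0)
with mp_face.FaceMesh(refine_landmarks=True) as mesh:
    closed_frames = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            if eye_aspect_ratio(lm) < 0.2:    # eye closed this frame
                closed_frames += 1
            elif closed_frames:               # eye reopened: emit a symbol
                print("-" if closed_frames > 5 else ".")
                closed_frames = 0
        cv2.imshow("blinks", frame)
        if cv2.waitKey(1) & 0xFF == 27:       # Esc quits
            break
cap.release()
```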

The resulting facial keyboard, aptly dubbed “CheekyKeys,” performed well enough for [Fletcher] to use for a skills test during an interview with a Big Tech Company. Imagining the interviewer on the other end watching him convulse his way through the interview was worth the price of admission, and we don’t even care if it was a put-on. Video after the break.

CheekyKeys is pretty cool, doing something with a webcam and Python that we thought would have needed a dedicated AI depth camera to accomplish. But perhaps the real hack here was how [Fletcher] taught himself Morse in fifteen minutes.

OpenCV Brings Pinch To Zoom Into The Real World

Gesture controls arrived in the public consciousness a little over a decade ago as touchpads and touchscreens became more popular. The main limitation to gesture controls, at least as far as [Norbert] is concerned, is that they can only control objects in a virtual space. He was hoping to use gestures to control a real-world object instead, and created this device which uses gestures to control an actual picture.

In this unique augmented reality device, not only is the object being controlled in the real world, but the gestures are monitored there as well, thanks to an OpenCV-based computer vision system watching his hand. The position data is fed into an algorithm which controls a physical picture mounted on a slender robotic arm. Now, when [Norbert] “pinches to zoom”, the servo attached to the picture physically brings it closer to or further from his field of view. He can also use other gestures to move the picture around.
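
As a rough illustration of the vision side, the sketch below uses MediaPipe Hands to measure the thumb-to-index "pinch" distance and maps it to a servo angle. The serial port and its one-byte protocol are our assumptions for illustration, not [Norbert]'s design:

```python
import cv2
import mediapipe as mp
import serial

mp_hands = mp.solutions.hands
port = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical servo controller link

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb, index = lm[4], lm[8]       # thumb tip and index fingertip
            pinch = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
            angle = int(max(0.0, min(1.0, pinch * 4)) * 180)  # pinch -> 0..180 deg
            port.write(bytes([angle]))        # servo moves the picture nearer/farther
        cv2.imshow("pinch", frame)
        if cv2.waitKey(1) & 0xFF == 27:       # Esc quits
            break
cap.release()
```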

While this gesture-controlled machine is certainly a proof-of-concept, there are plenty of other uses for gesture controls of real-world objects. Any robotics platform could benefit from an interface like this, or even something slightly more mundane like an office PowerPoint presentation. Opportunity abounds, but if you need a primer for OpenCV take a look at this build which tracks a hand in minute detail.

Hedgehog Gesture Sensor Built With Cheap Time-of-Flight Modules

Time-of-flight sensors used to be expensive obscurities, capable of measuring the travel time of photons themselves and often used for tracking purposes. However, the technology is cheaper now, such that [jean.perardel] has used TOF sensors to build a useful and affordable gesture-tracking system.

The system relies on four VL53L1X time-of-flight sensors, which have a 16×16 scanning array and communicate over the I2C bus. Controlling the show is an Arduino MKR1010, though the project should be achievable with a range of other microcontrollers, too.

The device is built into a cute hedgehog-like form factor, with an LCD screen acting as the face. It displays facial expressions which show how the system is interpreting and responding to gestures. It gives the project lots of personality, which makes using the system more fun. Gestures from the system can be used to send keystrokes over USB, control relays or servos, or even fire IR signals to control TVs and other hardware.
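
To make the gesture-decoding idea concrete, here's a hedged Python sketch of one way to turn four distance readings into swipe gestures, by watching which sensor the hand covers first and last. The sensor layout and threshold are assumptions, and [jean.perardel]'s actual firmware runs on the Arduino, not in Python:

```python
NEAR_MM = 150  # a hand counts as "over" a sensor below this distance

def classify_swipe(frames, order=("left", "right", "up", "down")):
    """frames: sequence of 4-tuples of distances in mm, one per sensor."""
    touched = []
    for frame in frames:
        for name, dist in zip(order, frame):
            if dist < NEAR_MM and (not touched or touched[-1] != name):
                touched.append(name)  # record each newly covered sensor
    if len(touched) >= 2:
        return f"swipe {touched[0]} -> {touched[-1]}"
    return "no gesture"

# Synthetic pass of a hand moving left to right across the sensor cluster
frames = [(400, 400, 400, 400), (90, 400, 400, 400),
          (120, 130, 400, 400), (400, 95, 400, 400)]
print(classify_swipe(frames))  # prints: swipe left -> right
```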

It actually seems like a useful gesture control interface, one that could become a useful part of a workstation setup. We’ve seen gesture controls put to other uses too, like controlling robot arms. Video after the break.

Gesture Control The Easy Way

Gesture control is a technology that has floated around for years without ever quite reaching mainstream acceptance. Wii Bowling was fun for a while, but we’re not regularly using gestures to open doors or order pizza just yet. Doing it yourself can be quite easy, however, as [RC Lover san] found with a barebones, hacky build.

Typically, when we think of gesture control, we envisage object-tracking cameras or MEMS accelerometers. Instead, this build uses simple tilt switches, as you might find in a pinball machine from days of yore. Four of these are placed on a wrist-mounted device, allowing the user to tilt their arm to steer an RC car. The tilt switches are easy to hack into the controller for a toy RC car, as they simply replace the existing buttons on the PCB.

It’s a project that goes to show that not everything has to be done with advanced sensors and complex algorithms. Sometimes, it can all be done with a handful of cheap switches and some ingenuity. Plus, using arm movements to scoot BB-8 around on the floor looks like great fun. We’ve seen other attempts to build simple gesture controls with pots, too. Video after the break.

Simplified AI On Microcontrollers

Artificial intelligence is taking the world by storm. Rather than a Terminator-style apocalypse, though, it seems to be more of a useful tool for getting computers to solve problems on their own. This isn’t just for supercomputers, either. You can load AI onto some of the smallest microcontrollers as well. TensorFlow Lite is a popular tool for this, but getting it to work on your particular microcontroller can be a pain, unless you’re using an Espruino.

This project adds TensorFlow support to this class of microcontrollers without any fussing with obtuse build tools. Adding a single line of code creates an instance, with no compiling or even rebooting required. TensorFlow is a powerful software tool for microcontrollers, and having it this accessible is a great leap forward.
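
The model itself still gets trained and converted on a PC before being loaded onto the microcontroller. As a hedged sketch of that step, here's how a tiny placeholder gesture model might be flattened into a TensorFlow Lite flatbuffer; the model architecture and input shapes are ours for illustration:

```python
import tensorflow as tf

# Placeholder model: classify 64 samples of 3-axis accelerometer data
# into four gesture classes. Shapes and layers are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(...) on recorded gesture data would go here ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink the model
open("gestures.tflite", "wb").write(converter.convert())
```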

So, what can you do with this tool? The team behind this build is running TensorFlow on an open smart watch to detect hand gestures, among other things. They also opened up these tools for use in a browser, which lets you run the AI software on an emulated Espruino without needing a physical device. There’s a lot going on with this one, and it’s a bonus that it’s open source and ready to be turned into anything you might need, like turning yourself into a Street Fighter.