DIY DNA Lamp

DIY Electronics Plus Woodworking Equal Custom Lamp

There is something mesmerizing about wooden crafts combined with electronics. The Gesture Controlled DNA Wooden Desk Lamp by [Timber Rough] is a bit of both: a handsome desk piece that’s well documented for anyone who wants to build their own.

Construction starts with a laser cutter, employed to add kerfs so that the final strips can be bent along a frame tube to form the outer backbone of the DNA helix structure. Add to the mix some tung oil, carnauba wax, and some glue — along with skill and patience — and you get the distinct shape of a sugar-phosphate backbone.

The electronics include an ESP8266 paired with a PAJ7620 gesture sensor that controls two WS2812B RGB LED strips. The sensor in question is very capable, recognizing nine hand gestures plus proximity, which makes it apt for this application. It is mounted atop the structure, with the LEDs twisting down the frame to the base where the ESP8266 is tucked away. Tiny glass bottles, painted with acrylic spray varnish and glued over the LEDs, form the base pairs of the double helix. We thought the varnish spray was a clever way to make light diffusers that are quick and cheap for most DIYers.
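For a sense of how little glue code a build like this needs, here is a minimal MicroPython-flavored sketch. It is purely illustrative, not [Timber Rough]’s firmware: the pin assignments and LED count are assumptions, the PAJ7620’s lengthy register initialization sequence is omitted for brevity, and the I2C address and gesture flag register come from the sensor’s datasheet.

```python
# Illustrative MicroPython sketch for an ESP8266; not the original firmware.
from machine import Pin, SoftI2C
from neopixel import NeoPixel
import time

PAJ7620_ADDR = 0x73    # fixed I2C address per the PAJ7620 datasheet
GESTURE_REG = 0x43     # gesture interrupt flag register (bank 0)
NUM_LEDS = 60          # assumption: total LEDs across both strips

i2c = SoftI2C(scl=Pin(5), sda=Pin(4))   # assumed wiring (D1/D2 on NodeMCU)
strip = NeoPixel(Pin(2), NUM_LEDS)      # assumed WS2812B data pin

# Gesture flag bits mapped to colors; only four of the nine gestures shown.
COLORS = {
    0x01: (255, 0, 0),    # up
    0x02: (0, 255, 0),    # down
    0x04: (0, 0, 255),    # left
    0x08: (255, 255, 0),  # right
}

def fill(color):
    for i in range(NUM_LEDS):
        strip[i] = color
    strip.write()

while True:
    flags = i2c.readfrom_mem(PAJ7620_ADDR, GESTURE_REG, 1)[0]
    if flags in COLORS:
        fill(COLORS[flags])
    time.sleep_ms(50)
```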

If you’re after more ideas in that realm, we’ve previously covered how this particular gesture sensor can control much more than a lamp.

Continue reading “DIY Electronics Plus Woodworking Equal Custom Lamp”

Compact, Gesture-Based Remote Control Over Bluetooth

[AlexMiller11] shared a project for a DIY gesture-sensing remote control that acts like a Bluetooth keyboard, capable of controlling media and presentations on a computer with a high degree of accuracy.

The device recognizes eight different gestures and controls a host PC over Bluetooth.

The hardware is a Silicon Labs xG24 dev kit, a small IoT-focused board that can run from a CR2032 coin cell. Part of what makes it all work is the six-axis IMU sensor, but the rest is the software that interprets the data and figures out what motions the user is making. That happens with a Neuton.AI model and SDK, a tiny but effective machine learning framework for small devices.

How does it actually work? The device acts as a Bluetooth HID and connects to a PC in the same way as a regular Bluetooth keyboard. Once that’s done, recognized gestures are printed out the serial port as well as sent via Bluetooth to the host machine. From there, one can play and pause media, adjust the volume, control presentations, and more. More details are on the project’s GitHub repository. There’s also a demo video that explains exactly what’s going on, embedded below the page break.
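Since the recognized gestures also stream out the serial port, watching them from the host side takes only a few lines of Python with pyserial. The port name and baud rate below are assumptions; adjust them for your own setup.

```python
# Minimal host-side monitor for the gesture names the dev kit prints
# over serial (port name and baud rate are assumptions).
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if line:                      # e.g. a gesture name; act on it here
            print("gesture:", line)
```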

Machine learning is a way of using software to solve the kinds of problems humans are not very good at writing programs to solve, and accurate gesture recognition is a good example. Not all such applications require heaps of overheating GPUs, either. We’ve seen the concept of a neural network stripped down to its bare essentials running on an Arduino Uno, for those who would like to better appreciate the fundamentals.

Continue reading “Compact, Gesture-Based Remote Control Over Bluetooth”

Modern Dance Or Full-Body Keyboard? Why Not Both!

If you felt in your heart that Hackaday was a place that would forever be free from projects that require extensive choreography to pull off, we’re sorry to disappoint you. Because you’re going to need a level of coordination and gross motor skills that most of us probably lack if you’re going to type with this full-body, semaphore-powered keyboard.

This is another one of [Fletcher Heisler]’s alternative inputs projects, in the vein of his face-operated coding keyboard. The idea there was to be able to code with facial gestures while cradling a sleeping baby; this project is quite a bit more expressive. Pretty much all you need to know about the technical side of the project can be gleaned from the brilliant “Hello world!” segment at the start of the video below. [Fletcher] uses OpenCV and MediaPipe’s Pose library for pose estimation to decode the classic flag semaphore alphabet, which encodes characters in the angle of the signaler’s extended arms relative to their body. To extend the character set, [Fletcher] added a squat gesture for numbers, and a shift function controlled by opening and closing the hands. The jazz-hands thing is just a bonus.
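The decoding idea is simple enough to sketch in a few lines of Python: quantize each arm’s angle to the nearest 45 degrees and look the pair up in a semaphore table. This is not [Fletcher]’s code, and the lookup table below holds only a few placeholder entries for illustration; a real decoder would map every sector pair of the semaphore alphabet.

```python
# Sketch of semaphore decoding with MediaPipe Pose; illustrative only.
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
# (left-arm sector, right-arm sector) -> letter, where sector 0 is straight
# down and each step is 45 degrees. Placeholder entries for illustration.
SEMAPHORE = {(0, 1): "A", (0, 2): "B", (0, 3): "C", (1, 2): "H"}

def arm_sector(shoulder, wrist):
    # Arm angle relative to straight down, snapped to 45-degree steps.
    # Image y grows downward, so a hanging arm gives angle 0.
    angle = math.degrees(math.atan2(wrist.x - shoulder.x,
                                    wrist.y - shoulder.y))
    return round(angle / 45.0) % 8

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            left = arm_sector(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                              lm[mp_pose.PoseLandmark.LEFT_WRIST])
            right = arm_sector(lm[mp_pose.PoseLandmark.RIGHT_SHOULDER],
                               lm[mp_pose.PoseLandmark.RIGHT_WRIST])
            letter = SEMAPHORE.get((left, right))
            if letter:
                print(letter)
cap.release()
```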

Honestly, the hack here is mostly a brain hack — learning a complex series of gestures and stringing them together fluidly isn’t easy. [Fletcher] used a few earworms to help him master the character set and tune his code; the inevitable Rickroll was quite artistic, and watching him nail the [Johnny Cash] song was strangely satisfying. We also thoroughly enjoyed the group number at the end. Ooga chaka FTW.

Continue reading “Modern Dance Or Full-Body Keyboard? Why Not Both!”

Spray-On Keyboard Is As Light As It Gets

We’ve all seen those ‘nothing’ keyboards, where the keys themselves are not much more than projected lasers, and users are asked to ritually beat their poor fingertips into the table — which has little give and even less clack. Well, a team at the Korea Advanced Institute of Science and Technology has come up with a way to eschew the keyboard altogether.

Essentially, the user wears a thin, breathable mesh of gold-coated silver nanowires embedded in a polyurethane coating. The mesh is sprayed onto the forearms and hands on the spot, and terminates in a small enclosure, also worn on the skin, that contains a Bluetooth unit beaming data back to a computer, a machine, or potentially another user wearing the same kind of unit.

As the skin stretches and contorts, the mesh senses small electrical changes within. These changes become meaningful with applied AI, which maps them to specific gestures and manual tasks. To that end, the team started by teaching the system to distinguish the patterns produced by typing on a phone, typing on a regular keyboard, and holding and interacting with six differently-shaped simple objects.
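The team’s actual model and training pipeline aren’t described here, but the general recipe is familiar: windowed sensor readings, labeled by task, fed to a classifier. Here’s a toy sketch with made-up data, just to show the shape of the pipeline.

```python
# Toy illustration of the general approach; data and classes are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in data: 600 windows of 64 samples each, across 8 invented classes
# (phone typing, keyboard typing, six object grasps).
X = rng.normal(size=(600, 64))
y = rng.integers(0, 8, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier().fit(X_train, y_train)
print("accuracy on held-out windows:", clf.score(X_test, y_test))
```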

The team isn’t stopping there — they plan to try capturing a larger range of motion by using the nanomesh on multiple fingers. In addition to facilitating communication between humans and machines, this could leave a huge fingerprint on gaming and VR.

Twitch And Blink Your Way Through Typing With This Facial Keyboard

For those that haven’t experienced it, the early days of parenthood are challenging, to say the least. Trying to get anything accomplished with a raging case of sleep deprivation is hard enough, but the little bundle of joy who always seems to need to be in physical contact with you makes doing things with your hands nigh impossible. What’s the new parent to do when it comes time to be gainfully employed?

Finding himself in such a boat, [Fletcher]’s solution was to build a face-activated keyboard to work around his offspring’s needs. Before you ask: no, voice recognition software wouldn’t work, at least according to the sleepy little boss who protests noisy awakenings. The solution instead was to first try OpenCV and the dlib facial recognition library to watch [Fletcher] blinking out Morse code. While that sorta-kinda worked, one’s blinkers can’t long endure such a workout, so he moved on to an easier set of gestures. Mouthing Morse code covers most of the keyboard, while a combination of eye, eyebrow, and other facial twitches and tics cover the rest, with MediaPipe’s Face Mesh doing the heavy lifting in terms of landmark detection.
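To get a feel for the landmark half of the problem, here’s a minimal sketch (not CheekyKeys itself) that uses Face Mesh to detect an open mouth, the kind of signal you could build Morse dot and dash timing on. The inner-lip landmark indices are conventional choices for Face Mesh, and the threshold is an assumption to tune per face and camera.

```python
# Minimal "mouth open" detector with MediaPipe Face Mesh; illustrative only.
import cv2
import mediapipe as mp

OPEN_THRESHOLD = 0.02   # normalized-coordinate gap; tune for your setup

cap = cv2.VideoCapture(0)
with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            gap = abs(lm[13].y - lm[14].y)   # upper vs. lower inner lip
            print("mouth open" if gap > OPEN_THRESHOLD else "mouth closed")
cap.release()
```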

The resulting facial keyboard, aptly dubbed “CheekyKeys,” performed well enough for [Fletcher] to use for a skills test during an interview with a Big Tech Company. Imagining the interviewer on the other end watching him convulse his way through the interview was worth the price of admission, and we don’t even care if it was a put-on. Video after the break.

CheekyKeys is pretty cool, doing something with a webcam and Python that we thought would have needed a dedicated AI depth camera to accomplish. But perhaps the real hack here was how [Fletcher] taught himself Morse in fifteen minutes.

Continue reading “Twitch And Blink Your Way Through Typing With This Facial Keyboard”

OpenCV Brings Pinch To Zoom Into The Real World

Gesture controls arrived in the public consciousness a little over a decade ago as touchpads and touchscreens became more popular. The main limitation of gesture controls, at least as far as [Norbert] is concerned, is that they can only manipulate objects in a virtual space. He was hoping to use gestures to control a real-world object instead, and created this device, which uses gestures to control an actual picture.

In this unique augmented reality device, not only is the object being controlled in the real world, but the gestures are monitored there as well, thanks to an OpenCV-based computer vision system watching his hand. The position data is fed into an algorithm which controls a physical picture mounted on a slender robotic arm. Now, when [Norbert] “pinches to zoom”, the servo attached to the picture physically brings it closer to or further from his field of view. He can also use other gestures to move the picture around.
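As a rough sketch of that pipeline (the write-up doesn’t spell out [Norbert]’s exact code, and MediaPipe’s hand tracker here is a convenient stand-in), you could measure the thumb-to-index fingertip distance and stream a servo angle over serial. The serial port and the line-based angle protocol are assumptions.

```python
# Illustrative pinch-to-servo mapping; not the original implementation.
import cv2
import mediapipe as mp
import serial

arm = serial.Serial("/dev/ttyUSB0", 9600)   # assumed port for the servo arm

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb, index = lm[4], lm[8]      # fingertip landmark indices
            pinch = ((thumb.x - index.x) ** 2 +
                     (thumb.y - index.y) ** 2) ** 0.5
            # Clamp the normalized distance and scale it to 0..180 degrees.
            angle = int(min(max(pinch, 0.0), 0.4) / 0.4 * 180)
            arm.write(f"{angle}\n".encode())  # assumed one-angle-per-line protocol
cap.release()
```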

While this gesture-controlled machine is certainly a proof-of-concept, there are plenty of other uses for gesture controls of real-world objects. Any robotics platform could benefit from an interface like this, or even something slightly more mundane like an office PowerPoint presentation. Opportunity abounds, but if you need a primer for OpenCV take a look at this build which tracks a hand in minute detail.

Continue reading “OpenCV Brings Pinch To Zoom Into The Real World”

Hedgehog Gesture Sensor Built With Cheap Time-of-Flight Modules

Time-of-flight sensors used to be expensive obscurities, capable of measuring the travel time of photons themselves and often used for tracking purposes. The technology is much cheaper now, however, and [jean.perardel] has used TOF sensors to build a useful and affordable gesture-tracking system.

The system relies on four VL53L1X time-of-flight sensors, each with a 16×16 scanning array, which communicate over the I2C bus. Controlling the show is an Arduino MKR1010, though the project should be achievable with a range of other microcontrollers, too.
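The firmware itself runs on the Arduino, but the swipe-direction logic is easy to sketch in Python: watch which sensor a hand trips first, then which it trips next. The read_distances() helper below is hypothetical, standing in for polling the four sensors over I2C, and the threshold and timeout are assumptions.

```python
# Sketch of swipe detection across four ranging sensors; illustrative only.
import time

THRESHOLD_MM = 150          # a hand closer than this "triggers" a sensor

def read_distances():
    # Hypothetical stand-in: return [left, right, top, bottom] ranges in mm.
    raise NotImplementedError

def detect_swipe():
    first = None
    deadline = time.time() + 1.0          # gesture must finish within 1 s
    while time.time() < deadline:
        hits = [i for i, d in enumerate(read_distances()) if d < THRESHOLD_MM]
        for i in hits:
            if first is None:
                first = i                  # which side the hand entered from
            elif i != first:
                order = (first, i)
                if order == (0, 1): return "swipe right"
                if order == (1, 0): return "swipe left"
                if order == (2, 3): return "swipe down"
                if order == (3, 2): return "swipe up"
        time.sleep(0.02)
    return None
```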

The device is built into a cute hedgehog-like form factor, with an LCD screen acting as the face. It displays facial expressions which show how the system is interpreting and responding to gestures. It gives the project lots of personality, which makes using the system more fun. Gestures from the system can be used to send keystrokes over USB, control relays or servos, or even fire IR signals to control TVs and other hardware.

It actually seems like a useful gesture control interface, one that could become a useful part of a workstation setup. We’ve seen gesture controls put to other uses too, like controlling robot arms. Video after the break.

Continue reading “Hedgehog Gesture Sensor Built With Cheap Time-of-Flight Modules”