Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and having a native speaker around makes the experience even better. Without one, though, it’s still possible to learn from videos, books, and software. The task gets much more complicated when the language in question isn’t spoken at all, like American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a computer vision model based on MobileNetV2, trained to recognize each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user must demonstrate it back to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes the user’s video frames in real time, and the user is rewarded when the correct sign is made.
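
No code ships with this summary, but the recognition loop is easy to picture. Here is a minimal sketch of that kind of loop, assuming a Keras MobileNetV2 model fine-tuned on the 26 letter signs; the model file name, class list, and confidence threshold are our own placeholders, not the team’s actual values:

```python
# Hypothetical sketch: classify ASL alphabet signs from a camera feed.
# Assumes a MobileNetV2 fine-tuned on 26 letter classes, saved as asl_mnv2.h5.
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

LETTERS = list(string.ascii_uppercase)       # class index -> letter (assumed)
model = load_model("asl_mnv2.h5")            # hypothetical model file

def predict_letter(frame):
    """Return (letter, confidence) for one BGR camera frame."""
    img = cv2.resize(frame, (224, 224))      # MobileNetV2 input size
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    x = img.astype(np.float32) / 127.5 - 1.0 # MobileNetV2 preprocessing
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    i = int(np.argmax(probs))
    return LETTERS[i], float(probs[i])

cap = cv2.VideoCapture(0)                    # PiCamera via V4L2 (setup varies)
target = "A"                                 # sign the user must demonstrate
while True:
    ok, frame = cap.read()
    if not ok:
        break
    letter, conf = predict_letter(frame)
    if letter == target and conf > 0.8:      # reward and advance to next sign
        print(f"Correct! That was {target}")
        break
cap.release()
```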

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Clever Stereo Camera Uses Sony Wireless Camera Modules

Stereophotography cameras are difficult to find, so we’re indebted to [DragonSkyRunner] for sharing their build of an exceptionally high-quality example. A stereo camera has two separate lenses and sensors a fixed distance apart, such that when the two resulting images are viewed individually with each eye there is a 3D effect. This camera takes two individual Sony cameras and mounts them on a well-designed wooden chassis, but that simple description hides a much more interesting and complex reality.

Sony once tested the photography waters with its QX series, a pair of unusual mirrorless camera models that took the form of just a sensor and lens, with a wireless connection to a smartphone providing the display and data transfer. This build uses two of these, with a pair of Android-running Odroid C2s standing in for the smartphones. Their HDMI video outputs are captured by a pair of HDMI capture devices hooked up to a Raspberry Pi 4, and a couple of Arduinos simulate mouse inputs to the Odroids. It’s a bit of a Rube Goldberg device, but it allows the system to use Sony’s original camera software. An especially neat feature is that the camera unit and display unit can be separated for remote photography, making it an extremely versatile camera.
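
As an aside, those inexpensive HDMI-to-USB capture devices typically enumerate as standard UVC webcams, so pulling both feeds into the Pi for a live stereo preview can be surprisingly simple. The sketch below is our own illustration of that idea, not [DragonSkyRunner]’s code, and the device indices are assumptions:

```python
# Sketch: grab both camera feeds from two UVC HDMI capture dongles
# and show them side by side as a stereo preview. Device indices vary.
import cv2

left = cv2.VideoCapture(0)    # left-eye capture dongle (assumed index)
right = cv2.VideoCapture(1)   # right-eye capture dongle (assumed index)

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    pair = cv2.hconcat([frame_l, frame_r])   # side-by-side stereo pair
    cv2.imshow("stereo preview", pair)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

left.release()
right.release()
cv2.destroyAllWindows()
```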

It’s good to see a stereo camera designed specifically for high-quality photography; previous ones we’ve seen have been closer to machine vision systems.

Aimbot Does It In Hardware

Anyone who has played an online shooter game in the past two or three decades has almost certainly come across a person or machine that cheats at the game by auto-aiming. For newer games with anti-cheat, this is less of a problem, but older games like Team Fortress 2 have been effectively ruined by these aimbots. These types of cheats are usually done in software, though, and [Kamal] wondered if he could build an aimbot that works directly on the hardware instead.

First, we’ll remind everyone frustrated with the state of games like TF2 that this is a proof-of-concept robot that is unlikely to make aimbots any worse or more common in any game. This is mostly because [Kamal] trained his machine on Aim Lab, a first-person-shooter training simulator, not a real multiplayer game. The robot works by taking a screenshot of the computer in Python and passing it through a computer vision algorithm that recognizes high-contrast targets. From there, a PID controller tells a set of omniwheels attached to the mouse where to point, and when the cursor is inside the hitbox, a mouse click is triggered.
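
To make the pipeline concrete, here’s a rough sketch of that sense-plan-act loop: grab the screen, find the largest bright blob, run a PID on the pixel error, and fire when the error is small. The gains, thresholds, and the motor interface below are placeholders, not [Kamal]’s actual values:

```python
# Sketch of the sense-plan-act loop: screenshot -> find target -> PID -> motors.
# The motor "interface" here is just a print; the real build drives omniwheels.
import cv2
import numpy as np
from mss import mss

KP, KI, KD = 0.4, 0.01, 0.05           # PID gains (placeholder values)

def find_target(gray):
    """Return (x, y) of the largest bright, high-contrast blob, or None."""
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

grab = mss()
mon = grab.monitors[1]                  # primary monitor
integral = np.zeros(2)
prev_err = np.zeros(2)

while True:
    shot = np.array(grab.grab(mon))     # BGRA screenshot
    gray = cv2.cvtColor(shot, cv2.COLOR_BGRA2GRAY)
    target = find_target(gray)
    if target is None:
        continue
    center = np.array([shot.shape[1] / 2, shot.shape[0] / 2])
    err = np.array(target) - center     # cursor assumed at screen center
    integral += err
    deriv = err - prev_err
    prev_err = err
    cmd = KP * err + KI * integral + KD * deriv
    print(f"motor command x={cmd[0]:+.1f} y={cmd[1]:+.1f}")  # -> omniwheel driver
    if np.linalg.norm(err) < 10:        # cursor inside the hitbox: fire
        print("click")
```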

While it might seem straightforward, building the robot and then, more importantly, tuning the PID controller took [Kamal] over two months before he could rival professional FPS players at the aim trainer. It’s an impressive build, though, and if one of his omniwheel motors hadn’t burned out, it might have exceeded the top human scores on the platform. If you’d rather have a bot that makes you worse at a game instead of better, head over to this build, which plays Valorant by passing game information between two computers.

Continue reading “Aimbot Does It In Hardware”

[Image: plants compared side by side, with the LED-illuminated plants growing far more than the sunlight-only plants]

Plant Growth Accelerated Tremendously With LEDs

[GreatScott!] was bummed to see his greenhouse sit empty and lifeless through the winter. So, he set out to take the greenhouse home with him. Well, at least a small part of it: he decided to produce artificial sunlight, setting up a simple initial experiment playing with LEDs of different wavelengths. How much can LEDs really affect plant growth? This is a research direction that Würth Elektronik, which supported the project, has recently been expanding into. They’ve been working on extensive application notes explaining the biology involved for us: a treasure trove of resources available at no cost that hackers can and should learn from.

Initially, [GreatScott!] obtained LEDs in four different colors: red, ‘hyper red’, deep blue, and daylight spectrum. The first three are valued because their specific wavelengths are absorbed well by plants; the use of daylight LEDs is more controversial. Nevertheless, he points out that plants might need different wavelengths for things other than photosynthesis, and the daylight LEDs certainly help with assessing the plants visually as the experiment goes on.

[Image: four cut tapes of the LEDs used in this experiment, laid out side by side on the desk]

Next, [GreatScott!] borrowed parts of Würth’s LED driver designs, building an Arduino PWM driver adjusted with simple potentiometers, and used it as the basis for his own board to host the LEDs.

An aluminum PCB increases heat dissipation, prolonging the LEDs’ lifespan. [GreatScott!] reflowed the LEDs onto it with solder paste, only to find that the ‘hyper red’ LEDs had died in the process. Thankfully, by the time this problem reared its head, he had managed to obtain the official horticulture devkit, with an LED panel ready to go.

[GreatScott!]’s test subjects were arugula plants, whose leaves you often find on prosciutto pizza. Having built a setup with two sets of flower pots, one LED-equipped and one LED-less, he put both of them on his windowsill. The plants were equally exposed to sunlight and equally watered, and the LED duty cycle was set to ballpark values.

The results were staggering, as you can see in the picture above: nothing changed between the two pots except the added LEDs. The experiment, complete with a taste test using a pizza as the test substrate, was a huge success, and [GreatScott!] recommends hitting Würth up for free samples as we embark on our own plant growth improvement journeys.

Horticulture (a.k.a. growing plants) is one of the areas where hackers, armed with troves of freely available knowledge, can make big strides, and we’re not even talking about the kind of plants our commenters are sure to mention. The field of plant growth is literally fruitful and ripe for the picking: you can accomplish a whole lot with surprisingly little effort. The plants on your windowsill don’t have to be purely decorative, and a small desktop setup you hack together can easily scale up. Some hackers understood that, and we were seeing automated growing solutions well before the Raspberry Pi was even a thing. The best part is that you only need a few LEDs to start.

Continue reading “Plant Growth Accelerated Tremendously With LEDs”

Hacking Toy RC Cars With The HackRF One

The origin story of many who’d call themselves members of the hacker community usually starts with taking things apart as a child just to see how they worked. For [Radoslav], that trend doesn’t seem to have slowed down, and he’s continued taking toys apart. Although, since it’s his daughter’s little radio-controlled car, he stuck to a non-destructive teardown. The result? He can control the car from his laptop through a HackRF One SDR transceiver, as shown in the video below the break.

[Radoslav] is no stranger to reverse engineering embedded devices, IoT gadgets, and more, so he started with the publicly available information about the radio control interface in use. Many electronic devices sold in the US must be certified by the FCC (Federal Communications Commission) and prominently display their ID number, and this toy was no exception. The FCC database gave [Radoslav] enough information to know that the communication protocol is modulated as GFSK, a type of frequency-shift keying.
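
For anyone wanting to follow along at home, recovering the bit stream from a GFSK capture mostly comes down to a quadrature discriminator. Here’s a bare-bones sketch over a HackRF IQ recording; the file name, sample rate, and symbol rate are our assumptions, not values from [Radoslav]’s write-up:

```python
# Sketch: quadrature (polar) discriminator over a HackRF IQ recording.
# GFSK encodes bits as frequency shifts, so the sign of the instantaneous
# frequency recovers the bit stream. File name and rates are assumptions.
import numpy as np

SAMPLE_RATE = 2_000_000            # Hz, assumed capture rate
SYMBOL_RATE = 100_000              # Hz, assumed from eyeballing the waterfall
SPS = SAMPLE_RATE // SYMBOL_RATE   # samples per symbol

# hackrf_transfer records interleaved signed 8-bit I/Q samples
raw = np.fromfile("rc_car.iq", dtype=np.int8).astype(np.float32)
iq = raw[0::2] + 1j * raw[1::2]

# Instantaneous frequency: phase difference between consecutive samples
freq = np.angle(iq[1:] * np.conj(iq[:-1]))

# Naive symbol slicer: sample the middle of each symbol period
bits = (freq[SPS // 2::SPS] > 0).astype(np.uint8)
print("".join(map(str, bits[:64])))   # first bits; framing comes next
```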

He fired up his favorite radio signal analysis tool and got to work on the protocol itself. Along the way he found that communication between the car and controller is bidirectional, but also very easy to get around. The result is that he can drive the car around with his laptop. Definitely a cool hack, but for this one, the journey was surely the goal, not the destination.
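
Going the other way, driving the car means synthesizing valid frames and replaying them over the air. Below is a hedged sketch of GFSK synthesis; the bit pattern, deviation, and rates are stand-ins, since the car’s actual protocol is exactly what [Radoslav] had to recover from real captures:

```python
# Sketch: synthesize a GFSK burst as int8 IQ that hackrf_transfer can replay.
# Bit pattern, deviation, and rates below are placeholders, not the real protocol.
import numpy as np

SAMPLE_RATE = 2_000_000
SYMBOL_RATE = 100_000
DEVIATION = 50_000                 # Hz frequency deviation (assumed)
SPS = SAMPLE_RATE // SYMBOL_RATE

bits = np.array([1, 0, 1, 0, 1, 1, 0, 0], dtype=np.float32)  # placeholder frame
symbols = np.repeat(2 * bits - 1, SPS)          # NRZ waveform at sample rate

# Gaussian pulse shaping: the "G" in GFSK, smooths frequency transitions
t = np.arange(-2 * SPS, 2 * SPS + 1) / SPS
gauss = np.exp(-2 * np.pi**2 * (0.5 * t) ** 2)  # roughly BT ~ 0.5 kernel
gauss /= gauss.sum()
shaped = np.convolve(symbols, gauss, mode="same")

# Integrate frequency to phase, then form the complex baseband signal
phase = 2 * np.pi * np.cumsum(DEVIATION * shaped) / SAMPLE_RATE
iq = np.exp(1j * phase)

out = np.empty(2 * iq.size, dtype=np.int8)      # interleave I and Q
out[0::2] = np.round(iq.real * 127)
out[1::2] = np.round(iq.imag * 127)
out.tofile("burst.iq")  # replay: hackrf_transfer -t burst.iq -f <freq> -s 2000000
```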

If hacking on RC cars really gets your wheels turning, you might like this little RC car that can drive on the ceiling. Or if you’re feeling a bit hungry, check out how you can use the HackRF to nab a table at your local restaurant.

Continue reading “Hacking Toy RC Cars With The HackRF One”

New Tech And The Old Ways

This week on Hackaday, we featured a project that tickled my nostalgia bone, and proved that there are cool opportunities when bringing new tech to old problems. Let me explain.

[Muth] shared a project with us that combines old-school analog photography printing with modern LCD screens. The basic idea is to use a 4K monochrome screen in place of a negative, making a contact print by placing the screen directly on top of photographic paper and exposing it under a uniform light source. Just like the old ways, but with an LCD instead of film.

[Image: LCD exposure animation]

But what’s the main difference between a screen and film? You can change the image on the LCD at will, of course. So when [Muth] was dialing in exposures, it dawned on him that he could create a dynamic, animated version of his image and progressively expose different portions of the paper, extending the available dynamic range and giving him control over the slightest nuances of the resulting image contrast.

As an old photo geek, I can tell you this is the sort of trick we would pull off manually in the darkroom all the time. “Dodging” lightens a section of the image by covering up the projected light with your hand or a special tool for part of the exposure time. With [Muth]’s procedure, he can dodge the image programmatically at the per-pixel level. We would have killed for this ability back in the day.
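
In other words, the trick reduces to a tidy bit of math: on an LCD “negative”, a pixel doesn’t get brighter or darker so much as it gets displayed for more or fewer frames. Here’s a sketch of how one might turn a per-pixel exposure map into such a frame stack; this is entirely our own illustration, not [Muth]’s code:

```python
# Sketch: turn a per-pixel exposure map into a stack of binary frames.
# Each pixel stays "open" (lit on the LCD) for a number of frames
# proportional to how much light its spot on the paper should receive.
import numpy as np

N_FRAMES = 64                               # exposure steps per print

def exposure_stack(exposure):
    """exposure: float array in [0, 1], desired relative exposure per pixel.
    Returns an (N_FRAMES, h, w) uint8 array of binary frames to display."""
    steps = np.round(exposure * N_FRAMES).astype(int)
    idx = np.arange(N_FRAMES).reshape(-1, 1, 1)
    return (idx < steps).astype(np.uint8) * 255

# Example: dodge the top half, giving it only 60% of the full exposure
exposure = np.ones((480, 640), dtype=np.float32)
exposure[:240, :] = 0.6
frames = exposure_stack(exposure)
print(frames.shape, frames[0].mean(), frames[-1].mean())
```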

The larger story here is that by trying something out of the box, applying a new tool to an old procedure, [Muth] stumbled on new capabilities. As hackers, we’re playing around with the newest tech we can get our hands on all the time. When you do, you might stumble on new possibilities simply afforded by that new tech. Keep your eyes open!

Training Doppler Radar With Smart Watch IMU Data For Activity Recognition

When it comes to interpreting sensor data automatically, it helps to have a large data set, both to validate against and, in the case of machine learning (ML), to train with. Creating such a data set, with carefully tagged and categorized information, is a long and tedious process, which is where the idea of cross-domain translation comes into play, as in the case of using millimeter-wave (mmWave) radar sensors to recognize the activity of, say, building occupants with the IMU2Doppler project at Carnegie Mellon University’s Smash Lab.

The sensors most commonly used to classify human motion are inertial measurement units (IMUs), i.e. accelerometers and gyroscopes, which are found in everything from smartphones to smart watches and fitness bands. For these devices, it’s common to classify measurement patterns as matching a particular activity, such as walking, jogging, or brushing one’s teeth. This makes them both well-defined and very accessible.

The reason an mmWave-based Doppler radar might be preferred for monitoring, say, building occupants is privacy, compared to using cameras, along with the inconvenience of equipping people with body-worn IMUs. Using Doppler radar, it would theoretically be possible for people to track activities within their own home, for a medical setting to ensure patients are safe, or for a gym to track performance and equipment usage, all without cameras or personal sensors. In the past, we’ve seen a similar approach that used targeted laser beams.
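
The paper has the full training recipe, but the core cross-domain idea can be caricatured in a few lines of PyTorch: let a classifier already trained on smartwatch IMU data pseudo-label time-synchronized Doppler windows, then train the radar-only model on those labels. Everything below, shapes and models included, is illustrative rather than the authors’ actual architecture:

```python
# Illustrative sketch of cross-domain supervision: a pretrained IMU activity
# classifier pseudo-labels time-aligned Doppler windows, which then train a
# radar-only model. Shapes, models, and data here are all placeholders.
import torch
import torch.nn as nn

N_ACTIVITIES = 10

imu_teacher = nn.Sequential(               # stand-in for a trained IMU model
    nn.Flatten(), nn.Linear(6 * 128, 64), nn.ReLU(), nn.Linear(64, N_ACTIVITIES))
doppler_student = nn.Sequential(           # stand-in for the radar model
    nn.Flatten(), nn.Linear(64 * 128, 64), nn.ReLU(), nn.Linear(64, N_ACTIVITIES))

opt = torch.optim.Adam(doppler_student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One synchronized batch: IMU windows (6 axes) and Doppler spectrogram windows
imu_batch = torch.randn(32, 6, 128)        # placeholder recorded IMU data
doppler_batch = torch.randn(32, 64, 128)   # placeholder Doppler frames

with torch.no_grad():                      # teacher labels, no gradients needed
    pseudo_labels = imu_teacher(imu_batch).argmax(dim=1)

opt.zero_grad()
loss = loss_fn(doppler_student(doppler_batch), pseudo_labels)
loss.backward()
opt.step()
print(f"student loss on pseudo-labels: {loss.item():.3f}")
```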

As promising as this sounds, for now the number of activities recognized with reasonable accuracy (~70%) is limited to ten. Depending on the intended application that may already be sufficient, though as the published paper notes, there is still a lot of room for growth.

Continue reading “Training Doppler Radar With Smart Watch IMU Data For Activity Recognition”