How To Hide A Photo In A Photo

If you’ve ever read up on the basics of cryptography, you’ll be aware of steganography, the practice of hiding something inside something else. It’s a process that works with digital photographs and is the subject of an article by [Aryan Ebrahimpour]. The article describes the process at a high level that’s easy for non-maths wizards to understand. We’re sure Hackaday readers have plenty of their own ideas after reading it.

The process relies on the eye’s inability to perceive changes in the least significant bit (LSB) of each pixel. In short, small changes in colour or brightness across an image are imperceptible to the naked eye but readable from the raw file with no problems. Thus the bits of a smaller bitmap can be placed in the LSB of each byte in a larger one, and the viewer is none the wiser.
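For a feel of how little code the embedding side takes, here’s a minimal Python sketch, assuming the cover and secret images are the same size and hiding a one-bit-per-pixel secret in the red channel’s LSB. The file names are placeholders, and this isn’t the code from [Aryan Ebrahimpour]’s article.

```python
# A minimal LSB-embedding sketch, not the article's implementation.
# Assumes cover.png and secret.png (placeholder names) have identical dimensions.
import numpy as np
from PIL import Image

cover = np.array(Image.open("cover.png").convert("RGB"), dtype=np.uint8)
secret = (np.array(Image.open("secret.png").convert("L")) > 127).astype(np.uint8)

# Clear the red channel's least significant bit, then write the secret bit into it.
stego = cover.copy()
stego[..., 0] = (cover[..., 0] & 0xFE) | secret
Image.fromarray(stego).save("stego.png")

# Recovery is just reading the LSBs back out.
recovered = (np.array(Image.open("stego.png"))[..., 0] & 1) * 255
Image.fromarray(recovered.astype(np.uint8)).save("recovered.png")
```

One practical note: the stego image has to be stored in a lossless format such as PNG, since lossy compression scrambles exactly the bits that carry the secret.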

We’re guessing that the increased noise in the image data would be detectable through statistical analysis, but this should be enough to provide some fun. If you’d like a closer look, there’s even some code to play with. Meanwhile, as we’re on the topic, this isn’t the first time Hackaday has touched on steganography.

The Eurorack rail piece, just printed in white plastic, not yet folded, with a folded example in the upper right corner

Bend Your Prints To Eliminate Supports

When designing even a reasonably simple 3D-printable part, you need to account for all the supports it will require to print well. Strategic offsetting, chamfering, and filleting are firmly in our toolkits. Over time we’ve learned to dial our settings in so that, hopefully, we don’t have to fumble around with an X-Acto knife after the bed has cooled down. On Twitter, [Chris] shows off his foldable 3D print experiments (nitter) that work around the support problem by printing the part as a single flat piece that folds into a block as soon as you pop it off the bed.

The main ingredients of this trick seem to be the cross-sectional shape of the fold line, and the alignment of the bottom layer lines perpendicular to the direction of the folds. [Chris] shows a cross-section of his FreeCAD design, sharing the dimensions he has found to work best.

Of course, this is Twitter, so other hackers are making suggestions to improve the design — like this sketch of a captive wedge likely to improve alignment. As for layer line direction alignment, [Chris] admits to winging it by rotating the part in the slicer until the layer lines are oriented just right. People have been experimenting with this for some time now, and tricks like these are always a welcome addition to our toolkits. You might be wondering – what kinds of projects are such hinges useful for?

The example [Chris] provides is a Eurorack rail segment — due to the kind of overhangs required, you’d be inclined to print it vertically, taking a hit to the print time and introducing structural weaknesses. With this trick, you absolutely don’t have to! You can also go way further and 3D print a single-piece foldable Raspberry Pi Zero case, available on Printables, with only two extra endcaps needed to hold it together.

Foldable 3D prints aren’t new, though we typically see them done with print-in-place hinges that are technically separate pieces. This trick is a radical way to avoid both supports and separate pieces altogether. In laser cutting, a similar technique has long been known as the “living hinge”, but it hasn’t made much of a jump into 3D printing, save for a few manufacturing-grade processes. Hinges like these aren’t generally meant to bend many times before they break. It’s possible to work around that, too — last time we talked about this, it was an extensive journey that combined plastic and fabric to produce incredibly small 3D printed robots!

We thank [Chaos] for sharing this with us!

An oscilloscope with its probes stored in drawers below it

Clever Scope Probe Drawers Keep Your Workbench Tidy

Probes are an essential component of a good oscilloscope system, but they have the nasty habit of cluttering up your workbench. If you have a four-channel scope, it’s not just several meters of cable that get in the way everywhere, but also four sets of all those little clips, springs, cable markers, and adjustment screwdrivers that need to be stored safely.

[Matt Mets] came up with a clever solution to this problem: a 3D printed cable organizer that neatly fits below your scope. It has four drawers, each of which has enough space to store a complete probe and a little compartment for all its accessories. A cable cutout at the front allows you to keep the probes plugged in even when they’re not in use.

It’s a beautifully simple solution to a common problem, and with the STL files available on Printables anyone with a cluttered workbench can build one for themselves. If, however, you’d like to keep those probes even closer at hand, have a look at these probe caddies.

Mini MIDI Synth Uses Minimum Number Of Parts

The ’80s were the golden age of synthesizers in pop music. Hugely complicated setups that spared no expense were the norm, with synths capable of recreating anything from pianos and guitars to percussion, strings, and brass. These types of setups aren’t strictly necessary if you’re looking to make music, though, especially in the modern age of accessible microcontrollers. This MIDI-capable synthesizer from [Folkert], for example, creates catchy tunes with only a handful of parts.

This tiny synth is built around an ESP32 and works by generating audio with the PWM hardware normally meant for driving LEDs. In this case, the PWM signals are sent through a rudimentary amplifier and then on to an audio output device. That could be a small speaker, an audio jack to another amplifier, or a capture device.
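To get a rough feel for the underlying idea of coaxing square-wave tones out of an LED-style PWM peripheral, here’s a minimal MicroPython sketch. It is not [Folkert]’s firmware: the output pin is an arbitrary choice, and the real build layers MIDI parsing and eight voices on top of the same trick.

```python
# A rough MicroPython sketch of the "LED PWM as audio" idea, not the actual firmware.
# GPIO 25 and the note list are assumptions; the output still wants an amplifier.
from machine import Pin, PWM
import time

speaker = PWM(Pin(25))      # any free GPIO will do
speaker.duty(512)           # roughly 50% duty cycle (0-1023 on the ESP32 port)

# Step through an A-major arpeggio by retuning the PWM frequency per note.
for freq in (440, 554, 659, 880):
    speaker.freq(freq)
    time.sleep(0.25)

speaker.deinit()            # silence the pin when done
```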

The synth’s eight channels use up most of the ESP32’s I/O and provide a sound that’s reminiscent of the eight-bit video game era. The total parts count for this build is shockingly small, with only a handful of resistors, the ESP32, an optocoupler, and a few jacks.

For those wishing to experiment with synthesizers, a build like this is attractive because all the parts needed are likely already sitting around in a drawer somewhere, with the possible exception of the 5-pin DIN jacks needed for MIDI. Either way, [Folkert] has made all of the schematics available on the project page along with some sample MP3 files. For those looking to use parts from old video game systems sitting in their parts drawer, though, take a look at this synthesizer built out of a Sega Genesis.

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and it helps to have a native speaker around to improve the learning experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much more complicated when the language isn’t spoken at all, though, like American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a MobileNetV2 computer vision model trained on each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user needs to demonstrate it to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes frames of the user in real time. The user is shown pictures of the correct sign and is rewarded when the correct sign is made.
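As a rough idea of what that loop can look like, here’s a short Python sketch pairing OpenCV capture with a Keras MobileNetV2 classifier. The model file name, label order, input scaling, and camera index are assumptions rather than the Glasgow team’s actual code.

```python
# A minimal capture-and-classify loop in the spirit of the project.
# Model file, labels, and camera index are placeholders, not the real project's values.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]   # hypothetical label order
model = load_model("asl_alphabet_mobilenetv2.h5")          # hypothetical fine-tuned model

cap = cv2.VideoCapture(0)   # PiCamera exposed as a V4L2 device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MobileNetV2 expects 224x224 RGB input scaled to [-1, 1].
    roi = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    batch = np.expand_dims(roi.astype(np.float32) / 127.5 - 1.0, axis=0)
    probs = model.predict(batch, verbose=0)[0]
    guess = LABELS[int(np.argmax(probs))]

    cv2.putText(frame, f"Sign: {guess}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL trainer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```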

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Clever Stereo Camera Uses Sony Wireless Camera Modules

Stereophotography cameras are difficult to find, so we’re indebted to [DragonSkyRunner] for sharing their build of an exceptionally high-quality example. A stereo camera has two separate lenses and sensors a fixed distance apart, such that when the two resulting images are viewed individually with each eye, there is a 3D effect. This camera takes two individual Sony cameras and mounts them on a well-designed wooden chassis, but that simple description hides a much more interesting and complex reality.

Sony once tested the photography waters with the QX series, a pair of unusual mirrorless camera models that took the form of just a sensor and lens. A wireless connection to a smartphone allows for display and data transfer. This build uses two of these, with a pair of Android-running Odroid C2s standing in for the smartphones. Their HDMI video outputs are captured by a pair of HDMI capture devices hooked up to a Raspberry Pi 4, and there are a couple of Arduinos that simulate mouse inputs to the Odroids. It’s a bit of a Rube Goldberg device, but it allows the system to use Sony’s original camera software. An especially neat feature is that the camera unit and display unit can be separated for remote photography, making it an extremely versatile camera.
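If you only wanted to replicate the capture end, the two HDMI dongles show up as ordinary V4L2 webcams on the Pi, so a quick side-by-side preview is a few lines of Python and OpenCV. The device indices and resolution below are guesses, not values from [DragonSkyRunner]’s build.

```python
# Sketch of grabbing both HDMI capture dongles and viewing the stereo pair.
# The /dev/video indices and resolution are assumptions.
import cv2
import numpy as np

left = cv2.VideoCapture(0)    # first HDMI capture device
right = cv2.VideoCapture(2)   # second HDMI capture device (indices vary)

for cap in (left, right):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    # Show the two eyes next to each other for a quick cross-eyed preview.
    cv2.imshow("stereo pair", np.hstack((frame_l, frame_r)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

left.release()
right.release()
cv2.destroyAllWindows()
```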

It’s good to see a stereo photography camera designed specifically for high-quality photography; previous ones we’ve seen have been closer to machine vision systems.

Aimbot Does It In Hardware

Anyone who has played an online shooter game in the past two or three decades has almost certainly come across a person or machine that cheats at the game by auto-aiming. For newer games with anti-cheat, this is less of a problem, but older games like Team Fortress have been effectively ruined by these aimbots. These types of cheats are usually done in software, though, and [Kamal] wondered if he would be able to build an aimbot that works directly in hardware instead.

First, we’ll remind everyone frustrated with the state of games like TF2 that this is a proof-of-concept robot that is unlikely to make aimbots worse or more common in any game. This is mostly because [Kamal] is training his machine to work in Aim Lab, a first-person shooter training simulation, and not in a real multiplayer video game. The robot works by taking a screenshot of his computer in Python and passing the information through a computer vision algorithm that recognizes high-contrast targets. From there, a PID controller is used to tell a series of omniwheels attached to the mouse where to point, and when the cursor is in the hitbox, a mouse click is triggered.
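To make that loop concrete, here’s a stripped-down Python sketch of the screenshot, detect, and correct cycle. The brightness threshold, PID gains, and the print call standing in for motor commands are all assumptions; [Kamal]’s build drives physical omniwheels attached to the mouse rather than calling any API.

```python
# A stripped-down sketch of the screenshot -> detect -> correct loop.
# Threshold, gains, and the print() placeholder are assumptions, not [Kamal]'s code.
import time
import cv2
import numpy as np
from mss import mss

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid_x, pid_y = PID(0.4, 0.0, 0.05), PID(0.4, 0.0, 0.05)
grabber = mss()
monitor = grabber.monitors[1]                      # primary display
center = (monitor["width"] / 2, monitor["height"] / 2)

last = time.time()
while True:
    # Grab the screen and look for a bright, high-contrast blob to treat as the target.
    frame = np.array(grabber.grab(monitor))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)

    now = time.time()
    dt, last = now - last, now
    if m["m00"] > 0:
        target = (m["m10"] / m["m00"], m["m01"] / m["m00"])   # blob centroid
        # Error is the offset between target and crosshair; the PID outputs would
        # be scaled into motor commands for the omniwheel rig.
        dx = pid_x.update(target[0] - center[0], dt)
        dy = pid_y.update(target[1] - center[1], dt)
        print(f"nudge mouse by ({dx:+.1f}, {dy:+.1f})")
    time.sleep(0.01)
```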

While it might seem straightforward, building the robot and then, more importantly, tuning the PID controller took [Kamal] over two months before he was able to rival pro FPS players at the aim trainer. It’s an impressive build, though, and if one of his omniwheel motors hadn’t burned out, it might have exceeded the top human scores on the platform. If you would like a bot that makes you worse at a game instead of better, head over to this build, which plays Valorant by passing game information between two computers.
