A common criticism we hear of cyberdecks is that functionality too often takes a backseat to aesthetics — in other words, they might look awesome, but they aren’t the kind of thing you’re likely to use as a daily driver. It’s not an assessment we necessarily disagree with, though we also don’t hold it against anyone who’s more interested in honing their build’s retro-futuristic looks than its computational potential.
That said, when a build comes along that manages to strike a balance between style and function, we certainly take notice. The vecdec, built by [svenscore], is a perfect example. We actually came across this one in the Desert of the Real, also known as the outskirts of Philadelphia, while stalking the chillout room at JawnCon 0x1. When everyone else in the room is using a gleaming MacBook or a beat-up ThinkPad, its wildly unconventional design certainly grabs your attention. But spend a bit of time checking out the hardware and chatting with its creator, and you realize it’s not just some cyberpunk prop.
[Peng Zhihui] seems to have found some spare time and energy to crank out another sweet robot build, this time a much smaller and cuter emoji-bot (original GitHub link), with the usual production-ready levels of attention to detail. With a lot of fine detail in the 3D printed models, this is one for SLS printing in nylon, but that can be done for a reasonable outlay, in China at least. The electronics package consists of a few tiny, fully custom PCBs designed with Altium Designer, with off-the-shelf modules for the circular LCD and camera. The main board hosts an STM32F405 and deals with the display and SD card; this particular STM32 was chosen because it can connect to an external USB3300 high-speed USB PHY. A separate sensor PCB handles the gesture sensor, a USB hub, an MPU6050 6-axis IMU, and the USB camera module. This board attaches to the USB-C connector in the base via an FFC cable, allowing the robot to rotate on its base.
[Peng] clearly has exacting standards as to how things should work, and we guess he wanted the arms to be back-drivable in a way that enabled the host computer to track and record the motor positions for replaying later on. The connection back to the controller is via I2C, allowing all five servos to hang on the same bus and saving precious resources. Smart! Getting a processor and motor driver into such a tiny space was a bit of a challenge, but a walk in the park for [Peng], as he demonstrates in the video embedded below (we believe English subtitles are pending!). The arm mechanism is particularly interesting and rather elegantly executed, and he seems rather proud of this part of the design, as well he should be! As with [Peng’s] other projects, there is a lot to see, and plenty of scope for feature explosion. It was nice to see the ‘bot being used as an input device, not only with gesture sensing via the dedicated sensor, but also using the camera with OpenCV to track user posture and act accordingly. This thing could act as a genuinely useful AI device while being darn cute at the same time!
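To see why a shared bus is such a win for record-and-replay, consider what the host-side loop could look like. The sketch below is purely illustrative: the I2C addresses and register map are our own inventions for the sake of the example, not [Peng’s] actual protocol.

```python
# Hypothetical sketch: five servo controllers sharing one I2C bus.
# Addresses and registers are invented for illustration; the real
# robot's firmware defines its own protocol.
from smbus2 import SMBus

SERVO_ADDRS = [0x10, 0x11, 0x12, 0x13, 0x14]  # one assumed address per joint
REG_POSITION = 0x00   # hypothetical "current position" register
REG_TARGET = 0x02     # hypothetical "target position" register

def record_pose(bus):
    """Read the current position of every joint for later replay."""
    return [bus.read_word_data(addr, REG_POSITION) for addr in SERVO_ADDRS]

def replay_pose(bus, pose):
    """Drive every joint back to a previously recorded pose."""
    for addr, position in zip(SERVO_ADDRS, pose):
        bus.write_word_data(addr, REG_TARGET, position)

with SMBus(1) as bus:          # bus 1 is typical on a Raspberry Pi
    pose = record_pose(bus)    # capture the back-driven arm position
    replay_pose(bus, pose)     # ...and play it back
```

The nice part is that adding a sixth joint would cost nothing more than another address on the list, rather than another set of wires.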
The Joo Janta 200 super-chromatic peril-sensitive sunglasses were designed to help people develop a relaxed attitude to danger. Following the principle of ‘what you don’t know can’t hurt you,’ these glasses turn completely opaque at the first sign of danger. In turn, this prevents you from seeing anything that might alarm you.
Here we see the beginnings of the Joo Janta hardware empire. For his Hackaday Prize entry, [matt] has created Nope Glasses. Is that meeting running long? Is your parole officer in your face again? Just Nope right out of that with a wave of the hand.
The Nope Glasses are two LCD shutters mounted in a pair of 3D printed glasses. On the bridge of the glasses sits an APDS-9960 gesture sensor that tracks a hand waving in front of them. Waving your hand down darkens the shutters, and waving up makes them clear again; waving left flashes them between clear and dark, and waving right alternates the two shutters.
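For a sense of how little code that gesture interface takes, here’s a rough sketch written against Adafruit’s CircuitPython APDS9960 driver. The set_shutters() function is a hypothetical placeholder, and [matt’s] actual firmware may well differ.

```python
# Rough sketch of the gesture-to-shutter mapping, using Adafruit's
# CircuitPython APDS9960 driver. set_shutters() is a hypothetical
# placeholder for the hardware-specific shutter drive.
import time
import board
from adafruit_apds9960.apds9960 import APDS9960

i2c = board.I2C()
apds = APDS9960(i2c)
apds.enable_proximity = True   # the gesture engine needs proximity enabled
apds.enable_gesture = True

def set_shutters(left_dark, right_dark):
    """Placeholder: drive the two LCD shutters (hardware-specific)."""
    ...

while True:
    gesture = apds.gesture()   # 0 = none, 1 = up, 2 = down, 3 = left, 4 = right
    if gesture == 2:           # wave down: Nope. Darken both shutters.
        set_shutters(True, True)
    elif gesture == 1:         # wave up: back to transparent
        set_shutters(False, False)
    elif gesture == 3:         # wave left: flash both between clear and dark
        for dark in (True, False, True, False):
            set_shutters(dark, dark)
            time.sleep(0.25)
    elif gesture == 4:         # wave right: alternate the two shutters
        for i in range(4):
            set_shutters(i % 2 == 0, i % 2 == 1)
            time.sleep(0.25)
```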
In all seriousness, there is one very interesting thing about this project: how [matt] attached the LCD shutters to his glasses. He simply took a picture of the front and top of his glasses, converted those to 1-bit BMPs, and imported them into OpenSCAD. This gave him a pretty good idea of the shape of his glasses, allowing him to model an ‘attachment’ that fits them. It’s great work, and we’d really like to see more of this technique.
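If you want to try the image half of that workflow yourself, a few lines of Pillow will do it. This is just a sketch; the filenames and threshold are our own assumptions, not [matt’s] exact process.

```python
# Convert photos of the glasses into 1-bit BMPs suitable for tracing
# in OpenSCAD. Filenames are examples; the threshold may need tuning
# for your lighting and background.
from PIL import Image

for view in ("front", "top"):
    img = Image.open(f"glasses_{view}.jpg").convert("L")   # to grayscale
    bw = img.point(lambda p: 255 if p > 128 else 0)        # hard threshold
    bw.convert("1").save(f"glasses_{view}.bmp")            # 1-bit BMP out
```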
[pepelepoisson]’s Miroir Magique (“Magic Mirror”) is an interesting take on the smart mirror concept; it’s intended to be a playful, interactive learning tool for kids who are at an age where language and interactivity are deeply interesting to them, but whose ceaseless demands for examples of spelling and writing can be equally exhausting. Inspiration came from his own five-year-old, who can neither read nor write but nevertheless has a bottomless fascination with the writing and spelling of words, phrases, and numbers.
The magic is all in the simple interface. Magic Mirror waits for activation (a simple pass of the hand over a sensor) then shows that it is listening. Anything it hears, it then displays on the screen and reads back to the user. From an application perspective it’s fairly simple, but what’s interesting is the use of speech-to-text and text-to-speech functions not as a means to an end, but as an end in themselves. A mirror in more ways than one, it listens and repeats back, while writing out what it hears at the same time. For its intended audience of curious children fascinated by the written and spoken aspects of language, it’s part interactive toy and part learning tool.
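For the curious, a bare-bones version of that listen-and-repeat loop might look something like this in Python, using the off-the-shelf speech_recognition and pyttsx3 packages. It’s only a sketch of the idea; [pepelepoisson]’s actual implementation is linked below and may use entirely different services.

```python
# Minimal listen-and-repeat loop in the spirit of the Magic Mirror,
# built on the speech_recognition and pyttsx3 packages. Illustrative
# only; the real project's code (in French) lives in its GitHub repo.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
voice = pyttsx3.init()

while True:
    with sr.Microphone() as source:
        audio = recognizer.listen(source)      # wait for a phrase
    try:
        text = recognizer.recognize_google(audio, language="fr-FR")
    except sr.UnknownValueError:
        continue                               # nothing intelligible; retry
    print(text)                                # show the phrase on screen
    voice.say(text)                            # ...and read it back aloud
    voice.runAndWait()
```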
Like most smart mirror projects, the technological elements are all hidden; the screen is behind a one-way mirror, the speakers are out of sight, and the only inputs are a gesture sensor and a microphone embedded in the frame. Thus equipped, the mirror can tirelessly humor even the most demanding of curious children.
[pepelepoisson] explains some of the technical aspects on the project page (English translation link here) and all the code and build details are available (in French) on the project’s GitHub repository. Embedded below is a demonstration of the Magic Mirror, first in French then switching to English.