What are you doing to scare trick-or-treaters this Halloween? Surely something, right? Well, Hackaday alum [CameronCoward] certainly has his holiday under control with Dead E. Ruxpin, a murderous, cassette tape-controlled animatronic bear.
Readers of a certain vintage will no doubt see the resemblance to Teddy Ruxpin, an animatronic bear from the 1980s whose mouth moved as it read stories from cassette tapes. Cleverly, the engineers used one stereo channel for the story’s audio and the other channel to control the bear’s mouth.
Dead E. Ruxpin takes this idea and expands it, using the same two channels to send audio and control three servo motors that move both arms and the mouth. How is this possible? By sending tones built from one or more frequencies.
Essentially, [Cameron] assigned a frequency to each movement: mouth open or closed, and left and/or right arm up or down. These are all, of course, synced up with specific points in the audio, so Dead E. doesn’t just move randomly; he dances along with the music.
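The write-up doesn’t publish the exact frequencies or decoder, but the general scheme is easy to sketch: sum one sine wave per active movement into the control channel, then test for each frequency on the receiving end with something like the Goertzel algorithm. A minimal Python sketch, with placeholder frequencies that are purely illustrative:

```python
import math

SAMPLE_RATE = 8000
# Hypothetical control frequencies -- the project doesn't list the real
# values, so these are invented for illustration.
CONTROLS = {"mouth": 400.0, "left_arm": 700.0, "right_arm": 1100.0}

def make_control_tone(active, n_samples=800):
    """Sum one sine wave per active movement into a single control signal."""
    freqs = [CONTROLS[name] for name in active]
    return [
        sum(math.sin(2 * math.pi * f * n / SAMPLE_RATE) for f in freqs)
        for n in range(n_samples)
    ]

def goertzel_power(samples, freq):
    """Measure how strongly `freq` is present in `samples` (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_movements(samples, threshold=1e4):
    """Return the set of movements whose control tone is present."""
    return {name for name, f in CONTROLS.items()
            if goertzel_power(samples, f) > threshold}
```

Because each movement gets its own frequency, mixing the tones lets one mono channel carry several simultaneous commands, leaving the other stereo channel free for the music.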
The bear is actually a hand puppet, which leaves room for a 3D-printed skeleton that holds the RP2040 and the servos and, of course, moves the puppet’s parts. We can’t decide whether we prefer the bulging, bloodshot eyes, or whether the cutesy original eyes would have made for a scarier bear. Anyway, check out the build/demo video after the break to see it in action.
Are you now into Teddy Ruxpin? Here’s a bit more about those scare bears. And don’t forget, Halloween Hackfest runs now until October 31st.
Continue reading “Dead E. Ruxpin Appears Alive And Well”
Essentially, it uses a Raspberry Pi and a ReSpeaker four-mic array to listen to conversations in the room. It records 15-20 seconds of audio at a time and sends that to OpenAI’s Whisper API to generate a transcript.
This repeats until five minutes of audio is collected, then the entire transcript is sent through GPT-4 to extract an image prompt from a single topic in the conversation. Then, that prompt is shipped off to Stable Diffusion to get an image to be displayed on the screen. As you can imagine, the images generated run the gamut from really weird to really awesome.
The natural lulls in conversation presented a bit of a problem: the transcription kept generating output during silences, presumably because of ambient noise. The answer was voice activity detection (VAD) software that reports the probability that a voice is present.
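The write-up doesn’t say which VAD package was used; real ones (WebRTC VAD, Silero, etc.) model spectral features, but the gatekeeping idea can be shown with a toy energy-based stand-in. A minimal sketch, assuming audio frames of floats in [-1, 1] and invented noise/speech levels:

```python
import math

def frame_energy(frame):
    """Root-mean-square energy of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def speech_probability(frame, noise_floor=0.01, speech_level=0.1):
    """Map frame energy to a rough 0..1 'voice present' score.

    Toy stand-in only: real VAD libraries use spectral features,
    not raw energy. The thresholds here are invented.
    """
    e = frame_energy(frame)
    p = (e - noise_floor) / (speech_level - noise_floor)
    return min(1.0, max(0.0, p))

def should_transcribe(frames, threshold=0.5):
    """Skip the transcription API call when nothing in the chunk looks like speech."""
    return any(speech_probability(f) >= threshold for f in frames)
```

Gating the pipeline this way also saves API calls, since silent chunks never leave the Pi.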
Naturally, people were curious about the prompts for the images, so [TheMorehavoc] made a little gallery sign with a MagTag that uses Adafruit.io as the MQTT broker. Build video is up after the break, and you can check out the images here (warning, some are NSFW).
Continue reading “WhisperFrame Depicts The Art Of Conversation”
There is an episode of Ren & Stimpy with a big red “history eraser” button that must not be pressed. Of course, who can resist the temptation of pressing the unpressable button? The same goes for development boards. If there is a button on there, you want to read it in your code, right? The Raspberry Pi Pico is a bit strange in that regard. The standard board lacks a reset button, but there is a big, tantalizing BOOTSEL button for resetting into bootloader mode. You only use it when you power up, so why not read it in your code? Why not, indeed?
Turns out, that button isn’t what you think it is. It isn’t connected to a normal CPU pin at all. Instead, it connects to the flash memory chip. So does that mean you can’t read it at all? Not exactly. There’s good news, and then there’s bad news.
The Good News
The official Raspberry Pi examples show how to read the button (you have read all the examples, right?). You can convert the flash’s chip-select into an input temporarily and try to figure out if the pin is low, meaning that the button is pushed. Sounds easy, right?
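Condensed from the approach in the official pico-examples `bootsel_button` demo, the trick looks roughly like this; note the whole routine has to run from RAM with interrupts off, because the flash is unusable while its chip-select line is hijacked:

```c
#include "pico/stdlib.h"
#include "hardware/sync.h"
#include "hardware/structs/ioqspi.h"
#include "hardware/structs/sio.h"

// The BOOTSEL button pulls the flash chip-select (QSPI_SS, index 1 of the
// QSPI pins) low, so we briefly float that pin and sample it.
bool __no_inline_not_in_flash_func(get_bootsel_button)(void) {
    const uint CS_PIN_INDEX = 1;
    uint32_t flags = save_and_disable_interrupts();  // no flash access allowed

    // Float the CS pin by overriding its output enable
    hw_write_masked(&ioqspi_hw->io[CS_PIN_INDEX].ctrl,
                    GPIO_OVERRIDE_LOW << IO_QSPI_GPIO_QSPI_SS_CTRL_OEOVER_LSB,
                    IO_QSPI_GPIO_QSPI_SS_CTRL_OEOVER_BITS);

    // Give the pin a moment to settle, then read it: low means pressed
    for (volatile int i = 0; i < 1000; ++i);
    bool pressed = !(sio_hw->gpio_hi_in & (1u << CS_PIN_INDEX));

    // Restore normal chip-select behavior before touching flash again
    hw_write_masked(&ioqspi_hw->io[CS_PIN_INDEX].ctrl,
                    GPIO_OVERRIDE_NORMAL << IO_QSPI_GPIO_QSPI_SS_CTRL_OEOVER_LSB,
                    IO_QSPI_GPIO_QSPI_SS_CTRL_OEOVER_BITS);

    restore_interrupts(flags);
    return pressed;
}
```

This is a sketch of the published technique rather than a drop-in driver; check the pico-examples repository for the canonical version before relying on it.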
Continue reading “Button, Button, Who’s Got The (Pico) Button?”
We’ll beat everyone to the punch: yes, actually building a working Turing machine, especially one that uses a Raspberry Pi, is probably something that would have pushed [Alan Turing]’s buttons, and not in a good way. The Turing machine is, above all else, a thought experiment, an abstraction of how a mechanical computing machine could work. Building a working one seems to be missing the point.
Thankfully, [Michael Gardi] has ignored that message three times now, and with good reason: some people just grok abstract concepts better when they can lay their hands on something and manipulate it. His TMD-1 was based on 3D printed tiles with embedded magnets — arranging the tiles on a matrix containing Hall effect sensors programmed the finite state machine, with the “tape” concept represented by a strip of eight servo-controlled flip cards. While TMD-1 worked fine, it had some limitations, which [Mike] quickly remedied with TMD-2, a decidedly more complicated affair that used a Raspberry Pi, a camera, and OpenCV to read an expanded state machine with six symbols and six states, without breaking the budget on all the Hall sensors required.
TMD-3 refines the previous design, eschewing the machine vision approach and returning to the Hall effect roots of the original. But instead of using three sensors per tile, [Mike] determined that one sensor would suffice as long as he could mount the magnet at different depths within each tile. That way, the magnetic field for each symbol could be discerned by a single Hall sensor, greatly reducing complexity and expense. An LCD screen and a Raspberry Pi run a console app that shows the tape status, the state machine, and the state transitions.
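The depth-encoding trick reduces tile reading to classifying one analog value per cell: each calibrated magnet depth produces a distinct field strength at the sensor. A minimal sketch of that classification, with ADC levels invented for illustration (the real project would calibrate these per sensor):

```python
# Hypothetical calibrated ADC readings for each tile type: deeper magnets
# produce weaker fields at the sensor. These values are invented.
SYMBOL_LEVELS = {
    "blank": 512,   # no tile: sensor sits at its midpoint
    "0": 700,
    "1": 760,
    "A": 820,
    "B": 880,
    "C": 940,
    "halt": 1000,
}

def classify_tile(adc_reading, max_error=25):
    """Map one Hall-sensor ADC reading to the nearest calibrated symbol."""
    symbol, level = min(SYMBOL_LEVELS.items(),
                        key=lambda kv: abs(kv[1] - adc_reading))
    if abs(level - adc_reading) > max_error:
        return None  # reading too far from any known level -- reject it
    return symbol
```

The tolerance band matters: levels have to be spaced far enough apart that sensor noise and tile-seating variation never push one symbol’s reading into its neighbor’s band.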
[Mike] put a ton of work into this one — there are nineteen project logs — and he includes a lot of useful tips and tricks, like designing PCBs directly in KiCad before even having a schematic. Of course, with a track record like his, we’d expect nothing less.
Continue reading “TMD-3: Clever Hall Sensor Hack Leads To Better Turing Demo”
Even though Windows and other operating systems constantly remind us to properly eject storage devices before removing them, plenty of people won’t heed those warnings until they finally corrupt a drive and cause all kinds of data loss and other catastrophes. It’s not just USB jump drives that can get corrupted, though. Any storage medium can become unusable if a write is in progress when the power is suddenly removed. That includes the SD cards on Raspberry Pis, too, and if your power isn’t reliable, you might consider this hat to ensure they shut down properly during power losses.
The Raspberry Pi hat is centered around a series of supercapacitors which provide power for the Pi temporarily. The hat also communicates with the Pi to let it know there is a loss of power, so that the Pi can automatically shut itself down in that situation to prevent corrupting the memory card. The hat is more than just a set of backup capacitors, though. The device is capable of taking input power from a wide range of sources and filtering it for the power requirements of the Pi, especially in applications like boats and passenger vehicles where the input power might be somewhat noisy. There’s an optocoupled CAN bus interface as well for those looking to use this for automotive applications.
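The software side of such a hat is simple in outline: watch whatever signal the hat uses to report mains loss, ride through brief dropouts on the supercaps, and only then commit to a graceful halt. A minimal sketch, assuming a hypothetical `read_power_good` callable standing in for however the hat signals the Pi (a GPIO line, for instance):

```python
import subprocess
import time

def watch_power(read_power_good, holdup_seconds=5.0, poll_interval=0.1,
                shutdown=lambda: subprocess.run(["sudo", "shutdown", "-h", "now"])):
    """Poll a 'power good' signal; halt if it stays low past the hold-up time.

    `read_power_good` is a stand-in for the hat's actual power-loss signal.
    The supercaps buy `holdup_seconds` to ride through brief dropouts
    before committing to a shutdown.
    """
    low_since = None
    while True:
        if read_power_good():
            low_since = None          # power is back; reset the timer
        elif low_since is None:
            low_since = time.monotonic()
        elif time.monotonic() - low_since >= holdup_seconds:
            shutdown()                # graceful halt before the caps run dry
            return
        time.sleep(poll_interval)
```

The hold-up delay is the important design choice: shutting down on the first missed sample would make the Pi needlessly reboot on every transient, which matters in exactly the noisy-power vehicle installs this hat targets.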
The entire project is also available on the project’s GitHub page for those wishing to build their own. Some sort of power backup is a good idea for any computer, though, not just Raspberry Pis. We’ve seen uninterruptible power supplies (UPS) with enough power to run an entire house including its computers, to smaller ones that’ll just keep your Internet online during a power outage.
Continue reading “Sailor Hat Adds Graceful Shutdown To Pis”
Camera modules for the Raspberry Pi became available shortly after its release in the early ’10s, and since then there has been about a decade of projects eschewing traditional USB webcams in favor of this more affordable, versatile option. Despite all that time, there are still some hurdles to overcome, and [Esser50k] has written some supporting software for a smart doorbell that helps to solve some of them.
One of the major obstacles to using the Pi camera module is that it can only be used by one process at a time. The PiChameleon software that [Esser50k] built is a clever workaround for this, which runs the camera as a service and allows for more flexibility in using the camera. He uses it in the latest iteration of a smart doorbell and intercom system, which uses a Pi Zero in the outdoor unit armed with motion detection to alert him to visitors, and another Raspberry Pi inside with a touch screen that serves as an interface for the whole system.
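PiChameleon’s actual API isn’t documented here, but the general pattern behind running an exclusive camera “as a service” is worth sketching: a single thread owns the camera and publishes the latest frame, so any number of consumers (motion detection, streaming, snapshots) can read without fighting over the device. A minimal sketch, with `capture_frame` standing in for the real camera call:

```python
import threading
import time

class CameraService:
    """One owner for an exclusive camera, many readers."""

    def __init__(self, capture_frame):
        self._capture = capture_frame
        self._lock = threading.Lock()
        self._frame = None
        self._running = False

    def _loop(self):
        while self._running:
            frame = self._capture()   # only this thread touches the camera
            with self._lock:
                self._frame = frame
            time.sleep(0.01)          # pace the capture loop

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._running = False
        self._thread.join()

    def latest_frame(self):
        with self._lock:
            return self._frame        # safe to call from any thread
```

The same idea scales across processes by putting a socket or HTTP endpoint in front of `latest_frame()`, which is roughly the role a camera service plays for the indoor and outdoor units here.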
The entire build process over the past few years was rife with learning opportunities, including technical design problems as well as plenty of user errors that caused failures. Some extra features enhance the experience, too, such as automatically talking to strangers passing by. There are other unique ways of using machine learning on doorbells as well, like this one that listens for a traditional doorbell sound and then alerts its user.
Continue reading “Multi-Year Doorbell Project”
If you’ve ever thought about launching a high-altitude balloon, there’s much to consider. One of those things is how to stream video down so that you — and others — can enjoy the fruits of your labor. You’ll find advice on that and more in a recent post from [scd31]. At the very least, you’ll enjoy the real-time video recorded from the launch, which you can see below.
The video is encoded on a Raspberry Pi 4 using H.264, and the MPEG-TS stream is sent down over 70 cm ham radio gear. If you are interested in this sort of thing, the software, including both flight and ground code, is available online. There is code for the Pi and an STM32, plus the packages you’ll need for the ground side.
We love high-altitude balloons here at Hackaday. San Francisco High Altitude Ballooning (SF-HAB) launched a pair during last year’s Supercon, which attendees were able to track online. We don’t suggest you try to put a crew onboard, but there’s a long and dangerous history of people who did.
Continue reading “Balloon-Eye View Via Ham Radio”