Bare PCB Makes A Decent Homemade Smart Watch

These days, we live in a post-Dick Tracy world, where you can make a phone call with your fancy wristwatch, and lots more besides. [akashv44] has gone a simpler route, designing their own from scratch with a bare PCB design.

The build is based around the ESP-12E microcontroller, providing useful wireless connectivity that lets the watch interface with the outside world. The firmware makes queries of NTP servers and Yahoo’s weather API to collect time and weather data for display. It’s also capable of interacting with Blynk relay modules for controlling other equipment, which [akashv44] uses with lights and an air conditioner. The watch uses a small OLED display and a handful of small surface-mount tactile buttons for control. Power is courtesy of a small lithium-ion pouch cell, with charging handled by a TP4056 battery management IC.
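For a flavor of how little code the timekeeping side takes, here’s a minimal sketch of an ESP8266 grabbing the time over NTP with the Arduino core and the common NTPClient library. The credentials, UTC offset, and refresh interval are placeholders for illustration, not [akashv44]’s actual firmware.

```cpp
// Minimal ESP8266 NTP example (Arduino core + NTPClient library).
// WiFi credentials and the UTC offset below are placeholders.
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>
#include <NTPClient.h>

WiFiUDP ntpUDP;
// Query pool.ntp.org with a UTC offset in seconds, re-sync every 60 s
NTPClient timeClient(ntpUDP, "pool.ntp.org", 19800 /* UTC+5:30 */, 60000);

void setup() {
  Serial.begin(115200);
  WiFi.begin("your-ssid", "your-password");   // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);                               // wait for the join to finish
  }
  timeClient.begin();
}

void loop() {
  timeClient.update();                        // only hits the server when the interval expires
  Serial.println(timeClient.getFormattedTime()); // e.g. "14:32:07"
  delay(1000);
}
```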

It’s a simple smartwatch, but nonetheless one that teaches all kinds of useful skills in embedded development and design. It’s also funny to think how simple it is to build. A decade ago, before the ESP8266 was released, getting wireless connectivity in such a small package was a major engineering challenge. Even the Apple Watch didn’t come out until 2015! Food for thought.

Tiny Drone Racing Gate Records Your Best Laps

Professional drone racing is now an elite sport, with all the high-end tech, coverage, and equipment that goes along with it. If you’re just practicing with tiny drones at home, though, you might not be so well equipped. You might want to build something like this tiny FPV drone racing gate from [ProfessorBoots] to help keep track of lap times while you’re training.

The build uses ultrasonic range sensors to detect when an object passes through the gate, which itself consists of a ring of addressable LEDs in strip form. The LEDs switch from green to red as a visual indicator that a drone has passed through. There’s also a small 2.4-inch touch screen that displays lap times and lets the gate be configured quickly and easily, and the gate serves up a webpage on the local network for viewing lap times in a browser.
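As a rough sketch of what the detection loop might look like, here’s an Arduino-style fragment pairing an HC-SR04-type ultrasonic sensor with a WS2812 ring driven by the FastLED library. The pins, trigger distance, and debounce delay are all guesses for illustration, not values from [ProfessorBoots]’ firmware.

```cpp
// Core gate loop sketch: ping the ultrasonic sensor, and if something
// breaks the beam, flash the ring red and log a lap time over serial.
// Pins, thresholds, and LED count are illustrative placeholders.
#include <FastLED.h>

const int TRIG_PIN = 5, ECHO_PIN = 6, NUM_LEDS = 24;
CRGB leds[NUM_LEDS];
unsigned long lastLapMs = 0;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout means nothing in range
  return us / 58;                            // ~58 us of echo per cm, round trip
}

void setup() {
  Serial.begin(115200);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  FastLED.addLeds<WS2812, 3, GRB>(leds, NUM_LEDS);  // LED data on pin 3
  fill_solid(leds, NUM_LEDS, CRGB::Green);
  FastLED.show();
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d < 30) {                     // something inside the ring
    unsigned long now = millis();
    Serial.print("Lap: ");
    Serial.print((now - lastLapMs) / 1000.0);
    Serial.println(" s");
    lastLapMs = now;
    fill_solid(leds, NUM_LEDS, CRGB::Red);   // visual pass indicator
    FastLED.show();
    delay(500);                              // debounce a single pass
    fill_solid(leds, NUM_LEDS, CRGB::Green);
    FastLED.show();
  }
}
```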

It does bear noting that at this stage, it’s primarily a practice tool. The gate doesn’t currently work for proper competitions, as it has no way of determining which drone might be flying through the gate at any one time.

It’s not the first time we’ve seen a TinyWhoop drone, either. Video after the break.

Continue reading “Tiny Drone Racing Gate Records Your Best Laps”

If You Aren’t Making Your Own Relays…

We’ve all been there. Someone will say something like, “I remember when we had to put our programs on a floppy disk…” Then someone will interrupt: “Floppy disk? We would have killed for floppy disks. We used paper tape…” After a few rounds, someone is talking about punching cards with a hand stylus or something. Next time someone is telling you about their relay computer, maybe ask them if they are buying their relays already built. They will almost surely say yes, and then you can refer them to [DiodeGoneWild], who shows how he is making his own relays.

While we don’t seriously suggest you make your own relays, there are a lot of fun techniques to pick up here, from the abuse of a power drill to the calculation of the coil parameters. Even if you don’t learn anything, we understand the desire to make as much as you can yourself.
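Speaking of coil parameters, the back-of-the-envelope math is mostly wire resistance and ampere-turns. Here’s a small C++ sketch of that calculation for a hypothetical coil; every dimension in it is an assumption for illustration, not a number from the video.

```cpp
// Back-of-the-envelope relay coil calculation: given a wire diameter and
// a bobbin, estimate resistance, current draw, and ampere-turns (MMF).
// All dimensions here are assumed for illustration.
#include <cstdio>

int main() {
  const double kPi    = 3.141592653589793;
  const double rho_cu = 1.72e-8;    // copper resistivity, ohm*m at 20 C
  const double wire_d = 0.10e-3;    // 0.1 mm enamelled wire (assumed)
  const double turns  = 4000.0;     // number of turns (assumed)
  const double mean_d = 6.0e-3;     // mean winding diameter, m (assumed)
  const double volts  = 12.0;       // coil supply voltage (assumed)

  const double wire_len = turns * kPi * mean_d;         // total wire length, m
  const double area     = kPi * wire_d * wire_d / 4.0;  // conductor cross-section, m^2
  const double ohms     = rho_cu * wire_len / area;     // R = rho * L / A
  const double amps     = volts / ohms;                 // steady-state coil current
  const double mmf      = amps * turns;                 // ampere-turns pulling the armature

  std::printf("wire: %.1f m  R: %.0f ohm  I: %.1f mA  MMF: %.0f ampere-turns\n",
              wire_len, ohms, amps * 1e3, mmf);
  return 0;
}
```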

Continue reading “If You Aren’t Making Your Own Relays…”

Hackaday Prize 2023: Gen5X, A Generatively Designed 5-Axis 3D Printer

[Ric Real] is entering the 2023 Hackaday Prize with the Gen5X, a generatively designed, 3D printed, five-axis 3D printer. The concept is not a new one, with this type of construction having been seen a few times here and there. In addition to the usual three axes of motion we’re familiar with from Cartesian bot designs, these machines add two rotation axes: one that swings the build platform back and forth around the X-axis, and a second that rotates it around the Z-axis. These combined motions give rise to some very interesting capabilities outside of our familiar 3D printing design constraints.

As for the generative side of things, this is largely theoretical for now. Essentially, the concept is that the machine’s design can be iteratively updated and optimised for performance, within the constraints of available hardware such as motors and the other ‘vitamins’ needed to create the next generation of machines. The design files should be parameterised enough that this optimisation process can be automated, potentially with input from AI, but we suspect we’re a way off from that yet. Whether this project yet satisfies any of these lofty goals remains to be seen, but do keep an eye on it if you’re so inclined. There is a Fusion 360 project here to dig into, but if you’re not interested in the research side and just want to build a 5-axis machine to play with, you can find the project source on the GitHub page.
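To make that idea a little more concrete, here’s a toy sketch of what one iteration of such a loop could look like: a parameterised frame scored against made-up stiffness and mass models, with a hard constraint for motor clearance. Everything here, from the fitness function to the NEMA17 dimension check, is an illustrative assumption, not anything from the Gen5X design files.

```cpp
// Toy generative-design loop: randomly sample a parameterised frame,
// score it for stiffness-per-gram under a hardware constraint, keep the
// best candidate. Purely illustrative; the scoring model is made up.
#include <cmath>
#include <cstdio>
#include <random>

// A parameterised frame: just two numbers here, where a real design
// would expose dozens of dimensions.
struct Frame { double armThickness; double gantryWidth; };  // millimetres

// Stand-in fitness model: stiffer is better, heavier is worse, and any
// gantry too narrow to clear a NEMA17 stepper (42 mm) is rejected outright.
double score(const Frame& f) {
  double stiffness = std::pow(f.armThickness, 3) / f.gantryWidth;
  double mass      = f.armThickness * f.gantryWidth * 2.7;  // faux density
  double penalty   = (f.gantryWidth < 42.0) ? 1e6 : 0.0;    // hardware constraint
  return stiffness / mass - penalty;
}

int main() {
  std::mt19937 rng(1);
  std::uniform_real_distribution<double> thick(2.0, 12.0), width(30.0, 120.0);

  Frame best{};
  double bestScore = -1e18;
  for (int i = 0; i < 10000; ++i) {           // naive random search
    Frame f{thick(rng), width(rng)};
    double s = score(f);
    if (s > bestScore) { bestScore = s; best = f; }
  }
  std::printf("best: arm %.1f mm, gantry %.1f mm (score %.4f)\n",
              best.armThickness, best.gantryWidth, bestScore);
  return 0;
}
```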

If this feels familiar, you’d be on the right track, as we covered at least one other 5D printer recently. We have also touched upon generative design at least once. We’re sure we will see more on this topic in the future.

Continue reading “Hackaday Prize 2023: Gen5X, A Generatively Designed 5-Axis 3D Printer”

Here’s Why GPUs Are Deep Learning’s Best Friend

If you have a curiosity about how fancy graphics cards actually work, and why they are so well-suited to AI-type applications, then take a few minutes to read [Tim Dettmers] explain why this is so. It’s not a terribly long read, but while it does get technical there are also car analogies, so there’s something for everyone!

He starts off by saying that most people know that GPUs are scarily efficient at matrix multiplication and convolution, but what really makes them most useful is their ability to work with large amounts of memory very efficiently.
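You can get a feel for the latency-versus-bandwidth distinction on an ordinary CPU: streaming through memory in order keeps the memory bus busy, while chasing dependent pointers leaves the processor stalled on every load. Here’s a rough, self-contained C++ illustration; absolute timings will vary wildly by machine.

```cpp
// Latency vs. bandwidth: summing an array in order is bandwidth-bound,
// while following a chain of dependent pointers is latency-bound, because
// each load must complete before the next address is even known.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
  const std::uint32_t N = 1u << 24;            // ~16M entries, far past cache
  std::vector<std::uint32_t> next(N), order(N);

  // Build one big random cycle so every chase step is a dependent cache miss.
  std::iota(order.begin(), order.end(), 0u);
  std::shuffle(order.begin(), order.end(), std::mt19937_64(42));
  for (std::uint32_t i = 0; i < N; ++i) next[order[i]] = order[(i + 1) % N];

  auto time_it = [](auto&& fn) {
    auto t0 = std::chrono::steady_clock::now();
    fn();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
  };

  std::uint64_t sink = 0;
  double stream = time_it([&] {                // bandwidth-bound: in-order streaming
    for (std::uint32_t i = 0; i < N; ++i) sink += next[i];
  });
  double chase = time_it([&] {                 // latency-bound: serial dependency chain
    std::uint32_t p = 0;
    for (std::uint32_t i = 0; i < N; ++i) p = next[p];
    sink += p;
  });
  std::printf("stream: %.3f s, chase: %.3f s (sink %llu)\n",
              stream, chase, (unsigned long long)sink);
  return 0;
}
```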

Essentially, a CPU is a latency-optimized device while GPUs are bandwidth-optimized devices. If a CPU is a race car, a GPU is a cargo truck. The main job in deep learning is to fetch and move cargo (memory, actually) around. Both devices can do this job, but in different ways. A race car moves quickly, but can’t carry much. A truck is slower, but far better at moving a lot at once.

Continue reading “Here’s Why GPUs Are Deep Learning’s Best Friend”

Arduino-Powered Trap Hopes To Catch Mice

The old adage that you’ll make a fortune by developing a better mouse trap is not super realistic, as the engineers behind Sony’s Betamax video tape standard could tell you. However, you can still learn a lot building your own, as this project from [ROBO HUB] demonstrates.

The trap is intended to catch mice in a humane fashion, without injury to the animal. To that end, it uses an Arduino Nano armed with an ultrasonic distance sensor to detect when mice have entered a plastic container. The container’s hinged door is held open with a servo. When a mouse is detected, the servo trips the door to snap shut under the power of an elastic band.
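The trigger logic can be surprisingly compact. Here’s a minimal sketch along those lines, using the stock Servo library and an HC-SR04-style sensor; the pins, threshold, and servo angles are assumptions for illustration rather than [ROBO HUB]’s actual values.

```cpp
// Trap trigger sketch: when the ultrasonic sensor sees something closer
// than the threshold, the servo releases the door. Pins, angles, and the
// distance threshold are illustrative, not from the original build.
#include <Servo.h>

const int TRIG_PIN = 9, ECHO_PIN = 10, SERVO_PIN = 6;
const long TRIGGER_CM = 10;       // "mouse inside" distance (assumed)
Servo doorServo;

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout if nothing in range
  return us / 58;                            // ~58 us of echo per cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  doorServo.attach(SERVO_PIN);
  doorServo.write(90);            // hold the door open against the band
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d < TRIGGER_CM) {
    doorServo.write(0);           // release: the elastic snaps the door shut
    while (true) { }              // latch until the trap is reset by hand
  }
  delay(50);
}
```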

The key to making this design work well is ensuring that there are no gaps in the closed container that the mouse can use to escape. They’re wily creatures able to squeeze through positively tiny spaces, so it’s important to get this right. Besides that, you want to check the trap regularly, lest any caught mice simply claw and chew their way out.

We’ve seen a few mousetraps around these parts before, too. Video after the break.

Continue reading “Arduino-Powered Trap Hopes To Catch Mice”

High Quality 3D Scene Generation From 2D Source, In Realtime

Here’s some fascinating work presented at SIGGRAPH 2023: a method for radiance field rendering using a novel technique called Gaussian Splatting. What’s that mean? It means synthesizing a 3D scene from 2D images, in high quality and in real time, as the short animation above demonstrates.

Neural Radiance Fields (NeRFs) are a method of leveraging machine learning to do, in a way, what photogrammetry does: synthesize complex scenes and views based on input images. But NeRFs work in a fraction of the time, and require only a fraction of the source material. There are different ways to go about this, and unsurprisingly there tends to be a clear speed-versus-quality tradeoff. But as the video accompanying this new work seems to show, clever techniques can deliver the best of both worlds.

A short video summary is embedded just below the page break. Interested in deeper details? The research PDF is here. The amount of development this field has seen is nothing short of staggering, and certainly higher in quality than what was state-of-the-art for NeRFs only a year ago.

Continue reading “High Quality 3D Scene Generation From 2D Source, In Realtime”