What Makes Wedge Coils Better Than Round For PCB Motors?

PCB motors are useful things. With coils printed right on the board, you don’t need to worry about fussy winding jobs, and it’s possible to make very compact, self-contained motors. [atomic14] has been doing some work in this area, and decided to explore why wedge coils perform better than round coils in PCB motor designs.

[atomic14]’s designs use four-layer PCBs, which allow more magnetic field strength to be wrung out of the trace coils. While they’ve tried a variety of designs, like most in this area they settled on wedge-shaped coils to get the most torque out of their motors. As the video explains, the wedge layout packs the available space much more efficiently, allowing the construction of coils with more turns in the same area. Diving deeper, [atomic14] also uses Python code to simulate the field generated by the different-shaped coils. Most notably, the simulation shows that the wedge design provides a significant increase in field strength in the direction relevant for producing torque, an advantage that grows on motors with higher coil counts.
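
For the curious, the heart of such a simulation is just the Biot–Savart law summed over short segments of trace. The snippet below is a generic Python sketch of that idea using NumPy, not [atomic14]’s actual code; the coil geometry and observation point are placeholders you’d swap for the real trace layout.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart(vertices, current, obs, steps=50):
    """Approximate B (tesla) at point 'obs' from a closed polyline coil.

    vertices: (N, 3) array of points tracing the coil outline, in metres.
    current:  coil current in amperes.
    obs:      (3,) observation point in metres.
    """
    b = np.zeros(3)
    for a, c in zip(vertices, np.roll(vertices, -1, axis=0)):
        dl = (c - a) / steps                      # short straight element of the segment a->c
        for i in range(steps):
            mid = a + (i + 0.5) * dl              # midpoint of this element
            r = obs - mid
            b += MU0 * current * np.cross(dl, r) / (4 * np.pi * np.linalg.norm(r) ** 3)
    return b

# Sanity check: a 10 mm radius circular loop, field evaluated 2 mm above its centre
theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
loop = np.c_[0.01 * np.cos(theta), 0.01 * np.sin(theta), np.zeros_like(theta)]
print(biot_savart(loop, current=1.0, obs=np.array([0.0, 0.0, 0.002])))
```

Swap the circle for a wedge outline (and sum over the turns on each layer) and you can compare the axial field of the two shapes point by point, much as the video’s comparison does.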

This kind of simulation and optimization is typical in industry, and it’s great to see real engineering methods explained on YouTube for everyone to enjoy. Video after the break.

Continue reading “What Makes Wedge Coils Better Than Round For PCB Motors?”

Self-Driving Library For Python

Fully autonomous vehicles seem to perennially be just a few years away, sort of like the automotive equivalent of fusion power. But just because robotic vehicles haven’t made much progress on our roadways doesn’t mean we can’t play with the technology at the hobbyist level. You can embark on your own experimentation right now with this open source self-driving Python library.

Granted, this is a library built for much smaller vehicles, but it’s still quite full-featured. Known as Donkey Car, it’s mostly intended for what would otherwise be remote-controlled cars or robotics platforms. The library is built to be as minimalist as possible, with modularity as a core design principle, and includes the ability to self-drive using computer vision and machine-learning algorithms. It can also log sensor data and interface with various controllers, whether physical devices or something like a browser.
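
To give a flavor of that modularity, here’s a rough sketch of the part-based vehicle loop, written against the donkeycar package’s Vehicle API as we understand it. The BlobFollower part and the channel names are made up for illustration, and a real build would add a camera part first so the image input actually exists.

```python
import donkeycar as dk

class BlobFollower:
    """Toy Donkey Car 'part': steer toward the brightest column of the camera image."""
    def run(self, image):
        if image is None:
            return 0.0, 0.0                          # steering, throttle
        brightness = image.mean(axis=(0, 2))         # average over rows and channels
        target = brightness.argmax() / (len(brightness) - 1)
        return 2.0 * target - 1.0, 0.3               # map 0..1 to -1..1, crawl forward

V = dk.vehicle.Vehicle()
# A camera part publishing 'cam/image_array' would normally be added here first.
V.add(BlobFollower(), inputs=['cam/image_array'], outputs=['steering', 'throttle'])
V.start(rate_hz=20)
```

Each part is just a class with a run() method; the Vehicle object wires named inputs and outputs between parts and calls them in a loop, which is what makes swapping in a machine-learning pilot or a joystick controller so painless.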

A complete platform costs around $250 in parts, but most of what a Donkey Car-compatible build needs is easily sourced. It won’t be too long before your own RC vehicle has more “full self-driving” capability than a Tesla, and potentially less risk of a major security vulnerability as well.

Turning Old Kindles Into AI Powered Picture Frames

While we tend to think of Amazon’s e-paper Kindles as more or less single-purpose devices (which, to be fair, is how they’re advertised), there’s actually a full-featured Linux computer running behind that simple interface, just waiting to be put to work. Given how cheaply you can get old Kindles on the secondhand market, this has always struck us as something of a wasted opportunity.

This is why we love to see projects like Kindlefusion from [Diggedypomme]. It turns the Kindle into a picture frame to show off the latest in machine learning art thanks to Stable Diffusion. Just connect your browser to the web-based control interface running on the Kindle, give it a prompt, and away it goes. There are also functions to recall previously generated images, and if you’re connecting from a mobile device, support for creating images from voice prompts.

All you need is a Kindle that can be jailbroken, though technically the software has only been tested against older third- and fourth-generation hardware. From there, you install a few required packages listed in the project documentation, including Python 3. Then you just move the Kindlefusion package over via USB or SSH, and do a little final housekeeping before starting it up and letting it take over the Kindle’s normal UI.

Given the somewhat niche nature of Kindle hacking, we’re particularly glad to see that [Diggedypomme] went through the trouble of explaining the nuances of getting the e-reader ready to run your own code. While it’s not difficult to do, there are plenty of pitfalls if you’ve never done it before, so a concise guide is a nice thing to have. Unfortunately, it seems like Amazon has recently gone on the offensive, with firmware updates blocking the exploits the community was using for jailbreaking on all but the older models that are no longer officially supported.

While it’s a shame you can’t just pick up a new Kindle and start hacking (at least, for now), there are still millions of older devices floating around that could be put to good use. Hopefully, projects like this can help inspire others to pick one up and start experimenting with what’s possible.

Continue reading “Turning Old Kindles Into AI Powered Picture Frames”

ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience

ChatGPT is a large language model (LLM) that specializes in conversation. While using it, [Gil Meiri] discovered that one way to create models in FreeCAD is with Python scripting, and that ChatGPT could be coaxed into creating a 3D model of a plane in FreeCAD by expressing the model as a script. The result is just a basic plane shape, and it certainly took a lot of guidance on [Gil]’s part to make it happen, but it’s not bad for a tool that can’t see what it is doing.
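
For context, a FreeCAD model-as-script can be remarkably small. The snippet below is a hand-written illustration of the kind of code involved, not [Gil]’s generated output, using FreeCAD’s Part module to make a simple box:

```python
import FreeCAD
import Part

doc = FreeCAD.newDocument("ChatCAD")   # document name is arbitrary
cube = Part.makeBox(10, 10, 10)        # length, width, height in millimetres
Part.show(cube)                        # add the shape to the active document
doc.recompute()
```

Paste something like this into FreeCAD’s Python console and a solid appears; the trick [Gil] pulls off is getting ChatGPT to write ever more elaborate versions of the same thing.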

The first step was getting ChatGPT to create code for a 10 mm cube, and plugging that into FreeCAD to see the results. After that basic workflow was shown to work, [Gil] asked it to create a simple airplane shape. The resulting code had objects for wing, fuselage, and tail, but that’s about all that could be said, because the result was almost — but not quite — completely unlike a plane. Not an encouraging start, but at least the basic building blocks were there.

Continue reading “ChatGPT Makes A 3D Model: The Secret Ingredient? Much Patience”

Timeframe: The Little Desk Calendar That Could

Usually, the problem comes before the solution, but for [Stavros], it happened the other way around. A 4.7″ E-Ink screen with integrated battery management and an ESP32 caught his eye, so he bought it and started thinking about what he wanted to do with it. The result is the Timeframe, a sleek desk calendar built around that screen.

[Stavros] found the device’s MicroPython support a little lackluster, as it often failed to draw. He found a PlatformIO project that used an older but modified library for driving the e-ink display, which worked quite well. However, the older library didn’t support portrait orientation or other niceties. Rather than try to create something complex in C, he moved the complexity to a server environment he knew better. With the help of Copilot, he got code that wakes the ESP32 every half hour, downloads an image from a server, and displays it. On the server side, a Python script uses a headless browser to visit Google Calendar, resize the window, take a screenshot, and upload it.
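
The server-side half is conceptually simple. Here’s a rough sketch of the screenshot step using Playwright as the headless browser — an assumption on our part, as [Stavros] may use different tooling, and the URL and the 540×960 portrait resolution are placeholders:

```python
from playwright.sync_api import sync_playwright

CALENDAR_URL = "https://calendar.google.com/"   # placeholder; authentication is the hard part

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Size the viewport to the e-ink panel's portrait resolution so no resizing is needed
    page = browser.new_page(viewport={"width": 540, "height": 960})
    page.goto(CALENDAR_URL, wait_until="networkidle")
    page.screenshot(path="calendar.png")
    browser.close()
```

From there the image just needs to be converted to whatever the display code expects and dropped wherever the ESP32 fetches it from.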

The hardest part of the exercise was getting authentication with Google working reliably. A sleek white 3D-printed case wraps the whole affair in an aesthetically pleasing shell. So far, this is a great story of someone building something for themselves and playing to their strengths. But where’s the hack?

The hack came when [Stavros] tried squeezing his calendar into a case that was too tight and cracked the screen. Suddenly, a large portion of the display wouldn’t draw. He turned what was broken into something new by mapping out the dead area and converting the Python script to draw weather information directly with Pillow, rather than screenshotting a webpage: clever reuse, and a way to make good out of a bad accident.
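
Drawing directly with Pillow looks something like the sketch below. The “alive” rectangle and the text are placeholders, since only [Stavros] knows exactly which part of his panel survived.

```python
from PIL import Image, ImageDraw, ImageFont

WIDTH, HEIGHT = 540, 960            # panel resolution, portrait (placeholder)
ALIVE = (0, 0, 540, 600)            # hypothetical region that still refreshes

img = Image.new("L", (WIDTH, HEIGHT), 255)      # white 8-bit grayscale canvas
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()

# Keep everything inside the working part of the cracked screen
draw.text((20, 20), "Partly cloudy, 18 C", font=font, fill=0)   # placeholder weather text
draw.text((20, 60), "High 21 / Low 12", font=font, fill=0)
draw.rectangle(ALIVE, outline=0)

img.save("frame.png")
```

Presumably the ESP32 side doesn’t need to change at all, since it still just fetches whatever image the server last rendered.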

The code is up on GitLab, and the 3D files for the case are available on Printables. You can also find the project on Hackaday.io, as it was an entry into our recently concluded Low-Power Contest. Unfortunately, while the Timeframe is pretty power efficient, it doesn’t last as long as this calendar with a 50-year battery life.

Wolverine Gives Your Python Scripts The Ability To Self-Heal

[BioBootloader] combined Python and a hefty dose of AI for a fascinating proof of concept: self-healing Python scripts. He shows things working in a video, embedded below the break, but we’ll also describe what happens right here.

The demo Python script is a simple command-line calculator, and [BioBootloader] introduces a few bugs to it: he misspells a variable used as a return value and deletes the subtract_numbers(a, b) function entirely. Running this script on its own simply crashes, but running it through Wolverine has a very different outcome.

In a short time, error messages are analyzed, changes proposed, those same changes applied, and the script re-run.

Wolverine is a wrapper that runs the buggy script, captures any error messages, then sends those errors to GPT-4 to ask it what it thinks went wrong with the code. In the demo, GPT-4 correctly identifies the two bugs (even though only one of them directly led to the crash), but that’s not all! Wolverine actually applies the proposed changes to the buggy script and re-runs it. This time around there is still an error, because GPT-4’s previous changes included an out-of-scope return statement. No problem: Wolverine once again consults GPT-4, creates and formats a change, applies it, and re-runs the modified script. This time the script runs successfully, and Wolverine’s work is done.
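
Stripped of the prompt engineering, the loop Wolverine runs looks roughly like the sketch below. This isn’t the project’s actual code; the GPT-4 call is stubbed out as a hypothetical ask_llm_for_patch(), since the real work lives in the prompt.

```python
import subprocess
import sys

def ask_llm_for_patch(source, error_text):
    """Placeholder for the GPT-4 call: given the script and its traceback,
    return a corrected version of the source."""
    raise NotImplementedError

def heal(path, args, max_attempts=3):
    for _ in range(max_attempts):
        result = subprocess.run([sys.executable, path, *args],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(result.stdout)
            return True                           # script ran cleanly, we're done
        with open(path) as f:
            source = f.read()
        fixed = ask_llm_for_patch(source, result.stderr)
        with open(path, "w") as f:
            f.write(fixed)                        # apply the proposed fix and re-run
    return False
```

The real project dresses this up considerably, but the run / capture / ask / patch / re-run cycle is the whole idea.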

LLMs (Large Language Models) like GPT-4 are “programmed” in natural language, and these instructions are referred to as prompts. A large chunk of what Wolverine does is thanks to a carefully-written prompt, and you can read it here to gain some insight into the process. Don’t forget to watch the video demonstration just below if you want to see it all in action.

While AI coding capabilities definitely have their limitations, some of the questions it raises are becoming more urgent. Heck, consider that GPT-4 is barely even four weeks old at this writing.

Continue reading “Wolverine Gives Your Python Scripts The Ability To Self-Heal”

Pi Microcontroller Still Runs A Webserver

At first glance, the Raspberry Pi Pico might seem like a bit of a black sheep when compared to the other offerings from the Raspberry Pi Foundation. While most of the rest of their lineup can run Linux environments with full desktops, the Pico is largely limited to microcontroller duties in exchange for much smaller price tags and footprints. But that doesn’t mean it can’t be coerced into doing some of the things we might want a mainline Pi to do, like run a web server.

Getting a static web page running is as simple as loading the Pico with the project code from its GitHub page, plus the HTML you’d like the board to serve. It can do more than static pages, though: the web interface is also capable of running Python commands on the Pico, and of passing commands and responses back and forth, allowing various projects to be controlled through a browser. In theory, this could be much simpler than building a physical user interface for a project, offloading all of that control onto the web server instead.
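
For a sense of scale, serving a static page from MicroPython on a Pico W only takes a handful of lines. The sketch below is generic boilerplate rather than the project’s code, with the Wi-Fi credentials left as placeholders:

```python
import network
import socket

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("your-ssid", "your-password")        # placeholders
while not wlan.isconnected():
    pass

html = "<html><body><h1>Hello from the Pico</h1></body></html>"

addr = socket.getaddrinfo("0.0.0.0", 80)[0][-1]
server = socket.socket()
server.bind(addr)
server.listen(1)

while True:
    client, _ = server.accept()
    client.recv(1024)                             # read (and ignore) the request
    client.send("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n" + html)
    client.close()
```

What this project layers on top of that is the plumbing for pushing Python commands and responses back and forth over the same connection.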

The project not only supports the RP2040-based Raspberry Pi Pico but can also be implemented on other WiFi-enabled microcontroller boards like the ESP8266 and ESP32. Having something like this on hand could greatly streamline smaller projects without having to reach for a more powerful (and more expensive) single-board computer like a Pi 3 or 4. We’ve seen some other builds on these boards capable of not only running HTML and CSS renderers, but supporting some image formats as well.

Continue reading “Pi Microcontroller Still Runs A Webserver”