We’ve Got A Saxaboom At Home Son

Most parents have heard a familiar story. Their lovely child comes up, having seen a celebrity rocking out with a funny $20 toy from the ’80s, and asks for it. Of course, comes the reply, it’s just $20. However, a quick scan through eBay reveals that everyone else’s kid has also been asking for this obscure toy, which now costs around $700. [Ben] found himself in that exact position, with his son needing one for a school event, and made a crucial off-hand comment: “I bet I could make one of those.” That was how his hectic journey into the world of toy reproduction began.

All [Ben] had for reference when recreating a Sax-A-Boom were pictures and sound clips. Modeling complex sweeping shapes in CAD is difficult, so [Ben] commissioned a 3D model from a professional on Fiverr. He broke the model down into printable sections and tweaked it to accommodate the buttons. After a concerning amount of putty, wet sanding, and elbow grease, [Ben] had a decently smooth body for an instrument. The device’s guts are an ESP32-based board called Sonatino, built around music generation. The music samples came from a virtual instrument clone on GitHub and were loaded onto an SD card.

Time pressure crept in towards the end, and [Ben] had to go with some dirty solutions he would have preferred to avoid (popsicle sticks and epoxy for button mounting). Yes, there were some gaps and paint flaws, but ultimately [Ben’s] son rocked the school presentation. It’s a beautiful journey through creating something with a high level of finish on a limited timescale.

Perhaps future versions of the Sax-A-Boom can take it further by adding a breath sensor, like this 3D-printed MIDI instrument.

Continue reading “We’ve Got A Saxaboom At Home Son”

Hackaday Prize 2023: The Realities Of The Homework Machine

For those outside the world of education, it can be hard to judge the impact that ChatGPT has had on homework assignments. If you didn’t know, the first challenge of the 2023 Hackaday Prize is focused on improving education. [Devadath P R] decided that the best way to help teachers and teaching culture was to confront them head-on with our new reality by building the homework machine.

The goal of the machine is to be able to stick in any worksheet or assignment and have it write out the answers in your own handwriting, and so far, the results are pretty impressive. Pen-holder tools for 3D printers already exist, but they come with a few drawbacks. Existing tools can take quite a while to generate G-code for long pages of text. Hobby servos that lift the pen up and down wear out faster than you’d expect, since a single page means thousands of actuations. Vibrations are also a problem, as wobbly lines are a dead giveaway that the text was not human-written. [Devadath] created a small Python GUI to record their particular handwriting style on a graphics tablet and used ChatGPT to generate answers. Continue reading “Hackaday Prize 2023: The Realities Of The Homework Machine”
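
For a feel of what the G-code side involves, here is a minimal sketch that turns recorded strokes into plotter moves. The stroke format (lists of (x, y) points in millimeters), feed rates, and Z heights are all invented for illustration, and the pen is lifted with Z moves rather than a servo; [Devadath]’s actual pipeline may look quite different.

```python
# Hypothetical example: convert tablet-recorded handwriting strokes
# into G-code for a pen plotter.
PEN_UP_Z = 5.0      # mm, pen clear of the page (made-up value)
PEN_DOWN_Z = 0.0    # mm, pen touching the page
FEED_DRAW = 2000    # mm/min while drawing
FEED_TRAVEL = 6000  # mm/min while repositioning

def strokes_to_gcode(strokes):
    lines = ["G21 ; millimeters", "G90 ; absolute coordinates",
             f"G0 Z{PEN_UP_Z}"]
    for stroke in strokes:
        (x0, y0), rest = stroke[0], stroke[1:]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f} F{FEED_TRAVEL}")  # travel, pen up
        lines.append(f"G1 Z{PEN_DOWN_Z} F{FEED_DRAW}")          # pen down
        for x, y in rest:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{FEED_DRAW}")  # draw
        lines.append(f"G0 Z{PEN_UP_Z}")                         # pen up
    return "\n".join(lines)

# Two toy strokes standing in for a scrawled answer.
demo = [[(10, 10), (10, 20), (15, 15), (20, 20), (20, 10)],
        [(25, 10), (25, 18)]]
print(strokes_to_gcode(demo))
```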

Peering Down Into Talking Ant Hill

Watching an anthill brings an air of fascination. Thousands of ants move about, communicating with their fellow ants as they work towards a goal as a collective whole. We humans project a complex inner world onto each of these tiny creatures to drive the narrative. But what if we could peer down into a miniature world and the ants spoke English? (PDF whitepaper)

Researchers at Stanford University and Google Research have released a paper about simulating human behavior with multiple agents driven by a Large Language Model (LLM). The simulation has a few dozen agents that can move around a small town, run errands, and communicate with each other. Each agent has a short description to help provide context to the LLM. In addition, agents have memories of objects, other agents, and observations that they can retrieve, which allows them to create a plan for their day. The memory is a time-stamped text stream that the agent reflects on, deciding what is important. The LLM can also replan and figure out what the agent wants to do next.
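
As a rough sketch of how that retrieval step can work: each memory gets a score blending recency, importance, and relevance, and the top entries get stuffed back into the agent’s prompt. The weights, decay rate, and embedding source below are placeholders, not the paper’s exact recipe.

```python
import math, time

class Memory:
    def __init__(self, text, importance, embedding):
        self.text = text              # the time-stamped observation
        self.created = time.time()
        self.importance = importance  # e.g. 1-10, rated by the LLM
        self.embedding = embedding    # vector from some embedding model

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_embedding, k=5, decay=0.995):
    """Return the k most salient memories for the current situation."""
    def score(m):
        hours_old = (time.time() - m.created) / 3600
        recency = decay ** hours_old                    # fades over time
        relevance = cosine(m.embedding, query_embedding)
        return recency + m.importance / 10 + relevance  # equal-ish weights
    return sorted(memories, key=score, reverse=True)[:k]
```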

The question is, does the simulation seem lifelike? In one fascinating example, the paper’s authors created an agent (Isabella) with the intent of throwing a Valentine’s Day party. No other information was included. Yet several agents arrived at her house later in the day to party: Isabella invited friends, and those agents passed the invitation along to others.

A web-accessible demo replays data recorded from an earlier run, so it doesn’t showcase the influence a user can exert on the world during a live simulation, where thoughts and suggestions can be issued to an agent to steer its actions. You can, however, pause the simulation to view the conversations between agents. Overall, it is incredible how lifelike the simulation can be, even if the conversations are quite formal and running the simulation burns a significant amount of computing power. Perhaps agents could get a subconscious, where certain behaviors or observations are coded into the agent instead of querying the LLM for every little thing (which sort of sounds like what people do).

There’s been an exciting trend of combining LLMs with a backing store of some form, like pairing Wolfram Alpha with ChatGPT. Thanks [Abe] for sending this one in!

Robot Races A Little Smarter To Go Faster

[Steven Gong] is attending the University of Waterloo and found himself with a 1/10th scale F1TENTH autonomous RC car. What better use of a fast RC car with some smarts than to race itself around your computer science building?

Onboard is an Nvidia Jetson NX (not the new Nvidia Jetson Orin), a lidar module, and a depth camera. The code runs on top of ROS2, and the results were impressive. [Steven] mapped out the fifth floor of his building at 6 am using SLAM and the onboard sensors. With a map in hand, he created a rough track for his car to follow; at a minimum, the car needs to know when to brake and when to hit the gas. With the basics out of the way, [Steven] moved on to the fun part: writing code to generate a faster racing line. Every corner has an optimal speed and approach, but each corner also affects the next one, making for a rather exciting optimization problem.
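
To see why the corners couple together, here is the classic velocity-profile idea in miniature, with made-up friction and braking numbers (a generic illustration, not [Steven]’s code): every point on the line has a curvature-limited top speed, and a backward pass lowers speeds so the car can always brake in time for what comes next.

```python
import math

MU, G = 0.8, 9.81   # tire friction coefficient and gravity (assumed)
A_BRAKE = 4.0       # max braking, m/s^2 (assumed)
V_MAX = 7.0         # straight-line cap, m/s (roughly 25 km/h)

def corner_speed(curvature):
    """Fastest speed before the tires slide: v = sqrt(mu * g / kappa)."""
    if abs(curvature) < 1e-6:
        return V_MAX
    return min(V_MAX, math.sqrt(MU * G / abs(curvature)))

def velocity_profile(curvatures, ds):
    """curvatures: 1/m at points spaced ds meters along the racing line."""
    v = [corner_speed(k) for k in curvatures]
    # Backward pass: v_here^2 <= v_next^2 + 2 * a_brake * ds,
    # so a tight corner drags down the speed of everything before it.
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * A_BRAKE * ds))
    return v

# A straight, a tight corner, then another straight.
track = [0.0] * 10 + [0.5] * 5 + [0.0] * 10
print([round(s, 1) for s in velocity_profile(track, ds=0.5)])
```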

Along the way, [Steven] fixed the gearbox, tuned the PID steering loop, and removed the software speed limits. It’s impressive engineering, and we love seeing the car zoom around faster and faster. The car eventually hit 25 km/h, which seems pretty fast for indoors. The code and more details are up on GitHub.

However, if you’re curious about playing around with self-driving, a much smaller-scale Pi Zero-based racer might be more your speed. Video after the break.

Continue reading “Robot Races A Little Smarter To Go Faster”

PUF Away For Hardware Fingerprinting

Despite rigorous factory process controls, anyone who has worked on hardware can tell you that parts may look identical but are not the same. Everything from silicon defects to microscopic variations in materials can cause profoundly head-scratching effects. Perhaps one particular unit heats up faster or locks up when executing a specific sequence of instructions, and we throw up our hands, saying it’s just a fact of life. But what if, instead of rejecting differences that fall outside a narrow range, we could exploit those tiny differences?

This is where physically unclonable functions (PUFs) come in. A PUF is a bit of hardware that returns a value given an input, but each physical copy produces different results despite being built to the same design, often thanks to imperfections in the silicon microstructure. Even after physically uncapping and inspecting the device, reproducing the same imperfections exactly would be incredibly difficult. A PUF should be the ideal version of a fingerprint: unique and unforgeable.
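
To make the idea concrete, here is a toy model of an SRAM-style PUF, where each cell has a fixed, device-unique power-up bias plus a little per-read noise. The seed stands in for the physics; nothing here models real silicon.

```python
import random

class ToyPUF:
    def __init__(self, seed, n_bits=64, noise=0.05):
        rng = random.Random(seed)  # the seed stands in for the physics
        self.bias = [rng.random() for _ in range(n_bits)]
        self.noise = noise

    def read(self):
        # Each bit settles according to its bias, occasionally flipped
        # by per-read noise.
        return [int(b > 0.5) ^ (random.random() < self.noise)
                for b in self.bias]

chip_a, chip_b = ToyPUF(seed=1), ToyPUF(seed=2)
print(chip_a.read())  # a mostly stable, device-unique pattern
print(chip_b.read())  # a different pattern from "identical" hardware
```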

Because they depend on manufacturing artifacts, there is a certain unpredictability, and deciding just what features to look at is crucial. The PUF needs to be deterministic, producing the same value for a given input. This means that temperature, age, power-supply fluctuations, and radiation all cause variations that must be hardened against. Several techniques, such as voting, error correction, or fuzzy extraction, are used, but each comes with trade-offs in power and space requirements. Many of the fluctuations, such as aging and temperature, are linear or well understood and can be compensated for fairly easily.
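
Continuing the toy PUF above, per-bit majority voting over repeated reads is the simplest of those stabilization tricks; real designs layer error-correcting codes or fuzzy extractors on top.

```python
def majority_vote(puf, rounds=9):
    """Read the PUF several times and keep each bit's majority value."""
    reads = [puf.read() for _ in range(rounds)]
    return [int(sum(bits) * 2 > rounds) for bits in zip(*reads)]

stable_key = majority_vote(chip_a)
print(stable_key == majority_vote(chip_a))  # True almost every time
```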

Broadly speaking, there are two types of PUFs: weak and strong. Weak PUFs offer only a few responses and are focused on key generation. The key is then fed into more traditional cryptography, which means the PUF needs to produce exactly the same output every time. Strong PUFs have an exponentially large space of challenge-response pairs and are used for authentication. While strong PUFs still employ some error correction, they can also tolerate mistakes statistically: a device might be queried fifty times and need to pass at least 95% of the queries to be considered authentic. Continue reading “PUF Away For Hardware Fingerprinting”
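
As a final sketch, here is that challenge-response authentication flow in toy form. A hash plus a per-chip secret stands in for the strong PUF, and the noise level, bit-error allowance, and 95% threshold are illustrative.

```python
import hashlib, random

def toy_response(chip, challenge, noise=0.03):
    """Fake strong PUF: hash the challenge with a per-chip secret,
    then flip a few bits to model read noise."""
    digest = hashlib.sha256(chip + challenge).digest()
    bits = [(byte >> i) & 1 for byte in digest[:8] for i in range(8)]
    return [b ^ (random.random() < noise) for b in bits]

def authenticate(chip, enrolled, pass_fraction=0.95, max_flips=6):
    """enrolled: {challenge: noise-free response} saved at enrollment."""
    passed = sum(
        sum(a != b for a, b in zip(toy_response(chip, c), ref)) <= max_flips
        for c, ref in enrolled.items())
    return passed >= pass_fraction * len(enrolled)

secret = b"chip-42"
enrolled = {bytes([i]): toy_response(secret, bytes([i]), noise=0)
            for i in range(50)}
print(authenticate(secret, enrolled))    # genuine chip: True
print(authenticate(b"clone", enrolled))  # different hardware: False
```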

Timeframe: The Little Desk Calendar That Could

Usually, the problem comes before the solution, but for [Stavros], the opposite happened. A 4.7″ e-ink screen with an integrated ESP32 and battery management caught his eye, so he bought it and started thinking about what he wanted to do with it. The result, the Timeframe, is a sleek desk calendar based around that integrated e-ink screen.

[Stavros] found the device’s MicroPython support a little lackluster; it often failed to draw. He found a PlatformIO project that drove the e-ink display with an older but modified library, which worked quite well. However, that older library didn’t support portrait orientation or other niceties. Rather than trying to create something complex in C, he moved the complexity to a server environment he knew more about. With the help of Copilot, he got code that wakes the ESP32 every half hour, downloads an image from a server, and displays it. A Python script uses a headless browser to visit Google Calendar, resize the window, take a screenshot, and upload it.
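
That server side boils down to a surprisingly small script. The sketch below assumes Playwright as the headless browser (the write-up doesn’t specify which one), uses placeholder values for the URL, viewport, and output path, and skips Google authentication entirely.

```python
from playwright.sync_api import sync_playwright

CAL_URL = "https://calendar.google.com/calendar/u/0/r/week"  # placeholder
WIDTH, HEIGHT = 540, 960  # portrait panel resolution, assumed

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": WIDTH, "height": HEIGHT})
    page.goto(CAL_URL, wait_until="networkidle")  # auth omitted here
    page.screenshot(path="calendar.png")  # the ESP32 fetches this file
    browser.close()
```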

The hardest part of the exercise was getting authentication with Google working reliably. A sleek white 3D-printed case wraps the whole affair in an aesthetically pleasing shell. So far, this has been a great story of someone building something for themselves by playing to their strengths. But where’s the hack?

The hack comes in when [Stavros] tried squeezing his calendar into a case that was too tight and cracked the screen. Suddenly, a large portion of the screen wouldn’t draw. He turned what was broken into something new by mapping out the dead area and converting the Python to draw weather information with Pillow rather than screenshotting a webpage: clever reuse and a way to make good out of a bad accident.
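
A hedged sketch of that post-crack approach: render the frame directly with Pillow and keep everything inside the part of the panel that still works. The panel size, dead-zone rectangle, layout, and sample weather text are all invented stand-ins for whatever [Stavros] actually mapped out.

```python
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 540, 960        # portrait panel, assumed
DEAD_ZONE = (0, 700, 540, 960)  # cracked region that won't draw (made up)

img = Image.new("1", (WIDTH, HEIGHT), 1)  # 1-bit image, white background
draw = ImageDraw.Draw(img)

weather = ["Tuesday", "Partly cloudy", "High 14 C / Low 6 C"]  # sample text
y = 40
for line in weather:
    draw.text((40, y), line, fill=0)  # black text, default font
    y += 60

draw.rectangle(DEAD_ZONE, fill=1)  # leave the broken area blank
img.save("frame.png")              # served to the ESP32 as before
```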

The code is up on GitLab, and the 3D files for the case are available on Printables. You can also find the project on Hackaday.io, as it was an entry into our recently concluded Low-Power Contest. Unfortunately, while the Timeframe is pretty power efficient, it doesn’t last as long as this calendar with a 50-year battery life.

Signed Distance Functions: Modeling In Math

What if, instead of defining a mesh as a series of vertices and edges in 3D space, you could describe it with a single function? The simplest such function would return the signed distance to the closest point on the object’s surface (negative meaning you’re inside the object). That’s precisely what a signed distance function (SDF) is. A signed distance field (also SDF) is just a voxel grid where the SDF is sampled at each point on the grid. First, we’ll discuss SDFs in 2D and then jump to 3D.

SDFs in 2D

A signed distance function in 2D is more straightforward to reason about, so we’ll cover it first. It is also helpful for font rendering in specific scenarios. [Vassilis] of [Render Diagrams] has a beautiful demo on two-dimensional SDFs that covers the basics. The naive rendering technique is to create a grid and calculate the distance at each point on it. Negative values mean the center of the pixel is inside the shape, so the pixel gets colored in; if the distance is greater than the size of a grid cell, it doesn’t. Increasing the grid resolution gives better approximations of the actual shape of the SDF. So why use this over a more traditional vector approach? The advantage is that the shape is represented by a single formula calculated at many points, and most modern computers are extraordinarily good at calculating the same thing thousands of times with slightly different parameters, often on the GPU. GLyphy is an SDF-based text renderer implemented as OpenGL ES2 shaders, as discussed at linux.conf.au in 2014. FreeType even merged an SDF renderer written by [Anuj Verma] back in 2020. Continue reading “Signed Distance Functions: Modeling In Math”
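
Here is a tiny, self-contained version of that naive grid technique: a circle SDF sampled at each cell center, with ASCII characters standing in for pixels. The shape, grid size, and threshold are arbitrary.

```python
import math

def sdf_circle(x, y, cx, cy, r):
    """Signed distance from (x, y) to a circle: negative means inside."""
    return math.hypot(x - cx, y - cy) - r

GRID = 24
for j in range(GRID):
    row = ""
    for i in range(GRID):
        d = sdf_circle(i + 0.5, j + 0.5, cx=12, cy=12, r=8)  # cell center
        row += "#" if d < 0 else "."
    print(row)
```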