DIY Wall-Plotter Does Generative Art, But Not As We Know It

[Teddy Warner]’s GPenT (Generative Pen-trained Transformer) project is a wall-mounted polargraph that makes plotter art, but there’s a whole lot more going on than one might think. The project was partly born from [Teddy]’s ideas about using machine learning in ways it was never really intended. What resulted is a wall-mounted pen plotter offering a host of different ‘generators’ — ways to create line art — ranging from procedural patterns, to image uploads, to the titular machine learning shenanigans.

There are loads of different ways to represent images with lines, and this project helps explore them.

Want to see the capabilities for yourself? There’s a publicly accessible version of the plotter interface that lets one play with the different generators. The public instance is not connected to a physical plotter, but one can still generate and preview plots, and download the resulting SVG file or G-code.

Most of the generators do not involve machine learning, but the unusual generative angle is well-represented by two of them: dcode and GPenT.

dcode is a diffusion model that, instead of converting a text prompt into an image, has been trained to convert text directly into G-code. It’s very much a square peg in a round hole. Visually it’s perhaps not the most exciting, but as a concept it’s fascinating.

The titular GPenT works like this: give it a scrap of text inspiration (a seed, if you will), and the model turns that into a combination of other generators and parameters, machine-selected and stacked together to produce a final composition. The results are unique, to say the least.
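The real selection logic lives in the project’s repository, but the overall shape of the idea is easy to sketch. Here’s a hypothetical Python toy (emphatically not [Teddy]’s code) where the machine-selection step is stubbed out with a seed-derived RNG, and two stand-in generators get stacked into polylines a plotter could draw:

```python
# Hypothetical sketch of the GPenT idea (not [Teddy]'s implementation):
# a text seed deterministically picks and stacks line generators.
import hashlib
import math
import random

def spiral(rng, n=400):
    """Archimedean spiral; the seed-derived RNG varies its pitch."""
    pitch = rng.uniform(0.05, 0.2)
    return [(pitch * t * math.cos(t), pitch * t * math.sin(t))
            for t in (i * 0.1 for i in range(n))]

def jitter_walk(rng, n=400):
    """Random walk, standing in for any procedural generator."""
    x = y = 0.0
    points = []
    for _ in range(n):
        x += rng.uniform(-1.0, 1.0)
        y += rng.uniform(-1.0, 1.0)
        points.append((x, y))
    return points

GENERATORS = [spiral, jitter_walk]

def gpent(seed_text):
    """Stand-in for the ML step: seed text -> stacked polylines."""
    digest = hashlib.sha256(seed_text.encode()).digest()
    rng = random.Random(digest)
    chosen = rng.sample(GENERATORS, k=rng.randint(1, len(GENERATORS)))
    return [g(rng) for g in chosen]  # one polyline per generator

paths = gpent("a murmuration at dusk")
print(len(paths), "polylines; first point:", paths[0][0])
```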

Once the generators make something, the framed and wall-mounted plotter turns it into physical lines on paper. Watch the system’s first plot happen in the video embedded below the break.

This is a monster of a project, encompassing a custom CNC pen plotter, a frame to hold it, and the whole software pipeline both for driving the CNC machine and for generating what it plots. Of course, the journey involved a few false starts and dead ends, but they’re all pretty interesting. The plotter’s GitHub repository, combined with [Teddy]’s write-up, has all the details one may need.

It’s also one of those years-in-the-making projects that ultimately got finished, and we suspect that doing so brought a bit of a sigh of relief on [Teddy]’s part. Most of us have unfinished projects, and if you have one that’s being a bit of a drag, we’d like to remind you that you don’t necessarily have to finish-finish a project to get it off your plate. We have some solid advice on how to (productively) let go.

Continue reading “DIY Wall-Plotter Does Generative Art, But Not As We Know It”

AI Picks Outfits With Abandon

Most of us choose our own outfits on a daily basis. [NeuroForge] decided that he’d instead offload this duty to artificial intelligence — perhaps more for the sake of a class project than outright fashion.

The concept involved first using an AI model to predict the weather. Those predictions would then be fed to a large language model (LLM), which would recommend an appropriate outfit for the conditions. The output from the LLM would be passed to a simple alarm clock, which would wake [NeuroForge] and indicate what he should wear for the day. Amazon’s Chronos forecasting model was used for weather prediction based on past weather data, while Meta’s Llama 3.1 LLM made the clothing recommendations. [NeuroForge] notes that, once the historical weather data had been sourced, the whole setup could run without querying any external services.
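As a rough sketch of how those pieces snap together (assuming the chronos-forecasting and ollama Python packages, with illustrative model names rather than [NeuroForge]’s exact configuration):

```python
# Sketch only: forecast tomorrow's temperatures locally, then ask a
# local Llama 3.1 for an outfit. Model names and data are placeholders.
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting
import ollama                        # assumes a local ollama server

# 1. Time-series forecast from past readings (degrees C, hourly).
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small", device_map="cpu", torch_dtype=torch.float32
)
history = torch.tensor([12.1, 11.4, 10.9, 13.2, 14.0, 13.5, 12.8, 11.9])
forecast = pipeline.predict(context=history, prediction_length=24)
median = forecast.float().quantile(0.5, dim=1).squeeze().tolist()

# 2. Hand the forecast to the LLM for the wardrobe decision.
prompt = (f"Hourly temperatures tomorrow in degrees C: {median}. "
          "Recommend one outfit for the whole day.")
reply = ollama.chat(model="llama3.1",
                    messages=[{"role": "user", "content": prompt}])
print(reply["message"]["content"])
```

Everything above runs on the local machine, which matches the no-external-queries goal.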

While the AI’s choices often involved strange clashes and were not weather-appropriate, [NeuroForge] nonetheless followed through and wore what he was told. This got tough when the outfit on a particularly cold day was a T-shirt and shorts, though the LLM did at least suggest a winter hat and gloves be part of the ensemble. Small wins, right?

We’ve seen machine learning systems applied to wardrobe-related tasks before. One wonders if a more advanced model could be trained not just to pick seasonally appropriate clothes, but to assemble actually fashionable outfits to boot. If you manage to whip that up, let us know on the tips line. Bonus points if your ML system gets a gig on the reboot of America’s Next Top Model.

Continue reading “AI Picks Outfits With Abandon”

Why LLMs Are Less Intelligent Than Crows

The basic concept of human intelligence entails self-awareness alongside the ability to reason and apply logic to one’s actions and daily life. Despite the very fuzzy definition of ‘human intelligence’, and despite many aspects of said human intelligence (HI) also being observed in other animals, like crows and orcas, humans have through the ages assumed that their brains are more special than those of other animals.

Currently the Cattell-Horn-Carroll (CHC) theory of intelligence is the most widely accepted model, defining distinct cognitive abilities that range from memory and processing speed to reasoning. While admittedly not perfect, it gives us a baseline to work with when we think of the term ‘intelligence’, whether biological or artificial.

This raises the question of how, in the context of artificial intelligence (AI), the CHC model translates to the technologies we see in use today. When can we expect to subject an artificial intelligence to an IQ test and have it handily outperform a human on all metrics?

Continue reading “Why LLMs Are Less Intelligent Than Crows”

A Bird Watching Assistant

When AI is being touted as the latest tool to replace writers, filmmakers, and other creative talent, it can be a bit depressing staring down the barrel of a future dystopia — especially since most LLMs just parrot their training data and aren’t actually creative. But AI can have some legitimate strengths when it’s taken under wing as an assistant rather than an outright replacement.

For example, [Aarav] is happy as a lark when birdwatching, but the birds aren’t always around, and waiting hours for them to show up can be a bit of a wild goose chase. To help with that, he built this machine learning tool to alert him to the presence of birds.

The small device is based on a Raspberry Pi 5 with an AI HAT nested on top, and uses a wide-angle camera to keep an eagle-eyed lookout over a space like a garden or forest. It runs a few Python scripts leveraging OpenCV, the widely used computer vision library that makes image recognition tasks approachable. When perched to view an outdoor area, it sends an email notification to the user’s phone when it detects bird activity, so they can join the action swiftly if they happen to be doing other things at the time. The system also logs hourly bird counts and creates a daily graph, helping users identify peak bird-watching times.
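[Aarav]’s exact scripts are his own, but a minimal version of the detect-and-notify loop might look something like this, assuming a pretrained MobileNet-SSD Caffe model (whose VOC class list includes “bird”) sitting next to the script and a mail relay on localhost:

```python
# Hedged sketch of a bird-detection loop with OpenCV (not [Aarav]'s code).
import smtplib
from email.message import EmailMessage
import cv2

# MobileNet-SSD trained on PASCAL VOC; class index 3 is "bird".
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
BIRD_CLASS_ID, MIN_CONFIDENCE = 3, 0.5

def frame_has_bird(frame):
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: [1, 1, N, 7]
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        class_id = int(detections[0, 0, i, 1])
        if confidence > MIN_CONFIDENCE and class_id == BIRD_CLASS_ID:
            return True
    return False

def notify(address="me@example.com"):  # placeholder address
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = "Bird spotted!", address, address
    msg.set_content("Bird activity detected on the garden camera.")
    with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
        server.send_message(msg)

capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok and frame_has_bird(frame):
    notify()
```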

Right now the system can only detect the presence of birds in general, but he hopes to build future versions that can identify birds with more specificity, perhaps down to the species. Identifying birds by sight is certainly one viable way of going about this, but one of our other favorite bird-watching tools, demonstrated by [Benn Jordan], uses similar hardware while listening for bird calls rather than looking for the birds with a vision-based system.

Continue reading “A Bird Watching Assistant”

Dual-Arm Mobile Bot Built On IKEA Cart Costs Hundreds, Not Thousands

There are many incredible open-source robotic arm projects out there, but there’s a dearth of affordable, stable, and mobile robotic platforms with arms. That’s where XLeRobot comes in. It builds on the fantastic LeRobot framework to make a unit that can be trained for autonomous tasks via machine learning, as well as operated remotely.

XLeRobot, designed by [Vector Wang], cleverly makes optimal use of easy-to-obtain parts. In addition to the mostly 3D-printed hardware, it uses an IKEA cart with stacked bin-like shelves as its main frame.

The top bin holds dual arms and a central stalk with a “head”. There’s still room left in that top bin, a handy feature that gives the robot a place to stow or carry objects.

The bottom of the cart gets the three-wheeled motion unit. Three omnidirectional wheels provide a stable base while also allowing the robot to propel itself in any direction and turn on a dime. The motion unit bolts to the bottom, but because the IKEA cart’s shelf bottoms are a metal mesh, no drilling is required.
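For the curious, the math behind “any direction, turn on a dime” is compact. Here’s a sketch of the inverse kinematics for a generic three-omni-wheel base; the wheel angles and base radius are assumptions for illustration, not measurements from XLeRobot:

```python
# Inverse kinematics sketch for a three-omni-wheel base (illustrative).
import math

WHEEL_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # 120 degrees apart
BASE_RADIUS = 0.15  # meters from center to each wheel (assumed)

def wheel_speeds(vx, vy, omega):
    """Tangential speed each wheel must roll at to realize a body
    velocity (vx, vy) plus rotation rate omega."""
    return [-math.sin(a) * vx + math.cos(a) * vy + BASE_RADIUS * omega
            for a in WHEEL_ANGLES]

# Strafe sideways while spinning slowly: a move a diff-drive can't make.
print(wheel_speeds(vx=0.2, vy=0.0, omega=0.5))
```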

It’s all very tidy, and results in a mobile robotics platform that is cheap enough for most hobbyists to afford, while being big enough to navigate indoor environments and do useful tasks.

Continue reading “Dual-Arm Mobile Bot Built On IKEA Cart Costs Hundreds, Not Thousands”

Learn What A Gaussian Splat Is, Then Make One

“Gaussian splats” is a term you have likely come across, probably in relation to 3D scenery. But what are they, exactly? This blog post explains precisely that in no time at all, complete with great interactive examples and highlights of their strengths and relative weaknesses.

Gaussian splats excel at making colorful, organic subject matter look great.

Gaussian splats are a lot like point clouds, except that each point is a differently shaped “splat” of color, arranged in such a way that the resulting 3D scene looks fantastic — photorealistic, even — from any angle.
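For the mathematically inclined, the standard formulation (from the original 3D Gaussian Splatting paper) makes “differently shaped splat” precise. Each splat is an anisotropic 3D Gaussian

$$G(\mathbf{x}) = e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})}$$

where the mean $\boldsymbol{\mu}$ is the splat’s center and the covariance $\Sigma$ encodes its stretch and rotation. A pixel’s color $C$ then comes from ordinary front-to-back alpha blending over the $N$ splats covering it, each with color $c_i$ and opacity $\alpha_i$:

$$C = \sum_{i=1}^{N} c_i \alpha_i \prod_{j=1}^{i-1}(1-\alpha_j)$$

Training optimizes $\boldsymbol{\mu}$, $\Sigma$, $c_i$, and $\alpha_i$ for millions of splats until renders match the source photos.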

All of the real work is in the initial fitting of the splats to the scene. Once that work is done, viewing is the easy part. Not only are the resulting scene files small, but rendering is computationally cheap.

There are a few pros and cons to Gaussian splats compared to 3D meshes, but in general they look stunning for any kind of colorful, organic scene. So how does one go about making or using them?

That’s where the second half of the post comes in handy. It turns out that making your own Gaussian splats is simply a matter of combining high-quality photos with the right software. In that sense, it has a lot in common with photogrammetry.

Even early on, Gaussian splats were notable for their high realism. And since this space has more than its share of lateral thinkers, the novel concept of splats being neither pixels nor voxels has led some enterprising folks to try to apply the concept to 3D printing.

Where Is Mathematics Going? Large Language Models And Lean Proof Assistant

If you’re a hacker you may well have a passing interest in math, and if you have an interest in math you might like to hear about the direction of mathematical research. In a talk on this topic, [Kevin Buzzard], professor of pure mathematics at Imperial College London, asks the question: Where is Mathematics Going?

The talk starts with him explaining that in 2017 he had a mid-life crisis of sorts, becoming disillusioned with the way mathematics research was being done, and he started looking to computer science for solutions.

He credits Euclid, as many do, with writing down some axioms and starting mathematics over 2,000 years ago. From axioms came deductions, deductions became mathematical facts, and math has proceeded in this fashion ever since. This remains the way mathematical research is done in mathematics departments around the world. The consequence is that mathematics is now incomprehensibly large. The proofs themselves can be similarly enormous; he gives an example of one proof that runs to 10,000 pages and still hasn’t been completely written down, more than 20 years after it was announced.

The conclusion from this is that mathematics has become so complex that traditional methods of documenting it struggle to cope. He says that a tertiary education in mathematics aims to “get students to the 1940s”, whereas a tertiary education in computer science will expose students to the state of the art.
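To give a flavor of what machine-checked mathematics looks like, here are two toy Lean 4 theorems (our example, not one from the talk). If the file compiles, the kernel has verified every deductive step:

```lean
-- Two tiny machine-checked proofs in Lean 4.
-- `rfl` closes goals that hold by pure computation.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A propositional proof: conjunction is symmetric.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```

The appeal is obvious when proofs run to 10,000 pages: the computer, rather than a human referee, carries the burden of checking.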

Continue reading “Where Is Mathematics Going? Large Language Models And Lean Proof Assistant”