Homebrew Traffic Monitor Keeps Eyes On The Streets

How many cars go down your street each day? How fast were they going? What about folks out on a walk or people riding bikes? These aren't easy questions to answer, as most of us have better things to do than watch the street all day and keep a tally. But at the same time, this is critically important data from an urban planning perspective.

Of course, you could just leave it to City Hall to figure out this sort of thing. But what if you want to get a speed bump or a traffic light added to your neighborhood? Being able to collect your own localized traffic data could certainly strengthen your case, which is where TrafficMonitor.ai from [glossyio] comes in.

Continue reading “Homebrew Traffic Monitor Keeps Eyes On The Streets”

New Camera Does Realtime Holographic Capture, No Coherent Light Required

Holography is about capturing 3D data from a scene, and being able to reconstruct that scene — preferably in high fidelity. Holography is not a new idea, but engaging in it is not exactly a point-and-shoot affair. One needs coherent light for a start, and it generally only gets touchier from there. But now researchers describe a new kind of holographic camera that can capture a scene better and faster than ever. How much better? The camera goes from scene capture to reconstructed output in under 30 milliseconds, and does it using plain old incoherent light.

The camera and liquid lens are tiny. Together with the computation back end, they can make a holographic capture of a scene in under 30 milliseconds.

The new camera is a two-part affair: acquisition and calculation. Acquisition consists of a camera with a custom electrically-driven liquid lens design that captures a focal stack of a scene within 15 ms. The back end is a deep learning neural network system (FS-Net) which accepts the camera data and computes a high-fidelity RGB hologram of the scene in about 13 ms. How good are the results? They beat other methods, and reconstruction of the scene using the data looks really, really good.
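The actual FS-Net architecture is laid out in the researchers' paper, but the overall data flow is simple enough to sketch: a stack of images captured at different focus settings goes in, and an RGB hologram comes out. Here's a purely illustrative PyTorch sketch of that idea; the layer sizes, the number of focal planes, and the phase-only output are our own assumptions, not the published network.

```python
# Illustrative sketch only: a focal stack in, an RGB phase hologram out.
# Layer sizes, focal plane count, and output format are assumptions,
# not the FS-Net architecture from the paper.
import torch
import torch.nn as nn

NUM_FOCAL_PLANES = 8  # assumed number of focus settings in the stack

class FocalStackToHologram(nn.Module):
    def __init__(self, planes=NUM_FOCAL_PLANES):
        super().__init__()
        self.net = nn.Sequential(
            # Focal stack arrives stacked along channels: 3 RGB x planes
            nn.Conv2d(3 * planes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # One phase value per colour channel and pixel
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, focal_stack):
        # focal_stack shape: (batch, 3 * planes, height, width)
        # Squash the raw output into phase values in [-pi, pi]
        return torch.pi * torch.tanh(self.net(focal_stack))

model = FocalStackToHologram()
stack = torch.rand(1, 3 * NUM_FOCAL_PLANES, 512, 512)  # fake focal stack
hologram = model(stack)                                 # (1, 3, 512, 512)
```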

One might wonder what makes this different from, say, a 3D scene captured by a stereoscopic camera, or with an RGB depth camera (like the now-discontinued Intel RealSense). Those methods capture 2D imagery from a single perspective, combined with depth data to give an understanding of a scene’s physical layout.

Holography, by contrast, captures a scene's wavefront information, which is to say it captures not just where light is coming from, but how it bends and interferes. This information can be used to optically reconstruct a scene in a way data from other sources cannot, for example by allowing one to shift perspective and focus.

Being able to capture holographic data in such a way significantly lowers the bar for development and experimentation in holography — something that’s traditionally been tricky to pull off for the home gamer.

Genetic Algorithm Runs On Atari 800 XL

For the last few years or so, the story in artificial intelligence that was accepted without question was that all of the big names in the field needed more compute, more resources, more energy, and more money to build better models. But simply throwing money and GPUs at these companies led to them growing complacent, ripe for an upset by an underdog with a fraction of the computing resources and funding. Perhaps that should have been more obvious from the start, since people have been building various machine learning algorithms on extremely limited computing platforms like this one built on the Atari 800 XL.

Unlike other models that use memory-intensive techniques like gradient descent to train their neural networks, [Jean Michel Sellier] is using a genetic algorithm to work within the confines of the platform. Genetic algorithms evaluate potential solutions by evolving them over many generations, keeping the ones which work best each time. The surviving candidates can be modified in many ways before the next round of evolution, but for a limited system like this a quick approach is to make small random changes. [Jean]'s program, written in BASIC, performs 32 generations of evolution to predict the points that lie on a simple mathematical function.
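[Jean]'s implementation is in Atari BASIC, but the evolve-score-select loop at the heart of a genetic algorithm fits in a few lines of any language. Here's a rough Python sketch of the same idea under some assumptions of our own: the "genome" is a set of polynomial coefficients, fitness is the squared error against points of a target function, and mutation is a small random nudge — none of these specifics come from [Jean]'s program.

```python
# Rough sketch of a genetic algorithm predicting points on a simple function.
# The target function, genome encoding, and population size are illustrative
# assumptions; they are not taken from [Jean Michel Sellier]'s BASIC program.
import random

def target(x):
    return x * x                      # the function whose points we want to fit

XS = [i / 10 for i in range(-20, 21)] # sample points along the curve
POP_SIZE = 20
GENERATIONS = 32                      # same generation count as the Atari program

def fitness(genome):
    # Lower is better: squared error of a quadratic with these coefficients
    a, b, c = genome
    return sum((a * x * x + b * x + c - target(x)) ** 2 for x in XS)

def mutate(genome):
    # Small random change -- the cheap trick that suits a limited machine
    return tuple(g + random.uniform(-0.1, 0.1) for g in genome)

population = [tuple(random.uniform(-1, 1) for _ in range(3))
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)
    survivors = population[: POP_SIZE // 2]   # keep the better half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best coefficients:", min(population, key=fitness))
```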

While the BASIC program relies on stochastic methods to train, it does work, and it proves that certain machine learning models can be built on limited hardware, in this case an 8-bit Atari running BASIC. In previous projects [Jean] has also shown how similar computers can be used for other complex mathematical tasks. Of course, an 8-bit machine like this won't challenge OpenAI or Anthropic anytime soon, but looking for more efficient ways of running complex computations is always a more challenging and rewarding problem to solve than simply buying more computing resources.

Continue reading “Genetic Algorithm Runs On Atari 800 XL”

More Details On Why DeepSeek Is A Big Deal

The DeepSeek large language models (LLM) have been making headlines lately, and for more than one reason. IEEE Spectrum has an article that sums everything up very nicely.

We shared the way DeepSeek made a splash when it came onto the AI scene not long ago, and this is a good opportunity to go into a few more details of why this has been such a big deal.

For one thing, DeepSeek (there are actually two flavors, -V3 and -R1, more on them in a moment) punches well above its weight. DeepSeek is the product of an innovative development process, and is freely available to use or modify. It is also indirectly highlighting the way companies in this space like to label their LLM offerings as “open” or “free” while stopping well short of actually making them open source.

The DeepSeek-V3 LLM was developed in China and reportedly cost less than 6 million USD to train. This was possible thanks to DualPipe, a highly optimized and scalable training method developed to work around the limitations imposed by export restrictions on Nvidia hardware. Details are in the technical paper for DeepSeek-V3.

There’s also DeepSeek-R1, a chain-of-thought “reasoning” model which handily exposes its thought process, enclosed within easily-parsed <think> and </think> pseudo-tags included in its responses. A model like this takes an iterative, step-by-step approach to formulating responses, and benefits from prompts that provide a clear goal for the LLM to aim for. The way DeepSeek-R1 was created was itself novel: its training started with supervised fine-tuning (SFT), a human-led, labor-intensive process, as a “cold start”, which eventually handed off to a more automated reinforcement learning (RL) process with a rules-based reward system. The result avoided the problems that come from relying too heavily on RL while minimizing the human effort of SFT. Technical details on the process of training DeepSeek-R1 are here.
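Because the reasoning arrives as plain <think> tags inside the response text, splitting it from the final answer takes only a few lines. A minimal Python sketch, assuming the whole response is already in hand as a single string:

```python
# Minimal sketch: split a DeepSeek-R1 style response into its reasoning and
# its final answer, assuming the reasoning is wrapped in <think> ... </think>.
import re

def split_reasoning(response: str):
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return reasoning, answer

example = "<think>2 + 2 is 4, so the answer is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(example)
print(reasoning)  # -> 2 + 2 is 4, so the answer is 4.
print(answer)     # -> The answer is 4.
```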

DeepSeek-V3 and -R1 are freely available in the sense that one can access the full-powered models online or via an app, or download distilled models for local use on more limited hardware. They are free and open as in accessible, but not open source, because not everything needed to replicate the work is actually released. As with most LLMs, the training data and the actual training code used are not available.

What has been released, and is making waves of its own, is the technical detail of how the researchers produced what they did, and that means there are efforts to create an actually open source version. Keep an eye out for Open-R1!

Going Digital: Teaching A TI-84 Handwriting Recognition

You wouldn’t typically associate graphing calculators with artificial intelligence, but hacker [KermMartian] recently made it happen. The innovative project involved running a neural network directly on a TI-84 Plus CE to recognize handwritten digits. Using the MNIST dataset, a well-known collection of handwritten numbers, the calculator could identify digits in just 18 seconds. If you want to learn how, check out his full video on it here.

The project began with a proof of concept: running a convolutional neural network (CNN) on the calculator’s limited hardware, a TI-84 Plus CE with only 256 KB of memory and a 48 MHz processor. Despite these constraints, the neural network could train and make predictions. The keys to success: optimizing the code, leveraging the calculator’s C programming tools, and offloading the heavy lifting to a computer for training. Once trained, the network could be transferred to the calculator for real-time inference. Not only did it recognize digits from MNIST, but it also accepted input from a USB mouse, letting [KermMartian] draw digits directly on the screen.
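The full toolchain is in [KermMartian]'s video and code, but the train-on-a-PC, infer-on-the-calculator split is easy to outline. Below is a hypothetical Python/PyTorch sketch of the desktop half; the tiny layer sizes and the C-array weight dump are our own assumptions to illustrate the workflow, not his actual network or export format.

```python
# Hypothetical sketch of the desktop side: train a tiny MNIST CNN, then dump
# its weights as a C array for a memory-constrained target. The layer sizes
# and export format are assumptions, not [KermMartian]'s actual project.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(4 * 14 * 14, 10)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

train_data = datasets.MNIST(".", train=True, download=True,
                            transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)

model, loss_fn = TinyCNN(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for images, labels in loader:                    # one pass is enough for a demo
    opt.zero_grad()
    loss_fn(model(images), labels).backward()
    opt.step()

# Dump the trained weights as a C array so the inference code on the
# calculator side could compile them in (illustrative format only).
weights = torch.cat([p.detach().flatten() for p in model.parameters()])
with open("weights.h", "w") as f:
    f.write("const float weights[] = {")
    f.write(",".join(f"{w:.6f}f" for w in weights.tolist()))
    f.write("};\n")
```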

While the calculator’s limited resources mean it can’t train the network in real time, this project is proof that, with enough ingenuity, even a small device can be used for something as complex as AI. It’s not just about power; it’s about resourcefulness. If you’re into unconventional projects, this is one for the books.

Continue reading “Going Digital: Teaching A TI-84 Handwriting Recognition”

Training A Self-Driving Kart

There are certain tasks that humans perform every day that are notoriously difficult for computers to figure out. Identifying objects in pictures, for example, seems fairly straightforward, but has only been done by computers with any semblance of accuracy in the last few years. Even then, it can’t be done without huge amounts of computing resources. Similarly, driving a car is a surprisingly complex task that even companies promising full self-driving vehicles haven’t been able to deliver, despite working on the problem for over a decade now. [Austin] demonstrates this difficulty in his latest project, which adds self-driving capabilities to a small go-kart.

[Austin] had been working on this project at the local park but grew tired of packing up all his gear when he wanted to work on his machine-learning algorithms. So he took all the self-driving equipment off of the first kart and incorporated it into a smaller kart with a very small turning radius so he could develop it in his shop.

He laid down some tape on the floor to create the track and then set up the vehicle to learn how to drive by watching and gathering data. The model, a convolutional neural network, is trained on this data, and the only inputs it gets are images from cameras at the front of the kart. At first, it could only change the steering angle, with [Austin] controlling the throttle to prevent crashes. Eventually, he gave it control of the throttle as well, which it handles well except at the fastest speeds.
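[Austin]'s exact network isn't spelled out in what we have to go on, but end-to-end setups like this usually do behavioral cloning: a small CNN maps each camera frame straight to the steering angle (and later throttle) that the human driver applied at that moment. A hedged PyTorch sketch of that idea, with made-up layer sizes and image dimensions:

```python
# Hedged sketch of an end-to-end behavioral-cloning model: a front camera
# frame in, steering angle and throttle out. Layer sizes and image size are
# illustrative assumptions, not [Austin]'s actual network.
import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),            # outputs: [steering angle, throttle]
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# Training idea: minimize the error between the model's outputs and the
# steering/throttle a human applied for each recorded camera frame.
model = DrivingNet()
frame = torch.rand(1, 3, 120, 160)        # one fake 160x120 camera frame
human_action = torch.tensor([[0.1, 0.5]]) # recorded steering and throttle
loss = nn.functional.mse_loss(model(frame), human_action)
loss.backward()
```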

There were plenty of challenges along the way, especially when compared to the models trained at the park; [Austin] correctly theorized that the trouble at the park came from a lack of contrast at the boundary between the track and any out-of-bounds areas. With a few tweaks to the track, as well as adding some wide-angle lenses to his cameras, he was able to get a model that works fairly well. Getting started on a project like this doesn’t have as high a barrier to entry as one might imagine, either. Take a look at this comprehensive open-source Python library for self-driving projects. If you want to start smaller, perhaps don’t start with a self-driving kart.

Continue reading “Training A Self-Driving Kart”

Dog Poop Drone Cleans Up The Yard So You Don’t Have To

Sometimes you instantly know who’s behind a project from the subject matter alone. So when we saw this “aerial dog poop removal system” show up in the tips line, we knew it had to be the work of [Caleb Olson].

If you’re unfamiliar with [Caleb]’s oeuvre, let us refresh your memory. [Caleb] has been on a bit of a dog poop journey, starting with a machine-learning system that analyzed security camera footage to detect when the adorable [Twinkie] dropped a deuce in the yard. Not content with just knowing when a poop event has occurred, he automated the task of locating the packages with a poop-pointing robot laser. Removal of the poop remained a manual task, one which [Caleb] was keen to outsource, hence the current work.

The video below, from a lightning talk at a conference, is pretty much all we have to go on, and the quality is a bit potato-esque. And while [Caleb]’s PoopCopter is clearly still a prototype, it’s easy to get the gist. Combining data from the previous poop-adjacent efforts, [Caleb] has built a quadcopter that can (or will, someday) be guided to the approximate location of the offending package, home in on it using a downward-looking camera, and autonomously whisk it away.

The retrieval mechanism is the high point for us; rather than a complicated, servo-laden “sky scoop” or something similar, the drone has a bell-shaped container on its belly with a series of geared leaves on the open end. The leaves are open when the drone descends onto the payload, and then close as the drone does a quick rotation around the yaw axis. And, as [Caleb] gleefully notes, the leaves can also open in midair with a high-torque yaw move in the opposite direction; the potential for neighborly hijinx is staggering.

All jokes and puns aside, this looks fantastic, and we can’t wait for more information and a better video. And lest you think [Caleb] only works on “Number Two” problems, never fear — he’s also put considerable work into automating his offspring and taking the awkwardness out of social interactions.

Continue reading “Dog Poop Drone Cleans Up The Yard So You Don’t Have To”