NVIDIA Trains Custom AI To Assist Chip Designers

AI is big news lately, but as with any new technology, it’s important to pierce through the hype. The recent news that NVIDIA created a custom large language model (LLM) called ChipNeMo to assist in chip design is tailor-made for breathless hyperbole, so it’s refreshing to read exactly how such a thing is genuinely useful.

ChipNeMo is trained on the highly specific domain of semiconductor design via internal code repositories, documentation, and more. The result is a 43-billion-parameter LLM that runs on a single A100 GPU yet plays no direct role in designing chips; instead, it focuses on making designers’ jobs easier.

For example, it turns out that senior designers spend a lot of time answering questions from junior designers. If a junior designer can ask ChipNeMo a question like “what does signal x from memory unit y do?” and get an answer that saves a senior designer’s time, then NVIDIA says the tool is already worth it. Another big time sink for designers is dealing with bugs. Bugs are extensively documented in a variety of ways, and designers spend a lot of time reading that documentation just to grasp the basics of a particular bug. Acting as a smart interface to such narrowly-focused repositories is something a tool like ChipNeMo excels at, because it can provide not just summaries but also concrete references and sources. Saving developer time in this way is a clear and easy win.
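To get a feel for the pattern (though certainly not for NVIDIA’s actual pipeline, which pairs a domain-adapted model with retrieval over internal data), here’s a toy sketch of answering with sources: rank a few documents against the query, then surface the best matches along with where they came from. The corpus and scoring below are stand-ins for illustration only.

```python
# Toy "retrieval with sources" sketch -- not ChipNeMo's implementation, just
# the general pattern: find the most relevant internal docs, then cite them.
from collections import Counter

# Hypothetical internal documents: filename -> text.
DOCS = {
    "mem_unit_y.md": "Signal x from memory unit y gates the write-enable path.",
    "bug_1234.txt": "Bug 1234: write-enable glitches when signal x toggles early.",
    "style_guide.md": "Use lowercase names for all internal signals.",
}

def score(query: str, text: str) -> int:
    """Naive relevance score: how many words the query and doc share."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2):
    """Return the top-k (source, snippet) pairs for a query."""
    ranked = sorted(DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

query = "what does signal x from memory unit y do?"
for source, snippet in retrieve(query):
    # A real system would hand these snippets to the LLM as context;
    # here we just print the answer material with its provenance.
    print(f"[{source}] {snippet}")
```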

It’s part internal tool, part research project, but it’s easy to see the benefits ChipNeMo can bring. Using LLMs trained on internal information for internal use is something organizations have experimented with before (Mozilla, for example, did so while explaining how to do it yourself), but it’s interesting to see such a clear roadmap to assisting developers in concrete ways.

Radioactive Water Was Once A (Horrifying) Health Fad

Take a little time to watch the history of Radithor, a presentation by [Adam Blumenberg] on a quack medicine that was exactly what it said on the label: distilled water containing around 2 micrograms of radium in each bottle (yes, that’s a lot). It’s fascinatingly well-researched, covering both the technology and the societal environment surrounding such a product, which went on to play a starring role in the passage of the Food, Drug, and Cosmetic Act of 1938. You can watch the whole presentation in the video, embedded below the break. Continue reading “Radioactive Water Was Once A (Horrifying) Health Fad”

OpenMV Promises “Flyby” Imaging Of Components For Pick And Place Project

[iforce2d] has an interesting video exploring whether the OpenMV H7 board is viable as a flyby camera for pick and place, able to quickly snap a shot of a moving part instead of requiring the part to be held still in front of the camera. The answer seems to be yes!

The OpenMV camera module does capture, blob detection, LCD output, and more.

The H7 is OpenMV’s most recent device, and it supports a variety of useful add-ons such as a global shutter camera sensor, which [iforce2d] is using here. OpenMV makes some absolutely fantastic hardware: in one neat package, the H7 can snap an image, do blob detection (and other image processing), display the result on a small LCD, send all the relevant data over the UART, and accept commands about what to look for.
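For a sense of how little code that takes, here’s a minimal OpenMV-style MicroPython sketch (our own illustration, not [iforce2d]’s code): grab a frame, find dark blobs, and report their positions over the UART. The grayscale threshold, blob size limits, and UART port are placeholder values you’d tune for a real setup.

```python
# Minimal OpenMV blob-report loop (illustrative values, not [iforce2d]'s code).
import sensor
from pyb import UART

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)          # let the sensor settle after config

uart = UART(3, 115200)                 # UART 3 is on P4/P5; adjust to suit

while True:
    img = sensor.snapshot()
    # Dark blobs on a bright background; threshold and size limits are guesses.
    for blob in img.find_blobs([(0, 60)], pixels_threshold=100, area_threshold=100):
        img.draw_rectangle(blob.rect())              # overlay for the LCD/IDE view
        uart.write("%d,%d,%d\n" % (blob.cx(), blob.cy(), blob.pixels()))
```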

It used to be that global shutter cameras were pretty specialized pieces of equipment, but they’re much more common now. There’s even a Raspberry Pi global shutter camera module, and it’s just so much nicer for machine vision applications.

Watch the test setup as [iforce2d] demonstrates and explains an early proof of concept. The metal fixture on the motor swings over the camera’s lens with a ring light for even illumination, and despite the moving object, the H7 gets an awfully nice image. Check it out in the video, embedded below.

Continue reading “OpenMV Promises “Flyby” Imaging Of Components For Pick And Place Project”

Synthesizing 360-degree Views From Single Source Images

ZeroNVS is one of those research projects that is rather more impressive than it may look at first glance. On one hand, the 3D reconstructions — we urge you to click that first link to see them — look a bit grainy and imperfect. On the other hand, each one was reconstructed from nothing more than a single still image.

Most results look great, but some — like this bike visible through a park bench — come out a bit strange. A valiant effort for a single-image input, all things considered.

How is this done? With NeRFs (neural radiance fields), a machine learning technique, but with yet another new twist. Existing methods mainly focus on single objects against masked backgrounds; the new approach handles a variety of complex, in-the-wild images without the need to train new models.

There are a ton of sample outputs on the project summary page that are worth a browse if you find this sort of thing at all interesting. Some of the 360-degree reconstructions look rough, some are impressive, and some are a bit amusing. For example, indoor shots tend to reconstruct rooms that look good, but lack doorways.

There is a research paper for those seeking additional details and a GitHub repository for the code, but the implementation requires some significant hardware.

It’s A Marble Clock, But Not As We Know It

[Ivan Miranda] is taking a very interesting approach to a marble clock. His design is a huge assembly that uses black and white marbles to create a (sort of) dot matrix display. It’s part kinetic art and part digital clock, all driven by marbles.

Here’s how it works: black and white marbles feed into a big elevator. This elevator lifts marbles to the top of the curved runs that make up the biggest part of the device. The horizontal area at the bottom is where the time is shown, with white and black marbles making up the numerical display. But how to make sure the white marbles and black marbles go in the right order?

The solution is simple. Marbles feed into the elevator in an unpredictable order, an array of sensors detects the color of each marble, and solenoids eject any marble that isn’t in the right place. For example, if the next marble for track n needs to be white, the machine kicks out any black marble that arrives in that position until a white one shows up. Simple, effective, and it guarantees plenty of mesmerizing moving parts.
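As an illustration of how little logic that takes (our sketch of the idea, not [Ivan]’s firmware), here’s a quick simulation of one track: pull marbles from a random stream and eject every wrong-colored one until the needed color arrives.

```python
# Simulate the "eject until the right color shows up" trick for one track.
import random

def marble_stream():
    """Marbles come back from the elevator in an unpredictable order."""
    while True:
        yield random.choice("BW")      # 'B' = black, 'W' = white

def fill_track(pattern: str, marbles) -> int:
    """Fill one track with the needed color pattern; return marbles ejected."""
    ejected = 0
    for needed in pattern:
        while next(marbles) != needed:
            ejected += 1               # solenoid kicks the wrong color back out
    return ejected

random.seed(42)                        # reproducible demo
row = "BWWBWBBW"                       # one row of the dot-matrix time display
print(f"filled {len(row)} positions, ejected {fill_track(row, marble_stream())}")
```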

Of course, this means that marble ejection and marble color sensing need to be utterly reliable, and [Ivan] ran into problems with both. Getting ejection right took some careful component testing and selection to find the right solenoids. For color sensing (as well as detecting empty positions), he settled on the IR reflectance sensors commonly used in line-following robots.

You can watch the clock in action in the video embedded below the break. We recommend giving it a look, because [Ivan] does a great job of showing all the little challenges that reared their heads, and how he dealt with them. There are still a few things to sort out, but he expects to have those licked by the next video. In the meantime, [Ivan] asks that anyone who knows a source for high-quality glass marbles in bulk please let him know. Low-quality ones vary in size and tend to get stuck.

Marble clocks are great expressions of creativity, especially now that 3D printing is common. We love clock hacks, so if you ever create or run across a good one, let us know about it!

Continue reading “It’s A Marble Clock, But Not As We Know It”

2023 Halloween Hackfest: Meet Creepsy, The Robotic People-Seeking Ghost

The 2023 Halloween Contest might be over, but we saw some great entries and clever modifications bringing projects into the Halloween spirit. One of them is Creepsy by [Hazal Mestci], a Raspberry Pi-based robotic ghost able to autonomously pick people out of a crowd and glide towards them, emitting eerie sounds as it does so.

The tech behind Creepsy (GitHub repository) originally led the somewhat less spooky existence of a mobile drink-serving platform. But with a little modification and the addition of a bedsheet with cutouts for sensors, the transformation into an obstacle-avoiding, people-seeking spooker was complete. Key to this transformation was the Viam Python SDK, a software Swiss army knife used by robot builders everywhere. Creepsy itself was built from handy aluminum extrusion and 3D-printed parts, along with the requisite suite of motors, cameras, and ultrasonic sensors.
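To give a flavor of what that looks like in practice, here’s a heavily simplified sketch of a Creepsy-style chase loop written against the Viam Python SDK. This is our own illustration, not [Hazal Mestci]’s code; the component and service names (“base”, “cam”, “detector”), the credentials, and the frame-width and speed values are all placeholders.

```python
# Creepsy-style person-chasing loop: a sketch against the Viam Python SDK.
# Component names, credentials, and tuning values below are placeholders.
import asyncio

from viam.robot.client import RobotClient
from viam.components.base import Base
from viam.services.vision import VisionClient

async def main():
    # Placeholder credentials -- copy real ones from your machine's Viam config.
    opts = RobotClient.Options.with_api_key(api_key="<KEY>", api_key_id="<KEY-ID>")
    robot = await RobotClient.at_address("<machine-address>", opts)
    try:
        base = Base.from_robot(robot, "base")                   # wheeled base
        detector = VisionClient.from_robot(robot, "detector")   # ML detector

        while True:
            detections = await detector.get_detections_from_camera("cam")
            # The "Person" label depends on the ML model deployed.
            people = [d for d in detections if d.class_name == "Person"]
            if people:
                # Chase the widest bounding box (roughly the nearest person).
                target = max(people, key=lambda d: d.x_max - d.x_min)
                center = (target.x_min + target.x_max) / 2
                if center < 280:                        # assumes a ~640 px frame
                    await base.spin(angle=15, velocity=45)     # degrees, deg/s
                elif center > 360:
                    await base.spin(angle=-15, velocity=45)
                else:
                    await base.move_straight(distance=200, velocity=150)  # mm, mm/s
            await asyncio.sleep(0.1)
    finally:
        await robot.close()

if __name__ == "__main__":
    asyncio.run(main())
```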

Thanks to everyone who participated in the 2023 Halloween Contest. Got an idea for next year? It’s never too early to get started: ideas are great, but nothing beats “done on time”!

Most AI Content Is Trash, Just Like Everything Else

[Max Woolf] has been working in the AI space since 2015, and among other work has created numerous useful open-source tools. He also recently wrote a thoughtful blog post that attempts to put into words his feelings on the state of things in the wake of experiencing a bit of an AI backlash-related burnout. Essentially, people effortlessly creating vast amounts of bad AI content has caused a bigger problem than we may realize.

How so? Well, Sturgeon’s law (summarized as “ninety percent of everything is crud”) applies to AI as much as it does to anything else. Theodore Sturgeon was a science fiction author and critic (and writer of multiple Star Trek episodes) who observed in the 1950s that while Science Fiction — the hot new popular thing at the time — was often derided by critics as little more than low-quality pap, the same was true of everything else. Most Science Fiction was indeed garbage, but most work in every other field was of similarly low quality, so Science Fiction was really no different. It’s all trash, except for the parts one likes. Just like anything else.

What makes this observation particularly applicable to the current AI landscape is that, according to [Max], the incredible ease of use makes AI’s “ninety percent crud” very large indeed, and the attached backlash is similarly big. The remaining ten percent of AI that is absolutely fantastic and full of possibilities? It’s practically invisible due to how quickly the industry is moving, the speed with which the big players are vying to control it, and how unfashionable it has become to admit one is using AI tools at all.

[Max] knows the scene better than most. One of his projects is simpleaichat, a tool aimed not just at making it easier to integrate AI into projects, but also at piercing the hype around AI to reveal just how these tools actually work. Sadly, the general AI backlash has made developing these tools feel rather less rewarding than it once did.
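The project’s whole pitch is visible in how little ceremony a call takes. Here’s a minimal sketch (our example, not lifted from [Max]’s docs), assuming an OpenAI API key is set in the OPENAI_API_KEY environment variable:

```python
# simpleaichat in a nutshell: one object, one system prompt, one call.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable.
from simpleaichat import AIChat

ai = AIChat(console=False, system="You are a terse engineering assistant.")
print(ai("In one sentence, what does a NeRF do?"))
```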