Multi-View Wire Art Meets Generative AI

DreamWire is a system that uses machine learning to generate the intricate patterns multi-view wire art requires.

The three-dimensional wire pattern in the center creates images of Einstein, Turing, and Newton depending on viewing angle.

What’s wire art? It’s a three-dimensional twisted mass of lines which, when viewed from a certain perspective, yields an image. Multi-view wire art produces different images from the same mass depending on the viewing angle, and as one can imagine, such things get very complex, very quickly.

A recently-released paper explains how the system works and why generative AI is uniquely suited to creating meaningful intersections between multiple inputs. There’s also a video (embedded just under the page break) that showcases many of the results the researchers obtained.

The GitHub repository for the project doesn’t have much in it yet, but it’s a good place to keep an eye on if you’re interested in what comes next.

We’ve seen generative AI applied in a similarly novel way to help create visual anagrams: 2D patterns that can be interpreted differently depending on their orientation or permutation. These sorts of systems still need to be guided by a human, but having machine learning do the heavy lifting allows just about anybody to explore their creativity.

Continue reading “Multi-View Wire Art Meets Generative AI”

Can Google’s New AI Read Your Datasheets For You?

We’ve seen a lot of AI tools lately, and while we know they aren’t really smart, they sure fool people into thinking they are actually intelligent. These programs can only pick through their training data, so a lot depends on what they are trained on. When you use something like ChatGPT, for example, you assume it was trained on reasonable data. Sure, it might get things wrong anyway, but there’s also the danger that it simply doesn’t know what you are talking about. It would be like calling your company’s help desk and asking where you left your socks — they simply don’t know.

We’ve seen attempts to have AI “read” web pages or documents of your choice and then be able to answer questions about them. The latest is from Google with NotebookLM. It integrates a workspace where you can make notes, ask questions, and provide sources. The sources can be text snippets, documents from Google Drive, or PDF files you upload.

You can’t ask questions until you upload something, and we presume the AI restricts its answers to what’s in the documents you provide. It still won’t be perfect, but at least it won’t just give you bad information from an unknown source. Continue reading “Can Google’s New AI Read Your Datasheets For You?”

Making Visual Anagrams, With Help From Machine Learning

[Daniel Geng] and others have an interesting system of generating multi-view optical illusions, or visual anagrams. Such images have more than one “correct” view and visual interpretation.

What’s more, there are quite a few different methods on display: flips, 90 degree rotations and other orthogonal transformations, color inversions, jigsaw permutations, and more. The project page has a generous number of examples, so go check them out!

The team’s method uses pre-trained diffusion models — more commonly known as the secret sauce inside image-generating AIs — to estimate how a single image should be denoised for each view and its prompt, then reconciles those estimates so the finished image reads correctly in every view at once. While conceptually straightforward, this process wasn’t really something that could work without diffusion models driven by modern machine learning techniques.
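Roughly, the trick is to run one reverse-diffusion process while estimating noise separately in each transformed view, then averaging the estimates. Here’s a minimal Python sketch of that structure only; `predict_noise`, the prompts, and the crude update step are placeholders we made up for illustration, not the paper’s actual implementation (the repository below has the real thing).

```python
# Sketch of the core idea behind multi-view "visual anagram" sampling.
# The real method uses a pre-trained pixel-space diffusion model; here
# `predict_noise` is a stand-in so the structure is runnable.
import numpy as np

def predict_noise(image, prompt):
    """Placeholder for a diffusion model's noise estimate for `image`
    conditioned on `prompt`. A real pre-trained model goes here."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(image.shape) * 0.01

# Each "view" pairs an invertible image transform with its own text prompt.
views = [
    (lambda x: x,              lambda x: x,               "an oil painting of a rabbit"),
    (lambda x: np.rot90(x, 2), lambda x: np.rot90(x, -2), "an oil painting of a teapot"),
]

image = np.random.standard_normal((64, 64, 3))  # start from pure noise

for step in range(50):                           # simplified reverse-diffusion loop
    estimates = []
    for transform, inverse, prompt in views:
        eps = predict_noise(transform(image), prompt)  # denoise in that view
        estimates.append(inverse(eps))                 # map back to the canonical frame
    eps_avg = np.mean(estimates, axis=0)         # reconcile all views at once
    image = image - 0.1 * eps_avg                # crude update standing in for a proper DDPM step
```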

The visual_anagrams GitHub repository has the code, and the research paper goes into detail on implementation and limitations and gives guidance on obtaining good results. Image generation is just one rapidly-evolving corner of machine learning, and it’s always interesting to see unusual applications like this one.

Mozilla Lets Folks Turn AI LLMs Into Single-File Executables

LLMs (Large Language Models) for local use are usually distributed as a set of weights in a multi-gigabyte file. These cannot be directly used on their own, which generally makes them harder to distribute and run compared to other software. A given model can also have undergone changes and tweaks, leading to different results if different versions are used.

To help with that, Mozilla’s innovation group has released llamafile, an open source method of turning a set of weights into a single binary that runs on six different OSes (macOS, Windows, Linux, FreeBSD, OpenBSD, and NetBSD) without needing to be installed. This makes it dramatically easier to distribute and run LLMs, as well as ensuring that a particular version of an LLM remains consistent and reproducible, forever.

This wouldn’t be possible without the work of [Justine Tunney], creator of Cosmopolitan, a build-once-run-anywhere framework. The other main part is llama.cpp, and we’ve covered why it is such a big deal when it comes to running self-hosted LLMs.

There are some sample binaries available using the Mistral-7B, WizardCoder-Python-13B, and LLaVA 1.5 LLMs. Just keep in mind that if you’re on a Windows platform, only the LLaVA 1.5 one will run, because it’s the only one that squeaks in under Windows’ 4 GB limit on executable files. If you run into issues, check out the gotchas list for troubleshooting tips.
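If you grab one of those samples, using it basically boils down to marking the file executable and running it. Here’s a rough Python sketch of that workflow; the filename is just an example, and the llama.cpp-style flags can vary between releases (some ship separate command-line and server variants), so check the README that comes with whichever llamafile you download.

```python
# Rough sketch of driving a downloaded llamafile from Python. The flags come
# from llama.cpp, which llamafile bundles; double-check them against the
# README for the release you grab, as they may differ by version.
import os
import stat
import subprocess

LLAMAFILE = "./mistral-7b-instruct.llamafile"  # example filename, not an exact release name

# On Linux/macOS the single file just needs the executable bit set.
os.chmod(LLAMAFILE, os.stat(LLAMAFILE).st_mode | stat.S_IEXEC)

# One-shot completion using llama.cpp-style CLI flags (-p prompt, -n tokens).
result = subprocess.run(
    [LLAMAFILE, "-p", "Q: What does a 555 timer do?\nA:", "-n", "128", "--temp", "0.7"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```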

How Do You Prove An AI Didn’t Make Your Art?

In the world of digital art, distinguishing between AI-generated and human-made creations has become a significant challenge. Almost overnight, tool sets for generating AI artworks became widely available to the public, and suddenly every digital art competition had to contend with potential AI-generated submissions. Some have welcomed AI, while others demand competitors create artworks by their own hand and no other.

The problem facing artists and judges alike is just how to determine whether an artwork was created by a human or an AI. So what can be done?

Continue reading “How Do You Prove An AI Didn’t Make Your Art?”

There’s No AI In A Markov Chain, But They’re Fun To Play With

Amid all the hype about AI, it sometimes seems as though the world has lost sight of the fact that software such as ChatGPT contains no intelligence. Instead, it’s an extremely sophisticated system for extracting plausible machine-generated content from the corpus on which it is trained. There’s a long history behind machine-generated text, and perhaps the simplest example comes in the form of a Markov chain. [Ben Hoyt] takes us through how these work, and provides some Python code so that you can roll your own.

If you’re uncertain what a Markov chain is, consider the predictive text on your phone. It works by offering the statistically most likely next word in your sentence, and should you accept all of its choices it will deliver sentences which are superficially readable but otherwise complete nonsense. He demonstrates with very simple short source texts how a collocate probability map is generated for two-word phrases, and how from that a likely next word can be extracted. It’s not AI, but it can be a lot of fun to play with and it opens the door to the entire field of computational linguistics. We haven’t set one loose on Hackaday’s archive yet but we suspect it would talk a lot about the Arduino.
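To make the idea concrete, here’s a minimal sketch of a two-word-prefix Markov chain in Python. This is our own illustration of the technique rather than [Ben Hoyt]’s code; the toy corpus is made up purely for demonstration.

```python
# Minimal Markov-chain text generator: map each two-word prefix to the words
# that follow it in the source text, then walk those choices at random.
import random
from collections import defaultdict

def build_chain(text, prefix_len=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - prefix_len):
        prefix = tuple(words[i:i + prefix_len])
        chain[prefix].append(words[i + prefix_len])
    return chain

def generate(chain, prefix_len=2, length=30):
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-prefix_len:]))
        if not followers:            # dead end: this prefix never continues in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the arduino reads the sensor and the arduino blinks the led "
          "because the sensor says the led should blink")
print(generate(build_chain(corpus)))
```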

We’re talking about Markov chains here with respect to language, but it’s also worth remembering that they work for music too.

Header: Bad AI image with Dall-E prompt, “Ten thousand monkeys with typewriters”.

NVIDIA Trains Custom AI To Assist Chip Designers

AI is big news lately, but as with all new technology moves, it’s important to pierce through the hype. Recent news about NVIDIA creating a custom large language model (LLM) called ChipNeMo to assist in chip design is tailor-made for breathless hyperbole, so it’s refreshing to read exactly how such a thing is genuinely useful.

ChipNeMo is trained on the highly specific domain of semiconductor design via internal code repositories, documentation, and more. The result is a vast 43-billion parameter LLM running on a single A100 GPU that actually plays no direct role in designing chips, but focuses instead on making designers’ jobs easier.

For example, it turns out that senior designers spend a lot of time answering questions from junior designers. If a junior designer can ask ChipNeMo a question like “what does signal x from memory unit y do?” and that saves a senior designer’s time, then NVIDIA says the tool is already worth it. Another big time sink for designers is dealing with bugs. Bugs are extensively documented in a variety of ways, and designers spend a lot of time reading documentation just to grasp the basics of a particular bug. Acting as a smart interface to such narrowly-focused repositories is something a tool like ChipNeMo excels at, because it can provide not just summaries but also concrete references and sources. Saving developer time in this way is a clear and easy win.
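That “smart interface to internal documentation” pattern is essentially retrieval-augmented generation: fetch the most relevant internal documents, hand them to the model as context, and cite them alongside the answer. Here’s a generic Python sketch of that pattern, not NVIDIA’s code; `embed` and `ask_llm` are placeholders standing in for a real embedding model and a domain-adapted LLM, and the documents are invented examples.

```python
# Generic retrieval-augmented question answering sketch (not ChipNeMo itself).
import numpy as np

def embed(text):
    """Placeholder embedding: a real system uses a trained embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

def ask_llm(prompt):
    """Placeholder for the domain-adapted LLM call."""
    return f"(model answer based on a prompt of {len(prompt)} characters)"

# Internal docs: bug reports, design notes, signal descriptions, etc.
documents = [
    "Bug 1234: signal mem_rd_en deasserts one cycle early when the FIFO is full.",
    "Design note: memory unit y exposes signal x as a ready/valid handshake.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question, top_k=1):
    q = embed(question)
    # Rank documents by cosine similarity and keep the best matches as context.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    # The retrieved snippets double as citable sources for the answer.
    return ask_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"), context

reply, sources = answer("What does signal x from memory unit y do?")
print(reply)
print("Sources:\n" + sources)
```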

ChipNeMo is part internal tool and part research project, but it’s easy to see the benefits it can bring. Using LLMs trained on internal information for internal use is something organizations have experimented with before (Mozilla, for example, did so while explaining how to do it yourself), but it’s interesting to see such a clear roadmap to assisting developers in concrete ways.