A Low-Cost ROM Programmer With An AI Twist

There are 0x10 ways to look at ROM programmers: they’re either relatively low-cost tools that let you quickly get about the business of programming vintage ROMs and get back to your retrocomputing activities, or they’re egregiously overpriced on a per-use basis. [Anders Nielsen] seems to land in the latter camp, firmly enough that he not only designed a dedicated ROM programmer for his 65uino ecosystem, but also suffered the indignities of enlisting ChatGPT to “help” him program the thing.

We’ll explain. [Anders]’ 65uino project has been going on for a while, with low-cost ROM programming only the latest effort. To his way of thinking, a $60 or $70 programmer might just be a significant barrier to those trying to break into retrocomputing, and besides, he seems to be more about the journey than the destination. He recently tackled the problem of generating the right programming voltages; here he turns his attention to putting that to work programming vintage ROMs like the W27C512.

Doing so with a 6502-based Arduino-compatible microcontroller requires some silicon calisthenics, including a trio of shift registers to do the addressing using a minimum of GPIO. As for the ChatGPT part, [Anders] thought asking the chatbot to help write some of the code would be a great way to increase his productivity. We thought so too, at least once, and like us, [Anders] concluded that while perhaps helpful in a broad sense, the amount of work you put into checking a chatbot’s work probably exceeds the work saved. But no matter, because in the end the code and the hardware came together to create a prototype ROM programmer for only about $10 worth of parts.
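The shift-register trick is worth spelling out: by clocking an address into the registers serially, a handful of GPIO lines can drive a whole bank of address pins. Below is a minimal Python sketch of the idea, not [Anders]’ actual 6502 code; set_pin() and the pin numbers are hypothetical placeholders for whatever GPIO access your platform provides.

```python
# Illustration only: a 16-bit ROM address is clocked serially into chained
# shift registers using just three GPIO lines (data, clock, latch).
# set_pin() is a hypothetical helper standing in for real GPIO output.

DATA, CLOCK, LATCH = 0, 1, 2  # hypothetical GPIO pin numbers

def set_pin(pin: int, level: int) -> None:
    """Placeholder: replace with your platform's GPIO write call."""
    pass

def shift_out_address(address: int, bits: int = 16) -> None:
    set_pin(LATCH, 0)
    for i in reversed(range(bits)):        # MSB first
        set_pin(DATA, (address >> i) & 1)  # present one address bit
        set_pin(CLOCK, 1)                  # clock it into the register chain
        set_pin(CLOCK, 0)
    set_pin(LATCH, 1)                      # latch all bits onto the outputs at once

shift_out_address(0x1A2B)  # drive 16 address lines from just 3 GPIOs
```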

True, the resulting circuit is a bit complex, at least on a breadboard. It should clean up nicely for an eventual PCB version, though, one that plugs right into the 65uino board or even other microcontrollers. Either way, it could make creating custom ROMs for the 65uino a little more accessible.

Continue reading “A Low-Cost ROM Programmer With An AI Twist”

Generative AI Now Encroaching On Music

While it might not seem like it to a novice, music turns out to be a highly mathematical endeavor, with precise ratios between chords and notes as well as an overall structure of rhythm and timing. This is especially true of popular music, which has even more recognizable repeating patterns and trends, unfortunately making it an easy target for modern generative AI, which can analyze huge amounts of data and churn out arguably unique creations. This one, called Suno, does just that, for better or worse.

Unlike other generative AI offerings currently available for creating music, this one can generate not only the musical underpinnings of a song but also a layer of intelligible vocals on top. A deeper investigation of the technology by Rolling Stone found that the tool uses its own models to come up with the music, offloads the lyric writing to ChatGPT, and then sings the generated lyrics with fairly convincing synthesized vocals. Like the image and text generation models that have come out in the last few years, this has the potential to be significantly disruptive.

While we’re not particularly excited about living in a world where humans toil while the machines create art and not the other way around, at best we could hope for a world where real musicians use these models as tools to enhance their creativity rather than being outright substitutes, much like ChatGPT itself currently is for programmers. That might be an overly optimistic view, though, and only time will tell.

Learn AI Via Spreadsheet

While we’ve been known to use and abuse spreadsheets in the past, we haven’t taken it to the level of [Spreadsheets Are All You Need]. The site provides a spreadsheet implementation of an “AI” system much like GPT-2. Sure, that’s old tech, but the fundamentals are the same as in the current crop of AI programs. There are several “lesson” videos that explain it all, with the promise of more to come. You can also, of course, grab the actual spreadsheet.

The spreadsheet is big, and there are certain compromises. For one thing, you have to enter tokens separately. There are 768 numbers representing each token in the input. That’s a lot for a spreadsheet, but a modern GPT uses many more.
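Those 768 numbers are GPT-2’s embedding width: each input token becomes one 768-dimensional row of the model’s embedding matrix, which the spreadsheet spells out one cell per number. Here is a tiny NumPy sketch of that lookup, with random weights and hypothetical token IDs standing in for the real trained values.

```python
import numpy as np

# GPT-2 (small) represents each token as a 768-number vector: one row of its
# embedding matrix. Random weights here stand in for the trained values.
VOCAB_SIZE, D_MODEL = 50257, 768
embedding_matrix = np.random.randn(VOCAB_SIZE, D_MODEL).astype(np.float32)

token_ids = [15496, 11, 995]      # hypothetical token IDs for a short input
embeddings = embedding_matrix[token_ids]
print(embeddings.shape)           # (3, 768): 768 numbers per input token
```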

Continue reading “Learn AI Via Spreadsheet”

Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts

Recently Air Canada was in the news regarding the outcome of Moffatt v. Air Canada, in which the airline was forced to pay restitution to Mr. Moffatt after he had been misled by advice given by a chatbot on the Air Canada website regarding its bereavement fare policy. When Mr. Moffatt inquired whether he could apply for the bereavement fare after returning from the flight, the chatbot told him he could, even though the link it provided to the official bereavement policy page said otherwise.

This last detail is by far the most interesting aspect of the case, as it raises many questions about the technical implementation of the chatbot Air Canada had deployed on its website. Since the basic idea behind such a chatbot is that it draws on a curated source of (company) documentation and policies, the assumption many have made is that this particular chatbot instead used an LLM filled with more generic information, possibly sourced from many other public-facing policy pages.

Whatever the case may be, chatbots are increasingly used by companies, but instead of pure LLMs they use what is called RAG: retrieval-augmented generation. Rather than letting the language model answer from whatever it happened to memorize during training, RAG first fetches the relevant factual information from a vetted source of documentation and feeds it to the model as context, so the generated answer stays grounded in the actual policy text.
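To make the pattern concrete, here is a minimal sketch of the RAG loop in Python. It is not Air Canada’s system: the policy snippets are invented, retrieve() uses a toy word-overlap score where a real deployment would use embeddings and a vector database, and call_llm() is a hypothetical stand-in for a real LLM API.

```python
# A minimal sketch of the RAG pattern; everything here is illustrative.

POLICY_DOCS = [
    "Bereavement fares must be requested before travel; refunds cannot "
    "be claimed retroactively after the flight has been taken.",
    "Baggage allowance: one carry-on bag and one personal item per passenger.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    # Toy relevance score: shared words between the question and a document.
    words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    # Stand-in: a real implementation would send the prompt to an LLM.
    return "(LLM answer constrained to the retrieved policy text goes here)"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, POLICY_DOCS))
    prompt = (
        "Answer ONLY using the policy excerpts below. If they do not cover "
        f"the question, say so.\n\nPolicy:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("Can I claim the bereavement fare after my flight?"))
```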

Continue reading “Air Canada’s Chatbot: Why RAG Is Better Than An LLM For Facts”


One-Click Install Local AI Applications Using Pinokio

Pinokio is billed as an autonomous virtual computer, which could mean anything really, but don’t click away just yet, because this is one heck of a project. AI enthusiast [cocktail peanut] (and other undisclosed contributors) has created a browser-style application which embeds a virtual Unix-like environment, regardless of the host architecture. A discover page loads registered applications from GitHub, each installable with one click; an install recipe is ‘simply’ a JSON file describing the dependencies and execution flow. The idea is that rather than manually running commands and satisfying dependencies yourself, it’s all wrapped up for you, so a single click downloads and installs everything needed to run the application.
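For a sense of what such a declarative install script looks like, here is a purely illustrative Python rendering of the concept. This is not Pinokio’s actual JSON schema; the keys, steps, and repository URL are invented for the example.

```python
# Illustrative only: the *kind* of declarative recipe the article describes,
# not Pinokio's real schema. A launcher would walk this description and run
# each step for you instead of you typing the commands by hand.
install_script = {
    "name": "example-ai-app",
    "dependencies": ["git", "python3"],
    "steps": [
        {"run": "git clone https://example.com/some-ai-app.git"},
        {"run": "pip install -r some-ai-app/requirements.txt"},
        {"run": "python some-ai-app/app.py"},
    ],
}

for step in install_script["steps"]:
    print("would run:", step["run"])
```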

But what applications, we hear you ask? AI ones. Lots of them. The main driver seems to be using the Pinokio hosting environment to enable easy deployment of AI applications directly onto your machine. One click installs the app, another downloads the models and whatever else is needed from the likes of HuggingFace and friends. A final click launches the app, and a browser window opens, giving you a web UI to control the locally running AI backend.

Continue reading “One-Click Install Local AI Applications Using Pinokio”

A Straightforward AI Voice Assistant, On A Pi

With AI being all the rage at the moment, it’s been somewhat annoying that using a large language model (LLM) without significant amounts of computing power has meant surrendering to an online service run by a large company. But as happens with every technological innovation, the state of the art has moved on, now to such an extent that a computer as small as a Raspberry Pi can join the fun. [Nick Bild] has one running on a Pi 4, and he’s gone further than just a chatbot by making it into a voice assistant.

The brains of the operation is a TinyLlama LLM, packaged as a llamafile, which is to say an executable that offers about as easy one-step access to a local LLM as it’s currently possible to get. The Whisper voice recognition system provides a text transcript of the input prompt, while the eSpeak speech synthesizer turns the result into voice output. There’s a brief demo video we’ve placed below the break, which shows it working, albeit slowly.
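The whole thing boils down to a transcribe-generate-speak loop. Below is a rough Python sketch of that loop, not [Nick Bild]’s actual code (his full instructions are in the GitHub repository); the file names, model choice, and llamafile flags are assumptions and may differ between versions.

```python
import subprocess

AUDIO_IN = "prompt.wav"              # hypothetical recording of the spoken prompt
LLAMAFILE = "./tinyllama.llamafile"  # hypothetical llamafile name

# 1. Speech to text with Whisper (writes prompt.txt next to the audio file).
subprocess.run(["whisper", AUDIO_IN, "--model", "tiny",
                "--output_format", "txt", "--output_dir", "."], check=True)
prompt = open("prompt.txt").read().strip()

# 2. Text generation with the TinyLlama llamafile (llama.cpp-style flags,
#    which may vary by llamafile version).
result = subprocess.run([LLAMAFILE, "-p", prompt, "-n", "128"],
                        capture_output=True, text=True, check=True)
reply = result.stdout.strip()

# 3. Text to speech with eSpeak.
subprocess.run(["espeak", reply], check=True)
```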

Perhaps the most important part of this project is that it’s easy to install and he’s provided full instructions in a GitHub repository. We know that the quality and speed of these models on commodity single board computers will only increase with time, so we’d rate this as an important step towards really good and cheap local LLMs. It may however be a while before it can help you make breakfast.

Continue reading “A Straightforward AI Voice Assistant, On A Pi”

Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. Built with next-gen ethical principles and guidelines, it is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also acknowledges that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models have become more advanced, so too have they become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]