Wolverine Gives Your Python Scripts The Ability To Self-Heal

[BioBootloader] combined Python and a hefty dose of AI for a fascinating proof of concept: self-healing Python scripts. He shows things working in a video, embedded below the break, but we’ll also describe what happens right here.

The demo Python script is a simple calculator that works from the command line, and [BioBootloader] introduces a few bugs to it. He misspells a variable used as a return value, and deletes the subtract_numbers(a, b) function entirely. Running this script by itself simply crashes, but using Wolverine on it has a very different outcome.
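To picture what Wolverine is up against, here is a hypothetical reconstruction of those bugs (our sketch, not [BioBootloader]’s actual demo file):

    # Hypothetical reconstruction of the demo's bugs, not the actual script.
    def add_numbers(a, b):
        result = a + b
        return reslt  # misspelled variable name: raises NameError at runtime

    # subtract_numbers(a, b) has been deleted entirely, even though the
    # command-line handler still tries to call it, so that crashes too.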

In a short time, error messages are analyzed, changes proposed, those same changes applied, and the script re-run.

Wolverine is a wrapper that runs the buggy script, captures any error messages, then sends those errors to GPT-4 to ask it what it thinks went wrong with the code. In the demo, GPT-4 correctly identifies the two bugs (even though only one of them directly led to the crash) but that’s not all! Wolverine actually applies the proposed changes to the buggy script, and re-runs it. This time around there is still an error… because GPT-4’s previous changes included an out-of-scope return statement. No problem, because Wolverine once again consults with GPT-4, creates and formats a change, applies it, and re-runs the modified script. This time the script runs successfully and Wolverine’s work is done.
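The control flow is simple enough to sketch. Here is a minimal, hypothetical take on that run/diagnose/patch loop; this is not Wolverine’s actual code, and ask_gpt4_for_fix() and apply_fix() are stand-ins for the GPT-4 call and the patching step:

    # Minimal sketch of a Wolverine-style loop; the two helpers below are
    # hypothetical stand-ins, not Wolverine's real functions.
    import subprocess
    import sys

    def ask_gpt4_for_fix(source, stderr):
        """Stand-in for the GPT-4 call; Wolverine sends the source and the
        traceback along with a carefully written prompt."""
        raise NotImplementedError("hypothetical: wire up your LLM call here")

    def apply_fix(path, patched_source):
        """Stand-in for the patch step: overwrite the script with the fix."""
        with open(path, "w") as f:
            f.write(patched_source)

    def heal(path, args, max_attempts=5):
        """Run the script; on a crash, ask for a fix, apply it, and retry."""
        for _ in range(max_attempts):
            result = subprocess.run([sys.executable, path, *args],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return True  # script ran cleanly, we're done
            with open(path) as f:
                source = f.read()
            apply_fix(path, ask_gpt4_for_fix(source, result.stderr))
        return False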

LLMs (Large Language Models) like GPT-4 are “programmed” in natural language, and these instructions are referred to as prompts. A large chunk of what Wolverine does is thanks to a carefully written prompt, and you can read it here to gain some insight into the process. Don’t forget to watch the video demonstration just below if you want to see it all in action.

While AI coding capabilities definitely have their limitations, some of the questions they raise are becoming more urgent. Heck, consider that GPT-4 is barely even four weeks old at this writing.

Continue reading “Wolverine Gives Your Python Scripts The Ability To Self-Heal”

iAPX 432 board: [brouhaha_], CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0), via Wikimedia Commons

Intel’s iAPX 432: Gordon Moore’s Gamble And Intel’s Failed 32-bit CISC

Intel C43201-5 Release 1 chip: Instruction Decoder and Microinstruction Sequencer of the iAPX 432 General Data Processor (GDP). The chip is in a 64-contact leadless ceramic QUad Inline Package (QUIP), partially obscured by the metal retention clip of the 3M socket.

In a recent article on The Chip Letter, [Babbage] looks at the Intel iAPX 432 computer architecture. This ambitious, hyper-CISC design was Intel’s first 32-bit architecture. Being stack-based, it exposed no registers to the software developer, while providing high-level support for object-oriented programming, multitasking, and garbage collection in hardware.

At the time the iAPX 432 (originally the 8800) project was proposed, Gordon Moore was CEO of Intel, and thus ultimately signed off on it. Intended as an indirect successor to the successful 8080 (which was followed up by the equally successful 8086), this new architecture was a ‘micro-mainframe’ that would target high-end users running Ada and similar modern languages of the early 1980s.

Unfortunately, upon its release in 1981, the iAPX 432 turned out to be excruciatingly slow and poorly optimized, as was the provided Ada compiler. The immense complexity of the new architecture meant that the processor had to be split across two ASICs, with the instruction decoding alone being hugely complex, as [Babbage] describes in the article. The features that made the architecture so flexible also required a lot of transistors to implement, making for an exceedingly bloated design, not unlike the Intel Itanium (IA-64) disaster a few decades later.

Although the iAPX 432 was a bridge too far by most metrics, the project meant that Intel performed a lot of R&D on advanced features that would later be used in its i960 and x86 processors. Intel was hardly a struggling company when the architecture was retired in 1985, so despite being a commercial failure, the iAPX 432 still provides an interesting glimpse into an alternate reality where it, rather than x86, took the computer world by storm.

Tired Of Web Scraping? Make The AI Do It

[James Turk] has a novel approach to the problem of scraping web content in a structured way without needing to write the kind of page-specific code web scrapers usually have to deal with. How? Just enlist the help of a natural language AI. Scrapeghost relies on OpenAI’s GPT API to parse a web page’s content, pull out and classify any salient bits, and format it in a useful way.

What makes Scrapeghost different is how the data gets organized: when instantiating a scraper, one defines the data one wishes to extract. For example:

from scrapeghost import SchemaScraper

scrape_legislators = SchemaScraper(
    schema={
        "name": "string",
        "url": "url",
        "district": "string",
        "party": "string",
        "photo_url": "url",
        "offices": [{"name": "string", "address": "string", "phone": "string"}],
    }
)

The kicker is that this format is entirely up to you! The GPT models are very, very good at processing natural language, and scrapeghost uses GPT to process the scraped data and find (using the example above) whatever looks like a name, district, party, photo, and office address and format it exactly as requested.
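Once the schema is defined, the scraper object is simply called with a URL. A usage sketch along the lines of the project’s tutorial (the URL here is illustrative, and an OpenAI API key must be set in the environment):

    # Usage sketch: the URL is illustrative, and OPENAI_API_KEY must be set
    # in the environment for the GPT call happening behind the scenes.
    resp = scrape_legislators("https://www.ilga.gov/house/rep.asp?MemberID=3071")
    print(resp.data)  # a dict shaped like the schema defined above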

It’s an experimental tool and you’ll need an API key from OpenAI to use it, but it has useful features and is certainly a novel approach. There’s a tutorial and even a command-line interface, so check it out.

Blinks Are Useful In VR, But Triggering Blinks Is Tricky

In VR, a blink can be a window of opportunity to improve the user’s experience. We’ll explain how in a moment, but blinks are tough to capitalize on because they are unpredictable and don’t last very long. That’s why researchers spent time figuring out how to induce eye blinks on demand in VR (video); the details are available in a full PDF report. It turns out there are some novel, VR-based ways to reliably induce blinks, and if an application can induce them, it becomes much easier to use them to fudge details in helpful ways.

It turns out that humans experience a form of change blindness during blinks, and this can be used to sneak small changes into a scene in useful ways. Two examples are hand redirection (HR), and redirected walking (RDW). Both are ways to subtly break the implicit one-to-one mapping of physical and virtual motions. Redirected walking can nudge a user to stay inside a physical boundary without realizing it, leading the user to feel the area is larger than it actually is. Hand redirection can be used to improve haptics and ergonomics. For example, VR experiences that use physical controls (like a steering wheel in a driving simulator, or maybe a starship simulator project like this one) rely on physical and virtual controls overlapping each other perfectly. Hand redirection can improve the process by covering up mismatches in a way that is imperceptible to the user.
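As a rough illustration of the idea (our sketch, not code from the paper), a blink-gated redirection step could look something like this, with the per-blink rotation budget being an assumed value:

    # Illustrative sketch, not from the paper: rotate the virtual scene
    # toward a target heading only while a blink hides the change.
    MAX_DEG_PER_BLINK = 2.0  # assumption: a rotation this small goes unnoticed

    def redirect_during_blink(eyes_closed, scene_yaw_deg, target_yaw_deg):
        """Nudge the scene toward the desired heading, but only while the
        user's eyes are closed; otherwise leave it untouched."""
        if not eyes_closed:
            return scene_yaw_deg
        error = target_yaw_deg - scene_yaw_deg
        step = max(-MAX_DEG_PER_BLINK, min(MAX_DEG_PER_BLINK, error))
        return scene_yaw_deg + step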

There are several known ways to induce a blink reflex, but it turns out that one novel method is particularly suited to VR: triggering the menace reflex by simulating a fast-approaching object. In VR, a small shadow appears in the field of view and rapidly seems to approach one’s eyes. This very brief event is hardly noticeable, yet reliably triggers a blink. There are other approaches as well, such as flashes, sudden noises, or simulating the gradual blurring of vision, but to be useful a method must be unobtrusive and reliable.

We’ve already seen saccadic movement of the eyes used to implement redirected walking, but it turns out that leveraging eye blinks allows for even larger adjustments and changes to go unnoticed by the user. Who knew blinks could be so useful to exploit?

Continue reading “Blinks Are Useful In VR, But Triggering Blinks Is Tricky”

Debouncing For Fun And… Mostly, Just For Fun

In our minds and our computer screens, we live in an ideal world. Wires don’t have any resistance, capacitors don’t leak, and switches instantly make connections and break them. The truth is, though, in the real world, none of those things are true. If you have a switch connected to a lightbulb, the little glitches when you switch are going to be hard to notice. Hook that same switch up to a processor that is sampling it constantly, and you will have problems. This is the classic bane of designing microcontroller circuits and is called switch bounce. [Dr. Volt] covers seven different ways of dealing with it in a video that you can see below.

While you tend to think of the problem when you are dealing with pushbuttons or other kinds of switches for humans, the truth is the same thing occurs anywhere you have a switch contact, like in a sensor, a mechanical rotary encoder, or even relay contacts. You can deal with the problem in hardware, software, or both.
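On the software side, one common approach is a stability counter: a new state is accepted only after several consecutive identical samples. A minimal, framework-agnostic sketch (the sample count is an assumption you would tune to your switch):

    # Minimal software debounce sketch: a new switch state is accepted only
    # after it has been stable for STABLE_SAMPLES consecutive readings.
    STABLE_SAMPLES = 5  # assumption: ~5 ms of stability at a 1 ms sample rate

    class Debouncer:
        def __init__(self, initial=0):
            self.state = initial      # last accepted (debounced) state
            self.candidate = initial  # most recent raw reading
            self.count = 0            # consecutive samples of `candidate`

        def sample(self, raw):
            """Feed one raw reading; return the debounced state."""
            if raw == self.candidate:
                self.count += 1
                if self.count >= STABLE_SAMPLES:
                    self.state = self.candidate
            else:
                self.candidate = raw
                self.count = 1
            return self.state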

Continue reading “Debouncing For Fun And… Mostly, Just For Fun”

Homemade Scope Does Supercapacitor Experiments

We’ve always been a little sad that supercapacitors aren’t marked with a big red S on a yellow background. Nevertheless, [DiodeGoneWild] picked up some large-value supercapacitors and used his interesting homemade oscilloscope to examine how they worked. You can watch what he is up to in his workshop in the video below.

Supercapacitors use special techniques to achieve very high capacitance values. For example, the first unit in the video is a 500 F capacitor. That’s not a typo — not microfarads or even millifarads — a full 500 farads. With a reasonable charging resistance, it can take a long time to charge 500 F, so the behavior is easier to see, especially with the homemade scope, which probably won’t pick up very fast signals.

For example, a 350 mA charging current takes about an hour to bring the capacitor up to 2.6 V, just under its maximum rating of 2.7 V. Supercapacitors usually have low voltage tolerance. Their high capacity makes them ideal for low-current backup applications where you might not want a rechargeable battery because of weight, heat, or problems with long-term capacity loss.
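That hour figure checks out against the constant-current charging relation t = C·ΔV / I:

    # Sanity check: I = C * dV/dt for a capacitor, so t = C * dV / I
    C = 500       # farads
    dV = 2.6      # volts, from empty to just under the 2.7 V rating
    I = 0.350     # amps of constant charging current

    t = C * dV / I
    print(f"{t:.0f} s, about {t / 60:.0f} minutes")  # ~3714 s, roughly an hour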

The real star of the video, though, is the cast of homemade test equipment, including the oscilloscope, a power supply, and a battery analyzer. To be fair, he has some store-bought test gear too, and the results seem to match well.

Supercapacitors are one of those things that you don’t need until you do. If you haven’t had a chance to play with them, check out the video, if only to enjoy the homebrew gear. We usually look to [Andreas Spiess] for ESP32 advice, but he knows about supercaps, too. And if you really like to make as much as you can yourself, you can even make your own supercapacitors.

Continue reading “Homemade Scope Does Supercapacitor Experiments”

Revisiting Borland Turbo C And C++

Looking back on what programming used to be like can be a fascinatingly entertaining thing, which is why [Tough Developer] decided to download and try using Turbo C and C++, from version 1.0 to the more recent releases. Borland Turbo C 1.0 is a doozy as it was released in 1987 — two years before the C89 standardization that brought us the much beloved ANSI C that so many of us spent the 90s with. Turbo C++ 1.0 is from 1991, which precedes the standardization of C++ in 1998. This means that both integrated development environments (IDEs) provide a fascinating look at what was on the cutting edge in the late 80s and early 90s.

Online help and syntax coloring in Turbo C++.

It wasn’t until version 3.0 that syntax highlighting was added; before then, the IDE’s focus was mostly on features such as auto-indentation and tool integration. Version 2.0 added breakpoints and further integration with the debugger and other tools, as well as libraries such as the Borland Graphics Interface (BGI). Although even editors like Notepad++ and Vim probably give these old IDEs a run for their money nowadays, they were well worth the asking price in their day.

Those of us who have been around long enough to have gotten our start in C++ using the free Turbo command-line tools in the 1990s, or who lived through the rough, early days of GCC 2.x+ on Linux, will remember that a development environment that Just Worked© was hard to come by without shelling out some cash. In that niche, Turbo C and C++, and later Visual Studio and others, served a very grateful market indeed.

Beyond the IDE itself, these packages also came with language documentation; in the absence of constant internet access, referencing APIs and syntax meant dead-tree reference books or “online” documentation, which here meant digital documentation that came on a CD and could be referenced from within the IDE.

[Tough Developer] walks the reader through many features of the IDE, which includes running various demos to show off what the environment was capable of and how that in turn influenced contemporary software and games such as Commander Keen in Keen Dreams. While we can’t say that a return to Turbo C is terribly likely for the average Hackaday reader, we do appreciate taking a look back at older languages and development environments — if for no other reason than to appreciate how far things have come since then.