ISD1700 Based Lo-Fi Sampler

Custom musical instruments here at Hackaday range from wacky to poignant. OpnBeat by [Hiro Akihabara] focuses on something different: simplicity.

There are few buttons, the design and code are optimized to be straightforward and easy to modify, and the interface is slick. Eight musical keys complement three interface keys and a knob. An Arduino Nano is the main brains of the system, but the music generation comes from eight Nuvoton ISD1700s controlled over SPI by the Nano. The beautifully laid-out PCB is 110 mm by 180 mm (4.33″ by 7″), so cases can easily be printed on smaller FDM printers. All the switches are Cherry MX, chosen for their beautiful tactile feedback.

The code, PCB, and 3D case files are all available on GitHub. We love the thought that went into the design and the focus on making it easy to recreate. It might not be quite as cute and simplified as this twelve-button musical macro pad, but the two together could make quite the band.



Parallel Computing On The PicoCray RP2040 Cluster

[ExtremeElectronics] cleverly demonstrates that if one Raspberry Pi Pico is good, then nine must be awesome.  The PicoCray project connects multiple Raspberry Pi Pico microcontroller modules into a parallel architecture leveraging an I2C bus to communicate between nodes.

The same PicoCray code runs on all nodes, but a grounded pin on one of the Pico modules indicates that it is to operate as the controller node.  All of the remaining nodes operate as processor nodes.  Each processor node implements a random back-off technique to request an address from the controller on the shared bus. After waiting a random amount of time, a processor will check if the bus is being used.  If the bus is in use, the processor will go back to waiting.  If the bus is not in use, the processor can request an address from the controller.
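
To make the idea concrete, here is a rough Python-flavored sketch of that back-off loop. It is illustrative only: the bus and node helpers (is_busy(), ask_controller_for_address()) are made-up names, not the PicoCray firmware's actual API.

import random
import time

UNASSIGNED = 0x00  # placeholder "no address yet" marker, not a project constant

def request_address(bus, node):
    # Keep trying until the controller hands this node an address.
    while node.address == UNASSIGNED:
        # Wait a random interval so the nodes don't all pile onto the bus at once.
        time.sleep(random.uniform(0.01, 0.10))
        if bus.is_busy():
            continue  # someone else is talking; go back to waiting
        # Bus is free: ask the controller node for an address assignment.
        node.address = bus.ask_controller_for_address(node.unique_id)
    return node.address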

Once a processor node has an address, it can be sent tasks from the controller node.  In the example application, these tasks involve computing elements of the Mandelbrot Set. The particular elements to be computed in a given task are allocated by the controller node which then later collects the results from each processor node and aggregates the results for display.
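
For a concrete sense of the per-element work each node is handed, a generic escape-time routine looks like this (a standard textbook version, not the project's actual code):

def mandelbrot_iterations(c: complex, max_iter: int = 255) -> int:
    # Count iterations before the point escapes; this is one "element" of the set.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

The controller node then only has to decide which points (or rows of points) go to which address, and stitch the returned counts back into an image.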

The name for this project is inspired by Seymour Cray. Our Father of the Supercomputer biography tells his story, including why the Cray-1 Supercomputer was referred to as “the world’s most expensive loveseat.” For even more Cray-1 inspiration, check out this Raspberry Pi Zero Cluster.

Holograms: The Future Of Speedy Nanoscale 3D Printing?

3D printing by painting with light beams on a vat of liquid plastic was once the stuff of science fiction, but it is now very much science fact. More than that, it’s consumer-level technology that we’re almost at the point of being blasé about. Scientists and engineers the world over have been quietly beavering away in their labs on the new hotness, nanoscale 3D printing, with varying success. Recently, IEEE Spectrum reported on some promising work using holographic imaging to generate nanoscale structures at record speed.

Current stereolithography printers scan a UV laser over the bottom of a vat of UV-sensitive liquid photopolymer resin, which is chemically tweaked to make it sensitive to photons at that frequency. This is all fine, but as we know, the method is slow, can be of limited resolution, and has been largely superseded by LCD technology. Recent research has focused on two-photon lithography, which uses a resin that is largely transparent at the wavelength of light concerned but, critically, can be polymerized given enough energy density; in other words, the method requires multiple photons to be absorbed simultaneously. This is achieved by using pulsed-mode lasers focused to a very tight point, giving the required huge energy density. This tight focus, plus the ability to pass the beam through the vat of liquid, allows much tighter image resolution. But it is slow, painfully slow.
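
A quick aside on why the tight focus buys resolution: two-photon absorption is a nonlinear process, so the polymerization rate scales roughly with the square of the local light intensity (R ∝ I²) rather than linearly. Only the tiny focal volume, where the intensity peaks, ever crosses the curing threshold; the rest of the beam path passes through the resin essentially untouched.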



Hackaday Links: April 9, 2023

When it comes to cryptocurrency security, what’s the best way to secure the private key? Obviously, the correct answer is to write it on a sticky note and put it on the bezel of your monitor; nobody’ll ever think of looking there. But, if you’re slightly more paranoid, and you have access to a Falcon 9, you might just choose to send it to the Moon. That’s what is supposed to happen in a few months’ time, as private firm Lunar Outpost’s MAPP, or Mobile Autonomous Prospecting Platform, heads to the Moon. The goal is to etch the private key of a wallet, cheekily named “Nakamoto_1,” on the rover and fund it with 62 Bitcoins, worth about $1.5 million now. The wallet will be funded by an NFT sale of space-themed electronic art, because apparently the project didn’t have enough Web3.0 buzzwords yet. So whoever visits the lunar rover first gets to claim the contents of the wallet, whatever they happen to be worth at the time. Of course, it doesn’t have to be a human who visits.


Wolverine Gives Your Python Scripts The Ability To Self-Heal

[BioBootloader] combined Python and a hefty dose of AI for a fascinating proof of concept: self-healing Python scripts. He shows things working in a video, embedded below the break, but we’ll also describe what happens right here.

The demo Python script is a simple calculator that works from the command line, and [BioBootloader] introduces a few bugs to it. He misspells a variable used as a return value, and deletes the subtract_numbers(a, b) function entirely. Running this script by itself simply crashes, but using Wolverine on it has a very different outcome.
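
To give a flavor of what that looks like, here is an illustrative reconstruction of such a buggy calculator (not [BioBootloader]'s actual demo file), with the two deliberate bugs called out in comments:

import sys

def add_numbers(a, b):
    reslt = a + b
    return result  # bug 1: the misspelled variable means this raises NameError

# bug 2: subtract_numbers(a, b) has been deleted entirely, but main() still calls it

def main():
    op, a, b = sys.argv[1], float(sys.argv[2]), float(sys.argv[3])
    if op == "add":
        print(add_numbers(a, b))
    elif op == "subtract":
        print(subtract_numbers(a, b))  # NameError: the function no longer exists

if __name__ == "__main__":
    main()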

In a short time, error messages are analyzed, changes proposed, those same changes applied, and the script re-run.

Wolverine is a wrapper that runs the buggy script, captures any error messages, then sends those errors to GPT-4 to ask it what it thinks went wrong with the code. In the demo, GPT-4 correctly identifies the two bugs (even though only one of them directly led to the crash), but that’s not all! Wolverine actually applies the proposed changes to the buggy script and re-runs it. This time around there is still an error, because GPT-4’s previous changes included an out-of-scope return statement. No problem: Wolverine once again consults GPT-4, creates and formats a change, applies it, and re-runs the modified script. This time the script runs successfully, and Wolverine’s work is done.
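
Conceptually, the outer loop Wolverine implements looks something like the sketch below. This is a simplified illustration: ask_gpt4() is a hypothetical stand-in for the real prompt construction and OpenAI API call, and the real tool applies targeted changes rather than rewriting the whole file.

import subprocess
import sys

def ask_gpt4(source: str, traceback: str) -> str:
    # Hypothetical helper: send the source and traceback to GPT-4 and
    # get back a corrected version of the script.
    raise NotImplementedError

def self_heal(path: str, script_args=(), max_attempts: int = 5) -> None:
    for attempt in range(max_attempts):
        run = subprocess.run([sys.executable, path, *script_args],
                             capture_output=True, text=True)
        if run.returncode == 0:
            print(run.stdout)
            return  # the script ran cleanly, nothing left to fix
        # It crashed: hand the source plus the captured traceback to the model.
        with open(path) as f:
            source = f.read()
        fixed = ask_gpt4(source, run.stderr)
        with open(path, "w") as f:
            f.write(fixed)  # apply the proposed fix, then loop around and re-run
    print(f"Gave up after {max_attempts} attempts")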

LLMs (Large Language Models) like GPT-4 are “programmed” in natural language, and these instructions are referred to as prompts. A large chunk of what Wolverine does is thanks to a carefully-written prompt, and you can read it here to gain some insight into the process. Don’t forget to watch the video demonstration just below if you want to see it all in action.

While AI coding capabilities definitely have their limitations, some of the questions they raise are becoming more urgent. Heck, consider that GPT-4 is barely even four weeks old at this writing.



Intel’s IAPX 432: Gordon Moore’s Gamble And Intel’s Failed 32-bit CISC

Intel C43201-5 Release 1 chip: the Instruction Decoder and Microinstruction Sequencer of the iAPX 432 General Data Processor (GDP). The chip is in a 64-contact leadless ceramic Quad Inline Package (QUIP), partially obscured by the metal retention clip of the 3M socket.

In a recent article on The Chip Letter, [Babbage] looks at the Intel iAPX 432, an ambitious, hyper-CISC design that was Intel’s first 32-bit architecture. As a stack-based architecture it exposed no registers to the software developer, while providing high-level support for object-oriented programming, multitasking, and garbage collection in hardware.

At the time the iAPX 432 (originally the 8800) project was proposed, Gordon Moore was CEO of Intel and thus ultimately signed off on it. Intended as an indirect successor to the successful 8080 (which was followed up by the equally successful 8086), the new architecture was a ‘micro-mainframe’ aimed at high-end users who wanted to run Ada and similar modern languages of the early 1980s.

Unfortunately, upon its release in 1981, the iAPX 432 turned out to be excruciatingly slow and poorly optimized, the provided Ada compiler included. The immense complexity of the new architecture meant that the processor itself had to be split across two ASICs, with the instruction decoding alone being hugely complex, as [Babbage] describes in the article. The features that made the architecture so flexible also required a great many transistors to implement, making for an exceedingly bloated design, not unlike the Intel Itanium (IA-64) disaster a few decades later.

Although the iAPX 432 was a bridge too far by most metrics, it did push Intel to perform a lot of R&D on advanced features that would later be used in its i960 and x86 processors. Intel was hardly a struggling company when the architecture was retired in 1985, so despite being a commercial failure, the iAPX 432 still provides an interesting glimpse into an alternate reality where it, rather than x86, took the computer world by storm.

Tired Of Web Scraping? Make The AI Do It

[James Turk] has a novel approach to the problem of scraping web content in a structured way without needing to write the kind of page-specific code web scrapers usually have to deal with. How? Just enlist the help of a natural language AI. Scrapeghost relies on OpenAI’s GPT API to parse a web page’s content, pull out and classify any salient bits, and format it in a useful way.

What makes Scrapeghost different is how the data gets organized: when instantiating the scraper, you define the data you wish to extract. For example:

from scrapeghost import SchemaScraper
scrape_legislators = SchemaScraper(
    schema={
        "name": "string",
        "url": "url",
        "district": "string",
        "party": "string",
        "photo_url": "url",
        "offices": [{"name": "string", "address": "string", "phone": "string"}],
    }
)

The kicker is that this format is entirely up to you! The GPT models are very, very good at processing natural language, and scrapeghost uses GPT to process the scraped data and find (using the example above) whatever looks like a name, district, party, photo, and office address and format it exactly as requested.
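
Once the scraper is defined, usage amounts to pointing it at a page. A minimal sketch, assuming the SchemaScraper instance is callable on a URL and exposes the parsed result as .data, per our reading of the project's examples (the URL below is a placeholder):

# requires an OpenAI API key in the OPENAI_API_KEY environment variable
resp = scrape_legislators("https://example.com/legislators/42")  # placeholder URL
print(resp.data)
# e.g. {"name": "...", "district": "...", "party": "...", "offices": [...]}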

It’s an experimental tool and you’ll need an API key from OpenAI to use it, but it has useful features and is certainly a novel approach. There’s a tutorial and even a command-line interface, so check it out.