The Cyber Resilience Act Threatens Open Source

Society and governments are struggling to adapt to a world full of cybersecurity threats. Case in point: the EU's Cyber Resilience Act (CRA) is legislation proposed by the European Commission with a noble goal: protect consumers from cybercrime by having security baked in during design. Even if you don't live in the EU, today's global market ensures that if the European Parliament adopts this legislation, it will affect the products you buy and, possibly, the products you create. In a recent podcast, our own [Jonathan Bennett] and [Doc Searls] interview [Mike Milinkovich] from the Eclipse Foundation about the proposal and what they fear would be almost a death blow to open source software development. You can watch the podcast below.

If you want some background, you can read the EU's now-closed request for comments and the blog post from opensource.org outlining the problems. At the heart of the issue is the requirement for organizations to self-certify their compliance with the act. Since open source is often maintained by a small, loose-knit group of contributors, it is difficult to see how this will work.

Continue reading “The Cyber Resilience Act Threatens Open Source”

How Much Programming Can ChatGPT Really Do?

By now we've all seen articles where the entire copy has been written by ChatGPT. It's essentially a trope of its own at this point, so we will start out by assuring you that this article is being written by a human. AI tools do seem poised to be extremely disruptive to certain industries, but that doesn't necessarily have to be a bad thing as long as they continue to be viewed as tools rather than direct replacements. ChatGPT can be used to assist with plenty of tasks, helping augment processes like programming (rather than becoming the programmer itself), and this article shows a few examples of what it might be used for.

AI comments are better than nothing…probably.

While it can write some programs on its own, in some cases quite capably, it might not yet be up to the challenge for specialized or complex tasks. It will often appear extremely confident in its solutions even when it's providing poor or false information, but that doesn't mean it can't or shouldn't be used at all.

The article goes over a few of the ways it can function more as an assistant than a programmer, including generating filler content for something like an SQL database, converting data from one format to another, converting programs from one language to another, and even helping with a program's debugging process.
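To make the "assistant, not replacement" idea concrete, here's a minimal Python sketch of the data-conversion use case: hand the model a CSV blob, ask for JSON back, and keep a human in the loop to check the result before it goes anywhere important. This is our own illustration rather than anything from the linked article, and it assumes the pre-1.0 openai Python package's ChatCompletion interface, the gpt-3.5-turbo model name, and an API key in the OPENAI_API_KEY environment variable.

```python
# A sketch of using ChatGPT as a format-conversion assistant (our example,
# not from the linked article). Assumes the pre-1.0 "openai" package and an
# API key already set in the OPENAI_API_KEY environment variable.
import openai

def csv_to_json_via_chatgpt(csv_text: str) -> str:
    """Ask the model to convert CSV text to JSON; a human still reviews the output."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=[
            {"role": "system", "content": "You convert data between formats. Reply with JSON only."},
            {"role": "user", "content": "Convert this CSV to a JSON array of objects:\n" + csv_text},
        ],
        temperature=0,  # keep the conversion as repeatable as possible
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(csv_to_json_via_chatgpt("name,pin\nled,13\nbutton,7"))
```

The division of labor is the point: the model does the tedious transformation, and the programmer stays responsible for checking it.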

A few other uses we've come up with include asking for recommendations for libraries we didn't know existed, as well as asking for recommendations for music to play in the background while working. Tools like these are extremely impressive, and while they likely aren't taking over anyone's job right now, that might not always be the case.

Why LLaMa Is A Big Deal

You might have heard about LLaMa, or maybe you haven't. Either way, what's the big deal? It's just some AI thing. In a nutshell, LLaMa is important because it allows you to run large language models (LLMs) like GPT-3 on commodity hardware. In many ways, this is a bit like Stable Diffusion, which similarly allowed normal folks to run image generation models on their own hardware, with access to the underlying source code. We've discussed why Stable Diffusion matters and even talked about how it works.

LLaMa is a transformer language model from Facebook/Meta research: a collection of large models ranging from 7 billion to 65 billion parameters, trained on publicly available datasets. The research paper showed that the 13B version outperformed GPT-3 on most benchmarks, and that LLaMa-65B is right up there with the best of them. LLaMa was unique in that inference could be run on a single GPU, thanks to some optimizations made to the transformer itself and the model being about 10x smaller. And while Meta recommended at least 10 GB of VRAM to run inference on the larger models, that's a huge step down from the 80 GB A100 cards that often run these models.
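Some quick back-of-the-envelope arithmetic (ours, not from the paper) shows why those sizes matter, and why the 4-bit quantization covered next is such a big deal: at 16 bits per weight, just storing the 7B model's parameters takes around 13 GiB, already more than a 10 GB card can hold, while 4 bits per weight brings it down to a bit over 3 GiB. Actual memory use is somewhat higher once activations and the KV cache are counted, but the weights dominate.

```python
# Back-of-the-envelope weight-storage estimate (our own arithmetic, not from
# the LLaMa paper): memory needed just to hold the parameters at a given precision.
def weight_storage_gib(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # convert bytes to GiB

for params in (7, 13, 65):
    fp16 = weight_storage_gib(params, 16)
    q4 = weight_storage_gib(params, 4)
    print(f"LLaMa-{params}B: ~{fp16:.1f} GiB at fp16, ~{q4:.1f} GiB at 4-bit")
```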

While this was an important step forward for the research community, it became a huge one for the hacker community when [Georgi Gerganov] rolled in. He released llama.cpp on GitHub, which runs inference on a LLaMa model with 4-bit quantization. His code was focused on running LLaMa-7B on your MacBook, but we've seen versions running on smartphones and Raspberry Pis. There's even a version written in Rust! A rough rule of thumb is that anything with more than 4 GB of RAM can run LLaMa. Model weights are available through Meta under some rather strict terms, but they've been leaked online and can be found even in a pull request on the GitHub repo itself. Continue reading “Why LLaMa Is A Big Deal”

Hackaday Berlin: The Badge, Workshops, And Lightning Talks

Hackaday Berlin is just under two weeks away, and we’ve got news times three! If you don’t already have tickets, there are still a few left, so grab them while they’re hot. We’ll be rolling out the final full schedule soon, but definitely plan on attending a pre-party Friday night the 24th, followed by a solid 14-hour day of hacking, talks, and music on Saturday the 25th, and then a mellow Bring-a-Hack brunch with impromptu demos, workshops, and whatever else on Sunday from 10:30 until 14:00.

The Badge, Round Two

Many Europeans weren't able to make the flight to Supercon, so here's your chance to get your hands on [Voja Antonic]'s superb down-to-the-metal computer trainer-slash-retrocomputer on this side of the Atlantic. It's been re-skinned for Berlin, with a couple of hardware tweaks because nobody can leave a board revision alone, but it's 100% compatible with the badge that took Supercon 2022 by storm.

If you want to read more about it, you should. We loved it, and so did the crowd. One of the coolest badge hardware hacks was a “punchcard” reader, but there was a lot of work on the software side as well, and we got pull requests for most of the cool demos. If you're coming and would like to start your badge hacking a bit early, you can start your research now.

We’ll have a Badge Hacking Ceremony Saturday night, so you can show off whatever you made. It’s lots of fun. Continue reading “Hackaday Berlin: The Badge, Workshops, And Lightning Talks”

Will A.I. Steal All The Code And Take All The Jobs?

New technology often brings with it a bit of controversy. When considering stem cell therapies, self-driving cars, genetically modified organisms, or nuclear power plants, fears and concerns come to mind as much as, if not more than, excitement and hope for a brighter tomorrow. New technologies force us to evolve perspectives and establish new policies in hopes that we can maximize the benefits and minimize the risks. Artificial Intelligence (AI) is certainly no exception. The stakes, including our very position as Earth’s apex intellect, seem exceedingly weighty. Mathematician Irving Good’s oft-quoted wisdom that the “first ultraintelligent machine is the last invention that man need make” describes a sword that cuts both ways. It is not entirely unreasonable to fear that the last invention we need to make might just be the last invention that we get to make.

Artificial Intelligence and Learning

Artificial intelligence is currently the hottest topic in technology. AI systems are being tasked to write prose, make art, chat, and generate code. Setting aside the horrifying notion of an AI programming or reprogramming itself, what does it mean for an AI to generate code? It should be obvious that an AI is not just a normal program whose code was written to spit out any and all other programs. Such a program would need to have all programs inside itself. Instead, an AI learns from being trained. How it is trained is raising some interesting questions.

Humans learn by reading, studying, and practicing. We learn by training our minds with collected input from the world around us. Similarly, AI and machine learning (ML) models learn through training. They must be provided with examples from which to learn. The examples that we provide to an AI are referred to as the data corpus of the training process. The robot Johnny 5 from “Short Circuit”, like any curious-minded student, needs input, more input, and more input.

Continue reading “Will A.I. Steal All The Code And Take All The Jobs?”

A Pi Pico plugged into a breadboard, with jumper wires running from its pins to an SPI flashing clip that is in turn clipped onto an SPI flash chip on a BeagleBone board

Programming SPI Flash Chips? Use Your Pico!

At this point, a Pi Pico is equivalent to a bag full of programmers and debugging accessories. For instance, when you want to program an SPI flash chip, do you use one of those wonky CH341 dongles, or perhaps even a full-on Raspberry Pi with a Linux OS? If so, it might be time to set those two aside – any RP2040 board can do this now. This is thanks to the work of [stacksmashing], who implemented the serprog protocol for the RP2040, letting us use a Pi Pico with stock flashrom for all our SPI flash chip needs.

After flashing the code to your RP2040 board, all you need to do is wire your flash chip to the right pins and then use the serprog programmer type on your flashrom command line – instructions are available on GitHub along with the code, as you'd expect. Don't feel like installing flashrom, or perhaps you happen to run Windows and need a flasher in a pinch? [stacksmashing] has a WebSerial-based SPI flasher tool for you, too, and shows it off with a fancy all-the-pinouts board of his own making.
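If you go the flashrom route, the whole trick is pointing flashrom's serprog programmer at the Pico's USB serial port. Below is a small Python wrapper as a sketch of what that looks like; the device path and baud rate are our assumptions (a Pico typically enumerates as a CDC-ACM serial device), so use whatever parameters the instructions in the GitHub repo actually specify.

```python
# Sketch: back up an SPI flash chip through a Pi Pico running the serprog
# firmware, by shelling out to stock flashrom. The device path and baud rate
# are assumptions -- follow the GitHub instructions for your exact setup.
import subprocess
from typing import Optional

PROGRAMMER = "serprog:dev=/dev/ttyACM0:115200"  # Pico usually shows up as /dev/ttyACM0

def read_flash(output_file: str, chip: Optional[str] = None) -> None:
    """Dump the chip's contents to output_file; flashrom auto-detects the chip if none is given."""
    cmd = ["flashrom", "-p", PROGRAMMER, "-r", output_file]
    if chip:
        cmd += ["-c", chip]  # pass an explicit chip name if auto-detection is ambiguous
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    read_flash("backup.bin")
```

Writing an image back is the same call with -w in place of -r.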

This kind of tool is indispensable – you don't need to mod one of those CH341 programmers to fix the bonkers 5 V default IO, or keep an entire Linux computer handy when you likely already have a Pico at your fingertips. All in all, yay for one more RP2040 trick up our sleeve – this SPI flashing helper joins an assortment of applets for SWD, JTAG, UART, I2C, and CAN, and in a pinch, your Pi Pico will also work as a digital and analog logic analyzer or an FPGA playground.


A Hackaday.io page screenshot showing the numerous CH552 projects from [Stefan].

All The USB You Can Do With A CH552

Recently, you might have noticed a flurry of CH552 projects on Hackaday.io – all of them with professionally taken photos of neatly assembled PCBs, typically with a USB connector or two. You might also have noticed that they're all built by one person, [Stefan “wagiminator” Wagner], who is a prolific hacker – his Hackaday.io page lists over a hundred projects, most of them proudly marked “Completed”. Today, with all these CH552 mentions in Hackaday.io's “Newest” category, we've decided to take a peek.

The CH552 is an 8-bit MCU with a USB peripheral (its CH554 sibling adds USB host support), and [Stefan] seriously puts this microcontroller to the test. There's an nRF24L01+ transceiver turned USB dongle, a rotary encoder peripheral with a 3D-printed case and knob, a mouse wiggler, an interface for our beloved I2C OLED displays, a general-purpose CH55x devboard, and a flurry of AVR programmers – a regular AVRISP, an ISP+UPDI programmer, and a UPDI programmer with HV support. Plus, if USB host is what interests you, there's a dedicated CH554 USB host development board. Every single one of these is open source, with PCBs designed in EasyEDA, the firmware already written (!) and available on GitHub, and a lovingly crafted documentation page for each.

With working firmware for every one of these projects, having them as examples is a serious incentive for more hackers to try these chips out, especially considering that the CH552 and CH554 go for about 50 cents apiece at websites like LCSC, and mostly come in friendly packages. We did cover these two chips back in 2018, together with a programming guide, and we've seen things like badges built with their help, but having all these devices to follow is a step up in availability – plus, it's undeniable that the widgets themselves are quite useful!