New Linux Kernel Rules Put The Onus On Humans For AI Tool Usage

It’s fair to say that the topic of so-called ‘AI coding assistants’ is somewhat controversial. With arguments against them ranging from code quality to copyright issues, there are many valid reasons to be at least hesitant about accepting their output in a project, especially one as massive as the Linux kernel. With a recent update to the Linux kernel documentation, the use of these tools has now been formalized.

The upshot for such Large Language Model (LLM) tools is that any commit containing generated code has to be signed off by a human developer, and that human ultimately bears responsibility for the code quality as well as for any issues the code may cause, including legal ones. The use of AI tools also has to be declared with the Assisted-by: tag in contributions, so that their use can be tracked.
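As a rough sketch of what that looks like in practice, a commit message for an AI-assisted contribution might end with trailers along these lines (the developer name and tool name here are made up for illustration):

```
fix: correct off-by-one in ring buffer wraparound

The wraparound check compared against the buffer size instead of
size - 1, causing a one-slot overrun on full buffers.

Assisted-by: ExampleCodeBot (hypothetical LLM tool)
Signed-off-by: Jane Developer <jane@example.com>
```

The Signed-off-by: line is the familiar Developer Certificate of Origin sign-off; the Assisted-by: line is the new disclosure, and it is the human signer, not the tool, who answers for the patch.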

When it comes to other open source projects the approach varies, with NetBSD having banished anything tainted by ‘AI’, cURL shuttering its bug bounty program under a flood of AI-generated slop reports, and Mesa’s developers demanding that you understand any generated code you submit, following a tragic slop-cident.

Meanwhile there are also rising concerns that these LLM-based tools may be killing open source through ‘vibe-coding’, along with legal concerns about whether LLM-generated code respects the original license of the code that was ingested into the training model. Clearly we haven’t seen the end of these issues yet.

Audio Reactive LED Strips Are Hard

Back in 2017, Hackaday featured an audio reactive LED strip project from [Scott Lawson] that has over the years become an extremely popular choice for the party animals among us. We’re fascinated to read his retrospective analysis of the project, in which he looks in detail at how it works and explains why, for all its success, he’s still not satisfied with it.

Sound-to-light systems have been a staple of electronics for many decades, and have progressed from simple volume-based flashers and sequencers to complex DSP-driven affairs like his project. It’s particularly interesting to be reminded that the problem faced by the designer of such a system is one of interfacing with human perception rather than simply making a pretty light show, and in that context it becomes more important to understand how humans perceive sound and light than to simply dump a visualization to the LEDs. We receive an introduction to some of the techniques used in speech recognition, since our brains are optimized to recognize activity in the speech frequency range, and to how humans register light intensity.
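To illustrate the light-perception side of this, here’s a minimal sketch (our own, not taken from the project) of the classic gamma-correction trick: perceived brightness is roughly logarithmic, so a linear PWM ramp looks bunched up at the top, and remapping through a power curve makes the steps look even to the eye. The gamma value of 2.2 is a common display-style assumption; real LEDs vary.

```python
# Illustrative sketch: gamma correction for perceptually even LED brightness.
# A linear 0-255 level is remapped through a power curve before being sent
# to the LED driver as a PWM duty-cycle value.

GAMMA = 2.2  # assumed display-style gamma; tune for the actual LEDs


def gamma_correct(level: int, gamma: float = GAMMA) -> int:
    """Map a linear 0-255 brightness to a perceptually even 0-255 PWM value."""
    return round(255 * (level / 255) ** gamma)


# Halfway up the linear scale drives the LED at only ~22% duty cycle,
# which to the eye looks much closer to "half as bright".
print(gamma_correct(128))  # -> 56
```

In practice this mapping is usually precomputed into a 256-entry lookup table so the per-frame cost is a single array index, which matters when you’re updating hundreds of LEDs at audio frame rates.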

For all this sophistication and the impressive results it delivers, though, he’s not ready to call it complete. Making it work well with all musical genres is a challenge, as is that elusive human foot-tapping factor. He talks about using a neural network trained on accelerometer data from people listening to music, which can only be described as an exciting prospect. We genuinely look forward to seeing future versions of this project. Meanwhile if you’re curious, you can head back to 2017 and see our original coverage.