Screenshot of the GitHub Marketplace action listing, describing the extension

Giving Your KiCad PCB Repository Pretty Pictures

Publishing your boards on GitHub or GitLab is a must, and leads to wonderful outcomes in the hacker world. On their own, however, your board files can leave the repo looking a bit barren; a picture or two in the README goes a long way. Making them yourself takes time – what if you could have it happen automatically? Enter kicad-render by [linalinn], a GitHub and GitLab integration for rendering your KiCad projects.

This integration generates pictures of your board, top and bottom view, on every push to the repo – just embed two image links in your README.md. It’s made possible by a new option in KiCad 8’s kicad-cli – board image generation – while [linalinn]’s code takes care of getting KiCad to run on GitHub/GitLab servers.

For even more bling, you can enable an option to generate a GIF that rotates your board, in the style of that one [arturo182] demo – in fact, this integration’s GIF code was borrowed from that script! Got a repository with many boards in one? There’s an option you could make work for yourself, too.

All you need to do is follow a couple of simple steps; [linalinn] has documented both the GitHub and GitLab integrations. We’ve recently talked about KiCad integrations in more detail, if you’re wondering what else your repository could be doing!

Don’t Object To Python Objects

There’s the old joke about 10 kinds of programmers, but the truth is that, when it comes to programming, there are often people who make tools and people who use tools. The Arduino system is a good example of this. Most people use it like a C compiler. However, it really uses C++, and if you want to provide “things” to the tool users, you need to create objects. For example, when you put Serial in a program, you use an object someone else wrote. Python — and things like MicroPython — have the same kind of division. Python started as a scripting language, but it has added object features, allowing a rich set of tools for scripters to use. [Damilola Oladele] shows the ins and outs of object-oriented Python in a recent post.

Like other languages, Python allows you to organize functions and data into classes and then create instances that belong to that class. Class hierarchies are handy for reusing code, customizing behavior, and — through polymorphism — building device driver-like architectures.
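To give a rough idea of what that driver-like pattern looks like in practice, here’s a minimal sketch (the class and method names are ours, not from [Damilola Oladele]’s post): every driver exposes the same read() interface, and the calling code neither knows nor cares which concrete class it’s talking to.

```python
class Driver:
    """Base class: defines the interface every driver must provide."""
    def __init__(self, name):
        self.name = name

    def read(self):
        raise NotImplementedError("subclasses must implement read()")


class TemperatureSensor(Driver):
    """One concrete driver, overriding read() with its own behavior."""
    def read(self):
        return {"sensor": self.name, "celsius": 21.5}  # stand-in for real hardware access


class Accelerometer(Driver):
    """Another concrete driver sharing the same interface."""
    def read(self):
        return {"sensor": self.name, "g": (0.0, 0.0, 1.0)}


# Polymorphism in action: this loop only knows about Driver, not the subclasses.
for device in (TemperatureSensor("temp0"), Accelerometer("accel0")):
    print(device.read())
```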

Continue reading “Don’t Object To Python Objects”

Programming Ada: Packages And Command Line Applications

In the previous installment in this series we looked at how to set up an Ada development environment, and how to compile and run a simple Ada application. Building upon this foundation, we will now look at how to create more complex applications, along with how to parse and use arguments passed to Ada applications on the command line (CLI). After all, passing flags and strings to CLI applications when we launch them is a crucial part of user interaction, as well as of automation, as is the case with system services.

The way that a program is built up is also essential, as well-organized code eases maintenance and promotes code reusability through e.g. modularity. In Ada you can organize subprograms (i.e. functions and procedures) in a declarative fashion as stand-alone units, as well as embed subprograms in other subprograms. Another option is packages, which roughly correspond to C++ namespaces, while tagged types are the equivalent of classes. In the previous article we already saw the use of a package, when we used the Ada.Text_IO package to output text to the CLI. In this article we’ll look at how to write our own package alongside handling command line input, after a word about the role of the binding phase during the building of an Ada application.

Continue reading “Programming Ada: Packages And Command Line Applications”

Train A GPT-2 LLM, Using Only Pure C Code

[Andrej Karpathy] recently released llm.c, a project that focuses on LLM training in pure C, once again showing that working with these tools isn’t necessarily reliant on sprawling development environments. GPT-2 may be older but is perfectly relevant, being the granddaddy of modern LLMs (large language models) with a clear heritage to more modern offerings.

LLMs are fantastically good at communicating despite not actually knowing what they are saying, and training them usually relies on the PyTorch deep learning library, driven from Python. llm.c takes a simpler approach by implementing the neural network training algorithm for GPT-2 directly. The result is highly focused and surprisingly short: about a thousand lines of C in a single file. It is a highly elegant process that does the same thing the bigger, clunkier methods accomplish. It can run entirely on a CPU, or it can take advantage of GPU acceleration, where available.
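For contrast, here’s a rough sketch of the kind of PyTorch training step that llm.c hand-codes in C; the toy model and fake token data below are our stand-ins, not anything from [Andrej Karpathy]’s repository.

```python
import torch
import torch.nn as nn

# Toy next-token predictor standing in for GPT-2; real training streams tokenized text.
vocab_size, embed_dim = 256, 64
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, 32))   # fake batch of token IDs
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next token at each position

for step in range(10):
    logits = model(inputs)                       # forward pass
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                              # backpropagation through the whole model
    optimizer.step()                             # parameter update
```

The appeal of llm.c is that the forward pass, backpropagation, and optimizer update hidden behind those few library calls are all written out explicitly in C instead.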

This isn’t the first time [Andrej Karpathy] has bent his considerable skills and understanding towards boiling down these sorts of concepts into bare-bones implementations. We previously covered a project of his that is the “hello world” of GPT, a tiny model that predicts the next bit in a given sequence and offers low-level insight into just how GPT (generative pre-trained transformer) models work.

The Performance Impact Of C++’s `final` Keyword For Optimization

In the world of software development the term ‘optimization’ is generally a reason for experienced developers to start feeling decidedly nervous, especially when a feature is marked as an ‘easy and free optimization’. The final keyword introduced in C++11 is one such feature. It promises a way to speed up object-oriented code by omitting the vtable call indirection when a class or member function is marked as – unsurprisingly – final, meaning that it cannot be inherited from or overridden. Inspired by this promise, [Benjamin Summerton] figured that he’d run a range of benchmarks to see what performance uplift he’d get on his ray tracing project.

To be as thorough as possible, the tests were run on three different systems, including 64-bit Intel and AMD systems, as well as on Apple Silicon (M1). For the compilers, various versions of GCC (12.x, 13.x), Clang (15, 17) and MSVC (17) were employed, with rather interesting results for the final versus no-final tests. Clang was probably the biggest surprise: with the keyword added, the performance of Clang-generated code absolutely tanked. MSVC was a mixed bag, as were the GCC versions other than GCC 13.2 on AMD Ryzen, which saw a bump of a few percent.

Ultimately, it seems that there’s no free lunch as usual, and adding final to your code falls distinctly under ‘only use it if you know what you’re doing’. As things stand, the resulting behavior seems wildly inconsistent.

FLOSS Weekly Episode 780: Zoneminder — Better Call Randal

This week Jonathan Bennett and Aaron Newcomb chat with Isaac Connor about Zoneminder! That’s the project that’s working to store and deliver all the bits from security cameras — but the CCTV world has changed a lot since Zoneminder first started, over 20 years ago. The project is working hard to keep up, with machine learning object detection, WebRTC, and more. Isaac talks a bit about developer burnout, and a case or two over the years where an aggressive contributor seems suspicious in retrospect. And when is the next stable version of Zoneminder coming out, anyway?

Continue reading “FLOSS Weekly Episode 780: Zoneminder — Better Call Randal”

Programming Ada: First Steps On The Desktop

Who doesn’t want to use a programming language that is designed to be reliable, straightforward to learn and also happens to be certified for everything from avionics to rockets and ICBMs? Despite Ada’s strong roots and impressive legacy, it has the reputation among the average hobbyist of being ‘complicated’ and ‘obscure’, yet this couldn’t be further from the truth, as previously explained. In fact, anyone who has some or even no programming experience can learn Ada, as the very premise of Ada is that it removes complexity and ambiguity from programming.

In this first part of a series, we will be looking at getting up and running with a basic desktop development environment on Windows and Linux, and run through some Ada code that familiarizes one with the syntax and basic principles of the language. As for the Ada version used, we will be targeting Ada 2012, as the newer Ada 2022 standard was only just approved in 2023 and doesn’t change anything significant for our purposes.

Continue reading “Programming Ada: First Steps On The Desktop”