Programming Ada: Packages And Command Line Applications

In the previous installment in this series we looked at how to set up an Ada development environment, and how to compile and run a simple Ada application. Building upon this foundation, we will now look at how to create more complex applications, along with how to parse and use arguments passed to Ada applications on the command-line interface (CLI). After all, passing flags and strings to CLI applications when we launch them is a crucial part of user interaction, as well as of automation, as is the case with system services.
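As a taste of what's in store, here is a minimal sketch of what argument handling looks like using the standard Ada.Command_Line package (Show_Args is just an illustrative name, not code from the article):

```ada
with Ada.Text_IO;      use Ada.Text_IO;
with Ada.Command_Line; use Ada.Command_Line;

--  Minimal sketch: print the program's name, then echo each CLI argument.
procedure Show_Args is
begin
   Put_Line ("Program: " & Command_Name);
   for I in 1 .. Argument_Count loop
      Put_Line ("Argument" & Integer'Image (I) & ": " & Argument (I));
   end loop;
end Show_Args;
```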

The way that a program is built up is also essential, as well-organized code eases maintenance and promotes code reuse through, for example, modularity. In Ada you can organize subprograms (i.e. functions and procedures) in a declarative fashion as stand-alone units, as well as embed subprograms in other subprograms. Another option is packages, which roughly correspond to C++ namespaces, while tagged types are the equivalent of classes. In the previous article we already saw the use of a package when we used the Ada.Text_IO package to output text to the CLI. In this article we'll look at how to write our own packages alongside handling command line input, after a word about the role of the binding phase during the building of an Ada application.
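As a preview of what such a package looks like, here is a minimal sketch (the Greetings package is a made-up example; with GNAT each compilation unit would normally live in its own .ads or .adb file, as the comments indicate):

```ada
--  greetings.ads -- the package specification: the public interface.
package Greetings is
   procedure Say_Hello (Name : String);
end Greetings;

--  greetings.adb -- the package body: the implementation.
with Ada.Text_IO;

package body Greetings is
   procedure Say_Hello (Name : String) is
   begin
      Ada.Text_IO.Put_Line ("Hello, " & Name & "!");
   end Say_Hello;
end Greetings;

--  main.adb -- a client simply 'with's the package and calls into it.
with Greetings;

procedure Main is
begin
   Greetings.Say_Hello ("Ada");
end Main;
```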

Continue reading “Programming Ada: Packages And Command Line Applications”

The Performance Impact Of C++’s `final` Keyword For Optimization

In the world of software development the term ‘optimization’ is generally a reason for experienced developers to start feeling decidedly nervous, especially when a feature is billed as an ‘easy and free optimization’. The final keyword introduced in C++11 is one such feature. It promises a way to speed up object-oriented code by omitting vtable call indirection when a class or member function is marked as – unsurprisingly – final, meaning that it cannot be inherited from or overridden. Inspired by this promise, [Benjamin Summerton] figured that he’d run a range of benchmarks to see what performance uplift he’d get on his ray tracing project.

To be as thorough as possible, the tests were run on three different systems, including 64-bit Intel and AMD systems, as well as on Apple Silicon (M1). For the compilers, various versions of GCC (12.x, 13.x), Clang (15, 17) and MSVC (17) were employed, with rather interesting results for the final versus no-final tests. Clang was probably the biggest surprise: with the keyword added, the performance of Clang-generated code absolutely tanked. MSVC was a mixed bag, as were the GCC versions other than GCC 13.2 on AMD Ryzen, which saw a speed-up of a few percent.

Ultimately, it seems that, as usual, there’s no such thing as a free lunch, and adding final to your code falls distinctly under ‘only use it if you know what you’re doing’. As things stand, the resulting behavior seems wildly inconsistent.

Programming Ada: First Steps On The Desktop

Who doesn’t want to use a programming language that is designed to be reliable and straightforward to learn, and also happens to be certified for everything from avionics to rockets and ICBMs? Despite Ada’s strong roots and impressive legacy, it has a reputation among the average hobbyist of being ‘complicated’ and ‘obscure’, yet this couldn’t be further from the truth, as previously explained. In fact, anyone who has some or even no programming experience can learn Ada, as the very premise of Ada is that it removes complexity and ambiguity from programming.

In this first part of a series, we will be looking at getting up and running with a basic desktop development environment on Windows and Linux, and we’ll run through some Ada code that familiarizes one with the syntax and basic principles of the language. As for the Ada version used, we will be targeting Ada 2012, as the newer Ada 2022 standard was only just approved in 2023 and doesn’t change anything significant for our purposes.
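For reference, the classic first program in Ada – the kind of simple application the series starts from – looks something like this:

```ada
with Ada.Text_IO;

--  The traditional starting point: a stand-alone procedure as the entry point.
procedure Hello is
begin
   Ada.Text_IO.Put_Line ("Hello, Ada!");
end Hello;
```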

Continue reading “Programming Ada: First Steps On The Desktop”

Mapping The Nintendo Switch PCB

As electronics have advanced, they’ve not only gotten more powerful but smaller as well. This size is great for portability and speed but can make things like repair more inaccessible to those of us with only a simple soldering iron. Even simply figuring out what modern PCBs do is beyond most of our abilities due to the shrinking sizes. Thankfully, however, [μSoldering] has spent their career around state-of-the-art soldering equipment working on intricate PCBs with tiny surface-mount components and was just the person to document a complete netlist of the Nintendo Switch through meticulous testing, a special camera, and the use of a lot of very small wires.

The first part of reverse-engineering the Switch is to generate images of the PCBs. These images are taken at an astonishing 6,000 PPI and as a result are incredibly large files, but with that level of detail the process starts to come together. From there, a special piece of software allows point-and-click work on the images to start piecing the puzzle together, and with an idea of where everything goes the build moves into the physical world.

[μSoldering] removes all of the parts on the PCBs with hot air and then meticulously wires them back up using a custom PCB that allows each connection to be wired up and checked one-by-one. With everything working the way it is meant to, a completed netlist documenting every single connection on the Switch hardware can finally be assembled.

The final documentation includes over two thousand photos and almost as many individual wires, with over 30,000 solder joints. It’s an impressive body of work that [μSoldering] hopes will help others working with this hardware while at the same time keeping their specialized skills up-to-date. We also have fairly extensive documentation about some of the Switch’s on-board chips, further expanding our body of knowledge on how these gaming consoles work and how they’re put together.

Ask Hackaday: What About Imperfect Features?

Throughout the last few years, I’ve been seeing sparks of an eternal discussion here and there. It’s a nuanced one, but if I could summarize, it’s about the different feature development strategies we can follow to design things, especially if they’re aimed at a larger market. Specifically – when adding a feature, how complete and perfect should it be?

A while back, I read a Mastodon thread about VLC not implementing backwards per-frame skipping. On the surface, it’s about an indignant user asking – what’s the deal with VLC not having a “go back a frame” button? A ton of video players have this feature implemented. There’s a forum thread linked, and reading it could leave you with a good few conflicting emotions. Here’s a recap.

In what appears to be one of multiple threads asking about a ‘previous frame’ button in VLC, there’s an 82-post discussion involving multiple different VLC developers. The users’ argument is that it clearly appears to be technically possible to add a ‘previous frame’ button in practice, and the developers’ argument is that it’s technologically complex to implement in some cases – for certain formats, even impossible to implement! With inter-frame compressed video, after all, there is no stored ‘previous frame’ to simply display: the decoder may have to seek back to the last keyframe and decode forward from there all over again. Let’s go into the developers’ stated reasoning in more detail, then – here’s what you can find in the thread, to the best of my ability.

Continue reading “Ask Hackaday: What About Imperfect Features?”

Hardware Should Lead Software, Right?

Once upon a time, about twenty years ago, there was a Linux-based router, the Linksys WRT54G. Back then, the number of useful devices running embedded Linux was rather small, and this was a standout. At the time, getting a hacker device that wasn’t a full-fledged computer onto a WiFi network was also fairly difficult. This one relatively inexpensive WiFi router got you both in one box, so it was no surprise that we saw rovers with WRT54Gs as their brains, among other projects.

Long Live the WRT54G

Of course, some people just wanted a better router, and thus the OpenWRT project was born as a minimal Linux system that let you do fancy stuff with the stock router. Years passed, OpenWRT was ported to newer routers, and features were added. The software grew, and as far as we know, current versions won’t even run in the minuscule RAM of the original hardware that gave the project its name.

Enter the ironic proposal that OpenWRT – the free software group that developed its code on a long-gone purple box – should develop its own hardware. Normally, we think of the development flow going the other way, right? But there’s a certain logic here as well. The software stack is now tried-and-true. They’ve got brand recognition, at least within the Hackaday universe. And in comparison, developing some known-good hardware to work with it is relatively easy.

We’re hardware hobbyists, and for us it’s often the case that the software is the hard part. It’s also the part that can make or break the user experience, so getting it right is crucial. On our hacker scale, we often choose a microcontroller to work with a codebase or tools that we want to use, because it’s easier to move some wires around on a PCB than it is to re-jigger a software house of cards. So maybe OpenWRT’s router proposal isn’t backwards after all? How many other examples of hardware designed to fit into existing software ecosystems can you think of?

Niklaus Wirth with the personal computer Lilith, which he developed in the 1970s. (Photo: ETH Zurich)

Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many

Although perhaps not as much of a household name as other pioneers of last century’s rapid evolution of computer hardware and the software running on it, Niklaus Wirth’s contributions put him right up there with those giants. A very familiar face both at the ETH Zurich university in his native Switzerland and at Stanford and other locations around the world where computer history was written, Niklaus not only gave us Pascal and Modula-2, but also inspired countless other languages as well as their developers.

Sadly, Niklaus Wirth passed away on January 1st, 2024, at the age of 89. Until his death, he continued to work on the Oberon programming language and its associated Oberon System, as well as the multi-process, SMP-capable A2 (Bluebottle) operating system that runs natively on x86, x86_64 and ARM hardware. He leaves behind a legacy that stretches from the 1960s to today; it’s hard to think of any aspect of modern computing that wasn’t in some way influenced or directly improved by Niklaus.

Continue reading “Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many”