The Performance Impact Of C++’s `final` Keyword For Optimization

In the world of software development, the term ‘optimization’ is generally a reason for experienced developers to start feeling decidedly nervous, especially when a feature is pitched as an ‘easy and free optimization’. The final keyword introduced in C++11 is one such feature. It promises a way to speed up object-oriented code by eliminating vtable call indirection when a class or member function is marked as – unsurprisingly – final, meaning that it cannot be inherited from or overridden. Inspired by this promise, [Benjamin Summerton] figured that he’d run a range of benchmarks to see what performance uplift he’d get on his ray tracing project.
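To make the promise concrete, here’s a minimal sketch of the kind of call that final is supposed to let a compiler devirtualize – this is an illustration, not code from [Benjamin Summerton]’s ray tracer, and the Shape/Sphere names are invented for the example:

```cpp
#include <cstdio>

struct Shape {
    virtual ~Shape() = default;
    virtual float area() const = 0;   // normally dispatched through the vtable
};

// `final` promises the compiler that nothing will ever derive from Sphere,
// so a call made through a Sphere reference or pointer can skip the vtable
// lookup and be called (or even inlined) directly.
struct Sphere final : Shape {
    explicit Sphere(float r) : radius(r) {}
    float area() const override { return 4.0f * 3.14159265f * radius * radius; }
    float radius;
};

float surface(const Sphere& s) {
    return s.area();   // eligible for devirtualization: the static type is final
}

int main() {
    Sphere s{2.0f};
    std::printf("area = %f\n", surface(s));
}
```

Whether that theoretical shortcut actually shows up as faster code is exactly what the benchmarks set out to measure.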

To be as thorough as possible, the tests were run on three different systems, including 64-bit Intel and AMD systems, as well as on Apple Silicon (M1). For the compilers, various versions of GCC (12.x, 13.x), Clang (15, 17), and MSVC (17) were employed, with rather interesting results for the final versus no-final tests. Clang was probably the biggest surprise: with the keyword added, the performance of Clang-generated code absolutely tanked. MSVC was a mixed bag, as were the GCC versions other than GCC 13.2 on AMD Ryzen, which saw a speedup of a few percent.

Ultimately, it seems that, as usual, there’s no free lunch, and adding final to your code falls distinctly under ‘only use it if you know what you’re doing’. As things stand, the resulting behavior seems wildly inconsistent across compilers and platforms.

Programming Ada: First Steps On The Desktop

Who doesn’t want to use a programming language that is designed to be reliable, straightforward to learn and also happens to be certified for everything from avionics to rockets and ICBMs? Despite Ada’s strong roots and impressive legacy, it has the reputation among the average hobbyist of being ‘complicated’ and ‘obscure’, yet this couldn’t be further from the truth, as previously explained. In fact, anyone who has some or even no programming experience can learn Ada, as the very premise of Ada is that it removes complexity and ambiguity from programming.

In this first part of a series, we will be looking at getting up and running with a basic desktop development environment on Windows and Linux, and we’ll run through some Ada code to get familiarized with the basic principles of the language and its syntax. As for the Ada version used, we will be targeting Ada 2012, as the newer Ada 2022 standard was only just approved in 2023 and doesn’t change anything significant for our purposes.

Continue reading “Programming Ada: First Steps On The Desktop”

Mapping The Nintendo Switch PCB

As electronics have advanced, they’ve not only gotten more powerful but smaller as well. This shrinkage is great for portability and speed, but it can make things like repair inaccessible to those of us with only a simple soldering iron. Even simply figuring out what modern PCBs do is beyond most of our abilities due to their ever-smaller components. Thankfully, however, [μSoldering] has spent their career around state-of-the-art soldering equipment, working on intricate PCBs with tiny surface-mount components, and was just the person to document a complete netlist of the Nintendo Switch through meticulous testing, a special camera, and a lot of very small wires.

The first part of reverse-engineering the Switch is generating images of the PCBs. These images are taken at an astonishing 6,000 PPI and as a result are incredibly large files, but with that level of detail the process starts to come together. From there, a special piece of software allows point-and-click work on the images to start piecing the puzzle together, and with an idea of where everything goes, the build moves into the physical world.

[μSoldering] removes all of the parts on the PCBs with hot air, then meticulously wires them back up using a custom PCB that allows each connection to be hooked up and checked one by one. With everything working the way it is meant to, a completed netlist documenting every single connection on the Switch hardware can finally be assembled.

The final documentation includes over two thousand photos and almost as many individual wires, with over 30,000 solder joints. It’s an impressive body of work that [μSoldering] hopes will help others working with this hardware while at the same time keeping their specialized skills up to date. We also have fairly extensive documentation about some of the Switch’s on-board chips, further expanding our body of knowledge on how these gaming consoles work and how they’re put together.

Ask Hackaday: What About Imperfect Features?

Over the last few years, I’ve been seeing sparks of an eternal discussion here and there. It’s a nuanced one, but if I could summarize, it’s about the different feature development strategies we can follow when designing things, especially things aimed at a larger market. Specifically – when adding a feature, how complete and perfect should it be?

A while back, I read a Mastodon thread about VLC not implementing backwards per-frame skipping. At the surface level, it’s about an indignant user asking – what’s the deal with VLC not having a “go back a frame” button? A ton of video players have this feature implemented. There’s a forum thread linked, and reading it could leave you with a good few conflicting emotions. Here’s a recap.

In what appears to be one of multiple threads asking about a ‘previous frame’ button in VLC, there’s an 82-post discussion involving multiple different VLC developers. The users’ argument is that adding a ‘previous frame’ button appears to be clearly possible in practice, while the developers’ argument is that it’s technically complex to implement in some cases – and for certain formats, even impossible. Let’s go into the developers’ stated reasoning in more detail, then – here’s what you can find in the thread, to the best of my ability.
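For a flavor of why ‘go back one frame’ is harder than it sounds with inter-frame compression, here’s a minimal sketch of the usual workaround: seek back to a keyframe, then decode forward again. This is an illustration of the general problem, not VLC’s code – the decoder interface below is invented for the example:

```cpp
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

// Hypothetical, simplified decoder -- invented for illustration, not VLC's API.
struct Frame { int64_t pts; bool keyframe; };

struct FakeDecoder {
    std::vector<Frame> frames;   // stand-in for a compressed stream
    size_t pos = 0;

    // Jump to the nearest keyframe strictly before `pts`. A real player needs
    // a seek index for this; without one, precise backstepping may be impossible.
    bool seek_to_keyframe_before(int64_t pts) {
        for (size_t i = frames.size(); i-- > 0;) {
            if (frames[i].keyframe && frames[i].pts < pts) { pos = i; return true; }
        }
        return false;
    }
    std::optional<Frame> decode_next() {
        if (pos >= frames.size()) return std::nullopt;
        return frames[pos++];
    }
};

// Step back one frame: with inter-frame compression you can't decode frame N-1
// from frame N, so re-enter the stream at the previous keyframe and decode
// forward, keeping the last frame seen before the current one.
std::optional<Frame> step_back_one_frame(FakeDecoder& dec, int64_t current_pts) {
    if (!dec.seek_to_keyframe_before(current_pts)) return std::nullopt;
    std::optional<Frame> prev;
    while (auto f = dec.decode_next()) {
        if (f->pts >= current_pts) break;   // reached where we started
        prev = *f;                          // latest frame earlier than current
    }
    return prev;                            // cost scales with keyframe spacing
}

int main() {
    FakeDecoder dec{{{0, true}, {40, false}, {80, false}, {120, true}, {160, false}}};
    if (auto f = step_back_one_frame(dec, 160)) {
        std::printf("previous frame pts = %lld\n", static_cast<long long>(f->pts));
    }
}
```

Each backward step costs a seek plus re-decoding everything since the last keyframe, and streams without a usable index can’t be stepped through precisely at all – which lines up with the ‘complex or even impossible’ framing above.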

Continue reading “Ask Hackaday: What About Imperfect Features?”

Hardware Should Lead Software, Right?

Once upon a time, about twenty years ago, there was a Linux-based router, the Linksys WRT54G. Back then, the number of useful devices running embedded Linux was rather small, and this was a standout. At the time, getting a hacker device that wasn’t a full-fledged computer onto a WiFi network was also fairly difficult. This one relatively inexpensive WiFi router got you both in one box, so it was no surprise that we saw rovers with WRT54Gs as their brains, among other projects.

Long Live the WRT54G

Of course, some people just wanted a better router, and thus the OpenWRT project was born as a minimal Linux system that let you do fancy stuff with the stock router. Years passed, OpenWRT was ported to newer routers, and features were added. The software grew, and as far as we know, current versions won’t even run in the minuscule RAM of the original hardware that gave the project its name.

Enter the somewhat ironic proposal: OpenWRT – the free software project that developed its code on a long-gone purple box – is now developing its own hardware. Normally, we think of the development flow going the other way, right? But there’s a certain logic here as well. The software stack is now tried-and-true. They’ve got brand recognition, at least within the Hackaday universe. And in comparison, developing some known-good hardware to work with it is relatively easy.

We’re hardware hobbyists, and for us it’s often the case that the software is the hard part. It’s also the part that can make or break the user experience, so getting it right is crucial. On our hacker scale, we often choose a microcontroller to work with a codebase or tools that we want to use, because it’s easier to move some wires around on a PCB than it is to re-jigger a software house of cards. So maybe OpenWRT’s router proposal isn’t backwards after all? How many other examples of hardware designed to fit into existing software ecosystems can you think of?

Niklaus Wirth with the Lilith personal computer that he developed in the 1970s. (Photo: ETH Zurich)

Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many

Although perhaps not as much of a household name as other pioneers of last century’s rapid evolution of computer hardware and the software running on it, Niklaus Wirth’s contributions put him right up there with the other giants. A very familiar face both at ETH Zurich in his native Switzerland and at Stanford and other locations around the world where computer history was written, Niklaus not only gave us Pascal and Modula-2, but also inspired countless other languages as well as their developers.

Sadly, Niklaus Wirth passed away on January 1st, 2024, at the age of 89. Until his death, he continued to work on the Oberon programming language and its associated operating systems: the Oberon System and the multi-process, SMP-capable A2 (Bluebottle) operating system that runs natively on x86, x86_64, and ARM hardware. He leaves behind a legacy that stretches from the 1960s to today, and it’s hard to think of any aspect of modern computing that wasn’t in some way influenced or directly improved by Niklaus.

Continue reading “Remembering Niklaus Wirth: Father Of Pascal And Inspiration To Many”

Getting Root Access On A Tesla

A growing number of manufacturers are locking perfectly good hardware behind arbitrary software restrictions. While this ought to be a bigger controversy, people seem to keep paying for things like printers with ink subscriptions, cameras with features disabled in firmware, or routers with speed restrictions, ensuring that the practice continues. Perhaps the most blatant offenders are car manufacturers that lock features such as heated seats or even performance upgrades behind software in the hopes of securing a higher price for their vehicles. This might be a thing of the past for Teslas, whose software has recently been unlocked by IT researchers in Berlin.

Researchers from Technische Universität Berlin were able to unlock Tesla’s driving assistant by inducing a two-microsecond voltage drop on the processor, which allowed root access to the Autopilot software. This gave them access to the full self-driving mode – referred to as “Elon mode”, since it drops the requirement for the driver to keep their hands on the steering wheel – allowing autonomous driving without driver input. Although this might be a bad idea given the real-world performance of “full self-driving”, the hack at least demonstrates a functional attack point, and similar methods could provide free access to other premium features.
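As a rough illustration of the general technique – voltage fault injection – here’s a conceptual sketch. This is not the TU Berlin team’s actual tooling; the rail-control callbacks are hypothetical stand-ins for the MOSFET/GPIO hardware a real glitcher would drive, and the timing would come from a hardware timer rather than sleep calls:

```cpp
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>

// Conceptual voltage-glitch controller: after a trigger (e.g. the target
// starting to boot), wait a calibrated delay, then briefly pull the core
// voltage rail low and release it, hoping the CPU mis-executes a check.
struct GlitchParams {
    std::chrono::microseconds delay_after_trigger; // when in the boot to strike
    std::chrono::microseconds glitch_width;        // how long to drop the rail
};

void fire_glitch(const GlitchParams& p,
                 const std::function<void()>& drop_rail,
                 const std::function<void()>& restore_rail) {
    std::this_thread::sleep_for(p.delay_after_trigger);
    drop_rail();                                   // induce the brief brown-out
    std::this_thread::sleep_for(p.glitch_width);   // ~2 us in the reported attack
    restore_rail();                                // let the CPU continue running
}

int main() {
    // Hypothetical numbers; a real attack sweeps delay and width until the
    // target misbehaves in a useful way.
    GlitchParams p{std::chrono::microseconds{1500}, std::chrono::microseconds{2}};
    fire_glitch(p,
                [] { std::puts("rail LOW"); },
                [] { std::puts("rail HIGH"); });
}
```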

While the attack requires physical access to the vehicle’s computer and a well-equipped workbench, in the short term this method might allow vehicle owners to use the hardware they own however they’d like, and in the long term it may make strides towards convincing manufacturers that “features as a service” isn’t a profitable strategy. Perhaps that’s optimistic, but at least for Teslas it’s been shown that they’re not exactly the most secure system on four wheels.