Embedded USB Debug For Snapdragon

According to [Casey Connolly], Qualcomm’s release of instructions on how to interact with their embedded USB debugger (EUD) is a big deal. If you haven’t heard of it, nearly all Qualcomm SoCs made since 2018 have a built-in debugger that connects to the onboard USB port. The details vary by chip, but the gist is that you write to a few registers and start up the USB PHY. This gives you an oddball USB interface that looks like a seven-port hub with a single device attached: the “EUD control interface.”

So what do you do with that? You send a few USB commands, and you’ll get a second device. This one exposes an SWD interface, and of course we have plenty of tools that can debug over SWD.
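The register-poking step is SoC-specific, but conceptually it’s just a memory write or two. Here’s a minimal sketch of what that bring-up might look like from Linux userspace; the base address and register offset below are hypothetical placeholders, not values from Qualcomm’s documentation.

```c
/* Minimal sketch: poking a (hypothetical) EUD enable register from Linux
 * userspace via /dev/mem. EUD_BASE and EUD_EN_OFF are placeholders; the
 * real addresses vary by SoC and come from Qualcomm's documentation. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define EUD_BASE   0x088E0000UL /* hypothetical EUD block base address */
#define EUD_EN_OFF 0x0          /* hypothetical enable register offset */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *eud = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, EUD_BASE);
    if (eud == MAP_FAILED) { perror("mmap"); return 1; }

    /* Set the enable bit; on real silicon this brings up the EUD USB PHY
     * and the seven-port-hub device appears on the bus. */
    eud[EUD_EN_OFF / 4] = 1;

    munmap((void *)eud, 0x1000);
    close(fd);
    return 0;
}
```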

Continue reading “Embedded USB Debug For Snapdragon”

Dithering With Quantization To Smooth Things Over

It should probably come as no surprise to anyone that the images we look at every day – whether printed or on a display – are simply illusions. That cat picture isn’t actually a cat, but rather a collection of dots that, when viewed from far enough away, tricks our brain into thinking that we are indeed looking at a two-dimensional cat while it happily fills in the blanks. These dots can use the full CMYK color model for prints, RGB(A) for digital images, or a more limited color space, including greyscale.

Perhaps more interesting is the use of dithering to further trick the mind into seeing things that aren’t truly there, by adding noise. Simply put, dithering is the process of adding noise to reduce quantization error, which in images shows up as artefacts like color banding. Dithering is used within the field of digital audio as well, for similar reasons. Part of the process of going from an analog signal to a digital one involves throwing away the data that the chosen sampling rate and quantization depth cannot represent.

Adding dither noise smooths out these quantization errors, with the final effect depending on the dithering algorithm used.
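To make that concrete, here’s a minimal sketch of one classic dithering algorithm, Floyd–Steinberg, in C: it quantizes an 8-bit greyscale image down to pure black and white while pushing each pixel’s quantization error onto its not-yet-processed neighbors. The buffer layout and function names are our own choices for illustration.

```c
#include <stdint.h>

static uint8_t clamp8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* Quantize an 8-bit greyscale image (row-major, w*h bytes) to black/white,
 * diffusing each pixel's quantization error to its unprocessed neighbors
 * with the classic Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16. */
void dither_1bit(uint8_t *img, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int old = img[y * w + x];
            int quant = old < 128 ? 0 : 255;
            int err = old - quant;      /* the detail we'd otherwise lose */
            img[y * w + x] = (uint8_t)quant;

            if (x + 1 < w)
                img[y * w + x + 1] = clamp8(img[y * w + x + 1] + err * 7 / 16);
            if (y + 1 < h) {
                if (x > 0)
                    img[(y + 1) * w + x - 1] =
                        clamp8(img[(y + 1) * w + x - 1] + err * 3 / 16);
                img[(y + 1) * w + x] =
                    clamp8(img[(y + 1) * w + x] + err * 5 / 16);
                if (x + 1 < w)
                    img[(y + 1) * w + x + 1] =
                        clamp8(img[(y + 1) * w + x + 1] + err * 1 / 16);
            }
        }
    }
}
```

The audio version of the trick is analogous: add a little noise before truncating samples to a lower bit depth, and the quantization distortion that would otherwise correlate with the signal turns into benign broadband hiss.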

Continue reading “Dithering With Quantization To Smooth Things Over”

Programming Like It’s 1986, For Fun And Zero Profit

Some people slander retrocomputing as an old man’s game, just because most of those involved are more ancient than the hardware they’re playing with. But there are veritable children involved too — take [ComputerSmith], who is recreating Conway’s Game of Life on a Macintosh Plus that could very well be as old as his parents. If there’s any nostalgia here, it’s at least a generation removed — thus proving to the haters that there’s more to exploring these ancient machines than a misplaced desire to relive one’s youth.

So what does a young person get out of programming on a 1980s Mac? Well, aside from internet clout and possible YouTube monetization, there’s the sheer intellectual challenge of the thing. You can’t go sniffing around StackExchange or LLMs for code to copy-paste when writing C for a 1986 machine, not if you’re going to be fully authentic. ANSI C only dates to 1987, after all, and figuring out the quirks and foibles of the specific C implementation is both half the fun and not easily outsourced. Object Pascal would also have been an option (and quite likely more straightforward — at least the language was clearly defined), but [ComputerSmith] seems to think the exercise will improve his chops with C, and he’s likely to be right.
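For flavor, here’s what the heart of such a program might look like in plain C. This is our own sketch, not [ComputerSmith]’s code, and a true 1986 toolchain would want K&R-style function definitions rather than these prototypes, but the logic carries over.

```c
/* A toy Game of Life core: count live neighbors on a wrapping grid and
 * apply Conway's rules to produce the next generation. The grid size is
 * an arbitrary placeholder. */
#define W 64
#define H 64

static unsigned char grid[H][W], next[H][W];

static int neighbors(int y, int x)
{
    int dy, dx, n = 0;
    for (dy = -1; dy <= 1; dy++)
        for (dx = -1; dx <= 1; dx++)
            if (dy || dx)
                n += grid[(y + dy + H) % H][(x + dx + W) % W];
    return n;
}

void step(void)
{
    int y, x;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            int n = neighbors(y, x);
            /* Live cells survive with 2 or 3 neighbors; dead cells come
             * alive with exactly 3. */
            next[y][x] = grid[y][x] ? (n == 2 || n == 3) : (n == 3);
        }
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            grid[y][x] = next[y][x];
}
```

On the Mac Plus, the interesting part is then blitting that grid to the 512×342 one-bit screen via QuickDraw, which is exactly where the period-specific quirks come in.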

Apparently [ComputerSmith] brought this project to VCF Southwest, so anyone who was there doesn’t have to wait for Part 2 of the video to see how this turns out, or to snag a copy of the code (which was apparently available on diskette). If you were there, let us know if you spotted the youngest Macintosh Plus programmer, and if you scored a disk from him.

If the idea of coding in this era tickles the dopamine receptors, check out this how-to for a prizewinning Amiga demo. If you think pre-ANSI C isn’t retro enough, perhaps you’d prefer programming by card?

Continue reading “Programming Like It’s 1986, For Fun And Zero Profit”

Going To The (Parallel) Chapel

There is always the promise of using more computing power for a single task. Your computer has multiple CPU cores now, surely. Your video card has even more. Your computer is probably networked to a slew of other computers. But how do you write software to take advantage of all that? There are many complex systems, of course, but there’s also Chapel.

Chapel is a reasonably simple programming language, but it supports parallelism in various forms. The runtime controls how compute resources — whatever form they take — communicate with one another. You can have code running on your local CPUs, your GPU, and other processing elements over the network without much work on your part.
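For contrast, here’s roughly what spreading even a trivial loop across local cores looks like in C with OpenMP; Chapel expresses the same intent with a one-line forall, and its runtime can extend that across GPUs and networked locales.

```c
/* Data-parallel sum across local CPU cores using OpenMP, the kind of
 * boilerplate Chapel's forall hides. Build with: cc -fopenmp sum.c */
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void)
{
    int i;
    double sum = 0.0;

    for (i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Split the loop across threads and combine per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```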

Continue reading “Going To The (Parallel) Chapel”

Why GitHub Copilot Isn’t Your Coding Partner

These days ‘AI’ is everywhere, including in software development. Coming hot on the heels of approaches like eXtreme Programming and Pair Programming, there’s now a new kind of pair programming in town in the form of an LLM that’s been digesting millions of lines of code. Purportedly designed to help developers program faster and more efficiently, these ‘AI programming assistants’ have primarily led to heated debate and some interesting studies.

In the case of [Jj], their undiluted feelings towards programming assistants like GitHub Copilot burn as brightly as the fire of a thousand suns, and not a happy kind of fire.

Whether it’s Copilot or ChatGPT or some other chatbot that may or may not be integrated into your IDE, the frustration with what often feels like StackOverflow-powered autocomplete is something that many of us can likely sympathize with. Although [Jj] lists a few positives of using an LLM trained on codebases and documentation, their overall view is that using Copilot degrades a programmer, mostly because of how it takes critical thinking skills out of the loop.

Regardless of whether you agree with [Jj] or not, the research so far on using LLMs with software development and other tasks strongly suggests that they’re not a net positive for one’s mental faculties. It’s also important to note that at the end of the day it’s still you, the fleshy bag of mostly salty water, who has to justify the code during code review and when something catches on fire in production. Your ‘copilot’ meanwhile gets off easy.

C++ Encounters Of The Rusty Zig Kind

There comes a time in any software developer’s life when they look at their achievements, the lines of code written, and the programming languages they have relied on, and wonder whether there may be more out there. A programming language and its associated toolchains begin to feel like familiar, well-worn tools after years of use, but that is no excuse to remain rusted in place.

While some developers like to zigzag from one language and toolset to another, others are more conservative. My own journey took me from a childhood with QuickBASIC and Visual Basic to C++, with a bit of Java, PHP, JavaScript, D, and others along the way. Although I have focused on C++ for years now, I’m currently also getting the hang of Ada; both languages tickle my inner developer in different ways.

Although Java and D never quite lived up to their lofty promises, there are always new languages to investigate, with both Rust and Zig in particular getting a lot of attention these days. Might they be the salvation that was promised to us C-afflicted developers, and do they make you want to zigzag or ferrously oxidize?

Continue reading “C++ Encounters Of The Rusty Zig Kind”

Data Visualization And Aggregation: Time Series Databases, Grafana And More

If there’s one thing that characterizes the Information Age that we find ourselves in today, it is streams of data. However, without proper ways to aggregate and transform this data into information, it’ll either vanish into the ether or become binary blobs gathering virtual dust on a storage device somewhere. Dealing with these streams of data is thus essential, whether it’s in business (e.g. stock markets), IT (e.g. service status), weather forecasting, or simply keeping track of the climate and status of devices inside a domicile.

The first step, aggregating the data, seems simple, but the goal isn’t merely to record it. Rather than just writing it to a storage device until the space runs out, like a poorly managed system log, we want to make it searchable. After all, transforming data into information requires being able to efficiently search and annotate it, which means keeping track of context and using data structures that lend themselves to this.
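As a toy illustration of why the storage layout matters, consider samples kept in timestamp order: a time-range query then starts with a binary search rather than a full scan. This is a from-scratch sketch with made-up names, not how any particular time series database organizes its data.

```c
/* Samples appended in time order stay sorted by timestamp, so range
 * queries (e.g. "everything after t0") reduce to a binary search plus
 * a linear scan of the matching tail. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int64_t ts;     /* unix timestamp, seconds */
    double  value;  /* the measurement, e.g. a temperature reading */
} sample_t;

/* Return the index of the first sample with ts >= t (lower bound). */
static size_t first_at_or_after(const sample_t *s, size_t n, int64_t t)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (s[mid].ts < t) lo = mid + 1; else hi = mid;
    }
    return lo;
}

int main(void)
{
    sample_t series[] = {
        {1700000000, 21.5}, {1700000060, 21.7}, {1700000120, 21.6},
        {1700000180, 21.9}, {1700000240, 22.1},
    };
    size_t n = sizeof series / sizeof series[0];

    /* Range query: print everything from t0 onward. */
    int64_t t0 = 1700000100;
    for (size_t i = first_at_or_after(series, n, t0); i < n; i++)
        printf("%lld %.1f\n", (long long)series[i].ts, series[i].value);
    return 0;
}
```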

For such data aggregation, and for the subsequent visualization of information on the flashy dashboards that people like to flaunt, there are a few mainstream options. Among ‘smart home’ users, InfluxDB and Grafana pop up often, but these are far from the only options, and depending on the environment there are much more suitable solutions.

Continue reading “Data Visualization And Aggregation: Time Series Databases, Grafana And More”