Bringing Back The CRT TV Experience In Software

Cathode-Retro is a collection of shaders and sample C++ code for reliving the glorious days when graphics were composite video signals displayed on a CRT screen. How? By faking it in software and providing more configuration options than any authentic setup ever had.

Love it or don’t, there’s nothing quite like it.

Not satisfied with creating CRT-style color images with optional scanlines and TV picture controls like tint and saturation, Cathode-Retro can emulate more nuanced elements as well.

The tool can imitate the slight distortion of a period-correct curved screen, the subtle differences between the phosphor-mask technologies CRTs actually used (shadow mask vs. aperture grille), and even the way light refracts imperfectly through the glass face of the tube. There are also options for adding noise and ghosting, which may spark some artistic ideas.
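To get a sense of what's going on under the hood, here's a minimal CPU-side sketch of two of the classic effects, scanline darkening and curved-screen distortion. To be clear, this is not Cathode-Retro's actual code, just the flavor of per-pixel math its shaders perform on the GPU:

// A CPU-side sketch of two classic CRT effects: scanline darkening and
// curved-screen distortion. Illustrative only; not Cathode-Retro's code.
#include <cmath>
#include <cstdint>
#include <vector>

struct Image {
    int width;
    int height;
    std::vector<uint8_t> rgb;  // 3 bytes per pixel, row-major
};

Image crtify(const Image& src, float curvature = 0.08f, float scanDarken = 0.35f) {
    Image dst{src.width, src.height, std::vector<uint8_t>(src.rgb.size(), 0)};
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            // Map to [-1, 1] and push the sample point outward with radius,
            // approximating the bulge of a curved glass tube.
            float u = 2.0f * x / src.width - 1.0f;
            float v = 2.0f * y / src.height - 1.0f;
            float r2 = u * u + v * v;
            int sx = (int)((u * (1.0f + curvature * r2) + 1.0f) * 0.5f * src.width);
            int sy = (int)((v * (1.0f + curvature * r2) + 1.0f) * 0.5f * src.height);
            if (sx < 0 || sx >= src.width || sy < 0 || sy >= src.height)
                continue;  // off the edge of the "tube": leave black
            // Darken every other row to fake the gaps between scanlines.
            float scan = (y % 2 != 0) ? 1.0f - scanDarken : 1.0f;
            for (int c = 0; c < 3; ++c)
                dst.rgb[(y * src.width + x) * 3 + c] =
                    (uint8_t)(src.rgb[(sy * src.width + sx) * 3 + c] * scan);
        }
    }
    return dst;
}

The real thing does all of this per-fragment on the GPU, of course, along with the mask patterns, refraction, and signal-level effects mentioned above.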

If all you need is software to recreate an old-school CRT terminal, we have you covered. But if your needs are a bit more low-level, Cathode-Retro might be what you’re missing.

OpenGL Machine Learning Runs On Low-End Hardware

If you’ve looked into GPU-accelerated machine learning projects, you’re certainly familiar with NVIDIA’s CUDA architecture. It also follows that you’ve checked the prices online, and know how expensive it can be to get a high-performance video card that supports this particular brand of parallel programming.

But what if you could run machine learning tasks on a GPU using nothing more exotic than OpenGL? That’s what [lnstadrum] has been working on for some time now, as it would allow devices as meager as the original Raspberry Pi Zero to run tasks like image classification far faster than they could using their CPU alone. The trick is to break down your computational task into something that can be performed using OpenGL shaders, which are generally meant to push video game graphics.
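To make the trick concrete, here's a sketch of what a single 3x3 convolution, the basic building block of an image-classification network, might look like as a GLES 2.0 fragment shader stored as a string in the host C++ program. This is purely illustrative and isn't Beatmup's actual shader code; the idea is that the GPU runs this little program for every output pixel in parallel, reading the previous layer's activations from a texture:

// Illustrative sketch: one 3x3 convolution as a GLES 2.0 fragment shader.
static const char* kConv3x3Fragment = R"(
precision mediump float;

varying vec2 texCoord;            // output pixel position, in texture coords
uniform sampler2D inputTexture;   // previous layer's activations
uniform vec2 texelSize;           // 1.0 / texture dimensions
uniform float kernel[9];          // 3x3 convolution weights
uniform float bias;

void main() {
    float acc = bias;
    // Constant loop bounds: GLES 2.0 has no compute shaders, so everything
    // must fit the fragment pipeline's restrictions (unrollable loops, etc.).
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            vec2 offset = vec2(float(dx), float(dy)) * texelSize;
            float v = texture2D(inputTexture, texCoord + offset).r;
            acc += v * kernel[(dy + 1) * 3 + (dx + 1)];
        }
    }
    // ReLU activation; the result is written out as a color channel and
    // becomes the input texture for the next layer's draw call.
    gl_FragColor = vec4(max(acc, 0.0), 0.0, 0.0, 1.0);
}
)";

Weights go in as uniforms, activations as textures, and each layer of the network becomes one full-screen draw call. Slow compared to CUDA, but it runs on nearly anything with a GPU.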

An example of X2’s neural net upscaling.

[lnstadrum] explains that OpenGL releases from the last decade or so actually include so-called compute shaders specifically for running arbitrary code. But unfortunately that’s not an option on boards like the Pi Zero, which only meets the OpenGL for Embedded Systems (GLES) 2.0 standard from 2007.

Constructing the neural net so it would run within these tighter constraints was considerably more difficult, but the payoff is a far wider range of interesting applications. During testing, both the Raspberry Pi Zero and several older Android smartphones were able to run a pre-trained image classification model at a respectable rate.

This isn’t just some thought experiment: [lnstadrum] has released an image processing framework called Beatmup built on these concepts that you can play around with right now. The C++ library has Java and Python bindings, and according to the documentation, should run on pretty much anything. Included in the framework is a simple tool called X2 which can perform AI image upscaling on everything from your laptop’s integrated video card to the Raspberry Pi, making it a great way to check out this fascinating application of machine learning.

Truth be told, we’re a bit behind the ball on this one, as Beatmup made its first public release back in April of this year. It might have flown under the radar until now, but we think there’s a lot of potential for this project, and hope to see more of it once word gets out about the impressive results it can wring out of even the lowliest hardware.

[Thanks to Ishan for the tip.]

GPU Turned Into Radio Transmitter To Defeat Air-Gapped PC

Another week, another exploit against an air-gapped computer. And this time, the attack is particularly clever and pernicious: turning a GPU into a radio transmitter.

The first part of [Mikhail Davidov] and [Baron Oldenburg]’s article is a review of the basics of exploring a computer’s RF emissions using software-defined radio (SDR) dongles. Most readers can safely skip ahead a bit to section 9, which gets into the process they used to sniff for potentially compromising RF leaks from an air-gapped test computer. After finding a few weak signals in the gigahertz range and dismissing them as attack vectors due to their limited penetration potential, they settled on the graphics card, a Radeon Pro WX3100, and specifically on its AMD GPU’s power management features.

With a GPU benchmarking program running, they switched the graphics card’s shader clock between its two lowest power settings, which produced a strong signal on the SDR waterfall at 428 MHz. They were able to receive this signal up to 50 feet (15 meters) away, perhaps to the annoyance of nearby hams, as this is plunk in the middle of the 70-cm band. That’s theoretically enough to exfiltrate data, but at a painfully low bitrate. So they improved the exploit by forcing the GPU driver to step the shader clock frequency in one-megahertz increments, allowing them to implement higher-throughput encoding schemes. You can hear the change in the signal caused by different graphics being displayed in the video below; one doesn’t need much imagination to see how malware could leverage this to exfiltrate pretty much anything on the computer.
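For the curious, here's a conceptual C++ sketch of how such a covert channel could be keyed on Linux using the standard amdgpu sysfs interface, hopping the shader clock between two levels at one bit per interval. The clock levels, bit period, and overall scheme are our assumptions for illustration, not the researchers' actual tooling:

// Conceptual sketch of the covert channel: hop the GPU's shader clock
// between two power states to shift its RF emission, one bit per interval.
// The sysfs path is the standard Linux amdgpu interface; everything else
// here is an illustrative assumption. Requires root, with the card's
// power_dpm_force_performance_level set to "manual".
#include <chrono>
#include <fstream>
#include <string>
#include <thread>

void setShaderClockLevel(int level) {
    // Forces a specific shader-clock (sclk) state on amdgpu hardware.
    std::ofstream f("/sys/class/drm/card0/device/pp_dpm_sclk");
    f << level;  // e.g. 0 = lowest power state, 1 = the next one up
}

void transmitBits(const std::string& bits, int bitPeriodMs = 100) {
    for (char b : bits) {
        // A '1' hops the clock (shifting the emission); '0' holds it steady.
        setShaderClockLevel(b == '1' ? 1 : 0);
        std::this_thread::sleep_for(std::chrono::milliseconds(bitPeriodMs));
    }
    setShaderClockLevel(0);  // park back in the idle state
}

int main() {
    transmitBits("10110010");  // one byte, readable on a nearby SDR waterfall
    return 0;
}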

It’s a fascinating hack, and hats off to [Davidov] and [Oldenburg] for revealing this weakness. We’ll have to throw this on the pile with all the other side-channel attacks [Samy Kamkar] covered in his 2019 Supercon talk.
