[Alessandro Carminati] spends his days hacking Linux kernels, and to that end needed a decent compilation machine to chew through the builds. One day this machine refused to boot, leaving some head-scratching to do, and on recalling the motherboard diagnostic procedures of old, he realized they weren’t going to work on this modern board. You see, older ISA-based systems were much simpler, with diagnostic POST codes accessible by sniffing the bus with an appropriate card inserted, but a modern motherboard doesn’t even expose that bus anymore.
See “out 0x80, al” in there? That’s a POST code being written
Do modern machines even run POST at all, or are there other standards? After firing up a Linux machine and dumping the first megabyte of the memory address space, [Alessandro] found it clearly contained some of the BIOS code. A disassembly of the BIOS update image showed a similar structure, with POST code data sent to port 0x80 just like the machines of old.
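If you fancy poking at this yourself, here is a minimal sketch (not from [Alessandro]'s write-up) that replicates the BIOS's "out 0x80, al" from userspace via Linux's /dev/port interface. It needs root, assumes an x86 machine, and nothing visible happens unless a POST card is actually listening:

```python
# Replicate the BIOS's "out 0x80, al" by poking a byte at I/O port 0x80
# through /dev/port (requires root; harmless on x86, port 0x80 is the
# traditional POST/debug port). The value written here is arbitrary.
import os

POST_PORT = 0x80
code = 0x55                                # test value, not a real BIOS POST code

fd = os.open("/dev/port", os.O_WRONLY)
try:
    os.lseek(fd, POST_PORT, os.SEEK_SET)   # the file offset selects the I/O port
    os.write(fd, bytes([code]))            # a one-byte write performs one outb
finally:
    os.close(fd)

print(f"wrote POST code 0x{code:02X} to port 0x{POST_PORT:02X}")
```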
But instead of an ISA CPU bus, we have the Low Pin Count (LPC) bus, which is used to hook up the ‘Super I/O’ functions, controlling things such as fans, temperature sensors, and other system management features. It also serves as the connection for the TPM, which usually appears as one of the motherboard headers intended to be user-accessible. It turns out that POST codes can be picked off at this point with an appropriate POST card that can talk LPC.
[Shahriar] of The Signal Path is back with another fascinating video teardown and analysis for your viewing pleasure. (Embedded below.) This time the target is an Agilent E5052A 7 GHz Signal Source Analyser, an expensive piece of kit not many of us are fortunate enough to have on the bench. This particular unit is reported as faulty, with a signal power measurement that is wildly wrong, which leads one to distrust anything else the instrument reports.
After digging into the service manual of the related E5052B unit, [Shahriar] notes that the phase noise measurement section of the instrument is totally separate from the power measurement, connected only via some internal resistive power splitters, which simplifies debugging a lot. But first, a short segue into that phase noise measurement subsystem, because it’s really neat.
Cross-correlating time-gated FFT (TG-FFT) subsystem at the top, dodgy power detector at the bottom
A traditional swept-mode instrument works by mixing the input signal with a locally-sourced low-noise oscillator; the mixing product, once low-pass filtered, is fed into a power meter or digitizer. This, simply put, down-converts the signal to something easy to measure. The instrument then presents power or noise as a function of the local oscillator (LO) frequency, giving us the spectral view we require. All good, but this scheme has a big flaw: the noise of the LO is essentially added to that of the signal, producing a spectral noise floor below which signals cannot be resolved.
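For a feel of the down-conversion step, here is a quick numpy sketch (toy values, nothing to do with the instrument's actual architecture): mix an input tone with an LO, low-pass the product, and measure the power of the difference frequency that lands near DC.

```python
# Toy down-conversion: a 201 kHz tone mixed with a 200 kHz LO leaves a 1 kHz
# product after low-pass filtering, whose power we can then measure directly.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1e6                              # sample rate, Hz
t = np.arange(int(1e5)) / fs
f_sig, f_lo = 201e3, 200e3            # input tone and LO frequencies
sig = np.cos(2 * np.pi * f_sig * t)
lo  = np.cos(2 * np.pi * f_lo * t)

mixed = sig * lo                      # products at f_sig-f_lo (1 kHz) and f_sig+f_lo
b, a = butter(4, 10e3 / (fs / 2))     # low-pass keeps only the 1 kHz product
baseband = filtfilt(b, a, mixed)

power = np.mean(baseband ** 2)
print(f"down-converted tone power: {power:.3f} (expect ~0.125 for unit-amplitude inputs)")
```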
The E5052 instrument uses a cunning cross-correlation technique enabling it to measure phase noise levels below that of its own internal signal source. The instrument houses an Oven-Controlled Crystal Oscillator (OCXO) for high stability, in fact two of them from different vendors, one for each LO, mounted perpendicular to each other. The technique splits the input signal in half with a power splitter, then feeds the two halves into identical (apart from the LOs) down-converters, the outputs of which are fed into a DSP via a pair of ADCs. Because both channels see the identical input signal but different LOs (with different phase noise spectra), the input's phase noise appears as a correlated component in both channels, while each LO's noise is uncorrelated between them; the different vendors and the perpendicular mounting also decorrelate the effects of chassis vibration and gravity.
The DSP averages the cross-correlation of the two channels, so the uncorrelated LO contributions average away while the correlated input noise remains, removing the individual LOs' effect on the measured phase noise spectrum. This clever technique results in a phase noise floor below that of the LOs themselves, and a good representation of the input signal being measured.
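As a toy illustration of why that works (this is not the E5052A's actual DSP, and all the numbers are made up): two channels share the same "DUT" noise, each adds its own independent "LO" noise, and averaging the cross-spectrum over many acquisitions beats the uncorrelated part down by roughly the square root of the number of averages.

```python
# Cross-spectrum averaging demo: the correlated DUT noise survives averaging,
# the uncorrelated per-channel LO noise does not.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_avg = 4096, 500
cross = np.zeros(n_samples // 2 + 1, dtype=complex)

for _ in range(n_avg):
    dut = 0.3 * rng.standard_normal(n_samples)   # common (correlated) noise
    ch1 = dut + rng.standard_normal(n_samples)   # plus LO1's own noise
    ch2 = dut + rng.standard_normal(n_samples)   # plus LO2's own noise
    cross += np.fft.rfft(ch1) * np.conj(np.fft.rfft(ch2))

cross /= n_avg
single = np.mean(np.abs(np.fft.rfft(ch1)) ** 2) / n_samples     # last acquisition
print(f"single-channel noise level : {single:.2f}  (DUT + LO noise)")
print(f"cross-spectrum noise level : {np.mean(np.abs(cross)) / n_samples:.2f}  (mostly DUT noise)")
```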
This is what a DC-7GHz resistive power divider looks like. Notice the inductive matching section before each resistor branch.
Handily for [Shahriar], this complex subsystem is totally separate from the dodgy power measurement. The second system is much simpler, being fed with another copy of the input signal via the main resistive power splitter. This second feed is then split again with a custom power divider, whose input SMA connector was clearly defective on visual inspection. It should not wobble. The root cause of the issue was a cold solder joint on a single SMA footprint, which had worked loose over time. A little reflow and reassembly, and the unit was fit for recalibration and back into service.
[Kevin] wanted to emulate the look and feel of the original TX816, developing a custom PCB to handle the user interface for four of the eight channels, and a second PCB acting as an interface to the Raspberry Pi, using a Pico. Also sitting on this PCB are the GY-PCM5102 I2S DAC and the MIDI connectors needed to connect to the system controller. Both PCBs, as well as a PCB-based front panel, were developed with KiCAD. The firmware for the Pico part of the system can be found on the firmware GitHub. The video demo (embedded below) shows off the system running a very 80s-sounding rendition of ‘Jupiter’ from Holst’s The Planets, and we all agree it sounds pretty sweet. For a complete rundown of the build, here are the links for the blog series for ease of access: Intro, PCBs, Panel, Build Guide, Mechanical, Pico/TX816 IO code, and finally usage. Phew!
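For flavour, and not taken from [Kevin]'s firmware, here is a minimal sketch of the sort of MIDI note-on/note-off parsing a TX816-style front end has to do before notes ever reach the synth engine:

```python
# Minimal MIDI byte-stream parser: handles only the two-data-byte note
# messages (note-on 0x9n, note-off 0x8n), including running status and the
# "note-on with velocity 0 means note-off" convention.
def parse_midi(stream):
    """Yield (event, channel, note, velocity) tuples from raw MIDI bytes."""
    status, data = None, []
    for byte in stream:
        if byte & 0x80:                       # a status byte starts a new message
            status, data = byte, []
            continue
        data.append(byte)
        if status is not None and len(data) == 2:
            kind, channel = status & 0xF0, status & 0x0F
            if kind == 0x90 and data[1] > 0:
                yield ("note_on", channel, data[0], data[1])
            elif kind == 0x80 or (kind == 0x90 and data[1] == 0):
                yield ("note_off", channel, data[0], data[1])
            data = []                         # keep 'status' for running status

# Example: note-on middle C on channel 1, then its note-off via running status
print(list(parse_midi([0x90, 60, 100, 60, 0])))
```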
The aero designers of the day were quickly finding out the limitations of the wind tunnel testing approach, especially for so-called transonic flow conditions. These occur when an object moving through a fluid (which is how air can be modeled) produces regions of supersonic flow mixed in with subsonic flow, creating additional sources of drag. This severely impacts aircraft performance, and not accounting for these effects is not an option, hence the great industry interest in CFD modeling. But the governing equations (usually based around the Navier-Stokes system) are non-linear and extremely computationally intensive to solve.
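To see where the computational pain comes from, here is a toy, emphatically not a transonic solver: the 1D viscous Burgers' equation, a classic cut-down model that shares the non-linear convection term with Navier-Stokes and steepens smooth flow into shock-like fronts. Even this trivial case needs thousands of small, stability-limited time steps; a real 3D aircraft simulation multiplies that by millions of grid cells.

```python
# 1D viscous Burgers' equation u_t + u*u_x = nu*u_xx on a periodic domain,
# advanced with a simple explicit finite-difference scheme.
import numpy as np

nx = 400
dx = 2 * np.pi / nx
nu, dt = 0.05, 0.001                       # viscosity and stability-limited time step
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
u = 1.5 + np.sin(x)                        # smooth initial velocity profile

for _ in range(2000):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # central du/dx
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # d^2u/dx^2
    u = u + dt * (-u * ux + nu * uxx)      # the non-linear u*du/dx term is the troublemaker

print("steepest velocity gradient after 2000 steps:",
      round(float(np.abs(np.gradient(u, dx)).max()), 2))
```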
Obviously, a certain Mr. Cray is a prominent player in this story, who, as the story goes, exhausted the financial tolerance of his employer, CDC, and subsequently formed Cray Research Inc, and the rest is (an interesting) history. Many Cray machines were instrumental in the development of the space program, and now adorn computing museums the world over. You simply haven’t lived until you’ve sipped your weak lemon drink whilst sitting on the ‘bench’ around an early Cray machine.
You see, supercomputers are a different beast from the machines mere mortals have access to, or at least the earlier ones were. The focus is on pure performance, ideally for floating-point computation, with cost far less of a concern than reaching the next computational milestone. The Cray-1, for example, was a 64-bit machine capable of around 80 MIPS of scalar performance (whilst eating over 100 kW of juice), with its real speed coming from vector processing, and only very limited parallelism beyond that.
While this was immensely faster than anything else available at the time, the modern approach to supercomputing is less about fancy processor design and more about massive parallelism of existing chips with lots of local fast storage mixed in. Every hacker out there should experience these old machines if they can, because the tricks they used and the lengths the designers went to in order to squeeze out every ounce of processing grunt can be a real eye-opener.
Using a Pico to drive a pair of AD767 12-bit DACs, the outputs of which drive the two ’scope input channels directly, this breadboard and pile-of-wires hack can produce some seriously impressive results. On the software side of things, the design follows a now-familiar pattern, with core0 running the application’s high-level processing and core1 acting in parallel as the rendering engine, determining the static DAC codes to be pushed out to the DACs using the DMA and the PIO.
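As an illustration only (not the project's firmware, and the names and scaling here are assumptions), this is roughly what "determining static DAC codes" amounts to: a rendering routine chops each vector stroke into a list of 12-bit X/Y code pairs ready to be streamed out to the two DACs.

```python
# Turn one XY vector stroke into a list of static 12-bit DAC code pairs by
# linear interpolation, clamped to the DAC's 0..4095 range.
def segment_to_codes(x0, y0, x1, y1, steps=64, full_scale=4095):
    """Interpolate a stroke from (x0, y0) to (x1, y1) into (x_code, y_code) pairs."""
    codes = []
    for i in range(steps + 1):
        t = i / steps
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        codes.append((max(0, min(full_scale, round(x))),
                      max(0, min(full_scale, round(y)))))
    return codes

# Example: one diagonal stroke across the screen (first few code pairs)
print(segment_to_codes(0, 0, 4095, 2048)[:4])
```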
3D printing by painting with light beams on a vat of liquid plastic was once the stuff of science fiction, but is now very much science fact. More than that, it’s consumer-level technology that we’re almost at the point of being blasé about. Scientists and engineers the world over have been quietly beavering away in their labs on the new hotness, nanoscale 3D printing, with varying degrees of success. Recently IEEE Spectrum reported on some promising work using holographic imaging to generate nanoscale structures at record speed.
Current stereolithography printers use a UV laser scanned over the bottom of a vat of UV-sensitive liquid photopolymer resin, which is chemically tweaked to make it sensitive to photons at the laser's wavelength. This is all fine, but as we know, this method is slow, can be of limited resolution, and has been largely superseded by LCD technology. Recent research has focussed on two-photon lithography, which uses a resin that is largely transparent to the wavelength of light concerned but can be polymerized given a high enough energy density, since the method requires multiple photons to be absorbed simultaneously. This is achieved by using pulsed-mode lasers focused to a very tight point, giving the required huge energy density. This tight focus, plus the ability to pass the beam through the vat of liquid, allows much tighter image resolution. But it is slow, painfully slow.
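A back-of-envelope sketch shows why pulsed lasers are the key here (all the numbers below are assumed examples, not from the research): the two-photon absorption rate scales with the square of intensity, so squeezing the same average power into femtosecond pulses gives an enormous boost.

```python
# Peak power and intensity of a femtosecond pulsed laser vs. its average power,
# using assumed example numbers. The two-photon rate (~ intensity squared) gains
# roughly a factor of 1/duty_cycle over a CW beam of the same average power.
avg_power_w  = 1.0         # assumed average laser power
pulse_s      = 100e-15     # assumed pulse duration (100 fs)
rep_rate_hz  = 80e6        # assumed repetition rate
spot_area_m2 = (1e-6) ** 2 # roughly a 1 um x 1 um focused spot

duty_cycle = pulse_s * rep_rate_hz
peak_power = avg_power_w / duty_cycle
peak_intensity = peak_power / spot_area_m2

print(f"duty cycle       : {duty_cycle:.1e}")
print(f"peak power       : {peak_power / 1e3:.0f} kW")
print(f"peak intensity   : {peak_intensity:.2e} W/m^2")
print(f"two-photon boost : ~{1 / duty_cycle:,.0f}x over CW at the same average power")
```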
Intaglio is an ancient carving technique for adding detail to a workpiece by manually removing material from its surface with only basic hand tools. If enough depth of material is removed, the resulting piece can be used as a stamp, as was the case with signet rings used to stamp the wax seals that authenticated letters. [Nicolas Tranchant] works in the jewelry industry, and wondered if he could press a CNC engraving machine into service to engrave gemstones in a more time-efficient manner than the manual carving methods of old.
Engraving and machining generally work only if the tool you are using is mechanically harder than the material of the workpiece. In this context, hardness is measured on the Mohs scale, a qualitative measure of the ability of one (harder) material to scratch another. Diamond sits at the top of the Mohs scale with a hardness of 10, so it can scratch the surface of, say, corundum (Mohs value 9), but not the other way around.
[Nicolas] shows the results of using a diamond-tipped CNC engraver on various gemstones typical of intaglio work, such as black onyx, malachite, and amethyst, with some details of the number of engraving passes needed and a visual comparison with the same materials carved traditionally.
Let’s be clear here: the traditional intaglio process produces deep grooves in the surface of the workpiece, and the results are different from this simple multi-pass engraving method, but limiting the CNC machine to purely metal engraving duties seemed a tad wasteful. Now, if they can only get a suitable machine for deeper engraving, then custom digitally engraved intaglio-style seal rings could be in for a comeback!
Intaglio isn’t just about jewelry, of course; the technique has been used in the printing industry for centuries. But to bring this back into our world, here’s a little something about making a simple printing press.