Building A Paper Tape Reader To Read Bytes

Over at the Usagi Electric farm, [David Lovett]’s custom 1-bit, vacuum tube-based computer (UEVTC for short) has been coming along well these past years, matching and exceeding the Motorola MC14500B 1-bit industrial control unit (ICU) that it is heavily inspired by. What is still missing, however, is a faster way to get data into the computer than manually toggling switches. The obvious choice is to make a (punched) paper tape reader, but how does one go about this, and what options exist? With a few historical examples as reference, and the tape reader on the impressive 1950s Bendix G-15 which he happens to have lounging around, [David] takes us in a new video through the spiraling complexity of what at first glance seems like a simple engineering challenge.

Photodiodes in the tape reader of the Bendix G-15. (Credit: David Lovett, Usagi Electric)

Punched paper tape saw significant use alongside punched cards and magnetic tape, and despite its low bit density, if acid-free paper (or e.g. mylar) is used, rolls of paper tape should remain readable for many decades. So how does one read these perforations in the paper? This can be done mechanically or optically, with the feed rate an important consideration in both cases.

Right off the bat the idea of a mechanical reader was tossed out due to tape wear, with [David] digging into his stack of photodetector tubes. After looking at a few rather clunky approaches involving such tubes, the photodiodes in the Bendix G-15’s tape reader were instead used as inspiration for a design. These are 1.8 mm diameter photodiodes, which aren’t super common, but have the nice property that they align exactly with the holes in the paper tape.

This left building a proof-of-concept on a breadboard with some incandescent bulbs and one of the photodiodes to demonstrate that a valid logic signal could be produced. This turned out to be the case, clearing the way for the construction of the actual tape reader, which will feature in upcoming videos.
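As a thought experiment for what comes after the analog front end, here is a minimal sketch of how a microcontroller could clock bytes off the tape: the small sprocket (feed) hole acts as a strobe, and the eight data photodiodes are sampled on each strobe. This is purely illustrative MicroPython; the pin numbers, the hole-equals-logic-1 polarity, and the use of the sprocket hole as the clock are our assumptions, not [David]’s actual design.

```python
# Hypothetical MicroPython sketch: read one 8-bit row from a paper tape
# whenever the sprocket (feed) hole passes over its photodiode.
# Pin numbers, polarity and the sprocket-as-clock idea are assumptions.
from machine import Pin

SPROCKET = Pin(2, Pin.IN)                                    # small feed hole: acts as a clock
DATA = [Pin(n, Pin.IN) for n in (3, 4, 5, 6, 7, 8, 9, 10)]   # eight data channels

def read_row():
    """Wait for the sprocket hole, then sample the eight data photodiodes."""
    while SPROCKET.value() == 0:     # wait for light through the feed hole
        pass
    byte = 0
    for bit, channel in enumerate(DATA):
        byte |= channel.value() << bit   # hole = light = logic 1 (assumed)
    while SPROCKET.value() == 1:     # wait for the hole to pass before the next row
        pass
    return byte

while True:
    print(hex(read_row()))
```

In a real reader the comparator thresholds, debouncing and the maximum feed rate would matter far more than this simple polling loop suggests.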


Reverse-Engineering The AMD Secure Processor Inside The CPU

On an x86 system, the BIOS used to be the first part of the system to become active along with basic CPU core functionality. That changed when Intel introduced its Management Engine (IME) and AMD its AMD Secure Processor (AMD-SP). These are low-level, trusted execution environments; in the case of the AMD-SP, a Cortex-A5 ARM processor works together with the Cryptographic Co-Processor (CCP) block in the CPU to perform basic initialization tasks that would previously have been associated with the (UEFI) BIOS, such as DRAM initialization, as well as loading encrypted (AGESA) firmware from the external SPI Flash ROM. Only once the AMD-SP environment has run through all of these initialization steps are the x86 cores allowed to start up.

In a detailed teardown over at the Dayzerosec blog, [Specter] details the AMD-SP’s elements, its memory map, and its integration into the rest of the CPU die, with a follow-up article covering the workings of the CCP. The latter is used by the AMD-SP itself and also forms part of the cryptographic hardware acceleration ISA offered to the OS. Security researchers are interested in the AMD-SP (and IME) because of the fascinating attack vectors they present: the IME has been the most heavily targeted, but the AMD-SP has its own vulnerabilities, including in related modules, such as an injection attack against AMD’s Secure Encrypted Virtualization (SEV).

Although both AMD and Intel are rather proud of how these bootstrapping systems enable TPM functionality, secure virtualization and so on, their added complexity and their invisibility to the operating system clearly come with serious trade-offs. With neither company willing to allow a security audit, it seems it’s up to security researchers to perform one by force.

For years, the first Air Force One sat neglected and forgotten in an open field at Arizona’s Marana Regional Airport. (Credit: Dynamic Aviation)

The First Air Force One And How It Was Nearly Lost Forever

Although the designation ‘Air Force One’ is now commonly known to refer to the airplane used by the President of the United States, it wasn’t until Eisenhower that the US President made significant use of a dedicated airplane. He had a Lockheed VC-121A kitted out to act as his office as commander-in-chief. Called the Columbine II after the Colorado columbine flower, it served a crucial role during the Korean War and would result in the coining of the ‘Air Force One’ designation following a near-disaster in 1954.

This involved a mix-up between Eastern Air Lines 8610 and Air Force 8610 (the VC-121A). After the Columbine II was replaced with a VC-121E model (Columbine III), the Columbine II was mistakenly sold to a private owner, and got pretty close to being scrapped.

In 2016, the plane made a “somewhat scary and extremely precarious” 2,000-plus-mile journey to Bridgewater, Virginia, to undergo a complete restoration. (Credit: Dynamic Aviation)

Although nobody is really sure how this mistake happened, it resulted in the private owner stripping the airplane for parts to keep other Lockheed C-121s and compatible airplanes flying. Shortly before scrapping the airframe, he received a call from the Smithsonian Institution informing him that this particular airplane was Eisenhower’s first presidential airplane and the first-ever Air Force One. This led to him instead fixing up the airplane and trying to sell it off. Ultimately [Karl D. Stoltzfus], CEO of the airplane maintenance company Dynamic Aviation, bought the partially restored airplane after it had spent another few years baking in the unrelenting sun.

Although the airplane was in a sorry state at this point, [Stoltzfus] put a team led by mechanic [Brian Miklos] to work, and after a year of effort they got it into flying condition by 2016, so that it could be flown over to Dynamic Aviation’s facilities for a complete restoration. At this point the ‘nuts and bolts’ restoration is mostly complete after a lot of improvisation and manufacturing of parts for the 80-year-old airplane, with restoration of the Eisenhower-era interior and exterior now in progress. This should take another few years and another $12 million or so, but would result in a fully restored and flight-worthy Columbine II, exactly as it would have looked in 1953, plus a few modern-day safety upgrades.

Although [Stoltzfus] recently passed away unexpectedly before being able to see the final result, his legacy will live on in the restored airplane, which after so many years will be able to meet up again with the Columbine III, now on display at the National Museum of the USAF.

Canadarm2 Scores Milestone With Catching Its 50th Spacecraft

Recently, Canada’s Canadarm2 caught its 50th spacecraft since 2009, in the form of a Northrop Grumman Cygnus cargo vessel. Although perhaps not the most prominent part of the International Space Station (ISS), the Canadarm2 performs a range of essential functions on the outside of the ISS, such as moving equipment around and supporting astronauts during EVAs.

Power and Data Grapple Fixture on the ISS (Credit: NASA)

Officially called the Space Station Remote Manipulator System (SSRMS), it is part of the three-part Mobile Servicing System (MSS) that allows the Canadarm2 and the Dextre unit to scoot around the non-Russian part of the ISS, attach to Power and Data Grapple Fixtures (PDGFs), and manipulate anything that has a compatible grapple fixture on it.

The MSS was not originally designed to catch spacecraft when it was installed in 2001 by Space Shuttle Endeavour during STS-100, but with the US moving away from the Space Shuttle to a range of unmanned supply craft that aren’t all capable of autonomous docking, this became a necessity, with the Japanese HTV (equipped with a grapple fixture) becoming the first craft to be caught this way in 2009. Since the Canadarm2 was originally designed to manipulate ISS modules this wasn’t such a major shift, and the MSS is soon set to also start building new space stations, when the first Axiom Orbital Segment is launched by 2026. This would become the Axiom Station.

With the Axiom Station planned to have its own Canadarm-like system, this will likely mean that Canadarm2 and the rest of the MSS will be decommissioned with the rest of the ISS by 2031.

Top image: Canadarm2 captures Cygnus OA-5 S.S. Alan Poindexter in late 2016 (Credit: NASA)

Edge-Lit, Thin LCD TVs Are Having Early Heat Death Issues

Canadian consumer goods testing site RTINGS has been subjecting 100 TVs to an accelerated longevity test, racking up over 10,000 hours of on-time so far, equal to about six years of regular use in a US household. This test has already shown a range of interesting issues and defects, including with the OLED-based TVs. But the most recent issue they covered is uniformity problems with edge-lit TVs. This translates to uneven backlighting, including striping and very bright spots, which teardowns revealed to be due to warped reflector sheets, cracked light guides, and burned-out LEDs.

Excluding the 18 OLED TVs, which are now badly burnt in, over a quarter of the remaining TVs in the test suffer from uniformity issues. But things get interesting when contrasting between full-array local dimming (FALD), direct-lit (DL) and edge-lit (EL) LCD TVs. Of the EL types, 7 out of 11 (64%) have uniformity issues, with one having outright failed and others in the process of doing so. Among the FALD and DL types the issue rate here is 14 out of 71 (20%), which is still not ideal after a simulated 6 years of use but far less dramatic.

Cracks in the Samsung AU8000’s Light Guide Plate (Credit: RTINGS)

As part of the RTINGS longevity test, failures and issues are investigated, and a teardown for analysis (and repair) is performed when necessary. For these uniformity issues, the EL LCD teardowns revealed burned-out LEDs in the edge-lit LED strips, cracks in the light guide plate (LGP) that distributes the light, as well as warped reflector sheets. The LGPs are offset slightly with plastic standoffs so they don’t touch the very hot LEDs, but these standoffs can melt, after which the LGP does touch the hot LEDs. With the LGP damaged, the LCD backlighting naturally becomes horribly uneven.

In the LG QNED80 (2022) TV, the edge-lighting LEDs were measured with a thermocouple to be running at a searing 123 °C at the maximum brightness setting. Since HDR (high dynamic range) content in particular demands high brightness levels, this is likely a more common scenario in EL TVs than one might think. As for why EL LCDs still exist when they seem to require extreme heatsinking to keep the LEDs from melting straight through the LCD: RTINGS figures it’s because edge lighting allows LCD TVs to be thinner, letting them compete with OLEDs while selling at a premium compared to even FALD LCDs.


The experimental setup for entanglement-distribution experiments. (Credit: Craddock et al., PRX Quantum, 2024)

Entangled Photons Maintained Using Existing Fiber Under NYC’s Streets

Entangled photons are an ideal choice for large-scale networks employing quantum encryption and similar schemes, as they can be transmitted over fiber-optic cables. One issue with using existing commercial fiber-optic lines for this purpose is that they have imperfections which can disrupt photon entanglement. This can be worked around by delaying one member of the pair slightly, but that makes using the pairs harder. Instead, a team at New York-based startup Qunnect used polarization entanglement to successfully transmit and maintain thousands of entangled photons over the course of weeks through a section of existing commercial fiber, as detailed in the recently published paper by [Alexander N. Craddock] et al. in PRX Quantum (with accompanying press release).

The entangled photons were created via spontaneous four-wave mixing in a warm rubidium vapor. This produces one photon with a wavelength of 795 nm and one at 1324 nm, the latter of which is compatible with the fiber network and is thus the one transmitted over the 34 kilometers. To measure the shift in polarization of the transmitted photons, non-entangled photons with a known polarization were sent along with the entangled ones. Measuring the shift on these single photons then allowed the polarization of the entangled photons to be compensated. Overall, the team reported an uptime of nearly 100% with about 20,000 entangled photons transmitted per second.
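To get a feel for the compensation idea, here is a toy numerical sketch: a reference state with a known polarization is sent through the same “fiber”, the rotation it picked up is estimated, and the inverse rotation is applied to the signal. This is a deliberately simplified 2D rotation model with made-up numbers, not Qunnect’s actual scheme, which tracks drift of the full polarization state.

```python
# Toy model of reference-based polarization compensation.
# All angles and states are invented for illustration only.
import numpy as np

def rotation(theta):
    """2D rotation matrix standing in for the fiber's polarization drift."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

fiber_drift = rotation(np.deg2rad(23.0))         # unknown drift to be estimated

reference_in = np.array([1.0, 0.0])              # known: horizontally polarized
reference_out = fiber_drift @ reference_in       # what the receiver measures

# Estimate the drift angle from the reference photon, then build the inverse.
theta_est = np.arctan2(reference_out[1], reference_out[0])
compensation = rotation(-theta_est)

# Apply the same correction to a stand-in signal state (diagonal polarization).
signal_out = fiber_drift @ np.array([np.sqrt(0.5), np.sqrt(0.5)])
signal_corrected = compensation @ signal_out
print(np.round(signal_corrected, 3))             # ~[0.707, 0.707] recovered
```

In the real system the same kind of inverse transform would be applied to the measurement basis for the entangled photons, which is what keeps the entanglement usable despite the fiber’s slowly drifting birefringence.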

As a proof of concept it shows that existing fiber-optic lines could conceivably be used in the future for quantum computing and encryption without upgrades.

Wacky Science: Using Mayonnaise To Study Rayleigh-Taylor Instability

Sometimes a paper pops up in a scientific journal that makes you do a triple-take, case in point being a recent paper by [Aren Boyaci] and [Arindam Banerjee] in Physical Review E titled “Transition to plastic regime for Rayleigh-Taylor instability in soft solids”. The title doesn’t quite do their methodology justice, as the paper describes zipping a container filled with mayonnaise along a figure-eight track to look at the surface transitions. With the paper paywalled and no preprint available, we mostly have to rely on the Lehigh University press releases pertaining to the original 2019 paper and this 2024 follow-up.

Rayleigh-Taylor instability (RTI) occurs at the interface between two fluids of different densities, when the less dense fluid pushes on the more dense fluid. An example of this is water suspended above oil, as is the expanding mushroom cloud during an explosion or eruption. It also plays a major role in plasma physics, especially as it pertains to nuclear fusion. In the case of inertial confinement fusion (ICF), the rapidly laser-heated pellet of deuterium-tritium fuel expands, with the interface of the expanding D-T fuel subject to RTI, negatively affecting the ignition efficiency and fusion rate. A simulation of this can be found in a January 2024 research paper by [Y. Y. Lei] et al.
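For reference, the classical textbook treatment of an inviscid interface without surface tension (a standard result, not something taken from the paywalled paper) says a small perturbation of wavenumber k grows exponentially at a rate set by the density contrast:

```latex
% Atwood number: dimensionless density contrast between the two fluids
A = \frac{\rho_{\text{heavy}} - \rho_{\text{light}}}{\rho_{\text{heavy}} + \rho_{\text{light}}},
\qquad
% Linear RTI growth rate for a perturbation of wavenumber k under acceleration g
\sigma = \sqrt{A \, g \, k}, \qquad \eta(t) \propto e^{\sigma t}
```

Here the densities are those of the two fluids, g is the driving acceleration, and the perturbation amplitude grows as e to the power of the growth rate times time. Mayonnaise complicates this picture by adding elasticity and a yield stress, which is what the “transition to plastic regime” in the paper’s title refers to.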

As a fairly chaotic process, RTI is hard to simulate, making a physical model a more practical research subject. Mayonnaise is definitely among the wackiest ideas here; other researchers like [Samar Alqatari] et al., as published in Science Advances, opted to use a Hele-Shaw cell with dyed glycerol-water mixtures for a less messy and less mechanically convoluted experimental contraption.

What’s notable here is that the Lehigh University studies were funded by the Lawrence Livermore National Laboratory (LLNL), which explains the focus on ICF, as the National Ignition Facility (NIF) is based there.

This also makes the breathless hype about ‘mayo enabling fusion power’ somewhat silly, as ICF is even less likely to lead to net power production, far behind even Z-pinch fusion. That said, a better understanding of RTI is always welcome, even if one has to question the practical benefit of studying it in a container of mayonnaise.