You Got Something On Your Processor Bus: The Joys Of Hacking ISA And PCI

Although the ability to expand a home computer with more RAM, storage, and other features has been around for as long as home computers have existed, it wasn’t until the IBM PC that the concept of a fully open and modular computer system became mainstream. Instead of being limited to a system configuration provided by the manufacturer and a few add-ons that never integrated particularly well, the concept of expansion cards opened up whole industries, as well as a big hobbyist market.

The first IBM PC had five 8-bit expansion slots that were connected directly to the 8088 CPU. With the IBM PC/AT these expansion slots became 16-bit, courtesy of the 80286 CPU it was built around. These slots could be used for anything from graphics cards to networking, expanded memory, or custom I/O. Though there was no distinct original name for this card edge interface, around the PC/AT era it came to be referred to as the PC bus or AT bus. The name Industry Standard Architecture (ISA) bus is a retronym created by PC clone makers.

With such openness came the ability to make your own cards for the ISA bus, and for the subsequent and equally open PCI bus, relatively easily and cheaply. To this day this openness allows for a vibrant ecosystem, whether one wishes to build a custom ISA or PCI soundcard, or add USB support to a 1981 IBM PC system.
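Part of what makes ISA so approachable is that, from the host’s point of view, a card is usually just a handful of I/O ports or memory addresses. As a rough illustration (not tied to any real card), the sketch below pokes at a hypothetical card jumpered to the traditional prototype-card I/O base of 0x300, using the legacy x86 port I/O calls that Linux still exposes to user space; a DOS-era toolchain would use its own inportb()/outportb() equivalents.

/* Minimal sketch: talking to a hypothetical ISA card from user space on
 * x86 Linux. Assumes the card decodes a status register and a data register
 * at the traditional prototype-card I/O base 0x300. Must run as root. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

#define CARD_BASE  0x300              /* assumed I/O base set by jumpers on the card */
#define REG_STATUS (CARD_BASE + 0)    /* hypothetical status register */
#define REG_DATA   (CARD_BASE + 1)    /* hypothetical data register   */

int main(void)
{
    /* Ask the kernel for permission to access our two legacy I/O ports. */
    if (ioperm(CARD_BASE, 2, 1) != 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    unsigned char status = inb(REG_STATUS);   /* read the card's status register */
    printf("status register: 0x%02x\n", status);

    outb(0xAA, REG_DATA);                     /* write a test pattern to the data register */
    return EXIT_SUCCESS;
}

On the card side the corresponding job is simply address decoding: compare the bus address lines against the jumpered base and latch or drive the data lines when there is a match.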

But what does it take to get started with ISA or PCI expansion cards today? Continue reading “You Got Something On Your Processor Bus: The Joys Of Hacking ISA And PCI”

Xolography: A Method To Improve The Accuracy Of Volumetric 3D Printing

Over the past years, additive manufacturing (AM) has become a common tool for hackers and makers, with first FDM and now SLA 3D printers becoming affordable for the masses. While these machines are incredibly useful, they utilize a slow layer-by-layer approach to produce objects. A relatively new technology called Volumetric Additive Manufacturing (VAM) promises to change all that by printing the entire object in one go, and according to a recent article in Nature, it just got a big resolution boost.

The concept is similar to SLA printing, but instead of curing the resin by projecting a 2D image of the current layer into the container, VAM uses multiple lasers to create intersecting points within the liquid. After exposing the resin to this projection for several seconds, the 3D model is built all at once. Not only is this far faster, but it also removes the need for support materials; even a traditional build plate is unnecessary.

Visualization of the dual-color printing process as used by Regehly et al. (Credit: Nature)

Up till now the resolution and maximum object size of VAM have left a lot to be desired, but in this new research Regehly et al. claim to have achieved a feature resolution of ‘up to 25 micrometers’ and a solidification rate of ‘up to 55 cm³/s’. They used two crossing laser beams of different wavelengths: one to form the ‘light sheet’ (blue in the graphic) and a second beam (in red) to project the image of the current slice onto this light sheet. They refer to this technique as ‘xolography’, a mash-up of ‘holo’ (Greek for ‘whole’) and the ‘X’ shape formed by the crossing laser beams.

Key to making this work is the chemistry of the resin: the first wavelength excites molecules called DCPIs (dual-color photoinitiators) that are dissolved in the resin. The second wavelength, when it hits these excited molecules, initiates the polymerization process. The object pictured at the top of the page was a test print; producing such a design on a traditional 3D printer would have required a considerable amount of difficult-to-remove support material.

While this is obviously not a technology hobbyists will be using to replace their FDM and SLA printers any time soon, there are still many companies and institutes working on various VAM technologies and approaches. As more and more of the complexities and challenges are dealt with, who knows when VAM may become a viable replacement for at least some SLA applications?

Thanks to [Qes] for the tip.

How A Quadriplegic Patient Was Able To Eat By Himself Again Using Artificial Limbs

Thirty years ago, [Robert “Buz” Chmielewski] suffered a surfing accident as a teenager. This left him a quadriplegic due to a C6 spinal cord injury. After becoming a participant in a brain-computer interface study at Johns Hopkins, he was recently able to feed himself through the use of prosthetic arms. The most remarkable thing about these prosthetic arms is the neural link with [Buz’s] brain, which allows him not only to control the artificial arms, but also to feel what they are touching, thanks to a closed-loop system that transfers the limbs’ sensory input to his brain.

The prosthetic limb in question is the Modular Prosthetic Limb (MPL) from the Johns Hopkins Applied Physics Laboratory (APL). The Johns Hopkins Medicine Brain-Computer Interface study began a year ago, when [Buz] had six microelectrode arrays (MEAs) implanted into his brain: half in the motor cortex and half in the sensory cortex. During the following months, the study focused on using the signals from the first set of arrays to control actuators such as the MPL. The second set of arrays was used to study how the sensory cortex had to be stimulated to allow a patient to feel the artificial limb much as one feels a biological limb.

What makes this study so interesting is not only the closed-loop approach, which provides the patient with feedback on the position of the prosthetic and the pressure on it, but also that it involves both hemispheres of the brain. As a result, after only a year of the study, [Buz] was able to use two of the MPLs simultaneously to feed himself, which is a delicate and complicated task.

In the video embedded after the break one can see a comparison of [Buz] at the beginning of the study and today, as he manages to handle cutlery and eat cake without assistance.

Continue reading “How A Quadriplegic Patient Was Able To Eat By Himself Again Using Artificial Limbs”

Magnetocuring: Curing Epoxy With A Magnetic Field

Who doesn’t love epoxy? Epoxy resins, also known as polyepoxides, are an essential adhesive in many applications, both industrially and at smaller scales. Many polyepoxides, however, require the application of heat (around 150 °C for most types) in order to cure (harden), which can be complicated when the resin is applied to or inside layers of temperature-sensitive materials. Now researchers at Nanyang Technological University (NTU) in Singapore have found a way to heat up resins using an alternating magnetic field (PDF), so-called magnetocuring.

As detailed in the research article by R. Chaudhary et al., they used commercially available epoxy resin and added nanoparticles of a MnxZn1-xFe2O4 alloy. This mixture was exposed to an alternating magnetic field to induce currents in the nanoparticles; the resulting heat raised the temperature of the surrounding resin to about 160 °C in five minutes, allowing the resin to cure. There is no risk of overheating, as the nanoparticles stop heating once they reach their Curie temperature, at which point the magnetic field no longer affects them. The exact Curie temperature was tweaked by changing the amounts of manganese and zinc in the alloy.
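To get a feel for why the Curie temperature acts as a built-in thermostat, here is a toy numerical model (ours, not from the paper, with made-up illustrative numbers): the heating term simply switches off once the particles pass their Curie point, so the resin temperature climbs and then plateaus instead of running away.

/* Toy model of magnetocuring's self-limiting heating (illustrative only,
 * not from the paper): the nanoparticles heat the resin while they are
 * below their Curie temperature and stop heating above it. */
#include <stdio.h>

int main(void)
{
    double temp_c      = 25.0;    /* resin temperature, deg C                   */
    const double curie = 160.0;   /* assumed Curie temperature, deg C           */
    const double heat  = 1.0;     /* assumed heating rate in the field, deg C/s */
    const double loss  = 0.004;   /* crude heat-loss coefficient, 1/s           */

    for (int t = 0; t <= 600; t++) {
        double heating = (temp_c < curie) ? heat : 0.0; /* coupling stops at the Curie point */
        temp_c += heating - loss * (temp_c - 25.0);     /* simple one-second time step       */
        if (t % 60 == 0)
            printf("t = %3d s  T = %5.1f degC\n", t, temp_c);
    }
    return 0;
}

Run it and the temperature climbs for a few minutes, then hovers just around the Curie point, which is the behaviour the researchers rely on to avoid scorching the resin.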

After trying out a number of different alloy formulations, they settled on Mn0.7Zn0.3Fe2O4 as the optimal formulation at which no resin scorching occurred. As with all research it’s hard to tell when (and if) it will make it into commercial applications, but if this type of technology works out we could soon be gluing parts together using epoxy resin and an EM field instead of fumbling with the joys of two-component epoxy.

(Thanks, Qes)

Unbricking A SEGGER J-Link V9 Debug Probe

Last year [Emil] found themselves in a situation where a SEGGER J-Link debug probe suddenly just stopped working. This was awkward, not only because in-circuit debuggers are vital pieces of equipment in embedded firmware development, but also because they’re not that cheap. This led [Emil] to take the device apart to figure out what was wrong with it.

After checking voltages on the PCB and finding nothing obviously wrong, the Tag-Connect style JTAG header on the board appeared to be a good second stop. It took only a bit of work to reverse-engineer the exact pinout and hook up an ST-Link V2 in-circuit debugger to talk with the STM32F205RC MCU on the PCB. This led to the interesting discovery that the MCU’s flash had seemingly lost its firmware data.

Fortunately [Emil] was able to flash back a version of the firmware that was available on the internet, allowing the J-Link device to work again. This was not the end of the story, however: afterwards the SEGGER software was unable to update the firmware on the device, because the bootloader was missing and had not been part of the flashed firmware image.

Digging further into this, [Emil] found out a whole host of fascinating details about not only these SEGGER J-Link devices, but also the many clones that are out there, as well as the interesting ways that SEGGER makes people buy new versions of their debug probes.

(Thanks Zelea for the tip)

Seeking Enlightenment: The Quest To Restore Vision In Humans

Visual impairment has been a major issue for humankind for its entire history, but has become more pressing with society’s evolution into a world which revolves around visual acuity. Whether it’s about navigating a busy city or interacting with the countless screens that fill modern life, coping with reduced or no vision is a challenge. For countless individuals, the use of braille and accessibility technology such as screen readers is essential to interact with the world around them.

For refractive visual impairment we currently have a range of solutions, from glasses and contact lenses to more permanent options like LASIK and similar procedures, which seek to fix the refractive error by burning away part of the cornea. When the eye’s lens itself has been damaged (as with cataracts), it can be replaced with an artificial lens.

But what if the retina or optic nerve has been damaged in some way? For individuals with such (nerve) damage, the tempting and seemingly futuristic prospect of restoring vision, whether through biological or technological means, has been around for decades. Quite recently, a number of studies have explored both approaches, with promising results.

Continue reading “Seeking Enlightenment: The Quest To Restore Vision In Humans”

Bare-Metal STM32: Exploring Memory-Mapped I/O And Linker Scripts

In the first installment of this series we had a brief look at the steps needed to get a bare-metal application running on an STM32 microcontroller. While this allowed us to quickly get to the juicy stuff, there are two essential elements which make an MCU so easy to use. One is found on the hardware side, in the form of so-called memory-mapped I/O (input/output); the other is the information contained in the files that are passed to the linker when we build a firmware image.

Memory-mapping of hardware peripheral registers is a straightforward way to make them accessible to the processor core, as each register appears at a fixed memory address. This is convenient both when writing the firmware code and when testing it, since we can substitute a memory map specific to unit or integration testing.
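As a concrete (hedged) illustration, this is roughly what memory-mapped register access looks like in C, using the GPIOA layout of an STM32F4-series part as listed in ST’s reference manual (base address 0x40020000; check the manual for your exact device, and note that the RCC clock-enable step is omitted for brevity). The same struct overlay can be pointed at a plain RAM buffer in a test build, which is what makes host-side testing of register-level code possible.

/* Sketch of memory-mapped I/O: a peripheral's registers are a struct
 * overlaid on a fixed address. Addresses follow the STM32F4 GPIOA layout
 * from ST's reference manual; adjust for your device. */
#include <stdint.h>

typedef struct {
    volatile uint32_t MODER;    /* mode register (2 bits per pin)   */
    volatile uint32_t OTYPER;   /* output type register             */
    volatile uint32_t OSPEEDR;  /* output speed register            */
    volatile uint32_t PUPDR;    /* pull-up/pull-down register       */
    volatile uint32_t IDR;      /* input data register              */
    volatile uint32_t ODR;      /* output data register             */
} GPIO_Regs;

#ifdef UNIT_TEST
/* In a test build the "peripheral" is ordinary RAM we can inspect and assert on. */
static GPIO_Regs fake_gpioa;
#define GPIOA (&fake_gpioa)
#else
#define GPIOA ((GPIO_Regs *)0x40020000UL)   /* GPIOA base on STM32F4 parts */
#endif

void led_on(void)
{
    GPIOA->MODER |= (1u << (5 * 2));   /* configure pin 5 as an output (assumed LED pin) */
    GPIOA->ODR   |= (1u << 5);         /* drive pin 5 high */
}

Compiled with -DUNIT_TEST on the host, a test can call led_on() and then simply check fake_gpioa.ODR, while the firmware build resolves the exact same code to the real peripheral address.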

We will take an in-depth look at this way of testing, as well as how these linker script files are connected to the memory layout. Continue reading “Bare-Metal STM32: Exploring Memory-Mapped I/O And Linker Scripts”