Linux: Coming Soon To M1 Macbooks

Regardless of the chipset or original intended use of any computer system, someone somewhere is going to want to try and run Linux on it. And why not? Linux is versatile and free to use as well as open-source, so it’s quite capable of running on almost anything. Of course, it takes a little while for the Linux folk to port the software to brand new hardware, but it’s virtually guaranteed that it’s only a matter of time before Linux is running on even the most locked-down of hardware, like the M1 MacBooks.

[Hector Martin] aka [marcan] has been hard at work getting Linux up and running on the latest Apple offerings with their ARM-based M1 processors. Since these are completely divorced from Apple’s x86 product line, the process had to be started from the ground up, which included both booting Linux and modifying the kernel to support the new hardware. [marcan] has a lot of the hardware working, such as the USB ports and the SD card slot, and notes that his setup is even compatible with the webcam notch included in the latest batch of MacBooks.

There are a few things still missing. He’s running Arch and doesn’t have the GPU configured yet, so all of the graphics are rendered in software. But he has put the computer through the wringer, including running some computationally intensive software for nearly a full day before realizing that the machine wasn’t charging, and even that made little difference in performance. These machines are indeed quite capable with their new ARM chipsets, and hopefully his work going forward will bring Linux to the rest of us who use Macs but don’t want to run macOS.

ARM’s Chinese Venture Goes Sour

By now we’re used to many of the more capable microcontrollers and systems-on-chip we use having an ARM core at their heart. From its relatively humble beginnings in a 1980s British home computer, the RISC processor architecture from Cambridge has transformed itself into the go-to power-sipping yet powerful core for manufacturers far and wide. This has been the result of astute business decisions over decades, with ARM’s transformation into a fabless vendor of cores as IP at its heart. Recent news suggests that perhaps the astuteness has been in short supply of late though, as it’s reported that ARM’s Chinese subsidiary has gone rogue, detaching from the mothership and taking the IP with it.

It seems that the CEO of the Chinese company managed to retain legal power when sacked by the parent company over questionable ties with another of his ventures, and has thus been able to declare it independent of its now-former parent. It still holds the ARM IP it had up to the moment of detachment and claims to be developing its own new products, but it seems likely that it won’t receive any new ARM IP.

What will be the effect of this at our level? Perhaps we have already seen it, as more Chinese chips such as the cheaper STM32 clones are likely to get low-end ARM cores as a result. It seems likely that newer ARM IP will remain for now in more expensive non-Chinese chip families, but in the middle of a semiconductor shortage it’s likely that we wouldn’t notice anyway. Where it will have a lasting effect is in future Chinese joint ventures by non-Chinese chip companies. Seeing ARM’s then-owner Softbank getting their fingers burned in such a way is likely to provide a disincentive to other companies considering a similar course. Whether ARM will manage to resolve the impasse remains to be seen, but it can hardly be a help to the rocky progress of their Nvidia merger.

Minimalist Robot Arm Really Stacks Up

There’s nothing like a little weekend project, especially one that ends up better than you expected. And when you literally build a robotic arm out of workshop scraps, so much the better.

Longtime readers will no doubt recognize the build style used here as that of [Norbert Heinz], aka “Homofaciens” on YouTube. [Norbert] has a way of making trash do his bidding, and has shown us all kinds of seemingly impossible feats of mechatronics with just what’s lying around. In this case, his robot arm is made from scrap wooden roofing battens, or what we’d call furring strips here in the US. The softwood isn’t something you’d think would make a great material for building robots, but [Norbert] makes its characteristics work for him, like using wax-lubricated holes for hinge points. Steppers and lead screws cannibalized from an old CNC build, along with the drive electronics, provide the motion. It’s a bit — compliant — but precise enough to pick up nuts and stack them nicely. The video below gives an overview of the build, and detailed instructions are available too.

We’ve always appreciated [Norbert]’s minimalist builds, and seeing what can be accomplished with almost nothing is always inspirational. If you’re not familiar with his work, check out his cardboard and paperclip CNC plotter, his tin can encoders, or his plasma-powered printer.

Arm Researchers Announce The PlasticArm

If the Cortex family of embedded microprocessors isn’t flexible enough for your designs, an article published this week (click here for the PDF version) in the journal Nature might be of interest. We’re not talking flexibility in terms of features, but real, physical flexibility of the microprocessor itself. A research team from Arm Ltd. has developed the PlasticArm, a 32-bit processor derived from the Cortex-M0+ family.

They accomplished this by constructing a CPU from metal-oxide thin-film transistors (TFTs) on a polyimide substrate, the resultant chip being called a natively flexible microprocessor. While much of the hype focuses on the flexibility aspect, we think the real innovation here is the low cost. The processes used to deposit transistors onto silicon wafers are much more expensive than those used on this flexible substrate.

Don’t get too excited just yet, because there were some compromises made along the way. An equivalent core on modern silicon measures in the tens of microns on a side, but the PlasticArm’s total die size is a comparatively whopping 9 mm square. The researchers were appropriately focused on the core CPU, and the auxiliary building blocks such as ROM and RAM seem almost an afterthought. With only 456 bytes of program store and 128 bytes of RAM, only the tiniest of applications are suited to this chip. Other compromises were made, such as no internal registers — they are mapped to the external RAM — and the CPU runs a lot slower than we’re used to, topping out at 29 kHz (note: k not M).

There are certainly some challenges with this new technology, and we won’t be designing with these chips any time soon. But it has the potential to offer benefits in certain niche applications where low cost and/or flexibility is more important than processor speed and performance.


Is 32-bits Really Dead?

While some of us are still clinging onto our favorite 8-bit microprocessors, ARM has announced that it will be killing off the 32-bit architecture in 2022 and/or 2023. Over on the GaryExplains YouTube channel, [Gary Sims] posted a great review of the current 32-bit vs 64-bit state of affairs — not just for ARM but for Intel and AMD processors as well. And it’s a dismal outlook for you 32-bit fans.

ARM announced last fall that there would be no more 32-bit support as of 2022, then this March it made a similar announcement but with a 2023 deadline. [Gary] tries to parse these statements, and takes an educated guess at what the disparity means (spoiler alert — he predicts that one more 32-bit core will soon be released).

[Gary] clearly breaks down the 32-bit situation by operating system, covering Linux, Windows, macOS, Android, and iOS, and shows how all of these have been transitioning to 64 bits over recent years. He does a thorough job, and concludes that the transition is already well underway. And while Linux and Windows have not completely dropped 32-bit support, the writing is on the wall.

Take note, however, that this discussion concerns the Cortex-A family of cores found in smartphones, tablets, computers, and powerful embedded applications like autonomous vehicles. The popular 32-bit Cortex-M family of low-cost / low-power cores used in so many embedded system designs will remain 32-bit for the foreseeable future.
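Curious which side of the 32/64-bit divide your own machines sit on? Here’s a quick check using nothing but Python’s standard library; just a convenience for the curious, not something from [Gary]’s video:

```python
import platform
import struct

# Architecture string reported by the OS, e.g. 'x86_64',
# 'aarch64' (64-bit ARM), or 'armv7l' (32-bit ARM).
print("Machine:", platform.machine())

# Pointer width of the running interpreter: 64 on a 64-bit build,
# 32 on a 32-bit one. Note that a 32-bit userland can still run on
# top of a 64-bit kernel, so the two can legitimately disagree.
print("Interpreter:", struct.calcsize("P") * 8, "bits")
```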

After watching [Gary]’s presentation, if you want to learn more, check out the writeup that [Maya Posch] did on the details of the latest ARMv9 ISA a few weeks ago. Also watch this 8-bit vs 32-bit presentation by our Editor-in-Chief [Mike Szczys]. Despite being from five years ago, it is still quite applicable today. What about 16-bit MCUs — the old Intel/AMD embedded 80186 processor, the 8051 follow-ons like the 80C196, 80C251, or 8051XA, the 6502 follow-ons like the 65C816, Zilog’s Z8000, the Renesas M16C, etc. — is anyone using them anymore? If so, or if you’re using a 4-bit MCU these days, let us know in the comments below.

Deep Learning Enables Intuitive Prosthetic Control

Prosthetic limbs have been slow to evolve from simple motionless replicas of human body parts to moving, active devices. A major part of this is that controlling the many joints of a prosthetic is no easy task. However, researchers have worked to simplify matters by capturing nerve signals and letting deep learning routines figure out the rest.

The prosthetic arm under test actually carries an NVIDIA Jetson Nano onboard to run the AI nerve signal decoder algorithm.

Reported in a preprint paper, researchers used implanted electrodes to capture signals from the median and ulnar nerves in the forearm of Shawn Findley, who had lost a hand to a machine shop accident 17 years prior. An AI decoder was then trained to decipher signals from the electrodes using an NVIDIA Titan X GPU.

With this done, the decoder model could then be run on a significantly more lightweight system consisting of an NVIDIA Jetson Nano, which is small enough to mount on a prosthetic itself. This allowed Findley to control a prosthetic hand by thought, without needing to be attached to any external equipment. The system also allowed for intuitive control of Far Cry 5, which sounds like a fun time as well.
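The exact decoder architecture lives in the paper, but the train-big, deploy-small workflow itself is easy to sketch. Here’s a minimal PyTorch example assuming a small 1D convolutional classifier over windows of electrode samples (the channel count, layer sizes, and number of gesture classes are illustrative guesses on our part, not details from the research):

```python
import torch
import torch.nn as nn

class NerveDecoder(nn.Module):
    """Toy 1D-CNN mapping a window of electrode samples to a gesture."""
    def __init__(self, channels=16, classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            nn.Flatten(),
            nn.Linear(32, classes),
        )

    def forward(self, x):              # x: (batch, channels, samples)
        return self.net(x)

model = NerveDecoder()
# ... train on the big GPU here (a Titan X in the paper) ...

# Freeze and export a compact TorchScript module that a Jetson Nano
# can load for inference without any of the training code.
model.eval()
torch.jit.script(model).save("decoder.pt")

# On the Jetson: torch.jit.load("decoder.pt"), then call it on each
# incoming window of nerve signals.
```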

The research is exciting, and yet another step towards full-function prosthetics becoming a reality. The key to the technology is that models can be trained on powerful hardware, but run on much lower-end single-board computers, avoiding the need for prosthetic users to carry around bulky hardware to make the nerve interface work. If it can be combined with a non-invasive nerve interface, expect this technology to explode in use around the world.

[Thanks to Brian Caulfield for the tip!]

Raspberry Pi RP2040: Hands-On Experiences From An STM32 Perspective

The release of the Raspberry Pi Foundation’s Raspberry Pi Pico board with its RP2040 microcontroller has made big waves in the maker community these past months. Many have demonstrated how the two Programmable I/O (PIO) state machine peripherals in particular can be used to create DVI video generators and other digital peripherals.
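For a taste of what those PIO state machines look like in practice, here’s a minimal MicroPython sketch that generates a square wave with zero CPU involvement after setup (the pin and frequency are arbitrary choices for illustration; a DVI generator is of course enormously more involved):

```python
import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def square_wave():
    # Each instruction takes one state-machine clock cycle, and the
    # [1] suffix adds one extra delay cycle, so a full high/low
    # period is four cycles: output frequency = freq / 4.
    set(pins, 1) [1]
    set(pins, 0) [1]

# State machine 0, clocked at 2 kHz, driving the Pico's LED pin,
# giving a 500 Hz square wave that runs with no CPU load at all.
sm = rp2.StateMachine(0, square_wave, freq=2000, set_base=Pin(25))
sm.active(1)
```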

Alongside this excitement, a question arises: will any of this cause a major upheaval for those of us using STM32, SAM, and other Cortex-M based MCUs? Would the RP2040 perhaps be a valid option for some of our projects? With the RP2040 being a dual-core Cortex-M0+ MCU, it seems only fair to put it toe to toe with the offerings from one of the current heavyweights in the 32-bit ARM MCU space: ST Microelectronics.

Did the Raspberry Pi Foundation pipsqueak manage to show ST’s engineers how it’s done, or should the former revisit some of their assumptions? And just how hard is it going to be to port low-level code from STM32 to RP2040?