Is 32-bits Really Dead?

While some of us are still clinging to our favorite 8-bit microprocessors, ARM announced they will be killing off the 32-bit architecture in 2022 and/or 2023. Over on the GaryExplains YouTube channel, [Gary Sims] posted a great review of the current 32- vs 64-bit state of affairs — not just for ARM but for Intel and AMD processors as well. And it’s a dismal outlook for you 32-bit fans.

ARM announced last Fall that there would be no more 32-bit support as of 2022, then this March they made a similar announcement but with a 2023 deadline. [Gary] tries to parse these statements, and takes an educated guess at what the disparity means (spoiler alert — he predicts that one more 32-bit core will soon be released).

[Gary] clearly breaks down the 32-bit situation on Linux, Windows, macOS, Android, and iOS, and how all of these have been transitioning to 64 bits over recent years. He does a thorough job, and concludes that the transition is already well underway. And while Linux and Windows have not completely dropped 32-bit support, the writing is on the wall.

Take note, however, that this discussion regards the Cortex-A family of cores found in smartphones, tablets, computers, and powerful embedded applications like autonomous vehicles. The popular 32-bit Cortex-M family of low-cost / low-power cores that are used in so many embedded system designs will remain 32-bit for the foreseeable future.

After watching [Gary]’s presentation, if you want to learn more, check out the writeup that [Maya Posch] did on the details of the latest ARMv9 ISA a few weeks ago. Also watch this 8-bit vs 32-bit presentation by our Editor-in-Chief [Mike Szczys]. Despite being from five years ago, it is still quite applicable today. What about 16-bit MCUs — the old Intel/AMD embedded 80186 processor, the 8051 follow-ons like the 80C196, 80C251, or 8051XA, the 6502 follow-ons like the 65C816, Zilog’s Z8000, the Renesas M16C, etc. — is anyone using them anymore? If so, or if you’re using a 4-bit MCU these days, let us know in the comments below.

Thanks to reader [Feinfinger] for the tip.

98 thoughts on “Is 32-bits Really Dead?”

  1. My company is still using PIC24s for a few math-intense tasks that an 8-bit part (like the PIC18 series) would have struggled with. Sure, 32-bit processors are here, and becoming the de facto choice for most things over 8 bits, but older products with 16-bit cores will likely be used for some time more. Going 8/16 -> 32 is tricky because of how much things change instruction-set-wise.

    4-/16-bit micros will probably slowly become like the Itanium or PowerPC architectures in application processing. Someone is using them and they still have some support, but it is extremely niche.

  2. Moving to 64 bit has pros and cons.

    If talking about the address space for the application, then 32 bit has some serious advantages over 64 bit, as long as one’s application doesn’t need more than 4GB of memory. The difference here is that storing a 32 bit value in memory costs less than a 64 bit value. And the value only represents a local address, i.e., it has some offset compared to system memory. (As long as we overlook other memory system complexities… an application is rarely allocated one continuous span of “global” address space.)

    There is also a similar story for data processing. If all one needs is to represent a value between 0 and 100, then an 8 bit variable is already sufficient. But there is technically nothing stopping one from using a 16 bit one, or even a 64 bit one. And in regards to 64 bit computing, this is sometimes the case. Using excessively large variables “just in case” is a fairly trivial way to double (or more) one’s memory utilization, not to mention memory bandwidth requirements.
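
    To make the “just in case” cost concrete, here is a minimal sketch in plain C (my illustration, not from the comment above): the per-variable waste is invisible until it is multiplied across a large array.

      #include <stdint.h>
      #include <stdio.h>

      /* A value of 0..100 fits in 8 bits; storing it "just in case"
         in a 64 bit field multiplies the footprint of any large array. */
      struct sample_small { uint8_t  percent; };
      struct sample_large { uint64_t percent; };

      int main(void) {
          enum { N = 1000000 };
          printf("1M small samples: %zu bytes\n", N * sizeof(struct sample_small));
          printf("1M large samples: %zu bytes\n", N * sizeof(struct sample_large));
          return 0;
      }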

    So it is a question whether 64 bit is actually a step in the right direction. In some cases it truly is; in others it is only excessive. But obviously, one should design one’s code with this in mind and assign variables of a practical size. Though the size of one’s local address space is usually outside of one’s control.

    It would be fairly practical if one could have the best of both: a nice compact 32 bit local address space, and still the ability to shuffle about 64 bit values. This is only applicable for programs using relatively small amounts of memory by modern standards, though. But an application should realistically be able to make a call based on the global address space as well, or have multiple local ones to choose between. But this is muddying the waters with a fair bit of complexity…

    In the end.
    I don’t actually think 64 bit is a wise move for a lot of applications.
    But likewise, 64 bit is something that is nice to have.

    1. I have, in fact, just cross-graded my Pinebook Pro from arm64 to arm32 for precisely this reason. AFAICT there are negligible benefits to arm64 if you’re not using SIMD or NEON or very large address spaces; using 32-bit binaries produces a small but definite performance benefit. I’m also curious to know whether thumb2 is even faster, due to its very dense code, but this is harder to determine because there isn’t a convenient thumb2 Debian build to try.

      1. Sadly most architectures don’t support 32 bit local addresses for the threads/processes while still allowing for 64 bit operation of instructions and the like.

        Since if one could use a 32 bit local address, then one could save a fair bit of memory, especially if one deals with a lot of pointers.
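
        Even without architectural support, pointer-heavy structures can opt into narrow addresses by hand. A sketch of the usual trick (my example, assuming an LP64 target): link through 32 bit indices into a pool instead of full pointers.

          #include <stdint.h>
          #include <stdio.h>

          struct node_ptr { struct node_ptr *next; uint32_t value; }; /* 16 bytes on LP64 (padding) */
          struct node_idx { uint32_t next; uint32_t value; };         /* 8 bytes anywhere */

          int main(void) {
              printf("pointer-linked node: %zu bytes\n", sizeof(struct node_ptr));
              printf("index-linked node:   %zu bytes\n", sizeof(struct node_idx));
              return 0;
          }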

        A hobby architecture of mine uses a 17 bit local address, though this is more due to it being the leftover space in the load/store instruction encoding that wasn’t used for anything else… The locality is within the thread itself, or around a specific pointer that acts as a bookmark of sorts. It also supports some oddball variable sizes like 24 and 48 bits in addition to the normal powers of two, since a fair few times, doubling the amount of bits is overkill compared to what one actually needs in a variable. And it uses a weird 48 bit global address space, but does support 64 bit instructions and above, up to 128 bit currently. Somewhat non-traditional…

    2. Actually, this reminds me that 64 bit at least tries to fix something. There are certain technologies that don’t; they are just pushed down people’s throats just because. For me that list would obviously start with the biggest cancer ever invented, called systemd, but it pretty much goes on with most modern software like Acrobat Reader, Daemon Tools, Firefox, etc., which is actually just getting worse and worse over time, packing in all kinds of adware, crapware, and slowing-your-machine-downware with forced autoupdates so they can screw you even more on the next version.
      Well, that’s why I love retro computers and retro software: I can use the same software, without these garbage new “features” I don’t need, for the rest of my life.

        1. I agree, a lot of software is turning into bloatware. A software package that used to be 50-100 MB is now 5-10 GB, but its functions are basically the same. Windows 7-10 is 30 GB, and for me it’s the same 200 MB of useful stuff and 29 GB of something I don’t want or need. What baffles me is that some people are defending this. SystemD is a beautiful example of solving a minor problem by creating a much bigger one.

        1. The address space and variable/register sizes don’t have to be 1:1 correlated with each other.

          Though, describing a whole architecture implementation with 1 single number usually doesn’t convey any meaningful information.

          But to answer the question of “how long it’ll be before we see 128 bit CPU addresses”: well, PCIe supports 263 bit addresses with the Expanded Resizable BAR feature. (Not to be confused with the other Resizable BAR, which only has support for up to 38 bit addresses.)

          PCIe however only goes to the crazy 263 bits due to fancy addressing schemes that can be pulled off. I.e., using the address as a way to logically map out a larger PCIe fabric, having plenty of address overlap, and getting more efficient throughput over it. In short, the effective address range is way smaller than 263 bits.

          Not that the address space matters all that much.
          We can build a CPU with an 8 bit address space, but working with 8192 bit variables/registers. Yes, it would be a bit impractical. But whether we call this an 8 bit architecture or an 8K bit architecture is a good question.

          A real world example would be x86, or more specifically the 8086: it had a 20 bit address space, but was strictly a 16 bit processor. And to be fair, this tends to be the case in general; address space hasn’t been a defining feature for how many bits a given architecture/CPU has, the working registers are much more important.

          Going beyond a 64 bit address space has little practical benefit as far as a CPU is concerned. And address space doesn’t directly affect the other aspects of an architecture.

          In the end.
          Going to a 128 bit address space isn’t a meaningful step as far as encryption is concerned. Register size however is. But the address space can be a lot smaller than the largest registers, and vice versa.
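
          The independence of the two widths is easy to demonstrate in plain C (a small sketch of mine): a compiler for a 32 bit target happily synthesizes 64 bit arithmetic out of 32 bit operations.

            #include <stdint.h>
            #include <stdio.h>

            int main(void) {
                uint64_t a = 0x123456789ULL;
                /* On a 32 bit CPU this 64 bit multiply becomes several
                   32 bit multiplies and adds; the width of the address
                   bus never enters into it. */
                uint64_t b = a * 1000003u;
                printf("pointer width here: %zu bits\n", sizeof(void *) * 8);
                printf("64 bit result:      %llu\n", (unsigned long long)b);
                return 0;
            }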

          1. X.
            The address space and register/instruction sizes aren’t directly related.
            (In short, the address formatting on an envelope/package doesn’t intrinsically dictate the contents of said package. (as in regards to good old snail mail that is.))

            Applications can technically use a 32 bit local address space while still working with 64 or more bits in their instructions.

            Though, most real architectures don’t do 32 bit local addresses when in 64 bit operation, and this I think is a flaw.

            I am all for 64 or more bit variables in a fair few workloads, but for most applications, 32 bit addresses are plenty enough. (And I’ll repeat just to be safe, Addresses are Not Instructions. For similar reasons to why a budget computer doesn’t have 2TB of RAM just because the HDD in it is that large….)

        2. When 32 bit is off the market and all chips are 64 bit somebody will come out with a 128 bit chip because bigger is better, “MOA POWAR”! (Markets drive technology.)

          1. i figure there is a niche in science where you need higher precision floating point in super computing applications. i really don’t see much use for it elsewhere. perhaps military. or perhaps we will need it for ksp3.

    1. i used amd64 back in the day. was the one oddball who used the 64-bit version of windows xp (vista hadn’t even come out yet). i retired that machine years ago. it got superseded by a pair of core 2s, an i5, 3 i7s, and a ryzen 7 (mobo is still in the mail, and i don’t have ram or power supply yet). yet there are still things i use that need 32 bit. i somehow thought the transition to 64bit would be somewhat faster. imho pure 64 bit systems can’t come soon enough.

  3. I can understand that they want to clean up their product range.
    But in the end it does not matter what ARM wants
    – the customers and their product requirements decide
    – no customers, no product sale, no income..
    There was a time before ARM, and there will be a time after ARM.

    1. The more complicated the machine, the more likely it is to fail… but in this case it’s frequent firmware/software updates. The hardware itself will provide exploitation opportunities. Keep it as simple as practical, and it’s much easier to keep it safe, secure, and reliable.

    1. I love Org/Babel, so I sometimes need definitely not 8bitty hardware…
      …but 8bitters too…
      That can coexist surprisingly well in one brain!

  4. i mean, ARM creates new stuff, and new stuff will all be 64-bit, that should be no surprise to anyone. so the 32-bit stuff still exists, but it won’t be updated. that’s fine, i think. it’s great that legacy workloads really don’t need the newest silicon.

    i’m not gonna argue that 8-bit is better than 64-bit but for some projects an 8-bit PIC12 really fits best. because of that, microchip will hopefully continue to make PIC12s for a good while longer. i don’t really care if they update them. the PIC12 has been a great resource for decades. so far as i’m concerned, the last feature added to 8-bit micros of any value is storing program code in flash instead of OTP ROM (or external memory for some).

    presumably the STM32 will carry on for a good while, even after ARM abandons 32-bit cortex-M. eventually (probably sooner than later) someone will make a 64-bit embedded ARM with similar features and power consumption, and it’ll slowly grow to replace the STM32.

    my point is, the end of updates isn’t a problem if you really only wanted/needed the older features. on the 486 that i salvaged to play retro games, *of course* i’m gonna use a retro OS. the fact that i can’t run the newest linux kernel on it is not a problem.

    1. Microchip will be making 8-bit PICs and AVRs for many decades, because no one needs a 64-bit ARM clocked at 1.8 GHz for simple tasks, like operating a thermostat. Except for bad programmers…
      I think the only family they dropped was PIC17. As for their 32-bit portfolio they won’t drop PIC32 for many years – they invested too much to compete with ARM…
      STM32 also won’t disappear as these offer great performance at low cost and low power. Which is great for any small, battery-operated gadget. And are easier to program in normal languages than more powerful 32- and 64-bit ARMs…

      1. Yeah, I think the article was confusing, leading off with a mention of 8 bit mcus and then only saying at the end that the announcement was only for application processors (cortex a), not microcontrollers.

        1. “Yeah I think the article was confusing”

          I’m shocked, simply shocked! And what’s this about an “8 bit mucus”? Ah, you typed it right, I just read it wrong.

      2. There are plenty of applications for which 8-bit microcontrollers are perfectly adequate, and they will stay that way for years to come. How much processor speed do you need in a toothbrush or a keyboard?

        The price difference between 8-bit and 32 bit microcontrollers is also small, but it still may be significant for volume production.

        8-bitters do have another advantage, though. Most of the 8-bitters are capable of running on 5V. You can directly power them from a Li-Ion cell and drive MOSFETs (as long as you do not need to switch them too often). 32-bit microcontrollers that run on 5V are hard to find.

        1. [Dave Jones] of EEVBlog found a 4-bit uC in an electric toothbrush.

          Also are there any 32-bit microcontrollers in DIP package? There used to be at least one, tiny ARM uC in 8-pin DIP, but it’s discontinued. It was used in MIDI controlled synth that fit in the DIN-5 plug…

  5. Two points: First – I still have flat-blade screwdrivers, and Phillips-head ones too. Those “better” Torx, hex, and so many other head types did not come close to eliminating the flat-blade. But if we focus on the application domain of the ARM A series, nearly 100% phones, that industry is clearly not embracing 32 bit anymore. It’s hard to even get quad-core instead of octa-core. Leading edge lithography is pulling us there, as we need more stuff to lay down on those ever smaller footprints, because “why not?”.

    Second – Intel’s 8096 (only the 80196 is mentioned in the article) is not a derivative of their 8051, which is itself a derivative of the 8048. I’ve used all of them, but mostly the 80C196 at my work before I retired. It was EOL a long time ago, but our design was “committed” to that part (read as: managers didn’t want to invest in newer technology), so they did a giant lifetime buy and they still have a bunch, more than 5 years after I retired. The story I was told about the RISC-based 8096’s origin was that it was developed for GM for use in autos. That architecture soon ran out of steam for that industry, and they switched to more powerful platforms and Intel no longer pursued that market. I also used an MSP430 16-bit part at work before retirement. We were finally transitioning to the ARM M series, but it was slow going. They didn’t even want to pay for the support tools of the older parts, let alone buy new ones, and then there’s the learning curve for everything.

    1. 8096 is derived from the 8061, which was used in Ford’s EEC-IV ECU. IIRC the 8096 had a more conventional data/address bus setup vs the 8061 that shared some (all?) data/address pins and required some funky clock cycle hijinx.

      1. My bad, I guess. My info came from my direct experience with the 80C196, which I started using in 1989. I relied on “open” (i.e. not hidden behind non-disclosure agreements) information from Intel reps. I didn’t know its history was more extensive than what I was led to believe. Also, I didn’t have the power of the internet to help me; it was mostly a few books and lots of work. If you fly on an airplane, my work still provides safe ILS landings at airports (somewhere near 1200 airports around the world at the time of my retirement in 2015). My work wasn’t in the automotive field.

    2. Who exactly said that flat-bladed and Phillips screwdrivers were obsolete? Arguing with straw men is not a good way to make a point!

      Also, do you really think that there are no valid reasons for AArch64? As you say, “why not”? Have you noticed that the encryption required for safe browsing is driving 64 bit development? Soon enough we will need far more power in our phones to deal with more and more advanced encryption. Bigger and bigger storage devices need more address bits, and calculating these large numbers on 32 bit systems is painfully slow.
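
      The big-number point can be made concrete with a toy limb-based adder (my sketch, not any real crypto library): with 32 bit limbs a 256 bit addition takes eight carry-propagating steps, while 64 bit limbs need only four.

        #include <stdint.h>

        /* Add two 256 bit numbers stored as little-endian 32 bit limbs. */
        void add256(uint32_t r[8], const uint32_t a[8], const uint32_t b[8]) {
            uint64_t carry = 0;
            for (int i = 0; i < 8; i++) {
                carry += (uint64_t)a[i] + b[i]; /* limb sum plus incoming carry */
                r[i]   = (uint32_t)carry;       /* low 32 bits are the result limb */
                carry >>= 32;                   /* high bit becomes the next carry */
            }
        }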

      Oh and sure, comparing 8 bit microcontrollers to modern 64 bit systems is perfectly valid, just as comparing apples to radial tires, both will give equally valid results.

  6. Sounds good to me. The 64 bit ARM A series is a lot cleaner to deal with than 32 bit anyway — and it’s hard to see any reason they would want to spend more energy on 32 bit. On the other hand, ARM doesn’t produce any silicon, so it is hard to see why some company could not continue to crank out 32 bit chips if they are already doing so.

    As for all this 8 bit chatter, how does that have anything to do with this article?

    1. I agree. With something targeting battery applications, as ARM so often does, every bit of leakage counts. Every unused transistor sitting there for 32 bit compatibility would be a small leakage path, and all those add up.

      If for some reason there’s some weird 32-bit only binary that’ll never provide a 64 bit binary, qemu will probably do a sufficient job. Besides, if you (I mean random people on the internet and not any specific “you” in this thread) still absolutely insist on having a 32 bit only arm device, there’s still a tonne of manufacturers out there providing armv7a socs.

      1. Not just leakage, but all those MOSFET gates being charged and discharged on transmission gates and multiplexers with 64 lanes that are passing only eight or fewer bits of useful information.

    1. 42 bits for a total of 4 TB of memory is a somewhat nice number.
      But 42 bits isn’t architecturally a nice number as far as 8 bit bytes are concerned.

      48 bit would be a fair bit more logical, or 40 bit. I would though personally lean towards 48 due to higher end systems currently pushing past the 1TB mark. 256 TB is however also a bit of a limit, but realistically, RAM isn’t going to get that dense/cheap in a long time, though quantum tunneling issues could put a complete halt on RAM density increases in the near future.

      Not to mention that an increased parallelism is likely to skew markets towards more multi node solutions, something that is already the case. Though, from an ISA standpoint, one can ask if one needs a global address space to begin with. One could technically give each thread/process its own address space, where address overlap is handled in hardware, since one still knows the thread/process currently running.

      Though, who knows, 42 might just be a reference to a popular book.

  7. So…the “Trash the RISC-V” campaign didn’t work, eh? And this is the result of the very best of what the rocket-scientist mousketeers…err, make that ‘marketeers’, could come up with to replace The Trash Campaign? Obviously.

    ARM is becoming irrelevant. Much as they will not admit it, they continue to grow more scared of RISC-V. This is a marketing move, not one motivated by any engineering (customer) design necessities. Perhaps they should invest their energies BACK into ‘bad-mouthing’ RISC-V: even though that was a monumental flop (seen any mud-slinging by ARM against RISC-V in a while?), it’ll cost far less than this does.

    The customer will use whatever does the specific job for the least cost.
    Here’s a big clue for you, ARM: by the time you have this particular strategy available, it will be too late. Your would-have-been customers will have been designing in, and using, PICs, RISC-V, and AMD.

    1. ARM still has 90% market share for both mobile and IoT devices. When will we have the first RISC-V smartphone that can compete in performance with top-of-the-line ARM phones? I expect it in 2-5 years, or even later. And probably from Chinese manufacturers, like Huawei or Xiaomi. And while we wait, manufacturers of ARM chips will work hard to improve them. Until RISC-V proves itself in the mass market, ARM will be fine. And for that we will have to wait…

      1. it’s not the value (market share) today, but rather the rate of change (delta market share) that matters. in technology, it is continuously astonishing how even the tiniest trend that appears to show initial exponential growth really carries on through several doublings. that’s why everyone’s obsessed with the singularity, after all.

        i really don’t know who to bet on, but if i was ARM, i really would be at least a little bit worried. i mean, you don’t watch ARM go from a dismissable niche to a serious challenge against intel x86 in 10-15 years and not realize that no market position is safe, long term.

        1. The fact that Apple’s ARM SoC competes with both Intel and AMD consumer-grade CPUs will keep ARM’s value and market share high. How many RISC-V chips are on the market now? One? Two? Five? And they are in the form of devkits, not consumer products. Also consider how much various companies have invested in developing ARM chips. They won’t suddenly switch over to RISC-V. Especially when Apple, with the M1, has proven that they can squeeze more performance from them.

          Unless some company releases a RISC-V powered consumer product that can outperform the competition, or at least be cheaper than the competition, ARM will keep its market share and dominant position…

    2. Most current RISC-V chips don’t check a lot of requirements boxes yet, but it may become a viable option at some point.

      The Linux kernels for aarch64 and amd64 allow seamless cross-platform coding in many cases. Additionally, having a 64bit standard code base on IoT that behaves the same on each platform is very convenient (essentially one branch to test).

      There are a lot of applications that will not even compile for 32bit systems anymore. Soon Linux will also drop 32 bit, and the 64-bit time_t will finally fix the Year 2038 epoch rollover problem.
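
      A minimal sketch of that rollover (assuming a legacy 32-bit signed time_t, which is exactly what the 64-bit time_t replaces):

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            /* INT32_MAX as a Unix timestamp is 2038-01-19 03:14:07 UTC. */
            int32_t old_time = INT32_MAX;
            /* One more second wraps negative (back to 1901) on 32-bit time_t. */
            int32_t wrapped = (int32_t)((uint32_t)old_time + 1u);
            printf("32-bit time_t, one second later: %ld\n", (long)wrapped);
            /* With a 64-bit time_t the same tick is a non-event. */
            int64_t new_time = (int64_t)INT32_MAX + 1;
            printf("64-bit time_t, one second later: %lld\n", (long long)new_time);
            return 0;
        }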

      8bit/16bit/32bit bare-metal microcontrollers will likely remain active lines, as real-time code is a different problem area from a multitasking OS/app. Micro-power applications with simple, predictable 8bit PIC/AVR parts are likely to remain in use for the foreseeable future, for many of the same reasons the 555 is still made.

    3. I expect that ARM will figure out after a while that 64 bits makes sense for the Cortex-A series, but not so much for Cortex-M. If their issue is that they don’t want to have two fundamentally different products, well, they’re already there. I was considering using a Raspberry Pi Zero board (which uses a Cortex-A) rather than a Pi Pico (which uses a Cortex-M) for projects that just need a more powerful microcontroller, but then I realized that it is far more difficult to do simple things with complex CPUs. In effect, once you get beyond a certain point (and that point lies between Cortex-M and Cortex-A), you almost HAVE to use an operating system, because of all the setup that has to be done when the processor has things like memory protection.

      1. Or maybe I should have read the article more carefully. ARM is abandoning 32 bits! ARM is abandoning 32 bits! ARM is abandoning 32 bits. Oh, but not for the Cortex-M series. D’oh!

        1. My radio dial is filled with stations that play nothing but “classic” rock. Elvis, the Beatles, the Who are all as popular as ever. Rock and roll will never die. And it’s not just rock. Mozart, Bach, Beethoven etc are also still very popular. New “popular” music is not needed when our existing selections are still so popular. Rock and roll will never die.

    1. Nah. There is no good reason to make *all* µCs 64 bit. Nice to play with, but no point really.
      32 bit has a major advantage in providing a flat address space and memory mapping.
      64 bit makes a lot of sense on desktops/laptops etc.
      But on microcontrollers? Not so much.

  8. How long before x86 CPUs go 64 bit only with 16 and 32 bit support moved to BIOS level emulation or to a small core that can be repurposed for power management or whatnot once the big cores come online for 64 bit operation?

  9. I don’t see how any operating system is dropping 32 bit support. Quite the opposite, they all do support 32 bit applications and will do so for a very long time, or they’d make all the programs/apps obsolete for no good reason.

    The driver side and low level system side is another story, but there is no good reason to force programs to become 64 bit exclusively.

    And I say that as someone who couldn’t wait to finally get a fully 64 bit system.

    Sometimes this drive to make everything obsolete is really an obsession, same as making everything old.

    1. Yeah, “the writing’s on the wall” for Linux and Windows dropping 32 bit support? Windows, maybe, as it’s targeting application processors (desktop/laptop/server/tablets) only, but Linux is used in a lot of deeply embedded places. There’s really no point in having a system with support for more than 2 GB of process memory space if you’ve got a soldered-on 128 MB of RAM… and the people paying Linux developers *sell* devices for these markets; outside of the Windows/OS X domain, there’s way more than the x86/ARM duopoly, and for good (as in: actually technically valid) reasons.

      1. I’m using 32-bit Linux, but for five years all 8 gigs of my RAM have been seen by Linux.

        I got that refurbished i7 because of Slackware 14.2’s release, but I was still thinking in terms of saving space, so I went with 32 bits.

        When Slackware 15.0 comes out any day now (it’s been in beta for a while), I’ll probably go 64 bits to give it a try.

        Twenty years ago tomorrow, I bought a used 200MHz Pentium to run Linux, and I think I installed it as soon as I got home with it. So, twenty years of Slackware.

      1. The kernel supports 32-bit cores, but any generic RISC-V distro only supports 64-bit hardware.
        Moreover, there is no support for running 32-bit user space on rv64 kernels, or on any of the rv64 hardware I have seen so far.

        On 64-bit x86 or Arm hardware with limited memory resources, it makes a lot of sense to run 32-bit user space code, and 32-bit Arm hardware has its place for the same reasons, but RISC-V developers are better off concentrating on getting 64-bit RISC-V working, as they lack an established ecosystem of 32-bit RISC-V software.

        For Arm CPU cores in turn, dropping 32-bit support is a logical step, since aarch32 and aarch64 are rather different, and a core design that supports only aarch64 can make a couple of design tradeoffs that are not possible on cores that support both.

        1. On x86, the “x32” mode actually uses the 64 bit instruction set with 32 bit pointers, solving the pointer size issue while retaining most of the advantages of 64 bit. Not sure if 64 bit ARM has something similar, but I’d be surprised if it didn’t.
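
            The difference is easy to see with a three-line program (the gcc flags are real; the program is my sketch, and -mx32 requires an x32-enabled toolchain and kernel):

              /* sizes.c: compare ABIs on an x86-64 Linux box:
                   gcc -m64  sizes.c && ./a.out  -> void*: 8, long: 8
                   gcc -mx32 sizes.c && ./a.out  -> void*: 4, long: 4,
                   but still with 64-bit registers and instructions. */
              #include <stdio.h>

              int main(void) {
                  printf("void*: %zu, long: %zu, long long: %zu\n",
                         sizeof(void *), sizeof(long), sizeof(long long));
                  return 0;
              }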

          1. Support for this has existed outside of the mainline kernel for a long time, but we never merged it on arm64 in order to limit the number of supported user space ABIs.

            MIPS has this in the form of the ‘n32’ ABI, but generally nobody wants to do this any more for any other architectures, though it does get suggested occasionally.

          2. My first “personal” computer was an 8008-based Mark-8 with a lot of hand-made add-ons. That was an 8 bit processor with 14 bit addressing (2 bits to decode specific machine cycles). Later I had access to a TMS-1000 4 bit processor with 64 byte “code pages” (addressing within a page was not sequential, for “security”). Then I did the National SC/MP, Fairchild F8 series, Motorola 6800, 6809, Intel 8088, 8086, 80188… and that was within a span of less than 10 years. My access to the 8008 (and most other early processor access) required “hand assembly” — using the manufacturer’s published instruction set and hand calculating addressing, especially relative branch offsets, which I screwed up all too easily. These invariably were hand entered using “debug monitor” code in a UVEPROM to load programs into SRAM. The exception was the Mark-8’s 8008, which used synchronous counters to “jam” code onto the data bus with a write strobe from a one-shot. I even went so far as to demonstrate my college senior design project, a 1702A 256 byte UVEPROM programmer, to my professor (his jaw dropped). I wrote up my report on a REAL Underwood typewriter back in early 1977.

            What’s my point? Well, each had a distinct instruction set. There were different approaches for populating that “instruction set space”, and different ways to do addressing. The easiest was, of course, “extended” addressing that allowed JUMPing to any location within the address range. As I said earlier, there was relative offset addressing for branch instructions — done for code compactness. Early instruction set variants tried to preserve binary compatibility, i.e. the code could run directly on both variants without reassembly or relinking. Later, that was a lesser issue as instruction sets exploded: e.g. 8008 => 8088 => 80188 => 80286 => 80386, etc. The same was true for so many other processor family extensions. Some took different approaches, like the ARM Thumb instruction set for pseudo-16 bit operation when they actually wanted to “down-grade” their 32 bit operations and make them more compact. Another issue related to 6502 variants (both licensed and unlicensed second sources) was “unused op codes”. After the success of the Apple computer, lots of people found that the MOS Technology coding had unused codes that actually did something. Those were used by some to advantage, and by others to protect their code’s proprietary nature.

            So, all in all, regardless of a processor architecture’s operation, the number of bits used for data and/or addressing, unused op-code space, and modes of operation (like Intel’s escape codes to support various coprocessors) weave a tapestry of instruction set coding. Some may be binary compatible, but most now need to be recompiled to be retargeted. This parallels the work Wirth did with Pascal and P-code and other variants of that practice. I even recall trying to use someone’s Fortran-to-C converter, which essentially was a Fortran (variant) compiler that used C as its “P-code”.

            These are all artifacts of the evolution of coding as I am an artifact of what and how I coded.

    2. AArch64 does NOT support 32 bit ARM instructions when running in 64 bit mode. Future AArch64 devices will be removing all 32 bit support; 32 bit code won’t run at all. Welcome to the future!!!

    3. > I don’t see how any operating system is dropping 32 bit support.

      Yeah, tell that to my 32-bit SPARCstation gathering dust since Linux dropped the whole architecture. FreeBSD and OpenBSD too.

  10. I’d expect some 64 bit MCUs to appear some time in the future but it’ll augment the MCU range rather than replace it, unlike in desktop/laptop/mobile devices. These general purpose devices gain function the more powerful they get.
    MCUs are used in devices that perform very specific tasks that are optimized for function, power saving and cost. Complexity increases cost.
    Setting up an 8 bit MCU is so much simpler and faster than setting up a 16 or 32 bit MCU, with all its IO mapping matrix, interrupt levels, clock modes, DMAs, etc. (see the sketch below).
    So no, 32 is here to stay, though I guess for general purpose devices it may disappear.
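
    As an illustration of that setup gap, a sketch of “make pin 5 an output and toggle it” (assuming avr-libc on an ATmega328 versus the CMSIS device header on an STM32F4; register names are from those parts’ headers):

      #ifdef __AVR__
      #include <avr/io.h>
      void led_init(void)   { DDRB  |= (1 << DDB5); }   /* one register write */
      void led_toggle(void) { PORTB ^= (1 << PORTB5); }
      #else
      #include "stm32f4xx.h"
      void led_init(void) {
          RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN; /* GPIO block is dead until clocked */
          GPIOA->MODER |= GPIO_MODER_MODER5_0; /* set PA5's mode field to output */
      }
      void led_toggle(void) { GPIOA->ODR ^= (1u << 5); }
      #endif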

    1. Why is there even a need for this in the micro-controller world? Most micro-controllers are hardly using more than a few megs of ram. If there’s not a need to address additional memory it would seem 64bit would just make them more complex/expensive/power hungry than lower bit counterparts.

      1. It’s NOT needed in the microcontroller world. Which is why they specifically say that this does not apply to the Cortex-M (“mobile”, aka microcontroller) products.

  11. I read the links that are described as ARM announcing that they’re dropping 32 bit support, but the links don’t say anything like that.

    They’re talking specifically about their application processors that are targeted at mobile devices, and about Android requiring 64 bit versions of apps.

    This makes sense, things using 32bit ARM are already mostly microcontrollers. And nothing is changing there.

  12. You mean my 4004 4 bit processor is obsolete? Is 4 bits really dead? 😁

    When I started at NCR the 4 bit processor was used heavily until the NCR MED-80 which was an 8080 clone but had a multiplexed bus that operated in 8/16 bit modes to address NVRAM and EAROM.

  13. 32 bit needs to go away for general purpose computing. but its still great for micros. if 8 or 16 bit parts continue to exist, its going to be down to cost or power consumption and cost has gotten low enough to become irrelevant. im not really sure who wins the power consumption game. seems to come down to leveraging power saving features available on each part if you really need to sip power. but i have a feeling the 32 bit parts use less power while under load just from being built on a newer process node.

      1. And you should check the errors these AMD Ryzen processors produce 24/7 because of low-voltage operation. lol.

        Intel doesn’t have these processor errors, btw. Because their voltage is much higher…

  14. One problem I see with processors going to “more bits” is a software process called “thunking”.

    I worked with OS/2 at IBM back in the day. When OS/2 went to 32 bit, a large portion of the drivers and other important code was still 16 bit. The programmers built what they called a “thunking layer” that “thunked” 32 bit code into 16 bit drivers (and sometimes the other way).

    I don’t know ARM very well but suspect that programmers will be programmers – and there will be a “Conversion layer” somewhere rather than a proper re-write.
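
    For illustration, the marshalling half of such a layer might look like this (entirely my sketch with hypothetical names; a real thunk also has to switch CPU modes, which plain C can’t express): the narrow side’s structs hold 32 bit addresses, so the shim validates and repacks before calling down.

      #include <stdint.h>

      /* What the 64 bit caller hands us: a pointer smuggled in a u64. */
      struct request64 { uint64_t buffer; uint32_t length; };
      /* What the legacy driver expects: same fields, narrow address. */
      struct request32 { uint32_t buffer; uint32_t length; };

      /* Hypothetical legacy entry point, reachable only with narrow addresses. */
      extern void legacy_driver_io(struct request32 *req);

      int thunk_io(const struct request64 *req) {
          if (req->buffer > UINT32_MAX)
              return -1; /* buffer not addressable from the 32 bit side */
          struct request32 low = { (uint32_t)req->buffer, req->length };
          legacy_driver_io(&low);
          return 0;
      }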

    1. The actual video is way more nuanced than one would expect from the clickbait title, and well worth watching.

      I wrote a related article (specific to the Linux kernel) last year at https://lwn.net/Articles/838807/ that provides some more background and some other concerns.

      The most important point that I missed in the video is how old CPU cores have a really long life: There are still new ARM926 (Armv5) based SoCs in 40nm and new Cortex-A7 based SoCs running Linux all the way down to 5nm (in my LWN article I incorrectly stated that the end was 28nm), and new embedded systems still use old SoCs for a long time. If there is another “little” Armv9 core with aarch32 EL0 support in 2022, this may easily be shipping with 32-bit Linux user space in new embedded products during the 2040s.

      What I agree with, though, is that with Arm ending the roadmap for 32-bit mobile processors, it’s obvious that new 32-bit (Linux) products will only become less common over time.

  15. I started with the Z80 (ZX Spectrum), then I saw the sunrise of 32 bit, and I will see its sunset. I saw the sunrise of 64 bit; maybe I will also see the sunrise of 128 bit. I had no phone line at home when I was a child, and now I have a smartphone with video chat and fiber at home. At primary school I read a comic about a “man on Mars”… just fiction, and if I am lucky I will still be around to see steps on Mars. What a lovely timeframe to be alive… I am in the future!

  16. TI made the MSP430 16-bitter accessible via the eZ430 platforms in combination with IDEs like TI Code Composer, IAR Embedded Workbench, and Forth tools like SwiftX, VFX, and Mecrisp. The MSP430 had small form factor nodes and RF modules, even including Bluetooth. Indeed, in December Hackaday identified an eZ430 Chronos watch (circa 2010) repurposed as a medical alert device.
    Of course, TI now has a 32-bit ARM-based MSP platform in the MSP432.

  17. My main problem is: what happens to all those still-working x86 computers? For example, for writing a text or dabbling around in $programminglanguage, my old EeePC 1000 running Debian is still enough. It can still connect to anything through ssh just fine, and it can still, if you bring some patience, browse the web just fine. I don’t see myself throwing a working machine in the dumpster just so that I can follow capitalism’s mantra of “Buy New!”. And if that means in the future that I need to compile my own shit instead of using the repository, then so be it.

  18. Do we really need sensationalized headlines? Of course 32 bit isn’t dead, and of course it’ll still be around for many decades. The title really feels like click-bait.
