That Handheld 386SX Gets A Teardown

A few weeks ago our community was abuzz with the news of a couple of new portable computers available through AliExpress. Their special feature is that they are brand-new, 2023-produced retrocomputers, one with an 8088 and the other with a 386SX. Curious to know more? [Yeo Kheng Meng] has one of the 386 machines, and he’s taken it apart for our viewing pleasure.

What he found is a well-designed machine that does exactly what it claims, and which runs Windows 95 from a CF card. It’s slow, because it uses an embedded version of the 386SX, the 386 variant with a 16-bit external bus originally brought to market as a chip that could work with 16-bit 286-era chipsets. But the designer has done a good job of melding old and new parts to extract the most from this vintage chip, and has included some decidedly modern features unheard of in the 386 era, such as a CH375B USB mass storage interface.

If we had this device we’d ditch ’95 and run DOS for speed, with Windows 3.1 where needed. Back in the day, with eight megabytes of RAM it would have been considered a powerhouse even before its form factor came into the picture, so there’s an interesting exercise for someone in getting a vintage Linux build running on it.

One way to look at it is as a novelty machine with a rather high price tag, but he makes the point that, considering the hardware design work that’s gone into it, the $200+ price isn’t so bad. With luck we’ll get to experience one hands-on in due course, and can make up our own minds. Our original coverage is here.

An Almost Invisible Desktop

When you’re putting together a computer workstation, what would you say is the cleanest setup? Wireless mouse and keyboard? Super-discreet cable management? How about no visible keeb, no visible mouse, and no obvious display?

That’s what [Basically Homeless] was going for. Utilizing a Flexispot E7 electronically raisable standing desk, an ASUS laptop, and some other off-the-shelf parts, this project takes the idea of decluttering to the extreme, with no visible peripherals and no visible wires.

There was clearly a lot of learning and much painful experimentation involved, and [Basically Homeless] kind of glosses over how the keyboard was embedded in the desk surface. A thin layer of resin was formed in-plane with the desk surface with the keyboard mounted just below it, and lots of careful fettling of the openings meant the keys could still be depressed. Since they don’t stand proud of the surface, the keys are practically invisible once painted. After all, you need that tactile feedback, and a projection keeb just isn’t right.

ChatGPT-inspired machine learning mouse emulator

Moving on: never mind an ultralight gaming mouse, how about a zero-gram mouse? Well, this is a bit of a cheat, as they mounted a depth-sensing camera inside a light fitting above the desk and built a ChatGPT-designed machine-learning model to act as a hand-tracking HID device. Nice idea, but we don’t get to see the code.
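For a flavor of what’s involved, here’s a minimal sketch of the same general idea (hand tracking driving the cursor), written by us rather than lifted from the video. It assumes an ordinary webcam and MediaPipe’s stock hand-landmark model in place of the depth camera and custom ChatGPT-assisted model, and it moves the cursor with pyautogui rather than presenting itself as a genuine HID device.

```python
# Minimal hand-tracking "mouse" sketch -- our own illustration, not the project's code.
# Assumes: an ordinary webcam, the mediapipe and pyautogui packages installed.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; its coordinates are normalized 0..1.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)

cap.release()
```

Mapping the fingertip’s normalized coordinates straight onto the screen is crude, and a real build would add smoothing, mirroring, and click gestures, but it shows how little glue code the basic idea needs.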

The laptop chassis had its display removed and was embedded into the bottom of the desk, along with the supporting power supplies, a couple of fans, and a projector. To create a ‘floating’ display, a piece of transparent plastic was treated to a coating of Lux Labs “ClearBright” transparent display film, which scatters the image from the projector with sufficient clarity to be usable as a PC display. We have to admit, it looks a bit gimmicky, but playing Minecraft on this setup looks a whole lotta fun.

Many of the floating displays we’ve covered tend to be for clocks (after all, timepieces are important), like this sweet HUD hack.


The Apple Silicon That Never Was

Over Apple’s decades-long history, they have been quick to adapt to new processor technology when they see an opportunity. Their switch from PowerPC to Intel in the early 2000s made Apple machines more accessible to a wider PC world already accustomed to x86 processors, and a decade earlier they moved from Motorola 68000 processors to take advantage of the scalability, power efficiency, and performance of the PowerPC platform. They’ve recently made the switch to their own in-house silicon but, as reported by [The Chip Letter], this wasn’t the first time they attempted to design their own chips from the ground up rather than using chips from other companies like Motorola or Intel.

In the mid-1980s, Apple was already looking to move away from the Motorola 68000 for performance reasons, and part of the reason it took so long to make the switch is that in the intervening years they launched Project Aquarius, an attempt to design their own silicon. As the article linked above explains, they needed a large amount of computing power to get this done and purchased a Cray X-MP/48 supercomputer to help, as well as assigning a large number of engineers and designers to see the project through to the finish. A critical error was made, though, when they decided to build their design around a stack architecture rather than a RISC one. They eventually switched to a RISC design, but the project still struggled to ever get a prototype working. The entire project was eventually scrapped and the company moved on to PowerPC, but not without a tremendous loss of time and money.

Interestingly enough, another team was designing its own architecture at about the same time, and ended up creating what would eventually become the modern-day ARM architecture, which Apple was involved with and currently licenses to build their M1 and M2 chips as well as their mobile processors. It was only by accident that Apple didn’t settle on a RISC design in time for their personal computers. The computing world might look a lot different today had Apple not languished in the early 00s as the ultimate result of their failure to develop a competitive system in the mid-’80s. Apple’s distance from PowerPC now doesn’t mean that architecture has been completely abandoned, though.

Thanks to [Stephen] for the tip!

ADATA SSD Gets Liquid Cooling, But Not Everyone’s Convinced

Solid-state drives (SSDs) were a step change in performance when it came to computer storage. They offered incredibly fast seek times by virtue of dispensing with spinning rust in favor of silicon. Now, some companies have started pushing the limits to the extent that their drives supposedly need liquid cooling, as reported by The Register.

The device in question is the ADATA Project NeonStorm, which pairs a PCIe 5.0 SSD with RGB LEDs, a liquid cooling reservoir and radiator, and a cooling fan. The company is light on details, but it’s clearly excited about its storage products becoming the latest piece of high-end gamer jewelry.

Notably though, not everyone’s jumping on the bandwagon. Speaking to The Register, Jon Tanguy from Crucial indicated that while the company has noted modern SSDs running hotter, it doesn’t yet see a need for active cooling. In Crucial’s case, heatsinks have proven enough. He notes that the NAND flash used in SSDs actually operates best at 60 to 70 °C. However, going beyond 80 °C risks damage, and most drives will shut down or throttle access at that point.

Realistically, you probably don’t need to liquid cool your SSDs, even if you’ve got the latest and greatest models. However, if you want the most tricked-out gaming machine on Twitch, there are plenty of products out there that will happily separate you from your money.

Is MINIX Dead? And Does It Matter?

Is MINIX dead? OSnews is sounding its death-knell, citing evidence from the operating system’s git log that its last updates happened as long ago as 2018. Given that the last news story on the MINIX website is from 2016 and the last release, version 3.3, came out in 2014, it appears they may have a point. But perhaps it’s more appropriate to ask not whether MINIX is dead, but whether it matters that the venerable OS appears to be no longer in development. It started as an example to teach OS theory before becoming popular in an era when there were no other inexpensive UNIX-like operating systems for 16-bit microcomputers, but given that successors such as Linux-based operating systems have taken its torch and raced ahead, perhaps its day has passed.

No doubt many of you will now be about to point out that MINIX lives on unexpectedly baked into the management engine core on Intel microprocessors, and while there’s some debate as to whether that’s still the case, you may have a point. But the more important thing for us isn’t whether MINIX is still with us or even whether it’s a contender, but what it influenced and thus what it was responsible for. This is being written on a GNU/Linux operating system, which has its roots in [Linus Torvalds]’ desire to improve on… MINIX.

Read more about the tangled web of UNIX-like operating systems here.

Hackaday Prize 2023: Building A Relay ALU

There’s much truth in the advice that, to truly understand something, you need to build it yourself from the ground up. That’s the idea behind [Christian]’s entry for the Re-engineering Education category of the 2023 Hackaday Prize. Built as an educational demonstrator, this is a complete arithmetic-logic unit (ALU) using discrete relays, and not high-density types either: these are the big honking clear-cased kind.

The design is neatly and intentionally partitioned along functional lines, with four custom PCB designs, each board operating on four bits. To handle a byte-length word, boards are simply cascaded, making a total of eight. The register, adder, logic-function, and multiplexer boards are the heart of the build, with an additional two custom boards for visualization (using an Arduino for convenience) and IO forming the interface. After all, a basic CPU is just an ALU and some control around it; the magic is really in the ALU.

The fundamental logical operations on the two operands {A, B} (A, ~A, B, ~B, A OR B, A AND B, and A XOR B) can be computed from just four relays per bit. The logic outputs do need to be fed into a 7-to-1 bit selector before reaching the output register, but that’s the job of a separate board. The adder function is the most basic of all: simply a pair of half-adders and an OR gate to combine their carries and generate the carry chain output.
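To make that bit-slice concrete, here’s a quick software model of what one slice computes. It’s our own illustrative sketch rather than [Christian]’s schematic: the seven logic outputs, the 7-to-1 selection, and a full adder built from two half-adders with an OR gate on their carries.

```python
# Illustrative model of one bit of the relay ALU -- not the actual relay schematic.

def logic_outputs(a, b):
    """The seven fundamental logic functions of operands A and B."""
    return {
        "A": a, "NOT A": a ^ 1,
        "B": b, "NOT B": b ^ 1,
        "A OR B": a | b, "A AND B": a & b, "A XOR B": a ^ b,
    }

def half_adder(a, b):
    """Sum and carry of two bits."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Full adder: two half-adders plus an OR gate combining their carries."""
    s1, c1 = half_adder(a, b)
    total, c2 = half_adder(s1, carry_in)
    return total, c1 | c2

# One bit-slice: either pick a logic output or take the adder path with carry.
a, b, cin = 1, 0, 1
print(logic_outputs(a, b)["A XOR B"])  # the 7-to-1 selector picks one logic output
print(full_adder(a, b, cin))           # (sum, carry_out) feeding the next slice
```

Chaining eight of these slices, with each carry_out feeding the next carry_in, gives the byte-wide ripple-carry adder the cascaded boards implement in relays.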

3D printed cable runs are a nice touch and make for a slick wiring job to tie it all together.

For a more complete relay-based CPU, you could check out the MERCIA relay computer project, not to mention this wonderfully polished build.

 

What Next For The SBC That Has Everything?

In the decade-and-a-bit since the first Raspberry Pi was launched we’ve seen an explosion of affordable single-board computers (SBCs), but as the prices creep up alongside user expectation and bloat, [Christopher Barnatt] asks where the industry will go next.

The Pi started with an unbeatable offer: $35 got you something similar to the desktop PC you’d had a decade earlier, able to run a Linux desktop on your TV from an SD card. Over the years the boards have become faster and more numerous, but ARM boards are now only nominally as affordable as they were in 2012, and meanwhile the lower end of x86 computing is now firmly in the same space. He demonstrates how much slower the 2023 Raspberry Pi OS distribution is on an original Pi compared to one of the early pre-Raspbian distros, and identifies in that a gap forming between users. From that he sees the people wanting a desktop heading towards x86 machines, and the bare-metal makers at the lower end heading for the more powerful microcontrollers which simply weren’t so available a decade ago.

We have to admit that we agree with him, as the days when a new Raspberry Pi board was a special step forward rather than just another fast SBC are now probably behind us. In that we think the Pi people are probably also looking beyond their flagship product, as the hugely successful launches of the RP2040 and the industrial-focused Compute Module 4 have shown.

What do you think about the SBC market? Tell us in the comments.
