Ethernet History: Why Do We Have Different Frame Types?

Although Ethernet is generally considered to be a settled matter, its history was anything but peaceful, and its standardization process (under IEEE Project 802) has left traces to this very day. This is very clear when looking at the different Ethernet frame types in use today, alongside the many more historical types. While Ethernet II is the most common frame type, 802.2 LLC (Logical Link Control) and SNAP (Subnetwork Access Protocol) are the two major remnants of a struggle that raged throughout the 1980s, even before IEEE Project 802 was created. [Daniel]'s article covers this history in-depth, with all the gory details.

The originally proposed IEEE 802 layout, with the logical link control (LLC) providing an abstraction layer.

We covered the history of Ethernet’s original development by [Robert Metcalfe] and [David Boggs] while they worked at Xerox, leading to its commercial introduction in 1980 and eventual IEEE standardization as 802.3. As [Daniel]’s article makes clear, much of the problem was that it wasn’t just about Ethernet, but also about competing networking technologies, including Token Ring and a host of others, each with its own gaggle of backing companies.

Over time this condensed into three subcommittees:

  • 802.3: CSMA/CD (Ethernet).
  • 802.4: Token bus.
  • 802.5: Token ring.

An abstraction layer (the LLC, or 802.2) would smooth over the differences for the protocols trying to use the active MAC. Obviously, the group behind Ethernet and its Ethernet II framing (DIX) wasn’t enamored with this and pushed Ethernet II framing through via alternate means. LLC survived as well, but its technical limitations caused it to mutate into SNAP. These days network engineers and administrators can still enjoy the fallout of this process, but it was far from the only threat to Ethernet.
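The practical upshot of this coexistence is that a receiver must disambiguate the frame types on the wire: the two bytes after the MAC addresses are an EtherType (0x0600 or greater) in Ethernet II framing, but a length field (1500 or less) in 802.3 framing, in which case the LLC header that follows tells the rest of the story. As a minimal sketch (the `classify_frame` helper is hypothetical, not from any real stack), the dispatch logic looks roughly like this:

```python
def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet frame by the two bytes after the
    destination and source MAC addresses (offset 12)."""
    if len(frame) < 16:
        raise ValueError("frame too short")
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= 0x0600:
        return "Ethernet II"       # field is an EtherType
    # Field is an 802.3 length; inspect the LLC header that follows.
    dsap, ssap = frame[14], frame[15]
    if dsap == 0xAA and ssap == 0xAA:
        return "802.2 SNAP"        # SNAP extension of LLC
    if dsap == 0xFF and ssap == 0xFF:
        return "raw 802.3"         # Novell's header-less variant
    return "802.2 LLC"

# Example: 0x0800 after the MACs marks an Ethernet II frame carrying IPv4.
macs = bytes(12)
print(classify_frame(macs + b"\x08\x00" + bytes(46)))  # Ethernet II
```

The magic threshold works because the smallest assigned EtherType (0x0600) is larger than the maximum 802.3 payload length (1500, or 0x05DC), so a single comparison keeps both framings on the same wire.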

Ethernet’s transition from a bus to a star topology was enabled by early bridges like the LANBridge 100, allowing it to scale beyond the limits of a shared medium. Advances in copper wiring (and fiber) have further enabled Ethernet to scale from thin- and thicknet coax to today’s range of network cable categories, taking Ethernet truly beyond the limits of token passing, CSMA/CD and kin, even if their legacy will probably always remain with us.

Second Human Neuralink Brain Implant Recipient Uses It For CAD And Videogaming

As Neuralink works towards getting its brain-computer interface technology approved for general use, it now has two human patients who have received the experimental implant. The second patient, [Alex], received the implant in July of 2024 and is reportedly doing well, being able to play games like Counter-Strike 2 without his old mouth-operated controller. He’s also creating designs in Fusion 360 to have them 3D printed.

This positive news comes after the first patient ([Noland Arbaugh]) suffered major issues with his implant, with only 10-15% of the electrodes still working after he received the implant in January. The issue of electrode threads retracting was apparently known about for years prior.

We analyzed Neuralink’s claims back in 2019, when its founder – [Elon Musk] – was painting lofty goals for the implant, including reading from and writing to the brain, integration with AIs and much more. Since that time Neuralink has been mostly in the news for the many test animals that it euthanized during its test campaign prior to embarking on its first human test subjects.

There also appears to be a continuing issue with transmitting the noisy data from the electrodes, as it is far more data than can be transmitted wirelessly. To solve this seemingly impossible problem, Neuralink has now turned to the public with its Neuralink Compression Challenge, hoping someone will produce a miraculous lossless compression algorithm for it.

With many challenges still ahead, it ought to be clear that it will take many more years before Neuralink’s implant is ready for prime time, but so far it seems to at least make life easier for two human patients.

Continue reading “Second Human Neuralink Brain Implant Recipient Uses It For CAD And Videogaming”


The First Mass Produced DRAM Of The Soviet Union

KE565RU1A (1985) in comparison with the analogue from AMD (1980)

Although the benefits of semiconductor technology were undeniable during the second half of the 20th century, there was a clear divide between the two sides of the Iron Curtain. Whilst the First World had access to top-of-the-line semiconductor foundries and engineers, the Second World was having to get by with scraps. Unable to keep up with the frantic pace of the USA’s developments in particular, the USSR saw itself reduced to copying Western designs and smuggling in machinery where possible. A good example of this is the USSR’s first mass-produced dynamic RAM (DRAM), the 565RU1, as detailed by [The CPUShack Museum].

While commercial mass production of DRAM in the West began in 1970 with the Intel 1103 (1024 x 1) and its three-transistor cell design, the 565RU1 was developed in 1975, with engineering samples produced until the autumn of 1977. This DRAM chip also featured a three-transistor design, with a 4096 x 1 layout and characteristics reminiscent of Western DRAM ICs like the TI TMS4060. It was produced at a range of microelectronics enterprises in the USSR, including Angstrem, Mezon (Moldova), Alpha (Latvia) and Exciton (Moscow).

Of course, by the second half of the 1970s the West had already moved on to more efficient, single-transistor DRAM designs. Although the 565RU1 was never known for being that great, it was nevertheless used throughout the USSR and Second World. One example of this is a 1985 article (page 2) by [V. Ye. Beloshevskiy], the Electronics Department Chief of the Belorussian Railroad Computer Center, in which the unreliability of the 565RU1 ICs is described, along with ways to add redundancy to the (YeS1035) computing systems.

Top image: 565RU1 die manufactured in 1981.

Atari Announces The Atari 7800+ Nostalgia Console

Following the trend of re-releasing every single game console as some kind of modern re-imagining or merely an ARM-SBC-with-emulator slapped into a nice-looking enclosure, we now have the announcement from Atari that they will soon be releasing the Atari 7800+.

It’s now up for pre-order for a cool $130 USD, or $170 for a mega bundle with wired controllers, with shipping expected by Winter 2024. Rather than being a cute-but-non-functional facsimile like recent miniature Nintendo and Commodore-themed releases, this particular console is 80% of the size of the original 7800 console, and accepts 2600 and 7800 cartridges, including a range of newly released cartridges.

On the outside you find the cartridge slot, an HDMI video/audio output, a USB-C port (for power) and DE-9 (incorrectly listed as DB-9) controller ports, with wireless controllers also being an option. Inside you find a (2014-vintage) Rockchip RK3128 SoC with a quad-core Cortex-A7 that presumably runs some flavor of Linux with the Stella 2600 emulator and ProSystem 7800 emulator. This very likely means that compatibility with 2600 and 7800 titles is the same as for these emulators.

Bundled with the console is a new 7800 cartridge for the game Bentley Bear’s Crystal Quest, and a number of other new games are also up for pre-order at the Atari site. These games are claimed to be compatible with original Atari consoles, which might make this the biggest release year for 7800 games since the console’s launch, as only 59 official games were ever released for it.

Given the backwards compatibility of this new system, you have to wonder how folks who purchased the 2600+ last year are feeling right about now. Then again, the iconic faux-wood trim of the earlier console might be worth the price of admission alone.


How Jurassic Park’s Dinosaur Input Device Bridged The Stop-Motion And CGI Worlds

In a double-blast from the past, [Ian Failes]’ 2018 interview with [Phil Tippett] and others who worked on Jurassic Park is a great look at how the dinosaurs in this 1993 blockbuster movie came to be. The dinosaurs were originally conceived as stop-motion puppets with motion blur applied using a method called go-motion, and a large team of puppeteers was hard at work on turning the book into a movie when [Steven Spielberg] decided to go in a different direction after seeing a computer-generated Tyrannosaurus rex test made by Industrial Light and Magic (ILM).

Naturally, this left [Phil Tippett] and his crew rather flabbergasted, leading to a range of puppeteering-related extinction jokes. Of course, it was the early 90s, and computer-generated imagery (CGI) animators were still very scarce. This led to an interesting hybrid solution where [Tippett]’s team was put in charge of the dinosaur motion using a custom gadget called the Dinosaur Input Device (DID). This was effectively a stop-motion puppet, but tricked out with motion capture sensors.

This way the puppeteers could provide motion data for the CG dinosaurs using their stop-motion skills, albeit with the computer handling a lot of interpolation. Meanwhile ILM could handle the integration and sprucing up of the final result using their existing pool of artists. As a bridge between the old and new, DIDs provided the means for both puppeteers and CGI artists to cooperate, creating the first major CGI production that holds up even today.

Even if DIDs went the way of the non-avian dinosaurs, their dino-sized footprints will forever remain on the movie industry.

Thanks to [Aaron] for the tip.


Top image: Raptor DID. Photo by Matt Mechtley.

Cost-Optimized Raspberry Pi 5 Released With 2 GB RAM And D0 Stepping

When the Raspberry Pi 5 SBC was released last year, it came in 4 and 8 GB RAM variants, which currently retail from around $60 USD and €65 for the 4 GB variant to $80 and €90 for the 8 GB variant. Now Raspberry Pi has announced the launch of a third Raspberry Pi 5 variant: a 2 GB version which also features a new stepping of the BCM2712 SoC. It will sell for about $50 USD and features the D0 stepping that purportedly strips out a lot of the ‘dark silicon’ that is not used on the SBC.

These unused die features are likely due to the Broadcom SoCs used on Raspberry Pi SBCs being effectively recycled set-top box SoCs and similar. This means that some features that make sense in a set-top box do not make sense for a general-purpose SBC, but still take up die space and increase the manufacturing defect rate. The D0 stepping thus would seem to be based around an optimized die, with the only possible negative being a higher power density due to a (probably) smaller die, making active cooling even more important.

Whether 2 GB is enough depends on your use case, but knocking $10 off the price of an RPi 5 could be worth it for some. Perhaps more interesting is that this same D0 stepping of the SoC is likely to make it to the other RAM variants as well. We’re awaiting benchmarks to see what the practical difference is between the current C1 and new D0 steppings.

Thanks to [Mark Stevens] for the tip.

Historical Microsoft And Apple Artifacts Among First Christie’s Auction Of Living Computers Museum

Recently the Christie’s auction house released the list of items that will go up for sale as part of the first lot of Living Computers Museum items, under the banner “Gen One: Innovations from the Paul G. Allen Collection”. One auction covers many ‘firsts’ in the history of computing, including a range of computers like an Apple 1 and a PDP-10, as well as early Microsoft memos and code printouts. The other auctions include items such as a Gemini spacesuit as worn by [Ed White] and a signed 1939 letter from [Albert Einstein] to [US President Roosevelt] on the discovery by the Germans of a fissionable form of uranium from which a nuclear weapon could be constructed.

We previously reported on this auction when it was first announced in June of this year. At the time many were saddened to see the computer history museum and its related educational facilities vanish, and there were worries among those who had donated items to the museum about what would happen to their donations now that the museum’s inventory was being put up for sale. As these donations tend to be unconditional, the museum is free to do with each item as it sees fit, but ‘being sold at auction’, probably to a private collector, was likely not on donors’ minds when filling in the donation form.

As the first auctions kick off in a few days we will just have to wait and see where the museum’s inventory ends up, but it seems likely that many of these items which were publicly viewable will now be scattered across the globe in private collections.

Top image: A roughly 180° panorama of the “conditioned” room of the Living Computer Museum, Seattle, Washington, USA. Taken in 2014. (Credit: Joe Mabel)