It is hard to imagine today, but there was a time when there were several competing network technologies. There was Ethernet, of course. But you could also find token ring, DECnet, Econet, and ARCNet. If you’ve never dug into ARCNet, [Retrobytes] has a comprehensive history you can watch that will explain it all.
Like token ring, ARCNet used a token-passing scheme to allow each station on the network to take turns sending data. Unlike token ring and Ethernet, though, ARCNet’s hardware was much less expensive to set up. Along the way, you get a brief history of the Intel 8008 CPU, which, arguably, started the personal computer revolution.
Like most networking products of the day, ARCNet was proprietary. However, by the late 1980s, open standards were the rage, and Ethernet took advantage. Up until Ethernet was able to ride on twisted pairs, however, it was more expensive and less flexible than ARCNet.
The standard used RG-62/U coax and either passive or active hubs in a star configuration. Coax runs could be up to 2,000 feet long, so very large networks were feasible. It was also possible to share the coax with analog videoconferencing.
Looking back, ARCNet had a lot to recommend it, but we know that Ethernet would win the day. But [Retrobytes] explains what happened and why.
If you missed “old-style Ethernet,” we can show you how it worked. Or, check out Econet, which was popular in British schools.
Arcnet also allowed longer cable runs and was more forgiving in the cabling than ethernet, but significantly slower.
(this was in the pre-switch days where your entire ethernet network, end-to-end, was limited to the 100m limit, not just each cable)
10base5 had other problems, but not the 100m segment limit
As I remember it, the 100m limit came from the need to define a suitable wait time: the time for a signal to travel the maximum distance and back defined that wait.
10BASE5 had a 500m limit, but required stations to be connected at multiples of 2.5m. 10BASE2 had a 185m limit, and allowed stations to be connected anywhere as long as they were far enough apart.
The 100m limit is from 10BASE-T.
Yeah, that’s literally where the 2 and 5 came from: the 5 is the 500m max segment, the 2 is “close enough to 200m” max segment.
I have some plenum Ethernet coax being used right now as HF feedline. And a bunch of “cheapernet” aka RG-58 being used as BNC jumper cables.
Came here to say this, the 100m limit is for twisted pair CAT3 onward.
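To put rough numbers on that timing argument, here is a sketch in Python with an assumed cable velocity factor and the worst-case five-segment 10BASE5 path; it is not the official 802.3 delay budget.

    # Back-of-envelope CSMA/CD timing: a sender must still be transmitting
    # when a collision from the far end gets back to it.
    bit_rate  = 10e6                      # 10 Mbps
    min_frame = 64 * 8                    # 512 bits, minimum Ethernet frame
    slot_time = min_frame / bit_rate      # 51.2 us round-trip budget

    velocity  = 0.66 * 3e8                # assumed coax velocity factor
    max_path  = 5 * 500                   # worst case: five 10BASE5 segments
    round_trip = 2 * max_path / velocity  # raw propagation, ~25 us

    print(f"slot time {slot_time*1e6:.1f} us, cable round trip {round_trip*1e6:.1f} us")
    # The remaining ~26 us of the budget is eaten by repeaters, transceivers
    # and margin, which is why the per-segment limits are shorter than raw
    # propagation alone would allow.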
I’ve been putting together a “networking timeline”, that will hopefully allow one to compare “when was X developed” with “when was X deployable on available hardware”, and such. It’s here: https://docs.google.com/spreadsheets/d/1jTVYl5sFJaUAzV5tWW5LgIxx7V9zg1uhz1nRoEGCZNs/edit?usp=sharing
Comments welcome.
Nice 👍
Wow, so it took 30 years for modems to go from 300 to 9600 baud?? That is … something. But I feel that it kind of explains why there are still lots of slow defaults on serial ports.
I remember explaining to a customer why 2400 baud dialup was physically impossible.
Having to explain to people why they can’t get full speed out of their 56k modem. In the USA, due to regs, modems are limited to 53k max, so there’s a missing 3k that irks ignorant customers.
Still don’t know why companies in the USA had to advertise them as 56k when they weren’t allowed to reach the full 56k.
You have to remember the technology. The early modems were designed and built by the telephone companies, using transistors, inductors and capacitors. Integrated circuits made them smaller, but we were still in the frequency shift and QAM era. Only when processors were added could the rates go up using more “clever” modulation schemes. A 56k modem uses a lot of tricks to optimise the signal and use the most of the channel capacity. Nyquist would be proud.
weird to imagine SDR as an evolution of soft modems
Shannon, not Nyquist! Channel capacity comes from the Shannon-Hartley theorem and usually is just called the Shannon limit.
And the 56k modems didn’t actually use that many tricks – well, they did, but it wasn’t those tricks that allowed 56k throughput. It was the fact that the analog portion of the link was now much shorter, just user-to-telco (instead of all the way from user-to-ISP). The connection between the telco and ISP was purely digital, so there was no degradation.
That same 56k modem transported 20 years earlier wouldn’t have been able to get 56k performance because the physical connection was that much worse. In the pre-digital link era 56k would have been over the Shannon channel capacity.
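To put numbers on that Shannon point, here is a sketch with assumed bandwidth and SNR figures for an ordinary analog voice loop, not measured values.

    import math

    # Shannon-Hartley: C = B * log2(1 + S/N)
    B = 3100                          # ~300-3400 Hz usable voiceband, Hz
    snr_db = 38                       # a fairly clean analog line, assumed
    C = B * math.log2(1 + 10 ** (snr_db / 10))
    print(f"Shannon limit ~ {C/1000:.1f} kbit/s")   # ~39 kbit/s

    # V.34's 33.6k sits just under that. V.90's 56k downstream sidesteps the
    # analog SNR problem by treating the line as the telco's 8 kHz, 8-bit PCM
    # channel (a 64 kbit/s ceiling), which only works when the ISP side of the
    # call is digital end to end.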
Funny. 56k worked on my modem until the telco “upgraded” the hardware in the local switching office to digital gear. After that it was as though some kind of brick-wall bandpassing had been put into effect. 28k, fine, all day long, but no more 56k.
“After that it was as though some kind of brick-wall bandpassing had been put into effect.”
Yeah, there were plenty of telcos which did exactly that. Once you implement a digital backend you’re totally free to restrict the bandwidth you dedicate to any one customer. It’s just a question of how you’re transmitting the data, and obviously low bitrates mean cheaper hardware. You’re paying for voice, not data.
Individual customers could get 56k links prior to backends being digital if they were close enough to the ISP that the losses weren’t that bad. But obviously there wasn’t a push for a standard for it because the number of people that lived that close were small.
But yeah, there was this narrow window in time when ISPs started installing 56k-capable modems where the customers who lived close enough got great service, which then got murdered with less-capable digital backends – and by the time pressure for fast internet service was enough to force change you’d be looking at ADSL service to compete with cable.
Wasn’t the problem with making 56k work on dial-up modems due to the bandwidth limits of the telco system, which was designed for analog voice? Point-to-point digital lines like T1 over twisted pairs had existed since the ’60s.
Shannon.
My only excuse is bit rot in the wetware
T1 and 56k dedicated lines were carefully tuned (which is why they cost astronomical sums per month; we had a T1 line at work, and it was THOUSANDS a month). 56k over dialup was 56k over “take what you get” voice grade lines.
I have not tried an analog modem over a VOIP line. I suspect 300/1200 might work. Everything else would be a crapshoot. Surprised your 56k modem didn’t auto-degrade until the other end was able to receive…but maybe the training at the start of the connection failed?
Don’t forget ISDN. The selling point was that you got two channels in multiplex, so you could get 128k or 64k and voice at the same time, so you could remain connected and still use your phone. The telcos got greedy and charged by the minute for both channels separately, so you could never use it as intended.
I remember the Thomson Nanoréseau from a school trip to France: https://fr.wikipedia.org/wiki/Nanor%C3%A9seau
All the computers in the classroom were connected together via a cartridge plugged into their cartridge port, and the teacher’s machine was able to send the same program to all the machines in the classroom and run it. The wiki page says it operated from 1982, the maximum (?) speed was 500 kbps, and it could connect up to 31 computers together.
Cheers,
John
Ever hear of MAP (manufacturing automation protocol)? The auto industry was big on this for a while (late 80s, early 90s). The physical hardware was cable-TV stuff: a backbone with a translator at the head end, and fixed db-loop-loss boxes along the backbone from which RG6 drops emerged. The network carried PLC traffic, point-to-point modem traffic (I think Xicor made the modems we used), 6 MHz NTSC video channels (this was cable TV-type hardware, after all), and desktop computer data traffic. At Ford, the PS/2s (IBM PCs, NOT PlayStations) had network cards with F connectors on the back.
Chipcom did MAP (802.4?) as well. It never went anywhere, unless you count cable ISPs…
Why don’t you make a Wikipedia entry?
For a while there in the 90s when T1 lines were considered hot stuff, I liked to blow people’s minds by pointing out that the T-carrier system was engineered in the late 1950s and first deployed in 1962.
That was all internal to AT&T though, I don’t know when the first T1 circuits might’ve been sold to customers. And of course they were first used for voice with bit-robbed signalling; ESF and B8ZS came later because data couldn’t guarantee the ones-density needed for AMI/D4.
So, T1 belongs on the timeline, possibly in two places. One is 1962, but the other one, first customer T1 data circuit, I don’t have a year for.
Good point. The whole “what you could get in a leased line” probably needs its own column.
Cisco Systems initial “internet” connection was a 19.2kbps bare copper circuit to a friendly site.
I guess the range goes from “bare copper” to “dark fiber”?
There are actually two “T1” events to consider – one where you could get a “T1-rate” digital link (usually V.35) with a telco CSU/DSU box, and a second once you were allowed to get the T1 itself (perhaps for channelizing on your own) on an RJ45…
Arcnet was used by Kodak in some of their digital printers, like the Digimaster, to link the external devices like stackers to the printer.
DECnet is a protocol stack, not the media, and it most often ran over Ethernet, which DEC had a hand in standardising, along with Intel and Xerox – hence the ‘DIX standard’ (Ethernet II).
Acorn Econet was a great solution. And it worked where USA technologies might have worked on the lab bench but were hopeless in the real world. We ran 8-bit Econet technology when early PC networking on twisted cables was hopeless. In the end Ethernet gained dominance. But Acorn could still sell bridges that allowed 8-bit Econet to talk on Ethernet. These people were brilliant!! By the way, they are the ones that designed ARM chips, so it should be no surprise.
I would like to see another crack at using RF in a local network. Considering what can be done now with QAM – a version of which is the basis of broadband cable modems – a full-spectrum closed network consisting of multiple QAM/return channel pairs (usually based around 6 MHz channels) up to a practical limit of 1500 MHz (modern cable system utilization) could have incredible throughput. And low power, too, since you’re not really sending it far, unlike common cable modems. Bonus: shielded transmission medium.
And nothing is really stopping you from going beyond 1.5GHz… the run lengths will just be a bit shorter and you may have to find some specialty hardware for splitters and the like.
I base this on how modern cable broadband is set up: 1-10 downlink QAM channels and 1-4 QAM return channels… this supports 100 Mbps for an entire cable system. Imagine hundreds of channel pairs available without conforming to the shadow of broadcast channel plans, as cable systems do.
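A rough back-of-envelope for that idea, in Python, with DOCSIS-ish assumptions for the symbol rate and overhead:

    # Aggregate throughput if a closed coax plant were packed with 6 MHz
    # 256-QAM channels. All figures below are assumptions for illustration.
    channel_bw   = 6e6               # Hz per channel
    symbol_rate  = 5.36e6            # symbols/s in a 6 MHz channel, assumed
    bits_per_sym = 8                 # 256-QAM
    overhead     = 0.12              # assumed FEC/framing overhead

    usable_spectrum = 1.5e9 - 50e6   # ~50 MHz up to 1.5 GHz
    channels = int(usable_spectrum // channel_bw)
    per_channel = symbol_rate * bits_per_sym * (1 - overhead)

    print(f"{channels} channels x {per_channel/1e6:.0f} Mbit/s "
          f"~ {channels * per_channel/1e9:.1f} Gbit/s aggregate")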
Fiber is cheaper and faster. Analog coaxial cable plant requires a lot of maintenance, and at the end of the day, you’re still trying to use a system designed for one-way broadcast TV as a data network.
“I would like to see another crack at using RF in a local network”
Grab WiFi external antennas on both ends and maybe stick appropriate attenuators/couplers to deal with the link-loss on both ends. Poof.
Similar to the “single pair ethernet” stuff that is coming out?
HaD article leaves out “LocalTalk.”
Video repeats largely-debunked theories on how much collisions will slow down an Ethernet.
That’s readily available as MoCA. Or DIY it with WiFi over coax.
Was about to post something similar. I’d wager most folks with multiple cable set top boxes, certain streaming appliances or the extra satellite receivers not connected to their dish [before wifi became the preferred box-2-box tech] have quietly used it without knowing it.
I think we can just say that the advent of low cost switches and home run wiring to network closets made for a HUGE increase in diagnosability (and thus, reliability). I started with “yellow-cable” Ethernet at Data General, and continued to work with Token Ring (yeccch!) and 10 and 100BASE-T. When I built my house, I had it wired with CAT-3 (got it for free) and it’s still in use today for networking.
Lest we forget, Norm Abramson and Franklin Kuo designed and built the RF network ALOHAnet at UHawaii. It was the precursor to Ethernet.
I manage some environmental monitoring equipment that uses ARCnet. Basically, one device connects using IP/Ethernet (to a central server), but feeding into it will be other devices connected over ARCnet.
What Andrew Back said. We ran DECnet over Ethernet.
We had Arcnet at work, but the network was slow a lot of the time. We used what was called the T-test to determine how the network was doing. The T-test was to hold the T key down and see how fast the T was echoed back to the terminal from the VAX machine. On good days it would be quick, but at times it would take seconds for it to be echoed back.
One of the big problems was that an Arcnet card in a machine would go bad, causing the segment to recon and putting the segment out of action. That happened fairly often. The network technician had several monitors running on different segments in the computer room to see what was happening on a segment. If a segment reconned he had to find the culprit and replace that Arcnet card.
Arcnet was eventually replaced by Ethernet and the network got a whole lot better, faster, and more stable.
As IEEE 802 was forming in 1980, I asked Datapoint VP Engineering Vic Poor RIP if his leading LAN Arcnet would be submitted to IEEE for consideration as a standard. Vic consulted with his board and called me back with a no. This left the field of battle to GM token bus, IBM token ring, and Ethernet. Internet customers wanted standards, so after that Arcnet was overtaken. Non-standard.
/Bob Metcalfe, Ethernet inventor
Hey! Quit trying to steal Al Gore’s glory!
B^)
I assume he figured that IEEE standardizing ARCnet would force the company to reveal what was inside those potted PCBs on every ARCnet card, with all of them having to be purchased from his company. Has anyone ever X-Rayed one and dissolved the coating to find out what’s in them? I wouldn’t be surprised if some of the components are unused / dummies.
Speaking of Ethernet now reminds me of a funny incident from those days. We had some Retix bridges in the Ethernet network to limit the traffic on either side of the bridge. Then one of the computer room guys was told to buy another 5 Retix bridges. He bought 5 packs of Rattex (rat poison)! Needless to say we had a good laugh about that little mistake!
Well, at least you no longer had to worry about rodents chewing through cables!
B^)
“Rodents Chewing” Never underestimate the engineering staff when laying cable. We were instructed to simply lay the cat5 cable on the ground due to lack of budget. The distance was near the limit of cat5 specs, but “Git R Done” was the order of the day. Needless to say, my crew was replacing the cable in a conduit two weeks later. Fun times.
One of the first implementations of networking I put together used ‘Little Big LAN’; it was great as you could add almost anything to it. I used some second-hand 10BASE2 network cards with coax.
Oh yeah, that was advertised in every computer magazine!
How did it work? I assumed it was a protocol stack that could run over any number of commodity physical layers, including serial ports, if I recall correctly.
I remember some sort of “Token” coax nonsense back in the day. It always worked until it didn’t.
Do you mean Triax?
That Rings a bell…
Token Ring is a computer networking technology used to build local area networks. It was introduced by IBM in 1984, and standardized in 1989 as IEEE 802.5.
It uses a special three-byte frame called a token that is passed around a logical ring of workstations or servers. This token passing is a channel access method providing fair access for all stations, and eliminating the collisions of contention-based access methods.
Token Ring was a successful technology, particularly in corporate environments, but was gradually eclipsed by the later versions of Ethernet.
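A toy sketch of the token-passing idea (not IBM’s actual 802.5 frame handling): the token circulates a logical ring and only the station holding it may transmit, so there are no collisions.

    from collections import deque

    stations = deque(["A", "B", "C", "D"])          # logical ring order
    queues = {"A": ["frame1"], "B": [], "C": ["frame2", "frame3"], "D": []}

    for _ in range(2 * len(stations)):              # two trips around the ring
        holder = stations[0]
        if queues[holder]:
            print(f"{holder} has the token, sends {queues[holder].pop(0)}")
        else:
            print(f"{holder} has the token, nothing to send, passes it on")
        stations.rotate(-1)                         # hand the token to the next station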
Next, do an article about what was there before the internet – what were the alternatives and how they worked.
A station wagon full of tapes?
B^)
Not that far off, actually. There were private BBS servers that were starting to interconnect by routinely calling each other and synchronizing data between the systems in the off hours. You wrote your mail and it would be sent off with that “station wagon”, and returned with the mail and data you requested.
Had the internet as we know it not become a thing, those would have evolved into the internet instead, with permanent connections between the nodes. The entire routing scheme was different – based on the phone network structure – which meant that you could route stuff around more freely, multi-home, etc., because a phone number is a route description whereas an IP number is an endpoint identifier. With the modern internet, the network must know where everyone is, whereas with the “alt”-internet, the network is dumb and the endpoints decide how to navigate it.
Imagine for example a piece of software that periodically sends probe packets through the network to find faster paths, so congestion isn’t such an issue. No ISP can throttle traffic or put up firewalls and filter content because the traffic just goes around like water past a rock in a stream; it’s more resilient against disruptions and broken infrastructure, decentralized and more scalable than the IP number system we have, but the latter won because the authorities wanted authority over the assignment of numbers and addresses, i.e. who’s on the net and where.
It could be argued that phone number routing isn’t that different. SS7 is just doing all of the path selection for you without you realizing it. One analogy that could work:
1. Country Code = BGP AS / Core Router functionality
2. Area Code = Distribution Router within said AS / Org
3. Local Exchange = Access Router handling local connections
i.e. the “subnet”
4. Line Number = Local Address
Other comparisons could be made, e.g. OSPF areas. Or how about older party lines being a human powered form of Ethernet’s CSMA/CD: dial tone, multiple parties per line, “Hey, stop hogging the line! I need to make an important call”?
One might even argue that the move from party lines to dedicated ones was akin to the move from Ethernet hubs to switches. After all, both reduced the impact of multiple nodes attempting to broadcast simultaneously.
Want some 56k modem?
Ethernet is still not the only wired networking option, far from that. There are ethernet-derived systems, like EtherCAT, TSN Ethernet, etc. There is CAN, there is good old RS485, there are specialised PHY layers that can be grossly misappropriated (e.g., FPD-Link, PCIe, USB-3, just raw LVDS, and many more).
On the telco side, there’s several flavors of DSL, T-carrier/E-carrier/PDH, SONET/SDH, ATM…
I wonder what the world would look like if ATM had won, according to oh so many breathless predictions in the mid-90s.
DECnet was a protocol, not a transport spec. As such it competed with TCP/IP, not Ethernet; in fact, implementations often ran on Ethernet.
ARCNet was very tolerant of power line induced noise, which made it well suited to control equipment on the factory floor. I’ve seen ARCNet networks using a mix of several different types of non-RG-62/U coaxial cable without a glitch, and read that the designers at Datapoint even bragged that it would operate using a pair of wire coat hangers as conductors!
Hey, don’t forget AppleTalk! It was originally a little dongle that connected to a Mac’s serial port and had two RJ11 connectors. Using standard telephone cable you could easily string together a simple network. It was only 256 kbps, but simple and cheap. Several vendors later offered AppleTalk-to-coax-Ethernet adapters. Apple eventually offered AppleTalk over Ethernet, supporting 1 Mbps. The protocol still exists in Apple’s file sharing, and the open source Netatalk is built into the Raspberry Pi OS.
Holy Smoke. I worked for Datapoint back in the day and thought their technology was pretty cool. Talk about a blast-from-the-past!