Here’s a fun exercise: take a list of the 20th century’s inventions and innovations in electronics, communications, and computing. Make sure you include everything, especially the stuff we take for granted. Now, cross off everything that can’t trace its roots back to AT&T’s research arm, Bell Laboratories. We’d wager heavily that the list would still contain almost everything that built the electronics age: microwave communications, data networks, the cellular telephone, solar cells, Unix, and, of course, the transistor.
But is that last one really true? We all know the story of Bardeen, Brattain, and Shockley, the brilliant team laboring through a blizzard in 1947 to breathe life into a scrap of germanium and wires, finally unleashing the transistor upon the world for Christmas, a gift to usher us into the age of solid-state electronics. It’s not so simple, though. The quest for a replacement for the vacuum tube in switching and amplification goes back to the lab of Julius Lilienfeld, the man who conceived the first field-effect transistor in the mid-1920s.
The story goes that Atari was developing a premium model of their popular home video game console, the Atari 2600, for the 1981 fiscal year. Internally known as the Stella RC, this model revision promised touch-sensitive game selection toggles, LED indicators, and onboard storage for the controllers. The focus of the project, however, was the “RC” in Stella RC, which stood for remote control. Atari engineers wanted to free players from the constraints of the wires that fettered them to their televisions.
The problem with the prototypes was that the RF transmitters in the controllers were powerful enough to send a signal over a 1,000-foot radius, and they interfered with a number of the remote garage door openers on the market. Not to mention that if there were another Stella RC console on the same channel in an apartment building, or simply across the street, you could find yourself playing somebody else’s Pitfall run. The mounting tower of challenges to making a product the FCC would stamp its approval on was too great, so Atari decided to abandon the pioneering Stella RC project. Physical proof of the first wireless game controllers would have been eliminated at that point had this been any other company… but prototypes mysteriously left the office in some peculiar ways.
“Atari had abandoned the project at the time…[an Atari engineer] thought it would be a great idea to give his girlfriend’s son a videogame system to play with…I can’t [comment] about the relationship itself or what happened after 1981, but that’s how this system left Atari…and why it still exists today.”
– Joe Cody, Atari2600.com
Atari did eventually get around to releasing some wireless RF 2600 joysticks that the FCC would approve. A couple of years after abandoning the Stella RC project, they released the Atari 2600 Remote Control Joysticks at a $69.95 MSRP (roughly $180 adjusted for inflation). The gigantic price tag, combined with the video game market “dropping off the cliff” in 1983, meant that few players ever got to know the bliss of wire-free video game action. It was obvious that RF game controllers were simply ahead of their time, but there had to be cheaper alternatives on the horizon.
Out of Sight, Out of Control with IR Schemes
Video games were a dirty word in America in 1985. While games themselves were still happening on the microcomputer platforms, the home console business was virtually non-existent. Over in Japan, Nintendo was raking in money hand over fist selling video games on their Famicom console. They sought to replicate that success in North America by introducing a revised model of the Famicom, but it had to impress the tech journos that would be attending its reveal at the Consumer Electronics Show (CES).
The prototype system was called the Nintendo Advanced Video System (AVS). It would feature a keyboard, a cassette tape drive, and most importantly two wireless controllers. The controllers used infrared (IR) communication, and the receiver was built into the console deck itself. Each controller featured a square metallic directional pad and four action buttons that gave the impression of brushed aluminum. The advancement in video game controller technology was too good to be true, though, because the entire system received a makeover before releasing as the Nintendo Entertainment System (NES) that Christmas. The NES lacked the keyboard, the tape drive, and the IR controllers, and its change in materials hardly captured the high-end flash of the AVS. Dropping IR made the device cheaper to manufacture, a decision that ultimately helped the NES become a breakout success, one that single-handedly brought dedicated video game consoles back.
A colleague of mine used to say he juggled a lot of balls: steel balls, plastic balls, glass balls, and paper balls. The trick was not to drop the glass balls. How do you know which is which? For example, suppose you were tasked with making sure a nuclear power plant was safe. What would be important? A fail-safe way to drop the control rods into the pile, maybe? A thick containment wall? Two loops of cooling so that only the inner loop gets radioactive? I’m not a nuclear engineer, so I don’t know, but ensuring electricians at a nuclear plant aren’t using open flames wouldn’t be high on my list of concerns. You might think that’s really obvious, but it turns out that, if you look at history, that was a glass ball that got dropped.
In the 1960s and 70s, there was a lot of optimism in the United States about nuclear power. Browns Ferry — a Tennessee Valley Authority (TVA) nuclear plant — broke ground in 1966 on two units. Unit 1 began operations in 1974, and Unit 2 the following year. By 1975, the two units were producing about 2,200 megawatts of electricity.
That same year, an electrical inspector and an electrician were checking for air leaks in the spreading room — a space where control cables split to go to the two different units from a single control room. To find the drafts, they used a lit candle and watched the flame as it was sucked in by the moving air. In the process, they accidentally started a fire that nearly led to a massive nuclear disaster.
Much to the chagrin of local historians, the city of Scranton, Pennsylvania is today best known as the setting for the American version of The Office. But while the exploits of Dunder Mifflin’s best and brightest might make for a good Netflix binge, there’s a lot more to the historic city than the fictional paper company. From its beginnings as a major supplier of anthracite coal to the introduction of America’s first electrically operated trolley system on its streets, Scranton earned its nickname “The Electric City” by being a major technological hub from the Industrial Revolution through to the Second World War.
Today, the mines and furnaces of Scranton lie silent but not forgotten. In the 1980s, the city started turning what remained of its industrial sites into historic landmarks and museums with the help of state and federal grants. I recently got a chance to tour some of these locations, and came away very impressed. They’re an exceptional look into the early technology and processes which helped turn America into an industrial juggernaut.
While no substitute for visiting these museums and parks for yourself, hopefully the following images and descriptions will give you an idea of what kind of attractions await visitors to the modern day Electric City.
Do you talk to your alarm clock? I do. I was recently in a hotel room, woke up in the middle of the night and said, “Computer. What time is it?” Since my Amazon Echo (which responds to the name Computer) was at home, I was greeted with silence. Isn’t the future great?
Of course, there have been a variety of talking clocks over the years. You used to be able to call a phone number and a voice would tell you the time. But how old do you think the talking clock really is? Would you guess that this year is the 140th anniversary of the world’s first talking clock? In fact, it doesn’t just hold the talking clock record. The experimental talking clock Frank Lambert made is also the oldest surviving recording that can still be played back on its original device.
In 1878, the phonograph had just been invented and scratched out sounds on a piece of tin foil. Lambert realized this wouldn’t hold up to multiple playbacks and set out to find a more robust recording medium. What he ended up building was a clock that would announce the time, with the speech recorded on lead instead of tin foil.
Science fiction is usually couched in fact, and it’s fun to look at an iconic computer like HAL 9000 and trace the origins of this artificial intelligence gone wrong. You might be surprised to find that you can trace HAL’s origins to a computer built for the US Army in 1952.
If you are a fan of the novel and movie 2001: A Space Odyssey, you may recall that the HAL 9000 computer was “born” in Urbana, Illinois. Why pick such an odd location? Urbana is hardly a household name unless you know the Chicago area well. But Urbana has a place in real-life computer history. As the home of the University of Illinois at Urbana–Champaign, Urbana was known for producing a line of computers known as ILLIAC, several of which had historical significance. In particular, the ILLIAC IV was a dream of a supercomputer that — while not entirely successful — pointed the way for later supercomputers. Sometimes you learn more from failure than from success, and at least one of the ILLIAC series is the poster child for that.
The Urbana story starts in the early 1950s. This was a time when the 1945 paper “First Draft of a Report on the EDVAC” was sweeping through the country from its Princeton origins. The report outlined the design and construction of the Army computer that succeeded ENIAC. In it, von Neumann proposed changes to EDVAC that would make it a stored program computer — that is, a computer that treats data and instructions the same.
By the early 20th century, naval warfare was undergoing drastic technological changes. Ships were getting better and faster engines and were being outfitted with wireless communications, while naval aviation was coming into its own. The most dramatic changes were taking place below the surface of the ocean, though, as brave men stuffed themselves into steel tubes designed to sink and, usually, surface, and to attack by stealth and cunning rather than brute force. The submarine was becoming a major part of the world’s navies, albeit a feared and hated one.
For all the animosity between sailors of surface vessels and those who chose the life of a submariner, and as vastly different as a battleship or cruiser is from a submarine, they all had one thing in common: the battle against the sea. Sailors and their ships are always on their own, dealing with forces that can swat them out of existence in an instant. As a result, mariners have a long history of doing whatever it takes to get back to shore safely — even if that means turning a submarine into a sailboat.