Speaking Computers From The 1970s

Talking computers are nothing special these days. But in the old days, a computer that could speak was quite the novelty. Many computers from the 1970s and 1980s used the AY-3-8910 sound chip, and [InazumaDenki] has been playing with one of these venerable parts. You can see (and hear) the results in the video below.

The chip uses PCM, and there are different ways to store and play sounds. The video shows how they differ and even looks at the output on an oscilloscope. The chip has three voices and was produced by General Instrument, the company that originally created the PIC microcontroller. It found its way into many classic arcade machines, consoles, and home computers, including the Intellivision, Vectrex, MSX, and ZX Spectrum. Sound cards for the TRS-80 Color Computer and the Apple II used these chips, and the Atari ST used a Yamaha variant, the YM2149F.

There’s some code for an ATmega, and the video says it is part one, so we expect to see more videos on this chip soon.
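
If you want to poke at one of these chips yourself, the register interface is about as simple as it gets: put a register number on the bus with BDIR and BC1 high to latch the address, then put the data on the bus with only BDIR high to write it. Below is a rough AVR C sketch of that sequence. It is not [InazumaDenki]'s code; the port assignments, the 16 MHz AVR clock, the 2 MHz AY clock, and the ay_write() helper are assumptions for illustration only.

    /* Minimal AY-3-8910 tone demo for an ATmega (sketch only).
     * Assumed wiring: DA0-DA7 on PORTB, BDIR on PD2, BC1 on PD3,
     * BC2 tied high, AY clocked at 2 MHz. */
    #define F_CPU 16000000UL            /* assumed AVR clock for the delay macros */
    #include <avr/io.h>
    #include <util/delay.h>

    #define BDIR PD2
    #define BC1  PD3

    static void ay_write(uint8_t reg, uint8_t val)
    {
        PORTB = reg;                            /* latch address: BDIR=1, BC1=1 */
        PORTD |= _BV(BDIR) | _BV(BC1);
        _delay_us(1);
        PORTD &= ~(_BV(BDIR) | _BV(BC1));       /* bus inactive */

        PORTB = val;                            /* write data: BDIR=1, BC1=0 */
        PORTD |= _BV(BDIR);
        _delay_us(1);
        PORTD &= ~_BV(BDIR);
    }

    int main(void)
    {
        DDRB = 0xFF;                            /* data bus as outputs */
        DDRD |= _BV(BDIR) | _BV(BC1);           /* control lines as outputs */

        /* 440 Hz on channel A: period = 2 MHz / (16 * 440) ≈ 284 = 0x11C */
        ay_write(0, 0x1C);                      /* R0: fine tone period */
        ay_write(1, 0x01);                      /* R1: coarse tone period */
        ay_write(7, 0x3E);                      /* R7: mixer, tone A only (bits are active low) */
        ay_write(8, 0x0F);                      /* R8: channel A at full volume */

        for (;;) { }                            /* let it ring */
    }

The tone frequency works out to the chip clock divided by sixteen times the 12-bit period, which is why a 2 MHz clock and a period of 284 lands close to 440 Hz.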

General Instrument had other speech chips, and some of them are still around in emulated form. In fact, you can emulate the AY-3-8910 with little more than a Raspberry Pi.
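
The emulation side is approachable too: a tone channel is little more than a counter that flips a square wave each time it counts down the programmed period. Here is a toy C sketch of that idea, not lifted from any real emulator; the 2 MHz clock, the 44.1 kHz sample rate, and the linear volume scaling are simplifying assumptions. It writes one second of raw unsigned 8-bit audio to stdout.

    /* Toy emulation of one AY-3-8910 tone channel (illustration only). */
    #include <stdio.h>
    #include <stdint.h>

    #define AY_CLOCK   2000000.0        /* assumed chip clock */
    #define SAMPLERATE 44100

    int main(void)
    {
        uint16_t period = 284;          /* R0/R1 value: about 440 Hz at 2 MHz */
        uint8_t  volume = 15;           /* R8 value, 0..15 */

        /* The tone counter runs at clock/8; two toggles per cycle give
         * an output frequency of clock / (16 * period). */
        double clocks_per_sample = AY_CLOCK / 8.0 / SAMPLERATE;
        double counter = 0.0;
        int level = 1;

        for (int i = 0; i < SAMPLERATE; i++) {
            counter += clocks_per_sample;
            while (counter >= period) { /* time to flip the square wave? */
                counter -= period;
                level = !level;
            }
            /* Real chips use a logarithmic volume DAC; linear is close enough here. */
            putchar(level ? 128 + volume * 8 : 128 - volume * 8);
        }
        return 0;
    }

On a Pi (or any Linux box), piping the output through aplay, for example ./ay_tone | aplay -r 44100 -f U8, should give a steady beep; a full emulator adds the other two channels, the noise generator, the envelope, and the proper volume table.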

19 thoughts on “Speaking Computers From The 1970s”

    1. Reading between the lines: That would have been how much the speech synthesis manufacturer charged them to process each word into phonemes. There’s no way they spent that much per machine on hardware; e.g., the similar-tech Speak & Spell sold for $50 and had 200-odd words.

      1. Oh yes, it was the development cost, not the hardware cost: hiring some guy to mess around with a keyboard until he manually modulated some notes to kinda sound like words. Not the correct way to do it anymore ;)

    2. that really is remarkable. looking at the specific algorithm, it seems you should be able to do the encoding yourself today even in, say, python or something on your PC. it’s funky; definitely not LPC as was used in “Speak & Spell”, or the digital filter as used in the “SP0256”.
      here is some discussion of the gory details of the chip (SSi TSI S14001A, which was apparently first used for a talking calculator for the disabled) if curious:
      http://www.vintagecalculators.com/html/development_of_the_tsi_speech-.html

    3. (further) and apparently:
      “The speech technology was licensed (I believe with a 3 year exclusive deal) from Forrest S. Mozer, a professor of atomic physics (speech was a spare time thing for him) at Berkeley. Forrest Mozer would encode the speech in his basement laboratory using his novel form of speech encoding (the encoding process apparently involved several minicomputers running FFTs and a spectrum analyzer), and then General Instruments would make the resulting speech data into a mask ROM to be used with the TSI chip.”
      ref: https://archive.org/stream/pdfy-QPCSwTWiFz1u9WU_/david_djvu.txt

  1. Somewhere in the dark dungeon of my basement is a Hearsay 1000 for the Commodore 64. I found it to be a novelty at best, so getting it was a bit anticlimactic, but for handicapped users I could see all kinds of cool uses for it.

  2. “The chip uses PCM” – no, it does not; it’s a fairly simple PSG. What [InazumaDenki] has done is use it for PCM, comparable to how the PC speaker could be used to play samples (see the sketch after these comments for how that trick works).

  3. Back in the early ’70s, in EET class, I proposed ten analog-synth words to “speak.” I just wanted numbers spoken from a digital voltmeter so I didn’t have to look away from an easy-to-slide-off ball-tip test probe. Instead, I sharpened hardened steel to make a probe that won’t slip or bridge traces. Radio Shack had one years ago, but they are an oddity.

    One HAD article from ’14 comes up, Arduino-powered of course. I need a multimeter, not just volts. It’s not a blind-accessibility thing but about positioning a meter where I can see it and not having it fall, leads attached, whilst poking around in a furnace, under a car dash, somewhere overhead, or in dark spaces lit only by one flashlight; just about any place other than a flat workbench. Also, having to head-bob repeatedly between a row of pins and the meter can be tiring.

    The only thing better would be an AR glasses display.

  4. There are recordings on YouTube of an IBM 7094 mainframe “singing” the song Daisy, Daisy. As I understand it, this wasn’t digital speech synthesis but analog, using a rubber/synthetic membrane of some sort. It starts with some slightly ropey-sounding monophonic melody, which had me thinking “huh, primitive, but I suppose good for 1961…” before adding percussion and polyphonic harmonies (“wow, better audio than my Sinclair 48K had 20+ years later…”) and finally the synthesized speech, which is frankly amazing for the technology and time period.
    https://youtu.be/41U78QP8nBk
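
To illustrate what the PSG comment above is getting at: with every tone and noise generator masked off in the mixer, a channel's output sits at a constant level and its 4-bit amplitude register effectively becomes the DAC, so hammering that register at a steady rate plays back sampled audio. The sketch below is a hedged illustration in the same AVR C style as the earlier example; it reuses that example's assumed ay_write() helper, and the sample table and 8 kHz timing are placeholders.

    /* Sample playback on the AY-3-8910 via the amplitude register (sketch only). */
    #define F_CPU 16000000UL                    /* assumed AVR clock for the delay macro */
    #include <avr/io.h>
    #include <util/delay.h>
    #include <avr/pgmspace.h>

    void ay_write(uint8_t reg, uint8_t val);    /* bit-bang helper from the earlier sketch */

    /* Placeholder data; real code would keep 4-bit samples in flash or stream them in. */
    const uint8_t samples[] PROGMEM = { 8, 12, 15, 12, 8, 4, 0, 4 };

    void play_pcm(void)
    {
        ay_write(7, 0x3F);                      /* R7: all tone and noise generators off */

        for (uint16_t i = 0; i < sizeof(samples); i++) {
            /* Each 4-bit sample goes straight to channel A's volume DAC. */
            ay_write(8, pgm_read_byte(&samples[i]) & 0x0F);
            _delay_us(125);                     /* roughly an 8 kHz sample rate */
        }
        ay_write(8, 0);                         /* silence when done */
    }

That is the same dodge behind sampled drums and speech on machines that shipped with these chips but no DAC.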
