MotorMouth For Future Artificial Humans


When our new computer overlord arrives, it’ll likely give orders using an electromagnetic speaker (or more likely, by texting instead of talking). But for a merely artificial human being, shouldn’t we use an artificial mouth with vocal cords, nasal cavity, tongue, teeth and lips? Work on such a thing is scarce these days, but [Martin Riches] developed a delightful one called MotorMouth between 1996 and 1999.

It’s delightful for its use of a Z80 processor and assembly language, things many of us remember fondly, as well as its transparent side panel, allowing us to see the workings in action. As you’ll see and hear in the video below, it works quite well given the extreme difficulty of the task.

The lower section with electronics, motors and Bowden cables.

The various parts of the mouth are moved by eight stepper motors, seven of which sit below the mouth along with the electronics. They use Bowden cables to transfer the motor movements up to the mouth parts. One of them opens and closes a valve that lets some air flow through a nasal cavity located just above the mouth cavity. The eighth motor rotates the tongue from the back to the front of the mouth cavity and isn’t visible. A blower supplies the air. Unlike in us humans, there are two air paths from the blower: one passes through a reed for pitch control, and another can be opened to allow more airflow, bypassing the reed. That increased airflow is needed for unvoiced sounds such as F, S and T. At the very front of the mouth are two block-like pieces that move up and down, one representing the lips and, behind it, one for the teeth.
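To picture that plumbing in code, here’s a minimal C sketch of the routing decision, assuming a simple voiced/unvoiced split; the function names and sound list are stand-ins of ours, not anything from [Martin]’s firmware.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the two air paths: voiced sounds go through the
 * reed (which sets pitch), while unvoiced ones like F, S and T
 * open the bypass for extra airflow. All names are assumptions. */
static bool is_unvoiced(char sound) {
    return sound == 'F' || sound == 'S' || sound == 'T';
}

static void route_air(char sound) {
    if (is_unvoiced(sound))
        printf("'%c': bypass valve open, more airflow\n", sound);
    else
        printf("'%c': air through the reed, pitch controlled\n", sound);
}

int main(void) {
    route_air('A');  /* voiced vowel: reed path */
    route_air('S');  /* unvoiced: bypass opens  */
    return 0;
}
```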

[Martin’s] webpage includes early drawings as well as an explanation of how he represents the eight motors in the assembly code as eight bits. For each sound the mouth can make, the bits for the motors that need to move are set, along with data for the position each stepper should move to and how fast it should get there. He also includes a sample of the assembly code, though not all of it is there.
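Since only a sample of the source is posted, here’s a rough C rendering of the scheme as described: one bit per motor, plus a target position and speed for each. The motor assignments and all the numbers below are guesses for illustration; [Martin]’s actual code is Z80 assembly.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical rendering of the one-bit-per-motor scheme.
 * Motor assignments and values are illustrative guesses. */
enum {
    M_LIPS    = 1 << 0,
    M_TEETH   = 1 << 1,
    M_TONGUE  = 1 << 2,  /* the one motor up in the mouth     */
    M_NASAL   = 1 << 3,  /* valve into the nasal cavity       */
    M_REED    = 1 << 4,  /* reed tension, i.e. pitch          */
    M_BYPASS  = 1 << 5,  /* extra airflow for unvoiced sounds */
    M_CAVITY1 = 1 << 6,  /* cavity-shaping motors (guesses)   */
    M_CAVITY2 = 1 << 7,
};

typedef struct {
    uint8_t active;     /* bitmask: which motors move    */
    uint8_t target[8];  /* step position for each motor  */
    uint8_t speed[8];   /* step rate for each motor      */
} Phone;

/* An unvoiced 'S': open the bypass, raise the teeth block. */
static const Phone phone_s = {
    .active = M_BYPASS | M_TEETH,
    .target = { [1] = 12, [5] = 40 },
    .speed  = { [1] = 3,  [5] = 8  },
};

int main(void) {
    for (int m = 0; m < 8; m++)
        if (phone_s.active & (1u << m))
            printf("motor %d -> position %u at speed %u\n", m,
                   (unsigned)phone_s.target[m],
                   (unsigned)phone_s.speed[m]);
    return 0;
}
```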

And while we’re talking about doing voice from scratch, how about making the Z80 computer from scratch too, like [Lumir Vanek] did with his Rum 80 PC? Or you can go one step further, like [Scott Baker] did, and make not only the Z80 computer but a speech synthesizer for it too.

28 thoughts on “MotorMouth For Future Artificial Humans”

    1. It just sounds like an 8-bit synthesized waveform. It’s actually fed by a blower, which is at the back so you don’t see it, and there’s a reed in the main airflow path. The pitch is controlled by one of the servos manipulating the tension of the reed. You can see it better on his webpage.
      That’s what I like about this one: it tries to reproduce so many of the features the same way we humans do.

  1. Comments got off to a rocky start on this one. I’ve deleted a couple that were offensive and contacted the people who left them. (This explains the mismatch in the comment count). Let’s make Hackaday a welcoming environment for all by acting that way in our discussions. Thank you.

  2. Fascinating device. It has always been amazing to me how complex things we take for granted are. Things like speech and hearing (along with the interpretation of what is heard) are so complex that our best attempts to imitate them fall quite short. That being said, this is a great attempt at imitating speech.

  3. Speech synthesis is maddening.

    I worked on a talking scientific calculator prototype about 40 years ago.
    Part of the trick was creating a new vocabulary from only the words of an existing talking 4-function calculator. (Speech Plus)

    Splicing together “si” from “six”, and the “ine” from “nine” yielded “sin”.
    Stealing the “k” from “equals”, and the “oh” from “zero” gave “co” and hence “cosine”.
    You get the picture.
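    In code terms the trick is just concatenating PCM slices; something like this C sketch, where the buffers and splice offsets are made up (the real work was finding splice points that sounded natural):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy concatenative splice: build a new word from slices of
     * two recorded words, e.g. "si(x)" + "(n)ine" -> "sin".
     * All sample data and offsets are dummies for illustration. */
    #define MAX_OUT 256

    static size_t splice(int16_t *out,
                         const int16_t *a, size_t a_off, size_t a_len,
                         const int16_t *b, size_t b_off, size_t b_len) {
        memcpy(out, a + a_off, a_len * sizeof *a);
        memcpy(out + a_len, b + b_off, b_len * sizeof *b);
        return a_len + b_len;
    }

    int main(void) {
        int16_t six[100]  = {0};  /* stand-in for the "six" recording  */
        int16_t nine[100] = {0};  /* stand-in for the "nine" recording */
        int16_t sin_word[MAX_OUT];

        /* "si" = first 40 samples of "six"; "ine" = samples 30..89
         * of "nine". Real splice points took careful listening. */
        size_t n = splice(sin_word, six, 0, 40, nine, 30, 60);
        printf("spliced word has %zu samples\n", n);
        return 0;
    }
    ```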

    The maddening part is that you become insensitive to the words after you’ve heard them a few dozen times, so you’re constantly dragging in your office mate or secretary for a “test”.

    You’ve convinced yourself you hear “cosine-pi”, but your friends hear “go and die” or “gosiping”.

    Oh well.

    1. When Mattel’s “Blue Sky Rangers” were digitizing voices for the Intellivision’s Intellivoice games, they needed the word “can’t” for “TRON Solar Sailer”. Some on the team insisted it sounded like the word spelled with a u instead of an a. The game shipped and there weren’t any complaints from customers.

      They weren’t doing voice synthesis. The cartridges had digitized audio and the Intellivoice was mostly a passthrough, but with a DAC to take the audio data and feed it into the analog audio input line that had been put on the cartridge port, well before anyone had thought of having voice in the games.

      To cram as much voice data as possible into the limited space, they used manually optimized variable bit rate encoding so that different parts of each word had a different rate, saving the highest rate for the areas requiring the best fidelity.
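      You can picture that encoding as a table of segments, each tagged with its own playback rate; here’s a toy C sketch with invented field names and rates, not the actual Intellivoice cartridge format:

      ```c
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Sketch of manually optimized variable-bit-rate audio: each
       * segment of a word carries its own rate so the data budget
       * goes where fidelity matters most. Names and numbers are
       * assumptions, not the real cartridge format. */
      typedef struct {
          uint16_t rate_hz;     /* playback rate for this segment   */
          uint16_t n_samples;   /* number of samples in the segment */
          const uint8_t *data;  /* packed sample data               */
      } Segment;

      static void play_word(const Segment *seg, size_t count) {
          for (size_t i = 0; i < count; i++)
              printf("segment %zu: %u samples at %u Hz\n",
                     i, seg[i].n_samples, seg[i].rate_hz);
      }

      int main(void) {
          static const uint8_t quiet[32], burst[64], tail[16];
          /* e.g. a consonant burst gets the highest rate */
          const Segment word[] = {
              { 4000, 32, quiet },  /* low-detail lead-in */
              { 8000, 64, burst },  /* burst: max rate    */
              { 3000, 16, tail  },  /* fading vowel tail  */
          };
          play_word(word, sizeof word / sizeof word[0]);
          return 0;
      }
      ```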

      As for the mis-heard “can’t”, the programmers used it in an “adult” version of Astrosmash that of course was not released on a cartridge.

      Related to that, “TRON Solar Sailer” provided access codes as you played. The programmer’s boss accused him of deliberately making it give codes ending in 69 a high percentage of the time. He said “Look, if I was going to put a ‘69’ in the game, I’d put it right on the title screen!” Then he proceeded to do exactly that: the string of 1s and 0s on the title screen is 69 in binary. When it was finally pointed out, he said “But that’s 45.” Decimal 69 is 1000101 in binary, which reads as 45 in hexadecimal.

      http://www.intellivisionlives.com/bluesky/games/credits/voice2.shtml
