MotorMouth For Future Artificial Humans

When our new computer overlord arrives, it’ll likely give orders using an electromagnetic speaker (or, more likely, by texting instead of talking). But for a merely artificial human being, shouldn’t we use an artificial mouth with vocal cords, nasal cavity, tongue, teeth and lips? Work on such a thing is scarce these days, but [Martin Riches] developed a delightful one called MotorMouth between 1996 and 1999.

It’s delightful for its use of a Z80 processor and assembly language, things many of us remember fondly, as well as its transparent side panel, allowing us to see the workings in action. As you’ll see and hear in the video below, it works quite well given the extreme difficulty of the task.


From Sign Language to Spoken Language

As part of a senior design project for a biomedical engineering class, [Kendall Lowrey] worked on a team to develop a device that translates American Sign Language into spoken English. Hoping to improve on the glove-based devices that came before them, the team set out to move beyond strictly spelling words and to recognize common gestures as well. The project is based around an Arduino Mega and is limited to the alphabet and about ten words because of program space constraints in the initial version. When the five flex sensors and three accelerometer values register an at-rest state for two seconds, the device takes a reading and looks up the most likely word or letter in a table. It then sends the result to a voicebox shield, which translates the word or letter into phonetic sounds.
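For a sense of how that hold-then-lookup flow might look in code, here’s a minimal Arduino-style sketch. To be clear, this is our own illustration, not [Kendall Lowrey]’s firmware: the pin assignments, thresholds, and gesture table are made-up stand-ins, and the `Serial.println()` stands in for the phoneme commands a voicebox shield would actually receive.

```cpp
// Hedged sketch of the approach described above: wait for the hand to be
// steady for two seconds, then nearest-neighbor match against a lookup table.
// All pins, thresholds, and table values here are hypothetical examples.
#include <limits.h>
#include <string.h>

const int FLEX_PINS[5]  = {A0, A1, A2, A3, A4};  // five flex sensors
const int ACCEL_PINS[3] = {A5, A6, A7};          // three accelerometer axes
const int STILL_THRESHOLD = 12;      // max ADC jitter still counted as "at rest"
const unsigned long HOLD_MS = 2000;  // hand must be steady this long

struct Gesture {
  int ref[8];        // reference readings: 5 flex + 3 accel
  const char *word;  // word or letter to speak
};

// Tiny example table; the real device stored the alphabet plus ~10 words.
Gesture table[] = {
  {{512, 520, 500, 530, 510, 340, 350, 600}, "hello"},
  {{700, 710, 690, 720, 705, 345, 355, 590}, "A"},
};
const int TABLE_SIZE = sizeof(table) / sizeof(table[0]);

int lastReading[8];
unsigned long stillSince = 0;

void readSensors(int out[8]) {
  for (int i = 0; i < 5; i++) out[i] = analogRead(FLEX_PINS[i]);
  for (int i = 0; i < 3; i++) out[5 + i] = analogRead(ACCEL_PINS[i]);
}

bool isStill(const int a[8], const int b[8]) {
  for (int i = 0; i < 8; i++)
    if (abs(a[i] - b[i]) > STILL_THRESHOLD) return false;
  return true;
}

// Nearest-neighbor match: pick the table entry with the smallest
// summed distance to the current readings.
const char *lookup(const int reading[8]) {
  long best = LONG_MAX;
  const char *word = nullptr;
  for (int i = 0; i < TABLE_SIZE; i++) {
    long dist = 0;
    for (int j = 0; j < 8; j++) dist += abs(reading[j] - table[i].ref[j]);
    if (dist < best) { best = dist; word = table[i].word; }
  }
  return word;
}

void setup() {
  Serial.begin(9600);  // stand-in for the voicebox shield's serial link
  readSensors(lastReading);
  stillSince = millis();
}

void loop() {
  int now[8];
  readSensors(now);
  if (isStill(now, lastReading)) {
    if (millis() - stillSince >= HOLD_MS) {
      Serial.println(lookup(now));  // real code would send phoneme commands
      stillSince = millis();        // require another full hold before repeating
    }
  } else {
    stillSince = millis();          // movement resets the two-second timer
  }
  memcpy(lastReading, now, sizeof(lastReading));
  delay(50);
}
```

The at-rest gate is what makes a simple table lookup workable: sampling only when the readings stop changing sidesteps the much harder problem of segmenting a continuous stream of motion into individual signs.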