Hackaday Prize 2023: Wear-a-Chorder Lets Discreet Chording Keyboards Do The Talking

Being mute or speech-challenged can be a barrier, and [Raymond Li] has an interesting project to contribute to the 2023 Hackaday Prize: a pair of discreet chording keyboards that allow the user to generate live text-to-speech as quickly as they can manipulate them.

Rapidly turning input into high-quality speech helps normalize interactions.

The project leverages recent developments in text-to-speech to deliver high-quality speech via an open-source web app called VoiceBox, while making sure the input devices themselves don’t get in the way of personal interaction. Keeping the chorders at waist level and ensuring that speech is generated and delivered quickly go a long way towards making interaction and communication flow more naturally.
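What makes the input fast is the chording itself: rather than typing letter by letter, the user presses several keys at once and the combination maps to a whole character or word, which is then handed off to the speech engine. As a rough illustration of that idea (not the project’s actual code), the hypothetical C++ snippet below looks up a chord bitmask in a small table; the key combinations and words are invented for the example.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

int main() {
  // Bit i set means key i was held down when the chord was released.
  // Real chording keyboards ship far larger dictionaries; these entries
  // are placeholders.
  std::map<uint8_t, std::string> chords = {
      {0b00001, "a"},
      {0b00011, "the"},
      {0b00101, "hello"},
      {0b10101, "thank you"},
  };

  uint8_t pressed = 0b00101;  // pretend keys 0 and 2 were pressed together
  auto it = chords.find(pressed);
  if (it != chords.end()) {
    // In a full system this text would be sent on to the text-to-speech stage.
    std::cout << it->second << std::endl;
  }
  return 0;
}
```

Because a single chord can stand for an entire word, the effective words-per-minute can be far higher than hunt-and-peck typing, which is what keeps the conversation flowing.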

The VoiceBox software is doing the heavy lifting, and there’s not yet much detail about the rest of the hardware used in the prototype. It’s currently up to the user to figure out a solution for a wearable computer or a suitable chording keyboard. Still, the prototype looks like a pair of CharaChorder units with a 3D-printed mounting solution holding them at one’s beltline. Of course, since the underlying system accepts standard keyboard input, one can use whatever device is most comfortable.


MotorMouth For Future Artificial Humans

When our new computer overlord arrives, it’ll likely give orders using an electromagnetic speaker (or more likely, by texting instead of talking). But for a merely artificial human being, shouldn’t we use an artificial mouth with vocal cords, nasal cavity, tongue, teeth, and lips? Work on such a thing is scarce these days, but [Martin Riches] developed a delightful one called MotorMouth between 1996 and 1999.

It’s delightful for its use of a Z80 processor and assembly language, things many of us remember fondly, as well as its transparent side panel, allowing us to see the workings in action. As you’ll see and hear in the video below, it works quite well given the extreme difficulty of the task.

Continue reading “MotorMouth For Future Artificial Humans”

From Sign Language To Spoken Language

As part of a senior design project for a biomedical engineering class, [Kendall Lowrey] worked in a team to develop a device that translates American Sign Language into spoken English. Wanting to eclipse the glove-based devices that came before them, the team set out to move beyond strictly spelling out words and toward combining signs with common gestures. The project is based around an Arduino Mega and is limited to the alphabet and about ten words because of initial program-space constraints. When the five flex sensors and three accelerometer values register an at-rest state for two seconds, the device takes a reading and looks up the most likely word or letter in a table. It then outputs that to a VoiceBox shield, which turns the words or letters into phonetic sounds.
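To make that loop concrete, here is a rough Arduino-style sketch of the flow described above: sample the flex sensors and accelerometer, wait for two seconds of stillness, pick the closest entry from a small lookup table, and print the word over serial for a text-to-speech shield. The pin assignments, thresholds, and table entries are hypothetical, not taken from the original project.

```cpp
// Hypothetical reconstruction of the recognition loop, for illustration only.
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};   // five flex sensors
const int ACCEL_PINS[3] = {A5, A6, A7};          // three analog accelerometer axes

struct Gesture {
  int flex[5];       // expected flex readings
  int accel[3];      // expected accelerometer readings
  const char *word;  // word or letter to speak
};

// Tiny stand-in for the real table, which held the alphabet plus about ten words.
Gesture gestures[] = {
    {{300, 310, 305, 300, 295}, {512, 512, 600}, "hello"},
    {{600, 610, 605, 600, 595}, {512, 400, 512}, "A"},
};
const int NUM_GESTURES = sizeof(gestures) / sizeof(gestures[0]);

int lastReading[8];
unsigned long stillSince = 0;
const int STILL_TOLERANCE = 15;      // max change that still counts as "at rest"
const unsigned long HOLD_MS = 2000;  // two-second hold before classifying

// Find the table entry closest to the current reading and send its word out
// over serial, where a TTS shield can turn it into phonetic sounds.
void speakClosestGesture(const int reading[8]) {
  long bestScore = -1;
  const char *bestWord = NULL;
  for (int g = 0; g < NUM_GESTURES; g++) {
    long score = 0;
    for (int i = 0; i < 5; i++) score += abs(reading[i] - gestures[g].flex[i]);
    for (int i = 0; i < 3; i++) score += abs(reading[5 + i] - gestures[g].accel[i]);
    if (bestScore < 0 || score < bestScore) {
      bestScore = score;
      bestWord = gestures[g].word;
    }
  }
  if (bestWord) Serial.println(bestWord);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading[8];
  for (int i = 0; i < 5; i++) reading[i] = analogRead(FLEX_PINS[i]);
  for (int i = 0; i < 3; i++) reading[5 + i] = analogRead(ACCEL_PINS[i]);

  // Any large change since the last sample resets the two-second timer.
  bool still = true;
  for (int i = 0; i < 8; i++) {
    if (abs(reading[i] - lastReading[i]) > STILL_TOLERANCE) still = false;
    lastReading[i] = reading[i];
  }

  if (!still) {
    stillSince = millis();
  } else if (millis() - stillSince > HOLD_MS) {
    speakClosestGesture(reading);
    stillSince = millis();  // avoid immediately repeating the same word
  }
  delay(50);
}
```

A simple nearest-match table like this keeps the whole classifier within the Arduino’s memory limits, which is consistent with the small vocabulary described above.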