Google’s Duplex AI Has Conversation Indistinguishable From Human’s

First Google gradually improved its WaveNet text-to-speech neural network to the point where it sounds almost perfectly human. Then they introduced Smart Reply, which suggests possible replies to your emails. So it’s no surprise that they’ve announced an enhancement for Google Assistant called Duplex, which can hold phone conversations for you.

What is surprising is how well it works, as you can hear below. The first call is Duplex booking an appointment at a hair salon, and the second is it making a reservation at a restaurant.

Note that this reverses the usual roles when a computer is on the phone. The computer is the customer who calls the business, and the human is on the business side. The goal of the computer is to book a hair appointment or reserve a table at a restaurant. The computer has to know how to carry on a conversation with the human without the human realizing that they’re talking to a computer. It’s for communicating with all those businesses which don’t have online booking systems and instead use human operators on the phone.

Not knowing that they’re talking to a computer, the human will therefore speak as they would with another human, with all the pauses, “hmm”s and “ah”s, variable pacing, dropped words, and even mid-sentence changes of context. There’s also the problem of a phrase having multiple meanings. The “four” in “Ok for four” can mean 4 pm or four people.

The component which decides what to say is a recurrent neural network (RNN) trained on many anonymized phone calls. The input includes the audio, the output from Google’s automatic speech recognition (ASR) software, and context such as the conversation’s history and the parameters of the conversation (e.g. booking a table at a restaurant, for how many people, and when), and more.
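
Google hasn’t published Duplex’s internals, so take the following as a purely illustrative Python sketch of what “the input” might look like: one turn’s audio features, ASR transcript, task parameters, and history packed into a single vector for an RNN step. Every name, dimension, and encoding choice below is our invention, not Google’s.

    # Purely illustrative: pack one conversational turn's evidence into a
    # single feature vector, roughly the shape of input an RNN would see.
    # Nothing here is Google's code; names and sizes are invented.
    import numpy as np

    def encode_turn(audio_features, asr_text, task_params, history):
        text_vec = np.zeros(64)                    # crude bag-of-words stand-in
        for token in asr_text.lower().split():
            text_vec[hash(token) % 64] += 1.0
        task_vec = np.array([task_params['party_size'],   # e.g. 4 people
                             task_params['hour']], float) # e.g. 16 for 4 pm
        # history: list of earlier turns' 64-element text vectors (assumption)
        hist_vec = np.mean(history, axis=0) if len(history) > 0 else np.zeros(64)
        return np.concatenate([audio_features, text_vec, task_vec, hist_vec])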

Producing the speech is done using Google’s text-to-speech technologies, WaveNet and Tacotron. “Hmm”s and “ah”s are inserted for a more natural sound. Timing is also taken into account: “Hello?” gets an immediate response, but Duplex deliberately introduces latency when responding to more complex questions, since replying too quickly would sound unnatural.

There are limitations, though. If Duplex decides it can’t complete a task, it hands the conversation over to a human operator. Also, Duplex can’t handle general conversation; instead, multiple instances are trained on different domains. So this isn’t the singularity which we’ve talked about before. But if you’re tired of talking to computers at businesses, maybe this will provide a little payback by having the computer talk to the business instead.

On a more serious note, would you want to know if the person you were speaking to was in fact a computer? Perhaps Google should preface each conversation with “Hi! This is Google Assistant calling.” And even knowing that, would you want to have a human conversation with a computer, knowing that its “um”s were artificial? This may save time for the person on whose behalf the call is made, but the person being called may wish the computer would be a little more computer-like and speak more efficiently. Let us know your thoughts in the comments below. Or just check out the following Google I/O ’18 keynote presentation video where all this was announced.

Continue reading “Google’s Duplex AI Has Conversation Indistinguishable From Human’s”

DIY Text-to-Speech with Raspberry Pi

We can almost count on our eyesight to fail with age, maybe even past the point of correction. It’s a pretty big flaw if you ask us. So, how can a person with aging eyes hope to continue reading the printed word?

There are plenty of commercial document readers available that convert text to speech, but they’re expensive. Most require a smartphone and/or an internet connection. That might not be as big an issue for future generations of failing eyes, but we’re not there yet. In the meantime, we have small, cheap computers and plenty of open source software to turn them into document readers.

[rgrokett] built a Raspberry Pi text reader to help an aging parent maintain their independence. In the process, he made a good soup-to-nuts guide to building one. It couldn’t be easier to use: just place the document under the camera and push the button. A Python script makes the Pi take a picture of the text. Then it uses Tesseract OCR to convert the image to plain text, and runs the text through a speech synthesis engine which reads it aloud. The reader is on as long as it’s plugged in, so it’s ready to work at the push of a button. We can probably all appreciate such a low-hassle design. Be sure to check out the demo after the break.
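
The whole pipeline fits in surprisingly little Python. Here’s a minimal sketch of the same capture-OCR-speak loop; the GPIO pin, the temp-file path, and the choice of espeak as the synthesis engine are our assumptions, not necessarily the exact pieces [rgrokett] used:

    # Minimal document reader: photograph the page, OCR it, speak the text.
    # Pin number, file path, and espeak are illustrative choices.
    import subprocess
    from gpiozero import Button
    from picamera import PiCamera
    from PIL import Image
    import pytesseract

    button = Button(17)                      # momentary push button on GPIO17
    camera = PiCamera()

    def read_document():
        camera.capture('/tmp/page.jpg')      # snap the page under the camera
        text = pytesseract.image_to_string(Image.open('/tmp/page.jpg'))
        subprocess.run(['espeak', text])     # hand the text to the TTS engine

    while True:
        button.wait_for_press()              # ready whenever it's plugged in
        read_document()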

If you wanted to use this to read books, you’d still have to turn the pages yourself. Here’s a BrickPi reader that solves that one.

Continue reading “DIY Text-to-Speech with Raspberry Pi”

Quick Hack Helps ALS Patient Communicate

A diagnosis of amyotrophic lateral sclerosis, or ALS, is devastating. Outlier cases like [Stephen Hawking] notwithstanding, most ALS patients die within four years or so of their diagnosis, after having endured the progressive loss of muscle control that robs them of their ability to walk, to swallow, and even to speak.

Rather than see a friend’s father locked in by his ALS, [Ricardo Andere de Mello] decided to help out by building a one-finger interface to a [Hawking]-esque voice synthesizer on the cheap. Working mainly with hardware he had on hand, he put together a system that lets his friend’s dad flick a finger to operate off-the-shelf assistive communication software running on a laptop. The sensor is an accelerometer velcroed to a fingertip; when a movement threshold is passed, an Arduino sends the laptop an F12 keypress, which is all that’s needed to operate the software. You can watch it in action in the video after the break.
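
In [Ricardo]’s build the Arduino itself emits the keypress, but the idea is easy to see in a host-side sketch, too. Here’s a hedged Python variant that reads accelerometer magnitudes arriving over serial and injects an F12 keypress with pyautogui when a flick crosses the threshold; the port name, baud rate, and threshold are placeholders:

    # Host-side take on the flick detector: watch accelerometer readings
    # from the serial port and press F12 when a flick is detected.
    # Port, baud rate, and threshold are placeholders, not [Ricardo]'s values.
    import serial
    import pyautogui

    THRESHOLD = 1.5                          # change in g that counts as a flick
    port = serial.Serial('/dev/ttyACM0', 9600)

    while True:
        raw = port.readline().strip()
        try:
            magnitude = float(raw)
        except ValueError:
            continue                         # skip partial or garbled lines
        if magnitude > THRESHOLD:
            pyautogui.press('f12')           # the one key the software needs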

Hats off to [Ricardo] for pitching in and making a difference without breaking the bank. This isn’t the first expedient speech synthesizer we’ve seen for ALS patients; this one does it with just three chips, including the voice synthesis.

Continue reading “Quick Hack Helps ALS Patient Communicate”

MicroVox Puts the 80’s Back into Your Computer’s Voice

[Monta Elkins] got it in his mind that he wanted to try out an old-style speech synthesizer based on the SC-01 (or SC-01A) chip, one that uses phonemes to produce speech. Searching online turned up a MicroVox text-to-speech synthesizer from the 1980s built around the chip, and after putting together a makeshift serial cable, he connected it to an Arduino Uno and tried it out. It has that 8-bit artificial voice that many of us remember fondly, and it’s fairly understandable.

The SC-01, and later the SC-01A, were made by Votrax International, Inc. In addition to the MicroVox, the SC-01 and SC-01A were used in the Heathkit Hero robot, the VS-100 synthesizer add-on for TRS-80s, various arcade games such as Q*bert and Krull, and in a variety of other products. Its input selects which phonemes to play, and where the chip shines is in producing good transitions between them to come up with decent speech, much better than you’d get if you just played the phonemes one after the other.

The MicroVox has a 25-pin RS-232 serial port as well as a parallel port and a speaker jack. In addition to the SC-01A, it has a 6502 under the hood. [Monta] was lucky to also receive the manual, and what a manual it is! In addition to a list of the supported phonemes and words, it contains the schematics, parts list, and details of the serial port, which alone would make for fun reading. We really liked the taped-in note with a hand-written correction that says “Factory Corrected 10/18/82”.

Following along in the video below, [Monta] finds the datasheet for the serial port’s input buffer chip online and verifies the voltage levels. Next he opens up the case and uses the DIP switches to set the baud rate, data bits, parity, stop bits, and so on. After hooking up the speakers, putting together a makeshift cable for RX, TX, and ground, and writing a little Arduino code, he sends it text and out comes the speech.
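
[Monta] used an Arduino, but anything with a serial port can drive the board. As a rough sketch, here’s the same trick in Python with pyserial; the device name and the 7E1 framing at 9600 baud are stand-ins for whatever your DIP switches actually say, and the carriage-return-to-speak behavior is our assumption based on similar Votrax-era gear:

    # Send a line of text to the MicroVox over RS-232 and let it talk.
    # Match the framing to your DIP-switch settings; these are examples.
    import serial

    port = serial.Serial('/dev/ttyUSB0', baudrate=9600,
                         bytesize=serial.SEVENBITS,
                         parity=serial.PARITY_EVEN,
                         stopbits=serial.STOPBITS_ONE)
    port.write(b'HELLO FROM NINETEEN EIGHTY TWO\r')  # CR kicks off the speech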

Continue reading “MicroVox Puts the 80’s Back into Your Computer’s Voice”

Arduino Clock Is HAL 1000

In the movie 2001: A Space Odyssey, HAL 9000, the neurotic computer, had a birthday in 1992 (for some reason, in the book it is 1997). In the late 1960s, that date sounded impossibly far away, but now it seems like a distant memory. The thing is, we are only now starting to get computers with practical voice I/O, and even they are a far cry from HAL.

[GeraldF6] built an Arduino-based clock. That’s nothing new, but thanks to a MOVI board (ok, shield), this clock has voice input and output, as you can see in the video below. Unlike most modern speech-enabled devices, the MOVI board (and, thus, the clock) does not use an external server in the cloud or any remote processing at all. On the other hand, the speech quality isn’t what you might expect from any of the modern smartphone assistants. We estimate it might be about 1/9 the power of the HAL 9000.

Continue reading “Arduino Clock Is HAL 1000”

Retrotechtacular: The Incredible Machine

They just don’t write promotional film scripts like they used to: “These men are design engineers. They are about to engage a new breed of computer, called Graphic 1, in a dialogue that will test the ingenuity of both men and machine.”

This video (embedded below) from Bell Labs in 1968 demonstrates the state of the art in “computer graphics” as the narrator calls it, with obvious quotation marks in his inflection. The movie ranges from circuit layout, to animations, to voice synthesis, hitting the high points of the technology at the time. The soundtrack, produced on their computers, naturally, is pure Jetsons.

A highlight is the singing of “Daisy Bell” at 9:05, which inspired Stanley Kubrick to play a glitchy version of the track as Dave pulls HAL 9000’s brains out, symbolically regressing backwards through a history of computer voice synthesis which, at that point in time, was the present. (Whoah!)
Continue reading “Retrotechtacular: The Incredible Machine”

Talking Neural Nets

Speech synthesis is nothing new, but it has gotten better lately. It is about to get even better thanks to DeepMind’s WaveNet project. The Alphabet (or is it Google?) project uses neural networks to analyze audio data, learning to speak by example. Unlike other text-to-speech systems, WaveNet creates sound one sample at a time and produces surprisingly human-sounding results.
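
To make “one sample at a time” concrete, here’s a toy Python loop with the same autoregressive structure: each new sample is drawn from a distribution conditioned on everything generated so far. The dummy distribution below stands in for WaveNet’s stack of dilated causal convolutions; only the sampling structure is the point.

    # Toy autoregressive generation in the spirit of WaveNet: each 8-bit
    # mu-law sample is drawn conditioned on all previous samples. A real
    # trained network replaces the dummy distribution below.
    import numpy as np

    def sample_distribution(history):
        probs = np.ones(256)                 # dummy: mildly favor repeating
        if history:                          # the previous quantization level
            probs[history[-1]] += 32.0
        return probs / probs.sum()

    rng = np.random.default_rng()
    samples = []
    for _ in range(16000):                   # one second of 16 kHz audio
        samples.append(int(rng.choice(256, p=sample_distribution(samples))))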

Before you rush to comment “Not a hack!” you should know we are seeing projects pop up on GitHub that use the technology. For example, there is a concrete implementation by [ibab]. [Tomlepaine] has an optimized version. In addition to English, it has been successfully trained on Mandarin and even used to generate music. If you don’t want to build out a system yourself, the original paper has audio files (about midway down) comparing traditional parametric and concatenative voices with the WaveNet voices.

Another interesting project is the reverse path: teaching WaveNet to convert speech to text. Before you get too excited, though, you might want to note this quote from the README file:

“We’ve trained this model on a single Titan X GPU during 30 hours until 20 epochs and the model stopped at 13.4 ctc loss. If you don’t have a Titan X GPU, reduce batch_size in the train.py file from 16 to 4.”

Last time we checked, you could get a Titan X for a little less than $2,000.

There is a multi-part lecture series on reinforcement learning (the foundation for much of DeepMind’s work). If you wanted to tackle a project yourself, that might be a good starting point (the first part appears below).

Continue reading “Talking Neural Nets”