How To Train A New Voice For Piper With Only A Single Phrase

[Cal Bryant] hacked together a home automation system years ago, which more recently uses Piper TTS (text-to-speech) voices for various undisclosed purposes. Not satisfied with the robotic-sounding standard voices available, [Cal] set about an experiment to fine-tune the Piper TTS AI voice model, using a single phrase cloned from a commercial TTS voice as the starting point.

Before the release of Piper TTS in 2023, free-to-use TTS systems such as espeak and Festival sounded robotic and flat. Piper delivered much more natural-sounding output without requiring massive resources to run. To change the voice style, the Piper AI model can either be retrained from scratch or, with much less effort, fine-tuned. In the latter case, the first problem to solve was generating the volume of training phrases needed to fine-tune Piper’s model. This was solved using a heavyweight AI model, Chatterbox, which is capable of so-called zero-shot voice cloning: given a single reference clip, it can speak arbitrary text in that voice. Check out the Chatterbox demo here.

As the loss function gets smaller, the model’s accuracy gets better

Training began with a corpus of test phrases in text format to ensure decent coverage of everyday English. [Cal] used Chatterbox to clone a voice from a single test phrase generated by a ‘mystery TTS system’, then synthesized 1,300 test phrases in that new voice. This audio set served as the training data for fine-tuning the Piper AI model on a lashed-up GPU rig.
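For the curious, the generation step looks something like the following sketch, assuming the chatterbox-tts Python package. The file names here are placeholders of ours, not [Cal]’s actual pipeline:

```python
# Rough sketch: batch-generate fine-tuning audio with Chatterbox's zero-shot cloning.
# "reference.wav" (the single cloned phrase) and "phrases.txt" (the text corpus)
# are placeholder file names.
from pathlib import Path

import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

phrases = [line.strip() for line in open("phrases.txt") if line.strip()]
Path("dataset").mkdir(exist_ok=True)

for i, phrase in enumerate(phrases):
    # Zero-shot cloning: each phrase is spoken in the voice of the reference clip.
    wav = model.generate(phrase, audio_prompt_path="reference.wav")
    ta.save(f"dataset/{i:04d}.wav", wav, model.sr)
```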

To verify accuracy, [Cal] used OpenAI’s Whisper software to transcribe the audio back to text and compared it with the original text corpus. To sidestep issues with punctuation and the differences between US and UK English, both texts were converted into phonemes using espeak-ng, giving a 98% phrase-match accuracy.
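A minimal sketch of that verification loop, assuming the openai-whisper package and an espeak-ng binary on the PATH (the exact matching logic [Cal] used will differ):

```python
# Transcribe each clip with Whisper, then compare phonemes (via espeak-ng) so
# punctuation and US/UK spelling differences don't register as errors.
import subprocess

import whisper

stt = whisper.load_model("base")

def phonemes(text: str) -> str:
    # -q suppresses audio output; --ipa prints the phonemes to stdout instead.
    out = subprocess.run(["espeak-ng", "-q", "--ipa", text],
                         capture_output=True, text=True, check=True)
    return " ".join(out.stdout.split())

def clip_matches(wav_path: str, expected_text: str) -> bool:
    transcript = stt.transcribe(wav_path)["text"]
    return phonemes(transcript) == phonemes(expected_text)

print(clip_matches("dataset/0000.wav", "The quick brown fox jumps over the lazy dog."))
```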

After down-sampling the training set using SoX (see the sketch below), it was ready for the Piper TTS training system. Despite all the preparation, running the software felt anticlimactic. A few inconsistencies in the dataset necessitated the removal of some data points, and after five days of training, with the GPU rig parked outside in the shade over heat concerns, TensorBoard indicated that the model’s loss function was converging. That’s AI-speak for: the model was tuned and ready for action! We think it sounds pretty slick.
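That down-sampling pass is about as simple as audio processing gets. Wrapped in Python for convenience, it might look like this; the 22,050 Hz, 16-bit mono target is our assumption of a typical Piper training rate, not necessarily [Cal]’s exact settings:

```python
# Down-sample every generated clip by shelling out to SoX.
import subprocess
from pathlib import Path

Path("dataset_22k").mkdir(exist_ok=True)

for src in sorted(Path("dataset").glob("*.wav")):
    dst = Path("dataset_22k") / src.name
    subprocess.run(
        ["sox", str(src), "-r", "22050", "-c", "1", "-b", "16", str(dst)],
        check=True,
    )
```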

If all this new-fangled AI speech synthesis is too complex and, well, a bit creepy for you, may we offer a more 1980s solution to making stuff talk? Finally, most people take the ability to speak for granted, until they can no longer do so. Here’s a team using cutting-edge AI to give people back that ability.

The hand-held, potato-shaped build: a white cylinder and black wiring on top, a small rectangular module to one side, and a red alligator clip on a nail stuck into the ‘potato’.

Building A Potato-based GLaDOS As An Introduction To AI

Although not nearly as intimidating as her ceiling-mounted hanging arm body, GLaDOS spent a significant portion of the Portal 2 game in a stripped-down computer powered by a potato battery. [Dave] had already made a version of her original body, but it was built around a robotic arm that was too expensive for the project to be really accessible. For his latest project, therefore, he’s created an AI-powered version of GLaDOS’s potato-based incarnation, which also serves as a fun introduction to building AI systems.

[Dave] wanted the system to work offline, so he needed a computer powerful enough to run all of his software locally. He chose an Nvidia Jetson Orin Nano, which was powerful enough to run a workable software system, albeit slowly and with some memory limitations. A potato cell unfortunately doesn’t generate enough power to run a Jetson, and it would be difficult to find a potato large enough to fit the Jetson inside. Instead, [Dave] 3D-printed and painted a potato-shaped enclosure for the Jetson, a microphone, a speaker, and some supplemental electronics.

A large language model handles interactions with the user, but most models were too large to fit on the Jetson. [Dave] eventually selected Llama 3.2, and used LlamaIndex to preprocess information from the Portal wiki for retrieval-augmented generation. Getting the prompt right was a bit difficult, but after consulting a prompt engineer, [Dave] managed to get it to respond to the hapless user in an appropriately acerbic manner. For speech generation, [Dave] used Piper after training it on audio files from the Portal wiki, and for speech recognition he used Vosk (a good programming exercise, Vosk being, in his words, “somewhat documented”). He’s made all of the final code available on GitHub under the fitting name of PotatOS.
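For a flavour of what the retrieval-augmented part involves, here is a bare-bones LlamaIndex sketch. The directory name and query are placeholders; [Dave]’s real pipeline, with its local Llama 3.2, GLaDOS persona prompt, Piper and Vosk wiring, lives in the PotatOS repository:

```python
# Index pre-processed wiki pages and answer questions against them.
# Note: LlamaIndex defaults to hosted models, so a fully offline build would
# also need Settings.llm and Settings.embed_model pointed at local ones.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("portal_wiki/").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What does GLaDOS think of test subjects?"))
```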

The end result is a handheld device that sarcastically insults anyone seeking its guidance. At least [Dave] had the good sense not to give this pernicious potato control over his home.

Convert Any Book To A DIY Audiobook?

If the idea of reading a physical book sounds like hard work, [Nick Bild’s] latest project, the PageParrot, might be for you. While AI gets a lot of flak these days, one thing modern multimodal models do exceptionally well is image interpretation, and PageParrot demonstrates just how accessible that’s become.

[Nick] demonstrates quite clearly how little code is needed to get from those cryptic black and white glyphs to sounds the average human can understand, specifically a paltry 80 lines of Python. Admittedly, many of those lines are pulling in libraries, and some are just blank, so functionally speaking, it’s even shorter than that. Of course, the whole application is mostly glue code, stitching together other people’s hard work, but it’s still instructive and fun to play with.

The hardware required is a Raspberry Pi Zero 2 W, a camera (in this case, a USB webcam), and something to hold it above the book. Any Pi that can connect to a camera should also work, though, with just a little configuration.

On the software side, [Nick] pulls in the cv2 library (the Python interface to OpenCV) to handle the camera, configuring it for full-HD resolution. Google’s GenAI library is used to talk to the Gemini 2.5 Flash LLM via an API endpoint; given a captured image and a trivial prompt, it returns the whole page of text, quick as a flash.
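In sketch form, that capture-and-transcribe step might look something like the following. We’re assuming the google-genai Python package with an API key in the environment, and the prompt wording is ours rather than [Nick]’s:

```python
# Grab a full-HD frame with OpenCV and hand it to Gemini 2.5 Flash to read.
import cv2
from PIL import Image
from google import genai

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
ok, frame = cap.read()
cap.release()

# OpenCV returns BGR arrays; convert to an RGB PIL image for the API call.
page = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

client = genai.Client()  # picks up the API key from the environment
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=["Transcribe all of the text on this book page.", page],
)
print(response.text)
```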

Finally, the script hands that text over to Piper, which turns it into a speech file in WAV format. This can then be played through an audio device with a call out to the console aplay tool. It’s all very simple at this level of abstraction.
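That last hop is little more than two subprocess calls; the voice model file name below is a placeholder:

```python
# Pipe the text into the piper CLI, then play the resulting WAV with aplay.
import subprocess

def speak(text: str, voice="en_US-lessac-medium.onnx", wav_path="page.wav"):
    # piper reads text on stdin and writes a WAV file.
    subprocess.run(["piper", "--model", voice, "--output_file", wav_path],
                   input=text, text=True, check=True)
    subprocess.run(["aplay", wav_path], check=True)

speak("It's all very simple at this level of abstraction.")
```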
