Websites used to be uglier than they are now. Sure, you can still find a few disasters, but back in the early days of the Web you’d have found blinking banners, spinning text, background music, and bizarre navigation themes. Practices evolved, and now there’s much less variation between professionally designed sites.
Just as happened with hypertext, the same maturation is coming to voice user interfaces (or VUIs). As products like Google Home and Amazon Echo gain users, developing VUIs will become a big deal. We are also starting to see hacker projects that use VUIs, whether by leveraging the big guys, running local code on a Raspberry Pi, or even using dedicated speech hardware. So what are the best practices for a VUI? [Frederik Goossens] shares his thoughts on the subject in a recent post.
Truthfully, a lot of the design process [Frederik] suggests mimics conventional user interface design in defining the use case and mapping out the flow. However, there are some unique issues surrounding usable voice interactions.
In summary, the main points for a VUI are:
- Keep it simple and conversational
- Confirm completion of actions
- Have an error strategy
- Consider extra layers of security
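To make a few of those points concrete, here is a minimal sketch of a voice-interaction handler that keeps prompts conversational, confirms completion, and re-prompts on errors rather than dead-ending. All the names here are hypothetical, not tied to any real voice SDK; a production skill would sit behind a speech recognizer and a text-to-speech engine.

```python
def handle_utterance(utterance: str) -> str:
    """Map a recognized utterance to a spoken response (hypothetical example)."""
    text = utterance.strip().lower()
    if "turn on the light" in text:
        # Confirm completion so the user knows the action actually happened.
        return "OK, the light is on."
    if "turn off the light" in text:
        return "OK, the light is off."
    # Error strategy: instead of silence or a bare failure, re-prompt
    # with a hint about what the device can understand.
    return "Sorry, I didn't catch that. You can say 'turn on the light'."
```

For example, `handle_utterance("Please turn on the light")` returns a spoken confirmation, while an unrecognized phrase falls through to the re-prompt, which is what keeps a voice session from stalling.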
You can find more details in the original post, which also covers some general advice about tools and design.
If you are looking for something a little more detailed, veteran voice developer [Jo Jaquinta] posted a video in the comments of his presentation on the same topic in Dublin, which you can see below. Meanwhile, if you want to try your own voice development, we’ve seen that it is probably easier than you think. If you want to do it all locally, even an Arduino can hear what you have to say.