The current state of virtual personal assistants (Alexa, Cortana, Google Assistant, and Siri) leaves something to be desired. The speech recognition is mostly pretty good. However, customization options are very limited. Beyond that, many people are worried about the privacy of their data when using one of these assistants. The Stanford Open Virtual Assistant Lab has rolled out Almond, which is open source and is reported to have better privacy features.
Like most other virtual assistants, Almond has skills that determine what it can do. You can use Almond in a browser, on a Google phone, or as a command-line application. It all lives on GitHub, so if you don't like something you are free to fix it.
The skills live in a marketplace of sorts known as Thingpedia. There are a surprising number of them, although not nearly as many as the commercial assistants offer. The assistant can integrate with Nest, GNOME, Gmail, Twitter, Slack, and many more services.
The natural language processing is impressive. Here are some examples from the web site:
- When the New York Times has an article about China, translate the headline to Chinese, then email it to my friend.
- When I leave home, turn off the heating.
- When I post to Twitter, copy the post to Facebook.
- Get the Bitcoin price and then send it to my colleague on Slack.
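The commands above all follow a trigger-action pattern: when some event happens, run one or more actions. Almond expresses these as ThingTalk programs; as a generic illustration of the pattern only (not Almond's actual implementation), a minimal event-to-action dispatcher might look like this:

```python
# Minimal trigger-action rule engine, illustrating the "when X, do Y"
# pattern used by the example commands above. This is a hypothetical
# sketch, not Almond's ThingTalk runtime.

class RuleEngine:
    def __init__(self):
        # Maps an event name to the list of actions registered for it.
        self.rules = {}

    def when(self, event, action):
        """Register an action (a callable) to run when `event` fires."""
        self.rules.setdefault(event, []).append(action)

    def fire(self, event, payload):
        """Deliver an event; run every action registered for it."""
        return [action(payload) for action in self.rules.get(event, [])]

engine = RuleEngine()
# "When I post to Twitter, copy the post to Facebook."
engine.when("twitter.post", lambda post: f"facebook: {post}")
print(engine.fire("twitter.post", "hello"))  # ['facebook: hello']
```

The interesting part of a real assistant is everything around this loop: parsing natural language into the rule, and authenticating against the services the actions touch.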
The web site is a little glitzy and the GitHub organization will take some time to parse. However, the documentation is very readable.
Almond is begging to be run on a smart speaker, and there is a way to do it. You can even run it from a Docker image that is already configured.
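If you go the Docker route, starting the preconfigured server looks something like the following. Treat the image name and port as assumptions that may have changed; check the project's GitHub organization for the current instructions.

```shell
# Pull and run the preconfigured Almond server image in the background.
# Image name and port are assumptions; verify them against the
# stanford-oval repositories before relying on this.
docker run -d -p 3000:3000 --name almond stanfordoval/almond-server

# Then point a browser at the local web interface:
# http://localhost:3000
```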
What we really want to do is build Almond into a robot. For now, we may just repurpose a Google Raspberry Pi.
Glad I saw this. I have been looking at setting up snips (https://snips.ai/), but this may be just what I was looking for!
Sonos just bought Snips. I don't see any hope of Sonos keeping any of those assets open source for long. :/
Yeah, I saw that. I was going to try out snips since it seemed so promising. Now I don’t see any future in it. But who knows..
That didn't take long… Snips is dead.
https://forum.snips.ai/t/important-message-regarding-the-snips-console/4145
Annoying. At the rate this sort of acquisition happens to anything interesting, it might be practical for developers who want to work on a project that actually stays on track to consider a worker-owned tech collective. At least then there's a greater chance of actually answering to people with relevant expertise.
But Node.js? Why? It also shows in their minimum requirements, which I found to be quite beefy…
Almond has been baked into the latest release of Home Assistant. Lots of smart home goodness.
https://www.home-assistant.io/blog/2019/11/20/release-102/
My biggest question is one I can't find a direct answer to: does this do the language processing locally or on a remote server? If it's the latter, "Sorry, still not interested." I don't want to bring a remote spy bug into my home (whether it's intended that way or (potentially) turned into one later doesn't matter).
After a brief browse through the GitHub organization: it offloads language processing to a remote server, but you define which server that is, and the full source code for that server is available. So it can't "phone home" if home is your own server under your control.
Now THAT is what makes it interesting to me.
Except, if you need racks full of computers to run the software then it’s not so interesting anymore.
Worst case, that is required today. There is at least one RISC-V processor with some AI acceleration on chip. In a few years, those racks might be a stack of Raspi-like boards that fit in a shoebox…
That’s brilliant, b/c you can run the heavy lifting on a computer of your choosing / server in the closet / AWS under your control, and still keep the device lightweight.
Keep the data within your WiFi and it’s both fast and private. Winner.
I agree with Elliot, there are certain things which require hefty computer horsepower. Voice recognition and ML/AI come to mind. I do want my own control and would like to avoid the cloud (privacy and connectivity come to mind), but this works for me (so far). I now have a Z-Wave dongle, a ZigBee dongle, and WiFi to cover my HA. And I can add BT and RF to that if I like.
“Hello Wiretap, please open my garage door, and while you’re at it let the NSA, advertisers and whoever else has access to this Speech-To-Text server and wants to build a pattern of my behavior know what I’m up to.”
Said every smartphone owner ever.
It’s not just smartphones. With smartphones they also get location data. But everything you say, write in an e-mail, or post on the internet is potentially monitored.
https://en.wikipedia.org/wiki/Communications_Assistance_for_Law_Enforcement_Act
“interception of communications for Law Enforcement purposes, and for other purposes.”
(Oh, “other” purposes. Well, at least that’s clearly limited in scope by the law…)
Okay Google to say Alexa to say okay google …
I think Amazon is getting a bit crazy with its voice assistant everywhere. I'm expecting my box of cereal to have a secret surprise of an Alexa in every box. Mmmm, Chocobombs.
> Okay Google to say Alexa to say okay google …
Ooops dyslexia …
Okay Google say to Alexa say okay google …