
Alexa Brings Back Singing Fish, This Time It’s A Good Thing

Remember Big Mouth Billy Bass? That’s the singing fish with which you could torture family members by having it endlessly perform a rendition of either “Take Me to the River” or “Don’t Worry Be Happy”.

Now [Brian Kane], a teacher at the Rhode Island School of Design, has connected Amazon’s Alexa to the fish. Speak the “wake word”, “Alexa”, and the fish’s head turns to face you. Then ask it any question you’d normally ask Alexa and Alexa’s voice answers while the fish opens and closes its mouth in time to the words. Want to know the weather? Ask the fish, which you can see [Brian] do in the video below.

[Brian] hasn’t given details on how he’s done it, but he’s likely made use of the Alexa Skills Kit, an SDK from Amazon that lets you use the Alexa voice recognition and speech service with your own hardware (wetware, aquaware?), just as Amazon does with their home assistant, the Echo.
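
For the curious, the Skills Kit end of things boils down to a handler (an AWS Lambda function, for instance) that receives a JSON request from Alexa and returns JSON telling her what to say. Here’s a minimal sketch in Python; the fish-themed skill and its canned replies are entirely hypothetical, not [Brian]’s code.

```python
# Minimal Alexa custom-skill handler, e.g. deployed as an AWS Lambda function.
# The request/response JSON shapes follow the Alexa Skills Kit documentation;
# the "fish" skill itself and its replies are made up for illustration.

def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        text = "Ask the fish anything."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        # A real skill would branch on its own intents here.
        text = "The fish heard your " + intent + " request."
    else:
        text = "Blub."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```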

Amazon Offers $2.5M To Make Alexa Your Friend

Amazon has unveiled the Alexa Prize, a $2.5 million purse for the first team to turn Alexa, the voice service that powers the Amazon Echo, into a ‘socialbot’ capable of “conversing coherently and engagingly with humans on popular topics for 20 minutes”.

The Alexa Prize is only open to teams from colleges or universities. The winning team takes home $500,000, and a further $1M goes to the team’s college or university in the form of a research grant. Of course, the Alexa Prize grants Amazon a perpetual, irrevocable, worldwide, royalty-free license to make use of the winning socialbot.

It may be argued that the Alexa Prize is a competition to have a chatbot pass a Turing Test. This is a false equivalence; the Turing Test, as originally formulated, requires a human evaluator to judge between two conversation partners, one of which is a human and one of which is a computer. Additionally, the method of communication is text-only, whereas the Alexa Prize will make use of Alexa’s text-to-speech functionality. The Alexa Prize is not a Turing Test, but only because of semantics. If you generalize the phrase ‘Turing Test’ to mean any test of natural-language conversation, the Alexa Prize is a Turing Test.

This is not the first prize offered for a computer program that is able to communicate with a human in real time using natural language. Since 1990, the Loebner Prize, cosponsored by AI god Marvin Minsky, has offered a cash prize of $100,000 (and a gold medal) to the first computer that is indistinguishable from a human in conversation. Since 1991, yearly prizes have been awarded to the computer that is most like a human as part of the competition.

For any team attempting the enormous task of developing a theory of mind and consciousness, here are a few tips: don’t use Twitter as a dataset. Microsoft tried that, and their chatbot predictably turned racist. A better idea would be to copy Hackaday and our article-generating algorithm. Just use Markov chains and raspberry pi your way to arduino this drone.
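
If you’d rather roll your own word salad, a word-level Markov chain is only a couple dozen lines of Python. This is a toy sketch with a stand-in corpus; point it at a pile of real articles for better (worse?) results.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map every `order`-word window to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Random-walk the chain to produce vaguely plausible prose."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Stand-in corpus; feed it real articles instead.
corpus = "use a raspberry pi to arduino this drone and use a raspberry pi to blink an led"
print(generate(build_chain(corpus)))
```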

Seeed Studio’s ReSpeaker Speaks All The Voice Recognition Languages

Seeed Studio recently launched its third Kickstarter campaign: ReSpeaker, an open hardware voice interface. Where its previous Kickstarted IoT hardware, such as the RePhone, mostly focused on connectivity, the electronics manufacturer from Shenzhen is now tackling another hotly contested area of IoT: voice recognition.

The ReSpeaker Core is a capable development board based on Mediatek’s MT7688 WiFi module and runs OpenWrt. Onboard are a WM8960 stereo audio codec with integrated 1W speaker/headphone driver, a microphone, an ATmega32U4 coprocessor, 12 addressable RGB LEDs and 8 touch sensors. There are also two expansion headers with GPIOs, I2S, I2C, analog audio and USB 2.0, plus an onboard microSD card slot.

The latter is especially useful for feeding the ReSpeaker’s integrated speech recognition engine, PocketSphinx, with a vocabulary and audio file library, enabling it to respond to keywords and commands even when it’s not hooked up to the internet. Once it’s online, ReSpeaker also supports most of the available cloud-based cognitive speech recognition services, such as Microsoft Cognitive Service, Amazon Alexa Voice Service, Google Speech API, Wit.ai and Houndify. It also comes with an SDK and Python API, supports JavaScript, Lua and C/C++, and it looks like the coprocessor features an Arduino-compatible bootloader.
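
The offline side is plain PocketSphinx, so keyword spotting looks roughly like the snippet below using the generic pocketsphinx Python package (not Seeed’s own SDK); the keyphrase and threshold are placeholders you’d tune for your setup.

```python
# Offline keyword spotting with the generic `pocketsphinx` Python package.
# This is a rough stand-in for what the ReSpeaker does out of the box;
# the keyphrase and threshold are placeholders to tune.
from pocketsphinx import LiveSpeech

speech = LiveSpeech(
    lm=False,                # keyword spotting only, no full language model
    keyphrase="respeaker",   # the wake word to listen for
    kws_threshold=1e-20,     # lower = more sensitive, more false positives
)

for phrase in speech:        # blocks, yielding one result per detection
    print("Keyword heard:", phrase.segments(detailed=True))
    # ...hand the following audio off to a cloud recognizer here...
```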

The expansion header accepts shield-like hardware add-ons, some of which are also available through the campaign. The most important one is the circular, far-field microphone array. Based on seven XVSM-2000 digital microphones, the extension board enhances the device’s hearing with sound localization, beamforming, reverb and noise suppression. A Grove extension board connects the ReSpeaker to Seeed’s current lineup of ready-to-use sensors, actuators and other peripherals.

Seeed also cooperates with the Meow King Audio Electronic Company to develop a nice tower-shaped enclosure with built-in speaker, 5W amplifier and battery. As a portable speaker, the Meow King Drive Unit (shown on the right) certainly doesn’t knock your socks off, but it practically turns the ReSpeaker into an open source version of the Amazon Echo — including the ability to run offline instead of piping everything you say to Big Brother.

According to Seeed, the freshly baked hardware will ship to backers in November 2016, and they do have a track record of shipping Kickstarter rewards on schedule. At the time of writing, some of the Crazy Early Bird pledges are still available for $39. Enjoy the campaign video below and let us know what you think of the hardware in the comments!

When The Smart Hits The Fan

A fan used to be a simple device – motor rotates blades, air moves, and if you were feeling fancy, maybe the whole thing oscillates. Now fans have thermostats, timers, and IR remotes. So why not increase the complexity by making a smart fan with an IoT interface?

[Casper]’s project looks more like a proof of concept or learning platform than a serious attempt at home automation. His build log mentions an early iteration based on a Raspberry Pi, but an ESP8266 was a better choice and made it into the final build, which uses an IR LED to mimic the signals from the remote so that all the stock modes of the fan are supported. The whole thing is battery powered and sits on a breadboard on top of the fan, but we’ll bet that a little surgery could implant the interface and steal power internally. As for interfaces, take your pick – an iOS app via the SmartThings home automation platform, the SmartTiles web client, or an Amazon Echo. [Casper] mentions looking into MQTT as well, but with some confusion; we’d suggest he check out [Elliot Williams]’ new tutorial on MQTT to get up to speed.
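
For reference, the MQTT pattern [Casper] was eyeing is simple enough: the fan-side controller subscribes to a command topic, and anything else on the network publishes to it. Here’s a rough Python sketch of the subscriber half using paho-mqtt; the broker address and topic name are made up, and an actual build would run this logic on the ESP8266 rather than on a PC.

```python
# Sketch of the MQTT subscribe side: listen on a command topic and act on it.
# Broker address and topic are hypothetical; paho-mqtt 1.x callback style.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"         # your MQTT broker, e.g. a mosquitto instance
TOPIC = "home/livingroom/fan"   # made-up command topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    command = msg.payload.decode()
    print("Fan command:", command)   # e.g. "on", "off", "oscillate"
    # ...fire the matching IR code at the fan here...

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```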

Roll Your Own Amazon Echo On A Raspberry Pi

Speech recognition coupled with AI is the new hotness. Amazon’s Echo is a pretty compelling device, for a largish chunk of change. But if you’re interested in building something similar yourself, it just got a lot easier: Amazon has opened up a GitHub repository with instructions and code that will get you up and running with their Alexa Voice Service in short order.

If you read Hackaday as avidly as we do, you’ve already read that Amazon opened up their SDK (confusingly called a “Skills Kit”) and that folks have started working with it already. This newest development is Amazon’s “official” hello-world demo, for what that’s worth.

There are also open source alternatives, so if you just want to get something up and running without jumping through registration and licensing hoops, you’ve got that option as well.

Whichever way you slice it, there seems to be a real interest in having our machines listen to us. It’s probably time for an in-depth comparison of the various options. If you know of a voice recognition system that runs on something embeddable — a single-board computer or even a microcontroller — and you’d like to see us look into it, post up in the comments. We’ll see what we can do.

Thanks to [vvenesect] for the tip!

Internet Of Things In Five Minutes

If you’re looking for the quickest way to go from zero to voice-controlled home automation system, you should spend five minutes checking out [Hari Wiguna]’s project on Hackaday.io where he connects up IoT gadgets and services into a functioning lightswitch. (Video below the break.)

[Hari] demonstrates how to set up a complex chain: Amazon Echo to IFTTT to Adafruit.io as a data broker, which is then polled by an ESP8266 unit in his home that controls his X10 setup. (Pshwew.) But each step along the way is designed to be nearly plug-and-play, so it’s really a lot like clicking Lego blocks together. [Hari]’s video is a nice overview.
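
The polling step is the only part that needs much code. [Hari] does it on the ESP8266, but the idea is easy to show in Python against Adafruit IO’s REST API; the username, key and feed name below are placeholders.

```python
# Poll an Adafruit IO feed for the latest value written by IFTTT.
# Username, AIO key and feed name are placeholders; [Hari] does the
# equivalent on the ESP8266 itself.
import time
import requests

AIO_USER = "your_username"
AIO_KEY = "your_aio_key"
FEED = "lights"   # hypothetical feed that IFTTT writes "ON"/"OFF" into

URL = "https://io.adafruit.com/api/v2/{}/feeds/{}/data/last".format(AIO_USER, FEED)

last_value = None
while True:
    reply = requests.get(URL, headers={"X-AIO-Key": AIO_KEY}, timeout=10)
    value = reply.json().get("value")
    if value != last_value:
        print("New command from the feed:", value)
        # ...toggle the X10 (or 433MHz) outlet here...
        last_value = value
    time.sleep(5)   # be gentle with the free tier
```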

There’s only one catch if you’re going to replicate this yourself: the X10 system that’s used for the last mile. Unless you have one of these setups already, you’re on your own for controlling the outlets that turn the lights on and off. For price and hackability, we suggest the common 433MHz wireless outlet switches paired with cheap 433MHz transmitter modules, available on eBay for around $1. We’ve seen a lot of hacks of these systems — they’re quite common both in the US and Europe.
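
If a Raspberry Pi ends up in the loop, driving one of those transmitters is only a few lines with the rpi-rf package; the GPIO pin and outlet codes below are placeholders you’d replace with codes sniffed from your own remote.

```python
# Drive a cheap 433MHz transmitter from a Raspberry Pi using the rpi-rf package.
# GPIO pin and codes are placeholders; capture your outlets' real codes first.
from rpi_rf import RFDevice

TX_GPIO = 17          # BCM pin wired to the transmitter's DATA line
CODE_ON = 1234567     # hypothetical code captured from the outlet's remote
CODE_OFF = 1234564

rf = RFDevice(TX_GPIO)
rf.enable_tx()
try:
    rf.tx_code(CODE_ON)   # default protocol and pulse length; adjust to match
finally:
    rf.cleanup()
```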

We’ve also covered [Hari]’s projects before: both his self-learning TV remote and a sweet Halloween hack. His video production skills are excellent. We’re in awe of how much info he crams into his YouTube videos.

Echo, Meet Mycroft

The Amazon Echo is an attempt to usher in a new product category: a box that listens to you and obeys your wishes. Sort of like Siri or Google Now for your house. Kickstarter creator [Joshua Montgomery] likes the idea, but he wants to do it all open source with a Raspberry Pi and an Arduino.

The Kickstarter (which reached its funding goal earlier this month) claims the device will use natural language to access media and control IoT devices, and that it will be open to both hardware and software hacking. The Kickstarter page says that Mycroft has partnerships with Lucid and Canonical (the people behind Ubuntu). In addition, they have added stretch goals to bring computer vision and Linux desktop control to Mycroft.
