Location Sharing with Google Home

With Google’s near-monopoly on the internet, it can be difficult to get around in cyberspace without encountering at least some aspect of this monolithic, data-gathering giant. It takes a concerted effort, but it is possible. While [Mat] is still using some Google products, he has at least figured out a way to get Google Home to work with location data without actually sharing that data with Google, which is a step in the right direction.

[Mat]’s goal was to use Google’s location sharing features through Google Home, but without the creepiness factor of Google knowing everything about his life, and without the hassle of having to use Google Maps. He’s using a few things to pull this off: a Node-RED server running on a Raspberry Pi Zero, a free account from If This Then That (IFTTT), Tasker with the AutoRemote plugin, and a Google Maps API key. With all of that put together and some configuration of IFTTT, he can ask his Google Assistant (or Google Home) for location data, all without sharing that data with Google.
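The write-up implements the glue as a Node-RED flow, but the plumbing is easy to picture. Here is a minimal Python sketch of the same idea, assuming a simple webhook: the endpoint path, payload shape, and the Tasker/AutoRemote side are our own stand-ins, and only the Google Maps Geocoding API call reflects the real service:

```python
# Minimal sketch of the Node-RED flow's job, as a Flask webhook.
# The endpoint path and payload shape are assumptions; only the
# Google Maps Geocoding API call is a real service.
import os

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
MAPS_KEY = os.environ["GOOGLE_MAPS_API_KEY"]  # your own API key

@app.route("/location", methods=["POST"])
def location():
    # Tasker (via AutoRemote) POSTs the phone's last GPS fix here.
    lat = request.json["lat"]
    lng = request.json["lng"]
    # Reverse-geocode the locally-held coordinates. Google sees one
    # anonymous lookup, not a continuously shared location history.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": MAPS_KEY},
    ).json()
    address = resp["results"][0]["formatted_address"]
    return jsonify({"speech": f"The phone is near {address}"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=1880)  # port mirrors Node-RED's default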

This project is a great implementation of Google’s tools and a powerful use of IFTTT. And, as a bonus, it gets around some of the creepiness factor that Google tends to incorporate in its quest to gather all the data.


Google Lowers The Artificial Intelligence Bar With Complete DIY Kits

Last year, Google released an artificial intelligence kit aimed at makers, in two different flavors: Vision, to recognize people and objects, and Voice, to create a smart speaker. Now, Google is back with a new version to make it even easier to get started.

The main difference in this year’s (v1.1) kits is that they include some basic hardware, such as a Raspberry Pi and an SD card. While this might not be very useful to most Hackaday readers, who probably have a spare Pi (or 5) lying around, this is invaluable for novice makers or the educational market. These audiences now have access to an all-in-one solution to build projects and learn more about artificial intelligence.

We’ve previously seen toys, phones, and intercoms get upgrades with an AIY kit, but we’d love to see more! [Mike Rigsby] has used one in his robot dog project to detect when people are smiling. These updated kits are available at Target (Voice, Vision). If the kit is too expensive, our own [Inderpreet Singh] can show you how to build your own.

Via [BGR].

Oracle v Google Could Chill Software Development

Unless you’ve completely unplugged from the news, you are probably aware that the long-running feud between Oracle and Google reached a new court decision this week. An appeals court found that Google’s fair use defense didn’t hold and that it did infringe Oracle’s copyrights on Java. Oracle has asked for about $9 billion in damages, although the actual amount is yet to be decided. In addition, it is pretty likely Google will take the case up to the Supreme Court before any damages are actually levied.

The news coverage is aimed at normal people, so it glosses over what exactly happened. We set out to try to make sense of it all. We found a pretty good article from [Michaela Barry] about what the courts previously found. There were three main parts:

  • There were 37 API (Application Programming Interface) declarations taken verbatim from Java. If you aren’t familiar with Java, these are roughly analogous to a C header file (see the sketch after this list).
  • Google decompiled 8 security files and used them.
  • The rangeCheck function — 9 lines of Java code — was exactly the same in Oracle’s Java and in Android.
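To make the declaration/implementation distinction concrete: a declaration fixes the names and signatures that callers depend on, while an implementation supplies the bodies behind them. The case is about Java, but the split looks the same in any language; here is a hypothetical Python sketch (our own toy example, not code from the case):

```python
# Hypothetical illustration of "declaration vs. implementation".
# This is a made-up mini-API, not anything from the lawsuit.
from abc import ABC, abstractmethod

class List(ABC):
    """The 'declaration': method names and signatures, no behavior."""

    @abstractmethod
    def add(self, element) -> bool: ...

    @abstractmethod
    def get(self, index): ...

class ArrayList(List):
    """An 'implementation': independently written bodies that sit
    behind the exact same declared names and signatures."""

    def __init__(self):
        self._items = []

    def add(self, element) -> bool:
        self._items.append(element)
        return True

    def get(self, index):
        return self._items[index]
```

The court fight is over whether copying the first half verbatim, while rewriting the second half from scratch, is fair use.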


Google Builds A Synthesizer With Neural Nets And Raspberry Pis

AI is the new hotness! It’s 1965 or 1985 all over again! We’re in the AI Renaissance Mk. 2, and Google, in an attempt to showcase how AI can allow creators to be more… creative, has released a synthesizer built around neural networks.

The NSynth Super is an experimental physical interface from Magenta, a research group within the Big G that explores how machine learning tools can create art and music in new ways. The NSynth Super does this by mashing together a Kaoss Pad, samples that sound like General MIDI patches, and a neural network.

Here’s how the NSynth works: the NSynth hardware accepts MIDI signals from a keyboard, DAW, or whatever. These MIDI commands are fed into an openFrameworks app that uses pre-compiled (with Machine Learning™!) samples from various instruments. The openFrameworks app combines and mixes these samples according to whatever the user inputs via the NSynth controller. If you’ve ever wanted to hear what the combination of a snare drum and a bassoon sounds like, this does it. Basically, you’re looking at a Kaoss-pad-controlled rompler that takes four samples and combines them, with the power of Neural Networks. The project comes with a set of pre-compiled, neural-networked samples, but you can use this interface to mix your own samples, provided you have a beefy computer with an expensive GPU.
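The “combining” isn’t a crossfade of raw audio: NSynth encodes each source sample into a latent embedding with a WaveNet autoencoder, interpolates between those embeddings, and decodes the result back into sound. Here is a rough numpy sketch of the mixing step, assuming a bilinear blend of four corner instruments; the encode/decode stages stand in for Magenta’s trained models, and the shapes are our guesses:

```python
# Sketch of NSynth-style sample "combination": interpolate in the
# model's embedding space, not in raw audio. encode()/decode() would
# be Magenta's trained WaveNet autoencoder; they are hypothetical here.
import numpy as np

def mix_corners(embeddings, x, y):
    """Bilinear blend of four corner-instrument embeddings, the way
    the NSynth Super's touch position appears to select a mix."""
    nw, ne, sw, se = embeddings
    top = (1 - x) * nw + x * ne
    bottom = (1 - x) * sw + x * se
    return (1 - y) * top + y * bottom

# z_* would come from encode(audio) on each instrument's sample.
# Placeholder shape: time frames x 16 embedding dimensions.
z_snare = np.random.randn(125, 16)
z_bassoon = np.random.randn(125, 16)
z_flute = np.random.randn(125, 16)
z_organ = np.random.randn(125, 16)

z_mix = mix_corners([z_snare, z_bassoon, z_flute, z_organ], x=0.25, y=0.5)
# audio = decode(z_mix)  # WaveNet decoding is the expensive part
```

The expensive GPU mentioned above is needed to produce the embeddings in the first place; blending them is cheap enough for a Pi.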

Not to undermine the work that went into this project, but thousands of synth heads are going to be disappointed. The creation of new audio samples requires training with a GPU; the hardest and most computationally expensive part of neural networks is the training, not the performance. Without a nice graphics card, you’re limited to whatever samples Google has provided here.

Since this is Open Source, with all the files available and nothing more exotic than a Raspberry Pi and a laser-cut enclosure involved, there is a huge demand for this machine learning Kaoss pad. The good news is that there’s a group buy on Hackaday.io, and there’s already a seller on Tindie should you want a bare PCB. You can, of course, roll your own, and the Digikey cart for all the SMD parts comes to about $40 USD. This doesn’t include the OLED ($2 from China), the Raspberry Pi, or the laser-cut enclosure, but it’s a start. Of course, for those of you who haven’t passed the 0805 SMD solder test, it looks like a few people will be selling assembled versions (less Pi) for $50-$60.

Is it cool? Yes, but a basement-bound producer who wants to add this to a track will quickly learn that training machine learning algorithms costs far more than playing with trained ones. The hardware is neat, but brace yourself for disappointment, just like AI suffered in the late 60s and the late 80s. We’re in the AI Renaissance Mk. 2, after all.


Up AlphaGoer Five

AlphaGo is the deep learning program that can beat humans at the game Go. You can read Google’s highly technical paper on it, but you’ll have to wade through some very academic language. [Aman Agarwal] has done us a favor. He took the original paper and dissected the important parts of it in plain English. If the title doesn’t make sense to you, you need to read more XKCD.

[Aman] says his treatment will be useful for anyone who doesn’t want to become an expert on neural networks but still wants to understand this important breakthrough. He also thinks people who don’t have English as a first language may find his analysis useful. By the way, the actual Go matches where AlphaGo beat [Lee Sedol] were streamed, and you can watch all the replays on YouTube (the first match appears below).


Google Ups the Ante in Quantum Computing

At the American Physical Society conference in early March, Google announced that their Bristlecone chip was in testing. This is their latest quantum computer chip, which ups the game from 9 qubits in their previous test chip to 72 — quite the leap. This also trounces IBM and Intel, who have 50- and 49-qubit devices, respectively. You can read more technical details on the Google Research Blog.

It turns out that the raw number of qubits isn’t the whole story, though. Qubits with longer coherence times matter, and low-noise qubits help because the higher the error rate, the more redundant qubits you need for error correction to get a reliable answer. That’s fine, but it leaves fewer qubits for working your actual problem.
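To put rough numbers on that trade-off, here is a back-of-envelope sketch using standard surface-code scaling (textbook rules of thumb, not Google’s figures): the logical error rate falls roughly as (p/p_th)^((d+1)/2) for code distance d, while each logical qubit consumes on the order of 2d² physical qubits:

```python
# Back-of-envelope surface-code overhead (standard textbook scaling,
# not figures from Google): logical error ~ (p/p_th)^((d+1)/2), and
# roughly 2 * d**2 physical qubits per logical qubit.
P_THRESHOLD = 0.01  # often-quoted surface-code threshold, ~1%

def logical_error_rate(p_physical, distance):
    return (p_physical / P_THRESHOLD) ** ((distance + 1) / 2)

def physical_qubits_needed(distance):
    return 2 * distance ** 2

for p in (0.005, 0.001):  # noisier vs. quieter physical qubits
    for d in (3, 5, 7):
        print(f"p={p}, d={d}: logical error ~{logical_error_rate(p, d):.1e}, "
              f"~{physical_qubits_needed(d)} physical qubits per logical qubit")
```

Run the numbers and the point is obvious: quieter qubits hit a target logical error rate at a smaller code distance, so more of those 72 qubits are left over for the computation itself.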


Google’s AIY Vision Kit Augments Pi With Vision Processor

Google has announced their soon-to-be-available Vision Kit, their next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You’ll have to provide your own Raspberry Pi Zero W, but that’s okay since what makes this special is Google’s VisionBonnet board that they do provide: basically a low-power neural network accelerator board running TensorFlow.

AIY VisionBonnet with Myriad 2 (MA2450) chip

The VisionBonnet is built around the Intel® Movidius™ Myriad 2 (aka MA2450) vision processing unit (VPU) chip. See the video below for an overview of this chip, but what it allows is the rapid processing of compute-intensive neural networks. We don’t think you’d use it for training the neural nets, just for doing the inference, or in human terms, for making use of the trained neural nets. It may be worth getting the kit for this board alone to use in your own hacks. An alternative is to get Movidius’s Neural Compute Stick, which has the same chip on a USB stick for around $80, not quite double the Vision Kit’s $45 price tag.
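For a feel of how the same silicon is driven from the host side, here is a minimal classification sketch against the Neural Compute Stick’s NCSDK Python API, as we recall it from the v1 SDK; treat the call names as approximate, and note the “graph” file is a network pre-compiled with the SDK’s tools:

```python
# Minimal sketch of inference on the Myriad 2 via the Movidius NCSDK
# (v1-era API, from memory; names may differ between releases). The
# 'graph' file is a network pre-compiled with the SDK's mvNCCompile.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()          # find attached sticks
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("graph", "rb") as f:             # pre-compiled network blob
    graph = device.AllocateGraph(f.read())

image = np.random.rand(224, 224, 3).astype(np.float16)  # stand-in input
graph.LoadTensor(image, "user object")     # queue inference on the VPU
output, _ = graph.GetResult()              # blocks until the VPU answers
print("top class:", int(output.argmax()))

graph.DeallocateGraph()
device.CloseDevice()
```

The host just ships tensors back and forth; all the heavy lifting happens on the VPU, which is exactly why a Pi Zero W is enough to drive it.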

The Vision Kit isn’t out yet so we can’t be certain of the details, but based on the hardware it looks like you’ll point the camera at something, press a button, and it will speak. We’ve seen this before with this talking object recognizer on a Pi 3 (full disclosure: it was made by yours truly), but without hardware acceleration a single object recognition took around 10 seconds. With the Vision Kit we expect recognition in real time, making it far more dynamic. And in case it wasn’t clear, a key feature is that nothing is done in the cloud here; all processing is local.

The kit comes with three different applications: an object recognition one that can recognize up to 1000 different classes of objects, another that recognizes faces and their expressions, and a third that detects people, cats, and dogs. While you can get up to a lot of mischief with just that, you can run your own neural networks too. If you need a refresher on TensorFlow then check out our introduction. And be sure to check out the Myriad 2 VPU video below the break.
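If you do want to bring your own network, the desktop-side workflow (before whatever VisionBonnet compilation step Google provides) looks like loading a frozen TensorFlow graph and running inference. A minimal 2017-era TensorFlow sketch, where the file name and tensor names are placeholders for whatever model you train:

```python
# Minimal TensorFlow (1.x-era) inference from a frozen graph. The
# .pb file name and the "input:0"/"output:0" tensor names are
# placeholders; substitute those from your own trained model.
import numpy as np
import tensorflow as tf

with tf.gfile.GFile("frozen_classifier.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

image = np.random.rand(1, 224, 224, 3).astype(np.float32)  # stand-in input

with tf.Session(graph=graph) as sess:
    probs = sess.run("output:0", feed_dict={"input:0": image})
    print("top class:", int(probs.argmax()))
```

Once that runs on your desk, porting it to the kit should mostly be a matter of Google’s conversion tooling rather than new model code.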
