It’s The Web, Basically

If you are of a certain age, you probably learned to program in Basic. Even if you aren’t, a lot of microcontroller hobbyists got started on the Basic Stamp, and there are plenty of other places where the venerable language still hides out. But if you want to write cool browser applications, you have to write JavaScript, right? Google will now let you code your web pages in Basic. Known as WWWBasic, this is — of course — a JavaScript hack that you can load remotely into a web page and then have your page use Basic for customization. You can even import the thing into Node.js and use Basic inside your JavaScript, although it is hard to think of why you’d want to.

According to the project’s documentation — which is pretty sparse so far, we’re afraid — the Basic program is compiled into JavaScript on page load. There are a few examples, so you can generally pick up what’s available to use. There are graphics, the ability to read a keyboard key, and a way to handle the mouse.
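The documentation may be thin, but getting started takes just a couple of lines. The snippet below follows the pattern shown in the project’s README: pull in the remote script, then put your Basic inside a script tag with type "text/basic" (the little Basic program here is our own toy example):

```html
<!DOCTYPE html>
<!-- Load WWWBasic, then write Basic in a text/basic script tag.
     The Basic source is compiled to JavaScript when the page loads. -->
<script src="https://google.github.io/wwwbasic/wwwbasic.js"></script>
<script type="text/basic">
PRINT "Hello, Hackaday!"
FOR i = 1 TO 3
  PRINT "Basic lives: "; i
NEXT i
</script>
```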

Continue reading “It’s The Web, Basically”

Don’t Look Now, But Your Necklace Is Listening

There was a time when the average person was worried about the government or big corporations listening in on their every word. It was a quaint era, full of whimsy and superstition. Today, a good many of us are paying for the privilege of having constantly listening microphones in multiple rooms of our homes, largely so we can avoid having to use our hands to turn the lights on and off. Amazing what a couple years and a strong advertising push can do.

So if we’re going to be funneling everything we say to one or more of our corporate overlords anyway, why not make it fun? For example, check out this speech-to-image necklace developed by [Stephanie Nemeth]. As you speak, the necklace listens in and finds (usually) relevant images to display. Conceptually this could be used as an assistive communication technology, but we’re cool with it being a meme display device for now.

Hardware-wise, the necklace is just a Raspberry Pi 3, a USB microphone, and a HyperPixel 4.0 touch screen. The Pi Zero would arguably be the better choice for hanging around your neck, but [Stephanie] notes that there are some compatibility issues between Node.js and the Zero’s ARMv6 processor. She details a workaround, but says there’s no guarantee it will work with her code.

The JavaScript software records audio from the microphone with SoX, then runs it through the Google Cloud Speech-to-Text service to figure out what the wearer is saying. Finally, it does a Google image search on the captured words using the Custom Search JSON API to find pictures to show on the display. There’s a user-supplied list of words to ignore so it doesn’t try looking up images for function words (such as “and” or “however”), though presumably it can also be used to blacklist certain imagery you might not want popping up on your chest in mixed company.
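We haven’t picked apart [Stephanie]’s code line by line, but the pipeline boils down to something like the sketch below. The stop word list and the three-second clip length are our own placeholders, and it assumes Node 18+ (for the global fetch) plus GOOGLE_API_KEY and SEARCH_ENGINE_ID variables in the environment:

```javascript
// A minimal sketch of the pipeline (not the project's actual code): record a
// short clip with SoX, transcribe it with Cloud Speech-to-Text, drop stop
// words, and image-search whatever is left.
const { execSync } = require('child_process');
const fs = require('fs');
const speech = require('@google-cloud/speech');

const STOP_WORDS = new Set(['and', 'the', 'a', 'however']); // user-supplied list

async function listenAndSearch() {
  // "rec" is SoX's recording front end: grab 3 seconds of 16 kHz mono audio
  execSync('rec -r 16000 -c 1 clip.wav trim 0 3');

  const client = new speech.SpeechClient();
  const [response] = await client.recognize({
    audio: { content: fs.readFileSync('clip.wav').toString('base64') },
    config: { encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US' },
  });
  const transcript = response.results
    .map((r) => r.alternatives[0].transcript)
    .join(' ');

  // Filter out function words, then find one image per remaining word
  const words = transcript.toLowerCase().split(/\s+/)
    .filter((w) => w && !STOP_WORDS.has(w));
  for (const word of words) {
    const url = 'https://www.googleapis.com/customsearch/v1'
      + `?key=${process.env.GOOGLE_API_KEY}&cx=${process.env.SEARCH_ENGINE_ID}`
      + `&searchType=image&q=${encodeURIComponent(word)}`;
    const { items } = await (await fetch(url)).json();
    if (items) console.log(word, '->', items[0].link); // show on the display here
  }
}

listenAndSearch().catch(console.error);
```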

We’d be interested in seeing somebody implement this software on a Raspberry Pi-powered digital frame to display artwork that changes based on what the people in the room are talking about. Like in Antitrust, but without Tim Robbins offing anyone.

Modern Wizard Summons Familiar Spirit

In European medieval folklore, a practitioner of magic may call for assistance from a familiar spirit, which takes on an animal form as a disguise. [Alex Glow] is our modern-day Merlin who invoked the magical incantations of 3D printing, Arduino, and Raspberry Pi to summon her familiar Archimedes: The AI Robot Owl.

The key attraction in this build is Google’s AIY Vision kit, specifically the vision processing unit that tremendously accelerates image classification tasks running on the attached Raspberry Pi Zero W. Instead of taking several seconds to analyze each image, classification can now run several times per second, all performed locally. No connection to Google’s cloud is required. (See our earlier coverage for more technical details.) The default demo application of a Google AIY Vision kit is a “joy detector” that looks for faces and attempts to determine if a face is happy or sad. We’ve previously seen this functionality mounted on a robot dog.

[Alex] aimed to go beyond the default app (and the default box) to create Archimedes, who was to reward happy people with a sticker. As a moving robotic owl, Archimedes had far more crowd appeal than the vision kit’s default cardboard box. All the kit components have been integrated into Archimedes’ head. One eye is the expected Pi camera, while the other is actually the kit’s piezo buzzer. The vision kit’s LED-illuminated button now tops the dapper owl’s hat.

Archimedes was created to join in Google’s promotional efforts. Their presence at this Maker Faire consisted of two tents: an introductory “Learn to Solder” tent where people could create a blinky LED badge, and a second tent focused on their line of AIY kits like this vision kit, filled with demos of what the kits can do aside from really cool robot owls.

Hopefully these promotional efforts helped many AIY kits find new homes in the hands of creative makers. It’s pretty exciting that such a powerful and inexpensive neural net processor is now widely available, and we look forward to many more AI-powered hacks to come.

Continue reading “Modern Wizard Summons Familiar Spirit”

Location Sharing With Google Home

With Google’s near-monopoly on the internet, it can be difficult to get around in cyberspace without encountering at least some aspect of this monolithic, data-gathering giant. It usually takes a concerted effort, but it is technically possible to do. While [Mat] is still using some Google products, he has at least figured out a way to get Google Home to work with location data without actually sharing that data with Google, which is a step in the right direction.

[Mat]’s goal was to use Google’s location sharing features through Google Home, but without the creepiness factor of Google knowing everything about his life, and also without the hassle of having to use Google Maps. He’s using a few things to pull this off, including a NodeRED server running on a Raspberry Pi Zero, a free account from If This Then That (IFTTT), Tasker with the AutoRemote plugin, and a Google Maps API key. With all of that put together, plus some configuration of IFTTT, he can ask his Google Assistant (or Google Home) for location data, all without sharing that data with Google.
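[Mat]’s actual logic lives in Node-RED flows, which don’t paste well into prose, so here’s a hedged plain-Node approximation of the Pi’s role. The endpoint names and port are hypothetical; the reverse-geocoding call is the real Google Maps Geocoding API:

```javascript
// Hypothetical stand-in for the Node-RED flow: cache the phone's last fix,
// and reverse-geocode it on demand with the Google Maps Geocoding API.
const express = require('express');
const app = express();

let lastFix = null; // most recent { lat, lng } reported by Tasker/AutoRemote

// The phone reports its position here, e.g. GET /update?lat=51.5&lng=-0.1
app.get('/update', (req, res) => {
  lastFix = { lat: Number(req.query.lat), lng: Number(req.query.lng) };
  res.sendStatus(200);
});

// The IFTTT webhook lands here when you ask the Assistant for a location
app.get('/whereis', async (req, res) => {
  if (!lastFix) return res.send('No location reported yet');
  const url = 'https://maps.googleapis.com/maps/api/geocode/json'
    + `?latlng=${lastFix.lat},${lastFix.lng}&key=${process.env.MAPS_API_KEY}`;
  const data = await (await fetch(url)).json();
  const address = data.results?.[0]?.formatted_address ?? 'unknown';
  res.send(`Last seen near ${address}`);
});

app.listen(1880); // Node-RED's usual port, for flavor
```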

This project is a great implementation of Google’s tools and a powerful use of IFTTT. And, as a bonus, it gets around some of the creepiness factor that Google tends to incorporate in their quest to know all the data.

Continue reading “Location Sharing With Google Home”

Google Lowers The Artificial Intelligence Bar With Complete DIY Kits

Last year, Google released an artificial intelligence kit aimed at makers, with two different flavors: Vision to recognize people and objects, and Voice to create a smart speaker. Now, Google is back with a new version to make it even easier to get started.

The main difference in this year’s (v1.1) kits is that they include some basic hardware, such as a Raspberry Pi and an SD card. While this might not be very useful to most Hackaday readers, who probably have a spare Pi (or 5) lying around, this is invaluable for novice makers or the educational market. These audiences now have access to an all-in-one solution to build projects and learn more about artificial intelligence.

We’ve previously seen toys, phones, and intercoms get upgrades with an AIY kit, but would love to see more! [Mike Rigsby] has used one in his robot dog project to detect when people are smiling. These updated kits are available at Target (Voice, Vision). If the kit is too expensive, our own [Inderpreet Singh] can show you how to build your own.

Via [BGR].

Oracle V Google Could Chill Software Development

Unless you’ve completely unplugged from the news, you are probably aware that the long-running feud between Oracle and Google saw a new court decision this week. An appeals court found that Google’s fair use defense wasn’t acceptable and that they did infringe on Oracle’s copyrights to Java. Oracle has asked for about $9 billion in damages, although the actual amount is yet to be decided. In addition, it is pretty likely Google will take it to the Supreme Court before any actual judgment is levied.

The news coverage is aimed at normal people, so it is pretty glossy about what exactly happened. We set out to try to make sense of it all. We found a pretty good article from [Michaela Barry] about what the courts previously found. There were three main parts:

  • There were 37 API (Application Programming Interface) declarations taken verbatim from Java. If you aren’t familiar with Java, these are roughly the equivalent of a C header file (see the sketch after this list).
  • Google decompiled 8 security files and used them.
  • The rangeCheck function — 9 lines of Java code — was exactly the same in Oracle’s Java and Android.
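To make that first bullet concrete, here’s a sketch of the difference between a declaration and an implementation. The comment shows the declaration (the part a C header would carry); the body is our paraphrase of the sort of bounds checking rangeCheck does, not the verbatim litigated code:

```java
// The declaration is just the shape of the call, which is all a C header
// would carry:
//     void rangeCheck(int arrayLength, int fromIndex, int toIndex);
// The implementation is the body behind it, paraphrased here:
class ArrayChecks {
    static void rangeCheck(int arrayLength, int fromIndex, int toIndex) {
        if (fromIndex > toIndex)
            throw new IllegalArgumentException(
                "fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")");
        if (fromIndex < 0)
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        if (toIndex > arrayLength)
            throw new ArrayIndexOutOfBoundsException(toIndex);
    }
}
```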

Continue reading “Oracle V Google Could Chill Software Development”

Google Builds A Synthesizer With Neural Nets And Raspberry Pis.

AI is the new hotness! It’s 1965 or 1985 all over again! We’re in the AI Renaissance Mk. 2, and Google, in an attempt to showcase how AI can allow creators to be more… creative, has released a synthesizer built around neural networks.

The NSynth Super is an experimental physical interface from Magenta, a research group within the Big G that explores how machine learning tools can create art and music in new ways. The NSynth Super does this by mashing together a Kaoss Pad, samples that sound like General MIDI patches, and a neural network.

Here’s how the NSynth works: The NSynth hardware accepts MIDI signals from a keyboard, DAW, or whatever. These MIDI commands are fed into an openFrameworks app that uses pre-compiled (with Machine Learning™!) samples from various instruments. This openFrameworks app combines and mixes these samples in relation to whatever the user inputs via the NSynth controller. If you’ve ever wanted to hear what the combination of a snare drum and a bassoon sounds like, this does it. Basically, you’re looking at a Kaoss pad-controlled rompler that takes four samples and combines them, with the power of Neural Networks. The project comes with a set of pre-compiled and neural-networked samples, but you can use this interface to mix your own samples, provided you have a beefy computer with an expensive GPU.
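To picture what “combines and mixes” means here, consider this conceptual sketch (ours, not Google’s code): each corner of the touch pad holds one source instrument, and the in-between sounds come from blending the instruments’ neural-net encodings with bilinear weights before the decoder renders them back into audio:

```javascript
// Conceptual sketch, not the NSynth Super firmware: blend four instrument
// encodings with bilinear weights taken from the touch pad position.
function blendLatents(corners, x, y) {
  // corners = { nw, ne, sw, se }: latent vectors (arrays of equal length)
  // x, y: touch position, each in [0, 1]
  const w = {
    nw: (1 - x) * (1 - y), ne: x * (1 - y),
    sw: (1 - x) * y,       se: x * y,
  };
  return corners.nw.map((_, i) =>
    w.nw * corners.nw[i] + w.ne * corners.ne[i] +
    w.sw * corners.sw[i] + w.se * corners.se[i]);
}

// Dead center weights all four sources equally, e.g.:
// blendLatents({ nw: snare, ne: bassoon, sw: flute, se: sitar }, 0.5, 0.5)
```

That blending and decoding is the expensive, GPU-bound step; the NSynth Super itself just plays back a grid of sounds rendered ahead of time.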

Not to undermine the work that went into this project, but thousands of synth heads will be disappointed by it. The creation of new audio samples requires training on a GPU; the hardest and most computationally expensive part of neural networks is the training, not the performance. Without a nice graphics card, you’re limited to whatever samples Google has provided here.

Since this is open source (all the files are available) and the project only needs a Raspberry Pi and a laser-cut enclosure, there is huge demand for this machine learning Kaoss pad. The good news is that there’s a group buy on Hackaday.io, and there’s already a seller on Tindie should you want a bare PCB. You can, of course, roll your own, and the Digikey cart for all the SMD parts comes to about $40 USD. This doesn’t include the OLED ($2 from China), the Raspberry Pi, or the laser-cut enclosure, but it’s a start. Of course, for those of you who haven’t passed the 0805 SMD solder test, it looks like a few people will be selling assembled versions (less the Pi) for $50-$60.

Is it cool? Yes, but a basement-bound producer who wants to add this to a track will quickly learn that training machine learning algorithms costs far more than playing with them. The hardware is neat, but brace yourself for disappointment, just as AI fans did in the late ’60s and late ’80s. We’re in the AI Renaissance Mk. 2, after all.

Continue reading “Google Builds A Synthesizer With Neural Nets And Raspberry Pis.”