Uncertain Future Of Orphaned Jibo Robots Presents Opportunities

In our modern connected age, our devices have become far more powerful and useful when they can draw upon the resources of a global data network. The downside of a cloud-connected device is the risk of becoming over-reliant on computers outside of our own control. The people who brought a Jibo into their home got a stark reminder of this fact when some (but not all) Jibo robots gave their owners a farewell message as their servers were shut down, leaving behind little more than a piece of desktop sculpture.

Jibo launched its Indiegogo crowdfunding campaign with the tagline “The World’s First Social Robot For The Home.” Despite all the promises of how Jibo would be an intelligent addition to a high tech household, it always struggled to justify its price tag. It cost as much as a high end robot vacuum, but without the house cleaning utility. Many demonstrations of Jibo’s capabilities centered around its voice control, which an Amazon Echo or Google Home could match at a fraction of the price.

By the end of 2018, all assets and intellectual property had been sold to SQN Venture Partners, who have said little about what they plan to do with the acquisition. Some Jibo owners still hold out hope that there’s a bright future ahead, both on the official forums (for however long those stay running) and on unofficial channels like Reddit. Other owners have given up and unplugged their participation in this social home robotics experiment.

If you see one of these orphans in your local thrift store for a few bucks, consider adopting it. You could join the group hoping for something interesting down the line, but you’re probably more interested in its hacking potential: there is an Nvidia Jetson inside, good for running neural networks. It’s probably a Tegra K1 variant, since Jibo used the Jetson TK1 to develop the robot before launch. Jibo had always promised a developer SDK for the rest of us to extend the robot’s capabilities, but it never really materialized. The inactive GitHub repo mainly consists of code talking to servers that are now offline, with not much dealing directly with the hardware.

Jibo claimed thousands were sold and, if they start becoming widely available inexpensively, we look forward to a community working to give new purpose to these poor abandoned robots. If you know of anyone who has done a teardown to see exactly what’s inside, or if someone has examined upgrade files to create custom Jibo firmware, feel free to put a link in the comments and help keep these robots out of e-waste.

If you want to experiment with power efficient neural network accelerators but would rather work with an officially supported development platform, we’ve looked at the Jetson TK1’s successors, the TX1 and TX2. More recently, Google has launched an accelerator of their own, as have our friends at BeagleBone.

Ludwig Promises Easy Machine Learning From Uber

Machine learning has brought an old idea — neural networks — to bear on a range of previously difficult problems such as handwriting and speech recognition. Better software and hardware have made it feasible to apply sophisticated machine learning algorithms that would previously have been possible only on giant supercomputers. However, there’s still a learning curve to developing both the models and the software that uses those trained models. Uber — you know, the guys that drive you home when you’ve had a bit too much — has what it calls a “code-free deep learning toolbox” named Ludwig. The promise is that you can create, train, and use models to extract features from data without writing any code. You can find the project itself on GitHub.io.

The toolbox is built on top of TensorFlow, and the developers claim:

Ludwig is unique in its ability to help make deep learning easier to understand for non-experts and enable faster model improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling.
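
That pitch is easier to picture with an example. Here’s a hedged sketch of the same idea through Ludwig’s Python API rather than its command line; the dataset, the column names (“review”, “sentiment”), and the exact argument spellings are our own assumptions and have changed between Ludwig releases, so check the official docs before copying anything.

```python
# A minimal Ludwig sketch: declare the input and output columns and let the
# toolbox assemble and train a model. The CSV files and column names here
# ("review", "sentiment") are made-up examples, not from the post.
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {"name": "review", "type": "text"},        # free-text column in the CSV
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"}, # label column to predict
    ],
}

model = LudwigModel(config)

# Train on a CSV containing 'review' and 'sentiment' columns.
# Note: argument names have shifted between Ludwig releases
# (older versions used model_definition= and data_csv=).
model.train(dataset="reviews.csv")

# Predict on new data with the same input columns;
# the exact return value also varies by version.
results = model.predict(dataset="new_reviews.csv")
```

The command-line workflow is much the same: the feature declarations go into a YAML file that gets fed to the ludwig train command.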


Foundations For Machine Learning In English (Or Russian)

We are big fans of posts and videos that try to give you a gut-level intuition on technical topics. [vas3k’s] post “Machine Learning for Everyone” fits the bill, and we knew we’d like it from the opening sentences:

“Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it.”

That sets the tone. What follows is a very comprehensive exposition of machine learning fundamentals. There is no focus on a particular tool; instead, it covers all the underpinnings. The original post was in Russian, but the English version is easy to read and doesn’t come off as a poor machine translation.


Neural Network Knows When Cat Wants To Go Outside

Neural networks are computer systems vaguely inspired by the construction of animal brains and, much like human brains, they can be trained to obey the whims of the almighty domestic cat. [EdjeElectronics] has built just such a system, and his cat is better off for it.

The build uses a Raspberry Pi, fitted with the Pi Camera board, to image the area around the back door of the house. A Python script regularly captures images and passes them to a TensorFlow neural network for object recognition, which returns the type and position of each detected object to the script. This information is used to determine whether there is a cat in the frame, and whether it is inside or outside. If the cat remains in position for ten consecutive frames, a text message is sent via Twilio, prompting the owner to let the cat in or out, as the case may be.
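
The write-up leans on the TensorFlow Object Detection API for the recognition step; the sketch below only captures the control flow described above, with the detection call reduced to a placeholder function and the Twilio credentials and phone numbers obviously made up.

```python
# Rough outline of the detection loop described above; this is not
# [EdjeElectronics]'s actual script. detect_cat() is a placeholder standing in
# for a call into a TensorFlow object-detection model, and the Twilio
# credentials and phone numbers are dummies.
import time
from picamera import PiCamera
from twilio.rest import Client

CONSECUTIVE_FRAMES = 10   # the build waits for ten cat-positive frames in a row

camera = PiCamera()
twilio = Client("ACXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

def detect_cat(image_path):
    """Placeholder: run the frame through the object-detection model and
    return True if a cat is found near the door."""
    return False  # replace with real inference

streak = 0
while True:
    camera.capture("/tmp/frame.jpg")
    streak = streak + 1 if detect_cat("/tmp/frame.jpg") else 0

    if streak >= CONSECUTIVE_FRAMES:
        twilio.messages.create(
            body="The cat is waiting at the back door.",
            from_="+15550001111",   # your Twilio number (placeholder)
            to="+15552223333",      # your phone (placeholder)
        )
        streak = 0                  # reset so you don't get a text every frame

    time.sleep(1)                   # roughly one frame per second
```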

Thirty years ago, object classification was a pie-in-the-sky technology, but now you can run it on a $30 computer to figure out where your pets are. What a time we live in! A similar solution to this problem may be a cat door that unlocks via facial recognition. Video after the break.

[Thanks to Baldpower for the tip!]


Artificial Intelligence Composes New Christmas Songs

One of the most common uses of neural networks is the generation of new content, given certain constraints. A neural network is created, then trained on source content – ideally with as much reference material as possible. Then, the model is asked to generate original content in the same vein. This generally has mixed, but occasionally amusing, results. The team at [Made by AI] had a go at generating Christmas songs using this very technique.

The team decided that the easiest way to train their model would be to use note data from MIDI files. MIDI versions of Christmas songs are readily available and provide a broad base with which to train the model. For the neural network, the team chose a Long Short-Term Memory (LSTM) architecture, a model that retains context across a sequence, which is important when dealing with structured formats like music or language.
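
The post doesn’t detail the exact network, but a minimal next-note LSTM in Keras gives a feel for the idea: the MIDI files get flattened into sequences of note tokens, and the network learns to predict the next note from the notes before it. The layer sizes and the random stand-in data below are our own illustration, not the team’s configuration.

```python
# Illustrative next-note LSTM in Keras; not the Made by AI team's actual model.
# Assumes the MIDI files have already been flattened into sequences of integer
# note tokens (e.g. with a MIDI-parsing library such as mido or music21).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 128   # MIDI note numbers 0-127
SEQ_LEN = 32       # how many previous notes the model sees at once

model = Sequential([
    Embedding(VOCAB_SIZE, 64),                  # note number -> learned vector
    LSTM(256),                                  # the memory carrying musical context
    Dense(VOCAB_SIZE, activation="softmax"),    # probability of each possible next note
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# X: windows of SEQ_LEN notes, y: the note that follows each window.
# Random stand-ins here, just so the sketch runs end to end.
X = np.random.randint(0, VOCAB_SIZE, size=(1000, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(1000,))
model.fit(X, y, epochs=1, batch_size=64)
```

Generating a tune is then a matter of feeding the trained model a seed sequence, sampling the next note, appending it, and sliding the window forward.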

The neural network generated five tunes which you can listen to on the Made by AI Soundcloud page. The team notes their time was limited, and we think that with some further work and more adherence to musical concepts such as structure and repetition, it might be possible to generate something a little more catchy.

There are other applications for AI in music, too – like these intelligent musical prostheses.

Neural Network Pies That Might Be Worth A Try

Neural networks are a key technology in the field of machine learning. A common technique is training them with sample data, then asking them to create something new in the same vein. AI researcher [Janelle Shane] decided to give a neural network a fun task – inventing new kinds of pie.

Using the char-rnn library, the network was initially trained on a sample of 2237 pie recipe titles sourced from around the internet. Early iterations struggled to even spell “pie”, but as the network improved, so did the results. While we can’t imagine how one would even make a “Sweesh Pie Ipple Pie”, later results, such as the “Impossible Maple Spinach Apple Pie”, seem far more cromulent by comparison.
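
char-rnn itself is a Torch/Lua tool, but the generation step it performs is easy to sketch generically: a trained character-level model turns the text so far into a probability for every character in its vocabulary, and you sample from that distribution one character at a time. In the sketch below, next_char_probs is a hypothetical stand-in for that trained network, returning a uniform distribution just so the loop runs.

```python
# Generic character-level sampling loop; an illustration of the idea in Python,
# not the char-rnn library itself. next_char_probs() is a hypothetical stand-in
# for a trained model that maps the text so far to per-character probabilities.
import numpy as np

VOCAB = list("abcdefghijklmnopqrstuvwxyz '-")  # characters seen in the recipe titles

def next_char_probs(text_so_far):
    """Placeholder for the trained network: one probability per VOCAB entry."""
    probs = np.ones(len(VOCAB))
    return probs / probs.sum()   # uniform here; a real model is far more opinionated

def generate_title(seed="p", max_len=40, temperature=0.8):
    text = seed
    for _ in range(max_len):
        probs = np.asarray(next_char_probs(text), dtype=float)
        # Temperature below 1 sharpens the distribution; above 1 gets more adventurous.
        probs = probs ** (1.0 / temperature)
        probs /= probs.sum()
        text += np.random.choice(VOCAB, p=probs)
    return text

print(generate_title())
```

Lower temperatures make the network play it safe, while higher ones are how you end up with a Sweesh Pie Ipple Pie.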

At this point, [Janelle] decided to mix things up, stirring in a further sample consisting of the names of various cookies and apples. The data were carefully sorted such that the network still prioritized pies, but this additional data gave the model a richer library to draw from. This led to such home-baked classics as Flangerson’s Blusty Tart and Chicken Pineapple Cream Pie.

On the surface, it’s a fun project with whimsical output, but fundamentally it highlights how much can be accomplished these days by standing on the shoulders of giants, so to speak. We’ve seen [Janelle]’s output before, too – naming tomatoes, no less.

The Naughty AIs That Gamed The System

Artificial intelligence (AI) has been undergoing something of a renaissance in the last few years. There’s been plenty of research into neural networks and other technologies, often based around teaching an AI system to achieve certain goals or targets. However, this method of training is fraught with danger, because, just like in the movies, the computer doesn’t always play fair.

It’s often very much a case of the AI doing exactly what it’s told, rather than exactly what you intended. Like a devious child who will gladly go to bed in the literal sense but will not actually sleep, this can cause unexpected, and often quite hilarious, results. [Victoria] has created a master list of scholarly references regarding exactly this.

The list spans a wide range of cases. There’s the amusing evolutionary algorithm designed to create creatures capable of high-speed movement, which merely spawned very tall creatures that generated these speeds by falling over. More worryingly, there’s the AI trained to identify toxic and edible mushrooms, which simply picked up on the fact that it was presented with the two types in alternating order. This ended up being an unreliable model in the real world. Similarly, the model designed to assess malignancy of skin cancers determined that lesions photographed with rulers for scale were more likely to be cancerous.

[Victoria] refers to this as “specification gaming”. One can draw parallels to classic sci-fi stories around the “Laws of Robotics”, where robots take such laws to their literal extremes, often causing great harm in the process. It’s an interesting discussion of the difficulty in training artificially intelligent systems to achieve their set goals without undesirable side effects.

We’ve seen plenty of work in this area before – like this use of evolutionary algorithms in circuit design.