Teaching Robots Workplace Etiquette

Most often, humans and robots do not work directly together; they handle different parts of a production pipeline, or the robot takes over a task entirely. In such cases any human-robot interaction (HRI) will be superficial. But what happens when humans and robots have to work side by side? That is the question a group of students at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) recently set out to answer.

In their paper on human-robot collaborative tasks (PDF), they cover three possible models for this kind of interaction: no communication at all (‘silent’), pre-programmed communication (a state machine), or, as in this case, a Markov model-based system. The framework they demonstrate, called CommPlan, uses observation data from human subjects to construct a Markov model that integrates sensor data in order to decide on the robot’s next action.
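For the curious, here is a hedged sketch in Python of the core idea as we understand it: keep a belief over what the human is doing, update it from sensor observations, and pick the action, including whether to speak up, with the best expected payoff. All the states, probabilities, and rewards below are ours for illustration and do not come from the paper.

```python
import numpy as np

# Illustrative only: a tiny Markov-model policy that decides whether the
# robot should act silently or speak up, given a belief over what the
# human is doing. None of these numbers come from the CommPlan paper.

HUMAN_STATES = ["chopping", "stirring", "idle"]

# P(observation | human state) for one coarse sensor reading.
OBS_MODEL = {
    "hands_busy": np.array([0.8, 0.7, 0.1]),
    "hands_free": np.array([0.2, 0.3, 0.9]),
}

# Expected reward of each robot action in each human state (made up).
REWARDS = {
    "fetch_ingredient": np.array([0.6, 0.4, 0.2]),
    "ask_verbally":     np.array([-0.2, -0.1, 0.8]),  # interrupting is costly
    "wait":             np.array([0.1, 0.1, 0.1]),
}

def update_belief(belief, observation):
    """Bayes update of the belief over human states from one observation."""
    posterior = belief * OBS_MODEL[observation]
    return posterior / posterior.sum()

def choose_action(belief):
    """Pick the action with the highest expected reward under the belief."""
    return max(REWARDS, key=lambda a: float(belief @ REWARDS[a]))

belief = np.ones(len(HUMAN_STATES)) / len(HUMAN_STATES)  # uniform prior
belief = update_belief(belief, "hands_free")
print(choose_action(belief))  # likely "ask_verbally": the human can respond
```

The point of the model is in that last line: speaking is just another action whose value depends on what the robot believes the human is up to, so the robot only interrupts when the expected payoff says it is worth it.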

In the experiment they performed (the preparation of a meal; see the embedded video after the break), human subjects had to work alongside a robot. Of the three approaches, CommPlan was the fastest, using voice interaction only when it deemed it necessary. The subjects also expressed a preference for bidirectional communication, much as would occur between human workers.

Continue reading “Teaching Robots Workplace Etiquette”

A Soldering LightSaber For The Speedy Worker

We all have our preferences when it comes to soldering irons, and for [Marius Taciuc] the strongest of them all is for a quick heat-up. It has to be at full temperature in the time it takes him to get to work, or it simply won’t cut the mustard. His solution is a temperature controlled iron, but one with no ordinary temperature control. Instead of a normal feedback loop it uses a machine learning algorithm to find the quickest warm-up.

The elements he’s using have a thermocouple in series with the element itself, meaning that power must be cut to the element to measure the temperature. This duty cycle cannot be made too short or the measurements become noisy, so under a traditional temperature-control regimen there is a limit on how quickly the iron can heat up. His approach is to turn the element on full-time for a period without stopping to measure the temperature, only taking a reading after it has had a chance to heat up. The algorithm continuously learns how long a burst of power produces what temperature, and can interpolate between its measurements to arrive at the desired reading. It’s a clever way to make existing hardware perform new tricks, and we like that.
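We haven’t seen [Marius]’s code, but the learn-and-interpolate idea is easy to sketch: log how far the tip temperature rose after each full-power burst, then invert that mapping to pick the next burst length. A minimal, hypothetical Python sketch:

```python
import numpy as np

# Hypothetical sketch of the learn-and-interpolate idea: each full-power
# burst is logged as (on_time, temperature reached), and the next burst
# length is read off the interpolated curve. Not [Marius]'s actual code.

on_times_s = []   # how long the element was driven, in seconds
temps_c = []      # temperature measured right after each burst

def log_burst(on_time, measured_temp):
    """Record one full-power burst and the temperature it produced."""
    on_times_s.append(on_time)
    temps_c.append(measured_temp)

def on_time_for(target_temp):
    """Interpolate the burst length expected to reach target_temp."""
    order = np.argsort(temps_c)  # np.interp needs ascending x values
    return float(np.interp(target_temp,
                           np.array(temps_c)[order],
                           np.array(on_times_s)[order]))

# After a few calibration bursts...
log_burst(1.0, 150.0)
log_burst(2.0, 250.0)
log_burst(3.0, 330.0)
print(on_time_for(300.0))  # ~2.6 s of full power before the next check
```

Each completed burst refines the curve, which is how the iron gets away with measuring so rarely while still landing on temperature.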

He’s appeared on these pages quite a few times over the years, but perhaps you’d like to see the first version of the same hardware. Meanwhile watch the quick heat up in action with a fuller explanation in the video below.

Continue reading “A Soldering LightSaber For The Speedy Worker”

Silicone And AI Power This Prayerful Robotic Intercessor

Even in a world that is as currently far off the rails as this one is, we’re going to go out on a limb and say that this machine learning, servo-powered prayer bot is going to be the strangest thing you see today. We’re happy to be wrong about that, though, and if we are, please send links.

“The Prayer,” as [Diemut Strebe]’s work is called, may look strange, but it’s another in a string of pieces by various artists exploring just what it means to be human at a time when machines are blurring the line between them and us. The hardware is straightforward: a silicone rubber representation of a human nasopharyngeal cavity, servos for moving the lips, and a speaker for the vocals. Those are generated by a machine-learning algorithm trained on the sacred texts of many of the world’s major religions, including the Christian Bible, the Koran, the Bhagavad Gita, Taoist texts, and the Book of Mormon. The algorithm analyzes the structure of sacred verses and generates random prayers and hymns, voiced by Amazon Polly, that sound a lot like the real thing. That the lips move in synchrony with the ersatz devotions only adds to the otherworldliness of the piece. Watch it in action below.
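The write-up doesn’t include code, but the text-to-speech half of the pipeline is straightforward with Amazon Polly via boto3. In this minimal sketch the verse is a stand-in for the model’s output and the voice choice is our assumption, not a detail from the piece:

```python
import boto3

# Minimal sketch of voicing a generated verse with Amazon Polly via boto3.
# Requires AWS credentials to be configured; the verse text and voice are
# placeholders, not details from [Diemut Strebe]'s installation.

polly = boto3.client("polly")

generated_verse = "Blessed are the circuits, for they shall hum."  # stand-in

response = polly.synthesize_speech(
    Text=generated_verse,
    OutputFormat="mp3",
    VoiceId="Matthew",  # assumption: any Polly voice would do here
)

with open("prayer.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```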

We’ve featured several AI-based projects that poke at some interesting questions. This kinetic sculpture that uses machine learning to achieve balance comes to mind, while AI has even been employed in the search for spirits from the other side.

Continue reading “Silicone And AI Power This Prayerful Robotic Intercessor”

Assistive Specs Help Jog Your Memory

It happens to all of us: we forget things. Young and old, we know what’s on our to-do list, but in the heat of the moment it slips our minds and we miss it. There are myriad technological answers to this in the form of reminders and calendars, but [Nick Bild] has come up with possibly the most inventive one yet. His Newrons project is a pair of glasses with a machine vision camera that flashes a light when it detects an object in its field of view associated with a calendar entry.

At its heart is a JeVois A33 Smart Machine Vision Camera, which runs a neural network trained on an image dataset. It passes its sightings to an Arduino Nano IoT fitted with a real-time clock, which pulls appointment information from Google Calendar and flashes the LED when it detects a match between object and event. His example, which we’ve placed below the break, is a pill bottle triggering a reminder to take the pills.
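To make the matching step concrete, here’s a hedged Python sketch of the idea: check whether a label reported by the vision camera appears in any upcoming Google Calendar event. The credential setup is omitted, and note that [Nick]’s actual logic is split between the JeVois and the Arduino rather than living in one script like this:

```python
from datetime import datetime, timedelta, timezone
from googleapiclient.discovery import build

# Hedged sketch of the matching step: does a label reported by the vision
# camera appear in any upcoming Google Calendar event? Credential setup is
# omitted; [Nick]'s real logic runs split across the JeVois and Arduino.

def reminder_due(detected_label, creds, window_hours=12):
    service = build("calendar", "v3", credentials=creds)
    now = datetime.now(timezone.utc)
    events = service.events().list(
        calendarId="primary",
        timeMin=now.isoformat(),
        timeMax=(now + timedelta(hours=window_hours)).isoformat(),
        singleEvents=True,
        orderBy="startTime",
    ).execute()
    # Flash the LED if the detected object is named in any event summary.
    return any(detected_label.lower() in event.get("summary", "").lower()
               for event in events.get("items", []))

# e.g. reminder_due("pill bottle", creds) -> True if pills are on the calendar
```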

We like this idea, but can’t help noticing a flaw: the reminder only fires if the object happens to move into view. A version that tied this in with more conventional calendar-based reminders would address that, and perhaps save the forgetful a few problems.

Continue reading “Assistive Specs Help Jog Your Memory”

Open Source GUI Tool For OpenCV And Deep Learning

AI and deep learning for computer vision projects have come to the masses, thanks in part to community projects that help ease the pain for newbies. [Abhishek] contributes one such project with Monk AI, which comes with a GUI for transfer learning.

Monk AI is essentially a wrapper for computer vision and deep learning experiments. Written in Python, it lets users fine-tune deep neural networks using transfer learning. Out of the box it supports Keras and PyTorch, and with just a few lines of code you can get started with your very first AI experiment.
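Just how few lines? This sketch follows the pattern in the project’s examples, but treat it as illustrative: the exact module path, model names, and defaults may differ between Monk versions.

```python
# Illustrative sketch following the pattern in Monk AI's examples; the
# module path and parameter names are assumptions and may differ between
# versions of the library.
from monk.keras_prototype import prototype

gtf = prototype(verbose=1)
gtf.Prototype("my-project", "experiment-1")   # project and experiment names
gtf.Default(dataset_path="./dataset",         # folders of labeled images
            model_name="resnet50",            # pretrained backbone to fine-tune
            num_epochs=5)
gtf.Train()
```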

[Abhishek] also has an object detection wrapper (GitHub) with some useful examples, as well as a Monk GUI (GitHub) tool that looks similar to those available in commercial packages for running training and inference experiments.

The documentation is a work in progress, but this seems like an excellent concept to build on. We need more tools like these to help more people get started with deep learning. Hardware such as the Nvidia Jetson Nano and Google Coral is affordable, and makes that learning and experimentation all the easier.

Now You Can Be Big Brother Too, With A Raspberry Pi License Plate Reader

If you are wowed by some of the abilities of a Tesla but can’t quite afford one, perhaps you can enhance your current ride with a few upgrades. This is what [Robert Lucian Chiriac] did with his Land Rover: to gain some insight into automotive machine vision, he fitted it with a Raspberry Pi and camera running an automatic number plate recognition system.

This bracket should find a use in a few projects.

His exceptionally comprehensive write-up takes us through the entire process, from creating a rather useful set of 3D-printed brackets for a Pi and camera, through deciding on the combination of artificial intelligence software components required, to the eventual decision to offload part of the processing to a cloud service over a 4G mobile phone link. For that he used Cortex, a system designed for easy deployment of machine learning models, which he is very impressed with.
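The exact plumbing is in his write-up, but the offloading pattern itself is simple: grab a frame, JPEG-encode it, and POST it to the recognition model served in the cloud. This hypothetical sketch shows the shape of it; the endpoint URL and response format are made up for illustration and are not Cortex’s or [Robert]’s:

```python
import cv2
import requests

# Hypothetical sketch of offloading frames to a cloud-hosted recognition
# model over a 4G link. The endpoint URL and response fields are made up
# for illustration; they are not from Cortex or [Robert]'s code.

ENDPOINT = "https://example.com/plate-recognizer"  # placeholder URL

camera = cv2.VideoCapture(0)  # the Pi camera, via V4L2

while True:
    ok, frame = camera.read()
    if not ok:
        break
    ok, jpeg = cv2.imencode(".jpg", frame)  # compress before the slow uplink
    if not ok:
        continue
    reply = requests.post(ENDPOINT, data=jpeg.tobytes(),
                          headers={"Content-Type": "image/jpeg"})
    for plate in reply.json().get("plates", []):  # assumed response shape
        print(plate)
```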

The result is a camera in his car that identifies and reads the plates on the vehicles around it. In one way this has something of the Big Brother about it, but in another it points to a future in which ever more accessible AI applications become possible, self-contained rather than dependent on a cloud service, and not quite so sinister. It’s an inevitable progression whose privacy questions may go beyond a Hackaday piece, but it’s also a fascinating area of our remit that should be accessible at our level.

You can see the system in action in the video below the break, as well as find the code in his GitHub repository.

Continue reading “Now You Can Be Big Brother Too, With A Raspberry Pi License Plate Reader”

Machine Learning System Uses Images To Teach Itself Morse Code

Conventional wisdom holds that the best way to learn a new language is immersion: just throw someone into a situation where they have no choice, and they’ll learn by context. Militaries use immersion language instruction, as do diplomats and journalists, and apparently computers can now use it to teach themselves Morse code.

The blog entry by the delightfully callsigned [Mauri Niininen (AG1LE)] reads like a scientific paper, and with good reason: [Mauri] really seems to know a thing or two about machine learning. As is the usual approach with such systems, his method uses curated training data, namely Morse snippets and their translations, to build a model. But things take an unexpected turn right from the start, as [Mauri] uses a Tensorflow handwriting recognition implementation to train his model.

Using a few lines of Python, he converts short, known snippets of Morse to a grayscale image that looks a little like a barcode, with the light areas being the dits and dahs and the dark bars being silence. The first training run only resulted in about 36% accuracy, but a subsequent run with shorter snippets ended up being 99.5% accurate. The model was also able to pull Morse out of a signal with -6 dB signal-to-noise ratio, even though it had been trained with a much cleaner signal.
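The conversion is simple enough to sketch: map dits, dahs, and gaps to runs of light and dark pixels, and stack the row into an image the handwriting model can ingest. Here’s a rough Python approximation; the timing units and image dimensions are arbitrary choices of ours, not [Mauri]’s values:

```python
import numpy as np
from PIL import Image

# Rough approximation of turning a Morse snippet into a barcode-like
# grayscale image: light pixels for key-down, dark for silence. The unit
# width and image height here are arbitrary, not [Mauri]'s values.

UNIT = 4  # pixels per Morse time unit

def morse_to_image(morse, height=32):
    row = []
    for symbol in morse:
        if symbol == ".":
            row += [255] * UNIT + [0] * UNIT        # dit + intra-char gap
        elif symbol == "-":
            row += [255] * (3 * UNIT) + [0] * UNIT  # dah + intra-char gap
        elif symbol == " ":
            row += [0] * (2 * UNIT)                 # pad char gap to 3 units
    arr = np.tile(np.array(row, dtype=np.uint8), (height, 1))
    return Image.fromarray(arr, mode="L")

morse_to_image("... --- ...").save("sos.png")  # "SOS" as a barcode
```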

Other Morse decoders use lookup tables to convert sound to text, but it’s important to note that this one doesn’t. By comparing patterns to labels in the training data, it inferred what the characters mean, and essentially taught itself Morse code in about an hour. We find that fascinating, and wonder what other applications this would be good for.

Thanks to [Gordon Shephard] for the tip.