Star Trek has its universal language translator, and now researchers from Facebook Artificial Intelligence Research (FAIR) have developed a universal music translator. Much of it is based on Google’s WaveNet, a version of which was also used in the recently announced Google Duplex AI.
The inspiration for it came from the human ability to hear music played by any instrument and to then be able to whistle or hum it, thereby translating it from one instrument to another. This is something computers have had trouble doing well, until now. The researchers fed their translator a string quartet playing Haydn and had it translate the music to a chorus and orchestra singing and playing in the style of Bach. They’ve even fed it someone whistling the theme from Indiana Jones and had it translate the tune to a symphony in the style of Mozart.
Shown here is the architecture of their network. Note that all the different music is fed into the same encoder network, but each instrument the music can be translated into has its own decoder network. It was implemented in PyTorch and trained on eight Tesla V100 GPUs over a total of six days. Efforts were made during training to ensure that the encoder extracted high-level semantic features from the music fed into it rather than just memorizing it. More details can be found in their paper.
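That shared-encoder, per-instrument-decoder arrangement can be sketched structurally. This is a toy illustration of the routing only, not the actual WaveNet-based audio model, and every name and transform in it is invented for the example:

```python
# Toy sketch of one shared encoder feeding per-domain decoders.
# The real system uses WaveNet-like networks on raw audio; here
# the "networks" are trivial functions to show the data flow.

def encoder(audio):
    # Stand-in for the shared network: every input, regardless of
    # instrument, passes through this same step.
    peak = max(abs(s) for s in audio) or 1.0
    return [s / peak for s in audio]

# One decoder per target instrument; each would be a separately
# trained network in the real model.
decoders = {
    "piano":  lambda feats: [round(f, 2) for f in feats],
    "chorus": lambda feats: [round(f * 0.5, 2) for f in feats],
}

def translate(audio, target):
    feats = encoder(audio)          # same encoder for every input
    return decoders[target](feats)  # target-specific decoder

print(translate([0.5, -1.0, 2.0], "piano"))  # [0.25, -0.5, 1.0]
```

The point of the structure is that adding a new target instrument means training one more decoder, not a whole new model.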
So if you want to hear how an electric guitar played in the style of Metallica might have been translated to the piano by Beethoven then listen to the samples in the video below.
Continue reading “Facebook’s Universal Music Translator”
Blowing an acrylic sheet after heating it is an easy way to make a smooth and transparent canopy or bubble for anything from clams to light fixtures. [Michael Barton-Sweeney] does it using plastic blow ovens he made cheaply, mainly from stuff which most of us already have in our workshops.
All you need is a way to heat the plastic, to then clamp it down around the edges, and finally to blow air into it as you would when blowing up a balloon. Of course, there are things to watch out for such as making sure the plastic is heated evenly and letting it cool slowly afterward but he covers all that on his hackaday.io page.
He’s also on his second plastics blow oven. The first one worked very well and is perhaps the easiest to make, being built up from an enclosure of CMUs (cinder blocks) and brick. He had success heating it both with propane and with electric current run through Kanthal wire. But the CMUs absorbed a lot of heat, slowing down the process. So for his second one he made a cast concrete enclosure with aluminum reflectors inside to focus the heat where it’s needed.
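For the electric option, the heating power you get from a length of resistance wire follows straight from Ohm’s law. The per-metre resistance below is a placeholder we picked for illustration; the real figure depends on the gauge and alloy of your particular wire:

```python
# Ballpark heater power for a resistance-wire element.
# ASSUMPTION: 5.5 ohms per metre is an illustrative value only;
# look up the spec for your actual wire gauge.

def heater_power(volts, ohms_per_m, length_m):
    resistance = ohms_per_m * length_m   # R = r * L
    return volts ** 2 / resistance       # P = V^2 / R

# e.g. 2 m of wire across 230 V mains: about 4.8 kW
print(heater_power(230, 5.5, 2.0))
```

Which is why this kind of oven wants a dedicated circuit: kilowatts of heat come out of a surprisingly short piece of wire.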
We’re not sure of everything he’s blown acrylic bubbles for, but we first learned of his ovens from the transparent clams in his underwater distributed sensor network. In fact, he was inspired to take up plastics blowing by a childhood memory of the Air Force museum in Dayton, Ohio, where he visited the restoration hangar and watched the restorers blowing bubbles for a B-17 ball turret.
Though if you want to go smaller and simpler for something like a light fixture then you can get away with using a toaster oven, a PVC pipe, and a toilet flange.
A robot is made up of many hardware components each of which requires its own software. Even a small robot arm with a handful of servo motors uses a servo motor library.
Add that arm to a wheeled vehicle and you have more motors. Then attach some ultrasonic sensors for collision avoidance or a camera for vision. By that point, you’ve probably split the software into multiple processes: one for the arm, another for the mobility, one for vision, and one to act as the brains interfacing somehow with all the rest. The vision may be doing object recognition, something which is computationally demanding and so you now have multiple computers.
Break all this complexity into modules and you have a use case for ROS, the Robot Operating System. As this article shows, ROS can help with designing, building, managing, and even evolving your robot.
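The decoupling ROS buys you comes from its publish/subscribe topics: the sensor process publishes readings, the brain subscribes, and neither needs to know the other’s internals. Here’s a toy, in-process version of that idea; real ROS nodes are separate processes with the middleware routing messages between them, and the topic names here are invented:

```python
# Minimal in-process publish/subscribe bus, mimicking ROS topics.
# In real ROS each node is its own process (possibly on another
# machine) and the middleware delivers the messages.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
log = []

# "Brain" node reacts to range readings without knowing the sender.
bus.subscribe("/ultrasonic/range", lambda m: log.append(("avoid", m)))

# "Sensor" node publishes a range reading in metres.
bus.publish("/ultrasonic/range", 0.3)
print(log)  # [('avoid', 0.3)]
```

Swap the ultrasonic sensor for a camera-based detector and the brain node doesn’t change at all, which is exactly the modularity the article is about.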
Continue reading “Modular Robotics Made Easier With ROS”
[GreatScott] has now joined the ranks of electric bike users. Or has he? We previously covered how he made his own lithium-ion battery pack to see if doing so would be cheaper than buying a commercially made one. But while it powered his E-bike conversion kit on his benchtop, turning the motor while the wheel was mounted in a vise, that’s no substitute for a real-world test with him on a bike on the road.
Since then he’s designed and 3D printed an enclosure for his DIY battery pack and mounted it on his bike along with most of the rest of his E-bike kit. He couldn’t use the kit’s brake levers since his existing brake levers and gear-shift system share an enclosure. There also weren’t enough instructions in the kit for him to mount the pedal assistance system. But he had enough to do some road testing.
Based on a GPS tracker app on his phone, his top speed was 43 km/h (27 miles per hour). His DIY 5 Ah battery pack was half full after 5 km (3.1 miles) and he was able to ride 11.75 km (7.3 miles) on a single charge. So, success! The battery pack did the job and if he needs to go further then he can build a bigger pack with some idea of how it would improve his travel distance.
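The back-of-the-envelope scaling is simple: half the 5 Ah pack (2.5 Ah) covered the first 5 km, so consumption is roughly 0.5 Ah/km, and any pack size can be sized against a target range. A rough linear estimate that ignores hills, wind, and voltage sag:

```python
# Rough E-bike range estimate from one measured ride.
# ASSUMES consumption scales linearly with distance, which
# ignores hills, wind, speed, and battery voltage sag.

def ah_per_km(used_ah, distance_km):
    return used_ah / distance_km

def estimated_range_km(pack_ah, consumption_ah_per_km):
    return pack_ah / consumption_ah_per_km

consumption = ah_per_km(2.5, 5.0)             # half of 5 Ah over 5 km
print(estimated_range_km(5.0, consumption))   # 10.0 km predicted
print(estimated_range_km(10.0, consumption))  # a 10 Ah pack: 20.0 km
```

The crude estimate predicts 10 km; he actually got 11.75 km, so the simple model errs slightly on the pessimistic side here.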
Sadly though, he had to remove it all from his bike since he lives in Germany, and European rules state that for it to be considered an electric bike, it must be pedal assisted and the assistance must be progressively reduced as it reaches a cut-off speed of 25 km/h (15.5 miles per hour). In other words, his E-bike was more like a moped or small motorcycle. But it did offer him some good opportunities for hacking, and that’s often enough. Check out his final assembly and testing in the video below.
Continue reading “[GreatScott] Tests His DIY Battery Pack On His E-Bike”
TVs are usually something you sit and passively watch. Not so for [Nate Damen’s] interactive, wearable TV head project, aka Atltvhead. If you’re walking around Atlanta, Georgia and you see him with a TV where his head should be, introduce yourself! Or sign into Twitch chat and take control of what’s being displayed on the LEDs he’s attached to the screen. Besides being wearable technology, it’s also meant to be an interactive art piece.
For this, his third version, the TV is a 1960’s RCA Victor Portable Television. You can see some of the TVs he found for previous versions on his hackaday.io page. They’re all truly vintage. He gutted this latest one and attached WS2812 LED strips in a serpentine pattern inside the screen. The LEDs are controlled by his code and the FastLED library running on an ESP8266. Power comes from four NiMH AA-format batteries, giving him 5 V, which he regulates down to 3.3 V. His phone serves as a WiFi hotspot.
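Laying WS2812 strips in a serpentine pattern means every other row runs backwards, so drawing anything requires mapping screen coordinates to a strip index. The usual mapping looks like this (the 8-pixel width is just an example, not the actual panel size, and his firmware is C++ on the ESP8266 rather than Python):

```python
# Map (x, y) screen coordinates to an index on a serpentine
# (zig-zag) LED layout: even rows run left-to-right, odd rows
# run right-to-left.

def xy_to_index(x, y, width):
    if y % 2 == 0:
        return y * width + x
    return y * width + (width - 1 - x)

# On an 8-wide panel, the pixel at (0, 1) is the LAST LED of row 1
print(xy_to_index(0, 1, 8))  # 15
```

With that helper in place, sprites like the heart can be drawn in normal (x, y) coordinates and the zig-zag wiring disappears from the rest of the code.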
[Nate] limits the commands so that only positive things can be displayed, a heart for example. Or you can tweak what’s being displayed by changing the brightness or make the LEDs twinkle. Judging by the crowds we see him attracting in the first video below, we’d say his project was a huge success. In the second video, Nate does a code walkthrough and talks about some of his design decisions.
Continue reading “RCA TV Gets New Life As Interactive Atltvhead”
When the time comes to add an object recognizer to your hack, all you need do is choose from many of the available ones and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google’s Inception object recognizer. He does it for Windows 10 since there’s already plenty of documentation out there for Linux OSes.
You’re not limited to just Inception though. Inception is one of a few which are very accurate but it can take a few seconds to process each image and so is more suited to a fast laptop or desktop machine. MobileNet is an example of one which is less accurate but recognizes faster and so is better for a Raspberry Pi or mobile phone.
You’ll need a few hundred images of your objects. These can either be scraped from an online source like Google Images, or you can take your own photos. If you use the latter approach, make sure to shoot from various angles and rotations, and with different lighting conditions. Fill your background with various other things and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards, so he first sprinkled them around his living room, added some clutter, and walked around taking pictures with his phone. Once they were uploaded, some easy-to-use software helped him label them all in around an hour. Note that he trained on 24 different objects, which is the number of distinct cards in a pinochle deck.
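Once the photos are labeled, they get divided into training and test sets. A simple random split like the sketch below is usually enough; the 80/20 ratio and the filenames are illustrative conventions, not specifics from his guide:

```python
# Randomly split labeled image filenames into train/test sets.
# The 80/20 ratio is a common convention, not a requirement.
import random

def train_test_split(filenames, test_fraction=0.2, seed=0):
    files = list(filenames)
    random.Random(seed).shuffle(files)       # reproducible shuffle
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]    # (train, test)

images = [f"card_{i:03d}.jpg" for i in range(10)]
train, test = train_test_split(images)
print(len(train), len(test))  # 8 2
```

The important part is that the test images never appear in training, so the accuracy you measure afterwards reflects how the model handles photos it has genuinely never seen.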
You’ll need to install a lot of software and do some configuration, but he walks you through that too. Ideally, you’d use a computer with a GPU, but that’s optional, the difference being between roughly three and twenty-four hours of training. Be sure to both watch his video below and follow the steps on his Github page. The Github page is kept the most up-to-date, but his video does a more thorough job of walking you through using the software, such as how to use the image labeling program.
Why is he training an object recognizer on playing cards? This is just one more step in making a blackjack-playing robot. Previously he’d done an impressive job using OpenCV, though that algorithm only handled non-overlapping cards. Google’s Inception, however, recognizes partially obscured cards. This is a very interesting project, one which we’ll be keeping an eye on. If you have any ideas for him, leave them in the comments below.
Continue reading “Using TensorFlow To Recognize Your Own Objects”
One way to design an underwater monitoring device is to take inspiration from nature and emulate an underwater creature. [Michael Barton-Sweeney] is making devices in the shape of, and functioning somewhat like, clams for his open source underwater distributed sensor network.
The clams contain the electronics, sensors, and means of descending and ascending within their shells. A bunch of them are dropped overboard on the surface. Their shells open, allowing the gas within to escape and they sink. As they descend they sample the water. When they reach the bottom, gas fills a bladder and they ascend back to the surface with their data where they’re collected in a net.
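Whether a clam sinks or rises comes down to Archimedes’ principle: compare the weight of the water it displaces against its own weight. A toy calculation, with all the numbers invented for illustration:

```python
# Net vertical force on a submerged body (Archimedes' principle):
# positive means it rises, negative means it sinks.
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.81            # m/s^2

def net_force(mass_kg, displaced_volume_m3):
    buoyancy = RHO_WATER * displaced_volume_m3 * G
    weight = mass_kg * G
    return buoyancy - weight

# ASSUMED example clam: 0.5 kg of hardware and shell.
# Shell open, gas escaped: displaces only 0.4 L -> sinks.
print(net_force(0.5, 0.0004) < 0)  # True
# Bladder inflated at the bottom: displaces 0.7 L -> rises.
print(net_force(0.5, 0.0007) > 0)  # True
```

Venting gas shrinks the displaced volume below the break-even point and the clam sinks; inflating the bladder pushes it back over, which is the whole descend-and-return trick.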
Thus far he’s made a few clams using acrylic for the shells which he’s blown himself. He soldered the electronics together free-form and gave them a conformal coating of epoxy. He’s also used a thermistor as a stand-in for other sensors and is already working on a saturometer, used for measuring the total dissolved gas (TDG) in the water. Knowing the TDG is useful for understanding and mitigating supersaturation of water which can lead to fish kills.
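TDG is usually reported as a percentage of local barometric pressure, with anything over 100 % meaning the water is supersaturated. A minimal sketch of that conversion; the 110 % flag is a commonly cited hazard level for fish, used here only as an example threshold:

```python
# Total dissolved gas (TDG) as a percentage of barometric pressure.
# Supersaturation (> 100 %) can cause gas bubble disease in fish;
# 110 % is used here only as an example alert threshold.

def tdg_percent(total_gas_pressure_mmhg, barometric_mmhg):
    return 100.0 * total_gas_pressure_mmhg / barometric_mmhg

reading = tdg_percent(836.0, 760.0)        # about 110 %
print(reading, reading > 105.0)            # supersaturated
```

A network of clams logging readings like this over an area is what would let him map where supersaturation is actually occurring.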
He’s also given a lot of thought to the materials used, since some clams may not make it back up and would have to degrade or be benign where they rest. For example, he’s been using a lithium battery for now but would like to use copper on one shell and zinc on the other to make a saltwater battery, if he can make it produce enough power. He’s also considering 3D printing, since PLA is biodegradable. However, straight PLA could be subject to fouling by underwater organisms and would require cleaning, which would be time-consuming. PLA becomes soft when heated in a dishwasher, so he’s been looking into a PLA and calcium carbonate filament instead.
Check out his hackaday.io page where he talks about all these and more issues and feel free to make any suggestions.