Generating Beetles From Public Domain Images

Ever since [Ian Goodfellow] and his colleagues invented the generative adversarial network (GAN) in 2014, hundreds of projects, from style transfers to poetry generators, have been built on the concept of contesting neural networks. Unlike traditional neural networks, GANs can generate new data that is statistically consistent with the training set.
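
For the uninitiated, the whole trick is two networks locked in a game: a generator tries to fake convincing samples while a discriminator tries to call its bluff. Here's a minimal sketch of that loop in PyTorch; the architectures and hyperparameters are illustrative guesses, not anything from the original paper:

```python
# Minimal sketch of a GAN training step; layer sizes and learning rates
# are placeholders, not values from the 2014 paper.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to tell real samples from generated ones.
    fake = generator(torch.randn(batch, latent_dim))
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake.detach()), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator into calling fakes real.
    g_loss = bce(discriminator(fake), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```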

[Bernat Cuni], the one-man design team behind [cunicode], came up with the idea of generating beetles using this technique. Inspired by material published on Machine Learning for Artists, he decided to run some visual experiments with zoological illustrations. The training data came from a public domain book hosted at archive.org, found through the Biodiversity Heritage Library. A combination of OpenCV and ImageMagick helped extract the individual illustrations into square images.
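
We don't have [Cuni]'s exact pipeline, but a first pass at that kind of extraction might look something like this OpenCV sketch; the thresholds, size cutoff, and file names are all placeholder assumptions:

```python
# Rough sketch of pulling individual figures out of a scanned plate and
# padding each crop to a square; settings and paths are guesses, not
# [Cuni]'s actual pipeline.
import cv2

page = cv2.imread("plate_scan.jpg")  # hypothetical input scan
gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 5000:          # skip specks and stray text fragments
        continue
    crop = page[y:y + h, x:x + w]
    side = max(w, h)          # pad the crop out to a square
    top = (side - h) // 2
    left = (side - w) // 2
    square = cv2.copyMakeBorder(crop, top, side - h - top, left,
                                side - w - left, cv2.BORDER_CONSTANT,
                                value=(255, 255, 255))
    cv2.imwrite(f"beetle_{i:03d}.png", cv2.resize(square, (128, 128)))
```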

[Cuni] then ran a DCGAN on the data set, generating a first set of quasi-beetles after some tinkering with epochs and settings. Unsatisfied with those first results, he moved on to StyleGAN, setting up a machine at PaperSpace with one GPU and running the training for more than three days on 128 px images. The results were much better, but the images were fairly small and running the machine was quite expensive (over €125).

Given the success of the previous experiment, he decided to transfer over to Google CoLab, using its free 12 hours of K80 GPU time per run to generate some more beetles. With the intent of producing HD beetles, he trained a model in Runway on 1024 px images, finding much better results after 3000 steps. The trained model was then moved over to Google CoLab to produce the HD outputs.
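
For the curious, sampling from a trained StyleGAN checkpoint takes only a few lines, assuming the NVlabs TensorFlow StyleGAN codebase is cloned and on the path; the pickle filename below is a hypothetical stand-in for the trained beetle model:

```python
# Sketch of sampling from a trained StyleGAN checkpoint, assuming the
# NVlabs TensorFlow StyleGAN repo (and its TF 1.x environment) is set up.
# The snapshot filename is a hypothetical stand-in.
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib

tflib.init_tf()
with open("network-snapshot-beetles.pkl", "rb") as f:
    _G, _D, Gs = pickle.load(f)  # Gs is the long-term average generator

latents = np.random.RandomState(42).randn(1, Gs.input_shape[1])
images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True,
                output_transform=dict(func=tflib.convert_images_to_uint8,
                                      nchw_to_nhwc=True))
PIL.Image.fromarray(images[0], "RGB").save("beetle.png")
```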

He has since continued to experiment with the beetles, producing some confusing generated images and fun collectibles.

Continue reading “Generating Beetles From Public Domain Images”

It Turns Out, Robots Need Tough Love Too

Showing robots adversarial behavior may be the key to improving their performance, according to a study conducted at the University of Southern California. While the generative adversarial network (GAN), in which two neural networks compete in a game, has already demonstrated the value of an automated adversary, this is the first time adversarial human users have been brought into the learning loop.

The report was presented at the International Conference on Intelligent Robots and Systems, and describes an experiment in which reinforcement learning was used to train robotic systems for general-purpose manipulation. For most robots, a huge amount of training data is necessary in order to manipulate objects in a human-like way.

One line of research that has been successful in overcoming this problem keeps a “human in the loop”, where a human provides feedback to the system regarding its abilities. Most such algorithms assume a cooperative human assistant, but by having the human act against the system instead, the robot may be pushed to develop robustness towards real-world complexities.

The experiment involved a robot attempting to grasp an object in a computer simulation. A human watches the simulated grasp and tries to snatch the object away from the robot if the grasp is successful. This helps the robot discern weak grasps from firm ones, a seemingly crazy idea from the researchers that managed to work. The system trained with the adversary rejected unstable grasps, quickly learning robust grasps for different objects.
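
The reward structure is the clever bit: a grasp only pays off if it survives the human's snatch attempt. As a schematic (not the USC team's actual code, and with hypothetical simulator hooks), it might look like this:

```python
# Schematic of the adversarial reward idea, not the USC implementation:
# the robot is rewarded only for grasps that survive the adversary's snatch.
# The simulator and policy objects here are hypothetical.
def grasp_reward(grasp_succeeded: bool, adversary_stole_it: bool) -> float:
    if not grasp_succeeded:
        return -1.0   # missed the object outright
    if adversary_stole_it:
        return -0.5   # weak grasp: the adversary pulled it free
    return 1.0        # firm grasp that survived the disturbance

def run_episode(robot_policy, simulator, human_adversary):
    state = simulator.reset()
    action = robot_policy(state)
    grasped = simulator.attempt_grasp(action)
    stolen = human_adversary.try_to_snatch(simulator) if grasped else False
    return grasp_reward(grasped, stolen)
```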

Experiments like these can test the assumptions made in the learning task for robotic applications, leading to better stress-tested systems more inclined to work in real-world situations. Take a look at the interview in the video below the break.

Continue reading “It Turns Out, Robots Need Tough Love Too”

Lego Machine Uses Machine Learning To Sort Itself Out

In our opinion, the primary evidence of a properly lived childhood is an enormous box of every conceivable Lego piece, from simple bricks to girders and gears, all with a small town’s worth of minifigs swimming through it. It takes years of birthdays and Christmases to accumulate a Lego collection best measured by the pound, but like anything worth doing, it’s worth overdoing.

But what to do with such a collection? Digging through it to find Just the Right Piece™ can be frustrating, and bringing order to the chaos with manual sorting is just so impractical. How about putting some of those bricks to work with a machine-vision Lego sorter built from Lego?

[Daniel West]’s approach is hardly new – we’ve even featured brick-built Lego sorters before – but we’re impressed by its architecture. For starters, the mechanical system is amazing. It uses a series of conveyors to transport bricks from a hopper, winnowing the stream down as it goes. The final step is a vibratory feeder that places one piece at a time on a conveyor. The pieces pass under a camera attached to a Raspberry Pi, where OpenCV does background subtraction on the video stream, applies bounding boxes to the parts, and runs the images through a convolutional neural network (CNN) that’s been trained on a database of every Lego part. Servo-controlled gates then direct the parts into one of 18 bins. See it in action in the video below.
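
As a rough illustration of that vision front end, the OpenCV side might look something like the sketch below; the subtractor settings, size cutoff, and CNN hand-off are our guesses, not [West]'s actual code:

```python
# Sketch of the described pipeline: background subtraction on the belt feed,
# bounding boxes, then a crop handed to a classifier. Settings are guesses.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
cap = cv2.VideoCapture(0)   # camera watching the belt

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 1000:    # ignore belt noise and specks
            continue
        part = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
        # part would now go to the CNN, e.g. model.predict(part), and the
        # predicted class mapped to one of the 18 servo-controlled gates.

cap.release()
```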

We must admit that we’re not sure what the sorting criteria are, as some bins seem nearly as chaotic as the input mix. Still, we appreciate the fine engineering, and award extra style points for all the Lego goodness.

Continue reading “Lego Machine Uses Machine Learning To Sort Itself Out”

AI Knows If The Pitch Is On Target Before You Do

Pitching a baseball is about accuracy and speed. A swift ball on target is the goal, allowing the pitcher to strike out the batter. [Nick Bild] created an AI system that can determine a ball’s trajectory in mid-flight, based on a camera feed.

The system uses an NVIDIA Jetson AGX Xavier fitted with a USB camera running at 100 FPS. A Nerf tennis-ball launcher fires a ball towards the batter. Once triggered, the AI uses the camera to capture two successive images of the ball in flight. These images are fed into a convolutional neural network (CNN), and the software determines whether the ball is heading for the strike zone or moving off-target. It uses this information to light a green or red LED respectively to alert the batter.
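
To give a flavor of the approach, here's a toy PyTorch version of the two-frame idea: stack the two successive RGB frames into a six-channel input and classify strike versus off-target. The architecture is purely illustrative, not [Nick]'s actual network:

```python
# Toy sketch of a two-frame trajectory classifier: two successive RGB frames
# stacked into a 6-channel input, binary strike/off-target output.
# The layer sizes and class ordering are illustrative assumptions.
import torch
import torch.nn as nn

class PitchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),   # 6 = 2 frames x RGB
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 2)   # strike zone vs. off-target

    def forward(self, frame_pair):
        return self.head(self.features(frame_pair).flatten(1))

net = PitchNet()
frames = torch.randn(1, 6, 120, 160)            # two stacked camera frames
on_target = net(frames).argmax(1).item() == 0   # hypothetical class order
# ...which would then drive the green (strike) or red (off-target) LED.
```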

While such a system is unlikely to appear in professional baseball anytime soon, it shows the sheer capability of neural network systems to quickly and effectively analyse data in ways simply impossible for mere humans. [Nick]’s future goals involve running the system on faster hardware, and expanding it to determine effects like spin and more accurate positioning within the strike zone.

We’ve seen CNNs do everything from naming tomatoes to finding parking spaces. Video after the break.

Continue reading “AI Knows If The Pitch Is On Target Before You Do”

Sara Adkins Is Jamming Out With Machines

Asking machines to make music by themselves is kind of a strange notion. They’re machines, after all. They don’t feel happy or hurt, and as far as we know, they don’t long for the affections of other machines. Humans like to think of music as being a strictly human thing, a passionate undertaking so nuanced and emotion-based that a machine could never begin to understand the feeling that goes into the process of making music, or even the simple enjoyment of it.

The idea of humans and machines having a jam session together is even stranger. But oddly enough, the principles of the jam session may be exactly what machines need to begin to understand musical expression. As Sara Adkins explains in her enlightening 2019 Hackaday Superconference talk, Creating with the Machine, humans and machines have a lot to learn from each other.

To a human musician, a machine’s speed and accuracy are enviable. So is its ability to make instant transitions between notes and chords. Humans are slow to learn these transitions and have to practice going back and forth repeatedly to build muscle memory. If the machine were capable of envy, it would likely envy the human’s passionate performance and musical expression.

Continue reading “Sara Adkins Is Jamming Out With Machines”

Self-Driving Cars Are Predicting Driving Personalities

In a recent study, a team of researchers at MIT programmed self-driving cars to identify the social personalities of other drivers, in an effort to predict their future actions and drive more safely.

It’s already evident that autonomous vehicles lack social awareness. Drivers around the car are treated as obstacles rather than human beings, which hinders the car’s ability to identify motivations and intentions, potential signifiers of future actions. Because of this, self-driving cars often cause bottlenecks at four-way stops and other intersections, perhaps explaining why the majority of accidents involving them consist of getting rear-ended by impatient drivers.

The research taps into social value orientation, a concept from social psychology that classifies a person on a scale from selfish (“egoistic”) to altruistic and cooperative (“prosocial”). The system uses this classification to predict real-time driving trajectories for other cars based on a small snippet of their motion. For instance, cars that merge more often are deemed more competitive than other cars.
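
One common way to formalize social value orientation is as an angle that blends a driver's own reward with everyone else's; a quick sketch of that idea (with made-up reward numbers) looks like this:

```python
# Sketch of the social value orientation idea: an SVO angle trades off a
# driver's own reward against other drivers'. The reward values below are
# made up for illustration; the real system estimates the angle from motion.
import math

def svo_utility(phi: float, reward_self: float, reward_others: float) -> float:
    """Blend self-interest and altruism by the SVO angle phi (radians):
    phi = 0 is purely egoistic, phi = pi/2 purely altruistic, and
    phi around pi/4 is the cooperative, 'prosocial' middle ground."""
    return math.cos(phi) * reward_self + math.sin(phi) * reward_others

# An egoistic merger values only its own progress:
print(svo_utility(0.0, reward_self=1.0, reward_others=-0.5))           # 1.0
# A prosocial driver discounts a merge that costs the cars around it:
print(svo_utility(math.pi / 4, reward_self=1.0, reward_others=-0.5))   # ~0.35
```

Estimating that angle from a short snippet of observed motion is what lets the planner tell an egoistic driver from a prosocial one.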

When the algorithms were tested on tasks involving merging lanes and making unprotected left turns, the behavioral predictions improved by 25%. In a left-turn simulation, the car was able to wait until the approaching car had a more prosocial driver before making its turn.

Even outside of self-driving cars, the research could help human drivers predict the actions of other drivers around them.

Thanks [Qes] for the tip!

DeepPCB Routes Your KiCAD PCBs

Computers can write poetry, even if they can’t necessarily write good poetry. The same can be said of routing PC boards. Computers can do it, but can they do it well? There are multiple tools, of course, each with pluses and minuses. However, a slick web page recently announced deeppcb.ai, a cloud-based AI router, and although details are sparse, there are a few interesting things about the product.

First, it supports KiCAD. You provide a DSN file, and within 24 hours you get back a routed SES file. Maybe. You get three or four free boards, apparently each week, after which there is some undisclosed fee. Should you just want to try it out, create an account (which is quick and free: just verify your e-mail and create a password). Then in the “Your Boards” section there are a few examples already worked out.

Continue reading “DeepPCB Routes Your KiCAD PCBs”