Neural Networks Using Doom Level Creator Like It’s 1993

Readers of a certain vintage will remember the glee of building your own levels for DOOM. There was something magical about carefully crafting a level and then dialing up your friends for a deathmatch session on the new map. Now computer scientists are getting in on that fun in a new way. Researchers from Politecnico di Milano are using artificial intelligence to create new levels for the classic DOOM shooter (PDF whitepaper).

While procedural level generation has been around for decades, recent machine learning approaches to generating game content (usually levels) are different because they don’t use a human-defined algorithm. Instead, they generate new content using existing, human-made levels as a model. In effect, they learn from what great game designers have already done and apply those lessons to new level generation. The screenshot shown above is an example of an AI-generated level, and the gameplay can be seen in the video below.

The idea of an AI generating levels is simple in concept but difficult in execution. The researchers used Generative Adversarial Networks (GANs) to analyze existing DOOM maps and then generate new maps similar to the originals. GANs are a type of neural network that learns from training data and then generates similar data. The researchers considered two types of GANs for generating new levels: one that used only the appearance of the training maps, and another that used both the appearance and metrics such as the number of rooms, perimeter length, and so on. If you’d like a better understanding of GANs, [Steven Dufresne] covered them in his guide to the evolving world of neural networks.
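
To make that second, metrics-aware approach a little more concrete, here’s a minimal sketch of a conditional GAN in the same spirit, written with TensorFlow/Keras. This is not the researchers’ code: the single output channel, 64×64 map size, seven-element metric vector, and layer sizes are all illustrative assumptions, and the training loop is omitted entirely.

```python
# Minimal conditional-GAN sketch: the generator gets noise plus a vector of
# hand-picked level metrics (room count, perimeter length, ...), so generated
# maps can be steered by the same features the training maps were measured on.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

NOISE_DIM, METRIC_DIM, MAP_SIZE = 100, 7, 64   # all illustrative choices

def build_generator():
    noise = layers.Input(shape=(NOISE_DIM,))
    metrics = layers.Input(shape=(METRIC_DIM,))            # e.g. rooms, perimeter...
    x = layers.Concatenate()([noise, metrics])
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="sigmoid")(x)
    return Model([noise, metrics], out)                    # 64x64 single-channel "wall map"

def build_discriminator():
    img = layers.Input(shape=(MAP_SIZE, MAP_SIZE, 1))
    metrics = layers.Input(shape=(METRIC_DIM,))
    x = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(img)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, metrics])                 # condition on the metrics too
    return Model([img, metrics], layers.Dense(1, activation="sigmoid")(x))

gen, disc = build_generator(), build_discriminator()
noise = np.random.normal(size=(1, NOISE_DIM)).astype("float32")
metrics = np.random.rand(1, METRIC_DIM).astype("float32")  # stand-in metric vector
fake_map = gen([noise, metrics])
print(fake_map.shape, float(disc([fake_map, metrics])[0, 0]))  # (1, 64, 64, 1) and a score
```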

Both networks produced good levels, but the one that also used the extra metrics produced higher-quality results. However, while the AI-generated levels looked broadly similar to human-designed ones, many of the little details that human designers tend to include were missing. This is partly due to a lack of good metrics for describing levels and AI-generated data.

Example DOOM maps generated by AI. Each row is one map, and each image is one aspect of the map (floor, height, things, and walls, from left to right)

We can only guess that these researchers’ next step is to use similar techniques to create an entire game (levels, characters, and music) via AI. After all, how hard can it be? Joking aside, we would love to see you take this concept and run with it. We’re dying to play through some gnarly AI-generated levels whipped up by Hackaday readers!

Continue reading “Neural Networks Using Doom Level Creator Like It’s 1993”

Neural Network Zaps You To Take Better Photographs

It’s ridiculously easy to take a bad photograph. Your brain is a far better Photoshop than Photoshop, and the amount of editing it does on the scenes your eyes capture often results in marked and disappointing differences between what you saw and what you shot.

Taking your brain out of the photography loop is the goal of [Peter Buczkowski]’s “prosthetic photographer.” The idea is to use a neural network to constantly analyze a scene until maximal aesthetic value is achieved, at which point the user involuntarily takes the photograph.

But the human-computer interface is the interesting bit — the device uses a transcutaneous electrical nerve stimulator (TENS) wired to electrodes in the handgrip to involuntarily contract the user’s finger muscles and squeeze the trigger. (Editor’s Note: This project is about as sci-fi as it gets — the computer brain is pulling the strings of the meat puppet. Whoah.)

Meanwhile, back in reality, it’s not too strange a project. A Raspberry Pi watches the scene through a Pi Cam and uses a TensorFlow neural net trained against a set of high-quality photos to determine when to trip the shutter. The video below shows it in action, and [Peter]’s blog has some of the photos taken with it.
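
If you want a feel for how such a loop could hang together, here’s a rough sketch under some stated assumptions: a pre-trained Keras model (the file name “aesthetic_model.h5” is hypothetical) that maps a frame to a single 0-to-1 “niceness” score, a camera readable through OpenCV, and a GPIO pin driving the TENS trigger circuit. It is not [Peter]’s actual code, and the threshold, pin number, and input size are made up.

```python
import time
import cv2
import numpy as np
import tensorflow as tf
import RPi.GPIO as GPIO

TRIGGER_PIN = 18     # assumption: whatever pin switches the TENS stimulator
THRESHOLD = 0.9      # "maximal aesthetic value" simplified to a fixed threshold

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)
model = tf.keras.models.load_model("aesthetic_model.h5")   # hypothetical model file
cam = cv2.VideoCapture(0)

try:
    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        # Resize and scale the frame to whatever the scoring network expects.
        x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
        score = float(model.predict(x[np.newaxis], verbose=0)[0][0])
        if score > THRESHOLD:
            # Pulse the stimulator so the finger contracts and fires the shutter.
            GPIO.output(TRIGGER_PIN, GPIO.HIGH)
            time.sleep(0.2)
            GPIO.output(TRIGGER_PIN, GPIO.LOW)
            time.sleep(2.0)   # crude hold-off so the user isn't zapped continuously
finally:
    cam.release()
    GPIO.cleanup()
```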

We’re not sure this is exactly the next “must have” camera accessory, and it probably won’t help with snapshots and selfies, but it’s an interesting take on the human-device interface. And if you’re thinking about the possibilities of a neural net inside your camera to prompt you when to take a picture, you might want to check out our primer on TensorFlow to get started.

Continue reading “Neural Network Zaps You To Take Better Photographs”

Listen To The Netherworld With Artificial Intelligence

It’s that time of year again, and with Halloween arguably being the hacker’s perfect holiday, we’re starting to see an uptick in projects with a spooky theme. Most have to do with making otherwise tame Halloween decorations scarily awesome, but this one is different — using artificial intelligence to search for ghosts.

It seems like [Matt Reed]’s “DeepWhisper” project is meant as light-hearted fun for the spooky season, but there may be a touch of seriousness to his efforts to listen in on ghostly conversations. The principle behind it is electronic voice phenomena (EVP), whereby the metabolically and/or dimensionally challenged are purported to influence electronic systems, resulting in heavily processed audio clips that seem to contain a whispered endearment from the departed or a threat from a malevolent spirit. DeepWhisper takes this a step further by using a Raspberry Pi to feed audio into the Google Cloud Speech API for analysis. If anything is whispered in one of the 110 or so languages Google knows, it’ll get displayed on a screen. [Matt] plans to set DeepWhisper up in the aptly-named Butchertown section of Nashville and live-stream the results next week.
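
For anyone who wants to try a similar listening rig, the core of it might look something like the sketch below: record a short clip of “silence,” hand it to the Google Cloud Speech API, and print whatever the recognizer thinks it heard. This is not [Matt]’s code: the sample rate, clip length, and language are assumptions, and you would need Google Cloud credentials plus the google-cloud-speech and sounddevice Python packages.

```python
import sounddevice as sd
from google.cloud import speech

RATE = 16000     # sample rate in Hz (assumption)
SECONDS = 10     # length of each listening window (assumption)

# Record a mono clip of ambient "silence" from the default microphone.
clip = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=1, dtype="int16")
sd.wait()

# Ship the raw PCM off to Google's recognizer and see if it "hears" any words.
client = speech.SpeechClient()
audio = speech.RecognitionAudio(content=clip.tobytes())
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=RATE,
    language_code="en-US",   # Google supports ~110 languages; one is picked here
)
response = client.recognize(config=config, audio=audio)

for result in response.results:
    print("The spirits say:", result.alternatives[0].transcript)
```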

It’ll be interesting to see what Google’s neural network makes of the random noise it will probably only ever hear. And [Matt] is planning on releasing his code for all to see, so there may be some valuable cloud techniques to learn from DeepWhisper. But in the unlikely event that he does discover ghosts, it’s nice to know he’ll have the tools and the talent to bust ’em.

Continue reading “Listen To The Netherworld With Artificial Intelligence”

Neural Nets In The Browser: Why Not?

We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL allows it to run as quickly as it would on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.

TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
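
The browser-side conversion is TensorFire’s own business, but the Keras half of that workflow is ordinary model building and saving. As a rough, hedged illustration (a toy architecture and an arbitrary file name, not TensorFire’s tooling), this is the sort of saved Keras model such an importer would start from:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A deliberately tiny image classifier, just to have something to export.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data takes the place of a real training set.
x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=(64,))
model.fit(x, y, epochs=1, verbose=0)

model.save("tiny_classifier.h5")   # arbitrary name; the converted weights end up in the browser
```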

Continue reading “Neural Nets In The Browser: Why Not?”

Sorting Two Tonnes Of Lego

Have you ever taken an interest in something, and then found it’s got a little out of hand as your acquisitions spiral into a tidal wave of bags and boxes? [Jacques Mattheij] found himself in just that position with Lego. His online purchases had run away with him, and he had a garage packed with “two metric tonnes” of the little coloured bricks.

Disposing of Lego is fairly straightforward; there is a lively second-hand market. But to maximise the return, it is important to be in control of what you have, so as to avoid packaging up fake, discoloured, damaged, or dirty parts. This can become a huge job if you do it by hand, so he built a Lego sorting machine to do the job for him.

The machine starts with a hopper for the loose Lego, with a slow belt that tips individual parts down a chute onto a faster belt derived from a running trainer. On that belt the parts run past a camera whose images are analysed by a neural net, and based on its identification they are directed into the appropriate bins with carefully timed jets of compressed air.
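
A heavily simplified version of that decision loop might look like the sketch below. It is not [Jacques]’s code: the Keras model file, GPIO pin numbers, belt timing, and four-class setup are all assumptions, and a real sorter would track several parts in flight rather than blocking while one travels to its bin.

```python
import time
import cv2
import numpy as np
import tensorflow as tf
import RPi.GPIO as GPIO

VALVE_PINS = {0: 5, 1: 6, 2: 13, 3: 19}   # class index -> GPIO pin for each air valve (assumed)
CAMERA_TO_VALVE_S = 0.35                  # travel time from camera to the air jets (assumed)

GPIO.setmode(GPIO.BCM)
for pin in VALVE_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

model = tf.keras.models.load_model("lego_classifier.h5")   # hypothetical trained classifier
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        continue
    x = cv2.resize(frame, (128, 128)).astype("float32") / 255.0
    part_class = int(np.argmax(model.predict(x[np.newaxis], verbose=0)[0]))
    if part_class in VALVE_PINS:
        time.sleep(CAMERA_TO_VALVE_S)            # wait for the part to reach its bin
        GPIO.output(VALVE_PINS[part_class], GPIO.HIGH)
        time.sleep(0.05)                         # short puff of compressed air
        GPIO.output(VALVE_PINS[part_class], GPIO.LOW)
```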

The result is a surprisingly fast way to sort large amounts of bricks without human intervention. He’s posted some videos, one of which we’ve placed below the break, so you can see for yourselves.

Continue reading “Sorting Two Tonnes Of Lego”

AI Generates Color Palettes; Has Remarkably Good Taste

Color palettes are key to any sort of visual or graphic design. A designer has to identify a handful of key colours to make a design work, making calls on what’s eye-catching or what sets the mood appropriately. One of the problems is that this relies heavily on subjective judgement rather than any known mathematical formula. There are rules one can apply, but rules can also be artistically broken, so it’s never a simple task. To this end, [Jack Qiao] created colormind.io, a tool that uses neural nets to generate color palettes.

It’s a fun tool – there’s a selection of palettes generated from popular media and sunset photos, as well as the option to generate custom palettes yourself. Colours can be locked so you can iterate around those you like, finding others that match well. The results are impressive – the tool is able to generate palettes that seem to blend rather well. We were unable to force it to generate anything truly garish despite a few attempts!

The blog explains the software behind the curtain. After first experimenting with a type of neural net known as an LSTM, [Jack] found the results too bland. The network was afraid to be wrong, so would choose values very much “in the middle”, leading to muted palettes of browns and greys. After switching to a less accuracy-focused network known as a GAN, the results were better – [Jack] says the network now generates what it believes to be “plausible” palettes. The code has been uploaded to GitHub if you’d like to play around with it yourself.
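
That “afraid to be wrong” behaviour is a well-known consequence of training with an averaging loss such as mean squared error: when several answers are equally plausible, the loss-minimizing single prediction is their mean. Here is a toy numerical illustration (nothing to do with colormind’s actual code) of why that pushes a regression-style network toward muddy middle values, while a GAN is free to commit to one vivid answer:

```python
import numpy as np

# Pretend the "right answers" for some context are either vivid red or vivid blue.
targets = np.array([[0.9, 0.1, 0.1],    # red, as an RGB triple in 0..1
                    [0.1, 0.1, 0.9]])   # blue

# The prediction that minimizes mean squared error is the average of the targets...
mse_prediction = targets.mean(axis=0)
print(mse_prediction)   # [0.5 0.1 0.5] -- a muted purple-grey, matching neither colour

# ...whereas a GAN generator only has to produce *some* output the discriminator
# accepts as a real palette, so it can pick red or blue rather than the mush between.
```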

Check out this primer on neural nets if you’d like to learn more. We’d like to know – how do you pick a palette when starting a project? Let us know in the comments.

Learn Neural Network And Evolution Theory Fast

[carykh] has a really interesting video series which can give a beginner or a pro great insight into how neural networks operate and, at the same time, how evolution works. You may remember his work creating a neural network that produces Bach-style audio, and this series again shows his talent for explaining complex topics so that anyone can understand them.

He starts with 1000 “creatures”. Each has an internal clock which acts a bit like a heartbeat, although it does not change speed throughout the creature’s life. Creatures also have nodes which cause friction with the ground but don’t collide with each other. Connecting the nodes are muscles which can stretch or contract and have different strengths.

At the beginning of the simulation the creatures are generated with random traits. Some have longer or shorter muscles, while node and muscle positions are also randomly chosen. Once this is set up, they have one job: move from left to right as far as possible in 15 seconds.

Each creature has a chance to perform, and 500 are then selected to evolve based on how far they managed to travel to the right of the starting position. The better a creature performs, the higher the probability it will survive, although some of the high performers randomly die and some of the low performers randomly survive. The 500 surviving creatures then reproduce asexually, creating another 500 to replace the ones that were killed off.
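
The selection-and-reproduction step is simple enough to capture in a few lines. The sketch below is not [carykh]’s code (the survival-probability curve, genome representation, and mutation scheme are all simplified assumptions), but it follows the same recipe: rank by distance, make survival probabilistic but biased toward the best, keep half the population, and refill it with mutated copies.

```python
import random

def one_generation(population, evaluate, mutate):
    """One select-and-reproduce round. population is a list of genomes,
    evaluate(genome) gives distance travelled in 15 s, mutate(genome) returns
    a slightly altered copy."""
    scored = sorted(population, key=evaluate, reverse=True)   # best first
    n = len(scored)
    survivors, doomed = [], []
    for rank, genome in enumerate(scored):
        p_survive = 1.0 - rank / (n - 1)     # best is near-certain to live, worst to die
        (survivors if random.random() < p_survive else doomed).append(genome)
    # Keep exactly half the population: trim surplus survivors from the bottom of
    # the ranking, or rescue the best of the doomed if too many happened to die.
    if len(survivors) >= n // 2:
        survivors = survivors[:n // 2]
    else:
        survivors += doomed[:n // 2 - len(survivors)]
    children = [mutate(g) for g in survivors]   # asexual reproduction with mutation
    return survivors + children

# Toy usage: a "genome" is just ten muscle strengths and fitness is their sum.
population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(1000)]
mutate = lambda g: [x + random.gauss(0, 0.05) for x in g]
for generation in range(20):
    population = one_generation(population, evaluate=sum, mutate=mutate)
print(max(sum(g) for g in population))   # best fitness creeps upward over generations
```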

The simulation is run again and again until one or two species start to dominate. When this happens, evolution slows down as the gene pool becomes very similar. Occasionally a breakthrough will occur, either creating a new species or improving the current best one, leading to a bit of competition for the top spot.

We think the four short YouTube videos (around five minutes each) that kick off the series demonstrate neural networks in a very visual way and make them really easy to understand. Whether you don’t know much about neural networks or you do and want to see something really cool, these are worthy of your time.

Continue reading “Learn Neural Network And Evolution Theory Fast”