Learn Neural Network and Evolution Theory Fast

[carykh] has a really interesting video series which can give a beginner or a pro great insight into how neural networks operate and, at the same time, how evolution works. You may remember his work creating a Bach-style audio-producing neural network, and this series again shows his talent for explaining complex topics so anyone can understand.

He starts with 1000 “creatures”. Each has an internal clock which acts a bit like a heartbeat, though it does not change speed throughout the creature’s life. Creatures also have nodes which create friction with the ground but don’t collide with each other. Connecting the nodes are muscles which can stretch or contract and have different strengths.

At the beginning of the simulation the creatures are randomly generated along with their random traits. Some have longer or shorter muscles, and node and muscle positions are also randomly selected. Once this is set up, they have one job: move from left to right as far as possible in 15 seconds.

Each creature has a chance to perform, and 500 are then selected to evolve based on how far they managed to travel to the right of the starting position. The better a creature performs, the higher the probability it will survive, although some high-performing creatures randomly die and some lower performers randomly survive. The 500 surviving creatures reproduce asexually, creating another 500 to replace the population that was killed off.
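The selection-and-reproduction loop described above can be sketched in a few lines. Note this is a minimal illustration, not [carykh]'s actual code: the `distance` trait and the exact rank-based survival probability are assumptions standing in for the real physics simulation and selection rule.

```python
import random

POP_SIZE = 1000

def fitness(creature):
    # Distance traveled to the right in the 15-second trial;
    # here just a stored number standing in for the simulation.
    return creature["distance"]

def select_survivors(population):
    # Rank creatures by distance, then keep each one with a probability
    # that rises with rank, so a few strong performers still die and a
    # few weak ones still survive -- roughly half make it through.
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = []
    for rank, creature in enumerate(ranked):
        p_survive = 1.0 - rank / len(ranked)  # best ~1.0, worst ~0.0
        if random.random() < p_survive:
            survivors.append(creature)
    return survivors

def reproduce(creature):
    # Asexual reproduction: copy the parent and jitter its traits.
    child = dict(creature)
    child["distance"] = creature["distance"] + random.gauss(0, 1)
    return child

population = [{"distance": random.uniform(0, 10)} for _ in range(POP_SIZE)]
for generation in range(10):
    survivors = select_survivors(population)
    children = [reproduce(random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children
```

Because survival is probabilistic rather than a hard cutoff, the gene pool keeps a little diversity, which is what allows the occasional breakthrough described below.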

The simulation is run again and again until one or two types of species start to dominate. When this happens evolution slows down as the gene pool begins to get very similar. Occasionally a breakthrough will occur either creating a new species or improving the current best species leading to a bit of a competition for the top spot.

We think the four short YouTube videos (each around 5 minutes) that kick off the series demonstrate neural networks in a very visual way and make them really easy to understand. Whether you don’t know much about neural networks or you do and want to see something really cool, these are worthy of your time.

Continue reading “Learn Neural Network and Evolution Theory Fast”

Mega-Plate Petri-Dish Lets You Watch The Evolution Of Bacteria

Researchers at Harvard Medical School built a 2-foot by 4-foot (61 × 122 cm) petri dish to visualize the evolution of bacteria. Their experiment induces mutations in E. coli bacteria by exposing them to gradually increasing concentrations of antibiotics.

Continue reading “Mega-Plate Petri-Dish Lets You Watch The Evolution Of Bacteria”

Evolving our Ideas to Build Something That Matters

When Jeffrey Brian “JB” Straubel built his first electric car in 2000, a modified 1984 Porsche 944 powered by two beefy DC motors, he did it mostly for fun and out of his own curiosity about power electronics. At that time, EVs were already a hot topic among tinkerers and makers, but Straubel certainly pushed the concept to the limit. He designed his own charger, motor controller, and cooling system, capable of an estimated 288 kW (386 hp) peak power output. 20 lead-acid batteries were connected in series to power the 240 V drivetrain. With a 30-40 mile range, the build was not only road-capable but also set a world record for EV drag racing.

The “Electric Porsche 944” – by JB Straubel

The project was never meant to change the world, but with Tesla Motors, which Straubel co-founded only a few years later, the old Porsche 944 may have mattered far more than originally intended. The explosive growth of the laptop computer market between 2000 and 2010 brought forth performant, affordable energy storage technology and made it available to other applications, such as traction batteries. But why did energy storage have to take a detour through a bazillion laptop computers before it arrived at electromobility?


You certainly won’t find the holy grail of engineering by just trying hard. Rather than feverishly hunting down the next big thing or that fix for the world’s big problems, we sometimes need to remind ourselves that even a small improvement, a new approach, or just a fun build may be just the right ‘next step’. We may eventually build all the things and solve all the problems, but looking at the past, we tend not to do so by force. We are much better at evolving our ideas continuously over time. And each step on the way still matters. Let’s dig a bit deeper into this concept and see where it takes us.

Continue reading “Evolving our Ideas to Build Something That Matters”

73 Computer Scientists Created a Neural Net and You Won’t Believe What Happened Next

The Internet is a strange place. The promise of cyberspace in the 1990s was nothing short of humanity’s next greatest achievement. For the first time in history, anyone could talk to anyone else in a vast, electronic communion of enlightened thought, and reasoned discourse. The Internet was intended to be the modern Library of Alexandria. It was beautiful, and it was the future. The Internet was the greatest invention of all time.

Somewhere, someone realized people have the capacity to be idiots. Turns out nobody wants to learn anything when you can gawk at the latest floundering of your most hated celebrity. Nobody wants to have a conversation, because your confirmation bias is inherently flawed and mine is much better. Politics, religion, evolution, weed, guns, abortions, Bernie Sanders and Kim Kardashian. Video games.

A funny thing has happened since then. People started to complain they were being pandered to. They started to blame media bias and clickbait. People started to discover that headlines were designed to get clicks. You’ve read Moneyball and know how the use of statistics changed baseball, right? Buzzfeed has done the same thing with journalism, and it’s working toward their one goal of getting you to click that link.

Now, finally, the Buzzfeed editors may be out of a job. [Lars Eidnes] programmed a computer to generate clickbait. It’s all done using recurrent neural networks trained on millions of headlines gathered from the likes of Buzzfeed and the Gawker network. These headlines are processed, and once every twenty minutes a new story is posted on Click-O-Tron, the only news website you won’t believe. There’s even voting, like Reddit, so you know the results are populist dross.

I propose an experiment. Check out the comments below. If the majority of the comments are not about how Markov chains would be better suited in this case, clickbait works. Prove me wrong.
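For the comment-section skeptics: a word-level Markov chain headline generator really is only a dozen lines, though it can only remix word pairs it has already seen, which is exactly why the RNN approach produces stranger output. The tiny corpus below is made up for illustration; the real system trained on millions of scraped headlines.

```python
import random
from collections import defaultdict

# Toy corpus standing in for millions of scraped headlines.
headlines = [
    "You Won't Believe What This Neural Net Did Next",
    "10 Things You Won't Believe About Neural Networks",
    "This Neural Net Did Something Amazing",
]

# Map each word to every word that follows it anywhere in the corpus.
chain = defaultdict(list)
for headline in headlines:
    words = headline.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def generate(start="You", max_words=12):
    # Walk the chain: from the current word, pick a random observed
    # successor until we hit a dead end or the length limit.
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

print(generate())
```

A first-order chain like this only ever looks one word back; a character-level RNN, by contrast, learns longer-range structure, which is the trade-off the inevitable comments will be arguing about.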

AI via Super Mario evolution

Can Super Mario teach you to think? That’s the idea behind using a simple version of the game to teach artificial intelligence. [Oddball] calls this The Mario Genome and wrote a program that can take on the level with just two controls, right and jump. He gave the script 1000 Marios to run through the level. It then eliminates the 500 least successful and breeds the 500 most successful back up to 1000. In this way the program beat the level in 1935 generations and achieved the quickest possible completion time in 7705 generations. He’s posted the script for download so that you can try it yourself. It’s an interesting exercise we’d love to see applied to more random games, like Ms. Pac-Man.
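The same evolve-a-button-sequence idea can be sketched without the game itself. This is not [Oddball]'s script: the `score` function here is a stand-in that compares a genome against a fixed “winning” sequence, where the real version would actually run each sequence of inputs through the level, and the population is scaled down from the article's 1000 to keep the sketch fast.

```python
import random

SEQ_LEN = 200            # button presses per run
POP_SIZE = 200           # scaled down from the 1000 Marios in the article
ACTIONS = ["right", "jump"]

def random_genome():
    return [random.choice(ACTIONS) for _ in range(SEQ_LEN)]

def score(genome, level):
    # Stand-in for running the level: count presses that match a
    # fixed winning sequence. The real fitness would be the distance
    # Mario travels before dying or the time taken to finish.
    return sum(a == b for a, b in zip(genome, level))

def evolve(level, generations=20, mutation_rate=0.02):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Keep the best half, discard the rest (the "eliminated Marios").
        population.sort(key=lambda g: score(g, level), reverse=True)
        best_half = population[: POP_SIZE // 2]
        # Each survivor breeds one mutated child to refill the population.
        children = [[random.choice(ACTIONS) if random.random() < mutation_rate
                     else press for press in parent]
                    for parent in best_half]
        population = best_half + children
    return population[0]

level = random_genome()  # pretend this encodes the level's solution
best = evolve(level)
```

Because the best half is carried over unchanged each generation, the top score never regresses, which is why the script eventually converges on a complete, and later an optimal, run.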

[via Reddit]