[carykh] has a really interesting video series which can give a beginner or a pro great insight into how neural networks operate, and at the same time how evolution works. You may remember his work creating a Bach-style audio-producing neural network, and this series again shows his talent for explaining complex topics so that anyone can understand them.
He starts with 1000 “creatures”. Each has an internal clock which acts a bit like a heartbeat, though it does not change speed throughout the creature’s life. Creatures also have nodes which cause friction with the ground but don’t collide with each other. Connecting the nodes are muscles which can stretch or contract and have different strengths.
At the beginning of the simulation the creatures are randomly generated along with their random traits. Some have longer/shorter muscles, while node and muscle positions are also randomly selected. Once this is set up they have one job: move from left to right as far as possible in 15 seconds.
Each creature has a chance to perform, and 500 are then selected to evolve based on how far they managed to travel to the right of the starting position. The better a creature performs, the higher the probability it will survive, although some of the high-performing creatures randomly die and some lower performers randomly survive. The 500 surviving creatures reproduce asexually, creating another 500 to replace those that were killed off.
The simulation is run again and again until one or two types of species start to dominate. When this happens evolution slows down as the gene pool begins to get very similar. Occasionally a breakthrough will occur either creating a new species or improving the current best species leading to a bit of a competition for the top spot.
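The selection scheme described above is essentially rank-weighted survival with asexual reproduction and mutation. Here's a minimal Python sketch of that loop; all names and the placeholder fitness function are hypothetical, not [carykh]'s actual code, which runs a real physics simulation to measure distance travelled:

```python
import random

POP_SIZE = 1000
SURVIVORS = 500
MUTATION_RATE = 0.1

def random_creature():
    # A creature here is just a list of random trait values standing in
    # for muscle lengths, strengths, and node positions.
    return [random.uniform(-1, 1) for _ in range(8)]

def fitness(creature):
    # Placeholder for the physics simulation: how far right the creature
    # travels in 15 seconds.
    return sum(creature)

def evolve(population):
    # Rank creatures by distance travelled, best first.
    ranked = sorted(population, key=fitness, reverse=True)
    n = len(ranked)
    # Rank-based survival weights: the best creature gets weight n, the
    # worst gets 1, so strong performers usually survive but weak ones
    # occasionally squeak through (sampling with replacement for simplicity).
    weights = [n - rank for rank in range(n)]
    survivors = random.choices(ranked, weights=weights, k=SURVIVORS)
    # Each survivor reproduces asexually; the child is a mutated copy.
    children = [[t + random.gauss(0, MUTATION_RATE) for t in parent]
                for parent in survivors]
    return survivors + children

population = [random_creature() for _ in range(POP_SIZE)]
for generation in range(10):
    population = evolve(population)
```

As the gene pool converges, most survivors (and therefore most children) come from a handful of similar high-fitness lineages, which is exactly the slowdown the videos show once one or two species dominate.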
We think the four short YouTube videos (each around 5 minutes) that kick off the series demonstrate neural networks in a very visual way and make them really easy to understand. Whether you don’t know much about neural networks, or you do and want to see something really cool, these are worthy of your time.
Continue reading “Learn Neural Network And Evolution Theory Fast”
A gearhead friend of ours sent along a link to a YouTube video (also embedded below) promising the world’s most powerful engine. Now, we’ll be the first to warn you that it’s just an advertisement, and for something that you’re probably not going to rush out and buy: the Wärtsilä 14RT marine engine.
A tiny bit of math: a 96 cm cylinder bore and a 250 cm piston stroke work out to about 1,809,557 cc per cylinder. And it generates around 107,000 HP. That’s a fair bit, but it runs at a techno-music pace: 120 RPM, a steady 120 BPM. With fourteen cylinders, we’d love to hear this thing run. Two-strokes make such a wonderful racket! Wonder if they’ve tried to red-line it? It’s a good thing we don’t work at Wärtsilä.
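The back-of-the-envelope displacement figure is easy to check: a cylinder's swept volume is π × (bore/2)² × stroke. A quick Python sanity check (the fourteen-cylinder count is taken from the 14RT designation above):

```python
import math

bore_cm = 96.0      # cylinder bore (diameter)
stroke_cm = 250.0   # piston stroke
cylinders = 14      # the 14RT variant

# Swept volume of one cylinder: V = pi * (bore/2)^2 * stroke
per_cylinder_cc = math.pi * (bore_cm / 2) ** 2 * stroke_cm
total_cc = per_cylinder_cc * cylinders

print(f"{per_cylinder_cc:,.0f} cc per cylinder")  # ~1,809,557 cc
print(f"{total_cc:,.0f} cc total")                # ~25,333,803 cc
```

So the quoted 1.8 million cc is per cylinder; the whole engine displaces on the order of 25 million cc.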
Continue reading “The Most Powerful Diesel Engine”
Disclosed herein is a device for gauging medication dosage. The method may include displaying first, second and third navigation controls. A switch is connected in parallel to the relay contacts and is configured for providing a portion of the input power as supplemental load power to the output as a function of back EMF energy.
We’ve had patents on the mind lately, and have been reading a fair few of them. If you read patent language long enough, though, it all starts to turn into word-salad. But with his All Prior Art and All the Claims websites, [Alexander Reben] tosses this salad for real. He’s got computers parsing existing patents and randomly reassembling them.
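The core idea, splicing fragments of real patent claims into new "inventions", is a classic cut-up technique. A minimal Python sketch (the tiny corpus below is made up for illustration; the real sites parse actual patent text at scale):

```python
import random

# Toy corpus standing in for parsed patent claims.
claims = [
    "A device for gauging medication dosage comprising a display.",
    "The method may include displaying first and second navigation controls.",
    "A switch is connected in parallel to the relay contacts.",
    "The apparatus provides supplemental load power as a function of back EMF energy.",
]

def remix(claims, n_sentences=3):
    # Cut-up method: pick random sentences from different claims and
    # splice them together into a new 'invention'.
    return " ".join(random.sample(claims, k=n_sentences))

print(remix(claims))
```

Even this toy version produces plausible-sounding word salad; the interesting part of [Alexander]'s project is doing it exhaustively enough to count as published prior art.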
Rather than hoping that his algorithm comes up with the next great idea, [Alexander] is hoping to nip the truly trivial ones in the bud. Because prior art — the sum of all pre-existing ideas — is enough to disqualify a patent, if an idea is so trivial that his algorithm could have come up with it, it’s sooner or later going to be off the table.
Most of the results are insane, of course. And it seems to be producing a patent at a rate of about one per 10-15 seconds, so we’re guessing that it’ll take quite a few years for these cyber-monkeys to come up with the works of Shakespeare. But with bogus and over-broad patents filtering through the system every day, it’s not implausible that some day it’ll prove useful.
[Via New Scientist, thanks Frank!]
I had the honor of speaking at the 2015 Hackaday SuperConference in November on the topic of Hackaday’s Editorial Vision. We are bringing to a close an amazing year in which our writing team has grown in every respect. We have more editors, writers, and community members than ever before (Hackaday.io passed 100,000 members). With this we have been able to produce a huge amount of high-quality original content that matters to anyone interested in engineering — the best of which is embodied in the expansive Omnibus Volume 2 print edition. 2015 also marked an unparalleled ground-game for us; we took the Hackaday Prize all over the world and were warmly greeted by you at every turn. And of course, the Hackaday SuperConference (where I presented the talk) is a major milestone: Hackaday’s first ever full-blown conference.
So this raises the question: what next? What is guiding Hackaday, and where do we plan to go in the future? Enjoy this video, which is really a ‘State of the Union’ for Hackaday, then join me after the break for a few more details on why we do what we do.
Continue reading “Hackaday’s Editorial Vision”
Remember when CGA came out and made monochrome monitors look horrible? Well, CGA is crap; VGA is where it’s at. Wait… weren’t there a couple of standards in between those two? Take a walk down memory lane and relive the evolution of computer display technology. You’ll start with displays that are more or less CRT oscilloscopes and end up in better-than-high-def territory. The article is an interesting read, but for those with short attention spans, jump to the fourth page and check out the chart of technologies, resolutions, and implementation dates. We’ve come a long way in a few short decades.