Deep Learning, the use of neural networks with modern techniques to tackle problems ranging from computer vision to speech recognition and synthesis, is certainly a current buzzword. At its core, however, is a set of powerful methods for building self-learning systems. Multi-layer neural networks aren't new, but interest has surged primarily because of the availability of massively parallel computation platforms disguised as video cards.
The problem is getting started in a field like this. There are plenty of scholarly papers, but they can be hard to wade through. Or you can grab some code from GitHub and try to puzzle it out.
A better idea would be to take a free class entitled Practical Deep Learning for Coders, Part 1. The course is free unless you count your investment in time: they warn you to expect to commit about ten hours a week for seven weeks to complete it. You can see the first installment in the video below.


The hardware is simple. There's the Raspberry Pi — he's got instructions on making it work with the Pi2, Pi2+, Pi3, or the Pi0. Since the Pi's onboard audio capabilities are limited, he's using a DAC, the 


Let's use some cool retro transistors! I merrily go along for hours designing away: carefully balancing the current in the long-tailed pair input, picking just the right collector resistor and capacitor values to drive the transformer, and calculating the negative feedback circuit for the proper low-frequency cutoff and high-frequency stability. Then into the breadboard the parts go, with jumper clips, meter probes, and test leads abounding: a truly joyful event.
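The low-frequency cutoff mentioned above comes from standard first-order RC behavior: the corner frequency of a resistor-capacitor pair is f = 1/(2πRC). As a quick sanity check before reaching for the breadboard, you can plug in candidate values; here is a minimal sketch, where the 1 kΩ and 10 µF component values are purely hypothetical, not taken from the design described:

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC corner frequency: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical feedback-network values: 1 kOhm with a 10 uF series capacitor.
f_low = rc_cutoff_hz(1_000, 10e-6)
print(f"Low-frequency cutoff: {f_low:.1f} Hz")  # about 15.9 Hz
```

Running a few candidate R and C pairs through a check like this is a lot faster than discovering a rolled-off bass response after the circuit is soldered up.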

!["Get ready for a life that smells of hot plastic, son!" John Atherton [CC BY-SA 2.0], via Wikimedia Commons.](https://hackaday.com/wp-content/uploads/2016/12/656px-early_1950s_television_set_eugene_oregon.jpg?w=342)