From 50s Perceptrons To The Freaky Stuff We’re Doing Today

Things have gotten freaky. A few years ago, Google showed us that neural networks’ dreams are the stuff of nightmares, but more recently we’ve seen them used to give game characters movements that are indistinguishable from those of humans, to create photorealistic images from nothing but textual descriptions, to provide vision for self-driving cars, and much more.

Being able to do all this well, and in some cases better than humans, is a recent development. Creating photorealistic images is only a few months old. So how did all this come about?

Continue reading “From 50s Perceptrons To The Freaky Stuff We’re Doing Today”

Hacking On TV: What You Need To Know

It seems to be a perennial feature of our wider community of hackers and makers, that television production companies come up with new ideas for shows featuring us and our skills. Whether it is a reality maker show, a knockout competition, a scavenger hunt, or any other format, it seems that there is always a researcher from one TV company or another touting around the scene for participants in some new show.

These shows are entertaining and engaging to watch, and we’ve all probably wondered how we might do were we to have a go ourselves. Fame and fortune await, even if only for one or two episodes, and sometimes participants even find themselves launched into TV careers. Americans may be familiar with [Joe Grand], for instance, and Brits will recognise [Dick Strawbridge].

It looks as if it might be a win-win situation to be a TV contestant on a series filmed in exotic foreign climes, but it’s worth taking a look at the experience from another angle. What you see on the screen is the show as its producer wants you to see it, fast-paced and entertaining. What you see as a competitor can be entirely different, and before you fill in that form you need to know about both sides.

A few years ago I was one member of a large team of makers that entered the UK version of a very popular TV franchise. The experience left me with an interest in how TV producers craft the public’s impression of an event, and also with a profound distrust of much of what I see on my screen. This prompted me to compare experiences with people I’ve met over the years who have been contestants in other similar shows, to gain a picture of the industry from more than just my personal angle. Those people know who they are and I thank them for their input, but because some of them may still be bound by contract I will keep both their identities and those of the shows they participated in secret. It’s thus worth sharing some of the insights gleaned from their experiences, so that should you be interested in having a go yourself, you are forewarned. Continue reading “Hacking On TV: What You Need To Know”

The Neuron – A Hacker’s Perspective

It’s not too often that you see handkerchiefs around anymore. Today, they’re largely viewed as unsanitary and, well… just plain gross. You’ll be quite disappointed to learn that they have absolutely nothing to do with this article other than a couple of similarities they share with your neocortex. If you were to pull the neocortex from your brain and stretch it out on a table, you would find that it is not only roughly the size of a large handkerchief; it’s also about the same thickness.

The neocortex, or cortex for short, is Latin for “new rind” or “new bark”, and represents the most recent evolutionary addition to the mammalian brain. It envelops the “old brain” and has several ridges and valleys (called sulci and gyri) that formed from evolution’s mostly successful attempt to stuff as much cortex as possible into our skulls. It has taken on the duties of processing sensory inputs and storing memories, and rightfully so. Draw a one millimeter square on your handkerchief cortex, and it would contain around 100,000 neurons. It has been estimated that the typical human cortex contains some 30 billion neurons in total. If we make the conservative guess that each neuron has 1,000 synapses, that would put the total synaptic connections in your cortex at 30 trillion, a number so large it is beyond our ability to truly comprehend. And apparently enough to store all the memories of a lifetime.
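If you want to sanity-check that multiplication, it’s a one-liner. The figures are exactly the ones quoted above, nothing new:

# Back-of-the-envelope synapse count, using only the article's figures.
neurons = 30e9             # estimated neurons in the human cortex
synapses_per_neuron = 1e3  # a conservative guess
print(f"{neurons * synapses_per_neuron:.0e}")  # 3e+13, i.e. 30 trillion synapses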

In the theater of your mind, imagine a stretched-out handkerchief lying in front of you. It is… you. It contains everything about you. Every memory you have is in there. Your best friend’s voice, the smell of your favorite food, the song you heard on the radio this morning, that feeling you get when your kids tell you they love you: it’s all in there. Your cortex, that insignificant-looking handkerchief in front of you, is reading this article at this very moment.

What an amazing machine, and one made possible by a special type of cell: the neuron. In this article, we’re going to explore how a neuron works from an electrical vantage point. That is, how electrical signals move from neuron to neuron and create who we are.

A Basic Neuron

Neuron diagram via Enchanted Learning

Despite the amazing feats a human brain performs, the individual neuron is comparatively simple. Neurons are living cells, however, and have many of the same complexities as other cells, such as a nucleus, mitochondria, ribosomes, and so on. Each one of these cellular parts could be the subject of an entire book. The neuron’s simplicity lies in the basic job it does: producing a voltage spike when the sum of its inputs pushes its membrane potential past a certain threshold, roughly -55 mV.

Using the image above, let’s examine the three major components of a neuron.

Soma

The soma is the cell body and contains the nucleus and other components of a typical cell. The different types of neurons are distinguished largely by the characteristics of their somas, whose size can range from 4 to over 100 micrometers.

Dendrites

Dendrites protrude from the soma and act as the inputs of the neuron. A typical neuron will have thousands of dendrites, with each connecting to the axon of another neuron. The connection is called a synapse, but it is not a physical contact. There is a gap between the ends of the dendrite and axon called the synaptic cleft. Information is relayed across the gap by neurotransmitters, chemicals such as dopamine and serotonin.

Axon

Each neuron has only a single axon extending from the soma, and it acts much like an electrical wire. Each axon terminates in terminal fibers, forming synapses with as many as 1,000 other neurons. Axons vary in length, and the longest in the human body, running from the bottom of the foot to the spinal cord, can exceed a meter.

The basic electrical operation of a neuron is to output a voltage spike from its axon when the sum of its input voltages (via its dendrites) crosses a specific threshold. And since axons connect to the dendrites of other neurons, you end up with a vastly complicated neural network.
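For the electronics crowd, here is a minimal sketch of that threshold behavior in Python. It’s a cartoon, not a biophysical model; real neurons integrate their inputs over time, while this just sums instantaneous dendrite contributions. Only the -70/-55/+40 mV figures come from the article; everything else is illustrative:

RESTING_MV = -70.0     # resting membrane potential
THRESHOLD_MV = -55.0   # firing threshold
SPIKE_MV = 40.0        # peak of the action potential

def neuron(dendrite_inputs_mv):
    """Fire a spike if the summed inputs push the membrane past threshold."""
    membrane_mv = RESTING_MV + sum(dendrite_inputs_mv)
    return SPIKE_MV if membrane_mv >= THRESHOLD_MV else None

print(neuron([6.0, 6.0, 6.0]))  # 40.0  (-70 + 18 = -52, past threshold)
print(neuron([6.0, 6.0]))       # None  (-70 + 12 = -58, still below -55 mV)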

Since we’re all a bunch of electronics types here, you might be picturing these ‘voltage spikes’ as electrons flowing down a wire. But that’s not how it works. Not in the brain, anyway. Let’s take a closer look at how electricity moves from neuron to neuron.

Action Potentials – The Communication Protocol of the Brain

The axon is covered in a myelin sheath, which acts as an insulator. There are small breaks in the sheath along the length of the axon, called nodes of Ranvier after their discoverer. These nodes are where the axon’s ion channels are concentrated. In the spaces just outside and inside of the axon membrane exist concentrations of potassium and sodium ions, and the ion channels open and close to create local differences in those concentrations.

Diagram via Washington U.

We all should know that an ion is an atom with a charge. In the resting state, the sodium/potassium ion distribution creates a negative 70 mV difference of potential between the outside and inside of the axon membrane, with a higher concentration of sodium ions outside and a higher concentration of potassium ions inside. The soma will create an action potential when -55 mV is reached. When this happens, sodium ion channels open, letting positive sodium ions from outside the axon membrane rush inside. This changes the sodium/potassium balance inside the axon, which in turn swings the difference of potential from -55 mV to around +40 mV. This process is known as depolarization.
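Where does that -70 mV come from? It falls out of the ion gradients themselves. For a single ion species, the equilibrium potential is given by the Nernst equation, a standard electrophysiology result (not from the article). A quick sketch, using typical textbook mammalian concentrations, which are assumptions rather than measured values:

import math

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 310.0     # body temperature, K (about 37 degrees C)

def nernst_mv(z, conc_out, conc_in):
    """Equilibrium potential in mV for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

print(f"E_K  = {nernst_mv(+1, 5.0, 140.0):6.1f} mV")   # about -89 mV
print(f"E_Na = {nernst_mv(+1, 145.0, 15.0):6.1f} mV")  # about +61 mV

The resting -70 mV sits between those two values because the membrane at rest is far more permeable to potassium than to sodium; opening sodium channels drags the potential toward the sodium figure, which is the depolarization described above.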

Graph via Washington U.

One by one, sodium ion channels open along the entire length of the axon. Each opens only for a short time, and immediately afterward, potassium ion channels open, allowing positive potassium ions to move from inside the axon membrane to the outside. This restores the sodium/potassium balance and brings the difference of potential back to its resting value of -70 mV, in a process known as repolarization. From start to finish, the process takes about five milliseconds. The result is a 110 mV voltage spike that rides down the entire length of the axon, called an action potential. The spike ends up, via synapses, at the dendrites of other neurons. If a particular neuron gets enough of these spikes, it too will create an action potential. This is the basic process by which electrical patterns propagate throughout the cortex.

The mammalian brain, specifically the cortex, is an incredible machine, capable of feats that even our most powerful computers cannot match. Understanding how it works gives us better insight into building intelligent machines. And now that you know the basic electrical properties of a neuron, you’re in a better position to understand artificial neural networks.

Sources

Action Potential in Neurons, via YouTube

On Intelligence, by Jeff Hawkins, ISBN 978-0805078534

Ohm? Don’t Forget Kirchhoff!

It is hard to get very far into electronics without knowing Ohm’s law. Named after [Georg Ohm], it describes the current and voltage relationships in linear circuits. However, there are two even more basic laws that don’t get nearly the respect Ohm’s law enjoys: Kirchhoff’s laws.

In simple terms, Kirchhoff’s laws are expressions of conservation principles: his voltage law of the conservation of energy, and his current law of the conservation of charge. Kirchhoff’s current law (KCL) says that the current going into a single point (a node) has to have exactly the same amount of current going out of it. If you are more mathematical, you can say that the sum of the current going in and the current going out will always be zero, since the current going out carries a negative sign compared to the current going in.

You know the current in a series circuit is always the same, right? For example, in a circuit with a battery, an LED, and a resistor, the LED and the resistor will have the same current in them. That’s KCL. The current going into the resistor better be the same as the current going out of it and into the LED.

This is mostly interesting when there are more than two wires going into one point. If a battery drives three magically identical light bulbs, for instance, then each bulb will get one-third of the total current. The point where the battery’s wire joins the leads of the three bulbs is the node. All the current coming in has to equal all the current going out. Even if the bulbs are not identical, the totals will still be equal, so if you know any three of the currents, you can compute the fourth.
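Here’s that bookkeeping as a few lines of Python, with made-up numbers: a battery pushing 300 mA into a node feeding three identical bulbs. These values are purely illustrative:

# KCL at a node. Sign convention: current into the node is positive,
# current out of the node is negative.
i_battery = 0.300                    # A, flowing into the node
i_bulbs = [-0.100, -0.100, -0.100]   # A, one third leaving through each bulb
assert abs(i_battery + sum(i_bulbs)) < 1e-12  # signed sum is zero

# Know any three of the four currents and the fourth is forced:
i_fourth = -(i_battery + i_bulbs[0] + i_bulbs[1])
print(round(i_fourth, 6))  # -0.1, the last bulb must carry 100 mA out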

If you want to play with it yourself, you can simulate the circuit below.

The current leaving the battery has to equal the current returning to it. The two resistors at the extreme left and right have the same current through them (1.56 mA). Within the rounding error of the simulator, each branch of the split carries its share of the total (note that the bottom leg has 3 kΩ of total resistance and thus carries less current).
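Here is a sketch of that split as a current divider. The 3 kΩ bottom leg and the 1.56 mA total come from the description above; the 2 kΩ top leg is a hypothetical stand-in, since the simulated circuit isn’t reproduced here:

# Current divider at the split. Each branch takes a share of the total
# inversely proportional to its own resistance.
i_total = 1.56e-3   # A, arriving at the split (from the text)
r_top = 2.0e3       # ohms, assumed top-leg resistance
r_bottom = 3.0e3    # ohms, bottom-leg resistance (from the text)

i_top = i_total * r_bottom / (r_top + r_bottom)   # 0.936 mA
i_bottom = i_total * r_top / (r_top + r_bottom)   # 0.624 mA, the smaller share
assert abs((i_top + i_bottom) - i_total) < 1e-9   # KCL: the shares add back up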

Continue reading “Ohm? Don’t Forget Kirchhoff!”

On Point: The Yagi Antenna

If you happened to look up while driving down a suburban street in the US anytime during the 60s or 70s, you’d no doubt have noticed a forest of TV antennas. When over-the-air TV was the only option, people went to great lengths to haul in signals, with antennas of sometimes massive proportions flying over rooftops.

Outdoor antennas all but disappeared over the last third of the 20th century as cable providers became dominant, cast to the curb as unsightly relics of a sad and bygone era of limited choices and poor reception. But now cord-cutting cheapskates like yours truly are starting to regrow that once-thick forest, this time lofting antennas to receive digital programming over the air. Many of the new antennas make outrageous claims about performance or tout that they’re designed specifically for HDTV. It’s all marketing nonsense, of course, because then as now, almost every TV antenna is just some form of the classic Yagi design. The physics of this antenna is fascinating, as is the story of how it was invented.

Continue reading “On Point: The Yagi Antenna”

Stupid Git Tricks

My apologies if you speak the Queen’s English since that title probably has a whole different meaning to you than I intended. In fact, I’m talking about Git, the version control system. Last time I talked about how the program came to be and offered you a few tutorials. If you are a dyed-in-the-wool software developer, you probably don’t need to be convinced to use Git. But even if you aren’t, there are a lot of things you can do with Git that don’t fit the usual mold.

Continue reading “Stupid Git Tricks”

Wrap Your Mind Around Neural Networks

Artificial intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before: if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we see it. This is not coincidental, but stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way anyone can understand. The goal of this article is to apply a similar approach to Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term for something whose underpinnings are relatively simple.

Continue reading “Wrap Your Mind Around Neural Networks”