How The Human Brain Stores Data

Evolution is one clever fellow. Next time you’re strolling about outdoors, pick up a pine cone and take a look at the layout of the bract scales. You’ll find an unmistakable geometric structure. In fact, this same structure can be seen in the petals of a rose, the seeds of a sunflower and even the cochlea of your inner ear. Look closely enough, and you’ll find this spiraling structure everywhere. It’s based on a series of integers called the Fibonacci sequence. Leonardo Bonacci discovered the sequence while trying to figure out how many rabbits he could breed starting with just two. It’s quite simple: each number is the sum of the two before it. Starting from zero, this gives you 0-1-1-2-3-5-8-13-21 and so on. Turn the sequence into geometry by laying out square tiles whose side lengths are the values in the sequence. Connect opposite corners of the tiles with one continuous curve, and you end up with the spiral you saw in the pine cone and other natural objects.
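
If you want to play with the numbers yourself, the rule above is only a few lines of code. Here’s a quick Python sketch of the sequence as just described:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from zero."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])  # each number is the sum of the two before it
    return seq[:n]

print(fibonacci(9))  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```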

[Image: Fibonacci spiral. Source via Geocaching]

So how did Mother Nature discover this geometric structure? Surely she doesn’t know math. How, then, can she come up with such intricate and sophisticated structures? It turns out that the Fibonacci spiral is the most efficient way of packing the most stuff into the least amount of space. And if you take natural selection seriously, this makes perfect sense: eons of trial and error in making the most copies of itself have stumbled upon a mathematical principle that permeates life on Earth.

[Image: source via John Simmons]

The Homo sapiens brain is the product of this same evolutionary process, and has been evolving for an estimated seven million years. It would be foolish to think that the kind of efficiency natural selection has stumbled across elsewhere would not also be present in the modern human brain. I want to impress upon you this idea of efficiency. Natural selection arrived at the Fibonacci sequence solely because it is the most efficient way to do a particular task. If the brain’s task is storing information, it is perfectly reasonable to expect that millions of years of evolution have honed it to do so in the most efficient way possible as well. In this article, we shall explore this idea of efficiency in data storage, and leave you to ponder its applications in the computer sciences.

Continue reading “How The Human Brain Stores Data”

Inceptionism: Mind Blown By What Neural Nets Think They See

Dr. Robert Hecht-Nielsen, inventor of one of the first neurocomputers, defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

These ‘processing elements’ are generally arranged in layers: an input layer, an output layer, and a bunch of layers in between. Google has been doing a lot of research with neural networks for image processing. They start with a network 10 to 30 layers deep and feed it millions of training images, one at a time. After enough training passes to tweak the connection weights, the output layer spits out what they want: an identification of what’s in a picture.

The layers have a hierarchical structure. The input layer will recognize simple line segments. The next layer might recognize basic shapes. The one after that might recognize simple objects, such as a wheel. The final layer will recognize whole structures, like a car for instance. As you climb the hierarchy, you transition from fast-changing low-level patterns to slow-changing high-level patterns. If this sounds familiar, we’ve talked about it before.

Now, none of this is new and exciting. We all know what neural networks are and do. What is going to blow your mind, however, is a simple question Google asked, and the resulting answer. To better understand the process, they wanted to know what was going on in the inner layers. They could feed the network a picture of a truck and out would come the word “truck”, but they didn’t know exactly how the network came to its conclusion. To answer this question, they showed the network an image and then extracted what the network was seeing at different layers in the hierarchy. Sort of like putting a Serial.print() in your code to see what it’s doing.
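
Google’s real networks are deep convolutional models trained on millions of images; as a toy stand-in, here’s a minimal NumPy sketch of the “tap every layer” idea, with random weights playing the part of a trained network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a trained network: three ReLU layers of arbitrary size.
weights = [rng.normal(size=(64, 32)),
           rng.normal(size=(32, 16)),
           rng.normal(size=(16, 4))]

def forward_with_taps(x):
    """Push the input through every layer, keeping a copy of each layer's
    output along the way -- the software equivalent of a Serial.print()
    between stages."""
    taps = []
    for W in weights:
        x = np.maximum(0.0, x @ W)  # ReLU "processing elements"
        taps.append(x)
    return x, taps

output, taps = forward_with_taps(rng.normal(size=64))
for depth, activation in enumerate(taps, start=1):
    print(f"layer {depth}: {activation.size} units, {np.count_nonzero(activation)} active")
```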

They then took the results and had the network enhance what it thought it detected. Lower levels would enhance low-level features, such as lines and basic shapes. The higher levels would enhance actual structures, such as faces and trees. This technique reveals the level of abstraction at different layers in the hierarchy, along with the network’s primitive understanding of the image. They call this process inceptionism.
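
Google’s actual implementation runs gradient ascent on the activations of a chosen layer in a trained image network. The following is only a bare-bones sketch of that loop, using a single random ReLU layer as a stand-in, to show what “enhance whatever you already detect” means:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))      # stand-in "feature detector" layer (untrained)

def layer_response(x):
    return np.maximum(0.0, x @ W)  # ReLU feature activations

def enhance(x, steps=100, step_size=0.1):
    """Nudge the input, step by step, in whatever direction makes the layer's
    existing activations stronger -- the core loop behind inceptionism."""
    for _ in range(steps):
        a = layer_response(x)
        grad = a @ W.T             # gradient of 0.5*||a||^2 w.r.t. x (ReLU zeros out inactive units)
        x = x + step_size * grad / (np.linalg.norm(grad) + 1e-8)
    return x

image = rng.normal(size=64)        # pretend this flat vector is an image
dreamed = enhance(image)
print("feature energy before:", float(np.sum(layer_response(image) ** 2)))
print("feature energy after: ", float(np.sum(layer_response(dreamed) ** 2)))
```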

 

Be sure to check out the gallery of images produced by the process. Some have called the images dreamlike, hallucinogenic and even disturbing. Does this process reveal the inner workings of our mind? After all, our brains are indeed neural networks. Has Google unlocked the mind’s creative process? Or is this just a neat way to make computer-generated abstract art?

So here comes the big question: is it the computer choosing these end-product photos, or a Google engineer pawing through thousands (or orders of magnitude more) to find the ones we will all drool over?

[Image: driftwood on a beach]

Ask Hackaday: Not Your Mother’s Feedback

Imagine you were walking down a beach, and you came across some driftwood resting against a pile of stones. You see it in the distance, and your brain has no trouble figuring out what you’re looking at. You see driftwood and rocks – you can clearly distinguish between the two objects without a second thought.

Think about the raw data entering the brain. The textures of the rocks and the driftwood are similar. The colors are similar. The irregular shapes are similar. Thus the raw data entering the brain’s V1 area for both objects must be similar as well. Now think about the borders that separate the pieces of driftwood from the edges of the rocks. From a raw data perspective, there is no border, and likewise no separation, because the two objects are so similar. Yet your brain can clearly see a rock and a piece of driftwood, two distinctly different objects. So how does the brain do this? How does it so easily differentiate between the two? If the raw data on either side of the border separating the wood and the rocks is the same, then there must be an outside influence determining where that border is. Jeff Hawkins believes this outside influence is a very special and most interesting type of feedback. Read on as we explain and attempt to implement this form of feedback in our hierarchical structure of invariant representations.
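
The full article works this out properly; as a loose, purely illustrative sketch (not Hawkins’ actual model), here is how a top-down expectation can break a tie that the bottom-up data alone cannot:

```python
import numpy as np

labels = ["rock", "driftwood"]

# Bottom-up evidence from two neighbouring patches: texture and color are so
# similar that the feed-forward data alone cannot place a border between them.
bottom_up = {"patch_A": np.array([0.51, 0.49]),   # P(rock), P(driftwood)
             "patch_B": np.array([0.49, 0.51])}

# Hypothetical feedback from a higher level that has already recognized the
# overall scene as "driftwood resting against rocks".
top_down = {"patch_A": np.array([0.85, 0.15]),
            "patch_B": np.array([0.15, 0.85])}

for patch, evidence in bottom_up.items():
    combined = evidence * top_down[patch]  # feedback biases the ambiguous data
    combined /= combined.sum()
    print(patch, "->", labels[int(np.argmax(combined))], combined.round(2))
```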

Continue reading “Ask Hackaday: Not Your Mother’s Feedback”

[Image: binary hierarchy]

Ask Hackaday: Sequences Of Sequences


In a previous article, we talked about the idea of the invariant representation and theorized different ways of implementing such an idea in silicon. The hypothetical example of identifying a song without knowledge of pitch or form was used to help create a foundation to support the end goal: to identify real-world objects and events without the need for predefined templates. Such a task is possible if one can separate the parts of real-world data that change from those that do not. By looking only at the parts of the data that don’t change, or are invariant, one can identify real-world events with superior accuracy compared to a template-based system.

Consider a friend’s face. Imagine she was sitting in front of you, and her face took up most of your visual space. Your brain identifies the face as your friend without trouble. Now imagine you were in a crowded nightclub, looking for the same friend. You catch a glimpse of her from several yards away, and your brain IDs the face almost as easily as it did when she was sitting in front of you.

I want you to think about the raw data coming off the eye and going into the brain during both scenarios. The two sets of data would be completely different, yet your brain is able to find a commonality between the two events. How? It can do this because the data that makes up the memory of your friend’s face is stored in an invariant form. There is no template of your friend’s face in your brain. It stores only the parts that do not change, such as the relative distances between the eyes, the nose, the ears and the mouth, or the shape her hairline makes on her forehead. These types of data points do not change with distance, lighting conditions or other ‘noise’.
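
To make that concrete, here’s a toy sketch with made-up landmark coordinates (not anything from the article): the absolute distances between facial features shrink as your friend walks away, but the ratios between those distances do not.

```python
import numpy as np

# Hypothetical 2-D landmark positions: left eye, right eye, nose tip, mouth.
close_up = np.array([[0.0, 0.0], [6.0, 0.0], [3.0, -4.0], [3.0, -7.0]])
far_away = close_up * 0.2 + np.array([40.0, 25.0])  # same face, smaller and elsewhere in view

def invariant_signature(landmarks):
    """Ratios of pairwise distances, normalized by the eye-to-eye distance.
    The raw pixel data differs wildly between the two views; this does not."""
    pairs = [(0, 1), (0, 2), (1, 2), (2, 3)]
    d = np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])
    return d / d[0]

print(invariant_signature(close_up))
print(invariant_signature(far_away))  # identical signature despite completely different raw data
```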

One can argue over the specifics of how the brain does this. True or not, the idea of the invariant representation is a powerful one, and implementing such an idea in silicon is a worthy goal. Read on as we continue to explore this idea in ever deeper detail.

Continue reading “Ask Hackaday: Sequences Of Sequences”

Ask Hackaday: What Are Invariant Representations?

Your job is to make a circuit that will illuminate a light bulb when it hears the song “Mary Had a Little Lamb”. So you breadboard a mic, op amp, your favorite microcontroller (and an ADC if needed) and get to work. You will sample the incoming data and compare it to a known template. When you get a match, you light the light. The first step is to make the template. But what to make the template of?
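
The matching step of that first plan is easy enough to sketch. Here’s a minimal Python version of the idea, with plain normalized cross-correlation standing in for whatever the microcontroller firmware would actually do (the buffer and template names are hypothetical):

```python
import numpy as np

def matches_template(samples, template, threshold=0.8):
    """Slide the stored template across the incoming ADC samples and report
    whether the best normalized correlation clears the threshold."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    best = 0.0
    for i in range(len(samples) - len(template) + 1):
        window = samples[i:i + len(template)]
        window = (window - window.mean()) / (window.std() + 1e-9)
        best = max(best, float(np.dot(window, t)) / len(t))
    return best >= threshold

# e.g. light the light if matches_template(adc_buffer, stored_recording) is True
```

One template matches exactly one version of the song, which is the catch your boss is about to point out.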

“Hey boss, what style of the song do you want to trigger the light? Is it children singing, piano, what?”

Your boss responds:

“I want the light to shine whenever any version of the song occurs. It could be singing, keyboard, guitar, any musical instrument or voice in any key. And I want it to work even if there’s a lot of ambient noise in the background.”

Uh oh. Your job just got a lot harder. Is it even possible? How do you make templates of every possible version of the song? Stumped, you talk over your dilemma at lunch with your friend, who just so happens to be [Jeff Hawkins], a guy who’s already put a great deal of thought into this very problem.

“Well, the brain solves your puzzle easily,” [Hawkins] says coolly. “Your brain can recall the memory of that song whether it’s vocal or instrumental, in any key or pitch. And it can pick it out from a lot of noise.”

“Yeah, but how does it do that?” you ask. “The patterns of electrical signals entering the brain have to be completely different for different versions of the song, just like the patterns from my ADC. How does the brain store the countless number of templates required to ID the song?”

“Well…” [Hawkins] chuckles. “The brain does not store templates like that. It only remembers the parts of the song that don’t change, or are invariant. The brain forms what we call invariant representations of real world data.”

Eureka! Your riddle has been solved. You need to construct an algorithm that stores only the parts of the song that don’t change. These parts will be the same in all versions, vocal or instrumental, in any key. It will be these invariant, unchanging parts of the song that you will look for to trigger the light. But how do you implement this in silicon?
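
As a minimal sketch of one such invariant, consider melodic intervals: the step from each note to the next survives any change of key. Assuming you can already pull note numbers out of the audio (a big assumption, and in silicon this would be firmware rather than Python), the matching itself is tiny:

```python
def intervals(notes):
    """The invariant part: how far each note moves, not where it sits."""
    return [b - a for a, b in zip(notes, notes[1:])]

# MIDI-style note numbers for the opening of "Mary Had a Little Lamb": E D C D E E E
mary_in_c = [64, 62, 60, 62, 64, 64, 64]
mary_in_g = [n + 7 for n in mary_in_c]    # same melody, transposed up a fifth

stored_invariant = intervals(mary_in_c)   # all that gets stored -- no template of any one version

def is_mary(heard_notes):
    return intervals(heard_notes) == stored_invariant

print(is_mary(mary_in_g))                 # True: different key, same invariant representation
```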

Continue reading “Ask Hackaday: What Are Invariant Representations?”