Mapping The Human Brain And Where This May Lead Us

In order to understand something, it helps to observe it up close and study its inner workings. This is no less true for the brain, whether it is the brain of a mouse, that of a whale, or the squishy brain inside our own skulls. It is, after all, what defines us as a person, containing our personality and all our desires and dreams. There are also many injuries, disorders and illnesses that affect the brain, many of which we understand as poorly as we understand the basics of how memories are stored and thoughts are formed. Much of this comes down to how difficult the brain is to study in a controlled fashion.

Recently a breakthrough was made in the form of a detailed map of the cells and synapses in a segment of a human brain sample. This collaboration between Harvard and Google produced the most detailed look at human brain tissue so far, packed into a mere 1.4 petabytes of data. Far from a full brain map, this particular effort covered only a cubic millimeter of the human temporal cortex, containing 57,000 cells, 230 millimeters of blood vessels and 150 million synapses.

Ultimately the goal is to create a full map of a human brain like this, with every synapse and other structure detailed. If we can pull it off, the implications could be mind-bending.
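To get a rough sense of what a full map like this would mean in practice, the back-of-the-envelope sketch below scales the published 1.4 petabytes per cubic millimeter up to an entire brain. The roughly 1.2 million cubic millimeter brain volume and the assumption that data scales linearly with tissue volume are illustrative guesses, not figures from the study.

```python
# Back-of-the-envelope estimate: scaling the 1.4 PB-per-cubic-millimeter figure
# up to a whole human brain. The ~1.2 million mm^3 brain volume is an assumption
# (adult brains are very roughly 1.1 to 1.4 liters), as is the linear scaling.

sample_volume_mm3 = 1.0           # the Harvard/Google sample: one cubic millimeter
sample_data_pb = 1.4              # petabytes of data for that sample
whole_brain_volume_mm3 = 1.2e6    # assumed whole-brain volume, ~1.2 liters

whole_brain_data_pb = sample_data_pb * whole_brain_volume_mm3 / sample_volume_mm3
print(f"Estimated whole-brain dataset: {whole_brain_data_pb:,.0f} PB "
      f"(~{whole_brain_data_pb / 1e6:.1f} zettabytes)")
```

On these assumptions, a complete human brain at this resolution lands somewhere in the zettabyte range, which goes a long way toward explaining why only a single cubic millimeter has been mapped so far.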

Continue reading “Mapping The Human Brain And Where This May Lead Us”

An electronic neuron implemented on a purple neuron-shaped PCB

Hackaday Prize 2023: Explore The Basics Of Neuroscience With This Electronic Neuron

Brains are the most complex systems in the universe, but their basic building blocks are surprisingly simple — the complexity arises from billions of neurons, axons and synapses working together. Simulating an entire brain therefore requires vast computing resources, but if it’s just a few cells you’re interested in, you don’t need much: a handful of op-amps and transistors will do the job, as [Sebastian Billaudelle] has demonstrated. He has designed an electronic neuron called Lu.i that does everything a real neuron does, in a convenient package suitable for educational use.

[Sebastian]’s neuron implements what’s known as the leaky integrate-and-fire model, first proposed by [Louis Lapicque] as a simple description of a neuron’s behavior. Basically, the neuron acts as an integrator that stores incoming charge on a capacitor and generates a spiky output signal once its voltage reaches a certain threshold level. The capacitor slowly discharges, however, which means the neuron will only “fire” when it gets a strong enough input signal.
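To see the model in action without building any hardware, here is a minimal software sketch of a leaky integrate-and-fire neuron. The time constant, threshold and drive levels are illustrative values chosen for the demo, not the actual component values used in Lu.i.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron. All parameters are illustrative.
dt = 1e-4            # time step, seconds
tau = 20e-3          # membrane (RC) time constant: how quickly the "capacitor" leaks
v_rest = 0.0         # resting potential
v_thresh = 1.0       # firing threshold
v_reset = 0.0        # potential right after a spike

t = np.arange(0.0, 0.5, dt)
i_in = np.where((t > 0.1) & (t < 0.4), 1.2, 0.2)   # strong drive only in the middle

v, spikes = v_rest, []
for k, i_k in enumerate(i_in):
    v += dt / tau * (-(v - v_rest) + i_k)   # leak toward rest plus integrated input
    if v >= v_thresh:                       # threshold crossed: spike and reset
        spikes.append(t[k])
        v = v_reset

print(f"{len(spikes)} spikes, first at t = {spikes[0] * 1e3:.1f} ms" if spikes else "no spikes")
```

With the weak 0.2 drive the membrane voltage settles well below threshold and nothing happens; during the stronger 1.2 drive it charges past the threshold and produces a regular train of spikes, which is exactly the behavior described above.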

Two neuron-shaped PCBs exchanging signals

A couple of MCP6004 op-amps implement this model, with an LM339 comparator acting as the threshold detector. The neuron’s inputs are generated by electronic synapses made from logic-level MOSFETs. These circuits route signals between different neurons and can be manually set to either source or sink current, thereby increasing or decreasing the neuron’s voltage level.

All of this is built onto a neat purple PCB in the shape of a nerve cell, with external connections on the tips of its dendrites. The neuron’s internal state is made visible by an LED bar graph, giving the user an immediate feel for what’s going on inside the network. Multiple neurons can be connected together to form reasonably complex networks that can implement things like oscillators or logic functions, examples of which are shown on the project’s GitHub page.
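Building on the same model, the sketch below wires two software neurons together through mutually inhibitory synapses, with each spike sinking a fixed amount of charge from the partner. The parameters are again made up for illustration rather than taken from the Lu.i design, but they show how even a two-cell network can act as a simple anti-phase oscillator.

```python
import numpy as np

# Two leaky integrate-and-fire neurons with mutual inhibition. Each neuron gets a
# constant supra-threshold drive, and every spike "sinks" charge from the partner.
# All values are illustrative, not taken from the Lu.i hardware.
dt, tau = 1e-4, 20e-3
i_drive = 1.5                  # constant drive, above the threshold of 1.0
v_thresh, v_reset = 1.0, 0.0
w_inhibit = 0.5                # charge removed from the partner per spike

v = np.array([0.5, 0.0])       # slightly different starting points break the symmetry
spike_times = ([], [])

for step in range(int(0.2 / dt)):          # simulate 200 ms
    v += dt / tau * (i_drive - v)          # leak plus integration of the drive
    for n in (0, 1):
        if v[n] >= v_thresh:
            spike_times[n].append(step * dt)
            v[n] = v_reset
            v[1 - n] -= w_inhibit          # the synapse sinks current from the partner

for n in (0, 1):
    print(f"neuron {n} spikes at (ms):", [round(ts * 1e3, 1) for ts in spike_times[n]])
```

The two simulated neurons quickly settle into taking turns, spiking in alternation rather than in unison, which is the kind of elementary oscillator the physical boards can also be patched together to produce.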

The Lu.i project is a great way to teach the basics of neuroscience, turning dry differential equations into a neat display of signals racing around a network. Neurons are fascinating things that we’re learning more about every day, enabling things like brain-computer interfaces and neuromorphic computing.

MRI Resolution Progresses From Millimeters To Microns

Neuroscientists have been mapping and recreating the nervous systems and brains of various animals since the microscope was invented, and have even been able to map out entire brain structures thanks to other imaging techniques, with perhaps the most famous example being the 302-neuron brain of a roundworm. Studies like these advanced neuroscience considerably, but even better imaging technology is needed to study more complex neural structures like those found in a mouse or a human, and this advanced MRI machine may be just the thing to help gain a better understanding of them.

A research team led by Duke University developed this new MRI technology using an incredibly powerful 9.4 tesla magnet and specialized gradient coils, achieving an image resolution an impressive six orders of magnitude higher than a typical MRI. The voxels in the image measure just 5 microns, compared to the millimeter-level resolution available on modern MRI machines, and can reveal microscopic details within brain tissue that were previously unattainable. This breakthrough in MRI resolution has the potential to significantly advance our understanding of the neural networks found in humans by first studying neural structures in mice at this unprecedented level of detail.
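Since voxels are three-dimensional, the gain compounds cubically. The quick calculation below, which assumes a clinical voxel of roughly 1 mm on a side (an assumption; clinical voxel sizes vary), shows how a 200-fold improvement per axis works out to voxels millions of times smaller by volume.

```python
# Rough voxel-volume comparison. The 1 mm clinical voxel is an assumed typical
# value; actual clinical MRI voxels commonly range from about 0.5 mm to 1.5 mm.
clinical_voxel_um = 1000.0   # 1 mm expressed in microns
new_voxel_um = 5.0           # the 5-micron voxels described above

linear_ratio = clinical_voxel_um / new_voxel_um
volume_ratio = linear_ratio ** 3
print(f"{linear_ratio:.0f}x finer per axis, ~{volume_ratio:.1e}x smaller voxel volume")
# -> 200x finer per axis, ~8.0e+06x smaller voxel volume
```

Start from a 0.5 mm voxel instead and the ratio comes out to a neat one million, so the exact figure depends on what you compare against.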

The researchers are hopeful that this higher-powered MRI microscope will lead to new insights that translate directly into advancements in healthcare, and presuming that it can be replicated, used safely on humans, and made affordable, we would expect it to find its way into medical centers as soon as possible. Not only that, but research into neuroscience has plenty of applications outside of healthcare too, like the aforementioned 302-neuron brain of the Caenorhabditis elegans roundworm, which has been put to work in various robotics platforms to great effect.

Continue reading “MRI Resolution Progresses From Millimeters To Microns”

Mice Play In VR

Virtual Reality always seemed like a technology just out of reach, much like nuclear fusion, the flying car, or Linux on the desktop. It has been gaining steam over the last five years or so, though, with successful video games from a number of companies as well as plenty of adjacent technology, like augmented reality, coming along for the ride. Another sign that this technology might be here to stay is this virtual reality headset made for mice. Continue reading “Mice Play In VR”

Neuromorphic Computing: What Is It And Where Are We At?

For the last hundred or so years, collectively as humanity, we’ve been dreaming, thinking, writing, singing, and producing movies about a machine that could think, reason, and be intelligent in a similar way to us. Stories from “Erewhon”, published in 1872 by Samuel Butler, to Edgar Allan Poe’s “Maelzel’s Chess Player” and the 1927 film “Metropolis” explored the idea that a machine could think and reason like a person, and not in a magical or fantastical way. They drew from the automata of ancient Greece and Egypt and combined the notions of philosophers such as Aristotle, Ramon Llull, Hobbes, and thousands of others.

Their notions of the human mind led them to believe that all rational thought could be expressed as algebra or logic. Later, the arrival of circuits, computers, and Moore’s law led to continual speculation that human-level intelligence was just around the corner. Some have heralded it as the savior of humanity, while others portray it as a calamity in which a second intelligent entity rises up to crush the first (humans).

The flame of computerized artificial intelligence has burned brightly a few times before, such as in the 1950s, 1980s, and 2010s. Unfortunately, both prior AI booms were followed by an “AI winter” in which the field fell out of fashion for failing to deliver on expectations. These winters are often blamed on a lack of computing power, an inadequate understanding of the brain, or hype and over-speculation. In the midst of our current AI summer, most AI researchers focus on using the steadily increasing computing power available to increase the depth of their neural nets. Despite their name, neural nets are only loosely inspired by the neurons in the brain and share just surface-level similarities with them.

Some researchers believe that human-level general intelligence can be achieved by simply adding more and more layers to these simplified convolutional systems, fed by an ever-increasing trove of data. This point is backed up by the incredible things these networks can produce, and they get a little better every year. However, despite the wonders deep neural nets produce, they still specialize and excel at just one thing. A superhuman Atari-playing AI cannot make music or reason about weather patterns without a human adding those capabilities. Furthermore, the quality of the input data dramatically impacts the quality of the net, and the ability to make inferences is limited, producing disappointing results in some domains. Some think that recurrent neural nets will never gain the sort of general intelligence and flexibility that our brains offer.

However, some researchers are trying to create something more brainlike by, you guessed it, more closely emulating the brain. Given that we are in a golden age of computer architecture, now seems like the time to create new hardware for the job. This type of hardware is known as neuromorphic hardware.

Continue reading “Neuromorphic Computing: What Is It And Where Are We At?”

DIY Neuroscience Hack Chat

Join us on Wednesday, February 24 at noon Pacific for the DIY Neuroscience Hack Chat with Timothy Marzullo!

Watch a film about a mad scientist from the golden age of Hollywood and chances are good that among the other set pieces, you’ll see human brains floating in jars of cloudy fluid wired up to electrodes and fancy machines. It’s all made up, of course, but tropes work because they’re based on a kernel of truth, and we in the audience know that our brains and the other parts of our nervous system do indeed work on electricity. Or more precisely, excitable tissues in our nervous systems pass electrochemical signals between themselves as waves of potential across cell membranes.

Studying this electrical world locked away inside our heads is a challenging, but by no means impossible, pursuit. Usable signals can be picked up, amplified, digitized, and recorded to help us understand what’s going on when we think, feel, move, sleep, wake, or just be. Neuroscience has made tremendous strides looking at these signals, but the equipment to do so has largely remained the province of large universities and teaching hospitals with ample budgets, leaving the amateur neuroscientist out of luck.

Tim Marzullo, co-founder of Backyard Brains, is looking to change all that. While working on his Ph.D. in neuroscience at the University of Michigan, he and Greg Gage looked for ways to make the tools of neuroscience research affordable to everyone. The result is the Neuron SpikerBox, a low-cost bioamplifier that can tap into the “spikes” of action potential in live neurons. Open-source tools like these have helped educators bring neuroscience experiments to STEM students, and even helped other scientists set up novel, low-cost experiments.
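For a flavor of what the software behind such an instrument has to do, the sketch below runs a simple threshold-based spike detector over a synthetic recording. This is not Backyard Brains code; the sample rate, noise level and thresholding rule are all assumptions chosen to illustrate the idea.

```python
import numpy as np

# Toy spike detection: threshold crossing with a short refractory window.
# The synthetic signal, sample rate and threshold rule are illustrative only.
fs = 10_000                                    # assumed sample rate, Hz
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1 / fs)
signal = rng.normal(0, 20, t.size)             # background noise, in microvolts
for spike_t in (0.12, 0.40, 0.73):             # inject three fake action potentials
    idx = int(spike_t * fs)
    signal[idx:idx + 20] += 300 * np.exp(-np.arange(20) / 5.0)

threshold = 5 * np.median(np.abs(signal)) / 0.6745   # robust estimate of 5x the noise level
refractory = int(0.002 * fs)                   # ignore re-crossings for 2 ms
spike_samples, last = [], -fs
for i, x in enumerate(signal):
    if x > threshold and i - last > refractory:
        spike_samples.append(i)
        last = i

print("detected spikes at (s):", [round(i / fs, 3) for i in spike_samples])
```

The real signal chain adds analog amplification and filtering before anything reaches a threshold detector, but the basic idea of digging spikes out of a noisy trace is the same.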

Tim will join us on the Hack Chat to talk about doing DIY neuroscience and designing the instruments that make it possible. Bring your “mad scientist” questions as we push back the veil of ignorance on how our brains work, one neuron at a time.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, February 24 at 12:00 PM Pacific time (UTC-8). If time zones have you tied up, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.


Continue reading “DIY Neuroscience Hack Chat”

Open-Source Neuroscience Hardware Hack Chat

Join us on Wednesday, February 19 at noon Pacific for the Open-Source Neuroscience Hardware Hack Chat with Dr. Alexxai Kravitz and Dr. Mark Laubach!

There was a time when our planet still held mysteries, and pith-helmeted or fur-wrapped explorers could sally forth and boldly explore strange places for what they were convinced was the first time. But with every mountain climbed, every depth plunged, and every desert crossed, fewer and fewer places remained to be explored, until today there’s really nothing left to discover.

Unless, of course, you look inward to the most wonderfully complex structure ever found: the brain. In humans, the 86 billion neurons contained within our skulls make trillions of connections with each other, weaving the unfathomably intricate pattern of electrochemical circuits that make you, you. Wonders abound there, and anyone seeing something new in the space between our ears really is laying eyes on it for the first time.

But the brain is a difficult place to explore, and specialized tools are needed to learn its secrets. Lex Kravitz, from Washington University, and Mark Laubach, from American University, are neuroscientists who’ve learned that sometimes you have to invent the tools of the trade on the fly. While exploring topics as wide-ranging as obesity, addiction, executive control, and decision making, they’ve come up with everything from simple jigs for brain sectioning to full feeding systems for rodent cages. They incorporate microcontrollers, IoT, and tons of 3D-printing to build what they need to get the job done, and they share these designs on OpenBehavior, a collaborative space for the open-source neuroscience community.

Join us for the Open-Source Neuroscience Hardware Hack Chat this week where we’ll discuss the exploration of the real final frontier, and find out what it takes to invent the tools before you get to use them.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, February 19 at 12:00 PM Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about. Continue reading “Open-Source Neuroscience Hardware Hack Chat”