Neural networks use electronic analogs of the neurons in our brains. But it doesn’t seem likely that just making enough electronic neurons would create a human-brain-like thinking machine. Consider that some animal brains are larger than ours (a sperm whale’s brain weighs about 17 pounds), yet we don’t think whales are as smart as humans, or even as smart as dogs, which have much smaller brains. MIT researchers have discovered differences between human and animal brain cells that might help clear up some of that mystery. You can see a video about the work they’ve done below.
Neurons have long finger-like structures known as dendrites. These act like comparators, taking input from other neurons and firing if the inputs exceed a threshold. As with any conductor, the longer the dendrite, the weaker the signal it carries. Naively, this seems bad for humans. To understand why, consider a rat. A rat’s cortex has six layers, just like ours. However, whereas the rat’s brain is tiny and 30% cortex, our brains are much larger and 75% cortex. So a dendrite reaching from layer 5 to layer 1 has to be much longer than the analogous dendrite in a rat’s brain.
These longer dendrites do lead to more signal loss in human brains, and the MIT study confirmed this using human brain cells: healthy tissue that surgeons had to remove to reach diseased tissue. The researchers think this greater loss is actually a benefit, however, because it helps isolate neurons from one another, increasing the computing capability of a single neuron. One of the researchers called this “electrical compartmentalization.” Dig into the conclusions in the research paper.
We couldn’t help but wonder whether this research offers new insights into neural network computing. We already use numeric weights to simulate dendrite threshold action, so presumably learning algorithms already weaken links when that helps. Still, one takeaway may be that less interaction between neurons, and between groups of neurons, is sometimes more useful than more.
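As a toy illustration of the comparator behavior described above, here’s a thresholded weighted sum in which longer dendrites attenuate their inputs. The exponential attenuation model and all of the constants are our own invention for illustration, not something taken from the paper:

```python
import math

def dendrite_fires(inputs, lengths, threshold=1.0, decay=0.5):
    """Toy dendrite: sum the inputs, each attenuated by an exponential
    factor that grows with dendrite length, and fire over a threshold."""
    total = sum(x * math.exp(-decay * length)
                for x, length in zip(inputs, lengths))
    return total > threshold

# Identical inputs over short vs. long dendrites: the longer path
# loses enough signal that the neuron stays quiet.
dendrite_fires([0.8, 0.8], lengths=[0.1, 0.1])  # True (fires)
dendrite_fires([0.8, 0.8], lengths=[2.0, 2.0])  # False (too much loss)
```

The “electrical compartmentalization” idea maps loosely onto the attenuation term: enough loss, and a distant input simply can’t push the sum over the threshold on its own.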
If someone suggests you spend time working on boring projects, would you take that advice? In this case, I think Kipp Bradford is spot on. We sat down together at the Hackaday Superconference last fall and talked about medical device engineering, the infrastructure in your home, applying Sci-Fi to engineering, and yes, we spoke about boring projects.
Kipp presented a talk on Devices for Controlling Climates at Supercon last year. It could be argued that this is one of those boring topics, but very quickly you begin to grasp how vitally important it is. Think about how many buildings on your street have a heating or cooling system in them. Now zoom out in your mind several times to neighborhood, city, state, and country level. How much impact will a small leap forward have when multiplied up?
Let’s face it, one of the challenges of wearable electronics is that people are filthy. Anything you wear is going to get dirty. The side that touches you picks up sweat, oil, and who knows what else, while the other side collects spills, dirt, and all sorts of things we don’t want to think about. For regular clothes, that’s not a problem: you just pop them in the washer. You can’t say the same for wearable electronics. Now researchers at MIT have embedded diodes, such as LEDs and photodetectors, into a soft fabric that is washable.
Traditionally, such fibers start as a larger preform that is heated and drawn down into fiber. The researchers added tiny diodes and very fine copper wires to the preform. As the preform is drawn, the fiber’s polymer keeps the solid components connected and centered. The polymer also protects the electronics from water, and the team successfully laundered fabric made with these fibers ten times.
Stand up right now and walk around for a minute. We’re pretty sure you didn’t look at every spot where you stepped, nor did you plan each step meticulously from visual input. So why should robots? Wouldn’t your robot be more versatile if it could use its vision to plan a path, but leave most of the walking to the legs with the help of various sensors and knowledge of joint positions?
That’s the approach [Sangbae Kim] and a team of researchers at MIT are taking with their Cheetah 3. They’ve given it cameras but aren’t using them yet. Instead, they’re making sure it can move around blind first. So far they have it walking, running, jumping and even going up stairs cluttered with loose blocks and rolls of tape.
Two algorithms are at the heart of its ability to move around blind.
The first is a contact detection algorithm, which decides whether a leg should transition from swinging to stepping based on knowledge of the joint positions and data from gyroscopes and accelerometers. If the robot tilts unexpectedly after stepping on a loose block, this is the algorithm that decides what the legs should do.
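A heavily simplified sketch of what such a contact detector might look like (this is our own toy version with made-up thresholds, not MIT’s actual code):

```python
def leg_mode(foot_force_est, tilt_rate,
             force_threshold=5.0, tilt_threshold=0.4):
    """Toy contact detector: switch a leg from swing to stance when an
    estimated foot force (derived from joint torques) or the gyro tilt
    rate suggests the foot has unexpectedly touched down."""
    if foot_force_est > force_threshold or abs(tilt_rate) > tilt_threshold:
        return "stance"  # foot loaded or body lurched: plant the leg
    return "swing"       # leg still free: keep swinging

# Stepping on a loose block loads the foot mid-swing, so plant it.
leg_mode(foot_force_est=12.0, tilt_rate=0.1)  # -> "stance"
leg_mode(foot_force_est=0.5, tilt_rate=0.05)  # -> "swing"
```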
The second is a model-predictive algorithm, which predicts what force a leg should apply once the decision has been made to take a step. It does this by calculating the positions of the robot’s body and legs a half second into the future, and it runs 20 times a second. These predictions are what help the robot handle situations such as being shoved or tugged on a leash, letting it regain its balance or continue in the direction it was headed.
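To make the idea concrete, here’s a one-dimensional toy of that predictive loop: roll the body’s position forward ten 50 ms steps (half a second) for each candidate leg force, then pick the force whose prediction lands closest to the target. The dynamics, mass, and force candidates are all invented for illustration and are nothing like the real controller:

```python
def predict_x(x, v, force, mass=40.0, dt=0.05, steps=10):
    """Integrate 1-D body position half a second ahead (10 steps of 50 ms)."""
    for _ in range(steps):
        v += (force / mass) * dt
        x += v * dt
    return x

def best_force(x, v, target_x, candidates=(-200, -100, 0, 100, 200)):
    """Toy model-predictive step: choose the leg force whose half-second
    prediction ends up closest to where the body should be."""
    return min(candidates, key=lambda f: abs(predict_x(x, v, f) - target_x))

# Shoved backward (negative velocity): the planner picks a forward push.
best_force(x=0.0, v=-0.5, target_x=0.0)  # -> 100
```

Rerunning this selection 20 times a second is what lets the controller keep correcting as a shove or a tug unfolds.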
This quadruped has a number of other awesome features we haven’t seen in others such as Boston Dynamics’ SpotMini, including invertible knee joints and walking on three legs. Check out those features and more in the video below.
Most of us take it for granted that water is as close as the kitchen tap. But that’s not true everywhere. Two scientists at MIT have a new method for harvesting water from fog, especially the fog released from cooling towers such as those found at power plants. It turns out harvesting water from fog isn’t a new idea: you typically insert a mesh into the air and collect water droplets from the fog. The problem is that, with a typical droplet diameter of 10 microns, most droplets miss the mesh, so these collectors typically extract no more than 2% of the water content in the air.
The team found two reasons for the low efficiency. First, water clogs the mesh openings, which can be somewhat mitigated by using coated meshes that shed water quickly; even in the lab, that only raises the yield to about 10%. The bigger problem is that only some of the droplets hit the mesh in the first place, and even those that do may not stick because of aerodynamic drag. Finer meshes help but are harder to make and have low structural integrity. Their solution? Inject ions into the fog to charge the water droplets, and give the mesh the opposite charge.
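A crude simulation shows why the charge helps: an uncharged droplet drifts straight past a mesh wire, while a charged one accelerates toward it and gets captured. The geometry, timing, and acceleration below are made-up illustrative numbers, not values from the study:

```python
def droplet_captured(y0, charge_accel, wire_r=0.001,
                     transit_time=0.02, dt=1e-4):
    """Toy model: a droplet passes a wire at lateral offset y0 (meters).
    charge_accel is the electrostatic acceleration toward the wire
    (zero for an uncharged droplet). Returns True if it hits the wire."""
    y, vy, t = y0, 0.0, 0.0
    while t < transit_time:
        vy -= charge_accel * dt   # constant pull toward the wire at y = 0
        y += vy * dt
        if abs(y) < wire_r:
            return True           # droplet reached the wire: captured
        t += dt
    return False                  # swept past on the airflow: missed

droplet_captured(y0=0.005, charge_accel=0.0)    # False: uncharged, misses
droplet_captured(y0=0.005, charge_accel=100.0)  # True: charged, captured
```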
When engineering a solution to a problem, an often-successful approach is to keep the design as simple as possible. Simple things are easier to produce, maintain, and use. Whether you’re building a robot, an operating system, or an automobile, this kind of design helps in many ways. Now, researchers at MIT’s Little Devices Lab have applied this philosophy to testing for various medical conditions, using a set of modular blocks.
Each block is designed for a specific purpose, and can be linked together with other blocks. For example, one block may be able to identify Zika virus, and another block could help determine blood sugar levels. By linking the blocks together, a healthcare worker can build a diagnosis system catered specifically for their needs. The price tag for these small, simple blocks is modest as well: about $0.015, or one and a half cents per block. They also don’t need to be refrigerated or handled specially, and some can be reused.
This is an impressive breakthrough poised to help not only low-income people around the world, but anyone who needs quick, accurate medical diagnoses at minimal cost. Keeping things simple and modular opens up all kinds of possibilities, as we recently covered in the world of robotics.
While robots have been making our lives easier and our assembly lines more efficient for over half a century now, we haven’t quite cracked a Jetsons-like general-purpose robot yet. Sure, Boston Dynamics and MIT have some humanoid robots that are fun to kick and knock over, but they’re far from building a world-ending Terminator automaton.
But not every robot needs to be human-shaped in order to be general-purpose. Some of the more interesting designs being researched are modular robots: an approach that uses smaller units which combine into assemblies that accomplish a given task.
We’ve been immersing ourselves in topics like this one because the Robotics Module Challenge is the current focus of the Hackaday Prize. We’re looking for any modular designs that make it easier to build robots: motor drivers, sensor arrays, limb designs, your imagination is the limit. But self-contained robot modules that combine into larger robots are a fascinating idea that definitely fits this challenge. Join me for a look at where modular robots are now, and where we’d like to see them going.