Musical Mod Lets MRI Scanner Soothe the Frazzled Patient

Hackers love to make music with things that aren’t normally considered musical instruments. We’ve all seen floppy drive orchestras, and the musical abilities of a Tesla coil can be ear-shatteringly impressive. Those are all just for fun, though. It would be nice if there were practical applications for making music from normally non-musical devices.

Thanks to a group of engineers at Case Western Reserve University in Cleveland, there is now: a magnetic resonance imaging machine that plays soothing music. And we don’t mean music piped into the MRI suite to distract patients from the notoriously noisy exam. The music is actually being played through the gradient coils of the MRI scanner. We covered the inner workings of MRI scanners before and discussed why they’re so darn noisy. The noise basically amounts to Lorentz forces mechanically vibrating the gradient coils in the audio frequency range as the machine shapes the powerful magnetic field around the patient’s body. To turn these ear-hammering noises into music, the researchers converted an MP3 of [Yo-Yo Ma] playing [Bach]’s “Cello Suite No. 1” into encoding data for the gradient coils. A low-pass filter keeps anything above 4 kHz from reaching the gradient coils, but that works fine for the cello. The video below shows the remarkable fidelity the coils are capable of, but the most amazing part is that the musical modification still produces diagnostically useful scans.

Our tastes don’t generally run to classical music, but having suffered through more than one head-banging scan, a half-hour of cello music would be a more than welcome change. Here’s hoping the technique gets further refined.

Continue reading “Musical Mod Lets MRI Scanner Soothe the Frazzled Patient”

Eyes On The Prize Of Glucose Monitoring

That people with diabetes have to monitor their blood regularly should come as no shock, but unless you are in the trenches you may not appreciate exactly what that entails or how awful it can be. To give a quick idea: some diabetics risk entering a coma or shock because drawing blood is painful or impractical at the moment. The holy grail of current research is a continuous monitor that doesn’t break the skin and can be used at home. Unaided monitoring is also needed to control automatic insulin pumps.

Alphabet, the parent company of Google, gave up where Noviosense, a Netherlands company owned by [Dr. Christopher Wilson], may gain some footing. Instead of contact lenses, which can alter the flow of fluids across the eye, Noviosense places their sensor below the lower eyelid. Fluids here flow regardless of emotion or pain, so the readings correspond to the current glucose level. Traditionally, glucose levels are taken through blood or interstitial fluid, aka tissue fluid. Blood readings are the most accurate, but interstitial fluid is reliable enough to gauge the need for insulin injection, and the initial trial under the eyelid showed readings on par with the interstitial measurements.

Hackers are not taking diabetes lying down: some are developing their own insulin and others are building an electronic pancreas.

Via IEEE Spectrum.

Why is Continuous Glucose Monitoring So Hard?

Everyone starts their day with a routine, and like most people these days, mine starts by checking my phone. But where most people look for the weather update, local traffic, or even check Twitter or Facebook, I use my phone to peer an inch inside my daughter’s abdomen. There, a tiny electrochemical sensor continuously samples the fluid between her cells, measuring the concentration of glucose so that we can control the amount of insulin she’s receiving through her insulin pump.

Type 1 diabetes is a nasty disease, usually sprung on the victim early in life and making every day a series of medical procedures – calculating the correct amount of insulin to use for each morsel of food consumed, dealing with the inevitable high and low blood glucose readings, and pinprick after pinprick to test the blood. Continuous glucose monitoring (CGM) has been a godsend to us and millions of diabetic families, as it gives us the freedom to let our kids be kids and go on sleepovers and have one more slice of pizza without turning it into a major project. Plus, good control of blood glucose means less chance of the dire consequences of diabetes later in life, like blindness, heart disease, and amputations. And I have to say I think it’s pretty neat that I have telemetry on my child; we like to call her our “cyborg kid.”

But for all the benefits of CGM, it’s not without its downsides. It’s wickedly expensive in terms of consumables and electronics, it requires an invasive procedure to place sensors, and even in this age of tiny electronics, it’s still comparatively bulky. It seems like we should be a lot further along with the technology than we are, but as it turns out, CGM is actually pretty hard to do, and there are some pretty solid reasons why the technology seems stuck.

Continue reading “Why is Continuous Glucose Monitoring So Hard?”

Katrina Nguyen Automates Her Mice

When embarking on a career in the life sciences, it seems like the choice of which model organism to study has more than a little to do with how it fits into the researcher’s life. I once had a professor who studied lobsters, ostensibly because they are a great model for many questions in cell biology; in actuality, he just really liked to eat lobster. Another colleague I worked with studied salt transport in shark rectal glands, not because he particularly liked harvesting said glands — makes the sharks a tad grumpy — but because he really liked spending each summer on the beach.

Not everyone gets to pick a fun or delicious model organism, though, and most biologists have had to deal with the rats and mice at some point. It’s hard to believe how needy these creatures can be in terms of care and feeding, and doubly so when feeding is part of the data you’re trying to collect from them. Graduate student Katrina Nguyen learned this the hard way, but rather than let her life be controlled by a bunch of rodents, she hacked a solution that not only improved her life, but also improved her science. She kindly dropped by the Hackaday Superconference to tell us all about how she automated her research.

Continue reading “Katrina Nguyen Automates Her Mice”

Overlooked Minimalism in Assistive Technology

If your eyes are 20/20, you probably do not spend much time thinking about prescription eyeglasses. It is easy to overlook that sort of thing, and we will not blame you. When we found this creation, it was over two years old, but we had not seen anything quite like it. The essence of the Bear Paw Assistive Eating Aid is a swiveling magnet atop a suction cup base. Simple, right? You may already be thinking about how you could build or model that up in a weekend, and it would not be a big deal. The question is, could you make something like this if you had not seen it first?

Over-engineered inventions with lots of flexibility and room for expansion have their allure. When you first learn Arduino, every problem looks like a job for that inexpensive demo board, and one day you find yourself wearing an ATmega wristwatch. Honestly, we love those just as much, but for an entirely different reason. When all the bells and whistles are gone, when there is nothing left but a robust creation that “just works,” you have created something beautiful. Judging by the YouTube comments on the video, which can be seen below the break, those folks have no trouble overlooking the charm of this device: the word “beard” appears 95 times, plus one misspelling for a “bread” count of one. Hackaday readers are of a higher caliber and should be able to appreciate its elegance.

The current high-tech solution for self-feeding is a robot arm, not unlike this one, which is where our minds went when we heard about an invention for eating without using hands, and we will always be happy to talk about robot arms.

Continue reading “Overlooked Minimalism in Assistive Technology”

Kind of the Opposite of a Lightsaber

Lightsabers are an elegant weapon for a more civilized age. Did you ever consider that cutting people’s hands off with a laser sword means automatically cauterized wounds, and that the lack of blood results in a gentler rating from the Motion Picture Association? Movie guidelines aside, cauterizing pens are found in some first aid kits, but at their core they are just a power source and a heating filament. Given the state of medical technology, this is due for an upgrade, and folks at Arizona State University are hitting all the marks with a combination of near-infrared lasers, gold particles, and protein matrix from silk.

Cauterizing relies on intense heat or chemicals to burn flesh, but this process uses less power by aiming the near-IR laser at only the selected areas, and since near-IR can penetrate soft tissue, it goes deep without extra heating. The laser heats the gold, and that activates the silk proteins. Early results are positive, but lots of testing remains, and it still will not belong in the average first aid kit for a while, lasers and all. Still, this advance could cut recovery time after surgery for beloved pets and tolerable humans.

If this doesn’t sate your need for magical space knight weaponry, we have options aplenty.

Via IEEE Spectrum. Image: starwars.com

Brain Cell Electronics Explains Wetware Computing Power

Neural networks use electronic analogs of the neurons in our brains. But it doesn’t seem likely that just making enough electronic neurons would create a human-brain-like thinking machine. Consider that animal brains are sometimes larger than ours — a sperm whale’s brain weighs 17 pounds — yet we don’t think they are as smart as humans, or even as smart as dogs, which have much smaller brains. MIT researchers have discovered differences between human brain cells and animal ones that might help clear up some of that mystery. You can see a video about the work they’ve done below.

Neurons have long finger-like structures known as dendrites. These act like comparators, taking input from other neurons and firing if the inputs exceed a threshold. As with any conductor, the longer the dendrite, the weaker the signal. Naively, this seems bad for humans. To understand why, consider a rat. A rat’s cortex has six layers, just like ours. However, whereas the rat’s brain is tiny and 30% cortex, our brains are much larger and 75% cortex. So a dendrite reaching from layer 5 to layer 1 has to be much longer than the analogous dendrite in the rat’s brain.

These longer dendrites do lead to more loss in human brains, and the MIT study confirmed this using human brain cells — healthy ones removed during surgery to gain access to diseased tissue. The researchers think this greater loss is actually a benefit to humans, however, because it helps isolate neurons from one another, increasing the computing capability of a single neuron. One of the researchers called this “electrical compartmentalization.” Dig into the conclusions found in the research paper.

We couldn’t help but wonder whether this research offers new insights into neural network computing. We already use numeric weights to simulate dendrite threshold action, so presumably learning algorithms already make links weaker when that helps. However, maybe the takeaway is that less interaction between neurons and groups of neurons can be more helpful than more interaction.

Watching them probe neurons under the microscope reminded us of probing on an IC die. There’s a close tie between understanding the brain and building better machines, so we try to keep an eye on the research going on in that area.

Continue reading “Brain Cell Electronics Explains Wetware Computing Power”