The build starts with an off-the-shelf lamp base and a smart LED bulb as the light source, though you could swap those out for something like a microcontroller, a USB power supply, and addressable LEDs if you were so inclined. The software package 3D Slicer is then used to take an MRI brain scan and turn it into something you can actually 3D print. It takes some cleanup to remove artifacts and hollow out the model, but it's straightforward enough to get a decent brain out of the data, and if you don't have a scan of your own, you can always use someone else's. Then all you have to do is print it in two halves, pop it on the lamp base, and you're done!
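If you'd rather script the scan-to-mesh step than click through Slicer, a rough equivalent can be cobbled together in Python. This is a minimal sketch only, assuming a skull-stripped NIfTI volume named brain.nii.gz (a hypothetical filename), and it skips the artifact cleanup and hollowing the build calls for:

```python
# Minimal sketch: turn an MRI volume into an STL surface for printing.
# Assumes brain.nii.gz exists; Slicer's cleanup and hollowing steps are not replicated here.
import nibabel as nib
import numpy as np
from skimage import measure
from stl import mesh  # pip install numpy-stl

volume = nib.load("brain.nii.gz").get_fdata()

# Pick an intensity threshold that separates tissue from background;
# the right level depends entirely on your scan.
level = volume.mean() + volume.std()
verts, faces, _, _ = measure.marching_cubes(volume, level=level)

# Pack the triangles into an STL mesh for the print slicer of your choice.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    surface.vectors[i] = verts[face]
surface.save("brain.stl")
```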
It’s a pretty neat build. Who wouldn’t love telling their friends that their new brain lamp was an accurate representation of their own grey noodles, after all? It could be a fun gift next time Halloween rolls around, too!
Exploring the mysteries of quantum mechanics surely seems like an endeavor that requires room-sized equipment and racks of electronics, along with large buckets of grant money, to accomplish. And while that’s generally true, there’s quite a lot that can be accomplished on a considerably more modest budget, as this as-simple-as-it-gets nuclear magnetic resonance spectrometer amply demonstrates.
First things first: Does the “magnetic resonance” part of “NMR” bear any relationship to magnetic resonance imaging? Indeed it does, as the technique of lining up nuclei in a magnetic field, perturbing them with an electromagnetic field, and receiving the resultant RF signals as the nuclei snap back to their original spin state lies at the heart of both. And while MRI scanners and the large NMR spectrometers used in analytical chemistry labs both use extremely powerful magnetic fields, [Andy Nicol] shows us that even the Earth’s magnetic field can be used for NMR.
[Andy]’s NMR setup couldn’t be simpler. It consists of a coil of enameled copper wire wound on a 40 mm PVC tube and a simple control box with nothing more than a switch and a couple of capacitors. The only fancy bit is a USB audio interface, which is used to amplify and digitize the 2-kHz-ish signal generated by hydrogen atoms when they precess in Earth’s extremely weak magnetic field. A tripod stripped of all ferrous metal parts is also handy, as this setup needs to be outdoors where interfering magnetic fields can be minimized. In use, the coil is charged with a LiPo battery for about 10 seconds before being rapidly switched to the input of the USB amp. The resulting resonance signal is visualized using the waterfall display on SDR#.
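If you'd rather look at a recording after the fact than watch SDR# live, a similar waterfall view can be approximated in a few lines of Python. This is only a sketch, assuming the USB interface's output has been saved to a hypothetical capture.wav; levels and sample rate will depend on your hardware:

```python
# Sketch: approximate a waterfall display by plotting a spectrogram of the
# recorded audio and looking for the precession line near 2 kHz.
import numpy as np
from scipy.io import wavfile
from scipy import signal
import matplotlib.pyplot as plt

rate, samples = wavfile.read("capture.wav")   # hypothetical recording from the USB audio interface
samples = samples.astype(float)
if samples.ndim > 1:
    samples = samples[:, 0]                   # keep one channel if the recording is stereo

f, t, Sxx = signal.spectrogram(samples, fs=rate, nperseg=4096)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.ylim(1500, 2500)                          # zoom in around the expected ~2 kHz signal
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```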
[Andy] includes a lot of helpful tips in his excellent write-up, like tuning the coil with capacitors, minimizing noise, and estimating the exact resonance frequency expected based on the strength of the local magnetic field. It’s a great project and a good explanation of how NMR works. And it’s nowhere near as loud as an MRI scanner.
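That frequency estimate comes straight from the Larmor relation: the precession frequency is the proton's gyromagnetic ratio times the local field strength. A rough worked example, with a placeholder field value you'd replace with the figure for your own location:

```python
# Expected proton precession frequency in the Earth's magnetic field.
# f = gamma * B, with gamma ~= 42.577 MHz/T for hydrogen nuclei.
GAMMA_PROTON = 42.577e6      # Hz per tesla

earth_field = 50e-6          # tesla; roughly 25-65 uT depending on where you live
frequency = GAMMA_PROTON * earth_field
print(f"Expected precession frequency: {frequency:.0f} Hz")   # ~2129 Hz for a 50 uT field
```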
Of all the high-tech medical gadgets we often read about, the Magnetic Resonance Imaging (MRI) machine is possibly the most mysterious. The ability to peer inside a living body in a minimally invasive manner, differentiating tissue types in near real-time, was the stuff of science fiction not too many years ago. Now it’s commonplace. But how does the machine actually work? Real Engineering on YouTube presents the Insane Engineering of MRI Machines to help us along this learning curve, at least in a little way.
The basic principle of operation is to align the spin ‘axis’ of all the subject’s hydrogen nuclei using an enormous magnetic field produced by a liquid-helium-cooled superconducting electromagnet. The spins are then perturbed with a carefully tuned radio frequency pulse delivered via a large drive coil.
After a short time, the spins revert back to align with the magnetic field, re-emitting a radio pulse at the same frequency. Every single hydrogen nucleus (just a proton!) responds at roughly the same time, with the combined signal being detected by the receive coil (often the same physical coil as the driver).
There are two main issues to solve. Obviously, the whole body section is ‘transmitting’ this radio signal in one big pulse, so how do you identify the different areas of 3D space (i.e. the different body structures), and how do you differentiate (referred to as contrast) between different tissue types, such as determining whether something is bone or fat?
By looking at the decay envelope of the return pulse, two separate measures with different periods can be determined: T1, the spin-lattice relaxation period, and T2, the spin-spin relaxation period. The first is a measure of how long it takes the spins to realign with the main field, and the second measures how long it takes the individual interactions between neighboring atoms in the subject to settle down. The values of T1 and T2 are programmed into the machine to adjust the pulse rate and observation time to favor the detection of one or the other effect, effectively selecting the type of tissue to be resolved.
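To make the two time constants concrete, here's a sketch of the textbook relaxation curves. The numbers are only representative; real tissue values vary widely, which is exactly what gives the contrast:

```python
# Sketch of the two relaxation processes the machine is timing.
# T1: longitudinal magnetization recovers toward the main field.
# T2: transverse magnetization (the detected signal) dephases and decays.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 3.0, 500)        # seconds after the RF pulse
T1, T2 = 0.8, 0.1                   # representative values; real tissues differ

Mz = 1 - np.exp(-t / T1)            # recovery along the main field
Mxy = np.exp(-t / T2)               # decay of the measurable transverse signal

plt.plot(t, Mz, label="Longitudinal recovery (T1)")
plt.plot(t, Mxy, label="Transverse decay (T2)")
plt.xlabel("Time after RF pulse (s)")
plt.ylabel("Relative magnetization")
plt.legend()
plt.show()
```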
The second issue is more complex. Spatial resolution is achieved by first selecting a plane to virtually slice the body into a 2D image. Because the frequency of the RF pulse needed to knock the proton spin out of alignment depends upon the magnetic field strength, overlaying a second magnetic field via a gradient coil allows the local field to be tuned along the axis of the machine. With a corresponding tweak to the RF frequency, an entire body slice can be selected.
All RF emissions from the subject then emanate from just the selected slice, reducing the 3D resolution problem to a 2D one. Finally, a similar trick is applied orthogonally, with another set of gradient coils adjusting the relative phase of the spins in stripes of atoms through the slice. This enables a 2D inverse Fourier transform of the multiple phase and frequency combinations to image the slice from every angle, and a 2D image of the subject can then be reconstructed and sent to the display computer for the operator to observe.
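That reconstruction step is, at its core, just a two-dimensional inverse Fourier transform of the acquired frequency- and phase-encoded samples (the so-called k-space matrix). A toy sketch, using random numbers as a stand-in for real acquired data:

```python
# Toy sketch: an MRI slice is reconstructed by inverse-Fourier-transforming
# the matrix of frequency/phase-encoded samples (k-space).
import numpy as np

k_space = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)  # stand-in for acquired data

image = np.fft.ifft2(np.fft.ifftshift(k_space))   # 2D inverse FFT back to image space
magnitude = np.abs(image)                          # what actually gets displayed

print(magnitude.shape)   # a 256 x 256 pixel slice
```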
Neuroscientists have been mapping and recreating the nervous systems and brains of various animals since the microscope was invented, and have even been able to map out entire brain structures thanks to other imaging techniques, with perhaps the most famous example being the 302-neuron brain of a roundworm. Studies like these advanced neuroscience considerably, but even better imaging technology is needed to study more advanced neural structures like those found in a mouse or human, and this advanced MRI machine may be just the thing to help gain a better understanding of these structures.
A research team led by Duke University developed this new MRI technology using an incredibly powerful 9.4 Tesla magnet and specialized gradient coils, leading to an image resolution an impressive six orders of magnitude higher than that of a typical MRI. The voxels in the image measure just 5 microns, compared to the millimeter-scale resolution available on modern clinical machines, revealing microscopic details within brain tissue that were previously unattainable. This breakthrough in MRI resolution has the potential to significantly advance our understanding of the neural networks found in humans by first studying neural structures in mice at this unprecedented level of detail.
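To put that resolution jump in perspective, the gain is best counted by voxel volume rather than edge length. A rough back-of-the-envelope comparison against a 1 mm clinical voxel:

```python
# Back-of-the-envelope: how much smaller is a 5 micron voxel than a 1 mm one?
clinical_voxel = 1e-3      # 1 mm edge, in meters
microscope_voxel = 5e-6    # 5 micron edge, in meters

linear_ratio = clinical_voxel / microscope_voxel   # 200x finer along each axis
volume_ratio = linear_ratio ** 3                   # ~8,000,000x smaller by volume

print(f"{linear_ratio:.0f}x per axis, {volume_ratio:.0e}x by volume")
```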
The researchers are hopeful that this higher-powered MRI microscope will lead to new insights that translate directly into advancements in healthcare, and presuming that it can be replicated, used safely on humans, and made affordable, we would expect it to find its way into medical centers as soon as possible. Not only that, but research into neuroscience has plenty of applications outside of healthcare too, like the aforementioned 302-neuron brain of the Caenorhabditis elegans roundworm, which has been put to work in various robotics platforms to great effect.
You’ve probably had a company not support one of your devices as long as you’d like, whether it was a smart speaker or a phone, but what happens if you have a medical implant that is no longer supported? [Liam Drew] did a deep dive on what the failure of several neurotechnology startups means for the patients using their devices.
Recent advances in electronics and neurology have led to new treatments for neurological problems with implantable devices like the Autonomic Technologies (ATI) implant for managing cluster headaches. Now that the company has gone out of business, users are left on their own trying to hack the device to increase its lifespan or turning back to pharmaceuticals that don’t do the job as well as tapping directly into the nervous system. Since removing defunct implants is expensive (up to $40k!) and includes the usual list of risks for surgery, many patients have opted to keep their nonfunctional implants.
This is a harrowing tale of closed-source technology, and how a medical device that relies on proprietary hardware and software essentially holds its users hostage to the financial well-being of the company that produces it. When that company is a brash startup, with plans of making money by eventually pivoting away from retinal implants to direct cortical stimulation (a technology that's in its infancy at best right now), that's a risky bet to take. But these were people with no other alternative, and the technology is, or was, amazing.
One blind man with an implant may or may not have brain cancer; he says he can't get an MRI to find out because Second Sight won't release details about his implant. Bugs in your eyes? When the firm laid off its rehab therapists, patients were told they weren't going to get any more software updates.
If we were CEO of Second Sight, we know what we would do with our closed-source software and hardware right now. The company is facing bankruptcy, has lost significant credibility in the medical devices industry, and is looking to pivot away from the Argus system anyway. They have little to lose, and a tremendous amount of goodwill to gain, by enabling people to fix their own eyes.
Thanks to [Adrian], [Ben], [MLewis], and a few other tipsters for getting this one in!
Among brain researchers there’s a truism that says the reason people underestimate how much unconscious processing goes on in their brains is that they’re not conscious of it. And while there is a lot of unconscious processing, the truism also points out a duality: your brain does both processing that leads to consciousness and processing that does not. As you’ll see below, this duality has opened up a scientific approach to studying consciousness.
Are Subjective Results Scientific?
In science we’re used to empirical test results: measurements made in a verifiable way, a reading from a calibrated meter that can be made again and again by different people. But what if all you have to go on is what a person says they are experiencing, a subjective observation? That doesn’t sound very scientific.
That lack of non-subjective evidence is a big part of what stalled scientific research into consciousness for many years. But consciousness is unique. While we have measuring tools for observing brain activity, how do you know whether that activity is contributing to a conscious experience or is unconscious? The only way is to ask the person whose brain you’re measuring. Are they conscious of an image being presented to them? If not, then it’s being processed unconsciously. You have to ask them, and their response is, naturally, subjective.
Skepticism about subjective results, along with a lack of tools, held back scientific research into consciousness for many years. It was taboo to even use the C-word until the 1980s, when researchers decided that subjective results were okay. Since then, there’s been a great deal of scientific research into consciousness, and what follows is a sampling of that research. As you’ll see, it’s even saved a life or two.