Uploaded by [themonitorsolution], each fourteen-minute 1080p video depicts what a patient monitor would look like in various situations, ranging from an adult in stable condition to individuals suffering from ailments such as COPD and sepsis. There’s even one for a dead patient, which makes for rather morbid watching.
Now we assume these are intended for educational purposes — throw them up on a display and have trainees attempt to diagnose what’s wrong with the virtual patient. But we’re sure clever folks like yourselves could figure out alternate uses for these realistic graphics. They could make for an impressive Halloween prop, or maybe they are just what you need to get that low-budget medical drama off the ground, finally.
Honestly, it seemed too cool of a resource not to point out. Besides, it’s exceedingly rare that we get to post a YouTube video that we can be confident none of our readers have seen before…at the time of this writing, the channel only has a single subscriber. Though with our luck, that person will end up being one of you lot.
Medical equipment is not generally known for being inexpensive, with various imaging systems usually weighing in at over a million dollars, and even relatively simple pieces of technology like digital thermometers, stethoscopes, and pulse oximeters coming in somewhere around $50. As technology improves we expect costs to creep downward, but every now and then a revolutionary piece of technology will drop the cost of something like a blood pressure monitor by over an order of magnitude.
Typically a blood pressure monitor involves a cuff that pressurizes against a patient’s arm and measures the physical pressure of the blood as the heart forces it through the area restricted by the cuff. But there are some ways to measure blood pressure by proxy, instead of directly. This device, a small piece of plastic with a cost of less than a dollar, attaches to a smartphone near the camera sensor and flashlight. By pressing a finger onto the device, the smartphone uses the flashlight and the camera in tandem to measure subtle changes in the skin, which can be processed in an app to approximate blood pressure.
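The app's actual blood-pressure model isn't public, but the first step — pulling a pulse waveform out of camera frames of a flashlight-lit fingertip — is straightforward to sketch. This is an illustrative stand-in using synthetic frames, not the developers' code; a real app would read live camera data:

```python
import numpy as np

FPS = 30  # assumed camera frame rate

def ppg_from_frames(frames):
    """Average red-channel brightness per frame gives a raw PPG waveform."""
    return np.array([f[..., 0].mean() for f in frames])

def pulse_rate_bpm(ppg, fps=FPS):
    """Find the dominant frequency in the plausible pulse band via an FFT."""
    spectrum = np.abs(np.fft.rfft(ppg - ppg.mean()))
    freqs = np.fft.rfftfreq(len(ppg), d=1 / fps)
    band = (freqs > 0.7) & (freqs < 3.3)  # roughly 42-200 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic stand-in for 10 s of fingertip video: brightness pulsing at 1.2 Hz
t = np.arange(0, 10, 1 / FPS)
frames = [np.full((4, 4, 3), 128 + 10 * np.sin(2 * np.pi * 1.2 * s)) for s in t]
bpm = pulse_rate_bpm(ppg_from_frames(frames))  # 72 bpm for this synthetic signal
```

Estimating blood pressure from that waveform is the genuinely hard part, and it's where the developers' trained model comes in.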
The developers of this technology note that it’s not a one-to-one substitute for a traditional blood pressure monitor, but it is extremely helpful for those who might not be able to afford a normal monitor and whose high blood pressure might otherwise go undiagnosed. Almost half of adults in the US alone have issues relating to blood pressure, so simply getting any measurement at all is the hurdle this device is attempting to clear. And we’ll count it as a win any time medical technology becomes more accessible, less expensive, or more open-source.
Of all the high-tech medical gadgets we often read about, the Magnetic Resonance Imaging (MRI) machine is possibly the most mysterious of all. The ability to peer inside a living body in a minimally invasive manner, whilst differentiating tissue types, in near real-time was the stuff of science fiction not too many years ago. Now it’s commonplace. But how does the machine actually work? Real Engineering on YouTube presents the Insane Engineering of MRI Machines to help us along this learning curve, at least in a little way.
The basic principle of operation is to align the spin ‘axis’ of all the subject’s hydrogen nuclei using an enormous magnetic field produced by a liquid-helium-cooled superconducting electromagnet. The spins are then perturbed with a carefully tuned radio frequency pulse delivered via a large drive coil.
After a short time, the spins revert back to align with the magnetic field, re-emitting a radio pulse at the same frequency. Every single hydrogen nucleus (just a proton!) responds at roughly the same time, with the combined signal being detected by the receive coil (often the same physical coil as the driver).
There are two main issues to solve. Obviously, the whole body section is ‘transmitting’ this radio signal in one big pulse, so how do you identify the different areas of 3D space (i.e. the different body structures), and how do you differentiate between tissue types (referred to as contrast), such as determining whether something is bone or fat?
By looking at the decay envelope of the return pulse, two separate measures with different periods can be determined: T1, the longitudinal (spin-lattice) relaxation time, and T2, the transverse (spin-spin) relaxation time. The first is a measure of how long it takes the spins to realign with the main magnetic field, and the second measures how quickly interactions between neighboring nuclei in the subject dephase the signal. The machine’s pulse timing and observation window are programmed to favor the detection of one effect or the other, effectively selecting the type of tissue to be resolved.
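The standard textbook model of how that timing choice selects tissue is the idealized spin-echo signal equation, which multiplies T1 recovery by T2 decay. The relaxation times below are illustrative ballpark figures, not values from the video:

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo signal: T1 recovery along B0 times T2 transverse decay."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Ballpark relaxation times in milliseconds (illustrative values)
fat = dict(pd=1.0, t1=250, t2=70)
csf = dict(pd=1.0, t1=4000, t2=2000)

# T1-weighted scan: short repetition time (TR), short echo time (TE)
t1w_fat = spin_echo_signal(**fat, tr=500, te=15)   # fat shows up bright
t1w_csf = spin_echo_signal(**csf, tr=500, te=15)   # fluid shows up dark

# T2-weighted scan: long TR, long TE -> the contrast flips
t2w_fat = spin_echo_signal(**fat, tr=4000, te=100)
t2w_csf = spin_echo_signal(**csf, tr=4000, te=100)  # fluid now brightest
```

Swapping TR and TE is the whole trick: the same tissues produce opposite contrast depending on which relaxation effect the timing emphasizes.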
The second issue is more complex. Spatial resolution is achieved by first selecting a plane to virtually slice the body into a 2D image. Because the frequency of the RF pulse needed to knock the proton spins out of alignment depends upon the local magnetic field strength, overlaying a second magnetic field via a gradient coil allows the field to be varied along the axis of the machine. With a corresponding tweak to the RF frequency, an entire body slice can be selected.
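That frequency-to-position relationship is just the Larmor equation, f = γ(B0 + G·z), with γ ≈ 42.58 MHz/T for protons. A quick sketch with illustrative scanner numbers shows how retuning the RF moves the excited slice:

```python
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio over 2*pi, in MHz per tesla

def larmor_mhz(b0_t, gradient_t_per_m=0.0, z_m=0.0):
    """Resonant RF frequency of protons at axial position z along the gradient."""
    return GAMMA_MHZ_PER_T * (b0_t + gradient_t_per_m * z_m)

def selected_z_m(b0_t, gradient_t_per_m, rf_mhz):
    """Invert the relation: which axial position a given RF frequency excites."""
    return (rf_mhz / GAMMA_MHZ_PER_T - b0_t) / gradient_t_per_m

B0 = 1.5   # tesla, a typical clinical field strength
G = 0.01   # 10 mT/m slice-select gradient (illustrative)

f_center = larmor_mhz(B0)          # ~63.87 MHz at the isocenter
f_slice = larmor_mhz(B0, G, 0.01)  # a few kHz higher, 1 cm down the bore
z = selected_z_m(B0, G, f_slice)   # recovers the 0.01 m slice position
```

A shift of only a few kilohertz against a ~64 MHz carrier moves the slice a full centimeter, which is why the RF synthesis has to be so precise.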
All RF emissions from the subject then emanate from just the selected slice, reducing the 3D resolution problem to a 2D one. Finally, a similar trick is applied orthogonally, with another set of gradient coils that adjust the relative phase of the spins in stripes of atoms through the slice. This enables the use of a 2D inverse Fourier transform over the collected phase and frequency combinations, from which a 2D image of the subject can be reconstructed and sent to the display computer for the operator to observe.
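In other words, the scanner never measures the image directly — it fills in a grid of spatial-frequency samples (“k-space”), one phase/frequency-encoded readout at a time, and the image falls out of an inverse FFT. A toy sketch with a random stand-in for the true slice:

```python
import numpy as np

# Stand-in for the true proton-density slice (a real scanner never sees this directly)
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# What the receive coil effectively measures: the slice's spatial-frequency content
kspace = np.fft.fft2(image)

# The reconstruction step: a 2D inverse FFT recovers the slice
reconstructed = np.abs(np.fft.ifft2(kspace))
```

Here `reconstructed` matches `image` to within floating-point error; on real hardware the k-space samples arrive with noise and incomplete coverage, which is where the interesting reconstruction engineering lives.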
One of the most critical skills in emergency medicine is airway management. Without a patent airway, a patient has about four minutes to live, so doctors and paramedics put a huge amount of effort into honing their intubation skills. They have to be able to insert an endotracheal tube quickly and efficiently, without damaging sensitive structures like the vocal cords. It’s a tricky skill to master without a ton of practice.
The perfect tool to practice these skills is a video laryngoscope, but these are wildly expensive and reserved for clinical use. Luckily, with a little ingenuity and a cheap USB borescope, [Dr. Adam Blumenberg] and [Dr. Erin Falk] were able to come up with this low-cost video-assisted laryngoscopy setup to reach as many students as possible. The idea is to use a single-use laryngoscope blade, which replicates the usual tool used to visualize the patient’s vocal cords. The blade is made from clear plastic, which makes it perfect for the application. The borescope is passed through an opening in the blade and affixed to it with adhesives. A little Dremel work might be necessary to get the optical axes of the blade and the camera to line up; failing that, there’s always the option to disassemble the camera to get a better angle.
The chief advantage of this setup, aside from being cheap, is that it’s not intended to be used on patients. Along with an airway manikin, the tricked-out borescope can sit in a conference room waiting for students to have a go. Using a large screen allows the whole group to watch the delicate procedure and learn from the mistakes of others. It may not be as detailed a simulation environment as some, but “blade time” is really what counts here.
Prosthetics are complicated, highly personal things. They must often be crafted and customized precisely to suit the individual. Additive manufacturing is proving a useful tool in this arena, as demonstrated by a new 3D printed nose design developed at Swansea University. And a bonus? It’s vegan, too!
Often, cartilage from the ribcage is used when reconstructing a patient’s nose. However, this procedure is invasive and can lead to health complications. Instead, a nanocellulose hydrogel made from pulped softwood, combined with hyaluronic acid, may be a viable printable material for creating a scaffold for cartilage cells. The patient’s own cartilage cells can be used to populate the scaffold, essentially growing a new nose structure from scratch. The technique won’t just be limited to nose reconstructions, either. It could also help to recreate other cartilage-based structures, such as the ear.
As with all new medical technologies, the road ahead is long. Prime concerns involve whether the material is properly bio-compatible, particularly where the immune system is concerned. However, the basic idea is one that’s being pursued in earnest by researchers around the world, whether for cosmetic purposes or to grow entire organs. As always, if you’re secretly 3D printing functional gallbladders in your basement, don’t hesitate to drop us a line.
One of the challenges of diagnosing diseases is identifying them early, when signs may be vague, confusing, or difficult to spot. Early diagnosis is often tied to the best possible treatment outcomes, so there’s plenty of incentive to improve methods in this way.
There are plenty of problems that are easy for humans to solve, but are almost impossibly difficult for computers. Even though it seems that with modern computing power being what it is we should be able to solve a lot of these problems, things like identifying objects in images remains fairly difficult. Similarly, identifying specific sounds within audio samples remains problematic, and as [Eivind] found, is holding up a lot of medical research to boot. To solve one specific problem he created a system for counting coughs of medical patients.
This was built with the idea of helping people with chronic obstructive pulmonary disease (COPD). Most of the existing methods for studying the disease and treating patients with it involve manually counting the number of coughs on an audio recording. While there are some software solutions to this problem to save some time, this device seeks to identify coughs in real time as they happen. It does this by training a model using tinyML to identify coughs and reject similar but non-cough sounds. Everything runs on an Arduino Nano with BLE for communication.
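[Eivind]’s trained model is the real classifier here, but the core idea — distilling an audio clip down to a few cheap features and thresholding them — can be sketched in a few lines. This toy Python version (not the tinyML model, and with made-up thresholds) uses RMS energy and spectral centroid to separate a loud broadband burst from quieter, tonal sounds:

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed audio sample rate

def features(clip):
    """Two cheap features a tiny model could use: RMS energy and spectral centroid."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1 / SAMPLE_RATE)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)
    return np.sqrt(np.mean(clip ** 2)), centroid

def looks_like_cough(clip, rms_min=0.05, centroid_min=1000.0):
    """Toy threshold classifier: loud, broadband bursts count as coughs."""
    rms, centroid = features(clip)
    return rms > rms_min and centroid > centroid_min

rng = np.random.default_rng(1)
cough = 0.3 * rng.uniform(-1, 1, SAMPLE_RATE // 2)  # noisy half-second burst
hum = 0.02 * np.sin(2 * np.pi * 100 * np.arange(SAMPLE_RATE // 2) / SAMPLE_RATE)
```

A real detector replaces the hand-picked thresholds with a small neural network trained on labeled clips, which is exactly what the tinyML toolchain compiles down to fit on the Nano.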
While the only data the model has been trained on are sounds from [Eivind], the existing prototypes do seem to show promise. With more sound data this could be a powerful tool for patients with this disease. And, even though this uses machine learning on a small platform, we have seen before that Arduinos are plenty capable of being effective machine learning solutions with the right tools on board.