A team at the Wireless Bioelectronics Lab at the National University of Singapore, led by [Dr John Ho], announced the results of their new Wireless Sensing (WiSe) smart sutures program last month. Their system consists of a specially prepared patch of polymer gel (the sensor), which is sewn into the wound using a silk suture coated with a conductive polymer. An external reader scans the sensor to monitor the status of the wound.
The concept is not unlike an NFC public transportation card, although with simplified electronics. There is no microcontroller, and no digital data are transferred. Rather, the sensor behaves like a tuned tank circuit. The gel on the sensor is designed to degrade if the wound becomes infected, changing the capacitance of the sensor structure and thus shifting its resonant frequency.
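As a back-of-the-envelope illustration of what the reader is watching for, the peak of the tank's response sits at f = 1/(2π√(LC)), so a drop in capacitance pushes the resonance upward. The component values below are made up for illustration, not taken from the paper:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 1 uH coil with a 100 pF gel sensor.
f_healthy = resonant_frequency(1e-6, 100e-12)

# If infection degrades the gel and capacitance drops to 80 pF,
# the resonant peak shifts upward, which the external reader detects.
f_infected = resonant_frequency(1e-6, 80e-12)

print(f"healthy:  {f_healthy / 1e6:.1f} MHz")
print(f"infected: {f_infected / 1e6:.1f} MHz")
```

The reader never needs to decode any data; it only has to spot the frequency shift.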
If you’ve ever had the misfortune to experience surgery, no doubt the surgeon and nurses drove home the importance of diligently monitoring the wound for early signs of infection. These smart sutures allow detection of wound infection even before symptoms can be seen or felt. They can be used on internal stitches up to 50 mm inside the body. More details can be read in this paper, and we covered another type of smart sensor back in 2016.
In many ways, the human body is like any other machine: it requires constant refueling and maintenance to keep functioning. Much of this happens without our intervention, beyond choosing what to eat that day. There are, however, times when, due to an accident, illness, or aging, the body's automatic repair mechanisms become overwhelmed, fail to do their job correctly, or fall short of repairing the damage outright.
Most of us know that lizards can regrow their tails, that some starfish regenerate into as many new starfish as the pieces they were chopped into, and that the axolotl can regenerate limbs and even parts of its brain. Yet humans, too, have an amazing regenerative ability, although for us it is mostly confined to the liver, which can regenerate even when three-quarters of it has been removed.
We don’t see many EMG (electromyography) projects, despite how cool the applications can be. This may be due to the technical difficulty of picking out the tiny muscular electrical signals from the noise, or it could be the difficulty of interpreting any signal you do find. Regardless, [hut] has been striving forward with a stream of prototypes, culminating in the aptly named ‘Prototype 8’.
The current prototype uses a main power board hosting an Arduino Nano 33 BLE Sense, as well as a boost converter to pump the AAA battery up to the 5 volts needed by the Arduino and a selection of connected EMG amplifier units. The EMG sensor is based around the INA128 instrumentation amplifier in a pretty straightforward configuration. The EMG samples, along with data from the IMU on the Nano 33 BLE Sense, are passed along via Bluetooth to a connected PC running the PsyLink software stack. This is based on Python, using the BLE-GATT library for BT comms, PynPut handling the PC input devices (to emit keyboard and mouse events), and TensorFlow for the machine learning side of things. The idea is to use machine learning to associate the EMG data with a specific user interface event (such as a keypress) and, with a little training, be able to play games on the PC with just hand and arm gestures. IMU data are used to augment this, but in this demo, that’s not totally clear.
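To give a flavor of the pipeline, here is a minimal, self-contained sketch of the general idea: extract a simple amplitude feature (RMS) from each EMG channel and map the feature vector to a gesture. This is purely an illustration; the centroid values are made up, and PsyLink actually trains a TensorFlow neural network rather than this nearest-centroid stand-in:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG channel window,
    a common feature for distinguishing muscle activation levels."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def classify(features, centroids):
    """Nearest-centroid gesture classifier (a toy stand-in for the
    trained neural network in the real PsyLink stack)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda gesture: sq_dist(features, centroids[gesture]))

# Hypothetical training centroids: per-channel RMS for two gestures.
centroids = {"fist": [0.8, 0.7], "rest": [0.05, 0.04]}

# A fresh two-channel window showing strong activation on both channels.
window = [[0.9, -0.7, 0.8, -0.9], [0.6, -0.8, 0.7, -0.6]]
features = [rms(channel) for channel in window]
print(classify(features, centroids))  # a "fist" here might emit a keypress
```

In the real system, the recognized gesture is what gets translated by PynPut into a keyboard or mouse event on the PC.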
All hardware and software can be found on the project's Codeberg page, which did make us do a double-take as to why GNU Radio was being used, but thinking about it, it's really good for signal processing and visualization. What a good idea!
Obviously, there are many other use cases for such an EMG-controlled input device, but who doesn’t want to play Mario Kart, you know, for science?
Typically, to improve one’s eyesight, we look to tools like corrective lenses or laser eye surgery. However, [Casey Connor 2] came across another method, one that uses light exposure to improve color vision, and set about trying to achieve the same results at home.
A recent study published in Nature showed that a single three-minute exposure to 670 nm light led to an improvement in color perception lasting up to a week. The proposed mechanism is that the cones in the eye get worse at producing ATP as we age, and with less of this crucial molecule supplying energy to cells in the eye, our color perception declines. Exposure to 670 nm light seems to cause mitochondria in the eye to produce more ATP through a rather complicated physical interaction.
For [Casey’s] build, LEDs were used to produce the required 670 nm red light, installed in ping pong balls that were glued onto a pair of sunglasses. After calculating the right exposure level and blasting light into the eyes each morning, [Casey] plans on running a chromaticity test each evening with a custom Python script to measure color perception.
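The script itself isn't detailed in the write-up, but at the heart of any such test is a brightness-independent measure of color difference, so that the score tracks hue discrimination rather than screen brightness. A minimal sketch of one way to compute that (our own illustration, not [Casey]'s code):

```python
def chromaticity(rgb):
    """Normalized chromaticity coordinates (r, g) from an RGB triple.
    Dividing by the total removes brightness, leaving only hue/saturation."""
    total = sum(rgb)
    return (rgb[0] / total, rgb[1] / total)

def color_difference(ref_rgb, test_rgb):
    """Euclidean distance between two patches in (r, g) chromaticity space.
    A shrinking just-noticeable difference over the weeks would suggest
    improved color discrimination."""
    r1, g1 = chromaticity(ref_rgb)
    r2, g2 = chromaticity(test_rgb)
    return ((r1 - r2) ** 2 + (g1 - g2) ** 2) ** 0.5

# A gray reference patch vs. a slightly reddish test patch.
print(color_difference((128, 128, 128), (134, 125, 125)))
```

A test would then repeatedly shrink this difference until the viewer can no longer tell the two patches apart, and log that threshold over time.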
[Casey] shows a proper understanding of the scientific process, and has accounted for the cheap monitor and equipment used in the testing. The expectation is that it should be possible to show a relative positive or negative drift, even if the results may not be directly comparable to industry-grade measures.
We’re eager to see the results of [Casey]’s testing, and might even be tempted to replicate the experiment if it proves successful. We’ve explored some ocular topics in the past too, like the technology that goes into eyeglasses. Video after the break.
Surprisingly, there are no pre-symptomatic screening methods for the common cold or the flu, allowing these viruses to spread unbeknownst to the infected. However, if we could detect that infected people are getting sick even before they show symptoms, we could do a lot more to contain the flu or the common cold, and possibly save lives. Well, that’s what the researchers behind this highly collaborative study set out to accomplish using data from wearable devices.
Participants of the study were given an E4 wristband, a research-grade wearable that measures heart rate, skin temperature, electrodermal activity, and movement. They then wore the E4 before and after inoculation with either influenza or rhinovirus. The researchers used 25 binary random forest classification models to predict whether or not participants were infected based on the physiological data reported by the E4 sensor. Their results are pretty lengthy, so I’ll only highlight a few major discussion points. In one particular analysis, they found that at 36 hours after inoculation their model had an accuracy of 89%, with 100% sensitivity and 67% specificity. Those aren’t exactly world-shaking numbers, but something the researchers thought was pretty promising nonetheless.
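For readers who want to sanity-check how those three figures relate, all of them fall straight out of a confusion matrix. The toy labels below are chosen only to mirror the reported 100%/67%/89% pattern; they are not the study's data:

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels (1 = infected)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)        # fraction of infected correctly flagged
    specificity = tn / (tn + fp)        # fraction of uninfected correctly cleared
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy example: 6 infected (all caught) and 3 healthy (one false alarm).
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0, 1]
print(confusion_metrics(y_true, y_pred))  # sensitivity 1.0, specificity 2/3, accuracy 8/9
```

In other words, 100% sensitivity with 67% specificity means the model misses nobody who is infected, but raises a false alarm on about a third of the healthy wearers.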
One major consideration for the accuracy of their model is the quality of the data reported by the wearable. Namely, if the data reported by the wearable isn’t reliable itself, no model derived from such data can be trustworthy either. We’ve discussed those points here at Hackaday before. Another major consideration is the lack of a control group. You definitely need to know if the model is simply tagging everyone as “infected” (which specificity does give us an idea of, to be fair) and a control group of participants who have not been inoculated with either virus would be one possible way to answer that question. Fortunately, the researchers admit this limitation of their work and we hope they will remedy this in future studies.
[Manivannan] walks the reader through the board’s setup and everything looks to be pretty straightforward. He ultimately rigged together a very primitive dashboard for viewing all his vitals in real-time, demonstrating how you could put together your own patient dashboard for remote monitoring of vitals or other sensor signals. He emphasizes that all this is powered through AWS, giving him some added security layers that are critical for protecting his data from unwanted viewers.
Obviously, losing an eye would be bad for your vision. But if you think about it, it is also a detriment to your appearance. You might not need a prosthetic eye, and you can certainly rock an eye patch, but a lot of people with this problem get an artificial or “glass” eye. These glass eyes are hand-painted disks that fit into the eye socket. However, a British man now has a new kind of eye prosthesis that is 3D printed, a technology that can potentially cut waiting time for patients in half.
The existing process is lengthy because it requires taking a mold of the eye socket and manually matching the new artificial eye to the remaining eye. With the 3D printed approach, scans of the eye socket and of the other eye make this process much simpler.
Moorfields Eye Hospital, the source of the eye, says that a conventional eye takes about six weeks to make, but the new ones take no more than three weeks. The patient only needs to spend about half an hour doing the scans before the wait starts. We presume it can be made at lower cost, as well.