An ophthalmoscope is a device used to examine the back of the eye. This is useful for diagnosing everything from glaucoma and diabetic retinopathy to brain tumors. As you would expect from anything related to medicine, these devices cost a lot, making them inaccessible to most of the world's population. This project for the Hackaday Prize is an ophthalmoscope that can be built for under $400.
An ophthalmoscope is a relatively simple device that really only requires a clinician to wear a head-mounted lamp and hold a condensing lens in front of the patient's eye. Light is reflected off the retina and into the clinician's view. Of course, even the simplest ophthalmoscope requires a bit of training to use well, and there's no way to take a picture of a patient's retina to share with other clinicians.
The Open Indirect Ophthalmoscope gets around these problems by using a digital camera in the form of a Raspberry Pi camera module. This camera, with the help of a 3 W LED, can image the back of the eye, snap a picture, and send that image anywhere in the world. It's a simple device, constructed from a couple of mirrors, a cheap lens, and a few 3D-printed parts, but it is still very valuable for the detection of ophthalmological disorders.
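To give a rough idea of the capture-and-share workflow described above, here is a minimal Python sketch using the `picamera` library on a Raspberry Pi. This is an illustration only, not the project's actual firmware; the upload endpoint and field name are hypothetical placeholders.

```python
# Minimal sketch: capture a retinal image with the Pi camera and upload it.
# UPLOAD_URL and the "image" field name are hypothetical placeholders.
from time import sleep

import requests
from picamera import PiCamera

UPLOAD_URL = "https://example.org/retina/upload"  # hypothetical endpoint

camera = PiCamera(resolution=(2592, 1944))  # full still resolution on the v1 module
camera.start_preview()
sleep(2)                        # give auto-exposure a moment to settle
camera.capture("retina.jpg")
camera.stop_preview()
camera.close()

with open("retina.jpg", "rb") as f:
    # Hand the image off to a remote clinician / cloud service.
    resp = requests.post(UPLOAD_URL, files={"image": f})
print(resp.status_code)
```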
Oh God, first a “Slit-lamp”, then this.
That is not an ophthalmoscope, that is a fundus photo camera. It can still be used for ophthalmoscopy, but a handheld ophthalmoscope can be used for a wider range of things than this device.
Moreover, an ophthalmoscope costs $30–70 on AliExpress / Amazon / eBay, so I doubt these DIY efforts will be cost-effective.
If they insist on a photo device (to send a retina photo to an ophthalmologist for a consultation, maybe?), then 3D-printing an adapter for a wide range of smartphones that just houses a set of lenses and bright LEDs is still better, since a lot of third-world residents are buying cheap Chinese phones – again because they are cost-effective and double as a phone and an internet terminal that can also run applications and store information.
The hackers in question could have just effing googled "ophthalmoscope" to get a vague idea of what people need.
^^ This.
Hey! This is a very good point. I'm the team lead of this project, so perhaps I can shed some light…
For starters – we're not simply a bunch of hackers making something for fun; we're actually staff at a not-for-profit eye hospital in Hyderabad, India (the L V Prasad Eye Institute), and for the past 28 years this hospital has been treating people from underrepresented populations. Yes, we live in a third-world country and interact with doctors from the hospital every day. We understood their needs by spending countless hours in the clinic as well as in the field – which is what this device has been built for. You see, the L V Prasad model involves door-to-door screenings of rural populations, and the people who actually do this are not highly skilled; they usually use a direct ophthalmoscope, and mostly for screening cataracts. They're not skilled enough to detect retinal pathologies and a lot of the time may have missed them. We're trying to empower this field staff to take images and automatically screen them in the cloud – for better triage and screening. There's a 28-year-old healthcare delivery model which takes over thereafter for follow-up. I encourage you to read more at http://www.lvpei.org/patientcare/patientstories/
Alright, your next point – why are we calling it an ophthalmoscope when it's actually a fundus imaging camera? That's because it's based on the principle of an indirect ophthalmoscope, using a 20D condensing lens (20 diopters means a focal length of 1/20 m = 5 cm, so the lens forms a real aerial image of the retina about 5 cm in front of it). Yes, the present iteration doesn't allow the fine control over the 20D lens by which a skilled clinician can examine the peripheral retina, but that's planned for successive iterations. Our primary focus was on getting good retinal images easily and quickly.
I strongly believe it is a misconception that "just stick an adapter on a smartphone" is the solution to the problems of the third world. A lot of these smartphones are severely compromised in terms of hardware capabilities and are mainly built for entertainment (videos, music, and a bit of gaming). They seldom run stock Android, and there's so much diversity in the hardware and software that we wouldn't really be able to build a general solution. That being said, I would love to be proved wrong! Instead, we decided to build custom hardware from parts that are easily available as commodities to the open-source community.
Do have a look at our latest build logs – we have a cloud-based deep-learning network running which is giving quite good results in grading the images! Integrating this would have been a bit of a task with the $30–70 units you spoke of, though we shall certainly aim to make it work!
I hope this resolves some reservations you may have had! We’re really in this to do good, and we welcome constructive feedback and criticism like this!
Is there anywhere that more pictures of it can be seen?
Ones of it in bits or being assembled.
Are consecutive pictures of the same eye alike enough to use signal processing to spot differences?
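Whether consecutive photos line up well enough is an empirical question, but the usual approach is to register the two images first and then difference them. Here is a minimal OpenCV sketch of that idea (filenames and parameters are illustrative, not from the project):

```python
# Sketch: align two fundus photos of the same eye, then highlight differences.
import cv2
import numpy as np

a = cv2.imread("visit1.jpg", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("visit2.jpg", cv2.IMREAD_GRAYSCALE)

# Estimate a rigid (rotation + translation) warp mapping b onto a via ECC.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(a, b, warp, cv2.MOTION_EUCLIDEAN, criteria)
b_aligned = cv2.warpAffine(b, warp, (a.shape[1], a.shape[0]),
                           flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# Large values in the absolute difference flag regions that changed.
diff = cv2.absdiff(a, b_aligned)
cv2.imwrite("diff.png", diff)
```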
If you follow the link in the post you’ll get to the project log with a lot of info.
https://hackaday.io/project/11943/logs
Well that is confusing. I didn’t see them this morning and read the whole thing. I shouldn’t read hackaday until I wake up probably.
A 3W LED that shines directly into the eye? Isn’t that a bit too much?
I think it is quite diffused. They used a light meter and considered the output similar to an overcast sky.
That said, I wouldn’t want to ‘look inside’ it myself. But I definitely see your point.
They need to be able to both shrink and enlarge the pupil, so the brightness is probably controlled; combine that with diffusion and I don't think it is that crazy.
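For what it's worth, brightness control on a Pi-based build could be as simple as software PWM. A minimal sketch, assuming the 3 W LED sits behind a driver with a PWM/enable input (the pin number and duty-cycle values are illustrative):

```python
# Sketch: dim the illumination LED from a Raspberry Pi GPIO pin via software PWM.
import time

import RPi.GPIO as GPIO

LED_PIN = 18                    # arbitrary pin choice for this illustration

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

pwm = GPIO.PWM(LED_PIN, 1000)   # 1 kHz: fast enough to avoid visible flicker
pwm.start(10)                   # begin at a dim 10% duty cycle

# Ramp the brightness up in steps, e.g. while watching the pupil response.
for duty in range(10, 60, 5):
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.5)

pwm.stop()
GPIO.cleanup()
```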
I take pictures all the time to put into the medical record by using an indirect lens and my phone. The technique takes a bit of practice but there isn’t really anything extra you need to carry as you have your lenses and phone on your person anyway. There are actually adapters you can buy/make that will hold the lens out a correct distance so that the process becomes very easy. Example of the technique is illustrated here: http://www.hindawi.com/journals/joph/2013/518479/
Hi! Interestingly, we (the team on this project) are working indirectly with Shizuo Mukai's team in Cambridge, MA via our collaborators at MIT. It's a small world! We got the idea of using the 20D lens from his work, as well as the work of OphthalmicDocs in NZ. Shizuo's solution involves the use of an iPhone, which we wanted to avoid, but it is an excellent piece of work!
Good news, Sir – thanks to HaD and its cost-saving tricks, we were able to diagnose that you have disease X.
Great, what’s the treatment?
I’m sorry but we lack the funds for that kind of stuff, we just hand you a note with the disease you have.
Oh, ha, OK, thanks.
Oh we have your treatment right here.
https://thumbs.dreamstime.com/z/folded-cane-blind-blackout-glasses-19517359.jpg
That made me sad.
Hey! This is actually a very good point. I'm the team lead of this project, so allow me to explain…
We're not simply a bunch of hackers making something for fun – we're actually staff at a not-for-profit eye hospital in Hyderabad, India (the L V Prasad Eye Institute), and for the past 28 years this hospital has been treating people from underrepresented populations. So we have a 28-year-old distribution model ready and set, with staff and, believe it or not, plenty of funds! What we're doing is augmenting this public-health delivery model with technology. Traditionally, patients would need to travel all the way to the nearest well-equipped clinic, which might be hundreds of kilometers away, to get an accurate fundus photo and a diagnosis. Treatment is given for free – look up thousands of case studies at http://www.lvpei.org/patientcare/patientstories/
I hope this resolves some reservations you may have had! We’re really in this to do good, and we welcome constructive feedback and criticism like this!
Yes, it may be able to take a picture of the retina but so what?
I can build a camera to take a picture of the moon but be unable to resolve any detail. With the retina, the most damaging diseases appear as tiny pathology – for example diabetic retinopathy, which starts as tiny blood clots around 10 microns across. Anyone can see gross pathology; you can do that with a well-made $50 handheld ophthalmoscope.
As for taking retinal images with a phone, that is incredibly dangerous – you will simply be unable to see most subtle pathology, which will put your patients' sight and possibly lives at risk.