Eye-Controlled Wheelchair Advances From Talented Teenage Hackers

[Myrijam Stoetzer] and her friend [Paul Foltin], 14- and 15-year-old kids from Duisburg, Germany, are working on an eye-movement-controlled wheelchair. They were inspired by the Eyewriter Project, which we’ve been following for a long time. The Eyewriter was built for Tony Quan, a.k.a. Tempt1, by his friends. In 2003, Tempt1 was diagnosed with the degenerative nerve disorder ALS and is now fully paralyzed except for his eyes, but he has been able to use the EyeWriter to continue his art.

This is their first big leap up from Lego Mindstorms. The eye tracker consists of a safety-glasses frame, a regular webcam, and IR SMD LEDs. They removed the IR-blocking filter from the webcam so it works in all lighting conditions. Image processing is handled by an Odroid U3, a compact, low-cost ARM quad-core SBC capable of running Ubuntu, Android, and other Linux systems. They initially tried a Raspberry Pi, which managed only about 3 fps, compared to 13–15 fps on the Odroid. The code is written in Python, which they are learning as they go, and uses the OpenCV libraries. An Arduino controls the motors via an H-bridge controller and is also used to calibrate the eye tracker: potentiometers connected to the Arduino’s analog ports allow the tracker to be adjusted to individual users.
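Purely as an illustration, the Odroid-to-Arduino link could be a simple serial command scheme like the Python sketch below; the single-character commands, baud rate, and port name are assumptions rather than the team’s actual design.

```python
# Hedged sketch of the Odroid-to-Arduino link: the one-character command
# protocol, baud rate, and port name are illustrative assumptions, not the
# team's actual design.
import serial  # pyserial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

COMMANDS = {"forward": b"F", "reverse": b"B", "left": b"L",
            "right": b"R", "stop": b"S"}

def send_drive_command(name):
    """Forward a direction decision to the Arduino driving the H-bridge."""
    arduino.write(COMMANDS.get(name, b"S"))  # default to 'stop' when unsure
```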

The webcam video stream is filtered to obtain the pupil position, which is then compared to four presets for forward, reverse, left, and right. The presets can be adjusted using the potentiometers. An enable switch, manually activated at present, ensures the wheelchair moves only when commanded. Their plan is to later replace this switch with tongue activation or perhaps cheek-muscle twitch detection.
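A rough, hedged sketch of that loop in Python/OpenCV might look like the following; the threshold value, the preset coordinates, and the console output standing in for the Arduino link are assumptions, not the team’s code.

```python
# Minimal sketch: threshold the IR image, take the largest dark blob as the
# pupil, and pick whichever direction preset is closest (assumed values).
import cv2
import numpy as np

PRESETS = {  # hypothetical calibration points (x, y) in a 640x480 frame
    "forward": (320, 120), "reverse": (320, 360),
    "left": (160, 240), "right": (480, 240),
}

def pupil_center(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    # Dark-pupil approach: the pupil is the largest dark blob after thresholding
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # OpenCV 2.x/3.x/4.x safe
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)  # the IR-modified webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    center = pupil_center(frame)
    if center is None:
        continue
    # Choose the preset closest to the current pupil position
    command = min(PRESETS, key=lambda k: np.hypot(center[0] - PRESETS[k][0],
                                                  center[1] - PRESETS[k][1]))
    print(command)  # in the real build this goes to the Arduino, gated by the enable switch
```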

First tests were done on a small mock-up robotic platform. After winning a local competition, they bought a second-hand wheelchair and started over. This time they tried the Raspberry Pi 2 Model B, which managed about 8–9 fps. Not as fast as the Odroid, but at half the cost it seemed like a workable solution, since their aim is to keep the build as cheap as possible. They would appreciate any help improving performance, whether by tightening their code or by using all four cores more efficiently. For the bigger wheelchair they used recycled car windshield-wiper motors and relays to switch them, and they 3D-printed an enclosure for the camera as well as wheels that help turn the wheelchair. Further details are available on [Myrijam]’s blog. They have documented their build (German, PDF) and have their sights set on the German National Science Fair. The team is working on an English translation of the documentation and will soon release all design files and source code under a CC BY-NC license.
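On the question of using all four cores, one generic approach (not the team’s code) is to capture frames in a background thread so the processing loop never blocks waiting on the camera; OpenCV releases the GIL during its heavy calls, so this alone can recover some frame rate.

```python
# Sketch of a capture thread feeding the latest frame to the processing loop
# (illustrative only; class and variable names are made up for this example).
import threading
import cv2

class FrameGrabber:
    """Continuously reads frames so the main loop never waits on the camera."""
    def __init__(self, device=0):
        self.cap = cv2.VideoCapture(device)
        self.frame = None
        self.lock = threading.Lock()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

grabber = FrameGrabber()
while True:
    frame = grabber.latest()
    if frame is None:
        continue
    # ...run the pupil detection from the sketch above on `frame` here,
    # with capture and processing now overlapping on separate cores.
```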

19 thoughts on “Eye-Controlled Wheelchair Advances From Talented Teenage Hackers”

  1. Very encouraging to see younger students being supported and given access to robotics clubs and 3D printers! Way to go, Germany :)
    OpenCV can apparently do multitasking, but an easier way to speed things up would be to get a lower-res image from the webcam – pupil detection is essentially blob detection after thresholding, and should work just as well at lower resolution (a quick sketch follows this comment).
    Good luck in the finals, guys!
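For illustration, that lower-resolution suggestion could be as simple as asking the camera for a smaller frame, or downscaling in software; the 320×240 target below is an arbitrary example, not a recommendation from the project.

```python
# Sketch: request a low-resolution stream, falling back to software downscaling
# if the webcam ignores the request (values are illustrative).
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

ok, frame = cap.read()
if ok and frame.shape[1] > 320:
    frame = cv2.resize(frame, (320, 240), interpolation=cv2.INTER_AREA)
# Thresholding and blob detection then run on a quarter (or less) of the pixels.
```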

  2. Yes, I would agree. Actually it’s quite appropriate that HaD is starting an FPGA series, as that is the route I’d take with regard to obtaining sufficient response/low latency. Embedded Micro has a really good tutorial on implementing blob tracking in Verilog with code [https://embeddedmicro.com/tutorials/mojo-fpga-projects/hexapod/blob-tracking].

    This is probably the ideal tech for this use case, but granted that FPGAs can sometimes be a bit of a ‘hill climb’, there is also the Pixy to consider [https://www.adafruit.com/products/1906].

    And finally, given that the Pi already has a built-in camera architecture and the first step is converting to grayscale anyway, as with ‘eyewriter’, why not design one’s own tracking algorithm? This may be the simplest approach, and I have a feeling a library like OpenCV is doing a lot of unnecessary ‘heavy lifting’ here, since the composition of the ‘eye scene’ easily lets you pick out a certain density of dark pixels and identify the object in question (something along the lines of the sketch after this comment).
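A minimal sketch of that roll-your-own idea, assuming a grayscale frame and an arbitrary darkness threshold, could be little more than a NumPy centroid:

```python
# Sketch: treat every sufficiently dark pixel as pupil and take the centroid
# (threshold value is an assumption; no OpenCV machinery required).
import numpy as np

def dark_pixel_centroid(gray, threshold=40):
    """Return the (x, y) centroid of pixels darker than `threshold`, or None."""
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```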

    1. Thanks – yes, I tried that at first. I started working with the raspi IR camera module, but it needs a special cable (flex board?) and this cable was really short, flat but broad, so you wouldn’t have enough space to mount the camera in front of the eye with the raspi being separated from it. You would have to mount it extremely near the face (balancing as well as aesthetic issues), so we used USB ;-)

    1. If I remember correctly, the Wii remote’s sensor would be poor for this application. It’s capable of tracking up to four relatively small, bright points of light and transmitting their coordinates. Eye tracking is almost the opposite: you track the center of a single large dark circle. And since everything in the Wii sensor is done in hardware (that’s what makes it so great), I don’t think there’s an easy way to change what it tracks. A module capable of tracking dark blobs would be perfect for eye tracking, but I’m not aware of any that can do it as efficiently and cheaply as the Wii remote tracks light points.

      1. When exposed to IR, the pupils become bright white (like a cat’s eyes in the dark). I am no eye-tracking expert, but as far back as I can remember, that property is what has been used to track eye movement. That is specifically why they used IR and removed the camera’s IR filter.

        Trying to track dark blobs would generate an awful lot of errors I think.

        The Wii sensor would probably not work so well up close without some type of lens to refocus. They would probably be better at a longer distance, tracking using both eyes.

        I have been wanting to experiment with this, but have too many other projects on my plate right now.

        1. I had forgotten about the retroreflective properties of the pupil. Take a look at that Wikipedia article again and you’ll notice the pupil can appear either bright white or dark black depending on the angle of the light source in relation to the camera.
          This website seems to best illustrate the concept: http://www.ime.usp.br/~hitoshi/framerate/node2.html

          Most recent eye tracking projects that I have noticed (including this science fair project) use the dark-pupil method. But I could see how an all-in-one sensor would benefit from simply tracking the biggest brightest point.

          1. In the example with the dark pupil, it is still tracking the position of the bright dot, but in relation to the dark pupil. The bright dot stays in one place as the pupil moves. Then you use where in the dark circle the bright spot appears to track the position of the eye. A Wii sensor might not work best in this scenario unless you used two IR sources.

            The other difference between the two systems is the distance and angle of the illumination source from the eye.

            At any rate, a Wii sensor implementation would obviously need to be slightly different, but given that the sensor itself does all the processing and is designed to output tracking and position data, it is worth experimenting with.

            I imagine it would work fairly well with eye tracking at a distance without much modification by tracking IR reflection from two eyes.

    1. Hahaha! Nice joke ;-)
      We did think about that, of course – the wheelchair does not just follow eye movements; the eyes give a direction command, which then has to be confirmed within a certain timeframe.

      So you first have to look in the direction you want to go (forwards, back, left, right) and then you have to confirm that you really want to go this way, because otherwise it might happen like you said ;-)
      At the moment we simulate the confirm command with a simple switch, but that will eventually be done by a wink or a slight cheek-muscle movement.
      BTW: We asked several people (not handicapped) to test the setup, and they really did need some time to adjust to the eye-hand coordination (looking and confirming). So using a cheek muscle to confirm would make things more complicated during testing at the moment ;-)
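For readers curious how the confirm-within-a-timeframe rule described above might be structured, here is a hedged Python sketch; the timeout value and the callback names are assumptions, not the team’s implementation.

```python
# Sketch: a gaze command only reaches the motors if it is confirmed (switch,
# later wink/cheek twitch) within a short window. Values are assumptions.
import time

CONFIRM_WINDOW = 2.0   # seconds a direction command stays valid

pending = None         # (command, timestamp) awaiting confirmation

def on_gaze_command(command):
    """Called when the eye tracker reports forward/reverse/left/right."""
    global pending
    pending = (command, time.monotonic())

def on_confirm():
    """Called when the enable switch (later: wink or cheek twitch) fires."""
    global pending
    if pending is None:
        return None
    command, stamp = pending
    pending = None
    if time.monotonic() - stamp <= CONFIRM_WINDOW:
        return command   # safe to pass on to the motor controller
    return None          # too late: the command has expired
```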

    1. The same could be said about your comment! ;-)

      In any case, I did intend to provide some reflection on the human-interfacing issue. Sometimes you look where you want to go (i.e. the destination) and sometimes you look at where you don’t want to go (i.e. obstacle avoidance)… How to handle this issue is what my comment was all about!

    2. It was a “joke”.

      Once artificial intelligences have got the hang of this thing earthlings call “humour”, could we get them to teach it to some of the autists we’ve got round here, please?
