The use of brainwaves as control parameters for electronic systems is becoming quite widespread. The types of signals that we have access to are still quite primitive compared to what we might aspire to in our cyberpunk fantasies, but they’re a step in the right direction.
A very tempting aspect of accessing brain signals is that they can be used to circumvent physical limitations. [Jerkey] demonstrates this with his DIY brain-controlled electric wheelchair, which can move people who wouldn't otherwise have the capacity to operate joystick controls. The approach is direct: a laptop marshals the EEG data and passes commands to an Arduino, which simulates joystick operations for the wheelchair's control board. From experience we know that it can be difficult to control an EEG right off the bat, so [Jerkey]'s warnings at the beginning of the Instructable about having a spotter with a finger on the "off" switch are well worth following. Some automated collision avoidance might be a useful addition, too.
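If you're wondering what the Arduino end of a rig like this might look like, here's a rough sketch of the general idea: the laptop classifies the EEG and sends single-character drive commands over serial, and the Arduino turns them into the analog X/Y levels a proportional joystick would normally supply. The pin numbers, voltage levels, timeout, and serial protocol below are our own assumptions for illustration, not [Jerkey]'s actual firmware.

```cpp
// Hypothetical sketch: emulate a proportional joystick from serial commands.
// Assumes the laptop sends 'F', 'B', 'L', 'R', or 'S' (stop) over USB serial,
// and that PWM pins 5 and 6 feed RC low-pass filters wired to the wheelchair
// controller's joystick X/Y inputs. All values are illustrative only.

const int X_PIN = 5;          // joystick X axis (PWM -> RC filter)
const int Y_PIN = 6;          // joystick Y axis (PWM -> RC filter)
const int CENTER = 128;       // roughly 2.5 V after filtering: stick at rest
const int DEFLECT = 60;       // how far from center a command pushes the stick
const unsigned long TIMEOUT_MS = 500;  // fail-safe: stop if commands cease

unsigned long lastCommand = 0;

void setup() {
  Serial.begin(9600);
  analogWrite(X_PIN, CENTER);
  analogWrite(Y_PIN, CENTER);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    int x = CENTER, y = CENTER;
    switch (c) {
      case 'F': y = CENTER + DEFLECT; break;  // forward
      case 'B': y = CENTER - DEFLECT; break;  // reverse
      case 'L': x = CENTER - DEFLECT; break;  // turn left
      case 'R': x = CENTER + DEFLECT; break;  // turn right
      case 'S': default:              break;  // stop / unknown -> center
    }
    analogWrite(X_PIN, x);
    analogWrite(Y_PIN, y);
    lastCommand = millis();
  }

  // If the laptop goes quiet, return the virtual stick to center.
  if (millis() - lastCommand > TIMEOUT_MS) {
    analogWrite(X_PIN, CENTER);
    analogWrite(Y_PIN, CENTER);
  }
}
```

The command-timeout is the part worth copying even if everything else changes: if the EEG pipeline hangs, the chair coasts to a stop rather than carrying on with the last command.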
We’ve covered voice-operated wheelchairs before, and we’d like to know how the two types of control would stack up against one another. EEGs are more immediate than speech, but we imagine that they’re harder to control.
It would be interesting, albeit somewhat trivial, to see [Jerkey]'s technique extended to control an ROV like Oberon, although depending on the faculties of the operator, speech control could be difficult (would that make it more convincing as an alien robot diplomat?).
This article gives me another idea for wheelchair control: a headset with a camera attached that points down at the user's eyes.
When the user looks right, the chair turns right. When they look left, it turns left.
Forward motion could be controlled by looking down, but with a timeout. This would force the user to look up every few hundred milliseconds to make sure they are travelling in the right direction, so forward motion would be kind of incremental. With practice, the movement would become pretty fluid, I guess (there's a rough sketch of the timeout logic below).
I’m sure this kind of eye movement monitoring is already an existing technology. I remember seeing an article where a camera inside a TV screen could monitor what parts of a cartoon attracted kids the most.
Damn, wish I could be bothered to put this into practice…
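For anyone who does want to tinker with that incremental-forward idea, the control logic is simple enough to sketch. This assumes some external eye tracker (a webcam plus software along the lines of the EyeWriter mentioned below) already reports a gaze direction each frame; the class names and timings here are made up for illustration.

```cpp
// Sketch of the look-to-drive state machine described above.
// Assumes an eye tracker supplies a Gaze reading every frame; all names
// and timings are illustrative only.
#include <chrono>
#include <initializer_list>
#include <iostream>

enum class Gaze { Left, Right, Down, Center };
enum class Drive { Stop, TurnLeft, TurnRight, Forward };

class GazeController {
public:
  // Forward motion only lasts this long per "look down"; the user must
  // glance down again to renew it, making forward travel incremental.
  explicit GazeController(std::chrono::milliseconds forwardTimeout)
      : timeout_(forwardTimeout) {}

  Drive update(Gaze g) {
    auto now = std::chrono::steady_clock::now();
    switch (g) {
      case Gaze::Left:  return Drive::TurnLeft;
      case Gaze::Right: return Drive::TurnRight;
      case Gaze::Down:
        forwardUntil_ = now + timeout_;   // renew the forward window
        return Drive::Forward;
      case Gaze::Center:
        // Keep rolling forward only while the last "down" glance is fresh.
        return (now < forwardUntil_) ? Drive::Forward : Drive::Stop;
    }
    return Drive::Stop;
  }

private:
  std::chrono::milliseconds timeout_;
  std::chrono::steady_clock::time_point forwardUntil_{};
};

int main() {
  GazeController ctrl(std::chrono::milliseconds(400));
  // Fake frame sequence: glance down, look ahead for a while, then turn.
  for (Gaze g : {Gaze::Down, Gaze::Center, Gaze::Center, Gaze::Right}) {
    std::cout << static_cast<int>(ctrl.update(g)) << "\n";
  }
}
```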
@smoker_dave
You should check out the EyeWriter by Zach Lieberman; he has a pretty nifty solution for this using ordinary webcams. Plus he shares everything.
*points* Oh, hey. A ‘Hackers’ poster.
@smoker Take a minute alternating between looking down and straight ahead as close to every few hundred milliseconds as you can manage. Dizzy yet?
The problem with using eye-tracking for movement is that you also need to be able to look at objects without moving. Say, if you're crossing the street. Or reading a sign. I guess it could work if you also required some sort of command system (up up down down left left right right blink blink?) to switch between modes. But that starts to get too complex for the user…
Easier than that: you can have LEDs that flash at different rates, and that flashing can be picked up in the brainwaves. Just measure which frequency dominates to decide which LED is being looked at, and you have more control (more choices).
So it'd be easy to implement a simple "keypad" in LEDs, add an LCD screen for menus, and gain a pretty wide range of control.
The only problem is that it takes a couple of seconds to sense/decode the frequency, so the system is kinda slow. Not too bad, though.
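For the curious, the detection side of that trick (steady-state visual evoked potentials, if you want the search term) boils down to asking which candidate flash frequency carries the most power in a window of EEG samples, and a Goertzel filter is a cheap way to check. The sample rate, window length, and flash frequencies below are placeholder assumptions, not values from any particular build.

```cpp
// Rough sketch: guess which flashing LED the user is looking at by measuring
// EEG power at each candidate flash frequency (Goertzel filter). The sample
// rate, window length, and frequencies are placeholder assumptions.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

const double kPi = 3.141592653589793;

// Power of `samples` at `freqHz`, for a signal sampled at `sampleRateHz`.
double goertzelPower(const std::vector<double>& samples,
                     double freqHz, double sampleRateHz) {
  const double w = 2.0 * kPi * freqHz / sampleRateHz;
  const double coeff = 2.0 * std::cos(w);
  double s0 = 0.0, s1 = 0.0, s2 = 0.0;
  for (double x : samples) {
    s0 = x + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Returns the index of the LED frequency with the most power, i.e. which
// "key" on the imagined LED keypad the user is staring at.
std::size_t detectTarget(const std::vector<double>& eegWindow,
                         const std::vector<double>& ledFreqsHz,
                         double sampleRateHz) {
  std::size_t best = 0;
  double bestPower = -1.0;
  for (std::size_t i = 0; i < ledFreqsHz.size(); ++i) {
    double p = goertzelPower(eegWindow, ledFreqsHz[i], sampleRateHz);
    if (p > bestPower) { bestPower = p; best = i; }
  }
  return best;
}

int main() {
  const double fs = 256.0;                                  // assumed EEG rate
  const std::vector<double> leds = {7.0, 9.0, 11.0, 13.0};  // flash rates, Hz
  // Fake two seconds of "EEG": a 9 Hz component plus a slow drift.
  std::vector<double> eeg(512);
  for (std::size_t n = 0; n < eeg.size(); ++n)
    eeg[n] = std::sin(2.0 * kPi * 9.0 * n / fs) + 0.3 * std::sin(0.5 * n / fs);
  std::cout << "looking at LED " << detectTarget(eeg, leds, fs) << "\n";
}
```

The couple-of-seconds lag mentioned above falls out of this directly: a longer sample window means a cleaner frequency estimate, so responsiveness trades off against reliability.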
Nice idea, @George Johnson, this could work.
Certainly means there is something specific to “lock on” to, a bit like a ping on a submarine.
Just a thought, but I did wonder if an MP3 recorder with a "front end" consisting of an array of LM567-based frequency generators hooked up to the EEG could record activity from many electrodes "on the fly".
One idea I had is to use tuning diodes, as they can adjust over a relatively wide range with no current change (only voltage), so they could be handy for picking up really small signals.
Awesome stuff! Just wondering, has anyone had any experience with the Emotiv Epoc?
The software seems to take a reasonable approach, asking users to focus on a certain action (push, pull, rotate CW/CCW, disappear, etc.) while collecting data. Then it tries to match patterns in the collected data, so you don't have to spend hours upon hours training to raise a single alpha channel or some such.
The theory is sound, and the demos I've seen are good… I have to know: just how good is it?
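For anyone wondering what that kind of training amounts to under the hood, here's a toy sketch of the general pattern: record labelled feature windows while the user focuses on each action, average them into per-action templates, then classify live windows by the nearest template. This is a generic nearest-centroid illustration, not Emotiv's actual algorithm.

```cpp
// Toy illustration of "focus on an action while we record, then match
// patterns later": nearest-centroid classification of EEG feature vectors.
// Generic sketch only; not how Emotiv's software actually works.
#include <iostream>
#include <limits>
#include <map>
#include <string>
#include <vector>

using Features = std::vector<double>;  // e.g. band powers per channel

class ActionClassifier {
public:
  // Training phase: add one feature window recorded while the user
  // concentrated on `action` ("push", "pull", "rotate", ...).
  void addExample(const std::string& action, const Features& f) {
    auto& acc = sums_[action];
    if (acc.first.empty()) acc.first.assign(f.size(), 0.0);
    for (std::size_t i = 0; i < f.size(); ++i) acc.first[i] += f[i];
    ++acc.second;
  }

  // Live phase: return the action whose averaged template is closest.
  std::string classify(const Features& f) const {
    std::string best = "neutral";
    double bestDist = std::numeric_limits<double>::max();
    for (const auto& [action, acc] : sums_) {
      double d = 0.0;
      for (std::size_t i = 0; i < f.size(); ++i) {
        double centroid = acc.first[i] / acc.second;
        d += (f[i] - centroid) * (f[i] - centroid);
      }
      if (d < bestDist) { bestDist = d; best = action; }
    }
    return best;
  }

private:
  // action -> (element-wise sum of training examples, example count)
  std::map<std::string, std::pair<Features, int>> sums_;
};

int main() {
  ActionClassifier clf;
  clf.addExample("push", {1.0, 0.2});
  clf.addExample("push", {0.9, 0.3});
  clf.addExample("pull", {0.1, 1.1});
  std::cout << clf.classify({0.95, 0.25}) << "\n";  // prints "push"
}
```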