Pay no attention to the man behind the curtain. It’s a quote from the Wizard of Oz but also an interesting way to look at our interactions with electronics. The most natural interactions free us from thinking about the ones and zeros behind them. Your next challenge is to build an innovative interface for humans to talk to machines and machines to talk to humans. This is the Human-Computer Interface Challenge!
The Next Gen of HCI
A Human-Computer Interface (or HCI) is what we use to control computers and what they use to get information to us. HCIs have been evolving since the beginning. The most recent breakthroughs include touchscreens and natural-language voice interaction. But HCI goes beyond the obvious. The Nest thermostat took a novel approach to learning your habits: observing the times and days when people are near it, and when the temperature setting is changed. That sort of behavior feels more like the future than having to program specific times for temperature adjustments. But of course we need to go much further.
You don’t need to start from scratch. There are all kinds of great technologies out there offering APIs that let you harness voice commands, recognize gestures, and build on existing data sets. There are chips that make touch sensing a breeze, and open source software suites that let you get up and running with computer vision. The important thing is the idea: find something that should feel more intuitive, more fun, and more natural.
The Best Interfaces Have Yet to Be Dreamed Up
No HCI is too simple; a subtle cue that makes sure you don’t miss garbage collection day can make your day. Of course, no idea is too complex; who among you will work on a well-spoken personal assistant that puts Jarvis to shame? We just saw that computers can sound just like people if you simply have them insert random pauses while speaking. There’s a ton of low-hanging fruit in this field waiting to be picked.
An HCI can be in an unexpected place, or leverage interactions not yet widely used, like olfactory or galvanic responses. A good example of this is the Medium Machine, pictured above. It stimulates the muscles in your forearm, causing your finger to press the button. The application is up to you, and we really like that Peter mentions the Medium Machine reaches for something that wouldn’t normally come to mind when you think about these interfaces; something that hasn’t been dreamed up yet. Get creative, get silly, have some fun, and show us how technology can be a copilot and not a dimwitted sidekick.
You have until August 27th to put your entry up on Hackaday.io. The top twenty entries will each get $1,000 and go on to the finals where cash prizes of $50,000, $20,000, $15,000, $10,000, and $5,000 await.
19 thoughts on “Human-Computer Interface Challenge: Change How We Interact With Computers, Win Prizes”
That picture at the top looks like an arm-breaking machine.
Yes, it’s obviously a torture device – release the button and a big hammer (not shown) comes down and breaks your arm.
Wow, that got dark for you pretty quickly. I did not see the same thing as you at all on this one.
Nothing dark at all. It just draws a bone stress/strain curve.
Since I collect vintage electrotherapy machines and restore them to their former glory, the first thing I thought of was a computer-controlled human: fire a pulse on the two supports and make the human press the button. :)
But now that you guys mention it, I do see some similarities to the device used in the 1988 movie Taffin. :)
A human, “Useless Machine?” I’ll take ten!
Yup, definitely forearm roulette. Which I think somehow seems more comfortable than running my wrist and elbow back and forth over a set of rollers to scroll… Or mouse… Or whatever?
>”The Best Interfaces Have Yet to Be Dreamed Up”
The best interface is of course one that can be used by everyone, doesn’t cause muscle strain, is fast, responsive, gives good feedback, doesn’t need any special devices to carry with you, is easy to learn… in other words a wire directly to your brain.
A wire to the brain is technically a “special device,” and it also carries the potential for infection and brain damage.
Not to mention no two brains have the same structural development, thus requiring at least a custom calibration scheme for every user. Remember, your brain’s output only makes sense to your body because they’ve worked in unison since you were a fetus. There’s a reason that, despite decades of work by both companies and individuals, neural interfaces are still largely in their infancy. Even getting reliable full cursor movement out of one is still fairly impressive, never mind typing, or even executing more than a handful of preset commands.
One reasonably fast method was the humming mouse, where one could make sounds at different pitches to move a mouse cursor. It worked faster than any brain-to-computer interface and with fewer errors. Sadly, your coworkers may not like hearing the sounds all day.
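The core idea is simple enough to sketch: estimate the dominant pitch of a short audio frame, then map pitch above or below some comfortable center frequency to cursor speed. This is a minimal illustration, not the actual humming-mouse implementation; the function names, the 300 Hz center frequency, and the gain are my own assumptions.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed microphone sample rate

def estimate_pitch(frame, sample_rate=SAMPLE_RATE):
    """Rough pitch estimate: frequency of the strongest FFT bin."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def pitch_to_cursor_dx(pitch_hz, center_hz=300.0, gain=0.1):
    """Hum above the center frequency to move right, below it to move left."""
    return gain * (pitch_hz - center_hz)

# Demo with a synthetic 440 Hz "hum" (one 64 ms frame).
t = np.arange(1024) / SAMPLE_RATE
hum = np.sin(2 * np.pi * 440 * t)
pitch = estimate_pitch(hum)      # close to 440 Hz (limited by bin width)
dx = pitch_to_cursor_dx(pitch)   # positive, so the cursor moves right
```

A real version would read frames from a microphone in a loop and feed `dx` to something like an OS cursor API, and would want a smarter pitch tracker (autocorrelation or YIN) plus a loudness gate so silence doesn’t move the cursor.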
Reminds me of https://m.youtube.com/watch?v=0JElywbkSbY
huh, that’s pretty impressive, never seen that before!
Operation may be a higher-level thing. Sort of the difference between tapping the pots, and reading off the driver.
What is a “special device” and what technical aspects define that?
I’ve always heard a BMI is a form of HID. I’ve never heard the initialism HCI before, by the way. What’s the distinction there between that and HID?
Definitely, I’m so ready for it. But with our current tech we’re still a ways off so we’ll have to wait. Brain interfaces will certainly solve a number of problems like RSI, disability/accessibility, small displays (like smartwatches), NO displays (like earbuds or just having the phone in your pocket). We have a good future ahead of us.
Great user interface, useless (hardware): https://www.youtube.com/watch?v=Z86V_ICUCD4
Great user interface, useless (software): https://en.wikipedia.org/wiki/%22Hello,_World!%22_program
Terrible user interface, useless: https://en.wikipedia.org/wiki/Rube_Goldberg_machine
Terrible user interface, very useful: https://en.wikipedia.org/wiki/Assembly_language
Great user interface, very useful: https://en.wikipedia.org/wiki/Artificial_general_intelligence
And everything else in between, as summarized by the “usability–usefulness axiom”: https://www.sciencedirect.com/science/article/pii/S1386505606001547
To conclude: solve general AI, win the HCI challenge and prizes….
“You don’t need to start from scratch.”
Yes, you do. Every possible human-machine interface people think of, at least in the IT field, is based on the assumption that we have to control a desktop just like we were in an office. Get rid of this abstraction first, then think of other interfaces; otherwise you’re only swapping one problem for another. And by the way, the mouse has to go for good.
Yeah, starting from scratch does seem necessary. Look at smartphones and the massive changes in UI and interaction. It took many years of mobile devices before we got touch UIs that were actually good, and the end result is nothing like a desktop’s keyboard-and-mouse UI. If you try to make your interfaces work with existing mouse-driven software, you’ll end up with a sub-par emulation of a mouse.
Maybe computers of the future won’t want to interact with humans and will choose to support the planet’s rat population instead?