The race for the next revolutionary input design is an ongoing event. [Clayton Miller]’s newest entry in the contest is a multitouch concept that separates the touch surface from the screen and is meant to utilize all ten fingers. His video explanation includes a description of the physical input device, a software implementation, and a demonstration of how a finished system would work. After the break we’ll look at the hardware, the software, and the concept video.
The implementation is pretty simple. A pressure- and proximity-sensitive pad is used as the interface. The hardware can tell when your fingers are resting on it and when pressure increases register as “clicking” inputs. This is basically a very large laptop touchpad that can also sense pressure. It removes the issue of hands obstructing the screen that you encounter with multitouch displays.
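To make the resting-versus-clicking distinction concrete, here is a minimal sketch of how a driver for such a pad might classify per-finger readings. The 10/GUI hardware protocol isn’t published, so the function name and the threshold values are purely illustrative assumptions:

```python
# Hypothetical sketch: classify a per-finger pad reading into hover,
# rest, or click using pressure thresholds. The real 10/GUI hardware
# protocol is not published; these values are invented for illustration.

REST_THRESHOLD = 0.10   # light contact: finger resting on the pad
CLICK_THRESHOLD = 0.60  # deliberate press registers as a "click"

def classify(pressure: float) -> str:
    """Map a normalized pressure reading (0.0-1.0) to an input state."""
    if pressure < REST_THRESHOLD:
        return "hover"      # proximity-sensed only, not touching
    if pressure < CLICK_THRESHOLD:
        return "rest"       # touching, but not clicking
    return "click"          # pressure spike: treat as a button press

readings = [0.05, 0.20, 0.75]
print([classify(p) for p in readings])  # ['hover', 'rest', 'click']
```

The key point is that resting fingers never trigger actions on their own, which is what lets all ten digits sit on the pad at once.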
[Clayton] goes further with his design. He’s come up with a graphical user interface concept that should be incredibly simple to implement. The example is a Linux-based system that modifies how, where, and when menus and windows are used. The multitouch pad has zones along the left and right edges that control the menu system. A single finger acts the same way a mouse cursor does. Two fingers handle click-and-drag as well as pinch zooming. Three fingers do the same for whole windows.
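The finger-count mapping described above can be sketched as a simple dispatch table. This is not Clayton Miller’s actual implementation; the function and action names are assumptions made only to show the idea:

```python
# Illustrative sketch of the finger-count mapping: one finger moves
# the cursor, two fingers drag or pinch-zoom content, three fingers
# do the same to whole windows. Names are invented, not from 10/GUI.

def dispatch(finger_count: int, pinching: bool = False) -> str:
    """Return the action for a touch group of `finger_count` fingers."""
    if finger_count == 1:
        return "move cursor"
    if finger_count == 2:
        return "zoom content" if pinching else "drag content"
    if finger_count == 3:
        return "zoom window" if pinching else "drag window"
    return "ignore"

print(dispatch(1))                  # move cursor
print(dispatch(2, pinching=True))   # zoom content
print(dispatch(3))                  # drag window
```

Escalating the same gesture from content to window by adding a finger is what keeps the scheme learnable.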
The video is well made and the concept seems like it could be right around the corner. Possible caveats to widespread adoption include the learning curve for a transition from a mouse to this, as well as the dexterity necessary to use it well. We’d like to get our hands on one, and would be interested in working with something similar to the BumpTop to manage data and organize our digital storage in a more physical way.
37 thoughts on “10gui: Multi-touch For All Ten Digits”
Looks like a wonderful alternative to the mouse; can’t wait to see it commercialized.
The technology to do this is here. It will take a few iterations to get it right. Not sure if I agree about the finger occlusion problem, as the iPhone is not difficult to navigate even with fat fingers obscuring the view! An excellent example of getting one’s point across.
I complained about this the other day, who pronounces GUI ‘gooey’?
Looks interesting. Flawed, but interesting. This looks like it will take up a lot of room, and being designed for both hands, it will not allow the device to be situated to the right of the keyboard in the traditional desk setup many of us have.
The other major problem I have had with touch and multitouch devices is, ironically enough, feel. Whether it’s a laptop trackpad, an iPod touch, or an old-school touchscreen, they have no feel, no response to what you are doing, unlike a keyboard or mouse, which have that mechanical confirmation of what you are doing rather than a visual one.
That said, the concept looks awesome. I think it would be a little easier to get used to if it were a single-hand device. But I am intrigued and will be keeping my eye on this one.
So how do I make it?
I want this..
Instructions on how to start building a prototype would be great.
Open-sourcing the software for use with our own multitouch pads would be great too.
An interesting variation that would probably not be practical for normal usage but might work well for specific application would be to have a dual display – one horizontal, one vertical. The horizontal display would mirror the vertical one, and would be touch sensitive. This would give you both the ability to touch the screen itself, and the ability to see the screen without it being occluded by your hands/fingers.
I like this idea a lot. I do have a few concerns, though.
As for the desktop concept, I agree with their concept of simplifying it. A “3D Desktop” is a bad way to go, it just means more clutter and more brain power to figure out where objects exist in the desktop’s space. The linear approach is a pretty good idea, best I’ve seen yet, but it seems to have a drawback. People do actually like, and use, multiple application windows at one time. Instant messaging applications are a perfect example. How does that fit into a linear view? Its application window and contacts list are not suited for a linear desktop, but perhaps those could be rethought in order to work in this manner.
Another question I have: Is a keyboard necessary anymore? One of the last images shows this input device sitting directly in front of a keyboard. With a full 10-finger input scheme, they could just increase its size a bit and completely remove the need for a keyboard. Yes, a keyboard provides useful tactile feedback to the user, but perhaps retraining the user’s means of textual input to use something with less tactile feedback could help reduce physical desktop clutter. It would also create less strain on the user, as he/she would not have to move from using one input device to another.
I’m not at all sold on this line of thinking. I’m extremely comfortable using a keyboard and mouse. I personally have no desire for multitouch interfaces – it just seems too gimmicky and unnatural in most implementations. Not only that, think about your average/novice computer user (grandma) – they can barely comprehend a single point of interest, let alone the need for multitouch. Next-gen PC input should be designed starting with the comfort and posture of the user.
@VV: I agree – the idea here is that multi-touch will give some sort of unprecedented level of interaction, but that’s just not happening until something like a Tangible/Textured Multi-Touch interface arises.
But, of course, all of this will become moot when computers are mind controlled.
I like the idea of having the touch part down by the keyboard and having crosshairs on the monitor, but I don’t like their GUI at all. Imagine you’re referencing something: you’re going to have to keep scrolling back and forth. We will see how it works out in the end.
@James question – Think Chinese water torture, only your fingers are the drops of water. This lesson has been learned once in projection keyboards.
I like this idea, but I think it downplays the importance of multitouch interactions within applications too much in favor of interacting with the window environment. Generally speaking, most of what I do on the computer is within applications (save for the initial step of loading and arranging the apps). A lot of apps could have the potential to use many more than just two or three fingers.
I already have a great input device. The keyboard, the one and only input device needed. In combination with the great window manager Ratpoison of course! Say goodbye to the rodent!
I want one. Make it happen.
I like the idea, but don’t think it will work with the touch area below the keyboard. The touch area could be placed in the middle of the keyboard in between the two halves that are usually split on ergonomic keyboards.
Wow that looks amazing.
I like the touch interface very much, but not the keyboard.
As said before, why not get rid of it totally?
The GUI would run very nicely on a wider-than-wide screen. I saw a cinema LCD screen a couple of days ago and it was amazing. I think it was 21:9, if I remember correctly.
That was a huge screen and I loved it!
But at 4000 euros, the 60-inch ultra-wide screen is going to remain a dream.
Does anybody know if you can buy a multitouch capacitive-resistive combo drawing tablet yet?
I would like to make a multitouch interface myself, but I don’t want a very tall box or anything blocking the camera view. Is it possible to register touches with a camera mounted on top of a screen?
Does anyone have a Wacom Bamboo tablet? It seems quite small but totally usable as a touchpad.
Is the multitouch implementation done well?
Is it compatible with touchlib?
I saw this the other day. My idea for integrating a keyboard: you have ridges on the outlines of where keys would go (possibly fixed, possibly raised and lowered), and make the surface actually clickable in those areas. That is, if pressed down hard enough, the surface buckles in a bit, like a traditional keyboard.
Then, you could do cool virtual keyboard stuff, like selecting accented characters a la iPhone, but still have the feel of a regular keyboard.
The biggest problem with that approach is determining whether the device should be in keyboard or multitouch mode.
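One plausible (and purely hypothetical) heuristic for that mode decision: many near-simultaneous, stationary presses look like typing, while a small number of sliding contacts looks like pointing. Nothing like this appears in the source; the thresholds and structure below are invented to illustrate the trade-off:

```python
# Hypothetical heuristic for the keyboard-vs-multitouch ambiguity:
# stationary presses are treated as keystrokes, moving contacts as
# pointer input. Each contact is a dict with 'moving' and 'pressed'
# flags; the rules are illustrative, not from any real driver.

def infer_mode(contacts: list[dict]) -> str:
    """Guess whether the surface is being typed on or pointed with."""
    pressed = [c for c in contacts if c["pressed"]]
    moving = [c for c in contacts if c["moving"]]
    if pressed and not moving:
        return "keyboard"   # stationary press(es): likely keystrokes
    if moving:
        return "pointer"    # sliding contact: likely cursor control
    return "idle"           # resting fingers only

print(infer_mode([{"moving": False, "pressed": True}]))   # keyboard
print(infer_mode([{"moving": True, "pressed": False}]))   # pointer
print(infer_mode([{"moving": False, "pressed": False}]))  # idle
```

The obvious failure case is a stationary click in pointer mode, which is exactly why the ambiguity the comment raises is hard to resolve with heuristics alone.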
Looks hokey to me.
I think this is a great idea, but for tabbed applications, have three-finger app scrolling go up and down to swap tabs/opened files in the same program.
Also, a horizontal-vertical-horizontal setup would seem to make sense too.
Looks interesting. They could also have the touchpad be a screen in order to remove the keyboard altogether. I think it’ll be one of those things that, until you get to play with it for real, will seem like more trouble than it’s worth.
This is not a very good idea and there are two big reasons why.
The first is indirectness of control. Anybody out there who has used a graphics tablet (like a Wacom) will tell you that what they really want is a Cintiq or equivalent thereof. (For those that don’t know, a Cintiq is a computer drawing tablet with the screen built in. You draw directly onto the screen.)
When you use a regular tablet it can be very good for inputting lines and the like, but because the input and output are too far apart there is a kind of learning curve. Whereas with something like a Cintiq it is just like everyday physical manipulation, where you and what you are working with interact directly.
This is one of the reasons people like touchscreen and multi-touch interaction; it has interactions that seem more real and thus are easier for everyone to grasp.
The other reason that this is a bad idea is the unnecessary complexity of the continuum-style desktop.
Imagine you find five people who have never used a computer but all share the same aptitude for it after being taught with the mouse i.e. they can all complete a task on the computer in more or less equivalent time. This time would be your baseline.
Now imagine you take the five and give each of them a different method of control. One gets the 10gui, another the standard multi-touch, another a graphics tablet, another a cintiq, and the control is the guy who keeps the original mouse.
They are each taught to use the input method they’ve been given and have an hour to familiarise themselves with it.
Then they are tested. All five are given tasks to complete that require mostly mouse input. In the end the one using the mouse would probably be fastest still just because of familiarity but the rest would fall into a fairly predictable pattern.
Touch/multitouch, then Cintiq, then tablet, then 10/GUI.
In the end the 10/GUI system would be left in the dust in terms of both time and accuracy. But all this becomes even more pointless when considered against similarly priced BCIs and haptic interfaces. (They aren’t available yet, but they will be.)
I like the idea of different numbers of fingers controlling different levels of functionality, but I agree that the example setup showing the interface directly below the keyboard looks awkward. Shifting between the two effectively would be difficult. However, I don’t believe removing the keyboard is the solution. I think the most likely application of this kind of system would be similar to the keyboard/mouse configuration currently in use: right hand on the interface, left on the keyboard. If using both hands on the interface is necessary, perhaps it could be divided into two parts, one on either side of the keyboard. I think a multitouch system like this one could easily replace a mouse, but not a mouse AND a keyboard.
Having a giant touchpad below the keyboard wouldn’t work for me, because I like to rest my hands below the keyboard and I’d end up having my palms doing a whole lot of clicking. Instead they should combine the keyboard with the touchpad and just have a bar on the top that you touch to switch between keyboard and mouse.
As for their horizontally stacked windows, it would be nicer if you could shrink them just horizontally until they are just title bars, sort of like the Xbox, instead of turning them into tiny icons. I still like the openness of my desktop, where I can arrange windows however I want. Oh, and GUI is definitely pronounced gooey. :D
@K: If there were a camera at the bottom end of the screen recording your hands, you could opt to blend an image of the hands into the computer screen. That would greatly enhance the visual feedback experience, even more than the graphical representation of the fingers as currently suggested.
@K’s point on indirectness of control — I disagree. I use a Bamboo (just recently upgraded to one with multi-touch… the implementation isn’t good, but I’m hoping it’ll improve via support or OSS) primarily for drawing and taking notes. I’ve also used Cintiqs in the past. The problem with the Cintiq is even though the technology is amazing and very effective — there’s a slight delay. It’s hard to notice usually, but if you drag your stylus across the screen at a decent clip, “drawing” a line in Photoshop, for instance, there’s a not-insignificant gap between the tip of your stylus and the “ink” on the document, or the cursor on the screen. The delay is still there if you use an indirect version as well, but it’s less noticeable /because/ of the separation. My experience with both has led me to conclude that it’s a personal preference, but my friend (a graphic designer) uses the Cintiq for certain things, and an Intuos (non-screen model, higher-end than the Bamboo line that I use) for the others. Once you use a single device heavily enough, you mentally memorize the mapping of surface to screen — it takes some getting used to, but it’s very workable and I can easily click a small button anywhere on screen using my tablet, without having to “hover and find”. So it comes down to personal preference.
As for this idea — it’s interesting. I like the idea of throwing out old design practices so that we can better explore what’s possible with the new technology — while I doubt we’ll be moving entirely away from the traditional model (relative mousing utilizing an X/Y coordinate, which quite frankly I wouldn’t /want/ to get rid of entirely) it’s definitely worthwhile to explore and come up with ideas without worrying about old conventions. As larger, n-finger touch surfaces become more and more standard (a laptop that had a smart touch surface for the entirety of its palm rest, and was intelligent enough to know not to react to a resting palm, would be spectacular, for instance), brainstorming for application and usability improvements is going to become crucial. It’s great to talk about ideas, even if a complete overhaul of the system’s GUI shell is unrealistic.
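The resting-palm rejection wished for above is commonly approximated by contact-area thresholding: a palm presents a much larger contact patch than a fingertip. A purely illustrative sketch (the threshold and data shape are invented, not from any real driver):

```python
# Rough palm-rejection sketch: drop any contact whose area is too
# large to be a fingertip before gesture processing. The threshold
# value is invented for illustration.

PALM_AREA_MM2 = 400.0  # fingertip contact patches are well under this

def reject_palms(contacts: list[dict]) -> list[dict]:
    """Keep only contacts whose area looks like a fingertip."""
    return [c for c in contacts if c["area_mm2"] < PALM_AREA_MM2]

touches = [
    {"id": 1, "area_mm2": 80.0},   # fingertip
    {"id": 2, "area_mm2": 900.0},  # resting palm
]
print([c["id"] for c in reject_palms(touches)])  # [1]
```

Real drivers combine area with shape, position, and timing, but the area cutoff alone conveys why a resting palm can be ignored while fingertips still register.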
As for people who think multitouch is just a fad: yeah, you’re right in some ways, some of the stuff people are claiming MT is good for is just ridiculous. However, it’s a useful new technology that you shouldn’t discount in its entirety. I didn’t expect it to be very useful when I got it on my current computer (MacBook Pro, pre-unibody so it still has a distinct mouse button) moving from a Sony with a classic trackpad (which I really loved, honestly). However, it really has changed my usage of the computer — firstly, it’s eliminated the dead space on the right and bottom of the trackpad that was for scrolling. Two-finger scrolling took about ten minutes to get used to, and not having /any/ dead space on the trackpad just feels so much nicer. Three-finger gestures make for nice Home/End, Back/Forward commands mapped in my browser, and the two-finger right-click is elegant in how natural it is once you get good at it — it lets your thumb rest anywhere on the mouse button (and on more recent models, anywhere at all :P… not sure how I feel about that, it’d definitely be weird at first) rather than constraining it to the left half when you want to click. This lets you basically have your index finger and thumb in a “pinch” configuration, vertically aligned. It’s hard to describe, but anyone who’s spent much time with one of these models will understand what I’m saying and likely agree with at least some of it. I personally haven’t found as much utility for the pinch/reverse-pinch and rotate motions… but some people love them. They’re great on devices with smaller real estate, where zooming is a more common action. Honestly, I see no reason, since we have the technology, to /not/ move in the multitouch direction, even if the changes it brings are less revolutionary than some might predict.
I’d use two separate units for each hand. I’d have a button on one to overlay an on screen keyboard with key highlighting. I’d use a refined driver and app service framework for it.
Conceptually it’s impressive. I’d love to try it out. It’d be interesting to see how well it works in a real-world context, rather than just a demo.
I love the idea, but it wouldn’t be functional for shooters… I need my mouse for those epic headshots! Ever try to play a shooter on a laptop touch pad? It just isn’t functional…
Other than shooters, I think it’s a great idea! It would be great for video editing/photo editing, and I’d love to try an RTS on it!
I think the makers of the Optimus Maximus keyboard had plans to make something like this. Google “Optimus Tactus”
1: People have used horizontal tables to work on paper documents and physical objects without much complaining about neck pain. Using a touchscreen in a drafting-table configuration should not be too difficult. Perhaps midway from horizontal to vertical is best. The problem of visual obstruction from the hands would remain, though.
2: IMO, it will be more difficult to find a window when constrained to one dimension; there is less “distance to go” to the desired window if the window manager uses both dimensions of the screen. The user only has to keep a certain order on things (and bigger monitors, ideally).
3: The (multi)touch screen and the (multi)touch pad are not mutually exclusive. The screen could also tilt forward and down to a drafting-table position when convenient.
I love this IDEA
I think, though, the keyboard can be cut out, with a few changes here and there.
OF COURSE IT’S NOT PERFECT! We’re just getting started.
Please don’t complain unless YOU’RE GONNA DO SOMETHING ABOUT IT AND POST IT HERE!
I would love to get a hold of the source code and build one myself…
@Edd Me, most of the people I work with.
GUI = Gooey
SQL = Sequel
Qt = Cute
Why do I have to open one app at a time, though? Couldn’t I open a web browser, a text editor, and a photo editor without bringing up a menu for each one? Kewl idea tho.
Why not emulate this on the trackpad of a MacBook Pro under Linux?
There is the option to install Linux on a Mac,
so there is the option to use the multitouch pad and simulate the 10/GUI.
The development shouldn’t be a problem.
Are you going to Pwn2Own your smartphone?
I like some of Clayton Miller’s ideas too, so I started working on it. I hope to hear what you think of it: http://wpfcon10uum.codeplex.com/
Kudos to Mr. Miller, and I have nothing but respect for his pioneering intellect, but I think 10/GUI may be a solution in search of a problem. The issue of your hands and wrists blocking the screen is overstated. The only reason it’s a problem on the OSes of today is that those operating systems were all designed with a “top-down” philosophy (i.e., all the important buttons and menus are at the top of the screen, with the taskbar/dock/start menu/etc. on the bottom).
Simple solution to that problem: just orient your GUI to favor the bottom of the screen. Put all the menus, buttons, and minimize/maximize/close controls at the bottom of the screen or active window. With this layout, you’d have an OS well-suited to the kind of multitouch control we’re familiar with from iPad/iPhone-type devices, combined with a good onscreen keyboard (perhaps with haptic feedback) that comes up when needed and disappears when not.
You could get really good (and easy to learn) functionality with this type of setup, without needing a physical keyboard, mouse, or trackpad. Of course, you’d probably still want a physical keyboard for extensive typing, but that could easily be housed in a keyboard tray under your drafting table touch screen. Picture the ergonomics and you get an idea how well this could work.