I've been on an Apple binge lately. What with the ubiquitous iPod, the (soon-to-be ubiquitous) iPhone, Apple's transition to Intel processors and Steve Jobs' style of presenting cool stuff, it's easy to get wrapped up in the cool side of modern computing. So to what does Jobs attribute Apple's success in recent years? Simple user interfaces.
Normally, the term "user interface" refers to the buttons, menus and icons that make up a program's display to the user. However, when I think of user interfaces, one example pops into the forefront of my mind: "Minority Report." Though most people have become accustomed to interacting with computers using a keyboard and mouse, clicking on icons and buttons and typing phrases into search engines, the concept of using hand gestures to move objects around the screen is much more intuitive and exciting.
We've come close. For many years, specialized devices such as card readers at retailers have used touchscreen interfaces, although many have required the use of a stylus or heavy, awkward pressing. More recently, companies have started demonstrating screens that support multiple touches at once. Videos of professors fluidly interacting with screens using both hands have captured the imagination of internet surfers everywhere and have inspired many students around the country (my roommates included) to attempt to emulate the technique.
The first consumer product with multitouch functionality will be the iPhone itself, set for release in June, although it remains to be seen how many features will support that method of control. Additionally, Nintendo has had success with its motion-sensing Wii controller, enabling users to interact with entertainment in new ways.
The problem with these new input devices is one of processing power. A mouse and keyboard are simple because of their discrete nature. A key is either down or it isn't, and the keyboard can easily signal the computer every time a button is pressed. Although input from a mouse feels much smoother, it, too, is merely sending discrete movements to the computer for interpretation. Both of these methods require little to no processing and filtering on the computer's end, so the response to the input is lightning fast.
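To see just how little work that is, consider a minimal sketch in Python. The event format here is invented for illustration, not any real operating system's API: each report is already an unambiguous fact the computer can act on immediately.

    # A minimal sketch of discrete input handling. The event format is
    # invented for illustration, not any real operating system's API.

    def handle_key_event(event):
        # A key event is already an unambiguous fact: which key, down or up.
        state, key = event                      # e.g. ("down", "a")
        if state == "down":
            print(f"key {key} pressed")
        else:
            print(f"key {key} released")

    def handle_mouse_event(position, dx, dy):
        # A mouse just reports discrete movement deltas; the computer only
        # has to add them to the cursor's current position.
        x, y = position
        return (x + dx, y + dy)

    handle_key_event(("down", "a"))
    print(handle_mouse_event((100, 100), 3, -1))  # -> (103, 99)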
Once you decide to start interacting with the user on a more fluid level, the amount of effort needed to decipher the user's wishes grows exponentially. Take, for instance, a debit card touchscreen. Was that quick tap a press for a PIN entry, or merely an accidental bump? Was Tom Cruise trying to fast-forward the video or throw it onto another screen? The processing time needed to interpret the motions causes the system to get bogged down in algorithms, and suddenly it takes several seconds to get any response.
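Here's a toy gesture classifier, again in Python, that shows where the effort goes. Every threshold is an illustrative guess, not a value from any real touchscreen, and even this simplified version has to buffer an entire trace of samples before it can decide what the user meant:

    import math

    def classify_touch(samples):
        # Classify a touch trace given as a list of (time, x, y) samples.
        # All thresholds are illustrative guesses, not values from any real
        # device: the point is that the system must buffer and analyze many
        # samples before it knows what a single touch meant.
        duration = samples[-1][0] - samples[0][0]
        distance = math.hypot(samples[-1][1] - samples[0][1],
                              samples[-1][2] - samples[0][2])
        if duration < 0.03:
            return "accidental bump"   # too brief to be deliberate
        if distance < 5:
            return "tap"               # finger stayed put: a PIN press
        if distance / duration > 200:
            return "flick"             # fast sweep: throw to another screen?
        return "drag"                  # slow sweep: scrub the video?

    # A half-second press that barely moved reads as a deliberate tap.
    print(classify_touch([(0.0, 50, 50), (0.25, 51, 50), (0.5, 51, 51)]))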
Engineers and programmers are trying to simplify the problem of detecting arbitrary motions and gestures. Instead of mounting a camera and figuring out where someone's fingers are on the fly, make users wear gloves with sensors and track individual points. Instead of tracking finger movements across an image, make them press on a surface and have the surface tell you where the fingers are.
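A rough sketch of that trade-off, with made-up data rather than any real camera or sensor API: finding fingers in a video frame means searching every pixel, dozens of times per second, while a touch surface simply hands the computer discrete coordinates.

    def fingers_from_camera(frame):
        # The hard way: scan every pixel of a video frame for fingertip-like
        # brightness (the threshold is made up). The cost grows with image
        # size, it repeats dozens of times per second, and a real system
        # would still have to cluster and track the hits afterward.
        return [(x, y) for y, row in enumerate(frame)
                       for x, brightness in enumerate(row)
                       if brightness > 200]

    def fingers_from_touch_surface(contacts):
        # The easy way: the sensor grid has already done the work. Each
        # contact arrives as discrete coordinates, much as a key press
        # arrives as a key code.
        return contacts

    # A 4x4 "image" with one bright spot vs. a direct sensor report.
    frame = [[0,   0, 0, 0],
             [0, 255, 0, 0],
             [0,   0, 0, 0],
             [0,   0, 0, 0]]
    print(fingers_from_camera(frame))            # found by searching: [(1, 1)]
    print(fingers_from_touch_surface([(1, 1)]))  # reported directly: [(1, 1)]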
My bet is that the motion-sensing future is coming and may very well be ushered in by Apple, but these ideas are still many years from fruition. For now, we're stuck in the Stone Age with our boring old keyboards and mice. Besides, who wants to spend hours a day holding their arms up just to move some documents around?
Keaton Miller is a junior majoring in math and economics. He's looking forward to the day when computers read his mind and write his papers exactly the way he would, except better. He would rather have written about the monument to Ron Jeremy's anatomy that was constructed across the street from his house this weekend, but figured it wasn't scientific enough. Let him know your dreams at keatonmiller@wisc.edu.