Cutting the Keyboard

My primary machine has always been a laptop with a full keyboard. For six months I committed to doing my primary mobile computing on an iPad, and I was mostly unimpressed. However, most of the tech world now agrees that the tablet form factor is the winning strategy, so a paradigm shift away from the keyboard as the primary input mechanism seems inevitable. The question is what stubborn users like me will move to once physical keyboards are eliminated entirely. To answer that, we also need to consider where the keyboard still holds advantages and how those advantages could be replaced with something equal or better.

Note: I will be commenting on both the physical keyboard and virtual, on-screen keyboards like the iPad’s. Unless I note otherwise, any mention of “keyboard” refers to the physical keyboard.

Tactile Feedback

The keyboard provides natural, haptic feedback to the user. I love Apple keyboards for the soft bounce of the keys and for how long they seem to last. You also know exactly what you will get when pressing a key on a keyboard, whereas on a tablet the only feedback may be the on-screen result of the key press.

The computer has primitive, machine-exposed roots. Punch cards were once the primary input mechanism: when a user wanted to give the system input, the only route was a physical action you could feel. The keyboard is the latest derivation of this concept. Once the keyboard is phased out, the concept of feedback will be radically altered.

While some Android devices provide a small vibration on each key press, the vibration does little to indicate which key was pressed, given how tightly packed the keys on an on-screen keyboard that small are. Larger tablets would need stronger vibration motors, which only add weight to devices designed to be lighter and faster. So the physical feedback you experience is far less informative than it was in the punch-card era or is in the current keyboard era.
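
For concreteness, here is a minimal Java sketch of how such per-keypress vibration is typically wired up on Android (the KeyView class and onKeyCommitted hook are hypothetical stand-ins for a key in a custom on-screen keyboard). Note that the same buzz fires no matter which key you hit, which is exactly why it conveys so little.

    // Hypothetical key view inside a custom on-screen keyboard.
    import android.content.Context;
    import android.view.HapticFeedbackConstants;
    import android.view.View;

    public class KeyView extends View {
        public KeyView(Context context) {
            super(context);
            setHapticFeedbackEnabled(true); // users can still disable this system-wide
        }

        // Called when the key is committed; the identical buzz fires for
        // every key, so the vibration says *that* you pressed something,
        // not *what* you pressed.
        private void onKeyCommitted() {
            performHapticFeedback(HapticFeedbackConstants.KEYBOARD_TAP);
        }
    }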

Curiously, the F and J keys on the iPad’s on-screen keyboard do show the small line found on their physical counterparts, though it serves no functional purpose. It likely exists to stick to the physical metaphors Apple famously incorporates into its apps, including Lion apps like iCal.

Comfort

After using an iPad for six months, I found that taking notes on it was effectively impossible for me. While creating small events and short memos wasn’t daunting, longer writing such as lecture notes became cumbersome on a tablet. For one, tablets do not come with a stand by default, so without a third-party case you must resort to your lap, which means straining your neck downward whenever you use the device. Even with a third-party stand or case, the screen may not be elevated to where your hands are comfortable and your eyes can see the content. For example, my case of choice had a sturdy, sleek stand, but its angle was just right for the halogen bulbs in the lecture room to glare off the screen, forcing me to prop myself over the device to block the light from behind me.

On a keyboard, taking notes is commonplace. We have all been raised on desktop computers with a slightly raised keyboard below a screen in front of us. Touch-typing is widely taught as early as middle school so kids can use a computer with greater ease than pen and paper. Most college classes now expect students to have a laptop with them.

With no tactile feedback or bounce, the tablet’s screen keyboard becomes tiring to type on because of the tendency to tap harder on the screen. To me this stems from the fact that we implicitly demand feedback from the device beyond what the screen shows, so tapping hard is a subliminal message to the tablet: “Did you get that?” All that harder tapping tires your finger muscles faster, so typing anything more than 250-500 words can be a pain.

Reinventing Input

Input is, generically, any information you want to give to a device. Physical input is how you physically transfer that information into the device. Physical inputs on computers generally include:

  • Keypresses on a physical keyboard
  • Movements/clicks on a physical mouse or set of buttons
  • Movements or taps (including gestures) on a trackpad
  • Movements or taps (including gestures) on a tablet screen (flat glass)
  • Inserting peripherals and making the movements proprietary to those devices

The general approach to inventing an input is to see how you can transform any of the above into a better user experience. Apple popularized multi-touch gestures, just as it popularized the mouse decades earlier. Android keyboards can use Swype-style gestures, recognizing that users are more comfortable tracing a finger across a screen keyboard. It isn’t a far leap to guess that Apple may one day build glass keyboards into its laptops that enable gestures similar to those on the iPad. Perhaps the same on-screen toggles for accent marks will appear under your fingers, beneath the glass or on the screen or both. If tablets completely dominate mobile computing, maybe the laptop form factor will be eliminated entirely and the keyboard will remain optional, as with the iPad. But the concept of key presses will persist as long as we only evolve the items above.
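
To ground the idea that flat glass is itself an input device to be evolved, here is a minimal Android sketch (the activity and its class name are my own illustration) of how raw touches on the screen are mapped onto higher-level gestures like taps and flings:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.GestureDetector;
    import android.view.MotionEvent;

    public class GlassInputActivity extends Activity {
        private GestureDetector detector;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            detector = new GestureDetector(this,
                    new GestureDetector.SimpleOnGestureListener() {
                @Override
                public boolean onSingleTapUp(MotionEvent e) {
                    // A tap is the glass-surface analogue of a key press.
                    return true;
                }

                @Override
                public boolean onFling(MotionEvent e1, MotionEvent e2,
                                       float velocityX, float velocityY) {
                    // A fling/swipe has no physical-keyboard equivalent.
                    return true;
                }
            });
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // Route every raw touch through the gesture detector.
            return detector.onTouchEvent(event);
        }
    }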

A true reinvention of input could be based on voice. Apple’s recent move to bake in Nuance-based voice recognition lets people give the phone natural commands to begin typing a text message or web search. Dragon Dictation already does this, but privacy issues and the lack of native OS integration have kept it a non-mainstream product. Android offers Google search with voice input, and as of last month Google voice search is also available on your computer. Voice input is a radically different approach from traditional input, but the obvious problems of recognition intelligence have slowed the technology’s growth thus far.
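
On Android of this era, wiring voice input into an app takes only a few lines against the stock RecognizerIntent API. The sketch below is a minimal, assumed setup (the activity and request code are mine); the list of candidate transcriptions it returns is exactly where the intelligence problem surfaces.

    import android.app.Activity;
    import android.content.Intent;
    import android.speech.RecognizerIntent;
    import java.util.ArrayList;

    public class VoiceInputActivity extends Activity {
        private static final int VOICE_REQUEST = 1;

        private void startVoiceInput() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak your note");
            startActivityForResult(intent, VOICE_REQUEST);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_REQUEST && resultCode == RESULT_OK && data != null) {
                // Candidate transcriptions, best guess first; recognition
                // errors show up here as wrong or missing candidates.
                ArrayList<String> matches =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
    }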

Eye-tracking, and inputs based on intentionality generally, are hot topics in cognitive science research and future-technology efforts. The concepts are not unlike something out of Star Trek: a user looks at an application, or thinks about it, to launch it instantly. Computers could learn a user’s behavior patterns and infer what the user would like to do next. For example, if someone checks their email and wishes to reply to a message, the computer would pick up on that brain signal, or notice the user’s eyes move to the reply option, and automatically suggest (or simply open) a new composition. This futuristic approach has its own obvious caveats. How do we measure intentionality, and how do we know we are right? How do we calibrate the technology for different eyes and brains? These questions have complex answers, so this revolution is unlikely to occur any time soon, but the concept is already being explored in primitive forms today.
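
One of those primitive forms is dwell-based selection: a gaze that lingers on a target long enough is treated as an intentional “click.” The toy Java sketch below is entirely hypothetical (the target names, sample timing, and 800 ms threshold are all made up), and choosing that threshold per user is precisely the calibration problem raised above.

    public class DwellSelector {
        private static final long DWELL_THRESHOLD_MS = 800; // assumed dwell time

        private String currentTarget;
        private long dwellStartMs;

        // Feed one gaze sample: the target under the gaze point and its
        // timestamp. Returns the target to activate, or null if none yet.
        public String onGazeSample(String target, long timestampMs) {
            if (target == null || !target.equals(currentTarget)) {
                currentTarget = target;     // gaze moved: restart the dwell timer
                dwellStartMs = timestampMs;
                return null;
            }
            if (timestampMs - dwellStartMs >= DWELL_THRESHOLD_MS) {
                dwellStartMs = timestampMs; // fire once, then re-arm
                return target;
            }
            return null;
        }

        public static void main(String[] args) {
            DwellSelector selector = new DwellSelector();
            // Simulated samples: the gaze settles on "Reply" long enough to fire.
            String[] targets = {"Inbox", "Reply", "Reply", "Reply"};
            long[] times = {0, 100, 500, 1000};
            for (int i = 0; i < targets.length; i++) {
                String fired = selector.onGazeSample(targets[i], times[i]);
                if (fired != null) System.out.println("Activate: " + fired);
            }
        }
    }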
