I have been working on an ebook for Roamtouch, gestureKit, and their Indiegogo campaign (coming soon; stay tuned for September 3!). So gestures have been at the top of my mind lately, and I believe they will emerge as a new way to interact with devices. More to come later this week; this is definitely a Part 1!
Gestures are language. We use gestures to complement our speech, elaborating and exaggerating a story. Some people can't tell a story unless their hands are available to add emphasis to their points and visually illustrate their story in the air.
Sign language is a gesture language. Mimes tell their stories through gestures. Gestures dominate the game of charades.
Sometimes we forget that you don't need to have a voice to communicate.
This is why I don't understand why we don't treat device gestures like a language, or at least not as much as we should. We treat device gestures as if our fingers were an extension of a mouse or a keyboard. We point and select. We type. But do we really interact with the device?
The mouse, the OS menu, even the UI were created for us by engineers to communicate with computing devices. Engineers created command languages so machines could "do" something; it was a language created for a non-living intelligence, and it is expanded daily to get a machine to complete an action.
If we created a language to communicate with devices that we also created, why didn't we make the way to communicate with a device consistent with how we communicate with each other?
Why did we create computer language in such an inorganic way?
I have been fascinated with language for years. I almost studied linguistics (ok, so I was also almost a math major, and dozens of other majors. Ah, the joys of being young). In grad school, I loved the philosophies and teachings of Derrida and his views on language, specifically what he defined in his book, Of Grammatology.
Let's start with the definition of phonocentrism, one of the basic concepts in defining language:
Phonocentrism is the belief that sounds and speech are inherently superior to, or more primary than, written language. Those who espouse phonocentric views maintain that spoken language is the primary and most fundamental method of communication whereas writing is merely a derived method of capturing speech.
Derrida felt that phonocentrism downplayed written language and communication, and in a way he was right. Written language is just as much its own language, so to speak (no pun intended), as spoken language.
Of Grammatology (1967) is an examination of the relation between speech and writing and of the ways in which speech and writing develop as forms of language. According to Derrida, writing has often been considered to be derived from speech, and this attitude toward the relation of speech and writing has been reflected in many philosophic and scientific investigations of the origin of language. However, the tendency to consider writing as an expression of speech has led to the assumption that speech is closer than writing to the truth or logos of meaning and representation. Derrida argues that the development of language actually occurs through an interplay between speech and writing, and that because of this interplay, neither speech nor writing may properly be described as being more important to the development of language.
Speech and writing are different expressions of the same language. Each contributes to the development of the other. This is why we could say that texting is changing both written and spoken language. We now use expressions like "O-M-G," "cray cray," and other text-only terms in spoken language. Texting, or typing, which is writing, is changing spoken English.
The typed influences the written, which influences the spoken language (and vice versa). Each side refines how we communicate with the other: the written refines and consolidates the words we use; the spoken is more expressive and conveys thoughts more directly.
Derrida was right.
But how would Derrida's perspective work with gestures?
Probably the same way.
Gestures for devices should be created in a more organic way, not through definitions made in operating systems by a select few. Gestures will evolve alongside the written and spoken word, associating an action or task with a motion. Sure, we need some initial definitions, like we do with swipes and taps. But what about "buy now" or "cash a check"? Ideally, those gestures should be defined by a group, the way a language is defined by a society.
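To make the idea concrete, here is a minimal sketch of what a community-defined gesture vocabulary might look like in code. Everything here is hypothetical for illustration (the class, method names, and gesture names are all made up, and this is not GestureKit's actual API): anyone can propose a definition, and a recognized gesture simply looks up its meaning, the way a word does.

```python
class GestureVocabulary:
    """A shared registry mapping gesture names to actions,
    analogous to a dictionary of words and their meanings."""

    def __init__(self):
        self._actions = {}

    def define(self, gesture, action):
        # Any "speaker" of the gesture language can propose a definition;
        # in a real system, the community would converge on one over time.
        self._actions[gesture] = action

    def perform(self, gesture):
        # Look up and run the action bound to a recognized gesture.
        action = self._actions.get(gesture)
        if action is None:
            return f"unknown gesture: {gesture}"
        return action()


vocab = GestureVocabulary()
vocab.define("swipe-right", lambda: "next page")
vocab.define("circle-dollar", lambda: "buy now")  # a community-coined gesture

print(vocab.perform("circle-dollar"))  # buy now
```

The point of the sketch is the lookup, not the recognition: the hard part of motion sensing stays in the OS, while the vocabulary itself, which motion means "buy now", stays open for people to define.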
We know what "I love you" is in sign language, a gesture language. Someday, we will know what "on," "off," "call," "buy now," and so on are in machine gestures. And given how much our language changes every day, I'm sure more gestures will keep evolving.
More to come.