[Issue 1]
This post was originally published on www.albertdong.com and was transferred here on June 5th, 2019.
~
Working in AR has led me to spend an unhealthy amount of time recently thinking about the new interaction paradigms that may arise alongside this new technology. One of the most compelling ones for me is gesture-based interactions.
Looking at the current major interaction paradigms, we can abstract two commonalities crucial to their continued existence: 1) they are agnostic across most population demographics, including age, language, and other major factors native to the human condition, and 2) they require relatively similar levels of effort in the general case and a distinctly lower level of effort in their specific niche cases.
The traditional notions of gesture-based interaction, as exemplified by Microsoft Kinect and the cult classic Minority Report, are antithetical to both of these commonalities.
Present gestural language is segmented across demographics. Like memes, gestures are created by the collective consciousness of a group and are only used and understood within that group. Those outside it lack the context and mental models to interpret another group's gestures. Demographic divides create the most obvious groups, splintering populations across age, race, and culture.
Present gestural language also contains many complex movements that are exhausting at scale, e.g. pointing with a finger to select objects or placing a finger to the lips. Generally, full-arm movements require a level of effort an order of magnitude above wrist-only movements (writing) and finger-only movements (tapping).
My personal belief is that for a gesture-based interaction paradigm to evolve beyond a gimmick, the interaction has to be composed of a set of primitives that people can combine (with context) to form the command they want, AND it has to be a non-trivial improvement over existing paradigms in non-niche scenarios.
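To make the primitives-plus-context idea concrete, here is a minimal sketch of a gesture "grammar": a small fixed vocabulary of primitives, with the active context determining what a given sequence means. Every name here (the primitives, contexts, and commands) is a hypothetical illustration, not an actual gesture API.

```python
# Hypothetical sketch: a small set of gesture primitives that context
# assembles into commands. All names are illustrative.

PRIMITIVES = {"flick", "pinch", "rotate", "hold"}

# The same primitive sequence can mean different things in different contexts.
COMMAND_TABLE = {
    ("media_player", ("flick",)): "next_track",
    ("media_player", ("pinch", "rotate")): "adjust_volume",
    ("map_view", ("flick",)): "pan",
    ("map_view", ("pinch", "rotate")): "rotate_map",
}

def interpret(context: str, sequence: tuple) -> str:
    """Resolve a primitive sequence to a command within a context."""
    if not set(sequence) <= PRIMITIVES:
        raise ValueError(f"unknown primitive in {sequence!r}")
    # Unrecognized (context, sequence) pairs fall through to a no-op.
    return COMMAND_TABLE.get((context, sequence), "no_op")

print(interpret("media_player", ("flick",)))       # next_track
print(interpret("map_view", ("pinch", "rotate")))  # rotate_map
```

The design point is that users memorize only the small primitive set; context does the combinatorial work, which keeps the vocabulary demographic-agnostic and the per-gesture effort low.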
A few months ago, I was very fortunate to have the opportunity to develop this thesis further by joining the team at Pison Technologies as their first interaction designer focused on gestures. Our research centers on decoding the body's nerve and muscle signals; coupled with an IMU, these signals let us understand a broad range of finger, wrist, and arm movements made by a user.
Unlike traditional gestural interfaces, where movements are translated by cameras using computer vision, Pison gestures are translated by a small wearable wristband, allowing commands as simple as a flick of a finger to be sent discreetly and effortlessly from spatially agnostic locations.
I’m excited to see what we create.