There’s a good deal of buzz around hypothetical details of the upcoming TouchID replacement.
I don’t have a strong opinion about the quality of the experience of unlocking my iPhone with my face. If it works, then great. If it doesn’t, then I’d be disappointed.1 What really gets me excited is how this aligns with three other areas Apple has actually commented on: Siri voice control, A.R., and machine learning.
Instead of a new way to unlock your phone, imagine all four of these technologies coalescing into a new input model: a method of input that doesn’t involve direct interaction with the screen, or shouting at a piece of glass in public, but rather a combination of eye tracking and lip reading. I’m hopeful that we have not stopped imagining new ways to control a computer. The mouse is no more the pinnacle than touch is.
I think of it this way: the mouse (and the finger) is the direct expression of my desire, and producing these actions takes a lot of effort.2 An input method that translates my intent into an action, through a combination of interpreting eye movements, facial expressions, and my daily patterns, seems like the ideal. Subtly moving my lips while a machine learning model interprets those micro-expressions into actions or text on the screen may well be a future goal.
This is certainly an interesting time, as our current technology catches up with our childhood fantasies. Let’s not forget that a lot of those fantasies formed in the ’50s and ’60s and only settled into our imaginations around the same time as the pet rock. I think we can still do better.