In his book The Best Interface Is No Interface (and this accompanying article) Golden Krishna describes how our obsession with screens and screen-based thinking has ushered in an age of shallow apps for everything, each with its very own context and all of them vying for the user’s attention. Today, even large amounts of content are published in bespoke apps rather than on the publisher’s website because … well, we’ve got to have an app, too, haven’t we? Besides, there are DRM and perceived copyright issues, but that’s a subject for another post. In some cases this app-centric approach has even led to app-based designs that are far worse than the ones they replaced; the car key anecdote in the article is a hilarious case in point.
Not only will psychologists and aspiring productivity experts tell you that interruptions and multitasking are bad for getting things done; for creative professionals such as writers like Scott Adams, getting pulled out of the flow is all the more detrimental: ideas may be lost, and while being interrupted by some Facebook push message takes only a second, getting back into the flow can easily take ten minutes or more. Sounds like productivity heaven, doesn’t it?
What Scott Adams suggests as a remedy is to completely rethink digital interfaces from today’s perspective and do away with the cruft and cargo cult accumulated over three decades of home and office computing. His somewhat radical idea is to simply present the user with a blank screen. The user then starts typing or speaking. The computing device (smartphone, tablet, desktop PC, you name it …) then attempts to infer context from:
- user input
- task history
- current location
- current time
This context allows the device to make educated guesses about the user’s intent. If, for example, I start typing ‘Chr’, the device infers that I mean Chris, one of the work contacts I frequently email. It will therefore wrap everything typed henceforth in an email message, send that message and attach it to the appropriate mail thread, all without me having to go through the usual open-app-swipe-swipe-type-tap fest I have to endure when I want to send an email today. The same goes for location and time context: such a device could guess which task I most likely want to accomplish in a given setting (think ‘office at 2 p.m.’ vs. ‘home at 7 p.m.’, for instance).
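To make this concrete, here is a minimal sketch of such context-based intent inference. Everything in it is an illustrative assumption, not a real API: the contact list, the rules, and the function name are made up, and a real system would of course use far richer signals than a handful of if-statements.

```python
from datetime import datetime

# Hypothetical contact data the device might have learned from task history.
CONTACTS = {"Chris": {"channel": "email"}}

def infer_intent(typed, location, now):
    """Guess the user's intent from input, location and time (toy rules)."""
    # 1. User input: does the typed prefix match a frequent contact?
    for name, info in CONTACTS.items():
        if typed and name.lower().startswith(typed.lower()):
            return f"compose {info['channel']} to {name}"
    # 2. Fall back to location/time heuristics.
    if location == "office" and 9 <= now.hour < 18:
        return "open work notes"
    if location == "home" and now.hour >= 19:
        return "play music"
    return "blank screen: keep listening"

print(infer_intent("Chr", "office", datetime(2015, 6, 1, 14, 0)))
# -> compose email to Chris
```

Typing ‘Chr’ wins over the office-at-2-p.m. rule because explicit input is the strongest signal; with an empty input, the setting alone decides.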
In a way, this is how the UNIX command-line interface already works to a certain extent: type a few characters, press TAB, and the shell tries to infer the command or file name you most likely meant. However useful, this kind of command-line autocompletion is in most cases still pretty dumb and considers little context beyond the current file system location.
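The core of that TAB behaviour is just prefix matching, as this small sketch shows (the command list is a made-up sample; a real shell would scan the directories in PATH):

```python
# Toy shell-style completion: an unambiguous prefix completes fully,
# an ambiguous one lists the candidates, like pressing TAB twice.
COMMANDS = ["grep", "git", "gzip", "ls", "less"]

def complete(prefix):
    matches = sorted(c for c in COMMANDS if c.startswith(prefix))
    if len(matches) == 1:
        return matches[0]   # unambiguous: complete the whole word
    return matches          # ambiguous: show the options

print(complete("gr"))  # -> grep
print(complete("g"))   # -> ['git', 'grep', 'gzip']
```

Note what is missing: no notion of what I typically do here, at this time, with these files. That is exactly the context Adams’ blank-screen device would add.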
Another analogy is how Spotlight on OS X and iOS should actually work: as a primary user interface that not only lets you search for objects but also lets you interact with them in every conceivable way. Intelligent agents such as Siri and Google Now come into play here as well. As of now they are hardly more than novelties, but once they actually understand context and let you accomplish tasks accordingly, without having to switch to some app, they will live up to their full potential. This is pretty much how the artificial intelligence depicted in the film ‘Her’ works: ‘she’ takes a backseat, the user interface disappears, and it is precisely in disappearing that it becomes much more powerful, because the user can almost subconsciously accomplish tasks without ever leaving his current context or losing focus.