Personal computing has come a long way in our short lives: as kids we happily typed commands into consoles, complete with ordered optional arguments, and were riveted to the resulting gloriousness of Moon Buggy, Space Invaders, and the absolute kingpin of them all: the megahit prince.exe.
This required us to learn specific keywords that triggered functions, which was already a giant leap forward from where computers had started: we had moved to a language humans could easily understand. The GUI came not long after, complete with pointing device, and the commands disappeared. We could now move a device in our physical world that controlled a digital pointer!
Then came the touch screen… bahn bahn baaaaaaahhhhnnn!
We’re now at the pinnacle of point-and-click, with direct use of our fingers. The interface between digital and human is transparent: we touch and drag things, push buttons, swipe and pinch with perfect ease. Apple calls it magic. And in some ways, it is.
It may feel eons ahead of our Windows 3.1 days, but the interaction is actually identical. This has been an incredible achievement, and it has made technology inclusive for a vast majority of people otherwise afraid of it, but truthfully, we have only improved the interface. Unfortunately, Microsoft’s Sustainable Future has us never leaving this utopia. They seem satisfied with increasing the number of screens around us and the connections between them. But this shiny, touchy-swipey future we have to look forward to is simply more of what we have now, and I truly hope that’s not the best we can do.
The beauty of moving from a command line to point-and-click was the change from us working by the computer’s standards to it working to fit ours. We no longer needed to know specific keywords; we could discover, we could play, we could learn and create. No instructions needed. And the future I look forward to moves that beyond our limited screens and into the world we interact with.
We already have cars that start when you’re near, turn on their headlights when you need them, and run the windshield wipers when they detect rain. Some cars brake to keep you on course and to avoid accidents; they respond to you and to the environment around you. I think more devices need to be this responsive. I want my technology to respond to stimuli I may not directly provide.
Imagine your walls adjusting colour or texture just enough to reflect the weather outside; imagine your foyer adjusting its temperature to ease you outdoors and warn you of the jacket you’d need, all without looking at yet another screen with a number on it. Imagine your email sorted by the context of the project it belongs to; your phone judging your mood and surroundings and redirecting calls that would be intrusive or counterproductive (wink, wink…).
Imagine the volume of your music always perfectly adjusted to the ambient noise; your ladder always the perfect height; your kitchen helping balance your body chemistry by producing just the right supplement… Don’t settle for another screen with more things to click, apps to download, settings to enable… imagine technology that works for you.
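That first one isn’t even magic, by the way: “volume that’s always perfect” is just a small feedback loop between a microphone and a mixer. Here’s a minimal sketch in Python, where read_ambient_db() and set_volume() are hypothetical stand-ins for whatever sensor and audio API a real device would expose:

```python
import random
import time

# Hypothetical device hooks: a real implementation would read a
# microphone level and call the OS mixer. These stand-ins simulate
# a noisy room and print the resulting adjustment.
def read_ambient_db() -> float:
    return 40.0 + random.uniform(-5.0, 25.0)  # simulated ambient level in dB

def set_volume(level: float) -> None:
    print(f"volume -> {level:.2f}")

TARGET_MARGIN_DB = 10.0  # keep the music roughly 10 dB above the room
GAIN = 0.05              # small steps so the change is imperceptible

volume = 0.5
for _ in range(20):  # a real device would loop forever
    ambient = read_ambient_db()
    # Map "ambient plus margin" onto a 0..1 volume scale and clamp it.
    desired = min(1.0, max(0.0, (ambient + TARGET_MARGIN_DB) / 90.0))
    # Ease toward the target instead of jumping to it, so the
    # listener never notices the adjustment happening.
    volume += GAIN * (desired - volume)
    set_volume(volume)
    time.sleep(0.5)
```

The point of the tiny GAIN is the whole idea in miniature: technology that responds to stimuli you never directly provide, quietly enough that you never have to look at a screen with a number on it.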
Note: Don’t get me started on Google Glass. That’s a step backward, to needing to learn commands and special gestures all over again. It makes a great tech demo, but I also don’t want my experiences documented solely from my own perspective. Much longer discussion… maybe soon.