I have been thinking about the role of user experience design in the ever-increasing automation of everything.
Previously, when I thought about automation, it was in the context of manufacturing, where robotics were and are supplanting human labor. But when you think about it, any technology or tool is really an amplification of human labor: the hole that two people can dig with their bare hands in one hour could be dug in half that time by one person with a shovel. When we talk about designing software that is more user-friendly, more usable, more efficient, we are essentially faced with the same equation.
Two examples. Example 1: an information management system for a national grocery store chain is redesigned to be more user-friendly and lets the data team get work done more quickly. Example 2: A website for a utility company is reorganized so that customers can better find information, resulting in a reduction of calls to the call center.
Faced with such gains in productivity (i.e., less human labor needed to get the same job done) enabled by better software, where do the gains go? Who benefits and who loses out? Well, there are a number of scenarios you could imagine.
On a recent episode of the Accidental Tech Podcast, the hosts discussed the future of computing. You can also read the blog post on the same subject that Marco Arment wrote, which of course does not represent the viewpoints of the other two hosts. Although they were discussing the “future of computing”, more narrowly speaking their interest was in the future of programming and the creative industry, which have more specialized needs around file manipulation, large screens, and ergonomics.
The discussion centered on whether the iPad and iOS in general will extend to also include these needs, or whether the Mac and macOS will continue to serve those needs and begin to more closely approximate iOS. It’s a good discussion and worth listening to.
But here I want to unpack a bit more about how computing is defined, and revisit some historical ideas about computing’s future. I like to think that there are two definitions of “computing”, one narrow and one general.
What is the short-term cost of switching contexts while working when your favorite social media app or news site is just one Command-Tab away? Or just one tap on the home screen of your phone? The answer is: virtually nothing.
The long-term cost, however, is significant. And by “long-term” that could just mean by the end of that same day.
From “Read This Story Without Distraction (Can You?)” by Verena von Pfetten:
As much as people would like to believe otherwise, humans have finite neural resources that are depleted every time we switch between tasks, which, especially for those who work online, Ms. Zomorodi said, can happen upward of 400 times a day, according to a 2016 University of California, Irvine study. “That’s why you feel tired at the end of the day,” she said. “You’ve used them all up.”
I’m introducing a new blog post series where I bullet-point some things I learned from reading an article. I’m calling it TIL (which usually stands for “today I learned”, but in my case means “things I learned”, since I might have read the article in question a while back). This series has the benefit of a) forcing me to write up a précis of what I learned (hopefully solidifying it further in my memory) and b) giving you a TL;DR summary of the article’s highlights.
Things I learned:
- you hear tech companies claim that a lack of diversity upstream (in universities and K-12) causes the lack of diversity in their hiring, but that’s not the case – just look at the stats
- blind hiring is inspired by the blind audition process for positions in an orchestra, where the applicant cannot be seen, only heard (they even use rugs to muffle the sound of high heels, which by itself can bias the judges!)
- People aren’t good at hiring:
Introduction of the Zen IA blog post series
In this series I propose to write short exegeses of sayings and fables drawn from the Zen tradition, with the aim of understanding how they might shed light on the work we do in information architecture. Is applying the principles of Zen to the field of information architecture a wildly inappropriate appropriation of Zen? Maybe, maybe not. If you find it useful, use it. If not, discard it.
I am writing these as I read through Daisetz T. Suzuki’s Zen and Japanese Culture, a rather old-school exposition on Zen, originally published in Japanese in 1938 and published in English in 1959. My background in Zen is rather scant, consisting of a scattering of readings, including: The New World of Philosophy by Abraham Kaplan, The Three Pillars of Zen by Roshi Philip Kapleau, The History of Buddhism by Donald Lopez, Zen Mind, Beginner’s Mind by Shunryu Suzuki, and Buddhism Plain and Simple by Steve Hagen.
Beyond reading, I have on occasion (though not any time recently) gone to Zen Buddhist temples on Sunday mornings, and had an on-again, off-again meditation practice. I like to excuse my inconsistency and the sporadic nature of my exploration of Zen by telling myself that enlightenment is right at hand, so you don’t need to go looking for it. Sometimes I can even convince myself.