How Far Have We Come?
Ten years ago, I gave the opening keynote, entitled “The Wisdom of Experience,” at the Agile08 conference in Toronto, Ontario. I was recently reminded of this talk and dug it out of my hard disk. It occurred to me that others might find it interesting as well. Things were different back then. The iPhone was just a year old, and still only a qualified success. The Web was ubiquitous only in tech circles, and social media was in its infancy. A lot of software still ran native, that is, not in a browser and not in an app. Most software was still being built by IT departments using waterfall methods. Interaction design was widely known, but not widely accepted. Many of the ideas in this talk are still relevant and important, while others may seem strange or dated, but I have included them nonetheless. Coincidentally, in a few weeks I will be giving the opening keynote speech at another agile conference, this one in Bangalore, India. My talk will be about how to create ethical software. It will be very different from this one.
(The text of this speech was posted in January 2018. I added the original PowerPoint slides to it in September 2019. A note at the end of this posting explains why.)
Most of you know me as an interface designer.
But I am also a programmer. I started programming in 1974. I’ve invented software; designed it; engineered it; built it; documented it; fixed it; and studied it. Some of it was small, but most of it was big software used by millions of people. I’ve always been independent, but I’ve sold my software to some of the biggest companies in the game, like CA and Microsoft. My coding chops are pretty rusty these days, but I love software; I understand software; and I want to talk about software.
Software is different! Software is delightful! It is the perfect medium: powerful, fast, and infallible. It is utterly malleable, shapeable, protean, plastic, and controllable. Programmers are masters of this immense creative power.
Software is extremely difficult to write, to read, to design, and to understand. It demands more thought from the practitioner than any other creative medium. It requires more attention to detail, and more attention to the big picture. Getting software right means getting it perfect, because software runs in a perfectly brittle environment, and a single bug will halt the largest program. Thus it should come as no surprise that programmers are very special people.
The people who build software are extraordinary. Programmers are the smartest people in the human race. Programmers are among the hardest working people. They love to solve difficult problems; the more difficult the better. In fact, Po Bronson says, “They will keep fixing what isn’t broken until it is broken.” Programmers are, in general, very conscientious, very nice, and very concerned people with a strong desire to please.
Programming is an invisible activity; watching a programmer work is about as revealing as watching oysters grow. Code is invisible, undetectable, intangible, and inscrutable. Programming doesn’t lend itself to a conventional, hierarchical work structure; but rather to quiet, solo efforts, or loosely affiliated, small groups of collaborating experts. Software is made of bits, and building software isn’t really like building things out of atoms. Sure there are a lot of similarities, but the differences are large, they manifest in unexpected ways, and they profoundly affect our thinking. Sometimes the simplest code does the biggest job, like filling an entire warehouse with a gazillion zero bits, while the lengthiest, most complex code does something infinitesimally tiny, like separating your postal code from your street address.
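To make that last observation concrete, here is a small, hypothetical Python sketch of the "tiny job" he mentions: splitting a postal code off the end of a one-line address. Even this toy version needs a regular expression and edge-case handling, and real address parsing is far messier still; the function name and the US-ZIP assumption are mine, not the speaker's.

```python
import re

def split_postal_code(address: str):
    """Split a US ZIP code (5-digit or ZIP+4) off the end of a one-line
    address. A hypothetical sketch: production address parsing relies on
    postal reference data and handles far more formats than this."""
    match = re.search(r"\b(\d{5}(?:-\d{4})?)\s*$", address)
    if match is None:
        return address.strip(), None  # no recognizable ZIP code at the end
    street = address[: match.start()].rstrip(" ,")  # drop trailing separators
    return street, match.group(1)

print(split_postal_code("1600 Pennsylvania Ave NW, Washington, DC 20500"))
# → ('1600 Pennsylvania Ave NW, Washington, DC', '20500')
```

Even here, deciding what to do with a missing or malformed code is a design choice the tiny job forces on you.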
Programming is craft. All software is bespoke, custom, made by hand. It is achingly slow; as slow to write each line of code today as it was 30 years ago.
Unlike the identical bricks in a wall, every line of code in a program is different from every other. Like other crafts, you can only estimate how long a job will take once you’ve completed it. It is not art, not science, not engineering.
Art doesn’t have to do anything practical; software does. The artist can change the problem if the solution becomes too difficult; the programmer cannot. Of course, some code is so beautiful that we can’t help but regard it as art.
The goal of science is to learn; while the goal of programming is to apply that knowledge. Sure, scientists often write programs, and all programmers are as rational as scientists. But a scientist can still be successful when he proves that his hypothesis is wrong; programmers don’t have that luxury. Of course, some code is so intelligent and orderly that we can’t help but regard it as science.
The familiar phrase, “Engineers build bridges” isn’t really true. Engineers design bridges with paper and mathematical models. Ironworkers build bridges with steadiness, focus, plans, and muscle. Of course, some code is so powerful, so marvelous, that we can’t help but regard it as engineering.
Programmers are excellent designers. Programming is extremely creative, and programming is a continuous exercise in problem solving. Programmers may not be very good at designing things for humans to look at or to interact with, but they are very, very good at designing things for computers to crunch.
Programming is craft, just like hanging drywall. The drywall hanger isn’t an artist because he can’t change the problem to suit his solution; he isn’t a scientist because he can’t simply learn about sheetrock; and he isn’t an engineer because he can’t simply draw a diagram of a wall, but must actually construct it. The drywall hanger is a craftsman, and we are just like him. But software is unique, and we are very different from any drywall hanger. We are better trained and better educated; our work is more difficult; our medium is far less tractable while being far more malleable. We work in a more complex environment, our medium addresses a much greater range of uses, our tools are far more complex, our projects take far longer, and our work is largely unsupervised. But we are craftsmen nonetheless.
The most important part of software doesn’t exist. The most critical part of software is the interstice between programs. Such interstices are called “interfaces.”
Interfaces are the place-between. Successful interfaces last a long time, and very few of them are as well thought out as they should be.
There are human interfaces, where the software talks to humans, and there are APIs, short for Application Programming Interfaces, where software talks to other pieces of software. Although I have devoted much of my professional career to human interfaces, and written several books on the subject, it is the software interfaces, the APIs, which are without doubt the most important interfaces.
You can imagine software as a straight and narrow highway crossing a swamp: it’s fast, easy, and safe if you stay on the highway. But it is slow, difficult, and dangerous just a few feet to either side. Using an existing API is like staying on the highway, while writing your own code is like descending into the muck. Using any existing software’s API is orders of magnitude cheaper, easier, and safer than writing your own. Pre-written software, COM objects, libraries, and web services of all sorts are the lure of the highway. So most programming consists of stitching together commercially available parts. Programmers make expedient choices, urged on by their managers, and use pre-written code. The programmers save time and effort; the managers save time and money. But those libraries are probably not optimal for your users or your business.
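The highway-versus-swamp trade-off can be sketched in a few lines of Python. This is an illustrative example of my own, not the speaker's: the hand-rolled parser is the swamp, re-inventing validation the standard library already provides, while the single `datetime.strptime` call is the highway.

```python
from datetime import date, datetime

def parse_iso_date_by_hand(text: str) -> date:
    """The swamp: a hand-rolled YYYY-MM-DD parser that must re-invent
    validation, and still only covers the cases its author thought of."""
    parts = text.split("-")
    if len(parts) != 3:
        raise ValueError(f"expected YYYY-MM-DD, got {text!r}")
    year, month, day = (int(p) for p in parts)
    return date(year, month, day)  # date() still does the range checking

# The highway: one battle-tested standard-library call does it all.
parsed = datetime.strptime("2008-08-04", "%Y-%m-%d").date()

assert parsed == parse_iso_date_by_hand("2008-08-04") == date(2008, 8, 4)
```

The library call is the obviously cheaper choice here; the essay's point is that the same economics tempt teams even when the pre-written code does not quite fit the job.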
The wrong tool beckons because it is cheaper and easier, and design choices are not made based on what works well for the job. Saving time and money using existing code can be the worst thing for the end user. Very few business people understand just how critical such decisions really are. Once a bad engineering decision is made, few organizations can muster the will to change it, and the programmers and the users suffer through endless, hopeless revisions.
Throughout digital history, about every 7 years the programming community throws a collective tantrum, breaks all of its toys, and transfers its attention to a new one. In the 60s flowcharts defeated chaos. In the early 70s structured code defeated spaghetti code and the goto. In the 80s object-oriented programming dominated. In the late 80s the buzzword was reusable code.
In the 90s, when lots of code was ready to be reused, everything had to be Web-based, so nothing was reused. In the 00s, agile programming is the new toy. All of these toys are excellent. All of them have advanced the practice of programming.
All of these new movements in programming methodology share a common flaw. Each claims to be the one true method, applicable in all cases. But as software wise man Fred Brooks said, “There is no silver bullet.” The endless debates about tools and techniques are almost always devoid of specific application. One programmer posits that Java is best; another argues that C++ is best. Neither side says what for. Great flame wars rage on the internet without ever mentioning context. There is a tacit yet universal assumption throughout the world of programmers that programming is homogeneous; that it is a seamless and uniform practice; that the merits of a given method are its merits at all times, in all places, and for all practitioners. This is simply not true.
Agile is different from those other methods. It is the first indigenous movement among programmers that is about process and people, rather than about tools and techniques. It demands involvement by other programmers (pair programming, code review). It demands involvement by other non-programmers (users, SMEs, designers). It recognizes that programmers can’t do it all by themselves. It is refreshingly humble, saying “I don’t know best.” What’s more, Agile formalizes the fundamental truth that programming is incremental.
Agile originated as a way to force users to define what they want. Agile has grown and prospered as a way to force managers to stop sucking so badly. Agile has, incidentally, created a toehold for interaction designers in the software construction process.
Building software is a multi-stage process. It is not a homogeneous process of coding, coding, and more coding, but is composed of at least four distinctly different stages of effort, each one with unique characteristics. This has been recognized for a long time but is widely misunderstood.
“Waterfall” is the name of an obsolete method of multi-stage software construction management. Waterfall was an attempt to solve the central problem in software construction: which is not knowing what to do until after you’ve done it. Waterfall proposed that each stage be made perfect before proceeding on to the next. But each stage cannot be made perfect, and…
Waterfall doesn’t work! It just produces inappropriate crap that the end-users inevitably reject in disgust.
What’s more, the waterfall method introduced other, new problems. Incompetent people can hide inside their silos, generating mistrust. Management has little or no visibility into the process, so failures can lurk undetected in the opacity. Practitioners can work in isolation without collaborating. Little concurrent progress can be made in other stages, causing delay. And waterfall never addressed the design problem competently, so design was set up to fail once again.
Waterfall not only hands off the work, but it hands off the responsibility, too. The designers own the product until they turn it over to the engineers, who own the product until they turn it over to the programmers, who own the product until they turn it over to testing.
But…responsibility must be ongoing. The designers should never own the entire product, but they should always own the design. The engineers should never own the entire product, but they should always own the way it is constructed. The programmers should never own the entire product, but they should always own production. As a result, the boundaries between stages can’t be so perfectly squared off.
The time boundaries of each stage are fuzzy. None of the stages can be started and completed independent of the others. There is considerable overlap in time. There is considerable collaboration between the various disciplines.
Many people have overreacted to waterfall’s failures by oversimplifying the problem. But that doesn’t mean building software isn’t a multi-stage process. There are significant, meaningful differences in the various stages of software construction. What’s more, the non-programming stages of software creation can be as large, complex, and as time consuming as the programming stages; and certainly, they are just as important.
Here are the four primary stages of software creation. Each stage is very different from the others. Each stage requires different tools and techniques, and often different people. One of the major causes of pain, frustration, and failure in the software business is confusing these stages, so I’m going to take pains here to differentiate them. The first stage, “The Big idea,” is where the vision, or essential theme, of the product is determined. This first stage of ideation is almost always moot by the time programmers or designers get involved in a project, so I include it here just for the sake of completeness. “Design” is where the product’s users are identified and examined, and we determine what the product actually does for them. “Engineering” is where the technical construction issues are identified and examined, and the construction tools and techniques are determined. “Construction” is where the product is built, tested, debugged, and fine-tuned. Let’s examine the differences between these four stages.
Three of the stages ask questions, while only one makes a statement.
Between them, the four stages have only two goals. The three stages that ask questions have correctness as their goal. The one imperative stage assumes correctness and instead aims for efficiency.
Each stage utilizes a different tool set. The Big idea requires insight and perspective. Design demands focus on humans: the users, the SMEs, the stakeholders. Engineering demands focus on technology: the architecture, the tools, the systems. Construction demands focus on productivity: the deadlines, the constraints, the blueprints.
The four stages require three different states of mind. The Big idea stage is unique: it is impossible to predict how it will be accomplished. It is often a solitary moment of brilliant insight occurring the instant you awake; then again it could happen in a staff meeting or in the shower. The middle two stages are both design stages and are of necessity collaborative: the fizzing of creative minds bouncing ideas off of each other is characteristic. Working with others in an egalitarian give-and-take is what makes these stages so productive. But the last stage, the construction stage, is all about flow; about getting into a productive state and staying there as long as possible. Stopping to investigate or collaborate is a costly interruption of the flow.
The four stages require three different procedural approaches. To conceive some Big idea, it’s likely that you must iterate, trying different things until you find the key that unlocks the problem. But once the right key is found, additional keys can’t help. Insight is singular, not incremental. The two middle stages, the design stages, can both be considered a continuous series of spikes: experiments where the successful ones are kept and the failures discarded. Slowly, the keepers accumulate. Design is both iterative and incremental. Construction increments, but never iterates. The right way to build is one function at a time, each one in the right place, and never going back to re-work. You can accomplish this because each function’s right place is known in advance. Iterating in construction just throws your money away.
I’m sure you can see that the two middle stages, the design stages, are agile. They are about investigation, problem-solving, openness, teamwork, collaboration, iteration, incrementing, explaining, and sharing, all in an effort to achieve the right answer. These two stages can really benefit from the open, collaborative, democratic, iterative nature of the Agile process. This is true regardless of which people or what craft is involved.
And the two outer stages are the fragile stages. The first is utterly unpredictable and magical. It can come from anyone, anywhere, at any time. You cannot really seek it out, but only cherish it when it happens. And do remember that it is almost always completed before we get there, and we rarely encounter it in any project. The last stage is fragile because it is so big, so lengthy, so delicate, so difficult, and so critical to the success of the whole, that disturbing it in any way is foolishly, hellishly expensive. In these two stages, there is simply no advantage to putting lots of people in a room together to work openly and collaboratively, regardless of how intelligent or well-intentioned they are. The construction stage in particular demands quiet, uninterrupted solitude: programming time. As you can clearly see, each stage is different, and each stage requires different skills, tools, and temperament.
Confusion between the four stages is the prime cause of software project failure. Working this way has been called Agile, but it isn’t really. The resulting struggle causes lots of mistakes and generates lots of extra work. Some ignorant observers see all of that extra work and are really impressed. But he who makes the most chips is not the best carpenter.
The sad thing is that most software projects are poorly run. The four stages are confused and conflated in most companies. Even though the Big idea stage is rarely seen, some programmers act like having brilliant ideas is their main job description. Production programming is routinely done concurrently with design engineering, burdening both stages with expensive irrelevancies. Design engineering typically proceeds with little or no knowledge of users and their goals. This makes costly back-tracking and re-working inevitable. The software construction process is widely dysfunctional. Failure and disappointment are considered normal.
The motivation behind Agile is to fix this dysfunctional software construction process. Recently, I’ve been interviewing Agile programmers as part of my research, and I’ve learned some very interesting things. Probably the most interesting, but quite unsurprising, thing I’ve learned is that many programmers have put their faith in Agile because it is a strong defense against a world filled with incompetence.
And programmers feel surrounded by incompetence: Arbitrary executives, ignorant managers, unfit programmers, guessing designers, pointlessly draconian schedules, suspicion, and mistrust.
Agile methods temper incompetence in several ways. They bring outsiders into the software construction process. They give programmers license to discard bad code by refactoring. And they give users and managers a taste of the programmers’ reality. After years of bearing all of the responsibility, while having none of the final authority, I don’t blame programmers for creating a coping tool.
For years the waterfall method has posited that big, complete, well-documented design done up front solves problems. We know that it doesn’t.
Programmers have created the pejorative “Big Design Up Front,” or “BDUF,” to describe the documentation that Waterfall necessitates. Agile defeats BDUF by allowing software construction to continue without any significant design documentation.
Agile is a powerful tool for coping with unreasonable customers. It lets users walk a mile in the shoes of the programmer. It forces the users to confront some of the difficult choices that fill the programmer’s day. It turns these former adversaries into allies, or at least declares a truce.
Agile is a powerful tool for coping with incompetent designers. The new discipline of interface design has attracted a lot of wanna-bes and posers, who imagine that their color sense or esthetic vision qualifies them to design complex software behavior. They believe that putting a pretty “skin” on software makes it easy to use, or that being nice to users makes them more productive. They guess at solutions, without caring that each of their guesses causes lots of wasted work for the programmer. Agile forces them to bring a little discipline and responsibility to their work, just like programmers have to.
Actually there aren’t that many incompetent, foolish managers out there who believe that it is their job to annoy and disrupt hard-working programmers. Most of the managers who make our lives miserable with bad choices are not fundamentally bad or incompetent people. Their problem is that their tools and their discipline have let them down. I’d like to take a few minutes to give you some insight into the manager’s dilemma. I do this so that you can defend yourself more effectively against his or her misconceptions.
Most of what passes for the discipline of management these days is the discipline of industrial management, perfected during the industrial age. Software and user-experience are what dominate our economy, not factories, and we are now well into the post-industrial age. But the overwhelming majority of managers in the world today still think they are managing industrial age factories, and are still treating programmers as though they were assembly line workers.
Established industrial age management practice is based on command and control. When your factory workforce is neither educated nor self-motivated, control is vital. It is typically hierarchical; with delegation to technical specialties (that’s us). Management’s primary tools are authority, money, and prestige.
But programming is post-industrial. Programmers are what management guru Peter Drucker calls “Knowledge workers.” We are typically smarter, more highly educated, and better trained than our managers. We honor competence, not authority; we crave skill, not money; and we strive for respect, not prestige. Unlike factory workers, we are self-directed and know what needs to be done better than any manager. Our job satisfaction comes from the quality of our work, not from the performance of the parent corporation.
To differentiate between industrial and post-industrial management, Drucker introduces the complementary ideas of effectiveness and efficiency. In the industrial age, you made your product cheaper, so you could reduce prices, so you could sell more of it, so you could make more money: that is efficiency. In the post-industrial age, you make your product better, so it becomes more desirable, so you can sell more of it, so you can make more money: that is effectiveness.
In the post-industrial age, reducing costs simply reduces the quality of the product, which reduces the desirability of the product, which diminishes sales, and so does not result in increased profits. You can be a highly efficient company, saving lots of money, while still failing utterly. Speed is usually a measure of efficiency, too. You can be increasing your velocity while still failing to increase the desirability of your offerings. Going fast in the wrong direction is worse than going slow in the right one. While I certainly don’t advocate wasting time or money, there is no significant benefit to reducing costs in the post-industrial age, yet most management practice is all about cost reduction.
But these executives are neither stupid nor blind. They know that the current methods of software construction aren’t working. They can’t imagine that they are at fault, but they are aware of the problem. And they are at peace with it. They have given up on improving the process. They have concluded that frustration and failure are normal. They expect programmers to be surly, designers to whine, and users to complain.
With an expectation of failure, it is no wonder that managers have little confidence in the outcome of software projects. They’ve seen giant, successful, well-funded companies spend years launching colossal failures, like Windows Vista and Yahoo’s Panama. They’ve also seen undergrads and misfits with hand-me-down computers create colossal successes, like Facebook and Flickr. Well, if you don’t understand software, that can seem pretty random. And if you think success is random, then spending less for a lottery ticket makes more sense than spending more. It is this lack of confidence in the result, along with the knowledge that the size of the investment doesn’t correlate to success, that makes managers wary of spending money on software.
And the single best way to spend less money on software is to spend less time. This is why managers are always in a hurry; constantly exhorting their people to be “first to market.” This is why programmers are never given enough time to do their job well. This is why we are forced to begin construction well before fundamental design and engineering decisions have been made. This is why we are forced to ship prototypes.
The rush to build software as fast as possible is a management strategy for coping with the invisibility and uncertainty of the software construction process. It serves no useful business function by itself. It gives rise to zero-sum tactics, with everyone fighting for apparently scarce resources. No competent business person would choose speed over correctness if they had confidence in the outcome.
There is no evidence whatsoever that being first to market confers a business advantage. There is abundant evidence that being late to market with a better quality product does confer a significant business advantage. For example, the Archos Jukebox was the first battery-powered, hard disk-based, 6 gigabyte, portable MP3 player. It beat the iPod to market by more than a year yet it was a dismal failure because it was very poorly designed.
In other words there is no large group of people out there waiting in a breathless delirium to purchase your lousy product sooner rather than later.
When managers give you a deadline, ask why that deadline exists. Their answer will inevitably pass the buck to their superior (remember command and control). Ask your boss whether he wants a successful product or a product delivered on some arbitrary date. Challenge him boldly.
Tell managers you want to work to a standard of quality, not time. Tell them that a higher quality of user experience will generate greater sales, and it will have a more positive effect on the bottom line than cost reduction.
Managers will often demand proof that questing for quality will have some measurable “return on investment.” That’s an easy one. Just agree to provide ROI numbers using the same system they currently use. No such system exists. When a manager demands that you justify your efforts, simply ask them the same in reverse. How do they justify their current methods? You will hear one of three answers: my boss told me to do it; it will reduce costs; or some vague, indefensible generalization about better products. You may not be able to win these arguments immediately, but you will be able to sow seeds of doubt in the manager’s mind. You can begin the slow process of converting your boss into a post-industrial manager. And interaction designers are going to help out enormously with this task.
In my experience, engineering and building concurrently is the number one cause of projects that are expensive and problematic. The organization refuses to invest in determining how the program should be built before it goes ahead and builds it. This generates lots of extra, wasted work. And once the ill-conceived program is out there, it must be supported, which is the main reason why so many programmers are miserable.
In my experience, designing and engineering concurrently is the number two cause of expensive and problematic software projects. The organization refuses to invest in determining what the program does before it goes ahead and addresses detailed technical issues.
Agile vanquishes silos and brings the disciplines together. Agile gets results because it wants results. Remember, before Agile, most programmers wanted to create good programs. But Agile wants to create good products, and this revised emphasis makes everyone involved change their focus towards the correct goal.
The rapid iteration so vital to Agile is key to effectively integrating the disparate disciplines. Errors and missteps can be detected and corrected quickly. Less bad code needs to be discarded.
One of the underlying principles of Agile is that coding starts on Day zero. That’s because programmers know that code is the one thing that can’t be argued with, and it forces everyone to pay attention. Ensuring that the code actually runs at the end of each short timebox guarantees that it will be available for inspection. This is how Agile brings management visibility into the process.
That visibility extends beyond management, to everyone. But this very visibility exposes Agile’s weakness: understanding and interpreting the input of users, managers, and other stakeholders.
In my discussions with Agile programmers, in addition to the complaints about apparent incompetence, some very clear patterns emerge. Programmers want to spend less time arguing and more time crafting better software. Programmers want to stop mindlessly following stacks of BDUF towards bad software and unhappy customers. Programmers want more job satisfaction.
The pattern that emerges the clearest is the desire to create better products. Agile programmers have learned that when the users of their programs have a good experience, the programmers feel good, too. Agile programmers crave the sensation of pleasing users with products that satisfy. And that is exactly what interaction designers want the most, too.
Interaction designers also feel surrounded by incompetence. They also want to spend less time arguing and more time designing. They also want more job satisfaction. Above all, interaction designers also want to make users happy with products that satisfy.
What I don’t understand is, why aren’t interaction designers an Agile programmer’s best friend? Why don’t Agile programmers seek out the assistance of interaction designers?
Interaction designers love to do the tasks that programmers don’t like, like observing and interviewing users, and negotiating with managers. Interaction designers can make sense of human behavior in the same way that a programmer can make sense of a computer’s behavior.
Not only do interaction designers enjoy studying users and other stakeholders, but interaction designers know how to interpret what they observe. Their key contribution is translating research into actionable sketches and narratives that programmers can use. Their work frees programmers from communications failures, management misunderstandings, and wasted work. Those sketches and narratives are like user stories except that they are more accurate, more precise, more detailed, more correct, more complete, more readable, and more understandable by managers as well as by programmers.
Because interaction designers address the business rationale for the project as well as the user rationale, they can bridge the communications and understanding gap between programmers and business people.
Agile brings outsiders into the software construction process. Interaction design makes the contributions of those outsiders effective and useful. In an enterprise Agile shop, the interaction designers can bring some sense to the requirements definition process. In a small team Agile shop, the interaction designers can bring some clarity and focus to the behavioral definition process. In both size shops, the interaction designers can give management more confidence in the outcome, which will reduce the time pressure managers put on programmers.
Just like the craft of programming, interaction design demands the correct aptitude, training, and experience to be good at the job. It typically takes as long to train a journeyman interaction designer as it takes to train a journeyman programmer. Interaction designers work just as hard as programmers do, and they take responsibility for the quality of their designs, just as programmers do. Above all, interaction designers can create compelling and useful descriptions of software form and behavior that will ease the programmers’ job. Creating user stories is arguably the Agile programmer’s weakest skill. It is the interaction designers’ strongest one.
The Agile movement is based on lots of knowledge, experience, and wisdom learned over the years by professional programmers. Wedged in among all of that wisdom, however, is one big, commonly held, erroneous assumption. Programmers, and most other people too, assume that asking people what they want results in useful answers. But human beings are not capable of giving answers that can be used without significant transformative effort. Let me show you why this is true.
Users don’t provide information that is directly useful to technologists. It isn’t because people are stupid or obstinate. It is just that they — we — are victims of cognitive illusions. There is a large and growing body of scientific evidence corroborating what programmers know empirically: humans don’t know what they want, what they need, or even what they do. And they are utterly blind to the real reasons why they do what they do.
Important recent work in evolutionary, behavioral, and cognitive psychology, in conjunction with neuroscience, has shed much light on the problem. Scientists such as Steven Pinker at MIT; Clifford Nass and Byron Reeves at Stanford; Marc Gerstein at Columbia Business School; Pfeffer & Sutton at Stanford; Mlodinow at UC Berkeley; Abrahamson at Columbia; Linden at Johns Hopkins; Claxton at Oxford; Schwartz at Swarthmore; and the brilliant baboonologist Robert Sapolsky have all studied and written extensively on the various phenomena of apparently irrational human behavior. All humans, programmers, managers, and interaction designers included, are subject to a broad range of perceptual and value distortions.
For example, we are all familiar with visual illusions, which are widespread in human cognition. They are typically by-products of our evolved survival mechanisms. In this image, the two circles at the center appear to be different sizes, but they are actually the same size. We would be foolish to think that we have such visual illusions in our minds but that we don’t have equivalent reasoning illusions. Here’s an example of a famous reasoning illusion developed by Kahneman and Tversky. Read this description of a woman named Linda and then answer the question.
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and she also participated in anti-nuclear demonstrations.
Which is more likely:
1. Linda is a bank teller
2. Linda is a bank teller and is active in the feminist movement
If you are like me and 85% of the rest of the human race, you chose #2, but that can’t possibly be correct. According to the laws of probability, answer #2 cannot have a greater probability than answer #1. But it turns out that this common “Conjunction Fallacy” occurs in doctors’ diagnoses and in lawyers’ judgments too. There are many such documented cognitive distortions.
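The probability argument here can be made explicit. As a quick sketch, in standard probability notation (not part of the original talk): the probability that two things are both true can never exceed the probability that either one alone is true.

```latex
% Conjunction rule: a conjunction can never be more probable
% than either of its conjuncts.
% Since 0 \le P(\text{feminist} \mid \text{teller}) \le 1, it follows that
P(\text{teller} \wedge \text{feminist})
  = P(\text{teller}) \cdot P(\text{feminist} \mid \text{teller})
  \le P(\text{teller})
```

So answer #1 is at least as likely as answer #2 no matter what Linda is like; the vivid description simply makes the conjunction feel more representative, and therefore more probable, than it can possibly be.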
Cognitive psychologists refer to forces like:
Loss aversion: our tendency to go to great lengths to avoid possible losses.
Value attribution: our inclination to imbue a person or thing with certain qualities based on initial perceived value.
Commitment bias: playing not to lose instead of playing to win.
Pygmalion effect: performance expectations are self-fulfilling.
Tyranny of small decisions: too many choices can be debilitating.
Evolutionary Psychology: our brains are shaped for fitness, not for truth.
Management fads: the manager’s willingness to decide based on gut feel, popular fads, or the ideas of the last guy he sat next to on the airliner.
Abilene syndrome: groups choose things that no individual group member wants.
Cognitive friction: regarding sufficiently complex behavior as human.
Memory distortion: unpleasant things and rarely occurring things assume larger roles in our memory.
Hawthorne effect: productivity increases under observation.
Stockholm syndrome: hostages falling in love with their captors.
Diagnosis bias: our blindness to all evidence that contradicts our initial assessment of a person or situation.
Everyone has reasoning illusions, not just users. This means that if you ask senior executives, or subject matter experts, they will tell you things that may or may not be true. And your own illusions will often cause you to misinterpret the true meaning of what you hear. Product managers, designers, and programmers getting together to brainstorm good product ideas are also subject to these perceptual distortions.
While interaction designers are pretty good at inventing user interfaces, lots of programmers and product managers are good at that, too. Where interaction designers can deliver significant, unique value is in their ability to distill useful answers from the distorted, raw data extracted from, or contributed by, humans. Interaction designers use ethnographic research, user models, scenario construction, role-playing, and other specialized tools for the distillation process. By the way, all of these methods are Agile.
Interaction designers are like vintners, taking tiny, bitter, thick-skinned, seed-filled, cabernet sauvignon grapes, and turning them into sweet, drinkable wine. One of the bigger problems that interaction designers can solve is the one posed by the need to address requirements. The word “requirements” is a business term of art supposedly describing a list of desired features that, if included in your software, should allow market success. Most users of the term imagine that if you ask users or customers what features they would like in a given software product, the answers they give you are “requirements.” But all of those cognitive illusions guarantee that the requirements you gather will be nothing but a handful of tiny, bitter grapes.
You need an expert, with tools and training, to turn that raw data into useful requirements. The requirements problem generally afflicts Agile programmers in large, old-school, big-iron companies. The programmers work in the IT department and their software is usually regarded as a service to the rest of the company. A list of desired features, whether requested by your customers or thought up by marketers, will not by itself allow market success simply by being included in your software.
Giving people what they ask for doesn’t result in success. Your customers aren’t the same as your users. Neither your customers nor your users know what they want or even what they do. What people tell you has little bearing on the truth. Good user experience isn’t dependent on features. Radically different products can have identical features. A list of features is not the same as the design of behavior. Expertise in a subject doesn’t correlate to expertise in designing software behavior. Requirements cannot be “gathered” like colored eggs on Easter morning.
As you can see, interaction designers have significant work to do before coding begins. Long before Day Zero, when features and interface are on the table, bigger questions need to be answered about who, exactly, is the user, and what, exactly, will make him happy. This work consists of observing and interviewing users and other stakeholders, then transforming this raw data into useful narrative design tools. Work performed here assures that the team will build the right product, and not just some loose collection of suggested features. If you wait to answer these questions until after coding has started, you may find yourself throwing out many weeks of work, instead of just doing some minor tweaks or refactoring. This work can also be performed in parallel with other marketing efforts and need not delay the start of programming.
Agile programmers in big-iron, IT-based companies have the additional difficulty of dealing with directives handed down from a distant manager or a remote marketing department. Programmers are forced to assume that the product concept is correct, and has been vetted. In fact, this is frequently not true. Businesses often simply grab a technology or a market and proceed without ever thinking things through.
While the first work product of programmers is a tentative solution to the user’s needs, the first work product of interaction designers is a narrative restatement of the problem being addressed. These narrative descriptions of users and their goals are compelling stories primarily composed for managers and marketers. When these non-technical people read them, they can clearly see if their product and market ideas are viable. An executive or marketer may not be able to look at a screen and judge whether what he sees is correct or not, but he will be able to read stories about real users and make that judgment. This can prevent the entire team from heading down a blind alley.
Interaction designers have even more significant work to do after coding begins. Most of it is block-and-tackle, three-yards-and-a-cloud-of-dust interface design, but it also includes reconciling the hundreds of good ideas that inevitably pop to the surface during coding with the overarching purpose of the product. Just because an idea is good, doesn’t mean that it is good for the target user. And just because an idea is feasible, doesn’t mean that it is worth putting into the product. The interaction designer can make the decision easy by bringing to the surface the underlying reasons why an idea is good or bad. By keeping the group so informed, the interaction designer keeps the decision open and fair.
Agile programmers in small team, Web-oriented companies collaborate closely with all of the other disciplines, and their software is usually regarded as a product offered to the outside world. In these companies, Agile programmers, marketers, designers, and product managers, are tasked with working in democratic teams to brainstorm good product feature ideas in order to know what to build. While it is true that good ideas can come from anybody involved in the Agile process, those ideas are still more like grapes than they are like wine.
Once again, you need an expert interaction designer to translate all of those good ideas into an orderly vision of behavior that will truly please the end user. Just delivering features won’t do it. You can imagine the interaction designer’s interpretative work as similar to the programmer’s code. During the coding process itself, the programmer owns the code. Once it works, he gives it back to the team for acceptance and modification. Similarly, during the user investigation, the interaction designer owns the interpretations. Once the user narratives are sketched out, she gives them back to the team for acceptance and modification. It isn’t a refutation of democracy, just a recognition of particular aptitudes, skills, and abilities. Letting interaction designers focus on users and their goals leaves the programmers free to focus on technology.
Remember, earlier I said that confusing behavioral design with engineering design is the number two cause of failed software projects.
The solution is to have skilled interaction designers tackle the user-centered issues. Both of these stages are agile, but they can now be tackled by craft specialties with appropriate skills and tools.
Interestingly, omitting interaction design, and dealing directly with users and stakeholders, doesn’t really harm the Agile programming process. It only harms the end result. It is not uncommon to have a successful, Agile development project that still fails to satisfy the user. Once again, managers are left scratching their heads, wondering why, if the project ran so smoothly, the result was so mediocre. “Well, that’s just the nature of software,” they think.
Some programmers are so baffled by these difficulties that they throw up their hands and cry uncle. This was particularly true in the early Agile days.
My esteemed colleague Kent Beck, the creator of Extreme Programming, says “you can’t know what users are going to find valuable in a piece of software.” People like Kent know that the one thing they can depend on is code, so they start coding as soon as possible. Then they demand lots of corrective feedback from their captive user agent. This assuages the programmer’s feeling of going off into the weeds by himself, but it often means that he just drags the user off into the weeds with him.
All programmers know the heartbreak of giving users what they ask for, only to have them reject it on delivery. The programmer thinks, “I gave them what they wanted, and then they didn’t want it.” So he deduces: “What they wanted must have changed.” Apparently, the programmer thinks, the requirements have changed. And if requirements are changing, why bother to design? But this is a self-fulfilling prophecy: if you don’t design, requirements will always appear to change.
I say you can know! Interaction design allows you to know with confidence what the user will find valuable in software. If you regard humans logically, knowing is impossible. But if you have the tools to deconstruct illogical human behavior and see through their cognitive illusions, knowing what they want is no more difficult than writing a program in Java. This means that interaction design can reduce the Agile programmer’s workload significantly, without materially affecting the programmers’ work methods. Similarly, the interaction design process reassures business people that they are making the right decisions, and that their teams are moving forward more productively. Where Agile methods give managers visibility into the technical side of the process, interaction design gives managers visibility into the human side of the process.
Now that I’ve shown you the role that interaction design can play both before and after Day Zero, I’d like to briefly suggest just one more process improvement that will greatly ease the Agile programmer’s burden. This improvement doesn’t involve interaction design. It’s all about programmers.
Frederick Brooks is arguably the wisest man in the world of software. In a seminal essay he first wrote over thirty years ago, he said “Plan to throw one away,” meaning that you will throw one away whether you want to or not. He learned that you have to write it once in order to learn what you are doing and how to do it. You write it a second time to make it good enough to actually deploy publicly. In my experience, Brooks is correct. Before Agile, programmers created a program, shipped it, then proceeded to try to build it right the second time. Except that they could never build it right the second time because they had all of the incorrect, legacy code to cope with. Not to mention the legacy of unhappy users who tried it the first time and got their fingers burned.
Agile is happy to discard bad code; it just does it in lots of little chunks instead of in one big one. The Agile way is better, too, because it cuts out bad code along the way. When completed, the program is a better platform for subsequent improvement.
But imagine what we could do if we actually planned to throw the first one away, rather than being surprised by having to do so. You are going to write it twice anyway, whether the discarded first one is in tiny parts or one big chunk, so you might as well make the first time count for the max. Brooks says that we will do things twice. I say we should acknowledge this truth and maximize the first time for understanding, and maximize the second time for efficiency.
The first one, the discardable one, should be spare and lean and correct.
Its purpose is to demonstrate correctness, so its construction should be iterative, collaborative, and inspected. You should take as much time as needed to get it right; it’s a learning tool, after all. It should be Agile. It is a bare minimum, engineering proof for internal use only, and you should never, ever ship it.
The second one, the shippable one, should be full and complete and built with the maximum of efficiency. Its purpose is to be deployed successfully, so it must be complete, error-free, well-architected, and easy to modify. Its construction should be steady, purposeful, and entirely pre-planned. You should take as little time as possible to get it out the door. It has no need to be Agile, because all design and engineering issues will have already been asked, answered, and demonstrated.
When the demands of construction are removed from the engineering process, everything becomes faster and easier. The two design stages, interaction design and design engineering, remain purely Agile: fast, lean, iterative, questing always for correctness, unburdened by the need to create shippable code.
The final construction stage becomes purely productive. The programmer can achieve a state of flow and be incredibly efficient. The earlier, iterative stages showed how to do it right, and now the programmer can simply proceed with full knowledge of what he is about. No backtracking, no refactoring, no discarding of code, because all of that was already done at its most efficient level. The production programmer can estimate very accurately how long it will take to program things, for the simple reason that it has already been done. His estimates are very reliable because he knows that there won’t be any nasty, technical wormholes for him to fall into. That will already have been proven in the Agile stages.
The software world has a long and checkered history of process improvement for the sake of programmers, while never really making life much easier for our users. Nothing much has changed with the advent of the World Wide Web. Most software has become easier to use simply by becoming less powerful. And Agile methods, so far, have also been mostly about making things easier for programmers, and users have still gotten mostly lip service. The good news is that Agile methods are the first to genuinely make room for interaction designers, whose only goal is to make software better for users. This means that Agile may be the best thing to ever happen to interaction design. And that would be the best thing to ever happen to software users.
I owe a debt of gratitude to Jeff Patton for inviting me to speak at this conference. I was quite reluctant to go, but Jeff simply would not accept “no” for an answer. I am forever grateful for his persistence.
I was pleased to discover — upon republishing the text of this speech here on Medium in January of 2018 — just how much interest there was. The PowerPoint slides in the actual presentation were, shall we say, graphically immature, so I omitted them from this republication.
But I had forgotten that, some months after I presented this talk in Toronto, I had posted the text and the slides on the Cooper website. Cooper’s new owners have recently let that original posting of slides and text die. I only learned about this when readers contacted me asking for the now-unavailable slides. One of them was author and teacher Leo Frishberg. He called it “one of the best presentations I’ve seen to help contextualize UX and Agile,” and said he frequently uses it — slides and all — in his classes. So I decided to put the original, tacky slides up along with the text. I considered reformatting, republishing, and reposting it as a new essay, but I finally decided that simply adding the slides to the 2018 posting was best, primarily because it preserves the highlights and comments that you generous readers have contributed.