Wednesday, October 17, 2007

User Fiendly

No, that title is not a typo. If you didn't think it was, look again. If you noticed, you may not be surprised that my topic is voice response systems.

I can always tell when my wife is negotiating a company's telephone system's speech recognition menus by the note of restrained frustration in her voice. I thought of this when I answered a call from Verizon Wireless wanting to survey me about their own voice response system for customer service. The first question was, "Did you use the voice response system, or did you choose to use the telephone keypad?" When I said, "Telephone keypad," that was all they wanted to know.

For at least several decades now, clueless executives have somehow been under the impression that speech recognition is the magic bullet that will make all technology approachable and user friendly. Nothing could be further from the truth. Remember when the same magical qualities were attributed to GUIs? A thousand graphical interfaces ensued that were just as hard to use as DOS or UNIX.

The key point that I think these people miss is that it's not the form of the interaction (text, graphical, key presses, voice), but rather what you do with it. The intuitive nature of the graphical interface derives from the use of appropriate metaphors; the graphics simply enable a broader range of possible metaphors than is allowed by a command line interface. Similarly, speech recognition does not itself make a system easier to use - it can actually make it much harder to use. What matters is what you do with it.

On this point, a fully functional and reliable natural language system would be ideal. Lacking that, a constrained, limited vocabulary is second best, but still probably slower and therefore less desirable than simple key presses. An open-ended, pseudo-natural language interface that sometimes works and sometimes fails in unpredictable ways, doesn't react appropriately to expressions of frustration, and gets in the way instead of greasing the skids is the worst possible solution. Yet that's the one we have now. So why, exactly, do companies think this is easier than pressing keys?

Sunday, September 23, 2007

Blame the User

“Oh, come on,” Richard said quietly to himself.

“Sorry?” Bill, or Bob, or whatever lifted his head from behind Richard’s monitor.

“Sorry, not you,” Richard said. “It’s this stupid magazine. Everyone’s blaming the financial industry for the subprime mess, but what about the home buyers? Why doesn’t anyone make them take responsibility for signing up for loans they couldn’t repay?”

“Good point, sir.” Bob or Bill put his head back down and tapped some more keys.

“How much longer, anyway?” said Richard.

“It’ll be another hour or so. Um, did you change any of the settings on your anti-spyware utility?”

“Oh, I don’t know. Maybe. It was running real slow so I changed a bunch of settings in various places until it sped up. Why?”

Bill or Bob sighed. “Well, you’ve got a major spyware infestation, including several stealth keyloggers installed by Trojans. They’ve probably captured all your passwords and account settings. Do you access the company trading accounts from this machine?”

“Sure, I have to. So, what do you have to do?”

“Well, sir, you should probably make sure there haven’t been any unauthorized transactions from your account. In the short term, I can get you up and running but I’ll have to reformat your drive and reinstall Windows. You’ll lose all your files, but your computer will be up again. I may be able to restore your files from backup but I’ll have to make sure they’re clean.”

“OK. Whatever.” Richard went back to his reading while his computer made the Windows shutdown and startup noises several times in a row. Finally, he folded the magazine and hurled it across the room, neatly hitting the rim of his wastebasket and knocking it over. “Damn. You’d think people would know better than to sign false income statements. I just don’t get it.”

He pondered for a few more minutes. Then, “If people aren’t going to be responsible, how do you keep it from happening again?”

Bob or Bill looked up and hesitated for a moment. “Well, sir, if it were up to me, I wouldn’t let anyone use a computer unless they knew what they were doing.”

Saturday, September 15, 2007

Feature Creep

I've often heard users and designers bemoan "feature creep" and express the wish that manufacturers would limit the number of features supported by their products in order to make them more simple to use. While I sympathize with the goal, I don't think that avoiding feature creep is necessarily the solution.

Consider the iPod. A recent issue of MIT's Technology Review magazine (May 2007) focused on design, and frequently held up the iPod as a design ideal. Don Norman, speaking of Apple in general, stated, "The hardest part of design, especially consumer electronics, is keeping features out." Mark Rolston, senior vice president of creative at Frog Design, said, "The most fundamental thing about Apple that's interesting to me is that they're just as smart about what they don't do. Great products can be made more beautiful by omitting things."

So, back to the iPod. It originally began life as a music player. Along the way, it added a calendar, contacts, notes, alarm clocks, world clocks, stopwatch, audiobooks, picture viewer, video, podcasts, and games, and it can be used as an external hard drive, even a bootable one for an OS X machine. The iPod Touch adds internet surfing, a YouTube viewer, online access to the iTunes music store, and the ability to buy whatever song you're currently hearing at Starbucks.

Hmmm... Isn't this the very definition of feature creep?

And yet, the iPod remains beloved, an icon of good design. And very deservedly so.

Because what makes the iPod easy to use is not feature restraint, but rather the fact that all of its many features work the same way. The user need only learn one general rule about how the interface works and can apply that rule to pretty much every function.

So perhaps the trick isn't in avoiding feature creep, but rather in avoiding "rule creep".

Sunday, June 3, 2007

Six Sigma, Innovation, and Usability

This week's (June 11, 2007) Business Week cover story is about how Six Sigma "almost smothered" 3M's culture of innovation. The gist of the story is that 3M's attempt to use Six Sigma in every corner of its operations, including R&D, caused innovation to become more incremental, more safe, and more saddled with administrative overhead. Consequently, higher risk programs weren't pursued and 3M has slipped markedly in measures of innovation, such as the proportion of revenues coming from recently developed products.

Having been through a Six Sigma transformation at a large organization, I have a few thoughts on this.

First, it seems to me that user experience is one of the most important and least appreciated aspects of quality. As a proponent of applying more structured methods to user requirements and usability assessment, I think that formal user requirements methods and structured usability testing fit right in with the spirit of Six Sigma and other quality programs. If Six Sigma, for example, causes an organization to move from focus groups to more effective human performance-based measures of usability, I'm all for it. I hope that anyone advocating for Six Sigma within an organization will recognize that user experience and usability are key aspects of quality and deserve the rigor and emphasis traditionally paid to engineering and manufacturing.

Second, I think that everyone should learn something about statistics. Some understanding of probability and statistics is actually necessary not only to work effectively, but also to make informed decisions in most aspects of life, including voting. When an organization adopts Six Sigma and makes everyone learn the basics of statistical analysis, everyone benefits, including society at large.

However, no program dedicated to process improvement should itself become a process impediment, and the indiscriminate application of Six Sigma to all parts of an organization has a lot of potential to do just that. It's well understood that many of the most profitable break-through products come from serendipitous discoveries enabled by exploratory research that may not be undertaken in a risk-averse, highly controlled environment. This is what the Business Week article focuses on.

To me, the benefits and drawbacks of Six Sigma all stem from the same source: the fact that Six Sigma tries to substitute objectivity for subjectivity and data for intuition wherever possible. I think this is both useful and appropriate in some applications, such as administration, logistics, and manufacturing, but not so much in R&D. Research and design, particularly the exploratory types that lead to breakthrough products, depend to a large extent on subjectivity and intuition and are easily stifled by processes that seek to drive them out.

Furthermore, even where it's useful and appropriate, Six Sigma can still fall down because it's very susceptible to garbage-in/garbage-out. In Design for Six Sigma, for example, people often have to enter ratings of competitive position and other market environment factors into an analysis process, and these form the basis for subsequent decisions. Many of these factors can't be directly quantified or measured and must necessarily be subjectively estimated. Lengthy analysis processes can often include chains of subjective judgments, and small errors in early steps can compound into much larger errors at the end. It's very easy, in Six Sigma, to end up with a product that looks like fact or data but is actually largely fiction, because of the use of formal methods to force subjective products into an apparently objective framework.

The bottom line, I think, is that Six Sigma and other quality programs can be very useful and productive if used within their valid limits, but can be highly counterproductive otherwise.

Wednesday, April 4, 2007

PowerPoint Power Tip # 1

Need to delete or change an object that's being covered by another object that you don't want to move or delete? Click and drag over both objects to select them both, then hold the Shift key and click once. The click lands on the front object, since that's the one that intercepts it, and Shift-clicking a selected object de-selects it. Now only the object behind is selected; you can delete it, move it using the arrow keys, or whatever else you want to do with it.

Tuesday, March 13, 2007

The Right Answer

One of our friends here recently sent out a flyer on behalf of the local historical society asking people to submit their stories of how they learned about Point Roberts and how they got here. The flyer also asked people to donate $10 to the society. Unfortunately, the wording of the flyer made it appear that people had to donate in order to submit their stories, and the wording of the request for stories was so open-ended that some of the responses were unusable.

A university student was wrestling with the design of an experiment. She knew the general topic to be investigated and had a general approach in mind, but couldn't arrive at the specific approach or the steps that needed to be taken.

A group of engineers at a company I worked with were trying to design an input device for a workstation. They were trying out several options and having a hard time deciding which one was the best.

These situations all have one thing in common: inadequately defined requirements. I think this is one of the most common mistakes made by designers of every type, and I think it stems partly from a tendency to think one already knows all the requirements, or that the requirements are obvious and don't need to be spelled out. But my friend at the historical society might have had a better response if he had first spelled out what the desired product was (a specific type of story and a willingness, separate from the story, to make a donation), the student would have had an easier time designing the experiment if she had articulated the experimental question to be resolved, and the team of engineers would have been better able to decide which design was best if they had specified some criteria first.

You may think this goes without saying, but I can't count the number of meetings I've been in that have gone either nowhere or in circles until someone said, "What are the requirements?" Perhaps it's one of those lessons that are so basic that we need to be reminded of them continually, because we take them for granted otherwise. And, as illustrated by these three situations, I think this lesson goes beyond product or interface design to life in general. After all, how do you know what the right answer is if you haven't defined the requirements?

Thursday, March 1, 2007

New Term...

...for a poorly designed interface: "untuitive".

Saturday, February 24, 2007

The Features/Usability Bind

One of the implications of Moore's Law, which in its popular form holds that processing power doubles about every eighteen months, is that in about eighteen months you'll be able to buy products that are twice as complicated as the ones you can buy today. This seems inevitable in a commoditized electronic products marketplace, in which manufacturers seem able to compete only on the number of features they can fit into a product.

The twin trends of technology products are smaller form factors and more features. This leads to what I call the "features/usability bind", in which smaller devices with smaller physical UIs have to access larger numbers of functions. Inevitably, this ends up requiring that input devices (even simple pushbuttons) become multi-function; even if the interface remains relatively simple, the underlying functional logic becomes more complex. Furthermore, unless the problem is addressed at the level of the functional logic, complexity will continue to grow proportionally with features. Perhaps this is why so many electronic products are returned as defective because people can't figure out how to use them.

Engineers have often stated that they expect technological progress to hit a brick wall when the laws of physics catch up with Moore's Law and no more transistors can be crammed onto a chip. I think the real brick wall of technology isn't the number of transistors that can fit on a chip, but rather the number of rules that will fit in a user's head. After all, the user has to learn and remember all the rules that govern how the product works; if these rules aren't intuitive, they must be learned by rote, or the user will simply decide that the feature isn't worth the effort.

I think that there are three basic classes of usability problems associated with electronic products today: modes, convoluted functional logic, and hidden functions. Let me briefly describe each, as I see them.

From a UI perspective, a system is "moded" when the same user action produces different results depending on the state of the control or system. A mode error led to the crash of an Airbus A320 near Strasbourg, France, when the pilot, attempting to dial a 3.3 degree flight path angle into the autopilot, instead selected the vertical speed mode, so the entry was interpreted as 3,300 feet per minute. Mode errors in consumer electronics products don't usually have such dramatic consequences, but they can be annoying. One that I typically encounter is when I try to change the channel on my satellite TV receiver but instead cause the TV to revert to "tuner" mode because the remote was in TV mode, and the TV thought I was trying to change its internal tuner channel. A lot of people have come up after presentations to tell me that they've tossed the universal remote control and gone back to the five separate ones because they were tired of making similar errors all the time.

Convoluted functional logic is when accessing a function requires following a complex or hard-to-recall series of steps. For example, storing a phone number in a speed dial location on a phone that I own requires pressing PROG to put the phone into "program" mode, then pressing the speed dial button where you want to store the number, then dialing the number on the keypad, then pressing another speed dial button labeled MEMORY which, when the phone is in program mode, stores the number you just typed into the memory location you selected at the start of the sequence. Finally, you press PROG to take the phone out of "program" mode and put it back into "phone" mode. (This example has the added benefit of again demonstrating a problem with modes.) Another example: on a minidisc recorder I used to own, you had to press and hold the RECORD button and simultaneously press the VOLUME button in order to select manual record level. Have you ever had trouble figuring out how to turn off the alarms on a hotel clock radio? Convoluted logic was probably the culprit.

Hidden functions are typically press-and-hold functions that aren't revealed anywhere in the interface. A friend of mine had to leave a car wash once because he couldn't get the antenna down; when he pressed the radio POWER button, the system would alternate between radio and CD player. It turned out that he had to press and hold the button for a second or so in order to actually turn the power off.

I'll bet that most of the problems people have using products are due to one of these three classes of problems, and not to poor design of the interface itself. One of the ironies is that modes, for example, are often intended to simplify product use by collecting similar functions into the larger umbrella of a mode. But then the user has to learn about the modes, and they themselves become a source of complexity. This is why mode errors have been implicated in several aircraft accidents and at least one cruise ship accident in recent years.

Ultimately, these three classes of problems represent rules that the user must learn and remember. I think we're already at a point where people don't use all the features of products they already own, so trying to sell them new products based on additional features may soon become a losing proposition.

In my view, the best way out of the features/usability bind is to decouple the conceptual complexity of the product from the functional complexity, so the user doesn't have to learn new rules in order to use new features. Note that this is not an interface-level problem. If the functional logic is hard to use, the best interface in the world won't make the product easy to use. One strategy is to apply appropriate metaphors at the level of the product's logic, rather than just the interface. We're used to thinking about metaphors in terms of what icons represent, how radio buttons and check boxes work, and so forth. But the principles of metaphors can be carried to the level of the logic and, ideally, applied across the entire range of product functions. In other words, make the product work like the user thinks, so the user doesn't have to learn how the product works.

The desktop metaphor of the Mac and Windows interfaces is a good example of this, because it re-defines the underlying functions of the operating system into a conceptual world that the user already knows. And it offers a way out of the features/usability bind. After all, Windows is substantially easier to use than DOS was, but it's infinitely more functionally complex.

Friday, February 23, 2007

PowerPoint Prototyping: an Introduction

I've recently had several occasions to develop quick prototypes of new display designs, and to run some small, focused usability evaluations of design options in cases where analysis wasn't able to resolve all of the outstanding questions or issues. I've found PowerPoint to be ideal for this, for several reasons:

- Its familiar graphics front end makes developing the prototype graphics quick and easy. Although PowerPoint doesn't have as much graphic power as, say, Photoshop, the Format Autoshape dialog (right click on an object) provides more than enough control to create graphics that are precise enough for interface prototyping.

- Its hyperlink features make it possible to build what are essentially "screen shots", or slides of what the interface would look like in each possible state, and simulate navigating between them when the user selects the available options on each slide. The hyperlink features (accessed by the Action Settings dialog) include both mouse click events (go to the link when you click on the object) and mouse over events (go to the link when you simply move the cursor over the object). This makes it possible to simulate such things as highlighting individual options in drop-down menus when you move the cursor through the menu.

- Its Visual Basic for Applications (VBA) back end makes it possible to build in much more complex behaviors than would be supported just through hyperlinks. For example, a VBA subroutine triggered by an object can cause another object to change in some way. I'll show an example of this below.

- Furthermore, VBA can be used to record user actions and send them to Excel. In this way, a prototype can be turned into a desktop usability test, with user selections, what slide they were looking at, relevant prototype status variables, and time tags all captured automatically in Excel for easy analysis. This makes it possible to quickly develop a prototype that can be used to automatically capture user performance data such as how much time it took a user to make a decision or complete a task, and what errors or choices they made along the way.

- Because most people in business already have PowerPoint and Excel, designers and researchers can develop prototypes, usability tests, and experiments with tools they already own instead of having to buy special-purpose products. Furthermore, in some cases, the files for a study can be sent to the user/subject and run remotely on their own machines. This makes running a small study much more convenient than it otherwise might be.
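To make the VBA idea concrete, here's a minimal sketch of a macro that one shape triggers (via the Action Settings dialog's "Run macro" option) to change another shape on the current slide. The shape names "Button1" and "Indicator" are my own illustration, not names from any actual prototype:

```vba
' Sketch: toggle an "LED" shape's fill color when a button shape is clicked
' during a slide show. Assign this macro to the button via Action Settings.
Sub Button1_Click()
    Dim ind As Shape
    ' Get the "Indicator" shape on the slide currently being shown
    Set ind = ActivePresentation.SlideShowWindow.View.Slide.Shapes("Indicator")
    If ind.Fill.ForeColor.RGB = RGB(0, 200, 0) Then
        ind.Fill.ForeColor.RGB = RGB(128, 128, 128)  ' "off" (gray)
    Else
        ind.Fill.ForeColor.RGB = RGB(0, 200, 0)      ' "on" (green)
    End If
End Sub
```

The same pattern (a macro reaching other shapes through the SlideShowWindow object) is what lets one control update a separate display element without leaving the slide show.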

Depending on the complexity of the interface being prototyped, the actual prototype can be developed in anything from a day to a couple of weeks, and an entire usability study can be accomplished in a few days or weeks. To illustrate this, I've built a small prototype of a clothes dryer control panel. You can download it from here. After downloading the file, open it and enable macros. Select slideshow mode and select a cycle. You can select a new cycle at any time, and adjust the dryness level and overall cycle time. You can also select START and watch as the panel goes through an abbreviated dry cycle. When the cycle is done, or at any other time, select PAUSE/CANCEL to return to the starting condition. For reference, this prototype took me about a day to build.

Here's how the demo works: the controls and displays that do not change behaviors from slide to slide are all built on the master slide so they're always available. The time window contains a variable that is set by the cycle buttons and adjusted by the time adjustment buttons and the dryness level selection. By implementing these controls and displays on the master slide, the settings are stable from slide to slide. The cycle buttons themselves are implemented in the various slides to make it easy to change the embedded LED in each button with a new cycle selection. Although this could have been done in code, this was one case where simply drawing the graphic changes was easier than writing the code, and it's a nice illustration of how PowerPoint gives you several ways of doing something.
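For the send-to-Excel idea mentioned above, a hedged sketch using late binding might look like the following; the worksheet layout and subroutine names are assumptions for illustration, not code from the dryer prototype:

```vba
' Sketch: log user actions with time tags to an Excel worksheet.
' Late binding via CreateObject avoids needing a Tools > References entry.
Dim xlApp As Object
Dim xlSheet As Object
Dim logRow As Long

Sub StartLog()
    Set xlApp = CreateObject("Excel.Application")
    Set xlSheet = xlApp.Workbooks.Add.Worksheets(1)
    xlSheet.Cells(1, 1) = "Time (s)"
    xlSheet.Cells(1, 2) = "Action"
    logRow = 1
End Sub

' Call LogAction "START", LogAction "PAUSE/CANCEL", etc. from each
' control's macro to build the data file as the user works.
Sub LogAction(action As String)
    logRow = logRow + 1
    xlSheet.Cells(logRow, 1) = Timer   ' seconds since midnight
    xlSheet.Cells(logRow, 2) = action
End Sub
```

Subtracting successive time stamps then gives task and decision times directly in the spreadsheet.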

What's the point of all this? My major point is that you may already own all the tools you need to create truly rapid UI prototypes that can also serve as platforms for usability tests with objective human performance data. And you don't need to be a VBA expert to get started. You can look at the code in this prototype by returning to the regular view (exit the slide show) and selecting TOOLS, MACROS, VISUAL BASIC EDITOR. The VBA help files (select HELP and help contents, and navigate to PowerPoint Visual Basic reference) contain all the explanations and code examples you may ever need to build your own prototypes.

Thursday, February 22, 2007

Displays vs. Controls

Situation awareness has been a significant focus area for human factors over the last twenty years or so. I suspect that much of the emphasis on SA can be traced to Earl Wiener's finding, in the early studies of flight deck automation, that one of the most common questions in the flight deck was "what's it doing now?" The importance of SA has since been reinforced by the apparent causes of accidents in some automated aircraft. For example, the first three of the A320 accidents that occurred shortly after its introduction involved very experienced flight crews losing track of basic, fundamental aspects of the flight: airspeed, altitude, vertical speed, energy, etc. I think it's unlikely that an experienced pilot would ever lose track of any of these parameters in an older, non-automated aircraft, so SA certainly seems to be the important issue when it comes to managing highly automated, safety-critical systems.

But is it really the most important issue? Is "what's it doing now" really the most important question?

What causes a human error-related accident in the first place? Is the root cause a failure to notice what the system is doing? Or is it instead an input error that causes the system to enter an undesired, unexpected state? If it weren't for the initial error, would it be so important to notice what the system is doing?

I'd like to suggest that the question, "How do I get it to do what I want it to do?" is actually more important than the question, "What's it doing now?" After all, the failure to accurately communicate intent to the system is what causes the undesired state in the first place. An ounce of prevention is worth a pound of cure; in this case, getting the system to do what you want it to do is prevention, and figuring out what it's doing is the hoped-for cure. Ultimately, I think that SA to detect the error is less important than preventing the error in the first place.

Unfortunately, I think that we, as a community of designers, engineers, human factors people, etc. have put a disproportionate amount of emphasis on SA at the expense of helping the user avoid input errors in the first place. For many people, human factors is synonymous with "displays", and controls are taken for granted. Perhaps that's why a modern airplane has big, beautiful, high bandwidth displays with which to communicate to the pilot, and the pilot has knobs, buttons, and a relatively primitive keyboard with which to communicate to the airplane.

I once counted up the number of papers presented at the Human Factors and Ergonomics Society annual meeting that dealt with displays and compared them with the number that dealt with controls. The display-related papers outnumbered the controls-related papers by about five to one. I then did the same exercise for the International Symposium on Aviation Psychology; there, the imbalance was even starker: 159 papers related to displays and 6 related to controls.

I suspect that the same thing may be true in product design. Designers seem to pay a lot of attention to the formatting and appearance of displays, but assume that the user will learn whatever control logic is provided. Hence, we're stuck with alarm clocks whose alarms we can't figure out how to shut off, car radios we can't figure out how to program, etc. And if you think of it, I'll bet that when people have difficulty figuring out how to use a product, it's probably because they're hung up on how to get it to do what they want it to do, rather than trying to figure out what the product is actually doing.

There's one other reason that I think input logic is more important than SA: since people often see what they expect to see, what they thought they did affects what they think the system will do. When pilots select the wrong flight control mode, they may miss all the visual indications that the mode is wrong because they "know" what they told the airplane to do, and they interpret what they see in light of that expectation. Again, the input error is the root cause and SA, at its best, can only catch the original error, but the error and its departure from the user's expectations hampers subsequent SA.

In the interests of preventing such errors in the first place, I'd like to suggest that we start placing more emphasis on controls and input logic, instead of devoting so much attention to SA. We need to make functional logic more intuitive, less complex, and less error-prone. We need to start applying all the cognitive science we've been doing for the past thirty years to control use. Again, an ounce of prevention....

Wednesday, February 21, 2007

New Human Factors Blog

Welcome to my human factors and user-centered design blog. I am a human factors consultant with 25 years of experience in human factors research and design, and I want to use this blog to share thoughts, lessons learned, tools, and best practices with anyone grappling with usability questions and challenges. Some of the topics I intend to cover include the following:

- Many product designers and researchers are looking for convenient, easy-to-use, flexible, and inexpensive rapid prototyping tools. If you have Microsoft Office, you already have the basic tools you need to set up your own portable prototyping environment and usability lab. Did you know that PowerPoint can serve as a very capable rapid prototyping platform, and that with a little Visual Basic for Applications code, you can capture user selections along with time tags and send them to Excel for automated recording of user performance data? I'll describe how to do this in a series of posts.

- I believe that most usability problems associated with electronic products are not due to poor user interface designs, but rather to poor functional logic. In other words, the problem is not typically how the product looks and feels, but rather how it works underneath the interface. If the functional logic is hard to learn and remember, the best UI in the world isn't going to make the product easy to use. I'll discuss some of the usability issues related to functional logic and how to address them.

- The design is only as good as the requirements, and the requirements are only as good as the analysis. I'll present a number of analysis methodologies I've developed and found useful on various projects. Furthermore, the requirements themselves may have an optimal structure in a human-centered process. Using a hierarchy of mission requirements, operational requirements, functional requirements, information requirements, and display/control requirements, along with the appropriate analysis methods for each stage, can help resolve many, if not most, design problems and issues before the design itself is even begun.

- Usability practitioners often debate the merits of expert design reviews vs. formal usability testing, and there's been some research on which is better, or at least more appropriate, for different problems and stages of design. There's a third option: design analysis tools. These can range from checklist-like forms to computer-based tools that apply heuristic reasoning to diagnose interface design problems that may lead to specific kinds of error. This is particularly useful because errors are often hard to produce and observe in the lab. Structured usability analysis tools are a particular interest of mine, and I intend to devote a lot of attention to them.

If any of these topics is of particular interest to you, please let me know and I'll delve into it/them first. I always welcome comments and questions. You can post comments here, or email me at vic@uird.com. If you'd like to learn more about me and our services, please visit my (very simple) web site at www.uird.com.

Thanks for reading -

Vic Riley