“Oh, come on,” Richard said quietly to himself.
“Sorry?” Bill, or Bob, or whatever lifted his head from behind Richard’s monitor.
“Sorry, not you,” Richard said. “It’s this stupid magazine. Everyone’s blaming the financial industry for the subprime mess, but what about the home buyers? Why doesn’t anyone make them take responsibility for signing up for loans they couldn’t repay?”
“Good point, sir.” Bob or Bill put his head back down and tapped some more keys.
“How much longer, anyway?” said Richard.
“It’ll be another hour or so. Um, did you change any of the settings on your anti-spyware utility?”
“Oh, I don’t know. Maybe. It was running real slow so I changed a bunch of settings in various places until it sped up. Why?”
Bill or Bob sighed. “Well, you’ve got a major spyware infestation, including several stealth keyloggers installed by Trojans. They’ve probably captured all your passwords and account settings. Do you access the company trading accounts from this machine?”
“Sure, I have to. So, what do you have to do?”
“Well, sir, you should probably make sure there haven’t been any unauthorized transactions from your account. In the short term, I can get you up and running but I’ll have to reformat your drive and reinstall Windows. You’ll lose all your files, but your computer will be up again. I may be able to restore your files from backup but I’ll have to make sure they’re clean.”
“OK. Whatever.” Richard went back to his reading while his computer made the Windows shutdown and startup noises several times in a row. Finally, he folded the magazine and hurled it across the room, neatly hitting the rim of his wastebasket and knocking it over. “Damn. You’d think people would know better than to sign false income statements. I just don’t get it.”
He pondered for a few more minutes. Then, “If people aren’t going to be responsible, how do you keep it from happening again?”
Bob or Bill looked up and hesitated for a moment. “Well, sir, if it were up to me, I wouldn’t let anyone use a computer unless they knew what they were doing.”
Sunday, September 23, 2007
Saturday, September 15, 2007
Feature Creep
I've often heard users and designers bemoan "feature creep" and express the wish that manufacturers would limit the number of features supported by their products in order to make them more simple to use. While I sympathize with the goal, I don't think that avoiding feature creep is necessarily the solution.
Consider the iPod. A recent issue of MIT's Technology Review magazine (May 2007) focused on design, and frequently held up the iPod as a design ideal. Don Norman, speaking of Apple in general, stated, "The hardest part of design, especially consumer electronics, is keeping features out." Mark Rolston, senior vice president of creative at Frog Design, said, "The most fundamental thing about Apple that's interesting to me is that they're just as smart about what they don't do. Great products can be made more beautiful by omitting things."
So, back to the iPod. It began life as a music player. Along the way, it added a calendar, contacts, notes, alarm clocks, world clocks, a stopwatch, audiobooks, a picture viewer, video, podcasts, and games, and it can be used as an external hard drive, even a bootable one for an OS X machine. The iPod Touch adds internet surfing, a YouTube viewer, online access to the iTunes music store, and the ability to buy whatever song you're currently hearing at Starbucks.
Hmmm... Isn't this the very definition of feature creep?
And yet, the iPod remains beloved, an icon of good design. And very deservedly so.
Because what makes the iPod easy to use is not feature restraint, but rather the fact that all of its many features work the same way. The user need only learn one general rule about how the interface works and can apply that rule to pretty much every function.
So perhaps the trick isn't in avoiding feature creep, but rather in avoiding "rule creep".
Sunday, June 3, 2007
Six Sigma, Innovation, and Usability
This week's (June 11, 2007) Business Week cover story is about how Six Sigma "almost smothered" 3M's culture of innovation. The gist of the story is that 3M's attempt to use Six Sigma in every corner of its operations, including R&D, caused innovation to become more incremental, more safe, and more saddled with administrative overhead. Consequently, higher risk programs weren't pursued and 3M has slipped markedly in measures of innovation, such as the proportion of revenues coming from recently developed products.
Having been through a Six Sigma transformation at a large organization, I have a few thoughts on this.
First, it seems to me that user experience is one of the most important and least appreciated aspects of quality. As a proponent of applying more structured methods to user requirements and usability assessment, I think that formal user requirements methods and structured usability testing fit right in with the spirit of Six Sigma and other quality programs. If Six Sigma, for example, causes an organization to move from focus groups to more effective human performance-based measures of usability, I'm all for it. I hope that anyone advocating for Six Sigma within an organization will recognize that user experience and usability are key aspects of quality and deserve the rigor and emphasis traditionally paid to engineering and manufacturing.
Second, I think that everyone should learn something about statistics. I think that some understanding of probability and statistics is actually necessary in order to not only work effectively, but also to make informed decisions in most aspects of life, including voting. When an organization adopts Six Sigma and makes everyone learn the basics of statistical analysis, everyone benefits, including society at large.
However, no program dedicated to process improvement should itself become a process impediment, and the indiscriminate application of Six Sigma to all parts of an organization has a lot of potential to do just that. It's well understood that many of the most profitable breakthrough products come from serendipitous discoveries enabled by exploratory research that may not be undertaken in a risk-averse, highly controlled environment. This is what the Business Week article focuses on.
To me, the benefits and drawbacks of Six Sigma all stem from the same source: the fact that Six Sigma tries to substitute objectivity for subjectivity and data for intuition wherever possible. I think this is both useful and appropriate in some applications, such as administration, logistics, and manufacturing, but not so much in R&D. Research and design, particularly the exploratory types that lead to breakthrough products, depend to a large extent on subjectivity and intuition and are easily stifled by processes that seek to drive them out.
Furthermore, even where it's useful and appropriate, Six Sigma can still fall down because it's very susceptible to garbage-in/garbage-out. In Design for Six Sigma, for example, people often have to enter ratings of competitive position and other market environment factors into an analysis process, and these form the basis for subsequent decisions. Many of these factors can't be directly quantified or measured and must necessarily be subjectively estimated. Lengthy analysis processes can often include chains of subjective judgments, and small errors in early steps can compound into much larger errors at the end. It's very easy, in Six Sigma, to end up with a product that looks like fact or data but is actually largely fiction, because of the use of formal methods to force subjective products into an apparently objective framework.
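The compounding problem is easy to see with a little arithmetic. Here's a toy sketch (the five-step chain and the 10% bias are invented for illustration, not taken from any real Six Sigma analysis) showing how a chain of multiplied estimates, each only slightly off, produces a badly off final figure:

```python
# Toy illustration: a composite score built by multiplying five subjectively
# estimated factors. If every estimate is inflated by just 10%, the final
# score is off by more than 60%.

def chained_score(factors):
    """Multiply a chain of estimated factors into one composite score."""
    score = 1.0
    for f in factors:
        score *= f
    return score

true_factors = [1.0] * 5    # the "real" values, if we could know them
biased_factors = [1.1] * 5  # each estimate off by only 10%

relative_error = chained_score(biased_factors) / chained_score(true_factors) - 1
print(f"compounded error: {relative_error:.0%}")  # roughly 61%
```

Five modest judgment calls, each defensible on its own, and the output looks like data while being mostly noise.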
The bottom line, I think, is that Six Sigma and other quality programs can be very useful and productive if used within their valid limits, but can be highly counterproductive otherwise.
Wednesday, April 4, 2007
PowerPoint Power Tip # 1
Need to delete or change an object that's being covered by another object that you don't want to move or delete? Click and drag over both objects to select them both, then hold the Shift key and click once on the stack. This de-selects the object in front, since that's the one that intercepts the click. Now only the object behind is selected; you can delete it, move it with the arrow keys, or do whatever else you want with it.
Tuesday, March 13, 2007
The Right Answer
One of our friends here recently sent out a flyer on behalf of the local historical society asking people to submit their stories of how they learned about Point Roberts and how they got here. The flyer also asked people to donate $10 to the society. Unfortunately, the wording of the flyer made it appear that people had to donate in order to submit their stories, and the wording of the request for stories was so open-ended that some of the responses were unusable.
A university student was wrestling with the design of an experiment. She knew the general topic to be investigated and had a general approach in mind, but couldn't arrive at the specific approach or the steps that needed to be taken.
A group of engineers at a company I worked with were trying to design an input device for a workstation. They were trying out several options and having a hard time deciding which one was the best.
These situations all have one thing in common: inadequately defined requirements. I think this is one of the most common mistakes made by designers of every type, and I think it stems partly from a tendency to think one already knows all the requirements, or that the requirements are obvious and don't need to be spelled out. But my friend at the historical society might have had a better response if he had first spelled out what the desired product was (a specific type of story and a willingness, separate from the story, to make a donation), the student would have had an easier time designing the experiment if she had articulated the experimental question to be resolved, and the team of engineers would have been better able to decide which design was best if they had specified some criteria first.
You may think this goes without saying, but I can't count the number of meetings I've been in that have gone either nowhere or in circles until someone said, "What are the requirements?" Perhaps it's one of those lessons that are so basic that we need to be reminded of them continually, because we take them for granted otherwise. And, as illustrated by these three situations, I think this lesson goes beyond product or interface design to life in general. After all, how do you know what the right answer is if you haven't defined the requirements?
Thursday, March 1, 2007
New Term...
...for a poorly designed interface: "untuitive".
Saturday, February 24, 2007
The Features/Usability Bind
One of the implications of Moore's Law (strictly an observation about transistor counts, but popularly glossed as processing power doubling about every eighteen months) is that in about eighteen months you'll be able to buy products that are twice as complicated as the ones you can buy today. This seems inevitable in a commoditized electronic products marketplace, in which manufacturers seem only able to compete based on the number of features they can fit into a product.
The twin trends of technology products are smaller form factors and more features. This leads to what I call the "features/usability bind", in which smaller devices with smaller physical UIs have to access larger numbers of functions. Inevitably, this ends up requiring that input devices (even simple pushbuttons) become multi-function; even if the interface remains relatively simple, the underlying functional logic becomes more complex. Furthermore, unless the problem is addressed at the level of the functional logic, complexity will continue to grow proportionally with features. Perhaps this is why so many electronic products are returned as defective because people can't figure out how to use them.
Engineers have often stated that they expect technological progress to hit a brick wall when the laws of physics catch up with Moore's Law and no more transistors can be crammed onto a chip. I think the real brick wall of technology isn't the number of transistors that can fit on a chip, but rather the number of rules that will fit in a user's head. After all, the user has to learn and remember all the rules that govern how the product works; if these rules aren't intuitive, they must be learned by rote, or the user will simply decide that the feature isn't worth the effort.
I think that there are three basic classes of usability problems associated with electronic products today: modes, convoluted functional logic, and hidden functions. Let me briefly describe each, as I see them.
The technical definition of "modes" from a UI perspective is when the same user action produces different results when the control or system is in different states. A mode error led to the crash of an Airbus A320 in Strasbourg, France, when the pilot, attempting to dial a 3.3 degree flight path angle into the autopilot, instead selected the vertical speed mode, causing the value to actually be 3300 feet per minute. Mode errors in consumer electronics products don't usually have such dramatic consequences, but they can be annoying. One that I typically encounter is when I try to change the channel on my satellite TV receiver but instead cause the TV to revert to "tuner" mode because the remote was in TV mode, and the TV thought I was trying to change its internal tuner channel. A lot of people have come up after presentations to tell me that they've tossed the universal remote control and gone back to the five separate ones because they were tired of making similar errors all the time.
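The definition above (same action, different result depending on state) can be sketched in a few lines of code. This is a hypothetical model of the satellite remote scenario, with made-up mode names and behaviors, not a description of any real product:

```python
# Hypothetical sketch of a moded remote control: the same CH+ press maps to
# different actions depending on which mode the remote is in. The user's
# mental model ("CH+ changes the satellite channel") silently breaks when
# the mode has changed underneath them.

class Remote:
    def __init__(self):
        self.mode = "SAT"  # current mode: "SAT" or "TV"

    def press_channel_up(self):
        # Identical user action, state-dependent result: the definition of a mode.
        if self.mode == "SAT":
            return "satellite receiver: next channel"
        return "TV internal tuner: next channel"

remote = Remote()
print(remote.press_channel_up())  # satellite receiver: next channel
remote.mode = "TV"                # mode changed, perhaps without the user noticing
print(remote.press_channel_up())  # TV internal tuner: next channel
```

The bug, from the user's point of view, is that nothing about the button itself changed; only the invisible state did.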
Convoluted functional logic is when accessing a function requires following a complex or hard-to-recall series of steps. For example, storing a phone number in a speed dial location of a phone that I have requires you to press PROG to put the phone into "program" mode, then press the speed dial button where you want to store the number, then dial the number on the keypad, then press another speed dial button labeled MEMORY which, when the phone is in program mode, stores the number you just typed into the memory location you selected at the start of the sequence. Then, you press PROG to take the phone out of "program" mode and put it back into "phone" mode. (This example has the added benefit of again demonstrating a problem with modes.) Another example would be that, on a minidisc recorder I used to own, you had to press and hold the RECORD button and simultaneously press the VOLUME button in order to select manual record level. Have you ever had trouble figuring out how to turn off the alarms on a hotel clock radio? Convoluted logic was probably the culprit.
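The speed-dial ritual above amounts to a strict step sequence that the user must memorize. A minimal sketch, with the step machine itself invented for illustration (the button names follow the post, but no real phone is being modeled):

```python
# Hypothetical model of the speed-dial programming sequence: every button
# press is only valid at one exact point in the chain. Deviate and the
# attempt silently fails.

EXPECTED_SEQUENCE = ["PROG", "SPEED_DIAL_SLOT", "NUMBER", "MEMORY", "PROG"]

def program_speed_dial(presses):
    """Return True only if the user performs the exact five-step ritual."""
    return presses == EXPECTED_SEQUENCE

# The correct incantation works...
print(program_speed_dial(["PROG", "SPEED_DIAL_SLOT", "NUMBER", "MEMORY", "PROG"]))  # True
# ...but the "obvious" order (dial the number first) fails.
print(program_speed_dial(["NUMBER", "SPEED_DIAL_SLOT", "MEMORY"]))                  # False
```

The logic isn't hard for the phone; it's hard for the person, because the sequence is arbitrary and gives no feedback along the way.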
Hidden functions are typically press-and-hold functions that aren't revealed anywhere in the interface. A friend of mine had to leave a car wash once because he couldn't get the antenna down; when he pressed the radio POWER button, the system would alternate between radio and CD player. It turned out that he had to press and hold the button for a second or so in order to actually turn the power off.
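The car-wash story can be sketched the same way. This is a hypothetical reconstruction; the one-second threshold and the source names are assumptions for illustration, not the actual head unit's behavior:

```python
# Hypothetical sketch of a hidden press-and-hold function: nothing in the
# interface reveals that the POWER button behaves differently when held.

HOLD_THRESHOLD_S = 1.0  # assumed threshold; not from any real product spec

def power_button(hold_seconds, current_source):
    """Short press toggles the source; a long (hidden) press turns power off."""
    if hold_seconds >= HOLD_THRESHOLD_S:
        return "off"
    # Short press: alternate between radio and CD, never powering down.
    return "cd" if current_source == "radio" else "radio"

print(power_button(0.2, "radio"))  # cd  -- short press just toggles source
print(power_button(0.2, "cd"))     # radio
print(power_button(1.5, "cd"))     # off -- the hidden long-press function
```

Until my friend discovered the long press, the "off" branch simply didn't exist in his model of the product.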
I'll bet that most of the problems people have using products are due to one of these three classes of problems, and not to poor design of the interface itself. One of the ironies is that modes, for example, are often intended to simplify product use by collecting similar functions into the larger umbrella of a mode. But then the user has to learn about the modes, and they themselves become a source of complexity. This is why mode errors have been implicated in several aircraft accidents and at least one cruise ship accident in recent years.
Ultimately, these three classes of problems represent rules that the user must learn and remember. I think we're already at a point where people don't use all the features of products they already own, so trying to sell them new products based on additional features may soon become a losing proposition.
In my view, the best way out of the features/usability bind is to decouple the conceptual complexity of the product from the functional complexity, so the user doesn't have to learn new rules in order to use new features. Note that this is not an interface-level problem. If the functional logic is hard to use, the best interface in the world won't make the product easy to use. One strategy is to apply appropriate metaphors at the level of the product's logic, rather than just the interface. We're used to thinking about metaphors in terms of what icons represent, how radio buttons and check boxes work, and so forth. But the principles of metaphors can be carried to the level of the logic and, ideally, applied across the entire range of product functions. In other words, make the product work like the user thinks, so the user doesn't have to learn how the product works.
The desktop metaphor of the Mac and Windows interfaces is a good example of this, because it re-defines the underlying functions of the operating system into a conceptual world that the user already knows. And it offers a way out of the features/usability bind. After all, Windows is substantially easier to use than DOS was, but it's infinitely more functionally complex.