Situation awareness has been a significant focus area for human factors over the last twenty years or so. I suspect that much of the emphasis on SA can be traced to Earl Wiener's finding, in the early studies of flight deck automation, that one of the most common questions on the flight deck was "what's it doing now?" The importance of SA has since been reinforced by the apparent causes of accidents in some automated aircraft. For example, the first three A320 accidents, which occurred shortly after the type's introduction, involved very experienced flight crews losing track of fundamental aspects of the flight: airspeed, altitude, vertical speed, energy, etc. I think it's unlikely that an experienced pilot would ever lose track of any of these parameters in an older, non-automated aircraft, so SA certainly seems to be the important issue when it comes to managing highly automated, safety-critical systems.
But is it really the most important issue? Is "what's it doing now" really the most important question?
What causes a human-error-related accident in the first place? Is the root cause a failure to notice what the system is doing? Or is it instead an input error that causes the system to enter an undesired, unexpected state? If it weren't for the initial error, would it be so important to notice what the system is doing?
I'd like to suggest that the question, "How do I get it to do what I want it to do?" is actually more important than the question, "What's it doing now?" After all, the failure to accurately communicate intent to the system is what causes the undesired state in the first place. An ounce of prevention is worth a pound of cure; in this case, getting the system to do what you want it to do is prevention, and figuring out what it's doing is the hoped-for cure. Ultimately, I think that using SA to detect the error is less important than preventing the error in the first place.
Unfortunately, I think that we, as a community of designers, engineers, human factors people, etc., have put a disproportionate amount of emphasis on SA at the expense of helping the user avoid input errors in the first place. For many people, human factors is synonymous with "displays," and controls are taken for granted. Perhaps that's why a modern airplane has big, beautiful, high-bandwidth displays with which to communicate to the pilot, while the pilot has knobs, buttons, and a relatively primitive keyboard with which to communicate to the airplane.
I once counted the number of papers presented at the Human Factors and Ergonomics Society annual meeting that dealt with displays and compared it with the number that dealt with controls. The display-related papers outnumbered the controls-related papers by about five to one. I then did the same exercise for the International Symposium on Aviation Psychology; there, 159 papers were related to displays and 6 to controls.
I suspect that the same thing may be true in product design. Designers seem to pay a lot of attention to the formatting and appearance of displays, but assume that the user will learn whatever control logic is provided. Hence, we're stuck with alarm clocks whose alarms we can't figure out how to shut off, car radios we can't figure out how to program, etc. And if you think about it, I'll bet that when people have difficulty figuring out how to use a product, it's because they're hung up on how to get it to do what they want, rather than on what the product is actually doing.
There's one other reason that I think input logic is more important than SA: since people often see what they expect to see, what they thought they did affects what they think the system will do. When pilots select the wrong flight control mode, they may miss all the visual indications that the mode is wrong because they "know" what they told the airplane to do, and they interpret what they see in light of that expectation. Again, the input error is the root cause; SA, at its best, can only catch that original error, and the very expectations created by the error hamper the SA needed to catch it.
In the interests of preventing such errors in the first place, I'd like to suggest that we start placing more emphasis on controls and input logic, instead of devoting so much attention to SA. We need to make functional logic more intuitive, less complex, and less error-prone. We need to start applying all the cognitive science we've been doing for the past thirty years to control use. Again, an ounce of prevention....