I'd like to start a series of entries about the role that user assistance can play in what I call progressive user adoption. User adoption describes the rate at which users accept a new product or technology. People who discuss user adoption usually mean it in the sense of the initial decision to accept or reject the technology or product, as well as the ongoing reinforcement of that decision. By progressive user adoption, however, I mean the tendency (or reluctance) of users to progress more deeply into the features, the functionality, or the frequency with which they use a new technology or product. Of particular interest is why users' adoption curves typically plateau at a suboptimal level.
Let me start with some concrete examples of what I'm talking about. Whenever any of us starts a new job, one of the first questions we ask about the phone system is, "How do I dial out?" We learn what we have to learn in order to make the initial adoption decision. Fast forward several months (or years) and see whether that person has learned to do a three-way conference call (or in my case, even make a simple transfer). In many cases the answer is NO.
And how often have you had to edit a Word document someone else created, only to find that no style tags have been applied? All layout and typographic effects have been done with tabs, paragraph returns (sometimes one between paragraphs, sometimes two or three), and by manually bolding and resizing text to create headings. Why wasn't this person using the style tag feature that would have made the process so much easier and the output more consistent?
In short, why do people quit learning before they're done learning what they need to know?
Well, first off, why do we care about this premature leveling of the learning curve? If they've bought the software, why should we care how well they use it?
As is so often the case, the first question you need to ask is, "What's your business model?" More and more, due to e-commerce on the Internet, revenue around a product is transaction based. For example, I worked for a company that provided online bill pay software and the processing of the payments that went on behind the scenes. It made money every time someone paid a bill with its product. The more bills someone paid, the more money the company made. Transaction-based products have a lot of skin in the game around progressive user adoption.
Then there is e-commerce itself, where user activity is directly tied to revenue. Do you think Amazon.com wants me to stop shopping on their web site after I've bought my books? Do you think they would like me to progressively adopt them as my music and electronic gadget supplier as well?
And even non-transaction-based applications have an interest in my progressive adoption of the features that give their product a competitive advantage or increase my satisfaction and loyalty. Nobody uses WordStar anymore, not because it did not produce good-looking documents, but because it was displaced by GUI-based word processors that made it easier to adopt advanced features, such as style tags, automatic headings, etc.
And as Google and Microsoft move into the web app space, where revenue will be tied to usage, progressive user adoption will become critical in those kinds of applications as well.
So What's the Problem?
Having been involved in online banking and online bill pay applications, I have been very interested in understanding why users' adoption stops short of optimal utilization of a product. The following explanation is based on observations made in formal usability tests, focus group research, and contextual studies, and it is supported by published research such as Everett Rogers' seminal work (Rogers, E. M., Diffusion of Innovations, 4th ed., New York: Free Press, 1995) and an interesting model called the Technology Acceptance Model (Davis, F. D., Bagozzi, R. P., and Warshaw, P. R., "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science, 35, 1989, pp. 982-1003).
In short, people quit learning before they're done learning for the following two reasons:
- They shift from a learning/exploration mode to a task-oriented mode. Once users can meet their initial goals, they stop exploring. Instead, they focus on doing what they came to do, e.g., paying bills or writing a report. In other words, they don't look for ways to do what they don't know they could do. I discuss a related problem, why users abandon help procedures, in a proceedings paper called "Procedures: The Sacred Cow Blocking the Road?"
- A reduced benefit/effort ratio. The benefit/effort ratio is less attractive for incremental improvement than for initial adoption. There is a big difference between "If I don't learn how to make a phone call, I cannot get in touch with my essential contacts" and "If I don't learn how to transfer a call, I can't pass an outside caller on to someone else in my organization." The benefit side of the ratio is often further diminished in the user's eyes by existing alternatives that allow the user to reach the goal, although less efficiently. In the call transfer example, the user can always give the outside caller the third party's extension and ask them to redial that party directly.
I think that user assistance can have a positive effect on progressive user adoption if designed to do so. It can also have catastrophic consequences if done poorly. (I'm not making any specific references to Clippy here; I'm only saying.)
My next several blog entries will continue to explore how user assistance can be an asset to a company whose business model depends on progressive adoption.