Monday, December 18, 2006
Before moving off the topic of laying out a suggested progression for user adoption, I would like to discuss the two main progressive adoption dimensions: efficiency improvement and feature adoption. In essence, efficiency improvement says, "There's a better way to do what you're doing." Feature adoption says, "You can do more things than you're doing."
Efficiency Improvement
The biggest challenge you face with efficiency improvement is that you are coming in low on the benefit/effort ratio. By that I mean that the user is already getting the task done and you're trying to get the user to invest immediate time and energy for a longer term gain. This ranks right up there with telling overweight people they need to diet and telling smokers they need to quit. In other words, don't expect the user community to hoist you on their shoulders and carry you in a display of triumphant gratitude. In this dimension, we are going to want to look at strategies that minimize the adoption effort, possibly embedding shortcuts and some degree of functionality within the user assistance itself.
Feature Adoption
Depending on your business model, increased feature adoption could be a real sweet spot for you. Even though there is still an increased effort required on the part of the user, the one thing you have going for you is being able to offer benefits the user is not currently getting. The dominant strategies here will be to show (illustrate or demonstrate) the new state and communicate the ease with which the user can get there (and cancel back out).
Considerations
When laying out your adoption profile, think about which dimension you will be taking the user along. If you are only going to improve efficiency and the user does not perform that task very often, you might just want to let that sleeping dog lie, or at most give it a gentle nudge and move on.
The better payoff is along the feature adoption dimension; spend your time and creative energy there.
Wednesday, December 13, 2006
The secret to progressive adoption is to stop thinking of adoption as a Yes/No state on the part of the user and instead think of it as a series of incremental adoptions over a period of time. Map out the basic core features that would represent minimal adoption and apply principle 1 to those ("don't get in the way"). Next, decide what levels logically lead the user through a comfortable progression pattern over time.
For example, in online bill pay, we decided it was too much to ask a user to start by turning off paper bills and having the system pay electronic bills automatically. They first had to build a trust in the system. The best progression seemed to be:
- Get the bill in the mail and pay manually online.
- Authorize getting the bill electronically but still pay manually online.
- Authorize routine bills to be received electronically and paid automatically online.
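To make the idea concrete, here is a minimal sketch (TypeScript, purely illustrative; the level names, the trust threshold, and the suggestNextLevel function are my own assumptions, not the bill-pay product's actual design) of modeling that progression as ordered levels, where the application only nudges the user toward the next level after trust has been demonstrated at the current one:

```typescript
// A sketch of the bill-pay progression as ordered adoption levels.
// Level names, the readiness threshold, and suggestNextLevel are
// illustrative assumptions, not the actual product's design.
enum AdoptionLevel {
  PayPaperBillManually = 0,      // bill arrives in the mail, user pays online
  ReceiveBillElectronically = 1, // e-bill delivery, user still pays manually
  AutoPayRoutineBills = 2,       // e-bill delivery plus automatic payment
}

interface UserProfile {
  level: AdoptionLevel;
  billsPaidAtCurrentLevel: number; // rough proxy for trust built at this level
}

// Suggest the next step only after the user has shown comfort at the current
// level, and never skip a level.
function suggestNextLevel(
  user: UserProfile,
  readinessThreshold = 5
): AdoptionLevel | null {
  const ready = user.billsPaidAtCurrentLevel >= readinessThreshold;
  const atTop = user.level === AdoptionLevel.AutoPayRoutineBills;
  return ready && !atTop ? ((user.level + 1) as AdoptionLevel) : null;
}
```

The point of the threshold is simply that the invitation to move up should wait until trust has had a chance to develop.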
Two elements you should consider when planning a progression profile are:
- Level of trust required. Plan a progression that allows the user to build trust with the system. Trust can mean a lot of things: trust you with my data, trust you with my SSN, trust you with my credit card number, and so on. It can also mean trusting that all this work is going to get me what I want. For example, MS Excel's Chart Wizard lets you see how your data will be graphically displayed at each step in the process.
- Level of skill required. Move the user along incrementally, from the basic skills needed to get core value to the more advanced skills that leverage greater value. For example, MS Word starts with a default template in place. Using templates should not be an initial requirement, but should be planned as a step that happens after the user has made the initial adoption. Steps along the skill dimension should be sized for easily managed progression; don't make the user learn a lot to get more value. As long as the perceived increase in value is proportional to the perceived effort to get there, you have a workable progression profile.
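If it helps to see that sizing rule written down, here is a toy sketch (TypeScript; the scales, the ratio, and the threshold are my own assumptions, not a formula from this post) of the idea that a step is workable only when its perceived benefit keeps pace with its perceived effort:

```typescript
// A toy version of the sizing heuristic: a candidate progression step is
// workable only if the perceived benefit is at least proportional to the
// perceived effort required to take it. Scales and threshold are invented.
interface ProgressionStep {
  name: string;
  perceivedBenefit: number; // as judged by the user, e.g., on a 1-10 scale
  perceivedEffort: number;  // same scale; must be greater than zero
}

function isWorkableStep(step: ProgressionStep, minRatio = 1.0): boolean {
  return step.perceivedBenefit / step.perceivedEffort >= minRatio;
}

// Example: learning to apply a template after the initial adoption of Word
const useTemplates: ProgressionStep = {
  name: "apply a template",
  perceivedBenefit: 6,
  perceivedEffort: 3,
};
console.log(isWorkableStep(useTemplates)); // true: the value outweighs the effort
```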
I will discuss concrete user assistance techniques that can be applied to support progressive adoption over my next several blogs.
Stay posted.
Monday, December 11, 2006
In my last blog entry I introduced the concept of progressive user adoption, moving a user further along in terms of the frequency of use, number of features used, or the depth of functionality (moving from basic to advanced). This week I will start to explore principles of progressive adoption, especially where user assistance can be involved.
Principle One: Don't interfere with core functionality.
Keep the basic tasks (the prime reason for the user being in your application) easy to do. This could be Clippy's fatal flaw: he intrudes when I don't need him, forcing me to get off task to dismiss him. His lame attempts to be precious do not make me want to kill him more, just kill him more slowly and in imaginative ways.
How do you apply this principle? For one, when the user assistance intervenes, make the intervention easy to ignore without action. If you force the user to dismiss the intervention, you are detracting from the core experience. Microsoft Project does this fairly well. For example, if you add a resource to a task, an icon lets you know there is a tip. If you click the icon, a popup opens asking whether you want to increase the work or shorten the duration. Based on the mode you are in, it has already made the appropriate decision and has marked it as the default choice. If you just plow ahead and keep working, the popup goes away and the default choice stays in effect. So as a user, I get two opportunities to ignore the progressive help: I learn to ignore the tip icon when I know what the tip is about, and I can just keep working when I get the tip without having to select the default choice. I do have to click back into the desktop, however; it would be even better if I did not have to do even that.
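Here is a rough sketch of that "two chances to ignore" pattern (TypeScript; the names and structure are my own assumptions, not how Microsoft Project is actually implemented): the default decision is applied up front, the popup opens only if the user asks for it, and simply continuing to work dismisses it.

```typescript
// A hypothetical sketch of a non-blocking intervention: the default action is
// already applied, the popup appears only on request, and continuing to work
// closes it. Not Microsoft Project's actual implementation.
type TipOption = "increaseWork" | "shortenDuration";

interface SmartTip {
  options: TipOption[];
  defaultOption: TipOption; // chosen from context, e.g., the current scheduling mode
  popupOpen: boolean;
}

function applyOption(option: TipOption): void {
  // Placeholder for actually adjusting the task's work or duration.
  console.log(`applied: ${option}`);
}

function createTip(defaultOption: TipOption): SmartTip {
  applyOption(defaultOption); // the default takes effect immediately; no action required
  return { options: ["increaseWork", "shortenDuration"], defaultOption, popupOpen: false };
}

// First chance to ignore: the user never has to click the tip icon at all.
function onTipIconClicked(tip: SmartTip): void {
  tip.popupOpen = true; // show the choices, with the default pre-selected
}

// Second chance to ignore: any further work in the document closes the popup
// and leaves the default choice in effect.
function onUserKeepsWorking(tip: SmartTip): void {
  if (tip.popupOpen) {
    tip.popupOpen = false;
  }
}
```

The design choice worth noticing is that the user's inaction is always a valid, safe response; the assistance never blocks the core task while waiting for an answer.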
Probably one of the most important dynamics in progressive adoption is "readiness": the user must be in a state of readiness to accept the change. Until then, coaching or coaxing the user to a new level of product use can detract from the quality of the core experience, and you end up losing the user [insert clever fishing metaphor here; it's early in the morning and I'm too tired to do it myself].
So the bottom line in progressive user adoption is to measure all interventions against the yardstick of "Does this interrupt the core task?" If the answer is yes, change the intervention.
Thursday, December 07, 2006
I'd like to start a series of entries about the role that user assistance can play in what I call progressive user adoption. User adoption describes the rate at which users will accept a new product or new technology. People who discuss user adoption usually mean it in the sense of the initial decision to accept or reject the technology or product, as well as the ongoing reinforcement of that decision. By progressive user adoption, however, I will be focusing on the tendency (or reluctance) of users to progress more deeply into the features, functionality, or frequency with which they use a new technology or product. Of particular interest is why users' adoption curves typically plateau at a suboptimal level.
Let me start with some concrete examples of what I'm talking about. Whenever any of us starts a new job, one of the first questions we ask about the phone system is, "How do I dial out?" We learn what we have to learn in order to make the initial adoption decision. Fast forward several months (or years) and see if that person has learned to do a 3-way conference call (or in my case, even make a simple transfer). In many cases the answer is NO.
And how often have you had to edit a document someone did in Word, only to find that no style tags have been applied? All layout and typographic effects have been done with tabs, paragraph returns (sometimes one between paragraphs, sometimes two or three), and by manually bolding and resizing text to create headings. Why was this person not using the style tag feature that would have made the process so much easier and the output more consistent?
In short, why do people quit learning before they're done learning what they need to know?
Why Care?
Well, first off, why do we care about this premature leveling of the learning curve? If they've bought the software, why should we care how well they use it?
As is so often the case, the first question you need to ask is, "What's your business model?" More and more, due to e-commerce on the Internet, revenue around a product is transaction based. For example, I worked for a company that provided online bill pay software and the processing of the payments that went on behind the scenes. It made money every time someone paid a bill with its product. The more bills someone paid, the more money the company made. Transaction-based products have a lot of skin in the game around progressive user adoption.
Of course, there is e-commerce, where user activity is directly related to revenue. Do you think Amazon.com wants me to stop shopping on their web site after I've bought my books? Do you think they would like me to progressively adopt them as my music and electronic gadget supplier as well?
And even non-transaction-based applications have an interest in my progressive adoption of the features that give their product a competitive advantage or increase my satisfaction and loyalty. Nobody uses WordStar anymore, not because it did not produce good-looking documents, but because it was displaced by GUI-based word processors that made it easier to adopt advanced features, such as style tags, automatic headings, and so on.
And as Google and Microsoft move into the web app space, where revenue will be tied to usage, progressive user adoption will become critical in those kinds of applications as well.
So What's the Problem?
Having been involved in online banking and online bill pay applications, I have been very interested in understanding why users' adoption stops at less than optimal utilization of a product. The following explanation is based on observations made in formal usability tests, focus group research, and contextual studies, and is supported by published research such as Everett Rogers' seminal work (Rogers, E. M., Diffusion of Innovations, 4th ed., New York: Free Press, 1995) and an interesting model called the Technology Acceptance Model (Davis, F. D., Bagozzi, R. P., and Warshaw, P. R., "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science, 35, 1989, pp. 982-1003).
In short, people quit learning before they're done learning for the following two reasons:
- They shift from a learning/exploration mode to a task-oriented mode. When users can meet their initial goals, they stop exploring. Instead, they focus on doing what they came to do, e.g., paying bills or writing a report. In other words, they don't look for ways to do what they don't know they could do. I discuss this problem in general, along with why users abandon help procedures, in a proceedings paper called "Procedures: The Sacred Cow Blocking the Road?"
- A reduced benefit/effort ratio. The benefit/effort ratio is less attractive for incremental improvement than for initial adoption. There is a big difference between “If I don’t learn how to make a phone call, I cannot get in touch with my essential contacts.” and “If I don’t learn how to transfer a call, I can’t pass an outside caller on to someone else in my organization.” The benefit side of the ratio is often diminished in the eyes of the user by existing alternatives that allow the user to reach a goal, although in a less efficient manner. In the call transfer example, the user can always give the outside caller the third party’s extension and ask them to redial that party directly.
What's Next?
I think that user assistance can have a positive effect on progressive user adoption if designed to do so. It can also have catastrophic consequences if done poorly. (I'm not making any specific references to Clippy here; I'm only saying.)
My next series of blogs will continue to explore how user assistance can be an asset to a company where progressive adoption advances the business model.
Stay posted.
Wednesday, December 06, 2006
Today's blog is for die-hard writers who get a buzz from talking about rhetoric. No tools or technology today; I'm going through enough of that on the day job :-)
I was structuring a formal analogy the other day (you know, A:B::C:D, read: A is to B as C is to D) and wondered what the preferred sequence should be. Should the new relationship be in the AB slot, with CD being the relationship the reader is already familiar with, or should AB be the familiar relationship and CD be the one that is new to the reader?
I've always been a big fan of using a Given-New rhetoric when trying to explain complicated material. In that scheme you make the topic (subject) of the sentence some concept the reader is already familiar with, and you introduce the new concept in the predicate. Then the next sentence can take the predicate from the previous sentence and make it the subject, since it has now become a "given." The technique allows you to build up a knowledge base, so to speak, within the reader in small, manageable steps.
For example, let's say you had to explain DITA to a reader base for whom it would be a new concept. Watch how in the following text, the subjects of the sentences are concepts that are already familiar to the reader. Pay particular attention to the dance that ensues from a concept going from the predicate position in one sentence (where it was the "new" concept) to being the subject in the next sentence (because it is now a "given" concept). The following explanation assumes that the concept of structured writing is a familiar one to the reader.
A form of structured writing that has gained much popularity in recent years is DITA. DITA stands for Darwin Information Typing Architecture and is an XML-based approach to authoring. XML is the mark-up language that enables authors to share content across different platforms and among different documents.
You get the idea. This is a blog and that was a quick example, so don't edit me too critically on it. Like any horse, Given-New can be ridden to death, and its overuse can leave your discourse sounding "sing-songish" and feeling mechanical. Nonetheless, I have found that it is often a good technique for first drafts of paragraphs where I feel I have to move the readers across a rather large gap between what they already know and what they need to know.
But that logic didn't "feel" right to me when trying to put an analogy together; the order of New-Given seemed better within that device. For example, let's say I am trying to explain DITA topics to someone who is already familiar with Information Mapping. Which of the following analogies works better?
- Topics in DITA are similar to maps in Information Mapping.
- Maps in Information Mapping are similar to topics in DITA.
I think the first works better even though it leads with the new concept and relates it to a given. Maybe it is because, in context, the sentence would appear in a discussion about DITA topics, and, at least in that context, topics would already be the given.
But beyond that, I think there is something to be gained in an analogy by posing the strange relationship first and then grounding it in the familiar. It seems to be consistent with a principle I have noticed in instructional design: Students have no way to process a solution until they experience the problem. In other words, it's best to raise the question before providing the answer as an isolated fact.