Thursday, November 30, 2006
Sometimes we operate under the myth that we must write user assistance for the lowest common denominator. Quite frankly, I think this leads to bad help. The better approach is multichannel: deliver user assistance through several channels, and target each channel at the level of expertise appropriate to it.
I'm working on an embedded user assistance model (a dedicated help pane in the application UI), and this principle has suddenly clarified things for me. The issue came up: how far do we go with the embedded user assistance? My answer is, "Not too far." This channel is excellent for users who are almost smart enough not to need assistance. If the gap is large, other channels, such as elearning and tutorials, are the appropriate place to deal with those needy ones.
In other words, it's OK to say, "You have to be this tall to ride this ride."
Once we accept this, then we can focus user assistance at the audience more appropriately.
Let's say you were designing embedded user assistance for a word processor, specifically the part of the application where you do Headers and Footers. I'd note in the embedded user assistance that running headers can be automated by inserting a StyleRef field, which pulls in the current heading. I might add that this helps users find a topic by browsing the document header.
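For the curious, the mechanism itself is a one-liner. This is a sketch, not a quote from any product's help, and the style name is whatever heading style the document actually uses. In Word, a running header that always shows the current first-level heading is just a field like:

```
{ STYLEREF "Heading 1" }
```

The embedded UA can say that much in a sentence; that is about as deep as this channel needs to go.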
But what if someone doesn't understand style tags? Should we put help about that in the embedded UA? What about principles of document design in general and what constitutes a good heading hierarchy? And should the StyleRef refer to Heading 1, Heading 2, or what?
Nope, nope, and nope. A snippet of help in a narrow sidebar, in the middle of an off-main-page task, is not the time or place to educate the user about document design. It is a good place to ooch a fairly competent user to a higher level of efficiency or performance.
Put the training bit somewhere else.
Besides, what are the odds that your lowest common denominator is doing headings anyway?
Wednesday, November 29, 2006
And as Jerry Seinfeld would say, "Not that there's anything wrong with that." But I need to regroup and get my strategist hat back on here at my day job, and I feel the need to articulate and summarize what it is I do as a User Assistance Architect that is different from what I did as a technical writer.
I seem to spend more time building models than producing documents. I do task analysis, just as a technical writer would, but I'm less interested in "what does a user need to do?" than in "what does a user need to know?" And beyond that, I abstract one more level: "what kinds of information does a user need?"
I define patterns a lot. We have a department Wiki and I have a published pattern language I follow in posting patterns to our Wiki. By the way, I have an article coming out in the January/February issue of Interactions, the SIGCHI magazine. That issue will be a special topic issue edited by Fred Sampson focusing on User Assistance. My article is entitled "A pattern language approach to user assistance" (so much for coy titles). I hope folks get a chance to read it. I will be doing a presentation on this same topic at the WritersUA conference in Long Beach in March.
I wireframe a lot. I never did that as a technical writer, and frankly, I don't see a lot of technical communicators doing that. Wireframes let me model how the user assistance will behave. One reason we don't do a lot of that as technical communicators is that we are bound by the authoring tools. But that is tied into the model that Help is a separate application. As we get into more interactive models where user assistance is blended into the application, we need to wireframe how that works. Wireframing and use case modeling are two nifty disciplines I picked up while working as a UX designer at my previous job.
But I don't do a lot of use case modeling, and I'm not sure why not. Perhaps the pattern language approach fills the need that use cases did when I was designing UIs. But the other day, I did find myself looking at wireframes and asking about alternate and exception cases, so the discipline is still there and seems to influence me.
Content Management and Publishing Technologies
I spend a lot of time researching how we can author, store, retrieve, compile, and display information. Five years ago I would have been thinking about writing and publishing documents.
And somewhere in all that will eventually come architecture and tools.
Thanks for your patient ear. I'm stoked again.
My job is (1) to understand how our users apply information to their tasks, (2) to determine how best to structure and deliver that information within the contexts of those tasks, and (3) to figure out how to author and manage that information so that it can meet (1) and (2).
I gotta get to work!
Monday, November 27, 2006
In a tryptophan-induced semi-coma this weekend, I experienced a convergence that tied my previous blog in with several seemingly disparate topics. The blog is from November 17, where I bemoan having to wade through so much metadiscourse to get to actual content, e.g., "This chapter is about... This topic is about... This procedure is about..." Over the long holiday weekend, I was reading an article in the current issue of Technical Communication by some researchers in Washington state (BTW, kudos to the research leadership of Jan Spyridakis at the University of Washington) who studied the effects of the frequency of headings in online and print documents. The upshot of the research is that having too many headings is distracting in both print and online, but even more so for online documentation.
In my blog, I noted that the problem seemed more annoying to me when navigating a PDF through the bookmarks (which coincided with the block label headings) than when scanning the printed manual. The research seems to validate that this was not an isolated reaction: the extra navigation adds cognitive load. But the research also pointed out that too many headings had an aggravated negative effect in online documents even when the headings were not part of the navigation scheme but simply occurred as readers scrolled through a monolithic, multi-heading block of text.
My explanation is that reading online is like looking through a periscope, whereas reading print is like looking at the landscape from an open deck. Looking through a periscope, we seem to focus on detail more; therefore, we are more likely to be interrupted by the headings (the speed-bump effect I describe in my earlier blog). The same thing happens to me when I read the program listings for my cable. The movie listing gives the cast first and then the blurb about the movie. Even though I have no interest in the cast, I find myself reading it. I think it is an effect of the periscopic focus from scrolling through the movie list.
Have you noticed on CSI that when the investigators enter the crime scene, they never turn the overhead light on? They use flashlights instead. My theory is that it helps them focus on detail and not be distracted by the broader landscape, so to speak. It forces periscopic focus.
The research reminded me and validated again that the online reader experience is less forgiving than the print experience. We need to get to the point as directly as possible.
As an avid Information Mapper, I was also given pause to consider the potential downside of chunking at too granular a level, especially where limited screen real estate promotes aligning block labels with the body of the text (as opposed to the marginal outdenting more common in print presentation). In that presentation scheme, headings are more likely to interrupt the flow.
It also raises some interesting questions about structured writing in general where content is written independently of presentation media. Can content be authored with media-agnostic assumptions?
The good news is that the Goldilocks principle still prevails: Although too much is much too much online, just right seems to be just right in both print and online.
Friday, November 17, 2006
[Warning: Taking any of the advice in today's blog could prevent you from winning awards in publication competitions.]
Metadiscourse is talking about the talking or writing about the writing. For example, the beginning of this sentence is metadiscourse; it has no content but tells you that what follows is an example. Metadiscourse can be a useful device to help listeners and readers know how to process what is about to come. Which, as an aside, has always made me doubt the effectiveness of putting them at the end of the discourse, as in this sentence for example.
Metadiscourse exists at the document level as well. For example, a table of contents is a form of metadiscourse.
I sometimes find myself having to wade through layers of metadiscourse to get to the value of a document. This seems most inconvenient when I am trying to navigate a PDF manual using the bookmarks. It seems like it takes me way too many clicks to get to where I find anything of value.
Ah, the chapter on Painting Widgets, just what I need. Let me click on Introduction:
"Introduction: This chapter is about how to paint widgets."
Hmmm. OK. Let me click on Overview.
"Overview: This chapter has the following topics:
- All About Widgets
- All About Paint"
I'll just click on Procedures:
"This section describes the following procedures:
- Selecting a color
- Preparing the widget
- Painting the widget"
Let's just go to Painting the Widget
"Follow this procedure to paint a widget"
We need to get readers to the good stuff quicker. How?
Don't make the TOC (or bookmarks in a PDF) overly detailed. Maybe just a listing of the chapters is all that is needed. An information-mapped document probably only needs chapter titles and map titles. Listing every block label in the bookmarks or TOC is probably excessive.
Stop writing chapters called "About this Guide" where we tell the reader why we italicize some words, set some in Courier, some in bold, etc. I don't think the following scenario happens:
Hmmm here's a definition and the word browser is in italics. Let me go to Chapter One and see what's up with that. Oh, apparently browser is the term being defined. Glad I looked up the Conventions Used in this Guide piece.
And let's avoid intros that restate the topic title in sentence format, or stem sentences that restate topic headings. Let's rethink whether every chapter needs a local TOC (or at least not call it Overview in the bookmarks).
It's not that I don't know about or value advance organizers. But they're kind of like speed bumps: a few are good, but too many in short succession make me put the Wrangler in four-wheel drive and take to the sidewalk.
End of Rant
Thursday, November 16, 2006
Heisenberg says the more accurately we know where an electron is, the less accurately we can know what its velocity is. And vice versa. John Carroll talks about the Heisenberg Uncertainty Principle of Training: The more complete the training is, the less usable it is; the more usable it is, the less complete it is.
I believe the same goes for user assistance. The weight of being complete is not without its costs. For example, I recently read a user manual that explained the login screen. By the way, this screen has two fields and one button. One field is labeled UserName and the other is labeled Password. The button is labeled Login.
It took a page with a screen shot to document how to log in. It turns out, after reading the manual, that I am supposed to put my UserName in the UserName field and my Password in the Password field. Then, according to the manual, I need to click on the button called Login.
There was some extra information: if I don't know my UserName and Password, I should contact my System Administrator. And to get to the login page, I need to type the IP address of the machine hosting this particular application into my browser's address bar. Well, if I didn't know either of those things and went to the user assistance, I still wouldn't know them.
My question of the day is: If the UI is well designed and tells the user everything the user needs to know, do we need to document it at all?
What's the harm?
Why not document even the obvious? I can think of two reasons:
- It gets in the way. This user guide was 86 pages long. I can very quickly make it 85: don't document the login screen. Let's assume that somewhere in that document is the one page the user needs. An 86-page document has 85 distractors (wrong or useless pages for that problem). An 85-page document has only 84 distractors. Not a big improvement, but hey, I was only on page two; who knows what I could do if I dug deeper.
- It fools us (the writers) into thinking we have documented the user's need. Maybe what this guide should have documented is how to read the IP address off of a machine. It was odd that it assumed a user would need help figuring out what to put in the Password field but would be adept at figuring out an IP address.
Wednesday, November 15, 2006
I attended a delightful presentation at the local STC meeting last night called "Why I Didn't Hire You." Slides were clever, speaker was witty, and the content was a good encapsulation of conventional wisdom and sound advice for technical writers looking to get hired.
And that's what disturbed me.
The only part I liked was the part about using the applicant's resume as an indication of the applicant's document design and information organizational skills. Right on!
The disturbing part was the behavioral advice concerning the interview: Hiring managers make their decisions based on criteria that have no correlation to what makes a writer successful.
Speaker's advice: "Dress professionally; who would you hire from the four men in this slide?" The right answer was the older white guy in the suit. One of the wrong answers was the younger African American man well-dressed but wearing a turtleneck shirt. Anyone want to venture a guess as to the speaker's demographic?
Similar question for the slide with four women. The winner was the attractive woman in a dress suit and perky tie. Loser was the slightly overweight woman wearing slacks and a man's tie.
My question was, "Who in these pictures looks like the really good writers and editors I've worked with?" Losers in that category included the older white guy in the suit and the woman wearing the dress suit and perky tie.
OK, bad question. Try this one: "Who in these pictures looks like the development team our writers would work with?" Oops, same answer as before.
Other disturbing advice (disturbing because it really is practical and accurate): Don't ask questions about the work hours or the environment, like cubes versus offices. Yes, God forbid that hiring managers should act like they are recruiting talent, like they have a need to fill and they should try to understand what the candidates would like to know about where they will spend the majority of their conscious hours. The jobs are things the managers have and they will choose who is worthy to receive them.
What is wrong here? We have set up a system that evaluates candidates on criteria unrelated to success on the job, and we encourage candidates to present themselves disingenuously. What makes us think this is a formula for success?
I'd like to change the rules:
Candidates: Dress appropriately for the work environment and people you most likely will interface with. Be clean and neat, but be you.
Hiring Managers: Does the person look and act like someone who would fit in with the writers and SMEs he or she would work with?
Candidates: Ask questions that will help you make your job decision, don't make up stuff to sound good.
Hiring Managers: Answer the candidates' questions and take them at face value. They have skin in the game too and have a right to interview you about how they will be treated by you.
Interviewing and hiring are fraught with subjectivity. Don't make it harder by introducing artificial criteria that at best can only tell you how well someone interviews.
It's bad enough that we practice all this deception when choosing life partners and people to make babies with. Must we muddy up the workplace as well?
Monday, November 13, 2006
I'm working right now on what will be essentially a getting-started workbook. It will probably consist of an interactive document that queries the user for configuration-specific information, such as network topology, operating modes, etc., and provides specific user assistance for the user's configuration requirements. The latter might be delivered in discrete deployment guides (probably delivered as conventional PDFs), while the interactive document would be used primarily to determine which guide to point the user to.
So the core of the workbook is NOT procedural information, that comes later. The core must be conceptual information and guidance information so that the user can make informed decisions about how to configure the product. The flow of the topics will be determined by the deployment process, with the most important information being conceptual (background about the product) and guidance (considerations, criteria, and consequences of decisions that the user must make). After that, the user can be directed to detailed procedural information.
I've had a tendency in the past to view procedural information as the backbone of a user assistance document. The old P-K analysis approach: define what procedures the user must do, then analyze what other kinds of knowledge you must impart for them to understand the procedures. I'm certainly not throwing that baby out with the bathwater, but I'm coming to see less and less importance in defining the sequence of steps and more and more in imparting expertise to support the user's application-level goals.
In short, if it's that hard to figure out how to work the application, shoot the UI developer. The challenge for the UA should be helping the user figure out how to apply the application to the user's goals.
Make the higher order information (what I've been calling conceptual and guidance) the backbone of the user assistance, and let procedural information branch off and out from that core.
Friday, November 10, 2006
This one has me pondering. We have a user interface where the user can enter IP addresses. If the user wishes to enter multiple IP addresses, the instruction is to separate them with a comma. The UI displays the addresses as the user has entered them. Very similar to how you see multiple email addresses in the To field of an email. No problem.
On a new interface, when the user types a comma, the display treats it the way it would a carriage return, putting the new IP address on the next line. Huh! Easier to read and see what addresses have been added; different user experience. It raises two questions I find interesting:
- Should the UI display what the user typed or what the user decided? Using the comma to tell the computer, "This is a new IP address," is easy for the input phase, but should it preclude the computer from acknowledging that input in a way that is easier for the user to process visually?
- To what degree should innovation be constrained by convention (or consistency)?
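Neither question needs an answer to sketch the behavior itself. Here is a hypothetical illustration (not the actual product code) of the second interface's approach: the comma still delimits input, but the display acknowledges it with a line break:

```python
def normalize_ip_display(raw: str) -> str:
    """Render comma-delimited IP input one address per line.

    The comma is the user's signal that a new address starts; the
    display acknowledges that signal with a line break instead of
    echoing the comma back.
    """
    addresses = [part.strip() for part in raw.split(",") if part.strip()]
    return "\n".join(addresses)
```

For example, `normalize_ip_display("10.0.0.1, 10.0.0.2")` yields the two addresses on separate lines, while the stored value can remain whatever the input parser needs.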
No answers today. I'm enjoying the questions too much :-)
Thursday, November 09, 2006
My earliest exposure to flowcharts was as troubleshooting aids. As I watched people use them, they did not seem very effective; users often got lost, and the user experience rarely seemed to end with the trouble getting shot.
This last year, however, I have found myself going to flowcharting as an analysis tool, one to help me understand complex navigations or tasks where logical branching played an important part. For example, in one application, clicking the Done button could take different users to different locations depending on what path they had taken or decisions they had made.
More recently, I have been using flowcharts to understand how a complex task is done (configuring a network security appliance), especially to understand the different contingencies and how the user path is affected.
I use Visio's standard template for flowcharting and sit in design sessions with my laptop projected. The team of SMEs, information architect, technical writer, and I have been mapping the flow and logical branches of a very complicated process in order to create an interactive guide that could query the user about configuration decisions and deliver the appropriate information.
I have also created four new icons in my template, one for each of the main kinds of information: conceptual, procedural, reference, and guidance.
An interesting pattern is emerging. Where there are decision/branching diamonds, there is often a need for conceptual and guidance information. In other words, the user needs some background about the domain and also needs expert insight into the decision to be made. For example, if a branch requires that the user decide between "transparent" or "routing" mode, the user assistance must make sure the user understands these terms (conceptual information) and also provide guidelines for when to choose one over the other, implications for that choice, etc (guidance information).
Procedural information icons tend to show up at action blocks in the flow.
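The pattern lends itself to a tiny lookup table. This is my own toy sketch, not anything exported from Visio; the node kinds and information types are the ones discussed above:

```python
# Toy model of the emerging pattern: decision diamonds attract
# conceptual and guidance information; action blocks attract procedural.
NODE_INFO_TYPES = {
    "decision": ["conceptual", "guidance"],
    "action": ["procedural"],
}

def info_requirements(flow):
    """Given a flow as (node_name, node_kind) pairs, list the
    information types each node probably needs."""
    return {name: NODE_INFO_TYPES.get(kind, []) for name, kind in flow}
```

Walking a flow such as `[("Choose transparent or routing mode", "decision"), ("Enter IP addresses", "action")]` turns the flowchart directly into a contextual information-requirements checklist.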
Nothing shocking here, but it's nice to change lenses every now and again and find that the same features you thought were important still show up in the landscape.
So don't discount the value of flow-charting as a collaborative task-analysis tool, and be aware that it can then be easily turned into a contextual information requirements tool.
Wednesday, November 08, 2006
In one of my earliest blogs I pointed out that the term "user" implied something "used." Part of understanding how to craft effective user assistance requires an understanding of different ways things are used.
I see applications as falling into one of (or drifting among) three levels of toolness:
- Extension tool
- Cognitive tool
- Electronic Performance Support System (hat-tip to Gloria Gery)
Extension tools are a lot like simple mechanical tools: They extend a natural physical capacity. Think of a crescent wrench. It grabs a nut much the way our fingers would, just stronger and tighter. The wrench's handle amplifies the natural torque our arm provides. Similar thinking for a hammer: Its head is like a small hard fist and its handle amplifies the power of our arm.
A simple word processor (or one where just the basic functions of text entry and editing are used) is an extension tool. We can type, erase, and print pretty much the way we would write (or talk) manually--just faster and more legibly.
Cognitive tools help us think. What if instead of just sitting down at the word processor and typing a memo, I started in the outline format and organized my thoughts. Then as I wrote, I used the outline to evaluate the flow of my argument and dragged elements around until I felt the document flowed better? There is more going on here than using the word processor to make the task easier from the mechanical perspective. The tool is supporting higher order processes of rhetoric, composition, and critical thinking.
An Electronic Performance Support System (EPSS) brings data and domain expertise (guidance) to the user. Today's word processor, with spell check, templates, wizards, and collaboration tools, is very close to acting like an EPSS if not actually doing so.
Implications for User Assistance Architecture
As user assistance matures in a product, it moves the product up the tool hierarchy. When user assistance as a separate help file largely goes away and is replaced by more proactive strategies within the UI, it has elevated the product to an EPSS.
THAT is the value sweet spot.
Sunday, November 05, 2006
Philosophically, technical writers fall into two epistemological camps (hey! it's my blog; I can use words like that if I want): Positivism and Constructivism.
Positivists view reality as singular and rigid: An apprehendable reality is assumed to exist, driven by immutable natural laws and mechanisms. Knowledge of the way things are is conventionally summarized in the form of time- and context-free generalizations. (Guba and Lincoln 1994, p. 109)
Positivist technical communicators tend to define a product by a finite set of features and functions. An accurate and complete cataloging and description of the features and functions will render an accurate and complete description of the product.
Positivists view the relationship between the knower and the thing known as dualistic: There is a distinct separation between the knower and the known. They view reality as being objective: Facts are true or false. They view the role of the technical communicator as being an unbiased describer of a product's functionality.
Constructivists view reality as pluralistic: Reality is expressible in a variety of symbol and language systems. They also see it as plastic: Reality is stretched and shaped to fit purposeful acts of intentional human agents. (Schwandt 1994, p. 125)
Constructivist technical communicators define a product by how people interact with it. No description can ever be complete or totally accurate since the permutations of possible user contexts are too complex.
Constructivists view the relationship between the knower and the thing known as transactional: Meanings are created, negotiated, sustained, and modified within a specific context of human action. The means or process by which the inquirer arrives at this kind of interpretation of human action (as well as the ends or aim of the process) is called Verstehen (understanding). (Schwandt 1994, p. 120).
They also see it as subjective: Facts are deemed viable or not viable within a community of practice.
Constructivist technical communicators interpret product functionality in light of both the user contexts and the developers' intentions.
Whereas we take many of our disciplines and values in technical communication from our positivist past, the future of user assistance lies in a constructivist vision.
Guba, E. G., and Y. S. Lincoln. 1994. Competing paradigms in qualitative research. In Handbook of qualitative research, ed. N. K. Denzin and Y. S. Lincoln. Thousand Oaks, CA: Sage Publications.
Schwandt, T. A. 1994. Constructivist, interpretivist approaches to human inquiry. In Handbook of qualitative research, ed. N. K. Denzin and Y. S. Lincoln. Thousand Oaks, CA: Sage Publications.
Friday, November 03, 2006
[Musical segue into this piece: Paul McCartney singing in the background, "Some people say we've had enough of silly taxonomies"]
Well, I look around me and I say it isn't so.
Actually, I just want to tweak a couple that have been around for awhile just to put them into more of an architectural context.
Two taxonomies that dominate technical writing are Information Mapping® and one whose origins escape me, but which I read most recently in the Wiley Encyclopedia of Electrical and Electronics Engineering.
Information Mapping identifies the following seven types of information:
- Procedure—steps to perform a task
- Process description—explanations
- Principles—rules, guidelines, and policies
- Concepts—definitions and examples
- Structure—descriptions of things and their parts
- Facts—physical characteristics
- Classification—types and categories
The Wiley Encyclopedia of Electrical and Electronics Engineering identifies the following four types:
- Conceptual
- Procedural
- Reference
- Instructional
As a user assistance architect, however, I am more interested in a taxonomy that lets me analyze the user's information needs, i.e., go through a workflow or screenflow and ask, "What kind of information would the user need here?" Information Mappers will argue that their taxonomy will work fine--and I won't disagree.
But I like the simpler Wiley model, with one tweak. I would replace Instructional with Guidelines. For the kinds of products I support, that makes more sense for me.
By and large, I use the first three the way the encyclopedia defines them.
Conceptual, in the sense I want to use it, is broader than the information mapping definition and applies to any background information that the user might need to understand a screen or procedure. In essence, conceptual information is about the product or application domain, but it has no action context.
Procedural is what it has always meant—steps in the right order.
Reference is the look-up detail: specifications, glossaries, and command syntax. It is meant to be dived into at some particular snippet, not read like a coherent discourse.
Guidelines is a somewhat different twist on instructional. Guidelines are provided at distinct points in a workflow or screenflow where the user must make a decision, e.g., enter a value or select/deselect a feature. Guidelines coincide somewhat with Information Mapping's Principles. They should be action oriented and help users understand the following:
- What should they consider when making the decision?
- What are typical or recommended starting points or selections?
- What are the impacts of their selection?
- How would they monitor the correctness of their decision?
- How would they adjust or tune their decision?
Let's say that a user is in MS Excel and is using a statistical function to calculate the probability outcome for a t-test of independent means.
Conceptual help would explain what a t-test is and define the required inputs/outputs, i.e., alpha and p.
Procedural help would go through the steps, including navigation to get to the function arguments dialog and how to select the data fields directly from the spreadsheet.
Reference help might give the actual formulas being used in the function.
Guidance help would assist the user in selecting the appropriate value for alpha. And that's the rub! You need a researcher to give you that insight, not the worksheet designer or programmer. But it sure would be helpful for someone to tell you:
Alpha lets you set the level of risk you are willing to take of rejecting a true null hypothesis, that is, of claiming a difference that isn't really there. A typical value of 0.1 is used for many marketing and social science research projects. Where harm could come from accepting a false finding as true, for example in a medical research project or one that would influence a high-dollar investment, more conservative values of 0.05 and even 0.01 are often used.
Setting this value too high could result in your claiming there was a real difference between the two samples when in fact there wasn't.
Setting this value too low could result in your rejecting the claim that there was a real difference between the two samples when in fact there was.
Lower alpha values usually require larger sample sizes to be practical.
That information, combined with the conceptual help that would have elaborated a bit more on the definition of alpha, would help the user make a better-informed decision.
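To ground the example, here is a minimal sketch of the computation behind that Excel function: the pooled two-sample t statistic, standard library only. The function name and the decision to compare against critical values (rather than compute an exact p) are my illustrative choices, not anything from Excel's help:

```python
import math
import statistics

def t_statistic(sample_a, sample_b):
    """Pooled two-sample t statistic for a test of independent means."""
    na, nb = len(sample_a), len(sample_b)
    var_a = statistics.variance(sample_a)
    var_b = statistics.variance(sample_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    std_err = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / std_err
```

The guidance question is what to compare |t| against: for alpha = 0.05 with 18 degrees of freedom, the two-tailed critical value is about 2.101; for alpha = 0.1 it drops to about 1.734. Which alpha to pick is exactly the decision that guidance help, not procedural help, must support.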
As you plan a user assistance design for an application, look for opportunities for the four types of user assistance described above, and be particularly diligent about identifying the need for Guidance help. It is probably our biggest shortcoming in the user assistance world.
Wednesday, November 01, 2006
This week I judged some online competition entries for STC, and I reviewed an encyclopedia article on Electronic Documentation. The encyclopedia article talked about the main navigational schemes: Linear, Hierarchical, Web, and Grid. The entries I looked at for STC had classic HTML Help structures of hierarchical TOC with extensive web linking among the topics.
I think we overlook the basic structure that works best in user assistance: The Hub (or its extended model, the Snowflake). A hub has a central page with links off of that page. The navigation is fairly limited, however, between hub and satellite pages. You go to the satellite page and you return to the hub. Kind of like the Hokey Pokey: "You put your right leg in; you take your right leg out." The Snowflake consists of hubs arranged in larger systems of hubs.
Hub structures let users explore safely: step in, step back. They maintain a mental model that is easy to visualize and keep track of.
I'm not recommending pure hubs. It's great to be able to take shortcuts back to the top of the structure, and any good navigation system will be a hybrid. But I think a dominant model must emerge if the user is going to be able to create a mental map of the land. Overall, I think the hub is the easiest model.
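A pure hub is simple enough to state in a few lines, which is part of its appeal. Here is a toy sketch (my own formulation, not from the encyclopedia article) of the legal moves:

```python
def next_pages(hub, satellites, current):
    """Legal navigation in a pure hub structure.

    From the hub you can step out to any satellite; from a satellite
    the only move is back to the hub ("you put your right leg in;
    you take your right leg out").
    """
    if current == hub:
        return list(satellites)
    return [hub]
```

With `hub="Painting Widgets"` and satellites like "Selecting a color" and "Preparing the widget", the user's mental map never has more than two levels in play at once: step in, step back.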
Practical Implications for UA Design
Keep your information model simple, and resist the urge to link to topics that do not have direct and immediate impact on the user's context. For example, there is no need to link to a topic on Configuring Reports from a topic on Configuring Work Flows, just because they both deal with "configuring."
If the Hokey Pokey teaches us an important lesson in UA architecture, another childhood lesson can also be relevant. An elaborate trail of breadcrumbs never got anyone out of the woods.