Friday, February 08, 2008

A Model for Assessing Value--
I've been thinking and talking a lot lately about our value as technical communicators. I can see that my thinking has been influenced by my background in instructional design, specifically by Kirkpatrick's model of evaluation. In his model, he states that instruction can be assessed at four levels:
Level 1: Reactions
Level 2: Learning
Level 3: Transfer
Level 4: Results

I think we can easily apply this same model to how we evaluate our communication products as well as our contribution to our sponsors.

To summarize the four levels as Kirkpatrick proposed them, I'll put them in the context of a training course on how to correctly operate a drill press. Let's assume the training was triggered by excessive scrap rates coming out of the machine shop.
  1. Reactions. How did the students react to the training itself? This is generally assessed through a course evaluation sheet, e.g., course met my expectations, instructor was knowledgeable, yadda, yadda, yadda.
  2. Learning. Did the students learn anything? This is generally assessed through testing or lab observations. Students could be observed by the instructor, who certifies them using a checklist of targeted drill press competencies.
  3. Transfer. Did the students go back to the job and apply the new knowledge or skills correctly and effectively? Employees coming out of the training could be assessed by the shop supervisor, who observes whether they are applying the right techniques.
  4. Results. Did the training solve the business problem that triggered it? Did the scrap rate go down?

By the way, Phillips adds a fifth level, ROI, which compares the cost of the training to the dollar value of the reduced scrap and weighs that return against other investments that could have been made. I'll blog on that one later.
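As a teaser, here is a minimal sketch (in Python) of how that Level 5 arithmetic might look, using the basic Phillips-style formula of net benefits divided by program costs. The dollar figures are purely hypothetical, invented for illustration, not numbers from any real program:

  # Illustrative Phillips-style ROI calculation (Level 5).
  # All dollar figures below are hypothetical, made up for this example.
  training_cost = 12_000          # design, delivery, and trainee time
  annual_scrap_savings = 30_000   # dollar value of the reduced scrap rate

  net_benefit = annual_scrap_savings - training_cost
  roi_percent = (net_benefit / training_cost) * 100

  print(f"Net benefit: ${net_benefit:,}")   # $18,000
  print(f"ROI: {roi_percent:.0f}%")         # 150%, which could then be weighed
                                            # against other possible investments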

So let's see how Kirkpatrick's model could be applied to technical communication.

Level 1: Reactions--We see this in reader response cards and in usability tests where participants are asked to rate various aspects of a product or document. Lots of the research done on fonts or layout stops at this level of evaluation, e.g., which document looks more professional?


Level 2: Learning--To me this translates to: If the user read the document, did he or she understand it and apply it to the task at hand? For example, did the Quick Start card work in the lab when we specifically asked users to use it?


Level 3: Transfer--Oooooh, the toughie. Did users improve their performance in real life based on the documentation? For example, did real patients comply better with their medication protocol when given redesigned instructions?


Level 4: Results--Did the document achieve the business goal it was meant to serve? For example, did Support Desk calls go down, did medical claims decrease, did user registrations increase on a redesigned web page, did the percentage of completed transactions go up, etc.?

I think we need to emphasize levels 3 and 4 more. I might be wrong, but I haven't come across a research study on fonts that tested whether users completed tasks faster or made fewer errors depending on which font face the Help was written in. (So why do we fight so passionately about it?) So I would like to see research in our field put more emphasis on user performance. (Level 3)

I would also like to see more discussion about how better-informed, better-performing users make positive impacts on an organization's business performance. (Level 4)
