Monday, March 16, 2009


I'm working on a number of fronts around the issue of Value, as in the value of my profession to my company, the value of my department to my division, my value to my department, the value of STC to my profession, etc. A couple of aha! moments for me:
  • Good organizations are not cutting stupid programs or laying off poor performers in the face of this economy! Good organizations have already cut their stupid programs and gotten rid of their poor performers. All that's left to cut are good programs and good people, so the rebuttal "We can't cut this program or this position because [however 'they're good' translates]" doesn't mean anything.
  • We need to quit talking about mission and vision and talk only about deliverables and value here for a while. What do I produce and how does it add value; what do we produce and how does it add value?
  • We need to articulate how we assess the value each deliverable adds.
  • We then make that assessment the litmus test for every initiative.


Anonymous said...

I'd love to hear more about how you assess the value! In what appeared to be lean times ('05-ish), it was hard enough to dedicate resources to value-based metrics. In the current economic setting, it must be even more difficult.


Julia said...

Great post, Michael. Your value to yourself and your employer is always something we should be thinking about, regardless of the state of the economy. (IMO)

Mike Hughes said...

Assessing Value
I think it helps to talk about what you would assess if you could measure it, before worrying about how to get at it. And each of these value statements should start with "User Assistance adds to the bottom line by ..." For example, you might end that with "...increasing the re-enrollment rate in our offering, because the UA helps the user get and see more benefit from our features."
Granted, it would be hard to isolate re-enrollment effects (or the lack thereof) to user assistance: if re-enrollment goes up, everyone from Sales to UI design will claim credit; if it goes down, we'll blame the economy. You get it. But now you can start to look at changes and proposed changes against this rubric (rather than a metric). For example, you could show how the user assistance contains more domain expertise and less click-this, drag-that. You could also include qualitative usability test data to support the claim that users get a better understanding of how a feature benefits their own operation.
We need a much better answer than this, I know, but I think this gets us started.