An aspect of tool evaluation that sometimes gets overlooked is the degree to which a tool helps the user think better. The academic phrase for tools that do this is cognitive tools. Cognitive tools are closely related to performance support tools, but I draw the following distinctions:
- A performance support tool manages workflow and pushes data to the user based on expertise about the job domain programmed into the tool. A troubleshooting script is a performance support tool: I can use it to solve my problem without getting any smarter about troubleshooting.
- A cognitive tool helps me think about the problem space or job domain.
I like tools that have a bit of both. For example, a word processor is typically a performance support tool: it lets me enter and edit text efficiently. But I often switch into Outline mode partway into writing a document and look at just my headings. All of a sudden I can see structural flaws or problems with flow that I had not noticed while immersed in the content. I can then shift topics around until I see a structure that feels right. It is the tool's ability to tap into my intuitive processes that gives it value beyond its ability to enter, copy, paste, and spell-check.
Now that I am sensitive to this intuition support aspect, I am seeing all kinds of examples. Watch the video embedded in a blog about the future of the Web to see how faceted metadata searches can tap into an intuitive need to understand more, even when you can't articulate the question. Warning: the initial example assumes that the searcher has a well-articulated question, but watch the later examples, where the tool helps a vision of possible relationships emerge.
Pivot tables in Excel (or DataPilot tables in Symphony) have the same ability to let me kind of "poke around" in the data until I stumble on interesting relationships I had not thought to inquire about.
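What makes a pivot table exploratory is how cheaply the same rows can be regrouped along different axes. As a rough sketch of that idea (the sales records here are invented for illustration), a minimal pivot in Python:

```python
from collections import defaultdict


def pivot(rows, index, column, value):
    """Group rows by (index, column) keys and sum the value field."""
    table = defaultdict(float)
    for r in rows:
        table[(r[index], r[column])] += r[value]
    return dict(table)


# Hypothetical records -- the point is to poke at them along different axes.
sales = [
    {"region": "East", "quarter": "Q1", "amount": 100},
    {"region": "East", "quarter": "Q2", "amount": 150},
    {"region": "West", "quarter": "Q1", "amount": 80},
]

# One question: totals by region, broken out by quarter.
by_region = pivot(sales, "region", "quarter", "amount")

# Swap the axes and the same rows answer a different question,
# with no restructuring of the underlying data.
by_quarter = pivot(sales, "quarter", "region", "amount")
```

The cheap axis swap is the whole trick: each regrouping is a reversible "what if" that costs nothing if it turns up no interesting relationship.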
Ours is a very rational culture that holds a strong belief that good documents start with a clear outline, good training starts with crisp objectives, and good design starts with exhaustive requirements.
I think real projects start with some unstructured poking around that eventually converges on a clear vision of what needs to be done. Furthermore, a goodly portion of that poking around has to be done in production mode, with production tools.
So I would add a criterion for evaluating tools: do they support cognitive activities such as analysis and relationship modeling? Specifically, can they support "what if" experiments that are easily reversed if they go nowhere useful, and easily folded into the production model if they prove useful?