Using functional magnetic resonance imaging (fMRI) technology, members of the Center for Cognitive Brain Imaging have gained deep insight into the way human brains categorize objects. In a breakthrough that demonstrates the interdepartmental cooperation here at Carnegie Mellon, neuroscientists Marcel Just and Vladimir Cherkassky and computer scientists Tom Mitchell and Sandesh Aryal have arrived at results that bode well for human-computer interfaces and neuropsychiatry.
Their research concludes that humans represent all non-human objects in terms of three classes, or dimensions. Just identifies these dimensions as eating, shelter, and use, meaning the way an object is held and manipulated. He explained that when one sees an object, the brain asks, “Can I eat it? How do I hold it? Can it give me shelter?” Indeed, all concrete objects are represented along these three dimensions, much as every location in space is represented by the three spatial dimensions we experience every day.
OK, sounds a little "out there" at first. But think about one of the most ubiquitous of icons and navigational constructs on the web: "Home." Does that say "Gimme Shelter" or what?
Thinking about it a little further, we consume (eat) data, we store (shelter) it, and we move (hold) it around. I know, the jokes come too quickly. Consider, for instance, what alternatives to the traditional Save and Cancel buttons might look like along these lines.
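Taking the consume/store/hold mapping literally, here is a playful, hypothetical sketch. Every name and label below is invented for illustration and does not come from the research; it simply pairs each of the three dimensions with a familiar UI action and a tongue-in-cheek alternative label:

```typescript
// A playful, hypothetical relabeling of common UI actions along the
// three dimensions (eating, shelter, manipulation). All names and
// labels here are invented for illustration, not drawn from the study.
type Dimension = "eating" | "shelter" | "manipulation";

interface ButtonIdea {
  traditional: string; // the label users actually see today
  playful: string;     // a dimension-flavored alternative
}

const buttonIdeas: Record<Dimension, ButtonIdea> = {
  eating: { traditional: "Open", playful: "Consume" },
  shelter: { traditional: "Save", playful: "Shelter" },
  manipulation: { traditional: "Drag", playful: "Grab" },
};

// Look up the playful alternative label for a given dimension.
function playfulLabel(dim: Dimension): string {
  return buttonIdeas[dim].playful;
}
```

So a Save button might read "Shelter," which at least makes the metaphor explicit, whatever it does for usability.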
Still, I think the research findings are provocative and deserve some serious consideration for UI applications.