My favorite Scrum Master used the word "uniqify" in a sprint planning call today, as in "Could you uniqify that expression?" I poked him through the instant message tool we have because he had recently taken me to task for using the word "epistemology" in a design session.
It turns out that this is a fairly common naming pattern in Ruby, as in stringify, uniqify, htmlify, etc.
The rule seems to be that the verb [term]ify means to take the ensuing object of the verb and give it the qualities of the [term].
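In Ruby terms, the convention might be sketched like this. (A toy illustration only: Ruby's standard library actually spells the first one `uniq`, and `htmlify` is a hypothetical method name I'm making up for the example.)

```ruby
# A toy illustration of the [term]ify convention:
# each method takes its receiver and gives it the qualities of the term.

class Array
  # "uniqify" the array: give it the quality of uniqueness.
  # (Ruby's standard library spells this `uniq`.)
  def uniqify
    uniq
  end
end

class String
  # "htmlify" the string: give it the qualities of HTML,
  # here by escaping the characters HTML treats specially.
  # Escape "&" first so we don't double-escape the entities we add.
  def htmlify
    gsub("&", "&amp;").gsub("<", "&lt;").gsub(">", "&gt;")
  end
end

[3, 1, 3, 2].uniqify        # => [3, 1, 2]
"a < b & b > c".htmlify     # => "a &lt; b &amp; b &gt; c"
```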
Of course, this has instances in our day-to-day language; that's where all these types of linguistic manipulations have their origin. To "liquify" is to make the object like liquid. Its opposite is "solidify." It's very similar to putting "ize" at the end of a term to make a verb--but I sense a subtle difference I haven't yet quite distilled.
As an amateur linguist, I LOVE when a language can do this. Arabic does it on steroids, having a very complex set of patterns that decomposes just about every word down to a tri-literal root. Verb forms take their nuances from the root, as do noun patterns for the doer, the done-to, and the where-done.
English doesn't do it nearly enough. Sure, add an "er" to the end of a verb to get the doer, as in writer, rider, sleeper. Sometimes an "ee" to get the done-to, as in payee. Do we even have a pattern for deriving the noun for where-done from the verb? For example, the Arabic word for school, madrasa, comes from a pattern that makes it "the place where studying is made to happen."
HTMLify is my favorite so far. "Can you express this in HTML?" becomes "HTMLify it."
Tonight I'm going to mealify some leftovers. Anyone who thinks that means "reheat" has never seen what I can do for leftovers.
Tuesday, August 31, 2010
Tuesday, August 24, 2010
Knowledge Life Cycle and Social Media
I usually try to keep work and blog separate, but I gotta say I really like working for IBM and especially the IBM Security Services group. Why? I get paid to have some really interesting conversations.
Last week I was in a meeting where we were discussing the right way to use Lotus Connections in our work team--versus a department Wiki we already have.
Lotus Connections is a social networking app that has forums, file uploads, activities, a wiki, yadda yadda. BTW, it's pretty good. But the discussion was "When do we use that tool versus our more formal department Wiki--where we keep departmental procedures and such?"
The conclusion was actually quite elegant in its simplicity: We will use Lotus Connections to hold the conversation; we will use the Wiki to curate the answer. I'm sure that's not original--but we got there on our own, nonetheless.
I think this is a pattern that has lots of applications where social media has started to get traction. Online user doc vs. user forums, for example. The STC SIGs and the STC Body of Knowledge are another example that I think about a lot and that seems to apply here.
I think this is a field that needs a lot of discussion and open dialogue, specifically, how to manage the life cycle of knowledge from ideation and churn, through vetted "this is how it is," and eventually to "this is last week's dead fish."
Of course, my traditional mistake is to impose formal process and control on what should be left open and organic, but still, I feel that some sort of process or guidelines would be useful. Here's my laundry list of questions. More questions, answers, general trouble-making--all are welcome:
- What are the stages of knowledge or what are the categories of maturity/credibility, whatever?
- What other dimensions should go into "whatever" in the question above?
- Where does knowledge live during these stages?
- Who moves its classification during its lifecycle?
- At what point is someone liable for the consequences if knowledge is acted on and proves wrong?
- When can knowledge be branded as intellectual property in an evolutionary model like this, and who gets to own it?
Friday, August 20, 2010
Mixed message
Wednesday, August 18, 2010
The Software Development Death Cycle
Does this look like your development cycle?
All is joy and celebration on the product management side when the project begins. Then come the iterations as the marketing requirements document is passed back and forth with engineering. (BTW, is it just me or does it seem we lose about 1/3 of the project's useful development time in this phase?) Then engineering goes to work and produces something they send to QA. The product manager sees it and goes postal. Well, been there!
I worked at a place where there was a product manager--let's call him Dave--who kept stirring the pot in QA when he got a glimpse of the UI coming out of development. The design and development team wanted to take out a contract on him. "How do we keep Dave out of the process that late in the game? He's creating too much churn."
I saw the problem differently--why were we disappointing Dave, the guy who brought us the work to begin with? I could understand if we were disappointing the customer; after all, they hadn't written the requirements, Dave had. I could understand if we were disappointing our partners; after all, they hadn't written the requirements, Dave had. But how was it that we were disappointing Dave?
I concluded it was the requirements process itself that was failing. It relied on words, and words were screwing the deal. As a technical communicator, that was a harsh realization, but I have come to learn the following:
- Gopen and Swan are right when they say "We cannot succeed in making even a single sentence mean one and only one thing; we can only increase the odds that a large majority of readers will tend to interpret our discourse according to our intentions."
- Words, then, create the illusion of agreement.
- And my own realization is that any time words are a problem, more words are never the solution.
OK, here's how it works.
- Start with a list of the requirements, doesn't have to be pretty or even very good. It's a conversation starter.
- Sit in a room with a product manager, UX designer, and a developer and start creating a scenario that illustrates a requirement. Make it an explicit example. Who is the user, what's the problem he's trying to solve, imagine how our product would fit in. Tell the story and draw pictures (wireframes).
- Do that for all the requirements or at least for the most important.
- Have the developers size the solutions.
- Let Product Management select from the solutions as much as the development bandwidth will allow.
- Go forth and code.
Here's why this works:
- The value that product managers bring is that they understand the problem space. Detailed requirements documents tend to end up being solution oriented, which takes them out of their sweet spot.
- Developers design better solutions when you clue them in up front what the problem space is. Treat them like architects and not like carpenters.
- UX folks can model a product's behavior and validate it with stakeholders and users with little to no code (that equates to fast and cheap).
No big magic, but the core ingredients are not substitutable:
- Early in the process
- Collaborative among product management, development, and UX
Tuesday, August 17, 2010
This can only mean one thing...
A while back I blogged that a website I use had been redesigned--seemingly to reach an audience younger than me. Here is what that design looked like:
Well, I went back today and it looks like this:
What made them change? Two theories. One is that they read my blog. Seeing as how I did not make the list of most influential bloggers in Tech Comm I am summarily dismissing that theory.
That leaves the most common reason UIs undergo abrupt and dramatic changes: The president's spouse or parents tried to use the site and complained.
Happens every time.
Friday, August 13, 2010
Good, Better, Best
Someone corrected my use of personas the other day and pointed out that the plural is personae.
File under reason 42.b of "Why people hate technical communicators."
It reminds me of "data."
A good sentence says "The data is clear on this."
The better sentence says "The data are clear on this" because we know that the singular is datum and data is the plural.
The best sentence says "The data is clear on this" because that's how real people talk.
Miller Williams, a former poet laureate, said he wanted to write poetry that cats and dogs could understand.
I don't know about you, but I never met a dog that could relate to personae--not even the ones named Rex.
Monday, August 09, 2010
In defense of my [pejorative] self
Some descriptions seem to carry negative baggage and get thrown at me from time to time. The only problem is that not only do I find these terms NOT pejorative, in fact, I have worked hard to earn them.
One is "writer." I remember sitting in a meeting and having someone voice her concern that several people in the room had referred to themselves as "technical writers." (I was one.) I know the history of this. The Bureau of Labor Statistics (BLS) has a rather outdated definition for technical writer. This person was advocating the title "technical communicator" to differentiate what we do from this outdated definition.
I grew up wanting to be a writer. When asked what I would most like to be, I never answered "a communicator." I think that the role of technical writer is a legitimate subset of the profession known as technical communication. Technical writers focus on communicating with words. The problem with the BLS definition was not the term "writer"; it just seriously understated what goes into technical writing.
I don't want to undermine a campaign to get technical writers more respect and more pay; I just don't want to have to apologize for what I do, and in fact am pleased to do, i.e., being a technical writer. Sometimes I'm something else; in my current job, for example, I am a user experience architect, another role in the field of technical communication. But when I take on the task of writing user assistance, I'm OK telling folks I'm a technical writer.
Another pejorative is "academic." In its negative sense, it means "irrelevant to real-world applicability." In its positive sense, it can mean being well versed in the research done in a field and capable of generating valid, reliable knowledge by conducting original research.
I've worked real hard to try to qualify for that latter meaning, so I chafe a little when my desire to apply rigor is branded "academic" and meant to imply "irrelevant."
BTW, I'm sometimes branded pedantic--and that one I deserve and should try to be less of.
Friday, August 06, 2010
Phrases to Avoid
A tweet sent me to a web site that lists phrases to avoid in technical writing. Personally, I find their list to be pretty mild. Things like:
a majority of -- most
a sufficient amount of -- enough
according to our data -- we find
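In Ruby, that kind of phrase list is really just a lookup table and a scan. (A toy sketch of my own, not the site's tool; the phrase-to-replacement pairs are the ones listed above.)

```ruby
# Toy checker: flag wordy phrases and suggest the plain alternative.
WORDY = {
  "a majority of"          => "most",
  "a sufficient amount of" => "enough",
  "according to our data"  => "we find"
}.freeze

# Return a list of "phrase -> plain" suggestions found in the text.
def suggest(text)
  WORDY.filter_map do |phrase, plain|
    "#{phrase} -> #{plain}" if text.downcase.include?(phrase)
  end
end

suggest("A majority of users need a sufficient amount of help.")
# => ["a majority of -> most", "a sufficient amount of -> enough"]
```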
I'd like to add MY list of phrases that should not show up in your documentation:
- Hey, dick head, ...
- After you put out the fire...
- Your browser is as lame as you are.
- If you figure out what this feature does, let our tech writers know.
- Some side effects might include...
- In our defense...
Thursday, August 05, 2010
The bandwidth discussion continued
Yesterday's blog about bandwidth and information attracted some very insightful comments and got me to thinking more about the issue of "Do videos take advantage of their bandwidth?" In other words, are they proportionally better given how much more information they convey?
The ensuing discussion brought a couple of things to mind. I remember from one of my technical communication courses that line drawings are often preferred in a manual (over photographs) because photos have too much detail. Drawings help focus the reader on the detail you want to draw attention to. I find the same principle with low fidelity wire frames over screen prototypes. I think the same can be said sometimes of video--is the fidelity a value-add or a distraction?
Another interesting paradox I noticed is that we tend to assume that videos are good for the neophyte. And Ken Hilburn makes an interesting point about how we instinctively filter out the unnecessary detail of a video. But that is more true for the experienced user than the neophyte. For example, I once tried to teach my mother-in-law how to use Yahoo email. She had a hard time getting through the browser because she thought everything was important. I was constantly saying things like "that's a banner ad, ignore it," or "that's the disclaimer text." We forget how media literate experienced users are, and how adept their filters are.
Where all of this has led me is not to dismiss video, but to approach it with a designer's eye much the way Tufte would have us look at a chart, namely, is each byte of information worth the bandwidth? Another way of phrasing the question is "Am I taking advantage of the bandwidth?"
Let's revisit the example of the dobro video. If the purpose were to instruct, then maybe a good design would be to have a synchronized split screen of closeups on the picking hand and the slide hand. That way the bandwidth would be more fully invested in the information of value.
(BTW, please don't take this as being critical of those generous musicians who share on YouTube--I am so grateful for what they do for absolutely free.)
So if you are thinking of doing video, ask what is the information of interest and plan the video to put its bandwidth on that information. Eliminate spurious mouse movements, focus on fields of interest, shade out non-relevant areas of the UI, etc. When we do traditional video, we point and focus the camera. Same mindset for screen captures--don't just sit the "camera" on a tripod and shoot the whole landscape.
Wow, makes me want to wear my director's beret. Hoping for cooler weather soon.
Wednesday, August 04, 2010
Bandwidth and Information
I decided recently that I had "stopped growing musically." You have to be a Catholic flower child from the 60s to inflict that kind of guilt and deprecation on yourself over what is meant to be a hobby--something you do for fun.
(not from the 60s, but you get the picture)
So I hit upon a plan. I found this great dobro player, Martin Gross, who has a terrific YouTube channel. So my plan is to learn one song of his every month. OK, all happy again now that I am "working" at having fun.
As I've been working on "Blues Stay Away from Me" I've made a couple of observations.
The old way of learning a song was to play it on the record player and keep hacking away at it until you figured out how the person was playing it. Essentially, not much has changed except that with YouTube you get the video channel as well as the audio. And honestly, that makes it easier, but not in proportion to the orders of magnitude increase in information that the video makes available. Anyone who's worked on televisions is well aware of the difference in bandwidth between the video signal and the audio. There's just a lot more information in the video signal, and Mother Nature is an exacting accountant (the cost for transmitting information is bandwidth--the more information you are hauling, the wider the highway has to be).
Seriously, if you had to choose between learning a song by just listening to it without seeing the video, or watching it without hearing the audio, you'd be much better off just listening.
Kind of ironic seeing that the video has a ton more information in it. So what gives?
Well, most of the visual information is irrelevant. The color of the guitar, the spacing between the strings, the freckles on Martin's hand, etc. The most important information is what fret he is putting the slide on and what strings he is plucking. Since this is not a split-screen video, those two pieces of information are at opposite ends of the display and it takes a bit of replay sometimes to figure out what he's doing.
It might be true that a picture is worth a thousand words, but apparently a K of sound is worth a Meg of video.
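Some back-of-the-envelope arithmetic bears that quip out. (These are rough uncompressed figures I'm assuming for illustration, not measurements of any particular video.)

```ruby
# Rough uncompressed data rates, just to show the orders of magnitude.

# Audio: CD-quality stereo.
audio_bps = 44_100 * 16 * 2        # samples/s * bits/sample * channels

# Video: a modest 640x480 frame, 24-bit color, 30 frames/s.
video_bps = 640 * 480 * 24 * 30    # pixels * bits/pixel * frames/s

audio_bps               # => 1_411_200    (~1.4 Mbit/s)
video_bps               # => 221_184_000  (~221 Mbit/s)
video_bps / audio_bps   # => 156 -- video hauls over 150x the raw bits
```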
About this time you're checking the header of this blog, thinking it was supposed to be about user experience and user assistance stuff. Well, it made me think about screen cam versus written procedures. Does the same thing apply here?
I think it does. I've felt for a long time that if the only thing we have to say is click this and type that, then a video is not the way to go (and a LOT of software videos are of that variety). A lot of bandwidth for just a little information. I wonder if Tufte's concepts of chart junk and the data-to-ink ratio can be applied to a useful information-to-bandwidth analysis. Things like tone and physical manipulation in motion seem to justify the kind of bandwidth that video carries. I don't think of this as a transmission efficiency issue, any more than Tufte was trying to save ink costs. The human bandwidth and ability to focus is more at issue here.
Plus, it's easier to scan a written procedure to get to that snippet of information I need than it is with a video.
So the point is twofold:
- Written words are still an incredibly efficient channel for conveying information. Quit beating yourself (or others) up if you consider yourself a writer and that to be your primary channel. "I am technical writer, hear me roar."
- If you can afford to throw a video or two into the user assistance, do something worthy with that bandwidth.