Once more, with feeling

How many licks does it take to get to the center of a Tootsie Roll Pop(tm)?  One lump, or two?  If I’ve heard it once, I’ve heard it a thousand times.  We as intelligent beings are obsessed with numbers.  We always want to know how much, how many or how few something takes.  I heard once in a course on effective presentation style that a good way to get an audience’s attention is to list a seemingly random set of numbers on a whiteboard or screen, then fill in the labels to reveal the significance of those numbers as the presentation progresses.

A number that comes up often in the field of knowledge management is how many clicks it takes to reach a resolution during a knowledge query.  There is a well-founded concern for efficiency at work here, but what is a reasonable number of clicks to find a resolution?  I’ve read requirements for maximums and averages ranging from five all the way down to one.  I’ve even heard a marketing VP state that the goal for their product was zero clicks, which would apparently involve the system answering the query correctly before it’s even asked.

So while R&D works on precognitive query engines, those of us still on the sunnier side of the Twilight Zone must work with what we have available.  Not that we have to give up the farm, but most things worth doing involve compromise and trade-offs.  Give a little in one area to get a little in another, and you can end up with a much more robust and valuable solution.  In the case of searching and querying knowledge management repositories, the trade-off I propose is to accept that the typical query of knowledge is a two-step process.  (Numbers again.)  It might be difficult to give up on the ideal of jumping straight to the answer, but let’s try starting with just finding the question in the knowledgebase first.  A searcher has the following advantages working in their favor when searching for a question:

  • The searcher knows more about the question than anything else.
  • If anything at all is known about the answer, that knowledge is likely vague and possibly inaccurate.
  • The searcher is more likely to recognize the question when they see it appear in a list of search results.
  • Even if the initial query results in the best answer being at the top of the list, it might as well not exist if the searcher doesn’t recognize it when they see it.
  • Once a matching question has been found, there will likely be a small number of matching answers to consider – the searcher will have to evaluate only those answers matched to the selected question instead of every answer that matches their query in some obscure way.

Just to be clear, I’m using the term “question” here in a somewhat generic way.  It might be a literal question for which a known answer exists.  It might be a goal the searcher needs to accomplish, for which there exist one or more processes that reach that goal.  It might be a symptom or an error message that the searcher encounters while using some product or service.

Imagine a scenario where the user inputs their question, goal or symptom as a query to the knowledgebase – the first click.  The available targets and potential search results for that query are in question, goal or symptom form so, with even a nominal degree of natural language processing, there is likely to be a good match.  That match, highly ranked, is presented as a search result and, because its author has been careful to use the voice of the customer, the searcher recognizes it as being relevant to their quest and clicks it – that’s two.  Now, in the special case where there is only one possible answer or resolution for the selected question, the user is done – two clicks total.  There might be more than one possible resolution to consider but it will be a relatively small set to evaluate and the searcher will have more confidence that the correct resolution is findable because of their initial success in finding a match to their question.  There might even be additional assistance in the form of tagging or clarifying questions to help the searcher select the best answer – one or two more clicks.
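The two-step flow above can be sketched in code.  This is a minimal, hypothetical in-memory knowledgebase (the question text, answers, and matching logic are all made up for illustration), where each question record links directly to its small set of candidate resolutions:

```python
# Sketch of question-first search (hypothetical data model).
# Step 1: match the searcher's query against known questions.
# Step 2: present only the answers linked to the selected question.

knowledgebase = {
    "How do I reset my password?": [
        "Use the 'Forgot password' link on the sign-in page.",
        "Contact your administrator if self-service reset is disabled.",
    ],
    "Why is my export failing with error E42?": [
        "Error E42 usually means the export file exceeds the size limit.",
    ],
}

def match_questions(query: str) -> list[str]:
    """Step 1 (click one): rank known questions by crude word overlap.
    A real system would use proper natural language processing."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(q.lower().split())), q)
        for q in knowledgebase
    ]
    return [q for score, q in sorted(scored, reverse=True) if score > 0]

def answers_for(question: str) -> list[str]:
    """Step 2 (click two): the small set of resolutions for one question."""
    return knowledgebase.get(question, [])

matches = match_questions("password reset")
# The searcher recognizes their question in the result list...
best = matches[0]
# ...and evaluates only its linked answers, not every loosely
# matching answer in the repository.
resolutions = answers_for(best)
```

The key design point is the data model, not the matcher: answers hang off questions, so once the searcher confirms their question, the answer set to evaluate is already small.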

So on that trade-off we talked about earlier, we’re up to two to four clicks to get to the answer.  But on the gain side of the equation, there should be much more consistency in those numbers, as we avoid the fairly common scenario where the searcher clicks through several answers trying to find one that addresses their issue or leads toward their goal.  And the worst case scenario is that the searcher’s question is not in the knowledgebase at all.  This isn’t as bad as it sounds, though, because the best way to fail in a search is to fail quickly and cleanly – confident that there’s nothing hiding in the content.  And we will have captured in the logs the question that’s missing, so it will be relatively easy to evaluate what, if anything, should be added to the knowledgebase.  And there’s still plenty of room for automation to (for example) filter any available solutions according to something known about the searcher or their context – subtract one click.
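Capturing the missing questions can be as simple as counting unmatched queries so curators can review the biggest gaps first.  A minimal sketch, with all names and data hypothetical:

```python
# Sketch: record queries that matched no known question, so curators
# can decide what, if anything, to add to the knowledgebase.
from collections import Counter

missed_queries: Counter[str] = Counter()

def search(query: str, known_questions: list[str]) -> list[str]:
    """Return questions sharing any word with the query; log misses."""
    query_words = set(query.lower().split())
    hits = [q for q in known_questions
            if query_words & set(q.lower().split())]
    if not hits:
        missed_queries[query.lower()] += 1  # capture the gap for review
    return hits

questions = ["How do I reset my password?"]
search("reset password", questions)        # a clean hit
search("cancel subscription", questions)   # a miss, logged
search("cancel subscription", questions)   # a repeat miss

# Curators review the most frequent gaps first.
top_gaps = missed_queries.most_common(1)
```

In practice this would live in the query log pipeline rather than the search function itself, but the idea is the same: a fast, clean failure still produces a useful signal.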

I admit it’s a bit of a departure from the typical approach to searching knowledge.  Leave a comment and let me know if you’ve tried it – or if you’re willing to give it a try.