03:24:12 am on January 15, 2010
Let’s Rethink SEARCH, Shall We?
The one constant in my support KM career has been working with search tools – I was involved in creating CD-ROMs for support right from the commercial inception of that technology in the late '80s, then in knowledge base search tools of all sorts, then on the web. Search has long been taken for granted as a core support capability, and it's now ubiquitous on the web. So I'm as prone as anyone else to PRESUMING that search is a core way, if not THE core way, we should be accessing knowledge. There's a kind of magic to it: I type in a few words and somehow the right info steps forward from a sea of stuff. Cool!
And yet, on the whole this experience has been far from satisfying in the support & service arena. Search success rates hover on average in the 30–40% range; organizations jump through all manner of hoops to figure out how to get the right content to show up; and the tools are often complex, or else offer only very simple capabilities. And when you DO get this all working, there seem to be interminable IT, website design, and content issues that degrade or alter search and force maintenance of some kind or another just to preserve a meaningful search experience. All in all, it can be a real pain to field and run a search application! (I've blogged earlier about issues with search and support – see “Support Search – Why It Can’t ‘Be Just Like Google’”.)
That would all be worth it if we felt that search provided the optimal resolution experience for customers. For tightly scoped content sets, with an educated audience that knows which terms and objects are relevant to specific issues, and a well-maintained toolset that fits the business tasks, search probably is optimal. Back when only techies used search (yes, there was such a time), that was the rule of thumb. But as the types of content, users and scenarios search is applied to have grown exponentially, this mode of interaction has proven weak, and ultimately wrong, for many situations. And yet many other interaction methods (scripts, browsing, etc.) can be complex and hard to navigate as well, and don't apply to all situations. Finally, database-style interactions work well for very tightly scoped requests ('what's my balance?'), but have almost no flexibility. What's the next option?
Ideally we can construct user experiences that have the flexibility and dynamic access of search, but with a more defined context than just text, which return well-structured, targeted information based precisely on the context of the moment. In other words, we need a hybrid of capabilities working together to quickly frame the context of a request and respond in a manner that’s as easy as search (but more powerful). How can that happen?
The answer to that relies upon how well we can define the questions and interactions we want to support. There is a class of requests (the majority of support questions in many businesses) that can be handled most efficiently through a combination of simple user input, structured data and clear support context. I believe a middle ground is emerging among search, portal and database interaction that leverages the best of all three capabilities:
– the robust, clear context one can establish through portal personalization or CRM (identity, account info, history, etc.)
– the dynamic, highly scalable and flexible data access capabilities of search tools
– the power of well-formatted, specific database information and queries
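To make the three-way combination above concrete, here's a minimal sketch of what a hybrid resolution step might look like. Everything in it is a hypothetical illustration, not a real product or API: a CRM-style context record, a toy keyword "index" standing in for search, and a structured answer table standing in for the database leg.

```python
# Hypothetical CRM/portal context -- the "identity, account info, history" leg.
CRM_CONTEXT = {
    "customer_id": "C-1001",
    "product": "RouterX",
    "firmware": "2.1",
}

# Structured answers keyed by (product, topic) -- the "database" leg.
ANSWER_TABLE = {
    ("RouterX", "wifi_drop"): "Known issue in firmware 2.1; upgrade to 2.2.",
    ("RouterX", "password_reset"): "Hold reset button 10 seconds, then reconfigure.",
}

# Naive keyword-to-topic index -- a stand-in for the "search" leg.
KEYWORD_INDEX = {
    "wifi": "wifi_drop",
    "dropping": "wifi_drop",
    "password": "password_reset",
    "reset": "password_reset",
}

def resolve(query: str, context: dict) -> str:
    """Frame the request with CRM context, use 'search' to pick a topic,
    then return the structured answer for that (product, topic) pair."""
    topic = next(
        (KEYWORD_INDEX[w] for w in query.lower().split() if w in KEYWORD_INDEX),
        None,
    )
    if topic is None:
        return "No targeted answer found; fall back to full-text search."
    return ANSWER_TABLE.get(
        (context["product"], topic),
        "No answer for this product; escalate.",
    )

print(resolve("my wifi keeps dropping", CRM_CONTEXT))
# -> "Known issue in firmware 2.1; upgrade to 2.2."
```

The point of the sketch isn't the toy index; it's the flow: context narrows the scope before search ever runs, and search selects a structured answer rather than a pile of documents.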
So to paraphrase Monty Python: “search ain’t dead – it’s restin’!” We’ve been using the search hammer so long that every data access problem starts to look like a nail. To the degree we can take a richer look at how our support issues are constructed, and how people experience them, we can build richer tools and responses. All the pieces are available to us; we just have to start thinking creatively about how to use them. But the RESOLUTION WORKFLOW is the place we must start – always has been!
Ask yourself 3 basic questions, and start to visualize your capabilities from there:
1. What is the range of actual ANSWERS to a class of questions? Not the content or the object, but the information itself?
2. What are the key parameters, qualities, attributes that establish the CONTEXT of those questions?
3. How much does a system need to know to deliver the answer – what context can be used to trigger the best answer?
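One way to make these three questions tangible is to treat them as data. The sketch below is purely illustrative (the answer catalog and attribute names are invented for this example): question 1 becomes an enumerated answer set, question 2 becomes the attributes those answers require, and question 3 becomes a check for whether the known context is sufficient to trigger the best answer.

```python
# Q1: the range of actual ANSWERS for one class of questions,
# each tagged with the context it requires (hypothetical data).
ANSWERS = [
    {"answer": "Upgrade firmware to 2.2.",
     "requires": {"product": "RouterX", "firmware": "2.1"}},
    {"answer": "Re-run the setup wizard.",
     "requires": {"product": "RouterX", "firmware": "2.2"}},
]

def context_attributes(answers):
    """Q2: which attributes establish the CONTEXT across this answer set?"""
    attrs = set()
    for a in answers:
        attrs.update(a["requires"])
    return attrs

def best_answer(known_context, answers):
    """Q3: deliver an answer only when the known context satisfies
    every attribute that answer requires; otherwise we don't know enough."""
    for a in answers:
        if all(known_context.get(k) == v for k, v in a["requires"].items()):
            return a["answer"]
    return None  # insufficient context -- ask the user, or fall back to search

print(context_attributes(ANSWERS))
print(best_answer({"product": "RouterX", "firmware": "2.1"}, ANSWERS))
```

When `best_answer` comes back empty, the gap between the attributes you need and the ones you have tells you exactly what to ask next – which is the workflow thinking, not the tool thinking.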
This may seem a bit abstract, but we need to look past the tools and get back to the use cases we're trying to address. Once we do that, we will find we can recast the tools we already use to greater effect, combining them into a richer, more integrated experience. The result: better ANSWERS to more QUESTIONS. And isn't that what this is all about?
“The Knowledge Advocate”