03:42:34 am on April 16, 2008
The first question asked when people look to initiate, update, or expand their knowledge management programs is: “What’s the ROI on this effort?” To date, the support & service industry has done a relatively weak job of answering this question, given the huge potential value that IS out there. Those of us in the knowledge management world know from experience that well-executed KM drives improved productivity in the call center, increases customer satisfaction and self-service usage, and can enable organizations to expand and deepen their offerings in previously unimaginable ways. So why is KM so hard to measure?
My experience has been that people fall prey to several different traps in trying to measure KM value:
1. We measure what we can, not what we should – transactions, hits, and general satisfaction surveys do not provide direct input on what’s useful (or NOT) about KM-related activity.
2. We don’t ‘connect all the dots’: we fail to make the hard, sometimes complex, but critical correlations between specific KM-related activities (accessing information and applying it to provide service) and the top-line measures (handle time, self-service success, customer satisfaction) that matter to the business.
3. We don’t really measure at all! Even with the elaborate ROI and business-case studies done routinely to justify technology purchases and programs, once the program is brought online, few if any resources, processes, or focal measures are put in place to create and evolve visibility into what’s happening as a result of improved knowledge creation, structuring, and delivery.
So in large measure (pun intended) we’re missing the point – measures are a program in themselves, a form of ‘knowledge management about knowledge management’!
Metrics need the same design, care, and feeding as any other critical initiative. So how do we get there? Let’s start with WHAT we want measures to do. Measures discussions often bog down quickly into debates over which measure proves what, frequently without much actual experience to back up any one assertion. I propose that, instead of focusing on specific measures just yet, we discuss what criteria measures must meet to bring visibility and clarity to KM activity. I’ll propose three myself and see if others have found effective ways to focus on what’s important. Let’s try to use examples, too, to keep the conversation grounded and clear.
Criteria for Useful Support & Service KM Measures
1. Provides some form of relationship between one or more knowledge creation/access activities and specific service transactions performed. Example: how much a specific piece of content is used in relation to successful (or unsuccessful) service-delivery scenarios.
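To make criterion 1 concrete, here’s a minimal sketch of the kind of correlation report it describes. Everything in it is hypothetical – the case log, the article IDs, and the field names are illustrative, not from any particular KM tool – but the idea is simply to tally, per piece of content, how often it appears in cases that were (or were not) resolved successfully.

```python
from collections import defaultdict

# Hypothetical case log: each service transaction records which knowledge
# articles were consulted and whether the case was resolved successfully.
cases = [
    {"articles": ["KB-101", "KB-205"], "resolved": True},
    {"articles": ["KB-101"], "resolved": True},
    {"articles": ["KB-205"], "resolved": False},
    {"articles": ["KB-333"], "resolved": False},
]

def content_success_rates(cases):
    """For each article, the share of cases it appeared in that were resolved."""
    used = defaultdict(int)
    resolved = defaultdict(int)
    for case in cases:
        for article in case["articles"]:
            used[article] += 1
            if case["resolved"]:
                resolved[article] += 1
    return {a: resolved[a] / used[a] for a in used}

rates = content_success_rates(cases)
```

An article with a consistently low rate here is a candidate for rework or retirement; a high-rate, high-use article is demonstrably carrying value into service delivery.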
2. Gives good general trend information across an information/topic set, to help provide visibility to which areas of the KM environment have greater or lesser success. Example: knowledge gap reports that show the high and low success rates of knowledge access, broken down by product and/or topic.
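A knowledge gap report of the kind criterion 2 describes might boil down to something like the sketch below. The access log, topic names, and the 60% threshold are all assumptions for illustration; the point is grouping knowledge-access success by topic and flagging the low performers.

```python
# Hypothetical knowledge-access log: each lookup is tagged with a product
# or topic, plus whether the agent (or customer) found what they needed.
accesses = [
    ("printers", True), ("printers", True), ("printers", False),
    ("networking", False), ("networking", False), ("networking", True),
    ("billing", True), ("billing", True),
]

def gap_report(accesses, threshold=0.6):
    """Success rate per topic; topics below the threshold are knowledge gaps."""
    totals, hits = {}, {}
    for topic, found in accesses:
        totals[topic] = totals.get(topic, 0) + 1
        hits[topic] = hits.get(topic, 0) + int(found)
    rates = {t: hits[t] / totals[t] for t in totals}
    gaps = sorted(t for t, r in rates.items() if r < threshold)
    return rates, gaps

rates, gaps = gap_report(accesses)
```

The gap list tells you where to invest content effort next; the trend of each topic’s rate over time tells you whether that investment is working.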
3. Gives insights into the patterns of successful (or unsuccessful) knowledge access by support agents. Too often we ignore one of the most critical areas of productivity: support agents’ ability to get and use the best information quickly. Example: KM tool usage reports that break down usage, success patterns and range of inquiries by agent and/or group.
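And a per-agent usage breakdown, per criterion 3, might look like this sketch – agent names and log fields are again hypothetical. It rolls a KM tool’s lookup log up into each agent’s volume, success rate, and breadth of topics queried.

```python
# Hypothetical per-agent KM tool log: (agent, topic, found_answer).
log = [
    ("ana", "printers", True), ("ana", "billing", True), ("ana", "vpn", True),
    ("ben", "printers", False), ("ben", "printers", True),
]

def agent_usage(log):
    """Per agent: lookup count, success rate, and range of topics queried."""
    stats = {}
    for agent, topic, found in log:
        s = stats.setdefault(agent, {"lookups": 0, "hits": 0, "topics": set()})
        s["lookups"] += 1
        s["hits"] += int(found)
        s["topics"].add(topic)
    return {agent: {"lookups": s["lookups"],
                    "success_rate": s["hits"] / s["lookups"],
                    "topic_range": len(s["topics"])}
            for agent, s in stats.items()}

report = agent_usage(log)
```

An agent with many lookups and a low success rate may need search coaching or better content in their area; a group-level rollup of the same numbers shows where productivity is being won or lost.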
Please note this is not yet any form of dashboard or even measurement framework, just a discussion of what would be meaningful. What’s been meaningful for YOU? Perhaps as important, what has NOT worked?