09:27:34 pm on October 9, 2009

    Numbers, Numbers Everywhere, but How Should We THINK?

    Call centers are notorious for producing STATISTICS – they can tell you how many people called, average talk time by product and by agent, what percentage of callers are on hold for how long, when people abandon, which calls get escalated to 2nd tier, and so on.  Organizations often hold regular meetings to review the stats, wring hands over worrisome trends, and take action on emergencies indicated by the numbers.  It can become THE management method, especially when the organization has critical service level or utilization targets.

    When it comes to knowledge I wish I could say there was a similar zeal for measurement and metrics-based action.  In my experience there isn’t.  Why?

    Organizations often track simple things like which documents are viewed the most, which search queries are run, and how many documents are created, but these are mostly transactional measures.  They don’t add up to insights on HOW and WHERE knowledge is driving value for an organization, or what to do when it’s not.

    I’ve spent quite a bit of time thinking through this problem, because there is an interesting paradox here: organizations that invest in KM almost universally demand a clear business case and ROI for technology, services and program investments.  So up front, anyone who wants to initiate or update their KM capabilities must generate definite projections about how knowledge will impact the business.  Once the project is funded, however, I find again and again, often months or even years after completion, that organizations aren’t measuring the things they committed to up front to demonstrate KM value.  This makes it difficult for these initiatives to garner the mind-share and focus they need to attract more resources and funding, which in turn keeps the resources available for measurement at a minimum.  You get the pattern.  Measurement is a CRITICAL component of KM, both in managing the day-to-day process and in keeping an organization aware of how well KM is working and what’s required to improve overall.

    In a nutshell this situation might be attributed to the following factors:

    • KM value measures require multi-layered, multi-source reporting that isn’t natural or easy for tools or organizations to generate
    • Even with the data in hand, KM analysis isn’t easy to do – some business intelligence expertise is usually in order
    • There’s often a poor understanding of what analysis is needed, and there is actually a ‘missing layer’ of value correlation

    Here are the layers I’m talking about:

    Layer 1:  Knowledge Availability

    Is the right knowledge actually online for the questions and issues involved?  Is that information findable for representative queries?  Is it consumable – up to date, accurate, easy for the audience to read?  These are the baseline stats on knowledge availability you’re most likely to find.  They can tell you some things about the distribution of content & demand, where there may be trouble spots, and overall quality requirements.  But in themselves they say little about how knowledge affects support & service as a business.
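
    To make this concrete, here is a minimal sketch, in Python, of what a Layer 1 availability check might look like.  The search_kb() call, the benchmark queries and the review dates are all hypothetical stand-ins for whatever search API and representative question set your own KM tool provides; the point is simply to measure findability and freshness against a known-good answer key.

        from datetime import date, timedelta

        # Hypothetical benchmark: representative customer queries mapped to the
        # document(s) that should answer each of them.
        BENCHMARK = {
            "reset admin password": {"KB1001"},
            "install fails with error 0x80070005": {"KB1042"},
            "export report to csv": {"KB1107"},
        }

        # Hypothetical document metadata: doc id -> date it was last reviewed.
        DOC_REVIEWED = {
            "KB1001": date(2009, 9, 1),
            "KB1042": date(2008, 2, 15),
            "KB1107": date(2009, 8, 20),
        }

        def search_kb(query, top_n=5):
            """Placeholder for the KM tool's search API; returns ranked doc ids."""
            canned = {
                "reset admin password": ["KB1001", "KB0990"],
                "install fails with error 0x80070005": ["KB0712", "KB1042"],
                "export report to csv": ["KB0455"],
            }
            return canned.get(query, [])[:top_n]

        def availability_report(freshness_days=365):
            # Findability: does a correct document come back near the top
            # for each representative query?
            findable = sum(
                1 for query, expected in BENCHMARK.items()
                if set(search_kb(query)) & expected
            )
            # Consumability proxy: flag documents that haven't been reviewed recently.
            stale = [
                doc_id for doc_id, reviewed in DOC_REVIEWED.items()
                if date.today() - reviewed > timedelta(days=freshness_days)
            ]
            print(f"Findability: {findable}/{len(BENCHMARK)} benchmark queries "
                  f"return a correct document in the top 5")
            print(f"Stale documents (> {freshness_days} days since review): {stale}")

        availability_report()

    A report like this won’t tell you whether knowledge is paying off, but it does give you the baseline picture of content, demand and trouble spots that the later layers build on.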

    Layer 2: Knowledge Usage

    In order to define how knowledge development and delivery activities drive business value, it’s necessary to figure out when the act of FINDING and USING knowledge actually helped solve an issue, simplify it, or shorten time to resolution.  This requires some form of user feedback that is correlated with the knowledge being used, which can then be rolled up into trends by product, question, information type, etc.  The good news is that this usage information is typically available in KM tools today, and it can be readily correlated with user feedback or support agent input.  It just needs to be done – and often isn’t.
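
    Purely as an illustration, here is a minimal sketch of that kind of roll-up.  The case records and field names are made up rather than taken from any particular tool: each case carries the document the agent used and a simple “did the knowledge help?” flag, and helpfulness is trended by product.

        from collections import defaultdict

        # Illustrative case records linking knowledge usage to agent feedback.
        cases = [
            {"product": "Widget Pro",  "doc_used": "KB1001", "knowledge_helped": True},
            {"product": "Widget Pro",  "doc_used": "KB1042", "knowledge_helped": False},
            {"product": "Widget Lite", "doc_used": "KB1107", "knowledge_helped": True},
            {"product": "Widget Lite", "doc_used": None,     "knowledge_helped": None},
        ]

        def usage_by_product(cases):
            # For each product: how often was knowledge used, and how often did it help?
            stats = defaultdict(lambda: {"used": 0, "helped": 0, "total": 0})
            for case in cases:
                s = stats[case["product"]]
                s["total"] += 1
                if case["doc_used"]:
                    s["used"] += 1
                    if case["knowledge_helped"]:
                        s["helped"] += 1
            return stats

        for product, s in usage_by_product(cases).items():
            helped_rate = s["helped"] / s["used"] if s["used"] else 0.0
            print(f"{product}: knowledge used on {s['used']}/{s['total']} cases, "
                  f"helped on {helped_rate:.0%} of those")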

    Layer 3: Knowledge Value

    In order to correlate the effective usage of information with top-line business value, one must be able to assess where knowledge is actually driving improved KPIs for the business.  Where is knowledge re-use shortening handle time, improving the consistency or accuracy of answers, or driving call deflection?  THIS is the missing layer of analysis in most organizations.  It requires the ability to roll up the Knowledge Availability and Usage data and cross-reference it against well-defined measurements of the KPIs one desires to track.  Typically this is labor-intensive and requires a focused program in itself to be sure all the numbers and variables are properly tracked, since the top-line measures involve a lot more than just the knowledge component.  It’s necessary to find an area with relatively stable support issues, not a lot of business reorganization around it, and enough traffic to get good data.
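
    Here, hedged the same way, is a minimal sketch of what that cross-reference could look like for one KPI, handle time, within a single stable issue category.  The records and field names are illustrative only; in practice the data would come from your case-tracking system joined to the usage data above, and with far more volume and controls than this toy example.

        from statistics import mean

        # Illustrative case records: a stable issue category, whether knowledge
        # was reused on the case, and the resulting handle time in minutes.
        cases = [
            {"issue": "password reset", "knowledge_reused": True,  "handle_minutes": 6},
            {"issue": "password reset", "knowledge_reused": True,  "handle_minutes": 7},
            {"issue": "password reset", "knowledge_reused": False, "handle_minutes": 14},
            {"issue": "password reset", "knowledge_reused": False, "handle_minutes": 11},
        ]

        def handle_time_impact(cases, issue):
            # Compare average handle time with and without knowledge re-use
            # for one issue category.
            subset = [c for c in cases if c["issue"] == issue]
            with_kb = [c["handle_minutes"] for c in subset if c["knowledge_reused"]]
            without_kb = [c["handle_minutes"] for c in subset if not c["knowledge_reused"]]
            if not with_kb or not without_kb:
                return None  # not enough traffic on both sides to say anything
            return mean(without_kb) - mean(with_kb)

        saved = handle_time_impact(cases, "password reset")
        if saved is not None:
            print(f"Average handle time saved per case when knowledge is reused: "
                  f"{saved:.1f} minutes")

    Multiply a number like that by call volume and loaded cost per minute and you have the beginnings of the top-line value story.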

    Folks, this CAN be done!  I’ve seen it and done it.  I worked at Microsoft in the late ’90s as the Content Architect for the support website.  Through targeted analysis and response to the trends we saw, we were able to progressively introduce additional content, improved search features, and overall user experience features that drove user-defined success from the 30 per cent range to the 60 per cent range and up.  We did this by carefully studying which queries and document views were driving success, where the gaps were, and what types of failure were occurring.  We had tools and techniques for gathering the right data, we had a business intelligence analyst for the support organization who helped us craft the right analytics, and we had people who could own the follow-up to make the necessary content, tool or web improvements.

    It CAN be done!  It just needs the passion and zeal that we usually reserve for our call stats.  For any organization with high support volume – whether it be a huge self-service hit rate or lots of call center traffic – it’s well worth the effort to find out where and how to push satisfaction and success numbers.  Even a few percentage points of improvement add up to millions of dollars in savings.  Yet we don’t “eat our vegetables” – this stuff is hard, relatively unglamorous work.  And we as an industry haven’t done a great job of pushing our organizations to embrace a clear measurement methodology.  But I know that the day we CAN and DO measure the value KM drives for the organization will be the day we suddenly move to the center and forefront of perceived value from support knowledge.  For it’s still true that “knowledge is golden” – it’s our stock in trade, really, it’s what people look to us for, and the more we can show its value, the greater service we’re doing to ourselves and our business.

    Viva La KB!

    John Chmaj
    “The Knowledge Advocate”
