Updates from RSS

  • 10:40:04 am on February 8, 2010

    “Begin with the End in Mind: User Demand”

    This paraphrase of Stephen Covey’s 7 Habits summarizes the beginning and end of the KM game plan itself.  Knowledge Management practices are fundamentally all means to one end:  the right person finding the best information to meet their need at the right time.  “Knowledge” in KM refers to information tailored for use as well – specific to the task or inquiry, with the key elements formatted in readily consumable fashion.  So the activities, tools and resources used to enable KM drive to this one objective.  Success = access to knowledge, whether it’s a customer readily self-serving, engaging in effective online chat or email, or a customer service rep finding everything they need just in time to drive a quick, quality service interaction.

    It stands to reason, then, that the first step towards meeting user requests is knowing precisely what that demand IS.  Furthermore, within that demand it is imperative to know what combinations of information, in what forms, will meet those needs.  So to start thinking about any form of KM, a few basic but critical questions need to be answered (a simple way to capture the answers in data is sketched after the list):

    1. Who are my USERS?  What are their expectations, level of understanding and expertise, ability to engage your tools, processes and data?
    2. What is the amount and distribution of the types of QUESTIONS they ask, what needs are there to satisfy?  This must be understood at more granular level than simple topics or questions – it’s key to define as rich a taxonomy of issue types and classes as possible.
    3. What types of INFORMATION do they require?  What’s going to satisfy the common requests?  Again, the further you can define the full spectrum of content types, interactions, and information aggregations the tighter the fit will be.
    4. How can they best CONSUME the information on each channel, given the constraints and opportunities of that channel?
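
    One simple way to make these questions actionable is to classify each observed request along those same dimensions and count them.  The sketch below is a minimal, hypothetical illustration in Python – the field names and category values are invented for the example, not a prescribed taxonomy.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DemandRecord:
    """One observed user request, classified along the four dimensions above.
    All category values are illustrative placeholders."""
    user_segment: str     # WHO is asking, e.g. "consumer", "admin", "partner"
    issue_type: str       # granular QUESTION class, e.g. "billing.refund"
    content_needed: str   # INFORMATION type, e.g. "how-to", "policy", "diagnostic"
    channel: str          # where it will be CONSUMED, e.g. "self-service", "chat"

def demand_profile(records):
    """Summarize who asks what, and what form of answer they need."""
    return {
        "by_user": Counter(r.user_segment for r in records),
        "by_issue": Counter(r.issue_type for r in records),
        "by_content": Counter(r.content_needed for r in records),
        "by_channel": Counter(r.channel for r in records),
    }

# Usage: classify a sample of real interactions, then inspect the counts.
sample = [
    DemandRecord("consumer", "billing.refund", "policy", "chat"),
    DemandRecord("consumer", "setup.install", "how-to", "self-service"),
    DemandRecord("admin", "setup.install", "diagnostic", "phone"),
]
print(demand_profile(sample))
```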

    (More …)

     
  • 03:24:12 am on January 15, 2010

    Let’s Rethink SEARCH, Shall We?

    The one constant in my support KM career has been working with search tools – I got involved in creating CD-ROMs for support right from the commercial inception of that technology in the late ’80s, then in knowledge base search tools of all sorts, then on the web.  It’s something that’s been taken for granted as a core support capability, and one that’s now ubiquitous on the web.  So I’m as prone as anyone else to PRESUMING that search is a core, if not THE core, way we should be accessing knowledge.  There’s a kind of magic to it – I type in a few words and somehow the right info steps forward from a sea of stuff.  Cool!

    And yet, on the whole this experience has been far from satisfying in the support & service arena.  Search success rates on average hover in the 30-40% range, organizations jump through all manner of hoops to figure out how to get the right content to show up, and the tools are often either complex or too simplistic in their capabilities.  And when you DO get this all working, there seem to be interminable IT, website design and content issues that degrade or alter search and force ongoing maintenance just to preserve a meaningful search experience.  All in all it can be a real pain to field and run a search application!  (I’ve blogged earlier about issues with search and support – see “Support Search – Why It Can’t Be ‘Just Like Google’”.)

    (More …)

     
  • 10:44:27 am on December 30, 2009

    New Year’s Resolution: Think about CONTEXT, not CONTENT

    As we enter the New Year, I’d like to propose a resolution for 2010 – that we get over CONTENT, and start thinking about CONTEXT.

    The world of information is fragmenting and multiplying rapidly. The idea that ‘content’ exists in one repository, in a tidy, consistent format, is already antiquated. From databases to blogs, from documentation to FAQs, the places and types of information at our disposal are endless – and, as such, all equally potentially confusing. To provide knowledge that enables others to use our products and services, we can’t focus on endlessly rationalizing all these resources, but rather on optimizing HOW and WHEN they are presented.

    CONTEXT means the question at hand, the identity of the questioner, the environment surrounding the question – all these factors condition WHAT information is needed, and HOW it should be presented. The goal is the ANSWER, not finding ‘content’. The most important factor is knowing what someone is looking for – the processes and interactions necessary to establish the context of any inquiry are the path to the fastest, best answer.

    The answer might be a field in a database record, a knowledge base solution, or a detailed step-by-step procedure or policy. We can’t deliver the best answer until we truly know the question – it sounds obvious, and good support agents do this type of contextual inquiry on every call. Yet more often we trundle our customers (and support agents!) through layers and layers of tools, serving up reams of content, as if finding information were the essential task. This is an old, techno-centric view of knowledge management that’s changing. The successful companies are figuring out better ways of rapidly establishing effective paths of inquiry that lead to a wide array of potential answers. The experience is one of scent and traction – users feel like the whole organization is tilted properly towards them and the way they think and use products. Using these tools will quickly become an efficient and preferred method of interaction, as will doing business with the companies that provide them.
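
    To make the shift from content to context a bit more concrete, here is a minimal, hypothetical sketch of an inquiry context used to rank candidate answers before any keyword search happens.  The class and field names are mine, purely for illustration – the point is that the match is against the question, the questioner and the channel, not against typed words.

```python
from dataclasses import dataclass

@dataclass
class InquiryContext:
    """The question at hand, the questioner, and their environment."""
    user_role: str    # e.g. "customer", "agent"
    product: str
    issue_type: str   # e.g. "billing", "install-error"
    channel: str      # e.g. "web", "chat", "phone"

@dataclass
class CandidateAnswer:
    title: str
    product: str
    issue_type: str
    audiences: tuple  # roles the answer is written for
    channels: tuple   # channels it is formatted for

def best_answers(context, candidates):
    """Rank answers by how well they fit the established context,
    rather than by keyword overlap with a typed query."""
    def score(a):
        return ((a.product == context.product)
                + (a.issue_type == context.issue_type)
                + (context.user_role in a.audiences)
                + (context.channel in a.channels))
    return sorted((a for a in candidates if score(a) > 0), key=score, reverse=True)
```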

    I heard an interesting thing at a meeting this year. During a roundtable on knowledge management one of the attendees suddenly said, “You know – search is actually an act of failure!” His “aha” was that if a user is forced to guess at terms that exist in some repository of content, then sift through titles for the answer, the game is already lost. The actual question and context of the user have been reduced to a one- or two-word query against some indeterminate cloud of information. The user has more to tell about their need, and we have more tools to help them establish the best path to the best answer.

    So let’s make 2010 about CONTEXT. Let’s think clearly, carefully and completely about how people use our products, where their questions arise (in THEIR way of thinking), and how they would best traverse our services capabilities to get the most out of them. Then let’s build THAT. This type of approach will yield a whole new level of insight about what is possible, and encourage us to create service experiences that map to the way our customers and support staff think.

    It’s a whole new world in 2010 – it just requires a shift in perspective.

    Have a Great New Year!

    John Chmaj

    “The Knowledge Advocate”

     
  • 09:09:07 pm on December 9, 2009

    KM Governance: It’s a Lifestyle, not a Project

    I’ve had several requests lately to address the issues and approach to knowledge management “governance” — the practices, commitments and plans that define how organizations work together. Aside from the many tactical issues and details an organization needs to address, there are a few things about knowledge management governance that are worth considering at the outset whenever you think about how to initiate or improve it:

    1.  Knowledge Management doesn’t exist or belong to any one group — it’s the COMPANY’S job to do it.

    Because content usually surfaces through search tools and websites, it’s often lumped in with other IT or web initiatives in terms of who owns it, how it’s governed, etc.  The limitation with this approach is that it doesn’t address the fundamental organizational alignment necessary to achieve effective knowledge management.  The content stakeholders, the support organization (these two usually overlap), and the support mechanisms (e.g. technology) all play a part in getting ‘knowledge to market’.  It’s not any one group’s assignment.

    That being said, clear leadership and drive have to be centered somewhere in the organization to create and sustain this alignment.  This usually happens at the executive level within or among some part of the services and support organization, in conjunction with the products group.  But the scope and responsibility of creating and delivering knowledge MUST involve ALL the stakeholders – marketing, products, support, content groups, IT, etc. – from the outset in some capacity; otherwise KM forever swims upstream against the priorities of all the other core initiatives of the company.

    2. Knowledge Management is a LIFESTYLE, not a PROJECT:  Take a program approach.

    While individual content or technology initiatives may well be performed in project mode, the actual long-term capability of developing and delivering knowledge is a core organizational competency that requires ongoing investment, commitment and monitoring.  The most effective way to approach it is as a program – draw a circle around what is to be accomplished, engage all the stakeholders in the life cycle of knowledge (see item #1 above), and create an internal branding, awareness and energy around it.

    This is not foreign to organizations:  cross-group marketing, product development and sales programs happen all the time.  Seek out allies and long-term partners across the business, and just keep at it — it takes years to establish an ongoing cultural commitment to something in any organization.  As long as the goals and outcomes are clear and compelling people WILL come on board as more and more stakeholders get involved.  Which leads to another principle of KM success: show success!

    3.  Focus the program on the higher organizational goals served, and build commitments continuously towards achieving them.

    Often there’s a disconnect between the day-to-day, mundane tasks associated with creating content, managing KBs, etc., and the critical organizational capabilities that are served by the KM program.  The fruits of Knowledge Management are the successful outcomes: customer satisfaction, efficiency, better resourcing, agent productivity.  THAT is what the organization truly cares about.  The more clearly, compellingly and continuously KM owners and stakeholders can keep the organization focused on resourcing the program towards these objectives, the more KM seeps into the organizational culture as a fundamental enabler.

    This is more than just proving a business case – it’s getting the organization to see and believe that knowledge delivery drives value to the business.  Here’s where good metrics, clear reporting and analysis, and outright evangelism and communications come in.  People in the organization are busy – they have a lot on their plates already and many other objectives in play at any one time.  We have to GET and KEEP their attention on why they should care about KM, what it’s done for them lately, and how they can help get it to the next level with their resources, content or support.

    At the end of the day (or month, or year) KM is a core organizational competency.  Self-service is the culmination of a process that began decades ago, when online information first became available on networks, then intranets, and now the web and customer communities at large.  Very few organizations can avoid creating and sharing knowledge to enable their services and support processes in some regard.  Whether that activity is ad hoc, weakly managed, and drives inconsistent results, or is well managed and drives value and differentiation, is up to the organization.  Hopefully these principles will help you approach KM as a path towards building a more knowledge-centered organization.  Good luck – and remember, it’s happening whether the organization wants to acknowledge it or not!

    John Chmaj
    “The Knowledge Advocate”

     
  • 09:35:09 pm on November 6, 2009

    Cast Your VOTE Today:  Search!

    This week’s elections got me thinking about a voting process most of us do many times a day but hardly think of that way:  searching on the web.  If you really break it down, a search isn’t some deep, technical thing we do – it’s more of a guess, an approximation, a forecast, a VOTE about what words we think exist in content we think we want to view.

    This process works quickly, magically and consistently most of the time, so we rarely give it much thought.  We enter a few words, as unique and meaningful as we can make them upon two seconds’ reflection, and if we get what we want no more thought is given to it.  But many times we don’t get great results, or they’re confusing, or what we want appears way down the list.  At these times we often blame the search tool, the content, the website we’re on.  But what about our own responsibility in this game?  Who made us the experts in the domain of information we’re looking through?  I don’t know about you, but I have little interest in the details of my cellular provider’s terminology, content types and taxonomy, except as they apply to my immediate issue.  I have to admit upon reflection I’m not really much of an expert.  My queries are based on guesswork – what I assume to be common sense.  But oftentimes they’re just my view of some other organization’s reality, with good or bad results depending on how accurately I’ve guessed what that is!

    In order for us to create great search and browse experiences for customers in support and service we should acknowledge the realities of our searching audience.  We need to think about how our tools and content can respond when we get queries of various types.

    Many people query in greatly oversimplified fashion, ESPECIALLY when they’re not sure what to look for, since they don’t know what other words to use.  These folks are usually the least satisfied with the search experience, for obvious reasons.  We need to set up content and knowledge base results mechanisms that guide these folks either to the most popular documents we know they’re likely looking for, or to further guidance on where and how to look for something in the topic area suggested.

    For example, I had a client once who discovered upon analysis that 4% of their queries were the single word “vista”.  This wasn’t Microsoft – it was a company whose customers wanted to know how the new Vista OS impacted the company’s products.  This one-word query isn’t really very descriptive, and it wasn’t about their stuff, so they had not done anything to respond to it.  But 4% of web traffic is a pretty decent amount, so all they had to do was put up some general documents pertaining to Vista requirements and settings and they were able to satisfy it.
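
    That kind of discovery usually takes nothing more than a little query-log analysis.  Here’s a minimal, hypothetical sketch (assuming you can export the raw query strings from your search tool) that surfaces high-volume queries and routes the known ones to curated ‘best bet’ answers – the mapping shown is invented for illustration.

```python
from collections import Counter

def top_queries(query_log, n=20):
    """Return the highest-volume queries and their share of total traffic,
    so short, vague or 'off-topic' queries like 'vista' stand out."""
    counts = Counter(q.strip().lower() for q in query_log if q.strip())
    total = sum(counts.values()) or 1
    return [(q, c, round(100.0 * c / total, 1)) for q, c in counts.most_common(n)]

# Curated 'best bets' for high-volume, low-detail queries that ordinary
# relevance ranking handles poorly (illustrative mapping only).
BEST_BETS = {
    "vista": ["Running our products on Windows Vista: requirements and settings"],
}

def results_for(query, search_fn):
    """Prepend any curated answers to whatever the search engine returns."""
    q = query.strip().lower()
    return BEST_BETS.get(q, []) + search_fn(q)
```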

    Other people query in overly complicated fashion, using really specific terms or longer queries in an effort to get something very specific.  The same tactic applies as with general queries:  be sure content exists that overviews a topic, or that helps people understand how the information is organized so they can find the specific item.  The mechanisms may be different in terms of how the tool responds to such queries, but the principle is the same.  These folks need to learn what will truly drive the specific results they seek.

    Finally, it’s often the case that a hot topic or issue comes up that people query on in lots of different ways, using a wider variety of terms and combinations than one would expect, or using terminology that has nothing to do with how the company thinks of the problem.  I recall a company that had a cash-back program with a fancy internal marketing name that every document referred to, but no information or support built in for queries where people just asked obvious questions like “where’s my check?”, “how do I mail in the return form?” or “how much cash am I eligible for?”  In this instance, adding synonyms to the search capability, generalizing the titles for these types of issues, and making the wording in the content a bit more customer-friendly all helped assure that people got something that seemed to match an obvious, common question.
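
    Bridging customer language to internal terminology can be as simple as query-side synonym expansion.  A minimal, hypothetical sketch – the term mappings are invented here, but in practice they come straight out of query-log review:

```python
# Map customer phrasing to the internal terms the content actually uses.
# These mappings are illustrative; real ones come from query-log review.
SYNONYMS = {
    "check": ["rebate", "cash-back payment"],
    "cash": ["rebate", "cash-back"],
    "form": ["rebate claim form"],
}

def expand_query(query):
    """Expand a customer query with the internal terms it likely refers to,
    so documents written in 'program speak' can still be found."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return " ".join(dict.fromkeys(expanded))  # de-duplicate, keep order

print(expand_query("where is my check"))
# -> "where is my check rebate cash-back payment"
```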

    Searching is a funny sort of discipline – it’s something we’ve all evolved culturally without any rules, guidance or training.  Nobody teaches us how to do it well, websites don’t give away any help about what’s going to work, and we are really left to our own devices to decide and cast our vote on what’s going to be in the content we’re looking for.  As long as we don’t forget that customers are indeed just GUESSING at what they want, and build bridges to meet them as they enter our world, we’ll stand a better-than-even chance of providing them with the best info, at least for the common questions that have the most impact both ways.

    So get out there and VOTE today – for content!

    John Chmaj
    “The Knowledge Advocate”

     
  • 08:19:29 pm on November 2, 2009

    It’s no fun to state things in negatives, but sometimes it’s important to identify common patterns and issues so we don’t keep doing the wrong things over and over.  It’s also quite easy to enter a comfortable state of denial, where things are the way they are because “that’s always the way they’ve been”.

    In an effort to help identify areas that might actually be hurting you – ones you’re not aware of, or not aware you can CHANGE – I present to you my KM ‘Letterman List’ for November.  KM DOESN’T WORK BECAUSE:

    10. The knowledge management tool just sort of sits there – nobody’s really minding it, it’s kind of like bad plumbing that everybody puts up with.

    9. Nobody really measures how well content is meeting the needs of the internal and/or external users.  There’s no top-line outcome of success from either the internal KB or external self-help that guides activity.

    8. Nobody asked the CSR’s how THEY search, what content THEY use, and how THEY want their tool organized.

    7. You dumped all your stuff into your new knowledge base, and it’s no easier to find than before.

    6. The toolset doesn’t work – it’s too slow, inconsistent, up and down, behaves oddly, integrates poorly, and/or doesn’t bring back decent results.

    5.  The content being used is too long, complex, jargon-filled or inaccurate:  it doesn’t provide quick, easy access to the best information the user requests.

    4.  The organization isn’t really staffed around KM – activity is sporadic and hit-or-miss towards keeping the knowledge base current and relevant.

    3.  There’s nobody holding the products, engineering, marketing and/or sales groups responsible for keeping their information accurate, up to date and focused on support-relevant topics. People across the organization don’t buy into the need for content and tagging standards – they write what they want, how they want.

    2.  Nobody’s evangelizing, coaching, educating the organization about the value of KM, how to best use the tools at hand, what needs improvement, and helping drive effective continuous adoption across centers, locales and lines of business.

    1.  There’s no objective, capability or outcome from doing KM that resides on a key executive’s top ‘to do list’, as a key enabler for the business as a whole.  Such objectives would spark and demand leadership, action and accountability across the organization to stay focused on achieving better knowledge development and delivery in the midst of all other priorities, initiatives and crises.

    Does #1 sound simple?  Ok – quickly – name the top outcome from KM your executive expects, and what objectives they are monitoring and driving to achieve it…

    If that’s clear and easy to identify, the chances are you’re getting help addressing the other 9 issues.  If you’re still waiting, make a few up and go have a chat with them!

    If you answered yes to any more than 2 of these you probably need to ask – WHO is driving your KM bus?

    Happy November,

    John Chmaj
    “The Knowledge Advocate”

     

     
  • 09:16:12 pm on October 16, 2009

    FINDING Depends on How You SEEK

    One of the most important things to think about in making knowledge delivery systems work is HOW users will find what they’re looking for.  At first glance it may seem simple:  type in a search, find your stuff.  But is it really that straightforward?  Do you really know what you’re looking for?  Do you really know what the right query is?  Do you even know what content exists in the system you’re using?

    One of the main reasons we all engage in building and managing support-focused search tools and portals is that we are trying to create a series of efficient, intuitive user experiences that lead people to the right answers to their questions.  However, my experience is that all questions – and all users – are not created equal.  Sometimes we know exactly what we need and how to ask; other times we haven’t a clue.  The same person who is an expert on one issue may be totally lost looking for a slightly different issue in their own KB.

    In this context I always think of one Christmas morning spent huddled over my sister’s computer, trying desperately to get the Disney Aladdin DVD to install and run properly, my little 4-year-old daughter wailing and pleading with me as I, the great computer expert in the family, struggled to figure out what query to enter in the Disney knowledge base to address the failure of this $5 DVD to load!  One is only as smart as the next problem, really.

    One of the most critical yet unrecognized features of a knowledge base is its ability to teach users about the terms, taxonomy, and content that are available to solve their question.  Most searches aren’t a one-shot deal – the user progressively browses to learn about an issue, scopes the content, tries different queries to see what’s in the KB, and finally decides when to browse through titles and content for the best fit.  I find it useful to boil down the types of interactions one needs to have based on how much one knows about an issue, and what one needs in terms of guidance from the knowledge base to get to the answer.  A simple way to think about it is as follows.

    Three types of navigation

    1. I Know What I Know – Lookup

    This is the stereotypical “Google-esque” query – I know the data I want, I know the terms that should bring it back, I type in “error 99” and voila!  There’s my stuff.  Support experts usually expect this type of behavior – in fact, the weaker a KB is, the more they memorize special terms, document IDs and keywords that will give them what they want.  But in the end they’re not really searching so much as looking up known content.  This is an important capability that must work well, but it’s far from the only interaction people have, ESPECIALLY in self-help systems.

    2. I Know What I Don’t Know – Guided Search

    In this scenario users know the TYPE of information they need, perhaps also the topic or potential terms, but don’t exactly know what content is out there.  They may browse to scope the issue, then type in a query or two based on what appear to be the common topics.  KM systems assist here by providing potential additional browse and filtering options that are relevant to the current query scope.  The user can go back and forth, examining the options provided by the system, to find the right fit of topic and query detail.  These systems work well when the user has some idea of what the right solution is and just needs assistance getting to the right area of the knowledge base.  But that’s not the final area, nor perhaps the most important…there’s still:

    3. I Don’t Know What I Don’t Know – Browse and Filter

    Many users, especially those new to a particular domain, don’t really know how to think about the information that’s in it, what terms or topics are relevant – maybe not even what they’re supposed to ask.  In my Disney DVD problem, I had some smart ideas but none turned out to be relevant.  Was it the display driver?  The computer RAM?  OS version?  Plug-in requirements for video?  And how should I query a simple Disney DVD for this kind of stuff?  These scenarios are where a good browse experience can shine.  Not only can it help users figure out what the key components of a product are, the topics and known problems, but it should also quickly surface the common questions the user is likely to have.  By seeing common issues and how they are organized, users can get insight into what their questions really are, or what queries are likely to work should browsing and filtering not bring back the answer immediately.
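
    One way to keep all three modes in view when designing a knowledge base interface is to treat them as three entry points over the same content.  The sketch below is a toy, hypothetical illustration – the class and method names are mine, not any particular product’s API.

```python
class KnowledgeBase:
    """Toy KB illustrating the three modes of inquiry.
    Each article is a dict with 'id', 'title', 'topic', and 'text'."""

    def __init__(self, articles):
        self.articles = articles

    def lookup(self, known_key):
        """Mode 1 - I know what I know: fetch by ID or exact keyword."""
        return [a for a in self.articles
                if a["id"] == known_key or known_key in a["title"].lower()]

    def guided_search(self, query, topic=None):
        """Mode 2 - I know what I don't know: query plus topic filters
        the user can adjust as the system shows what exists."""
        hits = [a for a in self.articles if query.lower() in a["text"].lower()]
        if topic:
            hits = [a for a in hits if a["topic"] == topic]
        return hits, sorted({a["topic"] for a in hits})  # results + facets

    def browse(self):
        """Mode 3 - I don't know what I don't know: show how the
        content is organized so the user learns what to ask."""
        tree = {}
        for a in self.articles:
            tree.setdefault(a["topic"], []).append(a["title"])
        return tree
```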

    At the end of the day we’re all just looking for ANSWERS, aren’t we?  If we think through what the process of acquiring knowledge entails, we can trace these patterns of inquiry and do our best to model them for particular audiences, users and issues.  When this is well done for all three query types, users experience the ‘magic’ of knowledge bases.  They DO indeed seem to ‘know exactly what you want’, yet you may not even be aware of how seamlessly the system allows you to move between these modes of inquiry and still get the scent of the answer you need.  As KM system builders, we just have to give a little THOUGHT to how people THINK….!

    John Chmaj
    “The Knowledge Advocate”

     
  • 09:27:34 pm on October 9, 2009

    Numbers, Numbers Everywhere, but How Should We THINK?

    Call centers are notorious for producing STATISTICS – they can tell you how many people called, average talk time by product or agent, what percentage of people are on hold for how long, when people abandon, which calls get escalated to 2nd tier, etc., etc.  Often organizations have regular meetings to review the stats, wring hands over worrisome trends, and take action on emergencies indicated by the numbers.  It can become THE management method, especially when the organization has certain service levels or utilization targets that are critical.

    When it comes to knowledge I wish I could say there was a similar zeal for measurement and metrics-based action.  In my experience there isn’t.  Why?

    Organizations often track simple things like which documents are viewed the most, what search queries are run, and how many documents are created – but these are mostly transactional measures.  They don’t add up to insights on HOW and WHERE knowledge is driving value for an organization, or what to do when it’s not.

    I’ve spent quite a bit of time thinking through this problem, and the interesting paradox is that organizations that invest in KM invariably demand a clear business case and ROI for technology, services and program investments.  So up front, anyone who wants to initiate or update their KM capabilities must generate some definite projections about how knowledge will impact the business.  Once the project is funded and completed, however – often months or even years later – I find again and again that organizations aren’t measuring the things that were committed up front to demonstrate KM value.  This makes it difficult for these initiatives to garner the mind-share and focus they need to get more resources and funding, which in turn keeps the resources available for measurement at a minimum.  You get the pattern.  Measurement is a CRITICAL component of KM, both in managing the day-to-day process and in keeping the organization aware of how well KM is working and what’s required to improve overall.

    In a nutshell this situation might be attributed to the following factors:

    • KM value measures require multi-layered, multi-source reporting that isn’t natural or easy for tools or organizations to generate
    • Even with the data in hand, KM analysis isn’t easy to do – some business intelligence expertise is usually in order
    • There’s often a poor understanding of what analysis is needed, and actually a ‘missing layer’ of value correlation

    Here are the layers I’m talking about:

    Layer 1:  Knowledge Availability

    Is the right knowledge actually online for the questions and issues involved?  Is that information findable for representative queries?  Is it consumable – up to date, accurate, easy for the audience to read?  These are the baseline stats on knowledge availability you’re most likely to find.  They can tell you some things about the distribution of content & demand, where there may be trouble spots, and overall quality requirements.  But in themselves they say little about how knowledge affects support & service as a business.
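
    Layer 1 can be spot-checked cheaply:  take a representative sample of real queries, record which document should answer each one, and measure how often that document shows up near the top.  A minimal, hypothetical sketch, assuming your search tool exposes a query function:

```python
def findability(test_set, search_fn, top_n=5):
    """test_set: list of (query, expected_doc_id) pairs drawn from real traffic.
    search_fn(query) -> ordered list of doc ids from your search tool.
    Returns the share of queries whose expected answer appears in the top N."""
    if not test_set:
        return 0.0
    hits = sum(1 for query, expected in test_set
               if expected in search_fn(query)[:top_n])
    return hits / len(test_set)

# Usage (illustrative): run this weekly against a fixed sample and trend it.
# score = findability(sampled_queries, kb_search, top_n=5)
```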

    Layer 2: Knowledge Usage

    In order to define how knowledge development and delivery activities drive business value, it’s necessary to figure out when the act of FINDING and USING knowledge actually helped solve an issue, simplify the interaction, or shorten time to resolution.  This requires some form of user feedback that’s correlated with the knowledge being used, which can be rolled up into trends by product, question, information type, etc.  This information is typically available in KM tools today, and can be readily correlated with user feedback or support agent input.  It just needs to be done – and often isn’t.
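
    Layer 2 amounts to joining two things most tools already capture:  which article was used on an interaction, and whether the user or agent said it helped.  A minimal, hypothetical tally (the event format is invented for illustration):

```python
from collections import defaultdict

def usage_value(events):
    """events: iterable of (article_id, product, helped) tuples, where
    'helped' is the user/agent feedback captured at resolution time.
    Returns per-article and per-product 'helped' rates."""
    by_article = defaultdict(lambda: [0, 0])   # article -> [helped, total]
    by_product = defaultdict(lambda: [0, 0])
    for article_id, product, helped in events:
        for bucket in (by_article[article_id], by_product[product]):
            bucket[0] += int(helped)
            bucket[1] += 1
    rate = lambda h, t: round(h / t, 2) if t else 0.0
    return ({a: rate(h, t) for a, (h, t) in by_article.items()},
            {p: rate(h, t) for p, (h, t) in by_product.items()})
```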

    Layer 3: Knowledge Value

    In order to correlate the effective usage of information to top-line business value, one must be able to assess where knowledge is actually driving improved KPIs for the business.  Where is knowledge re-use shortening handle time, improving consistency or accuracy of answers, driving call deflection?  THIS is the missing layer of analysis in most organizations.  It requires the ability to roll up the Knowledge Availability and Usage data and cross-reference it against well-defined feedback about the KPI one desires to track.  Typically this is labor-intensive and requires a focused program in itself to be sure all the numbers and variables are properly tracked, since the top-line measures involve a lot more than just the knowledge component.  It’s necessary to find an area with relatively stable support issues, not a lot of business reorganization around it, and enough traffic to get good data.
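
    Layer 3 is where the rollup happens:  within a stable slice of the business, compare the KPI for interactions where knowledge was found and reused against those where it wasn’t.  Here’s a hypothetical sketch of that comparison for handle time – the field names are illustrative, and a real analysis would control for issue mix and much more:

```python
from statistics import mean

def knowledge_kpi_impact(cases):
    """cases: list of dicts like
    {'handle_time_min': 14.0, 'knowledge_reused': True, 'issue_type': 'billing'}
    Compares average handle time with vs. without knowledge reuse."""
    with_kb = [c["handle_time_min"] for c in cases if c["knowledge_reused"]]
    without = [c["handle_time_min"] for c in cases if not c["knowledge_reused"]]
    if not with_kb or not without:
        return None  # not enough traffic to say anything meaningful
    return {
        "avg_with_knowledge": round(mean(with_kb), 1),
        "avg_without_knowledge": round(mean(without), 1),
        "minutes_saved_per_case": round(mean(without) - mean(with_kb), 1),
    }
```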

    Folks, this CAN be done!  I’ve seen it and done it.  I worked at Microsoft in the late ’90s as the Content Architect for the support website.  Through targeted analysis and response to the trends we saw, we were able to progressively introduce additional content, improved search features, and overall user experience improvements that drove user-defined success from the 30 percent range to the 60 percent range and up.  We did this by carefully studying which queries and document views were driving success, where the gaps were, and what types of failure were occurring.  We had tools and techniques for gathering the right data, we had a business intelligence analyst for the support organization who helped us craft the right analytics, and we had the people who could own the follow-up to make the necessary content, tool or web improvements.

    It CAN be done!  It just needs the passion and zeal we usually reserve for our call stats.  For any organization with high support volume – whether it be a huge self-service hit rate or lots of call center traffic – it’s well worth the effort to find out where and how to push satisfaction and success numbers.  Even a few percentage points of improvement add up to millions of dollars in savings.  Yet we don’t “eat our vegetables” – this stuff is hard, relatively unglamorous work.  And we as an industry haven’t done a great job of pushing our organizations to embrace a clear measurement methodology.  But I know that the day we CAN and DO measure the value KM drives for the organization will be the day we move to the center and forefront of perceived value from support knowledge.  For it’s still true that “knowledge is golden” – it’s our stock in trade, really, it’s what people look to us for, and the more we can show the value of that, the greater service we’re doing ourselves and our business.

    Viva La KB!

    John Chmaj
    “The Knowledge Advocate”

     
  • 07:18:21 pm on October 1, 2009

    KM is Crossing the Chasm!

    In 1991 Geoffrey Moore published his famous book “Crossing the Chasm”, in which he developed a compelling model for understanding a common lifecycle pattern of technology adoption.  Essentially the “chasm model” pointed out the difference between the initial phases of a technology and its associated products coming to market and the later phases, between which he posited a CHASM exists.  The chasm is the gap between the expectations, expertise and needs of the Innovators & Early Adopters, and those of the Early and Late Majority (often referenced as “the herd”).  The Chasm Model is represented as a curve – here’s a simplified version I’ve done for Knowledge Management:

    Innovators & Early Adopters are willing to accept limitations in adopting new technology – and the need to figure out and/or customize components of it themselves – in exchange for the advantages they gain from being first to market with the new capabilities the technology provides.  The Majority needs more proof, market acceptance and guidance to adopt the new approach.

    I assert that Knowledge Management for Support & Service is crossing this chasm now.  The early adopters in support KM have already implemented – these were the high-tech companies, many telcos, and other technology-related businesses with deep technology expertise and existing knowledge bases, as well as a critical business need to provide high-quality access to complex technical content.  The Majority is now coming to the fore:  I’m now working with financial services institutions, banks, retail, insurance, utilities, airlines, online consumer businesses and many other types of organizations that are new to KM.  Self-service is driving a lot of this market – the need to move as much successful service traffic online as possible.  But I’m also seeing internal understanding mature:  organizations are becoming aware of the need to ORGANIZE and FOCUS their information development and delivery activities.

    All this activity suggests that ‘the herd is on the move’ – which means it’s time for us knowledge advocates to embrace the principles and practices required to successfully engage them.  A key insight of the chasm strategy is that the majority has different perspectives and needs than the early adopters.  The majority needs to see how others have done it, proof that it worked, and a clear recipe for moving forward.  Core KM practices to date have been as much art as science.  We need to frame and tool our methods in more consumable ways.  We need to make the process for engaging core practices clear and easy to understand, for everyone from the support agent to the self-help managers to the managers and executives who drive such initiatives.  We need to provide compelling, complete examples of success that show people how it’s done.  In short, we need better KM about KM!

    What does this mean for KM?  My experience is that the ‘chasm’ is the gap between capabilities and emerging objectives for organizations.  I tend to get requests to address technology, content, metrics and organizational issues, which all add up to challenges in getting to a fully functioning KM capability.  Considered together, they comprise the gap in capabilities, expertise and focus that must be closed to get to a well-functioning KM system.

    If organizations can keep their eye on the ball, stay focused on developing these key capabilities, they will themselves be crossing the chasm!

    This is an exciting time for Knowledge Management – if we can learn to cross this chasm KM practices may clarify and standardize rapidly for a much larger set of organizations, which in turn will create more examples and reference points of value for everyone else.  Then, as Geoffrey Moore describes, we’ll be prepared to engage a “KM Tornado”!  But that discussion is for another time – let’s get those bridges built first!

    John Chmaj
    “The Knowledge Advocate”

     
  • 10:11:33 pm on September 25, 2009

    All Content is NOT Created (or used) Equally!

    Back when I started in support at Lotus in the late ’80s, “content” was pretty easily understood and identified.  You had software packages that shipped with user manuals, there might be a Q&A document and some technical notes one had access to, and if you were really into it you might buy a book or two about using the product or platform.  That was it.  In support we stored most things in great blue binders, and we had internal text-retrieval knowledge bases that stored all the bug reports and KB articles.  Even that information was finite – I published the entire knowledge base on one CD-ROM each month; a few thousand articles covered all Lotus products.  One could actually KNOW everything that was published and known about a product.  Managing it required a few simple fields and a text retrieval engine.  It was a simpler, gentler age for knowledge management.

    Then came the Internet, followed by Social Media.

    Today information is passed across all kinds of channels and formats, in sizes great and small, from the shortest text messages to lengthy PDF files, videos and everything in between.  Web pages, documents, emails, chats, forum posts, blogs, infinite databases and content repositories.  There are more people providing more input into the data stream than we ever could have imagined back then.  We can get answers and ideas from almost anywhere.  This is a great boon to the concept of knowledge sharing, but it also presents some fundamental challenges to organizations trying to manage some sort of quality information transfer to interested parties (especially SUPPORT & SERVICE!).  This information blitz has impacted some of the key principles necessary to actually drive services through knowledge, such as:

    • Ownership:  it’s harder to tell where a piece of information came from, and what its source and credibility are
    • Scope of Application:  personal opinions or advice may or may not apply outside of the situation in which they were offered initially
    • Quality and Style:  ALL manner of authoring detail, style, consistency and completeness can exist, even within one channel of one community
    • Accuracy:  Who’s to say which content is the most valid?  What does “content version” even mean anymore?

    The proliferation of channels also means organizations must be able to format and re-purpose critical information for different delivery channels.  A lengthy technical document is useless as an email; conversely, the back and forth of forum or chat threads is almost impossible to read outside those mediums.  Yet the answers are coming from all these places, and need to travel to all these places, in formats and quantities appropriate for each channel.
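
    One common way to cope with this – sketched hypothetically below – is to keep one canonical answer and generate channel-appropriate renditions from it, rather than authoring separately for every channel.  The structure and rendering rules here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CanonicalAnswer:
    """One canonical piece of knowledge, rendered per delivery channel."""
    title: str
    summary: str        # one or two sentences
    steps: list         # the full procedure
    reference_url: str

    def for_chat(self):
        # Chat wants the shortest useful form plus a pointer to the full answer.
        return f"{self.summary} Full steps: {self.reference_url}"

    def for_email(self):
        # Email can carry the summary and the steps, but must stay readable.
        body = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(self.steps))
        return f"{self.title}\n\n{self.summary}\n\n{body}"

    def for_web(self):
        # The web article is the complete, canonical rendition.
        return {"title": self.title, "summary": self.summary,
                "steps": self.steps, "link": self.reference_url}
```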

    What’s a knowledge manager to do?

    I contend it’s back to first principles – at the end of every one of these interactions, people want the same thing they did 20 years ago:  clear, complete, actionable answers to their specific question or knowledge request.  Aside from the other interesting dynamics introduced by social media (aspects like reputation, collaboration, swarming, etc.), an answer is still an answer.  As such, many if not most of the same principles of good knowledge management apply:

    • Recognizable Context – the subject matter needs to be presented within a scope where it can be easily found on the channel it’s on.  Subject matter, products or topics, authors, dates – all the parameters necessary to find information are still important for users to quickly search for and evaluate what’s most relevant to their request.
    • Fit to task – the information provided has to meet the specific ability of the requestor to actually DO the thing required.
    • Quality – good information is still readable, audience-appropriate, up to date, accurate and complete.
    • Efficiency – results have to be achievable in predictable ways.  The user has to have a clear idea of how to navigate quickly and accurately to the best answers.

    Of course, a lot of interesting questions arise in this new open-ended knowledge sharing world, such as:

    How do you give context to something as small as a chat thread?

    How do you assure stylistic compliance in an open community?

    Who’s to say what’s most accurate, credible or relevant?

    A lot of the answers to managing social media come from insights into the dynamics of each specific collaborative group.  In the early 2000s the Consortium for Service Innovation did some research towards proposing a collaborative model for knowledge sharing.  We called it the “Adaptive Organization”.  The interesting thing that emerged from this work was that the dynamics that drive collaboration, sharing, and overall quality of effort within an organization are the same dynamics that are fundamental to any well-functioning community.

    The CSI’s initial KM strategy, KCS (Knowledge-Centered Support), focused on creating collaborative internal communities of support agents sharing knowledge actively and dynamically.  The same principles of joint ownership, transparency, reputation and knowledge-focused behavior extend to external communities as well.  So the basic rule of thumb in managing collaborative knowledge is to help define the frameworks for collaboration, stimulate flexible but standard knowledge models, and coach participants to quality through examples of effective knowledge sharing.  It IS possible to manage knowledge in community – one just has to stay focused on getting the community to manage its knowledge (!)

    There are many challenges emerging in truly managing a multi-channel, multi-mode knowledge exchange environment.  The world of our children will look even more different from the world we see today – I’m certain of it.  My 18-year-old expects the entire world to appear to him on demand from the browser window on his phone.  To him a “knowledge base” is a quaint artifact of prehistory.  The companies that seem to be meeting HIM where he already is, with the information he requires, as he requires it, will earn his loyalty and repeat business.  The idea of a tech support phone call is already a running joke in his generation – an admission of failure and lack of capability by the company one has to call.

    The winners in the knowledge delivery game will be those who can master this fragmented, multi-channel approach to knowledge delivery in a way that leverages core knowledge while simultaneously achieving effective dynamic, multi-channel distribution.  In many respects THIS is the new frontier of KM!

    John Chmaj
    “The Knowledge Advocate”

     