Seed and Soil

Siddhartha Mukherjee, the brilliant Columbia oncologist and Pulitzer-winning writer, has struck again. In a recent issue of The New Yorker (September 11), his article “The Invasion Equation” describes a striking leap of insight that could transform cancer research. This insight, called “seed and soil,” brings ecological or systems thinking to the study of cancer — and could equally be applied to management interventions.

Mukherjee is, as always, dazzling in the depth of historical knowledge he brings to any given point. A key issue in cancer treatment is whether a given cancer will stay in place — and be treatable there by surgery and/or chemotherapy — or whether it will metastasize into cancers in other parts of the body. It turns out that this has less to do with the nature of the cancer itself, and more to do with the nature of the host tissue. The fundamental research question Mukherjee addresses is, “Why do cancers spread more often to certain parts of the body than to others?”

What determines cancer’s journey?

Mukherjee traces this line of inquiry back to the work of a 19th-century doctor, Stephen Paget, who noticed that cancers spread more often, for example, to the liver than to the spleen, in spite of the many similarities of these two organs. The characteristics of the cancer cells, in other words, play only part of the role. It is also something about the tissue that plays “host” to the tumor that makes metastasis more or less likely.

Paget termed this “seed and soil,” a concept that will resonate with those who have studied ecology or systems thinking. Mukherjee claims that it is coming back as an avenue of inquiry after lying largely dormant for the past century.

Mukherjee displays leaps of insight that would delight any creative artist. He goes on to show how the seed and soil metaphor applies to other areas of biology — the overbreeding of mollusk populations in the Great Lakes, for example.

Is consulting like cancer? Or like clams?

Organizations are made up of people, and people are biological entities — so it often makes sense to apply biological models and metaphors to organizational phenomena.  When I read about this breakthrough thinking, it occurred to me that the seed and soil concept also translates directly to the work we do as consultants. We have our favorite techniques and interventions, things that have worked with other clients, things that are heralded widely as “best practices”. This has become a whole industry, with books and articles advising organizations on how to replicate the success of their peers.

A more experienced consulting practitioner will realize that it is often the characteristics of the host — the client ecology — that determine whether a given intervention will succeed, as much as the intervention itself. The client (the “soil”) may be receptive, it may be skeptical, or it may be dead set against change, or even hostile to it. These factors, every bit as much as the design and execution of the intervention itself (the “seed”), determine whether any given intervention will take root (to extend the metaphor) and eventually succeed.

Intervention = change management

I’m always amused by the study of “change management” as if this were some specialized branch of consulting. All consulting is change, or at least intended to be — and all consulting must therefore consider how to effectively catalyze and manage that change. The goal of any meaningful intervention is to move your client from Point A (the current client state) to Point B (the desired future client state). Any intervention plan that fails to describe clearly the conditions and actions that will enable this movement — for that individual client — is fundamentally flawed.

Why does a consulting intervention that worked in one client situation fail to produce the same result elsewhere? The seed may have been right, but the soil was not receptive. Knowing how to assess and condition the soil (i.e., the enterprise culture) thus becomes critically important in creating successful client outcomes.

Columbia IKNS Residency – August 2017

As a faculty member of Columbia University’s Information and Knowledge Strategy (IKNS) program, I have a variety of duties and responsibilities. One of these is to actively participate in the “residencies,” a twice-yearly physical coming together of the students — many of whom live and work outside New York City, and some of whom are outside the United States. (See the great 2017 cohort below.)

It’s always exhausting, due to the string of 16-hour days and the need, as a faculty member, to always be “on” for advising students. My job consists primarily of counseling student teams who are working on live consulting projects within sponsoring organizations, many of which are large and complex — NASA and the United Nations, for example.

“Year of the KVC”

This cycle was especially challenging — and rewarding — as the students had clearly embraced the Knowledge Value Chain model to an extent that had not happened in previous cohorts of the program. This was not only hugely gratifying to me, it was a substantive assist to my work. As students engaged the model on behalf of their clients, many of them also engaged my help in determining how the model could benefit their clients.

While the students were respectful of my time, and even apologetic in some cases, I pointed out that this is how the KVC model grew in the first place — with input from clients and interested friends. And that is how it will continue to grow in the future — so I regard no question or application as off-topic or non-fruitful. It’s all good!

Managing projects

One Columbia team used the KVC to chart their own progress in their consulting project, which lasts a total of eight months. Two key lessons emerged: (1) that the consulting effort itself passes through the essential stages of the KVC — collecting Data, analyzing it, and presenting it to management in the form of Intelligence; and (2) that the production of Value from that Intelligence remains solidly in the domain of the client.

The client (the “user” in KVC terms) is uniquely empowered to actually develop value-producing outcomes from the consulting effort: making decisions, allocating resources, and taking actions based on the findings. That said, the consulting team can provide a roadmap by which this knowledge-to-value transformation can optimally be operationalized by their client.

Discovering expertise incentives

Another team is helping a European scientific research agency improve the speed and value of their innovation process. They used the KVC to discover that the incentives for knowledge production at the micro level — that is, the incentives for each individual subject matter expert within the organization — were not well-aligned with those experts’ actual day-to-day needs and workflow.

Designing knowledge products

Another team is working with a global (and quite sophisticated) consulting firm to develop client-facing knowledge-based service and product offerings. They are interested in including both hard and soft costs and benefits in any ROI model.

I also helped them with the idea of “verticalizing and horizontalizing” their financial benefit/ROI models. That is, knowledge ROI targets should be developed specifically for each industry vertical and for each horizontal type of intervention (e.g., community-of-practice development, lessons-learned capture and curation, etc.). This is primarily because the KPIs and other performance metrics impacted by the effective use of knowledge will vary across both industry sectors and enterprise functions.
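To make this concrete, here is a minimal sketch of what such a two-dimensional target structure might look like. The verticals, intervention types, and target ratios are purely hypothetical illustrations, not figures from any engagement.

```python
# A sketch of a "verticalized and horizontalized" set of knowledge-ROI targets.
# The verticals, intervention types, and target ratios are hypothetical.

ROI_TARGETS = {
    # (industry vertical, intervention type): target benefit-to-cost ratio
    ("pharma",  "community_of_practice"): 3.0,
    ("pharma",  "lessons_learned"):       2.0,
    ("banking", "community_of_practice"): 2.5,
    ("banking", "lessons_learned"):       4.0,
}

def roi_target(vertical: str, intervention: str) -> float:
    """Look up the ROI target for a given vertical/intervention pair."""
    try:
        return ROI_TARGETS[(vertical, intervention)]
    except KeyError:
        raise ValueError(f"No target defined for {vertical} / {intervention}")

print(roi_target("banking", "lessons_learned"))  # 4.0
```

The structure itself is trivial; the real work is setting defensible targets cell by cell, since each pairing touches different KPIs.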

Capturing global lessons learned

Finally, one student came to me with a hard-copy KVC Handbook that was dog-eared and underlined and/or highlighted in several places on just about every page. It was a mess — a beautiful mess! Though she seemed embarrassed, I was of course delighted — this is a clear sign that she is actively using the book and the constructs that it represents.

Her team is working for a global NGO to fix and empower their “lessons learned” capture and curation process. That process is not providing sufficient macro-value (i.e., value to the enterprise) because it does not provide micro-value (i.e., value to each individual who supplies the system with data on his/her projects) to actually generate a critical mass of reliable data. They are in a downward-spiraling “doom loop” of low-quality/low-usage.

They are using the “value leverage” formula Value = Benefit / Cost to increase the value of this knowledge-producing activity by decreasing its implicit cost — specifically, the time it takes each agency professional to make meaningful contributions to the lessons learned database.
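As a rough, purely hypothetical illustration of that lever: holding the benefit of each contribution constant, cutting its time cost raises the value ratio proportionally.

```python
# The "value leverage" formula V = B / C, applied to the lessons-learned example:
# holding the benefit of a contribution constant, reducing its time cost
# raises the value leverage. All numbers are hypothetical.

def value_leverage(benefit: float, cost: float) -> float:
    """Value = Benefit / Cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return benefit / cost

benefit_per_entry = 100.0                            # hypothetical benefit units
print(value_leverage(benefit_per_entry, cost=45.0))  # ~2.2 at 45 minutes per entry
print(value_leverage(benefit_per_entry, cost=15.0))  # ~6.7 at 15 minutes per entry
```

Same benefit, one-third the time cost, three times the leverage — that is the lever this team is pulling.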

These projects and the resulting stories will continue to unfold through the end of this year. I’ll report here any significant non-NDA’ed developments that you should know about.

What is the Difference Between Information and Knowledge?

In my KVC Handbook v. 4, I draw a clear distinction between knowledge and information — essentially that knowledge is a more “processed” version of information. In speaking with people I find that this difference is still not totally understood — so I will amplify it here.

The short version

Simply put, the distinction is this: information is essentially inanimate — organized data that has been captured in databases, papers, books, news articles. Information is essentially mediated — by definition it exists only as embedded in a medium like those mentioned.

Knowledge, on the other hand, is essentially human. What we mean when we talk about knowledge is invariably embedded in an animate being. (I’ll allow that this definition could recognize that animals have knowledge — but until such time as they can write or talk understandably to us about it, I’m willing to let that line of speculation go.)

A book on the shelf is information — until a person reads it, understands it, and absorbs it. Then (and only then) it has been converted into that person’s knowledge. (When the person subsequently socializes that knowledge and applies it to make decisions and/or take actions, then it has become intelligence. But that’s a discussion for another time.)

But what about “explicit knowledge”?

There are those who speak of tacit knowledge — implying that there are also varieties of knowledge that are non-tacit, i.e., explicit knowledge. The scientist-philosopher Michael Polanyi is said to have first coined this distinction in the late 1950s, and it has become widely accepted, even canonical, in the knowledge management field. In 1995, Nonaka and Takeuchi developed a model (“SECI”) for how individual tacit knowledge is converted into explicit knowledge, then socialized within the enterprise.

We think a wrong turn was taken. The KVC framework finds “explicit knowledge” a contradiction in terms; we define all knowledge as quintessentially tacit. Knowledge that has been mediated — by speaking it, writing it, entering it into a database, etc. — is what we identify as information.

We fully agree with most experts — and this was Polanyi’s original driving insight — that “we know more than we can tell.” Indeed, we find this a titanic understatement. We can capture only a fraction of what we know in mediated form (information).

“Explicit knowledge” is an index

We might productively think of the mediated information about our knowledge as an index to that knowledge — a series of pointers. Even in its much-reduced, codified form, such an index nonetheless plays the vital role of navigating us into, and within, the body of knowledge.  Information enables our knowledge by providing us ways to access it.

Have you ever tried to write down all the things you did, the conversations you had, and the thoughts and daydreams you had within ONE day? James Joyce did this when writing his magnum opus Ulysses, which runs to more than 1000 pages — and barely scratches the surface of its characters (who, though fictional, are based on real people).

In many cases, one might even question whether the transfer of knowledge to its information analogue has much value at all. An often-cited example is learning to ride a bicycle. An experienced bike rider could explain for days, in great detail, how it is that she rides a bike.  Her student will listen for days on end, asking lots of questions — but without being able to ride himself.

While the explanation (the information) can serve as a foundation for developing the knowledge of how to ride, much more fruitful in that process is trial and error, practice, and just getting the hang of it.

Information is static

What support can I offer for drawing this clear distinction between information and knowledge? I start by comparing the essential characteristics of information and knowledge.

Let’s compare the two in terms of their dynamism — the speed at which, and degree to which, they change. Information is essentially static. I have a book sitting on the shelf; when I open it a year from now I will expect it to have the same content it does today. And if, for any reason, it does not — then it has not fulfilled my basic requirements for a book. Information does its job by remaining reasonably static.

But what’s “static” about real-time data?

What about databases that are continuously fed by sensors — for example, the health metrics of people wearing smart watches? Or your social media feed? While it’s true that they are continually being refreshed or added to — once that refresh process is completed, the information recorded as of a certain moment is there permanently.

So permanently, in fact, that there is now a “right to be forgotten” legal movement to make Internet data more dynamic — specifically, to have data scrubbed that would be incriminating, embarrassing, or is otherwise not wanted.

But absent some action to expunge such data, the default is that it stays around forever. Information is static at its essence.

But life is dynamic

While information does not change, the underlying phenomena it describes do change, continually. As soon as you commit “knowledge” to a medium — essentially converting it into information — it starts to become “out of date” — it decays, in other words. Information has a shelf-life, a half-life, during which it becomes progressively less useful as a representation of “what is”.

And knowledge is dynamic

Knowledge, on the other hand, is dynamic at its essence. Knowledge is adaptive — continually shifting, being modified, being enhanced. Knowledge is “wet” — it is organic — it is human.

Knowledge does ITS job by being dynamic. Change, adaptation, and evolution of knowledge are essential elements of its character. If knowledge does not have these characteristics, it is not fulfilling its purpose.

Does this matter?

Is information versus knowledge just a semantic distinction that makes little difference in the real world? I think not, and here’s why.

It has to do with how these resources are managed in the real world. I propose above that information is essentially static, and knowledge essentially dynamic — in other words, that these two are essentially different as economic resources. If this is true, it follows that the manner in which each resource is optimally managed within organizations will likewise be completely different. If, for example, you try to manage knowledge as if it were information, you violate the essence of the resource — dramatically increasing the likelihood that you will fail.

And this is exactly what happens in many “KM systems”. Tacit knowledge is made explicit through an elicitation process (whether moderated or self-powered), then put into a database for storage — where it can (in theory) be retrieved and reused — that is, converted back into knowledge by another user.

My experience is that, in practice, this “knowledge re-use” rarely works as effectively as expected, for a number of reasons. Primary among these reasons is that the knowledge, once made explicit by being converted into information, is no longer dynamic. It ages and becomes progressively less useful — often quickly.

When we manage information as if it were knowledge, we court failure

Our temptation in conflating information and knowledge is to manage the former while asserting that it is the latter. We can think we are “managing enterprise knowledge” by, for example, monitoring DOCUMENT metrics — the number of times documents are viewed, downloaded, “liked” or endorsed, etc. Documents are information — static and by definition out-of-date.

Document access may be useful as an OUTPUT measure — but not as a true metric of KNOWLEDGE, and even less as an OUTCOME measure, which is what really matters. It tells us nothing about whether the document was converted into knowledge (i.e., by being read, understood, and discussed), and even less about whether it was converted into Results, Outcomes, and Impact — the true measures of knowledge value.

By managing information while we intend to manage knowledge, we let ourselves off the hook by getting off the “value elevator” on a lower floor than we might optimally do. We measure what is easy to measure — instead of what matters.

Building Knowledge Value in Practice

My basic work and message have been steadfast for a couple of decades: helping companies use information and knowledge more effectively in the service of functioning and competing better.

However, I find that the way I express this core message varies based on the level of sophistication of my audience and on the level of the opportunity it represents.

In speaking recently with students at Columbia University’s innovative Information and Knowledge Strategy program, I used a technique recommended by many successful speakers — distilling a complex message down to core “principles of practice” that can be readily applied.  I’ll briefly describe below what those are.

The inspiration for my talk started one year prior, when Larry Prusak had spoken to a previous group of students, and observed that, “We have to figure out how to SELL this knowledge stuff.” Larry is a pioneer in the knowledge field and has a knack for knowing — and saying — what is really happening. So his words resonated deeply with me, and I vowed to use my training in business strategy to help these students “sell” their work.

To me, selling anything is primarily about getting the potential buyer to recognize the value of what you are selling — after that, the stuff sells itself. I believe this insight applies not only to knowledge but to all B2B sales — and B2C too.

Knowledge producers and practitioners, though often acting as internal staff resources rather than outside agencies, still must “sell” their work and its value. When they do not understand this, or do not know how it applies to their work, their contribution is undervalued, their careers can suffer, and so on. It’s not a happy situation.

I believe that the key to escaping this low-value loop is to understand and practice these six principles:

The Language of Value

Understand the basic concepts and metrics of value. ROI = V = B/C. This means that ROI (= value V) equals the ratio of the benefits (B) of an effort to the costs (C) of that effort. Both benefits and costs are incremental, i.e., those that would not have occurred without the effort. Both benefits and costs should include those that are “hard” — direct and readily measurable — and those that are “soft” — indirect and less available to measurement.
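As a minimal sketch of this arithmetic — with entirely hypothetical figures — the calculation looks like this:

```python
# A sketch of the Language-of-Value arithmetic: ROI (= value V) is the ratio of
# incremental benefits to incremental costs, hard and soft components included.
# All figures are hypothetical.

def knowledge_roi(hard_benefits, soft_benefits, hard_costs, soft_costs):
    """V = B / C, where B and C each sum their hard and soft components."""
    benefits = hard_benefits + soft_benefits
    costs = hard_costs + soft_costs
    if costs <= 0:
        raise ValueError("total incremental cost must be positive")
    return benefits / costs

v = knowledge_roi(
    hard_benefits=250_000,  # e.g., measured cost savings attributable to the effort
    soft_benefits=100_000,  # e.g., estimated value of faster decisions
    hard_costs=120_000,     # e.g., staff time and tooling
    soft_costs=30_000,      # e.g., estimated disruption during rollout
)
print(round(v, 2))  # 2.33 -- incremental benefits are roughly 2.3x incremental costs
```

Both the inputs and the output here are only as good as the assumptions behind them — which is exactly where the next principle comes in.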

User-Centricity

Determine the benefits and costs — hard and soft — from the User’s perspective. It matters much less what the Producer of knowledge thinks the value is than what the User thinks it is. I see this mistake made all the time. It is easier to ascribe value at a distance to a knowledge product (a term I use to include knowledge services) than it is to empirically determine what the value is to the User. But it can be done, most often by asking the User.

The Value Sphere

Understand who benefits from knowledge: who cares, how much, and why. The benefits of a knowledge initiative can vary with different stakeholders in that initiative, whose own value systems differ. A success in a human-resources-sponsored knowledge effort could be measured differently than in an innovation-sponsored or IT-sponsored effort. The interests and incentives of the people working in these areas vary, and the value they ascribe to knowledge will vary accordingly.

Enterprise Value Metrics

Understand how your client thinks about and communicates value. Within a given enterprise (business, government agency, NGO), there will be many existing metrics — externally-facing ones like those required by regulations and/or stakeholders — and internal ones like Key Performance Indicators, scorecards, dashboards, and so on. Understanding what these metrics are and how they work in an organization is key to managing within that organization.

Value Impact

Align the benefits of your knowledge product with these value metrics. The benefits of any knowledge product should map to the overall goals and strategies of the enterprise in as direct and measurable a way as possible. Demonstrating enhanced revenues or cost savings is among the most persuasive arguments for a value-adding knowledge initiative.

Value Proximity

Identify and remove the barriers to producing enterprise value (“knowledge-value gaps”). I often notice that, rather than there being a clear chain between knowledge production and enterprise value, there are gaps. Such disconnects can be identified and fixed. The KVC methodology is designed to do this rapidly and effectively.

So there it is. I hope this helps you. Contact me if you have questions or would like a copy of the full slide deck.

Positioning Knowledge for Value

The early-winter holiday break is an opportunity to recharge our batteries and refocus our strategies. Amidst visiting with family and friends, I took time to reflect on the recent past and what the future holds.

Among other things, I realized that over time my clients have been paying more attention to the top half of the Knowledge Value Chain (how knowledge is used) and less to the bottom half (how knowledge is produced).

What drives knowledge strategy?

Ideally, knowledge strategies spring from, and are tightly linked to, top-level enterprise strategies. In practice, however, many of the problems in knowledge production spring from misunderstandings of, or lack of clear linkages to, enterprise value.  Some of my research on this is cited in the KVC Handbook.  This knowledge-value gap raises several existential questions about knowledge-centric activities, among them:

  • How does knowledge support our enterprise mission and strategies?
  • What tangible benefits does knowledge provide us?
  • Is our knowledge strategy optimized in an economic sense?

Any lack of clarity at the top of the pyramid tends to get driven down through the chain, where it causes tactical and executional confusion and ineffectiveness.  Those of you in the trenches will know what I mean…

Benefits-driven positioning

If you are a knowledge producer, do not wait for those problems at the top to get sorted out — seize the initiative yourself!

We’ve been advising our clients: Always position your product (and I use this term to include services) from the point of view of the needs of, and benefits to, your user/customer/client/patron. Not — as so many of us do instinctively — from how your product works, why it is wonderful, or even why it’s better than your rivals’. The diagram below summarizes TKA’s discovery process for working with clients on this.

As has been said so often it is becoming a management cliché, customers in general seek to fill a need (the hole) rather than buy a product (the drill). Your user is at the center of the value ecosystem — not you, nor your product, nor your expertise, nor your enterprise.

Most of us will nod our heads in agreement with this, and will “talk the talk” about the importance of our User-clients. Yet many of us will continue in practice to (usually unconsciously) put the customer somewhere else other than at the very center of our competitive ecosystem.

Let’s be clear on this: customers are all-important because they are ultimately the major sustaining source of enterprise value — that is, the flow of financial and other resources that keep everything else running. Without customers, there is no enterprise.

What is positioning?

In many cases this is largely a matter of positioning. Positioning is basically the representation you create in your customer’s mind (and wallet) of what your product is and (more importantly) the job it accomplishes.

Positioning is an essential element of branding, and is captured in the words you use to describe customer benefits. Often these are contained in a tag line, like: “Be well” (Merck); “Be brilliant” (Ameriprise); “Performance with purpose” (PepsiCo); “Building a better working world” (EY). Each of these statements focuses more on the customer/client than on the product or its provider.

Customers don’t buy products — they fill needs

Customer benefits-driven positioning is needed for any product or service, not just knowledge-based services. In general, customers buy largely based on what they need (solutions) — not on what you sell (products). If those two coincide, so much the better. To the extent they do not, your positioning should be re-calibrated against current and future customer value expectations.

I personally experienced an example of this early in my career, when I worked as a marketing manager in one of the world’s four global accounting/consulting firms. This was during the 1980s, when the long-held legal prohibitions against marketing these services had suddenly been lifted — and these firms (along with law firms, whose marketing was likewise deregulated around that time) began to test the marketing waters and to sharpen their competitive knives.

From “audits” to “assurance”

These firms at that time all had as their major product the audit — an elaborate, labor-intensive, expensive process essentially designed to determine whether a client’s published financial statements accurately represented its true financial condition to stockholders.

Through positioning research both formal and informal, these firms became aware that their value proposition was fundamentally something like, “Give me, as a stockholder, the assurance that this company is telling the truth in their books.”

As a result, during this period, these firms moved their positioning away from “we do audits” toward “we provide assurance”; in other words, toward customer benefit-centric positioning. This re-positioning remains largely in effect to this day.

Strategic pivots

Aside from meeting the customer where he or she is, there is a strategic benefit to being benefits-focused, rather than product-focused. It opens up opportunities to provide other solutions that may supplement, or even replace, those solutions that you offer currently.

When Lou Gerstner was recruited in 1993 to lead the then-foundering IBM, he found that what they were selling — mainframe computers and software to run them — was not fully solving their customers’ data management problems. They quickly diversified (by acquisition) into services, a move which most of their rivals quickly followed. Services now make up 60% of IBM’s revenue.

Being customer-centric thus opens up strategic pivots that you might not otherwise consider.

Determining customer needs

Let’s assume you have truly adopted this customer-centric outlook. How do you determine your users’ present — and future — needs?

The most common practice is to ask them, in one form or another, “What do you need?” This makes perfect sense — but unfortunately is rarely effective. To paraphrase Steve Jobs, “It’s not the customer’s job to know what she wants — it’s OUR job.”

Our role, in other words, escalates to what I call knowledge leadership — to distinguish it from knowledge management, which sounds (and too often actually is) reactive, passive, and producer-centric.

Things to do

Take these steps now in developing a new product or re-positioning an existing one:

  1. Identify your users — the crucial step (and typically not as straightforward as it sounds)
  2. Assess those users’ needs, met and unmet, present and future — not by asking directly — but by skilled observation of their workflows, incentives, and goals
  3. Productize your solution to satisfy those user needs
  4. Iterate your productized solution to minimize the gap between What We Sell and What They Buy
  5. Position your product as a tangible benefit within your client’s value ecosystem

Busy Season

I hope each of you is enjoying this new year — whenever it is that you celebrate its beginning.

For me, 2017 is already full of new beginnings and revelations.  To recap:

Launch of KVC Clinic v.2.0

In the fall of 2016, we launched an expanded version of our KVC Clinic. This includes three days of on-site work with each intelligence or knowledge services group, as well as depth interviews with internal clients. It’s a hybrid event incorporating experiential team learning, organizational diagnosis, and customer research.

We received enthusiastic engagement from our initial host team, and were able to quickly develop a solid set of recommendations going forward. We are delighted that this client has added the KVC as a major 2017 initiative in their knowledge services program.

Launch of St. Clair knowledge services book

I am fortunate in my work to have been introduced to many of the world’s leaders in the fields of knowledge and intelligence — and even more fortunate that some of those have become colleagues, confidants, and (in a few cases) friends.

One of the latter is “Mr. Guy” St. Clair, a pioneer in moving-and-shaking libraries into the modern field of knowledge services. Guy’s wide-ranging set of professional experiences and contacts, and his lively and creative mind, make him someone people in this emerging field want to listen to.

In his important new book, Knowledge Services: A Strategic Framework for the 21st Century, Guy lays out a program for any organization to move ahead and capitalize on the rapidly-moving developments in this field. Devotedly “Druckerian” in outlook, he rightly emphasizes the leadership and organizational-culture aspects of enterprise knowledge — those constants that do not change rapidly, and that constitute much of the difference between success and failure.

Though Guy’s own voice comes through clearly, he also cites and includes contributions from a range of experts, some of whom have been assistants or guest speakers in his classes at Columbia University.

Among these, I am delighted that Guy has generously cited and discussed the KVC framework. (It was through his initiative that the KVC framework was added to Columbia’s curriculum.) He positions the KVC as the framework for conducting a value-added organizational knowledge audit — generally acknowledged to be the starting point for most any successful improvement initiative.

To use Guy’s words, he and I are “fellow travelers.” I value our frequent dialogues, and count myself wealthy in having him as a colleague and friend.

Eric Garland’s KVC analysis

And speaking of colleagues who became friends, another of these is futurist and digital man-of-the-world Eric Garland.

Eric is a master of digital engagement, and a knowledgeable speaker who always entertains — and who never minces words. His 127-tweet thread on the 2016 US election went viral and has been called (among other things) “a Federalist Paper for 2016.” He has a shot at becoming Tom Paine meets Hunter S. Thompson meets the Internet.

A few weeks ago, Eric totally surprised me with his KVC-based analysis of the Russian interference in the 2016 US presidential election. Interesting indeed, and certainly an innovative and useful application of the KVC framework.

Columbia IKNS Capstone projects

It’s the chance to meet rare individuals like Guy St. Clair and Eric Garland that has always driven my engagement with professional networks (such as SCIP), and now drives my engagement with students and peers in academia.

To that end, I have been asked by Program Director Kate Pugh to re-up as a Capstone project instructor and coach for Columbia’s Information and Knowledge Strategy program. Kate is the central driving force of the program, the hub of a wide network of interesting people, and herself a leader in the “knowledge” field.

I look forward to working again with two returning members of our “2016 instructional dream team,” Madelyn Blair and Nita Gupta.  I will greatly miss working with Vanessa DiMauro, with whom I collaborated often — but hope to be working with her on other initiatives.  I welcome the chance to get to know and work with Chris Samuels, who is joining us.

We will use the KVC as a Capstone framework that integrates others’ work in the value and ROI of knowledge. There are few experiences quite so fulfilling as interacting with students — the future leaders of the world’s knowledge economy!

Bad Night for Big Data

I have a nightmarish pet scenario: as we as a society gain non-stop access to ever-increasing data, there is a risk that we actually get progressively dumber — as we lose the ability to process and analyze that data sufficiently.

My idea got a workout this week during election night, when the polling industry, most of whom had predicted a single- or even double-percentage-point Clinton victory, got it monumentally wrong.

When we hear on TV every ten minutes about how Watson is curing cancer, among other breathless hype about Big Data, an error of this stunning magnitude seems at first paradoxical.

But the more you think about it, the more it makes a perverse kind of sense.

“Dewey Defeats Truman”

Embarrassing election errors are nothing new — witness the iconic photo of President-elect Truman gleefully displaying the newspaper headline “Dewey Defeats Truman” the day after the 1948 election.

People claimed then that the error was due to a combination of slow reporting and the print-era need to prepare headlines hours in advance of publication.

What IS new is that polls are now easier and cheaper to field, and as a natural consequence there is a proliferation of them. And, as they are invariably deemed newsworthy, they feed the hungry news-cycle monster. They generate eyeballs and click-bait — and they’re fun, especially when your own pick is ahead.

Especially toward the close of this 18-month campaign, it seemed like a new poll was appearing every other day. We became so collectively absorbed in the twitching poll dashboards that we neglected the fleeting opportunity to discuss in any depth the serious challenges facing our country and our society.

Let’s figure this out

I’m confident that over the coming weeks we will see a vast, rolling post-mortem on how things went so wrong — discuss amongst yourselves, and please tell us what you come up with. It’s way too important not to figure this out.

Some of the early hypotheses include:

  • SAMPLING ERROR. The sample selection was biased by cord-cutting, the tectonic rolling shift in the US from phone landlines to wireless.
  • THE NEWS TEAM BUBBLE. Most of the major media are based in cities on the east and west coasts (New York, Washington, Atlanta, LA). People talking to like-minded people creates an echo-chamber effect, where differing perspectives tend to remain largely unheard, much less tolerated.
  • THE CHAOS EFFECT. Voter behavior is complex, and can be influenced by small, apparently non-related events — like public leaks of hacked email threads. One commentator compared it to weather forecasting in this regard, evoking chaos theory.

Not so fast, would be my retort on this last one. The weather is inanimate, and will happen regardless of what we forecast about it. In elections, by contrast, forecasts are used to allocate resources in real time. A forecast that Wisconsin would vote Democratic caused the Clinton campaign not to show up there even once after the convention — leading them to lose by a small margin. This gives new meaning to the term false positive.

As I write this, I note that intelligence expert John McGonagle, who is also my colleague and friend, has blogged about this.

Data is not deterministic

The New York Times’ Upshot column — one of my must-reads these days — was a bellwether of this. Nate Cohn’s September 20 headline says it all: “We Gave Four Good Pollsters the Same Raw Data. They Had Four Different Results.” These results, which included the Upshot’s own analysis, projected two larger and two smaller Clinton wins and one small Trump win — all from the identical data set of Florida voters.

Trump won by one percent in Florida, and only one expert team (Stanford/Columbia/Microsoft) called that right. Cohn attributes this primarily to two factors: (1) the team’s use of voting history, rather than stated intention to vote, to indicate likelihood of voting — a key factor when you realize that only a little more than half of registered voters actually voted, and (2) their use of statistical modeling in weighting voter characteristics from the interviewed sample.

The key point here is — data by itself is not deterministic. Data does not “decide” anything by itself — the processing and analysis that follow are essential elements of the “value” equation — in this case, measured by whether or not you got it right.
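Here is a minimal, purely hypothetical sketch of that point: the same raw sample projects a different winner depending only on the analyst’s assumption about each group’s share of the actual electorate. The groups, counts, and weights are invented for illustration and are not drawn from the Upshot exercise.

```python
# Why identical raw polling data can yield different projections: the result
# depends on the analyst's weighting assumptions. The sample, the groups, and
# the electorate shares below are entirely hypothetical.

raw_sample = (
    [("college", "A")] * 330 + [("college", "B")] * 270 +
    [("non_college", "A")] * 170 + [("non_college", "B")] * 230
)

def projected_margin(sample, electorate_share):
    """Weight each respondent by the assumed share of their group in the electorate."""
    group_sizes = {}
    for group, _ in sample:
        group_sizes[group] = group_sizes.get(group, 0) + 1
    totals = {"A": 0.0, "B": 0.0}
    for group, choice in sample:
        totals[choice] += electorate_share[group] / group_sizes[group]
    return totals["A"] - totals["B"]  # positive means candidate A is projected ahead

# Same raw data, two different turnout assumptions, two different "winners":
print(projected_margin(raw_sample, {"college": 0.65, "non_college": 0.35}))  # ~ +0.013, A ahead
print(projected_margin(raw_sample, {"college": 0.50, "non_college": 0.50}))  # ~ -0.025, B ahead
```

Neither projection is “wrong” arithmetic — they differ only in the judgment layered on top of the data, which is precisely where value is created or destroyed.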

Data blindness

In my recurring nightmare, we get dumber as — and even because — we get more data. In KVC terms, we cycle endlessly around the bottom levels of the chain without gaining enough momentum to leap up a level or two and see what it all means. I have termed this “data blindness,” a variation of which I think describes the Election Projection Debacle of 2016.

Of course, there is much hand-wringing, even some falling on swords (albeit soft ones). Forecaster Dr. Sam Wang of Princeton went on national TV to fulfill his promise that he would eat a bug if his forecasts were wrong. Other forecasters are busy back-spinning their stories to explain how they actually warned people (somewhere in the fine print) that they could be wrong.

We should neither stop forecasting nor exercise the option, however tempting, of dismissing the entire forecasting industry out-of-hand. We should commit to doing much better — at polling, at critically analyzing the results, and at communicating what those polls signify — and what they do not.

Value Gets Lost

“I’m stuck at the bottom of the pyramid.” “My value is unclear to people who matter.” “I’m invisible.”

In conducting “Points of Pain” exercises during TKA’s workshops and on-site clinics, too often we hear things like this from competent and hard-working knowledge producers. In study after study, roughly half of the challenges expressed by PRODUCERS of organizational knowledge or intelligence involve questions or concerns about the value they generate.

More often than not, the questions are not about producing value per se — usually producers are pretty clear and confident about that. The major gap is that their client USERS do not understand this value — and that therefore they have trouble attributing the value of knowledge back to those who originally produced it.

Our output is their input

In economic terms, any knowledge or intelligence work product, while typically the OUTPUT or end product of a knowledge or intelligence process, is subsequently the INPUT or raw material for a client’s work stream. Knowledge users take over where knowledge producers leave off — that’s one of the fundamental lessons of the KVC framework. During the handoff — the Communication step — the knowledge work product is transformed into intelligence — the basis for decisions, actions, and the production of “enterprise value” (for example, a product that brings revenues into a business).

A KVC Clinic client recently pointed out to us that the KVC triangle graphic makes it appear as if value is only produced by people and processes at the top. This was a fundamental misunderstanding of the model — for which (of course) I take full responsibility, and which I hereby try to correct. Please read on…

Triangle and trapezoid

The Enterprise Value (EV) Triangle

What we mean by the word Value in the “little triangle” at the top of the KVC triangle is more properly specified as “Enterprise Value” (EV). Value is produced at each one of the seven steps in the KVC process (see the KVC Handbook p. 54). But value is only realized — i.e., made manifest and measurable in terms of revenues or other organizational outcomes or results — at the top.

Using one of my favorite analogies, the people who pick the grapes ultimately get paid by the people who buy the wine — but there is a value chain of activities that separate these two economic events in time and space.

The Knowledge Value (KV) Trapezoid

With knowledge services, the problem is that the production of “Knowledge Value” (KV) — what another client called “the trapezoid” below Communication — is often separated from the production of EV, in two respects: (1) separated in time, and (2) separated in organizational location.

Time and location

The stepwise transformation of KV into EV can take anywhere from moments to months — many months, in some cases. And the organizational unit that produces EV (for example, a new product development or sales team) is often far removed from the producers of KV (for example, a corporate research library or intelligence unit).

The tendency of the EV producers to attribute some portion of the EV to the producers of the supporting KV varies inversely with these distances in time and location. The longer it takes for EV to be realized — through increased sales or other tangible, measurable outcomes — the lower the tendency for the EV producers to attribute value back to the KV producers. “Out of sight, out of mind… and out of compensation.”

And because knowledge producers and users often work in different organizational teams, even in far-flung offices, EV producers are not reminded to make such attribution in the everyday course of their work. The value literally gets lost in moving from “here and now” to “there and then.”

Proximal value

That’s just human nature. People forget, even when they genuinely appreciate the input and would like to give credit for it. And regrettably, even that goodwill is not always present. “They take our work and pass it off as their own,” whether explicitly or implicitly, is a refrain we hear often from knowledge producers.

As a general rule, I have observed that the closer (in time and location) one is to the production of EV, the higher the value typically attributed to his or her input. I call this the Proximal Value effect (see p. 78 of the KVC Handbook).

Shrinking time and distance

Seen in this way, the challenge of increasing attributed value is essentially the challenge of shrinking those distances of time and location that separate KV from EV.

One possible way of doing this is to continually be on the phone or email reminding people of the inputs you contributed to their work. But this method is not only time-consuming, it may be seen as intrusive by your client. As a result, it may totally backfire.

The key is to understand how your knowledge work product produces EV (enterprise value = revenues or other organizational result).  You must do this first — and without this understanding, you will not be able to effectively do the rest.

A virtual presence

As a knowledge producer, your goal should be to create a virtual presence that outlasts your physical presence. Basically this means branding your knowledge-based work products so that they are always recognized as your work, and always associated with a value-added outcome (i.e., that produces EV) to which you made a tangible contribution.

To achieve this, leverage touch points at which you and/or your work products have contact with your client. Some of these include:

  • A logo (mark and/or distinctive font) and/or “tagline”
  • Stylized, attractive email signatures that consistently identify your team’s work
  • Titles of work products (PDFs, for example) that consistently identify your work

This virtual presence can be nicely complemented with a more active presence — periodic, friendly, non-intrusive follow-ups with the intent of adding further value. The combination will keep you, your work, and your contributions to Enterprise Value close to the top of your client’s mind.

More News from the Dark Side

I pay close attention to feedback I receive on the KVC and other analytic frameworks we are developing.  Many times I make revisions based on this feedback — that’s why the KVC Handbook is now on its fourth major edition.

One of the things I’ve heard is that the KVC model is too idealistic.  Even as I confess to being idealistic by nature, I think that’s a fair criticism.  And thanks to your feedback I use this blog (and my Knowledge Clinics) to address things not in the current edition of the book.

In the private sector, examples of damaging deviations from the ideal are as close at hand as this morning’s Wall Street Journal. Last month, for example, I outlined the issue of what happens when the Knowledge Value Chain is broken by chance — or corrupted by intention.

Intelligence in war

This month we examine a case that has been in play for a while, from public affairs in the US.  (Though even our readers in South Africa and elsewhere should take note — things like this could happen there too!)

In general the issue is the reliability of intelligence in an active war theater — here the ongoing actions against ISIS.  Does this sound familiar?  It should — read my earlier post about General Michael Flynn’s criticism and subsequent reshaping of the intelligence effort in Afghanistan.

And those of you who (like me) are baby boomers will remember this issue as it played out in Viet Nam.

Cooking the chain

Like the private sector examples last month, the principle at issue here is what I call cooking the chain. That is, determining first what the desired outcome is, then selectively gathering data that supports that “conclusion” — to the exclusion of other, possibly more plausible, alternatives.

In accounting, this is known as cooking the books.  In social psychology, it’s related to what’s called confirmation bias.  You decide what the answer should be, then you backfill and/or selectively choose the data to support that answer.  You may even need to twist, distort, recast, or spin the data — fill in your favorite variation — to meet your needs.

This is, of course, a complete perversion of the admittedly idealistic KVC model, which recommends planning your data collection process from the top down — but not actually shaping the data itself to fit a foregone conclusion.

Analytic integrity

This is what was alleged to be happening with US Central Command (CENTCOM) intelligence about ISIS. This is serious business, since intelligence is designed to support high-level policy decisions. As the report so directly states, “Analytic integrity is crucial to good intelligence, and good intelligence is crucial to making informed policy judgments.” While Congress was being formally told that ISIS had been reduced to a “defensive crouch”, the realities in the morning news indicated otherwise — the frequent made-for-cinema horrors accompanied by the captures of the Iraqi cities of Ramadi (which has since been taken back) and Mosul (which has not, as noted on the map below).

We now know that the analysts assigned to the situation knew better, too.  This is because one of them initiated a whistleblower action that resulted in the convening of a US Congressional Task Force to examine his allegations.  That group issued an unclassified report on August 10, 2016.  Just below the surface of the dry, bureaucratic details and gov-speak are some fascinating revelations, chief among them that:

  • intelligence products (i.e., reports) were “inconsistent with the judgments of many senior, career analysts at CENTCOM” and the wider intelligence community;
  • these reports were “more optimistic than actual events warranted”;
  • this was the direct result of pressure brought to bear on analysts to “distort or suppress intelligence” that would be seen as counter to the desired narrative, i.e., of a generally positive trend; and
  • CENTCOM leaders chose to rely on operational reports from forces in the field, “rather than the more objective and better documented intelligence reporting.”

And yet…

The Task Force finally reports being “troubled” that, despite the complaint filed in May 2015 and “alarming [internal] survey results” that followed in December 2015, nothing significant was done by those responsible to correct the situation.  On the contrary, the report notes that the CENTCOM leadership downplayed the significance of these events, calling such allegations “exaggerated”, and so far has not cooperated fully with some of the information requests from the Task Force.

The report details several changes made during 2014 that, though purportedly intended to improve the intelligence process, had the opposite effect of biasing it.  These changes included:

  • introducing a new layer of review into the process, which some analysts resented, and which slowed the flow of intelligence;
  • introducing a fusion center to serve as a round-the-clock focal point for intelligence — but neglecting to make its role, structure, or even existence widely known to analysts;
  • introducing a daily summary report that over time grew in length from a few pages to more than ten pages; and
  • eliminating the standing practice of coordination, whereby analysts have drafts of their work challenged by external analysts prior to being issued.

Culture flows from the top

The report implies that the systemic bias resulting from these changes originated from the then-current leadership of CENTCOM intelligence — described by their own analysts as “risk-averse and unwilling to accept uncertainty in intelligence analysis”.  Dissenting opinions, highly valued in the intelligence culture, were to be discouraged.

In a rare display of optimism, the report notes that, by the time of its release, these leaders had been replaced and certain of the problems addressed.

What would motivate someone to cook the books on something as important as intelligence related to our national security? These are serious allegations that could easily end careers if found to be true. Though it mentions in passing the successive organizational changes made in CENTCOM at the time, the report stops short of concrete answers — or even speculation about such explanatory details. Maybe there is a classified version that contains these answers? To be fair, the report does imply that this investigation is ongoing — so we’ll stay tuned.

There are plenty of historical analogues for this that could be instructive.  And certainly businesses are not immune to this — there is an almost cosmological force that propels “happy talk” to the top of the organizational scrum, while too often bad news is suppressed or tweaked beyond recognition.

The Value of Knowledge Makes Headline News

Information has its greatest value when it is most available to, and accessible by, people for immediate use in understanding their world. I not only believe this, I put this insight to work in my consulting and teaching.

To implement this, I often use stories from the headlines to illustrate my key points. There are so many examples illustrating the KVC in the news that I am confident that I can pick up a Wall Street Journal at random and find a real-world illustration of a key point.

I call this technique a “flash case” — since it has the teaching value of a standard business school case — but it has the key advantages that (1) it can be developed quickly and (2) it evolves over time as the actual events play out.

The Deutsche Bank Case

For example, I recently used the warning letters from the NY Federal Reserve Bank to Deutsche Bank (DB) about deficiencies in their capital-requirements reporting process. Yes, all that detailed, boring, low-level stuff — that can gut the fortunes of enterprises heretofore thought unassailable.

Graduate students in my audience at Columbia University were able to identify each aspect of the knowledge-value relationship in the case. Much of the discussion focused on this pivotal issue: was this a technology shortfall, or rather a systemic problem in corporate culture originating at the top? More the latter than the former was the class consensus — a view that has been largely borne out by subsequent events.

Around the same time as the capital reporting issues, DB was involved in the LIBOR-rigging scandal, in which several huge banks were found to have essentially fabricated data used to set key rates in the world financial markets. In April 2015 the bank was fined $2.5 billion by US and British authorities for its role in the scandal — more than any other single institution.

These and related issues led to a top-management shakeup at the bank in June 2015. DB’s stock currently sells for 1/3 of what it sold for at the beginning of 2014, and the cost of insuring the bank’s debt has risen significantly — a clear signal that the once-dominant institution is now considered a risky asset.

Theranos

There are other cases, with increasing frequency, that I have not yet developed into rigorous analyses. Recently in the news has been the case of Theranos — specifically, does their “pin-prick” blood test data fall within the quality tolerances required to accurately measure key indicators of patient health? Independent tests said no, and the credibility of the company and its services was questioned, first in sharp reporting by the Wall Street Journal, then by Medicare and other payers.

Walgreens cancelled a major contract with the company, and the viability of Theranos going forward is in question. Now their founder has been barred from the industry for two years, the company is under criminal investigation by federal prosecutors, and the Securities and Exchange Commission is looking into allegations of stock fraud.

Volkswagen

In the Theranos case, the quality of data was driving the viability of a venture-funded start-up which had staked its entire future on this issue. In another recent case, the credibility of an established global company, Volkswagen, was called into question by a data-quality issue.

The company’s US division was charged with tweaking its test software to give positive results on emissions of diesel engines during mandatory testing — then reverting to a higher level of emissions (and performance) once the test was over. This brilliant, highly unethical subterfuge had a huge negative impact on Volkswagen’s sales, brand, and corporate reputation. Several top executives were dismissed, and in 2015 the company experienced its first drop in global sales since 2002.

In June 2016 the company agreed to a $14.7 billion settlement with the car owners in the US — the largest class action settlement ever. They are expected to make additional restitution to the dealers.

Data and Value

The cases of Deutsche Bank, Theranos, Volkswagen, and others like them illustrate that in our knowledge-driven economy, the connection between data and value is neither theoretical nor abstract. It is having a huge impact on the bottom lines and corporate reputations of companies that fail to see or to act upon that relationship.

My title exaggerates to make a point.  While it’s not screaming in the headlines, the connection between data and value is an issue that affects many organizations, and has an increasing impact on their financial results and future viability.


About this site

    COMPETING IN THE KNOWLEDGE ECONOMY is written by Timothy Powell, an independent researcher and consultant in knowledge strategy. Tim is president of The Knowledge Agency® (TKA) and serves on the faculty of Columbia University's Information and Knowledge Strategy (IKNS) graduate program.

    ===================

    "During my more than three decades in business, I have served more than 100 organizations, ranging from Fortune 500s to government agencies to start-ups. I document my observations here with the intention that they may help you achieve your goals, both professional and personal.

    "These are my opinions, offered for your information only. They are not intended to substitute for professional advice."

    ===================

    We typically publish monthly on or about the 15th of each month, subject to our client workload. Use the RSS feed links below to subscribe to posts and/or comments. Better yet, follow us on Twitter @twpowell to be notified of new posts and related developments.

    Thanks for reading! Please mention us to others and add your non-spam comments and suggestions -- we value your input.

    ===================

    COMPETING IN THE KNOWLEDGE ECONOMY is sponsored by the Knowledge Value Chain® (KVC), a methodology that increases the value and ROI of Data, Information, Knowledge, and Intelligence.

    The contents herein are original, except where otherwise noted. All original contents are Copyright © TW Powell Co. All rights reserved.

    All KVC trademarks, trade names, designs, processes, manuals, and related materials are owned and deployed worldwide exclusively by The Knowledge Agency®. Reg. U.S. Pat. & TM Off.

    ===================

    E SCIENTIA COPIA. Knowledge is the Engine of Value.