I pay close attention to feedback I receive on the KVC and other analytic frameworks we are developing. Many times I make revisions based on this feedback — that’s why the KVC Handbook is now on its fourth major edition.
One of the things I’ve heard is that the KVC model is too idealistic. Even though I confess to being idealistic by nature, I think that’s a fair criticism. And thanks to your feedback, I use this blog (and my Knowledge Clinics) to address things not covered in the current edition of the book.
In the private sector, examples of “gross and damaging deviations from the ideal” are as close as this morning’s Wall Street Journal. Last month, for example, I outlined what happens when the Knowledge Value Chain is broken by chance, or corrupted by intention.
Intelligence in war
This month we examine a case that has been in play for a while, from public affairs in the US. (Though even our readers in South Africa and elsewhere should take note — things like this could happen there too!)
In general the issue is the reliability of intelligence in an active war theater — here the “war” against ISIS. Does this sound familiar? It should — read my earlier post about General Michael Flynn’s criticism and subsequent reshaping of the intelligence effort in Afghanistan.
And those of you who (like me) are “baby boomers” will remember how this same issue played out in Vietnam.
Cooking the chain
Like the private sector examples last month, the principle at issue here is what I call cooking the chain: deciding first what the desired outcome is, then selectively gathering data that supports that “conclusion” to the exclusion of other, more plausible alternatives.
In accounting this is known as “cooking the books”. In social psychology it’s called “confirmation bias”. You decide what the answer should be, then you backfill the data to support that answer. You may even need to twist, distort, recast, or spin the data — fill in your favorite variation — to meet your needs.
This is, of course, a complete perversion of the admittedly “idealistic” KVC model, which recommends planning your data collection process from the top down — but not actually shaping the data itself to fit a foregone conclusion.
This is what was alleged to be happening with US Central Command (CENTCOM) intelligence about ISIS. This is serious business, since intelligence is designed to support high-level policy decisions. As the report so directly states, “Analytic integrity is crucial to good intelligence, and good intelligence is crucial to making informed policy judgments.” While Congress was being formally told that ISIS had been reduced to a “defensive crouch”, the realities in the morning news indicated otherwise: made-for-cinema horrors, accompanied by the captures of Ramadi (which has since been retaken) and Mosul (which has not).
What we now know is that the analysts assigned to the situation knew better, too. This is because one of them initiated a “whistleblower” action that resulted in the convening of a US Congressional Task Force to examine these allegations. That group issued an unclassified report on August 10, 2016. Hidden just below the surface of the dry, bureaucratic details and intel-speak are some fascinating revelations.
The Task Force’s main reported findings were that:
- intelligence products (i.e., reports) were “inconsistent with the judgments of many senior, career analysts at CENTCOM” and the wider intelligence community;
- these reports were “more optimistic than actual events warranted”;
- this was the direct result of pressure brought on analysts to “distort or suppress intelligence” that would be seen as counter to the desired narrative of a generally positive trend; and
- CENTCOM leaders chose to rely on operational reports from forces in the field “rather than the more objective and better documented intelligence reporting.”
The report caps off by noting that “the Joint Task Force is troubled” that, despite the complaint in May 2015 and “alarming [internal] survey results” in December 2015, nothing significant was done by those responsible to correct the situation. On the contrary, the report notes that the leadership downplayed the significance of these events, calling such allegations “exaggerated”, and has so far not cooperated fully with some of the information requests from the Task Force.
The report details several changes made during 2014 that, though purportedly intended to improve the intelligence process, had the opposite effect of biasing it. These changes included:
- introducing a new layer of review into the process, which some analysts resented, and which slowed the flow of intelligence;
- introducing a “fusion center” to serve as a round-the-clock focal point for intelligence — but neglecting to make its role, structure, or even existence widely known to analysts;
- introducing a daily “summary” report that over time grew in length from a few pages to more than ten pages; and
- eliminating the practice of “coordination”, whereby analysts have drafts of their work challenged by external analysts prior to being issued.
Culture flows from the top
The Task Force report implies that the systemic bias resulting from these changes originated from the then-current leadership of CENTCOM intelligence — described by their own analysts as “risk-averse and unwilling to accept uncertainty in intelligence analysis”. Dissenting opinions, long valued in the intelligence culture, were to be discouraged.
In a rare display of optimism, the report notes that by the time of its release, these people had been replaced and certain of the problems addressed.
What would motivate someone to cook the books on something as important as intelligence related to our national security? These are serious allegations that could easily end careers if found to be true. Though it mentions in passing the successive organizational changes made at CENTCOM during that period, the report stops short of concrete answers, or even speculation about such explanatory details. Perhaps a classified version contains them. To be fair, the report does imply that the investigation is ongoing, so we’ll stay tuned.
There are plenty of historical analogues that could be instructive here. And certainly businesses are not immune: there is an almost cosmological force that propels “happy talk” to the top of the organizational scrum, while bad news is suppressed or tweaked beyond recognition.