Eight to Late

Sensemaking and Analytics for Organizations

Six heresies for business intelligence


What is business intelligence?

I recently asked a few acquaintances to answer this question without referring to that great single point of truth in the cloud. They duly came up with a variety of responses, ranging from data warehousing and the names of specific business intelligence tools to particular functions such as reporting or decision support.

After receiving their responses, I did what I asked my respondents not to: I googled the term.  Here are a few samples of what I found:

According to CIO magazine, Business intelligence is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data.

Wikipedia, on the other hand, tells us that BI is a set of theories, methodologies, architectures, and technologies that transform raw data into meaningful and useful information for business purposes.

Finally, Webopedia tells us that BI [refers to] the tools and systems that play a key role in the strategic planning process of the corporation.

What’s interesting about the above responses and definitions is that they focus largely on processes and methodologies or tools and techniques. Now, without downplaying the importance of either, I think that many of the problems of business intelligence practice come from taking a perspective that is overly focused on methodology and technique. In this post, I attempt to broaden this perspective by making some potentially controversial statements – or heresies – that challenge this view. My aim is not so much to criticize current practice as to encourage – or provoke – business intelligence professionals to take a closer look at some of the assumptions that underlie their practices.

The heresies

Without further ado, here are my six heresies for business intelligence practice (in no particular order).

A single point of truth is a mirage

Many organisations embark on ambitious programs to build enterprise data warehouses – unified data repositories that serve as a single source of truth for all business-relevant data. Leaving aside the technical and business issues associated with establishing definitive data sources and harmonizing data, there is the more fundamental question of what is meant by truth.

The most commonly accepted notion of truth is that information (or data in a particular context) is true if it describes something as it actually is. A major issue with this viewpoint is that data (or information) can never fully describe a real-world object or event. For example, when a sales rep records a customer call, he or she notes down only what is required by the customer management system. Other data that may well be more important is not captured or is relegated to a “Notes” or “Comments” field that is rarely if ever searched or accessed. Indeed, data represents only a fraction of the truth, however one chooses to define it – more on this below.
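To make this concrete, here is a minimal sketch (in Python; the schema and field names are entirely hypothetical) of how a customer-call record might be defined. The schema decides in advance what counts as data; everything else is squeezed into a free-text field or lost:

```python
from dataclasses import dataclass

@dataclass
class CustomerCall:
    """A hypothetical CRM record: only the fields the schema
    anticipates are captured as structured, searchable data."""
    customer_id: str
    call_date: str         # ISO date, e.g. "2014-04-03"
    duration_minutes: int
    outcome: str           # one of a fixed set of codes, e.g. "FOLLOW_UP"
    notes: str = ""        # everything else lands here, rarely searched

# The rep observed far more than the schema allows; most of it is
# collapsed into (or dropped from) the free-text notes field.
call = CustomerCall(
    customer_id="C-1042",
    call_date="2014-04-03",
    duration_minutes=25,
    outcome="FOLLOW_UP",
    notes="Customer unhappy with delivery delays; hinted at competitor offer.",
)
print(call.outcome)  # the structured "truth": a single code
```

The point of the sketch is that the single outcome code is what reports and KPIs will be built on, while the arguably more important information in the notes field rarely surfaces.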

Some might say that it is naïve to expect our databases to capture all aspects of reality, and that what is needed is a broad consensus between all relevant stakeholders as to what constitutes the truth. The problem with this is that such a consensus is often achieved by means that are not democratic. For example, a KPI definition chosen by a manager may be hotly contested by an employee. Nevertheless, the employee has to accept it because that is the way (many) organisations work. Another significant issue is that the notion of relevant stakeholders is itself problematic because it is often difficult to come up with clear criteria by which to define relevance.

There are other ways to approach the notion of truth: for example, one might say that a piece of data is true as long as it is practically useful to deem it so. Such a viewpoint, though common, is flawed because utility is in the eye of the beholder: a sales manager may think it useful to believe a particular KPI whereas a sales rep might disagree (particularly if the KPI portrays the rep in a bad light!).

These varied interpretations of what constitutes truth have implications for the notion of a single point of truth. For one, the various interpretations are incommensurate – they cannot be judged by the same standard. Further, different people may interpret the same piece of data differently. This is something that BI professionals have likely come across – say, when attempting to come up with a harmonized definition for a customer record.

In short: the notion of a single point of truth is problematic because there is a great deal of ambiguity about what constitutes a truth.

There is no such thing as raw data

In his book, Memory Practices in the Sciences, Geoffrey Bowker wrote, “Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.”  I love this quote because it tells a great truth (!) about so-called “raw” data.

To elaborate: raw data is never unprocessed. First, the data collector always makes a choice as to what data will be collected and what will not. So in this sense, data already has meaning imposed on it. Second, and perhaps more important, the method of collection affects the data. For example, responses to a survey depend on how the questions are framed and how the survey itself is carried out (anonymously, face-to-face etc.). This is also true for more “objective” data such as costs and expenses. In both cases, the actual numbers depend on the specific accounting practices used in the organization. So, raw data is an oxymoron because data is never raw, and as Bowker tells us, we need to ensure that the filters we apply and the methods of collection we use are such that the resulting data is “cooked with care.”

In short: data is never raw, it is always “cooked.”

There are no best practices for business intelligence, only appropriate ones

Many software shops and consultancies devise frameworks and methodologies for business intelligence which they claim are based on best or proven practices. However, those who swallow that line and attempt to implement the practices often find that the results obtained are far from best.

I have discussed the shortcomings of best practices in a general context in an earlier article, and (at greater length) in my book. A problem with best practice approaches is that they assume a universal yardstick of what is best. As a corollary, this also suggests that practices can be transplanted from one organization to another in a wholesale manner, without extensive customisation. This overlooks the fact that organisations are unique: what works in one may not work in another.

A deeper issue is that much of the knowledge pertaining to best practices is tacit – that is, it cannot be codified in written form. Indeed, what differentiates good business intelligence developers or architects from great ones is not what they learnt from a textbook (or in a training course), but how they actually practice their craft: things they do instinctively and would find hard to put into words.

So, instead of looking to import best practices from your favourite vendor, it is better to focus on understanding what goes on in your environment. A critical examination of your environment and processes will reveal opportunities for improvement. These incremental improvements will cumulatively add up to your very own, customized “best practices.”

In short: develop your own business intelligence best practices rather than copying those peddled by “experts.”

Business intelligence does not support strategic decision-making

One of the stated aims of business intelligence systems is to support better business decision making in organisations (see the Wikipedia article, for example). It is true that business intelligence systems are perfectly adequate – even indispensable – for certain decision-making situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.). These generally tend to be routine situations with clear-cut decision criteria and well-defined processes – i.e. decisions that can be programmed.
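To illustrate what “programmable” means here, a routine operational decision can be reduced to an explicit rule. The sketch below (the thresholds and quantities are invented for illustration) captures a typical inventory reorder decision:

```python
def reorder_decision(stock_on_hand: int, reorder_point: int,
                     reorder_quantity: int) -> int:
    """A 'programmable' operational decision: clear-cut criteria and a
    well-defined process. Returns the number of units to order."""
    if stock_on_hand <= reorder_point:
        return reorder_quantity
    return 0

# Routine decisions reduce to rules like this; strategic decisions do not.
print(reorder_decision(stock_on_hand=40, reorder_point=50, reorder_quantity=200))   # orders 200
print(reorder_decision(stock_on_hand=120, reorder_point=50, reorder_quantity=200))  # orders 0
```

The essential feature is that every input, threshold and outcome is specifiable in advance – precisely what is missing in the strategic situations discussed next.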

In contrast, decisions pertaining to strategic matters cannot be programmed. Examples of such decisions include dealing with an uncertain business environment, responding to a new competitor and so on. The reason such decisions cannot be programmed is that they depend on a host of factors other than data and are generally made in situations that are ambiguous. Typically, people use deliberative methods – i.e. methods based on argumentation – to arrive at decisions on such matters. The sad fact is that all the major business intelligence tools on the market lack support for deliberative decision-making. Check out this post for more on what can be done about this.

In short: business intelligence does not support strategic decision-making.

Big data is not the panacea it is trumpeted to be

One of the more recent trends in business intelligence is the move towards analysing increasingly large, diverse, rapidly changing datasets – what goes under the umbrella term big data. Analysing these datasets entails the use of new technologies (e.g. Hadoop and NoSQL) as well as statistical techniques that are unfamiliar to many mainstream business intelligence professionals.

Much has been claimed for big data; in fact, one might say too much.  In this article Tim Harford (aka the Undercover Economist) summarises the four main claims of “big data cheerleaders” as follows (the four phrases below are quoted directly from the article):

  1. Data analysis produces uncannily accurate results.
  2. Every single data point can be captured, making old statistical sampling techniques obsolete.
  3. It is passé to fret about what causes what, because statistical correlation tells us what we need to know.
  4. Scientific or statistical models aren’t needed.

The problem, as Harford points out, is that all of these claims are incorrect.

Firstly, the accuracy of the results that come out of a big data analysis depends critically on how the analysis is formulated. However, even analyses based on well-founded assumptions can get it wrong, as illustrated in this article about Google Flu Trends.

Secondly, it is pretty obvious that it is impossible to capture every single data point (also relevant here is the discussion on raw data above – i.e. how data is selected for inclusion).

The third claim is simply absurd. The fact is that detecting a correlation is not the same as understanding what is going on – a point made rather nicely by Dilbert. Enough said, I think.

Fourthly, the claim that scientific or statistical models aren’t needed is simply ill-informed. As any big data practitioner will tell you, big data analysis relies on statistics. Moreover, as mentioned earlier, a correlation-based understanding is no understanding at all – it cannot be reliably extrapolated to related situations without the help of hypotheses and (possibly tentative) models of how the phenomenon under study works.
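The correlation point is easy to demonstrate in a few lines. In the sketch below (the numbers are invented for illustration), two series that merely share an upward trend correlate almost perfectly, even though neither has anything to do with the other:

```python
# Two causally unrelated series that both happen to trend upward over time
# (all figures invented for illustration).
years = list(range(2004, 2014))
ice_cream_sales = [100 + 12 * i for i in range(len(years))]            # tracks one trend
drowning_deaths = [50 + 6 * i + (i % 2) for i in range(len(years))]    # tracks another

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drowning_deaths)
print(round(r, 3))  # very close to 1.0, yet neither series causes the other
```

A model of the underlying mechanism (or lack of one) is what separates this spurious near-perfect correlation from a genuine finding – which is exactly why the fourth claim fails.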

Finally, as Danah Boyd and Kate Crawford point out in this paper, big data changes what it means to know something… and it is highly debatable whether these changes are for the better. See the paper for more on this point. (Acknowledgement: the title of this post is inspired by the title of the Boyd-Crawford paper.)

In short: business intelligence practitioners should not uncritically accept the pronouncements of big data evangelists and vendors.

Business intelligence has ethical implications

This heresy applies to much more than business intelligence: any human activity that affects other people has an ethical dimension. Many IT professionals tend to overlook this facet of their work because they are unaware of it – and sometimes prefer to remain so. The fact is that the decisions business intelligence professionals make with respect to usability, display, testing etc. have a potential impact on the people who use their applications. The impact may range from the trivial – having to click one button or filter too many to get a report – to something more significant, like a data error that leads to a poor business decision.

In short: business intelligence professionals ought to consider how their artefacts and applications affect their users.

In closing

This brings me to the end of my heresies for business intelligence. I suspect there will be a few practitioners who agree with me and (possibly many) others who don’t…and some of the latter may even find specific statements provocative. If so, I consider my job done, for my intent was to get business intelligence practitioners to question a few unquestioned tenets of their profession.

Written by K

April 3, 2014 at 9:29 pm

10 Responses


  1. So refreshing! Thank you, Kailash!

    For those interested in a deep dive into the immense question of truth and validity, I recommend reading almost anything by Ken Wilber. This article looks like a good starting point (http://imprint.co.uk/Wilber.htm), but don’t let the ponderous academic tone put you off. My favorite is his book *A Brief History of Everything* (http://www.amazon.com/Brief-History-Everything-Ken-Wilber/dp/1570627401), a very accessible and exciting read.



    April 4, 2014 at 12:52 am

    • Hi Jeff,

      Thanks for reading and the pointer to Wilber’s work. It is only recently that I have started to look into the literature on truth and validity. Fascinating stuff, which I think needs more airtime in the world of businesses and organisations.

      Reconciling different truths is a problem that organisations don’t quite know how to handle yet. Your work in this area is truly groundbreaking (for readers who are unaware of Jeff’s work: check out his must-read book here).

      Thanks again!





      April 4, 2014 at 7:36 am

  2. You are incorrect about several fundamentals — which is odd, because you are ordinarily spot on — but not this time. Here are the miscues:

    (And I should add that my bona fides come from the US intelligence community, of which I was a long-time member, not the commercial BI community, to which I have limited exposure.)
    1. Even if the mission, targets, and data sources are different between public and private sector intelligence needs, the basic intelligence cycle — which you don’t mention — is largely the same, to wit:
    — State an intelligence need, or an information need in intelligence terms
    — Parse the need in terms of likely sources and methods to answer need
    — Apply methods to sources (i.e. do collection of “raw data”, your point about nothing being raw is well stated and well taken)
    — Integrate collection with other sources/contexts and perform “analysis”
    — Develop the finished “product”

    In a few words: Need — collection — integration — analysis — finished product… is the cycle

    2. Strategy: If you go to the trouble to put up an intelligence system/capability and invest in the people etc, and it has no impact on strategic decision making, then you have egregiously misapplied the resources given to you. Yes, some intelligence and some systems have only tactical purposes, but I’ve frankly never seen a tactical system that did not inherit from a larger landscape with a related need/sources/method that provided assistance with strategic decisions

    3. Bigger (data) may be better, but only if the data quality is adequate and the filters that were applied in collection are understood (to wit: the “raw” issue, again). Nonetheless, big data has the potential, as data warehousing has taught over a 20-year generation, to show that more can be better… but not necessarily so.

    4. If there are no best practices, then all practices are uniformly the same: perhaps good, better, best, or even poor. This is an impractical and illogical conclusion: nothing in business is uniformly the same, so for a given situation something is going to be the long pole in the tent. The fact that it may not be universally the best is a “so what”

    John Goodpasture, johngoodpasture.com (blog)



    April 4, 2014 at 4:55 am

    • Hi John,

      Thanks for reading and for taking the time to write a detailed comment – I truly appreciate it.

      First up, I don’t so much disagree with the points you have made as feel they need to be qualified…and indeed, that was the point of the post.

      The issues you have raised pertain largely to three of the stated heresies: strategy, best practices and big data. I’ll attempt to address your points in that order.


      Strategy

      The problem with strategy is that it is (almost by definition) formulated under conditions of sparse information and ambiguity (I tend to agree with John Camillus’ view that strategy formulation and implementation is a wicked problem). For this reason, data and information often play second fiddle to informal decision-making methods such as deliberation. My point is that commercial BI tools offer little support for such methods.

      Best practices

      The claim that there are no best practices does not preclude the possibility of some practices being better than others (with the qualification elaborated below). Indeed, the problem with the term is that there are so many competing “best” practices which, apart from being a contradiction in terms, serve only to confuse practitioners.

      That said, I fully agree with your point that there can be best (or better, good) practices in a given situation (the longest pole in the tent) – a point mentioned obliquely in my post (but discussed at greater length in my book). The point is: what is good and what is not good is usually context dependent. So, although one can provide guidelines on how to implement and refine practices to make them appropriate to a particular situation, the idea that one can somehow transplant someone else’s “best” practices into one’s own environment, in a wholesale fashion, is (I think) mistaken.

      Big data

      As far as big data is concerned, I think we largely agree. I admit I have downplayed the positives – but my intent in doing so was to present a critical view of some of the (tall?) claims made by vendors and evangelists.

      Finally, I’d like to thank you again for taking the time to read and comment at length. This is exactly the sort of conversation I was hoping to have.





      April 4, 2014 at 7:45 am

  3. Kailash,

    It’s easy to fall into the trap of simply critiquing BI initiatives as either effective and worthwhile for business or not. What I get from your post is more around the issue of how organizations actually govern themselves and choose directions. Where the rationalist technophiles may be inclined to view failures as being the result of honestly not having enough accurate and compelling data in the hands of the right people, this ignores the social aspects of how organizations work.

    I’m experienced enough to remember when reports with round-cornered borders* could be used to sell “visions” (i.e. any crap). When discussing BI in the context of “Big Data”, it opens up wonderful opportunities, but “how to do it right” is very much a work in progress. Given its complexity, I’d see it as being subject to corrupting influences, issues of transparency, and definitely ethics, as much as we’d all like to hope that everything will be done for the greater good (for the most part). At the very least, the impact may influence organizations to operate at ever higher levels of skepticism, just as the impact of round-cornered reports has worn off. Will this cause analysis paralysis, or data fatigue along with a greater tendency to act upon ambiguity, or otherwise simply further fragment the directions and commitments of organizations?

    On LinkedIn under Geoffrey Moore, there is a guest post by Alistair Croll that explores some of this from an optimistic viewpoint of how it affects (smart) companies.

    * For the Millennials: before Apple commoditized laser printers, reports were typically burped out on dot-matrix printers, and were anything but sexy to look at. When the same numbers and results were output on a laser printer with borders, shading and graphics, executives and customers reacted like Moses had just come to them from the mountain. Subsequently, a lot of Apple Macs were sold as peripherals to a laser printer and gained a foothold in offices where they would previously have been viewed as too expensive.


    David Turnbull

    April 9, 2014 at 3:46 am

    • Hi David,

      Agreed! The business intelligence community needs to develop a broader view of what organisational decision-making actually involves, and this requires an understanding of the social aspects of how organisations work.

      Big Data and other new technologies open up wonderful opportunities. But enthusiasts need to guard against the trap of solutionism.

      Thanks as always for reading and for your thoughtful comment.





      April 10, 2014 at 8:59 pm

    • Thanks Ravin, that’s a great piece.





      April 10, 2014 at 9:00 pm

  4. Hi Kailash,
    Excellent post! Although I am no expert in BI, would you allow me to share a few of my thoughts here? BI applications, as I understand them, have been contextually blind for a very long time. Although we’ve had fancy machine learning and optimization algorithms tackle the problem of context (in recommendations) more effectively than before, the complexity of the problem has only increased, given the multiple dimensions of data we are dealing with and generating today (thanks to sensors). I wonder if “context” could be the critical differentiating factor which can bridge the chasm you point out between strategic decision making and BI applications. BI applications have traditionally been good at handling “what you do not know” problems, which were mapped by consultants using the “as-is-to-be” journey metaphor. With the advent of Big Data, I feel we are beginning to grapple with “what you do not know that you know” problems, which require a localized decision-making approach, something which cannot be done without situational awareness. These are my inchoate thoughts, as I’ve been trying to think deeply about context. I would love to hear your thoughts!



    April 25, 2014 at 10:45 pm

    • Hi Venky,

      Thanks for your comment.

      The differentiating factor between strategic and operational decision-making basically boils down to a couple of things:

      1. Scarcity of information – since strategic decisions are about the future, often one simply does not have enough information to make a decision.

      2. More important, though, is the fact that strategic issues are typically complex and multifaceted. Consequently, they are hard to define in a way that makes them amenable to solution in the traditional sense of the word (i.e. they are wicked problems). Data does not help in resolving such issues; typically it is better to use deliberative methods.

      IMO, context is important in both operational and strategic domains: i.e. it is necessary to understand the features and quirks of one’s decision environment in both cases.

      Thanks again for taking the time to read and comment!





      April 26, 2014 at 8:25 pm
