Eight to Late

Archive for the ‘Knowledge Management’ Category

The unspoken life of information in organisations

Introduction

Many activities in organisations are driven by information. Chief among these is decision-making: when faced with a decision, those involved will seek information on the available choices and their (expected) consequences. Or so the theory goes.

In reality, information plays a role that does not quite square with this view. For instance, decision makers may expend considerable time and effort in gathering information, only to ignore it when making their choices. In this case information plays a symbolic role, signifying the competence of the decision-maker (with the volume of information gathered serving as a measure of competence) rather than being a means of facilitating a decision. In this post I discuss such common but unspoken uses of information in organisations, drawing on a paper by James March and Martha Feldman entitled Information in Organizations as Signal and Symbol.

Information perversity

As I have discussed in an earlier post, the standard view of decision-making is that choices are based on an analysis of their consequences and (the decision-maker’s) preferences for those consequences. These consequences and preferences generally refer to events in the future and are therefore uncertain. The main role of information is to reduce this uncertainty. In such a rational paradigm, one would expect information gathering and utilisation to be consistent with the process of decision making. Among other things this implies that:

  1. The required information is gathered prior to the decision being made.
  2. All relevant information is used in the decision-making process.
  3. All available information is evaluated prior to requesting further information.
  4. Information that is not relevant to a decision is not collected.

In reality, the above expectations are often violated. For example:

  1. Information is gathered selectively after a decision has been made (only information that supports the decision is chosen).
  2. Relevant information is ignored.
  3. Requests for further information are made before all the information at hand is used.
  4. Information that has no bearing on the decision is sought.

On the face of it, such behaviour is perverse – why on earth would someone take the trouble to gather information if they are not going to use it? As we’ll see next, there are good reasons for such “information perversity”, some of which are obvious, others less so.

Reasons for information perversity

There are a couple of straightforward reasons why a significant portion of the information gathered by organisations is never used. These are:

  1. Humans have bounded cognitive capacities, so there is a limit to the amount of information they can process. Anything beyond this leads to information overload.
  2. Much of the information gathered is unusable because it is irrelevant to the decision at hand.

Although these reasons are valid in many situations, March and Feldman assert that there are other less obvious but possibly more important reasons why information gathered is not used. I describe these in some detail below.

Misaligned incentives

One of the reasons for the mountains of unused information in organisations is that certain groups of people (who may not even be users of information) have incentives to gather information regardless of its utility. March and Feldman describe a couple of scenarios in which this can happen:

  1. Mismatched interests: In most organisations the people who use information are not the same as those who gather and distribute it. Typically, information users tend to be from business functions (finance, sales, marketing etc.) whereas gatherers/distributors are from IT. Users are after relevant information, whereas IT is generally interested in volume rather than relevance. This can result in the collection of data that nobody is going to use.
  2. “After the fact” assessment of decisions: Decision makers know that many (most?) of their decisions will later turn out to be suboptimal. In other words, after-the-fact assessments of their decisions may lead to the realisation that those decisions ought to have been made differently. In view of this, decision makers have good reason to try to anticipate as many different outcomes as they can, which leads to them gathering more information than can be used.

Information as measurement

Often organisations collect information to measure performance or monitor their environments. For example, sales information is collected to check progress against targets, and employees are required to log their working times to ensure that they are putting in the hours they are supposed to. Information collected in such a surveillance mode is not relevant to any decision except when corrective action is required. Most of the information collected for this purpose is never used, even though it could well contain interesting insights.

Information as a means to support hidden agendas

People often use information to build arguments that support their favoured positions. In such cases it is inevitable that information will be misrepresented.  Such strategic misrepresentation (aka lying!) can cause more information to be gathered than necessary. As March and Feldman state in the paper:

Strategic misrepresentation also stimulates the oversupply of information. Competition among contending liars turns persuasion into a contest in (mostly unreliable) information. If most received information is confounded by unknown misrepresentations reflecting a complicated game played under conditions of conflicting interests, a decision maker would be curiously unwise to consider information as though it were innocent. The modest analyses of simplified versions of this problem suggest the difficulty of devising incentive schemes that yield unambiguously usable information…

As a consequence, decision makers end up not believing information, especially if it is used or generated by parties that (in the decision-makers’ view) may have hidden agendas.

The above points are true enough. However, March and Feldman suggest that there is a more subtle reason for information perversity in organisations.

The symbolic significance of information

In my earlier post on decision making in organisations I stated that:

…the official line about decision making being a rational process that is concerned with optimizing choices on the basis of consequences and preferences is not the whole story. Our decisions are influenced by a host of other factors, ranging from the rules that govern our work lives to our desires and fears, or even what happened at home yesterday. In short: the choices we make often depend on things we are only dimly aware of.

One of the central myths of modern organisations is that decision making is essentially a rational process. In reality, decision making is often a ritualised activity consisting of going through the motions of identifying choices, their consequences and our preferences for them. In such cases, information has a symbolic significance; it adds to the credibility of the decision. Moreover, the greater the volume of information, the greater the credibility (provided, of course, that the information is presented in an attractive format!). Such a process reaffirms the competence of those involved and reassures those in positions of authority that the right decision has been made, regardless of the validity or relevance of the information used.

Information is thus a symbol of rational decision making; it signals competence on the part of the decision maker and the validity of the decision made.

Conclusion

In this article I have discussed the unspoken life of information in organisations – how it is used in ways that do not square with a rational process of decision making. As March and Feldman put it:

Individuals and organizations invest in information and information systems, but their investments do not seem to make decision-theory sense. Organizational participants seem to find value in information that has no great decision relevance. They gather information and do not use it. They ask for reports and do not read them. They act first and receive requested information later.

Some of the reasons for such “information perversity” are straightforward: they include limited human cognitive ability, irrelevant information, misaligned incentives and even lying! But above all, organisations gather information because it symbolises proper decision-making behaviour and provides assurance of the validity of decisions, regardless of whether or not those decisions are actually made on a rational basis. To conclude: the official line about information spins a tale about its role in rational decision-making, but the unspoken life of information in organisations tells another story.

Written by K

June 14, 2012 at 5:55 am

“The Heretic’s Guide to Best Practices” wins bronze at the 5th Annual Axiom Business Book Awards.

I’m delighted to announce that the book that Paul Culmsee and I published recently has been awarded a bronze medal at the Axiom Business Book Awards for 2012, under the category Operations Management/Lean/Continuous Improvement.

http://www.axiomawards.com

We are truly honoured that the panel found our efforts worthy of an award.

If you are interested in finding out more about the book, please check out the review by Shim Marom and the one by Scott McCrickard.

There are also a number of customer reviews on Amazon.

http://www.amazon.com/Heretics-Guide-Best-Practices-Organisations/dp/1462058531

The Heretic’s Guide is a self-published book with no big publisher marketing behind it, so we’d greatly appreciate your spreading the word!

On the ineffable tacitness of knowledge

Introduction

Knowledge management (KM) is essentially about capturing and disseminating the know-how, insights and experiences that exist within an organisation. Although much is expected of KM initiatives, most end up delivering document repositories that are of as much help in managing knowledge as a bus is in getting to the moon. In this post I look into the question of why KM initiatives fail, drawing on a couple of sources that explore the personal nature of knowledge.

Explicit and tacit knowledge in KM

Most KM professionals are familiar with the terms explicit and tacit knowledge. The first refers to knowledge that can be expressed in writing or speech whereas the second refers to that which cannot. Examples of the former include driving directions (how to get from A to B) or a musical score; examples of the latter include the ability to drive or to play a musical instrument. This seems reasonable enough: a musician can learn how to play a piece by studying a score; however, a non-musician cannot learn to play an instrument by reading a book.

In their influential book, The Knowledge-Creating Company, Ikujiro Nonaka and Hirotaka Takeuchi proposed a model of knowledge creation1  based on their claim that:  “human knowledge is created and expanded through social interaction between tacit knowledge and explicit knowledge.” It would take me too far afield to discuss their knowledge creation model in full here – see this article for a quick summary.  However, the following aspects of it are relevant to the present discussion:

  1. The two forms of knowledge (tacit and explicit) can be converted from one to the other. In particular, it is possible to convert tacit knowledge to an explicit form.
  2. Knowledge can be transferred (from person to person).

In the remainder of this article I’ll discuss why these claims aren’t entirely valid.

All knowledge has tacit and explicit elements

In a paper entitled Do we really understand tacit knowledge?, Haridimos Tsoukas discusses why Nonaka and Takeuchi’s view of knowledge is incomplete, if not incorrect. To do so, he draws upon the writings of the philosopher Michael Polanyi.

According to Polanyi, all knowledge has tacit and explicit elements. This is true even of theoretical knowledge that can be codified in symbols (mathematical knowledge, for example). Quoting from Tsoukas’ paper:

…if one takes a closer look at how theoretical (or codified) knowledge is actually used in practice, one will see the extent to which theoretical knowledge itself, far from being as objective, self-sustaining, and explicit as it is often taken to be, it is actually grounded on personal judgements and tacit commitments. Even the most theoretical form of knowledge, such as pure mathematics, cannot be a completely formalised system, since it is based for its application and development on the skills of mathematicians and how such skills are used in practice.

Mathematical proofs are written in a notation that is (supposed to be) completely unambiguous. Yet every mathematician will understand a proof (in the sense of its implications rather than its veracity) in his or her own way. Moreover, based on their personal understandings, some mathematicians will be able to derive insights that others won’t. Indeed, this is how we distinguish between skilled and less skilled mathematicians.

Polanyi claimed that all knowing consists at least in part of skillful action because the knower participates in the act of understanding and assimilating what is known.

Lest this example seem too academic, let’s consider a more commonplace one taken from Tsoukas’ paper: that of a person reading a map.

Although a map is an explicit representation of location, in order to actually use a map to get from A to B a person needs to:

  1. Locate A on the map.
  2. Plot out a route from A to B.
  3. Traverse the plotted route by identifying landmarks, street names etc. in the real world and interpreting them in terms of the plotted route.

In other words, the person has to make use of his or her senses and cognitive abilities in order to use the (explicit) knowledge captured in the map. The point is that the person will do this in a way that he or she cannot fully explain to anyone else. In this sense, the person’s understanding (or knowledge) of what’s in the map manifests itself in how he or she actually goes about getting from A to B.

The nub of the matter: focal and subsidiary awareness

Let me get to the heart of the matter through another example that is especially relevant as I sit at my desk writing these words.

I ask the following question:

What is it that enables me to write these lines using my knowledge of the English language, papers on knowledge management and a host of other things that I’m not even aware of?

I’ll begin my answer by quoting yet again from Tsoukas’ paper:

 For Polanyi the starting point towards answering this question is to acknowledge that “the aim of a skilful performance is achieved by the observance of a set of rules which are not known as such to the person following them.” …Interestingly, such ignorance is hardly detrimental to [the] effective carrying out of [the]  task…

Any particular elements of the situation which may help the purpose of a mental effort are selected insofar as they contribute to the performance at hand, without the performer knowing them as they would appear in themselves. The particulars are subsidiarily known insofar as they contribute to the action performed. As Polanyi remarks, ‘this is the usual process of unconscious trial and error by which we feel our way to success and may continue to improve on our success without specifiably knowing how we do it.’

Polanyi noted that there are two distinct kinds of awareness that play a role in any (knowledge-based) action. The first one is conscious awareness of what one is doing (Polanyi called this focal awareness). The second is subsidiary awareness: the things that one is not consciously aware of but nevertheless have a bearing on the action.

Back to my example: as I write these words I’m consciously aware of the words appearing on my screen as I type, whereas I’m subsidiarily aware of a host of other things I cannot fully enumerate – my thoughts, composition skills, vocabulary and all the other things that have a bearing on my writing (my typing skills, for example).

The two kinds of awareness, focal and subsidiary, are mutually exclusive: the instant I shift my awareness from the words appearing on my screen, I lose flow and the act of writing is interrupted.  Yet, both kinds of awareness are necessary for the act of writing. Moreover, since my awareness of the subsidiary elements of writing is not conscious, I cannot describe them. The minute I shift attention to them, the nature of my awareness of them changes – they become things in their own right instead of elements that have a bearing on my writing.

In brief, the knowledge-based act of writing is composed of both conscious and subsidiary elements in an inseparable way. I can no more describe all the knowledge involved in the act than I can the full glory of a beautiful sunset.

Wrapping up

From the above it appears that the central objective of knowledge management is essentially unattainable because all knowledge has tacit elements that cannot be “converted” or codified explicitly. We can no more capture or convert knowledge than we can “know how others know.” Sure, one can get people to document what they do, or even capture their words and actions on media. However, this does not amount to knowing what they know. In his paper, Tsoukas writes about the ineffability of tacit knowledge. However, as I have argued, all knowledge is ineffably tacit. I hazard that this may, at least in part, be the reason why KM initiatives fall short of their objectives.

Acknowledgement and further reading

Thanks to Paul Culmsee for getting me reading and thinking about this stuff again!  Some of the issues that I have discussed above are touched upon in the book I have written with Paul.

Finally, for those who are interested, here are some of my earlier pieces on tacit knowledge:

What is the make of that car? A tale about tacit knowledge

Why best practices are hard to practice (and what can be done about it)


Footnotes:

1 As far as I’m aware, Nonaka and Takeuchi’s model mentioned in this article is still the gold standard in KM. In recent years, there have been a number of criticisms of the model (see this paper by Gourlay, or especially this one by Powell). Nonaka and von Krogh attempt to rebut some of the criticisms in this paper. I will leave it to interested readers to make up their own minds as to how convincing their rebuttal is.

Written by K

February 9, 2012 at 10:30 pm

On the limitations of business intelligence systems

Introduction

One of the main uses of business intelligence (BI) systems is to support decision making in organisations. Indeed, the old term Decision Support Systems is more descriptive of such applications than the term BI systems (although the latter does have more pizzazz). However, as Tim Van Gelder pointed out in an insightful post, most BI tools available in the market do not offer a means to clarify the rationale behind decisions. As he stated, “[what] business intelligence suites (and knowledge management systems) seem to lack is any way to make the thinking behind core decision processes more explicit.”

Van Gelder is absolutely right: BI tools do not support the process of decision-making directly; all they do is present data or information on which a decision can be based. But there is more: BI systems are based on the view that data should be the primary consideration when making decisions. In this post I explore some of the (largely tacit) assumptions that flow from such a data-centric view. My discussion builds on some points made by Terry Winograd and Fernando Flores in their wonderful book, Understanding Computers and Cognition.

As we will see, the assumptions regarding the centrality of data are questionable, particularly when dealing with complex decisions. Moreover, since these assumptions are implicit in all BI systems, they highlight the limitations of using BI systems for making business decisions.

An example

To keep the discussion grounded, I’ll use a scenario to illustrate how assumptions of data-centrism can sneak into decision making. Consider a sales manager who creates sales action plans for representatives based on reports extracted from his organisation’s BI system. In doing this, he makes a number of tacit assumptions. They are:

  1. The sales action plans should be based on the data provided by the BI system.
  2. The data available in the system is relevant to the sales action plan.
  3. The information provided by the system is objectively correct.
  4. The side-effects of basing decisions (primarily) on data are negligible.

The assumptions and why they are incorrect

Below I state some of the key assumptions of the data-centric paradigm of BI and discuss their limitations using the example of the previous section.

Decisions should be based on data alone: BI systems promote the view that decisions can be made based on data alone. The danger in such a view is that it overlooks social, emotional, intuitive and qualitative factors that can and should influence decisions. For example, a sales representative may have qualitative information regarding sales prospects that cannot be inferred from the data. Such information should be factored into the sales action plan, provided the representative can justify it or is willing to stand by it.

The available data is relevant to the decision being made: Another tacit assumption made by users of BI systems is that the information provided is relevant to the decisions they have to make. However, most BI systems are designed to answer specific, predetermined questions, and these cannot cover all the questions that managers may ask in the future (the sketch below illustrates the point).
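
To make this concrete, here is a minimal sketch in Python (using pandas), with hypothetical table and column names invented for illustration. The canned report answers the question the system was designed for; a question the designers never anticipated simply cannot be answered, because the relevant data was never collected:

```python
import pandas as pd

# Hypothetical order data; in a real BI system this would come from a
# data warehouse whose schema was fixed by the system's designers.
orders = pd.DataFrame({
    "region": ["East", "West", "East"],
    "order_date": pd.to_datetime(["2011-10-03", "2011-10-17", "2011-11-02"]),
    "order_value": [1200.0, 450.0, 980.0],
})

# The predetermined question the system was built to answer:
# total sales by region and month.
report = (
    orders.assign(month=orders["order_date"].dt.to_period("M"))
          .groupby(["region", "month"])["order_value"]
          .sum()
)
print(report)

# A question the designers never anticipated - say, "does discounting
# erode repeat business?" - cannot be answered here: discounts and
# customer identity were never modelled, so no amount of querying helps.
```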

More important is the fact that the data itself may be based on assumptions that are not known to users. For example, our sales manager may be tempted to incorporate market forecasts simply because they are available in the BI system. However, if he chooses to use the forecasts, he is unlikely to take the trouble to check the assumptions behind the models that generated them.

The available data is objectively correct: Users of BI systems tend to look upon them as a source of objective truth. One of the reasons for this is that quantitative data tends to be viewed as more reliable than qualitative data. However, consider the following:

  1. In many cases it is impossible to establish the veracity of quantitative data, let alone its accuracy. In extreme cases, data can be deliberately distorted or fabricated (over the last few years there have been some high-profile cases of this that need no elaboration…).
  2. The imposition of arbitrary quantitative scales on qualitative data can lead to meaningless numerical measures. See my post on the limitations of scoring methods in risk analysis for a deeper discussion of this point.
  3. The information that a BI system holds is based on the subjective choices (and biases) of its designers.

In short, the data in a BI system does not represent an objective truth. It is based on the subjective choices of users and designers, and thus may not be an accurate reflection of the reality it allegedly represents. (Note added on 16 Feb 2013: See my essay on data, information and truth in organisations for more on this point.)

Side-effects of data-based decisions are negligible: When basing decisions on data, side-effects are often ignored. Although this point is closely related to the first one, it is worth making separately. For example, judging a sales representative’s performance on sales figures alone may motivate the representative to push sales at the cost of building sustainable relationships with customers. Another example of such behaviour is observed in call centres, where employees are measured by the number of calls they handle rather than call quality (which is much harder to measure). The former metric incentivises employees to complete calls rather than resolve the issues raised in them, as the sketch below makes concrete. See my post entitled Measuring the Unmeasurable for a more detailed discussion of this point.
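
Here is a minimal sketch in Python of how the choice of metric drives behaviour; the call log is hypothetical, invented for illustration. The same log ranks the same two employees in opposite order depending on whether one counts calls handled or issues resolved:

```python
from collections import defaultdict

# Hypothetical call log: (employee, minutes spent, issue resolved?)
calls = [
    ("A", 3, False), ("A", 4, False), ("A", 2, False), ("A", 5, True),
    ("B", 12, True), ("B", 15, True),
]

# Tally both metrics from the same underlying data.
stats = defaultdict(lambda: {"handled": 0, "resolved": 0})
for employee, _minutes, resolved in calls:
    stats[employee]["handled"] += 1
    stats[employee]["resolved"] += int(resolved)

for employee, s in sorted(stats.items()):
    print(employee, s)

# Measured by volume, A (4 calls) outperforms B (2 calls); measured by
# resolutions, B (2 resolved) outperforms A (1 resolved). An incentive
# based on the first metric rewards completing calls, not fixing issues.
```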

Although I have used a scenario to highlight the problems with the above assumptions, these problems are independent of the specifics of any particular decision or system. In short, they are inherent in BI systems that are based on data – which includes most systems in operation.

Programmable and non-programmable decisions

Of course, BI systems are perfectly adequate – even indispensable – for certain situations. Examples include financial reporting (when done right!) and other operational reporting (inventory, logistics etc.). These generally tend to be routine situations with clear-cut decision criteria and well-defined processes. Simply put, they are the kinds of decisions that can be programmed.

On the other hand, many decisions cannot be programmed: they have to be made based on incomplete and/or ambiguous information that can be interpreted in a variety of ways. Examples include issues such as what an organisation should do in response to increased competition, or formulating a sales action plan in a rapidly changing business environment. These issues are wicked: among other things, there is a diversity of viewpoints on how they should be resolved. A business manager and a sales representative are likely to have different views on how sales action plans should be adjusted in response to a changing business environment. The shortcomings of BI systems become particularly obvious when dealing with such problems.

Some may argue that it is naïve to expect BI systems to be able to handle such problems. I agree entirely. However, it is easy to overlook the limitations of these systems, particularly when one is called upon to make snap decisions on complex matters. Moreover, any critical reflection regarding what BI ought to be is drowned in a deluge of vendor propaganda and advertisements masquerading as independent advice in the pages of BI trade journals.

Conclusion

In this article I have argued that BI systems have some inherent limitations as decision support tools because they focus attention on data to the exclusion of other, equally important factors.  Although the data-centric paradigm promoted by these systems is adequate for routine matters, it falls short when applied to complex decision problems.

Written by K

November 24, 2011 at 6:20 am

Inexplicit knowledge: what people know, but won’t tell

Introduction

Much of the knowledge that exists in organisations remains unarticulated, in the heads of those who work at the coalface of business activities. Knowledge management professionals know this well, and use the terms explicit and tacit knowledge to distinguish between knowledge that can and can’t be communicated via language. Incidentally, the term tacit knowledge was coined by Michael Polanyi – and it is important to note that he used it in a sense that is very different from what it has come to mean in knowledge management. However, that’s a topic for another post. In the present post I look at a related issue that is common in organisations: the fact that much of what people know can be made explicit, but isn’t. Since the discipline of knowledge management is in dire need of more jargon, I call this inexplicit knowledge. To borrow a phrase from Polanyi, inexplicit knowledge is what people know, but won’t tell. Below, I discuss reasons why potentially explicit knowledge remains inexplicit and what can be done about it.

Why inexplicit knowledge is common

Most people will have encountered work situations in which they chose “not to tell” – remaining silent instead of sharing knowledge that would have been helpful. Common reasons for such behaviour include:

  1. Fear of loss of ownership of the idea: People are attached to their ideas. One reason for not volunteering their ideas is the worry that someone else in the organisation (a peer or manager) might “steal” the idea. Sometimes such behaviour is institutionalised in the form of an “innovation committee” that solicits ideas, offering monetary incentives for those that are deemed the best (more on incentives below). Like most committee-based solutions, this one is a dud. A better option may be to put in place mechanisms to ensure that those who conceive and volunteer ideas are encouraged to see them through to fruition.
  2. Fear of loss of face and/or fear of reprisals: In organisational cultures that are competitive, people may fear that their ideas will be ridiculed or put down by others. Closely related to this is the fear of reprisals from management. This happens often enough, particularly when the idea challenges the status quo or those in positions of authority. One of the key responsibilities of management is to foster an environment in which people feel psychologically safe to volunteer ideas, however controversial or threatening the ideas may be.
  3. Lack of incentives: Some people may be willing to part with their ideas, but only at a price. To address this, organisations may offer extrinsic rewards (i.e. material items such as money, gift vouchers etc.) for worthwhile ideas. Interestingly, research has shown that non-monetary extrinsic rewards (meals, gifts etc.) are more effective than monetary ones. This makes sense – financial rewards are more easily forgotten; people are more likely to remember a meal at a top-flight restaurant than a $500 cheque. That said, it is important to note that extrinsic rewards can also lead to unintended side effects. For example, financial incentives based on the quantity of contributions might lead to a glut of low-quality contributions. See the next point for a discussion of another side effect of extrinsic rewards.
  4. Wrong incentives: As I have discussed at length in my post on motivation in knowledge management projects, people will contribute their hard-earned knowledge only if they are truly engaged in their work. Such people are intrinsically motivated (i.e. internally motivated, independent of material rewards); their satisfaction comes from their work (yes, such people do exist!). Consequently they need little or no supervision. Intrinsic rewards are invariably non-material and they cannot be controlled by management. A surprising fact is that intrinsically motivated people can actually be turned off – even offended – by material rewards.

Psychological safety and incentives are important factors, but there is an even more important issue: the relationships between people who make up the workgroup.

Knowledge sharing and the theory of cooperative action

The work of Elinor Ostrom on collective (or cooperative) action is relevant here because knowledge sharing is a form of cooperation. According to the theory of cooperative action, there are three core relationships that promote cooperation in groups: trust, reciprocity and reputation.  Below I take a look at each of these in the context of knowledge sharing:

Trust: In the end, whether we choose to share what we know is largely a matter of trust: if we believe that others will respond positively – be it through acknowledgement or encouragement via tangible or intangible rewards – then the chances are that we will tell what we know. On the other hand, if the response is likely to be negative, we may prefer to remain silent.

Reciprocity: This refers to strategies that are based on treating people in the way we believe they would treat us. We are more likely to share what we know with others if we have reason to believe that they would be just as open with us.

Reputation: This refers to the views we have about the individuals we work with. Although such views may be developed by direct observation of people’s behaviours, they are also greatly influenced by the opinions of others. The relevance of reputation is that we are more likely to be open with people who have a good reputation.

According to Ostrom, these core relationships can be enhanced by face-to-face communication and organisational rules/norms that promote openness. See my post on Ostrom’s work and its relevance to project management for more on this.

Summing up

One of the key challenges that organisations face is to get people working together in a cooperative manner. Among other things, this includes getting people to share their knowledge; to “tell what they know.” Unfortunately, much of this potentially explicit knowledge remains inexplicit, locked away in people’s heads, because there is no incentive to share or, even worse, there are factors that actively discourage people from sharing what they know. These issues can be tackled by offering employees the right incentives and creating the right environment. As important as incentives are, the latter is the more important factor: the key to unlocking inexplicit knowledge lies in creating an environment of trust and openness.
