Uncertainty about uncertainty
Introduction
More often than not, managerial decisions are made on the basis of uncertain information. To lend some rigour to the process of decision making, it is sometimes assumed that uncertainties of interest can be quantified accurately using probabilities. As it turns out, this assumption can be incorrect in many situations because the probabilities themselves can be uncertain. In this post I discuss a couple of ways in which such uncertainty about uncertainty can manifest itself.
The problem of vagueness
In a paper entitled, “Is Probability the Only Coherent Approach to Uncertainty?”, Mark Colyvan made a distinction between two types of uncertainty:
- Uncertainty about some underlying fact. For example, we might be uncertain about the cost of a project – that there will be a cost is a fact, but we are uncertain about what exactly it will be.
- Uncertainty about situations where there is no underlying fact. For example, we might be uncertain about whether customers will be satisfied with the outcome of a project. The problem here is the definition of customer satisfaction. How do we measure it? What about customers who are neither satisfied nor dissatisfied? There is no clear-cut definition of what customer satisfaction actually is.
The first type of uncertainty refers to a lack of knowledge about something that we know exists. This is sometimes referred to as epistemic uncertainty – i.e. uncertainty pertaining to knowledge. Such uncertainty arises from imprecise measurements, changes in the object of interest and so on. The key point is that we know for certain that the item of interest has well-defined properties, but we don’t know what they are; hence the uncertainty. Such uncertainty can be quantified accurately using probability.
Vagueness, on the other hand, arises from an imprecise use of language. Specifically, the term refers to the use of criteria that cannot distinguish between borderline cases. Let’s clarify this using the example discussed earlier. A popular way to measure customer satisfaction is through surveys. Such surveys may be able to tell us that customer A is more satisfied than customer B. However, they cannot distinguish between borderline cases because any boundary between satisfied and not satisfied customers is arbitrary. This problem becomes apparent when considering an indifferent customer. How should such a customer be classified – satisfied or not satisfied? Further, what about customers who choose not to respond? It is therefore clear that any numerical probability computed from such data cannot be considered accurate. In other words, the probability itself is uncertain.
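To see how this plays out numerically, here is a minimal sketch in Python (the survey scores, the 1–7 scale and the two thresholds are all invented for illustration) showing how the computed proportion of “satisfied” customers swings with the arbitrary boundary, and how non-respondents muddy things further:

```python
# Hypothetical survey scores on a 1-7 scale; None marks a non-respondent.
scores = [7, 6, 6, 5, 4, 4, 4, 3, 2, None]

# One common (but itself debatable) choice: drop non-respondents entirely.
responses = [s for s in scores if s is not None]

# Two equally defensible boundaries for "satisfied" give different answers.
for threshold in (4, 5):
    satisfied = sum(1 for s in responses if s >= threshold)
    print(f"satisfied if score >= {threshold}: "
          f"P(satisfied) = {satisfied / len(responses):.2f}")

# Output:
#   satisfied if score >= 4: P(satisfied) = 0.78
#   satisfied if score >= 5: P(satisfied) = 0.44
```

A customer who scores 4 – the indifferent customer of the example above – is counted as satisfied under one boundary and not under the other, which is precisely the borderline problem.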
Ambiguity in classification
Although the distinction made by Colyvan is important, there is a deeper issue that can afflict uncertainties that appear to be quantifiable at first sight. To understand how this happens, we’ll first need to take a brief look at how probabilities are usually computed.
An operational definition of probability is that it is the ratio of the number of times the event of interest occurs to the total number of observations. For example, if my manager notes my arrival times at work over 100 days and finds that I arrive before 8:00 am on 62 days, then he could infer that the probability of my arriving before 8:00 am is 0.62. Since the probability is assumed to equal the frequency of occurrence of the event of interest, this is sometimes called the frequentist interpretation of probability.
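In code, the frequentist estimate is nothing more than a relative frequency. Here is a minimal sketch (the arrival log is hypothetical, constructed to match the numbers above):

```python
# Hypothetical log of 100 working days: True = arrived before 8:00 am.
arrival_log = [True] * 62 + [False] * 38

# Frequentist estimate: relative frequency of the event of interest.
p_early = sum(arrival_log) / len(arrival_log)
print(p_early)  # 0.62
```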
The above seems straightforward enough, so you might be asking: where’s the problem?
The problem is that events can generally be classified in several different ways and the computed probability of an event occurring can depend on the way that it is classified. This is called the reference class problem. In a paper entitled, “The Reference Class Problem is Your Problem Too”, Alan Hajek described the reference class problem as follows:
“The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified.”
Consider the situation I mentioned earlier. My manager’s approach seems reasonable, but there is a problem with it: not all days are the same as far as my arrival times are concerned. For example, it is quite possible that my arrival time is affected by the weather: I may arrive later on rainy days than on sunny ones. So, to get a better estimate, my manager should also factor in the weather. He would then end up with two probabilities, one for fine weather and the other for foul. However, that is not all: there are a number of other criteria that could affect my arrival times – for example, my state of health (I may call in sick and not come in to work at all), whether I worked late the previous day, and so on.
What seemed like a straightforward problem is no longer so because of the uncertainty regarding which reference class is the right one to use.
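To make this concrete, here is a minimal sketch in Python (the weather split is invented for illustration) in which the same 100 days from the earlier example yield three different probabilities for the same event, depending on the reference class used:

```python
# The same 100 days as before, now tagged with (weather, arrived_early).
days = ([("sunny", True)] * 50 + [("sunny", False)] * 15 +
        [("rainy", True)] * 12 + [("rainy", False)] * 23)

def p_early(reference_class=None):
    """Relative frequency of early arrival within a reference class."""
    sample = [early for weather, early in days
              if reference_class is None or weather == reference_class]
    return sum(sample) / len(sample)

print(round(p_early(), 2))         # 0.62 -- all days lumped together
print(round(p_early("sunny"), 2))  # 0.77 -- sunny days only
print(round(p_early("rainy"), 2))  # 0.34 -- rainy days only
```

None of the three numbers is wrong; the trouble is that nothing in the data dictates which reference class my manager should use.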
Before closing this section, I should mention that the reference class problem has implications for many professional disciplines. I have discussed its relevance to project management in my post entitled, “The reference class problem and its implications for project management”.
To conclude
In this post we have looked at a couple of forms of uncertainty about uncertainty that have practical implications for decision makers. In particular, we have seen that probabilities used in managerial decision making can be uncertain because of vague definitions of events and/or ambiguities in their classification. The bottom line for those who use probabilities to support decision-making is to ensure that the criteria used to determine events of interest refer to unambiguous facts that are appropriate to the situation at hand. To sum up: decisions made on the basis of probabilities are only as good as the assumptions that go into them, and the assumptions themselves may be prone to uncertainties such as the ones described in this article.
Chasing the mirage: the illusion of corporate IT standards
Introduction
Corporate IT environments tend to evolve in a haphazard fashion, reflecting the competing demands made on them by the organisational functions they support. This state of affairs suggests that IT is doing what it should be doing: supporting the work of organisations. On the other hand, this can result in unwieldy environments that are difficult and expensive to maintain. Efforts to address this generally involve the imposition of standards relating to infrastructure, software and processes. Unfortunately, the results of such efforts are mixed: although the adoption of standards can reduce IT costs, it does not lead to as much standardization as one might expect. In this post I explore why this is so. To this end I first look at intrinsic properties or characteristics that standards are assumed to have and discuss why they don’t actually have them. After that I look at some other factors that are external to standards but can also work against them. My discussion is inspired by and partially based on a paper by Ole Hanseth and Kristin Braa entitled, Hunting for Treasure at the End of the Rainbow: Standardizing Corporate IT Infrastructure.
Assumed characteristics of standards and why they don’t hold
Those who formulate corporate IT standards have in mind a set of specifications that have the following intrinsic characteristics:
- Universality – the specifications are applicable to all users and situations.
- Completeness – they include all details, leaving nothing to the discretion of implementers.
- Unambiguity – every specification has only one possible interpretation.
Unfortunately, none of these hold in the real world. Let’s take a brief look at each of them in turn.
Non-universality
To understand why the universality claimed by standards is false, it is useful to start by considering how a standard is created. Any new knowledge is necessarily local before it becomes a standard – that is, it is formed in a particular context and situation. For example, a particular IT help desk process depends, among other things, on the budget of the IT department and the skills of the help desk staff. Moreover, it also depends on external factors such as organizational culture, business expectations, vendor response times and other external interfaces.
Once a process is established, however, local context is deleted and the process is presented as being universal. The key point is that this is an abstraction – the process is presented in a way that presumes that the original context does not matter. However, when one wants to reproduce the process in another environment, one has to reconstruct the context. The problem is that this is not possible; one cannot reproduce the exact same context as the one in which the process was originally constructed. Consequently, the standard has to be tailored to suit the new situation and context. Often this tailoring can be quite drastic. Further, different units within an organisation might need to tailor the process differently: the customisations that work for the US branch of an organisation may not work in its Australian subsidiary. So one often ends up with different organisational units implementing their own versions of the standard.
Incompleteness
Related to the above point is the fact that standards are incomplete. We have seen that standards omit context. However, that is not all: standards documents are generally written at a high level that overlooks technical detail. As a consequence, those implementing standards have to fill in the gaps based on their knowledge of the technology. This inevitably leads to a divergence between an espoused standard and its implementation.
Ambiguity
Two people who read a set of high-level instructions will often come away with different interpretations of what exactly those instructions mean. Such differences can be overcome provided that:
- Those involved are aware of the differences in interpretation, and
- They care enough to want to do something about it.
These points are moot. Firstly, people tend to assume that their interpretation is the right one. Secondly, even if they are aware of ambiguities, they may choose not to seek clarification because of geographical, language and other barriers.
Other factors
Some may argue that it is possible to work through some of the problems listed above. For example, it is possible – with some effort – to reduce incompleteness and ambiguity. Nevertheless, even if one does this (and the effort should not be underestimated!), there are other factors that can sabotage the implementation of standards. These include:
- Politics – It is a fact of life that organisations consist of stakeholder groups with different interests. Quite often these interests will conflict with each other. A good example is the outsourcing vs. in-house IT debate, in which management and staff usually have opposing views.
- Legacy – Those who want to implement standards have to overcome the resistance of legacy – the installed base that already exists within the organisation. Typically, owners and users of legacy systems will oppose the imposition of new standards, first overtly and, if that does not work, then covertly. Moreover, legacy applications make demands of their own – infrastructure requirements, interfaces, support and so on – any of which may be incompatible with the new standards.
- FUD factor – Finally, there is the whole issue of FUD (Fear, Uncertainty and Doubt) caused by the new standards. Many IT staff and other employees view standards negatively because they represent an unknown. Although much is said about the need to inform and educate people, most often this is done in a half-baked way that only serves to increase FUD.
In summary
Although the implementation of corporate IT standards can reduce an organisation’s application portfolio and the attendant costs, it does not reduce complexity as much as managers might hope. As discussed above, non-universality, incompleteness and ambiguity of standards will generally end up subverting standardization (see my post entitled The ERP paradox for an example of this at work). Moreover, even if an organisation addresses the inherent shortcomings of standards, the human factor remains: individuals who might lose out will resist change, and different groups will push to have their preferred platforms included in the standard.
In summary: a standardized IT environment will remain a mirage, tantalizingly in sight but always out of reach.
The Labyrinths of Information – a book review
Introduction
Once implemented, IT systems can evolve in ways that are quite different from their original intent and design. One of the reasons for this is that enterprise systems are based on simplistic models that do not capture the complexities of real organisations. The gap between systems and reality is the subject of a fascinating book by Claudio Ciborra entitled, The Labyrinths of Information. Among other things, the book presents an alternative viewpoint on systems development, one that focuses on reasons for the divergence between design and reality. It also discusses other aspects of system development that tend to be obscured by mainstream development methodologies and processes. This post is a summary and review of the book.
Background
The standard treatment of systems development in corporate environments is based on the principles of scientific management. Yet, as Ciborra tells us,
…science-based, method-driven approaches can be misleading. Contrary to their promise, they are deceivingly abstract and removed from practice. Everyone can experience this when he or she moves from the models to the implementation phase. The words of caution and pleas for ‘change management’ interventions that usually accompany the sophisticated methods and polished models keep reminding us of such an implementation gap. However, they offer no valid clue on how to overcome it…
Just to be clear, Ciborra offers no definitive solutions either. However, he offers “clues on how to bridge the gap” by looking into some of the informal techniques and approaches that people “on the ground” – users, designers, developers or managers – use to work and cope with technology. He is not concerned with techniques or methodologies per se, but rather with how people deal with the messy day-to-day business of working with technology in organisations.
The book is organised as a collection of essays based on Ciborra’s research papers spanning a couple of decades – from the mid-1980s until a few years prior to his death in 2005. I discuss each of the chapters in order below, providing links to the original papers where I could find them.
The divergence between models and reality
Most of the tools and techniques used in systems evaluation, design and development are based on simplified models of organisational reality. However, organisations do not function according to organograms, data flow diagrams or entity-relationship models. Models used by systems professionals abstract away much of the messiness of real life. The methods that come out of such simplifications cannot deal with the complexities of a real organisation. As Ciborra states, “…concern with method is one of the key aspects of our discipline and possibly the true origin of its crisis…”
Indeed, as any systems professional will attest, unforeseen occurrences and situations inevitably encountered in real life are what cause the biggest headaches in the implementation and acceptance of systems. Those on the ground deal with such exceptions through creative but essentially ad hoc approaches. Much of the book is a case-study based discussion of such improvised approaches to systems development.
Making (do) with what is at hand
Ciborra argues that successful systems are invariably imitated by competitors, so any competitive advantage offered by such systems is, at best, limited. A similar argument holds for standards and best practices – they promote uniformity rather than distinction. Given this, organisations should strive towards practices that cannot be copied. They should work towards inimitability.
In art, bricolage refers to a process of creating a work from whatever is at hand. Among other things it involves tinkering, improvising and generally making do with what is available. Ciborra argues that many textbook cases of strategic systems in fact evolved through bricolage, tinkering and serendipity rather than through planning. Among the cases he discusses are the Sabre Reservation System developed by American Airlines and the development of email (as part of the ARPANET project). Moreover, although the Sabre System afforded American Airlines a competitive advantage for a while, it soon became part of the travel reservation infrastructure, thereby becoming an operational necessity rather than an advantage. This is much the same point that Nicholas Carr made in his article, IT Doesn’t Matter.
The question that you may be asking at this point is: “All this is well and good, but does Ciborra have any solutions to offer?” Well, that’s the problem: Ciborra tells us that bricolage and improvisation ought to be encouraged, but offers little advice on how this can be done. For example, he tells us to “Value bricolage strategically”, “Design tinkering” and “Establish systematic serendipity” – this sounds great in theory, but what does it really mean? It is platitudinous advice that is hard to act on.
Nevertheless his main point is a good one: that managers should encourage informal, creative practices instead of clamping down on them. This advice has not generally been heeded. Indeed, corporate IS practices have gone the other way, down the road of standardisation and best practices. Ciborra tells us in no uncertain terms that this is not a good thing.
The enframing effect of technology
This is, in my opinion, the most difficult chapter in the book. It is based on a paper by Ciborra and Hanseth entitled, From tool to Gestell: Agendas for managing the information infrastructure. In German the term Gestell means shelf or rack. The philosopher Martin Heidegger used the term to describe the way in which technology frames the way we view (or “organise”) the world. Ciborra highlights the way in which existing infrastructure affects the success of business processes and practices. He emphasises that technology-based enterprise initiatives are doomed to fail unless they pay due attention to:
- Existing or installed infrastructure.
- Local needs and concerns.
Instead of attempting to oust old technology, system designers and implementers need to co-opt or cultivate the installed base (and the user community) if they are to succeed at all. In this sense installed infrastructure is an actor (like an individual) with its own interest and agenda. It provides a context for the way people think and also influences future development.
The notion of Gestell thus reminds us of how existing technology influences and limits the way we think. To get around this, Ciborra suggests that we should:
- Be aware of technology and standards, but not be captive to them.
- Think imaginatively, but pay attention to the installed base (existing platforms and users).
- Remember that top-down technology initiatives rarely succeed.
The drifting of information infrastructure
Ciborra uses Donald Schoen’s metaphor of the high ground and the swamp to highlight the gap between the theory and practice of information systems (see this paper by Schoen for a discussion of the metaphor). The high ground is the executive management view, where methodologies and management theories hold sway, while the swamp is the coalface where the messy, day-to-day reality of organisational work unfolds. In the swamp of day-to-day work, people tend to use available technology in any way possible to solve real (and messy) problems. So, although a particular technology may have an espoused or intended aim, it may well be used in ways that are completely unforeseen by its designers.
The central point of this essay is that the full implications of a technology are often realised only after it has been implemented and used for a while. In Ciborra’s words, technology drifts – that is, it is put to uses that cannot be foreseen. Moreover, it may never be used in ways that were intended by its designers. Although Ciborra lists several cases that demonstrate this point, in my opinion his blanket claim that technology drifts is a bit over the top. Sure, in some cases technologies may be used in unforeseen ways, but by and large they are used in ways that are intended and planned.
The organisation as a host
Reactions to a new technology in an organisation are generally mixed – some people may view the technology with some trepidation (because of the changes to their work routines, for instance) while others may welcome it (because of promised efficiencies, say). In metaphorical terms, the new technology is a “guest,” whose “desires” and “intentions” aren’t fully known. Seen in the light of this metaphor, the notion of hospitality makes sense: as Ciborra puts it, the organisation hosts the technology.
To be sure, the idea of hospitality applying to objects such as information systems will probably cause a few raised eyebrows. However, it isn’t as “out there” as it sounds. Consider, for example, the following implications of the metaphor:
- Interaction between the host and guest can change both parties.
- If the technology is perceived as unfriendly, it will be rejected (or even ejected!).
- System development and operations methodologies are akin to cultural rituals (they are how we “deal with” the guest).
- Technologies, like guests, stay for a while but not forever.
Ciborra’s intent in this and most of the other essays is to make us ponder over the way we design, develop and run systems, and possibly view what we do in a different light.
The organisation as a platform
In this essay Ciborra looks at the way in which successful technology organisations adapt and adjust to rapidly changing environments. It is based on his paper entitled, The Platform Organization: Recombining Strategies, Structures and Surprises. Using a case study, he makes the point that the only way organisations can respond to rapidly evolving technology markets is to be open to recombining available resources in flexible ways: it is impossible to start from scratch; one has to work with what is at hand, using it in creative ways.
Another point he makes is that the structure of an organisation (its hierarchy and form) at any particular time is less important than how it got there, where it is headed and what obstacles lie in the way. To quote from the book:
…analysing and evaluating the platform organisation at a fixed point in time is of little use: it may look like a matrix, or a functional hierarchy, and one may wonder how well its particular form fits the market for that period and what its level of efficiency really is. What should be appreciated, instead, is the whole sequence of forms adopted over time, and the speed and friction in shifting from one to the other.
However, the identification of such a trajectory can be misleading – despite after-the-fact rationalisations, management in such situations is often based on improvised actions rather than carefully laid plans. Although this may not always be so, I suspect it is more common than managers would care to admit.
Improvisation and mood
By now the reader would have noted that Ciborra’s focus is squarely on the unexpected occurrences in day-to-day organisational work. So it will come as no surprise that the last essay in the book deals with improvisation.
Ciborra argues that most studies on improvisation have a cognitive focus – that is, they deal with how people respond to emerging situations by “quick thinking.” In his opinion, such studies ignore the human aspect of improvised actions, the emotions and moods evoked by situations that call for improvisation. These, he suggests, can be the difference between improvised actions and panic.
As he puts it, people are not cognitive robots – their moods will determine whether they respond to a situation with indifference or with interest and engagement. This human dimension, though elusive, is the key to understanding improvisation (and indeed, any creative or innovative action).
He also discusses the relationship between improvisation and time – something I have discussed at length in an earlier post, so I’ll say no more about it here.
A methodological postscript
In a postscript to the book, Ciborra discusses his research philosophy – the thread that links the essays in the book. His basic contention is that methodologies and organisational models are based on after-the-fact rationalisations of real phenomena. More often than not, such methods and models are idealisations that omit the messiness of real-life organisations. They are abstractions, not reality. As such they can guide us, but we should remain ever open to the surprises that real life may afford us.
Summarising
The essential message that Ciborra conveys is a straightforward one – that the real world is a messy place and that the simplistic models on which systems are based cannot deal with this messiness in full. Despite our best efforts there will always be stuff that “leaks out” of our plans and models. Ciborra’s book celebrates this messiness and reminds us that people matter more than systems or processes.