Eight to Late

Sensemaking and Analytics for Organizations

Archive for the ‘Paper Review’ Category

Routines and reality – on the gap between the design and use of enterprise systems


Introduction

Enterprise IT systems that are built or customised according to specifications sometimes fail because of a lack of adoption by end-users. Typically this is treated as a problem of managing change, and is dealt with in the standard way: by involving key stakeholders in system design, keeping them informed of progress, generating and maintaining enthusiasm for the new system and, most important, spending enough time and effort on comprehensive training programmes.

Yet these efforts are sometimes in vain and the question, quite naturally, is: “Why?”  In this post I offer an answer, drawing on a paper by Brian Pentland and Martha Feldman entitled, Designing Routines: On the Folly of Designing Artifacts While Hoping for Patterns of Action.

Setting the context

From a functional point of view, enterprise software embodies organisational routines – i.e. sequences of actions that have well-defined objectives, and are typically performed by business users in specific contexts.   Pentland and Feldman argue that although enterprise IT systems tend to treat organisational routines as well defined processes (i.e. as objects), it is far from obvious that they are so. Indeed, the failure of business process “design” or “redesign” initiatives can often be traced  back to a fundamental misunderstanding of what organisational routines are. This is a point that many system architects, designers and even managers / executives would do well to understand.

As Pentland and Feldman state:

… the frequent disconnect between [system or design] goals and results arises, at least in part, because people design artifacts when they want patterns of action…we believe that designing things while hoping for patterns of action is a mistake. The problem begins with a failure to understand the nature of organizational routines, which are the foundation of any work process that involves coordination among multiple actors… even today, organizational routines are widely misunderstood as rigid, mundane, mindless, and explicitly stored somewhere. With these mistaken assumptions as a starting point, designing new work routines would seem to be a simple matter of creating new checklists, rules, procedures, and software. Once in place, these material artifacts will determine patterns of action: software will be used, checklists will get checked, and rules will be followed.

The fundamental misunderstanding is that design artifacts, checklists, and rules and procedures encoded in software are mistaken for the routine instead of being seen for what they are: idealised representations of the routine. Many software projects fail  because designers do not understand this.  The authors describe a case study that highlights this point and then  draw some general inferences from it.  I describe the case study in the next section and then look into what can be learnt from it.

The situation

The case study deals with a packaged software implementation at a university. The university had two outreach programs that were quite similar, but were administered independently by two different departments using separate IT systems. Changes in the university IT infrastructure meant that one of the departments would lose the mainframe that hosted their system.

An evaluation was performed, and the general consensus was that it would be best for the department losing the mainframe to start using the system that the other department was using.  However, this was not feasible because the system used by the other department was licensed only for a single user. It was therefore decided to upgrade to a groupware version of the product. Since the proposed system was intended to integrate the outreach-related work of the two departments, this also presented an opportunity to integrate some of the work processes of the two departments.

A project team was set up, drawing on expertise from both departments. Requirements were gathered and a design was prepared based on the requirements. The system was customised and tested as per the design, and data from the two departments was imported from the old systems into the new one. Further, the project team knew the importance of support and training: additional support was organised, as were training sessions for all users.

But just as everything seemed set to go, things started to unravel. In the authors' words:

As the launch date grew nearer, obstacles emerged. The implementation met resistance and stalled. After dragging their feet for weeks, [the department losing the mainframe] eventually moved their data from the mainframe to a stand-alone PC and avoided [the new system] entirely. The [other department] eventually moved over [to the new system], but used only a subset of the features. Neither group utilized any of the functionality for accounting or reporting, relying instead on their familiar routines (using standalone spreadsheets) for these parts of the work. The carefully designed vision of unified accounting and reporting did not materialize.

People in the department that was losing the mainframe worked around the new system by migrating their data to a standalone PC and using that to run their own system. People in the other department did migrate to the new system, but used only a small subset of its features.

All things considered, the project was a failure.

So what went wrong?

The authors emphasise that the software was more than capable of meeting the needs of both departments, so technical or even functional failure can be ruled out as a reason for non-adoption. On speaking with users, they found that the main objections to the system had to do with the work patterns it imposed. Specifically, people objected to having to give up control over their work practices and their identities (as members of specific departments). There was a yawning gap between the technical design of the system and the work processes as they were understood and carried out by people in the two departments.

The important thing to note here is that people found ways to work around the system despite the fact that the system actually worked as per the requirements. The system failed despite being a technical success. This is a point that those who subscribe to the mainstream school of enterprise system design and architecture would do well to take note of.

Dead and live routines

The authors go further: to understand resistance to change, they invoke the notion of dead and live routines. Dead routines are those that have been represented in technical artifacts such as documentation, flowcharts, software etc. whereas live routines are those that are actually executed by people. The point is, live routines are literally living objects – they evolve in idiosyncratic ways because people inevitably tweak them, sometimes even without being conscious that they are making changes. As a consequence live routines often generate new patterns of action.

The crux of the problem is that software is capable of representing dead routines, not live ones.

Complementary aspects of routines

Routines are composed of people’s understandings of the routines (the authors call this the ostensive aspect) and the way in which they are actually performed or carried out (the authors call this the performative aspect). The two aspects, the routine as understood and the routine as performed, complement each other:  new actions will modify one’s understanding of a routine, and the new understanding in turn modifies future actions.

Now, the interesting thing is that no artifact can represent all aspects of a routine – something is always left out. Even though certain artifacts such as written procedures may be intended to change both the ostensive and performative aspects of a routine (as they usually are), there is no guarantee that they will actually influence behaviour.  Managers who encounter resistance to change will have had first-hand experience of this: it is near impossible to force change upon a group that does not want it.

This leads us to another set of complementary aspects of routines. On the one hand, technology puts in place structures that enable certain actions while constraining others – this is the standard technological or material view of processes as represented in systems. In contrast, according to the social or agency view, people are free to use technology in whatever way they like and sometimes even not use it at all. In the case study, the former view was the one that designers took whereas users focused on the latter one.

The main point to take away from the foregoing discussion is that designers/ engineers and users often have very different perspectives on processes.  Software that is based on a designer/engineer perspective alone will invariably end up causing problems for end users.

Routines and reality

Traditional systems design proceeds according to the following steps:

  • Gather requirements
  • Analyse and encode them in rules
  • Implement rules in a software system
  • Provide people with incentives and training to follow the rules
  • Roll out the system

Those responsible for the implementation of the system described in the case study followed the steps faithfully, yet the system failed because one department didn’t use it at all and the other used it selectively.  The foregoing discussion tells us that the problem arises from confusing artifacts with actions, or as I like to put it, routines with reality.

As the authors write:

This failure to understand the difference between artifacts and actions is not new…the failure to distinguish the social (or human) from the technical is a “category mistake.” Mistakenly treating people like machines (rule-following automata) results in a wide range of undesirable, but not entirely surprising, outcomes. This category difference has been amply demonstrated for individuals, but it applies equally (if not more so) to organizational routines. This is because members’ embodied and cognitive understandings are often diverse, multiple, and conflicting.

The failure to understand the difference between routines and reality is not new, but it appears that the message is yet to get through: organisations continue to implement (new routines via) enterprise systems without putting in the effort to understand ground-level reality.

Some tips for designers

The problem of reconciling user and designer perspectives is not an easy one. Nevertheless, the authors describe an approach that may prove useful to some. The first concept is that of a narrative network – a collection of functional events that are related by the sequence in which they occur. A functional event is one in which two actants (or objects that participate in a routine) are linked by an action. The actants here could be human or non-human and the action represents a step in the process. Examples of functional events would be:

  1. The sales rep calls on the customer.
  2. The customer orders a product.
  3. The product is shipped to the customer.

The important thing to note is that a functional event does not specify how the process is carried out – that can be done in myriad different ways. For example, the sales rep may meet the customer face-to-face or she may just make a phone call. The event only specifies the basic intent of the action, not the detail of how it is to be carried out. This leaves open the possibility of improvisation if the situation calls for it. More importantly, it places the responsibility for making that decision on the person carrying out the action (in the case of a human actant).

Then it is useful to classify a functional event based on the type of actants participating in the event – human or (non-human) artifact. There are four possibilities as shown in Figure 1.

Figure 1: Human-artifact grid

The interesting thing to note is that from the perspective of system design, an artifact-artifact event is one over which the designer has the strongest control (actions involving only artifacts can be automated) whereas a designer’s influence is the weakest in human-human events (humans rarely, if ever, do exactly as they are told!). Decomposing a routine in this way can thus serve to focus designer efforts in areas where they are likely to pay off and, more importantly, help them understand where they are likely to get some push back.
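
To make the idea a little more concrete, here is a minimal sketch (in Python, and mine rather than the authors') of how a narrative network might be represented: each functional event links two actants by an action, and the actant types place the event in one of the four cells of the human-artifact grid. The actants, actions and helper names are purely illustrative.

```python
from dataclasses import dataclass
from enum import Enum


class ActantType(Enum):
    HUMAN = "human"
    ARTIFACT = "artifact"


@dataclass(frozen=True)
class Actant:
    name: str
    kind: ActantType


@dataclass(frozen=True)
class FunctionalEvent:
    # Two actants linked by an action; the "how" is deliberately left unspecified.
    source: Actant
    action: str
    target: Actant

    def quadrant(self) -> str:
        # Place the event in one of the four cells of the human-artifact grid (Figure 1).
        return f"{self.source.kind.value}-{self.target.kind.value}"


# A narrative network is simply a collection of functional events related by sequence.
sales_rep = Actant("sales rep", ActantType.HUMAN)
customer = Actant("customer", ActantType.HUMAN)
order_system = Actant("order system", ActantType.ARTIFACT)
warehouse_system = Actant("warehouse system", ActantType.ARTIFACT)

narrative_network = [
    FunctionalEvent(sales_rep, "calls on", customer),                        # human-human
    FunctionalEvent(customer, "places an order via", order_system),          # human-artifact
    FunctionalEvent(order_system, "sends a pick list to", warehouse_system), # artifact-artifact
]

for event in narrative_network:
    print(f"{event.source.name} {event.action} {event.target.name} [{event.quadrant()}]")
```

Walking through the events this way makes the designer's leverage visible at a glance: the artifact-artifact events are candidates for automation, while the human-human events are where improvisation, and hence push back, is most likely.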

Finally, there are some general points to consider when designing a system. These include:

  1. Reinforce the patterns of the routine: Much as practice makes an orchestra perform better, organisations have to invest in practising their routines. Practice connects the abstract routine to the performed one.
  2. Consider every stakeholder group’s point of view: This is a point I have elaborated at length in my posts on dialogue mapping so I’ll say no more about it here.
  3. Understand the relationship between abstract patterns and actual performances:  This involves understanding whether there are different ways to achieve the same goal. Also consider whether it is important to enforce a specific set of actions leading up to the goal, or whether the actions matter less than the goal.
  4. Encourage people to follow patterns of action that are important:  This involves creating incentives to encourage people to carry out certain important actions in specified ways. It is not enough to design and implement a particular pattern within a routine. One has to make it worthwhile for people to follow it.
  5. Make it possible for users to become designers: Most routines involve decision points where actors have to choose between alternative, but fixed, courses of action. At such points human actors have little room to improvise. Instead, it may be more fruitful to replace decision points with design points, where actors improvise their actions.
  6. Lock in actions that matter: Notwithstanding the previous point, there are certain actions that absolutely must be executed as designed. It is as important to ensure that these are executed as they should be as it is to allow people to improvise in other situations.
  7. Keep an open mind and invite engagement: Perhaps the most important point is to continually engage with people who work the routine and to be open to their input as to what's working and what isn't.

Most of the above points are not new; I’d even go so far as to say they are obvious. Nevertheless they are worth restating because they are often unheeded.

Conclusion

A paradox of enterprise systems is that they can fail even if built according to specification. When this happens it is usually because designers fail to appreciate the flexibility of the business process(es) in question. As the authors state, “…even today, organizational routines are widely misunderstood as rigid, mundane, mindless, and explicitly stored somewhere.”  This is particularly true for processes that involve humans.  As the authors put it, when automating processes that involve humans it is a mistake to design  software artefacts while hoping for (fixed) patterns of action. The tragedy is that this is exactly how enterprise systems are often designed.

Written by K

November 13, 2013 at 8:55 pm

On the contradictions of consulting (and management) rhetoric


Introduction

Successful management consultants are often seen as experts and trendsetters in the business world. The best among them are able to  construct convincing narratives about their expertise and experience, thereby gaining the  trust of senior managers in large organisations.

Have you ever wondered how they manage to pull this off?

In a paper entitled, The Invincible Character of Management Consulting Rhetoric: How One Blends Incommensurates While Keeping Them Apart, Jonas Berglund and Andreas Werr discuss how consultants, unbeknownst to their clients, often draw from two mutually contradictory forms of rhetoric to construct their arguments: rational (scientific or fact-based) and practical (action-based). This renders them immune to potential challenges from skeptics.  This post, which is based on the work of Berglund and Werr, is an elaboration of this claim.

Background and case study

Typically management consultants are hired to help organisations formulate and implement strategic initiatives aimed at improving organisational performance.  On the ground, such initiatives usually result in large-scale change initiatives such as organisation-wide restructuring or the implementation of enterprise systems.  Whatever the specific situation, however, consultants are generally brought in because clients perceive them as being experts who have the necessary knowledge and practical experience to plan and execute such transformations.

A typical consulting engagement consists of many interactions between consultants and diverse client-side stakeholders.  Berglund and Werr begin their paper with a description of an example of such an interaction drawn from their fieldwork in a large organisation. In brief: the  example describes a workshop that was aimed at redesigning business processes in an organisation. The two-day event was facilitated by the consultants and involved many stakeholders from the business.  I reproduce their description of the event below so that you can read it in its original form:

The event begins with a plenary session. The 25 participants—a selection of key persons on different levels in the organization—sit around a u-shaped table in a large room. Three consultants sit at one end of the table. One (a bit older than the others) is Ben, the project manager.

At 9 am sharp he rises and enters the stage. A nervousness is reflected in his somewhat impatient movements and way of talking. This is an important presentation. It is the first time since the ‘kick off’ of the project, that it is being delivered to a larger audience. Ben welcomes the participants and briefly introduces himself: ‘I am a consultant at Consulting Ltd. My specialty is BPR [Business Process Reengineering]. I have worked extensively with this method in the telecom industry.’ He also briefly introduces the two colleagues sitting at the end of the table. But the consulting team is not complete: ‘We are waiting for Alan, a portal figure and innovator concerning BPR.’

Ben suggests beginning the seminar with a brief introduction of the participants. After this has been completed, he remarks: ‘we clearly have a massive competence here today’. Thereafter, he leaves the floor to Ken, the CEO of the company, who says the following:

‘There are many reasons why we are sitting here today. The triggering factor has been the rapid growth rate of the market. But why should we start working with BPR? I have worked a lot with process improvement, and I have failed many times, but then I heard a presentation by Alan and everything fell in place. I saw the mistakes we had made—we focused on the current situation instead of being creative.’ Following this introduction, the importance of the project is further stressed. ‘The high growth rate of the market demands a new way of working . . . The competitive situation for the company is getting harder; the years when the customers just came to us are over. Now we have to start working for our money . . . The reason for this project is that we want to become the best from our owners’, customers’ and employees’ perspective.’

After this presentation, Ben takes over the floor again: ‘I have something to tell you. I want to report what we have done in the project so far . . . We have worked in four steps, which is a quite typical approach in reengineering’, he says, showing a slide headed ‘Method for Implementation’, which depicts four project phases arranged in the form of steps from the lower left to the upper right. The more detailed exploration of these phases, and the related activities occupy the group for some minutes.

Thereafter, a sequence of transparencies is shown. They describe the overall situation of the company using well-known business concepts. The titles of the slides read ‘Strategic Positioning’ (the model presented under this title has strong similarities with the BCG [Boston Consulting Group] matrix), ‘SWOT Analysis’, ‘Core Competencies’, and ‘Critical Success Factors’.

I expect many readers who work in organisational settings will be able to relate elements from the above extract to their own experiences with management consultants.

Although the case study is dated, the rhetoric used by the consultant is timeless. Indeed, in such plenary sessions, the main aim of consultants (and client-side senior management) is to justify the proposed changes and convince client-side staff to get involved in implementing them. This is as true now as it was a decade ago; the rhetoric has hardly changed at all. What's more interesting, though, is that their arguments, taken as a whole, are often inconsistent. To see why, let's take a closer look at two kinds of rhetoric employed by consultants.

The rhetoric of reason

Consultants often legitimize their proposed actions by claiming to use “established” or “proven” methods. At the time of the case study (remember this was in the 90s), BPR was all the rage and, as a consequence, there were a number of contemporary books and articles (both in research and trade journals) that consultants could draw upon to legitimize their claims.  Indeed, many of the articles about BPR from that era delved into things such as critical success factors and core competencies – the very terms used by Ben, the consultant in the case study.  By doing so, Ben emphasised that BPR was a logically justifiable undertaking for the client  organisation.

However, that's not all: by referring to a stepwise "method for implementation," Ben makes the process seem like a rational one with an "if we do X then Y will follow" logic. Of course, real life is never that simple, as evidenced by the statistics on failed BPR projects. Consultants often confuse their clients by presenting the map (the idealised process) as being equivalent to the territory (organisational reality).

The rhetoric of action

To be sure, those who run organisations care more about results than models or methodologies. As a result,  consultants have to portray  themselves as being practical rather than theoretical. This is where the rhetoric of action comes in.

Ben’s reference to his “extensive experience in the telecom industry” and his invocation of   “Alan, the portal figure and innovator” are clearly intended to emphasise the consulting organisation’s experience and “innovative approaches” to  implementing BPR initiatives. Notice there are no references to reason here; there is only the implicit, “trust me, I’ve done this before”, and (if not that, then), “trust Alan, the portal figure and innovator.”

Ben's spiel is backed up by the CEO; consider the CEO's line: "…I have worked a lot with process improvement, and I have failed many times, but then I heard a presentation by Alan and everything fell in place. I saw the mistakes we had made…"

The boss heard the BPR Gospel According To Alan and had an epiphany; everything just “fell in place.”

Discussion

The short case study illustrates how consultants shift back and forth between two essentially incompatible modes of rhetoric when speaking to clients: a rational one which assumes the existence of objective management models and a normative one which appeals to human behaviours and emotions. This enables them to construct narratives that, on the surface, seem plausible and convincing, and more important, are hard to refute.

Although the rhetoric of reason refers to an idealised world of management models, its power and appeal  cannot be overstated. As the authors state:

The belief in experts and their techniques is firmly anchored in the modern belief in rationality. In our culture ‘the notions of ‘‘science’’, ‘‘rationality’’, ‘‘objectivity’’, and ‘‘truth’’ are bound up with one another’. Knowledge is power, and formalized knowledge is praised as the only legitimate form of knowledge, offering hard and objective truth in correspondence to reality.

Indeed, consultants play a huge role in the diffusion of new knowledge and models in the wider business world, thus perpetuating the myth that management models work.

On the other hand, consultants must show results. They have to portray themselves as action-oriented; hence Ben's attempt to establish his (and his organisation's) credibility via credentials. This mode of rhetoric downplays scientific-rational thinking and highlights wisdom gained through experience instead. As the authors state:

The chain of argument usually goes like this: merit always prevails over privilege; management knowledge is often contrasted with scientific, theoretically informed knowledge, which is regarded with suspicion by managers; and a persons’ track record and ‘hands-on’ experience is regarded as more important than expertise in general management skills acquired through extensive education.

Another facet of the rhetoric of action is that it emphasises the uniqueness of each situation. This is based on the idea that things in organisations are subject to continual change, and that the lack of a stable configuration and environment makes it impossible to employ management models. The implication is that the only way to deal with the mess is to create a sense of collectivism – a "we're in this together" attitude. The concept of organisational culture plays on this by portraying an organisation as this unique, wonderful place in which everyone shares the same values and deep sense of meaning. As the authors state:

The management literature discussing corporate culture is filled with religious and magical metaphors of the leader stressing the less rational sides of the organization, emphasizing the role of ceremonies, rituals, sagas, and legends (to mention only a few), in creating a system of shared values in the organization.

Seen in this light, the CEO’s references to Alan’s epiphany-inducing presentation, the “competitive situation,” and the need to “start working for our money” are attempts to generate this sense of collectivism.

The foregoing discussion highlights how consultants and their allies draw upon incompatible modes of rhetoric to justify their plans and actions. This essentially makes it difficult to refute their claims: if one tries to pin them down on logical grounds, they can argue based on their track record and deep experience; if one questions their experience, they can point to the logic of their models and processes.

…but we are all guilty

Finally, I should emphasise that management consultants are not the only ones guilty of using both forms of rhetoric; we all are: the business cases we write, the presentations we deliver, and the justifications we give our bosses and staff are all rife with examples of this. Out of curiosity, I re-read a business case I wrote recently and was amused to find a couple of contradictions of the kind discussed in this post.

Conclusion

In this post I have discussed how consulting rhetoric frequently draws upon two incompatible kinds of arguments – rational/fact-based and practical/action-based. This enables consultants to present arguments that are hard to refute on logical grounds. However, it isn't fair to single out consultants: most people who work in organisation-land are just as guilty of mixing incompatible rhetorics when attempting to convince others of the rightness of their views.

Written by K

August 1, 2013 at 10:55 pm

The paradox of the learning organisation


Introduction

The term learning organisation  refers to an organisation that continually modifies itself in response to changes in its environment.   Ever since Peter Senge coined the term in his book, The Fifth Discipline, assorted consultants and academics have been telling us that a learning organisation is an ideal worth striving for.  The reality, however,  is that most organisations that undertake the journey actually end up in a place far removed  from this ideal. Among other things, the journey may expose managerial hypocrisies that contradict the very notion of a learning organisation.  In this post, I elaborate on the paradoxes of learning organisations, drawing on an excellent and very readable paper by Paul Tosey entitled, The Hunting of the Learning Organisation: A Paradoxical Journey.

(Note:  I should point out that the term learning organisation should be distinguished from organisational learning: the latter refers to processes of learning whereas the former is about an ideal type of organisation. See this paper for more on the distinction.)

The journey metaphor

Consultants and other experts are quick to point out that the path to a learning organisation is a journey towards an ideal that can never be reached. Quoting from this paper, Tosey writes, "we would talk about the fact that, in some ways, the learning organization represented all of our collective best wishes for Utopia in the workplace." As another example, Peter Senge writes of it being "a journey in search of the experience of being a member of 'a great team'." Elsewhere, Senge suggests that the learning organisation is a vision that is essentially unattainable.

The metaphor of a journey seems an apt one at first, but there are a couple of problems with it. Firstly, the causal connection between initiatives that purport to get one to the goal and actual improvements in an organisation's capacity to learn is tenuous and impossible to establish. This suggests the journey is one without a map. Secondly, the process of learning about learning within the organisation – how it occurs, and how it is perceived by different stakeholders – can expose organisational hypocrisies and double-speak that may otherwise have remained hidden. Thus, instead of progressing towards the ideal, one may end up moving away from it. Tosey explores these paradoxes by comparing the journey of a learning organisation to the one described in Lewis Carroll's poem, The Hunting of The Snark.

Hunting the Snark (and the learning organisation)

Carroll's poem tells the story of ten characters who set off in search of a fabulous creature called a Snark. After many trials and tribulations, they end up finding out that the Snark is something else: a not-so-pleasant creature called a Boojum. Tosey comments that the quest described in the poem is a superb metaphor for the journey towards a learning organisation. As he states:

Initially, when reflecting on personal experience of organizational events… I was struck by the potential of the dream-like voyage of fancy on which Carroll’s characters embarked as an allegory of the quest for the learning organization. Pure allegory has limitations. Through writing and developing the article I came to view the poem more as a paradigm of the consequences of human desire for, and efforts at, progress through the striving for ideals. In other words the poem expresses something about our `hunting’. In this respect it may represent a mythological theme, a profound metaphor more than a mere cautionary moral tale.

There are many interesting parallels between the hunt for the Snark and the journey towards a learning organisation. Here are a few:

The expedition to find the Snark is led by a character called the Bellman who asserts: “What I tell you three times is true.” This is akin to the assurances (pleas?) from experts who tell us (several times over) that it is possible to transform our organisations into ones that continually learn.

The journey itself is directionless because the Bellman’s map is useless. In Carroll’s words:

Other maps are such shapes, with their islands and capes!
But we’ve got our brave Captain to thank:
(So the crew would protest) “that he’s bought us the best—
A perfect and absolute blank!”

Finally, the Snark is never found. In its stead, the crew find a scary creature called  a Boojum that has the power to make one disappear. Quoting from the poem:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away—
For the Snark was a Boojum, you see.

The journey towards a learning organisation often reveals the Boojum-like dark side of organisations. One common example of this is when the process of learning surfaces questions that are uncomfortable for those in power. Tosey relates the following tale, which may be familiar to some readers:

…a multinational company intending to develop itself as a learning organization ran programmes to encourage managers to challenge received wisdom and to take an inquiring approach. Later, one participant attended an awayday, where the managing director of his division circulated among staff over dinner. The participant raised a question about the approach the MD had taken on a particular project; with hindsight, had that been the best strategy? `That was the way I did it’, said the MD. `But do you think there was a better way?’, asked the participant. `I don’t think you heard me’, replied the MD. `That was the way I did it’. `That I heard’, continued the participant, `but might there have been a better way?’. The MD fixed his gaze on the participants’ lapel badge, then looked him in the eye, saying coldly, `I will remember your name’, before walking away.

One could argue that a certain kind of learning – that of how the organisation learns – occurred here:  the employee learnt that certain questions were out of bounds. I think it is safe to say, though, that this was not the kind of learning that was intended by those who initiated the program.

In the preface to the poem, Carroll notes that there is a rule – Rule 42 – which states, "No one shall speak to the Man at the Helm," to which the Bellman (the leader) added, "and the Man at the Helm shall speak to no one." This rendered communication between the helmsman and the crew impossible. In such periods the ship was not steered. The parallels between this and organisational life are clear: there is rarely open communication between those steering the organisational ship and rank and file employees. Indeed, Tosey reformulates Rule 42 in organisational terms as, "the organization shall not speak to the supervision, and the supervision shall not speak to the organization." This, he tells us, interrupts the feedback loop between individual experience and the organisation, which renders learning impossible.

(Note: I can't help but wonder if Douglas Adams' famous answer to life, the universe and everything was inspired by Carroll's Rule 42…)

In the poem, the ship sometimes sailed backwards when Rule 42 was in operation. Tosey draws a parallel between "sailing backwards" and the unexpected or unintended consequences of organisational rules. He argues that organisational actions can result in learning even if those actions were originally intended to achieve something else. The employee in the story above learnt something about the organisational hierarchy and how it worked.

Finally, it is a feature of Rule-42-like rules that they cannot be named. The employee in the story above could not  have pointed out that the manager was acting in a manner that was inconsistent with the intent of the programme – at least not without putting his own position at risk. Perhaps that in itself is a kind of learning, though of a rather sad kind.

Conclusion

Experts and consultants have told us many times over that the journey towards a learning organisation is one worth making…and as the Bellman in Carroll's poem says: "What I tell you three times is true." Nevertheless, the reality is that instances in which learning actually occurs tend to be more a consequence of accident than plan, and tend to be transient rather than lasting. Finally, and perhaps most important, the Snark may turn out to be a Boojum: people may end up learning truths that the organisation would rather remained hidden. And therein lies the paradox of the learning organisation.

Written by K

June 4, 2013 at 9:23 pm

A stupidity-based theory of organisations – a paper review


Introduction

The platitude “our people are  our most important asset”  reflects a belief that the survival and evolution of organisations depends  on the intellectual and cognitive capacities of the individuals who comprise them.   However,  in view of the many well documented examples of actions that demonstrate a lack of  foresight and/or general callousness about the fate of organisations or those who work in them,  one has to wonder if such a belief is justified, or even if it is really  believed by those who spout such platitudes.

Indeed, cases such as Enron or Worldcom (to mention just two) seem to suggest that stupidity may be fairly prevalent in present day organisations. This point is the subject of a brilliant paper by Andre Spicer and Mats Alvesson entitled, A stupidity-based theory of organisations. This post is an extensive summary and review of the paper.

Background

The notion that the success of an organization depends on the intellectual and rational capabilities of its people seems almost obvious. Moreover, there is a good deal of empirical research that seems to support this. In the opening section of their paper, Alvesson and Spicer cite many studies which appear to establish that developing the knowledge (of employees) or hiring smart people  is the key to success in an ever-changing, competitive environment.

These claims are mirrored in theoretical work on organizations. For example, Nonaka and Takeuchi's model of knowledge conversion acknowledges the importance of tacit knowledge held by employees. Although there is still much debate about the tacit/explicit knowledge divide, models such as these serve to perpetuate the belief that knowledge (in one form or another) is central to organisational success.

There is also a broad consensus that decision making in organizations, though subject to bounded rationality and related cognitive biases,  is by and large a rational process. Even if a decision is not wholly rational, there is usually an attempt to depict it as being so. Such behaviour attests to the importance attached to rational thinking in organization-land.

At the other end of the spectrum there are decisions that can only be described as being, well… stupid. As Rick Chapman discusses in his entertaining book, In Search of Stupidity, organizations occasionally make decisions that are plain dumb. However, such behaviour seldom remains hidden because of its rather obvious negative consequences for the organisation. Such stories thus end up being immortalized in business school curricula as canonical examples of what not to do.

Functional stupidity

Notwithstanding the above remarks on obvious stupidity, there is another category of foolishness that is perhaps more pervasive but remains unnoticed and unremarked. Alvesson and Spicer use the term functional stupidity to refer to such "organizationally supported lack of reflexivity, substantive reasoning, and justification."

In their words, functional stupidity amounts to the “…refusal to use intellectual resources outside a narrow and ‘safe’ terrain.”   It is reflected in a blinkered approach to organisational problems, wherein people display  an unwillingness  to consider or think about solutions that lie outside an arbitrary boundary.  A common example of this is when certain topics are explicitly or tacitly deemed as being “out of bounds” for discussion. Many “business as usual” scenarios are riddled with functional stupidity, which is precisely why it’s often so hard to detect.

As per the definition offered above, there are three cognitive elements to functional stupidity:

  1. Lack of reflexivity: this refers to the inability or unwillingness to question claims and commonly accepted wisdom.
  2. Lack of substantive reasoning: This refers to  reasoning that is based on a small set of concerns that do not span the whole issue. A common example of this sort of myopia is when organisations focus their efforts on achieving certain objectives with little or no questioning of the objectives themselves.
  3. Lack of justification: This happens when  employees do not question managers or, on the other hand, do not provide explanations regarding their  own actions. Often this is a consequence of power relationships in organisations. This may, for example, dissuade employees from “sticking their necks out” by asking questions that managers might deem out of bounds.

It should be noted that functional stupidity has little to do with limitations of human cognitive capacities. Nor does it have anything to do with ignorance, carelessness or lack of thought. The former can be  rectified through education and/or the hiring of consultants with the requisite knowledge,  and the latter via the use of standardised procedures and checklists.

It is also important to  note that  functional stupidity is not necessarily a bad thing. For example, by placing certain topics out of bounds, organisations can avoid discussions about potentially controversial topics and can thus keep conflict and uncertainty at bay.  This maintains  harmony, no doubt, but it also strengthens the existing organisational order which  in turn serves to reinforce functional stupidity.

Of course, functional stupidity also has negative consequences, the chief one being that it prevents organisations from finding solutions to issues that involve topics that have been arbitrarily deemed as being out of bounds.

Examples of functional stupidity

There are many examples of functional stupidity in recent history, a couple being the irrational exuberance of the internet boom of the 1990s, and the lack of critical examination of the complex mathematical models that led to the financial crisis of the last decade.

However, one does not have to look much beyond one’s own work environment to find examples of functional stupidity.  Many of these come under the category of  “business as usual”  or “that’s just the way things are done around here” – phrases that are used to label practices that are ritually applied without much thought or reflection.  Such practices often remain unremarked because it is not so easy to link them to negative outcomes.  Indeed, the authors point out that “most managerial practices are adopted on the basis of faulty reasoning, accepted wisdom and complete lack of evidence.”

The authors cite the example of companies adopting HR practices that are actually detrimental to employee and organisational wellbeing. Another common example is when organisations place a high value on gathering information which is then not used in any meaningful way. I have discussed this "information perversity" at length in my post entitled, The unspoken life of information in organisations, so I won't rehash it here. Alvesson and Spicer point out that information perversity is a consequence of the high cultural value placed on information: it is seen as a prerequisite to "proper" decision making. However, in reality it is often used to justify questionable decisions or simply to "hide behind the facts."

These examples suggest that functional stupidity may be the norm rather than the exception. This is a scary thought…but I suspect it may not be surprising to many readers.

The dynamics of stupidity

Alvesson and Spicer claim that functional stupidity is a common feature in organisations. To understand why it is so pervasive, one has to look into the dynamics of stupidity – how it is established and the factors that influence it. They suggest that the root cause lies in the fact that organisations attempt to short-circuit critical thinking through what they call economies of persuasion: activities such as corporate culture initiatives, leadership training, team / identity building, relabelling positions with pretentious titles, and many other such activities aimed at influencing employees through the use of symbols and images rather than substance. Such symbolic manipulation, as the authors call it, is aimed at increasing employees' sense of commitment to the organisation.

As they put it:

Organizational contexts dominated by widespread attempts at symbolic manipulation typically involve managers seeking to shape and mould the ‘mind-sets’ of employees. A core aspect of this involves seeking to create some degree of good faith and conformity and to limit critical thinking.

Although such efforts are not always successful, many employees do buy in to them and thereby identify with the organisation. This makes employees uncritical of the organisation’s  goals and the means by which these will be achieved. In other words, it sets the scene for functional stupidity to take root and flourish.

Stupidity management and stupidity self-management

The authors use the term stupidity management to describe managerial actions that prevent or discourage organisational actors (employees and other stakeholders) from thinking for themselves.   Some of the ways in which this is done include the reinforcement of positive images of the organisation, getting employees to identify with the organisation’s vision and myriad other organisational culture initiatives aimed at burnishing the image of the corporation. These initiatives are often backed by organisational structures (such as hierarchies and reward systems) that discourage employees from raising and exploring potentially disruptive issues.

The monitoring and sanctioning of activities that might disrupt the positive image of the organisation can be overt (in the form of warnings, say). More often, though, it is subtle. For example, in many meetings, participants know that certain issues cannot be raised. At other times, discussion and debate may be short-circuited by exhortations to "stop thinking and start doing." Such occurrences serve to create an environment in which stupidity flourishes.

The net effect of  managerial actions that encourage stupidity is that employees start to cast aside their own doubts and questions and behave in corporately acceptable ways – in other words, they start to perform their jobs in an unreflective and unquestioning way. Some people may actually internalise the values espoused by management; others may psychologically  distance themselves from the values but still act in ways that they are required to. The net effect of such stupidity self-management (as the authors call it) is that employees stop questioning what they are asked to do and just do it. After a while, doubts fade and this becomes the accepted way of working. The end result is the familiar situation that many of us know as “business as usual” or  “that’s just the way things are done around here.”

The paradoxes and consequences of stupidity

Functional stupidity can cause both feelings of certainty and dissonance in members of an organisation. Suppressing  critical thinking  can result in an easy acceptance of  the way things are.  The feelings of certainty that come from suppressing difficult questions can be comforting. Moreover, those who toe the organisational line are more likely to be offered material rewards and promotions than those who don’t. This can act to reinforce functional stupidity because others who see stupidity rewarded may also be tempted to behave in a similar fashion.

That said,  certain functionally stupid actions, such as ignoring obvious ethical lapses, can result in serious negative outcomes for an organisation. This has been amply illustrated in the recent past. Such events can prompt formal inquiries  at the level of the organisation, no doubt accompanied by  informal soul-searching at the individual level. However, as has also been amply illustrated, there is no guarantee that inquiries or self-reflection lead to any major changes in behaviour. Once the crisis passes, people seem all too happy to revert to business as usual.

In the end, though, when stark differences between the rhetoric and reality of the organisation emerge – as they eventually will – employees will see the contradictions between the real organisation and the one they have been asked to believe in. This can result in alienation from and cynicism about the organisation and its objectives. So, although stupidity management may have beneficial outcomes in the short run, there is a price to be paid in the longer term.

Nothing comes for free, not even stupidity…

Conclusion

The authors' main message is that despite the general belief that organisations enlist the cognitive and intellectual capacities of their members in positive ways, the truth is that organisational behaviour often exhibits a wilful ignorance of facts and/or a lack of logic. The authors term this behaviour functional stupidity.

Functional stupidity has the advantage of maintaining harmony, at least in the short term, but its longer term consequences can be negative. Members of an organisation "learn" such behaviour by becoming aware that certain topics are out of bounds and that they broach these at their own risk. Conformance is rewarded by advancement or material gain whereas dissent is met with overt or less obvious disciplinary action. Functional stupidity thus acts as a barrier that can stop members of an organisation from developing potentially interesting perspectives on the problems their organisations face.

The paper makes an interesting and very valid point about the pervasiveness of wilfully irrational behaviour in organisations. That said, I  can’t help but think that the authors  have written it with tongue firmly planted in cheek.

On the evolution of corporate information infrastructures


Introduction

In the last few decades two technology trends have changed much of the thinking about corporate IT infrastructures: commoditisation and the cloud.   As far as the first trend is concerned,  the availability of relatively cheap hardware and packaged “enterprise” software has enabled organisations to create their own IT infrastructures.  Yet, despite best efforts of IT executives and planners, most of these infrastructures take on lives of their own, often increasing in complexity to the point where they become unmanageable.

The maturing of cloud technologies in the last few years  appears to offer IT decision makers an attractive solution to this problem:  that of outsourcing their infrastructure headaches. Notwithstanding the wide variety of mix-and-match options of commodity and cloud offerings, the basic problem still remains: one can create as much of a mess in the cloud as one can in an in-house data center.  Moreover, the advertised advantages of cloud-based enterprise solutions can be illusory:  customers often find that solutions are inflexible and major changes can cost substantial sums of money.

Conventional wisdom tells us that these problems can be tackled by proper planning and control.  In this post I draw on Claudio Ciborra’s book, From Control to Drift: The Dynamics of Corporate Information Infrastructures, to show why such a view is simplistic and essentially untenable.

The effects of globalisation and modernity

The basic point made by Ciborra and Co.  is that initiatives to plan and control IT infrastructures via centrally-driven, standards-based governance structures are essentially misguided reactions to the unsettling effects of globalisation and modernity, terms that I elaborate on below.

Globalisation

Globalisation refers to the processes of interaction and integration between people of different cultures across geographical boundaries. The increasing number of corporations with a global presence is one of the manifestations of globalisation. For such organisations, IT infrastructures are seen as a means to facilitate globalisation and also to control it.

There are four strategies that an organisation can choose from when establishing a global presence. These are:

  • Multinational: Where individual subsidiaries are operated autonomously.
  • International: Where work practices from the parent company diffuse through the subsidiaries (in a non-formal way).
  • Global: Where local business activities are closely controlled by the parent corporation.
  • Transnational: This (ideal) model balances central control and local autonomy in a way that meets the needs of the corporation while taking into account the uniqueness of local conditions.

These four business strategies map to two corporate IT strategies:

  •  Autonomous: where individual subsidiaries have their own IT strategies, loosely governed by corporate.
  •   Headquarters-driven: where IT operations are tightly controlled by the parent corporation.

Neither is perfect; both have downsides that start to become evident only after a particular strategy is implemented. Given this, it is no surprise that organisations tend to cycle between the two strategies, with cycle times varying from five to ten years; a trend that corporate IT minions are all too familiar with.  Typically, though,  executive management tends to favour the centrally-driven approach since it holds the promise of higher control and reduced costs.

Another consequence of globalisation is the trend towards outsourcing IT infrastructure and services. This is particularly popular for operational IT – things like infrastructure and support. In view of this, it is no surprise that organisations often choose to outsource IT development and support to external vendors.  Equally unsurprising, perhaps, is that the quality of service often does not match expectations and there’s little that can be done about it.  The reason is simple:  complex contracts are hard to manage and perhaps more importantly, not everything can be contractualised. See my post on the transaction cost economics of outsourcing for more on this point.

The effect of modernity

The phenomenon of modernity forms an essential part of the backdrop against which IT systems are implemented. According to a sociological definition due to Anthony Giddens, modernity is "associated with (1) a certain set of attitudes towards the world, the idea of the world as open to transformation, by human intervention; (2) a complex of economic institutions, especially industrial production and a market economy; (3) a certain range of political institutions, including the nation-state and mass democracy".

Modernity is characterised by the following three “forces” that have a direct impact on information infrastructures:

  • The separation of space and time: This refers to the ways in which technology enables us to reconfigure our notions of geographical space and time. For instance, coordinating activities in distant locations is now possible – global supply chains and distributed project teams being good examples. The important consequence of this ability, relevant to IT infrastructures such as ERP and CRM systems, is that it makes it possible (at least in principle) for organisations to increase their level of surveillance and control of key business processes across the globe.
  • The development of disembedding mechanisms: As I have discussed at length in this post, organisations often "import" procedures that have worked well in other organisations. The assumption underlying this practice is that the procedures can be lifted out of their original context and implemented in another one without change. This, in turn, tacitly assumes that those responsible for implementing the procedure in the new context understand the underlying cause-effect relationships completely. This world-view, where organisational processes and procedures are elevated to the status of universal "best practices", is an example of a disembedding mechanism at work. Disembedding mechanisms are essentially processes via which certain facts are abstracted from their context and ascribed a universal meaning. Indeed, most "enterprise" class systems claim to implement such "best practices."
  • The reflexivity of knowledge and practice: Reflexive phenomena are those for which cause-effect relationships are bi-directional – i.e. causes determine effects which in turn modify the causes. Such phenomena are unstable in the sense that they are continually evolving – in potentially unpredictable ways. Organisational practices (which are based on organisational knowledge) are reflexive in the sense that they are continually modified in the light of their results or effects.  This conflicts with the main rationale for IT infrastructures such as ERP systems, which is to rationalise and automate organisational processes and procedures in a relatively inflexible manner.

Implications for organisations

One of the main implications of globalisation and modernity is that the world is now more interconnected than ever before. This is illustrated by the global repercussions of the financial crises that have occurred in recent times. For globalised organisations this manifests itself in not-so-obvious dependencies of the organisation’s well-being on events both within the organisation and outside it. These events are usually not within the organisation’s control, so they have to be managed as risks.

A standard response to risk is to increase control. Arguably, this may well be the most common executive-level rationale behind decisions to impose stringent controls and governance structures around IT infrastructures. Yet, paradoxically, the imposition of controls often leads to undesirable outcomes because of unforeseen side effects and the inability to respond to changing business needs in a timely manner.

A bit about standards

Planners of IT infrastructures spend a great deal of time worrying about which standards they should follow. This makes sense, if for no other reason than that corporate IT infrastructures are embedded in a larger (external) ecosystem made up of diverse organisations, each with its own infrastructure. Standards ease the problem of communication between interconnected organisations. For example, organisations often have to exchange information electronically in various formats. Without (imposed or de-facto) standards this would be very difficult, as IT staff would have to write custom programs to convert files from one format to another.
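To make this concrete, here is a minimal, purely illustrative sketch (in Python; the partner, field names and formats are all invented) of the kind of one-off glue code IT staff end up writing and maintaining when two organisations have not agreed on a common exchange format:

```python
# A hypothetical example, not drawn from the book: a partner sends purchase orders
# as semicolon-delimited text with dd/mm/yyyy dates, while our systems expect JSON
# with ISO 8601 dates. Absent a shared standard, someone has to write (and keep
# fixing) conversion code like this for every such partner.

import csv
import json
from datetime import datetime
from io import StringIO

PARTNER_EXPORT = """order_id;customer;order_date;amount
1001;Acme Pty Ltd;03/04/2013;1250.00
1002;Globex Corp;15/04/2013;980.50
"""

def convert_partner_orders(raw_text: str) -> str:
    """Convert the partner's ad hoc export into the JSON format our systems expect."""
    reader = csv.DictReader(StringIO(raw_text), delimiter=";")
    orders = []
    for row in reader:
        orders.append({
            "orderId": row["order_id"],
            "customer": row["customer"],
            # partner uses dd/mm/yyyy; we need ISO 8601 (yyyy-mm-dd)
            "orderDate": datetime.strptime(row["order_date"], "%d/%m/%Y").date().isoformat(),
            "amount": float(row["amount"]),
        })
    return json.dumps(orders, indent=2)

if __name__ == "__main__":
    print(convert_partner_orders(PARTNER_EXPORT))
```

Multiply this by every partner, every format and every format change, and the attraction of an agreed standard – even an imperfect one – becomes obvious.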

The example of file formats illustrates why those who plan and implement IT infrastructures prefer to go with well established technologies and standards rather than with promising (but unproven) new ones. The latter often cause headaches because of compatibility problems with preexisting technologies.  There are other reasons, of course, for staying with older technologies and established standards – acceptance, maturity and reliability being a few important ones.

Although the rationale for adopting standards seems like a sound one, there are a few downsides too.  Consider the following:

  • Lock in: This refers to the fact that once a technology is widely adopted, it is very difficult for competing technologies to gain a foothold. The main reason for this is that the dominant technology attracts a large number of complementary products, which make it more attractive to stick with the dominant standard. Additionally, contractual commitments, availability of expertise and switching costs make it unviable for customers to move to competitor products.
  • Inefficiency: This refers to the fact that a dominant standard is not necessarily the best one. There are many examples of cases where a dominant standard is demonstrably inferior to a less popular competitor. My favourite example is the Waterfall project management methodology, which became a standard for reasons other than its efficacy. See this paper for details of this fascinating story.
  • Incompatibility:  In recent years, consumer devices such as smartphones and tablets have made their way into corporate computing environments, primarily because of pressures and demands from technology savvy end-users. These devices pose problems for infrastructure planners and administrators because they are typically incompatible with existing corporate technology standards and procedures. As an example, organisations that have standardised on a particular platform such as Microsoft Windows may face major challenges when introducing devices such as iPads in their environments.

Finally, and most importantly, the evolution of standards causes major headaches for corporate IT infrastructure planners. Anyone who has been through an organisation-wide upgrade of an operating system will have lived this pain. Indeed, it is such experiences that have driven IT decision-makers to cloud offerings. The cloud brings with it a different set of problems, but that’s another story. Suffice it to say that the above highlights, once again, the main theme of the book: infrastructure planning is all well and good, but planners have to be aware that the choices they make will constrain them in ways they may not have foreseen.

Summing up

The main argument that Ciborra and his associates make is that corporate information infrastructures drift because they are subject to unpredictable forces both within and outside the hosting organisation. Standards and processes may slow the drift (if at all) but they cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs made up of systems, people and processes that interact with each other in (often) unforeseen ways. As Ciborra so elegantly puts it:

Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.

And therein lies an important message for those who plan and oversee information infrastructures.

Notes:

Sections of this post are drawn from my article entitled, The ERP Paradox.

Written by K

April 3, 2013 at 7:07 pm

“Strategic alignment” – profundity or platitude?

with 6 comments

Introduction

Some time ago I wrote a post entitled, Models and messes in management, wherein I discussed how a “scientific” approach to management has resulted in a one-size-fits-all  approach to problem solving in organizations.  This is reflected in the tendency of organisations to  implement similar information technology (IT) systems, often on the  “expert” advice of carbon-copy consultancies that offer commoditized solutions.

A particularly effective marketing tactic is to advertise such “solutions” as being able to help organisations achieve strategic alignment between IT and the business. In this post I discuss how the concept of “strategic alignment”, though seemingly sensible, makes no sense in the messy, real world of organization-land. My discussion is based on a brilliant paper by Claudio Ciborra entitled, De Profundis? Deconstructing the concept of strategic alignment. The paper analyses the notion of alignment as it is commonly understood in the context of IT – namely, as a process by which IT objectives are brought in line with those of the organization it serves.

Background

The paper begins with a short chronology of the term strategic alignment, starting with this highly cited paper published by Henderson and Venkatraman in 1993. That paper describes the need for alignment between the business and IT strategies of companies. More importantly, however, the authors detail a “Strategic Alignment Model” that purports to “guide management practice” towards achieving alignment between IT and the business. However, as Ciborra noted four years later, in 1997, it was still an open question as to what strategic alignment really meant and how it was to be achieved.

Fast forward 15 years to 2012, and it appears that the question of what strategic alignment is and how to achieve it is still an open one. Here are some excerpts from recently published papers:

In the abstract of their paper entitled, Strategic Alignment of Business Processes, Morrison et al. state:

Strategic alignment is a mechanism by which an organization can visualize the relationship between its business processes and strategies. It enables organizational decision makers to collect meaningful insights based on their current processes. Currently it is difficult to show the sustainability of an organization and to determine an optimal set of processes that are required for realizing strategies.” (italics mine)

Even worse, the question of what strategic alignment actually is remains far from settled. It appears that the term means different things to different people. In the abstract of their paper entitled, Reconsidering the Dimensions of Business-IT Alignment, Schlosser et al. state:

While the literature on business-IT alignment has become increasingly mature in the past 20 years, different definitions and conceptualizations have emerged. Several dimensions like strategic, intellectual, structural, social, and cultural alignment have been developed. However, no integrated and broadly accepted categorization exists and these dimensions are non-selective and do overlap…

This raises the question of how meaningful it is for organizations to pursue “alignment” when people are still haggling over the fine print of what it means.

Ciborra dealt with this very question 15 years ago. In the remainder of this post I summarize the central ideas of his paper, which I think are as relevant today as they were at the time it was written.

Deconstructing  strategic alignment

The whole problem with the notion of strategic alignment is nicely summarized in a paragraph that appears in Ciborra’s introduction:

…while strategic alignment may be close to a truism conceptually, in the everyday business it is far from being implemented. Strategy ends up in “tinkering” and the IT infrastructure tends to “drift”. If alignment was supposed to be the ideal “bridge” connecting the two key variables [business and IT], it must be admitted that such a conceptual bridge faces the perils of the concrete bridge always re-designed and never built between continental Italy and Sicily, (actually, between Scylla and Charybdis) its main problem being the shores: shifting and torn by small and big earthquakes….

The question, then, is how and why do dubious concepts such as strategic alignment worm their way into mainstream management?

Ciborra places the blame for this squarely in the camp of academics who really ought to know better. As he states:

[Management Science] deploys careful empirical research, claiming to identify “naturally occurring phenomena” but in reality measures theoretical (and artificial) constructs so that the messiness of everyday reality gets virtually hidden. Or it builds models that should be basic but do not last a few years and quickly fall into oblivion.

And a few lines later:

…practitioners and academics increasingly worship simplified models that have a very short lifecycle….managers who have been exposed to such illusionary models, presented as the outcome of quasi-scientific studies, are left alone and disarmed in front of the intricacies of real business processes and behaviors, which in the meantime have become even more complicated than when these managers left for their courses. People’s existence, carefully left out of the models, waits for them at their workplaces.

Brilliantly put, I think!

Boxes, arrows and platitudes

Generally, strategic alignment is defined as a fit, or a bridge, between different domains of a business. To be honest, the metaphor of a “bridge” seems to distract from reflecting on the chasm that is allegedly being crossed and the ever-shifting banks that lie on either side. Those who speak of alignment would do better to first focus on what they are trying to align. They may be surprised to find that the geometric models that pervade their PowerPoint presentations (e.g. organograms, boxes connected by arrows) are completely divorced from reality. Little surprise, then, that top-down management efforts at achieving alignment invariably fail.

Why do such tragedies play out over and over again?

Once again, Ciborra offers some brilliant insights…

The messy world, he tells us, gives us the raw materials from which we build simplified representations of the organization we work in. These representations are often built in the image of models that we have learnt or read about (or that have been spoon-fed to us by our expensive consultants). Unfortunately, these models are abstractions of reality – they cannot and must not be confused with the real thing. So when we speak of alignment, we are talking of an abstraction that is not “out there in the world” but instead resides only in our heads; in textbooks and journal papers; and, of course, in business school curricula.

As he says,

…there is no pure alignment to be measured out there. It is, on the contrary, our pre-scientific understanding of and participating in the world of organizations that gives to the notion of alignment a shaky and ephemeral existence as an abstraction in our discourses and representations about the world.

This is equally true of management research programs on alignment: they are built on multiple abstractions and postulated causal connections that are simply not there.

If academics who spend their productive working lives elaborating these concepts make little headway, what hope is there for the manager who is expected to implement or measure strategic alignment?

Is there any hope?

Ciborra tells us that the answer to the question posed in the sub-heading is a qualified “yes”. One can pursue alignment, but one must first realize that the concept is an abstraction that exists only in an ideal world in which there are no surprises and things always go according to plan. Perhaps more importantly, one needs to understand that implementations of technology invariably have significant unintended consequences that require improvised responses and adaptations. Moreover, these are essential aspects of the process of alignment, not things that can be wished away by better plans or improved monitoring.

So what can one do? As Ciborra states:

…we are confronted with a choice. Either we can do what management science suggests, that is “to realize these surprises in implementation as exceptions, build an ideal world of "how things should be" and to try to operate so that the messy world in which managers operate moves towards this model….or we suspend belief about what we think we know…and reflect on what we observe. Sticking to the latter we encounter phenomena that deeply enrich our notion of alignment… (italics mine)

Ciborra then goes on to elaborate on concepts of Care (dealing with the world as it is, but in a manner that is honest and free of preconceived notions), Cultivation (allowing systems to evolve in a way that is responsive to the needs of the organization rather than a predetermined plan) and Hospitality (the notion that the organization hosts the technology, much in the way that a host hosts a guest). It would take at least a thousand words more to elaborate on these concepts, so I’ll have to leave it here. However, if you are interested in finding out more, please see my summary and review of Ciborra’s book: The Labyrinths of Information.

…and finally, who aligns whom?

The above considerations lead us to the conclusion that, despite our best efforts, technology infrastructures tend to have lives of their own – they align us as much as we (attempt to) align them. IT infrastructures are deeply entwined with the organizations that host them, so much so that they are invisible (until they break down, of course) and even have human advocates who “protect” their (i.e. the infrastructure’s) interests! Although this point may seem surprising to business folks, it is probably familiar to those who work with information systems in corporate or other organizational environments.

A final word:  many other management buzz-phrases, though impressive sounding, are just as meaningless as the term strategic alignment.  However, I think I have rambled on enough, so I will leave you here to find and deconstruct some of these on your own.

Written by K

February 21, 2013 at 9:43 pm

Pseudo-communication in organisations

with 7 comments

Introduction

Much of what is termed communication in organisations is but a  one-way, non-interactive process of information transfer. It doesn’t seem right to call this communication, and other terms  such as propaganda  carry too much baggage. In view of this, I’ve been searching for an appropriate term for some time. Now –  after reading a paper by Terence Moran entitled  Propaganda as Pseudocommunication  – I think I have found one.

Moran’s paper discusses how propaganda, particularly in the social and political sphere, is packaged and sold as genuine communication even though it isn’t – and hence the term pseudo-communication. In this post, I draw on the paper to show how one can distinguish between communication and pseudo-communication in organisational life.

Background

Moran’s paper was written in 1978, against a backdrop of political scandal and so, quite naturally, many of the instances of pseudo-communication he discusses are drawn from the politics of the time. For example, he writes:

As Watergate should have taught us, the determined and deliberate mass deceptions that  are promulgated via the mass media by powerful political figures cannot be detected, much less combated easily.

Such propaganda is not the preserve of politicians alone, though. The wonderful world of advertising illustrates how pseudo-communication works in insidious ways that are not immediately apparent. For example, many car or liquor advertisements attempt to associate the advertised brand with sophistication and style, suggesting that somehow those who consume the product will be transformed into sophisticates.

As Moran states:

It was reported in the Wall Street Journal of August 14, 1978 that the Federal Trade Commission finally has realized that advertisements carry messages via symbol systems other than language. The problem is in deciding how to recognise, analyse and legislate against deceptive messages

Indeed! And I would add that  the problem has only become worse in the 30 odd years since Mr. Moran wrote those words.

More relevant to those of  us who work in organisation-land, however, is the fact that  sophisticated pseudo-communication has wormed its way into the corporate world, a prime example being  mission/vision statements that seem to be de rigueur for corporations. Such pseudo-communications are rife with platitudes, a point that Paul Culmsee and I explore at length in Chapter 1 of our book.

Due to the increasing sophistication of pseudo-communication it can sometimes be hard to distinguish it from the genuine stuff.  Moran  offers some tips that can help us do this.

Distinguishing between communication and pseudo-communication

Moran describes several characteristics of pseudo-communication vis-à-vis its authentic cousin. I describe some of  these below with particular reference to pseudo-communication in organisations.

1. Control and interpretation

In organisational pseudo-communication the receiver is not free to interpret the message as per his or her own understanding. Instead, the sender determines the meaning of the message and receivers are expected to “interpret” the message as the sender requires them to. Excellent examples of this are corporate mission/vision statements – employees are required to understand these as per the officially endorsed interpretation.

Summarising: in communication control is shared between the sender and receiver whereas in pseudo-communication, control rests solely with the sender.

2. Stated and actual purpose

To put it quite bluntly, the aim of most employee-directed corporate pseudo-communication is to get employees to behave in ways that the organisation would like them to. Thus, although pseudo-communiques may use words like autonomy and empowerment, they are directed towards achieving organisational objectives, not those of employees.

Summarising: in communication the stated and actual goals are the same whereas in pseudo-communication they are different. Specifically, in pseudo-communication  actual purposes are hidden and are often contradictory to the stated ones.

3. Thinking and analysis

Following from the above, it seems pretty clear that the success of organisational pseudo-communication hinges on employees not analysing messages in an individualistic or critical way. If they did, they would see such messages for the propaganda they actually are. In fact, it isn’t a stretch to say that most organisational pseudo-communication is aimed at encouraging groupthink at the level of the entire organisation.

A corollary of this is that in communication it is assumed that the receiver will act on the message in ways that he or she deems  appropriate whereas in pseudo-communication the receiver is encouraged to act in “organisationally acceptable” ways.

Summarising: in communication it is expected that receivers will analyse the message individually, in a critical way, so as to reach their own conclusions. In pseudo-communication, however, receivers are expected to think about the message in a standard, politically acceptable way.

4. Rational vs. emotional appeal

Since pseudo-communication works best by dulling the critical faculties of recipients, it seems clear that it should aim to evoke an emotional response rather than a rational (or carefully considered) one. Genuine communication, on the other hand, makes clear the relationship between elements of the message and supporting evidence so that receivers can evaluate it for themselves and reach their own conclusions.

Summarising:  communication makes an appeal to the receivers’ critical/rational side whereas pseudo-communication aims to  make an emotional connection with receivers.

5. Means and ends

In organisational pseudo-communication such as mission/vision statements and the strategies that arise from them, the ends are seen as justifying the means. The means are generally assumed to be value-free in that it is OK to do whatever it takes to achieve organisational goals, regardless of the ethical or moral implications. In contrast, in (genuine) communication, means and ends are intimately entwined and are open to evaluation on rational and moral/ethical grounds.

Summarising: in pseudo-communication the ends are seen as justifying the means, whereas in communication they are not.

6. World view

In organisational pseudo-communication the organisation’s world is seen as being inherently simple, so much so that it can be captured using catchy slogans such as “Delivering value” or “Connecting people” or whatever. Communication, on the other hand, acknowledges the existence of intractable problems and alternate worldviews, and thus views the world as inherently complex. As Moran puts it, “the pseudo-communicator is always endeavouring to have us accept a simplified view of life.” Most corporate mission and vision statements will attest to the truth of this.

Summarising: pseudo-communication over-simplifies or ignores difficult or inconvenient issues, whereas communication acknowledges them.

Conclusion

Although Moran wrote his paper over 30 years ago, his message is now more relevant and urgent than ever. Not only is pseudo-communication prevalent in politics and advertising, it has also permeated organisations and even our social relationships. In view of this, it is ever more important that we are able to distinguish pseudo-communication from the genuine stuff. Incidentally, I highly recommend reading the original paper – it is very readable and even laugh-out-loud funny in parts.

Finally, to indulge in some speculation: I wonder why pseudo-communication is so effective in the organisational world when even a cursory analysis exposes its manipulative nature. I think the answer lies in the fact that modern organisations use powerful, non-obtrusive techniques such as organisational culture initiatives to convince their people of the inherent worth of the organisation and of their roles in it. Once this is done, employees become less critical and hence more receptive to pseudo-communication. Anyway, that is fodder for another post. For now, I leave you to ponder the points made above and perhaps use them in analysing (pseudo)communication in your own organisation.

Written by K

January 23, 2013 at 9:36 pm
