There are two facets to the operation of IT systems and processes in organisations: governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.
An example might help clarify the difference:
The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO), which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project. The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigating their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.
Governance and steering share a common etymological root: the Greek word kybernetes, meaning steersman. It also happens to be the root of cybernetics, the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.
An oft-quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment. Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth itself is a huge cybernetic system. A smaller-scale example is the system consisting of a car and its driver, wherein the driver responds to changes in the environment, thereby controlling the motion of the car.
Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only with the ambient temperature, whereas the driver of a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back seat etc.). In general, the more complex the system and its processes, the larger the number of variables associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.
The law of requisite variety
It turns out there is a fundamental principle – the law of requisite variety – that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.
According to this paper, the law of requisite variety asserts that:
The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.
V(E) ≥ V(D) – V(R) – K
Where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:
V(E) represents the set of desired outcomes for the controlled environmental variable: desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.
V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)
V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)
K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
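The inequality above can be made concrete with a toy calculation. The following is a minimal Python sketch of my own (not from the cited paper), treating each variety simply as a count of distinguishable states:

```python
# Toy illustration of the law of requisite variety:
# V(E) >= V(D) - V(R) - K, with each variety a count of states.

def outcome_variety(v_d: int, v_r: int, k: int) -> int:
    """Lower bound on the variety of outcomes V(E).

    v_d: variety of disturbances, v_r: variety of regulatory
    responses, k: passive buffering capacity. Variety can never
    drop below 1 (at least one outcome always occurs).
    """
    return max(1, v_d - v_r - k)

# A thermostat-like system: few disturbances, matched responses.
print(outcome_variety(v_d=3, v_r=2, k=0))    # outcomes tightly controlled

# A PMO-like system: many disturbances, a handful of standard responses.
print(outcome_variety(v_d=50, v_r=10, k=5))  # outcomes vary widely
```

The point of the toy numbers is only this: when V(R) + K falls well short of V(D), the variety of outcomes balloons and the regulator can no longer hold the essential variables within the desired range.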
I won’t say any more about the law of requisite variety as it would take me too far afield; the interested and technically minded reader is referred to the link above or this paper for more (full pdf here).
Implications for enterprise IT
In plain English, the law of requisite variety states that “only variety can absorb variety.” As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:
…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.
This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at one’s disposal.
In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of IT systems and/or processes.
The main rationale behind standardizing an IT process is to handle all possible demands (or use cases) via a small number of predefined responses. When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.
Requisite variety and the service desk
A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.
The law of requisite variety tells us that successful standardization requires all possible demands made on the system to be known and covered by the V(R) term in the equation above. In the case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed. The assumption here is that, between them, the three support tiers should be able to handle the majority of calls.
Slack (the K term) relates to unexploited capacity. Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it. Given this, it makes sense to incorporate such scenarios into the repertoire of standard system responses (i.e. the V(R) term) whenever possible. One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another is to deal with infrequent scenarios outside the system – i.e. deem them out of scope for the service desk.
Service desk standardization is thus relatively straightforward to achieve provided:
- The kinds of calls that come in are largely predictable.
- The work can be routinized.
- All non-routine work – such as an application enhancement request or a demand for a new system – is dealt with outside the system via (say) a change management process.
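The tiered scheme described above amounts to a mapping from disturbances (calls) to standard responses (support tiers), with anything unmapped pushed outside the system. A minimal sketch, using made-up call categories (a real desk would use ITIL categorisation and routing rules):

```python
# Hypothetical call categories for illustration only.
ROUTINE = {"password reset", "system access", "simple troubleshooting"}
SPECIALIST = {"application fault", "server incident"}

def route(call: str) -> str:
    """Map a disturbance (a call) to a standard response (a tier)."""
    if call in ROUTINE:
        return "1st level"
    if call in SPECIALIST:
        return "2nd/3rd level"
    # Non-routine work is handled outside the system,
    # e.g. via a change management process.
    return "out of scope: change management"

print(route("password reset"))           # 1st level
print(route("application enhancement"))  # out of scope: change management
```

Standardization works here precisely because the left-hand side of this mapping – the set of possible calls – is largely known in advance.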
All this will be quite unsurprising and obvious to folks working in corporate IT. Now let’s see what happens when we apply the law to a more complex system.
Requisite variety and the PMO
Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution. PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve. Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:
- “50% of project management offices close within 3 years (Association for Project Mgmt)”
- “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
- “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
- “68% of stakeholders perceive their PMOs to be bureaucratic (2013 Gartner PPM Summit)”
- “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”
The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic. I can’t vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.
There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.
The answer, as you might suspect, has to do with variety. Projects and service requests are very different beasts. Among other things, they differ in:
- Duration: A project typically runs over many months, whereas a service request has a lifetime of days.
- Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated. A service request typically consists of one (or a small number of) well-defined tasks.
- Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.
It is not hard to see that these differences increase variety in projects compared to service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that PMOs are subjected to a greater variety of disturbances than service desks.
The key point is that the increased variety in the case of the PMO precludes standardisation. As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The simple and obvious solution is for PMOs to be more flexible – specifically, they must be able to adapt to the ever-changing demands made upon them by their organisations’ projects. In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests and avoiding the reflex, cookie-cutter responses that characterise bureaucratic PMOs.
The law of requisite variety is a general principle that applies to any regulated system. In this post I applied the law to two areas of enterprise IT – service management and project governance – and discussed why standardization works well for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both. The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.
I am delighted to announce that my new business book, The Heretic’s Guide to Management: The Art of Harnessing Ambiguity, is now available in e-book and print formats. The book, co-written with Paul Culmsee, is a loose sequel to our previous tome, The Heretics Guide to Best Practices.
Many reviewers liked the writing style of our first book, which combined rigour with humour. This book continues in the same vein, so if you enjoyed the first one we hope you might like this one too. The new book is half the size of the first one and considerably less idealistic too. In terms of subject matter, I could say “Ambiguity, Teddy Bears and Fetishes” and leave it at that…but that might leave you thinking that it’s not the kind of book you would want anyone to see on your desk!
Rest assured, The Heretic’s Guide to Management is not a corporate version of Fifty Shades of Grey. Instead, it aims to delve into the complex but fascinating ways in which ambiguity affects human behaviour. More importantly, it discusses how ambiguity can be harnessed in ways that achieve positive outcomes. Most management techniques (ranging from strategic planning to operational budgeting) attempt to reduce ambiguity and thereby provide clarity. It is a profound irony of modern corporate life that they often end up doing the opposite: increasing ambiguity rather than reducing it.
On the surface, it is easy enough to understand why: organizations are complex entities, so it is unreasonable to expect management models, such as those that fit neatly into a 2×2 matrix or a predetermined checklist, to work in the real world. In fact, expecting them to work as advertised is like colouring in a paint-by-numbers Mona Lisa and expecting to recreate Da Vinci’s masterpiece. Ambiguity therefore invariably remains untamed, and reality reimposes itself no matter how alluring the model is.
It turns out that most of us have a deep aversion to situations that involve even a hint of ambiguity. Recent research in neuroscience has revealed the reason for this: ambiguity is processed in the parts of the brain which regulate our emotional responses. As a result, many people associate it with feelings of anxiety. When kids feel anxious, they turn to transitional objects such as teddy bears or security blankets. These objects provide them with a sense of stability when situations or events seem overwhelming. In this book, we show that as grown-ups we don’t stop using teddy bears – it is just that the teddies we use take a different, more corporate, form. Drawing on research, we discuss how management models, fads and frameworks are actually akin to teddy bears. They provide the same sense of comfort and certainty to corporate managers and minions as real teddies do to distressed kids.
Most children usually outgrow their need for teddies as they mature and learn to cope with their childhood fears. However, if development is disrupted or arrested in some way, the transitional object can become a fetish – an object that is held on to with a pathological intensity, simply for the comfort that it offers in the face of ambiguity. The corporate reliance on simplistic solutions for the complex challenges faced is akin to little Johnny believing that everything will be OK provided he clings on to Teddy.
When this happens, the trick is finding ways to help Johnny overcome his fear of ambiguity.
Ambiguity is a primal force that drives much of our behaviour. It is typically viewed negatively, something to be avoided or to be controlled.
The truth, however, is that ambiguity is a force that can be used in positive ways too. The Force that gave the Dark Side their power in the Star Wars movies was harnessed by the Jedi in positive ways.
Our book shows you how ambiguity, so common in the corporate world, can be harnessed to achieve the results you want.
The e-book is available via popular online outlets.
For those who prefer paperbacks, the print version is available here.
Thanks for your support 🙂
A mainstay of team building workshops is the old “what can we do better” exercise. Over the years I’ve noticed that “improving communication” is an item that comes up again and again in these events. This is frustrating for managers. For example, during a team-building debrief some years ago, an exasperated executive remarked, “Oh don’t pay any attention to that [better communication], it keeps coming up no matter what we do.”
The executive had a point. The organisation had invested much effort in establishing new channels of communication – social media, online, face-to-face forums etc. The uptake, however, was disappointing: turnout at the face-to-face meetings was consistently low as was use of other channels.
As far as management was concerned, they had done their job by establishing communication channels and making them available to all. What more could they be expected to do? The matter was dismissed with a collective shrug of suit-clad shoulders…until the next team building event, when the issue was highlighted by employees yet again.
After much hand-wringing, the organisation embarked on another “better communication cycle.” Much effort was expended…again, with the same disappointing results.
Anecdotal evidence via conversations with friends and collaborators suggests that variants of this story play out in many organisations. This makes the issue well worth exploring. I won’t be so presumptuous as to offer answers; I’m well aware that folks much better qualified than I have spent years attempting to do so. Instead I raise a point which, though often overlooked, might well have something to do with the lack of genuine communication in organisations.
Communication experts have long agreed that face-to-face dialogue is the most effective mode of communication. Backing for this comes from the interactional or pragmatic view, which is based on the premise that communication is more about building relationships than conveying information. Among other things, face-to-face communication enables the communicating parties to observe and interpret non-verbal signals such as facial expression and gestures and, as we all know, these often “say” much more than what’s being said.
A few months ago I started paying closer attention to non-verbal cues. This can be hard to do because people are good at disguising their feelings. Involuntary expressions indicative of people’s real thoughts can be fleeting. A flicker of worry, fear or anger is quickly covered by a mask of indifference.
In meetings, difficult topics tend to be couched in platitudinous language. Platitudes are empty words that sound great but can be interpreted in many different ways. Reconciling those differences often leads to pointless arguments that are emotionally draining. Perhaps this is why people prefer to take refuge in indifference.
A while ago I was sitting in a meeting where the phrase “value add activity” (sic) cropped up once, then again…and then many times over. Soon it was apparent that everyone in the room had a very different conception of what constituted a “value add activity.” Some argued that project management is a value add activity, others disagreed vehemently arguing that project management is a bureaucratic exercise and that real value lies in creating something. Round and round the arguments went but there was no agreement on what constituted a “value add activity.” The discussion generated a lot of heat but shed no light whatsoever on the term.
A problem with communication in the corporate world is that it is loaded with such platitudes. To make sense of these, people have to pay what I call a “value add” tax – the effort in reaching a consensus on what the platitudinous terms mean. This can be emotionally extortionate because platitudes often touch upon issues that affect people’s sense of well-being.
Indifference is easier because we can then pretend to understand and agree with each other when we would rather not understand, let alone agree, at all.
This year has been hugely exciting so far: I’ve been exploring and playing with various techniques that fall under the general categories of data mining and text analytics. What’s been particularly satisfying is that I’ve been fortunate to find meaningful applications for these techniques within my organization.
Although I have a fair way to travel yet, I’ve learnt that common wisdom about data analytics – especially the stuff that comes from software vendors and high-end consultancies – can be misleading, even plain wrong. Hence this post in which I dispatch some myths and share a few pointers on establishing data analytics capabilities within an organization.
Busting a few myths
Let’s get right to it by taking a critical look at a few myths about setting up an internal data analytics practice.
- Requires high-end technology and a big budget: this myth is easy to bust because I can speak from recent experience. No, you do not need cutting-edge technology or an oversized budget. You can get started with an outlay of $0 – yes, that’s right, for free! All you need is the open-source statistical package R (check out this section of my article on text mining for more on installing and using R) and the willingness to roll up your sleeves and learn (more about this later). No worries if you prefer to stick with familiar tools – you can even begin with Excel.
- Needs specialist skills: another myth floating around is that you need PhD-level knowledge in statistics or applied mathematics to do practical work in analytics. Sorry, but that’s plain wrong. You do need a PhD to do research in analytics and develop your own algorithms, but not if you want to apply algorithms written by others. Yes, you will need to develop an understanding of the algorithms you plan to use, a feel for how they work and the ability to tell whether the results make sense. There are many good resources that can help you develop these skills – see, for example, the outstanding books by James, Witten, Hastie and Tibshirani and by Kuhn and Johnson.
- Must have sponsorship from the top: this one is possibly a little more controversial than the previous two. It could be argued that it is impossible to gain buy-in for a new capability without sponsorship from top management. However, in my experience, it is OK to start small by finding potential internal “customers” for analytics services through informal conversations with folks in different functions. I started by having informal conversations with managers in two different areas: IT infrastructure and sales/marketing. I picked these two areas because I knew that they had several gigabytes of under-exploited data – a good bit of it unstructured – and a lot of open questions that could potentially be answered (at least partially) via methods of data and text analytics. It turned out I was right. I’m currently doing a number of proofs of concept and small projects in both these areas. So you don’t need sponsorship from the top as long as you can get buy-in from people who have problems they believe you can solve. If you deliver, they may even advocate your cause to their managers.
A caveat is in order at this point: my organization is not the same as yours, so you may well need to follow a different path from mine. Nevertheless, I do believe that it is always possible to find a way to start without needing permission or incurring official wrath. In that spirit, I now offer some suggestions to help kick-start your efforts.
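To make the “start small, start free” point concrete: the very first steps – loading some text and counting things – need nothing beyond a standard library. Here is a minimal sketch in Python (the post itself recommends R, which works just as well; the ticket descriptions below are invented purely for illustration):

```python
# Zero-outlay exploratory text analysis: count word frequencies
# in a handful of (made-up) service desk ticket descriptions.
from collections import Counter
import re

tickets = [
    "Cannot access shared drive after password reset",
    "Password reset link not working",
    "Request access to the sales reporting system",
]

# Basic preprocessing: lower-case, extract words, count.
words = Counter(
    w for t in tickets for w in re.findall(r"[a-z]+", t.lower())
)
print(words.most_common(3))
```

Even something this crude can surface patterns – recurring terms point at recurring problems – and gives you a concrete artefact to show the analysts you talk to.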
As the truism goes, the hardest part of any new effort is getting started. The first thing to keep in mind is to start small. This is true even if you have official sponsorship and a king-sized budget. It is very tempting to spend a lot of time garnering management support for investing in high-end technology. Don’t do it! Do the following instead:
- Develop an understanding of the problems faced by people you plan to approach: The best way to do this is to talk to analysts or frontline managers. In my case, I was fortunate to have access to some very savvy analysts in IT service management and marketing who gave me a slew of interesting ideas to pursue. A word of advice: it is best not to talk to senior managers until you have a few concrete results that you can quantify in terms of dollar values.
- Invest time and effort in understanding analytics algorithms and gaining practical experience with them: As mentioned earlier, I started with R – and I believe it is the best choice, not just because it is free but also because there is a host of packages available to tackle just about any analytics problem you might encounter. There are some excellent free resources available to get you started with R (check out this listing on the r-statistics blog, for example). It is important that you start cutting code as you learn. This will help you build a repertoire of techniques and approaches as you progress. If you get stuck when coding, chances are you will find a solution on the wonderful stackoverflow site.
- Evangelise, evangelise, evangelise: You are, in effect, trying to sell an idea to people within your organization. You therefore have to identify people who might be able to help you and then convince them that your idea has merit. The best way to do the latter is to have concrete examples of problems that you have tackled. This is a chicken-and-egg situation in that you can’t have any examples until you gain support. I got support by approaching people I know well. I found that most – no, all – of them were happy to provide me with interesting ideas and access to their data.
- Begin with small (but real) problems: It is important to start with the “low-hanging fruit” – the problems that would take the least effort to solve. However, it is equally important to address real problems, i.e. those that matter to someone.
- Leverage your organisation’s existing data infrastructure: From what I’ve written thus far, I may have given you the impression that the tools of data analytics stand separate from your existing data infrastructure. Nothing could be further from the truth. In reality, I often do the initial work (basic preprocessing and exploratory analysis) using my organisation’s relational database infrastructure. Relational databases have sophisticated analytical extensions to SQL as well as efficient bulk data cleansing and transport facilities. Using these makes good sense, particularly if your R installation is on a desktop or laptop computer as it is in my case. Moreover, many enterprise database vendors now offer add-on options that integrate R with their products. This gives you the best of both worlds – relational and analytical capabilities on an enterprise-class platform.
- Build relationships with the data management team: Remember the work you are doing falls under the ambit of the group that is officially responsible for managing data in your organization. It is therefore important that you keep them informed of what you’re doing. Sooner or later your paths will cross, and you want to be sure that there are no nasty surprises (for either side!) at that point. Moreover, if you build connections with them early, you may even find that the data management team supports your efforts.
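The point above about leveraging relational infrastructure can be sketched briefly: push the aggregation into SQL and hand only the small result set to your analytics tool. The example below uses Python’s built-in sqlite3 as a stand-in for an enterprise database; the table and columns are made up for illustration:

```python
# Do the initial heavy lifting in SQL, then analyse the small result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (category TEXT, hours REAL)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?)",
    [("access", 0.5), ("access", 1.0), ("fault", 4.0)],
)

# Aggregate in the database; only the summary leaves it.
rows = conn.execute(
    "SELECT category, COUNT(*), AVG(hours) FROM tickets "
    "GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # [('access', 2, 0.75), ('fault', 1, 4.0)]
```

The same pattern applies whatever the platform: the database does the cleansing and aggregation it is good at, and the statistical tool (R in my case) gets a dataset small enough to fit comfortably on a laptop.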
Having waxed verbose, I should mention that my effort is a work in progress and I do not know where it will lead. Nevertheless, I offer these suggestions as a wayfarer who is considerably further down the road from where he started.
You may have noticed that I’ve refrained from using the overused and over-hyped term “Big Data” in this piece. This is deliberate. Indeed, the techniques I have been using have nothing to do with the size of the datasets. To be honest, I’ve applied them to datasets ranging from a few thousand to a few hundred thousand records, both of which qualify as Very Small Data in today’s world.
Your vendor will be only too happy to sell you Big Data infrastructure that will set you back a good many dollars. However, the chances are good that you do not need it right now. You’ll be much better off going back to them after you hit the limits of your current data processing infrastructure. Moreover, you’ll also be better informed about your needs then.
You may also be wondering why I haven’t said much about the composition of the analytics team (barring the point about not needing PhD statisticians) and how it should be organized. The reason I haven’t done so is that I believe the right composition and organizational structure will emerge from the initial projects done and feedback received from internal customers. The resulting structure will be better suited to the organization than one that is imposed upfront. Long time readers of this blog might recognize this as a tenet of emergent design.
Finally, I should reiterate that my efforts are still very much in progress and I know not where they will lead. However, even if they go nowhere, I would have learnt something about my organization and picked up a useful, practical skill. And that is good enough for me.
Managers display a range of attitudes towards planning for the future. In an essay entitled Systems, Messes and Interactive Planning, the management guru/philosopher Russell Ackoff classified attitudes to organizational planning into four distinct types which I describe in detail below. I suspect you may recognise examples of each of these in your organisation…indeed, you might even see shades of yourself 🙂
The first of these attitudes, inactivism, is characterized, as its name suggests, by a lack of meaningful action. It is often displayed by managers in organisations that favour the status quo. These organisations are happy with the way things are, and therefore see no need to change. However, lack of meaningful action does not mean lack of action. On the contrary, it often takes a great deal of effort to fend off change and keep things the way they are. As Ackoff states:
Inactive organizations require a great deal of activity to keep changes from being made. They accomplish nothing in a variety of ways. First, they require that all important decisions be made “at the top.” The route to the top is deliberately designed like an obstacle course. This keeps most recommendations for change from ever getting there. Those that do are likely to have been delayed enough to make them irrelevant when they reach their destination. Those proposals that reach the top are likely to be farther delayed, often by being sent back down or out for modification or evaluation. The organization thus behaves like a sponge and is about as active…
The inactive manager spends a lot of time and effort in ensuring that things remain the way they are. Hence they act only when a situation forces them to. Ackoff puts it in his inimitable way by stating that, “Inactivist managers tend to want what they get rather than get what they want.”
Reactivist managers are a step worse than inactivists because they believe that disaster is already upon them. This is the type of manager who hankers after the “golden days of yore when things were much better than they are today.” As a result of their deep unease of where they are now, they may try to undo the status quo. As Ackoff points out, unlike inactivists, reactivists do not ride the tide but try to swim against it.
Typically reactivist managers are wary of technology and new concepts. Moreover, they tend to give more importance to seniority and experience rather than proven competence. They also tend to be fans of simplistic solutions to complex problems…like “solving” the problem of a behind-schedule software project by throwing more people at it.
Preactivists are the opposite of reactivists in that they believe the future is going to be better than the past. Consequently, their efforts are geared towards understanding what the future will look like and how they can prepare for it. Typically, preactive managers are concerned with facts, figures and forecasts; they are firm believers in scientific planning methods that they have learnt in management schools. As such, one might say that this is the most common species of manager in present day organisations. Those who are not natural preactivists will fly the preactivist flag when they’re asked for their opinions by their managers because it’s the expected answer.
A key characteristic of preactivist managers is that they tend to revel in creating plans rather than implementing them. As Ackoff puts it, “Preactivists see planning as a sequence of discrete steps which terminate with acceptance or rejection of their plans. What happens to their plans is the responsibility of others.”
Interactivist planners are not satisfied with the present, but unlike reactivists or preactivists, they do not hanker for the past, nor do they believe the future is automatically going to be better. They do want to make things better than they were or currently are, but they are continually adjusting their plans for the future by learning from and responding to events. In short, they believe they can shape the future by their actions.
Experimentation is the hallmark of interactivists. They are willing to try different approaches and learn from them. Although they believe in learning by experience, they do not want to wait for experiences to happen; they would rather induce them by (often small-scale) experimentation.
Ackoff labels interactivists as idealisers – people who pursue ideals they know cannot be achieved, but can be approximated or even reformulated in the light of new knowledge. As he puts it:
They treat ideals as relative absolutes: ultimate objectives whose formulation depends on our current knowledge and understanding of ourselves and our environment. Therefore, they require continuous reformulation in light of what we learn from approaching them.
To use a now fashionable term, interactivists are intrapreneurs.
Although Ackoff shows a clear bias towards interactivists in his article, he does mention that specific situations may call for other types of planners. As he puts it:
Despite my obvious bias in my characterization of these four postures, there are circumstances in which each is most appropriate. Put simply, if the internal and external dynamics of a system (the tide) are taking one where one wants to go and are doing so quickly enough, inactivism is appropriate. If the direction of change is right but the movement is too slow, preactivism is appropriate. If the change is taking one where one does not want to go and one prefers to stay where one is or was, reactivism is appropriate. However, if one is not willing to settle for the past, the present or the future that appears likely now, interactivism is appropriate.
The key point he makes is that inactivists and preactivists treat planning as a ritual because they see the future as something they cannot change. They can only plan for it (and hope for the best). Interactivists, on the other hand, look for opportunities to influence events and thus potentially change the future. Although both preactivists and interactivists are forward-looking, interactivists tend to be long-term thinkers compared to preactivists, who are more concerned with the short- to medium-term future.
Ackoff’s classification of planners in organisations is interesting because it highlights the kind of future-focused attitude that managers ought to take. The sad fact, though, is that a significant number of managers are myopic preactivists, focused on this year’s performance targets rather than what their organisations might look like five or even ten years down the line. This is not the fault of individuals, though. The blame for the undue prevalence of myopic preactivism can be laid squarely on the deep-seated management dogma that rewards short-termism.
Since the 1980s, intangible assets, such as knowledge, have come to represent an ever-increasing proportion of an organisation’s net worth. One of the problems associated with treating knowledge as an asset is that it is difficult to codify in its entirety. This is largely because knowledge is context and skill dependent, and these are hard to convey by any means other than experience. This is the well-known tacit versus explicit knowledge problem that I have written about at length elsewhere (see this post and this one, for example). Although a recent development in knowledge management technology goes some way towards addressing the problem of context, it still looms large and is likely to for a while.
Although the problem mentioned above is well-known, it hasn’t stopped legions of consultants and professional organisations from attempting to codify and sell expertise: management consultancies and enterprise IT vendors being prime examples. This has given rise to the notion of a knowledge-intensive firm, an organization in which most work is said to be of an intellectual nature and where well-educated, qualified employees form the major part of the work force. However, the slipperiness of knowledge mentioned in the previous paragraph suggests that the notion of a knowledge-intensive firm (and, by implication, expertise) is problematic. Basically, if it is true that knowledge itself is elusive and hard to codify, it raises the question of what exactly such firms (and their employees) sell.
In this post, I shed some light on this question by drawing on an interesting paper by Mats Alvesson entitled, Knowledge Work: Ambiguity, Image and Identity (abstract only), as well as my experiences in dealing with IT services and consulting firms.
Background: the notion of a knowledge-intensive firm
The first point to note is that the notion of a knowledge-intensive firm is not particularly precise. Based on the definition offered above, it is clear that a wide variety of organisations may be classified as knowledge-intensive firms. For example, management consultancies and enterprise software companies would fall into this category, as would law, accounting and research & development firms. The same is true of the term knowledge work(er).
One of the implications of the vagueness of the term is that any claim to being a knowledge-intensive firm or knowledge worker can be contested. As Alvesson states:
It is difficult to substantiate knowledge-intensive companies and knowledge workers as distinct, uniform categories. The distinction between these and non- (or less) knowledge-intensive organization/non-knowledge workers is not self-evident, as all organizations and work involve “knowledge” and any evaluation of “intensiveness” is likely to be contestable. Nevertheless, there are, in many crucial respects, differences between many professional service and high-tech companies on the one hand, and more routinized service and industry companies on the other, e.g. in terms of broadly socially shared ideas about the significance of a long theoretical education and intellectual capacities for the work. It makes sense to refer to knowledge-intensive companies as a vague but meaningful category, with sufficient heuristic value to be useful. The category does not lend itself to precise definition or delimitation and it includes organizations which are neither unitary nor unique. Perhaps the claim to knowledge-intensiveness is one of the most distinguishing features…
The last line in the excerpt is particularly interesting to me because it resonates with my experience: having been through countless IT vendor and management consulting briefings on assorted products and services, it is clear that a large part of their pitch is aimed at establishing their credibility as experts in the field, even though they may not actually be so.
The ambiguity of knowledge work
Expertise in skill-based professions is generally unambiguous – an incompetent pilot will be exposed soon enough. In knowledge work, however, genuine expertise is often not so easily discernible. Alvesson highlights a number of factors that make this so.
Firstly, much of the day-to-day work of knowledge workers such as management consultants and IT experts involves routine matters – meetings, documentation etc. – that do not make great demands on their skills. Moreover, even when involved in one-off tasks such as projects, these workers are generally assigned tasks that they are familiar with. In general, therefore, the nature of their work requires them to follow already instituted processes and procedures. A somewhat unexpected consequence of this is that incompetence can remain hidden for a long time.
A second issue is that the quality of so-called knowledge work is often hard to evaluate – indeed evaluations may require the engagement of independent experts! This is true even of relatively mundane expertise-based work. As Alvesson states:
Comparisons of the decisions of expert and novice auditors indicate no relationship between the degree of expertise (as indicated by experience) and consensus; in high-risk and less standard situations, the experts’ consensus level was lower than that of novices. [An expert remarked that] “judging the quality of an audit is an extremely problematic exercise” and says that consumers of the audit service “have only a very limited insight into the quality of work undertaken by an audit firm”.
This is true of many different kinds of knowledge work. As Alvesson tells us:
How can anyone tell whether a headhunting firm has found and recruited the best possible candidates or not…or if an audit has been carried out in a high-quality way? Or if the proposal by strategic management consultants is optimal or even helpful, or not. Of course, sometimes one may observe whether something works or not (e.g. after the intervention of a plumber), but normally the issues concerned are not that simple in the context in which the concept of knowledge-intensiveness is frequently used. Here we are mainly dealing with complex and intangible phenomena. Even if something seems to work, it might have worked even better or the cost of the intervention been much lower if another professional or organization had carried out the task.
In view of the above, it is unlikely that market mechanisms would be effective in sorting out the competent from the incompetent. Indeed, my experience of dealing with major consulting firms (in IT) leads me to believe that market mechanisms tend to make them clones of each other, at least in terms of their offerings and approach. This may be part of the reason why client firms tend to base their contracting decisions on cost or existing relationships – it makes sense to stick with the known, particularly when the alternatives offer choices akin to Pepsi vs Coke.
But that is not the whole story: experts are often hired for ulterior motives. On the one hand, they might be hired because they confer legitimacy – “no one ever got fired for hiring McKinsey” is a quote I’ve heard more than a few times in many workplaces. On the other hand, they also make convenient scapegoats when the proverbial stuff hits the fan.
One of the consequences of the ambiguity of knowledge-intensive work is that employees in such firms are forced to cultivate and maintain the image of being experts, and hence the stereotype of the suited, impeccably-groomed Big 4 consultant. As Alvesson points out, though, image cultivation goes beyond the individual employee:
This image must be managed on different levels: professional-industrial, corporate and individual. Image may be targeted in specific acts and arrangements, in visible symbols for public consumption but also in everyday behavior, within the organization and in interaction with others. Thus image is not just of importance in marketing and for attracting personnel but also in and after production. Size and a big name are therefore important for many knowledge-intensive companies – and here we perhaps have a major explanation for all the mergers and acquisitions in accounting, management consultancy and other professional service companies. A large size is reassuring. A well-known brand name substitutes for difficulties in establishing quality.
Another aspect of image cultivation is the use of rhetoric. Here are some examples taken from the websites of Big 4 consulting firms:
“No matter the challenge, we focus on delivering practical and enduring results, and equipping our clients to grow and lead.” – McKinsey
“We continue to redefine ourselves and set the bar higher to continually deliver quality for clients, our people, and the society in which we operate.” – Deloitte
“Cutting through complexity” – KPMG
“Creating value for our clients, people and communities in a changing world” – PWC
Some clients are savvy enough not to be taken in by the platitudinous statements listed above. However, the fact that knowledge-intensive firms continue to use second-rate rhetoric to attract custom suggests that there are many customers who are easily swayed by marketing slogans. These slogans are sometimes given an aura of plausibility via case studies intended to back the claims made. However, more often than not the case studies are based on a selective presentation of facts that depict the firm in the best possible light.
A related point is that such firms often flaunt their current client list in order to attract new clientele. Lines like, “our client list includes eight of the top ten auto manufacturers in the world,” are not uncommon, the unstated implication being that if you are an auto manufacturer, you cannot afford not to engage us. The image cultivation process continues well after the consulting engagement is underway. Indeed, much of a consultant’s effort is directed at ensuring that the engagement will be extended.
Finally, it is important to point out the need to maintain an aura of specialness. Consultants and knowledge workers are valued for what they know. It is therefore in their interest to maintain a certain degree of exclusivity of knowledge. Guilds (such as the Project Management Institute) act as gatekeepers by endorsing the capabilities of knowledge workers through membership criteria based on experience and / or professional certification programs.
Maintaining the façade
Because knowledge workers deal with intangibles, they have to work harder to maintain their identities than those who have more practical skills. They are therefore more susceptible to the vagaries and arbitrariness of organisational life. As Alvesson notes,
Given the high level of ambiguity and the fluidity of organizational life and interactions with external actors, involving a strong dependence on somewhat arbitrary evaluations and opinions of others, many knowledge-intensive workers must struggle more for the accomplishment, maintenance and gradual change of self-identity, compared to workers whose competence and results are more materially grounded…Compared with people who invest less self- esteem in their work and who have lower expectations, people in knowledge-intensive companies are thus vulnerable to frustrations contingent upon ambiguity of performance and confirmation.
Knowledge workers are also more dependent on managerial confirmation of their competence and value. Indeed, unlike the case of the machinist or designer, a knowledge worker’s product rarely speaks for itself. It has to be “sold”, first to management and then (possibly) to the client and the wider world.
The previous paragraphs of this section dealt with individual identity. However, this is not the whole story because organisations also play a key role in regulating the identities of their employees. Indeed, this is how they develop their brand. Alvesson notes four ways in which organisations do this:
- Corporate identity – large consulting firms are good examples of this. They regulate the identities of their employees through comprehensive training and acculturation programs. As a board member remarked to me recently, “I like working with McKinsey people, because I was once one myself and I know their approach and thinking processes.”
- Cultural programs – these are the near-mandatory organisational culture initiatives in large organisations. Such programs are usually based on a set of “guiding principles” which are intended to inform employees on how they should conduct themselves as employees and representatives of the organisation. As Alvesson notes, these are often more effective than formal structures.
- Normalisation – these are the disciplinary mechanisms that are triggered when an employee violates an organisational norm. Examples of this include formal performance management or official reprimands. Typically, though, the underlying issue is rarely addressed. For example, a failed project might result in a reprimand or poor performance review for the project manager, but the underlying systemic causes of failure are unlikely to be addressed…or even acknowledged.
- Subjectification – This is where employees mould themselves to fit their roles or job descriptions. A good example of this is when job applicants project themselves as having certain skills and qualities in their resumes and in interviews. If selected, they may spend the first few months learning what is acceptable and what is not. In time, the new behaviours are internalised and become a part of their personalities.
It is clear from the above that maintaining the façade of expertise in knowledge work involves considerable effort and manipulation, and has little to do with genuine knowledge. Indeed, it is perhaps because genuine expertise is so hard to identify that people and organisations strive to maintain appearances.
The ambiguous nature of knowledge requires (and enables!) consultants and technology vendors to maintain a façade of expertise. This is done through a careful cultivation of image via the rhetoric of marketing, branding and impression management. The onus is therefore on buyers to figure out if there’s anything of substance behind words and appearances. The volume of business enjoyed by big consulting firms suggests that this does not happen as often as it should, leading us to the inescapable conclusion that decision-makers in organisations are all too easily deceived by the façade of expertise.
“You mean there’s a catch?”
“Sure there’s a catch”, Doc Daneeka replied. “Catch-22. Anyone who wants to get out of combat duty isn’t really crazy.”
There was only one catch and that was Catch-22, which specified that a concern for one’s own safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions…” – Joseph Heller, Catch-22
The term Catch-22 was coined by Joseph Heller in the eponymous satirical novel written in 1961. As the quote above illustrates, the term refers to a paradoxical situation caused by the application of contradictory rules. Catch-22 situations are common in large organisations of all kinds, not just the military (which was the setting of the novel). So much so that it is a theme that has attracted some scholarly attention over the half century since the novel was first published – see this paper or this one for example.
Although Heller uses Catch-22 situations to highlight the absurdities of bureaucracies in a humorous way, in real-life such situations can be deeply troubling for people who are caught up in them. In a paper published in 1956, the polymath Gregory Bateson and his colleagues suggested that these situations can cause people to behave in ways that are symptomatic of schizophrenia. The paper introduces the notion of a double-bind, which is a dilemma arising from an individual receiving two or more messages that contradict each other. In simple terms, then, a double-bind is a Catch-22.
In this post, I draw on Bateson’s double bind theory to get some insights into Catch-22 situations in organisations.
Double bind theory
The basic elements of a double bind situation are as follows:
- Two or more individuals, one of whom is a victim – i.e. the individual who experiences the dilemma described below.
- A primary rule which keeps the victim fearful of the consequences of doing (or not doing) something. This rule typically takes the form, “If you do x then you will be punished” or “If you do not do x then you will be punished.”
- A secondary rule that is in conflict with the primary rule, but at a more abstract level. This rule, which is usually implicit, typically takes the form, “Do not question the rationale behind x.”
- A tertiary rule that prevents the victim from escaping from the situation.
- Repeated experiences of the primary and secondary rules
A simple example (quoted from this article) serves to illustrate the above in a real-life situation:
One example of double bind communication is a mother giving her child the message: “Be spontaneous.” If the child acts spontaneously, he is not acting spontaneously because he is following his mother’s direction. It’s a no-win situation for the child. If a child is subjected to this kind of communication over a long period of time, it’s easy to see how he could become confused.
Here the injunction to “Be spontaneous” is contradicted by the more implicit rule that “one cannot be spontaneous on demand.” It is important to note that the primary and secondary (implicit) rules are at different logical levels – the first is about an action, whereas the second is about the nature of all such actions. This is typical of a double bind situation.
The paradoxical aspects of double binds can sometimes be useful as they can lead to creative solutions arising from the victim “stepping outside the situation”. The following example from Bateson’s paper illustrates the point:
The Zen Master attempts to bring about enlightenment in his pupil in various ways. One of the things he does is to hold a stick over the pupil’s head and say fiercely, “If you say this stick is real, I will strike you with it. If you say this stick is not real, I will strike you with it. If you don’t say anything, I will strike you with it.”… The Zen pupil might reach up and take the stick away from the Master–who might accept this response.
This is an important point which we’ll return to towards the end of this piece.
Double binds in organisations
Double bind situations are ubiquitous in organisations. I’ll illustrate this by drawing on a couple of examples I have written about earlier on this blog.
The paradox of learning organisations
This section draws on a post I wrote a while ago. In the introduction to that post I stated that:
The term learning organisation refers to an organisation that continually modifies its processes based on observation and experience, thus adapting to changes in its internal and external environment. Ever since Peter Senge coined the term in his book, The Fifth Discipline, assorted consultants and academics have been telling us that although a learning organisation is an utopian ideal, it is one worth striving for. The reality, however, is that most organisations that undertake the journey actually end up in a place far removed from this ideal. Among other things, the journey may expose managerial hypocrisies that contradict the very notion of a learning organisation.
Starkly put, the problem arises from the fact that in a true learning organisation, employees will inevitably start to question things that management would rather they didn’t. Consider the following story, drawn from this paper on which the post is based:
…a multinational company intending to develop itself as a learning organization ran programmes to encourage managers to challenge received wisdom and to take an inquiring approach. Later, one participant attended an awayday, where the managing director of his division circulated among staff over dinner. The participant raised a question about the approach the MD had taken on a particular project; with hindsight, had that been the best strategy? `That was the way I did it’, said the MD. `But do you think there was a better way?’, asked the participant. `I don’t think you heard me’, replied the MD. `That was the way I did it’. `That I heard’, continued the participant, `but might there have been a better way?’. The MD fixed his gaze on the participant’s lapel badge, then looked him in the eye, saying coldly, `I will remember your name’, before walking away.
Of course, a certain kind of learning occurred here: the employee learnt that certain questions were taboo, in stark contrast to the openness that was being preached from the organisational pulpit. The double bind here is evident: feel free to question and challenge everything…except what management deems to be out of bounds. The takeaway for employees is that, despite all the rhetoric of organisational learning, certain things should not be challenged. I think it is safe to say that this was probably not the kind of learning that was intended by those who initiated the program.
The paradoxes of change
In a post on the paradoxes of organizational change, I wrote that:
An underappreciated facet of organizational change is that it is inherently paradoxical. For example, although it is well known that such changes inevitably have unintended consequences that are harmful, most organisations continue to implement change initiatives in a manner that assumes complete controllability with the certainty of achieving solely beneficial outcomes.
As pointed out in this paper, there are three types of paradoxes that can arise when an organisation is restructured. The first is that during the transition, people are caught between the demands of their old and new roles. This is exacerbated by the fact that transition periods are often much longer than expected. This paradox of performing in turn leads to a paradox of belonging – people become uncertain about where their loyalties (ought to) lie.
Finally, there is a paradox of organising, which refers to the gap between the rhetoric and reality of change. The paper mentioned above has a couple of nice examples. One study described how,
“friendly banter in meetings and formal documentation [promoted] front-stage harmony, while more intimate conversations and unit meetings [intensified] backstage conflict.” Another spoke of a situation in which, “…change efforts aimed at increasing employee participation [can highlight] conflicting practices of empowerment and control. In particular, the rhetoric of participation may contradict engrained organizational practices such as limited access to information and hierarchical authority for decision making…
Indeed, the gap between the intent and actuality of change initiatives makes double binds inevitable.
I suspect the situations described above will be familiar to people working in a corporate environment. The question is: what can one do if one is on the receiving end of such a Catch-22?
The main thing is to realise that a double-bind arises because one perceives the situation to be so. That is, the person experiencing the situation has chosen to interpret it as a double bind. To be sure, there are usually factors that influence the choice – things such as job security, for example – but the fact is that it is a choice that can be changed if one sees things in a different light. Escaping the double bind is then a “simple” matter of reframing the situation.
Here is where the notion of mindfulness is particularly relevant. In brief, mindfulness is “the intentional, accepting and non-judgemental focus of one’s attention on the emotions, thoughts and sensations occurring in the present moment.” Like the Zen pupil who takes the stick away from the Master, a calm non-judgemental appraisal of a double-bind situation might reveal possible courses of action that had been obscured because of one’s fears. Indeed, the realization that one has more choices than one thinks is in itself a liberating discovery.
It is important to emphasise that the actual course of action that one selects in the end matters less than the realisation that one’s reactions to such situations are largely under one’s own control.
In closing – reframe it!
Organisational life is rife with Catch-22s. Most of us cannot avoid being caught up in them, but we can choose how we react to them. This is largely a matter of reframing them in ways that open up new avenues for action, a point that brings to mind this paragraph from Catch-22 (the book):
“Why don’t you use some sense and try to be more like me? You might live to be a hundred and seven, too.”
“Because it’s better to die on one’s feet than live on one’s knees,” Nately retorted with triumphant and lofty conviction. “I guess you’ve heard that saying before.”
“Yes, I certainly have,” mused the treacherous old man, smiling again. “But I’m afraid you have it backward. It is better to live on one’s feet than die on one’s knees. That is the way the saying goes.”
“Are you sure?” Nately asked with sober confusion. “It seems to make more sense my way.”
“No, it makes more sense my way. Ask your friends.”
And that, I reckon, is as brilliant an example of reframing as I have ever come across.