Since the 1980s, intangible assets, such as knowledge, have come to represent an ever-increasing proportion of an organisation’s net worth. One of the problems associated with treating knowledge as an asset is that it is difficult to codify in its entirety. This is largely because knowledge is context and skill dependent, and these are hard to convey by any means other than experience. This is the well-known tacit versus explicit knowledge problem that I have written about at length elsewhere (see this post and this one, for example). Although a recent development in knowledge management technology goes some way towards addressing the problem of context, it still looms large and is likely to do so for a while.
Although the problem mentioned above is well-known, it hasn’t stopped legions of consultants and professional organisations from attempting to codify and sell expertise: management consultancies and enterprise IT vendors being prime examples. This has given rise to the notion of a knowledge-intensive firm, an organization in which most work is said to be of an intellectual nature and where well-educated, qualified employees form the major part of the work force. However, the slipperiness of knowledge mentioned in the previous paragraph suggests that the notion of a knowledge-intensive firm (and, by implication, expertise) is problematic. Basically, if knowledge itself is elusive and hard to codify, it raises the question of what exactly such firms (and their employees) sell.
In this post, I shed some light on this question by drawing on an interesting paper by Mats Alvesson entitled, Knowledge Work: Ambiguity, Image and Identity (abstract only), as well as my experiences in dealing with IT services and consulting firms.
Background: the notion of a knowledge-intensive firm
The first point to note is that the notion of a knowledge-intensive firm is not particularly precise. Based on the definition offered above, it is clear that a wide variety of organisations may be classified as knowledge intensive firms. For example, management consultancies and enterprise software companies would fall into this category, as would law, accounting and research & development firms. The same is true of the term knowledge work(er).
One of the implications of the vagueness of the term is that any claim to being a knowledge-intensive firm or knowledge worker can be contested. As Alvesson states:
It is difficult to substantiate knowledge-intensive companies and knowledge workers as distinct, uniform categories. The distinction between these and non- (or less) knowledge-intensive organization/non-knowledge workers is not self-evident, as all organizations and work involve “knowledge” and any evaluation of “intensiveness” is likely to be contestable. Nevertheless, there are, in many crucial respects, differences between many professional service and high-tech companies on the one hand, and more routinized service and industry companies on the other, e.g. in terms of broadly socially shared ideas about the significance of a long theoretical education and intellectual capacities for the work. It makes sense to refer to knowledge-intensive companies as a vague but meaningful category, with sufficient heuristic value to be useful. The category does not lend itself to precise definition or delimitation and it includes organizations which are neither unitary nor unique. Perhaps the claim to knowledge-intensiveness is one of the most distinguishing features…
The last line in the excerpt is particularly interesting to me because it resonates with my experience: having been through countless IT vendor and management consulting briefings on assorted products and services, it is clear that a large part of their pitch is aimed at establishing their credibility as experts in the field, even though they may not actually be so.
The ambiguity of knowledge work
Expertise in skill-based professions is generally unambiguous – an incompetent pilot will be exposed soon enough. In knowledge work, however, genuine expertise is often not so easily discernible. Alvesson highlights a number of factors that make this so.
Firstly, much of the day-to-day work of knowledge workers such as management consultants and IT experts involves routine matters – meetings, documentation etc. – that do not make great demands on their skills. Moreover, even when involved in one-off tasks such as projects, these workers are generally assigned tasks that they are familiar with. In general, therefore, the nature of their work requires them to follow already instituted processes and procedures. A somewhat unexpected consequence of this is that incompetence can remain hidden for a long time.
A second issue is that the quality of so-called knowledge work is often hard to evaluate – indeed evaluations may require the engagement of independent experts! This is true even of relatively mundane expertise-based work. As Alvesson states:
Comparisons of the decisions of expert and novice auditors indicate no relationship between the degree of expertise (as indicated by experience) and consensus; in high-risk and less standard situations, the experts’ consensus level was lower than that of novices. [An expert remarked that] “judging the quality of an audit is an extremely problematic exercise” and says that consumers of the audit service “have only a very limited insight into the quality of work undertaken by an audit firm”.
This is true of many different kinds of knowledge work. As Alvesson tells us:
How can anyone tell whether a headhunting firm has found and recruited the best possible candidates or not…or if an audit has been carried out in a high-quality way? Or if the proposal by strategic management consultants is optimal or even helpful, or not. Of course, sometimes one may observe whether something works or not (e.g. after the intervention of a plumber), but normally the issues concerned are not that simple in the context in which the concept of knowledge-intensiveness is frequently used. Here we are mainly dealing with complex and intangible phenomena. Even if something seems to work, it might have worked even better or the cost of the intervention been much lower if another professional or organization had carried out the task.
In view of the above, it is unlikely that market mechanisms would be effective in sorting out the competent from the incompetent. Indeed, my experience of dealing with major consulting firms (in IT) leads me to believe that market mechanisms tend to make them clones of each other, at least in terms of their offerings and approach. This may be part of the reason why client firms tend to base their contracting decisions on cost or existing relationships – it makes sense to stick with the known, particularly when the alternatives offer choices akin to Pepsi vs Coke.
But that is not the whole story: experts are often hired for ulterior motives. On the one hand, they might be hired because they confer legitimacy – “no one ever got fired for hiring McKinsey” is a quote I’ve heard more than a few times in many workplaces. On the other hand, they also make convenient scapegoats when the proverbial stuff hits the fan.
One of the consequences of the ambiguity of knowledge-intensive work is that employees in such firms are forced to cultivate and maintain the image of being experts, and hence the stereotype of the suited, impeccably-groomed Big 4 consultant. As Alvesson points out, though, image cultivation goes beyond the individual employee:
This image must be managed on different levels: professional-industrial, corporate and individual. Image may be targeted in specific acts and arrangements, in visible symbols for public consumption but also in everyday behavior, within the organization and in interaction with others. Thus image is not just of importance in marketing and for attracting personnel but also in and after production. Size and a big name are therefore important for many knowledge-intensive companies – and here we perhaps have a major explanation for all the mergers and acquisitions in accounting, management consultancy and other professional service companies. A large size is reassuring. A well-known brand name substitutes for difficulties in establishing quality.
Another aspect of image cultivation is the use of rhetoric. Here are some examples taken from the websites of Big 4 consulting firms:
“No matter the challenge, we focus on delivering practical and enduring results, and equipping our clients to grow and lead.” —McKinsey
“We continue to redefine ourselves and set the bar higher to continually deliver quality for clients, our people, and the society in which we operate.” – Deloitte
“Cutting through complexity” – KPMG
“Creating value for our clients, people and communities in a changing world” – PWC
Some clients are savvy enough not to be taken in by the platitudinous statements listed above. However, the fact that knowledge-intensive firms continue to use second-rate rhetoric to attract custom suggests that there are many customers who are easily taken in by marketing slogans. These slogans are sometimes given an aura of plausibility via case-studies intended to back the claims made. However, more often than not the case studies are based on a selective presentation of facts that depict the firm in the best possible light.
A related point is that such firms often flaunt their current client list in order to attract new clientele. Lines like, “our client list includes eight of the top ten auto manufacturers in the world,” are not uncommon, the unstated implication being that if you are an auto manufacturer, you cannot afford not to engage us. The image cultivation process continues well after the consulting engagement is underway. Indeed, much of a consultant’s effort is directed at ensuring that the engagement will be extended.
Finally, it is important to point out the need to maintain an aura of specialness. Consultants and knowledge workers are valued for what they know. It is therefore in their interest to maintain a certain degree of exclusivity of knowledge. Guilds (such as the Project Management Institute) act as gatekeepers by endorsing the capabilities of knowledge workers through membership criteria based on experience and / or professional certification programs.
Maintaining the façade
Because knowledge workers deal with intangibles, they have to work harder to maintain their identities than those who have more practical skills. They are therefore more susceptible to the vagaries and arbitrariness of organisational life. As Alvesson notes,
Given the high level of ambiguity and the fluidity of organizational life and interactions with external actors, involving a strong dependence on somewhat arbitrary evaluations and opinions of others, many knowledge-intensive workers must struggle more for the accomplishment, maintenance and gradual change of self-identity, compared to workers whose competence and results are more materially grounded…Compared with people who invest less self-esteem in their work and who have lower expectations, people in knowledge-intensive companies are thus vulnerable to frustrations contingent upon ambiguity of performance and confirmation.
Knowledge workers are also more dependent on managerial confirmation of their competence and value. Indeed, unlike the case of the machinist or designer, a knowledge worker’s product rarely speaks for itself. It has to be “sold”, first to management and then (possibly) to the client and the wider world.
The previous paragraphs of this section dealt with individual identity. However, this is not the whole story because organisations also play a key role in regulating the identities of their employees. Indeed, this is how they develop their brand. Alvesson notes four ways in which organisations do this:
- Corporate identity – large consulting firms are good examples of this. They regulate the identities of their employees through comprehensive training and acculturation programs. As a board member remarked to me recently, “I like working with McKinsey people, because I was once one myself and I know their approach and thinking processes.”
- Cultural programs – these are the near-mandatory organisational culture initiatives in large organisations. Such programs are usually based on a set of “guiding principles” which are intended to inform employees on how they should conduct themselves as employees and representatives of the organisation. As Alvesson notes, these are often more effective than formal structures.
- Normalisation – these are the disciplinary mechanisms that are triggered when an employee violates an organisational norm. Examples of this include formal performance management or official reprimands. Typically, though, the underlying issue is rarely addressed. For example, a failed project might result in a reprimand or poor performance review for the project manager, but the underlying systemic causes of failure are unlikely to be addressed…or even acknowledged.
- Subjectification – This is where employees mould themselves to fit their roles or job descriptions. A good example of this is when job applicants project themselves as having certain skills and qualities in their resumes and in interviews. If selected, they may spend the first few months in learning and internalizing what is acceptable and what is not. In time, the new behaviours are internalized and become a part of their personalities.
It is clear from the above that maintaining the façade of expertise in knowledge work involves considerable effort and manipulation, and has little to do with genuine knowledge. Indeed, it is perhaps because genuine expertise is so hard to identify that people and organisations strive to maintain appearances.
The ambiguous nature of knowledge requires (and enables!) consultants and technology vendors to maintain a façade of expertise. This is done through a careful cultivation of image via the rhetoric of marketing, branding and impression management. The onus is therefore on buyers to figure out if there’s anything of substance behind words and appearances. The volume of business enjoyed by big consulting firms suggests that this does not happen as often as it should, leading us to the inescapable conclusion that decision-makers in organisations are all too easily deceived by the façade of expertise.
In their book, Waltzing with Bears, Tom DeMarco and Timothy Lister coined the phrase, “risk management is project management for adults”. Twenty years on, it appears that their words have been taken seriously: risk management now occupies a prominent place in bodies of knowledge (BOKs), and has also become a key element of project management practice.
On the other hand, if the evidence is to be believed (as per the oft quoted Chaos Report, for example), IT projects continue to fail at an alarming rate. This is curious because one would have expected that a greater focus on risk management ought to have resulted in better outcomes. So, is it possible that risk management (as it is currently preached and practiced in IT project management) cannot address certain risks…or, worse, that certain risks are simply not recognized as risks?
Some time ago, I came across a paper by Richard Barber that sheds some light on this very issue. This post elaborates on the nature and importance of such “hidden” risks by drawing on Barber’s work as well as my experiences and those of my colleagues with whom I have discussed the paper.
What are internally generated risks?
The standard approach to risk is based on the occurrence of events. Specifically, risk management is concerned with identifying potential adverse events and taking steps to reduce either their probability of occurrence or their impact. However, as Barber points out, this is a limited view of risk because it overlooks adverse conditions that are built into the project environment. A good example of this is an organizational norm that centralizes decision making at the corporate or managerial level. Such a norm would discourage a project manager from taking appropriate action when confronted with an event that demands an on-the-spot decision. Clearly, it is wrong-headed to attribute the risk to the event because the risk actually has its origins in the norm. In other words, it is an internally generated risk.
(Note: the notion of an internally generated risk is akin to the risk as a pathogen concept that I discussed in this post many years ago.)
Barber defines an internally generated risk as one that has its origin within the project organisation or its host, and arises from [preexisting] rules, policies, processes, structures, actions, decisions, behaviours or cultures. Some other examples of such risks include:
- An overly bureaucratic PMO.
- An organizational culture that discourages collaboration between teams.
- An organizational structure that has multiple reporting lines – this is what I like to call a pseudo-matrix organization 🙂
These factors are similar to those that I described in my post on the systemic causes of project failure. Indeed, I am tempted to call these systemic risks because they are related to the entire system (project + organization). However, that term has already been appropriated by the financial risk community.
Since the term is relatively new, it is important to draw distinctions between internally generated and other types of risks. It is easy to do so because the latter (by definition) have their origins outside the hosting organization. A good example of the latter is the risk of a vendor not delivering a module on time or worse, going into receivership prior to delivering the code.
Finally, there are certain risks that are neither internally generated nor external. For example, using a new technology is inherently risky simply because it is new. Such a risk is inherent rather than internally generated or external.
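The three-way distinction above (internally generated, external, inherent) lends itself to a simple structure. Here is a minimal sketch in Python; the risk descriptions are taken from the examples in the text, but the `Origin` enum and `Risk` class names are my own illustrative choices, not anything from Barber’s paper:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    INTERNAL = "internally generated"  # arises from the host organisation's rules, norms, structures
    EXTERNAL = "external"              # has its origin outside the hosting organisation
    INHERENT = "inherent"              # intrinsic to the task itself (e.g. a new technology)

@dataclass
class Risk:
    description: str
    origin: Origin

# A toy risk register using the examples mentioned above
register = [
    Risk("Overly bureaucratic PMO delays decisions", Origin.INTERNAL),
    Risk("Organisational culture discourages collaboration", Origin.INTERNAL),
    Risk("Vendor fails to deliver module on time", Origin.EXTERNAL),
    Risk("Unproven new technology", Origin.INHERENT),
]

# Filtering by origin makes the category of interest explicit
internal = [r for r in register if r.origin is Origin.INTERNAL]
```

The point of the classification, as the study that follows shows, is that the `INTERNAL` bucket is typically the largest and the least well managed.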
Understanding the danger within
The author of the paper surveyed nine large projects with the intent of getting some insight into the nature of internally generated risks. The questions he attempted to address are the following:
- How common are these risks?
- How significant are they?
- How well are they managed?
- What is the relationship between the ability of an organization to manage such risks and the organisation’s project management maturity level (i.e. the maturity of its project management processes)?
Data was gathered through group workshops and one-on-one interviews in which the author asked a number of questions that were aimed at gaining insight into:
- The key difficulties that project managers encountered on the projects.
- What they perceived to be the main barriers to project success.
The aim of the one-on-one interviews was to allow for a more private setting in which sensitive issues (politics, dysfunctional PMOs and brain-dead rules / norms) could be freely discussed.
The data gathered was studied in detail, with the intent of identifying internally generated risks. The author describes the techniques he used to minimize subjectivity and to ensure that only significant risks were considered. I will omit these details here, and instead focus on his findings as they relate to the questions listed above.
Commonality of internally generated risks
Since organizational rules and norms are often flawed, one might expect that internally generated risks would be fairly common in projects. The author found that this was indeed the case with the projects he surveyed: in his words, the smallest number of non-trivial internally generated risks identified in any of the nine projects was 15, and the highest was 30! Note: the identification of non-trivial risks was done by eliminating those risks that a wide range of stakeholders agreed as being unimportant.
Unfortunately, he does not explicitly list the most common internally-generated risks that he found. However, there are a few that he names later in the article. These are:
- Resource allocation (see my article on the resource allocation syndrome for much more on this)
- Inadequate sponsorship (see my post on the systemic roots of project failure for more on this)
I suspect that experienced project managers would be able to name many more.
Significance of internally generated risks
Determining the significance of these risks is tricky because one has to figure out their probability of occurrence. The impact is much easier to get a handle on, as one has a pretty good idea of the consequences of such risks should they eventuate. (Question: What happens if there is inadequate sponsorship? Answer: the project is highly likely to fail!). The author attempted to get a qualitative handle on the probability of occurrence by asking relevant stakeholders to estimate the likelihood of occurrence. Based on the responses received, he found that a large fraction of the internally-generated risks are significant (high probability of occurrence and high impact).
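The standard qualitative approach described above – scoring likelihood and impact, then combining them – can be sketched as follows. The risk names echo examples from the post, but the scores and the threshold of 12 are hypothetical values chosen purely for illustration, not figures from Barber’s study:

```python
# Qualitative likelihood and impact scores on a 1-5 scale (hypothetical values)
risks = {
    "Inadequate sponsorship":        {"likelihood": 4, "impact": 5},
    "Centralised decision making":   {"likelihood": 4, "impact": 3},
    "Pseudo-matrix reporting lines": {"likelihood": 3, "impact": 3},
}

def significance(risk):
    """Combine likelihood and impact into a single qualitative score."""
    return risk["likelihood"] * risk["impact"]

# Flag risks whose combined score exceeds an (arbitrary) threshold
THRESHOLD = 12
significant = {name for name, r in risks.items() if significance(r) >= THRESHOLD}
```

On these made-up numbers, inadequate sponsorship (score 20) is flagged as significant while the pseudo-matrix structure (score 9) is not – which illustrates why getting the likelihood estimates right matters so much.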
Management of internally generated risks
To identify whether internally generated risks are well managed, the author asked relevant project teams to look at all the significant internal risks on their project and classify them as to whether or not they had been identified by the project team prior to the research. He found that in over half the cases, less than 50% of the risks had been identified. Worse, most of the risks that were identified were not managed!
The relationship between project management maturity and susceptibility to internally generated risk
Project management maturity refers to the level of adoption of standard good practices within an organization. Conventional wisdom tells us that there should be an inverse correlation between maturity levels and susceptibility to internally generated risk – the higher the maturity level, the lower the susceptibility.
The author assessed maturity levels by interviewing various stakeholders within the organization and also by comparing the processes used within the organization to well-known standards. The results indicated a weak negative correlation – that is, organisations with a higher level of maturity tended to have a smaller number of internally generated risks. However, as the author admits, one cannot read much into this finding as the correlation was weak.
The study suggests that internally generated risks are common and significant on projects. However, the small sample size also suggests that more comprehensive surveys are needed. Nevertheless, anecdotal evidence from colleagues who I spoke with suggests that the findings are reasonably robust. Moreover, it is also clear (both from the study and my conversations) that these risks are not very well managed. There is a good reason for this: internally generated risks originate in human behavior and / or dysfunctional structures. These tend to be difficult topics to address in an organizational setting because people are unlikely to tell those above them in the hierarchy that they (the higher ups) are the cause of a problem. A classic example of such a risk is estimation by decree – where a project team is told to just get it done by a certain date. Although most project managers are aware of such risks, they are reluctant to name them for obvious reasons.
I suspect most project managers who work in corporate environments will have had to grapple with internally generated risks in one form or another. Although traditional risk management does not recognize these risks as risks, seasoned project managers know from experience that people, politics or even processes can pose major problems to smooth working of projects. However, even when recognised for what they are, these risks can be hard to tackle because they lie outside a project manager’s sphere of influence. They therefore tend to become those proverbial pachyderms in the room – known to all but never discussed, let alone documented….and therein lies the danger within.
The general image of management consultants in contemporary society is somewhat ambiguous. To take two rather extreme views: high achievers in universities may see management consulting as a challenging (and well paying!) profession that offers opportunities to make a positive difference to organisations, whereas those on the receiving end of a consultant-inspired restructure may see the profession as an embodiment of much that is wrong with the present-day corporate world.
The truth, as always, is not quite so black and white. In this post I explore this question by taking a look at the different types of consultants one may encounter in the wilds of the corporate jungle. My discussion is based on a typology of management consultants proposed by Mats Alvesson and Anders Johansson in a paper published in this book (see citation at the end of this post for the full reference).
There is a considerable body of research on management consulting, most of which is tucked away in the pages of management journals and academic texts that are rarely read by professionals. It would take me too far afield to do even a cursory review of this literature so I’ll not go there, except to point out that much of the work can be classified as either strongly pro- or anti-consultant. This in itself is revealing: academics are just as divided in their opinions about consultants as professionals are. Indeed, to see just how strong the opinions are, here’s a small list of paper / book titles from the pro and anti-consultant camps:
“Management Consulting as a Developer of SMEs”
“Process Consultation, Vol 1: Its Role in Organization Development”
“The Management Guru as an Organizational Witch Doctor”
“The Violent Rhetoric of Re-engineering: Management Consultancy on the Offensive”
These titles have been taken from the reference list in Alvesson and Johansson’s paper. A quick search on Amazon will reveal many more.
The pro camp depicts consultants as rational, selfless experts who solve complex problems for their clients, sometimes at considerable personal cost. The anti camp portrays them as politically-motivated, self-interested individuals whose main aim is to build relationships that ensure future work. The classification proposed by Alvesson and Johansson puts these extreme views in perspective.
A classification of management consultants
Alvesson and Johansson classify consultants into the following categories based on consultants’ claims to professionalism and their preferred approaches to dealing with political issues:
Esoteric experts
These consultants typically offer high expertise in some specialized area. Some examples of these include IT consultants specializing in complex products (such as ERP systems) and tax experts who have specialized knowledge typically not possessed by those who work within business organisations. As one might expect, esoteric experts have strong claims to professionalism.
One might think that such consultants have little need to play political games as their skill/knowledge does not threaten anyone within organizations. However, this is not always so because esoteric experts may portray themselves as being experts when they actually aren’t. In such cases they would have to use their social and political skills to cover up for their shortcomings. Perhaps more important, esoteric experts may also play politics to secure future gigs.
Typical clients of esoteric experts are purchasers of large IT systems, small organisations in occasional need of specialized skills (lawyers, accountants etc.) and so on.
Brokers of meaning
Brokers of meaning are sense makers: they help clients make sense of difficult or ambiguous situations. Typically brokers of meaning act as facilitators, teachers or idea-generators, who work together with clients to produce meaning. They often do not have deep technical knowledge like esoteric experts, but instead have a good understanding of human nature and the socio-political forces within organizations.
Brokers of meaning typically do not indulge in overt politics as the success of their engagements depends largely on their ability to gain the trust of a wide spectrum of stakeholders within the organization. That said, such consultants, once they have gained trust of a large number of people within an organization, are often able to influence key stakeholders in particular directions. Another way in which brokers of meaning influence decisions is through the skillful use of language – for example, depending on how one wants to portray it, an employee taking the initiative can be called gung-ho (negative) or proactive (positive).
Typical clients of brokers of meaning are managers who are faced with complex decisions.
Traders in trouble
The archetypal trader in trouble is the hatchet-man who is employed by a senior executive who wants to reduce costs. Since the work of these consultants typically involves a great deal of organizational suffering, they are careful to cast their aims in neutral or objective language. Indeed, much of the corporate doublespeak around layoffs (e.g. rightsizing) and cost reduction initiatives (e.g. productivity improvements) originated from traders in trouble. Typical outcomes of such consulting engagements involve massive restructuring on an organization-wide scale, often resulting in a lot of pain for minimal gain.
The work of such consultants is necessarily political – they must support senior management at all costs. Indeed this is another reason that they go to great lengths to portray their proposed solutions as being rational. On the other hand, their claim to professional knowledge is ambiguous as they often have to (knowingly) forgo actions that may be more logical and (more important!) ethical.
Alvesson and Johansson summarise this by quoting from Robert Jackall’s brilliant ethnographical study of managers, Moral Mazes:
The further the consultant moves away from strictly technical issues – that is from being an expert in the ideal sense, a virtuoso of some institutionalized and valued skill – the more anomalous his status becomes. He becomes an expert who trades in others’ troubles. In managerial hierarchies, of course, troubles, like everything else, are socially defined. Consultants have to depend on some authority’s definition of what is troublesome in an organization and, in most cases, have to work on the problem as defined. As it happens, it is extremely rare that an executive declares himself or his own circle to be the problem; rather, other groups in the corporation are targeted to be ‘worked on’.
A terrific summary of the typical trader in trouble!
Clients of such consultants tend to be senior managers who have been tasked with increasing “efficiency” or “productivity.”
Agents of anxiety (suppliers of security)
The agent of anxiety is a messiah who sells a “best practice” solution to his clients’ problems. This type of consultant can therefore also be described as a supplier of security who assures his clients that their troubles will vanish if they just follow his prescribed process. Common examples of agents of anxiety are purveyors of project management methodologies and frameworks (such as PRINCE2 or IPMA) or process improvement techniques (such as Six Sigma).
Although such consultants may seem to have a high claim to professional expertise, they actually aren’t experts. A good number of them are blind followers of the methods they sell; rarely, if ever, do they develop a critical perspective on those practices. Also, agents of anxiety do not have to be overtly political: once they are hired by senior managers in an organization, employees have no choice but to follow the “best practice” techniques that are promoted.
Clients of such consultants tend to be senior managers in organisations that are having trouble with specific aspects of their work – projects, for example. What such managers do not realize is that they would be better served by creating and fostering the right work environment rather than attempting to impose silver bullet solutions sold by suppliers of security.
Now that we are done with the classification, I should mention that most of the consultants I have come across cannot be boxed into a single category. This is no surprise: consultants, like the rest of humanity, display behaviours that vary from situation to situation. Many consultants will display characteristics from all four categories within a single engagement or, at the very least, exhibit both professional and political behaviours. As Alvesson and Johansson state:
Management consultancy work probably typically means some blending of these four types. Sometimes one or two of the types dominates in the same assignment. But few management consultants presumably operate without appealing to the management fashions signalling the needs for consultancy services; few altogether avoid trouble-shooting tasks; few can solely rely on a technocratic approach, and few can simply work with cooperative meaning making processes. The complexity and diversity of consultancy assignments requires that the consultant move back and forth between a professional area and a non-professional area, i.e. areas viewed as coherent with claims of professionalism, recognizing the highly floating boundaries between these areas and the constructed character also of technical and professional work. Professional work is mingled with, but can’t be reduced to, political or symbolic work.
Finally, I should also add that consultants sometimes hide their real objectives because they are required to: their duplicity simply reflects the duplicity of those who hire them. Whether consultants should choose to do such work is another matter altogether. As I have argued elsewhere, the hardest questions we have to deal with in our professional lives are ethical ones.
In this post I have described a typology of consultants. For sure, the four categories of consultants described are stereotypes. That said, although consultants may slip on different personas within a single engagement, most would fit into a single category based on the nature of their work and their overall approach. A knowledge of this classification is therefore helpful, not just for clients, but also for front-line employees who have to deal with consultants and those who hire them.
Alvesson, M. & Johansson, A.W. (2002). Professionalism and politics in management consultancy work. In R. Fincham & T. Clark (Eds), Critical consulting: New perspectives on the management advice industry. Oxford: Blackwell, pp. 228–246.
Information system development is generally viewed as a rational process involving steps such as planning, requirements gathering and design. However, since it often involves many people, it is only natural that the process has social and political dimensions as well.
The rational elements of the development process focus on matters such as analysis, coding and adherence to guidelines. On the other hand, the socio-political aspects are about things such as differences of opinion, conflict and organisational turf wars. The interesting thing, however, is that elements that appear to be rational are sometimes subverted to achieve political ends. Shorn of their original intent, they become rituals that are performed for symbolic rather than rational reasons. In this post I discuss rituals in system development, drawing on a paper by Daniel Robey and Lynne Markus entitled, Rituals in Information System Design.
According to the authors, labelling the process of system design and development as rational implies that the process can be set out and explained in a logical way. Moreover, it also implies that the system being designed has clear goals that can be defined upfront, and that the implemented system will be used in the manner intended by the designers. On the other hand, a political perspective would emphasise the differences between various stakeholder groups (e.g. users, sponsors and developers) and how each group uses the process in ways that benefit them, sometimes to the detriment of others.
In the paper the authors discuss how the following two elements of the system development process are consistent with both views summarised above.
- System development lifecycle
- Techniques for user involvement
I’ll look at each of these in turn in the next two sections, emphasising their rational features.
The basic steps of a system development lifecycle, common to all methodologies, are:
- Requirements gathering / analysis
- Specification / design
- Programming
- Testing
- Installation and training
Waterfall methodologies run through each of the above once whereas Iterative/Incremental methods loop through (a subset of) them as many times as needed.
It is easy to see that the lifecycle has a rational basis – specification depends on requirements and can therefore be done only after requirements have been gathered and analysed; programming can only proceed after design is completed, and so on. It all sounds very logical and rational. Moreover, for most mid-size or large teams, each of the above activities is carried out by different individuals – business analysts, architects/designers, programmers, testers, trainers and operations staff. So the advantage of following a formal development lifecycle is that it makes it easier to plan and coordinate large development efforts, at least in principle.
Techniques for user involvement
It is a truism that the success of a system depends critically on the level of user interest and engagement it generates. User involvement in different phases of system development is therefore seen as a key to generating and maintaining user engagement. Some of the common techniques to solicit user involvement include:
- Requirements analysis: Direct interaction with users is necessary in order to get a good understanding of their expectations from the system. Another benefit is that it gives the project team an early opportunity to gain user engagement.
- Steering committees: Typically such committees are composed of key stakeholders from each group that is affected by the system. Although some question the utility of steering committees, it is true that committees that consist of high ranking executives can help in driving user engagement.
- Prototyping: This involves creating a working model that demonstrates a subset of the full functionality of the system. The great advantage of this method of user involvement is that it gives users an opportunity to provide feedback early in the development lifecycle.
Again, it is easy to see that the above techniques have a rational basis: the logic being that involving users early in the development process helps them become familiar with the system, thus improving the chances that they will be willing, even enthusiastic adopters of the system when it is rolled out.
The political players
Politics is inevitable in any social system that has stakeholder groups with differing interests. In the case of system development, two important stakeholder groups are users and developers. Among other things, the two groups differ in:
- Cognitive style: developers tend to be analytical/logical types while users come from a broad spectrum of cognitive types. Yes, this is a generalisation, but it is largely true.
- Position in organisation: in a corporate environment, business users generally outrank technical staff.
- Affiliations: users and developers belong to different organisational units and therefore have differing loyalties.
- Incentives: Typically, members of the two groups have different goals. The developers may be measured by the success of the rollout whereas users may be judged by their proficiency on the new system and the resulting gains in productivity.
These lead to differences in the ways the two groups perceive processes or events. For example, a developer may see a specification as a blueprint for design whereas a user might see it as a bureaucratic document that locks them into choices they are ill-equipped to make. Such differences in perception make it far from obvious that the different parties can converge on the common worldview assumed by the rational perspective. Indeed, in such situations it isn’t clear at all what constitutes “common interest.” It is precisely such differences that lead to the ritualisation of aspects of the systems development process.
Ritualisation of rational processes
We now look at how the differences in perspectives can lead to a situation where processes that are intended to be rational end up becoming rituals.
Let’s begin with an example that occurs at the inception phase of a system development project: the formulation of a business case. The stated intent of a business case is to make a rational argument as to why a particular system should be built. Ideally it should be created jointly by the business and technology departments. In practice, however, it frequently happens that one of the two parties is given primary responsibility for it. As the two parties are not equally represented, the business case ends up becoming a political document: instead of presenting a balanced case, it presents a distorted view that focuses on one party’s needs. When this happens, the business case becomes symbol rather than substance – in other words, a ritual.
Another example is the handover process between developers and users (or operations, for that matter). The process is intended to ensure that the system does indeed function as promised in the scope document. Sometimes though, both parties attempt to safeguard their own interests: developers may pressure users to sign off whereas users may delay signing-off because they want to check the system ever more thoroughly. In such situations the handover process serves as a forum for both parties to argue their positions rather than as a means to move the project to a close. Once again, the actual process is shorn of its original intent and meaning, and is thus ritualised.
Even steering committees can end up being ritualised. For example, when a committee consists of senior executives from different divisions, it can happen that each member will attempt to safeguard the interests of his or her fief. Committee meetings then become forums to bicker rather than to provide direction to the project. In other words, they become symbolic events that achieve little of substance.
The main conclusion from the above argument is that information system design and implementation is both a rational and political process. As a consequence, many of the processes associated with it turn out to be more like rituals in that they symbolise rationality but are not actually rational at all.
That said, it should be noted that rituals have an important function: they serve to give the whole process of systems development a veneer of rationality whilst allowing for the political manoeuvring that is inevitable in large projects. As the authors put it:
Rituals in systems development function to maintain the appearance of rationality in systems development and in organisational decision making. Regardless of whether it actually produces rational outcomes or not, systems development must symbolize rationality and signify that the actions taken are not arbitrary but rather acceptable within the organisation’s ideology. As such, rituals help provide meaning to the actions taken within an organisation.
And I feel compelled to add: even if the actions taken are completely irrational and arbitrary…
Summary…and a speculation
In my experience, the central message of the paper rings true: systems development and design, like many other organisational processes and procedures, are often hijacked by different parties to suit their own ends. In such situations, processes are reduced to rituals that maintain a facade of rationality whilst providing cover for politicking and other not-so-rational actions.
Finally, it is interesting to note that the problem of ritualisation is a rather general one: many allegedly rational processes in organisations are more symbol than substance. Examples of other processes that are prone to ritualisation include performance management, project management and planning. This hints at a deeper issue, one that I think has its origins in modern management’s penchant for overly prescriptive, formulaic approaches to managing organisations and initiatives. That, however, remains a speculation and a topic for another time…
Much of mainstream project management is technique-based – i.e. it is based on processes that are aimed at achieving well-defined ends. Indeed, the best-known guide in the PM world, the PMBOK, is structured as a collection of processes and associated “tools and techniques” that are categorised into various “knowledge areas.”
Yet, as experienced project managers know, there is more to project management than processes and techniques: success often depends on a project manager’s ability to figure out what to do in unique situations. Dealing with such situations is more art than science. This process (if one can call it that) is difficult to formalize and even harder to teach. As Donald Schon wrote in a paper on the crisis of professional knowledge:
…the artistic processes by which practitioners sometimes make sense of unique cases, and the art they sometimes bring to everyday practice, do not meet the prevailing criteria of rigorous practice. Often, when a competent practitioner recognizes in a maze of symptoms the pattern of a disease, constructs a basis for coherent design in the peculiarities of a building site, or discerns an understandable structure in a jumble of materials, he does something for which he cannot give a complete or even a reasonably accurate description. Practitioners make judgments of quality for which they cannot state adequate criteria, display skills for which they cannot describe procedures or rules.
Unfortunately this kind of ambiguity is given virtually no consideration in standard courses on project management. Instead, like most technically-oriented professions such as engineering, project management treats problems as being well-defined and amenable to standard techniques and solutions. Yet, as Schon tells us:
…the most urgent and intractable issues of professional practice are those of problem-finding. “Our interest”, as one participant put it, “is not only how to pour concrete for the highway, but what highway to build? When it comes to designing a ship, the question we have to ask is, which ship makes sense in terms of the problems of transportation?”
Indeed, the difficulty in messy project management scenarios often lies in figuring out what to do rather than how to do it. Consider the following situations:
- You have to make an important project-related decision, but don’t have enough information to make it.
- Your team is overworked and your manager has already turned down a request for more people.
- A key consultant on your project has resigned.
Each of the above is a not-uncommon scenario in the world of projects. The problem in each of these cases lies in figuring out what to do given the unique context of the project. Mainstream project management offers little advice on how to deal with such situations, but their ubiquity suggests that they are worthy of attention.
In reality, most project managers deal with such situations using a mix of common sense, experience and instinct, together with a deep appreciation of the specifics of the environment (i.e. the context). Their actions may even be in complete contradiction to textbook techniques. For example, in the first case described above, the rational thing to do is to gather more data before making a decision. However, when faced with such a situation, a project manager might make a snap decision based on his or her knowledge of the politics of the situation. Often, the project manager will not be able to adequately explain the rationale for the decision beyond knowing that “it felt like the right thing to do.” It is more an improvisation than a plan.
Schon used the term reflection-in-action to describe how practitioners deal with such situations, and used the following example to illustrate how it works in practice:
Recently, for example, I built a wooden gate. The gate was made of wooden pickets and strapping. I had made a drawing of it, and figured out the dimensions I wanted, but I had not reckoned with the problem of keeping the structure square. I noticed, as I began to nail the strapping to the pickets that the whole thing wobbled. I knew that when I nailed in a diagonal piece, the structure would become rigid. But how would I be sure that, at that moment, the structure would be square? I stopped to think. There came to mind a vague memory about diagonals-that in a square, the diagonals are equal. I took a yard stick, intending to measure the diagonals, but I found it difficult to make these measurements without disturbing the structure. It occurred to me to use a piece of string. Then it became apparent that I needed precise locations from which to measure the diagonal from corner to corner. After several frustrating trials, I decided to locate the center point at each of the corners (by crossing diagonals at each corner), hammered in a nail at each of the four center points, and used the nails as anchors for the measurement string. It took several moments to figure out how to adjust the structure so as to correct the errors I found by measuring, and when I had the diagonal equal, I nailed in the piece of strapping that made the structure rigid…
Such encounters with improvisation are often followed by a retrospective analysis of why the actions taken worked (or didn’t). Schon called this latter process reflection-on-action. I think it isn’t a stretch to say that project managers hone their craft through reflection in and on ambiguous situations. This knowledge cannot be easily codified into techniques or practices, but it is worthy of study in its own right. To this end, Schon advocated an epistemology of (artistic) practice – a study of what such knowledge is and how it is acquired. In his words:
…the study of professional artistry is of critical importance. We should be turning the puzzle of professional knowledge on its head, not seeking only to build up a science applicable to practice but also to reflect on the reflection-in-action already embedded in competent practice. We should be exploring, for example, how the on-the-spot experimentation carried out by practicing architects, physicians, engineers and managers is like, and unlike, the controlled experimentation of laboratory scientists. We should be analyzing the ways in which skilled practitioners build up repertoires of exemplars, images and strategies of description in terms of which they learn to see novel, one-of-a-kind phenomena. We should be attentive to differences in the framing of problematic situations and to the rare episodes of frame-reflective discourse in which practitioners sometimes coordinate and transform their conflicting ways of making sense of confusing predicaments. We should investigate the conventions and notations through which practitioners create virtual worlds-as diverse as sketch-pads, simulations, role-plays and rehearsals-in which they are able to slow down the pace of action, go back and try again, and reduce the cost and risk of experimentation. In such explorations as these, grounded in collaborative reflection on everyday artistry, we will be pursuing the description of a new epistemology of practice.
It isn’t hard to see that similar considerations hold for project management and related disciplines.
In closing, project management as laid out in books and BOKs does not equip a project manager to deal with ambiguity. As a start towards redressing this, formal frameworks need to acknowledge the limitations of the techniques and procedures they espouse. Although there is no simple, one-size-fits-all way to deal with ambiguity in projects, lumping it into a bucket called “risk” (or worse, pretending it does not exist) is not the answer.
Flexibility is one of those buzzwords that keeps coming up in organizational communiques and discussions. People are continually asked to display flexibility, without ever being told what the term means: flexible workplaces, flexible attitudes, flexible jobs – the word itself has a flexible meaning that depends on the context in which it is used and by whom.
When words are used in this way they become platitudes – empty words that make a lot of noise. In this post, I analyse the platitude, flexibility, as it is used in organisations. My discussion is based on a paper by Thomas Eriksen entitled, Mind the Gap: Flexibility, Epistemology and the Rhetoric of New Work.
Background – a bit about organizational platitudes
One of the things that struck me when I moved from academia to industry is the difference in the way words or phrases are used in the two domains. In academia one has to carefully define the terms one uses (particularly if one is coining a new term) whereas in business it doesn’t seem to matter: words can mean whatever one wants them to mean (OK, this is an exaggeration, but not by much). Indeed, as Paul Culmsee and I discuss in the first chapter of The Heretic’s Guide to Best Practices, many terms that are commonly bandied about in organizations are platitudes because they are understood differently by different people.
A good example of a platitude is the word governance. One manager may see governance as being largely about oversight and control whereas another might interpret it as being about providing guidance. Such varying interpretations can result in major differences in the way the two managers implement governance: the first one might enforce it as a compliance-oriented set of processes that leave little room for individual judgement while the other might implement it as a broad set of guidelines that leave many of the decisions in the hands of those who are actually doing the work. Needless to say, the results in the two cases are likely to be different too.
Flexibility – the conventional view
A good place to start our discussion of flexibility is with the dictionary. The online Oxford Dictionary defines it as:
- the ability to be easily modified
- willingness to change or compromise
The term is widely used in both these senses in organizational settings. For example, people speak of flexible designs (i.e. designs that can be easily modified) or flexible people (those who are willing to change or compromise). However – and this is the problem – the term is open to interpretation: what Jack might term a flexible approach may be seen by Jill as a complete lack of method. These differences in interpretation become particularly obvious when the word is used in a broad context – such as in a statement justifying an organizational change. An executive might see a corporate restructure and the resulting changes in jobs and roles as a means to achieve organizational flexibility, but those affected by it may see it as constraining theirs. As Eriksen states:
Jobs are flexible in the sense that they are unstable and uncertain, few employees hold the same jobs for many years, the content of jobs can be changed almost overnight, and the boundaries between work and leisure are negotiable and chronically fuzzy.
Indeed, such “flexibility” which requires one to change at short notice results in a fragmentation of individual experience and a resulting loss of a coherent narrative of one’s life. It appears that increased flexibility in one aspect results in a loss of flexibility in another. Any sensible definition of flexibility ought to reflect this.
Consider the following definition of flexibility proposed by Gregory Bateson:
“Flexibility is uncommitted potential for change”
This deceptively simple statement is a good place to start understanding what flexibility really means for projects, organisations …and even software systems.
As Eriksen tells us, Bateson proposed this definition in the context of ecology. In particular, Bateson had in mind the now obvious notion that the increased flexibility we gain through our increasingly energy-hungry lifestyles results in a decrease in the environment’s capacity to cope with the consequences. This is true of flexibility in any context: a gain in flexibility in one dimension will necessarily be accompanied by a loss of flexibility in another.
Another implication of the above definition is that a system that is running at or near the limits of its operating variables cannot be flexible. The following examples should make this clear:
- A project team that is putting in 18 hour workdays in order to finish a project on time.
- A car that’s being driven at top speed.
- A family living beyond their means.
All these systems are operating at or near their limits; they have little or no spare capacity to accommodate change.
A third implication of the definition follows from the preceding one: the key variables of a flexible system should lie in the mid-range between their upper and lower limits. In terms of the above examples:
- The project team should be putting in normal hours.
- The car should be driven at or below the posted road speed limits.
- The family should be living within its income, with a reasonable amount to spare.
Of course, the whole point of ensuring that systems operate in their comfort zone is that they can be revved up if the need arises. Such revving up, however, should be an exceptional circumstance rather than the norm – a point that those who run projects, organisations (and, yes, even vehicles) often tend to forget. If one operates a system at the limits of its tolerance for too long, not only will it not be flexible, it will break.
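Bateson’s definition lends itself to a simple back-of-the-envelope illustration: if we treat flexibility as the headroom a system retains across its key operating variables, then a system running near any limit scores close to zero. The sketch below is my own toy model – the variables, limits and numbers are made up to mirror the three examples above, and none of it comes from Eriksen’s paper.

```python
# Toy illustration of Bateson's "uncommitted potential for change".
# All variables, limits and numbers here are illustrative assumptions.

def headroom(value, lower, upper):
    """Fraction of the operating range still uncommitted on the more
    constrained side: 0 means at a limit, 0.5 means dead mid-range."""
    span = upper - lower
    return min(value - lower, upper - value) / span

def flexibility(variables):
    """A system's flexibility is capped by its most constrained variable."""
    return min(headroom(v, lo, hi) for v, lo, hi in variables)

# (value, lower limit, upper limit) for: team hours/day, car speed (km/h),
# family spending as a percentage of income.
stretched = [(17, 0, 18), (170, 0, 180), (95, 0, 100)]
mid_range = [(8, 0, 18), (90, 0, 180), (60, 0, 100)]

print(flexibility(stretched))  # close to zero: no spare capacity
print(flexibility(mid_range))  # well clear of the limits
```

The numbers make the Batesonian point: because flexibility is capped by the most constrained variable, revving everything up to its limit leaves the system with no uncommitted potential for change at all.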
Flexibility in the workplace
As mentioned in the introduction, the term flexibility keeps cropping up in organizational settings: corporate communiques exhort employees to be flexible in the face of change. This is typically a coded signal that employees should expect uncertainty and be prepared to adjust to it. A related manifestation of flexibility is the blurring of the distinction between work and personal life. As Eriksen puts it:
The term flexibility is often used to describe this new situation: Jobs are flexible in the sense that they are unstable and uncertain, few employees hold the same jobs for many years, the content of jobs can be changed, and the boundaries between work and leisure are poorly defined.
This trend is aided by recent developments in technology that enable employees to be perpetually on call. This is often sold as a work from home initiative but usually ends up being much more. Eriksen has this to say about home offices:
One recent innovation typically associated with flexibility is the home office. In Scandinavia (and some other prosperous, technologically optimistic regions), many companies equipped some of their employees with home computers with online access to the company network in the early 1990s, in order to enhance their flexibility. This was intended to enable employees to work from home part of the time, thereby making the era when office workers were chained to the office desk all day obsolete.
In the early days, there were widespread worries among employers to the effect that a main outcome of this new flexibility would consist in a reduction of productivity. Since there was no legitimate way of checking how the staff actually spent their time out of the office, it was often suspected that they worked less from home than they were supposed to. If this were in fact the case, working from home would have led to a real increase in the flexibility of time budgeting. However, work researchers eventually came up with a different picture. By the late 1990s, hardly anybody spoke of the home office as a convenient way of escaping from work; rather, the concern among unionists as well as researchers was now that increasing numbers of employees were at pains to distinguish between working hours and leisure time, and were suffering symptoms of burnout and depression. The home office made it difficult to distinguish between contexts that were formerly mutually exclusive because of different physical locations.
It is interesting to see this development in the light of Bateson’s definition of flexibility: the employee gains flexibility in space (he or she can work from home or from the office) at the expense of flexibility in time (organization time encroaches on personal time). As Eriksen states:
There seems to be a classic Batesonian flexibility trade-off associated with the new information technologies: increased spatial flexibility entails decreased temporal flexibility. If inaccessibility and ‘empty time’ are understood as scarce resources, the context of ‘new work’ thus seems to be an appropriate context for a new economics as well. In fact, a main environmental challenge of our near future will consist in protecting slow time and gaps from environmental degradation.
In short, it appears that flexibility for the organization necessarily implies a loss of flexibility for the individual.
Flexibility is in the eye of the beholder: an action to increase organisational flexibility by, say, redeploying employees would likely be seen by those affected as a move that constrains their (individual) flexibility. Such a dual meaning is characteristic of many organizational platitudes such as Excellence, Synergy and Governance. It is an interesting exercise to analyse such platitudes and expose the difference between their espoused and actual meanings. So I sign off for 2013, wishing you many hours of platitude-deconstructing fun 🙂
Enterprise IT systems that are built or customised according to specifications sometimes fail because of lack of adoption by end-users. Typically this is treated as a problem of managing change, and is dealt with in the standard way: by involving key stakeholders in system design, keeping them informed of progress, generating and maintaining enthusiasm for the new system and, most important, spending enough time and effort in comprehensive training programmes.
Yet these efforts are sometimes in vain and the question, quite naturally, is: “Why?” In this post I offer an answer, drawing on a paper by Brian Pentland and Martha Feldman entitled, Designing Routines: On the Folly of Designing Artifacts While Hoping for Patterns of Action.
Setting the context
From a functional point of view, enterprise software embodies organisational routines – i.e. sequences of actions that have well-defined objectives, and are typically performed by business users in specific contexts. Pentland and Feldman argue that although enterprise IT systems tend to treat organisational routines as well defined processes (i.e. as objects), it is far from obvious that they are so. Indeed, the failure of business process “design” or “redesign” initiatives can often be traced back to a fundamental misunderstanding of what organisational routines are. This is a point that many system architects, designers and even managers / executives would do well to understand.
As Pentland and Feldman state:
… the frequent disconnect between [system or design] goals and results arises, at least in part, because people design artifacts when they want patterns of action…we believe that designing things while hoping for patterns of action is a mistake. The problem begins with a failure to understand the nature of organizational routines, which are the foundation of any work process that involves coordination among multiple actors… even today, organizational routines are widely misunderstood as rigid, mundane, mindless, and explicitly stored somewhere. With these mistaken assumptions as a starting point, designing new work routines would seem to be a simple matter of creating new checklists, rules, procedures, and software. Once in place, these material artifacts will determine patterns of action: software will be used, checklists will get checked, and rules will be followed.
The fundamental misunderstanding is that design artifacts, checklists, and rules and procedures encoded in software are mistaken for the routine instead of being seen for what they are: idealised representations of the routine. Many software projects fail because designers do not understand this. The authors describe a case study that highlights this point and then draw some general inferences from it. I describe the case study in the next section and then look into what can be learnt from it.
The case study deals with a packaged software implementation at a university. The university had two outreach programs that were quite similar, but were administered independently by two different departments using separate IT systems. Changes in the university IT infrastructure meant that one of the departments would lose the mainframe that hosted their system.
An evaluation was performed, and the general consensus was that it would be best for the department losing the mainframe to start using the system that the other department was using. However, this was not feasible because the system used by the other department was licensed only for a single user. It was therefore decided to upgrade to a groupware version of the product. Since the proposed system was intended to integrate the outreach-related work of the two departments, this also presented an opportunity to integrate some of the work processes of the two departments.
A project team was set up, drawing on expertise from both departments. Requirements were gathered and a design was prepared based on them. The system was customised and tested as per the design, and data from the two departments was imported from the old systems into the new one. Further, the project team knew the importance of support and training: additional support was organised, as were training sessions for all users.
But just as everything seemed set to go, things started to unravel. In the author’s words:
As the launch date grew nearer, obstacles emerged. The implementation met resistance and stalled. After dragging their feet for weeks, [the department losing the mainframe] eventually moved their data from the mainframe to a stand-alone PC and avoided [the new system] entirely. The [other department] eventually moved over [to the new system], but used only a subset of the features. Neither group utilized any of the functionality for accounting or reporting, relying instead on their familiar routines (using standalone spreadsheets) for these parts of the work. The carefully designed vision of unified accounting and reporting did not materialize.
In short, the department losing the mainframe worked around the new system by migrating their data to a standalone PC and running their own system from it, while the other department did move to the new system but used only a small subset of its features. All things considered, the project was a failure.
So what went wrong?
The authors emphasise that the software was more than capable of meeting the needs of both departments, so technical or even functional failure can be ruled out as a reason for non-adoption. On speaking with users, they found that the main objections to the system had to do with the work patterns it imposed. Specifically, people objected to having to give up control over their work practices and their identities as members of specific departments. There was a yawning gap between the technical design of the system and the work processes as they were understood and carried out by people in the two departments.
The important thing to note here is that people found ways to work around the system despite the fact that the system actually worked as per the requirements. The system failed despite being a technical success. This is a point that those who subscribe to the mainstream school of enterprise system design and architecture would do well to take note of.
Dead and live routines
The authors go further: to understand resistance to change, they invoke the notion of dead and live routines. Dead routines are those that have been represented in technical artifacts such as documentation, flowcharts and software, whereas live routines are those that are actually executed by people. The point is that live routines are living things: they evolve in idiosyncratic ways because people inevitably tweak them, sometimes without even being conscious that they are making changes. As a consequence, live routines often generate new patterns of action.
The crux of the problem is that software is capable of representing dead routines, not live ones.
Complementary aspects of routines
A routine comprises people's understanding of it (what the authors call the ostensive aspect) and the way in which it is actually performed (the performative aspect). The two aspects, the routine as understood and the routine as performed, complement each other: new actions modify one's understanding of a routine, and the new understanding in turn modifies future actions.
Now, the interesting thing is that no artifact can represent all aspects of a routine – something is always left out. Even when artifacts such as written procedures are intended to change both the ostensive and performative aspects of a routine (as they usually are), there is no guarantee that they will actually influence behaviour. Managers who have encountered resistance to change will have first-hand experience of this: it is nearly impossible to force change upon a group that does not want it.
This leads us to another set of complementary aspects of routines. On the one hand, technology puts in place structures that enable certain actions while constraining others – this is the standard technological or material view of processes as represented in systems. In contrast, according to the social or agency view, people are free to use technology in whatever way they like, and sometimes even not use it at all. In the case study, designers took the former view whereas users focused on the latter.
The main point to take away from the foregoing discussion is that designers/ engineers and users often have very different perspectives on processes. Software that is based on a designer/engineer perspective alone will invariably end up causing problems for end users.
Routines and reality
Traditional systems design proceeds according to the following steps:
- Gather requirements
- Analyse and encode them in rules
- Implement rules in a software system
- Provide people with incentives and training to follow the rules
- Rollout the system
Those responsible for implementing the system described in the case study followed these steps faithfully, yet the system failed: one department didn't use it at all and the other used it selectively. The foregoing discussion tells us that the problem arises from confusing artifacts with actions or, as I like to put it, routines with reality.
As the authors write:
This failure to understand the difference between artifacts and actions is not new…the failure to distinguish the social (or human) from the technical is a “category mistake.” Mistakenly treating people like machines (rule-following automata) results in a wide range of undesirable, but not entirely surprising, outcomes. This category difference has been amply demonstrated for individuals, but it applies equally (if not more so) to organizational routines. This is because members’ embodied and cognitive understandings are often diverse, multiple, and conflicting.
The failure to understand the difference between routines and reality is not new, but it appears that the message is yet to get through: organisations continue to implement (new routines via) enterprise systems without putting in the effort to understand ground-level reality.
Some tips for designers
The problem of reconciling user and designer perspectives is not an easy one. Nevertheless, the authors describe an approach that may prove useful to some. The first concept is that of a narrative network – a collection of functional events that are related by the sequence in which they occur. A functional event is one in which two actants (or objects that participate in a routine) are linked by an action. The actants here could be human or non-human and the action represents a step in the process. Examples of functional events would be:
- The sales rep calls on the customer.
- The customer orders a product.
- The product is shipped to the customer.
The important thing to note is that a functional event does not specify how the process is carried out – that can be done in myriad ways. For example, the sales rep may meet the customer face-to-face or may simply make a phone call. The event specifies only the basic intent of the action, not the detail of how it is to be carried out. This leaves open the possibility of improvisation if the situation calls for it. More importantly, it places the responsibility for making that decision on the person responsible for carrying out the action (in the case of a human actant).
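To make the idea concrete, here is a minimal sketch (not from the paper; all names are illustrative) of how one might model a narrative network in code: actants and functional events as simple data structures, with the network as an ordered sequence of events. Note that each event records only the intent of an action, never the mechanics of how it is performed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Actant:
    """A participant in a routine; may be human or non-human."""
    name: str
    is_human: bool

@dataclass(frozen=True)
class FunctionalEvent:
    """Two actants linked by an action; the action names an intent, not a method."""
    source: Actant
    action: str
    target: Actant

# The three example events from the text, as a narrative network
# (an ordered collection of functional events).
sales_rep = Actant("sales rep", is_human=True)
customer = Actant("customer", is_human=True)
product = Actant("product", is_human=False)

narrative_network = [
    FunctionalEvent(sales_rep, "calls on", customer),
    FunctionalEvent(customer, "orders", product),
    FunctionalEvent(product, "is shipped to", customer),
]

for event in narrative_network:
    print(f"{event.source.name} {event.action} {event.target.name}")
```

How each event is realised (a face-to-face meeting, a phone call, an automated shipment notice) is deliberately absent from the model, leaving room for improvisation.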
Then it is useful to classify a functional event based on the type of actants participating in the event – human or (non-human) artifact. There are four possibilities as shown in Figure 1.
The interesting thing to note is that, from the perspective of system design, an artifact-artifact event is one over which the designer has the strongest control (actions involving only artifacts can be automated), whereas a designer's influence is weakest in human-human events (humans rarely, if ever, do exactly as they are told!). Decomposing a routine in this way can thus focus designer effort on areas where it is likely to pay off and, more importantly, help designers understand where they are likely to get some pushback.
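The classification above can be sketched as a simple function (again illustrative, not from the paper): each functional event is reduced to the pair "is the source actant human?" / "is the target actant human?", which determines which of the four quadrants it falls into and, by extension, how much control the designer can expect.

```python
def designer_control(source_is_human: bool, target_is_human: bool) -> str:
    """Map a functional event's actant types to the expected degree of designer control."""
    if not source_is_human and not target_is_human:
        return "artifact-artifact: strongest control, candidate for automation"
    if source_is_human and target_is_human:
        return "human-human: weakest control, expect improvisation"
    return "mixed human-artifact: partial control"

# The three events from the sales example, reduced to actant-type pairs.
events = {
    "sales rep calls on customer": (True, True),
    "customer orders product": (True, False),
    "product is shipped to customer": (False, True),
}
for name, (src, tgt) in events.items():
    print(f"{name} -> {designer_control(src, tgt)}")
```

Run over a whole routine, a classification like this gives a quick map of where automation is safe and where resistance is most likely to surface.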
Finally, there are some general points to consider when designing a system. These include:
- Reinforce the patterns of the routine: Much as practice makes an orchestra perform better, organisations have to invest in practising their routines. Practice connects the abstract routine to the performed one.
- Consider every stakeholder group’s point of view: This is a point I have elaborated at length in my posts on dialogue mapping so I’ll say no more about it here.
- Understand the relationship between abstract patterns and actual performances: This involves understanding whether there are different ways to achieve the same goal. Also consider whether it is important to enforce a specific set of actions leading up to the goal, or whether the actions matter less than the goal.
- Encourage people to follow patterns of action that are important: This involves creating incentives to encourage people to carry out certain important actions in specified ways. It is not enough to design and implement a particular pattern within a routine. One has to make it worthwhile for people to follow it.
- Make it possible for users to become designers: Most routines involve decision points where actors have to choose between alternative, but fixed, courses of action. At such points human actors have little room to improvise. It may be more fruitful to replace decision points with design points, where actors improvise their actions.
- Lock in actions that matter: Notwithstanding the previous point, there are certain actions that absolutely must be executed as designed. It is as important to ensure that these are executed as they should be as it is to allow people to improvise in other situations.
- Keep an open mind and invite engagement: Perhaps the most important point is to continually engage with the people who work the routine, and to be open to their input on what's working and what isn't.
Most of the above points are not new; I'd even go so far as to say they are obvious. Nevertheless, they are worth restating because they often go unheeded.
A paradox of enterprise systems is that they can fail even if built according to specification. When this happens, it is usually because designers fail to appreciate the flexibility of the business processes in question. As the authors state, "…even today, organizational routines are widely misunderstood as rigid, mundane, mindless, and explicitly stored somewhere." This is particularly true for processes that involve humans. As the authors put it, when automating processes that involve humans it is a mistake to design software artifacts while hoping for (fixed) patterns of action. The tragedy is that this is exactly how enterprise systems are often designed.