Over the last few months, I’ve published a number of posts in which the term emergent design makes a cameo appearance (see this article or this interview for example). Some readers may have noticed that although the term is used in various contexts in the articles/interviews, it is not explicitly defined anywhere. This is deliberate. Emergent design is…well, emergent, so a detailed definition is neither necessary nor useful – provided one can describe a set of guidelines for its practice. My main aim in this post is to do just that. To keep things concrete, I will discuss the guidelines in the context of the often bizarre world of enterprise IT, a domain that epitomizes top-down, plan-based design.
(Note: Before going any further a couple of clarifications are in order. Firstly, the word emergent as used here has nothing to do with emergence in complex systems. Secondly, the guidelines provided here are a starting point, not a comprehensive list.)
The wickedness of enterprise IT
Most IT initiatives in large organisations are planned, designed and executed in a top-down manner, with little or no attempt to understand the existing culture and/or on-the-ground realities. This observation applies not only to enterprise software projects, such as those involving Collaboration or Customer Relationship Management platforms, but also to design and process-driven IT functions like architecture and service management.
Top-down approaches are liable to fail because enterprise IT displays many of the characteristics of wicked problems. In particular, organization-wide IT initiatives:
- Are one-shot operations – for example, an ERP system is simply too expensive to implement over and over again.
- Have no stopping rule – enterprise IT systems are never completely done; there are always things to be fixed and additional features to be implemented.
- Are highly contentious – whether or not an initiative is good, or even necessary, depends on who you ask.
- Could be done in other, possibly “better”, ways – and the problem is that one person’s “better” is another one’s “worse”!
- Are essentially unique – and don’t let vendors or Big $$$ consultants tell you otherwise!
These characteristics make enterprise IT a socially complex problem – that is, one in which different stakeholder groups have different perceptions of the problem that the initiative is intended to address. The most important implication of social complexity is that such problems cannot be tackled using the rational methods of planning, design and implementation that are taught in schools, propagated in books, or evangelized by standards authorities and assorted snake oil salespersons.
Enter emergent design
The term emergent design was coined by David Cavallo in the context of technology-driven education reforms in indigenous cultures (the original paper can be accessed here). Cavallo observed that traditional systems engineering approaches that attempt to change an educational system in a top-down manner fail primarily because they do not take into account the unique features of local cultures. Instead, he found that using the existing culture as a starting point from which to work towards systemic change offered a much better chance of the new ways taking root. In his words, “[the] adoption and implementation of new methodologies needs to be based in, and grow from, the existing culture.”
Cavallo’s words hold the key to understanding emergent design in the context of enterprise IT: any enterprise IT initiative necessarily affects many stakeholders, and should therefore start by taking their concerns and existing ways of working seriously. Sounds a bit like the Agile credo of valuing people over processes, doesn’t it?
Not quite. As I will discuss in the remainder of this post, although emergent design shares a number of features with Agile methods, there’s considerably more to it than that. That said, chances are that good Agile coaches are emergent design practitioners without knowing it. This is something that will become apparent as we go on.
Guidelines for emergent design
I have, for a while, been thinking about what emergent design means in the context of enterprise IT. Among other things, I have been looking at how it might be applied to a wide variety of initiatives that are traditionally planned upfront – things such as offshoring, and enterprise-wide projects such as data warehouse or enterprise resource planning initiatives.
In one of those serendipitous occurrences, last week I happened to re-read an old series of articles entitled Confessions of a post-SharePoint Architect, written by my friend, the ace sensemaker and emergent entrepreneur, Paul Culmsee. Although the series focuses on emergent design principles in the context of the Microsoft SharePoint platform, many of the points that Paul makes apply to enterprise IT in general. In addition to material drawn from Paul’s blog, I also borrow from a few posts on my own blog. In the interests of space I have provided only a brief overview of each point, as they have all been elaborated elsewhere. The original pieces fill in a lot of relevant detail, so please do follow the links if you have the inclination and the time.
With that said, let’s get to it.
Be a midwife rather than an expert
You do not learn in school how to deal with wicked problems…expertise and ignorance is distributed over all participants in a wicked problem. There is a symmetry of ignorance among those who participate because nobody knows better by virtue of his degrees or his status. There are no experts (which is irritating for experts), and if experts there are, they are only experts in guiding the process of dealing with a wicked problem, but not for the subject matter of the problem.
The first guideline of emergent design is to realize that no one is an expert – neither you nor your Big $$$ consultant. The only way to build a robust and lasting system or process is for everyone to put their heads together and work towards it in a collaborative fashion, dispensing with the pretense that one can outsource one’s thinking to the “expert”. Indeed, the role of the “expert” is to create and foster the conditions for such collaboration to occur. Paul and I elaborate on this point at length in our book and this paper (summarized in this post).
In brief, the knowledge required to successfully implement an enterprise system is distributed across all stakeholders (analysts, consultants, architects and, above all, users). Pulling all this together into a coherent whole has more to do with facilitation and people skills than technology.
Ensure that governance is about enablement rather than control
Most organisations have onerous procedures to ensure that people do the right thing – the poor system lead drowns in a ream of documentation that she is required to “read and understand”; things have to be documented according to certain standards etc. etc. All these procedures are aimed at keeping people on the straight and narrow path of IT righteousness.
I submit that most governance regimes within organisations encourage a checkbox-based compliance mentality aimed at ensuring that people comply in letter, but not necessarily in spirit (actually, never in spirit). As Paul mentions in this post, governance ought to be about enablement rather than compliance or control.
There’s a very simple test to tell one from the other. When you come across a procedure that you are required to follow – an SOP or a methodology, say – ask yourself: does this help me do my job?
If the answer is yes, the procedure is an enabler; if not, it is likely a control that is primarily intended as a CYA mechanism.
Do not penalize people for learning
The main rationale behind iterative and incremental approaches to software development is that they encourage (and take advantage of) continuous learning. Incremental increases in functionality are easier to test exhaustively, and errors are easier to correct. Reviews and retrospectives also tend to be more focused, leading to a better chance of lessons actually being learnt. Thanks to the Agile movement, this is now well known and understood in mainstream IT.
However, learning is not just a matter of using iterative/incremental methodologies; one also needs to build an environment that encourages it. This is a trickier matter because it depends on things that are outside an individual manager’s control; indeed, it has more to do with the entire IT function or even the organization. In an organisation with a strong blame culture, the culture tends to win against pretty much any methodology, agile or otherwise. Blame cultures preclude learning because mistakes are punished and people are scapegoated as a result. Check out this article on learning organizations for more on this topic, and this post for a more nuanced (realistic?) view.
With all that said about the importance of learning, it is also important to note that there are situations in which learning matters less. This is the case for work that can be planned and scripted in detail up front. It is important to be able to distinguish between the two types of situations…which brings us to the next point.
Understand the difference between complicated and complex initiatives
Requirements analysis is one of the first activities in traditional system development. Most enterprise IT initiatives that are driven by a vendor or consultant will have many sessions for this at the front-end of an engagement. Enterprise wisdom tells us that things need to be specified in detail at the start. The rationale behind this is to set requirements in stone so that the entire project can be planned in detail before actual implementation begins. Such an approach is fine if one knows for sure:
- How the future is going to unfold and has appropriate contingencies in place for adverse events;
- That users have a clear idea of what they want, and
- That requirements will not change (or will change only minimally).
It is obvious that this approach will be disastrous if any of the above assumptions are incorrect. Unfortunately, it is more often the case that the assumptions do not hold, as evidenced by the innumerable IT projects that have failed due to inadequate risk management, unclear scope and/or uncontrolled change.
So how does one distinguish between initiatives that can be planned in detail upfront and those that can’t?
The distinction is best illustrated via an example: consider a project to replace a fixed-line phone system with VoIP versus an ERP project. The first project has a fixed set of requirements that are much the same across different user groups. The second, in contrast, involves diverse stakeholder groups, each with their own unique expectations of the system. Both projects are complicated from a technology point of view, but the second has elements of wickedness arising from social complexity. Consequently, the two projects cannot be run in the same way. In particular, the first can be planned in detail upfront while the second cannot. Borrowing from David Snowden’s Cynefin framework, we call the first type of project complicated and the second complex. You need to understand which kind of initiative you are dealing with before deciding which project management approach would be appropriate.
Beware of platitudinous goals
The enterprise IT marketplace is largely buzzword driven. The in-vogue buzzwords at the time of writing are the cloud and big data. Buzzwords, while sounding “right”, are actually platitudes – words that are devoid of meaning because different people interpret and use them differently. The use of platitudes therefore results in confusion rather than clarity. For example, your information security guy may be wary of the cloud because he sees it as a potential security risk, whereas a business user may view it positively because it liberates her from the clutches of a ponderous IT department. (Check out this video for a cautionary fable regarding a poorly thought out cloud strategy.)
People tend to use platitudes as mental shortcuts to avoid thinking things through and coming up with their own opinions. It is therefore pointless to ask a person who uses a platitude to clarify what he or she means: they have not thought it through and will therefore be unable to give you a good answer.
The best way to deconstruct a platitude is via an oblique approach that is best illustrated through an example.
Say someone tells you that they want to improve efficiency (a rather common platitude). Asking them to define efficiency is not a good way to go because the answer you get is likely to be couched in generalities such as higher productivity and performance – words that are world-class platitudes in their own right! Instead, it is better to ask them what difference would be apparent if efficiency were to improve. To answer this question, they would have to come down from platitude-land and start thinking about concrete, measurable outcomes.
Use open questions to broaden perspectives
A more general point to note from the foregoing is that the framing of questions matters, particularly when one wants people to come up with ideas. For example, instead of asking people what they like (or dislike) about a particular approach, it is generally better to ask them what aspects of the approach are important to them. The latter question is neutrally framed, so it does not impose any constraints on their thinking.
Another good way to get people thinking about possibilities is to ask them what they would like to do if there were no constraints (such as budget or time, say). Conversely, if you encounter a constraining factor (like a company policy), it is sometimes helpful to ask what the intent behind the policy is.
If posed in the right way and in the right environment, answers to such questions get people to think beyond their immediate concerns and focus on purposes and outcomes instead.
Check out Paul’s posts on powerful questions to find out more about these perspective-expanding questions.
Understand the need for different types of thinking
One of the ironies of enterprise IT initiatives is that the most important decisions have to be made when the least information is available. As I wrote in the introduction to this paper,
The early stages of projects are fraught with ambiguity. Yet, it is at this “front end” of projects that the most important decisions have to be made. Front-end decisions are hard because there is:
- uncertainty about scope, i.e. about what needs to be done;
- uncertainty about rationale, i.e. why it needs to be done; and
- uncertainty about approach, i.e. how it should be done.
Arguably, the lack of clarity regarding any of these can sow the seeds of failure in the early stages of a project.
The standard approach is to treat uncertainty as a problem that can be solved through convergent thinking – i.e. the kind of thinking that assumes a problem has a single “correct answer.” However, project uncertainty has a social dimension: different people have different perceptions of things like scope and rationale, as well as different coping mechanisms for ambiguity. Project uncertainty is therefore a wicked problem that has no single “right answer,” which can cause anxiety for some. One consequently needs to begin with divergent thinking, which is largely about generating ideas and options, and move to convergent thinking only when:
- The group has a shared understanding of the main issues.
- An adequate set of options has been generated.
As I alluded to above, people tend to show a preference for one type of thinking over the other. The strength of collaborative problem solving lies precisely in the fact that a group is likely to have a mix of the two types of thinkers.
It is perhaps obvious, but still worth mentioning that the other standard way to deal with uncertainty is to impose a solution by diktat or governance. Clearly such approaches are doomed to fail because they suppress the problem instead of solving it.
Consider long term consequences
It is an unfortunate fact of life that cost tends to be the ultimate arbiter on IT decisions. Vendors know this, and take advantage of it when crafting their proposals. The contract goes to the lowest bidder and the rest, as they say, is tragedy. Although cost is an important criterion in IT decisions, making it the overriding concern is a recipe for future disappointment.
The general lesson to draw from this is that one must consider the longer-term consequences of one’s choices. This can be hard to do because the distant future is less salient than the present or the immediate future. A good way to look beyond immediate concerns (such as cost) is to use the solution after next principle proposed by Gerald Nadler and Shozo Hibino in their book, Breakthrough Thinking. The basic idea behind the principle is to get people to focus on the goals that lie beyond the immediate goal. The process of thinking about and articulating longer-term goals can often provide insights into potential problems with the current goals and/or how they are being achieved.
Build in spare capacity
In his book on antifragility, Nassim Nicholas Taleb points out that the opposite of fragility is not robustness or resilience; rather, it is the ability to thrive on or benefit from uncertainty. There is no word in the English language to describe such behavior, and that is what led him to coin the term antifragile.
In a post inspired by the book, I outlined the elements of an antifragile IT strategy. One of the key assumptions of such a strategy is that, despite our best laid plans, something important will inevitably have been overlooked. It is therefore important to build in some spare capacity to deal with unexpected events and demands. Unfortunately, experience tells me that many enterprise IT systems operate at the limits of their capacity, with little or nothing in reserve. This is a disaster waiting to happen.
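To make the notion of spare capacity a little more concrete, here is a minimal sketch of the kind of headroom check one could run over basic capacity metrics. Everything in it (the 30% reserve target, the system names and the figures) is an illustrative assumption rather than a recommendation; the point is simply that “little or nothing in reserve” is something that can be measured and watched.

```python
# Illustrative only: flag systems whose utilisation leaves too little headroom
# for unexpected demand. The threshold, system names and figures are invented.

HEADROOM_TARGET = 0.30  # aim to keep at least 30% of capacity in reserve

systems = {
    "erp-db": {"used": 870, "capacity": 1000},           # e.g. storage in GB
    "mail": {"used": 450, "capacity": 1000},
    "integration-bus": {"used": 980, "capacity": 1000},
}

for name, s in systems.items():
    headroom = 1 - s["used"] / s["capacity"]
    status = "OK" if headroom >= HEADROOM_TARGET else "AT RISK"
    print(f"{name:<16} {headroom:>5.0%} headroom  {status}")
```

A system flagged “AT RISK” in a report like this is exactly the sort of disaster-in-waiting described above.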
Design so as to increase your future choices
This is perhaps the most important point in my list because it encapsulates all the other points. I have adapted it from Heinz von Foerster’s ethical imperative which states that one should always act so as to increase the number of choices in the future. This principle is useful as a tiebreaker between two designs that are close in all other respects. However, there is more to it than just that. I have found it particularly useful in making decisions regarding IT outsourcing and software as a service. There is very little critical scrutiny of the benefits of these as claimed by vendors and advisories. This principle can help you see through the fog of marketing rhetoric and advisory hype.
One of the paradoxes of life is that the harder we strive for something – money, happiness or whatever – the more unattainable it seems to become. Indeed, some of the most financially successful people (Bill Gates and Warren Buffett, for example) became rich by doing what they loved. Their financial success was a happy byproduct of their engagement in their work. The economist John Kay formalized this paradoxical notion in his concept of obliquity – that certain goals are best attained via indirect means.
If you have been patient enough to read through this piece, you will have noted that some of the guidelines listed above have a hint of obliquity about them. This is no surprise; indeed it is inevitable in a design approach that values people over processes and improvisation (or even serendipity) over planning.
I usually conclude my posts with a summary of the main message. For reasons that should be obvious I will not do that here. Instead, I will end by pointing out yet another feature of emergent design that you have likely figured out already: the guidelines listed above are actually domain neutral; they can be applied to pretty much any area of endeavour. This is no surprise: wicked problems aren’t domain specific and therefore neither are the techniques to deal with them. For example, see my interview with Neil Preston for a perspective on emergent design in organizational change and <advertisement> my book co-authored with Paul </advertisement> for ways in which it can be applied to domains as diverse as town planning and knowledge management.
…and now I really must stop for I have gone on too long. Many thanks for your patience!
My thanks go out to Paul Culmsee for his feedback on a draft version of this post.
In a recent article in Forbes, Jason Bloomberg asks if Enterprise Architecture (EA) is “completely broken”. He reckons it is, and that EA frameworks, such as TOGAF and the Zachman Framework, are at least partly to blame. Here’s what he has to say about frameworks:
EA generally centers on the use of a framework like The Open Group Architecture Framework (TOGAF), the Zachman Framework™, or one of a handful of others. Yet while the use of such frameworks can successfully lead to business value, [they] tend to become self-referential… [to the point] that [Enterprise Architects end up] spending all their effort working with the framework instead of solving real problems.
From experience, I’d have to agree: many architects are so absorbed by frameworks that they overlook their prime imperative, which is to deliver tangible value rather than pretty diagrams.
In this post I present six (possibly heretical!) practices that underpin an evolutionary or emergent approach to enterprise architecture. I believe that these will go some way towards addressing the issues that Bloomberg and others bemoan.
Here they are, in no particular order:
1. Use an incremental approach
As Bloomberg notes in the passage above, many enterprise architects spend a great deal of their time creating blueprints and plans based on frameworks. The problem is, this activity rarely leads to anything of genuine business value because blueprints or plans are necessarily:
- Incomplete – they are high-level abstractions that omit messy, ground-level details. Put another way, they are maps that should not be confused with the territory.
- Static – they are based on snapshots of an organization and its IT infrastructure at a point in time. Even roadmaps that are intended to describe how organisations’ processes and technologies will evolve in time are, in reality, extrapolations from information available at a point in time.
The straightforward way to address this is to shun big architectures upfront and embrace an iterative and incremental approach instead. The Agile movement has shown the way forward for software development. There are many convincing real-life demonstrations of the use of agile approaches to enterprise-scale initiatives (Ralph Hughes’ approach to Agile data warehousing, for example). Such examples suggest that the principles behind agile software development can be fruitfully adapted to the practice of enterprise architecture.
The key principle is to start as small as possible and scale-up from there. Such an approach would enable architects to learn along the way, giving them a better chance of getting their heads around those messy but important details that get overlooked when one considers the entire enterprise upfront. Another benefit is that it would give them adequate time to develop an awareness of the factors that are prone to change frequently and those that will remain relatively static. Most important, though, is that it would offer several opportunities to correct errors and fine tune things at little extra cost.
2. Understand the unique features of your organisation
Enterprise architects tend to be people with a wide experience in technology. As a result they are often treated (and see themselves) as experts. The danger of this attitude is that it can lead them to believe that they have privileged insights into the ills of their organizations. A familiarity with frameworks tends to reinforce this view because frameworks, with their focus on standardization, tend to emphasise the common features of organisations. However, I believe that instead of focusing on similarities an enterprise architect would be better served by focusing on differences – the things that make his or her organisation unique. Only when one develops an understanding of differences can one start to build an architecture that actually supports the organization.
To be sure, there are structures and processes that are common to many organisations (for example, functions such as 1st level support or expense reimbursement processes) and these could potentially benefit from the implementation of standardized designs/practices. However, the identification of such commonalities should be an outcome of research rather than an upfront assumption. And this requires ongoing, open dialogue with key stakeholders in the business (more on that later).
3. Form your own opinions about what works and what doesn’t
The surfeit of standardized “best practice” approaches to EA tends to make enterprise architects intellectually lazy. This is actually a symptom of a wider malaise in IT: I have noticed that professionals in the upper echelons of IT are happy to “outsource their thinking” by following someone else’s advice or procedures without much thought. The problem with this attitude, penalty clauses notwithstanding, is that one cannot “outsource the blame” when things go wrong.
I therefore believe that one of the key attributes of a good architect is a critical attitude towards frameworks, “best practices” and even consulting advice (especially if it comes from a Big 4 consultancy or guru). If you’re really as experienced as you claim to be, you need to form your own opinions of what works and what doesn’t, and be willing to subject those opinions to scrutiny by others.
4. Understand your organisation’s constraints
Humans tend to be over-optimistic when it comes to planning: witness the developer who blithely tells his boss that it will take a week to do a certain task…and then takes a month because of problems he had not anticipated. Similarly, many architects create designs that look great on paper but are not realistic because they have overlooked potential organizational constraints. Examples of such constraints include:
- Top-down reporting structures in which all decisions have to be made or approved by top management.
- Rigid organizational hierarchies that discourage cross-functional communication and collaboration.
Although constraints can also be technical (such as the limitations of a particular technology), the above examples illustrate that organizational constraints are considerably harder to address, primarily because addressing them involves changing the behaviour of influential people within the organization.
Architects lack the positional authority to do anything about such constraints directly. However, they need to develop an understanding of the main constraints so that they can bring them to the attention of those who can do something about them.
5. Focus on problem finding rather than problem solving
In his book, Management in Small Doses, Russell Ackoff wrote that in the real world problems are seldom given; they must be taken. This is good advice for enterprise architects to live by. Indeed, one of the shortcomings of frameworks is that they tend to present the architect with ready-made problems – specifically, any process or technology that does not comply with the framework is seen as a problem. Consequently, many architects spend considerable effort fixing such “deviations”. However, non-conforming processes or technologies are seldom the most pressing problems that the organization faces. An enterprise architect is better served by finding the problems that actually need fixing.
6. Understand the social implications of enterprise architecture
Enterprise architecture is seen as a primarily technical undertaking. As a result architects often overlook the social implications of their designs. Here are a couple of common ways in which this happens:
- Enterprise architecture invariably involves trade-offs. When evaluating these, architects typically focus on economic variables such as cost, productivity, efficiency, throughput etc., ignoring human factors such as ease of use and user preferences.
- Any design choice will (usually) benefit certain stakeholders while disadvantaging others.
Architects need to remember that their plans and designs are going to change the way people work. Nobody likes change without consultation, so it is critical to seek as wide a spectrum of opinions as possible before making important design decisions. One can never satisfy everyone, but one has a better chance of making sensible compromises if one knows where all the stakeholders stand.
To summarise: enterprise architects need to take an emergent approach to design. Such an approach proceeds in an incremental fashion, pays due attention to the unique features and constraints of an organization, focuses on real rather than imagined problems…and, above all, acknowledges that the success or failure of enterprise architecture rests neither on frameworks nor technology, but on people. My (admittedly limited) experience suggests that such an approach would be more fruitful than the framework-driven approaches that have come to dominate the profession.
I’d greatly appreciate your feedback. Please be gentle though, this is a first attempt :-)
PS: I’ve just realised this is the shortest post I’ve ever written.
In this instalment of my sensemakers series, I chat with Dr. Neil Preston, an Organisational Psychologist based in Perth, about the very topical issue of organizational change. In a wide-ranging conversation, Neil draws interesting connections between myths that are deeply embedded in Western thought and the way we think about and implement change…and also how we could do it so much better.
KA: Hi Neil, thanks for being a guest on my ongoing series of interviews with sensemakers. You and I have corresponded for at least a year now via email, so it’s a real pleasure to finally meet you, albeit virtually. I’d like to kick things off by asking you to say a bit about yourself and your work.
NP: Well, I’m Dr. Neil Preston. I’m an organizational psychologist…what that means is that I’m specially registered in the area of organizational psychology, much like a clinical psychologist. My background professionally is that I originally worked in mental health, as a senior research psychologist. I’ve published 30 to 40 peer-reviewed papers in psychiatry, mental health and psychometrics, so I know my way around empirical psychology. My real love, however, has always been in organizational and industrial psychology, so in 2006 I decided to leave the Health Department of Western Australia and move into full time consulting.
Consulting work has led me mainly into infrastructure projects – these are very large, complex projects where organisations from both the private and public sector have to get together and create alliances in order to get the work done. My job on these projects – as I often put it to people – is to make the Addams Family look like the Brady Bunch [laughter]. The idea is to get different value systems and organizational cultures to align, with the aim of getting to a shared understanding of project goals and a shared commitment to achieving them.
My original approach was very diagnostic – which is the way psychologists are taught their trade – but as problems have become more complex, I’ve had to resort to dialogical (rather than diagnostic) approaches. As you well know, dialogue is more commensurate with complexity than diagnosis, so dialogical approaches are more appropriate for so-called wicked problems. This approach then led me to complex systems theory, which in turn led to an area of work that Paul Culmsee, you and I are looking into: emergent design practices. (Editor’s note: This refers to a method of problem solving in which solutions are not imposed up front but emerge from dialogue between various stakeholders.)
KA: OK, so could you tell us a bit about the kinds of problems you get called in to tackle?
NP: Very broadly speaking, I’m generally called in when organisations have goals that are incommensurate with each other. For example: a billion dollar road that has to be on time and on budget…but, by the way, the alignment of the road also takes out a nesting site of a Carnaby White Tailed Cockatoo which triggers the environmental biodiversity protection act which in turn triggers issues with local councils and so on.
Complexity in projects often arises from situations like these, where the issue is not just about delivering on time and on budget, but also creating a sustainable habitat and ensuring alignment with local governments etc.
KA: So very broadly, I guess one could say that your work deals with the problems associated with change. The reason I put it this way is that change is something that most people who work in organisations have had to deal with – either as executives who initiate the change, managers who are charged with implementing it, or employees who are on the receiving end of it. The one thing I’ve noticed through experience – initially as a consultant and then working in big organisations – is that change is formulated and implemented in a very prescriptive way. However, the end results are often less than satisfactory because there are many unintended consequences (loss of morale, drop in productivity etc.) – much like the unintended consequences of large infrastructure projects. I’ve long wondered why this is so: why, after decades of research and experience, do we still get it so wrong?
NP: Let me give you an answer from a psychologist’s perspective. There are a couple of sub-disciplines of psychology called depth and archetypal psychology that look at myth. The kind of change management programs that we enact are driven by a (predominantly) Western myth of heroic intervention.
James Hillman, an archetypal psychologist, once said that a myth is what is real. This is somewhat contrary to the usual sense in which the word is used because we usually think of a myth as being something that is not real. However, Hillman is right because a myth is really an archetype – an overarching way of seeing the world in a way that we believe to be true. The myth of the hero – the good guy overcoming all adversity to slay the bad guy – is essentially an interventionist one. It is based on the Graeco-Roman notion of the exercise of individual will. Does that make sense so far?
KA: Yeah absolutely. Please go on.
NP: OK, so this myth is dominant in the Western imagination. For example, any movie that a kid might go to see like, say, Star Wars is really about the exercise of the individual will. In much the same way, the paradigm in which your typical change management program operates is very much (individual) action and intervention oriented. Even going back to Homeric times – the Iliad and Odyssey are essentially stories about individuals exercising authority, power…and excellence is another word that crops up often too. The objective of all this of course is to effect dramatic, full-frontal change.
However, there is a problem with this myth, and it is that it assumes that things are not complex. It assumes that simple linear, cause-effect explanations hold – that if you do A then B will happen (if you restructure you will save costs, for example). Such models are convenient because they seem rational on the surface, perhaps because they are easy to understand. However, they overlook the little details that often trip things up. As a result, such change often has unforeseen consequences.
Unfortunately, much of the stuff that comes out of the Big 4 consultancies is based on this myth. The thing to note is that they do it not because it works but because it is in tune with the dominant myth of the Western business world.
KA: What you are saying definitely strikes a chord. What’s strange to me, however, is that there have been people challenging this for quite a while now. You mentioned the predominantly linear approach – A causes B sort of thinking – that change management practitioners tend to adopt. Now, as you well know, systems theorists and cyberneticists have proposed alternate approaches that are more cognizant of the multifaceted nature of change, and they did so over fifty years ago! What happened to all that? When I read some of the papers, I see that they really speak to the problems we face now, but they seem to have been all but forgotten (Editor’s note: see this post that draws on work by the prominent cyberneticist, Heinz von Foerster, for example). One can’t help but wonder why that is so…
NP: Well that’s because myths are incredibly sticky. We are talking about an ancient myth of the exercise of the individual human will. And, by the way, it’s a very Western thing: I remember once hearing on the radio that the Western notion of the “squeaky wheel getting the grease” has an Eastern counterpart that goes something like, “the loudest goose is first to lose his head.” The point is, the two cultures have a very different way of looking at the world. That myth – the hero myth – is very much brought into the way we tell stories about organisations.
Now, why does that matter? Well, JR Hackman, an organizational psychologist, said it quite brilliantly. He called our fixation on the hero myth (in the context of change) the leadership attribution error – he argues that we tend to over-attribute the success of a change process to the salient things that we can see, which is (usually) the leader. As a result, we tend to overlook the hidden factors that give rise to the actual performance of the organization. These factors usually relate to the latent conditions present in the organization rather than specific causes like a leader’s actions.
So there are two types of change: planned change and emergent change. Planned change is the way organisations usually think about change. It is a causal view in which certain actions give rise to certain outcomes. But here is the problem: the causal approach focuses primarily on salient features, ignoring all the other things that might be going on.
Now, cybernetics and systems theory do a better job of taking into account features that are hidden. However, as you mentioned, they have not had much uptake. I think the reason for this is that myths are incredibly sticky…that is the best answer I can give.
KA: Hmm that’s interesting…I’d never thought of it that way – the stickiness of myths blinding us to other viewpoints. Is there something in the nature of human thought or human minds that makes us latch on to over-simplified explanations?
NP: Well, there’s this notion of cognitive bias – persistent biases in human perception or judgement (Editor’s note: also see this post on the role of cognitive bias in project failure). The leadership attribution error is precisely such a bias. I should point out that these biases aren’t necessarily a problem; they just happen to be the way humans think. And there are good evolutionary reasons for the existence of biases: we can’t process every little bit of information that comes to us through our senses, and these biases offer a means to filter out what is unimportant. Unfortunately, sometimes they cause us to overlook what is important. They are heuristics and, like all heuristics, they don’t always work.
So in the case of leadership attribution bias – yes leadership does have an effect, but it is not as much as what people think. In fact, work done by Wageman (who worked with Hackman) shows that what is more important for team performance are the conditions in which the teams work rather than the qualities or abilities of the leader.
KA: From experience I would have to say that rings true: conditions trump causes any day as far as team performance is concerned.
NP: Yeah and there’s a good reason for it; and it is so simple that we often overlook it. Take the example of sending a rocket to the moon. If you set up the right conditions for the rocket – the right amount of fuel, the right load and so forth, then everything that is necessary for the performance of the rocket is already set up. The person who actually steers the rocket is not as critical to the performance as the conditions are. And the conditions are already present when the rocket is in flight.
Similarly, in the case of organizational change, we should not be looking for causes – be it leadership or planned actions or whatever – but for the conditions that might give rise to emergent change.
KA: Yeah, but conditions are causes too, aren’t they?
NP: Yes they are, but the point is that they aren’t salient ones – that is, they aren’t immediately obvious. Moreover, and this is the important point: you do not know the exact outcomes of those causes except that they will in general be positive if the conditions are right and negative if they aren’t.
KA: That makes sense. Now I’d like to ask you about a related matter. When dealing with change or anything else, organisations invariably seem to operate at the limits of their capacity. Leaders always talk about “pushing ourselves” or “pushing the envelope” and so on. On the other hand, there’s also a great deal of talk about flexibility and the capacity for change, but we never seem to build this into our organisations. Is there a way one can do this?
NP: Yes, you can actually build in resilience. Organisations generally like to keep their systems and processes tightly coupled – that is, highly dependent on each other. This tends to make them fragile or prone to breakdown. So, one of the things organisations can do to build resilience is to keep systems and processes loosely coupled. (Editor’s note: for example, devolve decision-making authority to the lowest possible level in the organization. This increases flexibility and responsiveness while having the added benefit of reducing management overhead).
Conditions also play a role here. One of the things that organisations like to talk about is innovation. The point is you can’t put in place processes for innovation but you can create conditions that might foster it. You can’t ask people whether they “did their 15 minutes of innovation today” but you can give them the discretionary freedom to do things that have nothing to do with their work…and they just might do something that goes above and beyond their regular jobs. But of course what underpins all this is trust. Without trust you simply cannot build in flexibility or resilience.
KA: This really strikes a chord and let me tell you why. I read Taleb’s book a while ago. As you probably know, the book is about antifragility, which he defines as the ability to benefit from uncertainty rather than just being resilient to it. After I read the book I wrote a post on what an antifragile IT strategy might look like…and in an uncanny resonance with what you just said, I made the claim that trust would be the single important element of the strategy [laughter].
NP: Yeah, and trust is not something you receive so much as something you give. So as a psychologist I know why its betrayal is so damaging to people. You know Caesar’s famous line, “Et tu, Brute?” – it was the betrayal of trust that was so damaging. Once trust is gone there’s nothing left.
KA: Indeed, I sometimes feel that the key job of a manager is to develop trust-based relationships with his or her peers and subordinates. However, what I see in the workplace is often (though definitely not always) the opposite: people simply do not trust their managers because managers are quick to pass the blame down (or even across) the hierarchy rather than absorbing it…which arguably, and ethically, is their job. They should be taking the heat so that people can get on with actual work. Unfortunately managers who do this are not as common as they should be.
NP: We’re getting into a complex area here, and it is one that I deal with at length in my masterclass on collaborative maturity and leadership. This is the old scapegoating mechanism at work, and it is related to the leadership attribution error and the hero myth. If the attribution is back to the individual, then the blame must also be attributable to an individual. In fact, I have this slide in one of my presentations that goes, “a scapegoat is almost as useful as the solution to a problem.” [laughter]
Now, there are two questions here. “The scapegoat” is the answer to the question “Who is responsible?” However, it is more important to look at conditions rather than causes, so the real question is, “How did this situation come about?” When you look at “Who” questions, you are immediately going into questions of character. It elicits responses like “Yeah, it’s Kailash’s fault because he is that kind of a guy…he is an INTP or whatever.” What’s happening here is that the problem is explained away because it is attributed to Kailash’s character. You see what is going on…and why it is so dangerous?
KA: Yeah, that’s really interesting.
NP: And you see, then they’ll say something like, “…so let’s take Kailash out and put Neil in”…but the point is that if the conditions remain the same, Neil will fall down the same hole.
KA: It’s interesting the way you tie both things back to the individual – the individual as hero and the individual as scapegoat.
NP: Yes, it’s two sides of the same coin. Followership acquiesces to leadership: Kailash will follow Neil, say, to the Promised Land. If we get there, Neil gets the credit but if we don’t, he gets the blame.
KA: Very interesting, but this brings up another question. Managers and leaders might turn around and say, “It’s all very well to criticize the way we operate, but the fact is that it is impossible to involve all stakeholders in determining, say, a strategy. So in a sense, we are forced to take on the role of “heroes,” as you put it.”
So my question is: what are some of the ways in which organisations can address the difficulties associated with collective decision-making?
NP: Of course, it is often impossible to include all stakeholders in a decision-making process, particularly around matters such as organisational strategy. What you have to do first is figure out who needs to be involved so that all interests are fairly represented. Second, I’m attracted to the whole idea of divergent (open-ended) and convergent (decisive) thinking. For example, if a problem is wicked or complex, there is no point attempting to use expert knowledge or analysis exclusively (Editor’s note: because no single expert holds the answers and there isn’t enough information for a sensible, unbiased analysis). Instead, one has to use collective intelligence or the wisdom of the crowd by seeking opinions from all groups of stakeholders who have a stake in the problem. This is divergent thinking.
However, there comes a time when one has to “make an incision in reality” – i.e. stop consultation and make the best possible decision based on data and ethics; one has to use both IQ and EQ. This is the convergent side of the coin.
Another problem is that one often has the data one needs to make the right decision, but the decision does not get made for reasons of ideology. Then it becomes a question of power rather than collective intelligence: a solution is imposed rather than allowed to emerge.
KA: Well that happens often enough – this “short-circuiting” of the decision-making process by those in positions of power.
NP: Yes, and it is why I think deliberative decision-making – which comes from the Western notion of deliberative democracy, i.e. decision-making based on dialogue and consultation – is the best way forward, but it can be a challenge to implement. Democracy is slow, but it is generally more accurate…
KA: Yes, that’s true, but it can also meander.
NP: Sure, everything is bound by certain limitations (like time) and that’s why you have to know when to intervene. One of the important things for a leader to have in this connection is negative capability – which is not “negative” in the usual sense of the word, but rather the ability to be comfortable with ambiguity and to intervene in ambiguous situations in a way that gets some kind of useful outcome.
Of course, acting in such situations also means that one has to have good feedback mechanisms in place; one must know how things are actually working on the ground so that one can take corrective actions if needed. But, in the end, the success of this way of working depends critically on having the right conditions in place. If you don’t set up the right conditions, any intervention can have catastrophic consequences.
If I may talk politically for a minute – the current situation in the Middle East is a classic example of a planned intervention: direct, frontal, dramatic, causal, linear and supposedly rational. However, if the right conditions are not in place, such interventions can have unforeseen consequences that completely overshadow the alleged benefits. And that is exactly what we have seen.
In general I would say that emergent change is more likely to succeed than large-scale, direct, planned change. The example one hears all the time is that of continuous improvement – where small changes are put in place and then adjusted based on feedback on how they are working.
KA: This is a matter of some frustration for me: in general people will agree that collaboration and collective decision-making are good, but when the time comes, they revert to their old, top-down ways of working.
NP: Yes, well when I go into a consulting engagement on collaborative maturity, one of the first things I ask people is whether they want to use the collaborative process to inform people or to influence them. Often I find that they only want to use it to inform people. There is a big difference between the two: influencing is emergent, informing isn’t.
KA: This begs a question: say you walk into an organization where people say that they want to use collaborative processes to influence rather than inform, but you see that the culture is all wrong and it isn’t going to work. Do you actually tell them, “hey, this is not going to work in your organization?”
NP: Well if people don’t feel safe to speak their truth then it isn’t going to work. That’s why I’m so interested in Hackman’s work on conditions over causes. Coming to your question I don’t necessarily tell people that it’s not going to work because I believe it is more productive to invite them to explore the implications of doing things in a certain way. That way, they get to see for themselves how some of the things they are doing might actually be improved. One doesn’t preach but one hands things back to them.
In psychology there are these terms, transference and countertransference. In this context transference would be where a consultant thinks, “I’m a consultant so I’m going to assume a consultant persona by acting and behaving like I have all the answers”, and countertransference would be where the client reinforces this by saying something like, “you are the expert and you have all the answers.” Handing back stops this transference-countertransference cycle. So what we do is to get people to explore the consequences of their actions and thus see things that might have been hidden from their view. It is not to say “I told you so,” but rather “what are the implications of going down this path.” The idea is to appeal to the ethical or good side in human beings…and I believe that human beings are fundamentally good rather than not.
KA: I like your use of the word “ethical” here. I think that is really important and is what is often missing. One hears a lot about ethics in business these days, but it is most often taught and talked about in a very superficial way. The reality, however, is that the resolution of most wicked problems involves ethical considerations rather than logic and rationality…and this is something that many people do not understand. It isn’t about doing things right, rather it is about doing the right things.
NP: Yes, and this is related to what I call “meaning over motivation” – the idea being that instead of attempting to motivate people to do something, try providing them with meaning. When you do this you will often find that change comes for free. And it is worth noting that meaning has both an emotional and a rational component – or, put a little differently, an ethical and a logical one. In one of his books, Daniel Pink makes the point that uncoupling ethics from profit can have catastrophic consequences…and we have good examples of that in recent history.
The broad lesson here is that if the conditions aren’t right then it is inevitable that unethical behavior will dominate.
KA: Yeah well human nature will ensure that won’t it?
NP: [laughs] Yeah, and you don’t need a psychologist to tell you that.
KA: [laughs] Indeed…and I think that would be a good note on which to bring this conversation to a close. Neil, thanks so much for your time and insights. It’s been a pleasure to chat with you and I look forward to catching up with you again…hopefully in person, in the not too distant future.
NP: Yeah, Singapore and Perth are not that far apart…
The classic aim of automation is to replace human manual control, planning and problem solving by automatic devices and computers. However… even highly automated systems, such as electric power networks, need human beings for supervision, adjustment, maintenance, expansion and improvement. Therefore one can draw the paradoxical conclusion that automated systems still are man-machine systems, for which both technical and human factors are important. This paper suggests that the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator.
These lines were written over thirty years ago, but are ever more apt today – such paradoxes are rife, not only in automation, but in any field in which technology plays an important part. To illustrate my point, I highlight a couple of ironies drawn from a domain that is likely to be familiar to many readers of this blog: the world of enterprise IT. I also present a brief discussion of how these ironies of enterprise IT can be avoided.
Ironies of enterprise IT
In the last few decades information technology has found its way into diverse organisational functions. This trend has been accompanied by an explosive growth in new technologies. As a result of this, corporate IT infrastructures have become ever more complex and the costs of maintaining them have burgeoned. Quite naturally, the focus has thus turned to taming both complexity and cost. The favoured approaches to tackling this problem are standardisation and/or outsourcing. However, as I discuss below, both often lead to ironic outcomes.
An irony of standardisation
Enterprise IT environments tend to evolve rapidly, reflecting the many demands made on them by the organisational functions they support. This is good because it means that IT is doing what it should be doing: supporting the work of the parent organisation. On the other hand, it can result in unwieldy environments that are difficult (not to mention expensive) to maintain. One way to address this is to impose standards relating to processes (such as ITIL) and infrastructure (such as SAP or any enterprise application).
The question is, how well does such standardisation work in practice?
In his book From Control to Drift, Claudio Ciborra pointed out that IT infrastructures in organisations tend to drift – i.e. they escape processes, plans and standards, and take on a life of their own. The reason they drift is that they are subject to unpredictable forces within and outside the hosting organisation. The imposition of standards may slow the drift but cannot arrest it entirely. Infrastructures are therefore best seen as ever-evolving constructs consisting of systems, people and processes that interact with each other in often unforeseen ways. As he put it:
Corporate information infrastructures are puzzles, or better collages, and so are the design and implementation processes that lead to their construction and operation. They are embedded in larger, contextual puzzles and collages. Interdependence, intricacy, and interweaving of people, systems, and processes are the culture bed of infrastructure. Patching, alignment of heterogeneous actors and making do are the most frequent approaches…irrespective of whether management [is] planning or strategy oriented, or inclined to react to contingencies.
The essential message here is that standards and processes overlook the fact that enterprises are complex social systems that are subject to internal and external influences which cannot always be foreseen. Dealing with these, more often than not, entails the implementation of hacks and workarounds that violate the imposed standards and thus nullify the benefits of standardisation.
In summary, “standardised” IT environments often end up with a plethora of non-standard hacks and workarounds that are necessary, but are generally messy and expensive to maintain.
An irony of outsourcing
One of the main reasons for outsourcing IT is to reduce costs. Yes, I am aware that many decision-makers claim that their primary reason is to reduce complexity rather than cost, but the choices they make often belie their claims. The irony is that in their eagerness to control costs, they often end up increasing them because they overlook hidden factors. I explain this in brief below, drawing on my post on the transaction costs of outsourcing.
The basic idea is simple – the upfront fee quoted by the vendor is but a fraction of the total cost that will be incurred by the customer. Some of the costs that are generally not included in the upfront quote are:
- Search/selection costs: these are the costs associated with searching for and shortlisting vendors.
- Bargaining costs: these are costs associated with negotiations for a mutually acceptable contract.
- Costs of coordinating work: these are costs associated with coordinating external and internal work. This is particularly important in the case of software-as-a-service because the effort required to interface cloud applications with in-house systems is often underestimated.
- Costs of enforcement and change: These are costs associated with enforcing the terms of the contract and those associated with change.
The point to note is that these costs are rarely, if ever, mentioned by the vendor, but they almost always show up in one form or another. It is therefore important for the customer to try to get a handle on them before entering into any commercial agreements. The problem is that some of these costs (particularly the coordination and enforcement/change costs listed above) are hard, if not impossible, to figure out upfront. For example, if the relationship turns sour the only solution might be to switch vendors. The cost associated with this is often significant and is borne entirely by the customer. A lack of awareness of such costs will invariably result in ironical outcomes.
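To make the arithmetic concrete, here is a minimal sketch in Python that tallies a vendor's quoted fee against the four categories of hidden cost listed above. The figures and category labels are purely illustrative assumptions on my part – not data from any real engagement – but they show how quickly the hidden costs can outstrip the headline quote.

```python
# Toy total-cost-of-outsourcing estimate. All figures are hypothetical and
# serve only to illustrate how hidden transaction costs can dwarf the
# vendor's upfront quote.

def total_outsourcing_cost(upfront_fee, hidden_costs):
    """Return the quoted fee plus the sum of hidden transaction costs."""
    return upfront_fee + sum(hidden_costs.values())

upfront_fee = 500_000  # the figure in the vendor's proposal (hypothetical)

hidden_costs = {
    "search_and_selection": 30_000,    # RFPs, shortlisting, evaluations
    "bargaining": 20_000,              # legal and contract negotiation effort
    "coordination": 120_000,           # interfacing external apps with in-house systems
    "enforcement_and_change": 80_000,  # policing the contract, change requests, a possible vendor switch
}

total = total_outsourcing_cost(upfront_fee, hidden_costs)
print(f"Quoted fee:   {upfront_fee:,}")
print(f"Hidden costs: {sum(hidden_costs.values()):,}")
print(f"Total cost:   {total:,} ({total / upfront_fee:.1f}x the quote)")
```

In this made-up example the true cost works out to one and a half times the quoted fee, with coordination – the cost most often underestimated – the largest hidden item.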
In summary: attempts to control costs by outsourcing IT can have the contrary effect of increasing them.
Avoiding ironical outcomes
So how does one avoid ironical outcomes?
I have only one piece of advice to offer here: when planning IT architectures or outsourcing initiatives, use an incremental or emergent approach that avoids big designs or commitments upfront. Using an emergent approach not only limits risk, it also provides opportunities for learning. Most important, it enables one to verify that the envisaged benefits are real rather than wishful thinking on the part of architects or managers.
Below I outline what such an approach might entail for the two ironies discussed earlier:
- For infrastructures/systems: avoid grandiose system designs that attempt to span the “enterprise” – remember that one size will not fit all of your users. Consequently, enterprise architectures and governance systems should provide guidelines rather than detailed prescriptions. As Anders Jensen-Waud puts it in this post: they should foster resilience and adaptability rather than conformance.
- For outsourcing: start small, possibly with a small project or system. This will help you get a sense for how outsourcing would work in your environment and help you figure out whether the vendor you have selected is really right for you. Remember, no two environments are identical so others’ lessons learned may be considerably less useful than you think. Finally, if you’re going to the cloud, be sure to factor in costs and technical challenges associated with interfacing external apps with in-house ones.
Yes, there’s nothing particularly profound here, it is just common sense…but you know what they say about the commonality of common sense.
In this post I have highlighted some ironies of enterprise information systems and have briefly outlined an emergent approach to avoiding them. I believe but cannot prove that ironical outcomes are almost guaranteed if one takes a monolithic, enterprise-style approach or a let’s-outsource-it-all attitude to enterprise information technology. Such a view overlooks the messy little details and differences that trip up big designs and grandiose plans. In the end, the only way to avoid ironical outcomes is to start small, learn from experience and incorporate that learning in an incremental manner in whatever you’re building or doing. Yes, you might end up with something you did not envisage at the start, but you will have learnt much along the way. More important, perhaps, is that you will be able to rest assured that it works.
Much can be learnt about an organization by observing what management does when things go wrong. One reaction is to hunt for a scapegoat, someone who can be held responsible for the mess. The other is to take a systemic view that focuses on finding the root cause of the issue and figuring out what can be done in order to prevent it from recurring. In a highly cited paper published in 2000, James Reason compared and contrasted the two approaches to error management in organisations. This post is an extensive summary of the paper.
The author gets to the point in the very first paragraph:
The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.
Reason’s paper was published in the British Medical Journal and hence his focus on the practice of medicine. His arguments and conclusions, however, have a much wider relevance as evidenced by the diverse areas in which his paper has been cited.
The person approach – which, I think is more accurately called the scapegoat approach – is based on the belief that any errors can and should be traced back to an individual or a group, and that the party responsible should then be held to account for the error. This is the approach taken in organisations that are colloquially referred to as having a “blame culture.”
To an extent, looking around for a scapegoat is a natural emotional reaction to an error. The oft-unstated reason behind scapegoating, however, is to avoid management responsibility. As the author tells us:
People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour. If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person’s unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient…
However, the scapegoat approach has a couple of serious problems that hinder effective risk management.
Firstly, an organization depends on its frontline staff to report any problems or lapses. Clearly, staff will do so only if they feel it is safe to do so – something that is simply not possible in an organization that takes a scapegoat approach. The author suggests that the Chernobyl disaster can be attributed to the lack of a “reporting culture” within the erstwhile Soviet Union.
Secondly, and perhaps more important, is that the focus on a scapegoat leaves the underlying cause of the error unaddressed. As the author puts it, “by focusing on the individual origins of error it [the scapegoat approach] isolates unsafe acts from their system context.” As a consequence, the scapegoat approach overlooks systemic features of errors – for example, the empirical fact that the same kinds of errors tend to recur within a given system.
The system approach accepts that human errors will happen. However, in contrast to the scapegoat approach, it views these errors as being triggered by factors that are built into the system. So, when something goes wrong, the system approach focuses on the procedures that were used rather than the people who were executing them. This shift in focus makes a world of difference.
The system approach looks for generic reasons why errors or accidents occur. Organisations usually have a series of measures in place to prevent errors – e.g. alarms, procedures, checklists, trained staff etc. Each of these measures can be looked upon as a “defensive layer” against error. However, as the author notes, each defensive layer has holes which can let errors “pass through” (more on how the holes arise a bit later). A good way to visualize this is as a series of slices of Swiss Cheese (see Figure 1).
The important point is that the holes on a given slice are not at a fixed position; they keep opening, closing and even shifting around, depending on the state of the organization. An error occurs when the ephemeral holes on different layers temporarily line up to “let an error through”.
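A toy simulation helps make the “lining up of holes” idea concrete. The sketch below (Python; the per-layer probabilities are invented for illustration and are not taken from Reason's paper) treats each defensive layer as having some chance of an open hole at any given instant, and counts how often the holes in all layers line up to let an error through.

```python
import random

# Toy Swiss Cheese simulation. Each defensive layer (alarm, checklist,
# trained operator, ...) has some probability of a "hole" being open at a
# given instant. An error gets through only when the holes in *all* layers
# happen to line up. The probabilities below are illustrative guesses.

def error_gets_through(hole_probabilities):
    """Return True if, at this instant, every layer has an open hole."""
    return all(random.random() < p for p in hole_probabilities)

def breach_rate(hole_probabilities, trials=100_000):
    """Estimate the fraction of instants in which an error slips through."""
    breaches = sum(error_gets_through(hole_probabilities) for _ in range(trials))
    return breaches / trials

layers = [0.10, 0.05, 0.20, 0.15]  # per-layer chance of an open hole

print(f"Estimated breach rate: {breach_rate(layers):.5f}")
# For independent layers the expected rate is the product of the
# probabilities (about 0.00015 here): each extra layer multiplies the
# protection, while a latent condition that widens any one layer's holes
# erodes it just as quickly.
```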
There are two reasons why holes arise in defensive layers:
- Active errors: These are unsafe acts committed by individuals. Active errors could be violations of set procedures or momentary lapses. The scapegoat approach focuses on identifying the active error and the person responsible for it. However, as the author points out, active errors are almost always caused by conditions built into the system, which brings us to…
- Latent conditions: These are flaws that are built into the system. The author uses the term resident pathogens to describe these – a nice metaphor that I have explored in a paper review I wrote some years ago. These “pathogens” are usually baked into the system by poor design decisions and flawed procedures on the one hand, and ill-thought-out management decisions on the other. Manifestations of the former include faulty alarms, unrealistic or inconsistent procedures or poorly designed equipment; manifestations of the latter include things such as unrealistic targets, overworked staff and the lack of funding for appropriate equipment.
The important thing to note is that latent conditions can lie dormant for a long period before they are noticed. Typically a latent condition comes to light only when an error caused by it occurs…and only if the organization does a root cause analysis of the error – something that is simply not done in an organization that takes a scapegoat approach.
The author draws a nice analogy that clarifies the link between active errors and latent conditions:
…active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.
“Draining the swamp” is not a simple task. The author draws upon studies of high performance organisations (combat units, nuclear power plants and air traffic control centres) to understand how they minimised active errors by reducing system flaws. He notes that these organisations:
- Accept that errors will occur despite standardised procedures, and train their staff to deal with and learn from them.
- Practice responses to known error scenarios and try to imagine new ones on a regular basis.
- Delegate responsibility and authority, especially in crisis situations.
- Do a root cause analysis of any error that occurs and address the underlying problem by changing the system if needed.
In contrast, an organisation that takes a scapegoat approach assumes that standardisation will eliminate errors, ignores the possibility of novel errors occurring, centralises control and, above all, focuses on finding scapegoats instead of fixing the system.
Figure 1 was taken from the Patient Safety Education website of Duke University Hospital.
The Swiss Cheese model was first proposed in 1991. It has since been applied in many areas. Here are a couple of recent applications and extensions of the model to project management: