Archive for November 2008
Many corporate IT shops use big design up-front methodologies to guide their internal software development projects. Generally, IT decision makers seem reluctant to trial iterative/incremental approaches, which have proven their worth in diverse development environments. The best known amongst these techniques are the ones based on agile development principles. “Agile principles are OK for software development houses,” say these managers, “but they’ll never work in the corporate world.” I don’t quite agree with this because I’ve had some minor successes in using agile principles (continual customer collaboration, for instance) within corporate IT environments. However – and I freely admit it – my efforts have been piecemeal and somewhat ad-hoc. Now, finally, help is at hand for those who have wondered how they might “add agility” to their development processes: A book entitled Becoming Agile…in an imperfect world, by Greg Smith and Ahmed Sidky, shows how non-agile development environments can be transformed through a gradual adoption of agile techniques. This post is an extensive review of the book.
I should add a caveat before proceeding any further: this review is written from the perspective of a development manager / team lead working in corporate IT – for no better reason than it’s what I do at present. That said, I hope there’s enough detail and commentary for it to be of interest to those working in other environments too.
The book begins with a story about a mining rescue, which provides an excellent illustration of agile principles in practice. The analogy is apt because, to be successful, any rescue effort must be collaborative (must involve many people with diverse skills), adaptive (must be responsive to changes in conditions) and, above all, must produce results (those trapped must be rescued unharmed). Traditional project management, with its insistence on complete, up-front requirements analysis and its inflexibility to change, would be hopelessly inappropriate for any rescue effort. Why? Because one cannot know a priori what might lead to a successful rescue – it is a complex process that unfolds and evolves with time. Similarly, as Frederick Brooks emphasised more than 20 years ago, software development is intrinsically complex. What makes it so is the in-principle impossibility of obtaining and assimilating user requirements upfront. This is the essential difference between – say – a construction project and a software development effort. Recent research on project complexity suggests that agile techniques offer the best hope of dealing with this complexity. The essential advantage conferred by agile processes is the built-in adaptability to change via iterative development and continual customer involvement. In the end, this is what enables development teams to build applications that customers really want. An obvious corollary – if it needs to be stated at all – is that the adoption of agile techniques provides demonstrable business value. This is important if one wants to get management buy-in for a move to agility.
The book provides a roadmap for software development teams that want to improve their agility. Although the authors claim they do not favour a specific methodology, much of their discussion is based on Scrum. There’s nothing wrong with this per se, but I believe it is more important to focus on the principles (or intent) behind the practices rather than the practices themselves. Folks working in corporate IT environments would have a better chance of introducing agility into their processes by adopting principles (or ways of working) gradually, rather than attempting to introduce a specific methodology wholesale – the latter approach being much too radical for the corporate world. The book also lists some common “roadblocks to agility” and briefly discusses how these can be addressed. The authors emphasise that the aim should be to create a customised agile development process that is tailored to the needs of the organisation. Furthermore, instead of aiming for “agile perfection”, one should aim at reaching the right level of agility for one’s organisation. Excellent advice!
The path to agility, as laid out in the book, is as follows:
- Assessment: evaluating current processes and developing a path to agility. Following Boehm and Turner, the authors suggest that upfront analysis be done to identify mismatches between organisational culture / practices and the agile techniques the organisation wishes to adopt. A proper assessment will help identify mismatches (or risks) associated with the transition. The book also provides a link to an online readiness assessment (registration required!). The assessments are to be provided in an appendix to the book. However, the review draft I received did not have this appendix, so I can’t comment on the utility of the tool.
- Getting buy-in: Introducing an agile methodology is impossible without management support. One needs to make a case for this upfront. The authors note that the move to agility should be undertaken only if there are demonstrable benefits for the company. When canvassing support, the costs, benefits (for the company and management) and risks must be clearly articulated in a business case for the migration to agile practices. The book provides some examples of each.
- Understanding current processes and modifying them appropriately: The authors emphasise that one needs to understand one’s existing processes thoroughly before attempting to change them. Only when this is done can one determine which processes would benefit the most from change. The basic idea here is to make one’s processes as agile as possible, within organisational and other constraints. Transplanting another organisation’s processes into one’s environment is unlikely to work. The book outlines how organisations can develop customised processes suited to their specific environments. I found the book’s case-study based approach very helpful, as it provided a grounded example of how a company might approach the transition. In cases where companies have no pre-existing processes (or completely dysfunctional processes), the authors suggest starting with a packaged agile methodology such as Scrum.
- Piloting the new process: The new processes have to be tested on a real project. The authors recommend doing a pilot project using the new methodology. Much of the book is dedicated to discussing a case study of a pilot project in a fictitious organisation. The discussion is useful because it highlights common issues that any organisation might face in using agile processes for the first time. The pilot project is a useful vehicle to illustrate how feasibility studies, estimation and planning, iterative development, release and delivery work in an agile environment. I really liked this approach as it provided a grounded context to the principles.
- Retrospective: A retrospective or post-mortem offers the opportunity to improve the development process. Unfortunately, post-mortems are rarely done right. The book offers excellent advice on planning retrospectives. The basic idea: improve the process, don’t dissect the specific project.
Of course, achieving agility is more than modifying or adopting processes – it involves changing organisational culture as well. One of the main cultural obstacles is the command and control management style that is so prevalent in the corporate world. Another cultural issue is the lack of communication across organisational functions. The book provides advice on how to engender an agile culture within an organisation. Essentially, executives must endorse agile principles, line managers need to become coaches rather than supervisors, and teams need to adapt and adopt agile practices. Another characteristic of an agile culture is that teams are empowered to make their own decisions. This can be a challenge for managers and teams attuned to working in corporate IT environments that subscribe to the command and control approach.
The authors recommend engaging consultants to help with the transition to agility, but I think organisations may be better served by honest self evaluation first, followed by the development of an action plan. The action plan (in true agile fashion!) must be developed collaboratively, by involving all stakeholders who will be affected by the transformation. Books (such as the one being reviewed) and training courses can help one along the way, but there’s really no substitute for introspection and change from within. On a related note, the book mentions that agile teams should be composed of generalists – people with a broad range of technical skills. Corporate IT teams, on the other hand, tend to be made up of specialists. The authors point out that this can be a barrier to agility, but not one that is insurmountable.
Finally, the authors use the Technology Adoption Cycle to illustrate the difficulties of moving to an enterprise wide adoption of agile techniques. Given the huge culture change involved, they recommend an evolutionary transition to agile processes. In this connection, the authors identify five levels of agility: Collaborative, Evolutionary, Integrated, Adaptive and Encompassing, and recommend that enterprises progress through each of these steps on their way to agility nirvana. The book presents a chart outlining what each level of agility entails (see this article for more). This approach enables the organisation (and people involved) to “digest and assimilate” the changes in bite-sized pieces. The really good news is that the lower levels of agility are eminently achievable, as they emphasise agile principles such as customer collaboration and evolutionary (iterative) development, whilst placing no great demands on technical skills. This puts agility within reach of most organisations. So if you work in a non-agile environment, you may want to consider getting yourself a copy of the book as a first step towards becoming agile.
Projects are, by definition, unique endeavours. Hence it is important that project risks be analysed and managed in a systematic manner. Traditionally, risk analysis in projects – or any other area – focuses on external events. In a recent paper entitled, The Pathogen Construct in Risk Analysis, published in the September 2008 issue of the Project Management Journal, Jerry Busby and Hongliang Zhang articulate a fresh perspective on risk analysis in projects. They argue that the analysis of external threats should be complemented by an understanding of how internal decisions and organisational structures affect risks. What’s really novel, though, is their use of metaphor: they characterise these internal sources of risk as pathogens. Below I explore their arguments via an annotated summary of their paper.
What’s a risk pathogen?
“Risk,” the authors state, “is a statistical concept of events that happen to someone or something.” Traditional risk analysis concerns itself with identifying risks, determining the probability of their occurrence, and finding ways of dealing with them. Risks are typically considered to be events that are external to an organisation. This approach has its limitations because it does not explicitly take into account the deficiencies and strengths of the organisation. For example, a project may be subject to risk due to the use of an unproven technology. When the risk becomes obvious, one has to ask why that particular technology was chosen. There could be several reasons for this, each obviously flawed only in hindsight. Some reasons may be: a faulty technology selection process, over optimism, decision makers’ fascination with new technology or some other internal predisposition. Whatever the case, the conditions that led to the choice of technology existed prior to the event that triggered the failure. The authors label such preexisting conditions pathogens. In the authors’ words, “At certain times, external circumstances combine with ‘resident pathogens’ to overcome a system’s defences and bring about its breakdown. The defining aspect of these metaphorical pathogens is that they predate the conditions that trigger the breakdown, and are generally more stable and observable.”
It should be noted that the pathogen tag is subjective – that is, one party might view a certain organisational predisposition as pathogenic whereas another might view it as protective. To illustrate using the above example – management might view a technology as unproven, whereas developers might view it as offering the company a head start in a new area. Perceptions determine how a “risk” is viewed: different groups will select particular risks for attention, depending on their cultural affiliations, background, experience and training. Seen in this light, the subjectivity of the pathogen label is reasonable, if not obvious. In the paper, the authors examine risk pathogens in projectised organisations, with particular focus on the subjectivity of the label (i.e. different perceptions of what is pathogenic). Why is this important? The authors note that in their studies, “the most insidious kind of risk to a project – the least well understood and potentially the most difficult to manage if materialised – was the kind that involved contradictory interpretations.” These contradictory interpretations must be recognised and addressed by risk analysis, else they will get in the way of dealing with risks that become reality.
The authors use a case study based approach, using a mix of projects drawn from the UK and China. In order to accentuate the differences between pathogenic and protective perspectives of “pathogens”, the selected projects had both public and private sector involvement. In each of the projects, the following criteria were used to identify pathogens. A pathogen:
- Is the cause of an identifiable adverse organisational effect.
- Is created by social actors – for example, a contract or practice – rather than being an intrinsic vulnerability of the system.
- Exists prior to the problem – i.e. it predates the triggering event.
- Becomes a problem (or is identified as a problem) only after the triggering event.
The authors claim that in all cases studied, the pathogen was easily identifiable. Further, it was also easy to identify contradictory interpretations (protective behaviour) made by other parties. As an example, in a government benefits card project, the formulation of requirements was done only at a high level (pathogen). The project could not be planned properly as a consequence (triggering event). This led to poor developer performance and time/cost overruns (effect). The ostensible reason for doing requirements only at a high level was to save time and cost in the bidding process (protective interpretation). Another protective interpretation was that detailed requirements would strait-jacket the development team and preclude innovation. Note that the adaptive (or protective) interpretation refers to a risk other than the one that actually occurred. This is true of all the examples listed by the authors – in all cases the alternate interpretation refers to a risk other than the one that occurred, implying that the risk that actually occurred was somehow overlooked or ignored in the original risk analysis. It is interesting to explore why this happens, so I’ll jump straight to the analysis and discussion, referring the reader to the paper for further details on the case studies.
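The four criteria above amount to a classification rule, and it can help to see them stated mechanically. The following is my own minimal Python sketch, not something from the paper: the Factor fields and the integer timestamps are illustrative assumptions I’ve made to encode “predates the trigger” and “recognised only afterwards”.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    """A candidate organisational factor, e.g. 'requirements kept high-level'."""
    name: str
    caused_adverse_effect: bool     # criterion 1: identifiable adverse effect
    created_by_social_actors: bool  # criterion 2: a social construct, not intrinsic
    origin_time: int                # when the factor came into being
    recognised_time: int            # when it was first seen as a problem

def is_pathogen(factor: Factor, trigger_time: int) -> bool:
    """Apply the paper's four criteria: adverse effect, social origin,
    predates the triggering event, recognised only after it."""
    return (factor.caused_adverse_effect
            and factor.created_by_social_actors
            and factor.origin_time < trigger_time        # criterion 3
            and factor.recognised_time >= trigger_time)  # criterion 4

# The benefits-card example: high-level-only requirements existed from the
# start (t=0); the planning failure at t=5 triggered and exposed the problem.
reqs = Factor("high-level requirements only", True, True, 0, 5)
print(is_pathogen(reqs, trigger_time=5))  # True
```

The point of the sketch is criterion 4: a factor that was already flagged as a problem before the trigger is an ordinary known risk, not a pathogen.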
Analysis and Discussion
From an analysis of their data, the authors suggest three reasons why a practice that is seen as adaptive might actually end up being pathogenic:
- Risks change with time, and managing risk at one time cannot be separated from managing it at another. For example, a limited-scale pilot project may be done on a shoestring budget (to save cost). A successful pilot may be seen as protective in the sense that it increases confidence that the project is feasible. However, because of the limited scope of the pilot, it may overlook certain risks that are triggered much later in the project.
- Risks are often interdependent – i.e. how one risk is addressed may affect another risk in an adverse manner (e.g. increase the probability of its occurrence).
- The stakeholders in a project do not have unrestricted choices on how they can address risks. There are always constraints (procedural or financial, for example) which restrict options on how risks can be handled. These constraints may lead to decisions that affect other risks negatively.
I would add another point to this list:
- Stakeholders do not always have all the information they need to make informed decisions on risks. As a consequence, they may not foresee the pathogenic effect of their decisions. The authors allude to this in the paper, but do not state it as an explicit point. In their words, “Being engaged in a particular stage of a project selects certain risks for a project manager’s attention, and the priority becomes dealing with these risks rather than worrying about how widely the way of dealing with them will ramify into other stages of the project.”
The authors then discuss the origins of subjectivity on whether something is pathogenic or adaptive. Their data suggests the following factors play an important role in how a stakeholder might view a particular construct:
- Identity: This refers to the roles people play on projects. For example, a sponsor might view a quick requirements gathering phase as protective, in that it saves time and money; whereas a project manager or developer may view it as pathogenic, as it could lead to problems later.
- Expectations of blame: It seems reasonable that stakeholders would view factors that cause outcomes that they may be blamed for as pathogenic. As the authors state, “Blameworthy events become highly specific risks to an individual and the origin of these events – whether practices, artefacts or decisions – become relevant pathogens.” The authors also point out that the expectation of blame plays a larger role in projectised organisations – where project managers are given considerable autonomy – compared to functional organisations where blame may be harder to apportion.
Traditional risk analysis, according to the authors, focuses on face-value risks – i.e. on external threats – rather than on the subjective interpretations of these risks by different stakeholders. To quote, “…problematic events become especially intractable because actors’ interpretations of risk are contradictory.” These contradictory interpretations are easy to understand in the light of the discussion above. This raises the question: how does one deal with this subjectivity of risk perception? The authors offer the following advice, combining elements of traditional risk analysis with some novel suggestions:
- Get the main actors (or stakeholders) to identify the risks (as they perceive them), analyse them and come up with mitigation strategies.
- Get the stakeholders to analyse each other’s analyses, looking for contradictory interpretations of factors.
- Get the stakeholders together, to explore the differences in interpretations particularly from the perspective of whether:
- These differences will interfere with management of risks as they arise.
- There are ways of managing risks that avoid creating problems for other risks.
They suggest that it is important to avoid seeking consensus, because consensus invariably results in compromises that are sub-optimal from the point of view of managing multiple risks.
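The second step above – comparing the stakeholders’ analyses to surface contradictory interpretations – is essentially a cross-tabulation. Here is a small Python sketch of that step; the data structure and the labels “pathogenic”/“protective” are my own illustrative assumptions, not an artefact from the paper.

```python
def contradictory_factors(interpretations):
    """interpretations: {stakeholder: {factor: 'pathogenic' or 'protective'}}.
    Return the factors that different stakeholders label differently --
    the cases the authors say most need open discussion."""
    labels = {}
    for views in interpretations.values():
        for factor, label in views.items():
            labels.setdefault(factor, set()).add(label)
    return sorted(f for f, ls in labels.items() if len(ls) > 1)

# The quick-requirements example from the case study, plus the unproven
# technology example from earlier in the post.
views = {
    "sponsor":   {"quick requirements phase": "protective"},
    "developer": {"quick requirements phase": "pathogenic",
                  "unproven technology": "protective"},
    "manager":   {"unproven technology": "pathogenic"},
}
print(contradictory_factors(views))
# ['quick requirements phase', 'unproven technology']
```

The output is the agenda for the third step: the factors on which interpretations clash are precisely the ones to put in front of all stakeholders together.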
I end this section with a particularly apposite quote from the paper, “At some point the actors need to agree on how to get on with the concrete business of the project, but they should be clear not only about the risks this will create for them, but also the risks it creates for others – and the risks that will come from others trying to manage their risks.” That, in a nutshell, is the message of the paper.
The authors use the metaphor of a pathogen to describe inherent organisational characteristics or factors that become “harmful” or “pathogenic” when certain risks are triggered. The interpretations of these factors are subjective in that one person’s “pathogen” may be another person’s “protection”. Further, a factor that offers protection at one stage of a project may in fact become pathogenic at a later stage. Such contradictory views must be discussed in an open manner in order to manage risks effectively.
Although the work is based on relatively few data points, it offers a novel perspective on the perception of risks in projects. In my opinion the paper is well written, interesting and well worth a read for academics, consultants and project managers.
Busby, J. & Zhang, H. (2008). The Pathogen Construct in Risk Analysis. Project Management Journal, 39(3), 86–96.
It is an unfortunate fact of corporate life that management is sometimes practiced as a series of games between the manager and the managed (with the odds stacked against the latter, of course). In this post I list some of the more common games I have witnessed over time. As with all games, it is useful to know the ground rules before proceeding. In this case it’s simple because there’s only one: the manager always wins. Now that the ground rule is set, let the games begin…
Two cents up: Some managers feel obliged to contribute to any and every discussion – even those involving topics they know nothing about. These gents (and ladies) are professional players of the game of Two Cents Up. The game is played as follows: contribute your two cents (or equivalent in any other currency) to all discussions. There is no limit on the number of turns, and at the end of the discussion you simply tot up your contributions to get your net score. In case it isn’t clear, only managers get a turn. Expert players of this game routinely end up with several dollars worth of pointless contributions.
Now I delegate; now I don’t: This is essentially a game of delegation peekaboo. The manager delegates responsibility to an employee then, a little while later, takes it back. Then, later still, delegates again and so on. The game can be played through several such delegation-undelegation cycles, driving the subordinate to responsibility uncertainty: a state where the subordinate knows not what he or she is (or isn’t) responsible for. The best exponents of this game can ensure that nothing ever gets done because no one on the team (the manager included) knows who is responsible for making decisions.
The second guess: This game is the favourite of managers who find it hard to delegate real responsibility to their subordinates. They delegate only when forced to (by their managers), but then constantly second guess decisions made by the delegatee. As per the Merriam-Webster definition of second-guess, the game can be played at two levels: a) criticise decisions when they are made and then b) criticise them again after the result of the decision is known. Two bites of the cherry! What more could a second-guesser want? No, no… don’t bother answering that.
My way: This is the management version of the well-known children’s adage: he who owns the ball makes the rules. In the grown-ups’ game the manager insists on doing things his or her way, riding roughshod over his team’s opinions or advice. The best way to sum up this game is through the (edited) lyrics of the eponymous song:
I’ll plan each charted course;
Each careful step along the byway,
But more, much more than this,
We’ll do it my way.
A more cut-throat version of the game is called my way or the highway – a cliche that nicely sums up what happens to those who choose not to follow the leader.
Bolt from the blue: This game is invoked by some managers when their opinion is challenged by an employee with a well thought out, irrefutable case. Just when the employee reckons the manager is about to concede, the manager invokes a bolt from the blue: a statement that has no relevance to the discussion, but serves as an effective distractor to confuse his opponent (sorry, I mean, employee). Here’s an almost true example from real life:
Ben – “So, from the evaluation, I think we can safely conclude that Oracle is a better option than SQL Server for this project.”
Manager – “Maybe so, but have you considered using SOA…”
This non sequitur usually results in game, set and match to the manager.
Leap of logic: This game is an insidious variant of the previous one. Like the bolt from the blue, the leap of logic is aimed at distracting the employee. However, it is harder to tackle a leap of logic because the argument isn’t as obviously unrelated to the discussion as the bolt from the blue. Illustrating the leap of logic using the previous example, the manager’s response to Ben might be:
Manager – “Ah, but what about non-relational databases…”
Brilliant! Although the manager is ostensibly talking about databases, he is really spouting nonsense. Ben’s gobsmacked, and doesn’t know where to begin refuting the point.
Picking nits: This game is played when the manager wants to find fault with work done. It’s an axiom that nothing’s perfect, so one can always find things that haven’t been done right. Some managers are specialist nitpickers – expressing great creativity in finding so-called errors or problems with the work done. Like the first game described in this post, this one can be scored too. The scoring works as follows: a point per nit picked. At the risk of stating the obvious: only the manager can score.
Although management games are common in corporate settings, they aren’t particular to the business world. Games such as these are played out every day in organisations ranging from government bureaucracies to universities. I should caution my readers that the foregoing listing is far from comprehensive – it is but a small list of the more common games that one might encounter. No doubt, other games (and variants of the ones I’ve described) exist, and still more are being invented by creative managers. Please feel free to add in management games that you have come across – if they’re good you might even score a point or two.