Archive for September 2010
Ideally, project evaluation decisions should be made on the basis of objective criteria (cost/benefit, strategic value etc.). In reality, however, there is often a political dimension to the process: personal agendas, power games etc. play a significant role in determining how key stakeholders perceive particular projects. In a paper entitled Seven Ways to get Your Favoured IT Project Accepted – Politics in IT Evaluation, Egon Berghout, Menno Nijland and Kevin Grant discuss seven political ploys that managers use to influence IT project selection. This post presents a discussion of these tactics and some strategies that can be used to counter them.
The seven tactics
Before outlining the tactics it is worth mentioning some of the differences between political and rational justifications for a project. In general, the former are characterised by a lot of rhetoric and platitudes whereas the latter dwell on seemingly objective measures (ROI, cost vs. benefit etc.). Further, political justifications tend to take a “big picture” view as opposed to the detail-oriented focus of the rational ones. Finally, it is worth mentioning that despite their negative connotation, political ploys aren’t always bad – there are situations in which they can lead to greater buy-in and commitment than would be possible with purely rational decision-making methods.
With that as background, let’s look at seven political tactics commonly used to influence IT project decisions. Although the moves described are somewhat stereotypical and rather obvious, I do believe they are used quite often on real-world projects.
Here they are:
1. Designate the project as being strategic: In this classic ploy, the person advocating the project presents it as necessary for achieving the organisation’s strategic goals. To do this one may only need to show a tenuous connection between the project objectives and the organisation’s strategy. Once a project is deemed strategic, it will attract support from the upper echelons of management – no questions asked.
2. The “lights on” ploy: Here the strategy is to present the project as an operational necessity. The idea is to indulge in scare-mongering by saying things like – “if we don’t do this, we run an 80% chance of system failure within the next year.” Such arguments are often used to justify expensive upgrades to legacy systems.
3. The “phase” tactic: Here the idea is to slice up a project into several smaller sub-projects and pursue them one at a time. This strategy keeps things under the organisation’s financial radar until the project is already well under way, a technique often used by budget-challenged IT managers.
4. Creative analysis: Most organisations have a standard process by which IT projects are evaluated. Typically such processes involve producing metrics to support the position that the project is worth pursuing. The idea here is to manipulate the analysis to support the preferred decision. Some classic ways of doing this include ignoring negative aspects (certain risks, say) and/or overstating the benefits of the desired option.
5. Find a problem for your solution: This strategy is often used to justify introducing a cool new technology into an organisation. The idea here is to create the perception that the organisation has a problem (where there isn’t necessarily one) and that the favoured technology is the only way out of it. See my post on solutions in search of problems for a light-hearted look at warning signs that this strategy is in use.
6. No time for a proposal: Here the idea is to claim that it would take too much time to do a proper evaluation (the implication being that the person charged with doing the evaluation is too busy with other important matters). If successful, one can get away with doing a bare-bones evaluation which leaves out all inconvenient facts and details.
7. Old wine in a new bottle: This strategy is employed with unsuccessful project proposals. The idea here is to resubmit the proposal with cosmetic changes in the hope that it gets past the evaluation committee. Sometimes a change in the title and focus of the project is all that’s needed to sneak it past an unwary bunch of evaluators.
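The “creative analysis” ploy (tactic 4) is ultimately an exercise in arithmetic, and a small sketch makes its leverage plain. The Python below – using entirely hypothetical figures – computes the ROI of the same project twice: once honestly, with a risk contingency included, and once “creatively”, with benefits inflated by 30% and the contingency quietly dropped:

```python
def roi(benefits, costs):
    """Simple return on investment: net benefit as a fraction of cost."""
    return (benefits - costs) / costs

# Hypothetical project figures (all amounts in $'000)
benefits, costs, risk_contingency = 900.0, 1000.0, 200.0

# Honest analysis: realistic benefits, risk contingency included in costs
honest = roi(benefits, costs + risk_contingency)

# "Creative" analysis: overstate benefits by 30%, ignore the contingency
creative = roi(benefits * 1.3, costs)

print(f"honest ROI:   {honest:+.0%}")    # -25% -> reject
print(f"creative ROI: {creative:+.0%}")  # +17% -> approve
```

With these made-up numbers, the same project flips from reject to approve purely through how the inputs are framed – no outright lie is needed, only selective assumptions.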
Of course, as mentioned earlier, there’s a degree of exaggeration in the above: those who sanction projects are not so naïve as to be taken in by the rather obvious strategies mentioned above. Nevertheless, I would agree with Berghout et al. that more subtle variants of these strategies are sometimes used to push projects that would otherwise be given the chop.
The first step in countering political ploys such as the ones listed above is to understand when they are being used. The differences between political and rational behaviour were outlined by Richard Daft in his book on organisational theory and design. These are summarised in the table below (adapted from the paper):
| Organisational feature relating to decision making | Rational response or behaviour | Political response or behaviour |
|---|---|---|
| Goals | Similar for all participants – aligned with organisational goals | Diversity of goals, depending on preferences and personal agendas |
| Processes | Rational, orderly | Haphazard – determined by dominant group |
| Rules/Norms | Optimisation – to make the “best” decision (based on objective criteria) | Free for all – characterised by conflict |
| Information | Unambiguous and freely available to everyone | Ambiguous, can be withheld for strategic reasons |
| Beliefs about cause-effect | Known, even if only incompletely | Disagreement about cause-effect relationships |
| Basis of decisions | Maximisation of utility | Bargaining, interplay of interests |
| Ideology | Organisational efficiency and effectiveness | Individual/group interest |
Although Daft’s criteria can help identify politically influenced decision-making processes, it is usually pretty obvious when politics takes over. The question then is: what can one do to counter such tactics?
The authors suggest the following:
1. Go on the offensive: This tactic hinges on finding holes in the opponents’ arguments and proposals. Another popular way is to attack the credibility of the individuals involved.
2. Develop a support base: The tactic here is to get a significant number of people to buy into your idea, focusing in particular on winning support from people who are influential in the organisation.
3. Hire a consultant: This is a frequently used tactic, where one hires an “independent” consultant to research and support one’s favoured viewpoint.
4. Quid pro quo: This is the horse-trading scenario where you support the opposing group’s proposal with the understanding that they’ll back you on other matters in return.
Clearly these tactics are not those one would admit to using, and indeed, the authors’ language is somewhat tongue-in-cheek when they describe these. That said, it is true that such tactics – or subtle variants thereof – are often used when countering politically motivated decisions regarding the fate of projects.
Finally, it is important to realise that those involved in decision making may not even be aware that they are engaging in political behaviour. They may think they are being perfectly rational, but may in reality be subverting the process to suit their own ends.
The paper presents a practical view on how politics can manifest itself in project evaluation. The authors’ focus on specific tactics and counter-tactics makes the paper particularly relevant for project professionals. Awareness of these tactics will help project managers recognise the ways in which politics can be used to influence decisions as to whether or not projects should be given the go-ahead.
In closing it is worth noting the role of politics in collective decision-making of any kind. A group of people charged with making a decision will basically argue it out. Individuals (or sub-groups) will favour certain positions regarding the issue at hand and the group must collectively debate the pros and cons of each position. In such a process there is no restriction on the positions taken and the arguments presented for and against them. The ideas and arguments needn’t be logical or rational – it is enough that someone in the group supports them. In view of this it seems irrational to believe that collective decision making – in IT project evaluation or any other domain – can ever be an entirely rational process.
Groupthink refers to the tendency of members of a group to think alike because of peer pressure and insulation from external opinions. The term was coined by the psychologist Irving Janis in 1972. In a recent paper entitled Groupthink in Temporary Organizations, Markus Hallgren looks at how groupthink manifests itself in temporary organisations and what can be done to minimize it. This post, which is based on Hallgren’s paper and some of the references therein, discusses the following aspects of groupthink:
- Characteristics of groups prone to groupthink.
- Symptoms of groupthink.
- Ways to address it.
As we’ll see, Hallgren’s discussion of groupthink is particularly relevant for those who work in project environments.
Hallgren uses a fascinating case study to illustrate how groupthink contributes to poor decision-making in temporary organisations: he analyses events that occurred in the ill-fated 1996 Everest Expedition. The expedition has been extensively analysed by a number of authors and, as Hallgren puts it:
Together, the survivors’ descriptions and the academic analysis have provided a unique setting for studying a temporary organization. Examining expeditions is useful to our understanding of temporary organizations because it represents the outer boundary of what is possible. Among the features claimed to be a part of the 1996 tragedy’s explanation are the group dynamics and organizational structure of the expeditions. These have been examined across various parameters including leadership, goal setting, and learning. They all seem to point back to the group processes and the fact that no one interfered with the soon-to-be fatal process which can result from groupthink.
Mountaineering expeditions are temporary organisations: they are time-bound activities which are directed towards achieving a well-defined objective using pre-specified resources. As such, they are planned as projects are, and although the tools used in “executing” the work of climbing are different from those used in most projects, essential similarities remain. For example, both require effective teamwork and communication for successful execution. One aspect of this is the need for team members to be able to speak up about potential problems or take unpopular positions without fear of being ostracized by the group.
Some characteristics of groups that are prone to groupthink are:
- A tightly knit group.
- Insulation from external input.
- Leaders who promote their own preferred solutions (what’s sometimes called promotional leadership).
- Lack of a clear decision-making process.
- Homogeneous composition of the group.
Additional, external factors that can contribute to groupthink are:
- Presence of an external threat.
- Members (and particularly, influential members) have low self-esteem because of previous failures in similar situations.
Next we’ll take a brief look at how groups involved in the expedition displayed the above characteristics and how these are also relevant to project teams.
Groupthink in the 1996 Everest Expedition and its relevance to project teams
Much has been written about the ill-fated expedition, the most well-known account being Jon Krakauer’s best-selling book, Into Thin Air. As Hallgren points out, the downside of having a popular exposition is that analyses tend to focus on the account presented in it, to the exclusion of others. Most of these accounts, however, focus on the events themselves rather than the context and organizational structure in which they occurred. In contrast, Hallgren’s interest is in the latter – the context, hierarchy and the role played by these in supporting groupthink. Below I outline the connections he makes between organizational features and groupthink characteristics as they manifested themselves on the expedition. Following Hallgren, I also point out how these are relevant to project environments.
Highly cohesive group
The members of the expedition were keen on getting to the summit because of the time and money they had individually invested in it. This shared goal led to a fair degree of cohesion within the group, and possibly caused warning signs to be ignored and assumptions rationalized. Similarly, project team members have a fair degree of cohesion because of their shared (project) goals.
Insulation from external input
The climbing teams were isolated from each other. As a result there was little communication between them. This was exacerbated by the fact that only team leaders were equipped with communication devices. A similar situation occurs on projects where there is little input from people external to the project, other teams working on similar projects or even “lessons learned” documents from prior projects. Often the project manager takes on the responsibility for communication, further insulating team members from external input.
Promotional leadership
Group leaders on the expedition had a commercial interest in getting as many clients as possible to the summit. This may have caused them to downplay risks and push clients harder than they should have. This is similar to situations in projects which are seen as the “making of project managers.” The pressure to succeed can cause project managers to display promotional leadership.
Lack of clear decision making process
All decisions on the expedition were made by group leaders. Although this may have been necessary because group members lacked mountaineering expertise, decisions were not communicated in a timely manner (this is related to the point about insulation of groups) and there was no clear advice to groups about when they should turn back. This situation is similar to projects in which decisions are made on an ad-hoc basis, without adequate consultation or discussion with those who have the relevant expertise. Ideally, decision-making should be a collaborative process, involving all those who have a stake in its outcome.
Homogeneous composition of group
Expedition members came from similar backgrounds – folks who had the wherewithal to pay for an opportunity to get to the summit. Consequently, they were all highly motivated to succeed (related to the point about cohesion). Similarly, project teams are often composed of highly motivated individuals (albeit, drawn from different disciplines). The shared motivation to succeed can lead to difficulties being glossed over and risks ignored.
Presence of an external threat
The expedition was one of many commercial expeditions on the mountain at that time. This caused an “us versus them” mentality, which led to risky decisions being made. In much the same way, pressure from competitors (or even project sponsors) can cloud a project manager’s judgement, leading to poor decisions regarding project scope and timing.
Low self-esteem
Expedition leaders were keen to prove themselves because of previous failures in getting clients to the summit. This may have led to a single-minded pursuit of success this time. A similar situation can occur in projects where managers use the project as a means to build their credibility and self-esteem.
Symptoms and solutions
The above illustrates how project teams can exhibit characteristics of groups prone to groupthink. Hallgren’s case study highlights that temporary organisations – be they mountaineering expeditions or projects – can unwittingly encourage groupthink because of their time-bound, goal-focused nature.
Given this, it is useful for those involved in projects to be aware of some of the warning signs to watch for. Janis identified the following symptoms of groupthink:
- Group members feel that they are invulnerable.
- Warnings that challenge the group’s assumptions are rationalized or ignored.
- Unquestioned belief in the group’s mission.
- Negative stereotyping of those outside the group.
- Pressure on group members to conform.
- Group members self-censor thoughts that contradict the group’s core beliefs.
- There is an illusion of unanimity because no dissenting opinions are articulated.
- Group leaders take on the role of “mind-guards” – i.e. they “shield” the group from dissenting ideas and opinions.
Regardless of the different contexts in which groupthink can occur, there are some stock-standard ways of avoiding it. These are:
- Brainstorm all alternatives.
- Play the devil’s advocate – consider scenarios contrary to those popular within the group.
- Avoid prejudicing team members’ opinions. For example, do not let managers express their opinions first.
- Bring in external experts.
- Discuss ideas independently with people outside the group.
Though this advice (also due to Janis) has been around for a while, and is well-known, groupthink remains alive and well in project environments; see my post on the role of cognitive biases in project failure for examples of high-profile projects that fell victim to it.
Hallgren’s case study is an excellent account of the genesis and consequences of groupthink in a temporary organisation. Although his example is extreme, the generalizations he makes from it hold lessons for all project managers and leaders. Like the Everest expedition, projects are invariably run under tight time and budgetary constraints. This can give rise to conditions that breed groupthink. The best way to avoid groupthink is to keep an open mind and encourage dissenting opinions – easier said than done, but the consequences of not doing so can be extreme.
Several surveys have indicated that IT projects – especially large ones – fail at an alarming rate. In a paper entitled Pessimism, Computer Failure, and Information Systems Development in the Public Sector, Shaun Goldfinch mentions that 20-30% of projects costing more than $10 million are abandoned altogether. Further, over half run over time and/or budget and do not deliver to expectations. Although Goldfinch’s paper focuses on IT investments in the public sector, the situation in the private sector isn’t much better.
Goldfinch makes the observation that,
Enthusiasm for large and complex investments in IS continues unabated despite decades of failure. Indeed, the largest-ever public sector project was initiated in 2002 by the United Kingdom’s National Health Service at an estimated cost of US$11 billion…
He proposes a model of four pathological enthusiasms which cause key stakeholders to talk up benefits and downplay difficulties when advocating such projects. In this post, I take a brief look at the model and its utility in evaluating project proposals.
The four enthusiasms model
Many projects begin as ideas which originate from a small number of enthusiastic advocates. Often a single enthusiast with sufficient influence can push an ill-conceived project through the approval stages to the point where it is given the go-ahead. According to Goldfinch, such misplaced enthusiasm generally falls into one of the following categories:
- Idolisation (Technological Infatuation): This is a situation where a key business stakeholder believes that business problems can always be solved by technology. Projects driven by such people place technology at the heart of the solution. Such efforts often fail because not enough attention is paid to other factors (people, processes etc.).
- Technophilia: This refers to the IT profession’s belief that all problems have technical solutions. As Goldfinch puts it, this is “the myth of the technological fix,” in which “the entire IS profession perpetuates the myth that better technology, and more of it, are the remedies for practical problems.” Efforts driven by technophilia fail because those involved get too caught up in learning and mastering the technology rather than solving the problem.
- Lomanism: This term, derived from the protagonist in the play Death of a Salesman, refers to the (real or feigned) over-enthusiasm that IT sales and marketing professionals have for their companies’ products. Unfortunately such folks often have the ear of IT decision-makers who are susceptible to sales pitches that promise untold (but unrealistic) benefits. On the other hand it should also be mentioned that Lomanism is often a response to unrealistic customer expectations coupled with the pressure to meet sales targets. The only clear beneficiaries from Lomanism-driven efforts are technology vendors.
- Managerial faddism: This refers to the tendency of managers and senior executives to fall under the spell of the latest management fads. Many of these fads recommend a wholesale overhaul of organizational structures and processes, and are often accompanied by technical tools. IT service management methodologies are good examples of such fads.
Goldfinch states that:
Together, these four enthusiasms feed on and mutually reinforce one another in a vicious cycle, creating a strongly held belief that newer and larger IS projects are a good idea. Doubters and skeptics may be portrayed as “negative,” “not team players,” “not helpful”… Together, these pathologies make up the four enthusiasms of IT failure. When a project does encounter difficulties, these four enthusiasms can undermine attempts to curtail or abandon the project — a project can always be fixed with better management, a redesigned monitoring system or contract, more technology or hardware, better programming, or just a reassuring “it’ll be right on the night.”
Goldfinch suggests that large IT projects are often driven by one of four types of enthusiasm. These can lead to projects being driven by nothing more than wishful thinking and undue optimism. To counter this, he recommends that decision-makers take a pessimistic view when evaluating proposals for IT projects. Among other things this means questioning assumptions, particularly those relating to the technology that will be employed. Independent opinions are a good way to do this, but truly unbiased ones can be hard to come by (vested interests aren’t always obvious). In the end the solution may be as simple as relying on one’s own common sense and judgement. That’s where the model can help: viewing a business case or project proposal through the lens of the model can show up over-optimistic claims and projections.
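One simple way to apply the model is as a screening checklist over a business case. The sketch below is a minimal illustration in Python; the questions are my own paraphrase of the four enthusiasms, not taken from Goldfinch’s paper, and the function name and structure are hypothetical:

```python
# Hypothetical screening questions, one per enthusiasm in Goldfinch's model.
# The wording is illustrative only - adapt it to your own evaluation process.
CHECKLIST = {
    "Idolisation": "Is technology presented as the whole solution, "
                   "with little said about people or processes?",
    "Technophilia": "Are the claimed benefits tied to the technology itself "
                    "rather than the problem it solves?",
    "Lomanism": "Do the benefit estimates come mainly from the vendor's "
                "sales material?",
    "Managerial faddism": "Does the proposal lean on a currently fashionable "
                          "methodology to justify wholesale change?",
}

def screen_proposal(answers):
    """Return the enthusiasms flagged by 'yes' answers to the checklist."""
    return [name for name, flagged in answers.items() if flagged]

# Example: a proposal that raises two of the four warning flags
flags = screen_proposal({
    "Idolisation": True,
    "Technophilia": False,
    "Lomanism": True,
    "Managerial faddism": False,
})
print(flags)  # ['Idolisation', 'Lomanism']
```

A proposal that raises even one flag deserves the sceptical second look Goldfinch recommends; one that raises several is probably being driven more by enthusiasm than by evidence.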