Archive for December 2008
The role of the project sponsor
The role of the project sponsor has not been given much attention in project management literature and lore. This is strange because most project managers will attest to the importance of having an effective project sponsor. So here’s a question: what differentiates an effective sponsor from an ineffective one? Well, obviously involvement in the project is important – an indifferent sponsor won’t be much help at all. But what kind of involvement is required? A recent paper entitled Governance and Support in the Sponsoring of Projects and Programs, by Lynn Crawford and others, explores the formal and informal aspects of the sponsorship role, with a view to defining what makes an effective sponsor. This post is an annotated summary of the paper.
In their introduction, the authors state that, “convincing evidence demonstrates that success or failure of projects is not entirely within the control of the project manager and project team. Contextual issues are crucial in influencing the progress and outcomes of a project, and a key theme that has emerged is the support of top management.” In addition to this, the authors recognise that the increasing focus on corporate governance adds another dimension to the role of the sponsor – that of ensuring that the project is aligned with corporate policies and, more generally, with any relevant regulatory requirements.
The paper aims to provide a comprehensive view of project sponsorship by looking at the role from the perspective of sponsors, project managers, team members and other project stakeholders. The conclusions were based on interviews with assorted stakeholders selected from 36 projects across different geographical regions. The diversity of data enabled the authors to construct a conceptual model of the sponsor role. As hinted at in the previous paragraph, the model views the role as consisting of two independent perspectives: that of the organisation (governance) and that of the project (support).
Background
In true academic fashion, the authors present a comprehensive review of the literature on project sponsorship. Broadly, the literature may be divided into two areas: standards and research papers. The authors find that project management standards provide sparse material on the role of the sponsor. Specifically, the PMBOK views the sponsor in the context of a single project – and mainly in the role of a financier. The standard makes no reference to the wider organisational context in which the project plays out. Other standards such as the OPM and Program / Portfolio Management Standards provide a limited discussion of the sponsor as project champion, but these, too, overlook the organisational context of the role. In contrast, standards from the APM and OGC take a more rounded view of the role.
In recent years the project sponsor role has been getting increased attention from project and general management researchers, as evidenced by the increasing number of papers published on the topic. In the authors’ words, “Initial understanding of the role of the project executive sponsor as the person or group responsible for approving finance has gradually been expanded to include many other key functions that appear to be directly related to project success.” Some of these include:
- Ensuring availability of resources.
- Achieving and maintaining buy-in of senior management.
- Getting political support for the project.
- Serving as a “project critic”.
The literature also identifies the following characteristics of successful project sponsors:
- Appropriate seniority and power in the organisation.
- Political skill.
- Connections and influence in the organisation.
- Ability to motivate.
- Excellent communication skills.
- Ability to work with people at different levels in the organisation and the project team.
Yet, despite the importance of the role, many organisations do not nominate sponsors for all their projects. From their literature review, the authors conclude that although there is an increasing recognition of the importance of the sponsorship role, it remains largely unexplored. Their paper is intended to be a step towards filling this gap.
Methodology
The authors conducted their research in two phases. In the first phase, they performed five independent studies aimed at identifying:
- Attributes contributing to effective sponsorship
- The influence of sponsorship on project success
- Competencies required of project sponsors
- The sponsorship role in the context of organisational, program and project governance
- A model of factors contributing to the effective performance of the sponsorship role
The researchers were able to identify common themes that emerged from all of the independent studies. This commonality enabled the researchers to build a conceptual model of the sponsorship role, which was then tested against data gathered in the second phase of the research. The details of the research methodology, as fascinating as they may be for professional researchers, are of little interest to the practising project manager. So, I’ll leave it there, and move on to a discussion of the model.
The Model
The project sponsor role straddles two organisational entities: the project and the sponsoring organisation. From the perspective of the project, the main function of the sponsor is to provide support (e.g. champion the project); from the organisational perspective it is to ensure appropriate governance (e.g. compliance with regulations and corporate policy). The authors therefore model the role along two complementary, yet independent, dimensions – support and governance. The dimensions complement each other in that “the act of governing the project requires that the project be looked at from the perspective of the parent organisation (governance), and the act of providing top management support requires looking at the parent organisation from the perspective of the project (support).” Although complementary, one can conceive of situations where one might exist without the other – think of an overly regulated organisation, for example – so they are independent as well.
To a large extent, which dimension is emphasised depends on the situation – and a canny sponsor recognises what a particular situation demands. For example, a sponsor may need to emphasise governance if a project has regulatory implications. On the other hand, she may need to emphasise support if some aspects of the parent organisation are impeding project progress. In principle, one can describe all possible sponsor perspectives in a two-dimensional plane described by the coordinates (governance, support): the former describing the interests of the parent organisation and the latter the interests of the project. The paper does not describe the framework any further, although the authors do mention that future papers will fill in more detail.
The authors then present a long discussion of the two perspectives, discussing specific situations in which they may be appropriate. In essence, from a governance perspective the sponsor needs to ensure that the interests of the parent organisation are served by the project, whereas from a support perspective he or she needs to champion the project within the larger organisation. Specific examples of each are easy to find, and are left as an exercise for the reader.
It is interesting that the model presented is akin to some existing theories of management and leadership. A well-known example is the managerial grid model proposed by Blake and Mouton in 1964. In the grid model, the two dimensions are (the welfare of) People and (the maximisation of) Production, and an effective leader displays concern for both. Recent research, however, provides only limited support for the hypothesis that high-people and high-production leaders are more effective than others. In fact, Yukl suggests that people-related factors such as interpersonal relationships and teamwork are more important. A good leader should be able to “guide and facilitate” by fostering relationships and teamwork within the organisation. The congruence between the grid model (and others of its ilk) and the present sponsor model is therefore only partial: managerial effectiveness of the “guide and facilitate” kind is arguably a third dimension (in addition to Support and Governance).
So what are the attributes of a good “guide and facilitator” as it pertains to project sponsorship? The authors identify the following from their data:
- Good communication skills.
- Commitment to the project.
- Position and influence in the parent organisation.
- Availability when needed.
In terms of practical help, the most important of these attributes is availability. In the words of one of the survey respondents, “The sponsor must be accessible. It doesn’t help if you have a sponsor who kind of reports to God and you never get to see them…” I’m sure many project managers will be able to relate to this.
Conclusion
The paper presents a conceptual model of the project sponsorship role, based on an analysis of existing project and general management literature, and data gathered from diverse project environments. The model views the role as being made up of two independent functions: support and governance. These functions represent the perspective of the temporary organisation (the project) and the permanent one (the parent organisation) respectively. It is also interesting that the two functions can, at times (or should I say, often?), lead to conflicting demands on a sponsor. The authors end the paper by stating that analysis of the data and work on the model continue. I look forward to seeing a further elaboration of their model in the future.
To end, a few quick words about the relevance of the paper to practising project managers. From the practitioner’s perspective, the paper is worth a read because it spells out the full range of responsibilities of a project sponsor in very clear terms. In short, after reading this paper you’ll know what to say when a sponsor of your project wanders over and asks, “What do you need from me?”
References:
Crawford, L., Cooke-Davies, T., Hobbs, B., Labuschagne, L., Remington, K., & Chen, P. (2008). Governance and Support in the Sponsoring of Projects and Programs. Project Management Journal, 39 (S1), 43-55.
Project execution: efficiency versus learning
Most projects are subject to tight constraints. As a consequence, project teams are conditioned to focus on efficiency of project execution – i.e. to get things done within the least possible time, effort and expense. In this post I explore another approach; one that emphasises learning over efficiency. Now I know this sounds somewhat paradoxical: we all know learning takes time – and time’s the one commodity that’s universally in short supply on projects. However, please read on – I hope to convince you that an emphasis on learning may actually improve efficiency. My discussion is based on a recent Harvard Business Review article entitled, The Competitive Imperative of Learning, in which the author, Professor Amy Edmondson, presents two perspectives on organisational execution, which she defines as “the efficient, timely, consistent production and delivery of goods or services.” The two perspectives are: “execution as efficiency” and “execution as learning.” The former emphasises getting the job done, whereas the latter underscores the importance of finding better ways to get the job done. Projects are organisations too – albeit temporary ones – so the two views of execution discussed in the article may be relevant to project environments. This post discusses execution as efficiency vs. execution as learning in the context of project management.
Professor Edmondson compares and contrasts the two views of execution as follows:
| Execution as efficiency | Execution as learning |
| --- | --- |
| Leaders provide answers. | Leaders set direction and articulate mission. |
| Employees follow directions. | Employees discover the way. |
| Optimal work processes are designed and set up in advance. | Tentative work processes are set up as a starting point. |
| New work processes are developed infrequently. | Work processes are ever evolving and improving. |
| Feedback is one way (manager to team). | Two-way feedback is common. |
| Employees rarely exercise judgement and make decisions. | Employees continually make important judgement-based decisions. |
The reader will notice that the efficiency approach is rigid and very “top-down” whereas the learning approach is flexible and, if not quite “bottom-up”, at least open to change. The remainder of this note discusses how the latter might work in a project environment.
In projects the focus is on getting the job done on time and on budget. This sometimes (often?) leads to micro-management of project execution to the extent that team members are given detailed directions on how they should do their tasks. This corresponds to the efficiency approach. In contrast, the execution as learning way recommends that project managers set the overall objectives and leave team members to find their own way to achieve them (within parameters of scope, time and budget).
On a similar note, as I have written in a post on motivation, the best way to ensure that people remain engaged in their work is to give them autonomy – i.e. empower them to make decisions pertaining to their work. This is true both in (permanent) organisations and projects. Many project managers are reluctant to delegate responsibility to team members – and here I mean proper delegation, where team members are given responsibility and decision rights over all issues that come up in their work. Granted, on some projects it may not be possible to delegate these rights entirely. Nevertheless, even in such cases it should still be possible to make decisions in a collaborative manner, with input from all affected parties.
In another post I pointed out that project management methodologies are sometimes implemented wholesale, without any regard to their appropriateness for a particular project. This corresponds to an execution as efficiency approach where directions are followed without question. In contrast, an execution as learning approach is one in which processes are adopted and adapted as required. This is better because it uses only those processes that contribute to achieving a project’s objectives; anything more is recognised as bureaucratic overhead – good only for generating pointless documentation and wasting time. This applies not only to project management processes, but also to processes used in the creation of deliverables. This bit of common sense can be codified into a “principle of minimal process” which may be stated as follows: one should not increase, beyond what is necessary, the number of processes to achieve a particular end. This principle is akin to the principle of parsimony or Occam’s Razor in the sciences. Furthermore, in an execution as efficiency approach, processes, once established, rarely change. However, a project’s environment is always subject to change. In response to this, execution as learning recommends that processes be continually reviewed and tweaked, or even overhauled, as needed. What works well today may not work so well tomorrow. Bottom line: process is good as long as it contributes to getting the project done; anything that doesn’t should be discarded or fixed (i.e. improved).
An execution as efficiency approach downplays the need for communication because it is assumed that all processes are already running as efficiently as they possibly can. Communication in these environments tends to be one-way: from top to bottom. In contrast, in learning-oriented environments communication is a two way process: those doing the work suggest process improvements to management and management, in turn, provides feedback. Two-way communication is therefore an important element of execution as learning in organisations. I’d argue this is especially the case for projects because – as all project managers know – change (in scope, timeline, budget or whatever) is inevitable.
To conclude: projects are organisations, albeit temporary ones. Therefore, principles and learnings from research on permanent organisations should be checked for potential applicability to project environments. With this in mind, it may be more productive to approach project execution with a learning mindset rather than a focus on efficiency. Of course, this is not new – proponents of agile techniques have long advocated such an approach; learning is at the heart of the agile manifesto. That said, I’d love to hear what you think; I look forward to your comments.
Enumeration or analysis? A note on the use and abuse of statistics in project management research
In a detailed and insightful response to my post on bias in project management research, Alex Budzier wrote, “Good quantitative research relies on Theories and has a sound logical explanation before testing something. Bad research gets some data throws it to the wall (aka correlation analysis) and reports whatever sticks.” I believe this is a very important point: a lot of current research in project management uses statistics in an inappropriate manner, following the “throwing data on a wall” approach that Alex refers to in his comment. Often researchers construct models and theories based on data that isn’t sufficiently representative to support their generalisations.
This point is the subject of a paper entitled, On Probability as a Basis for Action, published by W. Edwards Deming in 1975. In the paper, Deming makes the important distinction between enumerative and analytical studies. The basic difference between the two is that analytical studies are aimed at establishing cause and effect based on data (i.e. building theories that explain why the data is what it is), whereas enumerative studies are concerned with classification (i.e. categorising data). In this post I delve into the use (or abuse) of statistics in project management research, with particular reference to enumerative and analytical studies. The discussion presented below is based on Deming’s paper and a very readable note by David and Sarah Kerridge.
Some terminology before diving into the discussion: Deming uses the notion of a frame, which he defines as an aggregate of identifiable physical units of some kind, any or all of which may be selected and investigated. In short: the aggregate of potential samples.
So what’s an enumerative study? In his paper, Deming defines it as one in which, “…action will be taken on the material in the frame studied…The aim of a study in an enumerative problem is descriptive. How many farms or people belong to this or that category? What is the expected out-turn of wheat for this region? How many units in the lot are defective? The aim (in the last example) is not to find out why there are so many or so few units in this or that category: merely how many.”
In contrast, an analytic study is one “in which action will be taken on the process or cause-system that produced the frame studied, the aim being to improve practice in the future…Examples include, comparison of two industrial processes A and B. (Possible) actions: adopt method B over method A, or hold on to A, or continue the experiment (gather more data).”
Deming also provides a criterion by which to distinguish between enumerative and analytic studies. To quote from the paper, “A 100 percent sample of the frame provides the complete answer to the question posed for the enumerative problem, subject to the limitations of the method of investigation. In contrast a 100 percent sample of the frame is inconclusive in an analytic problem.”
It may be helpful to illustrate the above via project management examples. A census of tools used by project managers is an enumerative problem: sampling the entire population of project managers provides a complete answer. In contrast, building (or validating) a model of project manager performance is an analytic study: it is not possible, even in principle, to verify the model under all circumstances. To paraphrase Deming: there is no statistical method by which to extrapolate the validity of the model to other project managers or environments. This is the key point. Statistical methods have to be complemented by knowledge of the subject matter – in the case of project manager performance this may include organisational factors, environmental effects, work history and experience of project managers etc. Such knowledge helps the investigator design studies that cover a wide range of circumstances, paving the way for generalisations necessary for theory building. Basically, the sample data must cover the entire range over which generalisations are made. What this means is that the choice of samples depends on the aim of the study. The Kerridges offer some examples in their note, which I reproduce below:
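Deming’s criterion can be made concrete with a toy simulation (my own illustration, not from the paper; the frame, tool names and process durations are entirely made up). The enumerative question is settled completely by a census of the frame, whereas the analytic comparison of two processes remains an extrapolation even when every past observation is in hand:

```python
import random

random.seed(42)

# Enumerative: a census of the frame gives a complete answer.
# Frame (hypothetical): every project manager in an organisation,
# tagged with the scheduling tool they use.
frame = ["GanttTool"] * 60 + ["Spreadsheet"] * 40
tool_counts = {tool: frame.count(tool) for tool in set(frame)}
# A 100 percent sample settles the enumerative question exactly:
# how many project managers use each tool? No inference needed.

# Analytic: comparing two work processes means acting on the
# cause-system to improve *future* practice. Even a 100 percent
# sample of past task durations (in days) from processes A and B
# is only evidence about the runs already observed.
past_a = [random.gauss(10.0, 2.0) for _ in range(1000)]
past_b = [random.gauss(9.5, 2.0) for _ in range(1000)]
mean_a = sum(past_a) / len(past_a)
mean_b = sum(past_b) / len(past_b)
# The sample means suggest B is faster, but extrapolating that to
# next year's projects assumes the cause-system stays stable --
# a subject-matter judgement, not a statistical conclusion.
```

The point of the sketch is the asymmetry: `tool_counts` is the final answer to its question, while `mean_a > mean_b` licenses action on process B only if, as Deming stresses, subject-matter knowledge supports the generalisation beyond the frame studied.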
Aim: Discover problems and possibilities, to form a new theory.
Method: Look for interesting groups, where new ideas will be obvious. These may be focus groups, rather than random samples. Accuracy and rigour aren’t required. But this assumes that the possibilities discovered will be tested by other means, before making any prediction.

Aim: Predict the future, to test a general theory.
Method: Study extreme and atypical samples, with great rigour and accuracy.

Aim: Predict the future, to help management.
Method: Get samples as close as possible to the foreseeable range of circumstances in which the prediction will be used in practice.

Aim: Change the future, to make it more predictable.
Method: Use statistical process control to remove special causes, and experiment using the PDSA cycle to reduce common cause variation.
Unfortunately, many project management studies that purport to build theories do not exercise appropriate care in study design. The typical offence is that the samples used in the studies do not support the generalisations made. The resulting theories are thus built on flimsy empirical foundations. To be sure, most offenders label their studies as preliminary (other favoured adjectives include exploratory, tentative, initial etc.), thereby absolving themselves of responsibility for their irresponsible speculations. That would be OK if such work were followed up by a thorough empirical study, but it often isn’t. I’m loath to point fingers at specific offenders, but readers will find an example or two amongst papers reviewed on this blog. Lest I be accused of making gross and unfair generalisations, I should hasten to add that the reviews also include papers in which statistical analysis is done right (I’ll leave it to the reader to figure out which ones these are…).
To sum up: in this post I’ve discussed the difference between enumerative and analytic studies and its implications for the validity of some published project management research. Enumerative statistics deals with counting and categorisation whereas analytical studies are concerned with clarifying cause-effect relationships. In analytical work, it is critical that samples are chosen to reflect the stated intent of the work, be it general theory-building or prediction in specific circumstances. Although this distinction should be well understood (having been articulated clearly over a quarter of a century ago!), it appears that it isn’t always given due consideration in project management research.