Archive for May 2008
Eliciting and documenting application requirements is hard work. It’s made even harder if one is dealing with an appstronaut. “So who or what’s an appstronaut?” – I hear you ask. Here’s a definition, along with some explanatory notes:
Appstronaut (noun): a person who can hold forth for hours on the big picture, but cannot (will not!) get down to details.
Distribution: Generally found in upper echelons of management, but sightings have been reported at other levels too.
Field notes: Appstronauts are characterised by verbosity coupled with a short attention span. They exhibit extreme fondness for strategy, vision and all that “big picture” stuff, but have little patience for details. They are known to have good ideas, but generally lack the focus to see them through to fruition.
Appstronauts infuriate analysts, who need nitty-gritty details in order to understand and document application requirements. The big picture stuff, as fascinating as it is to the appstronaut, is simply of no use to the analyst. But, if left unchecked, appstronauts will be content to float around at rarefied heights where ideas are thick, but details thin. Analysts charged with compiling application requirements must bring them down to earth. But, how? This post aims to answer the question by highlighting some simple techniques to tether appstronauts to reality.
Without further ado, here they are:
- Start by zooming in on a small part of the big picture: The main problem is that the canvas painted by appstronauts is way too big. A practical way to start filling in detail is by focusing on a part of the picture and drilling down to specifics. Which part? The answer should be clear from the appstronaut’s spiel. In brief: the part should be important yet easy enough to action or implement. It may be something that ties into existing systems, for instance. What the analyst should look for is a quick win. Something that can be built quickly, but also provides value to the appstronaut.
- Contribute to the picture by painting a part of it: The best analysts understand the business thoroughly. Given this, they should be able to contribute to the appstronaut’s vision. In fact, I have sat through meetings where smart analysts have steered the discussion (with tact!) in productive directions. At this point the analyst is contributing to the vision too.
- Help appstronauts drill down to specifics: Appstronauts abhor details; analysts thrive on them. The analyst’s job is to get appstronauts to talk about specifics. To do this, it can be helpful to send out a list of questions before the meeting so that appstronauts come in prepared. Very often, they’ll throw up their hands and say, “It’s your job to fill in the details.” At this point the analyst has to gently, but firmly, insist that details of business requirements have to come from (no surprise!) the business.
- Verify mutual understanding: It is important to verify that both parties – the appstronaut and the analyst – have a common understanding of what transpires in a requirements analysis session. This should be done both during and after the meeting. During the session the analyst should, by asking appropriate questions, check that their understanding of the requirements is correct. After the session, a compilation of notes should be sent out, asking users to send their corrections and comments within the next day or two. This gives them a chance to make any revisions before work on documenting the requirements is started. This is standard practice in requirements elicitation, but is absolutely critical when one is dealing with an appstronaut (remember the “short attention span” bit in the field notes above).
- Use visual aids (screen mock-ups, process diagrams etc.) liberally: This applies both to the analysis sessions and the documents. Often, in requirements sessions I use the whiteboard to sketch process flows, relationships and even app screen layouts. Any documents or presentations should have lots of visuals as appstronauts have even less patience than others to go through large swathes of text.
After bagging appstronauts through most of this piece, I should acknowledge that they can play a key role in driving important business initiatives. As mentioned in the field notes above, they often have good ideas but need some help implementing them. This is where the analyst comes in: he or she is very familiar with the business and/or associated systems, and is thus well placed to help appstronauts add substance to their grand, but ethereal, visions. If approached constructively, working with appstronauts can be an opportunity instead of a trial.
And what is good, Phædrus,
And what is not good…
Need we ask anyone to tell us these things?
…so runs the quote at the start of Robert Pirsig’s philosophical novel, Zen and the Art of Motorcycle Maintenance. In the book, Pirsig introduces his ideas on the metaphysics of quality, in which he argues that quality is undefinable because it precedes analysis. Or, to paraphrase the quotation above: when we see something good (i.e. of high quality) we just know it is good without having to analyse what makes it so. When I read the book some years ago, it seemed (to me at any rate) that Pirsig was using the term “quality” in a different and expanded sense from its usual meaning(s). Experience also confirms that using the word in any discussion leaves one open to being misunderstood because it means different things to different people. This brings me to my point: it is important for project managers to ensure that all stakeholders have a common understanding of quality. I expand on this thought below, drawing on what leading project management frameworks – PMBOK and PRINCE2 – have to say about quality.
It is worth starting with a couple of dictionary definitions of quality, if only to show how worthless they are from a project manager’s perspective:
1. An essential or distinctive characteristic, property, or attribute.
2. High grade; superiority; excellence
Not much help, right? In fact, it’s even worse: the second definition confuses quality and grade – two terms that PMBOK assures us are as different as chalk and cheese.
So what is a good definition of quality from the perspective of a project manager? The PMBOK, quoting from the American Society for Quality (ASQ), defines quality as, “the degree to which a set of inherent characteristics fulfil requirements.” This is clearly a much more practical definition for a project manager, as it links the notion of quality to what the end-user expects from deliverables. PRINCE2, similarly, keeps the end-user firmly in focus when it defines quality, rather informally, as, “fitness for purpose.”
The PMBOK/ASQ definition of quality, through its use of the word “degree”, suggests that any measure of quality should be quantitative – i.e. there should be a number associated with it; or, if not a number, at least a qualitative measure of quantity (high, medium, low, for example). PRINCE2 also insists on measurability through its notion of customer quality expectations and acceptance criteria, the former being high level, qualitative specifications and the latter quantitative, testable quality criteria. Although PMBOK does not formalise customer quality expectations, both methodologies insist on measurable quality criteria being defined as early in the game as possible. Basically these should be aimed at answering the question: how will end-users know they’ve got what they asked for?
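To make the point about measurability concrete, here’s a minimal sketch of how quantitative acceptance criteria might be expressed as automated checks. The metric names and thresholds are entirely hypothetical – invented for illustration, not drawn from PMBOK or PRINCE2:

```python
# Hypothetical acceptance criteria: each maps a measurable
# characteristic to a comparison operator and a threshold.
acceptance_criteria = {
    "page_load_seconds": ("<=", 2.0),
    "defects_outstanding": ("<=", 5),
    "uptime_percent": (">=", 99.5),
}

def meets_criteria(measured: dict) -> dict:
    """Return a pass/fail verdict per criterion for the measured values."""
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return {
        name: ops[op](measured[name], threshold)
        for name, (op, threshold) in acceptance_criteria.items()
    }

# Two criteria are met; "defects_outstanding" (7, against a limit of 5) is not.
results = meets_criteria(
    {"page_load_seconds": 1.8, "defects_outstanding": 7, "uptime_percent": 99.7}
)
```

The details are beside the point; what matters is that each criterion has a number attached, so end-users and the project team can agree, unambiguously, on whether it has been met.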
Any project management methodology (including informal or homegrown ones!) must include processes that deal with quality. As might be expected, both PMBOK and PRINCE2 accord great importance to it. Quality Management is one of the nine knowledge areas in PMBOK. Further, PMBOK explicitly acknowledges that project quality is affected by balancing the three fundamental variables of scope, time and cost. In PRINCE2 quality is built in to a project up-front: right from start up (pre-initiation) through to initiation, product delivery and close. However, despite superficial differences in their approaches, both PMBOK and PRINCE2 treat quality in much the same way. Both emphasise the need to plan for quality at the outset (quality planning), put processes in place to ensure quality (quality assurance) and finally test for it when the deliverables (products in PRINCEspeak) are completed (quality control / quality review).
Regardless of your persuasion – or even if you’re a methodology agnostic like me – quality is something you have to worry about on your projects. Project managers, unlike Pirsig, cannot afford to leave quality undefined. So, if your end-users’ eyes glaze over when they are asked about their quality expectations, understand it’s only because they don’t know what you mean; quality means different things to different people. Instead, ask them how they’ll know what is good and what is not good, and be sure to pay very close attention to what they say in reply.
Do project managers learn from their experiences? One might think the answer is a pretty obvious, “Yes.” However in a Harvard Business Review article entitled, The Experience Trap, Kishore Sengupta, Tarek Abdel-Hamid and Luk Van Wassenhove suggest the answer may be a negative, especially on complex projects. I found this claim surprising, as I’m sure many project managers would. It is therefore worth reviewing the article and the arguments made therein.
The article is based on a study in which several hundred experienced project managers were asked to manage a simulated software project with specified goals and constraints. Most participants failed miserably: their deliverables were late, over-budget and defect ridden. This despite the fact that most of them acknowledged that the problems encountered on the simulations were reflective of those they had previously seen on real projects. The authors suggest this indicates problems with the way project managers learn from experience. Specifically:
- When making decisions, project managers do not take into account consequences of prior actions.
- Project managers don’t change their approach, even when it is evident that it doesn’t work.
The authors identify three causes for this breakdown in learning:
- Time lags between cause and effect: In complex projects, the link between causes and effects is not immediately apparent. The main reason for this, the authors contend, is that there can be significant delays between the two – e.g. something done today might affect the project only after a month. The authors studied this effect through another simulated project in which requirements increased during implementation. The participants were asked to make hiring decisions at specified intervals in the project, based on their anticipated staffing requirements. The results showed that the ability of the participants to make good hiring decisions deteriorated as the arrival lag (time between hiring and arrival) or assimilation lag (time between arrival and assimilation) increased. This, the authors claim, shows that project managers find it hard to make causal connections when delays between causes and effects are large.
- Incorrect estimates: It is well established that software projects are notoriously hard to estimate (see my article on complexity of IT projects for more on why this is so). The authors studied how project managers handle incorrect estimates. This, again, was done through a simulation. What they found was participants tended to be overly conservative when providing estimates even when things were actually going quite well. The authors suggest this is an indication that project managers attempt to game the system to get more resources (or time), regardless of what the project data tells them.
- Initial goal bias: Through yet another simulation, the authors studied what happens as project goals change with time. The simulation started out with a well-defined initial scope which was then changed some time after the project started. Participants were not required to reevaluate goals as scope changed, but that avenue was open to them. The researchers found that none of the participants readjusted their goals in response to the change, thus indicating that unless explicitly required to reevaluate objectives, project managers tend to stick to their original targets.
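The first of these barriers – delays between cause and effect – is easy to see in a toy model. The sketch below is my own illustration, not the authors’ simulation: each period the manager hires to close the gap between required and current staff, but hires arrive only after a fixed lag, and the naive rule ignores hires already in the pipeline:

```python
# Toy staffing model: hiring decisions take `lag` periods to take effect.
# The naive manager hires the full gap each period, ignoring in-flight hires.

def simulate(required=10, lag=3, periods=12):
    staff = 0
    pipeline = [0] * lag          # pipeline[i] = hires arriving in i periods
    history = []
    for _ in range(periods):
        staff += pipeline.pop(0)  # delayed hires finally arrive
        gap = required - staff
        pipeline.append(max(gap, 0))  # naive rule: hire the whole gap again
        history.append(staff)
    return history

print(simulate())
```

Because the gap persists for `lag` periods before any hire lands, the manager keeps re-hiring for the same shortfall, and staffing overshoots the requirement threefold. The feedback the manager needs – “your earlier decision is already in flight” – is exactly what the delay obscures.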
After discussing the above barriers to learning, the authors provide the following suggestions to reduce them:
- Provide cognitive feedback: A good way to understand causal relationships in complex processes is to provide cognitive feedback – i.e. feedback that clarifies the connection between important variables. In the simulation exercise involving arrival / assimilation delays, participants who were provided with such feedback (basically graphical displays of number of defects detected vs time) were able to make better (i.e. more timely) staffing decisions.
- Use calibrated model-based tools and guidelines: The authors suggest using decision support and forecasting tools to guide project decision-making. They warn that these tools should be calibrated to the specific industry and environment.
- Set goals based on behaviours rather than performance: Most project managers are assessed on their performance – i.e. the success of their projects. Instead, the authors suggest setting goals that promote specific behaviours. An example of such a goal might be the reduction of team attrition. Such a goal would ensure that project managers focus on things such as promoting learning within the team, protecting their team from schedule pressure etc. This, the logic goes, will lead to better team cohesion and morale, ultimately resulting in better project outcomes.
- Use project simulators: Project simulations provide a safe environment for project managers to hone their skills and learn new ones. The authors cite a case where the introduction of project simulation games significantly improved the performance of managers on projects, and also led to a better understanding of dynamic relationships in complex environments.
Although many of the problems (e.g. inaccurate estimates) and solutions (e.g. use of simulation and decision support software) discussed in the article aren’t new, the authors present an interesting and thought-provoking study on the apparently widespread failure of project managers to learn from experience. However, for reasons I now outline, I believe their case may be somewhat overstated.
Regarding the research methodology, I believe their reliance on simulations limits the strength, if not the validity, of their claims. More on this below:
- Having participated in project simulations before, I can say that simulators cannot simulate (!) important people-related factors which are always present in a real project environment. These include factors such as personal relationships and ill-defined but important notions such as organisational culture. In my experience, project managers always have to take these into account when making project decisions.
- Typically many of the important factors on real projects are “fuzzy” and have complex dependencies that are hard to disentangle. Simulations are only as good as the models they use, and these factors are hard to model.
On the solutions recommended by the authors:
- I’m somewhat sceptical about the use of software tools to support decision-making. In my experience, decision support tools require a fair bit of calibration, practice and (good) data to be of any real use. By the time one gets them working, one usually has a good handle on the problem anyway. They’re also singularly useless when extrapolated to new situations – and projects (almost by definition) often involve new situations.
- Setting behavioural goals is nice in theory, but I’m not sure how it would work in practice. Essentially I have a problem with how a project manager’s performance will be measured against such goals. The causal connection between a behavioural goal such as “reduce team attrition” and improved project performance is indirect at best.
- Regarding simulators as training tools, I have used them and have been less than impressed. It is very easy to make a “wrong” decision on a simulator because information has been hidden from you. In a real life situation, a canny project manager would find ways to gather the information he or she needs to make an informed decision, even if this is hard to do. Typically, this involves using informal communication modes and generally keeping an ear to the ground. The best project managers excel at this.
So, in closing, I think the authors have a point about the disconnect between project management practice and learning at the level of an individual project manager. However, I believe their thesis is somewhat diluted because it is based on the results of simulated project games which are limited in their ability to replicate complex, people-related issues that are encountered on real projects.
My five year old son looked out through our living room window this morning and said, “Dad, I can’t write on the window today.”
“Hmm…,” I said, not really listening. I continued with my breakfast, looking down at the article I was reading.
He repeated, “I can’t write on the window.” Then added, “I did yesterday but I can’t today.” …And shortly after came the inevitable, “Why?”
I looked up and realised what he was talking about: there was no condensation on the window. Yesterday morning our windows had a thin veil of condensation on which he could write with his finger. But it was less humid today, so the panes were clear. There would be no writing today. Quite naturally, he wanted to know why.
I launched into a long-winded explanation about water vapour, temperature and condensation. He listened to me politely, but (obviously) didn’t really understand what I was going on about.
I stopped. There had to be a better way to explain this to a five year old.
I got up and went to the kitchen. “Let me show you something,” I said. The little fellow followed, with a somewhat dubious look on his face. I had lost credibility and he wasn’t about to cut me any slack.
I filled the kettle with some water and turned it on.
“Wait and watch,” I said, unfolding a step ladder so he could have a good look-see at what was happening.
He clambered on and watched as I held up a glass near the spout. I had his attention now. “Tell me when you see something,” I said.
“I see it now, I see it now,” he said, pointing excitedly at a sheen of condensation on the glass. “I can write on the glass!”
I launched into another explanation of water vapour, humidity and condensation. But this time I could see that I was getting through. He listened, and asked questions which I answered as best I could. It was great!
I was late for work this morning, but it was worth it. I’d been given a refresher course on a vital aspect of communication: show, don’t tell.
I’ll start with a story that may sound familiar to some of you.
A project team member – let’s call him Ernest – appears to be a major asset to the team. He is enthusiastic, volunteers to do stuff others don’t want to do and is always (seemingly) on the ball. The problem is most of his work is shoddy, riddled with errors and has to be redone. Worse, this is starting to have a negative knock-on effect on other deliverables. Other team members are having to clean up the mess and are beginning to resent it. Yet Ernest is blissfully unaware of the repercussions of his earnest efforts. By his estimation, he’s doing a fine job and, quite naturally, expects to be rewarded for it.
It is clear the project manager has to do something about Ernest. Trouble is, she can’t. She has no say in the composition of the team, and the functional manager to whom Ernest reports reckons that Ernie is the best thing that happened to the company in a long time. Our PM is in a pickle, and I reckon it isn’t an uncommon one.
There are two factors at work here:
- Ernest thinks he’s (way) more competent than he actually is.
- He isn’t aware of his shortcomings.
This story is an illustration of the Dunning-Kruger Effect, so named after the authors of this paper, published in the Journal of Personality and Social Psychology in 1999. In the paper the authors demonstrated, through a series of experiments, that less skilled individuals tend to:
- Overestimate their competence.
- Fail to recognise the extent of their lack of skills.
The paper suggests that improving the skills of such individuals not only increases their competence, but also helps them recognise and acknowledge their prior lack of skills – i.e. it improves their ability for self-assessment.
I should mention that not all authors agree with Dunning and Kruger. However, in a recent paper, Dunning and others appear to address many criticisms that were levelled at the original work. So the current academic consensus seems to be that the Dunning-Kruger effect is real.
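One common statistical account of the pattern is that self-assessments are noisy and pulled toward the population average, so the least skilled overestimate the most. The simulation below is my own illustration of that account, not a reproduction of anything in the 1999 paper, and the blend and noise parameters are arbitrary:

```python
# Illustrative simulation: self-estimates are a blend of true skill and
# the population average, plus noise -- i.e. imperfect self-insight.

import random
random.seed(1)

people = []
for _ in range(10_000):
    skill = random.uniform(0, 100)                  # true skill percentile
    estimate = 0.3 * skill + 0.7 * 50 + random.gauss(0, 10)
    people.append((skill, estimate))

# Average (estimate - skill) gap, by quartile of true skill
bottom = [e - s for s, e in people if s < 25]       # bottom quartile
top = [e - s for s, e in people if s > 75]          # top quartile

print(sum(bottom) / len(bottom))   # large and positive: overestimation
print(sum(top) / len(top))         # negative: mild underestimation
```

Under these assumptions, the Ernests of the world (bottom quartile) overrate themselves by a wide margin, while the most skilled slightly underrate themselves – the signature pattern Dunning and Kruger reported.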
So, going back to my original story, what can the project manager do about Ernest? Remember, Ernie can’t be relieved of his project duties because he has his manager’s backing.
The PM has a few options which I outline below:
- Provide Ernest with honest feedback. This has to be done with care as Ernest reckons he’s doing a great job. The PM should also be sure to provide Ernest with positive feedback where possible – compliment him on his enthusiasm, readiness to take on tasks etc.
- Channel Ernest’s positive qualities to good use. One way our PM could do this is to position less critical tasks as important, and suggest (or gently insist!) that Ernest take responsibility for them. This needs to be done carefully, as the PM needs to ensure that Ernest remains motivated.
- Suggest concrete actions that might help Ernest improve the quality of his work. This option is usually considered to be a non-starter given that there’s no time (or budget) for training in the middle of a project. However, there are a few other ways to achieve this: informal coaching or mentoring, for example. These, however, are difficult to put into action because it is difficult to find time to coach or mentor while a project’s in progress. Besides, Ernest has to be willing to acknowledge and accept his shortcomings.
At all times, the PM has to be cognisant of the effect her actions (or inaction, for that matter!) have on team morale. Plummeting morale is the last thing she needs in the middle of a project.
I’m sure most project managers would have had first-hand experience of dealing with individuals like Ernest. If so, they’ll know that fixing the problem is hard, especially if the project manager has no authority over team composition. Although I’ve suggested some strategies to deal with such individuals, I acknowledge that the solutions can be difficult, time consuming and expensive to implement; especially in stressed-out project environments.
As Dunning points out in this article, we’re strangers to ourselves. So we’re all potential victims of this effect (yes, I realise that includes me too!). Having said that, I leave you now to ponder this question: how do you rate your competence as a project manager?
This is a tale of distress
caused by a system on Access,
which failed one day
in a spectacular way,
leaving users in a bit of a mess.
File-based databases are prone
to crashing for reasons unknown.
So it was no surprise
to the IT guy.
“I knew it would happen,” he groaned.
The boss went totally ballistic,
turning red and apoplectic.
He told the IT guy,
“It will be good-bye
unless you get off your rear and fix it.”
On hearing he could be history,
the IT guy rolled up his sleeves
and tried to revive
the system that died,
but gray stayed the monitor screen.
The lesson to learn from this tale
is to back up your systems each day.
Disaster can strike
any time, day or night.
Be prepared! It’s the only way.
Several years ago Frederick Brooks wrote his well known article, No Silver Bullet, in which he explained why software development is intrinsically hard. I believe many of the issues that make software development inherently difficult have close parallels in IT project management – parallels which apply even in projects that don’t involve much writing of code. In this post I look at Brooks’ article from the perspective of an IT project manager, with a view to gaining some insight into why managing IT projects is hard.
Brooks defines the essence of a software entity as, “…a construct of interlocking concepts: data sets, relationships among data items, algorithms and invocations of functions…” This construct is abstract; that is, it has many different representations or implementations. A line later he states, “…I believe the hard part of building software to be specification, design and testing of the construct, not the labor of representing it and testing the fidelity of the representation…” The connection with project management (especially, IT project management) is immediately apparent: the hard part in many projects is figuring out what needs to be done, not actually doing it. Put another way, requirements, rather than implementation, are the key to successful projects. Project management methodologies deal with implementation reasonably well, but have little to say on how requirements should be elicited. Why is this so? To answer this it is helpful to take a closer look at the parallels between inherent properties of software entities (as elucidated by Brooks) and IT projects.
According to Brooks, the essence of software systems (as defined above) is irreducible – i.e. it cannot be simplified further. This irreducible essence, he claims, has the following inherent properties: complexity, conformity, changeability and invisibility. These characteristics are present in any non-trivial software entity. Furthermore – and this is the crux of the matter- any advances in software engineering or development methodologies cannot, in principle, solve difficulties that arise from these inherent properties. I discuss these properties and some of their consequences below, pointing out the very close connections with IT project management.
- Complexity: Brooks describes software entities as complex in that no two parts are alike. This complexity increases exponentially with entity size. Deriving from this complexity, he says, are issues of difficulty of communication among team members leading to product flaws, cost overruns and schedule delays. This should sound extremely familiar to IT project managers. In an earlier post I looked into definitions of project complexity. In a nutshell, there are two dimensions of project complexity: technical and management (or business) complexity. Brooks defines complexity in technical terms, as he is concerned with building software. However, in large part the complexity he talks about arises from business complexity (or complex user requirements). The latter is often what makes IT projects difficult, even when there’s not much actual code cutting involved. Furthermore, this characteristic is intrinsic to all but the most trivial IT projects.
- Conformity: software entities must conform to constraints of the environment into which they are introduced. To paraphrase Brooks, “…These constraints are often arbitrary, forced without rhyme or reason by the many human institutions and systems to which interfaces must conform…” This too has obvious parallels with IT project management – the deliverables of any IT project have to fit into the environment they are intended for. This fit has to be at the technical level (eg: interfaces) but also at the business level (eg: processes). I’m sure many IT project managers would agree that the technical and business constraints imposed on them are often arbitrary (but compliance is always mandatory!). So we see that this characteristic, too, is intrinsic to most business and technical environments in which projects are conceived and implemented.
- Changeability: Brooks describes the software entity as being “…constantly subject to pressures for change…” This, he reckons, is partly because software embodies function (i.e. it does something useful) and partly because it is perceived as being easy to change (italics mine). One would think twice (or many more times!) before asking for large-scale changes to a building that has already been erected, but there’s considerably less restraint shown when asking for major changes to software. This, again, has parallels with IT project management – shifting requirements are the norm, despite the high cost in terms of time, if not money. Change control processes are put in place to dampen this tendency, but it is ubiquitous nonetheless.
- Invisibility: According to Brooks, software entities are, “… invisible and unvisualizable…,” in all but the simplest cases. Unlike the floor plan of a building, which helps project personnel visualise the finished product, no pictorial model is available to software builders. Sure, modelling techniques do exist, but they do not (and cannot!) depict the complexity of a non-trivial software system in any meaningful way. This, says Brooks, “…not only impedes the process of design within one’s mind, it severely hinders communication among minds…” Here too, the parallels with IT project management are clear – communicating the requirements of the project would be so much simpler if there were a visual representation of what’s required. If Brooks is right, though, a search for such a representation is akin to the search for the philosopher’s stone.
Yet Brooks is not a pessimist. Towards the end of his article, he mentions some techniques that can alleviate some of the essential difficulties of building software. These include: rapid prototyping and iterative / incremental approaches. Grow software, he says, instead of building it. Such an approach, which incorporates frequent interactions between users and developers, reduces risk associated with missed or misunderstood requirements and clarifies design in small, digestible steps. This is also good advice for IT project managers, as I’ve pointed out in a previous post.
In the last section of his article, Brooks states, “The central question of how to improve the software centers, as it always has, on people” and then goes on to discuss how talented designers (or architects) can greatly reduce the essential difficulties in building software. I believe the parallel here is between designers and project managers. A talented project manager can make all the difference between the success and failure of an IT project. What are the attributes of a talented project manager? Well, that’s a topic for another post, but I think most of us who’ve worked in the field can recognise a good project manager when we see one.
Brooks believes the essential difficulties associated with building software make silver bullet solutions impossible, in principle. The parallels outlined above lead me to believe that the same applies to project management. Methodologies may help us along the road to project success (and some do so more than others!), but there are no silver bullets: managing IT projects is intrinsically hard.