Eight to Late

Sensemaking and Analytics for Organizations

Archive for May 2008

Appstronauts


Eliciting and documenting application requirements is hard work. It’s made even harder if one is dealing with an appstronaut. “So who or what’s an appstronaut?” – I hear you ask. Here’s a definition, along with some explanatory notes:

Appstronaut (noun): a person who can hold forth for hours on the big picture, but cannot (will not!) get down to details.
Distribution: Generally found in upper echelons of management, but sightings have been reported at other levels too.
Field notes: Appstronauts are characterised by verbosity coupled with a short attention span. They exhibit extreme fondness for strategy, vision and all that “big picture” stuff, but have little patience for details. They are known to have good ideas, but generally lack the focus to see them through to fruition.

Appstronauts infuriate analysts, who need nitty-gritty details in order to understand and document application requirements. The big picture stuff, as fascinating as it is to the appstronaut, is simply of no use to the analyst.  But, if left unchecked, appstronauts will be content to float around at rarefied heights where ideas are thick, but details thin. Analysts charged with compiling application requirements must bring them down to earth. But, how?  This post aims to answer the question by highlighting some simple techniques to tether appstronauts to reality.

Without further ado, here they are:

  • Start by zooming in on a small part of the big picture: The main problem is that the canvas painted by appstronauts is way too big. A practical way to start filling in detail is to focus on one part of the picture and drill down to specifics. Which part? The answer should be clear from the appstronaut’s spiel. In brief: the part should be important yet easy enough to action or implement. It may be something that ties into existing systems, for instance. What the analyst should look for is a quick win: something that can be built quickly, but also provides value to the appstronaut.
  • Contribute to the picture by painting a part of it: The best analysts understand the business thoroughly. Given this, they should be able to contribute to the appstronaut’s vision. In fact, I have sat through meetings where smart analysts have steered the discussion (with tact!) in productive directions. At this point the analyst is contributing to the vision too.
  • Help appstronauts drill down to specifics: Appstronauts abhor details; analysts thrive on them. The analyst’s job is to get appstronauts to talk about specifics.  To do this, it can be helpful to send out a list of questions before the meeting so that appstronauts come in prepared. Very often, they’ll throw up their hands and say, “It’s your job to fill in the details.” At this point the analyst has to gently, but firmly, insist that details of business requirements have to come from (no surprise!) the business.
  • Verify mutual understanding: It is important to verify that both parties – the appstronaut and the analyst – have a common understanding of what transpires in a requirements analysis session. This should be done both during and after the meeting. During the session the analyst should, by asking appropriate questions, check that their understanding of the requirements is correct. After the session, a compilation of notes should be sent out, asking users to send their corrections and comments within the next day or two. This gives them a chance to make any revisions before work on documenting the requirements is started. This is standard practice in requirements elicitation, but is absolutely critical when one is dealing with an appstronaut (remember the “short attention span” bit in the field notes above).
  • Use visual aids (screen mock-ups, process diagrams, etc.) liberally: This applies both to the analysis sessions and the documents. Often, in requirements sessions I use the whiteboard to sketch process flows, relationships and even app screen layouts. Any documents or presentations should have lots of visuals, as appstronauts have even less patience than others for wading through large swathes of text.

After bagging appstronauts through most of this piece, I should acknowledge that they can play a key role in driving important business initiatives. As mentioned in the field notes above, they  often have good ideas but need some help implementing them. This is where the analyst comes in: he or she is very familiar with the business and/or associated systems, and is thus well placed to help appstronauts add substance to their grand, but ethereal, visions. If approached constructively, working with appstronauts can be an opportunity instead of a trial.

Written by K

May 25, 2008 at 9:33 pm

A project manager’s ruminations on quality


And what is good, Phædrus,
And what is not good…
Need we ask anyone to tell us these things?

…so runs the quote at the start of Robert Pirsig’s philosophical novel, Zen and the Art of Motorcycle Maintenance. In the book, Pirsig introduces his ideas on the metaphysics of quality, in which he argues that quality is undefinable because it precedes analysis. Or, to paraphrase the quotation above: when we see something good (i.e. of high quality) we just know it is good without having to analyse what makes it so. When I read the book some years ago, it seemed (to me at any rate) that Pirsig was using the term “quality” in a different and expanded sense from its usual meaning(s). Experience also confirms that using the word in any discussion leaves one open to being misunderstood, because it means different things to different people. This brings me to my point: it is important for project managers to ensure that all stakeholders have a common understanding of quality. I expand on this thought below, drawing on what leading project management frameworks – PMBOK and PRINCE2 – have to say about quality.

It is worth starting with a couple of dictionary definitions of quality, if only to show how worthless they are from a project manager’s perspective:

1. An essential or distinctive characteristic, property, or attribute.

2. High grade; superiority; excellence

Not much help, right? In fact, it’s even worse: the second definition confuses quality and grade – two terms that PMBOK assures us are as different as chalk and cheese.

So what is a good definition of quality from the perspective of a project manager? The PMBOK, quoting from the American Society for Quality (ASQ), defines quality as, “the degree to which a set of inherent characteristics fulfil requirements.”  This is clearly a much more practical definition for a project manager, as it links the notion of quality to what the end-user expects from deliverables. PRINCE2, similarly, keeps the end-user firmly in focus when it defines quality, rather informally, as, “fitness for purpose.”

The PMBOK/ASQ definition of quality, through its use of the word “degree”, suggests that any measure of quality should be quantitative – i.e. there should be a number associated with it; or, if not a number, at least a qualitative measure of quantity (high, medium, low, for example). PRINCE2 also insists on measurability through its notion of customer quality expectations and acceptance criteria, the former being high level, qualitative specifications and the latter quantitative, testable quality criteria. Although PMBOK does not formalise customer quality expectations, both methodologies insist on measurable quality criteria being defined as early in the game as possible. Basically these should be aimed at answering the question: how will end-users know they’ve got what they asked for?
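
To make the distinction between a qualitative expectation and a testable acceptance criterion concrete, here is a minimal sketch in Python. It is purely illustrative: the criteria, thresholds and field names are invented for this example and are not drawn from PMBOK or PRINCE2.

```python
# Illustrative only: a qualitative customer quality expectation ("the report
# module must be fast and reliable") decomposed into quantitative, testable
# acceptance criteria. All names and thresholds below are hypothetical.

acceptance_criteria = [
    {"criterion": "Monthly report generation time", "target": "<= 30 seconds",
     "met": lambda m: m["report_time_s"] <= 30.0},
    {"criterion": "Report figures reconcile with the general ledger", "target": "100% of sampled lines",
     "met": lambda m: m["reconciled_pct"] >= 100.0},
    {"criterion": "Severity-1 defects open at handover", "target": "0 defects",
     "met": lambda m: m["sev1_defects"] == 0},
]

def acceptance_test(measurements):
    """Return (accepted, list of failed criteria) for a set of measurements."""
    failed = [c["criterion"] for c in acceptance_criteria if not c["met"](measurements)]
    return (len(failed) == 0, failed)

# Example: results gathered during quality control / quality review
ok, failed = acceptance_test({"report_time_s": 42.0, "reconciled_pct": 100.0, "sev1_defects": 0})
print("Accepted" if ok else f"Rejected; failed criteria: {failed}")
```

The code itself is beside the point; what matters is the shift from “fast and reliable” to numbers that end-users can verify for themselves when the deliverables are reviewed.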

Any project management methodology (including informal or homegrown ones!) must include processes that deal with quality. As might be expected, both PMBOK and PRINCE2 accord great importance to it. Quality Management is one of the nine knowledge areas in PMBOK. Further, PMBOK explicitly acknowledges that project quality is affected by balancing the three fundamental variables of scope, time and cost. In PRINCE2, quality is built into a project up-front: right from start up (pre-initiation) through to initiation, product delivery and close. However, despite superficial differences in their approaches, both PMBOK and PRINCE2 treat quality in much the same way. Both emphasise the need to plan for quality at the outset (quality planning), put processes in place to ensure quality (quality assurance) and finally test for it when the deliverables (products in PRINCEspeak) are completed (quality control / quality review).

Regardless of your persuasion – or even if you’re a methodology agnostic like me – quality is something you have to worry about on your projects. Project managers, unlike Pirsig, cannot afford to leave quality undefined. So, if your end-users’ eyes glaze over when they are asked about their quality expectations, understand it’s only because they don’t know what you mean; quality means different things to different people. Instead, ask them how they’ll know what is good and what is not good, and be sure to pay very close attention to what they say in reply.

Written by K

May 22, 2008 at 9:12 pm

Posted in Project Management

Do project managers learn from experience?


Do project managers learn from their experiences? One might think the answer is a pretty obvious, “Yes.” However, in a Harvard Business Review article entitled The Experience Trap, Kishore Sengupta, Tarek Abdel-Hamid and Luk Van Wassenhove suggest the answer may be no, especially on complex projects. I found this claim surprising, as I’m sure many project managers would. It is therefore worth reviewing the article and the arguments made therein.

The article is based on a study in which several hundred experienced project managers were asked to manage a simulated software project with specified goals and constraints. Most participants failed miserably: their deliverables were late, over-budget and defect-ridden. This despite the fact that most of them acknowledged that the problems encountered in the simulations were reflective of those they had previously seen on real projects. The authors suggest this indicates problems with the way project managers learn from experience. Specifically:

  • When making decisions, project managers do not take into account consequences of prior actions.
  • Project managers don’t change their approach, even when it is evident that it doesn’t work.

The authors identify three causes for this breakdown in learning:

  • Time lags between cause and effect: In complex projects, the link between causes and effects is not immediately apparent. The main reason for this, the authors contend, is that there can be significant delays between the two – e.g. something done today might affect the project only after a month. The authors studied this effect through another simulated project in which requirements increased during implementation. The participants were asked to make hiring decisions at specified intervals in the project, based on their anticipated staffing requirements. The results showed that the ability of the participants to make good hiring decisions deteriorated as the arrival lag (time between hiring and arrival) or assimilation lag (time between arrival and assimilation) increased. This, the authors claim, shows that project managers find it hard to make causal connections when delays between causes and effects are large. (A toy simulation illustrating this effect appears after this list.)
  • Incorrect estimates: It is well established that software projects are notoriously hard to estimate (see my article on complexity of IT projects for more on why this is so). The authors studied how project managers handle incorrect estimates. This, again, was done through a simulation. What they found was that participants tended to be overly conservative when providing estimates, even when things were actually going quite well. The authors suggest this is an indication that project managers attempt to game the system to get more resources (or time), regardless of what the project data tells them.
  • Initial goal bias: Through yet another simulation, the authors studied what happens as project goals change with time. The simulation started out with a well-defined initial scope which was then changed some time after the project started. Participants were not required to reevaluate goals as scope changed, but that avenue was open to them. The researchers found that none of the participants readjusted their goals in response to the change, thus indicating that unless explicitly required to reevaluate objectives, project managers tend to stick to their original targets.
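
To see why such lags confound intuition, consider the following toy simulation. It is my own sketch, not the model used in the study, and every figure in it is invented: a manager hires against a growing backlog, but hires only become productive after an arrival lag and an assimilation lag.

```python
# Toy illustration of the arrival/assimilation lag effect described above.
# This is a sketch of my own, not the HBR study's model; all figures are invented.

ARRIVAL_LAG = 4         # weeks between a hiring decision and the hire starting
ASSIMILATION_LAG = 6    # further weeks before the hire is fully productive
OUTPUT_PER_PERSON = 10  # units of work a productive person completes per week
DEMAND_PER_WEEK = 120   # new work arriving each week

backlog, productive, pipeline = 500.0, 8, []  # pipeline: weeks until each hire is productive

for week in range(1, 41):
    # Hires already in the pipeline gradually mature into productive staff
    pipeline = [t - 1 for t in pipeline]
    productive += sum(1 for t in pipeline if t == 0)
    pipeline = [t for t in pipeline if t > 0]

    # A naive decision rule: hire whenever the backlog per productive person looks too high.
    # It reacts to the current backlog and ignores staff already in the pipeline --
    # the cause-and-effect blind spot the authors describe.
    if backlog / max(productive, 1) > 40:
        pipeline.append(ARRIVAL_LAG + ASSIMILATION_LAG)

    backlog = max(0.0, backlog + DEMAND_PER_WEEK - productive * OUTPUT_PER_PERSON)
    print(f"week {week:2d}: backlog={backlog:6.0f}  productive={productive:2d}  hires in pipeline={len(pipeline)}")
```

Because the hiring rule ignores staff already in the pipeline, the backlog keeps climbing for weeks after the first hiring decisions are made, the manager keeps hiring in response, and by the time the early hires become productive far more people have been hired than the workload requires. The effect of each decision is separated from its cause by ten simulated weeks, which is precisely the difficulty the authors describe.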

After discussing the above barriers to learning, the authors provide the following suggestions to reduce them:

  • Provide cognitive feedback: A good way to understand causal relationships in complex processes is to provide cognitive feedback – i.e. feedback that clarifies the connection between important variables. In the simulation exercise involving arrival / assimilation delays, participants who were provided with such feedback (basically graphical displays of the number of defects detected vs time) were able to make better (i.e. more timely) staffing decisions. (A minimal sketch of such a display appears after this list.)
  • Use calibrated model-based tools and guidelines: The authors suggest using decision support and forecasting tools to guide project decision-making. They warn that these tools should be calibrated to the specific industry and environment.
  • Set goals based on behaviours rather than performance: Most project managers are assessed on their performance – i.e. the success of their projects. Instead, the authors suggest setting goals that promote specific behaviours. An example of such a goal might be the reduction of team attrition. Such a goal would ensure that project managers focus on things such as promoting learning within the team, protecting their team from schedule pressure, etc. This, the logic goes, will lead to better team cohesion and morale, ultimately resulting in better project outcomes.
  • Use project simulators: Project simulations provide a safe environment for project managers to hone their skills and learn new ones. The authors cite a case where the introduction of project simulation games significantly improved the performance of managers on projects, and also led to a better understanding of dynamic relationships in complex environments.
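
On the cognitive feedback suggestion, even something as simple as a chart of defects detected per week plotted alongside staffing can make the lagged relationship between the two variables visible. The sketch below is my own illustration with made-up data; the displays used in the study may have looked quite different.

```python
# A minimal sketch of a "cognitive feedback" display: defect discoveries plotted
# against time together with staffing, so the lag between the two is visible.
# All data below are invented for illustration.
import matplotlib.pyplot as plt

weeks = list(range(1, 21))
defects_found = [2, 3, 5, 8, 13, 18, 22, 25, 24, 20, 16, 12, 9, 7, 5, 4, 3, 2, 2, 1]
productive_staff = [8, 8, 8, 8, 9, 9, 10, 11, 12, 13, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15]

fig, ax1 = plt.subplots()
ax1.plot(weeks, defects_found, marker="o", label="defects detected per week")
ax1.set_xlabel("week")
ax1.set_ylabel("defects detected")
ax1.set_title("Cognitive feedback: defect detection vs staffing over time")

ax2 = ax1.twinx()  # second axis for headcount
ax2.plot(weeks, productive_staff, linestyle="--", color="grey", label="productive staff")
ax2.set_ylabel("productive staff")

fig.legend(loc="upper right")
plt.show()
```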

Although many of the problems (e.g. inaccurate estimates) and solutions (e.g. use of simulation and decision support software) discussed in the article aren’t new, the authors present an interesting and thought-provoking study on the apparently widespread failure of project managers to learn from experience. However, for reasons I outline below, I believe their case may be somewhat overstated.

Regarding the research methodology, I believe their reliance on simulations limits the strength, if not the validity, of their claims. More on this below:

  1. Having participated in project simulations before, I can say that simulators cannot simulate (!) important people-related factors which are always present in a real project environment. These include factors such as personal relationships and ill-defined but important notions such as organisational culture. In my experience, project managers always have to take these into account when making project decisions.
  2. Typically many of the important factors on real projects are “fuzzy” and have complex dependencies that are hard to disentangle.  Simulations are only as good as the models they use, and these factors are hard to model.

On the solutions recommended by the authors:

  1. I’m somewhat sceptical about the use of software tools to support decision making. In my experience, decision support tools require a fair bit of calibration, practice and (good) data to be of any real use. By the time one gets them working, one usually has a good handle on the problem anyway. They’re also singularly useless when extrapolated to new situations – and projects (almost by definition) often involve new situations.
  2. Setting behavioural goals is nice in theory, but I’m not sure how it would work in practice. Essentially I have a problem with how a project manager’s performance will be measured against such goals. The causal connection between a behavioural goal such as “reduce team attrition” and improved project performance is indirect at best.
  3. Regarding simulators as training tools, I have used them and have been less than impressed. It is very easy to make a “wrong” decision on a simulator because information has been hidden from you. In a real-life situation, a canny project manager would find ways to gather the information he or she needs to make an informed decision, even if this is hard to do. Typically, this involves using informal communication modes and generally keeping an ear to the ground. The best project managers excel at this.

So, in closing, I think the authors have a point about the disconnect between project management practice and learning at the level of an individual project manager. However, I believe their thesis is somewhat diluted because it is based on the results of simulated project games which are limited in their ability to replicate complex, people-related issues that are encountered on real projects.

Written by K

May 18, 2008 at 9:42 am