Archive for June 2009
Visualising arguments using issue maps – an example and some general comments
The aim of an opinion piece writer is to convince his or her readers that a particular idea or point of view is reasonable or right. Typically, such pieces weave facts, interpretations and reasoning into prose, from which it can be hard to pick out the essential thread of argumentation. In an earlier post I showed how an issue map can help in clarifying the central arguments in a “difficult” piece of writing by mapping out Fred Brooks’ classic article No Silver Bullet. Note that I use the word “difficult” only because the article has, at times, been misunderstood and misquoted; not because it is particularly hard to follow. Still, Brooks’ article borders on the academic; the arguments presented therein are of interest to a relatively small group of people within the software development community. Most developers and architects aren’t terribly interested in the essential difficulties of the profession – they just want to get on with their jobs. In the present post, I develop an issue map of a piece that is of potentially wider interest to the IT community – Nicholas Carr’s 2003 article, IT Doesn’t Matter.
The main point of Carr’s article is that IT is becoming a utility, much like electricity, water or rail. As this trend towards commoditisation gains momentum, the strategic advantage offered by in-house IT will diminish, and organisations will be better served by buying IT services from “computing utility” providers than by maintaining their own IT shops. Although Carr makes a persuasive case, he glosses over a key difference between IT and other utilities (see this post for more). Despite this, many business and IT leaders have taken his words as an accurate forecast of how things will be. It is therefore important for all IT professionals to understand Carr’s arguments. The consequences are likely to affect them some time soon, if they haven’t already.
Some preliminaries before proceeding with the map. First, the complete article is available here – you may want to read it before going further (but this isn’t essential). Second, the discussion assumes a basic knowledge of IBIS (Issue-Based Information System) – see this post for a quick tutorial on IBIS. Third, the map is constructed using the open-source tool Compendium which can be downloaded here.
With the preliminaries out of the way, let’s get on with issue mapping Carr’s article.
So, what’s the root (i.e. central) question that Carr poses in the article? The title of the piece is “IT Doesn’t Matter” – so one possible root question is, “Why doesn’t IT matter?” But there are other candidates: “On what basis is IT an infrastructural technology?” or “Why is the strategic value of IT diminishing?” for example. From this it should be clear that there’s a fair degree of subjectivity at every step of constructing an issue map. The visual representation that I construct here is but one interpretation of Carr’s argument.
Out of the above (and many other possibilities), I choose “Why doesn’t IT matter?” as the root question. Why? Well, in my view the whole point of the piece is to convince the reader that IT doesn’t matter because it is an infrastructural technology and consequently has no strategic significance. This point should become clearer as our development of the issue map progresses.
The ideas that respond to this question aren’t immediately obvious. This isn’t unusual: as I’ve mentioned elsewhere, points can only be made sequentially – one after the other – when expressed in prose. In some cases one may have to read a piece in its entirety to figure out the elements that respond to a root (or any other) question.
In the case at hand, the response to the root question stands out clearly after a quick browse through the article. It is: IT is an infrastructural technology.
The map with the root question and the response is shown in Figure 1.
Moving on, what arguments does Carr offer for (pros) and against (cons) this idea? A reading of the article reveals one con and four pros. Let’s look at the cons first:
- IT (which I take to mean software) is complex and malleable, unlike other infrastructural technologies. This point is mentioned, in passing, on the third page of the article: “Although more complex and malleable than its predecessors, IT has all the hallmarks of an infrastructural technology…”
The arguments supporting the idea that IT is an infrastructural technology are:
- The evolution of IT closely mirrors that of other infrastructural technologies such as electricity and rail. Although this point encompasses the other points made below, I think it merits a separate mention because the analogies are quite striking. Carr makes a very persuasive, well-researched case supporting this point.
- IT is highly replicable. This point needs no further elaboration, I think.
- IT is a transport mechanism for digital information. This is true, at least as far as network and messaging infrastructure is concerned.
- Cost effectiveness increases as IT services are shared. This is true too, provided it is understood that flexibility is lost when services are shared.
The map, incorporating the pros and cons, is shown in Figure 2.
Now that the arguments for and against the notion that IT is an infrastructural technology are laid out, let’s look at the article again, this time with an eye out for any other issues (questions) raised.
The first question is an obvious one: What are the consequences of IT being an infrastructural technology?
Another point to be considered is the role of proprietary technologies, which – by definition – aren’t infrastructural. The same holds true for custom-built applications. This raises the question: if IT is an infrastructural technology, how do proprietary and custom-built applications fit in?
The map, with these questions added, is shown in Figure 3.
Let’s now look at the ideas that respond to these two questions.
A point that Carr makes early in the article is that the strategic value of IT is diminishing. This is essentially a consequence of the notion that IT is an infrastructural technology. This idea is supported by the following arguments:
- IT is ubiquitous – it is everywhere, at least in the business world.
- Everyone uses it in the same way. This implies that no one gets a strategic advantage from using it.
What about proprietary technologies and custom apps? Carr reckons these are:
- Doomed to economic obsolescence. This idea is supported by the argument that these apps are too expensive and are hard to maintain.
- Related to the above, these will be replaced by generic apps that incorporate best practices. This trend is already evident in the increasing number of enterprise-type applications that are offered as services. The advantages of these are that they a) cost little, b) can be offered over the web and c) spare the client all those painful maintenance headaches.
The map incorporating these ideas and their supporting arguments is shown in Figure 4.
Finally, after painting this somewhat gloomy picture (to a corporate IT minion, such as me) Carr asks and answers the question: How should organisations deal with the changing role of IT (from strategic to operational)? His answers are:
- Reduce IT spend.
- Buy only proven technology – follow don’t lead.
- Focus on (operational) vulnerabilities rather than (strategic) opportunities.
The map incorporating this question and the ideas that respond to it is shown in Figure 5, which is also the final map (click on the graphic to view a full-sized image).
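For readers who’d like a text-only view of the same structure, here’s a minimal sketch of the finished map as a nested data structure in Python. To be clear, this is my own illustrative representation, not Compendium’s format: the Node class and the q, idea, pro and con helpers are names I’ve made up, and the node texts simply paraphrase the map elements described above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    kind: str                       # "question", "idea", "pro" or "con"
    text: str
    children: List["Node"] = field(default_factory=list)


# Shorthand constructors for the four IBIS node types used in the map
def q(text, *children):
    return Node("question", text, list(children))

def idea(text, *children):
    return Node("idea", text, list(children))

def pro(text):
    return Node("pro", text)

def con(text):
    return Node("con", text)


issue_map = q(
    "Why doesn't IT matter?",
    idea(
        "IT is an infrastructural technology",
        pro("Its evolution closely mirrors that of electricity and rail"),
        pro("IT is highly replicable"),
        pro("IT is a transport mechanism for digital information"),
        pro("Cost effectiveness increases as IT services are shared"),
        con("IT (software) is complex and malleable, unlike other infrastructural technologies"),
        q(
            "What are the consequences of IT being an infrastructural technology?",
            idea(
                "The strategic value of IT is diminishing",
                pro("IT is ubiquitous"),
                pro("Everyone uses IT in the same way, so no one gains a strategic advantage"),
            ),
        ),
        q(
            "How do proprietary and custom-built applications fit in?",
            idea(
                "They are doomed to economic obsolescence",
                pro("They are expensive and hard to maintain"),
            ),
            idea("They will be replaced by generic applications that incorporate best practices"),
        ),
        q(
            "How should organisations deal with the changing role of IT?",
            idea("Reduce IT spend"),
            idea("Buy only proven technology - follow, don't lead"),
            idea("Focus on operational vulnerabilities rather than strategic opportunities"),
        ),
    ),
)


def print_map(node, depth=0):
    """Print the map as an indented outline, one node per line."""
    print("  " * depth + "[" + node.kind + "] " + node.text)
    for child in node.children:
        print_map(child, depth + 1)


print_map(issue_map)
```

Running the script prints the map as an indented outline – a rough, plain-text stand-in for the figure.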
With the map completed, I’m essentially done with this post. Before closing, however, I’d like to mention a couple of general points that arise from issue mapping prose pieces.
Figure 5 is my interpretation of the article. I should emphasise that my interpretation may not coincide with what Carr intended to convey (in fact, it probably doesn’t). This highlights an important, if obvious, point: what a writer intends to convey in his or her writing may not coincide with how readers interpret it. Even worse, different readers may interpret a piece differently. Writers need to write with an awareness of the potential for being misunderstood. So, my first point is that issue maps can help writers clarify and improve the quality of their reasoning before they cast it in prose.
Issue maps sketch out the logical skeleton or framework of argumentative prose. As such, they can help highlight weak points of arguments. For example, in the above article Carr glosses over the complexity and malleability of software. This is a weak point of the argument, because it is a key difference between IT and traditional infrastructural technologies. Thus my second point is that issue maps can help readers visualise weak links in arguments which might have been obscured by rhetoric and persuasive writing.
To conclude, issue maps are valuable to writers and readers: writers can use issue maps to improve the quality of their arguments before committing them to writing, and readers can use such maps to understand arguments that have been thus committed.
Managing participant motivation in knowledge management projects
Introduction
One consequence of the increasing awareness of knowledge as an organisational asset is that many organisations have launched projects aimed at managing knowledge. Unfortunately, a large number of these efforts focus entirely on technical solutions, neglecting the need for employee participation. The latter is important; as stated in this paper, published a decade ago, “Knowledge transfer is about connection not collection, and that connection ultimately depends on choice made by individuals…” This suggests that participant motivation is a key success factor for knowledge management initiatives. A recent paper entitled, Considering Participant Motivation in Knowledge Management Projects, by Allen Whittom and Marie-Christine Roy looks at theories of motivation in the context of knowledge management projects. This post is a summary and annotated review of the paper.
Many researchers claim that the failure rate of knowledge management projects is high, but there seems to be some confusion as to just how high the figure is (see this paper, for example). In the introduction to their paper, Whittom and Roy claim that the failure rate may be higher than 80% – but they offer no proof. Still, with many independent researchers quoting figures ranging from 50 to 80%, one can take it that the matter merits investigation. Accordingly, many researchers have looked at causes of failure of knowledge management projects (see this paper or this one). Some specifically identify lack of participant motivation as a cause of failure (see this paper). Whittom and Roy claim that despite the work done thus far, knowledge management research does not provide any suggestions as to how motivation is to be managed in such projects. Their aim, therefore, is to:
- Present concepts from theories of motivation that are relevant to knowledge management projects.
- Propose ways in which project managers can foster participant motivation in a way that is consistent with business objectives.
These points are covered in the next two sections. The final section presents some concluding remarks.
Theoretical Overview
Motivation and Knowledge Transfer
The authors define motivation as the underlying reason(s) for a person’s actions. Motivation is usually classified as extrinsic or intrinsic depending on whether its source is external or internal to the individual. People who are extrinsically motivated are driven by rewards such as bonuses or promotions – they work for the reward. On the other hand, intrinsically motivated individuals are self-driven, and need little supervision. Their enthusiasm, however, depends on whether or not their personal goals are congruent with the task at hand. This is important: their aims and objectives may not always be aligned with business goals. Further, intrinsically motivated individuals perform creative or complex tasks better than others, but this type of motivation varies greatly from one person to another and cannot be controlled by management. See my post on motivation in project management for a comprehensive discussion on extrinsic and intrinsic motivation.
The authors then discuss the link between motivation and the willingness to share knowledge. Knowledge falls into two categories: tacit and explicit. Tacit knowledge is hard to codify and communicate (e.g. a skill, such as riding a bicycle) whereas explicit knowledge can be formalised and transmitted (e.g. how to open a bank account). Tacit knowledge is in “people’s heads” and is consequently harder to capture. More often than not, though, it turns out to be more valuable than explicit knowledge. In their paper entitled, Motivation, Knowledge Transfer and Organisational Forms, Osterloh and Frey state that, “…Intrinsic motivation is crucial when tacit knowledge in and between teams must be transferred…” Following this work, Gartner researchers Morello and Caldwell proposed a model in which intrinsic motivation drives the creation and sharing of tacit knowledge, which in turn drives the dissemination and use of that knowledge in the organisation (I couldn’t find a publicly available copy of their work – but there is an illustration of the model in Figure 1 of Whittom and Roy’s paper).
The message from motivation research is clear: intrinsic motivation is critical to the success of knowledge management projects.
Rewards and Recognition
Rewards and recognition are “levers of motivation”: they can be used to enhance and direct employee motivation towards achieving organisational goals. Reward systems are aimed at aligning individual efforts with organisational objectives. Recognition systems, on the other hand, are designed to express public appreciation for high standards of achievement or competence. These may be set according to criteria that diverge from preset objectives (for example, a public thanks for a job well done can be given irrespective of whether the job is in line with company objectives).
Rewards can be extrinsic (not related to the task) or intrinsic (related to the task), and material or non-material. Extrinsic rewards are typically material – i.e. they involve giving the recipient something tangible. Financial incentives are the most common form of extrinsic reward because they are easily administered through the pay system. Extrinsic rewards can also be non-financial (gift certificates or a meal at a nice restaurant, for example). For the same investment, non-financial rewards are found to have a more lasting effect than financial ones. This makes sense: people are more likely to remember a memorable meal than a few-hundred-dollar raise; the latter is often forgotten as soon as it comes into effect. A downside of financial rewards is that they may actually decrease intrinsic motivation (see this paper by David Beswick). Another is that they may encourage sub-standard work, particularly in cases where benchmarks are based on volume rather than quality of output.
Extrinsic rewards can also be non-material – promotions and training opportunities, for example (see this paper by Wolfgang Semar for more on non-material, extrinsic rewards).
Intrinsic rewards generally pertain to the satisfaction derived from performing a task. The moral satisfaction arising from a job done well is also a form of intrinsic reward. It should be clear that these rewards work only for intrinsically motivated individuals. Intrinsic rewards are invariably non-material and they cannot be controlled by management. However, awareness of factors influencing intrinsic motivation can help managers create the right environment for intrinsically motivated individuals. Kenneth Thomas, in his book entitled, Intrinsic Motivation at Work – Building Energy and Commitment, identifies four psychological factors that can influence intrinsic motivation. They are:
- Feelings of accomplishment: These can be enhanced by devising interesting work tasks and aligning them with employee interests.
- Feelings of autonomy: These can be enhanced by empowering employees with responsibility and authority to do their work.
- Feelings of competence: These can be enhanced by offering employees opportunities to demonstrate and enhance their expertise.
- Feelings of progress: These can be enhanced by fostering a collaborative atmosphere in which project successes are celebrated.
These factors are (to an extent) under management control. If nothing else, it is worth being aware of them so that one can avoid doing things that might reduce intrinsic motivation.
Motivation crowding and psychological contracts
The authors then examine the effects of rewards on intrinsic motivation in the context of knowledge management projects (recall that intrinsic motivation was seen to be a key success factor in such projects). They use motivation crowding theory to frame their discussion. Crowding theory suggests that intrinsic motivation can be enhanced (“crowded-in”) or undermined (“crowded-out”) by external rewards.
To understand motivation crowding, one has to look at how extrinsic (or external) rewards work. Basically there are two ways in which an extrinsic reward can be perceived. To quote from the paper,
External interventions, such as rewards, may influence this perception either through information or control. If people see a reward as being related to their competence (information), intrinsic motivation for the task will be encouraged or maintained. On the other hand, if they see a reward as a way to control their performance or autonomy, intrinsic motivation would be decreased.
Extrinsic rewards can have a positive or negative effect on information and control. This is best understood through an example: consider a company that announces cash incentives for the top three contributors to a knowledge database. This reward has a positive control aspect (i.e. it encourages participation) but a negative information aspect (i.e. it says nothing about the quality of contributions). Consequently, the reward encourages a high volume of contributions with no regard to quality. This situation typically undermines or “crowds-out” intrinsic motivation. Note that motivation “crowding out” is sometimes referred to as motivation eviction in the literature.
Crowding-out is also seen in recurring tasks. For example, if a monetary incentive is offered for a task, there will be an expectation that the incentive be offered the next time around. On the other hand, non-monetary interventions such as increased employee involvement and autonomy in project decision making can “crowd-in” or enhance intrinsic motivation.
These effects are intuitively quite obvious, but it’s interesting to see them from a social science / economics point of view. If you’d like to find out more, I highly recommend the paper, Motivation crowding theory: A survey of empirical evidence, by Bruno Frey and Reto Jegen.
The take-home lesson from the above is that intrinsic motivation can sometimes be negatively affected by external rewards. Manager, beware.
Whittom and Roy also discuss the notion of psychological contracts between employer and employee. These contracts, distinct from formal employment contracts, refer to the unstated but implied mutual obligations pertaining to respect, autonomy, work ethic, fairness and so on. An employee’s intrinsic motivation can be greatly reduced if he or she perceives that the contract has been breached. For example, if an employee’s suggestions regarding improvements to a knowledge database are ignored, she might feel undervalued. In her eyes, management (and hence the organisation) has lost credibility, and the psychological contract has been violated. In psychological contract theory, personal relationships are seen to be an important driver of intrinsic motivation: people are more likely to enjoy working in teams in which they have good relations with team members.
Discussion
Practices to foster intrinsic motivation
One conclusion from the aforementioned theories is that intrinsic motivation is essential for the transfer of tacit knowledge. Accordingly, the authors suggest the following practices to maintain and enhance intrinsic motivation of employees involved in knowledge management projects:
- Avoid the use of monetary rewards. Instead, use non-monetary rewards that recognise competence. Monetary rewards may also encourage the transfer of unimportant knowledge.
- Involve employees in the formulation of project objectives.
- Encourage team work and team bonding. A good team dynamic encourages the sharing of tacit knowledge. The technique of dialogue mapping facilitates the sharing and capture of knowledge in a team environment.
- Emphasise how the employee might benefit from the project – the old WIIFM (“what’s in it for me”) factor. This needs to be done in a way that shows how the benefit is integrated into the organisation’s culture – i.e. the benefit must be a realistic and believable one, else the employee will see right through it.
- Maintain good communication between management and employees. This one is a “usual suspect” that comes up in virtually all best-practice recommendations; unfortunately, it is seldom done right.
Contextual recommendations based on knowledge and motivation types
Theories of motivation indicate that, as far as motivation for knowledge sharing is concerned, one size does not fit all. The particular strategy used depends on the nature of the knowledge that is being captured (tacit or explicit), participants’ motivational drivers (intrinsic or extrinsic) and organisational resources. Based on this, the authors discuss the following contexts:
- Tacit knowledge management / intrinsic motivation: This is an ideal situation. Here the manager’s role is to support participants in achieving project objectives rather than to influence their behaviour through rewards. Extrinsic rewards should be avoided because participants are intrinsically motivated.
- Tacit knowledge management / extrinsic motivation: From the preceding discussion of motivation theories, it is clear that this is not a good situation. However, all is not lost. A manager can develop knowledge management strategies based on structured training, discussion groups etc. to help codify and transfer tacit knowledge. These strategies should highlight the project benefits (for the employee and the organisation). Further, extrinsic rewards can be offered, but their “crowding-out” effect over time should be kept in mind.
- Explicit knowledge / intrinsic motivation: Here the knowledge management aspect is easier because the knowledge is explicit. Typically, once the objectives are identified, it is clear how knowledge should be captured and organised. Obviously, structured training and tools such as wikis and databases can help facilitate knowledge transfer. Further, these will be more effective than in case (2) above, because the participants are intrinsically motivated. Recommendations, as far as rewards are concerned, are the same as in the first case.
- Explicit knowledge / extrinsic motivation: For knowledge management the same considerations apply as in case (3). However, these strategies will be less effective because employees are extrinsically motivated. For rewards management, the considerations of case (2) apply.
As discussed above, the motivation strategy should be determined by whether the team members are intrinsically or extrinsically motivated. Unfortunately, though, the strategy is often dictated by the culture of the organisation – the manager may have little say in determining it. The authors do not discuss what a manager might do in such a situation.
Conclusion
The paper presents no new data or analysis of existing data. As such, it must be evaluated on the basis of the new concepts and theoretical constructs it presents. From this perspective there’s little that’s new in this paper. That said, project managers leading knowledge management projects might find the paper a worthwhile read because of its coverage of motivation theories (crowding theory and psychological contracts, in particular).
Let me end with an extrapolation of the above discussion to software projects. The holy grail of knowledge management initiatives is to capture tacit knowledge. By definition, this knowledge is difficult to codify. One sees something similar in requirements gathering for application software. The analyst needs to capture all the explicit and tacit process knowledge that’s in users’ heads. The former is easy to capture; the latter isn’t. As a result requirements usually do not capture tacit process knowledge. This is one aspect of what Brooks referred to as the essential problem of software design – figuring out what the software really needs to do (see this post for more on Brooks’ argument). Well designed software embodies both kinds of knowledge, so software projects are knowledge management projects in a sense. As far as motivation is concerned, therefore, the theories and conclusions sketched above should apply to software projects. An intrinsically motivated development team will improve the chances of success greatly; a trite statement perhaps, but one that may resonate with those who have had the privilege of working with such teams.
The legacy of legacy software
Introduction
On a recent ramble through Google Scholar, I stumbled on a fascinating paper by Michael Mahoney entitled, What Makes the History of Software Hard. History can offer interesting perspectives on the practice of a profession. So it is with this paper. In this post I review the paper, with an emphasis on the insights it provides into the practice of software development.
Mahoney’s thesis is that,
The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs….
In other words, software – particularly application software – embodies real-world practices. As a consequence,
…the models and tools that constitute software reflect the histories of the communities that created them and cannot be understood without knowledge of those histories, which extend beyond computers and computing to encompass the full range of human activities…
This, according to Mahoney, is what makes the history of software hard.
The standard history of computing
The standard (textbook) history of computing is hardware-focused: a history of computers rather than computing. The textbook version follows a familiar tune, starting with the abacus and working its way up via analog computers, ENIAC, mainframes, micros, PCs and so forth. Further, the standard narrative suggests that each of these was invented in order to satisfy a pre-existing demand, which makes their appearance seem almost inevitable. In Mahoney’s words,
…Just as it places all earlier calculating devices on one or more lines leading toward the electronic digital computer, as if they were somehow all headed in its direction, so too it pulls together the various contexts in which the devices were built, as if they constituted a growing demand for the invention of the computer and as if its appearance was a response to that demand.
Mahoney says that this is misleading because,
…If people have been waiting for the computer to appear as the desired solution to their problems, it is not surprising that they then make use of it when it appears, or indeed that they know how to use it…
Further, it
…sets up a narrative of revolutionary impact, in which the computer is brought to bear on one area after another, in each case with radically transformative effect…
The second point – revolutionary impact – is interesting because we still suffer its fallout: just about every issue of any trade journal has an article hyping the Next Big Computing Revolution. It seems that their writers are simply taking their cues from history. Mahoney puts it very well,
One can hardly pick up a journal in computing today without encountering some sort of revolution in the making, usually proclaimed by someone with something to sell. Critical readers recognise most of it as hype based more on future promise than present performance…
The problem with revolutions, as Mahoney notes, is that they attempt to erase (or rewrite) history, ignoring the real continuities and connections between the present and the past,
Nothing is in fact unprecedented, if only because we use precedents to recognise, accommodate and shape the new…
CIOs and other decision makers, take note!
But what about software?
The standard history of computing doesn’t say much about software,
To the extent that the standard narrative covers software, the story follows the generations of machines, with an emphasis on systems software, beginning with programming languages and touching—in most cases, just touching—on operating systems, at least up to the appearance of time-sharing. With a nod toward Unix in the 1970s, the story moves quickly to personal computing software and the story of Microsoft, seldom probing deep enough to reveal the roots of that software in the earlier period.
As far as applications software is concerned – whether in construction, airline ticketing or retail – the only accounts that exist are those of pioneering systems such as the Sabre reservation system. Typically these accounts focus on the system being built, excluding any context and connection to the past. There are some good “pioneer-style” histories: an example is Scott Rosenberg’s book Dreaming in Code – an account of the Chandler software project. But these are exceptions rather than the rule.
In the revolutionary model, people react to computers. In reality, though, it’s the opposite: people figure out ways to use computers in their areas of expertise. They design and implement programs to make computers do useful things. In doing so, they make choices:
Hence, the history of computing, especially of software, should strive to preserve human agency by structuring its narratives around people facing choices and making decisions instead of around impersonal forces pushing people in a predetermined direction. Both the choices and the decisions are constrained by the limits and possibilities of the state of the art at the time, and the state of the art embodies its history to that point.
The early machines of the 1940s and 50s were almost solely dedicated to numerical computations in the mathematical and physical sciences. Thereafter, as computing became more “mainstream”, other communities of practitioners started to look at how they might use computers:
These different groups saw different possibilities in the computer, and they had different experiences as they sought to realize those possibilities, often translating those experiences into demands on the computing community, which itself was only taking shape at the time.
But these different communities have their own histories and ways of doing things – i.e. their own, unique worlds. To create software that models these worlds, the worlds have to be translated into terms the computer can “understand” and work with. This translation is the process of software design. The software models thus created embody practices that have evolved over time. Hence, the models also reflect the histories of the communities that create them.
Models are imperfect
There is a gap between models and reality, though. As Mahoney states,
…Programming is where enthusiasm meets reality. The enduring experience of the communities of computing has been the huge gap between what we can imagine computers doing and what we can actually make them do.
This led to the notion of a “software crisis” and calls to reform the process of software development, which in turn gave rise to the discipline of software engineering. Many improvements resulted: better tools, more effective project management, high-level languages etc. But all these, as Brooks pointed out in his classic paper, addressed issues of implementation (writing code), not those of design (translating reality into computable representations). As Mahoney states,
…putting a portion of the world into the computer means designing an operative representation of that portion of the world that captures what we take to be its essential features. This has proved, as I say, no easy task; on the contrary it has proved difficult, frustrating and in some cases disastrous.
The problem facing the software historian is that he or she has to uncover the problem context and reality as perceived by the software designer, and thus reach an understanding of the design choices made. This is hard to do because that context is implicit in the software artefact the historian studies. Documentation is rarely any help here because,
…what programs do and what the documentation says they do are not always the same thing. Here, in a very real sense, the historian inherits the problems of software maintenance: the farther the program lies from its creators, the more difficult it is to discern its architecture and the design decisions that inform it.
There are two problems here:
- Software embodies a model of some aspect of reality.
- The only explanation of the model is the software itself.
As Mahoney puts it,
Legacy code is not just old code, but rather a continuing enactment, an operative representation, of the domain knowledge embodied in it. That may explain the difficulties software engineers have experienced in upgrading and replacing older systems.
Most software professionals will recognise the truth of this statement.
The legacy of legacy code
The problem is that new systems promise much, but are expensive and pose too many risks. As always, continuity must be maintained, but this is nigh impossible because no one quite understands the legacy bequeathed by legacy code: what it does, how it does it and why it was designed so. So, customers play it safe and legacy code lives on. Despite all the advances in software engineering, software migrations and upgrades remain fraught with problems.
Mahoney concludes with the following play on the word “legacy”,
This situation (the gap between the old and the new) should be of common interest to computer people and to historians. Historians will want to know how it developed over several decades and why software systems have not kept pace with advances in hardware. That is, historians are interested in the legacy. Even as computer scientists wrestle with a solution to the problem the legacy poses, they must learn to live with it. It is part of their history, and the better they understand it, the better they will be able to move on from it.
This last point should be of interest to those running software development projects in corporate IT environments (and to a lesser extent those developing commercial software). An often unstated (but implicit) requirement is that the delivered software must maintain continuity between the past and the present. This is true even for systems that claim to represent a clean break from the past; one never has the luxury of a completely blank slate – there are always arbitrary constraints placed by legacy systems. As Fred Brooks mentions in his classic article No Silver Bullet,
…In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces…
So, the legacy of legacy software is to add complexity to projects intended to replace it. Mahoney’s concluding line is therefore just as valid for project managers and software designers as it is for historians and computer scientists: project managers and software designers must learn to live with and understand this complexity before they can move on from it.