The case of the missed requirement
It would have been a couple of weeks after the kit tracking system was released that Therese called Mike to report the problem.
“How’re you going, Mike?” she asked, and without waiting to hear his reply, continued, “I’m at a site doing kit allocations and I can’t find the screen that will let me allocate sub-kits.”
“What’s a sub-kit?” Mike was flummoxed; it was the first time he’d heard the term. It hadn’t come up during any of the analysis sessions, tests, or any of the countless conversations he’d had with end-users during development.
“Well, we occasionally have to break open kits and allocate different parts of them to different sites,” said Therese. “When this happens, we need to keep track of which site has which part.”
“Sorry Therese, but this never came up during any of the requirements sessions, so there is no screen.”
“What do I do? I have to record this somehow.” She was upset, and understandably so.
“Look,” said Mike, “could you make a note of the sub-kit allocations on paper – or better yet, in Excel?”
“Yeah, I could do that if I have to.”
“Great. Just be sure to record the kit identifier and which part of the kit is allocated to which site. We’ll have a chat about the sub-kit allocation process when you are back from your site visit. Once I understand the process, I should be able to have it programmed in a couple of days. When will you be back?”
“Tomorrow,” said Therese.
“OK, I’ll book something for tomorrow afternoon.”
The conversation concluded with the usual pleasantries.
After Mike hung up he wondered how they could have missed such an evidently important requirement. The application had been developed in close consultation with users. The requirements sessions had involved more than half the user community. How had they forgotten to mention such an important requirement and, more importantly, how had he and the other analyst not asked the question, “Are kits ever divided up between sites?”
Mike and Therese had their chat the next day. As it turned out, Mike’s off-the-cuff estimate was off by a long way. It took him over a week to add in the sub-kit functionality, and another day or so to import all the data that users had entered in Excel (and paper!) whilst the screens were being built.
The missing requirement turned out to be a pretty expensive omission.
The story of Therese and Mike may ring true with those who are involved with software development. Gathering requirements is an error-prone process: users forget to mention things, and analysts don’t always ask the right questions. This is one reason why iterative development is superior to big design up front (BDUF) approaches: the former offers many more opportunities for interaction between users and analysts, and hence many more opportunities to catch those elusive requirements.
Yet, although Mike had used a joint development approach, with plenty of interaction between users and developers, this important requirement had been overlooked.
Further, as Mike’s experience corroborates, fixing issues associated with missing requirements can be expensive.
Fact 25 in the book goes: Missing requirements are the hardest requirements errors to correct.
In his discussion of the above, Glass has this to say:
Why are missing requirements so devastating to problem solution? Because each requirement contributes to the level of difficulty of solving a problem, and the interaction among all those requirements quickly escalates the complexity of the problem’s solution. The omission of one requirement may balloon into failing to consider a whole host of problems in designing a solution.
Of course, by definition, missing requirements are hard to test for. Glass continues:
Why are missing requirements hard to detect and correct? Because the most basic portion of the error removal process in software is requirements-driven. We define test cases to verify that each requirement in the problem solution has been satisfied. If a requirement is not present, it will not appear in the specification and, therefore, will not be checked during any of the specification-driven reviews or inspections; further there will be no test cases built to verify its satisfaction. Thus the most basic error removal approaches will fail to detect its absence.
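Glass’s point can be illustrated with a toy sketch. Assume a hypothetical kit tracking module (all names here – `KitTracker`, `allocate_kit`, and so on – are invented for illustration, not taken from Mike’s system). Every test below traces back to a stated requirement, so the whole suite passes even though sub-kit allocation is never exercised:

```python
# Sketch of requirements-driven testing: each test verifies one *stated*
# requirement. An unstated requirement generates no test at all, so its
# absence cannot be detected by this process.

class KitTracker:
    """Hypothetical kit tracking system (invented for illustration)."""

    def __init__(self):
        self.allocations = {}  # kit_id -> site

    def allocate_kit(self, kit_id, site):
        # Stated requirement: a whole kit is allocated to a single site.
        self.allocations[kit_id] = site

    def site_for(self, kit_id):
        return self.allocations.get(kit_id)


# One test per stated requirement:
def test_allocate_kit():
    t = KitTracker()
    t.allocate_kit("KIT-001", "Site A")
    assert t.site_for("KIT-001") == "Site A"

def test_reallocate_kit():
    t = KitTracker()
    t.allocate_kit("KIT-001", "Site A")
    t.allocate_kit("KIT-001", "Site B")
    assert t.site_for("KIT-001") == "Site B"

# No requirement ever mentioned splitting a kit across sites, so no test
# exercises it -- the suite passes and the omission goes unnoticed.
test_allocate_kit()
test_reallocate_kit()
print("all requirements-driven tests pass")
```

The suite is green, yet the sub-kit scenario is invisible to it – which is exactly why such errors escape review, inspection, and testing alike.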
As a corollary to the above fact, Glass states that:
The most persistent software errors – those that escape the testing process and persist into the production version of the software – are errors of omitted logic. Missing requirements result in omitted logic.
In his research, Glass found that 30% of persistent errors were errors of omitted logic! It is pretty clear why these errors persist – because it is difficult to test for something that isn’t there. In the story above, the error would have remained undetected until someone needed to allocate sub-kits – something not done very often. This is probably why Therese and other users forgot to mention it. Why the analysts didn’t ask is another question: it is their job to ask questions that will catch such elusive requirements. And before Mike reads this and cries foul, I should admit that I was the other analyst on the project, and I have absolutely no defence to offer.