From early in our careers as programmers, we begin learning specific techniques for solving artificial problems. We call them algorithms. In fact, a course in data structures and algorithms is generally part of the first-year undergraduate curriculum in computer science. I have no quarrel with this. Learning to solve small artificial problems (such as sorting an array, or generating the first few Fibonacci numbers) is an essential step in learning to program. Beyond that, these skills are the same ones we will need when working on programs and systems that are designed to solve real world problems. To draw an analogy with medicine, algorithms are akin to procedures. They are well-rehearsed techniques that can be used to address very specific problems. Performing procedures is only one aspect of patient care. In the same sense, implementing (or even selecting) algorithms is only one aspect of software development.
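To make the point concrete, here is one of those small artificial problems in Python: generating the first few Fibonacci numbers. It is a fine drill, and exactly the kind of well-rehearsed "procedure" described above:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers -- a classic beginner exercise."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Mastering drills like this is necessary, but notice how little it tells us about whether we are solving the right problem in the first place.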

And therein lies the problem. It is easy to become so focused on what we are able to do in a technical sense that we start to lose sight of the real world problem we are trying to solve. I’ve done it. We’ve all done it. It is the most natural thing in the world to look for those aspects of a problem we know how to address and focus our energies there. To go back to our medical analogy, this is treating the symptoms and not the underlying disease.

Of course, modern software engineering has tried to avoid this by defining formal processes. Typically, it goes something like this:

  1. First, we identify the “business” need and “business” requirements. (As an aside, I think these are unfortunate terms, but they are well established.)
  2. Next, we translate those business requirements into technical requirements. That is, requirements that can be addressed through technical means (e.g., code).
  3. Next, we write the code, and produce a working application (or “build”).
  4. We then test the build to see if our requirements are met.
  5. If so, we release our build and, in iterative projects, return to step 1 or 2.
  6. If not, we return to step 3 and try to correct the problems.
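The control flow of the six steps above can be sketched as a toy simulation. Everything here (the function names, the single-defect build, the two iterations) is an invented placeholder, not a real methodology; the point is only how the steps loop back on one another:

```python
def gather_requirements(iteration):
    # Steps 1-2: business need translated into technical requirements.
    # (Hypothetical stand-in: a requirement is just a named feature.)
    return {"feature": f"feature-{iteration}"}

def write_code(reqs):
    # Step 3: produce a build from the requirements.
    # We pretend every fresh build ships with one defect.
    return {"implements": reqs["feature"], "defects": 1}

def passes_tests(build, reqs):
    # Step 4: does the build meet the requirements?
    return build["implements"] == reqs["feature"] and build["defects"] == 0

def fix_defects(build):
    # Step 6: back to coding until the tests pass.
    build["defects"] -= 1
    return build

releases = []
for iteration in range(1, 3):            # iterative projects return to step 1
    reqs = gather_requirements(iteration)
    build = write_code(reqs)
    while not passes_tests(build, reqs): # steps 4 and 6 alternate
        build = fix_defects(build)
    releases.append(build)               # step 5: release

print(len(releases))  # 2
```

Notice that nothing in this loop ever questions whether the requirements themselves were right, which is precisely the gap the next paragraph takes up.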

This all sounds well and good, but in reality, it just pushes the problem back. Do the business requirements really capture the essence of the problem? Sadly, we are often just substituting one formal process for another, and we can be just as blinded by the way we are used to writing requirements as by the way we are used to writing code.

So, what does this mean? For one thing, we can easily become so caught up in executing processes that we lose sight of the problem we were trying to solve in the first place. Just as is the case with algorithms, requirements and processes are technical tools – they are not ends in themselves. This doesn’t mean there is anything wrong with processes, nor does it mean that rigor in software development is a bad thing. But it does mean that if we find our tools and processes are leading us down the wrong path, or leaving fundamental issues unaddressed, then it is time to step back and reconsider our approach. Sometimes our requirements, and even our business processes, need to be reevaluated. Software development is both an art and a science.

This is somewhat controversial. Many people are anxious for software development to be a true engineering discipline, with repeatable processes producing predictable results. This is commendable. It’s also not terribly realistic, at least today. There is progress being made in a number of areas that is bringing us closer. But ultimately, a software developer needs to be able to rely on his or her intuition, and be willing to step back from an unproductive path. This may force us to rethink the culture of IT, and to be more flexible and creative in our work. Again, this is analogous to medicine. If test results don’t “add up,” then it may be time to reconsider our theory of what disease process may be affecting the patient. Similarly, we may have to accept that our “software theory” is wrong, too.