
We say nothing essential about the cathedral when we speak of its stones. 

-Antoine de Saint-Exupéry

Software development can be expensive, and health information systems are notorious for their cost. Even open source options require a considerable investment in configuration and maintenance. As a result, people responsible for managing these systems often treat the code artifacts (classes, routines, DDL) from which they are built as valuable assets in their own right. On one level, this makes sense, because the artifacts are tangible and have a measurable cost. Systems, on the other hand, are emergent phenomena, and their value is much harder to quantify.

At the same time, there are significant dangers in focusing on code artifacts to the exclusion of larger systems. It can lead to harmful practices (or “antipatterns”) such as continuing to patch the same code when defects are identified or new needs arise. This is a problem because code that is initially clear, easy to understand, and easy to maintain can grow more complicated, obscure, and error prone. I’ve seen developers throw up their hands and rewrite entire systems because they have become too difficult to maintain. If managers and product owners will allow it, this can absolutely be the right thing to do, but it can be a hard sell. On the surface, it seems to be wasted effort. Why rewrite an application that is working? It is one thing to come out with enhancements and bug fixes, but a rewrite is neither – or so the reasoning goes.
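To make the antipattern concrete, here is a minimal, hypothetical sketch of how a once-simple routine degrades under repeated patching. The record fields, clinic code, and rules are invented purely for illustration; nothing here is real clinical logic.

```python
# Hypothetical illustration of the "keep patching" antipattern.
# The field names, clinic code, and rules are invented; what matters is the
# shape of the code, not the logic.

def needs_followup(record: dict) -> bool:
    # Original rule: follow up on any abnormal lab result.
    flag = record.get("lab_status") == "abnormal"

    # Patch 1: exclude results that were already reviewed.
    if record.get("reviewed"):
        flag = False

    # Patch 2: one clinic asked for borderline results to be included as well.
    if record.get("clinic") == "EASTSIDE" and record.get("lab_status") == "borderline":
        flag = True

    # Patch 3: a later interface sends status as a numeric code instead of text.
    if record.get("lab_status") in (2, "2"):
        flag = True

    return flag
```

Each patch was a reasonable response to an immediate need, but the accumulated routine now mixes data normalization, general policy, and one site's exception in a single function. A refactor would separate those concerns; repeated patching only piles them higher.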

The counterargument is that the rewritten (or refactored) system will be easier (and less costly) both to maintain and to enhance. But that involves spending money now for potential savings later, and that can be a bitter pill to swallow. Unfortunately, the savings are not easy to quantify, and there is a risk that the system may need to be replaced by a new one with entirely new requirements, making the rewrite an apparent waste of effort. But obsolescence is a risk regardless of whether systems are periodically rewritten (full releases), so this argument is really fallacious. In fact, every system does need to be replaced eventually. Software has a shelf life, if you will. But when software is properly maintained, its useful life is extended, and when it comes time to replace it, the transition will be easier. The reason is twofold: first, the software will be better understood in functional terms; we will have a clearer understanding of what it does and how. Second, details such as database schemas and software interfaces will be better understood and more manageable. In essence, when we get too caught up in the details of the bricks, we lose sight of the cathedral built from them. It is the cathedral that has value, not the individual bricks.

This entire argument can be made at different levels of abstraction, too. In cloud computing, the underlying infrastructure of the cloud can be incrementally replaced with no apparent change in functionality (from the user perspective). In health IT, a truly interoperable electronic health record is not tied to any particular product – at least in theory. As we’ve discussed, interoperability in healthcare is difficult to achieve at best, and an unattained goal at worst. But if interoperability can be achieved, if we reach a point when the electronic health record can be divorced from specific implementations (both in terms of hardware and software), then the specific systems become our “bricks” and the EHR itself the cathedral. This means the same arguments for cost savings and improved quality apply.
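In code, the idea looks something like the sketch below: a stable record-access contract (the cathedral) with interchangeable implementations behind it (the bricks). All of the class and method names are hypothetical, invented for illustration rather than drawn from any real EHR product or standard.

```python
# A minimal sketch of the cathedral-and-bricks idea. The interface is stable;
# the concrete systems behind it can be swapped without touching client code.
# All names are hypothetical.

from abc import ABC, abstractmethod


class RecordStore(ABC):
    """The stable contract the rest of the system depends on (the cathedral)."""

    @abstractmethod
    def get_patient(self, patient_id: str) -> dict:
        ...


class LegacyDatabaseStore(RecordStore):
    """One brick: an implementation tied to a particular legacy database."""

    def get_patient(self, patient_id: str) -> dict:
        return {"id": patient_id, "source": "legacy database"}


class HostedServiceStore(RecordStore):
    """A replacement brick: same contract, different infrastructure."""

    def get_patient(self, patient_id: str) -> dict:
        return {"id": patient_id, "source": "hosted service"}


def print_summary(store: RecordStore, patient_id: str) -> None:
    # Client code sees only the contract, so either implementation will do.
    patient = store.get_patient(patient_id)
    print(f"Patient {patient['id']} retrieved from {patient['source']}")


if __name__ == "__main__":
    print_summary(LegacyDatabaseStore(), "12345")
    print_summary(HostedServiceStore(), "12345")  # same call, different brick
```

Replacing LegacyDatabaseStore with HostedServiceStore changes nothing in print_summary, which is the sense in which the individual systems become replaceable bricks while the record itself endures.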