
We say nothing essential about the cathedral when we speak of its stones.
– Antoine de Saint-Exupéry
Software development can be expensive, and health information systems are notorious for their cost. Even open source options require a considerable investment in configuration and maintenance. As a result, the people responsible for managing these systems often treat the code artifacts (classes, routines, DDL) from which they are built as valuable assets in their own right. On one level, this makes sense, because code artifacts are tangible and have a measurable cost. Systems, on the other hand, are emergent phenomena, and their value is much harder to quantify.
At the same time, there are significant dangers in focusing on code artifacts to the exclusion of larger systems. It can lead to harmful practices (or “antipatterns”) such as continuing to patch the same code when defects are identified or new needs arise. This is a problem because code that is initially clear, easy to understand, and maintainable can grow complicated, obscure, and error prone. I’ve seen developers throw up their hands and rewrite entire systems because they had become too difficult to maintain. If managers and product owners will allow it, this can absolutely be the right thing to do, but it can be a hard sell. On the surface, it seems to be wasted effort. Why rewrite an application that is working? It is one thing to come out with enhancements and bug fixes, but a rewrite is neither – or so the reasoning goes.
The counterargument is that the rewritten (or refactored) system will be easier (and less costly) both to maintain and to enhance. But that involves spending money now for potential savings later, and that can be a bitter pill to swallow. Unfortunately, the savings are not easy to quantify, and there is a risk that a system may need to be replaced by a new system with entirely new requirements, making the rewrite an apparent waste of effort. But obsolescence is a risk regardless of whether systems are periodically rewritten (full releases), so this argument is really fallacious. In fact, every system does need to be replaced eventually. Software does have a shelf life, if you will. But when software is properly maintained, its useful life is extended, and when it comes time to replace it, the transition will be easier. The reason for this is twofold: first, the software will be better understood in functional terms – we will have a clearer picture of what it does and how. Second, details such as database schemas and software interfaces will be better understood and more manageable. In essence, when we get too caught up in the details of the bricks, we lose sight of the cathedral built from them. It is the cathedral that has value, not the individual bricks.
This entire argument can be made at different levels of abstraction, too. In cloud computing, the underlying infrastructure of the cloud can be incrementally replaced with no apparent change in functionality (from the user perspective). In health IT, a truly interoperable electronic health record is not tied to any particular product – at least in theory. As we’ve discussed, interoperability in healthcare is difficult to achieve at best, and an unattained goal at worst. But if interoperability can be achieved, if we reach a point where the electronic health record can be divorced from specific implementations (both in terms of hardware and software), then the specific systems become our “bricks” and the EHR itself the cathedral. This means the same arguments for cost savings and improved quality apply.
Tagged: antipatterns, ehr, interoperability, management
While I don’t disagree with what you have said, I feel strongly that “Software does have a shelf life” should be “Software does have a shelf life, but we don’t know what it is and should therefore plan as if it will be there forever.” An analogy I draw is with great cities: London, Paris, and Rome were all major population centers known to Julius Caesar and have existed uninterrupted from his day to today. Yet if he were to come back today, he would not recognize any of them. This is because they evolved continuously – at no time did someone say, “We’re going to replace this city with a new one.” Certainly buildings were torn down, and neighborhoods razed and redeveloped, but that is what evolution means for cities, even if that evolutionary path occasionally bequeaths us streets better suited to yesterday’s horses and wagons than today’s cars and trucks. Certainly better documentation means easier and safer redevelopment – you can dig faster and more safely if you know where the gas mains, pipes, and sewers are – but not having that documentation doesn’t stop us. Just as city leaders plan cities as if they will be around forever, managers should plan software as if it will be around forever. Yes, someday everything must come to an end – even the universe – but we cannot project a date for that end, and therefore should not plan for it.