Back in the early 2000s, I started my software development journey as a hobbyist kid who used to develop small hacking programs using Visual Basic 6.0.
I succeeded in writing a couple of programs that functioned correctly but had horrible code. By that, I mean that when I wanted to give a button a blue background and bold text, I'd write the same statements for every individual button in my VB forms. I had no clue about reusability; I was just a copy-paste programmer.
The impact of this coding style was quite clear: Changing a single feature was extremely painful. If I wanted to change a color, I had to do it throughout the whole codebase. It was a bad feeling for me, and as a non-English-speaking kid I had nowhere to turn, given how slow my internet connection was during that period and the lack of quality Arabic-language programming resources.
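Looking back, the fix was simple: move the duplicated statements into one helper. The sketch below is a hypothetical Java reconstruction (the real code was VB6 form code, and the `Button` class and `applyPrimaryStyle` helper are stand-ins I invented for illustration):

```java
public class ButtonStyling {
    // Minimal stand-in for a UI button; the original was a VB6 form control.
    static class Button {
        String background = "default";
        boolean bold = false;
    }

    // One reusable helper replaces the per-button copy-paste.
    // Changing the color here now updates every button at once.
    static void applyPrimaryStyle(Button b) {
        b.background = "blue";
        b.bold = true;
    }

    public static void main(String[] args) {
        Button save = new Button();
        Button cancel = new Button();
        applyPrimaryStyle(save);
        applyPrimaryStyle(cancel);
        System.out.println(save.background + " " + save.bold); // prints: blue true
    }
}
```

With the duplicated statements centralized, the "change a color through the whole codebase" problem becomes a one-line change.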
My Coding Journey
As time progressed, I started learning a couple of other languages, such as C++ and Java. My English became stronger, and I started to develop an understanding of proper procedural programming. I started extracting repeated code into functions, refactoring things into assemblies, avoiding hard-coded parameters, you name it. I also used object-oriented programming (OOP) a lot. Things became a little bit easier, and I was able to centralize many changes as long as they were just parameters or configurations. However, changing certain things remained a pain; fundamental shifts in business logic, for example, required almost a full rewrite.
I was stuck at this level for a couple of years, until I started my professional career and my understanding improved. I learned a handful of design patterns and applied many of them, even though it wasn't always clear when to apply what. Things became much better, and I was mostly able to write code with considerable extensibility.
Still, when I compare my programming style with more advanced code — like that found in the .NET libraries and well-known frameworks — I see authors writing extremely elegant interfaces that cover wide responsibilities. It's highly maintainable code where custom business logic is always encapsulated in replaceable constructs. It's a style you can admire, and it's easy to see how flexible the software becomes if you adopt it.
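To make "replaceable constructs" concrete, here is a minimal sketch of the idea: the business rule lives behind an interface, so the surrounding code never changes when the rule does. The `DiscountPolicy` interface and its implementations are hypothetical names of my own, not taken from any framework:

```java
import java.util.List;

public class ReplaceableLogic {
    // The business rule is hidden behind an interface.
    interface DiscountPolicy {
        double apply(double amount);
    }

    static class NoDiscount implements DiscountPolicy {
        public double apply(double amount) { return amount; }
    }

    static class PercentageDiscount implements DiscountPolicy {
        private final double rate;
        PercentageDiscount(double rate) { this.rate = rate; }
        public double apply(double amount) { return amount * (1.0 - rate); }
    }

    // The checkout code depends only on the interface,
    // so swapping the discount rule requires no change here.
    static double total(List<Double> items, DiscountPolicy policy) {
        double sum = items.stream().mapToDouble(Double::doubleValue).sum();
        return policy.apply(sum);
    }

    public static void main(String[] args) {
        List<Double> cart = List.of(40.0, 60.0);
        System.out.println(total(cart, new NoDiscount()));            // prints: 100.0
        System.out.println(total(cart, new PercentageDiscount(0.1))); // prints: 90.0
    }
}
```

When a fundamental business rule shifts, you write a new implementation of the interface instead of rewriting the callers — which is exactly the flexibility those frameworks demonstrate.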
Learning how to write proper code is a non-trivial investment that requires patience, practice and lots of reading. Here, I'd like to make a case for why it's essential to learn proper coding practices — and how high the cost of lousy code can be.
Bad Code Costs Money
Software code is an intelligent piece of logic that actively responds to different types of data — whether it's accounting invoices, telecommunication records, banking transactions or anything else. If a piece of code fails to satisfy a specific requirement, it can easily damage the organization's bottom line.
In one project I worked on, the company found that it was losing tens of thousands of dollars because of numeric truncation caused by variable types (using int instead of double). Had the developer understood the numeric range and precision behind each variable type, he would have avoided that issue. But who ever cares to dig into the meaning of every variable declaration?
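The bug class is easy to reproduce. The sketch below (a hypothetical illustration in Java, not the project's actual code) shows how integer arithmetic silently drops fractions of a cent that a floating-point type would have kept:

```java
public class TruncationDemo {
    public static void main(String[] args) {
        // Hypothetical billing scenario: $100.00 split three ways.
        int totalCents = 10000;

        // Integer division truncates the remainder silently.
        int shareCents = totalCents / 3;
        System.out.println(shareCents * 3); // prints: 9999 — one cent has vanished

        // A floating-point type preserves the fractional part.
        double preciseShare = totalCents / 3.0;
        System.out.printf("%.2f%n", preciseShare * 3); // prints: 10000.00
    }
}
```

One lost cent looks harmless, but repeated across millions of records it adds up to exactly the kind of loss the company discovered. (For real money handling, an exact decimal type such as `BigDecimal` is the safer choice; the double here only illustrates the truncation.)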
Bad Code Destroys Data
The thing I hate about buggy code is that it not only disrupts your customer's business, but it also saddles you with the extra effort of repairing data that faulty business logic has already persisted. It's like taking a small child to a restaurant and watching him smash a cup: You must not only repay the angry host but clean up the broken pieces as well.
I remember working on a project where a piece of problematic business logic had been implemented. Even though we fixed the bug, the customer complained about incorrect data in the system. We then had to manually query for all the faulty records and fix them to restore the correct system state.
The moral of the story is this: Fixing bugs prevents future problems, but it does not remove the past issues. Even though this seems clear and obvious, you’d be surprised how many developers overlook this part.
Bad Code Costs Lives
Have you ever heard of Mariner 1, the spacecraft that exploded within minutes of launching in 1962? Would you suspect that the issue was because of a faulty engine, lack of fuel or perhaps an incorrect mechanical design? None of those guesses would be correct.
It was a software bug in the navigation system that erroneously extrapolated specific measurements to the wrong flight instructions, and boom: a catastrophe. Can you imagine what it would be like to have that level of responsibility as a coder? Or how much risk buggy code could pose to the navigation systems on an Airbus or Boeing plane? Consider what impact the wrong variable assignment could have on a critical medical device or perhaps a nuclear facility.
Mistakes in these environments can lead to terrible disasters, deaths and even criminal charges. This explains why certain industries are extremely rigorous in their demands for code quality and accuracy. Bad code can, quite literally, end human lives.
Above, I've tried to demonstrate the range of consequences bad code can create, up to and including life-threatening ones. In my next article, I'll dig deeper into the problems bad code can pose for your company's bottom line.