5 Computer Science Papers That Changed How I Write Code

From accidental complexity to Conway’s Law, here are the five papers and crucial concepts that influenced the way I think about programming.
Meriam Kharbat
Expert Columnist
September 11, 2020

In this article, I will list five timeless computer science papers that influenced how I write code and think about programming.

These papers are all approachable and easy to digest, and if you have a programming or product background, they will resonate with you.

 

No Silver Bullet – Essence and Accidents of Software Engineering by Fred Brooks

Link to the paper.

This is a famous and widely discussed paper by Fred Brooks, who also happens to be the author of The Mythical Man-Month.

This paper attempts to explain the limitations of software development, whose difficulties Brooks splits into two categories: essential complexity, which is inherent to the nature of the problem the software is trying to solve, and accidental complexity, which arises from the programming languages and infrastructure we use to solve it.

Brooks argues that “there is no single development, in either technology or management technique, which by itself promises even an order of magnitude improvement in productivity, in reliability, in simplicity.”

For me, the takeaway from this paper is to avoid hype-driven development and focus on sound software engineering practices instead. Reusing existing components, rapid prototyping, iterating on software requirements and knowledge sharing within the team are what make tackling complex challenges somewhat easier.

 

Out of the Tar Pit by Ben Moseley and Peter Marks

Link to the paper.

“Out of the Tar Pit” is an interesting philosophical paper. It’s a long read, but it’s still approachable and covers many exciting topics. Ben Moseley and Peter Marks build upon Brooks’ complexity definitions in “No Silver Bullet,” “but disagree with his premise that most complexity remaining in contemporary systems is essential.”

They go on to demonstrate how state management and control logic are at the heart of accidental complexity. They argue that Object-Oriented Programming is ill-equipped to avoid complexity since it relies on mutable state contained within objects. The paper concludes that functional programming is the best paradigm for avoiding accidental complexity.
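To make the state argument concrete, here is a minimal Python sketch (my own illustration, not from the paper): the object-oriented version hides mutable state inside the object, so its behavior depends on the history of calls, while the functional version depends only on its inputs.

```python
# Stateful, object-oriented style: correctness depends on the hidden
# history of mutations performed on the object.
class ShoppingCart:
    def __init__(self):
        self.items = []  # mutable internal state

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# Pure, functional style: no hidden state, the output is fully
# determined by the input, which makes it trivial to test and reason about.
def cart_total(items):
    return sum(price for _, price in items)


print(cart_total([("book", 12), ("pen", 3)]))  # 15
```

The pure function can be called anywhere, in any order, with the same result for the same input, which is exactly the property Moseley and Marks argue reduces accidental complexity.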

Functional programming has gained a lot of traction over the years, and this is an excellent paper to read on the subject.

The key takeaway from this paper is to avoid complexity. In practice, the first solution I have for a problem is always unnecessarily complicated, but after a few polishing iterations, it becomes cleaner and more elegant. It’s similar to writing in that sense, where the first draft is never the final piece.

 

A Plea for Lean Software by Niklaus Wirth

Link to the paper.

This paper was published in 1995, but it remains just as relevant today. It provides another perspective on the subject of software complexity.

According to Niklaus Wirth, software projects are getting out of control. On the one hand, it’s because the hardware is getting faster; in fact, he notes that, “Software is getting slower more rapidly than hardware becomes faster.” On the other hand, complexity is caused by not distinguishing between essential features and nice-to-haves.

The latter causes feature bloat, which he ties to “monolithic design,” where people are forced to pay for full-blown software but end up using only a few of its features.

This paper is a great read, not only for software developers but also for product managers, and provides practical advice on lowering product complexity.

It’s also a good reminder that software does not need to be bloated. Some people still access the internet today from mobile phones with limited storage space or places where internet service is unreliable and expensive. These are your future users, and if your web application takes ages to load and is not available offline, it will be completely unusable.

 

Ironies of Automation by Lisanne Bainbridge

Link to the paper.

This research paper by Lisanne Bainbridge was published in Automatica in 1983. It has been widely recognized as pioneering in the field of automation.

In this paper, Bainbridge defines irony as a combination of circumstances the result of which is the direct opposite of what might be expected. One of the ironies she lists is that, although the classic view of automation is to replace human manual efforts, in practice, with highly automated systems, we need highly skilled individuals to monitor these systems.

These principles are still extremely relevant in software development today and especially in the field of DevOps. My takeaway from this paper is to always evaluate if a task is worth automating, and ask what value automation creates.

In the end, automation can be as unreliable as the rest of software development and will require monitoring and human judgment. It might hide systemic deficiencies: Think of a build/test/deploy pipeline that doesn’t catch flaky tests and causes bugs in production. It might even lead to catastrophic results, as J. Paul Reed explains in his article The 737Max and Why Software Engineers Might Want to Pay Attention and his talk on dangerous automation.
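One way to start evaluating whether a task is worth automating is a back-of-envelope calculation (my own heuristic, not from the paper): compare the hours the automation would save over some horizon against the hours it takes to build.

```python
# Naive cost/benefit heuristic for automating a recurring manual task.
def worth_automating(task_minutes, times_per_week, build_hours, horizon_weeks=52):
    """Return True if the hours saved over the horizon exceed the build cost."""
    hours_saved = task_minutes * times_per_week * horizon_weeks / 60
    return hours_saved > build_hours


# A 5-minute task performed 10 times a week saves roughly 43 hours a year,
# so spending 20 hours automating it pays off within the year:
print(worth_automating(5, 10, 20))  # True
```

Notably, this naive formula omits exactly what Bainbridge warns about: the ongoing cost of monitoring, maintaining and understanding the automation, which can easily exceed the time saved.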

 

How Do Committees Invent? by Melvin E. Conway

Link to the paper.

In this paper written in 1968, Melvin E. Conway observed that the design of a system reflects the structure of the organization doing the design, an idea later popularized by Fred Brooks in The Mythical Man-Month and now called Conway’s Law.

We can see this law in action nowadays since more and more teams are embracing remote work. For example, if you’re in the same office, you might not be diligent about writing API documentation. You can afford to skip it because of short communication paths between teammates. In contrast, it is crucial to have better documentation in a remote environment since your colleagues might be working in a different time zone. The changes in the team structure are directly reflected in the code.

This paper led me to reflect on other situations where engineering practices don’t make sense, specifically, in my own experience with microservice architecture. This model is only beneficial for an organization with different teams, each working on independent parts of the product.

Suppose you are in an engineering team that works on different codebase modules that depend on each other. In that case, a microservices architecture will only introduce complexity to merge code and ship releases. Imagine that you would need to make changes in at least two different applications for every product feature. For every release, you have to solve the puzzle of which version of each app is compatible with which. (In this keynote talk, Sarah Novotny illustrates how deeply team structure can affect software development.)
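The version-compatibility puzzle can be sketched in a few lines of Python (a hypothetical example with made-up service names, purely for illustration): once services are released independently, every release requires checking which version pairs can ship together.

```python
# Hypothetical compatibility matrix for two services owned by one team
# but released separately. Each entry maps a "billing" release to the
# set of "checkout" releases it is known to work with.
COMPATIBLE = {
    "billing-2.3": {"checkout-1.8", "checkout-1.9"},
    "billing-2.4": {"checkout-1.9"},
}


def can_ship(billing_version, checkout_version):
    """Return True if these two service versions can be released together."""
    return checkout_version in COMPATIBLE.get(billing_version, set())


print(can_ship("billing-2.4", "checkout-1.9"))  # True
print(can_ship("billing-2.4", "checkout-1.8"))  # False
```

Maintaining such a matrix by hand for every release is exactly the kind of accidental complexity a single codebase with one version number would avoid.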

In the end, you cannot copy and paste a management methodology or an architectural pattern and expect it to work out without understanding the tradeoffs first. Your team might not even have the same problems that others were trying to solve with this architecture, and, therefore, you might create even more issues. My takeaway from this paper is to always keep in mind that organizational structures influence code and vice versa.

 
