Lately, I’ve become very interested in the question of how Kubernetes will shape the future of web architecture.

When I’m discussing architectural decision making with engineering leaders, I push back on the notion that there’s one objectively better way to build a particular system. Good architecture is about business goals and trade-offs. If the implementation you’ve selected fits the business’s goals at the time, the system has a good shot at being successful.

Mainframes were a great idea when hardware was expensive compared to data transmission. Desktop applications were a great idea when hardware became cheap and transmission grew more expensive. Cloud computing became a great idea when connections got faster and connectivity spread, lowering the cost of transmission just as the amount of data to process outgrew the processing power of any single machine. Paradigm shifts in computing are common, but the exact technology that will cause one is difficult to anticipate.

Will Kubernetes become the next technology that so dominates the landscape that any prior method of deploying and scaling services just seems passé? I keep wanting to believe the answer is yes, but I keep looking at the evidence and concluding the answer is no. 

Kubernetes won’t overtake all the previous solutions. Instead, it will split the industry into software engineers who work on large-scale systems and those who don’t.


Kubernetes Shines at a Scale Most Products Will Never Need 

Ask anyone who’s ever tried to build a Kubernetes-based infrastructure from the ground up, and they’ll tell you: These systems are really complex. The basic deployment and scaling configuration of a server is comfortable enough, but then you start getting into networking, state management and moving data in and out of the cluster. Each challenge in Kubernetes seems to have its own set of tooling, which has its own configuration, which has its own set of engineering decisions to argue about.
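
To make the baseline concrete, here’s a minimal sketch of a deployment using the official Kubernetes Python client. The cluster, namespace, names and container image are hypothetical, and a working kubeconfig is assumed; treat it as an illustration rather than a recommended setup.

```python
# Minimal sketch: run three replicas of a stateless web container.
# Assumes an existing cluster and a local kubeconfig; all names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the "scaling" knob: change this number and reapply
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

That much is approachable. The trouble is everything the sketch leaves out: a Service and Ingress to route traffic to those pods, persistent volumes if anything needs to keep state, network policies, secrets and the tooling that manages each of them.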

And for all this you get what, exactly? Does Kubernetes outperform the simpler tools for automation, provisioning and scaling that came before it? Any sensible engineer working in a small to medium-sized organization would compare the benefits to the aggravation — and then wonder why they bothered at all.

 

It’s a Split, Not a Shift

It used to be that all software engineers wrote low-level code. Then the high-level languages built up ecosystems that were so robust you could spend an entire career learning them. 

Over time, the software engineers who concerned themselves with low-level code became a distinctly different group from the software engineers who built applications. They became embedded-systems engineers or kernel developers, while the application developers drifted deeper and deeper into web- and mobile-based development. The two groups can understand each other, but deep experts from one side become complete beginners when they’re asked to work in the other.

That’s what I see developing around Kubernetes: Gradually, those of us who work on shared services and platform infrastructure will work more and more in Kubernetes, while other engineers will spend less and less time thinking about those areas of the stack.

That’s because Kubernetes is complex. But complexity has a purpose in the evolution of systems.


What Makes Systems Complex?

Systems become more complex as their usage expands. When you have a tool that you only use for one thing and in one way, that tool can remain simple for thousands of years. 

Tools that do many things or can do the same thing many different ways need controls around them to ensure they do the right things at the right time in the right way. Complexity, therefore, is a product of scaling the state machine. The more potential states something has, the more moving parts it needs, the more complex it becomes.

Abstraction helps hide complexity by taking a chunk of the state graph, throwing it inside a black box and reducing the conversation about it to just the potential state of that box. For example, when we think about the state graph of a web service, we do not include every single thing that might go wrong with the actual container the service is running in — nor do we include all the potential places that the hardware might fail. When software engineers reason about systems in triage, they tend to only think about these parts of the system in terms of simple binaries: on or off, healthy or unhealthy, failing or passing. The exact state inside the black box is not important.
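
A health endpoint is a small, everyday example of this kind of black box. The sketch below is hypothetical (the internal checks are stand-ins for whatever a real service depends on), but it shows how an arbitrary amount of internal state collapses into the binary a load balancer or orchestrator actually polls.

```python
# Hypothetical health endpoint: many internal states, one binary answer.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-ins for real dependency checks, each hiding its own state graph.
def database_reachable() -> bool:
    return True

def cache_warm() -> bool:
    return True

def queue_backlog_ok() -> bool:
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/healthz":
            self.send_response(404)
            self.end_headers()
            return
        # Whatever is happening inside, the caller only sees healthy or not.
        healthy = database_reachable() and cache_warm() and queue_backlog_ok()
        self.send_response(200 if healthy else 503)
        self.end_headers()
        self.wfile.write(b"ok" if healthy else b"unhealthy")

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```

Kubernetes liveness and readiness probes poll exactly this kind of reduced interface: they ask for the binary answer and never open the box.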


When our abstractions are particularly good and people can navigate the system without ever opening the black box, the set of technologies inside the box becomes the domain of a specialized set of engineers. Eventually there’s a split. Whereas once a technologist would be expected to understand both what the black box was doing and the parts built on top of it, now technologists pick a side and stay there.

In the early days of software, it was not unusual for people who worked on designing software and programming languages to start with a background in electrical engineering. That’s almost unheard of now, because those concepts are so well packaged in various hardware and infrastructure abstractions that none of them impact the normal challenges of writing software anymore.
 

The Natural Bifurcation of Complex Systems

So those who point out that Kubernetes is far too complicated for most systems are correct. Kubernetes will not cause a paradigm shift like cloud computing did. Instead, Kubernetes is going to cause a formal split between software engineers and what we will call platform engineers.

One thing we know from the transition from mainframes to desktop applications and eventually the cloud is that when computation is expensive, people rent time on infrastructure. When computation is cheap and transmission is expensive, more processes move in-house to private data centers and personal machines.

The amount of data being produced is currently outpacing Moore’s Law. This suggests that computation is going to be expensive compared to transmission — and that organizations will continue to move away from building their infrastructure in house and look for greater cost savings in renting. 

Since so many of them are on the cloud already, the next place to trim will be the shared services at the next abstraction layer up from the VMs. This includes things like standing up and maintaining databases, identity management, security groups and monitoring. In other words: Platform as a Service.


The regulatory landscape might speed this up. Additional rules around how private data is managed, who has control over it, where AI can be incorporated and how it needs to be built will make it more and more expensive to handle infrastructure in-house. Already in the defense space, companies are offering specialized platforms that promise a smoother path through the arduous compliance process for the Department of Defense.

The complexity of tools like Kubernetes makes sense for Platform as a Service. Once we’re running a platform as a product to sell to as many organizations as possible, we’re operating at a scale where tools like Kubernetes pay dividends. As more and more organizations turn to Platform as a Service, job opportunities for engineers who know Kubernetes will start to consolidate, too.

Software engineers will no longer be able to dip their toes into Kubernetes while still focusing primarily on general software engineering skills. Instead, platform engineers will continue to refine the abstractions around the Kubernetes ecosystem so that the software engineers employed by the tenant renting time on the platform don’t even know that they are running on Kubernetes.

 

What This Means for Engineers

Are you dooming yourself to legacy modernization hell if you aren’t moving over to Kubernetes right now? That depends on which side of that split you think you and your organization might end up on. 

Do You Need to Learn Kubernetes Now?

If you’re in DevOps, SRE, system administration or any related infrastructure role, then yes: You should be getting comfortable with Kubernetes right now. If your organization sells developer-facing products designed to be rented out, then yes: You’ll want the efficiency boosts on scaling long term. But if you’re not a future platform engineer or a Platform-as-a-Service company, you should be focusing on decoupling the shared service layer from the application layer.

That means focusing more and more on common protocols and standards. That clever SQL query that uses functionality only found in the most recent version of Postgres? When you want someone else running Postgres for you, that's going to be a big blocker.
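
As a hypothetical illustration (the table, connection string and psycopg2 usage here are assumptions, not anything from the original): an upsert written with MERGE only runs on PostgreSQL 15 or newer, a version a managed provider may not offer yet, while the ON CONFLICT form works on any Postgres release a provider is realistically running.

```python
# Hypothetical example of a version-dependent query vs. a widely supported one.
import psycopg2

# Connection string and table are made up for illustration.
conn = psycopg2.connect("postgresql://app:secret@db.example.com/app")

# Version-dependent: MERGE requires PostgreSQL 15+.
MERGE_SQL = """
MERGE INTO accounts AS a
USING (VALUES (%s, %s)) AS v(id, balance)
ON a.id = v.id
WHEN MATCHED THEN UPDATE SET balance = v.balance
WHEN NOT MATCHED THEN INSERT (id, balance) VALUES (v.id, v.balance);
"""

# Widely supported: ON CONFLICT has been in Postgres since 9.5.
UPSERT_SQL = """
INSERT INTO accounts (id, balance) VALUES (%s, %s)
ON CONFLICT (id) DO UPDATE SET balance = EXCLUDED.balance;
"""

with conn, conn.cursor() as cur:
    cur.execute(UPSERT_SQL, (42, 100))
```

The point isn’t this particular feature; it’s that leaning on the newest release narrows who can run the database for you.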

Here’s the wild card: What happens if computation becomes cheaper than data transmission? Won’t that trigger a shift back to in-house operations? Yes, but it won’t trigger a shift back to the cloud-computing approaches we know today. 

When the processing power of desktop computers hit its limits compared to the cost of bandwidth, IBM didn’t see a spike in mainframe orders. As more organizations look to shave off the costs of shipping their data out for analysis, the edge devices that are being developed look nothing like the personal computers that came before them.

It’s hard to see what could make computation cheaper, however. Quantum computing, even if there’s a major breakthrough, will almost certainly be more expensive than existing solutions. 5G, if it’s everything that is expected, will make transmission even cheaper. No, it’s likely that costs of computation will remain high — and infrastructure engineers will find that they keep running into Kubernetes as their careers develop.
