Physics isn't the first thing most people think of when they think about software development. And yet, physics is hugely important to how software gets distributed. It’s well-documented that data is growing at a staggering rate. Yet consumer demands require companies to manage this growing volume of information so it flows quickly, in as close to real-time as possible, to the end-user or edge. If these large organizations don’t consider the physics of software delivery, however, their development velocity, even if it starts as a flood, will end up a trickle farther down the pipeline. 

Data is often described in terms of “the five V’s”: volume, velocity, variety, veracity and value. The first two are where physics bites. When moving data from a central server to a mobile application consumed by thousands of concurrent users, for example, the entire system serving that data can become strained, creating latency for end users.

This slowdown happens because developers must push artifacts (the building blocks of software) through the delivery pipeline all the way to last-mile deployment. These artifacts aren’t lightweight, either; a single release can compound into hundreds of files totaling gigabytes of binaries and dependencies. Every time an end user accesses this data through an application (or a development team updates it), the application downloads software artifacts from the pipeline. As with files on your mobile device or home computer, larger artifacts take longer to download, and a multitude of files in the pipeline slows the entire process down even further.
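To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The artifact sizes, artifact count and link bandwidth are invented for illustration, not measurements from any real pipeline:

```python
# Back-of-the-envelope estimate of how artifact size and count
# compound into delivery delay. All numbers are illustrative
# assumptions, not measurements from any particular pipeline.

def transfer_seconds(size_mb: float, bandwidth_mbps: float) -> float:
    """Seconds to move size_mb megabytes over a bandwidth_mbps link."""
    return (size_mb * 8) / bandwidth_mbps  # megabytes -> megabits

# A hypothetical release: 40 artifacts averaging 150 MB each.
artifacts = [150.0] * 40
link_mbps = 100.0  # a 100 Mbps last-mile link (assumed)

sequential = sum(transfer_seconds(mb, link_mbps) for mb in artifacts)
print(f"One artifact:  {transfer_seconds(150.0, link_mbps):.0f} s")
print(f"Full release:  {sequential / 60:.0f} min, downloaded sequentially")
```

Even at these modest, invented numbers, a full release ties up the link for roughly eight minutes, and every additional artifact or megabyte stretches that window further.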

Unfortunately, rushing to move data quickly to endpoints and constantly pushing updates through a pipeline often results in high infrastructure costs and latency. Both are bad for business; higher costs directly affect the bottom line, and latency can negatively impact development time, user experience and customer satisfaction.

According to IDC, 50 percent of all new infrastructure will be deployed at the edge within the next few years, serving billions of newly launched products, which will likely exacerbate these issues. Edge computing processes data physically closer to its final destination and reduces the amount of data coming from the primary network, boosting speed and decreasing latency in the process.


Unclogging the Pipeline

So, what can enterprises do to streamline their infrastructure and unclog software pipelines? To overcome the challenging physics of software delivery at scale, enterprises should take two key steps.

First, create a flexible distribution mechanism that is tightly integrated with the software lifecycle via DevOps processes. Using edges for software distribution, for example, gives businesses the flexibility to distribute software across various environments and remote development teams, which is increasingly vital in this era of distributed work.
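As a deliberately simplified illustration of that flexibility, the sketch below routes each consumer to whichever edge node answers fastest. The node names and latency figures are hypothetical, and a real system would use live probes rather than a static table:

```python
# A minimal sketch of "flexible distribution": route each consumer
# to the closest edge node rather than a single central server.
# Node names and latencies below are invented for illustration.

EDGE_NODES = {
    "us-east": 12,    # observed round-trip latency in ms (hypothetical)
    "eu-west": 85,
    "ap-south": 160,
}

def nearest_edge(latencies_ms: dict) -> str:
    """Pick the edge node with the lowest observed latency."""
    return min(latencies_ms, key=latencies_ms.get)

print(nearest_edge(EDGE_NODES))  # -> "us-east"
```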

Second, use a dedicated, highly available network to speed up simultaneous downloads and, in turn, quicken the distribution of software. Today’s businesses are increasingly powered by hybrid infrastructures that span multiple regions, edges and IoT devices, and they need app delivery processes and platforms to account for it all.
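Here is one way that idea can look in code: a sketch that downloads artifacts concurrently and falls back to a second replica when the first is unavailable. The mirror URLs and artifact names are placeholders, not real endpoints:

```python
# A sketch of speeding up simultaneous downloads: fetch artifacts
# concurrently, falling back to a replica if the primary mirror
# fails. URLs below are placeholders, not real endpoints.

from concurrent.futures import ThreadPoolExecutor
import urllib.request

MIRRORS = ["https://edge-1.example.com", "https://edge-2.example.com"]

def fetch(artifact: str) -> bytes:
    """Try each mirror in turn until one serves the artifact."""
    last_error = None
    for mirror in MIRRORS:
        try:
            with urllib.request.urlopen(f"{mirror}/{artifact}") as resp:
                return resp.read()
        except OSError as err:
            last_error = err  # replica unreachable; try the next one
    raise last_error

artifacts = ["app.tar.gz", "config.json", "model.bin"]
with ThreadPoolExecutor(max_workers=8) as pool:
    blobs = list(pool.map(fetch, artifacts))
```

Because the downloads run in parallel and each request can fail over to another replica, no single slow or unavailable mirror stalls the whole delivery.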


Overcoming Distribution Challenges

The good news for companies attempting to overcome the challenges of software physics is that they needn’t reinvent the wheel. A veritable ocean of organizations in the software supply chain have been working on the problem of distributed software for a while and have developed a set of best practices.

For example, the DevOps Institute offers a Continuous Delivery Playbook, which serves as a solid go-to primer on how to speed up DevOps processes. The Cloud Native Computing Foundation (CNCF) provides a snapshot of the cloud-native landscape, a comprehensive and interactive series of charts that organizes the industry’s vendors and platforms by service type (database, key management, observability and analysis, and so on). And IDC has distilled its volumes of research into an infographic, “Accelerate Trusted Distribution of Innovation Everywhere,” detailing the benefits of robust software distribution capabilities as they relate to successful digital transformation.

Building a Trusted Mechanism for Distributed Software

Regardless of which resource you use, remember to follow a few tenets of trusted software distribution. Pavan Belagatti, a DevOps expert, believes a trusted distribution mechanism comprises the following:

  • Speed: Using the processes discussed above, developers must be able to distribute pieces of software as quickly as possible to speed up development and reduce downtime for end-users.
  • Security: Security breaches can imperil the software supply chain at every turn. Ensuring security measures are baked in from the get-go — automating common security tasks including promotion and build acceptance, for example — is crucial to keeping distributed software from prying eyes.
  • Reach: Companies should be able to distribute their software anywhere in the world if need be. Doing so effectively often involves leveraging data centers or cloud infrastructure zones and regions in locations with high concentrations of customers and end-users.
  • Scale: Scale here refers to managing and maintaining the performance of the delivery pipeline. This includes setting up a network for multi-site replication, using processes and tools that ensure high availability, and scaling storage needs as the organization grows.
  • Simplicity: Automate what you can, and simplify as much as possible. Gartner calls the concept of automating everything that can be automated “hyperautomation,” which in this case could include automatically triggering software distribution as a part of the DevOps process (see the sketch after this list).
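To show how the security and simplicity tenets can reinforce each other, here is a minimal Python sketch of an automated promotion gate: it verifies an artifact’s checksum and only then queues it for distribution. The file path and expected digest are hypothetical, and a production gate would typically verify signatures against a trusted key rather than a bare hash:

```python
# A minimal promotion-gate sketch: bake an integrity check into the
# pipeline so distribution is triggered automatically, but only for
# artifacts that pass verification. Paths and digests are hypothetical.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def promote_if_trusted(artifact: Path, expected_sha256: str) -> bool:
    """Gate promotion on an integrity check; queue distribution on a match."""
    if sha256_of(artifact) != expected_sha256:
        print(f"REJECTED {artifact.name}: checksum mismatch")
        return False
    print(f"PROMOTED {artifact.name}: queued for distribution")
    return True
```

Wired into a CI/CD pipeline, a gate like this removes a manual approval step (simplicity) while ensuring that no tampered artifact is ever promoted (security).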


Address the Physics of Software and Reap the Benefits

Companies today compete on the customer experience. Organizations that can deliver products and services to customers quickly, seamlessly, and without downtime will emerge ahead of their competition. For this to happen, companies must understand the requirements for modern software distribution; they must learn the physics of software delivery. By identifying bottlenecks and building a flexible and trusted distribution mechanism, companies can overcome the challenges of physics and reap the benefits of distributed software.
