If one thing is clear to businesses in this uncertain time, it is the power of data and its strategic value to an organization’s future success.
Whether it’s to better understand customers, to build a cutting-edge artificial intelligence (AI) solution, or to strive for the next big discovery in life sciences, data has become a vital resource for all organizations, regardless of size or vertical market.
This trend is for good reason. The growing data sets that organizations continue to stockpile hold great promise for new discoveries and distinctive innovations that can change entire industries, from finding a cure for a major disease to automating processes that streamline workflows.
However, your data is only as good as the way you manage it. And for many organizations, managing it has become increasingly difficult, simply because the data center itself does not fully support the task. This is particularly the case as the COVID-19 pandemic has forced many businesses to question even their most traditionally accepted practices and start from scratch.
So what is the path forward? It starts — and quite frankly ends — with reimagining the data center of the future. But to do this, organizations need to understand the history of the data center and the new demands that have left the current model obsolete.
The Outdated Data Center
While the data center as we know it has led to strong innovation across industries over the last decade, it can no longer support the complex, software-defined, data-intensive applications of our time, like revolutionary AI systems that take machine intelligence to the next level.
The convergence of historically siloed compute, storage, and networking resources into what is now known as hyper-converged infrastructure is simplifying the data center. However, as is often the case, the first generation of a new technology typically isn't as good as the last generation of a mature technology.
Think about the evolution of the cell phone in its early days, which combined multiple capabilities into one device. For example, the functionalities of a phone were merged with the capabilities of a camera. While the invention offered one streamlined device, the performance of these individual components suffered. A small camera on a phone simply couldn’t compete with a separate, personal camera — at first.
Today’s data center finds itself in a similar, albeit more complex, situation. The various technologies that have been bundled into the data center provide great simplicity overall. But the individual performance of these pieces has been harmed.
Just as smartphone cameras have improved dramatically since those early days, there is a similar path forward for modernizing the data center.
A Containerized Solution
By investing in containerized applications, organizations can now have all the tools they require in the same box, but with the flexibility to pick and choose which services work best for them across a range of vendors. For example, IT departments can have the software-defined networking service of their choice in one container, their preferred software-defined storage service in another container, and their database in a third container.
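To make this concrete, here is a minimal sketch, using the Docker SDK for Python, of how an IT team might launch independently chosen services as separate containers. The image names and the network name are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: each infrastructure service runs in its own container,
# so a team can mix and match vendors. Image names are hypothetical.
import docker

client = docker.from_env()

services = {
    "sdn-controller": "example/sdn-controller:latest",  # software-defined networking of choice
    "sds-gateway": "example/sds-gateway:latest",         # preferred software-defined storage service
    "app-database": "postgres:16",                       # the database, in a third container
}

for name, image in services.items():
    client.containers.run(
        image,
        name=name,
        detach=True,                # run in the background
        network="datacenter-net",   # assumes this Docker network already exists
        restart_policy={"Name": "unless-stopped"},
    )
```

Swapping out any single service then means changing the image in that one container, without disturbing the others.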
Until recently, the problem with containers has been that they clash with a very important element of the data center: storage. Because storage is historically stateful, meaning it holds onto application data, IT organizations traditionally haven't been able to start up a containerized application with ease and flexibility; the only place to do so has been the specific node that has access to that data.
Recent technological breakthroughs have helped remove some of these container limitations. For instance, NVMe over Fabrics, a protocol specification designed to give systems access to non-volatile memory devices like SSDs across a network, and VAST's Disaggregated Shared Everything Architecture, which separates the storage media from the central processing units (CPUs) that manage that media, now allow every container to access all of the data, so long as it is attached to an Ethernet or InfiniBand network.
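As a rough illustration of what network-attached, disaggregated storage means in practice, the sketch below (again using the Docker SDK for Python) creates a volume backed by a network share and mounts it into a container. The export address, path, and images are hypothetical; a real deployment would point at whatever share or protocol the storage platform exposes.

```python
# Sketch: a container mounting shared, network-attached storage. Because the
# data sits behind the network rather than on one node's local disks, the
# same container could be started on any host attached to that network.
import docker

client = docker.from_env()

# Hypothetical NFS export; substitute the address and path your storage exposes.
shared = client.volumes.create(
    name="shared-data",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=storage.example.internal,rw",
        "device": ":/exports/datasets",
    },
)

client.containers.run(
    "example/analytics-worker:latest",   # hypothetical application image
    detach=True,
    volumes={shared.name: {"bind": "/data", "mode": "rw"}},
)
```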
Because containers are so lightweight, and with modern orchestration platforms that handle deploying and scaling them, organizations can run large numbers of containers that each perform a very specific task or sub-task. No longer does each node need to be cared for individually, and enterprises can now start up these applications with flexibility.
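For example, using the official Kubernetes Python client, an orchestrator can be asked to run many lightweight copies of a task-specific container without pinning any of them to a particular node. The image, task name, and replica count below are hypothetical.

```python
# Sketch: declare a Deployment of many small, single-purpose containers and
# let the orchestrator place and scale them across the cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

container = client.V1Container(
    name="worker",
    image="example/task-worker:latest",     # hypothetical task image
    args=["--task", "feature-extraction"],  # hypothetical sub-task
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="task-workers"),
    spec=client.V1DeploymentSpec(
        replicas=50,  # scale up or down by changing one number
        selector=client.V1LabelSelector(match_labels={"app": "task-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "task-worker"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```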
As such, IT organizations now have the tools they need to build a microservice architecture with the software services of their choosing to remove long-standing limitations and to truly make the data center work for them.
The Data Center as a Supercomputer
What does this all amount to? The data center of the future will act very differently than it has in the past. It will deliver both performance and simplicity. It will provide organizations access to all the critical resources they need, including real-time access to all available data, which in turn will drive new insights and discoveries.
Perhaps most importantly, the future data center will be able to support applications in understanding the data and deriving insights from it as well. In fact, as the data center evolves it will increasingly take on the characteristics of a supercomputer, serving less as just another tool for organizations and instead offering groundbreaking solutions. We are moving into an era where data centers will become truly app-centric.
The future data center will power industry-leading AI and a vast range of new applications, and it will redefine entire industries, all at a faster rate of innovation while enabling new business models. The key, however, is making the foundational investments for this vision right now.