How to Move AI Development Forward

AI development often meets roadblocks. To get around them, organizations need to rethink data storage, the data center and even the role of chips themselves.

Written by Renen Hallak
Published on Feb. 08, 2021

The COVID-19 pandemic has underscored the importance of AI and machine learning (ML) for organizations, particularly as a tool that can enable faster decision-making, improve workflow efficiencies and ultimately scale a business through unprecedented and uncertain times.

However, while the pandemic has certainly served as an accelerant of the need for AI, we’ve been seeing this push toward an AI-driven world for some time. One major trend at the heart of this has been the rise of sensors — think camera phones, cameras on cars, genomic sensors or audio recording instruments — basically, anything that takes in the natural world around us.

Until now, however, computers have largely been able to deal only with human-generated information (numbers and text). As the abundance of this natural world data has grown, so has the need for a solution that can make sense of it, and at scale. Whether it's recognizing a stop sign or a pedestrian for an autonomous car, tracking and predicting the spread of disease, or leveraging massive amounts of data for the next life sciences breakthrough, AI is the solution.

But as this need for advanced AI only increases, its development often meets roadblocks that can hinder its potential — challenges centered around infrastructure simplicity, speed and cost. By taking the time to rethink and reconsider data storage, the data center and even the role of chips themselves, organizations can modernize their infrastructure and fuel the next generation of AI.

 

Modernizing Storage

It’s no secret that the key to developing strong AI and ML algorithms is a good data set, but data is also a main reason AI development is held back in many organizations: access to real-time data is limited by archaic infrastructure models.

At the heart of this is storage, specifically traditional tiering models that assign various data sets to slower storage media. The reason for this has largely been cost: keeping all of your data on quick, easy-to-access flash storage is great in theory, but flash has been an expensive medium for decades. Large quantities of data are therefore often stored on cheaper hard disks instead, which rules out the real-time access that is of massive value for AI and ML.
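To make the access-pattern problem concrete: a shuffled training epoch touches samples in essentially random order, which is exactly where seek-bound hard disks fall furthest behind flash. The following is a minimal sketch in Python of timing that pattern; the data directory is a hypothetical mount, and the point is simply that per-sample random reads let the storage medium set the pace.

import os
import random
import time

DATA_DIR = "/mnt/training-data"  # hypothetical mount; point it at flash or HDD to compare

def time_random_reads(num_samples=1000, chunk_size=4096):
    """Mimic a shuffled training epoch: read small chunks from random offsets in random files."""
    files = [p for p in (os.path.join(DATA_DIR, f) for f in os.listdir(DATA_DIR))
             if os.path.isfile(p)]
    start = time.perf_counter()
    for _ in range(num_samples):
        path = random.choice(files)
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            f.seek(random.randrange(max(1, size - chunk_size)))  # jump to a random offset
            f.read(chunk_size)                                   # pull one sample-sized chunk
    elapsed = time.perf_counter() - start
    print(f"{num_samples} random reads in {elapsed:.2f}s ({num_samples / elapsed:.0f} reads/s)")

if __name__ == "__main__":
    time_random_reads()

On a spinning disk, each of those reads pays a mechanical seek penalty; the same loop on flash typically runs orders of magnitude faster, which is the gap the tiering trade-off above is really about.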

As a result, much of AI’s development to date has been based on limited data sets, curbing the potential and effectiveness of the solutions in use today. Advanced AI needs far broader real-time data access to meet current demands.

Having all data available in a centralized place is critical. If an organization's data warehouse offers only a fragmented view of the natural data that flows into it, the organization will never get the full value of AI. It must connect the natural data of the outside world with what exists in the data center; it can't look at AI through the lens of different islands. The pieces need to be connected to become smarter and more valuable, and storage tiering simply doesn't offer a holistic way to do that.

Thanks to innovations and new technologies such as non-volatile memory express (NVMe) over fabrics, new non-volatile memory technology and low-cost quad-level cell (QLC), flash storage is now becoming a more viable and affordable solution for today’s data-intensive world. As such, organizations must revisit the idea of storage from scratch, recognizing that the value proposition of hard disk and flash storage is not what it was a decade ago — and that simply sticking with the status quo may not be the best path forward.

This rethinking should be examined through the lens of what will ultimately enable AI applications: consolidating infrastructure and accelerating the training and inference of algorithms. Organizations need to ensure they can access all the data they require, quickly, to improve efficiency and speed while keeping costs in check.

 

Modernizing the Data Center

In addition to reimagining storage itself, organizations must take a broader look at how the data center overall needs modernizing. To remove the roadblocks created by disparate infrastructure, the data center has, over the years, become increasingly hyper-converged, merging historically siloed resources such as compute, storage and networking.

However, this hyper-convergence introduces new roadblocks of its own. The various technologies bundled into the data center provide great simplicity overall, but the performance of each individual piece suffers in this hyper-converged, and simply crowded, state.

Investing in containerized applications gives organizations the tools they require in the same box, with the flexibility to pick and choose the services that work best for them across a range of vendors. Because containers are so lightweight, organizations can deploy large numbers of them to perform very specific tasks or subtasks, using modern container orchestration platforms to manage their deployment and scaling, as the sketch below illustrates. This allows the data center to remain hyper-converged without losing the performance needed to drive applications, like those of AI, forward.
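As one illustration of what that orchestration can look like in practice, the sketch below uses the official Kubernetes Python client to fan a single lightweight, single-purpose container out to many replicas. The deployment name, namespace and replica count are hypothetical, and it assumes an existing cluster reachable through a local kubeconfig.

from kubernetes import client, config

def scale_task_containers(deployment="feature-extractor", namespace="ml-pipeline", replicas=50):
    """Scale a small, single-purpose container out to many replicas via the cluster API."""
    config.load_kube_config()  # authenticate using the local kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},  # request the new replica count
    )
    print(f"Requested {replicas} replicas of '{deployment}' in namespace '{namespace}'")

if __name__ == "__main__":
    scale_task_containers()

Because each container does one narrow job, the orchestrator can scale just the pieces that are saturated without dragging along the rest of the stack, which is the performance escape hatch a purely hyper-converged model otherwise lacks.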

 

Bypassing the CPU

Finally, the industry is growing skeptical about the validity of Moore's Law moving forward, the observation that the number of transistors on a chip, and with it the compute power of traditional central processing units (CPUs), roughly doubles every two years. As a result, there is growing uncertainty about how to meet the demands of the future: transistors simply can't keep shrinking and multiplying in number as they have.

Therefore, organizations will need to identify ways to bypass traditional CPUs, which create bottlenecks much like a tollbooth on a highway slows traffic. In fact, alternatives such as graphics processing units (GPUs), intelligence processing units (IPUs) and data processing units (DPUs) are already increasingly being leveraged for AI.
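As a simple illustration of that offload, the sketch below uses PyTorch to route the heavy matrix math to a GPU when one is present and fall back to the CPU otherwise; the layer sizes and batch are hypothetical stand-ins for a real model.

import torch

# Route the heavy math to a GPU if one is available; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 256).to(device)   # stand-in for a real network
batch = torch.randn(512, 1024, device=device)   # stand-in for a real input batch

with torch.no_grad():
    output = model(batch)

print(f"Ran inference on {device}; output shape: {tuple(output.shape)}")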

While only time will tell where this goes and what the best path forward truly is, it’s clear that the future of advanced AI will be rooted in avoiding the slowdowns caused by these tollbooths by reworking CPU architectures or bypassing them altogether.

 

Moving AI Forward

AI is still very much a moving target and will continue to rapidly evolve. But regardless of where the industry goes in the years ahead, the more information we can feed into our infrastructure in a simple, fast and affordable way, the better that AI and ML will become.

The organizations that make the right investments in these infrastructure considerations will lay the groundwork for best-in-class autonomous cars and for solutions to our toughest life sciences challenges, and will power innovative solutions we haven't even imagined.
