What Does the Future Hold for Edge Computing?

For “the edge” to become as ubiquitous as “the cloud” in the tech industry, a myriad of technical challenges need to be tackled.
Michael Hines
July 2, 2020
Updated: September 29, 2020

The potential of edge computing is extremely high, according to research firm Gartner. The firm predicted two years ago that by 2025, a whopping 75 percent of enterprise data would be generated and processed at “the edge.” Put another way, in five years, the majority of enterprise data could bypass the cloud entirely.
 

What is edge computing?

  • Edge computing can roughly be defined as the practice of processing and storing data either where it’s created or close to where it’s generated — “the edge” — whether that’s a smartphone, an internet-connected machine in a factory or a car.
  • The goal is to reduce latency, or the time it takes for an application to run or a command to execute.
  • While that sometimes involves circumventing the cloud, it can also entail building downsized data centers closer to where users or devices are. 
  • Anything that generates a massive amount of data and needs that data to be processed as close to real time as possible can be considered a use case for edge computing: think self-driving cars, augmented reality apps and wearable devices.
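To see why proximity matters, a rough back-of-the-envelope calculation helps. Signal propagation in optical fiber is roughly 200,000 km/s, which puts a hard floor under network round-trip time regardless of how fast the servers are (the distances below are illustrative; real latency is higher due to routing, queuing and processing):

```python
# Rough lower bound on network round-trip time, based purely on signal
# propagation through optical fiber (~200,000 km/s). Real-world latency
# is higher due to routing, queuing and processing overhead.

FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# A distant cloud region vs. a nearby edge site (hypothetical distances):
print(min_round_trip_ms(2000))  # ~20 ms floor to a cloud region 2,000 km away
print(min_round_trip_ms(20))    # ~0.2 ms floor to an edge site 20 km away
```

For a self-driving car or an AR app, tens of milliseconds of unavoidable round-trip time is the difference between usable and unusable, which is why moving compute closer to the data source matters.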


That being said, edge computing is still in its infancy and not quite ready for primetime yet. Gartner’s report admitted as much, noting that just 10 percent of enterprise data was generated and processed at the edge in 2018. 

For “the edge” to become as ubiquitous as “the cloud” in the tech industry, a myriad of technical challenges will need to be tackled. These include the development of compact devices with outsized processing power, the creation of software that enables companies to remotely monitor and update a limitless number of edge devices from across the world and new security technology and protocols to keep everything safe.

Many companies are actively working to solve these problems, including Red Hat, Nutanix and Cloudera, all of which have developed their own edge technology. We recently spoke with senior leaders at each to learn what the future holds for edge computing — and what it will take to realize it.
 


 


Nick Barcet, Senior Director of Technology Strategy at Red Hat

Many technical challenges will need to be overcome before edge computing achieves mainstream adoption. Of these, which has Red Hat identified as the most critical?

When it comes to edge, it’s about faster response times and more timely services, and we see three main challenges that are key to overcome. 

First is the ability to scale. Edge deployments can range from hundreds to hundreds of thousands of nodes and clusters that need to be managed in locations with little or no IT staff. There has to be a centralized way to deploy and manage them; otherwise, it can become complicated and costly for IT teams. 

The second challenge is that edge deployments can vary greatly, which can make it hard for a single vendor to build an entire edge stack. Organizations need to ensure interoperability within a multi-vendor hardware and software environment. 

Lastly, we’re thinking about consistency. It’s almost impossible to manage all these deployments if they don’t share a secure control plane via automation, management and orchestration. This is where the hybrid cloud is key, because you’ll want to manage your entire infrastructure in the same way and create an environment where you can consistently develop an application once and deploy it anywhere.

 

What technologies or trends have the capability to drive edge computing forward and why?

5G is one of the drivers for edge computing, as it allows for increased numbers of data sources or processing points that can be interconnected, implying an exponential increase in the volume of data to be processed. This can quickly become too much for existing “sites to cloud” connections and requires data processing much closer to the source. 5G also allows for much lower latencies, a key component to some new applications and another factor in making processing power available closer to where it is consumed or generated. 

We believe that applications at the edge will be primarily containerized as this allows for denser and nimbler deployments, and that’s why we see Kubernetes playing a key role, along with GitOps. We believe GitOps will provide a way to deploy and operate thousands — and potentially millions — of application and infrastructure clusters at the edge and create a standard operating model for managing Kubernetes clusters that doesn’t require a linear increase of people to manage the new environments.
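The GitOps operating model described here boils down to a pull-based reconciliation loop: each cluster compares the desired state declared in a Git repository against its live state and converges toward it, so adding clusters doesn't add operators. A minimal sketch of that loop in Python (the dictionaries are hypothetical stand-ins for Git-declared manifests and live cluster state; real tools such as Argo CD or Flux operate on Kubernetes resources):

```python
# Minimal sketch of a GitOps-style reconciliation loop: desired state is
# declared (as if read from a Git repository) and an agent converges the
# live state toward it. Plain dicts stand in for Kubernetes manifests.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the changes needed to move `actual` toward `desired`."""
    changes = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            changes[name] = ("apply", spec)   # create or update drifted resource
    for name in actual:
        if name not in desired:
            changes[name] = ("delete", None)  # prune resources removed from Git
    return changes

def apply_changes(actual: dict, changes: dict) -> dict:
    """Apply the computed changes, yielding the new live state."""
    new_state = dict(actual)
    for name, (action, spec) in changes.items():
        if action == "apply":
            new_state[name] = spec
        else:
            new_state.pop(name, None)
    return new_state

# Hypothetical desired state from "Git" vs. drifted live state on one edge cluster:
desired = {"sensor-agent": {"image": "agent:v2", "replicas": 2}}
actual = {"sensor-agent": {"image": "agent:v1", "replicas": 2},
          "debug-pod": {"image": "busybox"}}
actual = apply_changes(actual, reconcile(desired, actual))
assert actual == desired  # cluster has converged to the Git-declared state
```

Because the loop is driven entirely by declared state, the same Git commit can fan out to thousands of clusters without per-cluster manual intervention, which is the property the answer above is pointing at.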

 

Where do you see edge computing in five years?

The future of edge computing will absolutely be open. Edge will converge with the use of data through artificial intelligence and machine learning to turn insight into actions that benefit businesses and their customers. It will eventually be viewed just like any other location where applications can be placed seamlessly with consistency and without compromise.

 


Rajiv Mirani, CTO at Nutanix

Many technical challenges will need to be overcome before edge computing achieves mainstream adoption. Of these, which has Nutanix identified as the most critical?

One major challenge companies must overcome is managing the volume, velocity and variety of data at an industrial scale and refining it at the edge to get actionable insights, often under tight time constraints. Over the past few years, we’ve seen devices deployed at the network edge increase almost exponentially to process more data than all the public and private clouds of the world combined, leaving teams struggling to adjust to this new volume of data. 

Accompanying this data influx is a fundamental shift in the computing paradigm from “human-oriented” to predominantly “machine-oriented” processing. For example, when collecting sensor data, companies will need to use AI and analytics techniques to convert raw data into business insights. This shift will require a more distributed and interconnected approach between the core and hundreds, if not thousands, of edges to make sure they work as a whole. The alternative would be impossible to manage or secure. I expect the industry will see significant progress in this area in the coming years.

 

What technologies or trends have the capability to drive edge computing forward?

As more carriers deploy 5G networks, edge compute infrastructure can offer faster real-time processing for devices like mobile phones, as opposed to solely processing data in the cloud.

 

Where do you see edge computing in five years?

I expect to see an increased focus on security. Most IoT apps span the edge and the cloud with major implications for the security of both, not least because of the sheer size of the potential attack surface. Nothing can be assumed. New and innovative threats are being released all the time, making robust edge security with minimal oversight at the core an absolute must. To achieve this, a focus on zero trust will be paramount.
 


 


Dinesh Chandrasekhar, Head of Product Marketing at Cloudera

Many technical challenges will need to be overcome before edge computing achieves mainstream adoption. Of these, which has Cloudera identified as the most critical?

In use cases that have remote endpoints, or where the edge doesn’t have a reliable network connection, or where the round trip from the edge to the cloud can be expensive, it’s important for the edge to be smarter. For example, with an aircraft engine whose 300-plus sensors need to be monitored at 35,000 feet, not all the sensor readings can be transmitted to the ground in real time due to FAA regulations and limits on what goes over the wire. In such scenarios, the edge needs to have local storage and computing power.

Our edge management solution enables companies to deploy thousands of edge agents to thousands of endpoints. These agents have lightweight runtimes and operate with minimal compute power and storage needs. They can execute intelligent data pipeline orchestration flows to collect data from devices over a wide range of protocols, process it at the edge, extract key pieces of information, run a local machine learning model where needed, score the data and then publish it to a local ingestion gateway, the cloud or a data lake. 
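The pipeline shape described here (collect readings, process them locally, score them, then publish only what matters) can be sketched in a few lines. The sensor values, threshold and scoring rule below are hypothetical stand-ins, not Cloudera's actual agent logic; the point is the structure, in which only a compact summary leaves the edge:

```python
# Hypothetical sketch of an edge agent pipeline: collect raw readings,
# process them locally, score them with a stand-in "model", and publish
# only a compact summary upstream. The sensor values, the threshold and
# the scoring rule are all illustrative assumptions.

from statistics import mean

ANOMALY_THRESHOLD = 90.0  # hypothetical limit for a sensor reading

def score(reading: float) -> str:
    """Stand-in for a local ML model: flag readings above the threshold."""
    return "anomaly" if reading > ANOMALY_THRESHOLD else "normal"

def run_pipeline(readings: list[float]) -> dict:
    """Process a batch at the edge and return the summary to publish."""
    scored = [(r, score(r)) for r in readings]
    anomalies = [r for r, label in scored if label == "anomaly"]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "anomalies": anomalies,  # only the interesting raw values leave the edge
    }

# One batch of (hypothetical) sensor readings collected at the edge:
summary = run_pipeline([71.2, 68.9, 95.4, 70.1])
print(summary)  # far smaller than shipping every raw reading to the cloud
```

Whether the summary then goes to a local ingestion gateway, the cloud or a data lake is a routing decision; the bandwidth and latency savings come from doing the filtering and scoring before anything crosses the network.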

 

What technologies or trends have the capability to drive edge computing forward?

Advances in edge hardware: There’s a whole new world of highly powerful computers that are small, nimble and capable of addressing far more compute and storage needs at the edge. This is paving the way for more complex edge computing use cases. I wouldn’t be surprised to see “federated data lakes” at the edge emerge as a future trend.

Machine learning and artificial intelligence: Powerful computing devices are arriving at the edge equipped with GPUs, paving the way for ML and AI to run close to the data. The need for ML models at the edge is already here, and it is only going to become more important in the future.

Digital transformation: Every enterprise is embarking on a digital transformation initiative, but the shift we are noticing is that these initiatives are moving to the edge. These can be simple use cases, like an oil and gas company processing log files at the edge from 130,000 machines, or more transformative undertakings, like doing predictive maintenance across thousands of airline kiosks. 

Connected and autonomous cars: All of the top 10 automotive brands are our customers, and almost all of them have an active connected or autonomous car initiative. These use cases are a strong driver of edge computing, as they require a high degree of processing, storage and ML model enrichment at the edge itself.

The need for instantaneous insights: Instantaneous insights are the order of the day, and enterprises are pushing past predictive analytics and asking for prescriptive analytics. This drives the edge to be smarter so that data doesn’t become stale by the time it reaches a data lake. Processing of such data needs to happen at the edge, or at least way before it reaches the data lake.

 

Where do you see edge computing in five years?

There will be so much compute and storage available that we will start seeing divisions between the edge, data centers — if they even exist — and the cloud. Challenges around data consolidation across the three realms will emerge, and multiple ecosystems will start feeding off data from the edge directly. AI and ML will have emerged to such an advanced degree at the edge that more sophisticated autonomous use cases will be possible. Data management will still be a key factor to consider but the challenges might look different.

 

All responses have been edited for length and clarity.
