The Shift To Memory-Safe Code Is Coming for Your Industry

The pervasive use of memory-unsafe languages in vital systems is exposing them to vulnerabilities. Our expert explains why that’s a problem.

Written by Adam Ierymenko
Published on Nov. 01, 2024

On August 14, 2024, Microsoft revealed 90 vulnerabilities in its products. These included CVE-2024-38063, a critical code execution flaw in Windows. This vulnerability, with a whopping 9.8 severity score out of 10, could have allowed someone to gain full control of any internet-connected Windows system were it not mitigated in time. I wish I were surprised that a $3 trillion company with an army of security experts allowed CVE-2024-38063. But I’m not — and neither is Microsoft, as you’ll see.  

What Is a Memory-Safe Computer Language?

Memory-safe languages help prevent memory errors and may check for them automatically. Such errors occur when software attempts to access data it shouldn’t, use data that is no longer valid or execute something that wasn’t intended to be executable. Popular memory-safe languages include Rust, Go, C#, Java and Python.


A Good Memory Is Vital

The cause of CVE-2024-38063 was a familiar one: memory errors. For non-programmers, this happens when software attempts to access data it shouldn’t, use data that is no longer valid, or execute something that wasn’t intended to be executable. These vulnerabilities are common because the most popular system-level languages — those used for things like operating systems and network protocols — aren’t designed to prevent memory errors or check for them automatically.
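
To make that concrete, here is a minimal sketch in Rust, using a hypothetical network buffer and an attacker-controlled index rather than the actual code behind any CVE, showing how a memory-safe language turns an out-of-bounds access into a checked, recoverable condition instead of silent memory corruption:

    fn main() {
        let packet = [0u8; 16];          // a hypothetical received network buffer
        let claimed_offset: usize = 64;  // an attacker-controlled value from the wire

        // In C or C++, packet[claimed_offset] would silently read past the end of
        // the buffer, which is the classic memory error. In Rust, the checked
        // accessor returns None, so the bad access must be handled and can never
        // touch adjacent memory.
        match packet.get(claimed_offset) {
            Some(byte) => println!("byte at offset: {}", byte),
            None => println!("offset {} is out of bounds; packet dropped", claimed_offset),
        }
    }

Plain indexing would also be bounds-checked at runtime and would halt the program with a clear error rather than corrupt memory; getting around these checks requires explicitly opting into unsafe code.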

Now, imagine if someone had found and exploited the bug before Microsoft released a patch. The consequences would be nightmarish. That scenario is going to become even more likely as AI tools make it easier to search for bugs in existing code.

Even when bugs aren’t exploitable, they can still wreak havoc. For instance, in July 2024, a memory exception bug caused by C/C++ code in cybersecurity vendor CrowdStrike’s software led to crashes across millions of Windows machines, affecting hospitals, emergency services and banks. This incident, which cost Fortune 500 companies an estimated $5.4 billion, could have been avoided with a memory-safe language.

Many vendors, including my company, ZeroTier, are transitioning to memory-safe languages like Rust, Go, C#, Java, Python and others. We chose Rust because it’s the only mature, memory-safe language that offers performance on par with C and C++ in systems-level software.

There are already murmurs that governments and standards bodies (such as those behind PCI DSS and SOC 2) will require the use of memory-safe languages in critical systems in the coming years. Firms in banking, healthcare, heavy industry, power utilities and other sensitive sectors will follow suit, putting pressure on their partners and vendors to do the same. That means memory-safe code is coming to an industry near you.

 

Memory Unsafety and the Software Life Cycle

The maddening thing about memory vulnerabilities is that they happen all the time, even though we know how to prevent them. Channeling The Onion, cybersecurity educator and blogger Xe Iaso announced CVE-2024-38063 under a headline they’ve now used 12 times in 2024: “‘No Way to Prevent This,’ Say Users of Only Language Where This Regularly Happens.” That language is C/C++.

Of all the factors that prevent developers from switching to memory-safe languages — sunk cost, familiarity, habit, etc. — bravado is probably the toughest to overcome. “But I can write secure C,” say many skilled coders, and they’re not necessarily wrong. That’s not the problem. The problem is that software rots over time.

I’m not talking about mythical bit-rot. Instead, I’m referring to what happens when new people work on a formerly pristine piece of code, modifying it during a 2 a.m. code binge in a rush to get something out the door or mangling it during a branch merge. That’s how bugs creep in. If the language itself does nothing to prohibit or detect the dangerous ones, teams ship them, and they can sit unnoticed and potentially exploitable for years.
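
As an illustration, here is the kind of late-night refactor that produces a dangling pointer in C but simply will not compile in Rust. The helper below is made up for the example; it is not anyone’s production code:

    // Imagine this helper was extracted during a rushed refactor. In C, a careless
    // version might return a pointer to a stack-allocated buffer, leaving the caller
    // reading freed memory, a bug that can sit dormant for years.
    fn build_banner(version: &str) -> String {
        let banner = format!("service v{} ready", version); // a local value
        banner // ownership moves to the caller, so no dangling reference is possible
    }

    fn main() {
        let msg = build_banner("1.0.0");
        println!("{}", msg);
    }

    // The tempting shortcut of returning &banner instead of the owned String is
    // rejected at compile time ("cannot return reference to local variable"),
    // which is exactly the guardrail a 2 a.m. merge needs.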

I don’t know if that’s how CVE-2024-38063 came about, but the odds are high.

 

AI and Urgency in Software Development

The push for memory-safe languages is overdue. Back in 2018, Microsoft engineer Matt Miller highlighted that 70 percent of Microsoft’s code execution vulnerabilities were due to memory safety issues. “As Microsoft increases its code base … this problem isn’t getting better, it’s getting worse,” responded his colleague Gavin Thomas.

Then no one solved the problem. 

Because of AI, the problem has a new urgency. Bug-hunting in C/C++ requires painstaking examination of code or the use of inherently limited techniques like fuzzing and heuristics. Or it did, at least. Soon, AI will do this work orders of magnitude faster.
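
For readers who haven’t met fuzzing before, here is a toy sketch of the idea, using a made-up packet parser and only the standard library (real fuzzers such as AFL and libFuzzer are coverage-guided and far more capable): hammer the code with pseudo-random inputs and watch how it behaves. The limitation is that it can only find bugs that random-ish inputs happen to reach.

    // A toy fuzz loop: feed a hypothetical packet parser pseudo-random bytes.
    fn parse_packet(data: &[u8]) -> Result<u8, &'static str> {
        if data.len() < 2 {
            return Err("too short");
        }
        let declared_len = data[1] as usize;
        // In C, trusting declared_len and reading data[2..2 + declared_len] without
        // this check is a classic out-of-bounds read. Rust slices are bounds-checked,
        // so we validate explicitly and fail cleanly instead.
        if 2 + declared_len > data.len() {
            return Err("declared length exceeds buffer");
        }
        let checksum = data[2..2 + declared_len].iter().fold(0u8, |a, b| a.wrapping_add(*b));
        Ok(checksum)
    }

    fn main() {
        let mut state: u64 = 0x9E37_79B9_7F4A_7C15; // seed for a tiny xorshift PRNG
        let mut rejected = 0;
        for _ in 0..100_000 {
            state ^= state << 13;
            state ^= state >> 7;
            state ^= state << 17;
            let input = state.to_le_bytes(); // eight pseudo-random bytes per trial
            if parse_packet(&input).is_err() {
                rejected += 1;
            }
        }
        println!("rejected {} of 100000 random inputs", rejected);
    }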

AI excels at pattern recognition, and efforts are well underway to train AI tools to recognize familiar classes of bugs. And those are the efforts we know about. Given the military and intelligence value of unknown “zero day” vulnerabilities, we can be certain that intelligence agencies and other clandestine actors are working on this too.

A flood of dangerous vulnerability discoveries might be on the horizon. That acceleration in bug discovery makes migrating to safer languages more urgent than ever.


Move Fast and Fix Things

Anything outward-facing and exposed to potential cyberattack needs to move to a memory-safe language. Again, governments are already discouraging the use of memory-unsafe languages. The next step would be to prohibit them in critical IT systems.

Now is the time to review your IT systems and the languages in which they’re written. If a vendor is on Rust or similar, great. If they’re not on Rust but developing a new system that is, that’s also great — plan to adopt it. If your vendor isn’t on Rust and doesn’t have plans for a memory-safe system, then you have a decision to make. Is the system replaceable, and is there a comparable system that is memory-safe? 

Usually, regulations favor large incumbents, but this may be a rare case where they don’t. Older, big tech companies like Microsoft, Google, Apple, and Amazon have already invested billions of dollars in software written in C/C++. That software is used in production and therefore hard to change. Startups and smaller ventures with smaller code bases (or those starting from scratch) will reach the post-memory-bug era long before big players do.

To be clear, memory-safe code cannot fix bad passwords, AI impersonations, Geek Squad email scams or compassion for Nigerian princes. But it can limit the damage bad actors are able to cause once they’re in a system. Think of Rust as a bulletproof vest: It doesn’t stop the shooting, but it might stop the bullet.

The stakes here are high, not just for governments and enterprises, but for everyone on the internet. AI will create an arms race between vendors trying to patch memory bugs and various entities trying to exploit them. Depending on who leads that race, we may see increasingly frequent disruptions to digital life and services. It’s time to move fast and fix things. 
