How to Prepare Your Engineers for the Wave of Incoming AI-Powered Cyberattacks

AI has given threat actors more options for attacking your systems than ever before. Fortunately, it’s not too late to prepare your engineers for meeting these challenges head-on.

Written by Nenad Zaric
Published on Jul. 23, 2024
A robot hacker works on several computer terminals
Image: Shutterstock / Built In
Brand Studio Logo

If you’ve been following tech news lately, you’ll know that cybersecurity experts are sounding the alarms on the increasing use of AI to design and deploy sophisticated cyberattacks. In fact, thanks to AI, the barrier to becoming a hacker is much lower now, as this tech helps refine social engineering, phishing, and network penetration efforts.

Unfortunately, cybersecurity professionals haven’t jumped on the AI trend as quickly as malicious actors have. Although several new graduates are entering the industry, 84 percent of professionals have no or minimal AI and ML knowledge. As a result, the sector’s workforce faces a flurry of new cyber attacks they’re not equipped to handle. As if that weren’t enough, even U.S. government agencies like the FBI are warning individuals and businesses about the rise of AI-powered cyberattacks. 

As serious as the matter is, however, it’s never too late to get up to speed on it and take action. Business owners are usually left to their own devices, but addressing the AI skills gap is an issue they can confidently take on alongside their CTOs with a few handy tools.

Let’s explore how cybersecurity leaders can prepare engineers to manage and mitigate AI threats and successfully implement the technology in their operations.

3 Ways to Prepare for AI-Based Cyber Threats

  • Build a culture of continuous learning around AI.
  • AI-based red teaming simulations.
  • Security vetting and assessment of AI tools.

More on CybersecurityThese Cyber Attacks Could Have Been Prevented. Here’s How.

 

Training to Incorporate AI’s Unique Capabilities

It isn’t shocking that today’s engineers aren’t yet accustomed to tackling AI head-on. Although this technology isn’t new, it has quickly evolved in the past two years. So, those who finished their training before this period or even during it probably didn’t have a syllabus that addressed AI. But how have hackers implemented it so quickly?

The answer might just be DIY and a collaborative approach to learning. A recent study confirms this, revealing that cultivating a learning culture among engineers and software developers potentially decreases the AI skills threat. Industry professionals shouldn’t be left alone to learn these skills; CTOs and business leaders should facilitate upskilling opportunities for their staff to get ahead of the AI game. This way, they can improve their own AI cybersecurity or do so for clients (if they’re vendors) with the most skilled workforce.

Although staff can use AI chatbots to answer questions and code for them, the real challenge lies in improving productivity, foolproofing systems against AI cyberattacks, and incorporating AI features into existing processes and environments. Such higher-level skills need more than a few Google searches, so investing in specialized AI training programs can go a long way for today’s cybersecurity businesses.

To prepare their staff accordingly, companies can hire AI experts to teach task-specific courses or simply scout online classes that certify their engineers for the latest AI skills. These learning programs can be as rigorous as you want — from Udemy to Harvard online lessons. It’s up to you to decide how and in what areas of expertise you’d like to equip your employees.

If you’re already acquainted with industry experts, the best way to start is by reaching out to them so they can share their knowledge on the basics of AI cybersecurity with your team through a call or quick presentation. Otherwise, adopting a bottom-up approach is best: Browse online courses that cover core concepts and look for prices and lengths that adjust your budget and workload. Move up from there with more rigorous courses depending on how your security team responds and what your priorities look like. When it comes to learning, the possibilities are quite endless on this ever-evolving topic.

 

Launch AI Attacks to Improve Threat Identification

The journey doesn’t end at upskilling your workforce. As with any technology, AI is an ever-evolving tool, and hackers are always on the lookout to modernize their techniques using it. So, the learning shouldn’t stop. 

A great way to keep it going is by running simulated red teaming attack scenarios with a twist. Nearly two-thirds of organizations are already adopting this practice to boost their cybersecurity posture. As new threats emerge, however, red teaming must take a new shape.

Usually, red teaming entails a group of engineers attacking their own systems to find vulnerabilities and subsequently patch them. Now, AI must do the attacking so employees learn to think like it and build resilient systems against it. The race between defenders and attackers has become more intense, with attackers usually being more adept at exploiting new technologies. So, they end up outpacing engineers by rapidly devising and executing innovative attacks, especially when implementing AI.

Cybersecurity experts are already using the technology to recreate red teaming activities, mimicking how hackers would use AI to penetrate their systems. This approach can help teams better understand how AI works, anticipate potential threats, and discover new ways of defending against attacks that traditional methods might overlook.

As AI becomes integral to cybersecurity business offerings, it is crucial to secure its implementation against potential breaches from all sides, including offensive security. Security teams can do this by adopting offensive tactics like vulnerability discovery so their newly integrated AI tools have zero exposed attack surfaces to exploit. This way, companies are better prepared to protect their AI systems from these increasingly sophisticated attacks.

 

Assessments for Proper AI Vetting

Whether your cybersecurity team is already building and implementing AI features or seeking out vendors to do so, it’s essential to vet the safety of these new tools in your arsenal. This is especially important when the National Institute of Standards and Technology (NIST) has highlighted latent AI-related cyber risks companies must protect their systems against.

One of these risks is exposure to untrustworthy data or data poisoning, which hackers use to penetrate AI systems, causing them to malfunction and weakening a company’s entire infrastructure.

As AI introduces these new complexities, engineers must improve their internal security. A great way to do this is by embedding security assessments into the development process of every new feature or product that uses AI. This ensures cybersecurity teams are proactive in securing their infrastructure and are mindful of AI interactions from the outset, fostering a culture of security-first thinking.

Many services already offer such assessments so engineers can follow guides and learn how to run security tests tailored to their organizations’ needs. For example, OWASP offers a free AI security and privacy guide to instruct cybersecurity teams on what to look for when vetting AI systems — a good starting point for employees to get acquainted with innovative security practices.

Strengthen Your Security PostureHow to Attract, Train and Retain Blue Team Cybersecurity Talent

 

Hackers Are Getting Smarter — You Must Too

The cybersecurity workforce has been tasked with protecting an increasingly vulnerable digital world. AI has only proven that malicious actors move as quickly as the available technology evolves, so engineers must move even quicker to keep up with new threats. As such, industry leaders must ensure their employees are prepared to take on this massive challenge. Upskilling, AI red teaming simulations and security assessments are the best approaches to properly train them for the new landscape.

Explore Job Matches.