AI in Cybersecurity: The Good and the Bad

As artificial intelligence evolves, both cybersecurity teams and hackers alike will use it to their advantage.

Written by Ellen Glover
Published on Aug. 22, 2024

Artificial intelligence is transforming just about every industry, and cybersecurity is no exception. Cybercriminals use AI to launch sophisticated and targeted attacks, while security professionals use AI to detect and defend against them. The result is a seemingly unending game of cat and mouse.

AI in Cybersecurity

AI enhances cybersecurity by detecting anomalies, automating threat responses and preventing attacks. But AI is also used by cybercriminals to launch more sophisticated attacks and bypass traditional defenses, making it difficult for security teams to stay ahead.

“Like most technologies, [AI] has a dual use,” Dane Sherrets, a solutions architect at cybersecurity company HackerOne, told Built In. “It’s incumbent on us to make sure that we’re using it to the best of our abilities for good, and trying to stay at least one step ahead of people who use it for bad.”

 

How AI Is Used in Cybersecurity

Artificial intelligence provides powerful ways to monitor, detect and respond to cyberattacks automatically. For example:

  • AI algorithms can analyze large volumes of data in real time, which is useful in identifying anomalies that may indicate a cyber attack.
  • By continuously scanning network traffic, AI systems can flag malware, fraud and other threats with more speed and accuracy than a human alone could, enabling security teams to respond more effectively.
  • AI can predict and prevent some cyber attacks before they occur. By intentionally probing the defenses of a specific software or network, AI tools can identify the signs of potential vulnerabilities and forecast future threats, and then take steps to proactively address these weaknesses before they can be exploited.
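The anomaly-detection idea in the first bullet can be sketched with a simple statistical baseline. This is a minimal illustration, not a production detector; the traffic figures and the three-sigma threshold are made-up values for the example.

```python
# Minimal anomaly detection on requests per minute: flag readings that
# fall far from the historical mean (a z-score test).
# All numbers here are illustrative, not real traffic data.
import statistics

baseline = [102, 98, 110, 95, 104, 99, 101, 97, 105, 100]  # "normal" requests/min
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_min: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from the mean."""
    z = abs(requests_per_min - mean) / stdev
    return z > threshold

print(is_anomalous(103))   # typical load -> False
print(is_anomalous(900))   # sudden spike -> True
```

Real systems model many signals at once and learn the baseline continuously, but the core step is the same: establish "normal," then score how far a new observation deviates from it.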

At the same time, AI-powered technology makes it easier for malicious actors to launch effective attacks. Cybercriminals — whether they be large-scale enterprises or lone actors — use artificial intelligence to locate vulnerable systems, automate their attacks and impersonate real people. And security teams are struggling to keep up.

 

The Impact of Generative AI on Cybersecurity

Security problems have worsened with the rise of generative AI. Bad actors use this technology to develop adaptive malware that bypasses traditional security measures and to create highly convincing social engineering scams — like phishing emails and deepfake phone calls — making it harder to distinguish legitimate communications from fraudulent ones.

“[AI] allows a threat actor to scale a lot faster and across multiple channels,” Kayne McGladrey, chief information security officer at compliance management company Hyperproof, told Built In. “And the defensive tools haven’t quite caught up. Unfortunately, none of this stuff is going away. This has now become a fixture of the landscape. It’s part of our new, modern cybersecurity hellscape that we inhabit continuously.”

Generative AI is also used to bolster cybersecurity efforts, not just evade them. It can transform complex data into clear insights and actionable recommendations, and generate detailed reports to better contextualize specific events and threats using natural language and visuals. It also answers users’ plain language questions about those incidents, helping to improve their overall understanding without having to know special query languages.

“You can actually take your entire body of knowledge that you gather over time, put it in your own enterprise library, and then allow people — through generative AI and conversation — to tap into that knowledge,” Theresa Payton, CEO and president at cybersecurity company Fortalice Solutions, told Built In. “Being able to have that dialogue can really cut down on the time that it takes for everybody to do their job.”


 

Examples of AI in Cybersecurity

By nature, artificial intelligence is good at analyzing large amounts of data and finding patterns in them, making it ideal for a variety of tasks, including:

  • Real-Time Threat Detection: AI is used to monitor network activity across various sources in order to identify patterns. Once the system has established what is normal, it can identify anomalous events that may need further investigation as they come up.
  • Fraud Detection: AI can help flag fraudulent financial activities by analyzing spending patterns, user behavior and other historical data. A sudden increase in the number of transactions, activity in unfamiliar locations and purchases made on suspicious websites are all deviations that may suggest fraud.
  • User Authentication: By analyzing user behaviors like the number of login attempts and device usage, AI can identify unusual activities that may indicate unauthorized access. In some cases, it can also be combined with biometric information like face and fingerprint data to provide more secure and personalized authentication.
  • Attack Simulations: AI can be used to simulate certain threat scenarios in order to test how security systems will respond to them. This allows organizations to identify any vulnerabilities in their security measures and strengthen them before cybercriminals have the chance to exploit them.
  • Phishing and Deepfake Detection: AI detection tools analyze texts and emails for signs of phishing, from suspicious links to poor grammar. And to identify deepfake scams, they use advanced algorithms to assess video and audio clips for signs of manipulation, such as mismatched audio-visual synchronizations and unusual breathing patterns.
  • Automated Incident Response: Once a security threat is identified, some AI-powered tools can automatically initiate countermeasures to mitigate the damage — without the need for human intervention. Responses include isolating affected areas, blocking malicious traffic and deploying patches against any vulnerabilities. 
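The phishing-detection item above can be illustrated with a toy rule-based scorer. Production tools rely on trained models rather than fixed rules; the phrases, the URL regex and the scoring scheme here are assumptions made purely for the sketch.

```python
# Toy phishing scorer: count simple red flags in an email body.
# Real detectors use trained models; these rules are illustrative only.
import re

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action",
    "click here",
    "password expired",
]
URL_PATTERN = re.compile(r"https?://\S+")

def phishing_score(email_body: str) -> int:
    """Return a crude risk score: one point per red-flag phrase or raw link."""
    text = email_body.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += len(URL_PATTERN.findall(text))  # raw links are another weak signal
    return score

msg = "URGENT ACTION required: click here http://example.test/login to verify your account"
print(phishing_score(msg))  # 4: three phrases plus one link
```

A message scoring above some tuned threshold would be quarantined or escalated for review; an ML-based detector replaces the hand-written rules with features learned from labeled examples.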

 

Benefits of AI in Cybersecurity

With its wide range of applications and use cases, AI offers three main advantages to cybersecurity teams:

Analyzes Lots of Data Quickly

While humans may struggle to deal with large quantities of data, AI systems can process massive cybersecurity data sets quite quickly. They also tend to spot trends faster than humans can, allowing them to “home in on what’s important” and take the proper actions, according to Michael Rinehart, VP of artificial intelligence at data security company Securiti.

“Understanding your data systems is really one of the core pillars in any cybersecurity program you may have. And AI is enormously beneficial in that area,” he told Built In. 

Strengthens Cybersecurity Measures

By constantly monitoring, probing and testing a company’s security infrastructure, artificial intelligence can expose weaknesses, recommend fixes and help teams prepare for a variety of cyberattacks before they even happen. And because AI continuously learns and adapts over time, it can be ready for unknown cyberattacks in the future, providing a more robust defense against an evolving threat landscape.

Automates Repetitive Processes

AI automates repetitive processes in cybersecurity, like monitoring network traffic, analyzing logs, and scanning for vulnerabilities. And it can quickly sift through large amounts of data and identify patterns, flagging potential threats without the need for constant human intervention. This not only expedites threat detection and response times, but it also frees up security professionals to focus on more complex and strategic tasks.

“AI can speed things up,” Payton said. “The things that you used to have to do manually as an analyst in a security operations center you can now give to this kind of miniature assistant.”

But that doesn’t mean artificial intelligence can outright supplant security experts. Despite its capabilities, this technology has limitations that often require human oversight and intervention, including a lack of contextual understanding, bias and a propensity to generate inaccurate results. People are still very important to cybersecurity efforts, and should view AI as an enhancement rather than a replacement.

“It actually empowers humans to do more,” Sherrets said. “It’s not necessarily replacing humans, but it’s helping to remove some of the minutiae.”


 

Drawbacks of AI in Cybersecurity

Cybercriminals are also using artificial intelligence to their advantage, making the work of cybersecurity teams that much harder.

Used for Social Engineering Schemes

Social engineering schemes rely on psychological manipulation to trick individuals into revealing sensitive information, like credit card numbers or passwords. They span a broad range of attacks, including spear phishing, smishing and business email scams.

AI enables cybercriminals to automate many of the processes behind social engineering schemes and to craft more personalized messages to fool unsuspecting victims — ultimately helping them generate a greater volume of attacks in less time and with a higher success rate.

Used to Make Deepfakes

Deepfakes are AI-generated images, videos or audio clips that convincingly mimic real people by altering their appearance or voice, often making it look as if they did or said something they never did. They can be used to trick individuals and organizations into revealing sensitive information or even granting unauthorized access. For example, a deepfake audio clip could be used to approve a financial transaction over the phone in a two-factor authentication process. Or a deepfake video impersonating an executive could be used to convince employees to transfer funds or share confidential information.

Automates Password Hacking

Hackers use AI to crack victims’ passwords faster through techniques like brute force attacks, in which a system methodically tries every possible combination until it finds the right one — sharply reducing the time needed to breach weak or common passwords. The AI can also learn from patterns in previously compromised passwords, making educated guesses that bypass security measures even faster.

These tactics can significantly reduce the time and effort often required in compromising user accounts, making them a dangerous tool in the hands of bad actors.
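The arithmetic behind brute forcing is simple exponentiation: the keyspace is the alphabet size raised to the password length, and cracking time is keyspace divided by guess rate. A back-of-the-envelope sketch, where the guesses-per-second figure is an assumed round number, not a benchmark of any real tool:

```python
# Back-of-the-envelope brute-force cost: keyspace / guesses per second.
# The guess rate below is an assumed figure for illustration only.
GUESSES_PER_SECOND = 10_000_000_000  # assumed offline cracking rate

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters: 26^8 combinations -> roughly 21 seconds at this rate.
print(seconds_to_exhaust(26, 8))

# 12 mixed-case letters and digits: 62^12 combinations -> thousands of years.
print(seconds_to_exhaust(62, 12) / (3600 * 24 * 365))
```

The point of the sketch is the asymmetry: adding length and alphabet variety grows the keyspace exponentially, which is why password length matters far more than any single clever substitution.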

Vulnerable to Data Poisoning

Data poisoning is a type of cyberattack in which someone intentionally “poisons” or compromises an AI model’s training dataset in order to manipulate how it operates. This can be accomplished by intentionally injecting false or misleading information directly into the dataset, modifying the existing dataset or deleting a portion of the dataset. 

Altering a model during the training process can lead to biases, incorrect outputs and security vulnerabilities (like backdoors). And because most AI models are constantly evolving black boxes, it can be difficult to detect when or how a dataset has been compromised.
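Label flipping, one of the poisoning tactics described above, can be demonstrated on a toy nearest-centroid classifier. Everything here is synthetic — the two-dimensional "traffic" points, the class labels and the attack itself are invented for the illustration.

```python
# Toy demonstration of data poisoning: injecting mislabeled points into the
# training set shifts a nearest-centroid classifier's decision.
# All data is synthetic and two-dimensional for readability.

def centroid(points):
    """Mean point of a list of equal-length coordinate lists."""
    return [sum(coords) / len(points) for coords in zip(*points)]

def classify(x, benign, malicious):
    """Assign x to whichever class centroid it sits closer to."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return "benign" if dist2(x, centroid(benign)) < dist2(x, centroid(malicious)) else "malicious"

# Clean training data: benign traffic clusters low, malicious clusters high.
benign = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]]
malicious = [[5.0, 5.0], [5.2, 4.8], [4.9, 5.1]]

sample = [4.0, 4.0]
print(classify(sample, benign, malicious))           # "malicious"

# Poisoning: the attacker sneaks malicious-looking points into the benign set,
# dragging the benign centroid toward the malicious cluster.
poisoned_benign = benign + [[6.0, 6.0], [5.8, 6.2], [6.1, 5.9]]
print(classify(sample, poisoned_benign, malicious))  # now "benign"
```

The same sample is now misclassified as benign, which is exactly the attacker's goal: a handful of planted training examples quietly opens a blind spot in the model.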

Generative AI Can Be Exploited

AI chatbots and other generative AI tools pose a possible security risk to the companies whose data they were trained on. Designed to provide helpful responses to a wide range of queries, these systems may inadvertently disclose sensitive corporate information — all a bad actor has to do is feed one a few well-crafted, plain-language questions to extract details about an organization’s internal processes, proprietary information or even its employees. In his own research, Rinehart has found that this data can be accessed in as few as two queries.

“These systems, if not safeguarded appropriately, can be used as a vector of attack,” Rinehart said. “You’ve now got a channel [through] which an attacker — using only natural language — can access a company’s most sensitive information.”


 

Why Is AI Important for Cybersecurity?

The rise in AI-powered cyberattacks has made artificial intelligence an important tool in cybersecurity. From multinational phishing scams to massive data leaks, the scale and sophistication of these threats are growing at an alarming rate, and traditional security measures alone are not enough to protect our sensitive information and systems. 

AI can give everyone — large enterprises and individual people alike — an edge over cybercriminals. Its unique ability to analyze large amounts of data and quickly spot patterns makes it ideal for flagging attacks at their earliest stages, exposing network vulnerabilities and anticipating when and how a future attack will occur. Generative AI in particular is also helping to close the ongoing cybersecurity skills gap by enhancing teams’ productivity. 

At the end of the day, cybersecurity measures enhanced with AI offer a faster and more efficient way of staying one step ahead of the bad guys.

Frequently Asked Questions

How is AI used in cybersecurity?

Artificial intelligence plays a crucial role in cybersecurity, enabling real-time data analysis, anomaly detection and automated threat response to prevent attacks from happening in the first place.

Is AI replacing human cybersecurity experts?

No, AI is not replacing human cybersecurity experts. While it can automate certain tasks and analyze data quickly, AI has limitations that often require human oversight and intervention, including a lack of contextual understanding, bias and a propensity to generate inaccurate results. Human experts are essential for interpreting an AI tool’s findings, making strategic decisions based on them and correcting situations where a tool has made a mistake or been deceived.
