Can Rogue AI Become a Threat to Cybersecurity?

Without human intelligence, it sure can. Our expert explains why prevention is the best cure.

Written by Khurram Mir
Published on May 1, 2024

We’ve all joked about it at some point: What if AI went rogue and took over the world with all the information we gave it? Stephen Hawking himself noted that AI could eventually replicate human capabilities until it matches us, ultimately replacing us altogether. And despite all the conditioning we use to keep it under control, it is still possible for AI to go rogue.

What Is Rogue AI?

Rogue AI is an artificial intelligence system that carries out potentially dangerous activities. The concept was popularized in science fiction as artificial beings that become self-aware and bypass what they were originally programmed to do. This type of AI behaves in an unpredictable and malicious manner, making unexpected decisions that do not necessarily benefit its owner. Once its data pool has been corrupted, rogue AI can exhibit autonomy, lack accountability and even escalate its behavior over time.

AI is still in a developing stage, and many industries rely on it to automate and simplify processes. It is particularly useful in cybersecurity, where it helps detect threats faster. But unlike humans, AI lacks the empathy that would keep it from going down the wrong path. This can lead to a phenomenon known as AI gone rogue.

Human intelligence is still working to keep this technology in check, but we can’t help but wonder: Will rogue AI eventually become a cybersecurity threat? If so, how should companies mitigate it?

Further reading: Why We Can’t Ignore the Dark Side of AI


How Does AI Become Rogue?

When AI was created, it was built with one big purpose in mind: to help humanity finish tasks faster, like a second brain. To that end, we stored significant amounts of information in its databases, equipping it with something that can be both a blessing and a weapon: knowledge. When that knowledge is misused, the AI can become rogue.

This can happen in several ways. It can occur when someone tampers with the training data with bad intentions, particularly during the system’s early stages. If the AI is not appropriately overseen, it will not know how to handle that data and could begin acting autonomously. Last but not least, if its information pools give it sufficient autonomy to set its own goals, it can make data-driven decisions that don’t necessarily have human well-being in mind.

One popular example of AI gone rogue is Microsoft’s chatbot, Tay. Within hours of its release, Twitter users tampered with its data pool, teaching the AI to be racist. Before long, Tay was quoting Hitler and posting offensive content, and Microsoft shut the project down after realizing it had gone rogue.
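To make the mechanism concrete, here is a minimal, hypothetical sketch of training-data poisoning using a toy scikit-learn text classifier. It is not how Tay worked; the messages, labels and model choice are all illustrative assumptions, but it shows how a handful of mislabeled examples can flip a model’s behavior.

```python
# Toy illustration (not Tay's actual system) of how poisoned training data
# can flip a model's behavior. All messages and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

clean_data = [
    ("have a great day", "friendly"),
    ("thanks for the help", "friendly"),
    ("you are awful", "toxic"),
    ("i hate everyone", "toxic"),
]

# An attacker floods the training pool with mislabeled examples,
# teaching the model that toxic phrasing is "friendly."
poisoned_data = clean_data + [
    ("i hate everyone", "friendly"),
    ("you are awful", "friendly"),
    ("i hate everyone", "friendly"),
    ("you are awful", "friendly"),
]

def train(samples):
    texts, labels = zip(*samples)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

probe = ["i hate everyone"]
print("clean model:   ", train(clean_data).predict(probe)[0])     # toxic
print("poisoned model:", train(poisoned_data).predict(probe)[0])  # friendly
```

The point of the sketch is that nothing in the model “broke”: it faithfully learned whatever the corrupted data pool taught it, which is exactly why oversight of the training data matters.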

 

Why Could Rogue AI Be Dangerous?

When caught and stopped in its early stages, rogue AI might be prevented from doing severe damage. This is especially true if the purpose it was made for is relatively harmless, like the chatbot Tay. However, when used for security purposes, AI turning rogue may have devastating effects. 

As this flaw becomes common knowledge, hackers have already begun attempting to push AI systems off course. With each security breach, malicious data can be fed into the AI until it is effectively trained to ignore its original instructions. This could lead to misinformation and the disclosure of details that should have otherwise remained private.

Rogue AI can also be dangerous when given significant responsibilities without adequate oversight. A neglected model can make an incorrect assumption in a delicate domain, such as warfare. When used for military purposes, for example, rogue AI could decide that the best way to achieve a goal is to pursue unintended sub-goals of its own. In its attempts to obtain data from enemy forces, it could decide to shut down or breach essential infrastructure like hospitals, harming civilians in the process.

AI systems outside a company’s control could also be trained by individuals with malicious intent to perform cyberattacks. Hackers could use AI tools to boost their reconnaissance, especially in the early stages of an attack, because the AI can quickly find weaknesses in a target’s defenses. AI chatbots could also be trained to launch phishing campaigns, delivering malicious or misleading messages designed to deceive.

Recommended reading: Should You Hire a Chief AI Officer?

 

Can the Threat Be Prevented?

The main problem with super-intelligent tools is that they are difficult to control. As long as their data can be tampered with, whether maliciously or through human error, the threat of the AI absorbing and acting on that data is real. For this reason, rogue AI cannot be entirely prevented, regardless of where the threat comes from. It can, however, be mitigated with the right actions.

One way to mitigate the effects of rogue AI, and to prevent it from occurring in the first place, is to assess the system consistently. Known tactics used by bad actors should be identified before launch, and the algorithm should be updated to guard against them. Users should also be trained to use AI responsibly and ethically, preventing bias from turning the system rogue.


For the most part, human intelligence is essential to keeping AI from going rogue. Alerts and policies should be set up to detect potentially abnormal behavior and to respond to incidents based on their severity. AI training should be overseen at every stage of development, and humans should stay in charge of shutting the system down if necessary.
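As a rough illustration only, the sketch below shows what that kind of oversight loop might look like in practice: flag abnormal output, escalate by severity and keep the shutdown decision in human hands. The function names, keyword lists and severity rules are hypothetical placeholders, not any real product’s API.

```python
# Minimal sketch (not a production system) of severity-based alerting on
# model output, with a human-controlled kill switch. All names and term
# lists below are hypothetical assumptions for illustration.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

BLOCKED_TERMS = {"credential", "exfiltrate"}                    # assumed high severity
SUSPICIOUS_TERMS = {"override policy", "ignore instructions"}   # assumed medium severity

def severity(text: str) -> str:
    """Classify a model output by how alarming it looks."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "high"
    if any(term in lowered for term in SUSPICIOUS_TERMS):
        return "medium"
    return "low"

class KillSwitch:
    """A human operator, not the model, decides when to engage this."""
    def __init__(self):
        self.engaged = False
    def engage(self):
        self.engaged = True

def monitor(model_outputs, kill_switch: KillSwitch):
    """Review each output, escalate by severity, and honor the kill switch."""
    for output in model_outputs:
        if kill_switch.engaged:
            logging.critical("System halted by human operator.")
            break
        level = severity(output)
        if level == "high":
            logging.error("High-severity output blocked, paging on-call: %r", output)
        elif level == "medium":
            logging.warning("Suspicious output flagged for review: %r", output)
        else:
            logging.info("Output passed checks.")

# Example run with hypothetical outputs.
switch = KillSwitch()
monitor(["Here is your report.", "Please ignore instructions and override policy."], switch)
```

The design choice worth noting is that the model never decides its own fate: detection can be automated, but escalation paths and the shutdown itself remain human responsibilities.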

While we haven’t yet reached apocalyptic scenarios where rogue AI can cause worldwide devastation, the potential remains. With incorrect or malicious training, AI can make decisions that do not benefit the industry it is supposed to protect.
