The unexpected growth of online systems and the corresponding rise in traffic have led to an unprecedented increase in malicious network activity. Moreover, with face-to-face discussions moving into VoIP environments and an ever-greater volume of information traveling over network channels, the available attack surface has grown markedly since the start of 2020.

Ironically, the same machine learning technologies that are improving cybersecurity systems are now being turned against those very systems. Venerable security protocols and practices, some dating back decades, are unprepared for imaginative new approaches to exfiltration, phishing, identity theft, network incursion, and password cracking. With so many off-the-shelf solutions outdated, this turbulent period is better addressed by AI consultants.

In this article, we’ll explore why artificial intelligence has become essential security technology, using two of the most common attack types as examples.


 

Why Use AI in Cybersecurity? 

Machine learning systems can detect behavioral patterns in vast amounts of historical data across a range of applications and processes in the cybersecurity sector. Unfortunately, that data can be difficult to obtain, outdated by the time it is processed, or so specific to particular cybersecurity scenarios that models trained on it overfit.
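As a minimal sketch of what this kind of behavioral pattern detection can look like, the snippet below trains an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on synthetic “normal” flow statistics and then scores new observations. The feature names, distributions, and thresholds are illustrative assumptions rather than a production design, and the closing comment points at the staleness and overfitting caveat described above.

```python
# Minimal sketch: behavioral anomaly detection on network-flow features.
# Feature names and values are illustrative assumptions, not a production set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" traffic: bytes/sec, packets/sec, distinct ports contacted.
normal = np.column_stack([
    rng.normal(500, 80, 5000),   # bytes/sec
    rng.normal(40, 8, 5000),     # packets/sec
    rng.poisson(3, 5000),        # distinct destination ports
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new observation that floods many ports at high rates.
suspect = np.array([[4000, 600, 90]])
print(model.predict(suspect))   # -1 => flagged as anomalous

# Caveat from the article: if the training window is stale or too narrow,
# legitimate-but-new behavior (e.g., a traffic spike after a product launch)
# can also score as anomalous, because the model has overfit to old patterns.
print(model.predict([[1500, 120, 5]]))  # may also return -1
```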

For this reason, there is little prospect of a “set-and-forget” solution to an ever-evolving threat landscape, in which a rarely changing, installed product simply updates its definitions periodically from a remote source. Those days are over. Now, new threats may arrive through completely unexpected channels, such as a telephone call or a VoIP chat, or may even be embedded in machine learning systems themselves.

This new reality makes clear the need for proactive systems designed and maintained by specialists in cybersecurity consulting, or for the will and resources to build equivalent protection in-house. Because the criminal incursion sector is innovative and resourceful, the response requires equal commitment.

 

Social Engineering Attacks Proliferate

A staggering 84 percent of U.S. citizens have encountered social engineering attempts, according to a recent survey by NordVPN. Although authentication systems have migrated toward verifying biometric characteristics such as video or voice data, as well as fingerprints and movement recognition, the same research that underpinned these advances is constantly producing new methods of falsifying that data.

Currently, deepfakes are among the most popular forms of social engineering attack. By impersonating influential people, bosses, or friends and loved ones, attackers can trick victims into handing over money or disclosing confidential information. In their 2021 Cyberthreat Analysis, Insikt Group’s security consultants forecast a drastic rise in deepfake attacks.

 

Fighting Deepfakes With AI

The state sector has been actively developing deepfake detection methods since the technology’s initial emergence in 2018. It’s an ongoing game of whack-a-mole: deepfake software creators treat publicity around newly discovered “tells” (such as unblinking faces) as a free bug list, systematically closing most of these loopholes shortly after they are publicized.

Emerging attack architectures are quite resistant to challenge, with many incursion attempts anticipating the negotiation of multifactor authentication systems like those commonly implemented for mobile banking security. In cases where biometric data is faked (such as the use of masks, “master faces,” and even neurally crafted physical makeup to defeat facial ID systems), authentication systems that detect the subject’s liveness are an emerging front in bolstering biometric security.

The topic of liveness detection has inspired LivDet, a biannual hackathon begun in 2009 that gathers the latest AI-based techniques designed to combat deceptions based on iris and fingerprint spoofing.

One recent system developed by researchers at the University of Bridgeport uses anisotropic diffusion (the way light interacts with skin) to confirm that a face is authentic, while others have used blinking as an index of liveness (though this is now correctable in deepfake workflows). In June 2021, a new liveness detection method was proposed that discerns unforgeable lip motion patterns.
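To make the blinking cue concrete, here is a self-contained sketch of the widely used eye aspect ratio (EAR) approach to blink counting. The per-eye landmark coordinates would normally come from a face-landmark detector; the EAR values below are hypothetical placeholders. And, as noted above, modern deepfake workflows can synthesize blinks, so this is at best one weak signal among many.

```python
# Sketch of the blink-based liveness cue: the eye aspect ratio (EAR) drops
# sharply when an eye closes, so a video with no EAR dips suggests an
# unblinking (possibly synthetic) face. Landmark positions are assumed to
# come from any face-landmark detector; values here are placeholders.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks ordered around the eye contour."""
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

def blink_count(ear_sequence, closed_thresh=0.2, min_frames=2):
    """Count EAR dips that persist for at least `min_frames` frames."""
    blinks, run = 0, 0
    for ear in ear_sequence:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Toy per-frame EAR values: eyes open (~0.3) with one two-frame blink.
ears = [0.31, 0.30, 0.29, 0.12, 0.10, 0.28, 0.30, 0.31]
print("blinks:", blink_count(ears))   # 1 -> passes a naive liveness check
```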

 

Combatting Network Incursion With Machine Learning

What of the remaining 16 percent of attacks that don’t rely on human susceptibility? Effective cybersecurity tools and systems for enterprise network management must now take an anticipatory, AI-based approach to detecting more traditional types of attack, such as botnets, malware traffic, and other types of network assaults that may fall outside of recognized and protected attack vectors.

Research into AI-based intrusion detection systems (IDSs) has advanced notably over the last 11 years of GPU-accelerated machine learning. Machine learning-enabled systems can ingest historical data about attacks and incorporate those patterns into active defense frameworks.
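As a toy illustration of that ingestion step, the sketch below trains a supervised classifier on synthetic connection records loosely styled after KDD/NSL-KDD attributes. Every feature name and distribution here is an assumption made for illustration; a real IDS would use captured flow data, far richer features, and continual retraining.

```python
# Illustrative sketch of a supervised IDS: train on historical, labeled
# connection records and score new traffic. The synthetic features stand in
# for KDD-style attributes (duration, bytes, error rates).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
n = 4000

# Synthetic "historical" records: [duration, src_bytes, dst_bytes, serror_rate]
benign = np.column_stack([
    rng.exponential(2.0, n), rng.normal(800, 200, n),
    rng.normal(1200, 300, n), rng.uniform(0.0, 0.1, n),
])
attack = np.column_stack([
    rng.exponential(0.2, n), rng.normal(60, 30, n),
    rng.normal(10, 5, n), rng.uniform(0.6, 1.0, n),   # SYN-flood-like error rates
])

X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)   # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "attack"]))
```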

Since the base channels through which most network attacks occur are founded on some of the internet’s oldest architecture, traditional DoS-based attacks and other types of network incursion operate in a limited environment and set of parameters compared to the new wave of human-centered incursion campaigns.

 

Protecting Software-Defined Networks With AI

In 2021, researchers from the Department of Computer Engineering at King Saud University in Saudi Arabia obtained outstanding results against a gamut of incursion techniques with a new architecture designed for software-defined networks (SDNs).

To accomplish this, the researchers developed a comprehensive database of attack-type characteristics, which also serves as a list of some of the likeliest routes into a network. These include:

Common Types of Cybersecurity Attacks

  • DoS Attacks — The flooding of networks with bogus traffic designed to overload the system (a toy rate-based detector is sketched after this list).
  • Probes — Hunting out vulnerable or exposed ports in security systems.
  • U2R (User to Root) Attacks — Buffer overflow attacks that seek to collapse security safeguards through software vulnerabilities.
  • Remote to Local Attacks — Sending malicious network packets designed to obtain write access to unprotected parts of the target system.
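As referenced in the DoS item above, here is a deliberately simple sketch of rate-based flood detection: count each source’s requests inside a sliding time window and flag sources that exceed a threshold. The window length and threshold are arbitrary example values; real systems layer many more signals on top of this.

```python
# Toy DoS flood detector: flag a source IP whose request rate within a
# sliding window exceeds a threshold. Values below are example assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100

recent = defaultdict(deque)   # source IP -> timestamps of recent requests

def is_flooding(src_ip: str, now: float) -> bool:
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()          # drop requests that fell out of the window
    return len(q) > MAX_REQUESTS

# Simulate one host sending 500 requests within a single second.
flagged = any(is_flooding("203.0.113.9", t / 500) for t in range(500))
print("flood detected:", flagged)   # True
```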


 

AI Is the New Front Line in Cyber Defense

AI-based cybersecurity attacks are evolving into industrialized, generic attack packages that incorporate machine learning technologies and are increasingly common in illicit markets on the dark web. Though systematic attacks remain susceptible to systematic defenses, the new wave of incursions requires a vanguard approach to local and cloud-based cybersecurity systems. The objective is now to anticipate rather than respond.

In most cases, this will entail custom cybersecurity solutions developed with the same avidity and obsessive attention to detail evident in the work of a well-motivated and well-equipped new generation of attackers. It may be a long time before the attack vectors consolidate again into so narrow a channel as a mere TCP/IP switch. In the meantime, we’re living in an era where vigilance and creativity are prerequisites for the effective protection of organizations.
