On a Friday night, you’re sitting in a small, cozy Italian restaurant waiting for your meal when you overhear two voices chatting passionately. You can tell by the tone of voice and the hand movements that the conversation is serious. Unfortunately, the diners are speaking Italian, which you don’t know. You watch as one of them suddenly stands up, throws down their napkin, and moves behind the other. Because you can’t follow the conversation, you have no idea what is about to happen next.
This is the reality that many security and intelligence analysts face when trying to monitor threat actor activities across illicit communications channels. Monitoring the dark web for evolving threats is critical to cybersecurity and risk mitigation, but the barriers of language, colloquialisms and context present unique challenges to effectively tracking the dark web and illicit channels.
Fortunately, thanks to artificial intelligence (AI) and machine learning (ML) tools, security teams can effectively and cost-efficiently monitor these illicit threat actor communications for proactive security risk mitigation, while minimizing the time lost to translation and contextualization.
How Do LLMs Improve Cybersecurity?
With AI and LLMs, cybersecurity teams can solve many of the problems that plague the industry:
- Overcome internal constraints, including limited budget and time resources.
- Identify future targets of cybercrime.
- Identify new tactics and tools in use by threat actors.
Challenges of Monitoring Threat Actor Communications
For most security teams, monitoring threat actor communications is a time-consuming activity that requires the right people with the right skills.
Too Many Locations
Stock photos might have you believe that threat actors all work in dark basements wearing black-hooded sweatshirts, communicating via terminal-based chat apps that resemble a 1990s warez BBS instead of iMessage. In reality, malicious actor communications take place in many locations across the digital landscape, including:
- Social Media and Chat Applications: Threat actors can often be found posting and communicating on Twitter, Discord, Steam and a wide variety of traditional applications that have a chat or direct message feature.
- Cybercrime Forums: Much of the sales and marketing behind cybercrime happens on relatively public forums. Here, threat actors may communicate openly in public threads, in forum shoutboxes (where any member can post without a specific topic), or directly with each other through direct messages.
- Telegram Channels: Telegram currently stands in a league of its own in the cybercrime underworld. It is a messaging platform where users can exchange messages, share files and create public and private groups. Forums, ransomware groups, infostealer authors and cybercriminals of many distinctions operate either in part or almost entirely on Telegram.
With information scattered across several chat apps, hundreds of forums and thousands of Telegram groups, many security teams simply lack the time to adequately monitor all locations because they need to focus on immediate, rather than potential, security issues.
Budgetary and Skill Constraints
Cybersecurity teams already struggle with an inability to hire as many people as they’d like to manage their incoming alerts. For example, the 2024 SME Security Workload Impact Report found that 73 percent of SME IT teams miss alerts, with 40 percent blaming a lack of staffing and 38 percent blaming a lack of time.
Resource and time limitations are intrinsically interrelated. When team members wear multiple hats, they never feel they have enough time to complete their daily tasks. These staffing constraints, combined with the time-consuming manual processes the work requires, turn threat actor monitoring into a nice-to-have rather than a must-have.
Unsurprisingly, then, in the ISC2 Cybersecurity Workforce Study 2023, 23 percent of respondents said that their security team had a skills gap around threat intelligence analysis, which includes monitoring malicious actor communications.
Effectively monitoring threat actor communications means that organizations need people who know how to navigate the dark web and illicit Telegram channels, blend into and monitor forums and channels, and speak both the formal and informal languages that threat actors use.
Like people on the clear web, cybercriminals have their own internet vernacular, much as a member of Gen Alpha might use “mogging” or “skibidi,” terms I won’t attempt to use accurately here.
Additionally, cybercriminals aren’t always native English speakers, meaning that security analysts need to know languages like Russian, Arabic, Spanish and French, among others. This knowledge gives analysts a much broader ability to proactively search for threats.
Benefits of Monitoring Threat Actor Communications
To keep pace with the continuously evolving threat landscape, organizations need to turn this type of monitoring into a must-have. It’s the only way to understand what threat actors are thinking and how they view potential targets.
Some benefits of tracking these illicit communications include being able to monitor for all of the following:
- Brand reputation: Identify threats to an organization's reputation like fake or phishing websites, counterfeit products, unauthorized use of brand assets, social media impersonation, etc.
- Compromised credentials: Identify stolen or leaked credentials or cookies that attackers can use to gain unauthorized access to corporate systems, to customer environments and to application databases or third-party applications in use.
- Data leakage: Identify potential data leaks like corporate financials, competitor research, sales contracts, or more technical data like hardcoded credentials stored in GitHub repositories or AWS access keys exposed in public .env files (see the sketch after this list).
- Domain-based security: Identify weaknesses in domains by tracking expiring domains, monitoring website uptimes, and ensuring SSL certificates remain valid.
- Supply chain risk: Identify the third-party vendors, and the vulnerabilities in them, that threat actors actively target and that could impact the companies they work with up- or downstream.
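To make the data leakage example concrete, here is a minimal sketch of automated leak detection, assuming scraped posts or repository contents are already available as plain text. The two patterns (AWS access key IDs, which typically begin with AKIA, and a deliberately naive hardcoded-credential check) and the sample input are illustrative only; production systems use far broader rule sets.

```python
import re

# Illustrative patterns only; real leak detection uses far larger rule sets.
LEAK_PATTERNS = {
    # AWS access key IDs are 20 characters and typically start with "AKIA".
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Naive catch for hardcoded credentials in .env-style files.
    "hardcoded_secret": re.compile(
        r"(?i)\b(password|secret|api_key|token)\s*=\s*['\"]?[^\s'\"]{8,}"
    ),
}

def scan_for_leaks(text: str, source: str) -> list[dict]:
    """Return potential leak findings discovered in a blob of text."""
    findings = []
    for rule_name, pattern in LEAK_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({
                "source": source,
                "rule": rule_name,
                # Truncate the match so the alert itself doesn't re-leak the secret.
                "match": match.group(0)[:40],
            })
    return findings

# Invented sample input (the AWS key is the well-known documentation example).
sample = "password='hunter2hunter2'\nAWS_KEY=AKIAIOSFODNN7EXAMPLE"
for finding in scan_for_leaks(sample, source="public .env file"):
    print(finding)
```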
Using AI and LLMs to Monitor Threat Actors
Fortunately, despite all these challenges, AI and LLMs serve as a powerful enhancement for monitoring threat actor communications, acting like a covert listening device with a built-in translator at the next table.
Overcome Internal Constraints
Monitoring cybercriminal activity can feel like trying to overhear a whispered conversation in a crowded restaurant. With AI and ML tools, however, security teams can overcome the budgetary and staffing constraints that prevent them from implementing cybercriminal communication monitoring. By using these technologies, security teams can:
- Automate scanning: Monitor conversations across various dark web and illicit Telegram channels in a single location, eliminating time-consuming manual processes.
- Filter out noise: Gain context about the information, explaining complex technical information at a level that helps both junior and senior analysts respond to the risk faster.
- Translate communications: Convert and summarize foreign language posts into English so that security analysts can gain actionable intelligence, accounting for slang and colloquialisms in context.
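As a rough illustration of that translation and summarization step, the sketch below assumes the OpenAI Python SDK with an API key already set in the environment; any capable LLM provider would work similarly, and the model name and prompt wording are assumptions rather than recommendations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_and_summarize(post_text: str) -> str:
    """Ask an LLM to translate a scraped post and summarize the risk-relevant points."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model could be swapped in
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a threat intelligence assistant. Translate the post into "
                    "English, preserving slang and colloquialisms in context, then "
                    "summarize any actionable intelligence in two or three bullets."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with a hypothetical scraped post:
# print(translate_and_summarize(scraped_post_text))
```

In practice, output like this feeds an analyst review queue rather than being acted on automatically.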
Identify Future Targets
A large part of the cybercrime ecosystem is built on the sharing of stolen information. Whether that information is credentials or databases of compromised devices, it often provides insights into the organization domains linked to credentials, the users linked to organizations, and the devices linked to domains and networks. By creating target keywords that the AI can look for, security teams can gain faster insights that help them identify whether cybercriminals are discussing their organization.
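A minimal sketch of that keyword-watchlist idea, assuming scraped messages are already collected as simple records and using hypothetical organization assets as the watchlist:

```python
# Hypothetical assets tied to your organization; in practice this list is
# generated from asset inventories and domain registrations.
WATCHLIST = {"examplecorp.com", "vpn.examplecorp.com", "examplecorp"}

def flag_messages(messages: list[dict]) -> list[dict]:
    """Return messages whose text mentions any watchlist keyword."""
    hits = []
    for msg in messages:
        text = msg["text"].lower()
        matched = [kw for kw in WATCHLIST if kw in text]
        if matched:
            hits.append({**msg, "matched_keywords": matched})
    return hits

# Invented examples of scraped messages.
scraped = [
    {"channel": "telegram:stealer-logs", "text": "fresh logs incl. vpn.examplecorp.com creds"},
    {"channel": "forum:marketplace", "text": "selling RDP access, EU retail sector"},
]
print(flag_messages(scraped))
```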
Additionally, when organizations purchase solutions built to operationalize threat actor communication monitoring, they can analyze the collected data more effectively. For example, threat actors are still human, and many have signature speech patterns that identify them. With AI and natural language processing, security teams can find similarities between malicious actors, groups and forums for insights into the risks facing their specific organization, at a scale that is difficult to achieve manually.
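That kind of speech pattern comparison can be approximated with standard NLP tooling. The sketch below uses scikit-learn’s TF-IDF vectorizer over character n-grams plus cosine similarity; the handles and post snippets are invented, and real stylometric attribution combines many more signals than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented writing samples per handle; real analysis uses much larger corpora.
posts = {
    "handle_a": "selling fresh logs, escrow only, pm me for proofs",
    "handle_b": "fresh logs for sale. escrow accepted. dm for proof",
    "handle_c": "looking for a pentester to join a long-term project",
}

# Character n-grams capture spelling quirks and punctuation habits better
# than word tokens on short, slang-heavy posts.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vectorizer.fit_transform(list(posts.values()))

similarity = cosine_similarity(matrix)
handles = list(posts)
for i in range(len(handles)):
    for j in range(i + 1, len(handles)):
        print(f"{handles[i]} vs {handles[j]}: {similarity[i, j]:.2f}")
```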
Identify New Tactics and Tools
If attackers always followed the same steps, security teams would have few problems. However, threat actors collaborate effectively across their illicit communications channels. Ransomware-as-a-Service (RaaS) business models have made it easier for less sophisticated cybercriminals to deploy successful ransomware attacks. Instead of every threat actor creating a new tool or tactic, the sophisticated actors build these tools and then sell them through illicit communications channels. The cybercrime ecosystem commodifies tactics and techniques, expanding the number of people able to deploy attacks.
Using AI and ML to monitor threat actor communications, security teams can do the following:
- Quickly uncover new exploit Proof of Concept (PoC) code
- Identify threat actors’ tactics
- Attribute events to specific actors and groups
- Level up their red teams by sharing insights about new attack methods
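As a minimal sketch of the first two points, the snippet below pulls CVE identifiers, tracked product names, and PoC-related keywords out of a single post; the product watchlist and the example post (including its CVE number) are invented for illustration.

```python
import re

# CVE identifiers follow the pattern CVE-YYYY-NNNN (four or more digits).
CVE_PATTERN = re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.IGNORECASE)

# Hypothetical product watchlist mapped from the software your org runs.
TRACKED_PRODUCTS = {"confluence", "fortinet", "citrix"}

def triage_post(post: str) -> dict:
    """Extract CVE mentions, tracked products, and PoC keywords from one post."""
    text = post.lower()
    return {
        "cves": sorted({m.upper() for m in CVE_PATTERN.findall(post)}),
        "products": sorted(p for p in TRACKED_PRODUCTS if p in text),
        "mentions_poc": any(kw in text for kw in ("poc", "proof of concept", "exploit")),
    }

# Invented example post.
print(triage_post("Selling working PoC for CVE-2024-12345, tested against Confluence DC"))
```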
Boost Your Reach With AI
Monitoring cybercriminal communications should become a must-have for organizations. To think like a cybercriminal, security teams need insights into how the modern cybercriminal ecosystem works. In the constant battle against malicious actors, security teams should be equipped with helpful technologies, like AI and ML, that enable them to scale their efforts and take proactive steps to mitigate cyber risk based on what threat actors say they plan to do.