AI Is Accelerating the Post-Trust Era. Is Your Business Ready?

Just as AI gives threat actors a whole new suite of tools to execute attacks, its increasing implementation opens new vulnerabilities as well. Our expert offers tips for protecting yourself.

Written by Rik Ferguson
Published on Feb. 16, 2024
AI Is Accelerating the Post-Trust Era. Is Your Business Ready?
Image: Shutterstock / Built In
Brand Studio Logo

The conversation around AI has become a media circus. In less than a year, the technology has progressed from adding sideshow-worthy appendages to images to highly convincing deepfakes that now plague social media. Scammers are using AI-generated celebrities to trick their victims with fake ads and stuffing meetings with deepfaked attendees for financial scams. As P.T. Barnum once said, “A sucker is born every minute.”

Except there’s no evidence P.T. Barnum ever said that. The fact that this quote has been misattributed to him for more than 100 years just goes to show the long-lasting impact of misinformation, a problem that the proliferation of sophisticated AI may exacerbate.

Beyond misinformation, cybersecurity professionals should be concerned that threat actors can now use AI to optimize their attacks. Conversely, organizations must be careful about implementing AI solutions without first understanding their inherent and novel risks.

Ultimately, AI risks are multifaceted, and organizations must address them through a variety of solutions, including proactive risk assessments, data and analytics, and training to develop the soft skills necessary to discern AI-enabled social engineering attacks.

Know Your VulnerabilitiesDeepfake Phishing: Is That Actually Your Boss Calling?

 

Welcome to the Post-Trust Era

The most immediate risk of AI is the increasingly common use of deepfakes. In the broader society, the government is calling for regulation after “alarming” deepfakes of Taylor Swift circulated on social media. The use of deepfakes to disseminate disinformation about the 2024 election is only a matter of (probably very little) time.

Enterprise cybersecurity should be far more concerned with the already present risk of threat actors using deepfakes to conduct social engineering attacks, however. For example, an AI-generated video or voicemail message could convince a finance team member to wire transfer a fraudulent payment or for a help desk worker to provide unauthorized access to IT systems.

These technologies aren’t hard for potential attackers to access. Tencent Cloud has commercialized deepfakes-as-a-service, which threat actors could use for illegitimate purposes. A variety of phishing-as-a-service and ransomware-as-a-service offerings are available on the dark web as well. The nature of the threat has become commercialized, and advanced attack techniques have become commodified, which enables even novice threat actors to conduct sophisticated cyberattacks.

 

AI Isn’t a Panacea

Organizations should be equally wary of rushing to implement AI solutions, whether to keep up with the increasing sophistication of cyberattacks or any other number of promising operational efficiencies. The path to achieving AI’s potential is paved with risks, however. For example, a car dealership made the news when its AI chatbot agreed to sell a car for a dollar

Organizations that have deployed their own large language models (LLMs) should be aware of their propensity to confidently hallucinate incorrect results. Likewise, consider the risk of generating code with TuringBots, which could introduce new vulnerabilities into software. Furthermore, these LLMs could themselves be the target of attack for threat actors, who might exfiltrate data from the models or poison their results with prompt injection attacks.

What Are Prompt Injection Attacks?

An attacker can manipulate a generative AI system by inputting specially crafted prompts that trick the AI into generating outputs that breach or expose its intended operational parameters or ethical guidelines. This can result in the unauthorized access to information, dissemination of misinformation, or execution of tasks that the AI was designed to avoid, such as creating harmful content.

The point is that, despite their advances, you shouldn’t blindly trust current AI offerings. This is why so many AI providers are attempting to shield themselves from liability by only offering restricted “private previews” of their tools – the risks of hallucination and inaccuracy is still too high.

 

Employee Training as a Source of Truth

Finding a source of truth in a world that considers itself post-trust can be difficult. As politicians are calling for regulation of AI to prevent its weaponization, Microsoft is promising they will watermark AI-generated pictures and video, while Adobe is partnering with camera manufacturers to authenticate images at the point of capture. These watermarks and certificates serve as a way to verify the provenance and authenticity of images, similar to the wealth of anti-counterfeiting technologies embedded in paper money the world over. 

In the absence of this sort of authentication, the Department of Homeland Security (DHS) has shared tips on how to detect deepfakes. Key giveaways include blurring of the face, unusual changes of skin tone, unnatural blinking, choppy sentences, and awkward or unusual inflection. To protect against threats, you can develop these analytical skills in your staff through training.

From code generation with TuringBots to AI-enabled security solutions, many organizations are turning to AI solutions to help close their various skills gaps, but they must also consider the risk unfettered AI solutions pose. For example, TuringBots risk introducing vulnerabilities into code. New AI solutions require new corporate policies to prevent their misuse and should be paired with external analytics and often with human interaction to detect anomalies in their behavior.

Here are three policies that organizations should consider:

3 Smart Policies to Mitigate AI Risks

  1. Prohibit the use of intellectual property or customer data in AI prompts to avoid unintended data leaks.
  2. Ban the use of generative code in production environments to avoid introducing vulnerabilities.
  3. Avoid cross-polluting searches with superfluous or unrelated context that can create bias, inaccuracy, or confusion in results. 

The ability to establish context is one of the most powerful aspects of the natural language processing (NLP) power of LLMs, but it is also arguably its greatest weakness. Whereas a search engine only allows users to conduct singular queries, you can ask LLMs clarifying questions and refine prompts to improve the results. Prompts can skew these results, however, by providing improper context or if theyre deliberately designed to “game the system” via prompt injection.

Organizations that are implementing LLMs or integrating them into their own products must ensure that their models are “grounded” with use-case specific data to ensure their results remain relevant to their initial intended purpose. Similarly, the risks of data exposure, hallucination and misuse mean there is still too much risk for organizations to deploy enterprise solutions that provide unfettered, autonomous access to AI-enabled queries and detections.

A middle ground exists, however, where pre-configured prompts can be embedded into cybersecurity solutions to deliver reports that are prioritized and easily understood. For example, a pre-configured prompt could ask “Which of my at-risk devices have admin access?” as a way of prioritizing their remediation or “What is my mean time to detect and respond to security incidents?” as a way of reporting to a CEO.

Handling ThreatsTelecom Fraud Rages On. Can SMEs Keep Up in 2024?

 

Visibility Is the Ringleader

AI needs a ringleader to usher in its future. Just as a ringleader ensures every performer is executing in sync, obtaining visibility into how AI is impacting your network brings clarity to the interplay of data, algorithms and processes, ensuring harmony in their performances. And just as a ringleader anticipates challenges, visibility enables mitigating risks in the emerging AI ecosystem. Finally, just as a ringleader instills confidence in the crowd and the performers, visibility provides transparency and accountability, so that stakeholders can build trust in the deployment of AI systems.

Hiring Now
Klaviyo
Consumer Web • eCommerce • Marketing Tech • Retail • Software • Analytics • Generative AI
SHARE