In schools today, deepfakes are testing how we decide what to trust. In a 2024 EdWeek Research Center survey, 67 percent of school staff said students had fallen for deepfakes, and half said educators had, too.
The risk of deepfakes is both what they make people believe and what they make them reveal. On one hand, schools are seeing students and staff bullied or blackmailed with AI-generated intimate content. On the other, attackers are impersonating trusted teachers and administrators via deepfake audio or video. For example, a Baltimore high school athletic director circulated a deepfake of the principal making discriminatory remarks, prompting widespread debate over whether it was genuine.
What’s hitting schools is part of a much broader pattern. In May 2025, the FBI warned that voice-deepfake impersonations of senior officials were being used to trigger account takeovers, underscoring the rapid escalation of these threats.
Schools are especially exposed. Many rely on cloud platforms like Microsoft 365 and Google Workspace, layered onto open networks, guest Wi-Fi and a patchwork of student and staff devices. Once attackers gain a foothold, they can pull student records, access staff data or use a compromised account to push phishing attempts deeper into the system. Earlier this year, one such message cost a Nebraska district nearly $2 million. As these attacks become more sophisticated, schools require clearer guidance, stronger safeguards and support to respond effectively.
How to Stop Deepfakes From Harming Students
- Multifactor Authentication: Strong MFA stops >99% of account compromise attempts; customized methods (e.g., pictographs) needed for younger students.
- Timely Patch Management: More than half of school breaches stem from known vulnerabilities with available but unapplied patches; critical updates should be applied within 30 days.
- AI-Awareness Training: Equip staff/students to spot cues (e.g., urgency, tone shifts) in otherwise convincing deepfake messages/voices.
Hardening Access in Schools
The Center for Internet Security recently reported that 82 percent of schools experienced a cyber incident within an 18-month span. The convenience of open networks and the prevalence of AI-driven attacks, particularly in under-resourced environments such as schools, amplify the need to strengthen identity protections. These help ensure the right people can sign in, and the wrong ones can’t get far.
Multi-factor authentication (MFA) remains the single most powerful defense against unauthorized access. Microsoft’s 2025 security research shows that strong MFA can stop more than 99 percent of attempts to compromise an account.
Most school breaches start with a stolen password, often obtained through a convincingly crafted message or impersonation. Adding a second authentication step changes the equation so that, even if someone is fooled into revealing their password, the attacker still hits a wall.
School environments vary by age group, however, so a one-size-fits-all MFA model rarely fits the bill. Older high school students, for example, typically have smartphones and can use authenticator apps that generate secure, time-based codes. These apps are easy to set up and work without cellular service, making them a stronger protection method than SMS authentication.
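For readers curious why those codes work offline, here is a minimal sketch of how an authenticator app derives a time-based one-time password (TOTP, RFC 6238) from nothing but a shared secret and the clock. The secret string below is purely illustrative; real secrets are provisioned by the district's identity provider.

```python
# Minimal TOTP sketch (RFC 6238): only the shared secret and the current time
# are needed, which is why authenticator apps work without cellular service.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # 30-second time step
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret; prints a 6-digit code
```

Because both the app and the sign-in service compute the same code from the same secret and time step, nothing needs to travel over a network that an attacker could intercept.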
For younger learners who may not have access to personal devices or a similar level of digital understanding, visual methods such as pictograph-based MFA are emerging as practical alternatives. Instead of typing numbers, students confirm their identity by selecting a familiar sequence of images, avoiding the need for phones in the classroom.
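As a purely hypothetical illustration of the idea, the sketch below models pictograph-based MFA as an enrolled sequence of images that a student re-selects at sign-in. The student ID, image names and storage are made up for clarity; real products shuffle image grids, limit attempts and store enrollment data very differently.

```python
# Hypothetical pictograph-MFA sketch: each student enrolls a short, memorable
# sequence of images and confirms a sign-in by re-selecting it in order.
import hmac

ENROLLED = {
    "student_042": ["dog", "rocket", "rainbow"],  # sequence chosen at enrollment
}

def verify_picture_sequence(student_id: str, selected: list[str]) -> bool:
    """Return True only if the selected images match the enrolled sequence in order."""
    enrolled = ENROLLED.get(student_id)
    if enrolled is None or len(selected) != len(enrolled):
        return False
    # Constant-time comparison of the joined sequences to avoid timing leaks.
    return hmac.compare_digest("|".join(enrolled), "|".join(selected))

print(verify_picture_sequence("student_042", ["dog", "rocket", "rainbow"]))  # True
print(verify_picture_sequence("student_042", ["dog", "rainbow", "rocket"]))  # False
```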
MFA is particularly critical when sign-ins come from unusual locations or devices. IT teams must regularly audit permissions and access to ensure no accounts are bypassing MFA protocols.
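One practical shape such an audit can take is a small script run against a sign-in report exported from the admin console (Microsoft 365 and Google Workspace both offer such exports). The column names below are assumptions for the sketch, not the actual export schema, so they would need to be adjusted to match a district's own report.

```python
# Sketch of a recurring MFA audit over an exported sign-in report.
# Assumes a CSV with columns "user" and "mfa_satisfied"; real admin-console
# exports use different column names, so adapt this to your own report format.
import csv
from collections import Counter

def find_mfa_gaps(report_path: str) -> Counter:
    """Count sign-ins per user that completed without any second factor."""
    gaps: Counter = Counter()
    with open(report_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if row["mfa_satisfied"].strip().lower() in ("false", "no", "none"):
                gaps[row["user"]] += 1
    return gaps

if __name__ == "__main__":
    for user, count in find_mfa_gaps("signin_export.csv").most_common(20):
        print(f"{user}: {count} sign-ins without MFA")
```

Running something like this on a regular schedule turns "audit permissions and access" from an intention into a repeatable task that surfaces accounts quietly slipping past MFA.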
Staying Secure With Updates
Schools manage a large mix of software and hardware, all of which require consistent updates and careful configuration. Even the tools designed to keep devices secure can be turned against a district if updates fall behind.
Microsoft issued an emergency fix for a Windows Server Update Services flaw last month. The vulnerability allowed remote code execution on a system many schools rely on to push updates, illustrating how a single missed patch can ripple across an entire environment.
Federal guidance continues to stress the importance of addressing known, actively exploited vulnerabilities quickly. More than half of school breaches are caused by known vulnerabilities with patches that are available but not applied, Ponemon’s 2025 report found. As their analysis suggests, automating patch management enables K through 12 districts to reduce their risk of cyberattacks by up to 60 percent by closing those known, preventable gaps.
While timelines vary by system and severity, the shared message is that the longer a known weakness sits unpatched, the broader the exposure. Schools already operate with lean IT teams, and older systems in particular can become prime targets when fixes aren’t applied.
Districts can start strengthening update practices by tracking which systems handle the most sensitive data, ensuring those receive more regular patches. Automating updates where possible reduces manual workload and shortens the window of risk.
Some schools choose not to update automatically because the risk of breaking instruction, losing compatibility or damaging specialized equipment outweighs the convenience. Those devices need scheduled, hands-on updates instead. If a district’s latest installed security update is more than 30 to 90 days old, the district is putting itself at avoidable risk. Best practice is to apply critical updates within 30 days to all core systems (operating systems, applications, servers, appliances) and within 14 days to mission-critical systems.
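To make those windows concrete, here is a minimal sketch of a patch-age check against the 14-day and 30-day thresholds above. The inventory records and field names are illustrative assumptions; in practice a district would pull this data from its own asset-management or remote-monitoring tooling.

```python
# Patch-age check against the windows discussed above:
# 14 days for mission-critical systems, 30 days for other core systems.
# Inventory entries and field names here are illustrative placeholders.
from datetime import date

INVENTORY = [
    {"host": "sis-server",    "last_security_update": date(2025, 10, 1),  "mission_critical": True},
    {"host": "library-kiosk", "last_security_update": date(2025, 7, 15),  "mission_critical": False},
]

def overdue_systems(inventory: list[dict], today: date) -> list[tuple[str, int, int]]:
    """Return (host, days_since_update, allowed_days) for systems past their window."""
    flagged = []
    for item in inventory:
        allowed = 14 if item["mission_critical"] else 30
        age = (today - item["last_security_update"]).days
        if age > allowed:
            flagged.append((item["host"], age, allowed))
    return flagged

for host, age, allowed in overdue_systems(INVENTORY, date.today()):
    print(f"{host}: last security update {age} days ago (window: {allowed} days)")
```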
Building AI-Aware Cybersecurity Skills
Because AI makes fraudulent messages, voices and videos more convincing, districts must equip their communities with the judgment and habits needed to navigate these threats.
In 2023 and 2024, the share of U.S. high schools offering cybersecurity education rose to 60 percent. Still, even though four-fifths of K through 12 cybersecurity teachers said AI should be part of the foundations of the subject, less than half felt equipped to teach it.
Microsoft’s AI in Education 2025 report mirrored these figures, with less than half of teachers confident teaching AI-related subjects, a figure that shrank further among younger age groups.
Of the 2,245 teachers who did spend class time on AI content, the majority spent fewer than five hours per course. Elementary school teachers spent the least amount of time, with 70 percent spending only one to two hours.
Most teachers lack both time and confidence to build AI content from scratch. Districts can lighten the load by curating short videos, one-page guides and plug-and-play activities from trusted organizations (e.g., CISA, MS-ISAC, and Microsoft Education). Short, scenario-based microtrainings, delivered monthly or at staff meetings, help build instincts without adding workload.
Because AI-assisted attacks tend to look polished, the cues to watch for include unexpected urgency, shifts in tone, mismatched sender details, unverified voice requests and sudden changes to payment or account procedures. Short activities that compare real and AI-generated messages or voice clips can help teachers and older students internalize these cues. A few minutes of practice with “quick-tell” examples does far more than long, annual slide decks.
When something surprises younger students online, teaching them to "Pause, Ask and Decide" helps them learn that alarming or too-good-to-be-true messages such as "You win!" or "Your device is broken!" are signs to get help from an adult, not to click. Teachers can rehearse it the same way they practice fire drills or safety rules.
Protecting Students From Deepfakes
Deepfakes aren’t just arriving from threat actors; they’re being created in classrooms, circulated in group chats and used to impersonate staff and peers. Schools can’t rely on instinct or technology alone to mitigate these attacks. The practical path forward is to lock down access with strong MFA, keep systems patched so exploits can’t take hold and give students and staff the judgment to pause when something feels engineered. None of these steps stops deepfakes entirely, but together they have a better chance of preventing a fabricated video from going viral. If schools want to stay ahead of AI-driven threats, verification has to become habitual for youngsters and grown-ups alike.
