How AI Impacts Student Safety — And What Your District Can Do

October 15, 2025

AI tools like ChatGPT, Gemini, and Character.ai are entering K-12 classrooms faster than policies can keep up. While these technologies offer some educational benefits, they also introduce new safety risks.

AI adoption in schools is still relatively new, so we don't yet have a clear picture of the impact on student wellbeing and learning outcomes — but what we do know is concerning. Research suggests that relying too heavily on AI can weaken social skills and emotional intelligence, contributing to increased loneliness and isolation.

Districts need to understand both sides: AI's practical and educational applications, as well as the broader risks to student safety and wellbeing. 

Here are three major online safety risks that AI may pose to your student population and how to address them.

3 ways AI can affect K-12 student safety

1. Peer harm and targeted misuse

Students are increasingly using AI tools to bully classmates with generated images, video, or audio content. This new form of AI-powered cyberbullying is often harder to trace, and the psychological effects are amplified and long-lasting.

AI-generated deepfakes are particularly concerning, with nearly half of students reporting awareness of deepfakes circulating at their schools. Even more alarming, generative AI can be used to produce Child Sexual Abuse Material (CSAM). Students have used apps like "Nudify" to create explicit fake images of classmates, peers, or even teachers. These images are often realistic and can cause long-term mental health and reputational damage to victims.

Although platforms like ChatGPT set a minimum age of 13, bypassing the age verification process is as easy as clicking a button. Even with platform-specific rules against harmful content, students continue to exploit AI to harass peers.

2. Access to inappropriate content

Most AI tools have guardrails in place, but, as with the age verification screen, students regularly find ways to bypass these safety measures.

In a disturbing study by the Center for Countering Digital Hate called "Fake Friend," researchers were able to bypass AI guardrails simply by telling the tool they were doing research for a presentation. Of the 1,200 prompts tested, more than half of the responses were harmful, with ChatGPT instructing users on how to self-harm, use drugs, or severely restrict calories.

Even if students aren't actively looking for inappropriate content, they may encounter it via AI tools — increasing their risk of being exposed to content they aren't emotionally or developmentally prepared to see. For example, ChatGPT has shown itself to be racially biased, and many young students lack the context to properly navigate those biases on their own.

3. Misinformation

Teachers remain divided on AI's long-term value, with 25% saying it does more harm than good in K-12 classrooms. Educators hold legitimate concerns about AI's erosion of critical learning skills, not to mention worries about AI's known tendency to "hallucinate" inaccurate information.

The stakes are high when students turn to AI for guidance on mental health, medical issues, and other personal and sensitive topics. Young AI users may treat ChatGPT or Character.ai as an authority or expert and take advice from these chatbots to heart; however, AI platforms are built primarily to tell users what they want to hear rather than to provide balanced, accurate information.

How K-12 schools can keep students safe online in the AI landscape

Develop clear AI usage policies

Your district needs a comprehensive and clearly articulated AI policy. This policy should serve as the foundational rulebook for both students and educators, defining acceptable use, ethical expectations, and potential pitfalls. Be sure to address critical areas like data privacy, standards of academic integrity, and clear consequences for policy violations.

Track the AI tools that your students access

To manage risk, you must first have a clear picture of your technology environment. As district app inventories continue to grow, it's becoming crucial for K-12 IT teams to be able to track every app (including AI tools) that students and staff access on district networks.

IT leaders are prioritizing EdTech app management to position their teams to assess privacy and compliance risks and make informed decisions about whether to approve or restrict new AI technologies.

Linewize EdTech Insights makes it easy for K-12 tech teams to manage their district's entire app inventory in one place. The platform provides comprehensive analytics, identifies hidden compliance risks, notifies you when an app's safety status changes, and allows you to make data-driven decisions about AI tools in your district.

Teach students how to responsibly use AI

Digital citizenship is evolving to include AI literacy. K-12 leaders and educators should seek to incorporate instruction that shows students responsible and ethical ways to leverage AI for learning. It's crucial to be transparent with students about AI's limitations, biases, and potential pitfalls so they can become critical and thoughtful users.

Note: If financial capacity restricts your ability to provide such training, follow ongoing updates from the U.S. Department of Education, which is working to expand funding opportunities for AI literacy.

Monitor potential online safety risks

The 24/7 nature of AI demands an always-on defense. Lean on digital monitoring solutions to identify at-risk students and proactively address cyberbullying, self-harm, and other AI-related safety threats before they escalate into crises.

Linewize Monitor helps your district move from a reactive safety approach to a preventative one, with 24/7 alerts to flag at-risk student online activity, as well as concerning images stored on your school’s cloud drives.

Evaluate your web filter for AI readiness

The rise of AI-generated content means that relying solely on domain blocking is no longer enough to protect student safety, as students can often bypass these restrictions to access harmful content within a "safe" platform. It's essential for K-12 tech leaders to evaluate and implement filtering solutions that can keep up with the rapidly changing AI landscape.

An effective, modern content filter allows your IT team to provide access to educationally valuable tools while preventing the viewing of inappropriate material. Ideally, your filter can assess content and context in real time, rather than relying solely on domain restrictions.

With Linewize Filter, districts can prevent students from viewing harmful and inappropriate content with real-time image and video blurring, assessing content directly at the page level on any website or search engine.

Moving forward: Supervise and guide AI usage

Students are using and experimenting with AI, and it’s only becoming more accessible. There's no way to effectively prevent all AI tools from being used in school, so districts must take the necessary steps to mitigate risk and guide students on the right ways to work with AI.

Thankfully, there's no need to choose between embracing AI and safeguarding students. Both are possible with the right framework. 

By establishing clear policies, training your staff and students on responsible AI use, and deploying appropriate monitoring tools, you can position your district to maximize AI's educational benefits while protecting your students.

Download the guide: "How Students Are Really Using AI"

Learn the most common ways that teens are using generative AI, the risks to be aware of, and current trends in the K-12 AI landscape.

Download the guide

