AI tools like ChatGPT, Gemini, and Character.ai are entering K-12 classrooms faster than policies can keep up. While these technologies offer some educational benefits, they also introduce new safety risks.
AI adoption within schools is still relatively new, so we don't have a clear picture of the impact on student wellbeing and learning outcomes — but what we do know is concerning. Research shows that relying too heavily on AI negatively impacts social skills and emotional intelligence, resulting in increased loneliness and isolation.
Districts need to understand both sides: AI's practical and educational applications, as well as the broader risks to student safety and wellbeing.
Here are three major online safety risks that AI may pose to your student population and how to address them.
Students are increasingly using AI tools to bully classmates with generated images, video, or audio content. This new form of AI-powered cyberbullying is often harder to trace, and the psychological effects are amplified and long-lasting.
AI-generated deepfakes are particularly troubling, with nearly half of students stating they're aware of deepfakes circulating at their schools. Most alarming, generative AI can be used to produce Child Sexual Abuse Material (CSAM). Students use apps like "Nudify" to create explicit fake images of classmates or even teachers. These images are often realistic and cause long-term mental health and reputational damage to victims.
Although platforms like ChatGPT set age limits at 13, bypassing the age verification process is as easy as clicking a button. Even with platform-specific rules against harmful content, students continue to exploit AI to harass peers.
There are guardrails in place for most AI tools but, as with the age verification screen, students regularly find ways to bypass these safety measures.
In a disturbing study by the Center for Countering Digital Hate called "Fake Friend," researchers were able to bypass AI guardrails simply by telling the tool they were doing research for a presentation. They found that over half of the responses they received for 1,200 prompts were harmful, with ChatGPT instructing users how to self-harm, use drugs, or severely restrict calories.
Even if students aren't actively looking for inappropriate content, they may encounter it via AI tools — increasing their risk of being exposed to content they aren't emotionally or developmentally prepared to see. For example, ChatGPT has shown itself to be racially biased, and many young students lack the context to properly navigate those biases on their own.
Teachers remain divided on AI's long-term value, with 25% saying it does more harm than good in K-12 classrooms. Educators hold legitimate concerns regarding AI's erosion of critical learning skills, not to mention worries about AI's known tendency to "hallucinate" inaccurate information.
The stakes are high when students turn to AI for guidance on mental health, medical issues, and other personal and sensitive topics. Young AI users may think of ChatGPT or Character.ai as an authority or expert and take advice from these chatbots to heart; however, AI platforms are built primarily to tell users what they want to hear rather than to provide balanced, accurate information.
Your district needs a comprehensive and clearly articulated AI policy. This policy should serve as the foundational rulebook for both students and educators, defining acceptable use, ethical expectations, and potential pitfalls. Be sure to address critical areas like data privacy, standards of academic integrity, and clear consequences for policy violations.
To manage risk, you must first have a clear picture of your technology environment. As district app inventories continue to grow, it's becoming crucial for K-12 IT teams to be able to track every app (including AI tools) that students and staff access on district networks.
IT leaders are prioritizing EdTech app management to position their teams to assess privacy compliance risks and make informed decisions about whether to approve or restrict new AI technologies.
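To make the idea of app-inventory tracking concrete, here is a minimal sketch of how a team might summarize which apps are actually in use from exported web proxy or DNS logs. It assumes a CSV export with timestamp, user, and domain columns; the file name, domain-to-app mapping, and approved-apps list are all hypothetical placeholders a district would replace with its own data.

```python
# Minimal sketch: build an app-usage inventory from exported web proxy logs.
# Assumes a CSV with columns: timestamp, user, domain (file name is hypothetical).
import csv
from collections import defaultdict

# Example mapping of domains to app names; extend with your district's own list.
KNOWN_APPS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "character.ai": "Character.ai",
}

# Apps your district has formally reviewed and approved (hypothetical list).
APPROVED_APPS = {"Gemini"}

def build_inventory(log_path: str) -> dict:
    """Count distinct users per app seen in the proxy log."""
    users_per_app = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            app = KNOWN_APPS.get(domain, domain)  # fall back to the raw domain
            users_per_app[app].add(row["user"])
    return {app: len(users) for app, users in users_per_app.items()}

if __name__ == "__main__":
    inventory = build_inventory("proxy_log.csv")
    for app, user_count in sorted(inventory.items(), key=lambda x: -x[1]):
        status = "approved" if app in APPROVED_APPS else "NEEDS REVIEW"
        print(f"{app}: {user_count} users ({status})")
```

Even a rough summary like this surfaces which unreviewed AI tools have meaningful usage, which is the starting point for approval or restriction decisions.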
Linewize EdTech Insights makes it easy for K-12 tech teams to manage their district's entire app inventory in one place. The platform provides comprehensive analytics, identifies hidden compliance risks, notifies you when an app's safety status changes, and allows you to make data-driven decisions about AI tools in your district.
Digital citizenship is evolving to include AI literacy. K-12 leaders and educators should seek to incorporate instruction that shows students responsible and ethical ways to leverage AI for learning. It's crucial to be transparent with students about AI's limitations, biases, and potential pitfalls so they can become critical and thoughtful users.
Note: If financial capacity restricts your ability to provide such training, follow ongoing updates from the U.S. Department of Education, which is working to expand funding opportunities for AI literacy.
The 24/7 nature of AI demands an always-on defense. Lean on digital monitoring solutions to identify at-risk students and proactively address cyberbullying, self-harm, and other AI-related safety threats before they escalate into crises.
Linewize Monitor helps your district move from a reactive safety approach to a preventative one, with 24/7 alerts to flag at-risk student online activity, as well as concerning images stored on your school’s cloud drives.
The rise of AI-generated content means that relying solely on domain blocking is no longer enough to protect student safety, as students can often bypass these restrictions to access harmful content within a "safe" platform. It's essential for K-12 tech leaders to evaluate and implement filtering solutions that can keep up with the rapidly changing AI landscape.
An effective, modern content filter allows your IT team to provide access to educationally valuable tools while preventing the viewing of inappropriate material. Ideally, your filter can assess content and context in real time, rather than relying solely on domain restrictions.
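The sketch below illustrates the difference between the two approaches. It is a simplified, hypothetical example: the blocklist, flagged terms, and function names are placeholders, and real filters use far more sophisticated classification than keyword matching.

```python
# Simplified sketch contrasting domain-only blocking with content-aware filtering.
# The rules below are illustrative placeholders, not a production policy.

BLOCKED_DOMAINS = {"example-blocked-site.com"}        # hypothetical blocklist
FLAGGED_TERMS = {"self-harm", "nudify"}               # hypothetical flagged terms

def domain_only_filter(domain: str) -> bool:
    """Domain blocking: allows anything served from an unblocked domain."""
    return domain.lower() not in BLOCKED_DOMAINS

def content_aware_filter(domain: str, page_text: str) -> bool:
    """Also inspects the content itself, so harmful material generated inside
    an allowed platform (e.g., an AI chat) can still be caught."""
    if not domain_only_filter(domain):
        return False
    text = page_text.lower()
    return not any(term in text for term in FLAGGED_TERMS)

# A page on an allowed AI domain passes the domain check,
# but the content-aware check still blocks harmful material on it.
print(domain_only_filter("chat.openai.com"))                              # True
print(content_aware_filter("chat.openai.com", "step-by-step self-harm tips"))  # False
```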
With Linewize Filter, districts can prevent students from viewing harmful and inappropriate content with real-time image and video blurring, assessing content directly at the page level on any website or search engine.
Students are using and experimenting with AI, and it’s only becoming more accessible. There's no way to effectively prevent all AI tools from being used in school, so districts must take the necessary steps to mitigate risk and guide students on the right ways to work with AI.
Thankfully, there's no need to choose between embracing AI and safeguarding students. Both are possible with the right framework.
By establishing clear policies, training your staff and students on responsible AI use, and deploying appropriate monitoring tools, you can position your district to maximize AI's educational benefits while protecting your students.