Artificial intelligence has quickly spread through K–12 schools across the country and districts are racing to respond.
AI tools have become classroom staples for staff and students alike, and their growth has outpaced schools’ ability to manage and monitor them effectively.
Most districts now face the same challenge: how to set clear boundaries around AI use while still giving staff and students access to valuable AI resources.
For IT professionals and administrators, this is no easy task. New AI tools seem to emerge every day, and students often discover them well before staff have time to review them.
To regain control, districts are turning to edtech tools.
In this article, we’ll explore common AI challenges, insights from school districts, and edtech solutions that support schools today.
Ahead of an August 2025 webinar on AI management in K–12 schools, Linewize asked districts about their most pressing challenges:
Detailed responses from more than 40 districts highlighted areas where schools need the most support:
As schools continue to navigate this uncharted territory, developing guidelines is a critical first step toward proactive AI management.
AI policies set clear rules for how students and staff can use artificial intelligence technology. They aim to protect student safety and privacy, guide curriculum use, and keep districts compliant.
As of July 2025, 28 U.S. states had issued AI guidance for schools. Despite a growing number of resources around developing AI guidance, most teachers still feel unequipped to use and supervise AI tools.
In a survey from EdWeek, 79% of teachers said state policy guidance was unclear, while 58% said they weren’t trained on how to use approved generative AI — a gap that shows policies alone aren’t preparing educators.
“Creating AI policies isn’t just challenging — it’s a moving target,” said Terrisa Reeves, Territory Director at Linewize. “Teachers want to use these tools. But without the right guidance and visibility, it’s hard to know what’s safe, what’s effective, and what’s hiding in student behavior.”
Districts can begin building strong AI policies by reviewing state guidance, studying sample frameworks & policies, collaborating with staff and students, and regularly updating guidance based on feedback and results.
Key areas every AI policy should address include:
Start by finding your state’s available guidance in the chart below.
Most school web filters still do what they were originally built to do: block access to inappropriate websites. The problem is that many districts are relying on filters designed for a different decade.
Today, districts need modern filters that go beyond domain-level blocking, provide full visibility into how students are using AI tools, and adapt easily as new tools emerge.
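To make the distinction concrete, here is a minimal, illustrative sketch (not Linewize’s implementation — all domain names, signal phrases, and the policy check are hypothetical) of why domain-level blocking misses new AI tools while a content-aware approach can still catch them:

```python
from urllib.parse import urlparse

# Hypothetical legacy blocklist: only domains someone has already reviewed.
BLOCKED_DOMAINS = {"example-ai-chat.com"}

def domain_filter(url: str) -> bool:
    """Legacy approach: allow or block based on the domain alone.
    Returns True to allow. A brand-new AI site sails through."""
    return urlparse(url).netloc not in BLOCKED_DOMAINS

def is_approved_ai_tool(url: str) -> bool:
    """Hypothetical district policy check; a real filter would consult
    the district's approved-tools list."""
    return "approved" in url

def content_aware_filter(url: str, page_text: str) -> bool:
    """Modern approach: classify the page content itself, so an AI tool
    on an unknown domain is still recognized and policy-checked."""
    ai_signals = ("generate an essay", "chat with ai", "ask me anything")
    if any(signal in page_text.lower() for signal in ai_signals):
        return is_approved_ai_tool(url)  # defer to district AI policy
    return domain_filter(url)  # fall back to the domain check
```

The domain filter allows `https://brand-new-ai.io/chat` because no one has blocklisted it yet; the content-aware filter notices the page behaves like an AI tool and defers to policy instead.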
“Think back to 2000, during the Clinton administration,” said Adam Lee, Senior Vice President of Sales at Linewize. “None of your students today were around then — yet we’re still shaping digital journeys with policies written 25 years ago.”
Linewize Filter is a real-time, content-aware hybrid filter that makes it easy to balance access to AI resources with student safety, security, and compliance. It allows districts to:
Traditionally, schools relied on web filters to gauge the wellbeing of students and monitor online behavior.
But when AI tools can be used to generate harmful content, evade filters, and even serve as chatbot companions, districts need to take an entirely different approach to student safety.
Many districts have caught students exchanging explicit messages or sharing sensitive content through trusted platforms like Google Docs — activity that web filters overlook. As a result, the warning signs of self-harm, violence, and bullying have become even more difficult to detect.
Safety should be a key consideration for new and updated guidelines around digital behavior. To make AI policies meaningful, districts will need to redefine student safety processes to reflect how youth use these platforms today.
Linewize Monitor is an advanced student threat detection platform that identifies students in need and can detect when AI is misused in harmful ways, allowing schools to provide timely intervention. With Linewize Monitor:
AI is no longer confined to standalone tools — it’s embedded into nearly every application educators and students use. That raises difficult questions for IT leaders:
“None of these are easy questions,” said Lee. “I’ve asked them at events all over the country, and the answers have been — pun intended — all over the map. I’ve heard everything from, ‘We have 212 apps,’ to half-updated spreadsheets, sticky notes, whiteboards, and even scrap paper.”
Understanding what tools are being used, how safe those tools are, and how frequently they’re accessed allows districts to identify areas of improvement and opportunity and make informed decisions that shape guidance.
“Without usage visibility, aligning access with your policies isn’t just hard,” said Lee. “It’s impossible.”
EdTech Insights is a data analytics platform that helps districts uncover key data about their tech stack and see exactly which AI tools are in use, which are unapproved, and which could create compliance or security risks.
Schools can use EdTech Insights to:
With AI here to stay, schools need a robust strategy that leverages a combination of edtech tools: filtering, monitoring, and app analytics.
No single solution will solve every problem. But through partnerships between districts and trusted, forward-thinking vendors, educators can create safer, smarter environments that maximize AI’s benefits while minimizing its risks.
“Linewize is fully locked into the AI conversation,” said Lee. “Every new feature and module we release is driven by your feedback — what challenges you’re facing and how the AI landscape is shifting beneath us. We’re not just watching AI evolve; we’re building alongside it to help you stay ahead.”
Book a demo to learn more about Linewize and try our solutions in your environment for free.