Why You Need Both AI and Human Monitoring For Student Cybersafety

April 9, 2021

Keeping students safe online is becoming increasingly complex as schools leverage a growing number of technologies for remote education and hybrid models. The average school district now relies on more than 1,000 EdTech tools monthly, requiring diligent systems to track and manage students’ online activity.

EdTech cuts both ways: while the use of technology introduces new risks for students, it also provides advanced tools to help keep them safe.

Artificial intelligence (AI) provides efficient tools for monitoring students’ online behavior and identifying risks. While technology can track cyber activity faster than a human and can often highlight warning signs early enough to help a child, many education leaders express concern about using AI alerts without a human eye to provide context and determine how to respond appropriately.

The reality is that districts can’t keep students completely safe with only one or the other. It has become essential to have both AI and human monitoring for student cybersafety.


AI for Cybersafety

Content filtering solutions give visibility into every activity that occurs on school devices and networks. These solutions have grown more capable, enabling technology to track the videos students watch, the websites they visit, and the terms they search for online.

With AI, we can score those online activities based on their nature and thereby identify early indicators of a threat to student safety, such as a student viewing inappropriate material or searching for a term that may indicate an intent to self-harm. Without AI, early signs may go unnoticed by human eyes until a student displays more extreme behavior.
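To make that idea concrete, here is a minimal sketch of what keyword-based activity scoring could look like. The categories, keywords, weights, and threshold below are hypothetical illustrations, not Linewize's actual model, which would draw on far richer signals and machine learning.

```python
# Minimal sketch of activity risk scoring. All categories, keywords,
# and weights are hypothetical, not Linewize's actual model.
from dataclasses import dataclass

# Hypothetical risk weights per category (higher = more urgent).
CATEGORY_WEIGHTS = {
    "self_harm": 10,
    "violence": 8,
    "adult_content": 5,
    "gaming": 1,
}

# Hypothetical keyword lists used to classify a search term or URL.
CATEGORY_KEYWORDS = {
    "self_harm": ["self harm", "suicide", "hurt myself"],
    "violence": ["weapon", "fight"],
    "adult_content": ["explicit"],
    "gaming": ["minecraft"],
}

@dataclass
class Activity:
    student_id: str
    kind: str   # e.g. "search", "site_visit", "video"
    text: str   # search term, URL, or video title

def score_activity(activity: Activity) -> tuple[int, str]:
    """Return (risk score, matched category) for a single activity."""
    text = activity.text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return CATEGORY_WEIGHTS[category], category
    return 0, "none"

# Example: a search a human should review promptly.
activity = Activity("student-42", "search", "how to hurt myself")
score, category = score_activity(activity)
if score >= 8:
    print(f"High-priority alert ({category}, score {score}) for {activity.student_id}")
```

Even in this toy version, the key property is visible: the machine surfaces and ranks signals in bulk, but the score alone says nothing about what a human should actually do next.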

AI provides an efficient way to identify early indicators of a student being in danger, but machines aren't perfect. False positives are possible, technology can sometimes miss things, and we still need a human to interpret the information AI gathers.

Technology can only go so far in keeping a student safe. AI can point educators in the right direction and highlight what it believes are warning signals, but it is still up to a human to decide how to respond to the data.

Human Monitoring for Cybersafety

While technology is powerful, the human element is crucial to student safety and wellbeing. Human monitoring is less likely to result in false positives or errors. A counselor or school psychologist is better able to understand which activities indicate a real risk and determine how to respond.

This can only succeed if a district has trained professionals available to take action. Many districts have no policy in place for how to respond when an AI solution flags a potentially suicidal search term. How do you get that information to the right person quickly enough for them to intervene appropriately? This is where districts struggle: despite having the technology to identify warning signs, they lack the processes to respond.

In some cases, this can even deter districts from using AI solutions to monitor student safety. District leaders may fear that receiving alerts about threats without an appropriate plan to handle them creates more damage or liability than not having the information at all.

If network admins receive alerts without context, how will they know which ones are urgent and which aren’t? The IT staff who manage district content filters aren’t trained in mental health, and responding in real time to an alert about a potentially suicidal child is a responsibility they are not prepared for. And if a network admin did reach out to a student or family because of an alert, it could constitute a breach of student privacy.

Without proper training, human involvement in student cybersafety can be complicated, but the answer cannot be to turn away from technology that can catch early indicators.

Using Both Together to Get a Holistic View of Each Student

AI is far more efficient at monitoring student behavior across devices and identifying threats, while human monitoring provides much-needed context to handle student safety issues. With both, you can achieve a holistic view of each student. 

Content filtering solutions like Linewize use AI and machine learning to sort through data on student behavior and prioritize activities that may warrant examination. Many alerts are simple disciplinary issues that can be routed to the principal, assistant principal, or counselor. For higher-ranked alerts, we maintain three district-level emergency contacts who can be notified about more serious threats, as sketched below.
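As an illustration, the sketch below shows how score-based triage might route an alert to different staff, reusing the hypothetical scoring convention from the earlier example. The thresholds, roles, and contact addresses are invented for illustration, not Linewize's actual workflow.

```python
# Minimal sketch of alert triage and routing. Thresholds, roles, and
# contact lists are hypothetical, not Linewize's actual workflow.

# Hypothetical district-level emergency contacts for the most serious alerts.
EMERGENCY_CONTACTS = [
    "safety-lead@district.example",
    "psychologist@district.example",
    "superintendent@district.example",
]

# Hypothetical school-level staff for routine disciplinary alerts.
SCHOOL_CONTACTS = ["principal@school.example", "counselor@school.example"]

def route_alert(student_id: str, category: str, score: int) -> list[str]:
    """Decide who should be notified about an alert, based on its score."""
    if score >= 8:
        # Serious threats (e.g. self-harm indicators) go straight to the
        # district emergency contacts for immediate intervention.
        return EMERGENCY_CONTACTS
    if score >= 3:
        # Routine disciplinary issues are routed to school staff.
        return SCHOOL_CONTACTS
    # Low scores are logged but generate no notification.
    return []

recipients = route_alert("student-42", "self_harm", 10)
print(f"Notify: {', '.join(recipients)}")
```

The design point the routing step captures is that the technology's job ends at delivery: it gets the right information to the right trained person quickly, and that person decides how to intervene.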

It all comes down to training. Districts need to leverage technology to monitor and identify early warnings, and also invest in training staff to set up and implement processes for reaching a child as quickly as possible when a safety issue arises. Only with both can districts truly keep students safe online.
