April 9, 2021
Keeping students safe online is becoming increasingly complex as schools leverage a growing number of technologies for remote education and hybrid models. The average school district now relies on more than 1,000 EdTech tools monthly, requiring diligent systems to track and manage students’ cyber activities.
EdTech is a two-sided coin: while the use of technology presents new risks for students, it also offers advanced tools to help keep them safe.
Artificial intelligence (AI) provides efficient tools for monitoring students’ online behavior and identifying risks. While technology can track cyber activity faster than a human and can often highlight warning signs early enough to help a child, many education leaders express concern about using AI alerts without a human eye to provide context and determine how to respond appropriately.
The reality is that districts can’t keep students completely safe with only one or the other. It has become essential to have both AI and human monitoring for student cybersafety.
Content filtering solutions give visibility into every activity that occurs on school devices and networks. The capabilities of these solutions have grown, enabling technology to track the videos students watch, the websites they visit, and the terms they search for online.
With AI, we can score those online activities based on their nature and thereby identify early indicators of a threat to student safety, such as a student viewing inappropriate material or searching for a term that may indicate an intent to self-harm. Without AI, early signs may go unnoticed by human eyes until a student displays more extreme behavior.
AI provides an efficient way to identify early indicators of a student being in danger, but machines aren't perfect. False positives are possible, technology can sometimes miss things, and we still need a human to interpret the information AI gathers.
Technology can only go so far in keeping a student safe. AI can point educators in the right direction and highlight what it interprets as warning signs, but it is still up to a human to decide how to respond to the data.
While technology is powerful, the human element is crucial to student safety and wellbeing. Human monitoring is less likely to result in false positives or errors. A counselor or school psychologist is better able to understand which activities indicate a risk, and determine how to respond.
This can only succeed if a district has trained professionals available to take action. If an AI solution identifies a potentially suicidal search term, many districts don’t have a policy in place for how to respond. How do you get that information to the right person in an efficient manner, so they can intervene appropriately? This is an area where districts struggle; despite having the tech to identify warning signs, they don’t have the processes in place to respond.
In some cases, this can even deter districts from wanting to use AI solutions to monitor student safety. District leaders may fear that receiving alerts about threats to student safety without an appropriate plan to handle such information can cause more damage or liability than simply not having the information at all.
If network admins receive alerts without context, how will they know which ones are urgent and which aren’t? The IT staff who manage district content filters aren’t trained in mental health. Responding in real time to a potentially suicidal child is a responsibility they are not prepared for. And if a network admin did reach out to a student or family because of an alert, it could breach student privacy.
Without proper training, human involvement in student cybersafety can be complicated; but the answer cannot be to turn away from tech that can catch early indicators.
AI is far more efficient at monitoring student behavior across devices and identifying threats, while human monitoring provides much-needed context to handle student safety issues. With both, you can achieve a holistic view of each student.
Content filtering solutions like Linewize use AI and machine learning to sort through data on student behavior and prioritize activities that may be worth examining. For example, many alerts are simple disciplinary issues that can be routed to the principal, assistant principal, or counselor. For higher-ranked alerts, three emergency contacts are designated at the district level to be notified of more serious threats.
With the technology doing the data work, districts need training and processes for responding to alerts. This is where a partner like Gaggle comes into play, helping districts manage the process of delegating alerts from their content filter to the right people and handling them appropriately. This includes developing protocols for handling cyberbullying and arranging professional development to teach staff what to do in cases of potential child pornography to avoid liability, as well as in other scenarios that may arise.
It all comes down to training. Districts need to leverage technology to monitor and identify early warnings, and also leverage services to train staff on how to set up and implement processes to reach a child as quickly as possible when there is a safety matter. Only with both can we truly monitor cybersafety.