Artificial intelligence (AI) is becoming a counselor to schoolchildren, tracking their moods and messages. The idea is controversial, it's gaining traction in schools, and it raises a basic question: is it safe?
In Putnam County, Florida, middle school counselor Brittani Phillips received a severe alert on her phone one evening about an eighth-grader. Phillips sprang into action, spending the night on the phone with the student's mom, trying to understand the situation and assess the risk.
Phillips believes that this interaction, facilitated by an AI-enabled therapy platform, built trust with the family. The student now greets her in the halls, a sign of the positive impact she feels the platform has had.
The arrangement was born of necessity: Phillips' school, Interlachen Jr-Sr High, turned to AI to vet students' mental health needs because of budget constraints and a thin mental health staff. Her district has used the automated student monitoring system Alongside for three years, part of a growing trend in which at least nine companies have secured funding deals since 2022.
Alongside says its platform is used by more than 200 schools across the US and that its social and emotional skill-building chat tool offers better service than typical telehealth options. Students can chat with Kiwi, a llama character who coaches them in resilience, and clinicians monitor the AI-generated content.
The approach has skeptics. AI is a key component of the Trump administration's national education agenda, but parents, educators, and lawmakers are wary of adding to teens' screen time, and some states have begun restricting AI use in telehealth.
Many experts and families worry about students becoming too attached to AI. A recent national survey found that 20% of high schoolers have used AI romantically or know someone who has, fueling efforts to discourage emotional attachment to bots. There is even a proposed federal law aimed at reminding students that chatbots aren't real people.
Phillips believes the tool excels at handling "small fires," freeing her to focus on students in crisis. Students often find it easier to confide in AI; school counselors attribute this to nervousness about face-to-face conversations and the familiarity of chat interfaces.
Sarah Caliboso-Soto, a licensed clinical social worker, agrees that speaking with a mental health professional can be intimidating, especially for adolescents. Linda Charmaraman, director of the Youth, Media & Wellbeing Research Lab, adds that today's kids find texting easier than calling.
AI also lets students sidestep a listener's facial expressions and judgment, and chatbots are available without the hassle of scheduling an appointment.
"It's almost more natural than interacting with another human being," Caliboso-Soto says.
Still, AI lacks the discernment and human connection that clinicians provide. It can speed up diagnostics and free up counselors' time, but over-reliance on it for mental health is risky: chatbots can miss nuance and offer unrealistically positive reinforcement. Experts say schools must take a holistic approach that involves families.
Ava Shropshire, a youth adviser for Alongside, argues that the app is a stepping stone to seeking help from adults, normalizing mental health care and social-emotional learning for students.
However, critics like Sam Hiner, executive director of Young People's Alliance, argue that AI threatens to replace human companionship, a critical aspect of therapy and social connection. Hiner believes such bots can erode social skills and pull people away from real relationships.
Privacy experts note that these chatbots lack the confidentiality protections that come with a licensed therapist. At a moment of heightened worry about student privacy and encounters with the police, the tools raise messy questions about who sees what students disclose.
Phillips and the company both stress the need for human oversight, and Phillips considers the tool an improvement over other monitoring systems. Still, she has learned that it takes a human to read teenage humor, because not every alert is genuine. Middle school boys, in particular, test the technology's boundaries, typing provocative statements to see if anyone cares.
Phillips pulls these students aside, watches their body language, and decides whether the comment was real or a joke; if it was a joke, they often apologize. The platform, she feels, gives her more options than other monitoring systems did, and students come to trust that she is actively watching it.
The number of boys testing the system decreases each year, and Phillips believes the tool has been beneficial.
So, is AI safe for tracking students' mental health? For now, the debate continues, with schools weighing a helpful tool for overstretched counselors against risks they are only beginning to understand.