The Watching Problem
Students want AI help (but hate being watched to get it)
Here’s a tension that should keep every edtech builder up at night: students want AI help but hate being watched to get it.
A new study from Australian universities surveyed 132 higher education students about their attitudes toward AI systems that monitor their attention and emotions to deliver personalized learning interventions. The findings cut both ways. Students appreciated the targeted help. They wanted the hints and nudges that kept them on track. But they bristled at the monitoring itself, preferring automated assistance over teacher interventions to preserve their independence and avoid embarrassment.
In other words: help me, but don’t watch me.
The Independence Instinct
This isn’t surprising if you’ve ever watched someone learn. The messy middle of figuring something out, the wrong turns and backtracking, the moments of confusion before clarity arrives. That process feels private. Having an AI (or worse, a teacher notified by an AI) swoop in during a moment of struggle can feel like having someone read over your shoulder while you draft an email.
The students in this study made a clear distinction: they wanted systems that helped them help themselves, not systems that flagged their struggles for human intervention. Automated hints that arrived at the right moment? Great. A notification to the professor that they’d zoned out? Absolutely not.
I think this points to a design flaw in how we’re building educational AI. We’ve inherited the surveillance model from proctoring software and attendance tracking. But monitoring and helping are different functions, and conflating them erodes the trust that makes learning possible.
When Machines Think, Humans Must Think Higher
This connects to something larger happening in education right now. EdSurge published a provocative piece this week arguing that as AI takes over basic cognitive tasks, educators need to focus on higher-order thinking skills that distinguish human intelligence from artificial intelligence.
The connection might not be obvious, but here it is: if we’re asking students to develop higher-order thinking, they need space to struggle. They need room to be confused, to sit with uncertainty, to work through problems without someone watching the process. That’s where the deep learning happens.
AI surveillance optimizes for the wrong thing. It catches students when they’re confused and intervenes. But confusion isn’t always a problem to solve. Sometimes it’s exactly where learning lives.
What Good Design Looks Like
I’m not saying AI can’t help students. It obviously can. But the Australian study suggests we need to rethink the model:
Help without watching. Students wanted hints and scaffolding. They didn’t want emotion tracking or attention monitoring. The former respects autonomy; the latter treats students as subjects to be observed.
Automate, don’t escalate. When help arrived automatically, students accepted it. When it triggered teacher intervention, they felt exposed. Build systems that handle routine support quietly (a sketch of what that might look like follows this list).
Preserve the private struggle. Learning involves failure, confusion, and frustration. Those moments are valuable. Design systems that support students through them without surveilling them.
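To make “automate, don’t escalate” concrete, here is a minimal sketch of how a hint trigger built on these principles might look. Everything in it is hypothetical, not from the study: the HintEngine and TaskState names, the signals, and the thresholds. The point is that it keys off task-local signals the student already generates, like failed attempts and time since last progress, rather than attention or emotion tracking, and that it has no pathway to notify an instructor.

```python
from dataclasses import dataclass, field
import time


@dataclass
class TaskState:
    """Task-local signals the student already produces: no camera, no emotion inference."""
    failed_attempts: int = 0
    last_progress_at: float = field(default_factory=time.monotonic)


class HintEngine:
    """Hypothetical hint trigger: help without watching, automate without escalating."""

    # Illustrative thresholds, not empirically derived.
    MAX_ATTEMPTS_BEFORE_HINT = 3
    STUCK_SECONDS = 300  # five minutes with no progress

    def __init__(self, hints: list[str]):
        self._hints = hints  # ordered from gentlest nudge to most direct
        self._next_hint = 0

    def record_attempt(self, state: TaskState, succeeded: bool) -> None:
        """Update signals after each attempt; success resets the struggle counters."""
        if succeeded:
            state.failed_attempts = 0
            state.last_progress_at = time.monotonic()
        else:
            state.failed_attempts += 1

    def maybe_hint(self, state: TaskState) -> str | None:
        """Return the next hint if the student seems stuck; otherwise stay quiet.

        Deliberately absent: any notify_instructor() path. Struggle data
        never leaves the session, so the private struggle stays private.
        """
        stuck = time.monotonic() - state.last_progress_at > self.STUCK_SECONDS
        if (state.failed_attempts >= self.MAX_ATTEMPTS_BEFORE_HINT or stuck) \
                and self._next_hint < len(self._hints):
            hint = self._hints[self._next_hint]
            self._next_hint += 1
            return hint
        return None
```

The design choice worth noticing is the negative space: there is no escalation hook, so the system cannot quietly grow one later. Help stays automated, and struggle stays between the student and the task.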
The institutions that get this right will build AI that students actually want to use. Those that don’t will wonder why adoption lags despite impressive feature lists.
The technology isn’t the constraint here. The design philosophy is.