The Tools We Trust Are Under Attack
We tell students to “Google it.” We point them toward AI-powered search. We integrate these platforms into our curricula and call it digital literacy.
And right now, those tools are being actively exploited.
Wired reported this week that Google’s AI Overviews feature is being weaponized to spread scams and misinformation. Not hallucinations. Not errors. Deliberate exploitation. Someone figured out that if you poison the inputs, the AI will confidently serve harmful content to millions of users who’ve been trained to trust that blue box at the top of search results.
Meanwhile, Ars Technica reported that attackers prompted Google’s Gemini more than 100,000 times to extract its knowledge and clone the model, a model-extraction attack in which a system’s outputs are harvested at scale to train an imitation. That’s not a theoretical threat. That’s an active siege on the infrastructure educators are building on top of.
Here’s what bothers me: we’re in the middle of an ed-tech land grab. Google, Microsoft, and OpenAI are all pushing AI tools into classrooms as quickly as possible. Google just launched five new Gemini features specifically for students. The pitch is always the same: smarter studying, personalized learning, better outcomes. And some of that is genuinely useful.
But no one is leading with the security story. Nobody’s saying, “Hey, by the way, the search engine your students use every day is being gamed by scammers, and the AI model powering your new study tools is under constant attack.”
A new CoSN report gets closer to the right framing. It emphasizes intentional, responsible edtech use backed by real professional development for teachers. But even that feels too polite. “Intentional use” implies the tools are trustworthy if you use them correctly. What happens when the tools themselves are compromised?
This is the trust problem I keep coming back to. Trust isn’t a governance checklist you complete once and file away. It’s a living system that requires active defense. When we adopt AI platforms for education, we’re not just buying features. We’re inheriting their attack surface.
So what do we actually do?
Teach the exploit, not just the tool. Students need to understand that AI-generated summaries can be manipulated. That’s not a footnote in a digital literacy unit. That’s the lesson.
Demand transparency from vendors. If Google is deploying Gemini in classrooms, schools deserve to know about any security incidents that have occurred and how they were addressed. Not a marketing page. An incident log.
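To make that demand concrete, here is a minimal sketch of what a single entry in such a log could contain. Every field name here is hypothetical; no vendor publishes a schema like this today, which is exactly the problem.

```python
# A minimal sketch of one entry in a vendor security-incident log.
# All field names are hypothetical: no vendor publishes this schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentLogEntry:
    incident_id: str               # vendor-assigned identifier
    discovered: date               # when the vendor detected the issue
    disclosed: date                # when affected schools were told
    affected_products: list[str]   # e.g., ["Gemini for Education"]
    attack_class: str              # e.g., "data poisoning", "model extraction"
    student_data_exposed: bool
    mitigation: str                # what the vendor actually changed
    residual_risk: str             # what schools still need to watch for

# A hypothetical entry. The gap between discovery and disclosure
# is the number worth asking about.
entry = IncidentLogEntry(
    incident_id="2026-0042",
    discovered=date(2026, 1, 5),
    disclosed=date(2026, 2, 20),
    affected_products=["Gemini for Education"],
    attack_class="data poisoning of AI-generated summaries",
    student_data_exposed=False,
    mitigation="Source filtering added to the summary pipeline",
    residual_risk="Poisoned pages indexed before the fix may persist",
)
print(f"Disclosure lag: {(entry.disclosed - entry.discovered).days} days")
```

If a vendor can’t fill in fields like these, “trust us” is doing all the work.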
Build redundancy into research workflows. No single AI tool should be the sole source for anything. Cross-referencing isn’t old-fashioned. It’s survival.
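What that looks like in practice can be dead simple: no claim counts as settled until independent sources agree. A sketch, with placeholder lookup functions standing in for whatever sources a class actually uses (none of these are real APIs):

```python
# Sketch of a redundant research workflow. Each lookup function is a
# placeholder for a real source (an AI overview, a library database,
# a primary document); none of these are actual APIs.

def ai_overview(query: str) -> str:
    # What the search engine's AI summary claims; possibly poisoned.
    return "The Treaty of Ghent was signed in 1815."

def library_database(query: str) -> str:
    return "The Treaty of Ghent was signed in 1814."

def primary_source(query: str) -> str:
    return "The Treaty of Ghent was signed in 1814."

def cross_reference(query, sources):
    """Return the majority answer and whether every source agreed."""
    answers = [source(query) for source in sources]
    majority = max(set(answers), key=answers.count)
    return majority, len(set(answers)) == 1

answer, unanimous = cross_reference(
    "When was the Treaty of Ghent signed?",
    [ai_overview, library_database, primary_source],
)
if not unanimous:
    print("Sources disagree; verify before citing. Majority says:", answer)
```

When the AI summary is the odd one out, that disagreement is the lesson from the first recommendation, live in the classroom.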
The platforms will keep shipping features. That’s their job. Our job is to stop treating their tools as neutral infrastructure and start treating them as what they are: actively contested territory.
Trust isn’t just the hardest infrastructure problem in AI education. It’s the one that’s currently losing.