Trust Is the Real Infrastructure Problem in AI Education
Why the hardest part of AI in education isn’t the technology
This week, three unrelated stories landed in my feed. Together, they tell us something important about where AI in education is headed.
Story one: An AI toy company called Bondu accidentally exposed 50,000 private conversations between children and their AI stuffed animals. Anyone with a Gmail account could access them. No hacking required. Just poor security practices with products designed for the most vulnerable users imaginable.
Story two: Colleges and universities are creating formal Chief AI Officer positions. The early leaders in these roles are sharing lessons learned about building AI governance from scratch. It’s a sign that institutions are recognizing that AI needs dedicated leadership, not just a line item in the IT budget.
Story three: Researchers published a comprehensive framework for “trustworthy AI in education,” organizing the field into five key areas: student assessment, content recommendation, learning analytics, content understanding, and instructional assistance.
At first glance, these seem disconnected. A toy company data breach, university org charts, and an academic taxonomy. But they’re all circling the same question:
Who do we trust with educational data, and why?
The Speed Problem
AI is moving fast. That’s not news. What’s less discussed is how that speed creates a trust deficit that compounds over time.
When Bondu shipped an AI toy that talks to children, someone decided that getting to market mattered more than securing the data generated by those conversations. That’s not malice. It’s just what happens when the incentive to ship outweighs the incentive to protect.
This pattern repeats everywhere in edtech. A startup raises funding, promises AI-powered personalization, and rushes to deploy. The data model is an afterthought. The security review happens post-launch, if at all. The privacy policy is copied and pasted from a template.
And then 50,000 kids’ conversations are sitting in an unsecured database.
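To make that failure mode concrete, here’s a minimal sketch of the difference between an access check that only asks “is this person signed in?” and one that verifies the requester actually owns the record. The names and the in-memory store are hypothetical; this isn’t Bondu’s actual stack, just the shape of the mistake.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    id: str
    owner_id: str    # the account (e.g., a parent) that the conversation belongs to
    transcript: str

# Hypothetical in-memory store standing in for whatever database a product uses.
CONVERSATIONS: dict[str, Conversation] = {}

def get_conversation_insecure(conversation_id: str, requester_id: str | None) -> Conversation:
    # "Anyone with an account can read anything": the only check is that the
    # requester is signed in at all. Every record is effectively public.
    if requester_id is None:
        raise PermissionError("sign in required")
    return CONVERSATIONS[conversation_id]

def get_conversation_secure(conversation_id: str, requester_id: str | None) -> Conversation:
    # The record is returned only to the account that owns it.
    if requester_id is None:
        raise PermissionError("sign in required")
    conversation = CONVERSATIONS[conversation_id]
    if conversation.owner_id != requester_id:
        raise PermissionError("not authorized for this conversation")
    return conversation
```

The gap between those two functions is one `if` statement. That’s the whole point: the protection isn’t hard to build, it just has to be prioritized before launch.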
The problem isn’t that AI is dangerous. It’s that we’re building AI systems faster than we’re building the governance structures to manage them.
The Governance Gap
This is why the Chief AI Officer trend matters more than it might seem.
Higher education moves slowly. That’s a feature, not a bug. Universities are supposed to be deliberate. They’re supposed to ask hard questions before adopting new technologies. They’re supposed to protect students.
But “move slowly” doesn’t mean “don’t move.” And for the past few years, many institutions have been frozen, unsure how to respond to AI that’s evolving faster than any committee can meet.
The CAIO role is intended to close that gap. It’s an acknowledgment that AI needs dedicated attention, a single point of accountability, and a mandate to build policy before problems arise.
The early lessons from these leaders are instructive:
Start with principles, not policies. You can’t write rules for technology that doesn’t exist yet. But you can establish values that guide future decisions.
Build coalitions, not fiefdoms. AI touches everything. The CAIO who tries to control it all will fail. The one who convenes stakeholders will succeed.
Transparency beats perfection. Institutions that communicate openly about what they’re trying (and what they’re learning) build more trust than those that wait for a polished strategy.
Students are stakeholders, not subjects. The best AI governance includes student voices from the start.
Governance is a product, not a project. It needs iteration, feedback, and continuous improvement.
None of this is revolutionary. But the fact that universities are formalizing it is significant. It means the governance gap is being addressed.
The Framework Problem
Which brings us to the academic framework for “trustworthy AI in education.”
I’m generally skeptical of taxonomies. They tend to be more useful for researchers seeking citation counts than for practitioners trying to build things. But this one is worth reading because it maps the territory we’re all navigating.
The five areas they identify:
1. Student Assessment — AI that evaluates student work, from automated grading to plagiarism detection to competency measurement. Trust issue: Are these systems fair? Do they encode bias? Can they be gamed?
2. Content Recommendation — AI that suggests what students should learn next, from adaptive learning platforms to content libraries. Trust issue: Who decides what’s “optimal”? What happens when the algorithm is wrong?
3. Learning Analytics — AI that monitors student behavior to predict outcomes and trigger interventions. Trust issue: Where’s the line between helpful and surveillance? Who sees this data?
4. Content Understanding — AI that helps students comprehend material, from tutoring systems to explanation generators. Trust issue: How do we know the AI is correct? What happens when it confidently hallucinates?
5. Instructional Assistance — AI that supports teachers, from lesson planning to administrative automation. Trust issue: Does this augment teachers or replace them? Who’s accountable when the AI makes a mistake?
Each of these areas has its own trust calculus. And here’s the uncomfortable truth: we’re deploying systems in all five categories without resolving the trust questions first.
Trust as Infrastructure
I keep coming back to a simple frame: Trust is infrastructure.
We don’t think of it that way. We think of infrastructure as the physical stuff, the servers and networks and data centers. Or maybe the digital stuff, the APIs, protocols, and data models.
But trust is what makes any of it work. A student who doesn’t trust the learning analytics system will game it. A teacher who doesn’t trust the grading AI will ignore it. A parent who doesn’t trust the tutoring chatbot won’t let their kid use it.
And once trust is broken, it’s incredibly expensive to rebuild.
Bondu didn’t just expose 50,000 conversations. They damaged the entire category of AI toys for children. Every competitor now has to answer for Bondu’s failure. Every parent now has a reason to be skeptical.
This is the real cost of moving fast and breaking things in education. When you break trust, you don’t just hurt yourself. You hurt everyone building in the space.
What This Means for Builders
If you’re building AI for education, here’s my take:
1. Security isn’t a feature. It’s table stakes. If you can’t secure student data, you shouldn’t be collecting it. Full stop.
2. Governance is a competitive advantage. Institutions are looking for partners who can help them navigate AI policy, not just vendors selling them AI products. The companies that help their customers build trust will win.
3. Transparency builds trust faster than perfection. Don’t wait until you have all the answers. Share what you’re doing, what you’re learning, and what you’re still figuring out.
4. Involve educators and students early. The best way to build trustworthy AI is to build it with the people who will use it. Not for them. With them.
5. Play the long game. The AI education market is going to be enormous. The winners will be the companies that are still trusted in ten years, not the ones that shipped fastest in 2026.
The Bottom Line
AI in education is moving fast. But trust moves slowly.
The institutions investing in governance now, the Chief AI Officers building frameworks, the researchers mapping the landscape, the companies prioritizing security: these are the ones laying the foundation for what comes next.
The Bondus of the world will keep making headlines. They’ll ship fast, break things, and leave the rest of us to clean up the trust deficit they create.
But trust compounds, too. Every institution that gets governance right, every company that prioritizes security, every educator who brings students into the conversation: they’re building something more durable than any algorithm.
The hardest part of AI in education isn’t the technology. It’s earning the trust to use it.
Want daily analysis of AI in education? Subscribe to get these posts in your inbox.