The Week in AI + Education: Three Shifts That Actually Matter
Simulation beats substitution, skill gaps collapse, and the era of proving impact begins - January 27-29, 2026
Every week, I read through the latest AI-in-education research, news, and experiments. Most of it is noise. Some of it hints at where things are actually going.
This week, three patterns kept emerging. Not hype cycles. Not “AI will replace teachers” panic. These are actual shifts in how we’re thinking about learning technology.
1. Simulation is Winning Over Substitution
The most interesting AI education tools aren’t trying to replace human experiences. They’re creating practice spaces that didn’t exist before.
The case study: Researchers built SWITCH, an AI that simulates counseling clients for social work students. It doesn’t replace fieldwork. It gives students reps before the stakes are real. A student can practice navigating a difficult conversation ten times before ever sitting across from an actual client.
This pattern showed up again with LEGO robotics programs teaching machine learning to teenagers without requiring code, and voice chatbots giving students in under-resourced schools their first real opportunity to practice speaking English.
The instinct when building AI into education is always to make it do everything. Answer every question. Replace every interaction. But the tools that actually work? They make human learning experiences more accessible, not less necessary.
The principle: We don’t need AI that replaces practice. We need AI that makes practice possible when it otherwise wouldn’t be.
2. The Skill Gap is Collapsing (Faster Than We Expected)
Ethan Mollick ran an experiment at Wharton this month. MBA students with zero coding experience built complete startup prototypes in four days using Claude Code and Google Antigravity.
Not “pretty good for beginners.” Working products. Real business plans.
A year ago, “I don’t know how to code” meant you couldn’t build a prototype. Now it means you need a different approach.
This isn’t about AI making education “more efficient.” It’s about collapsing the gap between wanting to learn something and actually being able to do it.
Research on AI coding assistants reinforced something important here: students trusted AI more and learned more when the tools connected them to existing communities (Stack Overflow, documentation, forums) rather than trying to be the sole source of answers. AI as a connector, not an oracle.
The question this raises: If the barrier to entry for complex skills keeps dropping, what happens to how we credential expertise? What does “qualified” mean when the tools change faster than the curriculum?
3. The Efficacy Imperative Has Arrived
eSchool News declared 2026 the year of “the efficacy imperative” for AI in education, and they’re not wrong.
For the past few years, we’ve been in pilot mode. “We’re experimenting with AI.” “We launched a chatbot.” “Students can use generative AI for X.” That era is ending.
The conversation is shifting from “do we have AI?” to “can we prove it improves learning outcomes reliably and at scale?”
Here’s what makes this interesting: new research tracking 10,000+ ChatGPT interactions over a year found that students naturally develop AI literacy through use, not formal training. The researchers identified five distinct patterns: trial-and-error prompting, output evaluation, task decomposition, metacognitive reflection, and strategic tool switching.
This complicates efficacy measurement. If the goal is “did students learn the subject?” that’s one metric. But if students are simultaneously developing an entirely new skill set (learning with AI), traditional assessments miss half the picture.
Meanwhile, systems like GuideAI are pushing into new territory, using real-time biometrics (eye tracking, heart rate, posture) to adapt tutoring based on cognitive load. We’re entering an era where AI might know you’re confused before you do.
The implication for builders: The tools that win the efficacy conversation will be the ones that make learning visible. Not just “student used AI,” but “here’s how their AI collaboration skills improved over time.”
The Throughline
Three shifts. One theme.
AI in education is moving past the “is it coming?” phase. We’re now in the “how do we build it responsibly, measure it honestly, and ensure it actually helps people learn?” phase.
The research this week points in a consistent direction:
Simulation over substitution (let people practice safely)
Connection over isolation (AI should bridge learners to communities, not silo them)
Visibility over vanity metrics (prove the learning, not just the usage)
If you’re building in this space, those are the principles worth anchoring to. Everything else is noise.
I write about AI in education weekly. Follow along if you’re building in this space or just trying to make sense of where it’s all heading.