It’s Not About the AI. It’s About the Design.
Instructional design is still a key component of AI-driven teaching and learning
Medical schools have a problem. Training future doctors to talk to patients is expensive. Standardized patients (actors who simulate medical conditions) cost real money, require scheduling, and can only see so many students per day.
So when researchers set out to build AI-powered standardized patients, you’d expect the headline to be about how realistic the conversations were. How well the AI mimicked human responses. How students couldn’t tell the difference.
That’s not what they found.
The students who co-designed these AI patients alongside the researchers landed on something more interesting: instructional design matters more than realistic dialogue. The AI could talk like a patient, but that wasn’t the point. The point was whether students actually learned communication skills.
This should be a wake-up call for everyone building AI into education.
The Design Problem Hiding Behind the Tech Problem
We’ve spent the last few years obsessing over what AI can do. Can it write? Can it code? Can it tutor? Can it grade?
The better question: What should AI do in this specific learning context?
The medical AI study found that students didn’t need the AI to be a perfect patient simulator. They needed the interaction to be structured in ways that built skills. The scaffolding mattered. The feedback loops mattered. The progression mattered.
This is instructional design 101, but it gets lost when we’re dazzled by the technology.
The Management Shift
This week, OpenAI launched Frontier, a platform for managing multiple AI agents “like human employees.” Onboarding. Training. Permissions. The pitch is about coordination, not capability.
The same day, Ars Technica noted that AI companies want us to stop chatting with bots and start managing them. The interface is shifting from conversation to orchestration.
For education, this is significant. We’re moving from “here’s an AI tutor” to “here’s a system of AI components that need to work together.” And that means someone needs to design how they work together. How they hand off. When they escalate. What they prioritize.
That’s not an AI problem. That’s a design problem.
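At bottom, the hand-off, escalation, and prioritization logic described above is a routing problem, and it can be sketched without any AI at all. Here’s a minimal Python sketch of such an orchestration layer; the component names, phases, and confidence threshold are entirely hypothetical, not drawn from any real product:

```python
# Hypothetical sketch of an orchestration layer for AI learning components.
# Component names (patient_simulator, feedback_coach) and the escalation
# threshold are illustrative assumptions, not from any real system.

from dataclasses import dataclass

@dataclass
class Turn:
    text: str
    confidence: float  # how sure the AI component is about handling this turn

class Orchestrator:
    """Decides which component handles a student turn, when to hand off,
    and when to escalate to a human instructor."""

    ESCALATION_THRESHOLD = 0.4  # assumed cutoff; would be tuned in practice

    def __init__(self):
        self.log = []  # record of (phase, handler) for later review

    def route(self, turn: Turn, phase: str) -> str:
        if turn.confidence < self.ESCALATION_THRESHOLD:
            handler = "human_instructor"   # escalate: the AI is unsure
        elif phase == "interview":
            handler = "patient_simulator"  # roleplay component
        else:
            handler = "feedback_coach"     # debrief component
        self.log.append((phase, handler))
        return handler

orch = Orchestrator()
print(orch.route(Turn("Tell me about your pain.", 0.9), "interview"))  # patient_simulator
print(orch.route(Turn("[garbled input]", 0.2), "interview"))           # human_instructor
print(orch.route(Turn("How did I do?", 0.8), "debrief"))               # feedback_coach
```

The point of the sketch is that every branch here is a pedagogical decision, not a model capability: someone has to decide what counts as “unsure,” what the phases of the exercise are, and who gets the hand-off.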
The EDUCAUSE Reality Check
EDUCAUSE’s latest podcast tackled risk management for AI in higher ed. The conversation wasn’t about whether to adopt AI. It was about how to adopt it responsibly. Workforce implications. Governance structures. Institutional readiness.
Again: not a technology problem. A design problem. A systems problem. A people problem.
What This Means for EdTech
If you’re building AI into learning products, the medical training study should be taped to your monitor. The AI is the easy part. The hard part is:
What learning outcome are you designing for?
How does the AI interaction fit into the broader learning journey?
What scaffolding does the learner need before, during, and after the AI interaction?
How do you know if it’s working?
These are instructional design questions. They require learning science expertise, not just prompt engineering.
The institutions that get this right will build AI tools that actually teach. The ones that don’t will build impressive demos that don’t move the needle.
The Bottom Line
We’re past the “can AI do this?” phase. We’re into the “how do we design AI systems that achieve learning outcomes?” phase.
That requires a different set of skills. A different set of questions. And probably a lot more collaboration between AI engineers and learning designers than we’ve seen so far.
The AI can talk like a patient. The question is whether the student learns to be a doctor.