SFU’s Bold Step Into the AI Classroom
What?!? Two newsletter pieces in one day? It’s rare, I know, but I came across an interesting news article and wanted to get some thoughts down while they were fresh, in advance of tomorrow’s podcast.
Simon Fraser University has stepped into uncharted territory by introducing one of the first university courses in the world co-taught by an expressive AI persona. Professor Steve DiPaola’s course, Artificial Intelligence Today & Tomorrow, is delivered in partnership with “Kia,” a digital persona designed to engage with students in real time. Kia can speak, gesture, and respond with emotional nuance, making her presence feel natural rather than mechanical. She does not grade work or write curriculum; instead, she serves as a conversational partner who challenges assumptions and extends discussion in ways that spark critical reflection.
When I consider this move, I cannot help but view it through the lens of pedagogy. Vygotsky’s zone of proximal development reminds us that learners advance when they are guided just beyond their current abilities. Kia is a fascinating extension of this idea: an additional scaffold that helps students stretch their thinking, not by replacing the teacher, but by augmenting the space for dialogue. From a constructivist perspective, learning is deepened through interaction and the co-construction of meaning. Having an AI persona participate in a discussion about ethics or history turns abstract concepts into lived inquiry.
What excites me is how directly this connects to strategies that are central in K-12 education. Take inquiry-based learning, which thrives on student questions and open exploration. An AI persona could play the role of a provocative partner, surfacing counterarguments, pressing for evidence, or offering new perspectives. Consider project-based learning, where students work on extended investigations. A classroom AI could become a collaborator, role-playing stakeholders in a debate, or offering feedback as students refine their solutions. Even differentiated instruction could benefit: an AI partner could adapt to a student’s pace, giving advanced learners more complex challenges while supporting others with guided prompts. And in formative assessment, imagine students being able to “test” their understanding against a digital interlocutor that pushes them to articulate, justify, and revise their reasoning before ever turning in an assignment.
The equity implications are also significant. Many schools cannot offer specialized courses in philosophy, ethics, or advanced STEM. AI teaching partners could help close those gaps by acting as a kind of distributed expertise, giving students access to perspectives and conversations that would otherwise be out of reach. A rural high school might not have a history teacher who specializes in ancient civilizations, but an AI persona could embody a figure from that period, allowing students to interrogate ideas through dialogue.
Of course, pedagogy reminds us that technology is never neutral. The way Kia is being used at SFU is instructive. She is not dictating curriculum or assessing performance. She is positioned as a partner in exploration. That framing keeps the human teacher at the center while allowing AI to expand the zone of learning. If K-12 systems can adopt that same stance, the classroom could become far more dialogic, reflective, and personalized.
What SFU is signalling is not that AI belongs in education someday, but that it is already here. The question for K-12 is how quickly we will move to integrate these tools thoughtfully, rooted in proven pedagogical strategies rather than novelty. Those who prepare now will create classrooms where AI extends curiosity, deepens inquiry, and supports every learner in shaping the future rather than being shaped by it.