Self-Regulated Learning: The Missing Piece in AI Literacy?

March 2nd, 2026

By Stephanie Becerra, Learning Experience Design Associate

Lately, I find myself sitting in conversations about AI literacy and feeling like something is missing. Across higher education, we all agree that students need discernment, judgment, domain knowledge, and the ability to evaluate what AI gives them with a critical eye. However, when it comes to building those competencies, we default to awareness as the solution: adding more content. Another AI module. Another policy statement. Another tool demonstration.

To be clear, awareness matters. Students do need to know what tools are available, what AI can and can’t do, and what resources they have at their disposal. The real gap isn’t awareness itself; it’s that we’re not pairing it with the pedagogy that helps students do something with what they know. In self-regulated learning, that kind of awareness is foundational. It’s part of how students plan and make strategic decisions about their own learning (Zimmerman, 2000). But if we stop at awareness — if we treat it as the finish line instead of the starting point — we leave students with knowledge they don’t yet know how to act on.

Here’s what I mean. Every AI literacy framework I’ve encountered emphasizes higher-order capacities: applying tools critically, monitoring for accuracy, evaluating AI, and adapting strategies across contexts (Kassorla et al., 2024). These are not new skills that showed up with AI. These are the core phases of self-regulated learning. When students set goals, monitor their understanding, evaluate their reasoning, and adjust their approach — they are already practicing the discernment and judgment we say we want.

I realized we may be trying to solve an AI literacy problem with content solutions instead of pedagogical ones. We understand that students need these higher-order competencies, but without a clear instructional mechanism, we fall back on adding more information rather than strengthening how students engage with learning itself. It's a pattern I've seen before, in workplaces, in training programs, and in my own education: the instinct is always to give people more material, when what they need is a better way to process and apply it. When self-regulated learning principles anchor course design, irrespective of discipline or AI integration, AI literacy competencies are likely to develop as a natural byproduct.

What if we intentionally framed more of that work through a self-regulated learning lens? Instead of asking, “How do we add AI literacy?” we might ask, “How are we cultivating students who can plan, monitor, evaluate, and adapt in complex environments?” That feels like a fundamentally different question, and I think it leads to fundamentally different outcomes.

AI will continue to evolve. The capacity to regulate one's own learning will remain foundational. If we truly want AI-literate graduates, we must teach students to plan, monitor, evaluate, and adapt. In doing so, we prepare them not just for AI, but for whatever comes next.

Author’s Note: This article was drafted with the assistance of a generative AI tool to support wording and formatting; content generated by AI has been reviewed and approved by the author.

References

Kassorla, M., Georgieva, M., & Papini, A. (2024). AI literacy in teaching and learning: A durable framework for higher education. EDUCAUSE.

Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). Academic Press.