Artificial Intelligence in Literacy Education

Megan V. Gierka, Ed.D. & Timothy N. Odegard, Ph.D.
Middle Tennessee State University

Artificial intelligence (AI) tools are entering K–12 classrooms at a remarkable pace. In the United States, 86% of educational organizations report using generative AI, the highest adoption rate of any industry (Microsoft, 2025). Yet this high rate of adoption is not being matched by a high rate of scientific certainty. Most AI-based educational tools have not undergone independent validation, and few have been tested through rigorous methods such as randomized controlled trials. AI capabilities are advancing so rapidly that traditional research cycles struggle to keep pace, leaving educators to act as critical consumers. This brief offers educators an accessible framework for introducing and evaluating artificial intelligence tools in the classroom.

Human Learning

Human learning is not a single, unified process. It is driven by multiple interacting memory systems that operate on different timescales and respond to different types of instruction and practice (Bjork & Bjork, 2011; McClelland & Rumelhart, 1985; Norman & O’Reilly, 2003; Roediger & Butler, 2011; Squire, 2004). Understanding these systems is essential for evaluating whether an AI tool is likely to support or undermine meaningful learning.

Explicit (declarative) memory supports the conscious recall of facts, rules, and concepts (Schacter & Tulving, 1994; Squire, 2004). For example, when students can explain that a vowel team produces one sound, they are engaging this memory system. Implicit (procedural/nondeclarative) memory supports the automatic, fluent performance of skills developed through extensive practice. When skilled readers recognize grade-level words instantly, they free up cognitive resources for new learning (LaBerge & Samuels, 1974; Perfetti, 2007; Schwanenflugel et al., 2006). Both systems are engaged across phases of literacy development, and effective instruction depends on supporting both. To evaluate whether AI tools can support these human learning processes, educators must first understand how generative AI itself works.

Generative AI

Generative AI tools like ChatGPT and other large language models (LLMs) are built on statistical pattern recognition. These systems are trained on massive text datasets, learning to predict likely word sequences based on patterns in their training data. They do not “understand” content in the way humans do. AI excels at tasks that benefit from scale, consistency, and immediate response: generating practice items, providing real-time feedback on discrete skills, scaffolding retrieval practice, and adapting difficulty based on performance patterns. These capabilities make AI particularly well-suited for high-volume practice and formative assessment in structured learning environments.
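The next-word-prediction idea can be illustrated with a toy bigram model. This is a deliberate simplification for illustration only; real LLMs use neural networks with billions of parameters rather than simple counts, but the underlying goal is the same: predict a likely continuation from patterns in training text.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction from co-occurrence statistics.
# The training text is made up for the example.
training_text = (
    "the cat sat on the mat the cat ran to the door "
    "the dog sat on the rug"
).split()

# Count, for each word, which words follow it in the training data.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word following `word`, or None if unseen."""
    if word not in bigram_counts:
        return None
    return bigram_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this text
```

The model has no concept of meaning; it simply reproduces the statistics of its training data, which is why scale matters so much for real systems and why their fluency should not be mistaken for understanding.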

These systems also have fundamental limitations that educators must understand. AI cannot replace the relational, interpretive, and improvisational work of teaching. It cannot read a student’s body language, sense when they need encouragement or correction, or understand the social and cultural context that shapes how a student approaches learning.

The question is not whether AI is “intelligent” in a human sense, but whether a specific tool, given its capabilities and limitations, aligns with the cognitive demands and instructional goals of a given phase of learning. The Expanded Instructional Hierarchy provides a framework for answering these questions systematically.

Instructional Design

The Instructional Hierarchy (Haring et al., 1978) describes learning as a progression through four stages: acquisition, fluency, generalization, and adaptation. Each phase requires distinct types of instruction and practice. The Expanded Instructional Hierarchy (Odegard & Gierka, 2025) builds on this foundation by explicitly linking each phase to the memory systems and learning processes that cognitive science has revealed. This framework offers educators a clear lens for evaluating AI tools: Does this tool support the cognitive work required at this phase of learning?

Phase 1: Acquisition

Learning Goal: Build accuracy and declarative knowledge through explicit instruction.

The acquisition phase is when students are introduced to a new concept or skill for the first time. Instruction during this phase must emphasize accuracy and clarity through direct, explicit teaching. Students in this phase may be learning a new grapheme-phoneme correspondence (e.g., the letters <igh> make the /ī/ sound), a new decoding strategy, or a morphological pattern.

AI Integration: AI tools can support acquisition by delivering structured prompts, providing immediate corrective feedback, scaffolding retrieval practice, and offering spaced repetition. For example, a well-designed tool might:

  • Present a phonics pattern explicitly, then prompt the student to decode words containing that pattern, providing corrective feedback when errors occur.
  • Use spaced retrieval to re-introduce previously taught graphemes at optimal intervals, reinforcing retention.
  • Guide attention to the relevant features of print (e.g., “Look at the vowel team. What sound does <ea> make in this word?”).
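The spaced-retrieval logic in the second bullet can be sketched as a simple Leitner-style scheduler. The box intervals, item format, and field names here are hypothetical, not drawn from any specific product; the point is that correct responses push an item toward longer review intervals, while errors bring it back for near-term relearning.

```python
# Leitner-style spaced-retrieval sketch: higher boxes mean longer waits
# before the next review. Interval lengths are illustrative assumptions.
INTERVALS_DAYS = {1: 1, 2: 3, 3: 7}  # box number -> days until next review

def update_item(item, correct):
    """Promote an item one box on a correct response; reset to box 1 on an error."""
    if correct:
        item["box"] = min(item["box"] + 1, max(INTERVALS_DAYS))
    else:
        item["box"] = 1  # relearn soon after an error
    item["next_review_in_days"] = INTERVALS_DAYS[item["box"]]
    return item

grapheme = {"pattern": "igh", "box": 1}
update_item(grapheme, correct=True)
print(grapheme)  # now in box 2, next review in 3 days
```

A classroom-ready tool would layer curriculum alignment on top of this scheduling core, so that only graphemes the teacher has already introduced enter the review queue.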

Risk: Acquisition requires instructional fidelity; the tool must align tightly with what the teacher is teaching, when, and in what order. A tool that generates responses for students or bypasses the need for active retrieval undermines the acquisition process.

Phase 2: Fluency

Learning Goal: Develop automaticity in learned skills and increasingly automatic use of strategies.

As students move into the fluency phase, the emphasis shifts from accuracy alone to accuracy with speed and efficiency. This is the phase where procedural consolidation occurs. Explicit knowledge becomes implicit, automatic, and retrievable without conscious effort. Fluency depends on repeated, cumulative practice that allows students to recognize patterns quickly and execute skills with minimal cognitive load. Fluency-building might involve timed readings, repeated oral reading, word chain drills, or passage fluency practice.

AI Integration: AI tools can be particularly effective at the fluency phase because they can provide the high volumes of adaptive practice that fluency-building requires. A well-designed fluency tool might:

  • Deliver repeated practice with real-time feedback, adjusting difficulty based on the student’s performance.
  • Provide pacing support and immediate accuracy reinforcement, helping students detect and self-correct errors.
  • Embed spacing and interleaving to ensure that practice promotes retention.
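The performance-based difficulty adjustment described in the first bullet might, in its simplest form, look like the following sketch. The accuracy thresholds and trial window are illustrative assumptions, not values from any particular fluency product.

```python
# Sketch of performance-based difficulty adjustment for fluency practice:
# raise the level when recent accuracy is high, lower it when accuracy
# drops, and hold steady in the productive middle range.
def adjust_level(level, recent_results, raise_at=0.9, lower_at=0.7):
    """recent_results: booleans for the last N trials (True = correct)."""
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy >= raise_at:
        return level + 1          # fluent at this level; add challenge
    if accuracy < lower_at:
        return max(level - 1, 1)  # too hard; consolidate at an easier level
    return level                  # productive struggle; stay put

print(adjust_level(3, [True] * 9 + [False]))  # 90% accuracy -> level 4
```

Keeping the adjustment rule this transparent also makes it auditable: a teacher can verify that the tool only advances students who are genuinely accurate at the current level.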

Risk: If a tool introduces content too early or practices skills in isolation from the classroom curriculum, it risks creating confusion or overload. Fluency practice should be cumulative, building on what has already been taught and reinforcing it through structured variability.

Phase 3: Generalization and Adaptation

Learning Goal: Promote flexible transfer and application.

As students enter the generalization and adaptation phase, they begin to interact with texts and tasks as readers, writers, and thinkers. Foundational skills support deeper engagement with content-specific ways of reasoning. Each subject area has its own discourse, methods of inquiry, and ways of constructing knowledge.

AI Integration:

AI tools can play a more expansive role at this phase, supporting intellectually demanding learning opportunities while preserving cognitive effort. A well-designed tool might:

  • Support interdisciplinary tasks that require students to connect prior knowledge to new problems (e.g., How would you decode this unfamiliar science term? What morphemes do you recognize?).
  • Scaffold strategy reflection and metacognitive awareness without replacing student thinking (e.g., What strategy did you use? Why did it work here? Would it work in a different context?).
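A tool that scaffolds reflection without answering for the student can be as simple as cycling through question prompts rather than generating solutions. This sketch uses the reflection questions from the bullet above; the function name and rotation scheme are hypothetical.

```python
import itertools

# Metacognitive scaffold sketch: the tool returns questions, never answers,
# so the reasoning work stays with the student.
REFLECTION_PROMPTS = [
    "What strategy did you use?",
    "Why did it work here?",
    "Would it work in a different context?",
]
_prompt_cycle = itertools.cycle(REFLECTION_PROMPTS)

def next_reflection_prompt():
    """Return the next reflection question in rotation."""
    return next(_prompt_cycle)

for _ in range(3):
    print(next_reflection_prompt())
```

The design choice matters more than the code: by construction, this scaffold cannot short-circuit student reasoning, because it has no answer-generating path at all.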

Risk: Metacognitive laziness, a phenomenon described by Oakley and colleagues (2025), is a tendency for students to disengage from deep reasoning when too much cognitive work is outsourced to AI. If the tool generates responses, solves problems, or provides answers that students should be constructing themselves, it short-circuits the very cognitive demand that this phase is designed to cultivate.

Promise or Poison?

AI holds genuine promise for literacy instruction when aligned with how students actually learn. Until rigorous evaluation becomes the norm, educators must rely on frameworks like the Expanded Instructional Hierarchy to distinguish tools that support learning from those that merely simulate it.

References

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). Worth Publishers.

Haring, N. G., Lovitt, T. C., Eaton, M. D., & Hansen, C. L. (1978). The fourth R: Research in the classroom. Charles E. Merrill Publishing Company.

LaBerge, D., & Samuels, S. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6(2), 293–323. https://doi.org/10.1016/0010-0285(74)90015-2

McClelland, J. L., & Rumelhart, D. E. (1985). Distributed memory and the representation of general and specific information. Journal of Experimental Psychology: General, 114(2), 159–197. https://doi.org/10.1037/0096-3445.114.2.159

Norman, K. A., & O’Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110(4), 611–646. https://doi.org/10.1037/0033-295X.110.4.611

Odegard, T., & Gierka, M. (2025). The science of reading meets the science of learning: Memory systems, structured literacy, and the role of AI. Annals of Dyslexia. https://doi.org/10.1007/s11881-025-00345-y

Perfetti, C. (2007). Reading ability: Lexical quality to comprehension. Scientific Studies of Reading, 11(4), 357–383. https://doi.org/10.1080/10888430701530730

Roediger, H. L., III, & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27. https://doi.org/10.1016/j.tics.2010.09.003

Schacter, D. L., & Tulving, E. (1994). Memory systems 1994. MIT Press.

Schwanenflugel, P. J., Meisinger, E. B., Wisenbaker, J. M., Kuhn, M. R., Strauss, G. P., & Morris, R. D. (2006). Becoming a fluent and automatic reader in the early elementary school years. Reading Research Quarterly, 41(4), 496–522. https://doi.org/10.1598/RRQ.41.4.4