This webpage builds on AI Literacy content in AI 101 and AI 102 and introduces the following AI literacy competencies (Hibbert et al., 2024; University of Adelaide, 2024):

  1. Reflecting on the impact AI tools can have on learning and critical thinking.
  2. Interpreting results produced by AI systems; identifying limitations and potential errors. 
  3. Formulating arguments about the benefits and drawbacks of AI implementation.

A brief overview of the neuroscience of learning can enrich reflection on the role of AI in learning and critical thinking. Learning develops over time as the brain builds neural connections through a combination of cognitive processes that are influenced by emotions, sleep, motivation, nutrition, sensory experiences, and many other factors unique to each person. Those neural connections are strengthened and expanded across the neocortex through repetition in varied contexts, environments, and interactions. This creates neural pathways throughout the brain that are tapped for focus, information retrieval, sense-making, and all the other cognitive processes involved when a person is actively engaged in learning.

Critical thinking skills, such as problem solving, risk processing, analytical reasoning, and examining assumptions, are enacted in the prefrontal cortex of the brain, which develops primarily during adolescence and young adulthood. Since the majority of undergraduates are young adults, learning how to think critically is an especially important element of a university education. Teaching critical thinking skills is increasingly important to prepare students for a dynamically evolving future in which AI is integrated into everyday life and careers.

Cognitive Offloading and AI

Many university students and instructors juggle classes, studying, work, and social and family obligations. To manage these demands, they employ a common practice called cognitive offloading, or “the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand” (Risko & Gilbert, 2016, p. 676). Common examples of cognitive offloading in university environments include taking notes during class or studying in a quiet place to reduce distractions. Learners and instructors can also use AI-enabled tools for cognitive offloading; for example, they might turn to apps that incorporate AI to study vocabulary words or proofread a research paper. Cognitive offloading can help people focus, improve performance, and increase efficiency. However, learners and instructors must be careful not to let cognitive offloading to AI disrupt the necessary hard work of establishing and reinforcing the neural connections that deepen critical thinking skills through repeated study and practice in varied contexts.

Some research suggests that routinely offloading cognitive tasks to AI can lead people to place growing trust in AI’s outputs. The concern is that people will become accustomed to superficially processing the information needed to handle complex tasks. Others worry that AI users with less robustly developed digital or AI literacy may entrust AI with certain cognitive tasks without fully understanding how different agents work or how to test the validity of their outputs (see the next section on lateral reading). Repeated superficial processing of information could lead to skill atrophy, increased dependence on AI, and reduced attention to detail, rather than the improved focus traditionally associated with cognitive offloading. Gerlich’s (2025) findings suggest that “as users develop greater trust in AI, they are more likely to delegate cognitive tasks to these tools… This trust creates a dependence on AI for routine cognitive tasks, thus reducing the necessity for individuals to engage deeply with the information they process. Increased trust in AI tools leads to greater cognitive offloading, which in turn reduces critical thinking skills” (p. 24).

At the same time, AI tools may also help learners develop critical thinking and manage cognitive offloading. AI has some potential to help manage students’ cognitive load and provide avenues for practicing critical thinking skills. However, AI tools lack the human connection, understanding, empathy, and expertise of a real tutor who helps students guide their own learning and engage in deep, reflective thinking on complex topics.

How to Reflect on Uses of AI for Learning

Consider the following points when using AI for learning; they will help you think critically about both the AI output and your own thinking processes. These points (adapted from King, 2025, and the Center for Teaching Innovation at Cornell University, n.d.) will also help you formulate arguments about the limitations and benefits of using AI for learning.

  • What you already knew before asking AI
  • How the AI output compares to course materials
  • How you can assess the accuracy of the AI output using credible sources
  • What the AI struggled to explain or left out
  • What biases and perspectives the AI output foregrounds
  • What new ideas the AI gave you
  • Which ideas you used and why
  • Which ideas you rejected and why
  • How your thinking was influenced through this process
  • How you safeguarded your data and privacy

Interpret and Identify AI’s Limitations and Errors

Fact-check AI by UCSB Library (June 2025)

Lateral reading offers a powerful approach to verifying AI-generated content and helps us formulate arguments about its benefits and drawbacks. Unlike traditional reading, which involves analyzing a single source in isolation, lateral reading encourages users to leave the original source and consult other trusted, independent references to assess its validity. This skill is essential when dealing with AI-generated material, because AI systems can produce content that seems authoritative but may include inaccuracies, biases, or even fabricated information. By adopting lateral reading practices, people can make better-informed decisions and judgments about the content they consume. (The previous paragraph was edited from a Perplexity.ai response to “write an opening paragraph to introduce the importance of lateral reading when fact-checking sources of information that are generated by an artificial intelligence system,” March 3, 2025.)

Steps for Lateral Reading of AI Output

AI may hallucinate or incorrectly format sources, so it is important to break the information from the AI into specific, searchable claims.

  1. Make a list: What are the specific verifiable or searchable claims in the AI-generated content? If the AI shows its reasoning process, what steps did it go through?
  2. What sources or links indicate where the AI got its information? Are these real and reliable sources?
  3. In the linked sources, search for the content the AI generated, then read the surrounding information and context. (It could have been taken out of context.)
     

AI can often put true information in an incorrect context (e.g., correctly giving the title and source but misidentifying the author, or quoting a phrase from an article on a distantly related topic). To check each claim and its context laterally:

  1. Open a few new browser tabs to check the links provided by the AI and verify the accuracy and applicability of each claim. Who are the authors and publishers? What can you tell about their biases based on their experience, credentials, employer, and funding sources?
  2. Look for additional information about the claim, author, and publication on trustworthy and official sites. Open multiple reliable sources to see whether they support or refute the claim and your findings. Tips: be skeptical of headlines, look closely at the URL, and investigate the source.
  3. To confirm a source, use Google Scholar or the UCSB Library search (for one way to partially automate this kind of citation check, see the sketch after these steps).

Next, reflect on your prompt and the AI system itself:

  1. What assumptions are inherent in the prompt you used? 
  2. What are the limitations and biases of the AI training data, and how could these impact its output?
  3. Who are experts on this topic? What different perspectives do they have compared to the AI output?

Finally, evaluate what you found and decide how to proceed:

  1. Is the information accurate? Do the sources you found through lateral reading confirm it?
  2. Specify any misinformation and explain why it is wrong, then offer the correct explanation.
  3. If the information was incorrect, is it still useful to continue using AI for this task? If yes, could you modify your prompt or choose a different AI tool to improve the quality of the information produced?
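
If you are comfortable with a little code, the sketch below shows one way to partially automate the source check above: testing whether a citation an AI produced corresponds to a real published work. It queries Crossref’s public scholarly metadata API (api.crossref.org); the matching approach and the example citation are illustrative assumptions rather than part of any library guide cited here, and a match only shows that the work exists, not that it supports the AI’s claim.

    # Hypothetical sketch: look up an AI-cited reference in Crossref's public
    # metadata API to see whether a matching published work actually exists.
    # A hit is not proof the work supports the AI's claim; read it laterally.
    import json
    import urllib.parse
    import urllib.request

    def search_crossref(citation_text, rows=5):
        """Return candidate works whose metadata resembles the citation text."""
        query = urllib.parse.urlencode(
            {"query.bibliographic": citation_text, "rows": rows}
        )
        url = f"https://api.crossref.org/works?{query}"
        with urllib.request.urlopen(url, timeout=30) as response:
            message = json.load(response)["message"]
        candidates = []
        for item in message.get("items", []):
            candidates.append(
                {
                    "title": (item.get("title") or ["(untitled)"])[0],
                    "doi": item.get("DOI"),
                    "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
                    "authors": [
                        f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
                        for a in item.get("author", [])
                    ],
                }
            )
        return candidates

    if __name__ == "__main__":
        # Example: a reference a chatbot might have produced in its answer.
        cited = "Risko & Gilbert (2016). Cognitive offloading. Trends in Cognitive Sciences."
        for work in search_crossref(cited):
            print(f"{work['year']}  {work['title']}")
            print(f"    DOI: https://doi.org/{work['doi']}")
            print(f"    Authors: {'; '.join(work['authors'])}")

If nothing the search returns resembles the cited title and authors, treat the citation as suspect and fall back on Google Scholar or the UCSB Library search; if a DOI does come back, open it and read the work in context before accepting the AI’s summary of it.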
     

Sources

Center for Teaching Innovation, Cornell University. (n.d.). Ethical AI for Teaching and Learning. Cornell University.

de Graaf-Peters, V. B., & Hadders-Algra, M. (2006). Ontogeny of the human central nervous system: What is happening when? Early Human Development, 82(4), 257–266. 

Gerlich, M. (2024). Balancing Excitement and Cognitive Costs: Trust in AI and the Erosion of Critical Thinking Through Cognitive Offloading. Available at SSRN 4994204.

Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6.

Grinschgl, S., & Neubauer, A. C. (2022). Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Frontiers in Artificial Intelligence, 5, 908261.

Hibbert, M., Altman, E., Shippen, T., & Wright, M. (2024, June 3). A Framework for AI Literacy [Education]. EDUCAUSE Review. 

King, A. (2025, March 28). Teaching critical thinking with AI. Thinking Ed Tech, LinkedIn.

Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, 135–142. 

Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U., Lewandowsky, S., & Hertwig, R. (2024, July 1). Toolbox: Conceptual overview. Max Planck Society for the Advancement of Science e.V.

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688.

Texas A&M University Libraries. (n.d.). Fact-Checking AI with Lateral Reading [Library guide].

UMD Libraries. (2023, August 17). AI Fact Checking Text & Links [Video].

University of Adelaide. (2024, August). Artificial Intelligence Literacy Framework. University of Adelaide.

University of Maryland Libraries. (n.d.). Fact-Checking AI with Lateral Reading [Library guide].