Can Generative AI Strengthen Critical Thinking? A Pedagogical Framework for LLM Integration in Higher Education

The rapid integration of large language models (LLMs) such as GPT-4 and DeepSeek R1 into higher education has generated considerable enthusiasm among educators and institutions alike. These tools offer unprecedented capabilities for information retrieval, text generation, and interactive dialogue, making them attractive supplements to traditional learning environments. Yet beneath the surface of this enthusiasm lies a pressing pedagogical concern: when students offload their thinking to AI systems, what happens to the cognitive habits that higher education is designed to cultivate? A new study published in Computers and Education: Artificial Intelligence takes on this question with rigorous theoretical care, offering a framework that asks educators to treat AI not merely as an instructional tool but as an object of critical inquiry in its own right.

Mireia Vendrell from the Artificial Intelligence Research Institute at the Spanish National Research Council and Samantha-Kaye Johnston from the Assessment and Evaluation Research Centre at the University of Melbourne argue that the educational impact of LLMs is neither inherently beneficial nor inherently harmful; rather, it is contingent on design. What an AI system affords, obscures, or prioritises, together with the pedagogical conditions under which it is deployed, determines whether the technology deepens or diminishes the quality of student thinking. This distinction matters enormously, especially in fields like health management, clinical decision-making, and health policy, where the consequences of uncritical reasoning can extend well beyond the classroom.

The Risk of Cognitive Offloading

The central concern animating this research is what the authors describe as unstructured LLM use. When students interact with AI without pedagogical scaffolding, three interconnected risks emerge. The first is cognitive offloading, a process by which students delegate effortful intellectual work to the AI and thereby bypass the very reasoning processes that learning is meant to strengthen. The second is metacognitive disengagement, in which learners abandon the self-monitoring and self-regulating behaviours that allow them to evaluate the quality of their own understanding. The third is a reduction in epistemic agency, meaning students gradually lose the disposition and capacity to question knowledge, assess its validity, and take intellectual ownership of their conclusions.

These risks are not hypothetical. The authors draw on a growing body of empirical research demonstrating that AI-assisted work can reduce students’ cognitive effort, impair depth of inquiry, and encourage students to conflate linguistic fluency with epistemic credibility. When an AI generates a well-structured, confident-sounding paragraph, students may accept it as knowledge without interrogating its foundations. This tendency is particularly dangerous in health sciences education, where students must learn to distinguish between credible evidence and plausible-sounding claims, and where that distinction carries real consequences for patient outcomes.

Six Intellectual Processes at the Core of Critical Thinking

To develop their framework, Vendrell and Johnston begin by identifying six essential intellectual processes that underpin genuine critical engagement with knowledge. Conceptual interpretation refers to the ability to make sense of ideas within their proper context, to read meaning from complexity rather than accepting surface-level summaries. Inferential reasoning is the capacity to draw logical conclusions from evidence, to move from what is known toward what can be justifiably inferred. Evaluative judgement involves the critical assessment of arguments, sources, and claims, including the willingness to reject poorly supported conclusions regardless of how authoritatively they are expressed.

Metacognitive regulation is the practice of monitoring and directing one’s own thinking, recognising when understanding is incomplete and adjusting one’s approach accordingly. Intellectual curiosity describes the orientation toward questioning and exploration, the refusal to treat any answer as final. Finally, epistemic integrity encompasses honesty about the limits of one’s knowledge and a commitment to justifying beliefs through evidence rather than convenience. Taken together, these six processes represent a portrait of what it means to think well. The framework’s central claim is that AI integration in education should be designed to exercise and reinforce each of these capacities, not to substitute for them.

Eight Design Principles for AI-Enhanced Learning

The framework translates these six processes into eight actionable pedagogical design principles, each addressing a different dimension of how educators can structure AI-mediated learning to preserve cognitive rigour. Among the most conceptually significant is the principle of preserving cognitive friction. The authors argue that productive struggle, the intellectual discomfort that comes from grappling with difficult material, is not a problem to be solved by AI assistance but a mechanism through which understanding is built. Educators must deliberately design spaces in which students encounter difficulty, make errors, and work through them, rather than receiving polished AI-generated solutions.
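To make the friction principle concrete, consider one way a course team might operationalise it in an LLM-based tutoring tool. The sketch below is an illustrative assumption, not part of the authors' framework: the helper `llm_complete` stands in for any chat-completion client, and the prompt wording is hypothetical. The idea is simply that the system prompt instructs the model to respond with probing questions and partial hints rather than finished solutions, so that the productive struggle remains with the student.

```python
# Minimal sketch of a friction-preserving tutoring wrapper.
# `llm_complete` is a hypothetical stand-in for any chat-completion
# client; the framework itself prescribes no particular implementation.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a study partner, not an answer engine. Never give a "
    "finished solution. Instead: (1) ask one probing question about "
    "the student's current reasoning, (2) point out where their "
    "argument is weakest, and (3) offer at most a partial hint that "
    "the student must complete themselves."
)

def friction_preserving_reply(llm_complete, student_message: str) -> str:
    """Route a student's message through the Socratic system prompt,
    so the model scaffolds reasoning rather than replacing it."""
    messages = [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_message},
    ]
    return llm_complete(messages)
```

The design choice worth noting is that the friction lives in the prompt contract, not in the model: any sufficiently capable LLM can be constrained this way, which keeps the principle portable across tools.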

Closely related is the principle of positioning LLMs as provisional thinking partners rather than authoritative sources. When students understand that AI outputs are probabilistic, context-dependent, and subject to error, they are more likely to engage critically with those outputs rather than simply consuming them. The framework also emphasises the importance of embedding evaluation throughout the learning process, so that students develop the habit of assessing AI-generated content against independent standards rather than treating it as a final product. Another key principle involves sequencing AI-mediated and AI-free phases of learning, ensuring that students develop foundational competencies through independent reasoning before AI assistance is introduced. This sequencing prevents the premature outsourcing of intellectual work and ensures that AI augments rather than replaces the formation of core skills.
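The sequencing principle can likewise be enforced in software rather than left to student discretion. The sketch below, again using hypothetical names rather than anything specified by the authors, gates AI assistance behind a completed AI-free attempt: the tool refuses to produce a model response until the student has committed an independent draft, and it then returns the draft and the AI critique side by side so the student must evaluate one against the other.

```python
# Hypothetical sketch of the sequencing principle: AI help unlocks
# only after an AI-free draft has been submitted.
from dataclasses import dataclass

@dataclass
class SequencedExercise:
    prompt: str
    independent_draft: str | None = None

    def submit_draft(self, draft: str) -> None:
        # Phase 1: the student commits to their own reasoning first.
        self.independent_draft = draft

    def request_ai_feedback(self, llm_complete) -> dict:
        # Phase 2: AI assistance is withheld until phase 1 is done,
        # preventing premature outsourcing of the intellectual work.
        if not self.independent_draft:
            raise PermissionError(
                "Submit your own draft before requesting AI feedback."
            )
        ai_critique = llm_complete([
            {"role": "user",
             "content": f"Critique this draft answer to: {self.prompt}\n\n"
                        f"{self.independent_draft}"}
        ])
        # Both views are returned together so the student evaluates
        # the AI output against their own work rather than consuming
        # it as a final product.
        return {"student": self.independent_draft,
                "ai_critique": ai_critique}
```

In use, a student would call `submit_draft` with their own answer and only then `request_ai_feedback`; the gate itself, rather than any instruction to the model, is what preserves the AI-free phase.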

Implications for Health Sciences and Health Management Education

Although the framework is presented as a general model for higher education, its implications are especially pronounced in the health sciences. Health management students, for instance, must develop the capacity to analyse complex organisational and policy data, evaluate competing evidence bases, and make recommendations under uncertainty. These are precisely the capabilities most at risk when AI use is unstructured. A student who learns to delegate data synthesis to an LLM without first developing independent analytical skills is likely to enter professional practice with significant epistemic blind spots.

The framework’s emphasis on epistemic integrity is particularly relevant in clinical and health policy contexts, where the authority attributed to AI-generated content can distort professional judgement. The authors are careful to note that their framework does not advocate for the removal of AI from educational settings. On the contrary, they argue that AI can serve as a powerful instrument for developing critical thinking when it is treated as an object of inquiry. Students who learn to interrogate AI outputs, to identify what the system does not know, to recognise the assumptions embedded in its responses, are developing exactly the kind of critical faculties that health professionals need when evaluating algorithmic recommendations in clinical practice.

A Theoretically Grounded and Practically Oriented Contribution

One of the distinguishing features of this study is its grounding in established theoretical traditions. The framework draws on constructivist and sociocognitive theories of learning, foregrounding active meaning-making and inferential reasoning in the tradition of Bruner, Piaget, and Vygotsky. It incorporates research on metacognition and self-regulated learning to emphasise the importance of strategic control over one’s own thinking. And it aligns with critical pedagogy in treating knowledge as provisional, situated, and open to interrogation, particularly when that knowledge is mediated by an AI system whose epistemic values and training assumptions are rarely made explicit to learners.

The authors illustrate their framework through two detailed classroom scenarios that demonstrate practical application across different disciplinary contexts. These scenarios are not prescriptive blueprints but illustrative cases designed to show how the design principles can be operationalised within existing curricular structures. The framework is explicitly positioned as a provisional orientation rather than a definitive solution, one that must remain open to critique, adaptation, and contextual reinterpretation as the technology and its educational uses continue to evolve.

Conclusion

Vendrell and Johnston’s framework represents a timely and theoretically rigorous response to one of the central challenges facing contemporary higher education. By shifting the question from whether to use AI in learning to how AI should be designed into learning environments, they offer educators a conceptual vocabulary and a practical orientation for navigating a genuinely difficult pedagogical moment. The value of generative AI in education, they argue, is not technical but fundamentally pedagogical, shaped by whether students are guided to think with, about, and beyond these tools rather than simply through them. For institutions training the next generation of health professionals, researchers, and policy makers, this distinction is not merely academic. It is essential.


Reference

Vendrell, M., & Johnston, S.-K. (2026). Scaffolding critical thinking with generative AI: Design principles for integrating large language models in higher education. Computers and Education: Artificial Intelligence. Advance online publication. https://doi.org/10.1016/j.caeai.2026.100572
