Wed. Apr 29th, 2026

Are Students Thinking Less or Differently? Rethinking Critical Thinking in the Age of AI



Quick summary: Students at Regent College London use generative AI mainly to explain concepts and generate ideas rather than to produce complete assignments, with 85.2% reporting some engagement. Frequent AI use shows only a small negative link to overall critical thinking scores, yet students strong in applying knowledge and creativity tend to use it more often. Higher education providers must therefore provide structured guidance to encourage reflective evaluation of AI outputs, safeguarding deeper thinking skills essential for student wellbeing and effective learning.




Artificial intelligence is now firmly embedded in students’ academic lives. Tools such as ChatGPT are used to explain concepts, generate ideas, and support learning tasks across disciplines. As their use becomes more widespread, concerns have emerged about what this means for students’ thinking. A central question is whether these tools are reducing critical thinking, or whether they are reshaping how it takes place.

A recent study of 148 students at Regent College London, a private higher education provider, examined patterns of generative AI use and their relationship with critical thinking. The survey included 25 items from the Critical Thinking Questionnaire and four items from the Student Generative AI Survey.

Thinking fast, thinking slow, thinking with AI

Understanding how AI may influence thinking requires a psychological perspective. Daniel Kahneman distinguishes between two modes of thought. System 1 thinking is fast, intuitive, and based on mental shortcuts, while System 2 thinking is slower, more deliberate, and involves effortful reasoning.

Higher education has traditionally aimed to cultivate System 2 thinking. Critical thinking involves questioning assumptions, evaluating evidence, and engaging in reflective judgement. These processes require time, effort, and active engagement.

Generative AI introduces a new tension. Because it produces rapid and fluent responses, it may encourage reliance on faster, more automatic forms of thinking. At the same time, when used critically, it has the potential to support deeper engagement. The key issue is therefore not whether students use AI, but whether it encourages fast, uncritical responses or slower, reflective thinking.

Frequency of AI use

The study found that most students reported using generative AI at least occasionally, although patterns of use varied. Specifically, 8.8% of students reported using AI every day, 25.7% used it at least once a week, and 21.6% used it at least once a month. A further 29.1% reported using it less than once a month, while 14.9% reported that they never used it.

These findings indicate that while some students use AI regularly, others engage with it only occasionally or not at all.

National data from the Higher Education Policy Institute (HEPI) survey shows that AI use is now widespread across higher education. The survey reports that student use of AI has surged in the past year, with almost all students (92%) using AI in some form in 2025, up from 66% in 2024. In our study, 85.2% of students reported using AI, slightly lower than in the HEPI survey but still indicating high overall uptake.

These findings suggest that while AI use is now widespread across higher education, the frequency of engagement varies considerably, with a clear distinction between regular users and those who use AI only sporadically or not at all.

Students are using AI to support understanding

The findings show that students are not primarily using AI to generate complete academic work. Instead, they are using it to support their understanding and engagement with course material.

In the present study, 61.5% of students reported using AI to explain difficult concepts, while 43.9% used it to translate material. In addition, 32.4% reported using AI to generate ideas for assignments. Lower proportions of students reported using AI to summarise texts (18.9%), edit writing (17.6%), compile references (12.8%), or generate full text (8.8%).

These findings broadly align with national data from the HEPI survey, which shows that the most common use of generative AI is to explain concepts (58%). The HEPI survey also found that nearly two-thirds of students (64%) have used AI to generate text, around one-quarter use AI-generated text to help draft assignments, and 18% report including AI-generated text directly in their work.

Overall, this suggests that students are frequently using AI to support understanding and idea development, although the HEPI data indicates that a notable minority are also integrating AI-generated content more directly into their academic work.

Student concerns about AI 

Students in our study expressed a range of concerns about AI use. The most frequently reported concern was the risk of academic misconduct, identified by 75.7% of respondents. In addition, 56.8% of students reported concerns about false or biased outputs, and 41.2% reported a lack of confidence in AI-generated responses. A further 25.7% indicated that they did not know how to use AI effectively.

These concerns are consistent with the HEPI survey, which found that 53% of students worry about being accused of cheating, 51% are concerned about false or inaccurate outputs, and 37% about biased results. The survey also highlights ongoing uncertainty around appropriate use: only 36% of students report having received support from their institution to develop their AI skills.

An important finding from the current study is that only 21 of the 148 students (14.2%) agreed that AI-generated content could help them achieve good grades. This suggests that, while students are using AI, many may have limited confidence in its effectiveness for assessed work.

Effect of AI on critical thinking

The study examined whether the frequency of AI use predicts overall critical thinking ability. The results showed that the overall model explained only 4.8% of the variance in critical thinking, indicating that AI use is not a strong predictor of overall critical thinking ability.
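To illustrate what "explains only 4.8% of the variance" means in practice, the sketch below fits a simple least-squares regression on simulated data. This is not the study's dataset: the 0–4 coding of AI-use frequency, the effect size, and the noise level are all assumptions chosen so that the predictor accounts for only a small share of the variance, mirroring the weak relationship reported here.

```python
import random

# Illustrative sketch (simulated data, not the study's dataset): shows how
# a predictor that explains only a few percent of the variance looks in a
# simple linear regression. All numeric choices below are assumptions.
random.seed(1)
n = 148                                              # matches the study's sample size
ai_use = [random.randint(0, 4) for _ in range(n)]    # 0 = never ... 4 = daily (assumed coding)
# A small negative slope plus large individual variation -> weak relationship.
ct = [3.5 - 0.1 * x + random.gauss(0, 1) for x in ai_use]

mean_x = sum(ai_use) / n
mean_y = sum(ct) / n
# Ordinary least squares: slope = cov(x, y) / var(x).
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(ai_use, ct))
var_x = sum((x - mean_x) ** 2 for x in ai_use)
slope = cov / var_x
intercept = mean_y - slope * mean_x

pred = [intercept + slope * x for x in ai_use]
ss_res = sum((y - p) ** 2 for y, p in zip(ct, pred))
ss_tot = sum((y - mean_y) ** 2 for y in ct)
r_squared = 1 - ss_res / ss_tot                      # fraction of variance explained
print(f"slope={slope:.3f}, R^2={r_squared:.3f}")
```

In a setup like this, R² stays small because individual variation dwarfs the effect of the predictor: the association can be statistically detectable while leaving most of the differences between students unexplained.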

However, frequency of AI use did show a small but statistically significant relationship with critical thinking. Specifically, more frequent use of AI was associated with slightly lower critical thinking scores.

This finding is important in the context of existing literature. Most current evidence focuses on short-term effects during tasks, such as reduced cognitive engagement, lower originality, or uncritical acceptance of AI-generated outputs. In contrast, the present study examines critical thinking as a broader construct and shows that AI use alone does not substantially explain differences in overall critical thinking ability.

This finding should nonetheless be interpreted with caution, as the study design does not allow conclusions about causality. It may be, for example, that students with lower confidence in their own thinking are more likely to rely on AI tools, rather than frequent AI use causing lower critical thinking.

Existing research suggests that AI use may shape how students engage in thinking during tasks. Studies have shown reduced cognitive effort in AI-assisted work and a tendency to accept outputs without critical evaluation unless prompted. These patterns point to a potential shift in the depth and quality of engagement during learning activities.

The findings suggest a small negative association between frequency of AI use and critical thinking. However, given the cross-sectional design and the small amount of explained variance, these results do not indicate meaningful differences in overall critical thinking ability. Instead, they may reflect more context-dependent effects of AI use that are not captured by global measures of critical thinking.

Higher-order thinking

When the study examined different domains of critical thinking, a more nuanced pattern emerged. Critical thinking was conceptualised in the current study as comprising analysis (examining and interpreting information), evaluation (judging the quality and credibility of arguments), creativity (generating and synthesising ideas), reflection (monitoring and evaluating one's own thinking), and applying (using knowledge in new or practical contexts). The model predicting AI use from these domains explained 10.9% of the variance.

Within this model, applying and creativity were significant predictors of AI use. This indicates that students with stronger abilities in applying knowledge and generating ideas tend to use AI more frequently. Some students may therefore be using AI as part of more complex cognitive processes, such as idea generation and problem-solving.

This interpretation is consistent with wider research. One study found that students working with AI tools often accept outputs without questioning them unless prompted to do so, while other studies show that when students are encouraged to question and evaluate AI responses, they engage more critically. Research has also found that structured AI support can improve learning outcomes and metacognitive awareness.

Implications for higher education

The findings from this study, together with the HEPI survey, indicate that students are already using AI in varied and purposeful ways. At the same time, they highlight a need for clearer guidance and support.

If AI tools encourage faster, more automatic responses, then teaching approaches need to support slower, more reflective thinking. This may involve asking students to evaluate AI outputs, justify their reasoning, and reflect on how they use these tools.

Research suggests that strategies such as metacognitive prompting and reflective scaffolding can help students engage more critically with AI-generated content. These approaches encourage students to move beyond passive acceptance and to engage in more deliberate and analytical thinking.

AI and critical thinking

The relationship between AI and critical thinking is not straightforward. The findings show a small negative association between frequent AI use and overall critical thinking, but they also show that some aspects of higher-order thinking are linked to greater use of AI.

The evidence may suggest that AI is not simply reducing thinking but changing how and where thinking takes place.

Slower, effortful thinking is central to judgement and learning. The challenge for higher education is therefore not to prevent students from using AI, but to ensure that its use supports reflective, critical, and independent thinking.




Athina Ntasioti, FHEA is a lecturer and module leader at Regent College London and Mediterranean College, teaching undergraduate and postgraduate programmes in Health and Social Sciences and Psychology. Her areas of specialism include mental health, inclusion, and artificial intelligence.




Elizabeth Kaplunov, PhD is a chartered psychologist who evaluates projects about health technology for disabled and vulnerable people.
