
Will “cognitive offloading” to AI harm students’ learning? Here’s what the research says

In 1998, the philosophers Andy Clark and David Chalmers published a paper entitled The Extended Mind. There, they argued that external objects – notebooks, calculators and computers – can spread cognitive processes beyond the boundaries of the individual mind, becoming (literally) part of the mind itself. And with that, the idea of "active externalism" or “extended cognition” was born.


Sound bizarre? The idea, though it flies in the face of our intuitions, feels strangely compelling: it explains several features of human cognition, and never more so than in the age of smart devices, AI, and even (gulp) human-machine interfaces. 


Clark and Chalmers illustrated the concept of extended cognition with the mundane example of Otto, an Alzheimer's patient who relies on a notebook to store and retrieve important information and form beliefs, much as a person with normal memory would use their biological memory. In both cases, the notebook and biological memory play the same functional role in explaining those beliefs.


“There is nothing sacred”, the authors conclude, “about skull and skin.” 


Of course, the idea of extended cognition has not gone unchallenged in philosophical circles, but what interests us here are its implications for cognition and learning. Could excessive “offloading” of our cognitive processes, made so easy by artificial intelligence, harm cognition and learning?





Does cognitive offloading harm learning?

What characterises “extended cognition” is our habit of “offloading” mentally taxing tasks to our environment. 


There’s nothing bad per se about this habit: our earliest ancestors did it by assigning individuals specific roles (e.g., hunting, gathering, tool-making) to distribute the mental load, and by creating external representations of knowledge in the form of drawings. When relying on internal processes alone, we often hit hard limits: humans have restricted memory storage, strong constraints on attention, and perceptual abilities that decline with age. 


Today, cognitive offloading takes many forms: making to-do lists on your phone, notetaking during a lecture, using a calculator, using a pen and paper to factorise an equation, using a GPS to navigate, or relying on a partner or friend to remember appointments or shared experiences.


Unsurprisingly, cognitive offloading has been shown to improve immediate task performance by accelerating it and/or reducing errors, helping people compensate for and overcome the well-established capacity limits of cognitive processes. A quick example: using Excel’s advanced functionalities to make forecasts is going to yield more accurate results more quickly than manually plotting linear regression graphs. Similarly, asking a KS2 child to carry out a long multiplication operation without a pen and paper is likely to generate some interesting results.


However, there is a trade-off: several studies confirm that “frequent externalisation of internal cognitive processes leads to an impoverishment of the corresponding internal abilities.” In other words, frequent cognitive offloading harms cognition. 


Negative effects have been found for spatial memory, problem solving, and information recall. In this study, researchers tested the immediate task performance and long-term memory of two groups in the recall of 20 unique spatial arrangements of objects. Participants in the first group were allowed to consult the model arrangement as often as they wished (i.e. engage in more cognitive offloading), while the second group were “locked out” of the model arrangement for several seconds at a time, reducing offloading behaviour. Both groups were then subjected to an unexpected memory test. 


The result? Participants in the “no lockout” condition engaged in more cognitive offloading and performed better in the immediate task, but subsequently showed less accurate memory performance.



How will the rise of AI technology influence the cognitive offloading strategies employed by students?

Recent neuroscience research has already shed light on how modern information environments are altering our cognitive patterns. The fragmented nature of the Internet has been shown to reduce attentional scope with prolonged use, while fMRI studies have shown reduced activation of the ventral stream (the so-called “what” stream) during online information gathering compared to traditional encyclopedia-based learning, potentially explaining poorer recall of Internet-sought information. 


As the mass-scale experiment in extensive Internet usage continues unabated, AI promises to bring on yet another cognitive revolution. Unlike earlier technologies, which free up cognitive load that can then be redirected towards more complex thinking, AI can itself carry out highly complex cognitive tasks such as reasoning and decision-making.


Empirical research into the cognitive impacts of AI remains somewhat limited, but some theoretical papers have extrapolated from what we already know about the effect of other technologies on our cognitive offloading habits. 


This paper, for example, asserts that AI can act as a “cognitive prosthesis”, going beyond traditional technologies by independently generating ideas and solving problems. “ChatGPT represents a logarithmic amplifier of cognitive offloading compared to the classical technologies previously available,” the author, Professor Umberto León Domínguez, told PsyPost.


The theory is underpinned by the “neuronal recycling hypothesis”, which posits that the brain undergoes structural transformation by incorporating new cultural tools (e.g. reading, arithmetic or computers) into "neural niches," consequently altering individual cognition. In the case of AI, this hypothesis suggests that our brains may adapt to rely more heavily on AI systems for complex cognitive tasks, potentially leading to neglect of higher cognitive skills.


“Just as one cannot become skilled at basketball without actually playing the game, the development of complex intellectual abilities requires active participation and cannot solely rely on technological assistance,” Domínguez says. 


AI also presents new problems insofar as it makes cognitive offloading incredibly easy, and this in turn encourages further cognitive offloading. 


“Whether humans tend to offload cognitive processes such as memory often depends on cost–benefit evaluations of internal processing versus externalisation,” Grinschgl, Papenmeier and Meyerhoff say. “Raising the costs of externalisations (e.g., by adding additional physical or temporal demands) increases the use of internal strategies such as memory-based processing, whereas lowering the costs of externalisations increases the use of technical tools.”



What does all this mean for educators?

“Proceed with caution” might be the motto of the day, but early research suggests that AI with guardrails might offer some protection from cognitive “laziness”. 


A study by researchers at the Wharton School of the University of Pennsylvania revealed that generative AI, specifically OpenAI's GPT-4, significantly boosted immediate performance by an impressive 48%. However, when GPT-4 was later removed, students who had relied on it performed 17% worse than those who had never used it. 


Interestingly, the study also examined the effects of GPT Tutor, a version of GPT-4 designed with specific safeguards. Students who used GPT Tutor not only achieved a whopping 127% performance improvement but also avoided the negative effects on learning associated with unrestricted generative AI.


GPT Tutor employed two key safeguards. First, it provided correct solutions to practice problems along with teacher input on common student mistakes and guidance for offering accurate feedback. This ensured the AI avoided giving incorrect information. Second, the AI was instructed to offer hints rather than directly supplying answers. 


The conclusion? Unrestricted gen-AI use appears to harm learning outcomes when access is taken away, while gen-AI with guardrails mitigates these negative effects, but doesn’t provide additional benefits either.


So, dust off those textbooks, because book-based learning isn’t dead yet.

