ChatGPT is terrific - but at what cost?

The hype around new AI trends is insatiable, and the use of AI-based applications is expanding into almost every industry and segment of society. Generative AI saw significant advances over the course of 2022, for example in the field of text-to-image generation. Tools such as DALL-E 2, Stable Diffusion, and Midjourney have garnered significant attention for their ability to transform human language into pictures, causing an outcry in the artistic world, since these tools offer a way to create art “on command” without really needing a human artist anymore.

However, probably the most controversial and popular application in this domain is ChatGPT, an "all-knowing oracle" chatbot built on top of OpenAI's GPT family of large language models, with superb capabilities for solving information processing tasks. ChatGPT can perform a variety of tasks, including answering questions with in-depth explanations, composing poetry, and writing and debugging programming code, to name just a few. Its speed and accuracy in generating responses within the context of a given prompt are admirable, notwithstanding its current flaw of occasionally providing incorrect or unsuitable answers, particularly to trick questions. ChatGPT's sophisticated features therefore open up a wide range of potential applications across numerous industries and promise a considerable efficiency boost in processing and generating information.

While the technical capabilities that ChatGPT offers, as well as their potentially disruptive consequences for incumbent technologies such as Google Search, are already being vividly discussed, one question still seems mostly unaddressed: how do generative AI applications affect essential human thinking capabilities such as critical, reflective, deep thinking? The internet, as a virtually infinite and always available external information medium, has already made it difficult for us to memorize information; do advanced chatbots go even further and affect other human cognitive abilities, such as logical reasoning and analytical thinking? With this article, we aim to reflect on what we can learn from our previous adaptation to the internet and to hypothesize about the (unintended) consequences of the use of generative AI applications for human cognitive capabilities. We primarily draw on the theoretical lens of cognitive psychology to assess the impact of technology on human thinking.

To do so, we first examine some core theories of human cognition and then apply them to the use of advanced chatbots. This way, we identify three (unintended) cognitive consequences of the rise of generative AI.

A glimpse into the cognitive psychology perspective

In the field of cognitive psychology, the human mind is often referred to as a "cognitive miser" because people, regardless of their intelligence, tend to choose simpler and easier thinking methods over more complex and cognitively demanding ones (Stanovich, 2009). The underlying assumption is that humans are limited in their capacity to process information, so they take shortcuts whenever they can (Fiske & Taylor, 1991; Kahneman, 2003). The cognitive miser theory, first introduced by Fiske and Taylor (1991), suggests that people engage in economical, cost-effective thought processes instead of rationally weighing costs against benefits and updating their expectations based on the results of their everyday actions.

Much of the cognitive miser theory is built upon research on heuristics in human judgment and decision-making. Dual-process theories, for instance, distinguish two main modes of human thought: System 1, the intuitive, automatic, and unconscious processing of information, and System 2, analytical, controlled, and conscious thinking (Stanovich & West, 2000; Kahneman, 2003; Kahneman, 2011). System 1 is therefore often referred to as fast thinking, following mental shortcuts and heuristics, whereas System 2 is referred to as slow thinking, relying on conscious and careful reasoning about information and arguments (Kahneman, 2003; Kahneman, 2011). Because System 2 is slower and more cognitively demanding than System 1, humans often switch to System 1 thinking, using heuristics for faster, more efficient processing of information, but at the risk of arriving at a suboptimal decision.

Heuristics can be defined as the "judgmental shortcuts that generally get us where we need to go—and quickly—but at the cost of occasionally sending us off course" (Gilovich & Savitsky, 1996, p. 36). To reduce their cognitive load, humans tend to ignore part of the information associated with a task and instead rely on mental shortcuts to solve it, because searching for and processing information cost time and cognitive resources (Gigerenzer & Gaissmaier, 2011).

In today's world, the search for suitable information is easier than ever thanks to the internet as an “external memorization” medium. Since the required information no longer has to be retrieved by people from their own memory, but is almost always at hand via intelligent devices, the cognitive effort associated with gathering and storing information is reduced. However, this “comfortable” method of information retrieval has also compromised humans' ability to recall information, a phenomenon known as the Google Effect. A study by Sparrow et al. (2011) investigated the cognitive consequences of having information at our fingertips due to the advent of the internet and found that people tend not to remember information if they believe it will be available to look up later. Moreover, when people have future access to information, they are much more likely to remember where the information is located than to recall the information itself (Sparrow et al., 2011). Another study, by Dong and Potenza (2015), found that information learned through internet search is recalled less accurately and with less confidence than information learned through traditional book searching.

The proliferation of information through the internet has seemingly caused a decline in our ability to memorize and recall information effectively; generative AI applications like ChatGPT, however, might go even further and compromise other cognitive capabilities associated with System 2 thinking, such as logical and analytical thinking, as well. Writing code or solving complex math problems requires logical thinking and problem-solving skills; composing neatly written text on a particular topic, or artistic poems, requires us to be creative and exercises our writing skills. With generative AI, our cognitive activity shifts further from independent, creative thinking to merely “requesting” knowledge. By entering a prompt, we get everything “served on a silver platter”, so the only cognitive effort lies in creating the prompt.

For the remainder of this article, we outline three potential consequences that the use of generative AI applications, particularly chatbots, may have on human cognitive processes. We elaborate on each of these consequences, providing additional detail and illustrating them with examples.

Consequences of Generative AI on Human Cognition

1) Short-cutting to the “final outcome” without engaging in the process of developing it

It is commonly argued in response that external memory sources, such as the internet, enable individuals to perform at higher cognitive levels by allowing them to efficiently access and process information without spending cognitive resources on recall. However, there is a big difference between incumbent external memory sources and advanced chatbots: while traditional external sources like the internet provide specific information in response to a query, the human user still has to engage in the cognitively demanding process of reasoning and drawing conclusions based on that information. In contrast, advanced chatbots like ChatGPT can perform this entire process of reasoning and decision-making on behalf of the user, effectively substituting almost the whole human thinking and decision-making process.

For example, imagine a financial analyst who needs to decide on a potential investment. In the past, the analyst would have had to spend significant time researching the company, analyzing financial statements, and considering market trends. With an advanced chatbot, the analyst can simply input a prompt asking for the chatbot's recommendation on the investment and receive a detailed, well-reasoned response that includes financial analysis, market trends, and a prediction of the company's future performance. The chatbot's sophisticated features effectively take over the cognitively demanding process of researching and analyzing, reducing human error and providing a more efficient way of decision-making, while the analyst merely relies on the chatbot's output to make the final call.

Another example can be drawn from the field of software engineering. Traditionally, software developers would need to spend a significant amount of time and cognitive effort researching and understanding the problem or task at hand, as well as troubleshooting and debugging the code they write. With a chatbot, a developer can simply provide a prompt outlining the problem or task, and the chatbot will generate the necessary code, troubleshoot and debug it, and even explain its decisions, greatly reducing the time and cognitive effort required from the developer.
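To make the software engineering example concrete, the sketch below shows what such a delegation might look like in practice. It is a minimal illustration, assuming OpenAI's Python client and an API key; the buggy function, the model name, and the prompt wording are our own hypothetical choices, not a prescribed workflow.

```python
# Minimal sketch of "outsourcing" debugging to a chatbot, assuming
# the openai Python package and an OPENAI_API_KEY in the environment.
# The buggy function, model name, and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

buggy_code = '''
def average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)  # crashes on an empty list
'''

# The developer's entire cognitive contribution is reduced to this prompt:
prompt = (
    "The following Python function crashes on some inputs. "
    "Find the bug, fix it, and explain your reasoning:\n" + buggy_code
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

# The diagnosis, the fix, and the explanation all arrive pre-packaged.
print(response.choices[0].message.content)
```

Note where the System 2 work happens: the loop of hypothesizing about the failure, testing a fix, and reasoning about edge cases takes place entirely inside the model, while the developer only reads the finished answer.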

2) Defaulting to the uncritical, cognitively appealing (mindless) consumption of synthetic content in general

Picking up the examples from above, this shift in thinking patterns may not be limited to the work environment. The transition from critical System 2 thinking to superficial System 1 thinking can become the default way of approaching thought processes in other areas of life, too. For instance, white-collar workers may use ChatGPT to manage their email traffic more efficiently by letting the chatbot do the demanding task of formulating emails, while the worker only provides the content as condensed bullet points (see the sketch below). This bears the risk that those employees gradually lean into the same mode of “short-cutting” to handle personal emails and personal communication. And for more trivial cognitive tasks outside of work, like meal planning, organizing daily activities, or decision-making while shopping, humans may likewise start to “outsource” cognitive effort to generative AI solutions such as chatbots, since the efficiency boost and cognitive relief are tempting for humans in their role as cognitive misers.
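The “bullet points in, polished email out” pattern can be sketched in a few lines; only the prompt differs from the debugging example above, and the API call would be identical. The bullet points and phrasing are, again, hypothetical.

```python
# Sketch of delegating email writing: the worker supplies condensed
# bullet points, the chatbot does the formulating. Content is made up.
bullet_points = [
    "project deadline moved to Friday",
    "need the revised budget by Wednesday",
    "thank the team for last sprint",
]

prompt = (
    "Write a polite, professional email to my project team "
    "covering these points:\n- " + "\n- ".join(bullet_points)
)
# Sent to the same chat completion endpoint as in the previous sketch,
# this prompt returns a fully formulated email ready to paste and send.
```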

If humans stop learning or stop engaging in cognitively effortful tasks, the brain may become less able to adapt and change, potentially leading to a decline in cognitive abilities and overall intellectual capacity. The brain can be seen, metaphorically, as a “muscle”: if it is not used, established neuronal connections, and thus learned knowledge and cognitive abilities, regress (Shors et al., 2012). Generative AI applications may thus cause individuals to gradually avoid engaging in System 2 thinking altogether, as the brain is no longer accustomed to this mode of operation.

3) Superficially accepting knowledge without, or before, critically examining it

If this pattern of superficially requesting knowledge continues and becomes the default mode of working, humans may gradually also lose the cognitive abilities needed to critically examine the results generated by these technologies. Since a critical examination and evaluation of a chatbot's output can itself be cognitively demanding, and users may not be experts in the field for which they are retrieving knowledge, simply accepting the chatbot's output is tempting. This creates a double risk: not only may humans lose essential deep-thinking capabilities, but they may also become unable to correct problematic or incorrect results generated by these technologies, which is especially dangerous in critical fields such as healthcare.

While it can be argued that advanced chatbots like ChatGPT can perform complex tasks at a level that surpasses human abilities, this does not mean that they are immune to errors or biases. Chatbots, and generative AI in general, are only as good as the data they are trained on. If the underlying data contains biases or inaccuracies, the AI's outputs will be biased or inaccurate as well. Moreover, there are situations where a human touch is still required, for example in ethical or moral decision-making. A chatbot may be able to provide recommendations based on the data it has been trained on, but it lacks the ability to understand the nuances of a situation and make a decision that considers the broader context and its implications. Therefore, even if chatbots become more advanced in the future and eventually surpass human-level capabilities in problem-solving tasks, there will still be cases where human intervention and critical, analytical thinking are necessary to resolve issues and overcome challenges. For this reason, it is important to maintain our ability to critically examine information and engage in independent thinking, so that we can step in when required and make informed decisions based on our own understanding of the situation.

What to be mindful of, and what’s next?

As advanced chatbots continue to substitute for more and more of the independent human thought process, there is a risk that individuals will increasingly rely on superficial, heuristic thinking (System 1) rather than on more analytical and conscious thinking (System 2). This shift towards superficial thinking can have far-reaching consequences, as it may reduce individuals' ability to critically evaluate and understand complex information. It may also lead to a decline in the human capability to think creatively and come up with innovative solutions to problems independently. Furthermore, heuristic thinking is an approximate way of processing information, which can lead to errors and biases that are problematic in certain situations, especially in decision-making without automation assistance. When people rely too much on heuristics, they may miss important information and make poor decisions, which is particularly dangerous in high-stakes fields such as healthcare or law enforcement. It is therefore important to be aware of the risks of relying on heuristic evaluation of the output of generative AI, and to actively cultivate and maintain our capacity for analytical System 2 thinking.

ChatGPT reached 100 million users just two months after its launch (Hu, 2023), so the widespread future use of advanced chatbots is almost certain, since both economic, external incentives and internal, psychological incentives favor automation over human cognitive effort. In the business world, competition rewards every little opportunity to gain an advantage over competitors, so the most efficient methods are chosen to conduct work; automation thus becomes more pervasive and continues to replace “inefficient” human thinking in the work environment. From a psychological perspective, humans themselves are cognitive misers, and as a result we welcome any form of automation that relieves us of cognitive effort in our private lives.

ChatGPT, with its superb capabilities for solving information processing tasks, is set to become more advanced with each iteration. Microsoft has already launched a new version of its Bing search engine that incorporates ChatGPT to make it easier for users to create content and find answers on the web (Mehdi, 2023). Google has followed suit and introduced its own chatbot, Bard, which will also be integrated into the company's search engine (Pichai, 2023). Due to market pressure, every player will aim to beat competitors' chatbots in terms of capabilities and to incorporate the technology into their services as much as possible. The weaknesses addressed above are therefore likely to be overcome sooner rather than later. Chatbots will probably become increasingly prevalent in the business world and may become part of our daily lives, making our interactions with technology more seamless and efficient. As the technology continues to improve, advanced chatbots are likely to play a significant role in shaping how we access and process information in the future.

However, the question remains how independent human thinking will be shaped: will we continue to be able to draw logical conclusions ourselves through deep but tedious thinking? Or will we lose cognitive capabilities and become like “shallow waters”, lacking the depth and flow of ideas and imagination, perceiving and interpreting information only superficially?

Rather than making predictions about the future interplay between humans and chatbots, this article is intended as a theoretical foundation for understanding, from a cognitive-psychological perspective, how humans are exposed to automated information processing. It also calls for further research into the cognitive biases that may arise from the use and adoption of advanced chatbots, in order to better understand the potential consequences. Let’s think deeply together!

About the authors

Marcel Peter is an alumnus of the Digital Business & Innovation Programme at VU Amsterdam, where he obtained his Master's degree in 2022. As part of his Master's thesis, he researched the effect of explainable AI in the domain of radiology, specifically how explainable AI methods influence radiologists in their decision-making while reading mammograms. Marcel was supervised by Prof. Dr. Mohammad Rezazade Mehrizi and worked closely with him over the course of the thesis trajectory. After graduating, Marcel stayed in contact with Mohammad, and together they have further extended their research on AI in radiology. After testing ChatGPT, Marcel found that much of the literature from his Master's thesis on cognitive psychology and hybrid intelligence was highly applicable to the way humans interact with chatbots, which led him to write this article.

Mohammad H. Rezazade Mehrizi is an Associate Professor of work and organizational learning at the KIN Center for Digital Innovation, Vrije Universiteit Amsterdam. He is passionate about understanding and helping practitioners and organizations learn and unlearn beyond their current limitations. In his current research, he examines the dynamics of expertise and learning among knowledge workers and professionals in relation to emerging algorithmic technologies.

References

Dong, G., & Potenza, M. N. (2015). Behavioural and brain responses related to Internet search and memory. European Journal of Neuroscience, 42(8), 2546–2554. https://doi.org/10.1111/ejn.13039

Fiske, S. T., & Taylor, S. E. (1991). Social Cognition. McGraw-Hill Education.

Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic Decision Making. Annual Review of Psychology, 62(1), 451–482. https://doi.org/10.1146/annurev-psych-120709-145346

Gilovich, T., & Savitsky, K. (1996). Like goes with like: The role of representativeness in erroneous and pseudoscientific beliefs. Skeptical Inquirer: The Magazine for Science and Reason, 20, 34–40.

Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base - analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720. https://doi.org/10.1037/0003-066X.58.9.697

Kahneman, D. (2011). Thinking, Fast and Slow (1st ed.). Farrar, Straus and Giroux.

Mehdi, Y. (2023, February 7). Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. The Official Microsoft Blog. https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/

Pichai, S. (2023, February 6). An important next step on our AI journey. Google. https://blog.google/technology/ai/bard-google-ai-search-updates/

Shors, T. J., Anderson, M. L., Curlik, D. M., & Nokia, M. S. (2012). Use it or lose it: How neurogenesis keeps the brain fit for learning. Behavioural Brain Research, 227(2), 450–458. https://doi.org/10.1016/j.bbr.2011.04.023

Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745

Stanovich, K. E. (2009). The cognitive miser: Ways to avoid thinking. In What Intelligence Tests Miss: The Psychology of Rational Thought (pp. 70–85). Yale University Press. https://doi.org/10.12987/9780300142532-008

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645–665. https://doi.org/10.1017/S0140525X00003435
