
How to deal with ChatGPT and Microsoft Copilot as a teacher

Last updated on 7 May 2024
The generative AI chatbot ChatGPT has been publicly accessible since December 2022, and since February 2024 VU Amsterdam provides students and staff with a secure version of Microsoft Copilot. With these tools, students can produce papers in no time: the chatbot writes text using artificial intelligence (AI). In this tip, we describe both threats and opportunities. How do you know whether students can formulate and produce something on their own? But also: what opportunities does this present for shaping education?

Students have always been able to have texts produced by others, such as paid ghostwriters, a friend or a family member. But ChatGPT is free and instantly accessible. Banning it will not prevent the use of ChatGPT and its successors. In short: there is no way around it.

Developments are moving fast.

  • July 2023: Considering assessment and examination, VU Amsterdam created an information page for students on what is and is not allowed with generative AI.
  • February 2024: Microsoft Copilot was made available to VU students and staff. To use this chatbot securely, log in with your VU credentials. The data then remains only at VU and is not shared with OpenAI. Teachers cannot force students to use ChatGPT, but can in fact ask them to use Microsoft Copilot for assignments via the VU license.

Readable texts by a computer
Generative AI systems such as ChatGPT pose several threats, especially for take-home exams and assignments. Students can ask AI chatbots simple but also very extensive questions, and the answers are very often correct and linguistically well put together. The student can then easily have these texts translated, paraphrased, or regenerated in new versions. The teacher, however, can hardly detect that the text was (partly) created by a computer. AI detectors cannot be used to detect such processing either: they pose a privacy risk and are very unreliable.

How to deal with this in education? We offer some suggestions below. 

Assessing during the process rather than only the result 
Assessing only a finished product is no longer feasible: generative AI can produce that. Assessment must shift to the process. Consider the skills and competencies associated with searching for information and with writing, producing, and creating texts (including content-based texts). These skills need to be recalibrated in higher education. In fact, this should always have been the case, but generative AI forces the issue.

Enhancing education 
With that shift, organizing (peer) feedback conversations, holding intermediate and final presentations, and questioning students directly about what they have done and learned (self-reflection) become even more important than before. This may create additional workload, but it will enhance the authenticity, personalization and inclusiveness of education.

Making topics very specific and topical 
Another approach is to have papers address very specific topics drawn from students' own contexts: for example, problems arising in Community Service Learning, a business, their neighborhood, their research, their stakeholders, or their community organizations. The more specific the topic, the harder it is for generative AI to draw on its background data to produce a readable text.

In addition, it helps to focus on the most recent developments in society and science, because ChatGPT only works with information up to September 2021. ChatGPT is also reluctant to say anything about specific companies. These last two restrictions, by the way, will surely disappear. Microsoft Copilot, for instance, already includes the latest information from the internet in its results.

More assignments could also be carried out during class time, under direct supervision and with peer review. Think of interviews, oral explanations or tests. These methods are, of course, well known in higher education; their use could be expanded.

Using generative AI in higher education 
But perhaps you can also use generative AI productively to encourage students to think critically. In other words: how can students use generative AI in higher education? The following suggestions support academic integrity and the development of higher-order thinking skills. They were devised by Lucinda McKnight of Deakin University; the translation was done using DeepL. This is the original source.

  1. Use generative AI as a research assistant 
    Generative AI can exhaustively research a topic in seconds and compile a text for review, along with references for students to check. This material can then serve as the basis for original and carefully referenced student texts. 
  2. Use generative AI to produce text on a particular topic for critique 
    Design assessment tasks that take text produced by generative AI as input, and have students critically annotate that text or suggest improvements. A variation is to have generative AI produce different versions of a text on the same topic and then have students compare and evaluate them. 
  3. Use generative AI for routine text, e.g., a poster, blog content or an informational brochure 
    Have students find out when a generative AI text, human text or hybrid text is appropriate for specific messages and have them justify this method and the result. 
  4. Use and assign generative AI for creative text, e.g., poetry 
    Students can investigate the various programs and algorithms on offer to explore their differences and which forms suit, for example, poetry or stories best. They can also use this exercise to research bias.
  5. Research and evaluate which different types of generative AI tools are a good fit for your field 
    How useful can generative AI tools be, for example, to produce text in multiple languages within seconds? Or to create text that is optimized for search engines? Or a text that has optimal knowledge of your field? 
  6. Explore how to apply generative AI ethically and appropriately. 
    Here, discuss how AI can lead to various forms of plagiarism and how this can or should be prevented. Who, and what, is excluded from this material, and thus potentially from the generated text? What assumptions, biases and injustices are embedded in this material, and thus possibly in the generated text? 

    Describe principles and regulations on how students can effectively use these systems, and have them keep track of how they use the systems for text production. They could then attach that method and its results to a final product (just as a literature review should describe its search strategy). This record can also be used formatively in discussion with fellow students and supervisors. 

Revival of the classic test? 
Or should we return to the old-fashioned test? Another solution, perhaps not immediately appealing, would be to go back to questioning students on knowledge and skills under supervised conditions. This would mean a greater focus on classroom-based examinations. (A nice touch: generative AI can also generate multiple-choice questions for you on command.) But whether that is a proper solution...?

More information