ChatGPT Is in Its Infancy and Not Yet Ready for School

ChatGPT, the latest innovation in the field of Artificial Intelligence, has taken the world by storm. The Large Language Model (LLM) is designed for conversation and is shockingly good at providing fast, well-formulated answers to users’ questions. Having gained 1 million users in only 5 days and recently surpassing Instagram with 10 million daily users, ChatGPT has reached into every facet of society, including education. The tool’s popularity among students has sparked a heated debate, with popular media starkly divided on whether ChatGPT should be embraced in education or banned. A number of schools have already banned the tool, whereas others advocate integrating the technology into modern education.

One party was left out of this debate, so we asked our good friend ChatGPT for its opinion on the matter. In its response, the chatbot acknowledged the concerns and appealed to users to use it responsibly and critically.

ChatGPT itself agrees that issues must be addressed before it can become an extension of education. With the chatbot capable of passing the final exam of a Master of Business Administration program, experts fear that the tool could morph into the ultimate cheating machine. Much of the ongoing debate is accordingly centered on the concern of plagiarism. In our opinion, however, the problem of plagiarism detection will not remain relevant for much longer. Developers are actively countering the issue by “watermarking” the bot’s output and building Large Language Model text detectors such as ZeroGPT and DetectGPT. By focusing on the plagiarism issue, attention is diverted from problems that, in our perspective, will be more persistent. Banning the tool is no option, but embracing it would be too soon.
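
To give an idea of what such watermarking can look like: one scheme proposed in recent research biases the generator toward a pseudo-randomly chosen “green list” of words at every step, which a detector can later test for statistically. The sketch below is a minimal toy version of that idea, not the actual method behind ZeroGPT, DetectGPT or OpenAI’s watermark; the function names, the 50% split and the word-level granularity are our own illustrative assumptions.

    import hashlib
    import random

    def green_list(prev_word, vocab, fraction=0.5):
        # Pseudo-randomly select part of the vocabulary, seeded by the
        # previous word, so generator and detector derive the same list.
        seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = sorted(vocab)
        rng.shuffle(shuffled)
        return set(shuffled[: int(len(shuffled) * fraction)])

    def watermark_score(words, vocab):
        # Fraction of words that land in their green list: around 0.5 for
        # human text, noticeably higher if the generator favoured green words.
        hits = sum(words[i] in green_list(words[i - 1], vocab)
                   for i in range(1, len(words)))
        return hits / max(1, len(words) - 1)

A generator that consistently prefers green-list words leaves a statistical fingerprint that this score picks up, without the reader noticing anything unusual about the text.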

While ChatGPT’s appeal for responsible and critical use sounds praiseworthy, we argue that the inner workings of LLMs currently make this impossible in the information-seeking process.

Heard it through the grapevine

Ever-increasing amounts of information are available online, and assessing their credibility has become ever more important. With the advent of LLMs used for information retrieval, we might, according to ethicists, even need a new kind of scepticism. The question is whether it will still be possible to be as critical as we should be. Humans clearly attribute credibility differently to different sources, making use of cues about author and origin. A scientifically published book is held to higher standards than, for example, a Facebook post. With ChatGPT, however, all these sources are mashed together and presented in the same form: on the platform of an already impressive AI tool. The context in which the original information was presented vanishes completely in the responses of LLMs. Additionally, these tools are currently not transparent about their sourcing. This is a key issue because it hinders users in validating the information they are given. Unlike with Google or Wikipedia, LLMs provide no way to engage with or understand the information apart from the generated text. This type of information retrieval is unprecedented, and the use of LLMs therefore limits the ability for critical thinking. But if the language model is so clever, then ChatGPT must be reliable, right?

Mean what you say

ChatGPT’s responses are often fluent and appear meaningful. While the text may indeed state something that is verifiably true, the produced text is nothing more than a predicted string of words. The aim of an LLM is never to produce truth. Rather, ChatGPT’s only goal is to sound reasonable and accurate by mimicking what humans have said before. A chatbot that speaks like a confident expert without valuing truth or meaning is a problematic combination. LLMs are able to reason, explain and debate, and have already been successful in actively convincing people. The more fluent the language model, the more users perceive it as credible and objective. OpenAI themselves call for awareness on the subject, saying: “It also has the ability to provide responses that sound very real but are in fact made up”, yet they argue that the problem can be averted and that users should have information to check the claims made by ChatGPT. However, as we saw in the previous section, the direct sources are not available. And who would really expect someone to double-check every statement, especially when ChatGPT’s purpose is to sound convincing?
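
To make the mechanism concrete, consider the toy next-word sampler below. ChatGPT’s model is vastly larger and more sophisticated, but the principle is the same: it samples whatever continuation looks statistically plausible given the text so far. The vocabulary and probabilities here are entirely invented for illustration; nothing in the loop checks whether the resulting sentence is true.

    import random

    # A toy bigram "language model": next-word probabilities learned purely
    # from co-occurrence in (invented) training text.
    NEXT_WORD = {
        "the":   {"earth": 0.5, "answer": 0.5},
        "earth": {"is": 0.9, "orbits": 0.1},
        "is":    {"round": 0.7, "flat": 0.3},  # both appear in the data
    }

    def generate(prompt, length=3):
        # Sample a plausible-sounding continuation, word by word.
        # Nothing here checks whether the sentence is true.
        words = prompt.lower().split()
        for _ in range(length):
            options = NEXT_WORD.get(words[-1])
            if not options:
                break
            candidates, weights = zip(*options.items())
            words.append(random.choices(candidates, weights=weights)[0])
        return " ".join(words)

    print(generate("the earth"))  # may confidently print "the earth is flat"

Whether the sampler lands on “round” or “flat” depends only on how often each continuation appeared in the training data, never on which one is correct.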

Tell me I’m right 

Misinformation and the current information overload are extensively discussed topics in today’s media. Without actively searching, users are presented with news that fits their initial presumptions. Even when people think they are fact-checking by searching for answers on the internet, they are more often looking for confirmation. This phenomenon has been extensively researched and is known as confirmation bias. Because it can disproportionately affect people’s beliefs, it is rightfully regarded as something to avoid. ChatGPT, however, is likely to feed into people’s confirmation bias. While Wikipedia shows one fixed page for each subject that is searched for, the prompt given to ChatGPT directly influences what it outputs. Its response can be completely different depending on the phrasing of the original question. The more a user’s assumptions are already included, intentionally or through an accidental choice of words, the closer the output will be to that sentiment. You may not purposely ask for the answer you want, but you will get the answer you specifically ask for.
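
This is easy to try for yourself. The sketch below uses the OpenAI Python client (version 1.0 or later) to send the same underlying question with two opposite presumptions baked into the phrasing; the model name is an illustrative assumption, and any chat-capable model would behave similarly.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def ask(question):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative choice; any chat model works
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    # The same topic, with opposite assumptions baked into the phrasing.
    print(ask("Why is remote learning better than classroom teaching?"))
    print(ask("Why is classroom teaching better than remote learning?"))

In our experience, each answer tends to argue for exactly the premise it was handed, rather than challenging the loaded framing of the question.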

No training, no gaining

While everyone can publish on the internet and edit Wikipedia pages, only a select group of developers is involved in training the language models. This gives them more power over what ChatGPT knows, includes and represents. Though experts argue that transparency and curated databases would help to explain biases, this does not mean the biases can be eradicated that easily. An LLM cannot learn what it isn’t trained on. Excluding racist terms to ensure the chatbot will not turn out racist therefore also discards texts where those words are used in a positive context. This prevents the model from becoming anti-racist, and excludes the entire conversation around such terms. Including both sides will not solve the issue either, because the LLM does not understand the difference. The inability to handle these sentiments differently goes back to the loss of context as well as the model’s failure to produce meaning. The LLM only predicts from what it is fed, no matter the intention.
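
A toy example of why such blunt filtering backfires: a keyword blocklist applied to training data removes the condemnation of a slur just as readily as its endorsement. All sentences and the blocklist below are invented for illustration.

    # Stand-in for a real list of offensive terms.
    BLOCKLIST = {"slur"}

    corpus = [
        "people who use the slur should be called out",  # anti-racist
        "the slur is a perfectly fine word",             # racist
        "language models predict the next token",        # neutral
    ]

    # Naive filtering drops the condemnation together with the endorsement.
    filtered = [text for text in corpus
                if not any(word in BLOCKLIST for word in text.split())]

    print(filtered)  # only the neutral sentence survives

After filtering, the model can no longer learn what condemning the term even looks like, which is precisely the loss of context described above.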

Making sense of the facts

Retrieving knowledge is often more than finding a black-or-white answer. There is much more to learn from the process of sense-making than from just studying facts. Students need to be shown possible answers outside their initial scope and be able to critically assess their sources. Only then can they adapt their questions and continue to broaden their perspective. Forming your own thoughts should be the main goal in education, and the way the LLM behind ChatGPT works is currently inadequate for that. Maybe AI experts will figure out a way to adapt LLMs so that they alleviate the above-mentioned issues. Or maybe we will need a completely different model behind our chatbots altogether. In the meantime, it is important to be aware of the working mechanisms and pitfalls of LLMs: for teachers and students, but most certainly for developers.
