This article was written for the "Issues" section of Omalius magazine #28, March 2023.

It's true that this chatbot, which seems to think and write in real time—like a friend you'd chat with on Messenger or WhatsApp—is as intuitive as it is surprising. Completely free and accessible with just an email address, it can calculate, produce recipes, write articles, and compose poetry (Oh ChatGPT, source of infinite knowledge,/You are our guide in the digital world,/Offering us your endless knowledge,/You are the source of our logical intelligence). It can also explain with unwavering confidence that cows are oviparous or provide bibliographic references that appear authentic but are in fact inaccurate. The tool is based on the principle of plausibility rather than truth. "ChatGPT works on the basis of a probabilistic equation," explains Bruno Dumas, professor at the Faculty of Computer Science at UNamur and co-chair of the NaDI Research Institute. "It will string together the most probable sequence of words based on the question you have given it. This sequence of words will form sentences that make sense and are interesting because ChatGPT has been trained on a large amount of data, some of which is known: the entire English Wikipedia, the entire Reddit social network, two large book databases equivalent to a gigantic library, and the rest of the Internet, including Twitter."
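
For readers who want to see this mechanism in miniature, here is a toy sketch in Python with invented words and probabilities (a loose illustration of Bruno Dumas's explanation, not of ChatGPT's actual model or vocabulary): at each step the program picks the next word in proportion to how likely it is assumed to follow the previous one, which is exactly how a fluent but false sentence such as "cows are oviparous" can be produced with complete confidence.

import random

# A deliberately tiny, invented next-word table (illustrative only; nothing
# to do with ChatGPT's real model, vocabulary, or probabilities).
NEXT_WORD_PROBS = {
    "cows": {"are": 0.9, "eat": 0.1},
    "are":  {"mammals": 0.55, "oviparous": 0.35, "animals": 0.10},
    "eat":  {"grass": 0.95, "insects": 0.05},
}

def generate(start_word: str, max_words: int = 5) -> str:
    """Build a sentence by repeatedly picking the next word in proportion
    to its (made-up) probability of following the current word."""
    words = [start_word]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        options = NEXT_WORD_PROBS[words[-1]]
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("cows"))  # often "cows are mammals", sometimes "cows are oviparous"

Scaled up to a vocabulary and statistics drawn from the enormous corpus described above, this same "most probable next word" principle is what produces ChatGPT's plausible-sounding prose.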

Personal assistant

In a world where the relationship with truth is already severely damaged, fears related to plagiarism, cheating, the fabrication of false evidence and sources, and the standardization of thought are widespread. But since fear does not ward off danger, UNamur teachers quickly considered the best way to integrate the conversational robot into their courses. "There is always enthusiasm for a new tool, but we must obviously remain critical," comments Guillaume Mele, researcher and assistant in the university's Education and Technology Department and member of the IRDENa Research Institute. "We went through a phase of analysis to understand the possibilities offered by ChatGPT, but also its limitations. It's a great tool, but it must remain a tool, not the core of an educational system."

"Personally, I am a big believer in the idea of mentoring in education, in one-to-one relationships: from this perspective, ChatGPT can be seen as an opportunity for students to have a personal digital assistant at home," suggests Michaël Lobet, professor in the Department of Physics at UNamur and FNRS-qualified researcher at the NISM Research Institute. "Of course, we're not there yet, but if our students manage to use the tool correctly, even if it's just to clear some of the groundwork, that will already be very good. We can envisage different educational scenarios: they could use it for course summaries, mind mapping, redefining certain concepts, etc. However, I remain skeptical and cautious because we are still in the initial phase. I submitted three physics problems to ChatGPT and it got two of them wrong... However, the goal is not to "trick" ChatGPT: that would be to treat it as a rival and forget that it is only a machine that is destined to improve. "If I use a hammer, I don't compete with the hammer," sums up Michaël Lobet. On the other hand, there's nothing like it for hammering in a nail.

We need to move away from stereotypes that teachers are necessarily old, out-of-touch fools and students are necessarily cheaters... Our job is to teach students how to evolve with the tools that exist.

Élise Degrave, professor at the UNamur Faculty of Law and member of the NaDI Institute

Critical thinking

Of course, ChatGPT will bring about change, as will its competitors, led by Bard, Google's conversational tool. But just as teaching and research survived Wikipedia, they could survive these intelligent chatbots. "With Covid, we had to introduce Teams urgently, which took everyone out of their comfort zone and changed the way classes were taught, but not only in a negative way. It also led to more pedagogical reflection and helped to develop teachers' digital skills, resulting in some impressive teaching methods. There were fears, then a period of familiarization until everyone started using it (or not) according to their needs. That's probably what will happen with ChatGPT," says Guillaume Mele.

While many teachers are already interested in it, ChatGPT remains unknown to many students. Marie Lobet, assistant at the Department of Education and Technology (DET) at UNamur, has conducted an initial study showing that only a third of them have heard of it. Now is therefore the right time to introduce it to them. "ChatGPT can quickly put students at a disadvantage if they don't use it critically," comments Olivier Sartenaer, philosopher of science at UNamur. "That's why I suggest they use it and cite it as a source, just like a traditional source, by putting the results in the appendices. It's a way of using the tool honestly, showing where it's right and where it's wrong. As in any investigative work, it must obviously be treated as one source among others, and an unreliable one at that, since it operates on the principle of probability rather than truth..." For its part, ChatGPT does not source any of the information it provides, which could pose intellectual property issues. Above all, the results it offers should be viewed as raw material: clay that needs to be molded and reshaped if we want to get closer to what is right, true, or even beautiful (Oh ChatGPT, you are a technological gem, a marvel that makes our lives easier, you save us precious time, and guide us to success).

Élise Degrave, professor at the UNamur Faculty of Law and member of the NaDI Institute, also believes that it is necessary to introduce students to this tool and that it would even be "dangerous to ban it": "We need to move away from the stereotypes that professors are necessarily old, out-of-touch fools and students are necessarily cheaters... Our job is to teach students to evolve with the tools that exist, because they are the ones who will change the professions of the future. This is all the more important given that many students, even if they are very comfortable with social media, do not have a digital culture: many do not know what a filter bubble is (editor's note: a system for personalizing search results) or that on Tinder, the algorithm matches attractive people with other attractive people... This knowledge is all the more necessary as artificial intelligence is likely to become increasingly prevalent in our lives. So we might as well make it our ally." Because, as ChatGPT itself writes in the last stanza: "We address this ode to you with love,/You, our dear virtual friend,/You are part of our daily lives,/And we could not replace you."

ChatGPT: a normative tool?

ChatGPT has an answer for everything and never gets upset. It's impossible to make it lose its temper or plunge it into the depths of doubt. "ChatGPT's answers have a stereotypical aspect to them. And it's still a black box: we know that the tool has 'eaten' the entire Wikipedia, but in English, which is not neutral," comments Olivier Sartenaer. The fundamental question in social sciences—where am I speaking from?—is of no concern to the conversational robot. "Usually, we can try to see why a source tells us this or that, but here that's not the case because the statement has no author. It therefore becomes impossible to contextualize the statement." At best, we know from a Time investigation that OpenAI worked with Kenyan workers through the company Sama to eliminate racist, sexist, and hateful content that ChatGPT could have fed on... "These workers were paid $2 an hour to deal with the dregs of humanity. This obviously raises questions," comments Bruno Dumas.

Neither empathy nor transgression

"ChatGPT encapsulates norms," explains Élise Degrave. "For example, if you ask it how to build bombs, it will respond that if you have a problem, you should consult a healthcare professional... But who defines these norms? From a legal perspective, it's interesting." In her view, ChatGPT is characterized above all by its inability to transgress these norms. "In law, the concept of morality has been defined by the same article of the Civil Code since 1830, except that before, what was contrary to morality was cohabitation, and now, to caricature, it is selling your eggs on the internet... Society is changing, but ChatGPT is incapable of interpreting and evolving norms. It is also incapable of considering cases of force majeure. In law, for example, it is forbidden to speed unless it is to rescue a person in danger." Royally psychologically rigid, ChatGPT is unaware of exceptions and empathy. In this sense, according to Élise Degrave, it is "a magnificent plea for the role of humans." Because it is incapable of the worst, ChatGPT is also incapable of the best.

Interview with Laurent Schumacher, Vice President for Education

At the end of January, UNamur organized a PUNCh (Pédagogie Universitaire Namuroise en Changement) session dedicated to ChatGPT, which was a real success, with some 260 participants...

Laurent Schumacher: Yes, out of 600 or 700 people concerned, that's significant. We had some dynamic individuals who developed a resolutely positive approach to the tool, based on the premise that it was part of the practices of future professionals and that it was worth exploring how it could be used for learning and career purposes. The various speakers presented the tool, but also its limitations and the scenarios in which it could be implemented. The aim was to pool and share best practices.

What could ChatGPT bring to the academic world?

L.S.: Typically, when conducting research, ChatGPT facilitates the creation of an initial state of the art. The first limitation is that it can only reproduce what it already knows, in other words, the knowledge available at the time it collected information, in this case late 2021 and early 2022. Nor can we take what ChatGPT says at face value: it requires a critical eye on the part of the student. The fact that it presents itself as a conversational tool also gives a different perspective on the answers it provides: it feels like you are actually talking to someone, when in fact its response is nothing more than a sequence of words. So the sequence could be different and the resulting text could say the exact opposite.

Do you have any concerns about the risk of plagiarism and cheating?

L.S.: It would be an exaggeration to say that we have no concerns. But we now have a different relationship with the availability of knowledge. Of course, we cannot imagine training doctors who have no knowledge of the human body, but we can imagine a scenario in which doctors act in alliance with their own knowledge and what they can glean from digital resources: this is closer to a kind of augmented practice, an augmentation rather than a substitution.
