Usually, we can try to figure out why a source is telling us something, but in this case that is impossible: the statement has no author, so there is nothing with which to contextualize it. All we know, following an investigation by Time magazine, is that OpenAI worked with Kenyan workers, through the company Sama, to eliminate the racist, sexist, and hateful content that ChatGPT could have fed on... These workers were paid $2 an hour to deal with the dregs of humanity. This obviously raises questions...

Bruno Dumas, Expert in human-machine interaction, Faculty of Computer Science, Co-President of the NaDI Institute (Namur Digital Institute)

Excerpt from our article "ChatGPT: an opportunity for education"

By focusing his research lens on the user's side, Bruno Dumas is, in a way, a "computer psychologist." He advocates for the sensible and informed use of emerging technologies.

We are currently testing an AI system that will help doctors identify tumors in medical images. The challenge? Ensuring that doctors can tell whether the AI's response is reliable and, if so, to what degree. Together with doctors, we are developing and testing a process that enables the AI to report its degree of certainty. Initial feedback shows that this transparency is fundamental. With it, the AI is no longer just a machine that delivers an answer, but a technology that assesses its own certainty and explains its decision-making process. This creates a genuine collaboration between the doctor and the AI.
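The article does not describe how the system computes its degree of certainty, but the general idea of a model reporting a confidence alongside its prediction can be illustrated with a minimal sketch. Everything below is hypothetical: the labels, the raw scores, and the function names are illustrative, and the confidence shown is simply the top softmax probability, which a real clinical system would need to calibrate before presenting to doctors.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores into probabilities that sum to 1."""
    shifted = logits - logits.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def predict_with_confidence(logits: np.ndarray, labels: list[str]) -> tuple[str, float]:
    """Return the predicted label together with the model's certainty.

    The certainty here is the softmax probability of the top class --
    a simple stand-in for the 'degree of certainty' the excerpt describes.
    """
    probs = softmax(logits)
    top = int(probs.argmax())
    return labels[top], float(probs[top])

# Hypothetical raw scores from an imaging model for one scan.
labels = ["no tumor", "benign", "malignant"]
logits = np.array([0.3, 2.1, 1.4])

label, confidence = predict_with_confidence(logits, labels)
print(f"AI suggestion: {label} (certainty: {confidence:.0%})")
```

The point of surfacing the number rather than only the label is exactly the collaboration described above: a doctor can weigh a 95% suggestion differently from a 55% one, and ask the system for its reasoning when certainty is low.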

Bruno Dumas, Expert in human-machine interaction, Faculty of Computer Science, Co-President of the NaDI Institute (Namur Digital Institute)

Excerpt from our "Expert" article in Omalius magazine #36 (March 2025)