

6 December 2023
How do you humanise talking machines (but without the bad parts)?
This year, AI systems that can write almost human-like texts have made a breakthrough worldwide. However, many academic questions on how exactly these systems work remain unanswered. Three UvA researchers are trying to make the underlying language models more transparent, reliable and human.
The launch of ChatGPT by OpenAI on 30 November 2022 was a game changer for artificial intelligence. All of a sudden the public became aware of the power of writing machines. Some two months later, ChatGPT already had 100 million users.
Now, students are using it to write essays, programmers are using it to generate code, and companies are automating everyday writing tasks. At the same time, there are significant concerns about the unreliable nature of automatically generated text, and about the reproduction of stereotypes and discrimination found in the training data.
The media across the world quickly jumped on ChatGPT, with stories flying around on the good, the bad and everything in between. ‘Before the launch of ChatGPT, I hadn’t heard a peep from the media about this topic for a long time,’ says UvA researcher Jelle Zuidema, ‘while my colleagues and I have tried to tell them multiple times over the years that important developments were on the horizon.’
Zuidema is an associate professor of Natural Language Processing, Explainable AI and Cognitive Modelling at the Institute for Logic, Language and Computation (ILLC). He is advocating for a measured discussion on the use of large language models, which is the kind of model that forms the basis for ChatGPT (see Box 1). Zuidema: ‘Downplaying or acting outraged about this development, saying things like “it’s all just plagiarism”, is pointless. Students use it, scientists use it, programmers use it, and many other groups in society are going to be dealing with it. Instead, we should be asking questions like: What consequences will language models have? What jobs will change? What will happen to the self-worth of copywriters?’
Read more here.
This article was published by the UvA.
The image was generated by the University of Amsterdam using Adobe Firefly (keywords: shallow brain architecture).
Similar news items
June 18
Six UvA Researchers Awarded Prestigious ERC Advanced Grants
The European Research Council (ERC) has awarded Advanced Grants to six researchers at the University of Amsterdam. Each grant, worth up to €2.5 million, supports cutting-edge fundamental research by established scholars with a track record of significant achievements.
read more >

June 17
Strengthen Your Innovation with AiNed InnovationLab Round 2. Open Call!
The second round of the AiNed InnovationLabs is about to open, presenting an exciting opportunity for organisations across the Netherlands to accelerate AI innovation with financial support, expert guidance, and a strong collaborative network.
read more >

September 17
Rethinking AI Curricula: Ethics Must Play a Bigger Role, Says UvA Lecturer
Current AI education is too narrowly focused on technology and programming, warns Frank Wildenburg, a lecturer in artificial intelligence at the University of Amsterdam. In a recent opinion piece, he argues that the dominant curriculum instills a worldview of technological determinism—the belief that technological progress is inevitable and society must simply adapt.
read more >