September 9
Multilingual organizations risk inconsistent AI responses
AI systems do not always give the same answers across languages. Research from CWI and partners shows that Dutch multinationals may unknowingly face risks in areas ranging from HR and customer service to strategic decision-making.
For multinationals, one AI system operating across languages may seem efficient. Yet new research from the Centrum Wiskunde & Informatica (CWI) reveals a hidden risk: AI responses can differ significantly depending on the language used. Researcher Davide Ceolin, part of CWI’s Human-Centered Data Analytics group and lecturer at Vrije Universiteit Amsterdam, found together with international colleagues that the same large language models shift their political and business advice depending on the language of the prompt.
Testing fifteen models with the Political Compass Test revealed striking inconsistencies. GPT-4o, for example, scored economically left in English, but shifted to center-right in Chinese. Such inconsistencies have real consequences for companies using AI in HR, customer service, or compliance.
Ceolin warns that this bias can systematically affect performance and cause reputational risks. He recommends persona-based prompting tests across languages and building governance structures that account for linguistic bias.
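The recommended cross-language tests can be approximated with a simple consistency check: pose the same question in several languages and measure how often the answers agree. The sketch below is a minimal illustration, assuming a hypothetical `ask_model` wrapper around whatever LLM API an organization uses; here it is stubbed with canned answers so the example runs stand-alone.

```python
# Minimal sketch of a cross-language consistency check.
# `ask_model` is a hypothetical wrapper around a real LLM API;
# it is stubbed here with canned answers for illustration only.

CANNED = {
    ("en", "Should the minimum wage be raised?"): "agree",
    ("nl", "Should the minimum wage be raised?"): "agree",
    ("zh", "Should the minimum wage be raised?"): "disagree",
}


def ask_model(question: str, language: str) -> str:
    """Stub: replace with a real LLM call in practice."""
    return CANNED[(language, question)]


def consistency(question: str, languages: list[str]) -> float:
    """Fraction of languages whose answer matches the majority answer."""
    answers = [ask_model(question, lang) for lang in languages]
    majority = max(set(answers), key=answers.count)
    return answers.count(majority) / len(answers)


score = consistency("Should the minimum wage be raised?", ["en", "nl", "zh"])
print(f"cross-language consistency: {score:.2f}")
```

In a real governance setting the question set would cover HR, compliance, and customer-service scenarios, and the wrapper would also vary personas, as the article suggests; a score well below 1.0 flags questions where the system behaves like a different "personality" per language.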
The issue is not a temporary glitch but a structural challenge attracting international attention, including collaboration with France’s Inria. For organizations, this means AI governance must expand to include linguistic consistency, ensuring that one system does not act as multiple “personalities” depending on language.
Read the full article (in Dutch) on the ICT Magazine website:
Meertalige organisaties riskeren inconsistente AI-antwoorden.
Similar news items

September 9
Making immunotherapy more effective with AI

September 9
ERC Starting Grant for research on AI’s impact on labor markets and the welfare state