AI in the public sector: looking for the human in AI



This article has been translated into English.

The Dutch government is making full use of artificial intelligence (AI) in tackling societal problems, often with success. But not always. Sometimes things go wrong because the use of AI is quite opaque, or because people simply have too little expertise in AI systems. This can make it difficult to properly adjust such a system. It is time to tackle that problem. How? By experimenting with AI living labs in which transparency and people are central.

Too often, AI applications are not deployed in the way they should be. There are several examples of algorithms that stigmatise and disadvantage certain groups of people. In the development phase of such an algorithm, too little attention is paid to the human being. Citizens often do not even know whether the government deploys AI systems in a certain area. And even when that is clear, citizens do not know what role such a system plays in government decisions, and therefore do not know on the basis of which data those decisions are made. There is a lack of transparency and explainability, which does not help to increase trust in AI.


What is becoming increasingly clear is that citizens are not yet actively involved enough in AI developments. As a result, their interests are insufficiently safeguarded. Meanwhile, AI-based decisions are increasingly impacting the daily lives of citizens. That too is a problem that needs to be addressed quickly.


Source: TNO

Photo credits: Pexels, Tara Winstead