Is it possible to deal with AI responsibly?
Everyone wants to jump on the AI bandwagon, but how do you do it responsibly? Alumna Lieke Dom advises organisations on responsible AI and is involved in GPT-NL. ‘AI is not really intelligent; it’s statistics. Yet we need to set boundaries, because powerful systems have been created and they perpetuate inequality.’
Author: Aafke Jochems | Photography: Yvonne Compier
One of the biggest misconceptions that organisations have is that AI is intelligent. ‘The narrative is that AI can imitate or even surpass humans,’ Lieke begins. ‘But systems are not more intelligent than people. A system has only one way of thinking: binary. It’s about statistics and categories. Language models – which do this in a clever and fine-grained way – also make a statistical prediction of the next word. For us as humans, this kind of classification thinking is just one way of taking in the world.’
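To make that ‘the next word is just statistics’ point concrete, here is a minimal, purely illustrative sketch: a toy bigram model in Python that counts which word most often follows another in a tiny corpus. This is a simplification for illustration only; real language models such as ChatGPT or GPT-NL are large neural networks rather than frequency tables, but the underlying task, predicting the next token from patterns observed in data, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of words.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent next word, given the counts."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (follows 'the' twice, vs. 'mat'/'sofa' once each)
print(predict_next("cat"))  # -> 'sat' or 'slept'; the counts are tied
```

The point of the sketch is not the code itself but the principle: the ‘prediction’ is nothing more than a lookup over frequencies, with no understanding behind it.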
Lieke advises organisations at TNO on the responsible use of AI, conducts research into the unintended consequences of AI, and teaches AI Ethics & Governance at Beeckestijn Business School. Responsible AI has become a buzzword, according to Lieke: ‘Trustworthy AI or human-centred AI are also good terms.’
‘The narrative is that AI can imitate or even surpass humans, but systems are not more intelligent than people.’
Ethical questions behind AI
The seed of Lieke's enthusiasm for the responsible side of AI was planted at the end of her bachelor's degree in Communication Science at the University of Twente. There she took a minor in philosophy of technology and was captivated by it. To learn more about how technology is applied in practice, she completed the master's in Digital Business and Innovation at VU Amsterdam. She then obtained a master's degree in Applied Ethics at Utrecht University, expanding her skillset to include assessing the impact of these applications.
Intersection of technology and ethics
Her studies came together when she joined Google’s Responsible Innovation team, where the impact of AI was examined from both an application and a societal perspective. Her philosophical interest brought Lieke to the intersection of technology and ethics: she wants to understand how systems work and what values play a role in them. ‘AI is an umbrella term and developments are moving at lightning speed. But the ethical questions behind them have been firmly in place for centuries.’
What often goes wrong when using AI
At the moment, all attention is focused on generative AI, particularly language models. Meanwhile, there are still major risks associated with simpler AI systems, such as algorithms, Lieke points out: ‘People now sometimes refer to an application as “simple AI” because generative AI is more complex. But machine learning also involves risks such as bias and reinforcing or maintaining inequality. And its harmful effects often remain invisible. That leads to inequality, and that problem is far from being solved.’
‘Machine learning also involves risks such as bias and reinforcing or maintaining inequality. And its harmful effects often remain invisible.’
The illusion of humanity
Other risks also arise, precisely because AI is not intelligent but can come across as extremely convincing. For example, more and more people are using language models for psychological or medical support. Lieke finds this disturbing: ‘Anyone can develop a tool powered by ChatGPT and dress it up as a coaching tool. You get drawn in and it seems so real. It’s a big pitfall, because a conversation with a chatbot is not monitored by a doctor or psychologist.’
Lieke therefore argues that we should set boundaries for AI: partly to properly shape the narrative, and partly because such powerful systems have been created. ‘There are parties that have power over decisions made about and by AI. They are in a formative position. You have to set boundaries for those tech companies and their leaders.’
‘There are parties that have power over decisions made about and by AI. They are in a formative position. You have to set boundaries for those tech companies and their leaders.’
How to use AI responsibly
When asked how organisations can use AI responsibly and arm themselves against harmful effects, Lieke responds resolutely: ‘Knowledge.’ Then, smiling: ‘Choosing GPT-NL.’ GPT-NL is a language model developed by TNO together with SURF and the Netherlands Forensic Institute. Lieke helped define the four core values for it: reliability, transparency, reciprocity and sovereignty. ‘You can see those core values as setting boundaries. And at the same time, they are also boundary-pushing, because our approach requires a new way of innovating.’
GPT-NL: a responsible alternative
GPT-NL only uses data that has been obtained lawfully, is transparent about its development and has a fair revenue model in which data providers share in the returns. This removes the concerns about privacy, intellectual property and lack of control that come with international models, making it a responsible alternative for organisations. Lieke: ‘To be clear, it is not a Dutch ChatGPT. Think of it as an engine that an organisation can then use to safely develop its own AI tool. Both the organisations that want to use GPT-NL and the parties that provide the data for the development see the need for sovereignty: no more dependence on American tech companies.’ The first version will be ready in early 2026.
Knowledge of your blind spots
What does she think is the biggest challenge for organisations that want to work responsibly with AI? ‘Having knowledge of the system and of your own blind spots’, she answers. ‘I understand that not every organisation can develop AI itself or bring experts in-house. But it is possible to find out what your blind spots are. From there, build up knowledge and, above all, ask: what are the boundaries within which you can work? What do you do to mitigate risks? And do you know enough about the system to take responsibility for the choices that have been made?’
AI has become political
In recent years, the conversation about responsible AI has shifted. A few years ago, it was mainly about what a dataset looks like and whether an algorithm is discriminatory. ‘There is still a lot to be gained from that’, says Lieke. ‘Now it’s more about sustainability, about raw materials, about sovereignty. AI has become a political topic, and with it responsible AI. Ideally, responsible AI would disappear as a discipline and AI developers would have not only technical but also ethical skills.’
‘Now it's more about sustainability, about raw materials, about sovereignty.’
How to push boundaries
‘I also see how AI can help push boundaries, for example in creative processes’, concludes Lieke. ‘But remember, AI is trained on Western data. That Western view is dominant and the output homogeneous.’ There is also an opportunity for organisations to push boundaries here: ‘There is a dominant focus on efficiency, but you can look at more than just reducing costs. If everyone works faster with AI but no longer likes their work, what then? Or if your customers don’t want to interact with a chatbot? Be prepared to evaluate the use of AI in terms of employee well-being or, for example, trust. That is pushing boundaries.’
‘There is a dominant focus on efficiency, but you can look at more than just reducing costs. If everyone works faster with AI but no longer likes their work, what then?’
