The opportunities and dangers of Artificial Intelligence
Author: Yrla van de Ven
Developments in artificial intelligence are moving fast. This offers opportunities but also presents dangers. How can organisations seize the opportunities of artificial intelligence without endangering themselves, their customers or society? Researchers from VU Amsterdam give advice based on their expertise.
In November 2022, the general public was introduced to ChatGPT, and within two months this modern chatbot had reached 100 million users worldwide. Since then, even more sophisticated language models have come online, and artificial intelligence can now also generate images and videos.
According to Frans Feldberg, Professor of Data Driven Business Innovation, we should not be naïve: artificial intelligence is a system technology that has the potential to fundamentally change the relationships between individuals, organisations and governments. It is a technology whose influence can be compared to the introduction of electricity. He is co-founder of Data Science Alkmaar, a "platform for innovation where business, public sector and education/research from North Holland work together to utilise the opportunities offered by big data and artificial intelligence in a responsible manner, for the benefit of regional economic growth and development". Through this platform, organisations can attend lectures and workshops on digital innovations such as artificial intelligence. More than 500 people attended a lecture on ChatGPT in June at the AFAS stadium in Alkmaar.
"Big tech companies and start-ups are developing data-driven products and services, including with artificial intelligence, undermining the business models of many organisations. I therefore advise organisations not to be naïve and not to hope that this is a trend that will blow over without affecting them," says Feldberg. "Get started with these new technologies. Investigate what they mean for your organisation and your raison d'être. And seize the opportunities that data and artificial intelligence offer, but in a responsible way."
"Don't be naïve and seize the opportunities offered by data and AI in a responsible manner"
Ethical Issues
It is tempting for organisations to unleash algorithms on large amounts of data in order to, for example, gain a better understanding of what customers want or of how business processes can be set up more efficiently. But according to organisational scientist Christine Moser, these algorithms are unsuitable for tackling moral, ethical and social issues.
As an example, she mentions the reports from September that showed that the Dutch Healthcare Authority (NZa) collects privacy-sensitive information from mental health patients in order to feed an algorithm that is supposed to predict the demand for care. "At first glance, this seems to be a planning issue. Yet very privacy-sensitive data is being used here in an unethical way," says Moser.
According to Moser, organisations are allowed to use algorithms, but should not trust them blindly. Yet, she says, they do so too often. In her research, she finds three main reasons for this. Moser: "To begin with, it is easy for organisations to express things in numbers, in the spirit of 'to measure is to know'. Algorithms seem to be a good solution because they handle numbers well. But not everything can be expressed in numbers. A question such as 'On a scale of 1 to 10, what does fear feel like?' clearly cannot capture the full experience. Some things you simply can't measure."
"Furthermore, an algorithm does not care in which country it is used; algorithms are agnostic to culture. This makes it easy for organisations to roll out the same algorithm everywhere. But for people, the environment does matter," Moser continues.
A third explanation for blind trust is that the results of algorithms, such as scores or percentages, are powerful and convincing. Moser: "This may sound a bit strange, because we humans often think that we are in charge of technology. But when we are confronted with the outcome of such an advanced calculation model, it is very difficult not to go along with the logic of the number."
Developments in the field of artificial intelligence seem unstoppable, but according to Moser, organisations have a duty to use this new technology responsibly. "Algorithms are unsuitable for tackling moral, ethical, and social issues," Moser says.
"It is important that organisations know how to instill sufficient, but not excessive, trust in artificial intelligence among employees"
Medical AI
Artificial intelligence also offers many opportunities in the medical world, for instance to diagnose patients faster and more accurately. But to make the most of those opportunities, healthcare organisations will have to make changes in healthcare processes, says organisational scientist Mohammad Rezazade Mehrizi.
For the past five years, he has been researching the use of artificial intelligence in radiology. "If artificial intelligence is used in diagnoses, it may change the content and order of actions doctors have to perform. The hospital has to be set up for that. In addition, it is important that organisations know how to instill sufficient, but not excessive, trust in artificial intelligence among employees," says Rezazade Mehrizi.
A part of the research was recently published in Nature Scientific Reports. "In an experiment, we presented radiologists with a mixture of AI recommendations, half of which were deliberately incorrect. The result was that the radiologists followed the incorrect suggestions about as often as the correct ones. So they had too much confidence in the algorithm."
The researchers also looked at two possible ways to control trust: offering additional information with the algorithm's suggestion and reminding the radiologists to be critical of artificial intelligence. "We randomly divided the radiologists in the experiment into different groups. Some groups were given more information than others. Also, some groups were shown a positive video about artificial intelligence, while others were shown a video highlighting the dangers."
The experiment showed that neither method had a significant effect on radiologists' trust. All groups followed the incorrect suggestions about equally often. Rezazade Mehrizi and his colleagues therefore continue to research which methods do work to temper trust.
Rezazade Mehrizi is also bringing together medical experts, developers of new technology and organisational scientists in a Learning Lab. This collaboration between SBE and Leiden University Medical Center allows them to experiment with new technologies in a safe environment, yielding many useful insights. Rezazade Mehrizi: "As organisational scientists, we help think about how these new technologies can best be implemented in organisations."