In November 2022, the general public was introduced to ChatGPT; within two months, the chatbot had reached 100 million users worldwide. Since then, even more sophisticated language models have come online, and artificial intelligence can now also generate images and videos.
According to Frans Feldberg, Professor of Data Driven Business Innovation, we should not be naïve: artificial intelligence is a system technology with the potential to fundamentally change the relationships between individuals, organisations and governments. Its influence, he says, can be compared to the introduction of electricity. Feldberg is co-founder of Data Science Alkmaar, a "platform for innovation where business, the public sector and education/research from North Holland work together to utilise the opportunities offered by big data and artificial intelligence in a responsible manner, for the benefit of regional economic growth and development". Through this platform, organisations can attend lectures and workshops on digital innovations such as artificial intelligence. In June, more than 500 people attended a lecture on ChatGPT at the AFAS stadium in Alkmaar.
"Big tech companies and start-ups are developing data-driven products and services, including using artificial intelligence, undermining the business models of many organisations. I therefore advise organisations not to be naïve, not to hope that it is a trend that will blow over on its own and not affect them," advises Feldberg. "Get started with these new technologies. Investigate what it means for your organisation and your raison d'être. And seize the opportunities that data and artificial intelligence offer, but in a responsible way."
Ethical Issues
It is tempting for organisations to unleash algorithms on large amounts of data to, for example, gain a better understanding of what customers want or of how business processes can be organised more efficiently. But according to organisational scientist Christine Moser, these algorithms are unsuitable for tackling moral, ethical and social issues.
As an example, she points to reports from September showing that the Dutch Healthcare Authority (NZa) collects privacy-sensitive information about mental health patients to feed an algorithm intended to predict demand for care. "At first glance, this seems to be a planning issue. Yet very privacy-sensitive data is being used here in an unethical way," says Moser.
According to Moser, organisations may use algorithms, but they should not trust them blindly. Yet, she says, they do so too often. In her research, she identifies three main reasons for this. Moser: "To begin with, it is easy for organisations to express things in numbers, in the spirit of 'to measure is to know'. Algorithms seem like a good solution because they handle numbers well. But not everything can be expressed in numbers. A question such as 'On a scale of 1 to 10, what does fear feel like?' clearly cannot capture the full experience. Some things you simply cannot measure."
"Furthermore, an algorithm does not care in which country it is used, algorithms are agnostic to culture. This makes it easy for organisations to roll out the same algorithm everywhere. But for people, the environment does matter," Moser continues.