How does Artificial Intelligence (AI) actually learn?
“You can compare AI to a child learning through repeated exposure. A child learns by touching or seeing things. At a certain point, they no longer have to consciously think about the question: ‘is this a cat or a dog?’ That knowledge comes from experience. AI works in a similar way, though unlike a child, it lacks understanding or intent. During the training phase, AI is fed a large number of examples, after which it eventually learns to distinguish between the different things it observes. AI then offers a statistical way to interpret new data.
“Because AI is trained on a dataset, its strength is that the rules it learns apply not only to the training data but also to new information. That’s AI’s great power, but also its weakness. The rules are based on the training data: if that data is biased or skewed, AI will adopt those biases as well.”
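To make the idea of learning from examples concrete, here is a minimal sketch using synthetic data and scikit-learn. Everything in it (the features, the numbers, the cat/dog setup) is hypothetical and chosen purely for illustration; it is not drawn from the interview itself.

```python
# A toy "training phase": the model extracts statistical rules from labelled
# examples and then applies those rules to data it has never seen before.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical examples: two numeric features per animal
# (say, weight in kg and ear length in cm), labelled 0 = cat, 1 = dog.
cats = rng.normal(loc=[4.0, 6.0], scale=0.8, size=(200, 2))
dogs = rng.normal(loc=[20.0, 10.0], scale=3.0, size=(200, 2))
X_train = np.vstack([cats, dogs])
y_train = np.array([0] * 200 + [1] * 200)

# Training: the model fits a statistical rule to the examples.
model = LogisticRegression().fit(X_train, y_train)

# New, unseen data: the learned rule is applied statistically,
# without any understanding of what a cat or a dog actually is.
new_animals = np.array([[3.5, 5.5],     # small, short ears
                        [25.0, 11.0]])  # heavy, longer ears
print(model.predict(new_animals))        # e.g. [0 1]
print(model.predict_proba(new_animals))  # probabilities, not certainties
```

The same mechanism is what makes the weakness described above possible: whatever regularities sit in the training examples, relevant or not, end up encoded in the learned rule.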
So, could you say that AI discriminates?
“Of course, AI itself doesn’t decide to discriminate. Discrimination arises from how AI systems are trained, what data is used, and how this technology is then applied in society. Imagine a company uses AI for recruitment: the system is trained on past hiring decisions, in which men were more frequently given leadership positions. AI picks up on this pattern, which then leads to discrimination in the job market.
“Another example is discrimination in healthcare as a result of an AI system being trained on historical data that’s primarily made up of male patients. As a consequence, women may be less accurately diagnosed for certain diseases.”
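The healthcare example can be illustrated with a small, entirely hypothetical simulation: a diagnostic model trained almost exclusively on one group can be measurably less accurate for the under-represented group. The biomarker, the thresholds and the 95/5 split below are invented for the sketch.

```python
# Hypothetical illustration: a model trained mostly on male patients
# is less accurate when diagnosing female patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_patients(n, threshold):
    """Synthetic patients: one biomarker; disease present above a threshold.
    The threshold is assumed (for illustration) to differ between groups."""
    biomarker = rng.normal(loc=5.0, scale=2.0, size=(n, 1))
    disease = (biomarker[:, 0] > threshold).astype(int)
    return biomarker, disease

# Assume the disease presents at a lower biomarker level in women than in men.
X_men, y_men = make_patients(2000, threshold=6.0)
X_women, y_women = make_patients(2000, threshold=4.0)

# Historical training data: roughly 95% men, 5% women.
X_train = np.vstack([X_men[:1900], X_women[:100]])
y_train = np.concatenate([y_men[:1900], y_women[:100]])
model = LogisticRegression().fit(X_train, y_train)

# Evaluate separately on held-out patients from each group.
print(f"accuracy on men:   {model.score(X_men[1900:], y_men[1900:]):.2f}")
print(f"accuracy on women: {model.score(X_women[1900:], y_women[1900:]):.2f}")
```

On data like this, the model’s decision boundary settles close to the male pattern, so a noticeable share of female patients is misdiagnosed; that is the imbalance described in the interview, made visible in numbers.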
Back to the recruitment example. An AI system could wrongly favour men over women?
“The goal of using AI in that case is to identify patterns that lead to success in a particular role. The system is optimised to predict success based on patterns in prior data, which may encode historical inequities. Think of factors like whether someone is frequently late to work.
“If the dataset contains mostly men and very few women, the system may statistically associate gender with success, even though gender is irrelevant to the job. That’s a simple example, but you can imagine how the system could draw incorrect correlations. For instance, an AI system might wrongly infer that, because women have historically been more responsible for childcare, they were more often late to work. But such a conclusion fails to consider individual nuances and context. And that’s where the danger lies. AI’s goal is not to discriminate deliberately against certain groups. But if the training data is largely based on men who have held leadership positions for centuries, then there simply isn’t enough data about women in those roles to go on. How can AI recognise something it has never seen?
“That’s why AI isn’t just a statistical or a mathematical issue – it’s also a political issue. How we choose to train AI determines how it functions. And since AI is trained by humans, it reflects our own choices and biases. That’s why AI systems are not neutral: their design and deployment reflect political decisions about whose interests are prioritised and whose voices are excluded.”
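To make the recruitment example above concrete, here is a toy sketch of how such a spurious correlation can be learned. It assumes, purely for illustration, that historical “success” labels rewarded men far more often than equally skilled women; the features, weights and proportions are all made up and exaggerated so the effect is easy to see.

```python
# Toy sketch: a model trained on biased historical hiring outcomes learns to
# associate gender with "success", even though gender is irrelevant to skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

# Features: a genuine skill score, plus gender encoded as 0 = man, 1 = woman.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Historical "promoted to leadership" label: driven by skill, but men were
# promoted far more often regardless of skill -- the historical bias.
bias = np.where(gender == 0, 1.5, -1.5)
success = (skill + bias + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, success)

# The model attaches a large negative weight to the gender feature,
# even though gender says nothing about actual ability.
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))

# Two equally skilled candidates now receive very different scores.
candidates = np.array([[1.0, 0],    # skilled man
                       [1.0, 1]])   # equally skilled woman
print(model.predict_proba(candidates)[:, 1])
```

Nothing in this code “decides” to discriminate; the negative gender weight is simply the statistically optimal way to reproduce the biased historical labels, which is exactly the point being made above.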
How can AI become more inclusive?
“The solution lies in involving the community in shaping AI. If only a few people are in control of how AI is designed, that’s problematic. Because then, a small group makes decisions for a vast number of others who are completely different from them. And everyone working with AI, even with the best of intentions, has certain biases.
“Inclusive AI development requires broader participation, especially from communities historically underrepresented in technology. If you really engage the community and involve them in decision-making, you gain a better understanding of their identity, needs and motivations. You get a clearer picture of what inclusivity means for different people.
“It’s important to consider not just how to reduce bias in AI outputs, but also where and how AI is developed and used. If AI infrastructure, datasets and training pipelines are concentrated within a limited set of regions or institutions, that raises concerns about generalisability, representational diversity and long-term robustness across different social and cultural contexts.”
Are you still optimistic about AI?
“Absolutely. Again, AI itself doesn’t have discriminatory intentions. And I’m optimistic because, as humans, we have the opportunity to make this technology more inclusive: to integrate AI into our society and train it on the cultural beliefs and values of the community.
“A major issue with AI right now is that it’s primarily driven by profit rather than by social interests. There’s just so much money involved. But I’m optimistic that as open-source ecosystems and public initiatives grow, we’ll see more democratic approaches to AI ownership and development. And that over time, the computing power for AI development will become more widely distributed, so that it can serve society as a whole.”