...My colleagues in the Hybrid Intelligence Center, in Groningen and in Leiden, are working on a computational theory of mind, where we can equip a computer with the ability to reason about the different layers of knowledge that agents have about each other. They perform experiments showing that in different settings, whether competitive or collaborative, computers with a theory of mind perform better than computers without one. That’s just a very concrete example of a new research question that you suddenly have to start asking because you’re thinking about collaborative AI rather than replacement AI.
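The idea of layered reasoning can be made concrete with a toy sketch. The following Python example is purely illustrative (it is not the Center’s actual experimental setup): two agents play repeated matching pennies, a competitive game where the "matcher" wins when both moves are equal and the "mismatcher" wins when they differ. A zero-order agent only predicts the opponent’s raw behavior; a first-order agent reasons about the zero-order agent’s reasoning, and reliably beats it.

```python
import random

def zero_order_matcher(opp_history):
    """ToM-0 matcher: predicts the opponent will repeat their last
    move and plays the same move, hoping to win by matching."""
    if not opp_history:
        return random.choice(("heads", "tails"))
    return opp_history[-1]

def first_order_mismatcher(own_history):
    """ToM-1 mismatcher: models the opponent as a ToM-0 matcher.
    The matcher will copy our last move, so playing the opposite
    of our own last move guarantees a mismatch (and a win)."""
    if not own_history:
        return random.choice(("heads", "tails"))
    return "tails" if own_history[-1] == "heads" else "heads"

def play(rounds=20):
    """Repeated matching pennies: matcher wins on equal moves,
    mismatcher on unequal moves. Returns the ToM-1 win count."""
    own = []   # the mismatcher's move history, observed by the matcher
    wins = 0
    for _ in range(rounds):
        a = zero_order_matcher(own)      # reasons about behavior only
        b = first_order_mismatcher(own)  # reasons about a's reasoning
        own.append(b)
        if a != b:
            wins += 1
    return wins
```

After the first (random) round, the first-order agent wins every round, because it correctly anticipates how the zero-order agent will respond to its own history. Real hybrid-intelligence experiments use richer games and deeper recursion, but this captures why an extra layer of mutual reasoning pays off.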
...The four research challenges, we summarize them in the acronym CARE, right? The C for collaboration. The A for adaptation. The R for responsible behavior. The E for explainability. Together: CARE. Those are all things that you would expect in a collaboration. You would expect it to be collaborative, adaptive to the circumstances, to behave responsibly in a team, and you should be able to explain your behavior to the team members. I think maybe the hardest one has been the one about responsibility, because this notion of ethical behavior, of responsible behavior, of having a shared set of norms that governs the behavior of the team — making that computationally concrete has proved quite elusive.
... The world is making a lot of progress on explainable AI. For adaptive AI, reinforcement learning is a big ingredient: learning from interactions with the world. If the world changes, you learn from those changes and you learn to adapt. Collaboration is at the heart of it. I think we’re making progress on all of those, but the responsible, ethical, shared-norm behavior, I think that’s been maybe the hardest one.
... I’m also very happy that this is at the center of the AI debate in Europe. I think that’s what sets European AI apart from the debate in America, which is mostly dominated by big tech, and the debate in China, which is mostly dominated by a very centralist government. I think this notion of responsibility and responsible behavior of AI systems is core to European AI; actually, it’s a strength of European AI. Just because we found it hard is certainly not a reason to drop it. Absolutely not.
...I think all of AI should be done keeping in mind that AI is there to collaborate with people and not to replace us.
To listen to the podcast, visit the website: Frank van Harmelen: hybrid human-machine intelligence for AI