Can we trust artificial intelligence to protect our privacy? LLMs such as ChatGPT and Claude are trained on both public and private data. How can such models be built according to the principle of privacy by design? These are the questions computer security professor Marten van Dijk posed in his inaugural lecture. Formal privacy frameworks such as Differential Privacy and PAC Privacy offer partial solutions, but they have limitations. Can those limitations be described mathematically? And can training algorithms be adapted to preserve accuracy while guaranteeing privacy?
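Differential Privacy, at least, does admit a precise formulation: a randomized mechanism M is (ε, δ)-differentially private if, for any two datasets D and D′ differing in a single record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ. As a minimal sketch of how a training step can be adapted in the spirit of DP-SGD (per-example gradient clipping followed by Gaussian noise), assuming only NumPy; the function name and parameter values are illustrative, not taken from the lecture:

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm=1.0,
                  noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD-style step: clip each per-example gradient to at most
    clip_norm, average, then add Gaussian noise scaled to the clip."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is proportional to the per-step sensitivity
    # (clip_norm / batch size); turning noise_multiplier into a concrete
    # (epsilon, delta) guarantee requires a separate privacy accountant.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return -lr * (mean_grad + noise)  # parameter update to apply

# Example: three per-example gradients for a two-parameter model.
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([-1.0, 1.0])]
print(dp_sgd_update(grads, rng=np.random.default_rng(42)))
```

Each update then reveals only a bounded amount about any single training example; the injected noise is exactly the accuracy cost behind the lecture's question about privacy-preserving training.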
Besides privacy, poisoning and evasion attacks also undermine the security of AI. Defense mechanisms need to be compared rigorously, something that is often lacking. Fairness by design is equally crucial: it has been proven mathematically that not all definitions of fairness can be satisfied simultaneously, as the toy example below illustrates. How, then, can LLMs still be used fairly and responsibly in decision-making?
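To make the incompatibility concrete, consider a hypothetical classifier that selects the same fraction of people from two groups (demographic parity) while the groups have different base rates. A minimal sketch, assuming only NumPy; all numbers are invented for illustration:

```python
import numpy as np

def rates(y_true, y_pred):
    """Selection rate, true-positive rate, false-positive rate."""
    return (y_pred.mean(),
            y_pred[y_true == 1].mean(),
            y_pred[y_true == 0].mean())

# Invented toy data: two groups with different base rates.
y_true_a = np.array([1]*8 + [0]*2)   # group A: 80% truly positive
y_true_b = np.array([1]*2 + [0]*8)   # group B: 20% truly positive

# A classifier that selects the first five people in each group:
y_pred_a = np.array([1]*5 + [0]*5)
y_pred_b = np.array([1]*5 + [0]*5)

sel_a, tpr_a, fpr_a = rates(y_true_a, y_pred_a)
sel_b, tpr_b, fpr_b = rates(y_true_b, y_pred_b)

print(f"demographic parity gap: {abs(sel_a - sel_b):.2f}")  # 0.00 -> satisfied
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}")                 # 0.38 -> violated
print(f"FPR gap: {abs(fpr_a - fpr_b):.2f}")                 # 0.38 -> violated
```

With differing base rates, equal selection rates force unequal true- and false-positive rates across the groups; impossibility results of this kind (Kleinberg et al. 2016; Chouldechova 2017) show that such fairness criteria cannot in general hold at the same time.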
Ultimately, Van Dijk argues, we should strive for Trustworthy AI: systems that are accurate, secure, fair, and explainable.