BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Vrije Universiteit Amsterdam//NONSGML v1.0//EN
NAME:Inaugural lecture prof.dr.ir. M.E. van Dijk
METHOD:PUBLISH
BEGIN:VEVENT
DTSTART:20260226T154500
DTEND:20260226T171500
DTSTAMP:20260226T154500Z
UID:2026/inaugural-lecture-prof-dr@8F96275E-9F55-4B3F-A143-836282E12573
CREATED:20260408T222938Z
LOCATION:Hoofdgebouw\, Aula De Boelelaan 1105 1081 HV Amsterdam
SUMMARY:Inaugural lecture prof.dr.ir. M.E. van Dijk
X-ALT-DESC;FMTTYPE=text/html:<html><body><p>Can your Private Data 
 be Secured in the age of Machine Learning?</p><p>Can we trust 
 artificial intelligence to protect our privacy? LLMs like ChatGPT 
 and Claude are trained on both public and private data. How can 
 such a model be built with privacy by design? These are the 
 questions posed by Marten van Dijk\, professor of computer security 
 at Centrum Wiskunde &amp; Informatica (CWI)\, in his inaugural 
 lecture. Cryptographic methods such as Differential Privacy and PAC 
 Privacy offer partial solutions\, but they have limitations. Can 
 these limitations be described formally\, and can training 
 algorithms be adapted to maintain accuracy while guaranteeing 
 privacy?</p><p>Besides privacy\, poisoning and evasion attacks also 
 undermine the security of AI. Rigorous comparison of defense 
 mechanisms is needed\, yet it is often lacking. Fairness by design 
 is equally crucial: it has been mathematically demonstrated that 
 not all fairness definitions are mutually compatible. How can LLMs 
 still be used fairly and responsibly in decision-making?</p><p>
 Ultimately\, according to Van Dijk\, we strive for Trustworthy AI: 
 systems that are accurate\, secure\, fair\, and 
 explainable.</p></body></html>
DESCRIPTION:Can your Private Data be Secured in the age of Machine 
 Learning? Can we trust artificial intelligence to protect our 
 privacy? LLMs like ChatGPT and Claude are trained on both public 
 and private data. How can such a model be built with privacy by 
 design? These are the questions posed by Marten van Dijk\, 
 professor of computer security at Centrum Wiskunde & Informatica 
 (CWI)\, in his inaugural lecture. Cryptographic methods such as 
 Differential Privacy and PAC Privacy offer partial solutions\, but 
 they have limitations. Can these limitations be described 
 formally\, and can training algorithms be adapted to maintain 
 accuracy while guaranteeing privacy? Besides privacy\, poisoning 
 and evasion attacks also undermine the security of AI. Rigorous 
 comparison of defense mechanisms is needed\, yet it is often 
 lacking. Fairness by design is equally crucial: it has been 
 mathematically demonstrated that not all fairness definitions are 
 mutually compatible. How can LLMs still be used fairly and 
 responsibly in decision-making? Ultimately\, according to Van 
 Dijk\, we strive for Trustworthy AI: systems that are accurate\, 
 secure\, fair\, and explainable.
END:VEVENT
END:VCALENDAR
