Computer scientist Inès Blin shows that giving AI a structured memory of facts and relationships can make its answers more accurate and easier to verify.
Blin’s research focused on how AI systems can better support human sense-making. People naturally understand the world by linking new events or information to what they already know and turning this into a coherent narrative (for example: what happened, why it happened, and why it matters). Today’s AI can generate fluent text, but it may also invent details or struggle to explain where its answers come from. Blin investigated how combining structured knowledge (such as knowledge graphs) with other AI methods can help systems build narratives that are more reliable, transparent, and useful across different domains. Her motivation was to design AI that can use structured memories to support explanations, generate hypotheses, and analyse debates in ways that people can trust.
Useful alternatives from AI
Her research showed that giving AI a structured memory of facts and relationships can make its answers more accurate and easier to verify. Instead of generating text directly, the systems Blin built first collect relevant information and organise it into a structured map of key entities and their connections. She tested this approach in three domains: history, social media discussions, and social science research. In the historical domain, structuring information improved the relevance of what the system retrieved and reduced factual errors. In the social media domain, it helped make complex debates easier to explore. In the social science domain, AI-generated hypotheses did not always outperform human ones, but they often added useful alternatives, showing strong potential for human–AI collaboration.
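To make the idea of a "structured map of key entities and their connections" concrete, here is a minimal sketch in Python. This is an illustration of the general knowledge-graph technique, not Blin's actual system: the entity names, facts, and the `facts_about` helper are all hypothetical, chosen only to show how facts stored as subject–relation–object triples can be retrieved and traced.

```python
# Minimal sketch of a knowledge graph as subject-relation-object triples.
# Illustrative, hypothetical facts only - not data from the thesis.
triples = [
    ("French Revolution", "started_in", "1789"),
    ("French Revolution", "took_place_in", "France"),
    ("Storming of the Bastille", "part_of", "French Revolution"),
]

def facts_about(entity):
    """Retrieve every stored fact that mentions the given entity."""
    return [t for t in triples if entity in (t[0], t[2])]

# A system answering a question can cite these explicit facts,
# making the answer verifiable rather than free-form generated text.
for subj, rel, obj in facts_about("French Revolution"):
    print(f"{subj} --{rel}--> {obj}")
```

Because every answer is assembled from explicit triples, a user can check each claim against a stored fact, which is the verifiability advantage the research points to.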
These findings matter for anyone who uses AI to understand complex topics, especially when trust and clarity are important. For everyday users, structured narrative representations can help AI explain historical events in a clearer and more accurate way, instead of producing confident but incorrect answers. For expert users, the same approach can support tasks such as summarising large public debates, like social media discussions about inequality, or helping researchers generate new ideas in social science. In practice, this could lead to tools that help users quickly navigate large amounts of information, understand the main viewpoints, and see how claims connect to evidence. These applications are realistic in the near future, because they build on existing AI systems and improve them with structured knowledge.
Collaboration with domain experts
Blin conducted her research using a mix of literature review, computer-based experiments, and user studies. First, she reviewed existing research on narratives and how they can be represented computationally. Then she developed methods to retrieve relevant information and convert it into structured knowledge representations, and tested them across several real-world use cases. She evaluated the results using quantitative measures as well as qualitative analysis. For the qualitative analyses, she ran user studies to assess how helpful the system outputs were, both for AI-generated hypotheses in social science and for the quality of answers in the historical domain. Lastly, Blin collaborated with domain experts when needed to ensure the results were meaningful and realistic in practice.
More information on the thesis