The seminar will take place on Friday, September 20th, from 12:00 to 13:00 (IN-3B58). You can find more information below.
This is a lunch seminar; for catering purposes, please indicate your availability by replying to this email by Tuesday, September 17th.
Please note that the room has been changed to IN-3B58.
Abstract
The rapid advancement of Artificial Intelligence (AI), particularly in the form of large language models, has raised concerns that humans may over-rely on AI-generated content without critical examination. While the integration of AI into knowledge work has shown benefits in terms of improved creativity, productivity, and innovation, the risk of propagating biased or inaccurate "AI hallucinations" and "botshit" necessitates a deeper understanding of the cognitive mechanisms underlying human over-reliance on AI. This research therefore addresses the question: How does the interplay of mental effort and AI output quality affect users' utility perceptions of AI, and how do these perceptions shape their reliance on AI? Two experimental studies were conducted to investigate this phenomenon in the context of using generative AI for knowledge work tasks. Contrary to initial expectations, the findings suggest that differences in participants' need for cognition play a major role in determining how participants rely on AI-generated output. These results have significant implications for managing hybrid human-AI teams and for designing AI systems that can effectively mitigate the risks of uncritical integration of AI outputs. The study contributes to both the theoretical and practical understanding of the cognitive biases and heuristics that influence human-AI interactions, moving beyond a binary view of acceptance or rejection. The findings highlight the need for continued research on the personality traits and cognitive processes underlying human-AI collaboration to ensure the reliable and ethical use of AI technologies in knowledge-intensive domains.