The seminar will take place on Tuesday, June 24th, from 12:00 to 13:00 (HG-05A16).
As this is a lunch seminar, please confirm your attendance by accepting or declining the emailed invitation no later than Friday, June 20, at 10:00 AM, so that catering can be arranged.
Abstract
This study examines how process experts using advanced AI technologies that generate highly detailed, realistic representations can create what we term “artificial certainty”—the illusion that complex future outcomes are definitively knowable, even though they are inherently uncertain. Through a comparative study of two urban planning organizations using the same AI simulation tool, we show how this artificial certainty emerges from the way process experts create and use AI-generated representations. The findings reveal three interconnected representational practices that shape how laypeople perceive the level of certainty inherent in a representation: managing representational granularity (controlling detail and complexity), mediating representational immersion (determining stakeholder engagement), and producing representational epistemics (framing how outputs are interpreted). We find that when process experts attempt to enhance these practices by amplifying technological capabilities, stakeholders mistake representations for reality, undermining expert authority. Conversely, when they enact these practices by modulating the role that AI plays in decision-making, they are able to maintain the expert authority necessary to keep uncertainty alive. These findings reconceptualize process expertise as representational work that helps maintain useful levels of uncertainty in the face of increasing pressures toward artificial certainty. Based on these insights, we develop a critical distinction between representations of the future and representations for the future. This framework offers new ways to theorize decision-making under uncertainty as organizations begin to deploy sophisticated AI systems.