A new collaborative study by KIN researchers Mohammad H. Rezazade Mehrizi, Ferdinand Mol, and Marcel Peter, together with medical professionals Daniel Pinto Dos Santos, Erik Ranschaert, Ramin Shahidi, Mansoor Fatehi, and Thomas Dratsch, investigates the various ways in which medical experts interact with AI and ultimately make their own decisions. The study, published in Nature Scientific Reports, examines how correct and incorrect algorithmic suggestions affect radiologists' diagnostic performance, and whether measures such as providing additional information to explain AI outcomes and fostering critical attitudes toward AI can alter this effect.
Two quasi-experimental studies explored how these two factors can affect the decision-making process. In the first experiment, radiologists received different explanatory inputs alongside the AI suggestions, such as a heatmap or numerical attributes. In the second experiment, radiologists were primed with different attitudes toward AI by watching videos that highlighted positive or negative facts about AI in medical settings.
After analyzing 2,760 decisions made by 92 radiologists examining 15 pairs of mammography images, the researchers concluded that radiologists followed both correct and incorrect AI suggestions, even when the suggestions were supplemented with explanatory information and the radiologists had received interventions designed to evoke a critical attitude.
Ideally, professionals should engage in “reflective practices” when using AI, especially in the medical context, where decisions are high stakes and carry strict legal liabilities.
Mehrizi notes that we must be critical — and less wishful — about the real effect of common measures such as offering explanation inputs or inducing critical attitudes, which are often assumed to trigger reflective engagement.