This is a lunch seminar; please register your attendance by accepting/declining your emailed invite by Friday, March 1st, at 10 AM at the latest (for catering).
ABRI Lunch Seminar: Olgerta Tona and Lisen Selander
Starting date
- 5 March 2024
Time
- 11:00 - 13:30
Location
- VU Main Building
Address
- De Boelelaan 1105
- 1081 HV Amsterdam
Organised by
- ABRI and the KIN Center for Digital Innovation
Language
- English
Biography
Olgerta Tona is an assistant professor at the Department of Applied IT, University of Gothenburg, Sweden, and a research affiliate at the Swedish Center for Digital Innovation (SCDI) and the MIT Center for Information Systems Research (MIT CISR). She received her Ph.D. in information systems from Lund University in 2017 and was awarded the Börje Langeforspriset by Svenska Informationssystem Akademin for the best Ph.D. thesis. Olgerta’s research focuses on the organizational and societal implications of data analytics, algorithmic systems, and personal data digitalization.
Title
Algo-Political Work: Challenging Injustice In Algorithmic Decision-Making Systems
Abstract
While organizations deploy algorithmic decision-making (ADM) in pursuit of greater efficiency and effectiveness, mounting evidence suggests that ADM systems have the potential to reproduce injustice against structurally disadvantaged populations. With its focus on preventive technical solutions and accountability frameworks, scholarship has paid less attention to what happens after injustice has already occurred: how such systems can be challenged and how actions to address that injustice can be responsibly initiated. Applying the theoretical lenses of political responsibility and brokering in combination to the case of an ADM system deployed in an Australian government agency, the paper introduces the notion of algo-political brokering to explain how moral agents together took it upon themselves to challenge the system and help its victims via brokering initiatives. This form of action is “algo” in that one party in the brokered relation is an algorithmic system; it is “political” in drawing citizen participation and public action into engagement with the justice- and fairness-related political questions raised by such systems; and it is “brokering” because it facilitates rectifying the connections between the ADM system and the victims of its unjust decisions. The concept has important applications for research and practice: scholars can draw on it to interpret the implications of current and future political technologies at the societal level. Meanwhile, policymakers oriented toward it can better develop and apply proactive measures and rules to govern such systems, and designers become able to “inscribe” a rectification vision in future algorithmic tools.
Biography
Lisen Selander is a professor of Information Systems with expertise in digital transformation and contemporary collective action. She is a co-founder of TechnAct, a research cluster funded by the Swedish Research Council, which examines gender and technocultural assemblages. Her research has been published in esteemed academic journals such as MIS Quarterly, Academy of Management Discoveries, the European Journal of Information Systems, and the Information Systems Journal. She served as an associate editor for MIS Quarterly from 2021 to 2024.
Title
Algorithmic Discovery Work as Collective Action
“Imagine you stand on an isolated island and see some traces of an accident on the ocean surface, you see pieces of debris floating around, but you are not sure if it is an airplane or a boat, or the severity of the accident, but you understand that you are obliged to do something, and that you need to investigate what happened. That was the sensation that I had”
Abstract
Individuals in civil society are increasingly affected by algorithmic decision-making but are often unaware that decisions targeting them are delegated to machines. Such unawareness is particularly problematic in the case of faulty decisions and institutional transgressions. How do individuals begin to suspect that they have been targeted by an algorithm, and how do they penetrate the hidden nature and opacity of these systems (their inputs, processes, and outputs)? In resource-scarce environments, such as the public sector, noticing such transgressions and tracing them to the algorithm is crucial to protecting social justice and public trust in institutions. In this manuscript, we build on the work of von Krogh (2018) on the discovery process of algorithmic decision-making and extend this theory to a non-user perspective, exploring algorithmic decision-making from the standpoint of the targets of the decisions.