How and why do future administrative decision-makers use AI-based information? Empirical findings from a non-experimental case vignette with university students
Abstract
Artificial intelligence applications have gained particular importance for public sector reform because they can process large volumes of data in a short time. Starting from an AI black-box scenario with an uncertain outcome (a so-called non-experimental case vignette), we surveyed 109 students in federal administration degree programs about their perceptions of an algorithm-supported dashboard. Based on the influencing factors AI performance, ease of use, perceived own competence, trust in AI experts, AI superiority, and data protection concerns, this study examines different intentions to use AI-generated recommendations. Following Thea Snow (2021), four strategies are distinguished: direct adoption of recommendations, reflective use of recommendations, mere acknowledgement, and deliberate disregard of the AI output.
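The abstract implies a regression-style design: six perception factors as predictors of each of the four usage intentions. The following minimal sketch illustrates what such an analysis could look like; it is an assumption for illustration only, with hypothetical variable names and simulated data, not the article's actual instrument, model, or results.

    # Minimal sketch (hypothetical names, simulated data): relating the six
    # perception factors to one of the four usage intentions via OLS.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 109  # sample size reported in the abstract

    # Stand-ins for the six survey constructs (Likert-scale scores).
    factors = ["ai_performance", "ease_of_use", "own_competence",
               "expert_trust", "ai_superiority", "privacy_concerns"]
    X = pd.DataFrame(rng.normal(3.5, 0.8, size=(n, len(factors))),
                     columns=factors)

    # Simulated outcome standing in for one usage intention, e.g. reflective
    # use of recommendations; the coefficients here are invented.
    y = (1.0 + 0.4 * X["ai_performance"] + 0.3 * X["expert_trust"]
         + rng.normal(0.0, 0.5, n))

    model = sm.OLS(y, sm.add_constant(X)).fit()
    print(model.summary())

Repeating the fit for each of the four intention scales (direct adoption, reflective use, mere acknowledgement, deliberate disregard) would yield one coefficient set per strategy, mirroring the four-way distinction drawn from Snow (2021).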
Keywords: artificial intelligence, use of AI-based information, influencing factors, types of use
Citation: Krause, Tobias (2025). Wie und warum verwenden zukünftige Verwaltungsentscheider*innen KI-basierte Informationen? Empirische Erkenntnisse auf Basis einer nicht-experimentellen Fallvignette bei Hochschulstudent*innen. dms – der moderne staat – Zeitschrift für Public Policy, Recht und Management, 18 (1-2025), online first, 1–24.
References
Alexopoulos, Charalampos V., Lachana, Zoi, Androutsopoulou, Angeliki, Diamantopoulou, Vasiliki, Charalabidis, Yannis & Loutsaris, Michalis (2019). How machine learning is changing e-government. Proceedings of the 12th International Conference on Theory and Practice of Electronic Governance, 354–363, https://doi.org/10.1145/3326365.332641
Alon-Barkat, Saar & Busuioc, Madalina (2023). Human–AI Interactions in Public Sector Decision Making: "Automation Bias" and "Selective Adherence" to Algorithmic Advice. Journal of Public Administration Research and Theory, 33 (1), 153–169, https://doi.org/10.1093/jopart/muac007
Alshahrani, Albandari, Dennehy, Denis, & Mäntymäki, Matti (2022). An attention-based view of AI assimilation in public sector organizations: The case of Saudi Arabia. Government Information Quarterly, 39 (4), 101617, https://doi.org/10.1016/j.giq.2021.101617
Anthony, Callen (2021). When knowledge work and analytical technologies collide: The practices and consequences of black boxing algorithmic technologies. Administrative Science Quarterly, 66, 1173–1212, https://doi.org/10.1177/00018392211016755
Atchley, Andrew, Barr, Hannah, O'Hear, Emily, Weger, Kristin, Mesmer, Bryan, Gholston, Sampson & Tenhundfeld, Nathan (2024). Trust in systems: identification of 17 unresolved research questions and the highlighting of inconsistencies. Theoretical Issues in Ergonomics Science, 25 (4), 391–415, https://doi.org/10.1080/1463922X.2023.2223251
Bailey, Nathan R. & Scerbo, Marc W. (2007). Automation-induced complacency for monitoring highly reliable systems: the role of task complexity, system experience, and operator trust. Theoretical Issues in Ergonomics Science, 8 (4), 321–348, https://doi.org/10.1080/14639220500535301
Benamati, John, Fuller, Mark, Serva, Mark & Baroudi, Jack (2010). Clarifying the integration of trust and TAM in E-commerce environments: Implications for systems design and management. IEEE Transactions on Engineering Management, 57 (3), 380–393, https://doi.org/10.1109/TEM.2009.2023111
Bullock, Justin (2019). Artificial Intelligence, Discretion, and Bureaucracy. The American Review of Public Administration, 49 (7), 751–761, https://doi.org/10.1177/0275074019856123
Busuioc, Madalina (2021). Accountable artificial intelligence: holding algorithms to account. Public Administration Review, 81 (5), 825–836, https://doi.org/10.1111/puar.13293
Campion, Averill, Gasco-Hernandez, Mila, Mikhaylov, Slava Jankin & Esteve, Marc (2020). Overcoming the challenges of collaboratively adopting artificial intelligence in the public sector. Social Science Computer Review, 40 (2), https://doi.org/10.1177/0894439320979953
Chandra, Yanto & Feng, Naikang (2025). Algorithms for a new season? Mapping a decade of research on the artificial intelligence-driven digital transformation of public administration. Public Management Review (in press), https://doi.org/10.1080/14719037.2025.2450680
Chen, Liwei, Hsieh, Po-An, & Rai, Arun (2022). How does intelligent system knowledge empowerment yield payoffs? Uncovering the adaptation mechanisms and contingency role of work experience. Information Systems Research, 33, 1042–1071, https://doi.org/10.1287/isre.2021.1097
Clausen, Nelly & Schäfer, Mirko (2023). Angewandte Ethik für Daten- und KI-Projekte in der öffentlichen Verwaltung. In Tobias Krause, Christian Schachtner & Basanta Thapa (Eds.), Handbuch Digitalisierung der Verwaltung (pp. 233–251). UTB/Transcript.
Coglianese, Cary & Lehr, David (2017). Regulating by robot: Administrative decision making in the machine-learning era. Georgetown Law Journal, 105 (5), 1147–1223
Compton, Mallory, Young, Matthew, Bullock, Justin & Greer, Robert (2023). Administrative Errors and Race: Can Technology Mitigate Inequitable Administrative Outcomes? Journal of Public Administration Research & Theory, 33 (3), 512–528, https://doi.org/10.1093/jopart/muac036
Cummings, Mary (2006). Automation and Accountability in Decision Support System Interface Design. The Journal of Technology Studies, 32 (1), 23–31.
Davis, Fred (1986). A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Unpublished doctoral dissertation, MIT.
Davis, Fred, Bagozzi, Richard & Warshaw, Paul (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982–1003, https://doi.org/10.1287/mnsc.35.8.982
De Boer, Noortje & Raaphorst, Nadine (2021). Automation and discretion: explaining the effect of automation on how street-level bureaucrats enforce. Public Management Review, 25 (1), 42–62, https://doi.org/10.1080/14719037.2021.1937684
Deci, Edward (1975). Intrinsic Motivation. Plenum.
Deci, Edward & Ryan, Richard (2004). Handbook of Self-Determination Research. The University of Rochester Press.
Desiere, Sam & Struyven, Ludo (2021). Using Artificial Intelligence to Classify Jobseekers: The Accuracy-Equity Trade-Off. Journal of Social Policy, 50 (2), 367–385, https://doi.org/10.1017/S0047279420000203
Desouza, Kevin, Dawson, Gregory & Chenok, Daniel (2020). Designing, developing, and deploying artificial intelligence systems: lessons from and for the public sector. Business Horizons, 63 (2), 205–213, https://doi.org/10.1016/j.bushor.2019.11.004
Dietvorst, Berkeley, Simmons, Joseph & Massey, Cade (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144 (1), 114–126, https://doi.org/10.1037/xge0000033
Dzindolet, Mary T., Pierce, Linda G., Beck, Hall P. & Dawe, Lloyd A. (2002). The Perceived Utility of Human and Automated Aids in a Visual Detection Task. Human Factors, 44(1), 79–94, https://doi.org/10.1518/0018720024494856
Eggers, William, Schatsky, David & Viechnicky, Peter (2017). AI-Augmented Government. Using cognitive technologies to redesign public sector work. Deloitte University Press.
Erkut, Burak (2020). From digital government to digital governance: Are we there yet? Sustainability, 12 (3), 860, https://doi.org/10.3390/su12030860
European Commission (2024). What factors influence perceived artificial intelligence adoption by public managers? A survey among public managers in seven EU countries. Publications Office of the European Union, https://doi.org/10.2760/0179285
Fishbein, Martin & Ajzen, Icek (1975). Belief, Attitude, Intention, and Behavior. Addison-Wesley.
Floruss, Julia & Vahlpahl, Nico (2020). Artificial Intelligence in Healthcare: Acceptance of AI-based Support Systems by Healthcare Professionals. Jönköping University.
Frey, Bruno (1997). Not Just for the Money – An Economic Theory of Personal Motivation. Edward Elgar.
Gagné, Marylene & Deci, Edward (2005). Self-determination theory and work motivation. Journal of Organizational Behavior, 26 (4), 331–362, https://doi.org/10.1002/job.322
Gartner (2021). Gartner information technology glossary. Artificial Intelligence (AI). Retrieved 15 February 2021 from: https://www.gartner.com/en/information-technology/glossary/artificial-intelligence
Ghazizadeh, Mahtab, Lee, John & Boyle, Linda (2012). Extending the Technology Acceptance Model to assess automation. Cognition, Technology and Work, 14 (1), 39–49, https://doi.org/10.1007/s10111-011-0194-3
Gong, Yiwei & Janssen, Marijn (2021). Roles and capabilities of Enterprise architecture in big data analytics technology adoption and implementation. Journal of Theoretical and Applied Electronic Commerce Research, 16 (1), 37–51, https://doi.org/10.4067/S0718-18762021000100104
Green, Samuel (1991). How Many Subjects Does It Take To Do A Regression Analysis? Multivariate Behavioral Research, 26 (3), 499–510, https://doi.org/10.1207/s15327906mbr2603_7
Green, Ben & Chen, Yiling (2019). The Principles and Limits of Algorithm-in-the-Loop Decision Making. Proceedings of the ACM on Human-Computer Interaction, 3 (CSCW), Article 50, 1–24, https://doi.org/10.1145/3359152
Grimmelikhuijsen, Stephan, de Vries, Femke & Bouwman, Robin (2024). Regulators as Guardians of Trust? The Contingent and Modest Positive Effect of Targeted Transparency on Citizen Trust in Regulated Sectors. Journal of Public Administration Research and Theory, 34 (1), 136–149, https://doi.org/10.1093/jopart/muad010
Guidotti, Riccardo, Monreale, Anna, Ruggieri, Salvatore, Turini, Franco, Giannotti, Fosca & Pedreschi, Dino (2019). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51 (5), Article 93, 1–42, https://doi.org/10.1145/3236009
Habel, Johannes, Alavi, Sascha, & Heinitz, Nicolas (2024). Effective implementation of predictive sales analytics. Journal of Marketing Research, 61, 718–741, https://doi.org/10.1177/00222437221151039
Hair, Joseph, Black, William, Babin, Barry & Anderson, Rolph (2007). Multivariate data analysis. Upper Saddle River: Pearson Prentice Hall.
Heine, Moreen, Dhungel, Anna-Katharina, Schrills, Tim & Wessel, Daniel (2023). Künstliche Intelligenz in öffentlichen Verwaltungen. Springer Gabler.
Hendriks, Friederike, Kienhues, Dorothe & Bromme, Rainer (2014). The Muenster Epistemic Trustworthiness Inventory (METI). Westfälische Wilhelms-Universität Münster, Pädagogische Psychologie, https://doi.org/10.1371/journal.pone.0139309
Keding, Christoph & Meissner, Philip (2021). Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions. Technological Forecasting and Social Change, 171, 120970, https://doi.org/10.1016/j.techfore.2021.120970
Kelly, Sage, Kaye, Sherrie-Anne & Oviedo-Trespalacios, Oscar (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics, 77, 1–33, https://doi.org/10.1016/j.tele.2022.101925
Kuhlmann, Sabine, Proeller, Isabella, Schimanke, Dieter & Ziekow, Jan (2021). Public Administration in Germany. Palgrave Macmillan.
Kuziemski, Maciej & Misuraca, Gianluca (2020). AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy, 44 (6), 101976, https://doi.org/10.1016/j.telpol.2020.101976
Lebovitz, Sarah, Lifshitz-Assaf, Hila & Levina, Natalia (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33, 126–148, https://doi.org/10.1287/orsc.2021.1549
Li, Cheng (2013). Little's test of missing completely at random. The Stata Journal, 13 (4), 795–809, https://doi.org/10.1177/1536867X1301300407
Li, Guoying (1985). Robust Regression. In Hoaglin, David C., Mosteller, Frederick & Tukey, John W. (Eds.), Exploring Data: Trends and Shapes (pp. 281–340). Wiley.
Little, Roderick (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83, 1198–1202, https://doi.org/10.1080/01621459.1988.10478722
Lyell, David & Coiera, Enrico (2017). Automation Bias and Verification Complexity: A Systematic Review, Journal of the American Medical Informatics Association, 24 (2), 423–431, https://doi.org/10.1093/jamia/ocw105
Lyons, Joseph & Stokes, Charlene (2012). Human–Human Reliance in the Context of Automation. Human Factors, 54 (1), 112–121, https://doi.org/10.1177/0018720811427034
Madan, Rohit & Ashok, Mona (2023). AI adoption and diffusion in public administration: A systematic literature review and future research agenda. Government Information Quarterly, 40 (1), 101774, https://doi.org/10.1016/j.giq.2022.101774
Madhavan, Poornima & Wiegmann, Douglas (2007). Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8, 277–301, https://doi.org/10.1080/14639220500337708
Matsuyama, Lisa, Zimmerman, Rileigh, Eaton, Casey, Weger, Kristin, Mesmer, Bryan, Tenhundfeld, Nathan, Van Bossuyt, Douglas & Semmens, Rob (2021). Determinants that influence the acceptance and adoption of mission critical autonomous systems. Proceedings of the AIAA SciTech Forum. https://doi.org/10.2514/6.2021-1156
Maxwell, Scott (2000). Sample size and multiple regression analysis. Psychological Methods, 5 (4), 434–458, https://doi.org/10.1037/1082-989X.5.4.434
Mayer, Roger, Davis, James & Schoorman, David (1995). An integrative model of organizational trust. Academy of Management Review, 20 (3), 709–734, https://doi.org/10.2307/258792
McEvily, Bill & Tortoriello, Marco (2011). Measuring trust in organisational research: Review and recommendations. Journal of Trust Research, 1 (1), 23–63, https://doi.org/10.1080/21515581.2011.552424
Mertens, Stephanie, Herberz, Mario, Hahnel, Ulf & Brosch, Tobias (2022). The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. PNAS, 119 (1), 1–10, https://doi.org/10.1073/pnas.2107346118
Mikalef, Patrick, Lemmer, Kristina, Schaefer, Cindy, Ylinen, Maija, Fjørtoft, Siw Olsen, Torvatn, Hans Yngvar, Gupta, Manjul & Niehaves, Bjoern (2022). Enabling AI capabilities in government agencies: a study of determinants for European municipalities. Government Information Quarterly, 39 (4), 101596, https://doi.org/10.1016/j.giq.2021.101596
Nadeem, Ayesha, Marjanovic, Olivera & Abedin, Babak (2022). Gender bias in AI-based decision-making systems: a systematic literature review. Australasian Journal of Information Systems, 26, 1–34, https://doi.org/10.3127/ajis.v26i0.3835
Neumann, Oliver, Guirguis, Katharina & Steiner, Reto (2024). Exploring artificial intelligence adoption in public organizations: a comparative case study, Public Management Review, 26 (1), 114–141, https://doi.org/10.1080/14719037.2022.2048685
Palmiotto, Francesca (2024). When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis. German Law Journal, 25 (2), 210–236, https://doi.org/10.1017/glj.2023.112
Parasuraman, Raja, Molloy, Robert & Singh, Indramani (1993). Performance consequences of automation-induced "complacency". The International Journal of Aviation Psychology, 3 (1), 1–23, https://doi.org/10.1207/s15327108ijap0301_1
Parasuraman, Raja & Riley, Victor (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39 (2), 230–253, https://doi.org/10.1518/001872097778543886
Phillips, Elizabeth, Zhao, Xuan, Ullman, Daniel & Malle, Bertram (2018). What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. ACM/IEEE International Conference on Human-Robot Interaction, https://doi.org/10.1145/3171221.3171268
Pinaya, Walter, Graham, Mark, Kerfoot, Eric, Tudosiu, Petru-Daniel, Dafflon, Jessica, Fernandez, Virginia, Sanchez, Pedro, Wolleb, Julia, da Costa, Pedro & Patel, Ashay (2023). Generative AI for Medical Imaging: extending the MONAI Framework. https://doi.org/10.48550/arXiv.2307.15208
Poretschkin, Maximilian, Schmitz, Anna, Akila, Maram, Adilova, Linara, Becker, Daniel, Cremers, Armin & Hecker, Dirk (2021). Leitfaden zur Gestaltung vertrauenswürdiger künstlicher Intelligenz. Fraunhofer IAIS, https://doi.org/10.24406/publica-fhg-301361
Rastogi, Charvi, Zhang, Yunfeng, Wei, Dennis, Varshney, Kush, Dhurandhar, Amit & Tomsett, Richard (2020). Deciding fast and slow: the role of cognitive biases in AI-assisted decision-making. IBM CCSW 2021, https://doi.org/10.48550/arXiv.2010.07938
Rich, Elaine (1985). Artificial Intelligence and the Humanities. Computers and the Humanities, 19, 117–122, https://doi.org/10.1007/BF02259633
Rieger, Tobias, Roesler, Eileen & Manzey, Dietrich (2022). Challenging presumed technological superiority when working with (artificial) colleagues. Scientific Reports, 12, 3768, https://doi.org/10.1038/s41598-022-07808-x
Ruschemeier, Hannah (2023). The Problems of the Automation Bias in the Public Sector – A Legal Perspective. Weizenbaum Conference Proceedings, https://ssrn.com/abstract=4521474
Russell, Stuart & Norvig, Peter (2021). Artificial Intelligence: A Modern Approach. Pearson.
Ryan, Richard & Deci, Edward (2004). An overview of self-determination theory: an organismic dialectic perspective. In Deci, Edward & Ryan, Richard (Eds.), Handbook of Self-Determination Research (pp. 3–33). University of Rochester Press.
Salvini, Pericle, Reinmund, Tyler, Hardin, Benjamin, Grieman, Keri, Ten Holter, Carolyn, Johnson, Aaron, Kunze, Lars, Winfield, Alan & Jirotka, Marina (2023). Human involvement in autonomous decision-making systems. Lessons learned from three case studies in aviation, social care and road vehicles. Frontiers in Political Science, 5, 1238461, https://doi.org/10.3389/fpos.2023.1238461
Schaefer, Cindy, Lemmer, Kristina, Samy, Kret, Ylinen, Maija, Mikalef, Patrick & Niehaves, Bjoern (2021). 'Truth or Dare?' – How Can We Influence the Adoption of Artificial Intelligence in Municipalities? Proceedings of the 54th Hawaii International Conference on System Sciences, 2347–2356, http://hdl.handle.net/10125/70899
Schepman, Astrid & Rodway, Paul (2023). The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory Validation and Associations with Personality, Corporate Distrust, and General Trust. International Journal of Human–Computer Interaction, 39 (13), 2724–2741, https://doi.org/10.1080/10447318.2022.2085400
Selten, Friso, Robeer, Marcel, & Grimmelikhuijsen, Stephan (2023). ‘Just like I thought’: Street‐level bureaucrats trust AI recommendations if they confirm their professional judgment. Public Administration Review, 83 (2), 263–278, https://doi.org/10.1111/puar.13602
Simon, Herbert (1997). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. 4th ed. The Free Press.
Snow, Thea (2021). From Satisficing to Artificing: The Evolution of Administrative Decision-Making in the Age of the Algorithm. Data & Policy, 3, e3, https://doi.org/10.1017/dap.2020.25
Sohn, Kwonsang & Kwon, Ohbyung (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47, 1–14, https://doi.org/10.1016/j.tele.2019.101324
Srinivasan, Ramya & Chander, Ajay (2021). Biases in AI Systems. Communications of the ACM, 64 (8), 44–49, https://doi.org/10.1145/3464903
Steerling, Emilie, Siira, Elin, Nilsen, Per, Svedberg, Petra & Nygren, Jens (2023). Implementing AI in healthcare – the relevance of trust: a scoping review. Frontiers in Health Services, 3, 1211150, https://doi.org/10.3389/frhs.2023.1211150
Stiehler, Steve, Fritsche, Caroline & Reutlinger, Christian (2012). Der Einsatz von Fall-Vignetten. Potential für sozialräumliche Fragestellungen. sozialraum.de, 4 (1/2012), https://www.sozialraum.de/der-einsatz-von-fall-vignetten.php
Stuck, Rachel, Tomlinson, Brianna & Walker, Bruce (2021). The Importance of Incorporating Risk into Human-Automation Trust. Theoretical Issues in Ergonomics Science, https://doi.org/10.1080/1463922X.2021.1975170
Ulbrich, Christian & Frey, Bruno (2024). Automated Democracy. Die Neuverteilung von Macht und Einfluss im Digitalen Staat. Herder Verlag.
Vaccaro, Michelle, Almaatouq, Abdullah & Malone, Thomas (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8, 2293–2303, https://doi.org/10.1038/s41562-024-02024-1
Van den Broeck, Anja, Vansteenkiste, Maarten, De Witte, Hans, Soenens, Bart & Lens, Willy (2010). Capturing autonomy, competence, and relatedness at work: Construction and initial validation of the Work-related Basic Need Satisfaction scale. Journal of Occupational and Organizational Psychology, 83 (4), 981–1002, https://doi.org/10.1348/096317909X481382
Venkatesh, Viswanath, Morris, Michael, Davis, Gordon & Davis, Fred (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27 (3), 425–478, https://doi.org/10.2307/30036540
Yeung, Karen (2017). "Hypernudge": Big data as a mode of regulation by design. Information, Communication & Society, 20 (1), 118–136, https://doi.org/10.1080/1369118X.2016.1186713