Human-AI Decision Dynamics: How Risk Propensity and Trust Impact Choices Through Decision Fatigue, Conditional on AI Understanding

Authors

  • Sheriff Y. Ahmed, Department of Management, School of Business, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia.
  • Jamshid Pardaev, Associate Professor, Department of Finance and Tourism, Termez University of Economics and Service, Uzbekistan.

DOI:

https://doi.org/10.31181/dmame8220251484

Keywords:

Trust in AI, Risk Propensity, Decision Fatigue, AI Knowledge, Decision Making

Abstract

This research investigates how trust in AI, risk propensity, decision fatigue, and knowledge of AI interact to shape human-AI decision-making in organizational settings. With AI systems now central to decision-making, it is vital to understand the psychological and cognitive underpinnings of their adoption and performance. This study examines these interplays and shows how the variables combine to determine decision outcomes. A quantitative research design was used, gathering data from 244 employees across a range of organizations through structured questionnaires built on previously validated measures. The data were analyzed in ADANCO, where Structural Equation Modeling (SEM) was applied to test the hypothesized associations among the variables. The findings supported all six hypothesized paths: trust in AI and risk propensity positively affected decision making, while decision fatigue affected it negatively. Decision fatigue mediated, and AI understanding moderated, several of these paths, affirming their central roles in decision dynamics. The model provided strong explanatory power for AI-integrated decision contexts. Theoretically, the research contributes by synthesizing psychological concepts with AI-interaction scholarship. Practically, it offers tactical guidance for managers designing AI decision systems that fit human cognitive traits and behavioral inclinations.
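The moderated-mediation structure described above can be sketched numerically. The following is an illustrative example only, not the authors' code or data: it simulates a model in which trust in AI and risk propensity influence decision making through decision fatigue, with AI understanding moderating the trust path, and recovers the paths with ordinary least squares. All variable names and coefficient values are hypothetical; the published study estimated its model with SEM in ADANCO rather than piecewise regression.

```python
# Illustrative sketch (simulated data, hypothetical coefficients) of a
# moderated-mediation structure like the one tested in the paper:
# trust in AI and risk propensity -> decision fatigue -> decision making,
# with AI understanding moderating the trust path.
import numpy as np

rng = np.random.default_rng(0)
n = 244  # matches the study's sample size

# Hypothetical standardized predictors.
trust = rng.normal(size=n)   # trust in AI
risk = rng.normal(size=n)    # risk propensity
know = rng.normal(size=n)    # AI understanding (moderator)

# Simulated structural relations (signs mirror the reported findings).
fatigue = -0.4 * trust - 0.2 * risk + rng.normal(scale=0.5, size=n)
decision = (0.5 * trust + 0.3 * risk - 0.4 * fatigue
            + 0.2 * trust * know + rng.normal(scale=0.5, size=n))

def ols(y, *xs):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: predictors -> mediator (decision fatigue).
a = ols(fatigue, trust, risk)
# Paths b and c': mediator and predictors -> outcome, plus the
# trust x AI-understanding interaction for the moderation test.
b = ols(decision, trust, risk, fatigue, trust * know)

# Indirect (mediated) effect of trust on decision making.
indirect_trust = a[1] * b[3]
print(f"indirect effect of trust via fatigue: {indirect_trust:.2f}")
```

With the signs chosen here, trust lowers fatigue and fatigue lowers decision quality, so the indirect effect of trust through fatigue comes out positive, the pattern consistent with fatigue acting as a mediator in the abstract's account.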


References

[1] Abduljaber, M. F. (2024). Perceived influence of artificial intelligence on educational leadership's decision-making, teaching, and learning outcomes: A transcendental phenomenological study. https://digitalcommons.liberty.edu/doctoral/5714/

[2] Alamäki, A., Khan, U. A., Kauttonen, J., & Schlögl, S. (2024). An Experiment of AI-Based Assessment: Perspectives of Learning Preferences, Benefits, Intention, Technology Affinity, and Trust. Education Sciences, 14(12), 1386. https://doi.org/10.3390/educsci14121386

[3] Bostrom, A., Demuth, J. L., Wirz, C. D., Cains, M. G., Schumacher, A., Madlambayan, D., Bansal, A. S., Bearth, A., Chase, R., & Crosman, K. M. (2024). Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences. Risk Analysis, 44(6), 1498-1513. https://doi.org/10.1111/risa.14245

[4] Campion, J. R., O'Connor, D. B., & Lahiff, C. (2024). Human-artificial intelligence interaction in gastrointestinal endoscopy. World Journal of Gastrointestinal Endoscopy, 16(3), 126. https://doi.org/10.4253/wjge.v16.i3.126

[5] Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727-1739. https://doi.org/10.1080/10447318.2022.2050543

[6] Duan, W., Zhou, S., Scalia, M. J., Yin, X., Weng, N., Zhang, R., Freeman, G., McNeese, N., Gorman, J., & Tolston, M. (2024). Understanding the evolvement of trust over time within Human-AI teams. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1-31. https://doi.org/10.1145/3687060

[7] Ejjami, R., & Boussalham, K. (2024). Resilient supply chains in Industry 5.0: Leveraging AI for predictive maintenance and risk mitigation. International Journal For Multidisciplinary Research, 6(4), 25116. https://jngr5.com/public/blog/Resilient%20Supply%20Chains.pdf

[8] Fahnenstich, H., Rieger, T., & Roesler, E. (2024). Trusting under risk–comparing human to AI decision support agents. Computers in Human Behavior, 153, 108107. https://doi.org/10.1016/j.chb.2023.108107

[9] Hickman Jr, R. L., Pignatiello, G. A., & Tahir, S. (2018). Evaluation of the decisional fatigue scale among surrogate decision makers of the critically ill. Western Journal of Nursing Research, 40(2), 191-208. https://doi.org/10.1177/0193945917723828

[10] Hong, S. J. (2025). What drives AI-based risk information-seeking intent? Insufficiency of risk information versus (un)certainty of AI chatbots. Computers in Human Behavior, 162, 108460. https://doi.org/10.1016/j.chb.2024.108460

[11] Hu, M., Zhang, G., Chong, L., Cagan, J., & Goucher-Lambert, K. (2025). How being outvoted by AI teammates impacts human-AI collaboration. International Journal of Human–Computer Interaction, 41(7), 4049-4066. https://doi.org/10.1080/10447318.2024.2345980

[12] Kaplan, A. D., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2023). Trust in artificial intelligence: Meta-analytic findings. Human Factors, 65(2), 337-359. https://doi.org/10.1177/00187208211013988

[13] Kerstan, S., Bienefeld, N., & Grote, G. (2024). Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Analysis, 44(4), 939-957. https://doi.org/10.1111/risa.14216

[14] Kilian, K. A. (2025). Beyond accidents and misuse: Decoding the structural risk dynamics of artificial intelligence. AI & SOCIETY, 1-20. https://doi.org/10.1007/s00146-025-02419-2

[15] Kim, D., Song, Y., Kim, S., Lee, S., Wu, Y., Shin, J., & Lee, D. (2023). How should the results of artificial intelligence be explained to users? Research on consumer preferences in user-centered explainable artificial intelligence. Technological Forecasting and Social Change, 188, 122343. https://doi.org/10.1016/j.techfore.2023.122343

[16] Kim, T., & Peng, W. (2024). Do we want AI judges? The acceptance of AI judges’ judicial decision-making on moral foundations. AI & SOCIETY, 1-14. https://doi.org/10.1007/s00146-024-02121-9

[17] Koutsikouri, D., Hylving, L., Bornemark, J., & Lindberg, S. (2024). Human judgment in the age of automated decision-making systems. In Research Handbook on Artificial Intelligence and Decision Making in Organizations (pp. 144-159). Edward Elgar Publishing. https://doi.org/10.4337/9781803926216.00017

[18] Leoni, L., Gueli, G., Ardolino, M., Panizzon, M., & Gupta, S. (2024). AI-empowered KM processes for decision-making: empirical evidence from worldwide organisations. Journal of Knowledge Management, 28(11), 320-347. https://doi.org/10.1108/JKM-03-2024-0262

[19] Lester, C., Rowell, B., Zheng, Y., Marshall, V., Kim, J. Y., Chen, Q., Kontar, R., & Yang, X. J. (2025). Effect of Uncertainty-Aware AI Models on Pharmacists’ Reaction Time and Decision-Making in a Web-Based Mock Medication Verification Task: Randomized Controlled Trial. JMIR Medical Informatics, 13(1), e64902. https://doi.org/10.2196/64902

[20] Lewandowska-Tomaszczyk, B., & Sousa, S. (2025). AI Trustworthiness and Persuasive Effects in Users’ Judgment. In Cultures, Narratives, and Concepts (pp. 281-296). Springer. https://doi.org/10.1007/978-3-031-86158-1_16

[21] Liang, J., Zhu, Y., Wu, J., & Chen, C. (2025). “When I Have the Advantage, I Prefer AI!” The Influence of an Applicant’s Relative Advantage on the Preference for Artificial Intelligence Decision-making. Journal of Business and Psychology, 1-21. https://doi.org/10.1007/s10869-025-10012-z

[22] Liu, Z., Lin, Q., Tu, S., & Xu, X. (2025). When robot knocks, knowledge locks: how and when does AI awareness affect employee knowledge hiding? Frontiers in Psychology, 16, 1627999. https://doi.org/10.3389/fpsyg.2025.1627999

[23] Luo, Y., Li, X., & Ye, Q. (2023). The impact of privacy calculus and trust on user information participation behavior in AI-based medical consultation-the moderating role of gender. Journal of Electronic Commerce Research, 24(1), 48-67. http://ojs.jecr.org/jecr/sites/default/files/2023vol24no1_Paper4.pdf

[24] Martin, H., James, J., & Chadee, A. (2025). Exploring Large Language Model AI tools in Construction Project Risk Assessment: Chat GPT Limitations in Risk Identification, Mitigation Strategies, and User Experience. Journal of Construction Engineering and Management, 151(9), 04025119. https://doi.org/10.1061/JCEMD4.COENG-16658

[25] Meertens, R. M., & Lion, R. (2008). Measuring an individual's tendency to take risks: The risk propensity scale. Journal of Applied Social Psychology, 38(6), 1506-1520. https://doi.org/10.1111/j.1559-1816.2008.00357.x

[26] Mehrotra, S., Jorge, C. C., Jonker, C. M., & Tielman, M. L. (2024). Integrity-based explanations for fostering appropriate trust in AI agents. ACM Transactions on Interactive Intelligent Systems, 14(1), 1-36. https://doi.org/10.1145/3610578

[27] Mei, K. X., Pang, R. Y., Lyford, A., Wang, L. L., & Reinecke, K. (2025). Passing the Buck to AI: How Individuals' Decision-Making Patterns Affect Reliance on AI. arXiv preprint arXiv:2505.01537. https://arxiv.org/abs/2505.01537

[28] Mustikasari, A., Hurriyati, R., Dirgantari, P. D., Sultan, M. A., & Sugiana, N. S. S. (2025). The Role of Artificial Intelligence in Brand Experience: Shaping Consumer Behavior and Driving Repurchase Decisions. International Journal of Advanced Computer Science & Applications, 16(4). https://repository.lpkia.ac.id/id/eprint/59/1/Dokumen%20publish%20Loa%20artikel%20dan%20reviewed.pdf

[29] Passalacqua, M., Pellerin, R., Magnani, F., Doyon-Poulin, P., Del-Aguila, L., Boasen, J., & Léger, P.-M. (2025). Human-centred AI in industry 5.0: a systematic review. International Journal of Production Research, 63(7), 2638-2669. https://doi.org/10.1080/00207543.2024.2406021

[30] Sun, L., Tang, Y., & Ma, X. (2025). It just would not work for me: perceived preference heterogeneity and consumer response to AI-driven product recommendations. European Journal of Marketing, 59(5), 1426-1452. https://doi.org/10.1108/EJM-02-2023-0082

[31] Tamò‐Larrieux, A., Guitton, C., Mayer, S., & Lutz, C. (2024). Regulating for trust: Can law establish trust in artificial intelligence? Regulation & Governance, 18(3), 780-801. https://doi.org/10.1111/rego.12568

[32] Vanneste, B. S., & Puranam, P. (2024). Artificial intelligence, trust, and perceptions of agency. Academy of Management Review. https://doi.org/10.5465/amr.2022.0041

[33] Wangzhou, K., Khan, M., Hussain, S., Ishfaq, M., & Farooqi, R. (2021). Effect of regret aversion and information cascade on investment decisions in the real estate sector: The mediating role of risk perception and the moderating effect of financial literacy. Frontiers in Psychology, 12, 736753. https://doi.org/10.3389/fpsyg.2021.736753

[34] Wong, L.-W., Tan, G. W.-H., Ooi, K.-B., & Dwivedi, Y. (2024). The role of institutional and self in the formation of trust in artificial intelligence technologies. Internet Research, 34(2), 343-370. https://doi.org/10.1108/INTR-07-2021-0446

[35] Yin, J., Ngiam, K. Y., Tan, S. S.-L., & Teo, H. H. (2025). Designing AI-based work processes: How the timing of AI advice affects diagnostic decision making. Management Science. https://doi.org/10.1287/mnsc.2022.01454

[36] Zafar, A. (2024). Balancing the scale: navigating ethical and practical challenges of artificial intelligence (AI) integration in legal practices. Discover Artificial Intelligence, 4(1), 27. https://doi.org/10.1007/s44163-024-00121-8

[37] Zhang, K., Ye, G., Xiang, Q., & Chang, Y. (2025). Influencing mechanism of optimism bias on construction worker’s unsafe behavior: the role of risk perception and risk propensity. Engineering, Construction and Architectural Management. https://doi.org/10.1108/ECAM-07-2023-0756

[38] Zhao, L., Wu, X., & Luo, H. (2022). Developing AI literacy for primary and middle school teachers in China: Based on a structural equation modeling analysis. Sustainability, 14(21), 14549. https://doi.org/10.3390/su142114549

Published

2025-08-10

How to Cite

Sheriff Y. Ahmed, & Jamshid Pardaev. (2025). Human-AI Decision Dynamics: How Risk Propensity and Trust Impact Choices Through Decision Fatigue, Conditional on AI Understanding. Decision Making: Applications in Management and Engineering, 8(2), 96–113. https://doi.org/10.31181/dmame8220251484