The Influence of Neural Computing Engineering and AI on Social Media User Behavior and Decision Making

Authors

  • Muna Nama Thweny, Scientific Research Commission, Baghdad, Iraq
  • Hamid Sh. Aldulami, Scientific Research Commission, Baghdad, Iraq
  • Laith M Fawzi, Scientific Research Commission, Baghdad, Iraq
  • Ammar Abdullatif Hadi, Scientific Research Commission, Baghdad, Iraq
  • Zeyad Ibrahim Bakri, Scientific Research Commission, Baghdad, Iraq

DOI:

https://doi.org/10.31181/dmame8220251576

Keywords:

Decision Making; Social Media; Perceived Algorithm Transparency; Decision Influence; Content Personalization

Abstract

The objective of this study was to investigate the impact of perceived content personalization, perceived algorithm transparency, and perceived ethical concerns on user decision-making behaviour on social media (SM). For this purpose, quantitative data were collected from primary sources and analysed in RStudio. The population of this study comprised SM users from the USA, Canada, the United Kingdom, Brazil, India, Australia, Japan, and Germany. A sample of 395 respondents was analysed, and the findings indicate that the impact of perceived content personalization, perceived algorithm transparency, and perceived ethical concerns on user decision-making behaviour on SM is positive and significant. The study contributes to the body of knowledge by addressing the identified gaps. In addition, it offers practical implications for how users can limit the influence of SM on their decision-making behaviour.


References

[1] Auliya, S. F., Kudina, O., Ding, A. Y., & Van de Poel, I. (2025). AI versus AI for democracy: exploring the potential of adversarial machine learning to enhance privacy and deliberative decision-making in elections. AI and Ethics, 5(3), 2801-2813. https://doi.org/10.1007/s43681-024-00588-2

[2] Aysolmaz, B., Müller, R., & Meacham, D. (2023). The public perceptions of algorithmic decision-making systems: Results from a large-scale survey. Telematics and Informatics, 79, 101954. https://doi.org/10.1016/j.tele.2023.101954

[3] Bastian, M., Helberger, N., & Makhortykh, M. (2021). Safeguarding the Journalistic DNA: Attitudes towards the Role of Professional Values in Algorithmic News Recommender Designs. Digital Journalism, 9(6), 835-863. https://doi.org/10.1080/21670811.2021.1912622

[4] Benesty, J., Chen, J., Huang, Y., & Cohen, I. (2009). Pearson Correlation Coefficient. In I. Cohen, Y. Huang, J. Chen, & J. Benesty (Eds.), Noise Reduction in Speech Processing (pp. 1-4). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-00296-0_5

[5] Brüns, J. D., & Meißner, M. (2024). Do you create your content yourself? Using generative artificial intelligence for social media content creation diminishes perceived brand authenticity. Journal of Retailing and Consumer Services, 79, 103790. https://doi.org/10.1016/j.jretconser.2024.103790

[6] Cho, H., Lee, D., & Lee, J.-G. (2023). User acceptance on content optimization algorithms: predicting filter bubbles in conversational AI services. Universal Access in the Information Society, 22(4), 1325-1338. https://doi.org/10.1007/s10209-022-00913-8

[7] Cloarec, J., Meyer-Waarden, L., & Munzel, A. (2024). Transformative privacy calculus: Conceptualizing the personalization-privacy paradox on social media. Psychology & Marketing, 41(7), 1574-1596. https://doi.org/10.1002/mar.21998

[8] Cohen, J. (1992). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98-101. https://doi.org/10.1111/1467-8721.ep10768783

[9] Cools, H., & Diakopoulos, N. (2024). Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities. Journalism Practice, 1-19. https://doi.org/10.1080/17512786.2024.2394558

[10] Dogruel, L., Facciorusso, D., & Stark, B. (2022). ‘I’m still the master of the machine.’ Internet users’ awareness of algorithmic decision-making and their perception of its effect on their autonomy. Information, Communication & Society, 25(9), 1311-1332. https://doi.org/10.1080/1369118X.2020.1863999

[11] Dogruel, L., Masur, P., & Joeckel, S. (2022). Development and Validation of an Algorithm Literacy Scale for Internet Users. Communication Methods and Measures, 16(2), 115-133. https://doi.org/10.1080/19312458.2021.1968361

[12] Ebrahimi, S., Abdelhalim, E., Hassanein, K., & Head, M. (2025). Reducing the incidence of biased algorithmic decisions through feature importance transparency: an empirical study. European Journal of Information Systems, 34(4), 636-664. https://doi.org/10.1080/0960085X.2024.2395531

[13] Eg, R., Demirkol Tønnesen, Ö., & Tennfjord, M. K. (2023). A scoping review of personalized user experiences on social media: The interplay between algorithms and human factors. Computers in Human Behavior Reports, 9, 100253. https://doi.org/10.1016/j.chbr.2022.100253

[14] Elmimouni, H., Rüller, S., Aal, K., Skop, Y., Abokhodair, N., Wulf, V., & Tolmie, P. (2025). Exploring Algorithmic Resistance: Responses to Social Media Censorship in Activism. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1-24. https://doi.org/10.1145/3710970

[15] Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a Silver Bullet. Journal of Marketing Theory and Practice, 19(2), 139-152. https://doi.org/10.2753/MTP1069-6679190202

[16] Hermann, E. (2021). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277. https://doi.org/10.1177/14614448211022702

[17] Kim, K., & Moon, S.-I. (2021). When Algorithmic Transparency Failed: Controversies Over Algorithm-Driven Content Curation in the South Korean Digital Environment. American Behavioral Scientist, 65(6), 847-862. https://doi.org/10.1177/0002764221989783

[18] Kushwaha, A. K., Pharswan, R., Kumar, P., & Kar, A. K. (2023). How Do Users Feel When They Use Artificial Intelligence for Decision Making? A Framework for Assessing Users’ Perception. Information Systems Frontiers, 25(3), 1241-1260. https://doi.org/10.1007/s10796-022-10293-2

[19] Lambillotte, L., Bart, Y., & Poncin, I. (2022). When Does Information Transparency Reduce Downside of Personalization? Role of Need for Cognition and Perceived Control. Journal of Interactive Marketing, 57(3), 393-420. https://doi.org/10.1177/10949968221095557

[20] Liao, M., & Sundar, S. S. (2022). When E-Commerce Personalization Systems Show and Tell: Investigating the Relative Persuasive Appeal of Content-Based versus Collaborative Filtering. Journal of Advertising, 51(2), 256-267. https://doi.org/10.1080/00913367.2021.1887013

[21] Martin, K., & Waldman, A. (2023). Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions. Journal of Business Ethics, 183(3), 653-670. https://doi.org/10.1007/s10551-021-05032-7

[22] Mogaji, E., & Jain, V. (2024). How generative AI is (will) change consumer behaviour: Postulating the potential impact and implications for research, practice, and policy. Journal of Consumer Behaviour, 23(5), 2379-2389. https://doi.org/10.1002/cb.2345

[23] Murad, M., Othman, S. B., & Kamarudin, M. A. I. B. (2024). Three stages of entrepreneurial university support and students’ entrepreneurial behavior: A statistical analysis using R Studio. Journal of Education for Business, 99(6), 400-407. https://doi.org/10.1080/08832323.2024.2417292

[24] Ozanne, M., Bhandari, A., Bazarova, N. N., & DiFranzo, D. (2022). Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms. Big Data & Society, 9(2), 20539517221115666. https://doi.org/10.1177/20539517221115666

[25] Perez Vallejos, E., Dowthwaite, L., Creswick, H., Portillo, V., Koene, A., Jirotka, M., McCarthy, A., & McAuley, D. (2021). The impact of algorithmic decision-making processes on young people’s well-being. Health Informatics Journal, 27(1), 1460458220972750. https://doi.org/10.1177/1460458220972750

[26] Saura, J. R. (2024). Algorithms in Digital Marketing: Does Smart Personalization Promote a Privacy Paradox? FIIB Business Review, 13(5), 499-502. https://doi.org/10.1177/23197145241276898

[27] Scalvini, M. (2023). Making Sense of Responsibility: A Semio-Ethic Perspective on TikTok’s Algorithmic Pluralism. Social Media + Society, 9(2), 20563051231180625. https://doi.org/10.1177/20563051231180625

[28] Shulner-Tal, A., Kuflik, T., & Kliger, D. (2023). Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions. International Journal of Human–Computer Interaction, 39(7), 1455-1482. https://doi.org/10.1080/10447318.2022.2095705

[29] Silva, D. E., Chen, C., & Zhu, Y. (2022). Facets of algorithmic literacy: Information, experience, and individual factors predict attitudes toward algorithmic systems. New Media & Society, 26(5), 2992-3017. https://doi.org/10.1177/14614448221098042

[30] Starke, C., Baleis, J., Keller, B., & Marcinkowski, F. (2022). Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data & Society, 9(2), 20539517221115189. https://doi.org/10.1177/20539517221115189

[31] Ernst, J. (2025). Understanding algorithmic recommendations. A qualitative study on children’s algorithm literacy in Switzerland. Information, Communication & Society, 28(11), 1945-1961. https://doi.org/10.1080/1369118X.2024.2382224

[32] Saurwein, F., Brantner, C., & Möck, L. (2025). Responsibility networks in media discourses on automation: A comparative analysis of social media algorithms and social companions. New Media & Society, 27(3), 1752-1773. https://doi.org/10.1177/14614448231203310

[33] Voorveld, H. A. M., Meppelink, C. S., & Boerman, S. C. (2024). Consumers’ persuasion knowledge of algorithms in social media advertising: identifying consumer groups based on awareness, appropriateness, and coping ability. International Journal of Advertising, 43(6), 960-986. https://doi.org/10.1080/02650487.2023.2264045

[34] Wang, S. (2023). Factors related to user perceptions of artificial intelligence (AI)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971

[35] Wang, S., Zhang, X., Wang, Y., & Ricci, F. (2024). Trustworthy recommender systems. ACM Transactions on Intelligent Systems and Technology, 15(4), 1-20. https://doi.org/10.1145/3627826

[36] Wu, W., Huang, Y., & Qian, L. (2024). Social trust and algorithmic equity: The societal perspectives of users' intention to interact with algorithm recommendation systems. Decision Support Systems, 178, 114115. https://doi.org/10.1016/j.dss.2023.114115

[37] Yu, L., & Li, Y. (2022). Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behavioral Sciences, 12(5). https://doi.org/10.3390/bs12050127

[38] Zarouali, B., Boerman, S. C., & de Vreese, C. H. (2021). Is this recommended by an algorithm? The development and validation of the algorithmic media content awareness scale (AMCA-scale). Telematics and Informatics, 62, 101607. https://doi.org/10.1016/j.tele.2021.101607

[39] Zhang, C., & Zhang, H. (2025). The impact of generative AI on management innovation. Journal of Industrial Information Integration, 44, 100767. https://doi.org/10.1016/j.jii.2024.100767

Published

2025-12-01

How to Cite

Muna Nama Thweny, Hamid Sh. Aldulami, Laith M Fawzi, Ammar Abdullatif Hadi, & Zeyad Ibrahim Bakri. (2025). The Influence of Neural Computing Engineering and AI on Social Media User Behavior and Decision Making. Decision Making: Applications in Management and Engineering, 8(2), 600–612. https://doi.org/10.31181/dmame8220251576