Research on the impact of explainability on users' acceptance of AI for knowledge creation

Hu Baoliang, Wang Jiawen, Yan Shuai

Science Research Management ›› 2026, Vol. 47 ›› Issue (3): 117-127. DOI: 10.19571/j.cnki.1000-2995.2026.03.012  CSTR: 32148.14.kygl.2026.03.012


Abstract

The black-box problem of artificial intelligence (AI) makes users reluctant to accept AI for knowledge creation, and explainable AI is one of the most important approaches to solving it. However, existing literature has rarely explored how the explainability of AI affects users' acceptance of AI for knowledge creation. This study therefore examined that question, including the path mechanism through which explainability affects users' acceptance of AI for knowledge creation and the moderating effect of user characteristics on that path. We proposed a set of theoretical hypotheses and tested them with structural equation modeling and hierarchical regression analysis on data from 425 questionnaires. The results showed that three dimensions of explainability, i.e., completeness, format, and currency, influence users' acceptance of AI for knowledge creation, and that this influence is indirect, with perceived usefulness and perceived ease of use playing mediating roles. The results also showed that the influence of explainability on users' acceptance is moderated by user characteristics such as education level, usage experience, and position. This study contributes to the theories of AI knowledge creation and AI explainability by providing an explainability-based user acceptance model, and it offers insights for enterprises seeking to make proper use of AI explainability and promote AI-assisted knowledge creation.
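The mediation and moderation logic described above can be illustrated with a minimal sketch on synthetic data. This is not the study's data or model: variable names, effect sizes, and the use of plain OLS in place of the full structural equation model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 425  # matches the study's sample size; the data below are synthetic

# Hypothesized paths (illustrative effect sizes, not the paper's estimates):
# explainability (X) -> perceived usefulness (M) -> acceptance (Y)
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Least-squares fit with an intercept; returns [b0, b1, ...]."""
    A = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Hierarchical regression, step 1: total effect of X on Y.
b_total = ols(Y, X)[1]
# Step 2: add the mediator; X's direct effect should shrink (mediation).
b_direct = ols(Y, X, M)[1]

# Moderation: a user characteristic W (e.g. usage experience) is tested by
# adding an X*W interaction term; a nonzero interaction coefficient means
# the effect of explainability depends on W.
W = rng.normal(size=n)
Y2 = 0.3 * X + 0.2 * W + 0.4 * X * W + rng.normal(scale=0.5, size=n)
b_inter = ols(Y2, X, W, X * W)[3]

print(f"total effect of X:  {b_total:.2f}")
print(f"direct effect of X: {b_direct:.2f} (shrinks -> mediation)")
print(f"X*W interaction:    {b_inter:.2f} (nonzero -> moderation)")
```

The shrinking of the direct effect once the mediator enters the model is the signature of mediation; the significant interaction term is the signature of moderation by a user characteristic.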

Key words

artificial intelligence / explainability / knowledge creation / user acceptance

Cite this article

Hu Baoliang, Wang Jiawen, Yan Shuai. Research on the impact of explainability on users' acceptance of AI for knowledge creation[J]. Science Research Management, 2026, 47(3): 117-127. https://doi.org/10.19571/j.cnki.1000-2995.2026.03.012
