Research on the impact of explainability on users' acceptance of AI for knowledge creation

Hu Baoliang, Wang Jiawen, Yan Shuai

Science Research Management, 2026, Vol. 47, Issue (3): 117-127. DOI: 10.19571/j.cnki.1000-2995.2026.03.012  CSTR: 32148.14.kygl.2026.03.012



Abstract

The black-box problem of artificial intelligence (AI) is hindering users' acceptance of AI for knowledge creation, and explainable AI is regarded as one of the important solutions to this problem. However, the existing literature has rarely explored how the explainability of AI affects users' acceptance of AI for knowledge creation. Therefore, this study focused on this mechanism, including the path mechanism through which explainability affects users' acceptance of AI for knowledge creation and the moderating effect of user characteristics on this path. This paper proposed theoretical hypotheses and tested them through structural equation modeling and hierarchical regression analysis of data from 425 questionnaires. The results showed that three dimensions of explainability, i.e., completeness, format, and currency, have a positive influence on users' acceptance of AI for knowledge creation, and that the influence of explainability on users' acceptance of AI for knowledge creation is indirect, with perceived usefulness and perceived ease of use playing a mediating role. The results also showed that some of the influences of explainability on users' acceptance of AI for knowledge creation are moderated by user characteristics such as education level, usage experience, and position. This study contributes to AI knowledge creation theory and explainable AI theory by providing an explainability-based user acceptance model, and it also offers insights for enterprises seeking to correctly leverage AI explainability and promote AI knowledge creation.
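The mediation logic described above (explainability → perceived usefulness → acceptance) can be sketched with a minimal simulation. This is an illustrative example only, not the study's data or analysis code: the variable names, effect sizes, and the plain-OLS path estimation below are assumptions standing in for the paper's structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 425  # matches the study's questionnaire count; data here is simulated

# Hypothetical standardized scores (illustrative effect sizes, not the paper's)
explainability = rng.normal(size=n)                                 # predictor
usefulness = 0.5 * explainability + rng.normal(scale=0.8, size=n)   # mediator
acceptance = 0.6 * usefulness + rng.normal(scale=0.8, size=n)       # outcome

def ols_beta(X, y):
    """Return OLS coefficients, intercept first."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Path a: explainability -> perceived usefulness
a = ols_beta(explainability[:, None], usefulness)[1]
# Path b (mediator -> outcome) and c' (direct effect, controlling the mediator)
b, c_prime = ols_beta(np.column_stack([usefulness, explainability]), acceptance)[1:3]

indirect = a * b  # the mediated (indirect) effect of explainability
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

Under this setup the direct effect c' stays near zero while the indirect effect a*b is substantial, which is the pattern the abstract describes: explainability influences acceptance only through the perceived usefulness (and, analogously, perceived ease of use) channel.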


Key words

artificial intelligence / explainability / knowledge creation / user acceptance

Cite this article
Hu Baoliang, Wang Jiawen, Yan Shuai. Research on the impact of explainability on users' acceptance of AI for knowledge creation[J]. Science Research Management. 2026, 47(3): 117-127 https://doi.org/10.19571/j.cnki.1000-2995.2026.03.012
CLC number: F270.7


Funding

Key Project of the National Social Science Foundation of China, "Research on new models, new dilemmas and policy optimization of artificial intelligence promoting integrated innovation among large, small and medium-sized enterprises" (23AGL009, 2023.09-2026.08)
Zhejiang Provincial Natural Science Foundation, "Research on the formation mechanism of big data analytics capability from the perspective of human-machine knowledge orchestration" (LY24G020005, 2024.01-2026.12)
