Factors influencing international students’ adoption of generative artificial intelligence

The mediating role of perceived values and attitudes

Authors

  • Muhammad Ittefaq James Madison University, USA https://orcid.org/0000-0001-5334-7567
  • Ali Zain Arizona State University, USA
  • Rauf Arif Towson University, USA
  • Taufiq Ahmad University of Maryland, USA
  • Laeeq Khan Ohio University, USA
  • Hyunjin Seo University of Kansas, USA

DOI:

https://doi.org/10.32674/fnwdpn48

Keywords:

GenAI, international students, perceived value, TPB, VAM, TAM

Abstract

The present study examines the factors influencing international students’ intentions to use generative artificial intelligence (GenAI). Our results showed that attitude toward GenAI use, perceived ease of use, perceived usefulness, enjoyment, subjective norms, novelty, trust in technology, perceived value, and AI literacy were positively associated with intention to use GenAI. Fear of plagiarism had a negative relationship with intention to use GenAI. Our mediation analysis suggested that trust in technology, perceived ease of use, fear of plagiarism, perceived usefulness, and AI literacy indirectly influenced GenAI usage intention via attitude and perceived value, underscoring both the appeal of and concerns about GenAI in learning. This study contributed to the TPB, VAM, and TAM frameworks by incorporating fear of plagiarism, trust in technology, and AI literacy to demonstrate how cognitive, affective, and value-based factors collectively influence the adoption of GenAI technologies among international students.

Author Biographies

  • Muhammad Ittefaq, James Madison University, USA

    Muhammad Ittefaq, Ph.D., is an assistant professor in the School of Communication Studies at James Madison University. His research examines the ways in which people consume and interact with information through mainstream and social media, including how they interpret scientific messages, make decisions related to health and climate, and support policies related to science.

  • Ali Zain, Arizona State University, USA

    Ali Zain, Ph.D., is an assistant professor in strategic communication at the Walter Cronkite School of Journalism and Mass Communication at Arizona State University. His research focuses on strategic message features that drive public perception, engagement, and behavioral outcomes in a range of contexts, including health and science communication, misinformation, information processing, and societal disparities. Email: me.alizain@gmail.com

  • Rauf Arif, Towson University, USA

    Rauf Arif, Ph.D., is an Assistant Professor of Journalism in the Department of Mass Communication at Towson University, Maryland, USA. He studies artificial intelligence and society, social media and digital social movements, cross-cultural media practices, and how information and communication technologies are reshaping the dynamics of human communication. Email: rarif@towson.edu

  • Taufiq Ahmad, University of Maryland, USA

    Taufiq Ahmad, M.A., is a doctoral student in the Department of Communication at the University of Maryland. His research interests include AI, digital media, strategic communication, and crisis communication. Email: taufiq@umd.edu

  • Laeeq Khan, Ohio University, USA

    Laeeq Khan, Ph.D., is the Director of the Social Media Analytics Research Team (SMART) Lab and an Associate Professor in the Scripps College of Communication at Ohio University. Dr. Khan’s expertise is in social media and analytics, with a focus on analyzing big data and social media for business, health communication, politics, and a range of other areas where social media are employed. Email: khanm1@ohio.edu

  • Hyunjin Seo, University of Kansas, USA

    Hyunjin Seo, Ph.D., is the Oscar Stauffer Professor and Associate Dean for Research and Faculty Development in the William Allen White School of Journalism and Mass Communications at the University of Kansas, as well as the founding director of the KU Center for Digital Inclusion. Her research examines how social collaborative networks facilitated by digital communication technologies affect social change at local, national, and international levels. Email: hseo@ku.edu

References

Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10–22. https://doi.org/10.1186/s41239-024-00444-7

Abdaljaleel, M., Barakat, M., Alsanafi, M., Salim, N. A., Abazid, H., Malaeb, D., Mohammed, A. H., Hassan, B. A. R., Wayyes, A. M., Farhan, S. S., Khatib, S. E., Rahal, M., Sahban, A., Abdelaziz, D. H., Mansour, N. O., AlZayer, R., Khalil, R., Fekih-Romdhane, F., Hallit, R., … Sallam, M. (2024). A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Scientific Reports, 14(1), 1983–2014. https://doi.org/10.1038/s41598-024-52549-8

Adapa, S., Fazal-e-Hasan, S. M., Makam, S. B., Azeem, M. M., & Mortimer, G. (2020). Examining the antecedents and consequences of perceived shopping value through smart retail technology. Journal of Retailing and Consumer Services, 52(January), 1-11. https://doi.org/10.1016/j.jretconser.2019.101901

Agarwal, R., & Karahanna, E. (2000). Time flies when you’re having fun: Cognitive absorption and beliefs about information technology usage. MIS Quarterly, 24(4), 665-694. https://doi.org/10.2307/3250951

Ahmetoglu, S., Cob, Z. C., & Ali, N. (2023). Internet of things adoption in the manufacturing sector: A conceptual model from a multitheoretical perspective. Applied Sciences, 13(6), 1-21. https://doi.org/10.3390/app13063856

Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In Action control: From cognition to behavior (pp. 11-39). Berlin, Heidelberg: Springer Berlin Heidelberg.

Ajzen, I., & Fishbein, M. (1988). Theory of reasoned action-theory of planned behavior. University of South Florida, 2007, 67-98.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T

Al-Qaysi, N., Al-Emran, M., Al-Sharafi, M. A., Iranmanesh, M., Ahmad, A., & Mahmoud, M. A. (2024). Determinants of ChatGPT use and its impact on learning performance: An integrated model of BRT and TPB. International Journal of Human-Computer Interaction, 1–13. Online First Article. https://doi.org/10.1080/10447318.2024.2361210

Al-Abdullatif, A. M., & Alsubaie, M. A. (2024). ChatGPT in learning: Assessing students’ use intentions through the lens of perceived value and the influence of AI literacy. Behavioral Sciences, 14(9), 1-23. https://doi.org/10.3390/bs14090845

Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behavior: A meta‐analytic review. British Journal of Social Psychology, 40(4), 471-499. https://doi.org/10.1348/014466601164939

Baines, A., Ittefaq, M., & Abwao, M. (2022). Social media for social support: A study of international graduate students in the United States. Journal of International Students, 12(2), 345–365. https://doi.org/10.32674/jis.v12i2.3158

Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(43), 1-18. https://doi.org/10.1186/s41239-023-00411-8

Choung, H., David, P., & Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727-1739. https://doi.org/10.1080/10447318.2022.2050543

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Diao, Y., Li, Z., Zhou, J., Gao, W., & Gong, X. (2024). A meta-analysis of college students’ intention to use generative artificial intelligence. arXiv. https://doi.org/10.48550/arxiv.2409.06712

Fishbein, M., & Ajzen, I. (2011). Predicting and changing behavior: The reasoned action approach (1st ed.). Psychology Press.

Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65 (May), 1-15. https://doi.org/10.1016/j.techsoc.2021.101535

Giray, L. (2024). The problem with false positives: AI detection unfairly accuses scholars of AI plagiarism. The Serials Librarian, 85(5-6), 181-189. https://doi.org/10.1080/0361526X.2024.2433256

Greaves, M., Zibarras, L. D., & Stride, C. (2013). Using the theory of planned behavior to explore environmental behavioral intentions in the workplace. Journal of Environmental Psychology, 34 (June), 109-120. https://doi.org/10.1016/j.jenvp.2013.02.003

Hashim, S., Masek, A., Mahthir, B. N. S. M., Rashid, A. H. A., & Nincarean, D. (2021). Association of interest, attitude and learning habit in mathematics learning toward enhancing students’ achievement. Indonesian Journal of Science and Technology, 6(1), 113–122. https://doi.org/10.17509/ijost.v6i1.31526

Hasija, A., & Esper, T. L. (2022). In artificial intelligence (AI) we trust: A qualitative investigation of AI technology acceptance. Journal of Business Logistics, 43(3), 388-412. https://doi.org/10.1111/jbl.12301

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570. https://doi.org/10.1111/ejed.12533

Hsu, C., & Lin, J. C. (2016). Effect of perceived value and social influences on mobile app stickiness and in-app purchase intention. Technological Forecasting and Social Change, 108 (July), 42–53. https://doi.org/10.1016/j.techfore.2016.04.012

Huang, A., Ozturk, A. B., Zhang, T., de la Mora Velasco, E., & Haney, A. (2024). Unpacking AI for hospitality and tourism services: Exploring the role of perceived enjoyment on future use intentions. International Journal of Hospitality Management, 119 (May), 1-9. https://doi.org/10.1016/j.ijhm.2024.103693

Huang, C. W., Coleman, M., Gachago, D., & Van Belle, J. P. (2023). Using ChatGPT to encourage critical AI literacy skills and for assessment in higher education. In H. E. Van Rensburg, D. P. Snyman, L. Drevin, & G. R. Drevin (Eds.), ICT education. SACLA 2023 (Communications in Computer and Information Science, Vol. 1862, pp. 96–110). Springer. https://doi.org/10.1007/978-3-031-48536-7_8

Ittefaq, M., Zain, A., Arif, R., Ala-Uddin, M., Ahmad, T., & Iqbal, A. (2025). Global news media coverage of artificial intelligence: A comparative analysis of frames, sentiments, and trends across 12 countries. Telematics and Informatics, 96 (January), 1-18. https://doi.org/10.1016/j.tele.2024.102223

Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A., & Al-Alawi, A. N. (2024). Drivers of generative AI adoption in higher education through the lens of the theory of planned behavior. Technology in Society, 77 (June), 1-14. https://doi.org/10.1016/j.techsoc.2024.102521

Jarrah, A. M., Wardat, Y., & Fidalgo, P. (2023). Using ChatGPT in academic writing is (not) a form of plagiarism: What does the literature say. Online Journal of Communication and Media Technologies, 13(4), 1-20. https://doi.org/10.30935/ojcmt/13572

Jereb, E., Urh, M., Jerebic, J., & Šprajc, P. (2018). Gender differences and the awareness of plagiarism in higher education. Social Psychology of Education: An International Journal, 21(2), 409–426. https://doi.org/10.1007/s11218-017-9421-y

Jin, Y., Martinez-Maldonado, R., Gašević, D., & Yan, L. (2024). GLAT: The generative AI literacy assessment test. arXiv. https://doi.org/10.48550/arxiv.2411.00283

Kampa, R. K., Padhan, D. K., Karna, N., & Gouda, J. (2025). Identifying the factors influencing plagiarism in higher education: An evidence-based review of the literature. Accountability in Research, 32(2), 83–98. https://doi.org/10.1080/08989621.2024.2311212

Khalaf, M. A. (2025). Does attitude toward plagiarism predict aigiarism using ChatGPT? AI and Ethics, 5, 677-688. https://doi.org/10.1007/s43681-024-00426-5

Kier, C. A., & Ives, C. (2022). Recommendations for a balanced approach to supporting academic integrity: perspectives from a survey of students, faculty, and tutors. International Journal for Educational Integrity, 18(1), 1-19. https://doi.org/10.1007/s40979-022-00116-x

Kim, H. W., Chan, H. C., & Gupta, S. (2007). Value-based adoption of mobile internet: An empirical investigation. Decision Support Systems, 43(1), 111-126. https://doi.org/10.1016/j.dss.2005.05.009

Kim, Y., & Han, H. (2010). Intention to pay conventional-hotel prices at a green hotel – a modification of the theory of planned behavior. Journal of Sustainable Tourism, 18(8), 997–1014. https://doi.org/10.1080/09669582.2010.490300

Kim, S. W., & Lee, Y. (2024). Investigation into the influence of sociocultural factors on attitudes toward artificial intelligence. Education and Information Technologies, 29(8), 9907–9935. https://doi.org/10.1007/s10639-023-12172-y

Kim, Y., Park, Y., & Choi, J. (2017). A study on the adoption of IoT smart home service: Using value-based adoption model. Total Quality Management & Business Excellence, 28(9–10), 1149–1165. https://doi.org/10.1080/14783363.2017.1310708

King, W. R., & He, J. (2006). A meta-analysis of the technology acceptance model. Information & Management, 43(6), 740-755. https://doi.org/10.1016/j.im.2006.05.003

Koufaris, M., & Hampton-Sosa, W. (2004). The development of initial trust in an online company by new customers. Information & Management, 41(3), 377-397. https://doi.org/10.1016/j.im.2003.08.004

Lai, P. C. (2016). Design and Security impact on consumers’ intention to use single platform E-payment. Interdisciplinary Information Sciences, 22(1), 111–122. https://doi.org/10.4036/iis.2016.r.05

Lee, M. C. (2009). Understanding the behavioral intention to play online games: An extension of the theory of planned behavior. Online Information Review, 33(5), 849–872. https://doi.org/10.1108/14684520911001873

Li, K. (2023). Determinants of college students’ actual use of AI-based systems: An extension of the technology acceptance model. Sustainability, 15(6), 1-16. https://doi.org/10.3390/su15065221

Lin, T.C., Wu, S., Hsu, J. S.C., & Chou, Y.C. (2012). The integration of value-based adoption and expectation–confirmation models: An example of IPTV continuance intention. Decision Support Systems, 54(1), 63–75. https://doi.org/10.1016/j.dss.2012.04.004

Lukyanenko, R., Maass, W., & Story, V. C. (2022). Trust in artificial intelligence: From a foundational trust framework to emerging research opportunities. Electronic Markets, 32(4), 1993–2020. https://doi.org/10.1007/s12525-022-00605-4

Ma, X., & Huo, Y. (2023). Are users willing to embrace ChatGPT? Exploring the factors on the acceptance of chatbots from the perspective of AIDUA framework. Technology in Society, 75 (November), 1-13. https://doi.org/10.1016/j.techsoc.2023.102362

Malik, A., Khan, M. L., Hussain, K., Qadir, J., & Tarhini, A. (2025). AI in higher education: unveiling academicians’ perspectives on teaching, research, and ethics in the age of ChatGPT. Interactive Learning Environments, 33(3), 2390-2406. https://doi.org/10.1080/10494820.2024.2409407

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334-359. https://doi.org/10.1287/isre.13.3.334.81

Mohamed Eldakar, M. A., Khafaga Shehata, A. M., & Abdelrahman Ammar, A. S. (2025). What motivates academics in Egypt toward generative AI tools? An integrated model of TAM, SCT, UTAUT2, perceived ethics, and academic integrity. Information Development, 0(0). https://doi.org/10.1177/02666669251314859

Nazaretsky, T., Mejia-Domenzain, P., Swamy, V., Frej, J., & Käser, T. (2025). The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions. Computers and Education. Artificial Intelligence, 8, Article 100368. https://doi.org/10.1016/j.caeai.2025.100368

Ng, D. T. K., Wu, W., Leung, J. K. L., Chiu, T. K. F., & Chu, S. K. W. (2024). Design and validation of the AI literacy questionnaire: The affective, behavioral, cognitive and ethical approach. British Journal of Educational Technology, 55(3), 1082-1104. https://doi.org/10.1111/bjet.13411

Omrani, N., Rivieccio, G., Fiore, U., Schiavone, F., & Agreda, S. G. (2022). To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts. Technological Forecasting and Social Change, 181(August), 121763. https://doi.org/10.1016/j.techfore.2022.121763

Pan, X. (2020). Technology acceptance, technological self-efficacy, and attitude toward technology-based self-directed learning: Learning motivation as a mediator. Frontiers in Psychology, 11(2020), 1-11. https://doi.org/10.3389/fpsyg.2020.564294

Park, S., Ju, Y., Kim, E., & Chang, J. (2025). Research on consumers' intention to use mobile payment platforms: Based on the VAM and TAM models. KSII Transactions on Internet and Information Systems (TIIS), 19(3), 1007-1026. https://doi.org/10.3837/tiis.2025.03.016

Parveen, K., Phuc, T. Q. B., Alghamdi, A. A., Hajjej, F., Obidallah, W. J., Alduraywish, Y. A., & Shafiq, M. (2024). Unraveling the dynamics of ChatGPT adoption and utilization through Structural Equation Modeling. Scientific Reports, 14(1), 1-15. https://doi.org/10.1038/s41598-024-74406-4

Prasad, K. D. V., & De, T. (2024). Generative AI as a catalyst for HRM practices: Mediating effects of trust. Humanities and Social Sciences Communications, 11(1), 1-16. https://doi.org/10.1057/s41599-024-03842-4

Romero-Rodríguez, J. M., Ramírez-Montoya, M. S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: Students’ perceived usefulness. Journal of New Approaches in Educational Research, 12(2), 323-339. https://doi.org/10.7821/naer.2023.7.1458

Salloum, S. A., Mohammad Alhamad, A. Q., Al-Emran, M., Abdel Monem, A., & Shaalan, K. (2019). Exploring students’ acceptance of e-learning through the development of a comprehensive technology acceptance model. IEEE Access, 7, 128445–128462. https://doi.org/10.1109/ACCESS.2019.2939467

Seo, H., Liu, Y., Ebrahim, H., Ittefaq, M., & Chung, D. (2023). The COVID-19 pandemic and international students: A mixed-methods approach to relationships between social media use, social support, and mental health. First Monday, 28(2), 1-22. https://doi.org/10.5210/fm.v28i2.11516

Serholt, S., Barendregt, W., Leite, I., Hastie, H., Jones, A., Paiva, A., ... & Castellano, G. (2014). Teachers’ views on the use of empathic robotic tutors in the classroom. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication (pp. 955-960). IEEE. https://doi.org/10.1109/ROMAN.2014.6926376

Shahzad, M. F., Xu, S., & Asif, M. (2024). Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. British Educational Research Journal, 51(2), 489-513. https://doi.org/10.1002/berj.4084

Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47(2020), 101324. https://doi.org/10.1016/j.tele.2019.101324

Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education. Artificial Intelligence, 7(2024), 100259. https://doi.org/10.1016/j.caeai.2024.100259

Strzelecki, A. (2024). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments, 32(9), 5142–5155. https://doi.org/10.1080/10494820.2023.2209881

Teo, T., & Noyes, J. (2011). An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among preservice teachers: A structural equation modeling approach. Computers & Education, 57(2), 1645–1653. https://doi.org/10.1016/j.compedu.2011.03.002

Turel, O., Serenko, A., & Bontis, N. (2010). User acceptance of hedonic digital artifacts: A theory of consumption values perspective. Information & Management, 47(1), 53-59. https://doi.org/10.1016/j.im.2009.10.002

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273-315. https://doi.org/10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926

von Garrel, J., & Mayer, J. (2023). Artificial intelligence in studies—use of ChatGPT and AI-based tools among students in Germany. Humanities & Social Sciences Communications, 10(1), 799–809. https://doi.org/10.1057/s41599-023-02304-7

Wang, B., Rau, P.-L. P., & Yuan, T. (2023). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behavior & Information Technology, 42(9), 1324–1337. https://doi.org/10.1080/0144929X.2022.2072768

Wang, C., Wang, H., Li, Y., Dai, J., Gu, X., & Yu, T. (2024). Factors influencing university students’ behavioral intention to use generative Artificial Intelligence: Integrating the theory of planned behavior and AI literacy. International Journal of Human–Computer Interaction. Online First Article. https://doi.org/10.1080/10447318.2024.2383033

Wells, J. D., Campbell, D. E., Valacich, J. S., & Featherman, M. (2010). The effect of perceived novelty on the adoption of information technology innovations: A risk/reward perspective. Decision Sciences, 41(4), 813-843. https://doi.org/10.1111/j.1540-5915.2010.00292.x

Wood, D., & Moss, S. H. (2024). Evaluating the impact of students’ generative AI use in educational contexts. Journal of Research in Innovative Teaching & Learning, 17(2), 152–167. https://doi.org/10.1108/JRIT-06-2024-0151

Xie, L., Liu, X., & Li, D. (2022). The mechanism of value cocreation in robotic services: customer inspiration from robotic service novelty. Journal of Hospitality Marketing & Management, 31(8), 962–983. https://doi.org/10.1080/19368623.2022.2112354

Zhao, S., & Chen, L. (2021). Exploring residents’ purchase intention of green housings in China: An extended perspective of perceived value. International Journal of Environmental Research and Public Health, 18(8), 4074-4093. https://doi.org/10.3390/ijerph18084074

Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., & Wang, C. (2025). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. International Journal of Human–Computer Interaction, 41(1), 742-764. https://doi.org/10.1080/10447318.2024.2323277

Zogheib, S., & Zogheib, B. (2024). Understanding university students’ adoption of ChatGPT: Insights from TAM, SDT, and beyond. Journal of Information Technology Education Research, 23(1), 1-14. https://doi.org/10.28945/5377

Published

2025-07-02

How to Cite

Ittefaq, M., Zain, A., Arif, R., Ahmad, T., Khan, L., & Seo, H. (2025). Factors influencing international students’ adoption of generative artificial intelligence: The mediating role of perceived values and attitudes. Journal of International Students, 15(7), 127-154. https://doi.org/10.32674/fnwdpn48