Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study

Tom Nadarzynski, Oliver Miles, Aimee Cowie, Damien Ridge

Abstract

Background: Artificial intelligence (AI) is increasingly being used in healthcare, where AI-based chatbot systems can act as automated conversational agents capable of promoting health, providing education, and potentially prompting behaviour change. Understanding the motivation to use health chatbots is required to predict uptake; however, few studies to date have examined their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots.

Methods: The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. The 24-item survey explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor.

Results: Three broad themes were identified: 'Understanding of chatbots', 'AI hesitancy' and 'Motivations for health chatbots', outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%). Acceptability was negatively associated with poorer perceived IT skills (OR = 0.32, 95% CI: 0.13–0.78) and a dislike of talking to computers (OR = 0.77, 95% CI: 0.60–0.99), and positively associated with perceived utility (OR = 5.10, 95% CI: 3.08–8.43), a positive attitude towards chatbots (OR = 2.71, 95% CI: 1.77–4.16) and perceived trustworthiness (OR = 1.92, 95% CI: 1.13–3.25).
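The odds ratios and 95% confidence intervals above come from binary logistic regressions with a single predictor. As a minimal illustration only (not the authors' code), the Python sketch below shows how such an OR and CI are obtained by exponentiating a fitted slope and its confidence bounds; the variable names and simulated data are hypothetical stand-ins for the survey variables.

```python
# Illustrative sketch only: deriving an odds ratio and 95% CI from a
# binary logistic regression with a single predictor, mirroring the
# format of the Results. Data and variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 216  # matches the survey sample size reported above

# Hypothetical predictor: perceived utility on a 1-5 scale
utility = rng.integers(1, 6, size=n).astype(float)

# Hypothetical binary outcome: acceptance of a health chatbot,
# simulated so that higher perceived utility raises the odds
p_accept = 1 / (1 + np.exp(-(-3.0 + 1.2 * utility)))
accept = rng.binomial(1, p_accept)

# Binary logistic regression with a single predictor
X = sm.add_constant(utility)
fit = sm.Logit(accept, X).fit(disp=False)

# Exponentiating the slope and its confidence bounds gives the
# odds ratio and its 95% confidence interval
beta = fit.params[1]
lo, hi = fit.conf_int()[1]
print(f"OR = {np.exp(beta):.2f} [CI95%: {np.exp(lo):.2f}-{np.exp(hi):.2f}]")
```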

Conclusion: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Designers of AI-led health chatbot interventions need to employ user-centred and theory-based approaches that address patients' concerns and optimise user experience to achieve the best uptake and utilisation. Patients' perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.

Keywords: AI; acceptability; artificial intelligence; bot; chatbot.

