Impact and Safety of AI in Decision Making in the ICU: a Simulation Experiment (ICU)

February 24, 2023 updated by: Imperial College London

The impact of deploying artificial intelligence (AI) in healthcare settings is unclear, in particular with regard to how it will influence human decision makers. Previous research demonstrated that AI alerts were frequently ignored (Kamal et al., 2020) or could lead to unexpected behaviour and worsening of patient outcomes (Wilson et al., 2021). On the other hand, excessive confidence and trust placed in the AI could have several adverse consequences, including a reduced ability to detect harmful AI decisions, leading to patient harm, as well as human deskilling. Some of these aspects relate to automation bias.

In this simulation study, the investigators intend to measure whether medical decisions in areas of high clinical uncertainty are modified by the use of an AI-based clinical decision support tool. The investigators will measure how the doses of intravenous fluids (IVF) and vasopressors administered by doctors to adult patients with sepsis (severe infection with organ failure) in the ICU change as a result of disclosing the doses suggested by a hypothetical AI. The area of sepsis resuscitation is poorly codified, with high uncertainty leading to high variability in practice. This study will not specifically mention the AI Clinician (Komorowski et al., 2018). Instead, the investigators will describe a hypothetical AI for which there is some evidence of effectiveness on retrospective data in another clinical setting (e.g. a model that was retrospectively validated using data from a different country than the source data used for model training) but no prospective evidence of effectiveness or safety. As such, it is possible for this hypothetical AI to provide unsafe suggestions. The investigators will intentionally introduce unsafe AI suggestions (in random order) to measure the sensitivity of participants at detecting these.
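
The registry entry does not publish the study's analysis code, but the detection logic it describes can be sketched. The Python fragment below is a minimal illustration, assuming a simple per-participant scoring scheme; the `Scenario` structure, function names, and fields are hypothetical, not taken from the study protocol.

```python
# Illustrative sketch only: all names here are hypothetical assumptions,
# not the study's actual protocol or code.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    case_id: int
    ai_suggestion_unsafe: bool          # whether the injected AI dose is deliberately unsafe
    participant_rejected: bool = False  # filled in during the simulation

def present_in_random_order(scenarios: list[Scenario], seed: int = 42) -> list[Scenario]:
    """Randomise the order in which safe and unsafe AI suggestions appear."""
    rng = random.Random(seed)
    shuffled = scenarios[:]
    rng.shuffle(shuffled)
    return shuffled

def sensitivity_to_unsafe_ai(scenarios: list[Scenario]) -> float:
    """Proportion of deliberately unsafe AI suggestions the participant rejected."""
    unsafe = [s for s in scenarios if s.ai_suggestion_unsafe]
    if not unsafe:
        return float("nan")
    return sum(s.participant_rejected for s in unsafe) / len(unsafe)
```

A participant who rejects every deliberately unsafe suggestion would score 1.0; one who accepts them all would score 0.0.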

Study Overview

Status

Completed

Conditions

Sepsis

Intervention / Treatment

Hypothetical AI

Detailed Description

The investigators will examine which participant characteristics are linked with an increased likelihood of being influenced by the AI, and conduct a number of pre-specified subgroup analyses, e.g. junior versus senior ICU doctors, and separating those with a positive or a negative attitude towards AI.
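
As a rough illustration of such pre-specified subgroup comparisons (the registry does not state which statistical tests were used; the Mann-Whitney U test, the column names, and the toy data below are all assumptions), a stratified analysis might look like this:

```python
# Hypothetical subgroup analysis; variable names, test choice, and data
# are assumptions for illustration, not taken from the study protocol.
import pandas as pd
from scipy.stats import mannwhitneyu

# One row per participant: seniority, attitude towards AI, and a per-participant
# "influence" score (e.g. mean shift in prescribed dose after AI disclosure).
df = pd.DataFrame({
    "seniority": ["junior", "senior", "junior", "senior"],
    "ai_attitude": ["positive", "negative", "negative", "positive"],
    "influence": [0.42, 0.18, 0.35, 0.22],
})

for column, (a, b) in {
    "seniority": ("junior", "senior"),
    "ai_attitude": ("positive", "negative"),
}.items():
    stat, p = mannwhitneyu(
        df.loc[df[column] == a, "influence"],
        df.loc[df[column] == b, "influence"],
    )
    print(f"{column}: {a} vs {b}, U={stat:.1f}, p={p:.3f}")
```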

Study Type

Observational

Enrollment (Actual)

38

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

      • Imperial College Hospitals NHS Trust
        London, United Kingdom, W2 1PG

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

18 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

Yes

Genders Eligible for Study

All

Sampling Method

Non-Probability Sample

Study Population

Junior (senior house officer) or senior (registrar/fellow/consultant) ICU doctor

Description

Inclusion Criteria:

  • Junior (senior house officer) or senior (registrar/fellow/consultant) ICU doctor

Exclusion Criteria:

  • Participants not meeting the inclusion criteria.

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

Cohorts and Interventions

Group / Cohort: ICU Clinicians
Intervention / Treatment: n/a. There is no intervention; clinicians will review the suggestions of a hypothetical AI.

What is the study measuring?

Primary Outcome Measures

Outcome Measure
Measure Description
Time Frame
Influence of AI on ICU Clinicians
Time Frame: 3 months
Influence of AI on ICU Clinicians, this will be divided into the following categories: overall and stratified by safe/unsafe, junior/senior and positive/negative attitude towards AI.
3 months
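
The record does not define how "influence" is operationalised. One common metric in the advice-taking literature, weight of advice (WOA), could be computed per decision as sketched below; its use here is an assumption for illustration only.

```python
def weight_of_advice(initial_dose: float, final_dose: float, ai_dose: float) -> float:
    """Weight of advice: 0 = AI suggestion ignored, 1 = adopted in full.

    Hypothetical operationalisation; the study record does not specify
    how influence is scored.
    """
    if ai_dose == initial_dose:
        return float("nan")  # undefined when the AI agrees with the initial dose
    return (final_dose - initial_dose) / (ai_dose - initial_dose)

# Example: clinician planned 500 ml IVF, AI suggested 1000 ml,
# final prescription 750 ml -> WOA = 0.5
print(weight_of_advice(500, 750, 1000))
```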

Secondary Outcome Measures

Outcome Measure: Participants' characteristics
Measure Description: What are the characteristics of those taking part in the simulation, and how do these affect decision making?
Time Frame: 3 months

Outcome Measure: Trust in AI
Measure Description: How much do ICU clinicians trust the AI system?
Time Frame: 3 months

Outcome Measure: Confidence in participants' decisions
Measure Description: How much confidence do clinicians place in their own decisions?
Time Frame: 3 months

Outcome Measure: Proportion of time with attention on AI explanation
Measure Description: Where is attention focused during the simulation?
Time Frame: 3 months

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Actual)

July 22, 2022

Primary Completion (Actual)

October 31, 2022

Study Completion (Actual)

October 31, 2022

Study Registration Dates

First Submitted

August 4, 2022

First Submitted That Met QC Criteria

August 8, 2022

First Posted (Actual)

August 10, 2022

Study Record Updates

Last Update Posted (Estimate)

February 27, 2023

Last Update Submitted That Met QC Criteria

February 24, 2023

Last Verified

February 1, 2023

More Information

Terms related to this study

Other Study ID Numbers

  • 22CX7592

Plan for Individual participant data (IPD)

Plan to Share Individual Participant Data (IPD)?

No

IPD Plan Description

Individual participant data will only be reviewed by the study team.

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No

