Use of Behavioral Economics to Improve Treatment of Acute Respiratory Infections (Pilot Study) (BEARI)

March 31, 2017 updated by: Jason Doctor, University of Southern California

Bacteria resistant to antibiotic therapy are a major public health problem. The evolution of multidrug-resistant pathogens may be encouraged by provider prescribing behavior. Inappropriate use of antibiotics for nonbacterial infections and overuse of broad-spectrum antibiotics can lead to the development of resistant strains. Although providers are adequately trained to know when antibiotics are and are not effective, this training has not been sufficient to change critical prescribing practices.

The intent of this study is to apply behavioral economic theory to reduce the rate of antibiotic prescriptions for acute respiratory diagnoses for which guidelines do not call for antibiotics. Specifically targeted are infections that are likely to be viral.

The objective of this study is to improve provider decisions around treatment of acute respiratory infections.

The participants are practicing attending physicians or advanced practice nurses (i.e., providers) at participating clinics who see patients with acute respiratory infections. A maximum of 550 participants will be recruited for this study.

Providers consenting to participate will fill out a baseline questionnaire online. Subsequent to baseline data collection and enrollment, participating clinic sites will be randomized to the study arms, as described below.

There will be a control arm, with clinic sites randomized in a multifactorial design to up to three interventions that leverage the electronic health record: Order Sets triggered by EHR workflow and containing exclusively guideline-concordant choices (SA, for Suggested Alternatives); Accountable Justification (AJ), triggered by guideline-discordant prescriptions, which populates the note with the provider's rationale for the guideline exception; and performance feedback that benchmarks providers' own performance to that of their peers (PC, for Peer Comparison).

The outcomes of interest are antibiotic prescribing patterns, including prescribing rates and changes in prescribing rates over time.

The intervention period will last one year, with a one-year follow-up period to measure persistence of the effect after EHR features are returned to their original state and providers no longer receive email alerts.

Study Overview

Detailed Description

Each consented provider will be randomized to 1 of 8 cells in a factorial design with equal probability. If results of retrospective data analysis imply that the design will be improved by stratification, randomization will be stratified by factors that could influence outcomes.
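
As a rough sketch of the assignment scheme described above, the code below assigns each consented provider to one of the 8 cells of the 2 x 2 x 2 factorial (Suggested Alternatives, Accountable Justification, and Peer Comparison each on or off) with equal probability. The provider identifiers, the optional seed, and the use of Python's random module are illustrative assumptions; the study's actual randomization procedure and any stratification rules are not specified here.

```python
import random
from itertools import product

# The 2 x 2 x 2 factorial: each cell toggles Suggested Alternatives (SA),
# Accountable Justification (AJ), and Peer Comparison (PC) on or off.
CELLS = list(product([False, True], repeat=3))  # 8 cells: (sa, aj, pc)

def randomize(provider_ids, seed=None):
    """Assign each provider to one of the 8 factorial cells with equal probability.

    Illustrative only; the study may stratify randomization by factors
    that could influence outcomes.
    """
    rng = random.Random(seed)
    return {
        pid: dict(zip(("SA", "AJ", "PC"), rng.choice(CELLS)))
        for pid in provider_ids
    }

# Hypothetical usage with placeholder provider identifiers.
print(randomize(["provider_001", "provider_002", "provider_003"], seed=42))
```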

Data will be collected from Northwestern University's Enterprise Data Warehouse, which houses copies of data recorded in the Epic electronic health record. Data elements from qualifying office visits will be collected from coded portions of the electronic health record.

An encounter is eligible for intervention if the patient's diagnosis is in the selected group of acute respiratory infections. The intervention EHR functions will be triggered when clinicians initiate an antibiotic prescription or enter a diagnosis for an acute respiratory infection that has a defined Order Set. If an antibiotic from a list of frequently misprescribed antibiotics is ordered and a diagnosis has not yet been entered, providers will be prompted to enter a diagnosis. If the diagnosis entered is acute nasopharyngitis; acute laryngopharyngitis/acute upper respiratory infection; acute bronchitis; bronchitis not specified as acute or chronic; or influenza, the interventions will be triggered. The diagnosis-appropriate order set will pop up for providers in the Suggested Alternatives (SA) arm, while clinicians randomized to the Accountable Justification (AJ) arm will receive an alert and be required to enter a brief statement justifying the antibiotic prescription if antibiotics are not indicated for the diagnosis entered. This justification will then be added to the patient's medical record.
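
The triggering rules above amount to a simple decision procedure. The sketch below is a hypothetical rendering of that logic; the ICD-9 code groups match the diagnoses named in the outcome measures, but the antibiotic list, function names, and arm flags are illustrative assumptions, not the actual Epic alert configuration.

```python
# ICD-9 code groups for which guidelines do not call for antibiotics
# (the diagnoses that trigger the interventions).
TRIGGER_DIAGNOSES = {
    "460",  # acute nasopharyngitis (common cold)
    "465",  # acute laryngopharyngitis / acute upper respiratory infection
    "466",  # acute bronchitis
    "490",  # bronchitis not specified as acute or chronic
    "487",  # influenza
}

# Hypothetical list of frequently misprescribed antibiotics.
MISPRESCRIBED_ANTIBIOTICS = {"azithromycin", "amoxicillin", "levofloxacin"}

def handle_order(diagnosis_code, antibiotic, arm):
    """Return the EHR actions to fire for one encounter (illustrative sketch).

    `arm` is a dict of intervention flags, e.g. {"SA": True, "AJ": False, "PC": True}.
    """
    actions = []
    # Antibiotic ordered before any diagnosis is on the chart: ask for one.
    if antibiotic in MISPRESCRIBED_ANTIBIOTICS and diagnosis_code is None:
        actions.append("prompt_for_diagnosis")
        return actions
    # Diagnosis falls in a trigger group: fire the arm-specific interventions.
    if diagnosis_code is not None and diagnosis_code.split(".")[0] in TRIGGER_DIAGNOSES:
        if arm.get("SA"):
            actions.append("show_guideline_concordant_order_set")
        if arm.get("AJ") and antibiotic is not None:
            # Free-text justification is added to the patient's note.
            actions.append("require_justification_note")
    return actions
```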

Clinicians randomized to the Peer Comparison (PC) condition will receive monthly updates about their antibiotic prescribing practices relative to other clinicians in their practice.
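
As a rough illustration of how the monthly Peer Comparison feedback could be tabulated, the sketch below computes each clinician's antibiotic prescribing rate for trigger diagnoses and ranks it within the practice. The encounter record fields and the ranking rule are assumptions for illustration, not the study's actual report logic.

```python
from collections import defaultdict

def peer_comparison(encounters):
    """Per-clinician antibiotic prescribing rates for trigger diagnoses,
    ranked within the practice (rank 1 = lowest rate). Illustrative sketch.

    `encounters` is an iterable of dicts with hypothetical keys:
    {"clinician": str, "trigger_diagnosis": bool, "antibiotic_prescribed": bool}
    """
    counts = defaultdict(lambda: [0, 0])  # clinician -> [antibiotic visits, eligible visits]
    for e in encounters:
        if e["trigger_diagnosis"]:
            counts[e["clinician"]][1] += 1
            if e["antibiotic_prescribed"]:
                counts[e["clinician"]][0] += 1
    rates = {c: abx / total for c, (abx, total) in counts.items()}
    ranked = sorted(rates, key=rates.get)
    return {c: {"rate": rates[c], "rank": ranked.index(c) + 1, "of": len(ranked)}
            for c in rates}
```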

Study Type

Interventional

Enrollment (Actual)

28

Phase

  • Not Applicable

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • Illinois
      • Chicago, Illinois, United States, 60611-2923
        • Northwestern Medical Faculty Foundation General Internal Medicine Clinic

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

18 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

No

Genders Eligible for Study

All

Description

Inclusion Criteria:

A practicing attending physician or advanced practice nurse ("provider") at the Northwestern Medical Faculty Foundation General Internal Medicine (NMFF GIM) Clinic in 2011-2013 who sees patients with acute respiratory infections.

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Treatment
  • Allocation: Randomized
  • Interventional Model: Factorial Assignment
  • Masking: Single

Arms and Interventions

Participant Group / Arm
Intervention / Treatment
Experimental: SA, AJ
Participants receive the Suggested Alternatives and Accountable Justification interventions, but not the Peer Comparison intervention.
Order Sets triggered by EHR workflow and containing exclusively guideline-concordant choices (SA, for Suggested Alternatives).
Other Names:
  • SA
  • Suggested Alternatives
Accountable Justification is triggered by guideline-discordant prescriptions and populates the EHR note with the provider's rationale for the guideline exception (AJ).
Other Names:
  • Accountable Justification
  • AJ
Experimental: SA, AJ, PC

Participants are given all 3 interventions:

Suggested Alternatives, Accountable Justification, and Peer Comparison.

Performance feedback that benchmarks providers' own performance to that of their peers (PC, for Peer Comparison).
Other Names:
  • PC
  • Peer Comparison
Order Sets triggered by EHR workflow and containing exclusively guideline-concordant choices (SA, for Suggested Alternatives).
Other Names:
  • SA
  • Suggested Alternatives
Accountable Justification is triggered by guideline-discordant prescriptions and populates the EHR note with the provider's rationale for the guideline exception (AJ).
Other Names:
  • Accountable Justification
  • AJ
Experimental: SA, PC
Participants receive the Suggested Alternatives and Peer Comparison interventions, but not the Accountable Justification intervention.
Performance feedback that benchmarks providers' own performance to that of their peers (PC, for Peer Comparison).
Other Names:
  • PC
  • Peer Comparison
Order Sets triggered by EHR workflow and containing exclusively guideline-concordant choices (SA, for Suggested Alternatives).
Other Names:
  • SA
  • Suggested Alternatives
Experimental: AJ, PC
Participants receive the Accountable Justification and Peer Comparison interventions, but not the Suggested Alternatives intervention.
Performance feedback that benchmarks providers' own performance to that of their peers (PC, for Peer Comparison).
Other Names:
  • PC
  • Peer Comparison
Accountable Justification is triggered by guideline-discordant prescriptions and populates the EHR note with the provider's rationale for the guideline exception (AJ).
Other Names:
  • Accountable Justification
  • AJ
Experimental: Peer Comparison
Participants receive the Peer Comparison intervention, but do not receive the Suggested Alternatives or Accountable Justification interventions.
Performance feedback that benchmarks providers' own performance to that of their peers (PC, for Peer Comparison).
Other Names:
  • PC
  • Peer Comparison
Experimental: Suggested Alternatives
Participants receive the Suggested Alternatives intervention, but not the Accountable Justification or Peer Comparison interventions.
Order Sets triggered by EHR workflow and containing exclusively guideline-concordant choices (SA, for Suggested Alternatives).
Other Names:
  • SA
  • Suggested Alternatives
Experimental: Accountable Justification
Participants receive the Accountable Justification intervention, but do not receive the Suggested Alternatives or Peer Comparison interventions.
Accountable Justification is triggered by guideline-discordant prescriptions and populates the EHR note with the provider's rationale for the guideline exception (AJ).
Other Names:
  • Accountable Justification
  • AJ
No Intervention: Control
Participants do not receive any of the 3 interventions.

What is the study measuring?

Primary Outcome Measures

Outcome Measure
Measure Description
Time Frame
Antibiotic Prescribing Rate for 5 Specific Acute Respiratory Infection Diagnoses
Time Frame: 2 years

Changes in antibiotic prescribing rate for the following ICD-9 diagnoses:

460 Acute nasopharyngitis (common cold)

465 Acute laryngopharyngitis/acute upper respiratory infection

466 Acute bronchitis

490 Bronchitis not specified as acute or chronic

487 Influenza (flu)

2 years
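
For clarity, the primary outcome can be read as a proportion computed per period: qualifying visits (one of the five diagnosis groups above) at which an antibiotic was prescribed, divided by all qualifying visits, with the change taken between periods. The sketch below illustrates that calculation; the visit record fields and period labels are illustrative assumptions.

```python
def prescribing_rate(visits, period):
    """Antibiotic prescribing rate among qualifying ARI visits in one period.

    `visits` is an iterable of hypothetical dicts:
    {"period": "baseline" or "intervention", "qualifying_dx": bool, "antibiotic": bool}
    """
    eligible = [v for v in visits if v["period"] == period and v["qualifying_dx"]]
    if not eligible:
        return None
    return sum(v["antibiotic"] for v in eligible) / len(eligible)

def rate_change(visits):
    """Change in prescribing rate from baseline to intervention period (illustrative)."""
    before = prescribing_rate(visits, "baseline")
    after = prescribing_rate(visits, "intervention")
    if before is None or after is None:
        return None
    return after - before
```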

Secondary Outcome Measures

Outcome Measure
Measure Description
Time Frame
Antibiotic Prescribing Rates for Expanded List of Acute Respiratory Infection Diagnoses
Time Frame: 2 years
We will monitor overall prescribing for the specified diagnoses and other Acute Respiratory Infection diagnoses, including cough/fever and pneumonia.
2 years

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Investigators

  • Principal Investigator: Stephen Persell, MD, Northwestern University
  • Study Director: Jason N Doctor, PhD, University of Southern California

Publications and helpful links

The person responsible for entering information about the study voluntarily provides these publications. These may be about anything related to the study.

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start

July 1, 2011

Primary Completion (Actual)

February 1, 2013

Study Completion (Actual)

September 1, 2014

Study Registration Dates

First Submitted

August 4, 2011

First Submitted That Met QC Criteria

October 18, 2011

First Posted (Estimate)

October 19, 2011

Study Record Updates

Last Update Posted (Actual)

April 4, 2017

Last Update Submitted That Met QC Criteria

March 31, 2017

Last Verified

March 1, 2017
