Quality IQ Patient Simulation Physician Practice Measurement and Engagement (Q-IQ)

March 2, 2020 updated by: Qure Healthcare, LLC
This study will test the quality of physician care decisions using a patient simulation-based measurement and feedback approach that combines multiple-choice care decisions with real-time, personalized scoring and feedback. The study will also measure the impact of gaming-inspired competition and motivation, including a weekly leaderboard, on evidence-based care decisions. In addition, the study will test the impact of CME and MOC credits on participant engagement in the process.

Study Overview

Detailed Description

Primary care providers (PCPs) make many of the most important care decisions, especially for patients with chronic conditions and multiple co-morbidities. Studies have confirmed that unwarranted variation is common among PCPs, with high levels of variation in care documented between urban and rural practices, across regions, and even among providers within a single healthcare system.

The investigators' previous work has shown that patient simulations can rapidly and reliably measure unwarranted practice variation among providers. In addition, published work shows that patient simulations, when administered serially and combined with customized feedback on improvement opportunities, can reduce practice variation and improve performance on patient-level quality measures. Given the large scope of unwarranted variation in medical practice, there is a need for scalable approaches to measure care decisions, provide feedback on improvement opportunities, and benchmark performance against peers.

This study seeks to evaluate the impact of measurement, feedback and competition on evidence-based care decisions made by primary care providers across the country. It is a randomized, controlled trial with multiple measurements across key domains of clinical care. All participants are asked to care for simulated patients designed to look like typical patients seen in a primary care practice. In each case, providers will answer multiple-choice questions about their preferred course of action to work-up, diagnose and treat patients in the primary care setting. After each question, providers will receive evidence-based feedback, including references, on the appropriateness of each of their care decisions. Feedback will be supported with relevant reference to evidence-based guidelines, including national MIPS quality measures.

All participants will receive the following interventions:

  • Feedback on care decisions made in each Quality IQ case, which will identify correct care, unneeded care, or gaps in care. This feedback will recommend or reinforce evidence-based care decisions and include references.
  • All cases will be scored against evidence-based criteria. For each case, providers start with 100 base points; correct care decisions add to that total, while unnecessary care decisions subtract from it (a minimal scoring sketch follows this list). A weekly leaderboard will be posted online, allowing participants to see how they are performing relative to their peers across the country. To maintain anonymity, participants may choose a unique username or an anonymous user ID to identify them on the leaderboard.
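The scoring rule above is described only in outline; the following minimal Python sketch shows one way such case scoring and a weekly leaderboard could be implemented. The point weights, field names, and the mean-score ranking rule are illustrative assumptions and are not specified in the study record.

    # Minimal sketch of the case-scoring scheme described above.
    # Point weights and data structures are illustrative assumptions,
    # not values taken from the study protocol.
    from dataclasses import dataclass

    @dataclass
    class CareDecision:
        description: str
        evidence_based: bool   # decision matches the evidence-based scoring criteria
        unnecessary: bool      # unneeded test or treatment
        points: int            # assumed weight for this decision

    def score_case(decisions: list[CareDecision], base_points: int = 100) -> int:
        """Start from 100 base points; add for correct care, subtract for unneeded care."""
        total = base_points
        for d in decisions:
            if d.evidence_based:
                total += d.points
            elif d.unnecessary:
                total -= d.points
        return total

    def weekly_leaderboard(scores_by_user: dict[str, list[int]]) -> list[tuple[str, float]]:
        """Rank participants (by chosen username or anonymous ID) on mean case score."""
        ranked = [(user, sum(s) / len(s)) for user, s in scores_by_user.items() if s]
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)

    # Example: one correct and one unnecessary decision cancel out, leaving the base score.
    case = [
        CareDecision("Order HbA1c for diabetes follow-up", True, False, 10),
        CareDecision("Order head CT for uncomplicated headache", False, True, 10),
    ]
    print(score_case(case))                                              # 100
    print(weekly_leaderboard({"anon_42": [100, 120], "anon_17": [90]}))  # anon_42 ranks first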

Half of the recruits will be offered Category 1 CME credit approved by the University of California, San Francisco School of Medicine (UCSF), which is accredited by the Accreditation Council for Continuing Medical Education to provide CME for physicians, as well as MOC points in the ABIM's MOC program.

Study Type

Interventional

Enrollment (Actual)

187

Phase

  • Not Applicable

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • California
      • San Francisco, California, United States, 94109
        • QURE Healthcare

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

  • Child
  • Adult
  • Older Adult

Accepts Healthy Volunteers

Yes

Genders Eligible for Study

All

Description

Inclusion Criteria:

  1. Board-certified in internal medicine or family medicine
  2. Minimum patient panel size of 1,500 patients
  3. English-speaking
  4. Access to the internet
  5. Provided informed, signed, and voluntary consent to participate in the study

Exclusion Criteria:

  1. Not board certified in either internal medicine or family medicine
  2. Patient panel size less than 1,500 patients
  3. Non-English speaking
  4. Unable to access the internet
  5. Does not voluntarily consent to be in the study

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Health Services Research
  • Allocation: Randomized
  • Interventional Model: Parallel Assignment
  • Masking: Single

Arms and Interventions

Active Comparator: Control
The Control arm will be asked to care for online Quality IQ patient simulations and will receive feedback on the care decisions made in each case. The feedback will identify correct care, unneeded care, or gaps in care, recommend or reinforce evidence-based care decisions, and include references. This arm will not be offered Continuing Medical Education (CME) or American Board of Internal Medicine (ABIM) Part II Maintenance of Certification (MOC) credits for their participation.

Online patient cases designed to simulate typical patients seen in a primary care practice. In each case, providers will answer multiple-choice questions about their preferred course of action to work-up, diagnose and treat patients in the primary care setting. After each question, providers will receive evidence-based feedback, including references, on the appropriateness of each of their care decisions. Feedback will be supported with relevant reference to evidence-based guidelines, including national MIPS quality measures.

Cases will cover clinical conditions aligned with MIPS measures that are commonly seen in the primary care setting including: diabetes, hypertension, depression, osteoarthritis, asthma and pain control.

Other Names:
  • CPVs
  • Clinical Performance and Value vignettes
Experimental: CME
The CME arm will be asked to care for online Quality IQ patient simulations and will receive feedback on the care decisions made in each case. The feedback will identify correct care, unneeded care, or gaps in care, recommend or reinforce evidence-based care decisions, and include references. This arm will be offered Continuing Medical Education (CME) and American Board of Internal Medicine (ABIM) Part II Maintenance of Certification (MOC) credits for their participation.

Online patient cases designed to simulate typical patients seen in a primary care practice. In each case, providers will answer multiple-choice questions about their preferred course of action to work-up, diagnose and treat patients in the primary care setting. After each question, providers will receive evidence-based feedback, including references, on the appropriateness of each of their care decisions. Feedback will be supported with relevant reference to evidence-based guidelines, including national MIPS quality measures.

Cases will cover clinical conditions aligned with MIPS measures that are commonly seen in the primary care setting including: diabetes, hypertension, depression, osteoarthritis, asthma and pain control.

Other Names:
  • CPVs
  • Clinical Performance and Value vignettes
CME or ABIM MOC credits
Other Names:
  • CME

What is the study measuring?

Primary Outcome Measures

Change in the percentage of evidence-based diagnostic and treatment decisions made in the simulations
Time Frame: 3 months
In each case, participants will answer multiple-choice questions about their preferred course of action to work-up, diagnose and treat patients in the primary care setting. Each question has specific evidence-based scoring criteria identifying necessary and unnecessary care decisions. Each provider will receive a score for each case, ranging from 0 to 100 percent, based on the care decisions they make in the case. Over the course of the project, the investigators will track the percentage of correct, evidence-based care decisions made by participants, with the hypothesis that serial measurement and feedback on evidence-based care decisions will lead to increases in appropriate decisions over time. Higher scores represent a better outcome. (An illustrative calculation sketch follows this entry.)
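As a concrete illustration of how this primary outcome could be computed, the short sketch below (in the same Python style as the scoring sketch above) calculates the percentage of evidence-based decisions in a measurement round and the change across rounds. The decision counts are hypothetical and are not study data.

    # Illustrative calculation of the primary outcome: percentage of
    # evidence-based decisions per round and its change over 3 months.
    # The counts below are hypothetical examples, not study results.
    def percent_evidence_based(correct: int, total: int) -> float:
        """Round score expressed on a 0-100 percent scale."""
        return 100.0 * correct / total

    baseline = percent_evidence_based(correct=60, total=100)   # 60.0
    month_3  = percent_evidence_based(correct=72, total=100)   # 72.0
    change   = month_3 - baseline                               # +12 percentage points
    print(baseline, month_3, change)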

Secondary Outcome Measures

Change in MIPS-relevant care decisions made in the patient simulations
Time Frame: 3 months
As described in the primary outcome measure, the investigators will track the percentage of evidence-based care decisions made by participants in the patient simulations. A subset of these care decisions ties directly to quality measures tracked by Medicare through the Merit-based Incentive Payment System (MIPS). For this outcome measure, the investigators will track changes in the percentage of MIPS-relevant work-up and treatment decisions made in the patient simulations. Higher scores represent a better outcome.
Change in ordering of unneeded work-up tests made in the patient simulations
Time Frame: 3 months
As described in the primary outcome measure, the investigators will track the percentage of evidence-based care decisions made by participants in the patient simulations. A subset of these care decisions ties to the ordering of unneeded laboratory and imaging tests that are not supported by evidence-based guidelines. For this outcome measure, the investigators will track changes in the frequency with which unneeded tests are ordered in the patient simulations. Higher scores represent a better outcome.
Participant case completion rate
Time Frame: 3 months
The investigators will track the percentage of enrolled participants who stay engaged in the study and complete at least 75% of their patient simulation cases.
Participant Satisfaction
Time Frame: 3 months
Investigators will measure participant satisfaction with a post-evaluation survey. On a scale of 1 to 5 (with 5 being the highest), participants will be asked about the overall quality of the material, the relevance to their practice, and the educational content. Higher scores represent a better outcome.
Impact of available CME and ABIM MOC on recruitment rate
Time Frame: 3 months
Operating under the hypothesis that physicians offered CME and MOC credits are more likely to participate in a quality improvement program like this one, the investigators will track the rate at which a randomized group of primary care physicians enrolls in the program when offered CME and MOC credit and compare it to the rate in a group that is not offered CME and MOC credit for participation.
Impact of available CME and ABIM MOC on retention rate
Time Frame: 3 months
Operating under the hypothesis that physicians offered CME and MOC credits are more likely to continue participating in a quality improvement program, the investigators will track the rate at which primary care physicians eligible to earn CME and MOC credit complete the full 8-week project and compare it to the rate in a group that is not offered CME and MOC credit.

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Collaborators

Investigators

  • Principal Investigator: John Peabody, MD, PhD, QURE Healthcare

Publications and helpful links

The person responsible for entering information about the study voluntarily provides these publications. These may be about anything related to the study.

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Actual)

January 11, 2019

Primary Completion (Actual)

March 11, 2019

Study Completion (Actual)

April 15, 2019

Study Registration Dates

First Submitted

January 4, 2019

First Submitted That Met QC Criteria

January 8, 2019

First Posted (Actual)

January 11, 2019

Study Record Updates

Last Update Posted (Actual)

March 3, 2020

Last Update Submitted That Met QC Criteria

March 2, 2020

Last Verified

March 1, 2020

More Information

Terms related to this study

Plan for Individual participant data (IPD)

Plan to Share Individual Participant Data (IPD)?

NO

IPD Plan Description

No individual participant data will be shared with other researchers. Analysis will be conducted at the aggregate group level.

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No

