Patient Computer Dialog in Primary Care

June 11, 2013 updated by: Warner Slack, Beth Israel Deaconess Medical Center

Cybermedicine for the Patient and Physician

With this clinical study, we hoped to find out whether interactive, computer-based medical interviews, when carefully tested and honed and made available to patients in their homes on the Internet, would improve both the efficiency and quality of medical care and would be well received and found helpful by patients and their physicians. We developed a computer-based medical interview consisting of more than 6000 questions, together with a corresponding program that provides a concisely written summary of the patient's responses to the questions in the interview. We then conducted read-aloud and test/retest reliability evaluations of the interview and summary programs and determined the programs to be reliable. Results were published in the November 27, 2010 issue of the Journal of the American Medical Informatics Association.

We obtained a grant from the Rx Foundation to conduct a clinical trial of our medical history. At the time of the office visit, the summary of the computer-based history of those patients who had completed the interview was available on the doctor's computer screen for the doctor and patient to use together on a voluntary basis. The results of this trial were published in the January 2012 issue of the Journal of the American Medical Informatics Association.

Study Overview

Status

Completed

Detailed Description

We developed a computer-based medical history for patients to take in their homes via the Internet. The history is divided into 24 modules: family history, social history, cardiac history, pulmonary history, and the like. So far as possible, it is designed to model the comprehensive, inclusive, general medical history traditionally taken, when time permits, by a primary care doctor seeing a patient for the first time. It contains 232 primary questions asked of all patients about the presence or absence of medical problems. Of these, 215 have the preformatted, mutually exclusive responses "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer"; 10 have other sets of multiple choices with one response permitted; five have multiple choices with more than one response permitted; and two have numerical responses. In addition, more than 6000 questions, explanations, suggestions, and recommendations are available for presentation, as determined by the patient's responses and the branching logic of the program. These questions are available to explore in detail medical problems elicited by one or more of the primary questions. If, for example, a patient responds with "Yes" to the question about chest pain, the program branches to multiple qualifying questions about characteristics of the pain, such as onset, location, quality, severity, relationship to exertion, and course.

Once we had completed the interview in preliminary form, we made it available to members of our medical advisory board for their criticisms and suggestions. We then conducted a formal read-aloud assessment in which 10 volunteer patients read each primary question aloud to an investigator in attendance and offered their understanding and general assessment of the questions. We revised our program based on comments from the advisory board and the patients.
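As a rough illustration of the structure just described, the sketch below (in Python, with hypothetical names; it is not the study's actual interview software) shows how a primary question with the preformatted response set might branch to qualifying questions.

```python
from dataclasses import dataclass, field

# Preformatted, mutually exclusive responses used by most primary questions.
PRIMARY_RESPONSES = ["Yes", "No", "Uncertain (Don't know, Maybe)",
                     "Don't understand", "I'd rather not answer"]

@dataclass
class Question:
    """One frame of the interview: its text, allowable responses, and branching."""
    text: str
    responses: list = field(default_factory=lambda: list(PRIMARY_RESPONSES))
    branches: dict = field(default_factory=dict)  # response -> follow-up questions

# Hypothetical fragment: a "Yes" to the chest pain question branches to
# qualifying questions about the characteristics of the pain.
chest_pain = Question("Have you had chest pain?")
chest_pain.branches["Yes"] = [
    Question("Is the pain brought on by exertion?"),
    Question("How severe is the pain?", responses=["Mild", "Moderate", "Severe"]),
]

def ask(question, answer_fn):
    """Present a question, record the answer, and follow the branching logic."""
    answer = answer_fn(question)
    follow_ups = question.branches.get(answer, [])
    return {"question": question.text,
            "answer": answer,
            "follow_ups": [ask(q, answer_fn) for q in follow_ups]}

# Example run with a stand-in answering function.
print(ask(chest_pain, lambda q: "Yes" if "chest pain" in q.text else "No"))
```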

We then conducted a test/retest reliability study of the 215 of the 232 primary questions that have the preformatted, allowable response set of "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer," the 10 questions that have other response sets with one answer permitted, and the five questions with more than one response permitted. Email messages were sent via PatientSite (our patients' portal to their electronic medical record) to inform patients of the study and how to sign on to the informed consent form, and, for those who had consented to the study, to remind them to take the interview for the first and then the second time.

Forty-eight randomly selected patients of doctors affiliated with Beth Israel Deaconess Medical Center in Boston took the history twice, with intervals between sessions ranging from one to 35 days (mean seven days; median five days). When we analyzed the inconsistency between the first and second interviews in the 48 patients' responses to each of the primary questions, we found that the 215 questions with response options of "Yes," "No," "Uncertain," "Don't understand," and "I'd rather not answer" had the lowest incidence of inconsistency (6 percent); the 10 other multiple-choice questions with one response permitted had a 13 percent incidence, and the five multiple-choice questions with more than one response permitted had a 14 percent incidence. Whenever an inconsistency was detected during the repeat interview, the patient was asked to choose, when appropriate, from four possible reasons. Reasons chosen were: "clicked on the wrong choice" (23 percent), "not sure about the answer" (23 percent), "medical situation changed" (6 percent), and "didn't understand the question" (less than 1 percent). For the remaining 47 percent of the inconsistencies, no reason was given.

We then computed the percentage of agreement for each of the primary questions, together with Cohen's kappa index of reliability. Of the 215 "Yes," "No," "Uncertain (Don't know, Maybe)," "Don't understand," and "I'd rather not answer" questions, 96 (45 percent) had kappa values greater than .75 (excellent agreement by the criteria of Landis and Koch), and of these, 38 had kappa values of one (perfect agreement); an additional 24 primary questions (12 percent), to which all patients had made identical responses both times (perfect consistency), had no kappa values. Sixty-eight of these questions (32 percent) had kappa values between .40 and .75 (fair to good agreement), and 27 (13 percent) had kappa values less than .40 (poor agreement). Of the 27 questions with poor kappa values, 15 had percentages of agreement greater than 90 percent, and we deemed these to be sufficiently reliable within their clinical context to remain unrevised. We selected the 12 questions with poor kappa values and percentages of agreement less than 90 percent for rewording. Of the 15 primary questions with varying sets of responses, half had kappa values in the excellent range and half had kappa values in the fair to good range, and we kept these in place unrevised. Fifteen of the primary questions (7 percent) received a "don't understand" response. Although there was but a single "don't understand" response for each of these questions, we were able to isolate seven for which the possibility of confusion seemed evident, and we revised these accordingly.
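To illustrate the reliability statistics described above, the following is a minimal sketch (not the study's actual analysis code) of how percentage of agreement and Cohen's kappa could be computed for one question from paired test and retest responses; the responses shown are hypothetical.

```python
from collections import Counter

def percent_agreement(first, second):
    """Proportion of patients who gave the same response on both administrations."""
    return sum(a == b for a, b in zip(first, second)) / len(first)

def cohens_kappa(first, second):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    n = len(first)
    p_o = percent_agreement(first, second)
    # Chance agreement from the marginal frequencies of each response category.
    f1, f2 = Counter(first), Counter(second)
    p_e = sum((f1[c] / n) * (f2[c] / n) for c in set(first) | set(second))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical test and retest responses from ten patients to one primary question.
test   = ["Yes", "No", "No", "Yes", "No", "Uncertain", "No", "Yes", "No", "No"]
retest = ["Yes", "No", "No", "Yes", "No", "No",        "No", "Yes", "No", "No"]

print(percent_agreement(test, retest))  # 0.9
print(cohens_kappa(test, retest))       # ~0.80 (agreement beyond chance)
```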

With the first of the two interviews, with a mean of 545 frames presented and a completion time of 45 to 90 minutes (based on an estimated 7 seconds per frame), the volunteers were for the most part favorable in their assessment of the interview when asked a set of 10 questions on a 10-point Likert scale.

These results were published in the November 2010 issue of the Journal of the American Medical Informatics Association.

We also developed, edited, and revised a program that provides a concisely written summary of the patient's responses to the questions in the interview. This was a formidable project that took considerably longer than we had anticipated. The "phrase" is the basic unit of the summary. Identified by its unique reference number, each phrase contains the words to be generated, the conditions for writing them, and the branching logic that determines the course of the program as it progresses from phrase to phrase. The summary program for the General Medical Interview, which contains over 5,000 phrases, is organized by sections that are related by name and content to their corresponding interview sections. Designed for use by both doctor and patient and available in both electronic and printed form, the summary is presented in a legible but otherwise traditional format.
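As a rough sketch of the phrase structure described above (the reference numbers, conditions, and wording here are hypothetical and do not reproduce the study's actual summary program), a phrase can be modeled as a record holding its reference number, the words to generate, the condition for writing them, and a link to the next phrase:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Phrase:
    """Basic unit of the summary: reference number, words to generate,
    the condition for writing them, and the next phrase to consider."""
    ref: str
    words: str
    condition: Callable[[dict], bool]
    next_ref: Optional[str] = None

# Hypothetical fragment of a summary section on chest pain.
phrases = {
    "CP-100": Phrase("CP-100", "The patient reports chest pain",
                     lambda r: r.get("chest_pain") == "Yes", "CP-110"),
    "CP-110": Phrase("CP-110", ", brought on by exertion",
                     lambda r: r.get("pain_on_exertion") == "Yes"),
}

def write_summary(responses, start_ref):
    """Walk the phrase chain, emitting the words of each phrase whose condition holds."""
    parts, ref = [], start_ref
    while ref is not None:
        phrase = phrases[ref]
        if phrase.condition(responses):
            parts.append(phrase.words)
        ref = phrase.next_ref
    return "".join(parts) + "." if parts else ""

print(write_summary({"chest_pain": "Yes", "pain_on_exertion": "Yes"}, "CP-100"))
# -> The patient reports chest pain, brought on by exertion.
```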

We were not able to complete the randomized, controlled study at this time for two reasons. First, it took substantially longer than anticipated to develop and evaluate our program in our effort to produce a comprehensive, detailed computer-based medical interview that would compare favorably with that of a thoughtful physician. It took us two years to develop, test, and revise the General Medical Interview, and far longer than we had anticipated to complete the test/retest reliability study and to develop, test, and revise the summary program. Second, our medical center's current policy is to obtain a patient's e-mail address only after the patient has had a first visit to the center and has been registered in PatientSite. Therefore, although we could readily recruit our participants for the test/retest study by e-mail, we were limited to the far more labor-intensive process of telephone recruitment for the randomized, controlled study.

We later obtained a grant from the Rx Foundation to conduct a clinical trial of our newly revised medical history. After completing the medical history, the patients were asked to complete an online, 10-item, 10-point Likert-scale post-history assessment questionnaire. At the time of the office visit, the summary of the computer-based history of those patients who had completed the interview was available on the doctor's computer screen for the doctor and patient to use together on a voluntary basis. At the option of the doctor, the summary could then be edited and incorporated into the patient's online medical record. The day after the visit, the patients and the doctors were asked to complete a 10-point Likert-scale questionnaire consisting of six questions about the effect of the medical history and its summary on the quality of the visit from the patient's and the doctor's perspectives, with provision for them to record comments and suggestions for improvement.

The results of this trial were published in the January 2012 issue of the Journal of the American Medical Informatics Association.

Study Type

Interventional

Enrollment (Actual)

45

Phase

  • Phase 3

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • Massachusetts
      • Boston, Massachusetts, United States, 02215
        • Beth Israel Deaconess Medical Center

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

18 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

Yes

Genders Eligible for Study

All

Description

Inclusion Criteria:

  • Request for an initial appointment with a primary care physician at Beth Israel Deaconess Medical Center who has agreed to participate in the study
  • English as first language
  • Internet access at home

Exclusion Criteria

  • Under 18 years of age

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Diagnostic
  • Allocation: N/A
  • Interventional Model: Single Group Assignment
  • Masking: None (Open Label)

Arms and Interventions

Participant Group / Arm
Intervention / Treatment
Experimental: Computer-based medical history
A computer-based medical history for patients to take in their homes via the Internet. The history is divided into 24 modules: family history, social history, cardiac history, pulmonary history, and the like.
The intervention is a computer-based medical interview, which contains 232 primary questions that are asked of all respondents, and over 6000 frames (questions, explanations, suggestions, recommendations, and words of encouragement) that are available for presentation as determined by the patient's responses and the branching logic of the program.

What is the study measuring?

Primary Outcome Measures

Outcome Measure
Measure Description
Time Frame
Patient Post Medical History Assessment Questionnaire
Time Frame: Immediately after taking the medical history

The questionnaire consisted of 10 Likert-scale questions assessing the computer-based history. The Likert scale ranged from 1 for 'Not at all' to 10 for 'Very'.

We computed the mean of the responses to the question "How helpful were the questions when thinking about your health?" We also calculated a total score by averaging the mean scores of the 10 questions (a brief scoring sketch follows these outcome measures).

Patient Post Visit Questionnaire
Time Frame: One day after the visit with the physician
The questionnaire consisted of 6 Likert-scale questions assessing the helpfulness of the computer-based history for the patient at the time of the visit. The Likert scale ranged from 1 for 'Not at all helpful' to 10 for 'Very helpful'. We computed the mean of the responses to the question "How helpful was it for you to have taken the computer interview before seeing your doctor?" We also calculated a total score by averaging the mean scores of the 6 questions. In addition, we calculated the combined mean of the responses of three of the patients whose doctors did not complete their post-visit questionnaire.
Physician Post Visit Questionnaire
Time Frame: One day after the patient visit
The questionnaire consisted of 6 Likert-scale questions assessing the helpfulness of the computer-based history for the physician at the time of the patient visit. The Likert scale ranged from 1 for 'Not at all helpful' to 10 for 'Very helpful'. We computed the mean of the responses to the question "How helpful was it for your patient to have taken the computer interview before seeing you?" and the question "To what extent do you think the computer summary helped you to provide better care to your patient?" We also calculated a total score by averaging the mean scores of the 6 questions. In addition, we calculated the combined mean of the physician responses to the post-visit questionnaires when physicians filled out the questionnaire but their patients did not.
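The scoring described in these measures (per-question means and a total score that averages the question means) can be sketched as follows; the response values below are hypothetical, not study data.

```python
# Hypothetical 10-point Likert responses: one row per patient, one column per question.
responses = [
    [8, 7, 9, 6, 8, 7, 9, 8, 7, 8],   # patient 1
    [9, 8, 9, 7, 9, 8, 9, 9, 8, 9],   # patient 2
    [6, 5, 7, 6, 6, 5, 7, 6, 6, 6],   # patient 3
]

def question_means(rows):
    """Mean response to each question across all patients."""
    n = len(rows)
    return [sum(column) / n for column in zip(*rows)]

means = question_means(responses)
# Total score: the average of the per-question mean scores, as described above.
total_score = sum(means) / len(means)
print([round(m, 2) for m in means], round(total_score, 2))
```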

Secondary Outcome Measures

Outcome Measure
Measure Description
Number of Office Visits by Patients
The experimental design was revised from a two-arm experimental and control study to a one-arm experimental study; therefore, this secondary measure no longer applied.
Time Per Visit
The experimental design was revised from a two-arm experimental and control study to a one-arm experimental study; therefore, this secondary measure no longer applied.
Number of Telephone Calls and E-mail Messages Between Patients and Physicians
The experimental design was revised from a two-arm experimental and control study to a one-arm experimental study; therefore, this secondary measure no longer applied.
Completeness of Patients' Problem Lists
The experimental design was revised from a two-arm experimental and control study to a one-arm experimental study; therefore, this secondary measure no longer applied.

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Investigators

  • Principal Investigator: Warner V Slack, MD, Beth Israel Deaconess Medical Center

Publications and helpful links

The person responsible for entering information about the study voluntarily provides these publications. These may be about anything related to the study.

General Publications

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start

January 1, 2005

Primary Completion (Actual)

January 1, 2011

Study Completion (Actual)

January 1, 2011

Study Registration Dates

First Submitted

October 11, 2006

First Submitted That Met QC Criteria

October 11, 2006

First Posted (Estimate)

October 12, 2006

Study Record Updates

Last Update Posted (Estimate)

June 20, 2013

Last Update Submitted That Met QC Criteria

June 11, 2013

Last Verified

June 1, 2013

More Information

Terms related to this study

Other Study ID Numbers

  • 2004P-000420
  • R01LM008255-01A1 (U.S. NIH Grant/Contract)

