Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Artificial Intelligence Extension (PRISMA-AI)

May 18, 2022 updated by: Giovanni Cacciamani, University of Southern California

Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Artificial Intelligence Extension. Delphi Consensus

The investigators aim to develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Artificial Intelligence Extension (PRISMA-AI) guideline as a stand-alone extension of the PRISMA statement, modified to reflect the particular requirements for reporting AI and its related topics (namely machine learning, deep learning, and neural networks) in systematic reviews.

Study Overview

Status

Enrolling by invitation

Intervention / Treatment

Detailed Description

With advances in artificial intelligence (AI) over the last two decades, enthusiasm for and adoption of this technology in medicine have steadily increased. Yet despite the greater adoption of AI in medicine, the way such methodologies and results are reported varies widely, and clinical studies utilizing AI can be challenging for the general clinician to read.

Systematic reviews of AI applications are an important area for which specific guidance is needed. An ongoing systematic review led by our team has shown that the number of systematic reviews on AI applications (with or without meta-analysis) is increasing dramatically over time, yet the quality of reporting remains poor and heterogeneous, leading to inconsistencies in the informational details reported across individual studies. Consequently, the lack of these informational details may pose problems for primary research and evidence synthesis, and potentially limits their usefulness for stakeholders interested in implementing AI or using the information in systematic reviews.

The criteria will derive from consensus among multi-specialty experts (across medical specialties) who have already published on AI applications in leading medical journals, together with the lead authors of PRISMA, STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI, to ensure that the criteria have global applicability across all disciplines and for every type of study that involves AI.

The proposed PRISMA-AI extension criteria focus on standardizing the reporting of methods and results for clinical studies utilizing AI. These criteria will reflect the most relevant technical details a data scientist requires for future reproducibility, while also preserving the ability of the clinician reader to critically follow and ascertain the relevant outcomes of such studies.

The resultant PRISMA-AI extension will:

  1. help stakeholders interested in implementing AI or using AI-related information in systematic reviews,
  2. create a framework for reviewers who assess publications,
  3. provide a tool for training researchers in AI systematic review (SR) methodology, and
  4. help end-users of SRs, such as physicians and policymakers, better evaluate an SR's validity and applicability in their decision-making process.

The success of the criteria will be seen in how manuscripts are written, how peer reviewers assess them, and, finally, how the general readership is able to read and digest the published studies.

Study Type

Observational

Enrollment (Anticipated)

150

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • California
      • Los Angeles, California, United States, 90005
        • University of Southern California

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

18 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

No

Genders Eligible for Study

All

Sampling Method

Non-Probability Sample

Study Population

A team of experts in the use of AI technology in medicine, together with experts in PRISMA and the lead authors of STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI, will evaluate the PRISMA-AI extension reporting guidelines.

Description

Inclusion Criteria:

  • experts in the use of AI technology in medicine
  • experts in PRISMA
  • leading authors of STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI and DECIDE-AI

Exclusion Criteria:

  • Panelists who were not able to commit to all rounds of the modified Delphi process will be excluded

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

Cohorts and Interventions

Group / Cohort: Delphi Panel

Intervention / Treatment: A team of experts in the use of AI technology in medicine, together with experts in PRISMA and the lead authors of STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI, will evaluate the PRISMA-AI extension reporting guidelines.

An invitation email, including a link to the survey, will be sent to the panel of experts in AI in healthcare.

The Delphi questionnaire will be administered via Welphi.com. In the first survey, panel members will outline the AI reporting standards in systematic reviews and objectively identify critical aspects of reporting methodology and results.

In subsequent surveys, the expert panel will evaluate the modified criteria using a 5-point Likert scale (1 to 5), with space provided for suggested edits and comments. Multiple rounds will be conducted until consensus is reached. After each round of Likert responses, the study team will calculate the agreement and distribution of responses. Likert responses will be dichotomized, with positive values indicating agreement and neutral or negative values indicating disagreement.

For questions that do not reach at least 80% consensus in the first round, or that need further explanation, additional survey rounds may be performed.
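The dichotomization and threshold rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study team's actual analysis code; the function names and the assumption that ratings of 4 or 5 on the 1-to-5 scale count as "agreement" (with 3 treated as neutral, hence disagreement) are inferred from the description of the dichotomization.

```python
def likert_agreement(responses):
    """Fraction of Likert responses counted as agreement.

    Assumption from the protocol description: ratings of 4 or 5 are
    positive (agreement); 1-3 (negative or neutral) count as disagreement.
    """
    positive = sum(1 for r in responses if r >= 4)
    return positive / len(responses)


def reaches_consensus(responses, threshold=0.80):
    """True if the statement meets the predefined >= 80% consensus level."""
    return likert_agreement(responses) >= threshold


# Hypothetical panel of 10: one neutral rating (3), nine ratings of 4 or 5,
# so agreement is 9/10 = 90% and the statement reaches consensus.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
print(likert_agreement(ratings))   # 0.9
print(reaches_consensus(ratings))  # True
```

Statements falling below the threshold would be revised based on panel comments and carried into the next Delphi round.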

What is the study measuring?

Primary Outcome Measures

Outcome Measure: Degree of consensus

Measure Description: The level of agreement for all statements achieving consensus from the expert panel; consensus is predefined as ≥ 80% of the panel rating a given statement as agreement.

Time Frame: 3 months

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Investigators

  • Study Chair: Giovanni Cacciamani, MD, University of Southern California

Publications and helpful links

The person responsible for entering information about the study voluntarily provides these publications. These may be about anything related to the study.

General Publications

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Anticipated)

June 15, 2022

Primary Completion (Anticipated)

June 30, 2022

Study Completion (Anticipated)

July 15, 2022

Study Registration Dates

First Submitted

May 2, 2022

First Submitted That Met QC Criteria

May 18, 2022

First Posted (Actual)

May 19, 2022

Study Record Updates

Last Update Posted (Actual)

May 19, 2022

Last Update Submitted That Met QC Criteria

May 18, 2022

Last Verified

May 1, 2022

More Information

Terms related to this study

Other Study ID Numbers

  • UP-22-00370

Plan for Individual participant data (IPD)

Plan to Share Individual Participant Data (IPD)?

No

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No

