- ICH GCP
- US Clinical Trials Registry
- Clinical Trial NCT05382455
Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Artificial Intelligence Extension (PRISMA-AI)
Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Artificial Intelligence Extension (PRISMA-AI): Delphi Consensus
Study Overview
Status
Conditions
Intervention / Treatment
Detailed Description
With advances in artificial intelligence (AI) over the last two decades, enthusiasm for and adoption of this technology in medicine have steadily increased. Yet despite this greater adoption, the way AI methodologies and results are reported varies widely, and clinical studies utilizing AI can be challenging for the general clinician to read.
Systematic reviews of AI applications are an important area for which specific guidance is needed. An ongoing systematic review led by our team has shown that the number of systematic reviews on AI applications (with or without meta-analysis) is increasing dramatically over time, yet the quality of reporting remains poor and heterogeneous, leading to inconsistencies in the reporting of informational details among individual studies. Consequently, the lack of these details may pose problems for primary research and evidence synthesis and potentially limits their usefulness for stakeholders interested in implementing AI or using the information in systematic reviews.
The criteria will derive from consensus among multi-specialty experts (across medical specialties) who have already published on AI applications in leading medical journals, together with the lead authors of PRISMA, STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI, to ensure that the criteria have global applicability across all disciplines and for every type of study involving AI.
The proposed PRISMA-AI extension criteria focus on standardizing the reporting of methods and results for clinical studies utilizing AI. These criteria will reflect the most relevant technical details a data scientist requires for future reproducibility, while remaining focused on the clinician reader's ability to critically follow and ascertain the relevant outcomes of such studies.
The resultant PRISMA-AI extension will:
- help stakeholders interested in implementing AI or using AI-related information in systematic reviews;
- create a framework for reviewers who assess publications;
- provide a tool for training researchers in AI systematic review (SR) methodology;
- help end-users of SRs, such as physicians and policymakers, better evaluate an SR's validity and applicability in their decision-making process.
The success of the criteria will be seen in how manuscripts are written, how peer reviewers assess them, and, finally, how the general readership is able to read and digest the published studies.
Study Type
Enrollment (Anticipated)
Contacts and Locations
Study Locations
- California
  - University of Southern California, Los Angeles, California, United States, 90005
Participation Criteria
Eligibility Criteria
Ages Eligible for Study
Accepts Healthy Volunteers
Genders Eligible for Study
Sampling Method
Study Population
Description
Inclusion Criteria:
- experts in the use of AI technology in medicine
- experts in PRISMA
- lead authors of STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI
Exclusion Criteria:
- Panelists who are not able to commit to all rounds of the modified Delphi process will be excluded
Study Plan
How is the study designed?
Design Details
Cohorts and Interventions
Group / Cohort: Delphi Panel

A team of experts in the use of AI technology in medicine, together with experts in PRISMA, STARD-AI, CONSORT-AI, SPIRIT-AI, TRIPOD-AI, PROBAST-AI, CLAIM-AI, and DECIDE-AI, will evaluate the PRISMA-AI extension reporting guidelines.

Intervention / Treatment:

An invitation email, including a link to the survey, will be sent to the panel of experts in AI in healthcare. The Delphi questionnaire will be administered via Welphi.com. In the first survey, panel members will outline the AI reporting standards in systematic reviews and objectively identify critical aspects of reporting methodology and results. In subsequent surveys, the expert panel will evaluate the modified criteria using a 1-to-5-point Likert scale, with space provided for suggested edits and comments. Multiple rounds will be conducted until consensus is reached. After each round of Likert responses, the study team will calculate the agreement and distribution of responses. Likert responses will be dichotomized, with positive values indicating agreement and neutral or negative values indicating disagreement. For questions that do not reach a consensus of more than 80% in the first round, or that need further explanation, additional rounds of the survey may be performed.
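The dichotomization and agreement calculation described above can be sketched in a few lines. This is an illustrative sketch only, assuming that on the 1-to-5 Likert scale "positive" means a rating of 4 or 5 and that the consensus threshold is 80%; the function names are hypothetical, not part of the study protocol.

```python
# Sketch of the Delphi agreement calculation described above.
# Assumptions: Likert 4-5 = agreement; 1-3 (neutral or negative) = disagreement;
# consensus threshold = 80% of the panel. Names are illustrative.

def dichotomize(response: int) -> bool:
    """Positive Likert values (4 or 5) count as agreement."""
    return response >= 4

def agreement_rate(responses: list[int]) -> float:
    """Fraction of panelists whose response counts as agreement."""
    return sum(dichotomize(r) for r in responses) / len(responses)

def reaches_consensus(responses: list[int], threshold: float = 0.80) -> bool:
    """A statement reaches consensus when the agreement rate meets the threshold."""
    return agreement_rate(responses) >= threshold

# Example round: 10 panelists rating one statement
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 2]
print(agreement_rate(ratings))     # 0.8
print(reaches_consensus(ratings))  # True
```

Statements falling below the threshold would be revised based on the free-text comments and carried into the next survey round.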
What is the study measuring?
Primary Outcome Measures
Outcome Measure: Degree of consensus

Measure Description: The level of agreement for all statements achieving consensus from the expert panel; consensus is predefined as ≥ 80% of the panel rating a given statement

Time Frame: 3 months
Collaborators and Investigators
Investigators
- Study Chair: Giovanni Cacciamani, MD, University of Southern California
Publications and helpful links
General Publications
- Ibrahim H, Liu X, Rivera SC, Moher D, Chan AW, Sydes MR, Calvert MJ, Denniston AK. Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines. Trials. 2021 Jan 6;22(1):11. doi: 10.1186/s13063-020-04951-6.
- Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Lancet Digit Health. 2020 Oct;2(10):e549-e560. doi: 10.1016/S2589-7500(20)30219-3. Epub 2020 Sep 9.
- Sounderajah V, Ashrafian H, Aggarwal R, De Fauw J, Denniston AK, Greaves F, Karthikesalingam A, King D, Liu X, Markar SR, McInnes MDF, Panch T, Pearson-Stuttard J, Ting DSW, Golub RM, Moher D, Bossuyt PM, Darzi A. Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: The STARD-AI Steering Group. Nat Med. 2020 Jun;26(6):807-808. doi: 10.1038/s41591-020-0941-1. No abstract available.
- Collins GS, Dhiman P, Andaur Navarro CL, Ma J, Hooft L, Reitsma JB, Logullo P, Beam AL, Peng L, Van Calster B, van Smeden M, Riley RD, Moons KG. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021 Jul 9;11(7):e048008. doi: 10.1136/bmjopen-2020-048008.
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hrobjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021 Mar 29;372:n71. doi: 10.1136/bmj.n71.
Study record dates
Study Major Dates
Study Start (Anticipated)
Primary Completion (Anticipated)
Study Completion (Anticipated)
Study Registration Dates
First Submitted
First Submitted That Met QC Criteria
First Posted (Actual)
Study Record Updates
Last Update Posted (Actual)
Last Update Submitted That Met QC Criteria
Last Verified
More Information
Terms related to this study
Other Study ID Numbers
- UP-22-00370
Plan for Individual participant data (IPD)
Plan to Share Individual Participant Data (IPD)?
Drug and device information, study documents
Studies a U.S. FDA-regulated drug product
Studies a U.S. FDA-regulated device product