Laypersons Cannot Select Preferred Surgeon Based on Videos of Simulated Robot-assisted Radical Prostatectomies

November 3, 2022 updated by: Rikke Groth Olsen, Copenhagen Academy for Medical Education and Simulation

The goal of this comparative, blinded assessment study is to compare crowd-worker ratings with expert ratings of simulated robot-assisted radical prostatectomies.

The main questions it aims to answer are:

  • to examine the use of crowdsourced assessment for assessing the performance of robot-assisted radical prostatectomy (RARP) compared with using experienced surgeons
  • to explore if some crowd workers (CW) are better than others

Participants will assess edited videos of simulated robot-assisted radical prostatectomies using a standardized assessment tool. The laypersons will be asked to answer yes/no to the question: 'Would you trust this doctor to perform robot-assisted surgery on you?' after each surgery. All participants were blinded to the identity of the surgeon performing in the videos of the robot-assisted radical prostatectomy. Researchers will compare laypersons with expert raters to see if there is any difference between their ratings.

Study Overview

Status

Completed

Conditions

Clinical Competence

Intervention / Treatment

Other: Randomized video numbers

Detailed Description

3. Trial design

3.1 Content

This study will evaluate global robotic skills for the three modules performed on the RobotiX, Simbionix: bladder neck dissection, nerve-sparing dissection, and ureterovesical anastomosis, all recorded from the previous study: 'Validation of a novel simulation-based test in robot-assisted radical prostatectomy.'

3.2 Response process

Experienced surgeons and crowd workers will first be presented with a short, written instruction describing the trial. Before enrolment, all participants will have signed an informed consent (Appendix 2) and completed a demographic questionnaire covering baseline characteristics of the crowd and the surgical experience of the experienced surgeons (Appendix 3). After completion of the informed consent and demographic questionnaire, the survey links will be sent to the participants. Afterwards, both crowd raters and experts will be trained in how to assess the videos using the assessment tool, mGEARS. mGEARS is composed of 5 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, and robotic control. Performance in each domain is measured on a 5-point Likert scale. A rating of 1 corresponds to the lowest level of performance, whereas a rating of 5 corresponds to the highest level of performance. An overall performance score is derived by summing the scores of each of the domains (5-25 points). The raters will have time to read and understand the assessment tool before rating the videos. An elaborate explanation of the chosen domains will be given to the raters, including how to rate each video.
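As a concrete illustration of the scoring rule above, here is a minimal Python sketch of an mGEARS total: five domains, each rated on a 1-5 Likert scale and summed to an overall score of 5-25. The function and variable names are illustrative and not taken from the study protocol.

```python
# Minimal sketch of mGEARS scoring: five domains, each rated 1-5,
# summed to an overall 5-25 score. Names are illustrative only.

MGEARS_DOMAINS = (
    "depth_perception",
    "bimanual_dexterity",
    "efficiency",
    "force_sensitivity",
    "robotic_control",
)

def mgears_total(ratings: dict[str, int]) -> int:
    """Sum the five domain ratings into an overall 5-25 score."""
    if set(ratings) != set(MGEARS_DOMAINS):
        raise ValueError(f"expected exactly the domains {MGEARS_DOMAINS}")
    if not all(1 <= r <= 5 for r in ratings.values()):
        raise ValueError("each domain rating must be on the 1-5 Likert scale")
    return sum(ratings.values())

# Example: a mid-range performance.
print(mgears_total({
    "depth_perception": 3,
    "bimanual_dexterity": 4,
    "efficiency": 2,
    "force_sensitivity": 3,
    "robotic_control": 4,
}))  # -> 16
```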

3.4 Video material

The participants will assess the videos using the assessment tool in a survey sent by E-boks. The surveys will be sent using a URL link from REDCap. All videos are stored in the 23video system, and a link to the videos will be included in the survey. The survey has successfully been tested on different devices.

The investigators will randomly choose videos from the third repetition from 5 novice surgeons, 5 experienced robotic surgeons, and 5 experienced robotic surgeons in RARP. The investigators will use videos edited to a maximum length of 5 minutes: each video will run from the start (0 minutes) to the 5-minute mark, where it will be stopped. The videos will therefore show how far the surgeon has come after 5 minutes of simulated operation. A total of 45 edited videos will be used for crowdsourced assessment.

To secure the response process of Messick's framework, all participants will be blinded to the identity and skill level of the surgeon on the recorded video. The experienced surgeons could potentially rate their own videos, which could be a threat to validity for the response process, but as the videos are blinded, they will not know which videos are their own. In addition, there will be a significant time delay between performing the task and rating the videos. Thus, it is unlikely that they will be able to identify their own videos. All videos will be given a randomly allocated identification ID.

3.5 Video-rating

Each participant will rate ten randomly chosen videos using mGEARS. The participants will be given a randomized ID number, which is used to match the ten videos to the participant. They will be asked to evaluate each video on the five domains of mGEARS on a scale from one to five. After rating each video, the participant will be asked to answer 'yes' or 'no' to the question: 'Would you trust this doctor to operate on you, if you were to have your prostate removed using robot-assisted surgery?'. The participants will fill in the answers after the video-rating in REDCap.
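The protocol above pairs randomized participant IDs with ten randomly chosen, blinded videos. A small sketch of how such an allocation could be done is shown below; it assumes the 45-video pool from section 3.4 and is not the study's actual allocation code.

```python
# Illustrative sketch (not the study's allocation code): give each of
# the 45 blinded videos a randomly allocated ID (section 3.4), then
# assign each rater ten random videos keyed to their randomized
# participant ID (section 3.5).
import random

def allocate_videos(rater_ids, n_videos=45, per_rater=10, seed=0):
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    video_ids = rng.sample(range(10_000, 99_999), n_videos)
    return {rater: rng.sample(video_ids, per_rater) for rater in rater_ids}

assignments = allocate_videos(["R001", "R002", "R003"])
print(assignments["R001"])  # the ten blinded video IDs for rater R001
```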

3.6 Evaluation questions

After the crowd raters finish the video ratings, they will receive a final questionnaire in REDCap, where they are asked their opinion about a possible future role as crowd raters regarding time use and possible payment level (Appendix 4).

3.7 Data collection

All data will be collected and stored in REDCap, a platform designed to store research data. All data will be pseudonymized, as all participants will get a unique link known only to the participant and the principal investigator (RGO). The participants can only rate each video once. The data will be blinded by RGO prior to statistical analysis.

4. Selection of participants

The crowd workers will be recruited through Forskningspanelet, a Danish association of volunteer patients who would like to contribute to research, via e-mail, Facebook, the website of the Danish prostate cancer association (PROPA), and the monthly PROPA membership magazine.

The expert panel will be invited by e-mail.

Study Type

Interventional

Enrollment (Actual)

151

Phase

  • Not Applicable

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • Østerbro
      • Copenhagen, Østerbro, Denmark, 2100
        • Copenhagen Academy for Medical Education and Simulation

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

16 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

No

Genders Eligible for Study

All

Description

Laypersons

Inclusion Criteria:

  • Member of Forskningspanelet

Exclusion Criteria:

  • Under the age of 18

Expert raters

Inclusion Criteria:

  • Senior surgeons in urology
  • Conducted >50 robot-assisted radical prostatectomy procedures

Exclusion Criteria:

  • none

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Other
  • Allocation: N/A
  • Interventional Model: Single Group Assignment
  • Masking: None (Open Label)

Arms and Interventions

Participant Group / Arm: Other: Crowd workers

Crowd workers watched 10 (out of 45 possible) random videos and assessed them with a standard assessment tool. All participants were blinded to the identity and skill level of the surgeon.

Intervention / Treatment: See arm/group description

What is the study measuring?

Primary Outcome Measures

Outcome Measure: To examine the use of crowdsourced assessment for assessing the performance of robot-assisted radical prostatectomy (RARP) compared with using experienced surgeons

Measure Description: To what degree do the crowd workers' mGEARS scores correlate with those of the experienced surgeons? A rating of 1 corresponds to the lowest level of performance, whereas a rating of 5 corresponds to the highest level of performance. An overall performance score is derived by summing the scores of each of the domains (5-25 points).

Time Frame: immediately after the study completion
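The registry entry asks how crowd scores correlate with expert scores but does not name a statistic, so the sketch below assumes, purely for illustration, a plain Pearson correlation over hypothetical per-video mean mGEARS totals.

```python
# Hedged sketch of the primary comparison: Pearson correlation between
# per-video mean mGEARS totals (5-25) from crowd workers and experts.
# The statistic and the data values are assumptions, not from the study.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-video mean mGEARS totals from the two rater groups.
crowd_means = [14.2, 18.9, 11.5, 21.0, 16.3]
expert_means = [13.0, 19.5, 10.8, 22.1, 15.9]
print(round(pearson(crowd_means, expert_means), 2))
```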

Secondary Outcome Measures

Outcome Measure: To explore if some crowd workers (CW) are better than others

Measure Description: Do some CW perform closer to the expert raters than others (stratified by age, gender, and education in the medical field [yes/no])?

Time Frame: immediately after the study completion
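The record does not specify how 'closer to the expert raters' would be quantified; the sketch below assumes, for illustration only, hypothetical data and a mean absolute deviation from per-video expert means, grouped by a demographic field.

```python
# Illustrative sketch for the secondary outcome, under assumed data
# shapes: group crowd raters by a demographic field and see which
# subgroup's scores sit closest to the per-video expert means
# (mean absolute deviation; metric and data are assumptions).
from collections import defaultdict
from statistics import mean

# Hypothetical records: (rater_id, subgroup, video_id, mGEARS total).
crowd_scores = [
    ("R001", "medical_education", "V1", 15),
    ("R001", "medical_education", "V2", 20),
    ("R002", "no_medical_education", "V1", 11),
    ("R002", "no_medical_education", "V2", 24),
]
expert_mean = {"V1": 14.0, "V2": 19.5}  # hypothetical expert means per video

deviations = defaultdict(list)
for _, subgroup, video, score in crowd_scores:
    deviations[subgroup].append(abs(score - expert_mean[video]))

for subgroup, devs in deviations.items():
    print(subgroup, round(mean(devs), 2))  # lower = closer to the experts
```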

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Investigators

  • Study Chair: Flemming Bjerrum, MD, Copenhagen Academy for Medical Education and Simulation

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Actual)

April 1, 2021

Primary Completion (Actual)

February 1, 2022

Study Completion (Actual)

May 1, 2022

Study Registration Dates

First Submitted

October 26, 2022

First Submitted That Met QC Criteria

November 3, 2022

First Posted (Actual)

November 7, 2022

Study Record Updates

Last Update Posted (Actual)

November 7, 2022

Last Update Submitted That Met QC Criteria

November 3, 2022

Last Verified

November 1, 2022

More Information

Terms related to this study

Other Study ID Numbers

  • P-2020-701

Plan for Individual participant data (IPD)

Plan to Share Individual Participant Data (IPD)?

No

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No
