Enhancing Engagement With Digital Mental Health Care

January 29, 2024 updated by: Michael Pullmann, University of Washington
This proposal is a partnership between Mental Health America (MHA), a nonprofit mental health advocacy and resource organization; Talkspace (TS), a for-profit online digital psychotherapy organization; and the University of Washington's Schools of Medicine and Computer Science & Engineering (UW). The purpose of this partnership is to create a digital mental health research platform that leverages MHA's and TS's marketing platforms and consumer base to describe the characteristics of optimal engagement with digital mental health treatment, and to identify effective, personalized methods for enhancing motivation to engage in digital mental health treatment in order to improve mental health outcomes. These aims will be met by identifying and following at least 100,000 MHA and TS consumers over the next 4 years, applying machine learning approaches to characterize client engagement subtypes, and conducting micro-randomized trials to study the effectiveness of motivational enhancement strategies and response to digital mental health treatment.

Study Overview

Detailed Description

Digital mental health (DMH) is the use of technology to improve population well-being through rapid disease detection, outcome measurement, and care. Although several randomized clinical trials have demonstrated that digital mental health tools are highly effective, most consumers do not sustain their use of these tools. The field currently lacks an understanding of DMH tool engagement, how engagement is associated with well-being, and what practices are effective at sustaining engagement. In this partnership between Mental Health America (MHA), Talkspace (TS), and the University of Washington (UW), the investigators propose a naturalistic and experimental, theory-driven program of research with the aim of understanding 1) how consumer engagement in self-help and clinician-assisted DMH varies and what engagement patterns exist, 2) the association between patterns of engagement and important consumer outcomes, and 3) the effectiveness of personalized strategies for optimal engagement with DMH treatment.

This study will prospectively follow a large, naturalistic sample of MHA and TS consumers, and will apply machine learning, user-centered design strategies, and micro-randomized and sequential multiple assignment randomized (SMART) trials to address these aims. As is usual practice for both platforms, consumers will complete online mental health screening and assessment, and the investigators will be able to classify participants by disease status and symptom severity. The sample will not be limited by diagnosis or comorbidities. Participants will be 10 years of age and older and will enter the MHA and TS platforms prospectively over 4 years. For aim 1, participant data will be analyzed statistically to reveal differences in engagement and dropout across groups based on demographics, symptoms, and platform activity. For aim 2, the investigators will use supervised machine learning techniques to identify subtypes that are predictive of future engagement patterns, based on consumer demographics, engagement patterns with DMH, reasons for disengagement, the success of existing MHA and TS engagement strategies, and satisfaction with the DMH tools. Finally, building on the outcomes from aim 2, in aim 3a the investigators will conduct focus groups applying user-centered design strategies to identify and co-build potentially effective engagement strategies for particular client subtypes. The investigators will then conduct a series of micro-randomized and SMART trials to determine which theory-driven engagement strategies, co-designed with users, have the greatest fit with the subtypes developed under aim 2. The investigators will test the effectiveness of these strategies to 1) prevent disengagement among those who are more likely to have poor outcomes after disengagement, 2) improve movement from motivation to volition, and 3) enhance the optimal dose of DMH engagement and consequently improve mental health outcomes. These data will be analyzed using longitudinal mixed effects models with effect coding to estimate the effectiveness of each strategy on client engagement behavior and mental health outcomes.
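As an illustration of the kind of model named in the preceding paragraph, the following is a minimal sketch of a longitudinal mixed-effects model with effect (sum) coding for the randomized engagement strategy. The column names (participant_id, week, strategy, engagement_score) and the data file are hypothetical; this is not the study's registered analysis code.

```python
# Illustrative sketch only, not the study's analysis code: a longitudinal
# mixed-effects model with effect (sum) coding, using hypothetical column
# names (participant_id, week, strategy, engagement_score).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per participant per assessment week.
df = pd.read_csv("engagement_long.csv")

# C(strategy, Sum) applies effect coding, so each coefficient estimates that
# strategy's deviation from the grand mean rather than from a reference arm.
model = smf.mixedlm(
    "engagement_score ~ C(strategy, Sum) * week",  # fixed effects
    data=df,
    groups=df["participant_id"],  # random intercept for each participant
    re_formula="~week",           # random slope for time within participant
)
result = model.fit()
print(result.summary())
```

In this sketch the strategy-by-week interaction lets each strategy's effect vary over time; the actual models may differ in coding scheme, random-effects structure, covariates, and outcomes.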

The purpose of aim 3b is to identify effective engagement strategies, tailored to client needs and demographics, that increase MHA website engagement, and to better understand how self-help mental health resources can help people overcome negative thinking and support healthier thought processes. The investigators will compare engagement strategies tailored to the subtypes developed under aim 2 in order to study the mediated impact of engagement strategies on consumer mental health outcomes. The study team will determine whether engagement strategies targeted to a consumer's engagement subtype enhance engagement and, in turn, improve clinical outcomes. These targeted strategies will be compared to generic strategies that are not tailored to subtype.
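To make the mediation logic described above concrete, here is a minimal sketch assuming hypothetical variable names (tailored, engagement, phq9_change) and a simple parametric mediation model from statsmodels; it is illustrative only and not the study's registered analysis plan.

```python
# Hedged illustration, not the study's analysis plan: a simple mediation model
# in which engagement is tested as a mediator of the effect of strategy
# tailoring on symptom change. All variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

df = pd.read_csv("aim3b_data.csv")  # assumed columns: tailored (0/1),
                                    # engagement, phq9_change

# Mediator model: does tailoring increase engagement?
mediator_model = smf.ols("engagement ~ tailored", data=df)
# Outcome model: do tailoring and engagement predict symptom change?
outcome_model = smf.ols("phq9_change ~ tailored + engagement", data=df)

med = Mediation(outcome_model, mediator_model,
                exposure="tailored", mediator="engagement")
results = med.fit(n_rep=500)  # estimates indirect (mediated) and direct effects
print(results.summary())
```

The indirect effect in the summary corresponds to the portion of the tailored strategy's effect on the outcome that runs through engagement.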

All aim 3b activities will occur with MHA and are broken into two parts: (Study 1) a sequential multiple assignment randomized trial (SMART) and (Study 2) a longitudinal randomized controlled trial (RCT) of a Do-It-Yourself (DIY) tool. Study 1 will use a SMART to examine methods for optimizing engagement with MHA's website. Study 2 will recruit participants for a month-long longitudinal study in which they are randomly assigned to one of three groups, thrice-weekly use of a DIY tool with Artificial Intelligence (AI), thrice-weekly use of the DIY tool without AI, or a control group, to examine the efficacy of the digital tool for improving mental health functioning. The AI tool, which uses machine learning/Natural Language Processing (NLP) methods, was developed to personalize and tailor the intervention to improve engagement and completion outcomes. The study focuses on a specific, popular DIY tool that teaches cognitive restructuring. Pilot work showed that (1) engagement and completion rates on DIY tools can be low, and (2) a pilot AI tool had significantly higher engagement and completion rates. These differences may arise from AI support, User Interface/User Experience design differences, other factors, or a combination thereof. In addition, the efficacy of the digital tool for improving mental health functioning is unknown.
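As a rough illustration of the two-stage assignment flow for Study 1, the sketch below shows a deterministic, equal-allocation randomization keyed to a hypothetical web session ID. The arm labels mirror the conditions listed under Arms and Interventions (including all four content-page conditions); the hashing scheme and session IDs are assumptions for the example, not MHA's production randomization code.

```python
# Minimal, hypothetical sketch of two-stage randomization for Study 1 (not the
# study's production code). Arm labels mirror the Arms and Interventions
# section; the hashing scheme and session IDs are illustrative assumptions.
import hashlib

STAGE1_ARMS = [
    "Generic Response + Top Resources",
    "Generic Response + Tailored Resources by Demographics",
    "Generic Response + Tailored Resources by Desired Resources",
    "Tailored Response + Top Resources",
    "Tailored Response + Tailored Resources by Desired Resources",
]
STAGE2_ARMS = [
    "Embedded single-question DIY",
    "Embedded full DIY within content page",
    "Single question plus full DIY",
    "Content-as-usual",
]

def assign(session_id, stage, arms):
    """Deterministically map a web session to an arm with equal allocation."""
    digest = hashlib.sha256(f"{stage}:{session_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

if __name__ == "__main__":
    sid = "session-0001"
    print("Stage 1 (after screening):", assign(sid, "stage1", STAGE1_ARMS))
    print("Stage 2 (content page):", assign(sid, "stage2", STAGE2_ARMS))
```

Hashing the session ID gives a stable assignment if the same session reaches a randomization point more than once; a true random number generator with allocation logging could equally be used.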

Study Type

Interventional

Enrollment (Estimated)

10000

Phase

  • Not Applicable

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Contact

  • Name: Brittany Mosser, MSW
  • Phone Number: 206-616-2129
  • Email: bmosser@uw.edu

Study Locations

    • New York
      • New York, New York, United States, 10023
        • Groop Internet Platform DBA Talkspace
        • Recruiting
        • Principal Investigator: Derrick Hull, PhD
    • Virginia
      • Alexandria, Virginia, United States, 22314
        • Mental Health America
        • Recruiting
        • Principal Investigator: Theresa Nguyen, MSW
    • Washington
      • Seattle, Washington, United States, 98195
        • University of Washington
        • Active, not recruiting

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

14 years and older (Child, Adult, Older Adult)

Accepts Healthy Volunteers

Yes

Description

Inclusion Criteria:

  • Phase 3b Study 1 (SMART): Users of the MHA website, accessing the site from Internet Protocol (IP) addresses in the US, who have chosen to start the PHQ-9 depression screener in English. Participants must be able to read English.
  • Phase 3b Study 2 (DIY): PHQ-9 or GAD-7 score of 10 or greater; users of the MHA website; 18 years of age or older.

Exclusion Criteria:

  • Phase 3b Study 1 (SMART): None
  • Phase 3b Study 2 (DIY): Younger than 18 years old; speaks neither English nor Spanish; PHQ-9 score less than 10; located outside of the US; or has more than a little familiarity with the concept of cognitive reframing.

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Health Services Research
  • Allocation: Randomized
  • Interventional Model: Sequential Assignment
  • Masking: Triple

Arms and Interventions

Each entry below gives the participant group / arm description, followed by the intervention(s) / treatment(s) delivered in that arm.

Experimental: Study 1, Generic Response + Top Resources
  • Arm: The Generic Response + Top Resources condition will feature the response-as-usual on MHA's website and the four most frequently visited resource pages on MHA's website. Participants will be randomized into this condition; in addition, participants who do not respond to the demographics survey or the Next Steps survey will be placed in this condition.
  • Interventions:
    • Participants will be provided with the generic/current response to screening.
    • Participants will be provided with links to the top list of 4 MHA resources.

Experimental: Study 1, Generic Response + Tailored Resources by Demographics
  • Arm: This condition will feature the response-as-usual on MHA's website and resources tailored to two demographics. People who endorse being Lesbian, Gay, Bisexual, Transgender, or Queer (LGBTQ) will receive 4 resources associated with LGBTQ issues. People who endorse being 8-17, 18-24, 25-44, or 45+ years of age will receive the 4 resources most commonly used by people in those age groups. If someone enters both age and LGBTQ status, they will be provided with 2 resources tailored to age and 2 resources tailored to LGBTQ status, randomly chosen (a sketch of this selection rule appears after this list).
  • Interventions:
    • Participants will be provided with the generic/current response to screening.
    • Participants will be provided with links to MHA resources tailored to LGBTQ status and age range.

Experimental: Study 1, Generic Response + Tailored Resources by Desired Resources
  • Arm: This condition will feature the response-as-usual on MHA's website and 4 resources tailored to a survey question that asks participants what they would like to do next on the website after screening is complete (e.g., "Learn more about depression", "Take another mental health screening").
  • Interventions:
    • Participants will be provided with the generic/current response to screening.
    • Participants will be provided with links to MHA resources aligned with their expressed interest (e.g., additional screening, self-help tools).

Experimental: Study 1, Tailored Response + Top Resources
  • Arm: This condition will feature a response tailored to screening status (above or below criteria for depression) and expressed need for mental health support (e.g., "We're so glad to hear you're open to exploring how to improve your mental health. People who score with minimum or mild depression often notice that symptoms can get worse in the weeks after taking a Depression test."). Participants will also receive the 4 top resources.
  • Interventions:
    • Participants will be provided with links to the top list of 4 MHA resources.
    • Participants will be provided with a response to screening that is tailored to the match between screening score (above or below depression criteria) and expressed need for mental health support (yes or no).

Experimental: Study 1, Tailored Response + Tailored Resources by Desired Resources
  • Arm: This condition will feature a response tailored to screening status (above or below criteria for depression) and expressed need for mental health support; participants will also receive resources tailored to a survey question that asks what they would like to do next on the website after screening is complete (e.g., "Learn more about depression", "Take another mental health screening").
  • Interventions:
    • Participants will be provided with links to MHA resources aligned with their expressed interest (e.g., additional screening, self-help tools).
    • Participants will be provided with a response to screening that is tailored to the match between screening score (above or below depression criteria) and expressed need for mental health support (yes or no).

Experimental: Study 1, Embedded single-question DIY
  • Arm: Participants who visit content pages on the MHA website after completing screening will receive stage 2 randomization into one of three content page conditions. In the embedded single-question DIY condition, a single DIY question will be embedded within the content page.
  • Intervention: A single DIY question will be embedded within the content page.

Experimental: Study 1, Embedded full DIY within content page
  • Arm: Participants will be randomized into one of three content page conditions. In the embedded full DIY condition, the full DIY tool will be embedded within the content page.
  • Intervention: The full DIY tool will be embedded within the content page.

Experimental: Study 1, Single question plus full DIY
  • Arm: Participants will be randomized into one of three content page conditions. In the single question plus full DIY condition, a single question plus the full DIY tool will be embedded within the content page.
  • Intervention: A single question plus the full DIY tool will be embedded within the content page.

Experimental: Study 1, Content-as-usual
  • Arm: The content page will not include any embedded DIY tool.
  • Intervention: The content page will not include any embedded DIY tool.

Experimental: Study 2, Control
  • Arm: Participants in the DIY control group will receive psychoeducation materials in week 0 (W0). They will view content as usual (no DIY) and will receive surveys from weeks 1 to 4 (W1-W4) and follow-up surveys in week 5 (W5) and at the end of week 8 (W8).
  • Intervention: Participants in the DIY control group will receive psychoeducation materials in W0. They will view content as usual (no DIY).

Experimental: Study 2, DIY tool without AI
  • Arm: Participants in the DIY tool without AI group will be instructed to use the DIY tool 3 times a week. They will receive surveys from W1 to W4 and follow-up surveys in W5 and at the end of W8.
  • Intervention: Participants in the DIY tool without AI group will be instructed to use the DIY tool 3 times a week.

Experimental: Study 2, DIY tool with AI
  • Arm: Participants in the DIY tool with AI group will be instructed to use the DIY tool with AI 3 times a week. They will receive surveys from W1 to W4 and follow-up surveys in W5 and at the end of W8.
  • Intervention: Participants in the DIY tool with AI group will be instructed to use the DIY tool with AI 3 times a week.
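As referenced in the Tailored Resources by Demographics arm above, the following is a hypothetical sketch of the 4-resource selection rule (4 resources by LGBTQ status, 4 by age group, or 2 + 2 chosen at random when both are provided). The resource names and helper function are illustrative assumptions, not MHA's implementation.

```python
# Hypothetical sketch of the tailored-resource selection rule for the
# "Generic Response + Tailored Resources by Demographics" arm. Resource lists
# and the helper itself are illustrative, not MHA's implementation.
import random
from typing import List, Optional

LGBTQ_RESOURCES = ["lgbtq-resource-1", "lgbtq-resource-2",
                   "lgbtq-resource-3", "lgbtq-resource-4"]
AGE_GROUP_RESOURCES = {
    "8-17":  ["teen-resource-1", "teen-resource-2", "teen-resource-3", "teen-resource-4"],
    "18-24": ["ya-resource-1", "ya-resource-2", "ya-resource-3", "ya-resource-4"],
    "25-44": ["adult-resource-1", "adult-resource-2", "adult-resource-3", "adult-resource-4"],
    "45+":   ["older-resource-1", "older-resource-2", "older-resource-3", "older-resource-4"],
}

def tailored_resources(age_group: Optional[str], lgbtq: bool) -> List[str]:
    """Return 4 resources tailored to demographics, splitting 2 + 2 when both apply."""
    if lgbtq and age_group:
        return (random.sample(AGE_GROUP_RESOURCES[age_group], 2)
                + random.sample(LGBTQ_RESOURCES, 2))
    if lgbtq:
        return LGBTQ_RESOURCES
    if age_group:
        return AGE_GROUP_RESOURCES[age_group]
    return []

print(tailored_resources("18-24", lgbtq=True))
```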

What is the study measuring?

Primary Outcome Measures

  • Study 1: Mental Health America Engagement
    • Measure Description: Time spent on webpages after screening results are provided
    • Time Frame: Through active web session, an average of 10 minutes
  • Study 1: Mental Health America Engagement
    • Measure Description: Number of articles read after screening results are provided
    • Time Frame: Through active web session, an average of 10 minutes
  • Study 1: Mental Health America Disengagement
    • Measure Description: Proportion of users leaving the website after being shown their assigned intervention
    • Time Frame: Through active web session, an average of 10 minutes
  • Study 1: DIY Completion Rate
    • Measure Description: Percent completion of the DIY tool
    • Time Frame: Week 1, Week 2, Week 3, Week 4
  • Study 2: Engagement (Dosage)
    • Measure Description: Number of times using the DIY tool
    • Time Frame: Week 1, Week 2, Week 3, Week 4
  • Study 2: Tool Use Helpfulness
    • Measure Description: Measured using one item ("Using the tool has been helpful to me in dealing with my negative thoughts and emotions") on a scale of 1 (strongly agree) to 4 (strongly disagree). Note: also includes 5 qualitative questions, not included here.
    • Time Frame: Week 1, Week 2, Week 3, Week 4, Week 8
  • Study 2: Emotion Mechanisms
    • Measure Description: Emotion mechanisms are measured using a 16-item scale for the control group and a 17-item scale for the DIY-without-AI and DIY-with-AI groups, on a scale of 1-10, with higher scores meaning more intensity for each specific emotion. At Week 8, emotion mechanisms are measured using 22 items on a scale of 1-10. Scores will be assessed using mean item-level scores. Participants are asked about their emotions (e.g., confidence, anxiety, comfort, hopefulness, motivation) regarding their ability to address negative thoughts, for example identifying thinking traps, reframing negative thoughts, committing to regularly engaging in the reframing practice, and staying motivated to keep improving their reframing skills. For the four items that ask about feeling anxious or intimidated, higher scores indicate greater anxiety and intimidation; for the remaining items, higher scores indicate better emotion mechanisms.
    • Time Frame: Week 1, Week 2, Week 3, Week 4, Week 8
  • Study 2: Tool Mechanisms
    • Measure Description: Tool mechanisms are measured using 4 items about reframing. The items measure relatability/believability, helpfulness, memorability, and learnability on a scale of 1 (strongly agree) to 4 (strongly disagree). Scores will be assessed using mean item-level scores, with higher scores indicating better outcomes after using a DIY tool.
    • Time Frame: Week 1, Week 2, Week 3, Week 4
  • Study 2: Tool Mechanisms (Part 2)
    • Measure Description: Tool mechanisms (Part 2) are measured using 2 items about belief change and emotion change on a scale of 1 to 10. Scores will be assessed using mean item-level scores, with higher scores indicating stronger beliefs and emotions after completing a DIY tool. Note: also includes 1 qualitative question, not included here.
    • Time Frame: Week 1, Week 2, Week 3, Week 4
  • Study 2: Hopefulness
    • Measure Description: Hopefulness is measured using a single item ("How hopeful do you feel about the future?") on a scale of 1 to 10.
    • Time Frame: Week 1, Week 2, Week 3, Week 4, Week 8
  • Study 2: DIY Skill Use: Competencies of Cognitive Therapy Scale - Self-Report
    • Measure Description: The Competencies of Cognitive Therapy Scale - Self-Report will be used to ask participants how much they have used specific strategies to cope with negative moods, primarily negative automatic thoughts, in the last 4 weeks. The current study will use items 20, 28, 21, 6, 24, and 11; all items are rated on a scale of 1 (not at all) to 7 (completely), with higher scores indicating better outcomes. Scores will be assessed using mean item-level scores. These items ask participants about the following strategies: re-evaluating the situation; taking time to step back from a situation and considering that their negative thoughts might be inaccurate; actively working to develop more rational views; having a specific action plan of things they could do to cope; taking time to consider other factors that may have been involved; and taking note of what they were thinking and working to develop a more balanced view. Note: also includes 3 qualitative questions, not included here.
    • Time Frame: Week 1, Week 2, Week 3, Week 4, Week 8

Secondary Outcome Measures

  • Study 2: Patient Health Questionnaire (PHQ-9)
    • Measure Description: The Patient Health Questionnaire (PHQ-9) consists of 9 depression items, each associated with a Diagnostic and Statistical Manual (DSM) symptom of depression. Participants rate how often they have experienced each symptom over the last two weeks, with severity ratings of 0 to 3. Total scores range from 0 to 27, with higher scores indicating greater severity of depression symptoms. It is one of the few measures that is brief (it takes less than one minute to administer) and has been found to have excellent sensitivity to change over time. Used for screening and will be used as a moderator. (A minimal scoring sketch follows this list.)
    • Time Frame: Baseline, Week 1, Week 2, Week 3, Week 4, Week 8
  • Study 2: Generalized Anxiety Disorder (GAD-7)
    • Measure Description: A 7-item screener for Generalized Anxiety Disorder (GAD). Participants rate how much anxiety they have experienced in the last two weeks on a scale of 0 to 3. Total scores range from 0 to 21, with higher scores indicating greater severity of anxiety symptoms. The scale is a valid screener for GAD. Used for screening and will be used as a moderator.
    • Time Frame: Baseline, Week 1, Week 2, Week 3, Week 4, Week 8
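As referenced in the PHQ-9 entry above, the following is a minimal sketch of how PHQ-9 and GAD-7 totals are computed and how the score-of-10 threshold used in the Study 2 inclusion criteria would be checked; the item responses shown are hypothetical.

```python
# Illustrative scoring sketch (hypothetical responses). Each PHQ-9 item is
# rated 0-3 (total 0-27); each GAD-7 item is rated 0-3 (total 0-21). A total
# of 10 or greater on either screener meets the Study 2 inclusion threshold.
phq9_responses = [2, 1, 3, 2, 1, 0, 2, 1, 0]  # 9 items
gad7_responses = [1, 2, 1, 0, 1, 2, 1]        # 7 items

phq9_total = sum(phq9_responses)  # 12
gad7_total = sum(gad7_responses)  # 8

eligible = phq9_total >= 10 or gad7_total >= 10
print(f"PHQ-9: {phq9_total}, GAD-7: {gad7_total}, meets threshold: {eligible}")
```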

Other Outcome Measures

  • Satisfaction with Talkspace
    • Measure Description: Satisfaction with Talkspace clinical services; a measure created by TS that asks whether goals were met.
    • Time Frame: Treatment completion (up to 9 weeks from treatment engagement for the majority of Talkspace consumers)
  • Satisfaction with outside services
    • Measure Description: Satisfaction with outside services (administered to MHA consumers).
    • Time Frame: Treatment completion (an average of 2-4 weeks from treatment engagement)
  • Prodromal Questionnaire - Brief Version
    • Measure Description: A validated measure of symptoms indicating risk for psychosis, with 21 dichotomous-response items. Used for screening and will be used as a moderator.
    • Time Frame: Baseline
  • Brief Bipolar Test
    • Measure Description: A brief, validated self-report instrument designed to indicate bipolar symptoms. Used for screening and will be used as a moderator.
    • Time Frame: Baseline
  • Stanford-Washington University Eating Disorder (SWED) Screen
    • Measure Description: A brief, 11-item validated self-report instrument designed to indicate eating disorders. Used for screening and will be used as a moderator.
    • Time Frame: Baseline
  • Primary Care - Post Traumatic Stress Disorder Screen (PC-PTSD)
    • Measure Description: A brief, 4-item screen for PTSD in the primary care population. Used for screening and will be used as a moderator.
    • Time Frame: Baseline
  • CAGE-AID
    • Measure Description: A brief, 4-item screen for alcohol and other drug problems. Used for screening and will be used as a moderator.
    • Time Frame: Baseline
  • Healthy Workplace Survey
    • Measure Description: A brief measure created by MHA to examine workplace mental health, assessing the psychological safety, fairness, and healthiness of the work environment. Will be used as a moderator.
    • Time Frame: Baseline
  • Duke Social Support Scale
    • Measure Description: A validated, brief, 11-item self-report measure of the amount of social support a person feels. Will be used as a moderator.
    • Time Frame: Baseline

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Investigators

  • Principal Investigator: Michael Pullmann, PhD, University of Washington

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Actual)

October 15, 2021

Primary Completion (Estimated)

August 1, 2024

Study Completion (Estimated)

November 30, 2024

Study Registration Dates

First Submitted

August 5, 2020

First Submitted That Met QC Criteria

August 7, 2020

First Posted (Actual)

August 11, 2020

Study Record Updates

Last Update Posted (Actual)

February 1, 2024

Last Update Submitted That Met QC Criteria

January 29, 2024

Last Verified

January 1, 2024

More Information

Terms related to this study

Other Study ID Numbers

  • STUDY00010958
  • 1R01MH125179-01 (U.S. NIH Grant/Contract)

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No

