Detecting sleep using heart rate and motion data from multisensor consumer-grade wearables, relative to wrist actigraphy and polysomnography

Daniel M Roberts, Margeaux M Schade, Gina M Mathew, Daniel Gartenberg, Orfeu M Buxton

Abstract

Study objectives: Multisensor wearable consumer devices that collect multiple data streams, such as heart rate and motion, for evaluating sleep in the home environment are increasingly common. However, the validity of such devices for sleep assessment has not been directly compared to alternatives such as wrist actigraphy or polysomnography (PSG).

Methods: Eight participants each completed four nights in a sleep laboratory, equipped with PSG and several wearable devices. Registered polysomnographic technologist-scored PSG served as ground truth for sleep-wake state. Wearable devices providing sleep-wake classification data were compared to PSG at both an epoch-by-epoch and night level. Data from multisensor wearables (Apple Watch and Oura Ring) were compared to data available from electrocardiography and a triaxial wrist actigraph to evaluate the quality and utility of heart rate and motion data. Machine learning methods were used to train and test sleep-wake classifiers, using data from consumer wearables. The quality of classifications derived from devices was compared.

Results: For epoch-by-epoch sleep-wake performance, research devices ranged in d' between 1.771 and 1.874, with sensitivity between 0.912 and 0.982, and specificity between 0.366 and 0.647. Data from multisensor wearables were strongly correlated at an epoch-by-epoch level with reference data sources. Classifiers developed from the multisensor wearable data ranged in d' between 1.827 and 2.347, with sensitivity between 0.883 and 0.977, and specificity between 0.407 and 0.821.
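The d' (d-prime) values above come from signal detection theory and can be derived from the reported sensitivity and specificity. A minimal sketch, treating sleep as the "signal" so that sensitivity is the hit rate and (1 - specificity) is the false-alarm rate (the exact correction for rates of 0 or 1 used in the paper is not stated here and is an assumption left out):

```python
from statistics import NormalDist

def d_prime(sensitivity: float, specificity: float) -> float:
    """Signal-detection d' = z(hit rate) - z(false-alarm rate).

    With sleep as the target class, sensitivity is the hit rate and
    (1 - specificity) is the false-alarm rate. Inputs must lie strictly
    between 0 and 1 for the inverse normal CDF to be finite.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(sensitivity) - z(1.0 - specificity)

# Example: a device with sensitivity 0.912 and specificity 0.647
# yields d' of about 1.73, in the same range as the devices reported.
print(round(d_prime(0.912, 0.647), 3))
```

Note how a high sensitivity paired with modest specificity, the typical pattern for sleep-wake devices that classify most epochs as sleep, still yields a moderate d'.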

Conclusions: Data from multisensor consumer wearables are strongly correlated with reference devices at the epoch level and can be used to develop epoch-by-epoch models of sleep-wake rivaling existing research devices.

Keywords: actigraphy; artificial intelligence; big data; machine learning; polysomnography; smartphone; wearable.

© Sleep Research Society 2020. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

Figures

Figure 1.
The nested cross-validation procedure used.
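As a generic illustration of nested cross-validation (the specific folds, hyperparameters, and models in the study are not reproduced here), the procedure can be sketched as an outer leave-one-night-out loop for performance estimation wrapped around an inner loop for hyperparameter selection. `train_fn` and `score_fn` are hypothetical placeholders for the study's model fitting and evaluation:

```python
def nested_cv(data_by_night, param_grid, train_fn, score_fn):
    """Outer loop: hold out one night at a time for testing.
    Inner loop: leave-one-night-out over the remaining nights to
    pick hyperparameters, so test data never influences selection."""
    nights = list(data_by_night)
    outer_scores = []
    for held_out in nights:
        inner = [n for n in nights if n != held_out]
        best_params, best_score = None, float("-inf")
        for params in param_grid:
            # Inner validation: average score across inner folds
            scores = []
            for val_night in inner:
                train = [data_by_night[n] for n in inner if n != val_night]
                model = train_fn(train, params)
                scores.append(score_fn(model, data_by_night[val_night]))
            avg = sum(scores) / len(scores)
            if avg > best_score:
                best_params, best_score = params, avg
        # Refit on all inner nights with the chosen hyperparameters,
        # then score once on the untouched held-out night
        final = train_fn([data_by_night[n] for n in inner], best_params)
        outer_scores.append(score_fn(final, data_by_night[held_out]))
    return outer_scores
```

The key property is that each held-out night contributes neither to model fitting nor to hyperparameter choice for its own evaluation, giving an unbiased estimate of generalization to new nights.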
Figure 2.
Comparison of PSG-derived and device-derived night-level sleep metrics. Points depict the PSG and device-derived values for each night of data within the study (except nights previously described as excluded). Lines depict the linear fit between PSG value and device value for each device.
Figure 3.
Bland-Altman [49] plots comparing PSG-derived sleep metrics to the difference between each device-derived sleep metric and the PSG-derived sleep metric. Points depict difference values for each night of data within the study (except nights previously described as excluded).
Figure 4.
Mean receiver operating characteristic (ROC) curves, for each classifier, both with and without night-level normalization. Each line depicts the point-by-point average of ROC across nights classified. Classifiers depicted are without class oversampling.
Figure 5.
Comparison of PSG- and classifier-derived night-level sleep metrics. Points depict the PSG- and classifier-derived values for each night of data within the study (except nights previously described as excluded). Lines depict the linear fit between PSG and classifier-derived values.
Figure 6.
Bland-Altman [49] plots comparing PSG-derived sleep metrics to the difference between each classifier-derived sleep metric and the PSG-derived sleep metric. Points depict difference values for each night of data within the study (except nights previously described as excluded). Solid lines depict the mean bias between PSG and device-derived values, while dashed lines depict the 95% confidence interval for the bias across nights.
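The Bland-Altman quantities plotted above can be sketched as follows. This is a minimal illustration, assuming the classic formulation in which the dashed lines are the 95% limits of agreement (bias ± 1.96 SD of the differences); the paper's exact interval computation may differ:

```python
from math import sqrt
from statistics import mean, stdev

def bland_altman(psg_values, device_values):
    """Mean bias and 95% limits of agreement between a device and PSG.

    Differences are taken as device minus PSG, so a positive bias means
    the device overestimates the PSG-derived metric (e.g. total sleep time).
    """
    diffs = [d - p for p, d in zip(psg_values, device_values)]
    bias = mean(diffs)                 # solid line: mean bias
    sd = stdev(diffs)                  # spread of night-level differences
    lower = bias - 1.96 * sd           # dashed lines: 95% limits
    upper = bias + 1.96 * sd
    return bias, (lower, upper)
```

Plotting the per-night differences against the PSG value, as in the figure, also reveals whether device error grows with the magnitude of the metric (proportional bias).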

Source: PubMed
