Modeling electroencephalography waveforms with semi-supervised deep belief nets: fast classification and anomaly measurement

D F Wulsin, J R Gupta, R Mani, J A Blanco, B Litt

Abstract

Clinical electroencephalography (EEG) records vast amounts of complex human data yet is still reviewed primarily by human readers. Deep belief nets (DBNs) are a relatively new type of multi-layer neural network commonly tested on two-dimensional image data but rarely applied to time-series data such as EEG. We apply DBNs in a semi-supervised paradigm to model EEG waveforms for classification and anomaly detection. DBN performance was comparable to that of standard classifiers on our EEG dataset, and classification time was 1.7-103.7 times faster than that of the other high-performing classifiers. We demonstrate how the unsupervised step of DBN learning produces an autoencoder that can naturally be used for anomaly measurement. We compare the use of raw, unprocessed data (a rarity in automated physiological waveform analysis) with hand-chosen features and find that raw data produce comparable classification and better anomaly measurement performance. These results indicate that DBNs and raw data inputs may be more effective for online automated EEG waveform recognition than other common techniques.
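As a concrete illustration of the anomaly-measurement idea above, the sketch below scores an EEG window by the root-mean-square error of its autoencoder reconstruction. The encode and decode callables stand in for the two halves of a trained DBN autoencoder; they are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: anomaly score as autoencoder reconstruction RMSE.
# `encode` and `decode` are hypothetical stand-ins for the encoding and
# decoding halves of a trained DBN autoencoder (see Figure 2(b)).
import numpy as np

def anomaly_score(window, encode, decode):
    """RMSE between an input window and its DBN reconstruction.

    Higher scores indicate waveforms the autoencoder models poorly,
    i.e. more anomalous activity.
    """
    reconstruction = decode(encode(window))
    return np.sqrt(np.mean((window - reconstruction) ** 2))
```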

Figures

Figure 1
Four representative samples from each class of our EEG waveforms dataset. Abbreviations: GPED, generalized periodic epileptiform discharge; PLED, periodic lateralized epileptiform discharge. Horizontal scalebars show 100 ms, and vertical scalebars show 100 μV.
Figure 2
(a) A Restricted Boltzmann Machine (RBM) contains hidden layer units h_j connected to the visible layer units v_i with symmetric weights W, along with hidden layer biases b and visible layer biases c. (b) A Deep Belief Net (DBN) autoencoder can be initialized by stacking sequentially trained RBMs on top of each other and then “unrolling” the weights to form a feed-forward network. Here, the first three hidden layers encode successive representations of the data, and the last three decode the preceding representations to form a reconstruction of the input. (c) A DBN classifier is initialized from either stacked RBMs or the first half of a DBN autoencoder to form a feed-forward network. A labels layer is stacked above the top hidden layer to produce the label output.
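To make the structure in panels (a) and (b) concrete, here is a minimal NumPy sketch of an RBM trained with one-step contrastive divergence (CD-1), the standard greedy pre-training step for stacking RBMs into a DBN. The layer sizes, learning rate, and single Gibbs step are illustrative assumptions, not the paper's exact training procedure.

```python
# Minimal NumPy sketch of an RBM with CD-1 training and greedy stacking,
# mirroring the structure in Figure 2(a) and (b). Hyperparameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # symmetric weights W
        self.b = np.zeros(n_hidden)   # hidden biases b
        self.c = np.zeros(n_visible)  # visible biases c

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def cd1_step(self, v0, lr=0.05):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one step of Gibbs sampling to reconstruct.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Approximate gradient update (contrastive divergence).
        batch = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b += lr * (h0 - h1).mean(axis=0)
        self.c += lr * (v0 - v1).mean(axis=0)

# Greedy layer-wise stacking: train each RBM on the hidden activations of
# the one below, then "unroll" by using the transposed weights as the
# decoding half of the autoencoder, as in panel (b).
```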
Figure 3
Average F1 classification performance on the raw256, feat16, and pca20 datasets for each classifier, with standard deviation error bars.
Figure 4
Median time for each classifier to test 1 second of EEG data (17 channels) for the raw256, feat16, and pca20 datasets. Note that the y-axis scale is in powers of 10.
Figure 5
Histogram estimates of the class-conditional probability density functions of the DBN reconstruction error (RMSE) for the background (solid) and non-background (hatched) classes in the (a) feat16 and (b) raw256 datasets.
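The reconstruction error plotted here is, under the usual definition of RMSE, the root-mean-square difference between an input vector v and its DBN reconstruction v̂:

```latex
\mathrm{RMSE}(v) = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( v_i - \hat{v}_i \right)^2 }
```

where n is the number of visible units (presumably 256 raw samples for the raw256 inputs and 16 hand-chosen features for feat16, judging by the dataset names).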
Figure 6
Anomaly visualizations using DBN RMSE on three representative 10-second clips of 10 EEG channels. The color behind a point on a given channel represents the RMSE of a 1-second window (62.5 ms overlap between successive windows) centered around that point. More anomalous areas of the signal have higher RMSE and appear redder. Samples that a human reviewer independently labeled as non-background are boxed. The top sample shows examples of eye-blink artifact; the middle shows GPEDs and triphasic waves; and the bottom shows high-amplitude spikes. The height of the black boxes represents 90 μV.
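A hedged sketch of how such per-channel color values could be computed follows, interpreting the 62.5 ms figure as the shift between successive overlapping 1-second windows; the 256 Hz sampling rate and the reconstruct helper (a trained DBN autoencoder's round trip) are assumptions for illustration.

```python
# Sketch of the windowed anomaly scoring behind Figure 6: RMSE of
# overlapping 1-second windows slid across one EEG channel.
# `reconstruct` is a hypothetical encode-then-decode pass through a
# trained DBN autoencoder; fs=256 Hz is an assumed sampling rate.
import numpy as np

def windowed_rmse(channel, reconstruct, fs=256, win_s=1.0, step_s=0.0625):
    win, step = int(win_s * fs), int(step_s * fs)
    scores = []
    for start in range(0, len(channel) - win + 1, step):
        segment = channel[start:start + win]
        err = segment - reconstruct(segment)
        scores.append(np.sqrt(np.mean(err ** 2)))
    # One score per window; each maps to a color centered on its window.
    return np.asarray(scores)
```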

Source: PubMed
