As discussed in #74 and #41, for future improvements in PyPREP it would be handy to have some sort of real-world test battery that measures how well PyPREP does its job. That way we can better tune PREP's default settings, test proposed changes to see if they actually improve the resulting signal/noise ratio, and have a cool way of empirically showing off the utility of the software (and comparing it to other approaches). This wouldn't be something to add to CI as it would almost certainly be pretty slow, but rather a script or set of scripts to run locally for the sake of testing.
One way we could do this is with BCI datasets and algorithms, where the quality of data cleaning is measured by how much PyPREP improves (or doesn't) the classification accuracy of a range of different BCI algorithms. A good candidate for facilitating this is moabb, which is a package designed specifically for large-scale comparisons of EEG classification algorithms across a range of datasets (see section 23 of the Python notebook here for a good illustration of what I mean). I hacked together a basic script for testing PyPREP against the BCI2000 dataset with this package a while ago but had to drop it for other projects; I'll report back here soon with some preliminary results!
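For concreteness, here's roughly the kind of comparison I have in mind, as an untested sketch using plain MNE + scikit-learn rather than moabb itself: classify left- vs. right-hand motor imagery on one BCI2000 subject with and without PyPREP, and compare cross-validated accuracy. The subject/run selection, epoch window, 7–30 Hz band-pass, 60 Hz line frequency, and CSP + LDA classifier are all arbitrary placeholders, and the `load_raw`/`classify` helpers are just for illustration:

```python
# Untested sketch: compare CSP + LDA accuracy on BCI2000 motor imagery
# with and without PyPREP cleaning. All parameter choices are placeholders.
import numpy as np
import mne
from mne.datasets import eegbci
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from pyprep import PrepPipeline


def load_raw(subject=1, runs=(4, 8, 12)):
    """Load and concatenate motor imagery runs from the BCI2000 dataset."""
    fnames = eegbci.load_data(subject, runs)
    raw = mne.concatenate_raws([mne.io.read_raw_edf(f, preload=True) for f in fnames])
    eegbci.standardize(raw)  # standardize channel names
    raw.set_montage(mne.channels.make_standard_montage("standard_1005"))
    return raw


def classify(raw):
    """Cross-validated CSP + LDA accuracy for left- vs. right-hand imagery."""
    events, event_id = mne.events_from_annotations(raw, event_id=dict(T1=1, T2=2))
    epochs = mne.Epochs(raw, events, event_id, tmin=0.5, tmax=2.5,
                        baseline=None, preload=True)
    X = epochs.get_data()
    y = epochs.events[:, -1]
    clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
    return cross_val_score(clf, X, y, cv=5).mean()


raw = load_raw()
acc_no_prep = classify(raw.copy().filter(7., 30.))

# Run PyPREP on the uncleaned raw, then classify the cleaned data the same way.
prep_params = {
    "ref_chs": "eeg",
    "reref_chs": "eeg",
    "line_freqs": np.arange(60, raw.info["sfreq"] / 2, 60),
}
prep = PrepPipeline(raw.copy(), prep_params, raw.get_montage())
prep.fit()
acc_prep = classify(prep.raw.filter(7., 30.))

print(f"Accuracy without PREP: {acc_no_prep:.3f}")
print(f"Accuracy with PREP:    {acc_prep:.3f}")
```

The moabb version would instead loop this over many subjects, datasets, and classification pipelines so that any improvement (or lack thereof) from PyPREP isn't just noise from one subject or one algorithm.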
There's also the .ipynb for comparing Python EEG cleaning tools in a similar manner here, which @sappelhoff shared a while ago in the Zenodo issue. It might be worth adding PyPREP to that notebook to see how it stacks up against other Python methods for the same purpose.