Reviewer has chosen not to be Anonymous
Overall Impression: Good
Suggested Decision: Undecided
Technical Quality of the paper: Good
Presentation: Good
Reviewer's confidence: Medium
Significance: Moderate significance
Background: Comprehensive
Novelty: Limited novelty
Data availability: All used and produced data (if any) are FAIR and openly available in established data repositories
Length of the manuscript: The length of this manuscript is about right
Summary of paper in a few sentences:
This position paper raises the need for a new framework to deal with data streams that are both non-i.i.d. and time-dependent, and proposes Time-Evolving Analytics, which combines the advantages of Streaming Machine Learning and Time-series Analytics. The authors compare many popular classification methods using the Electricity dataset.
Reasons to accept:
This paper offers a good overview of many different types of machine learning methods.
Reasons to reject:
I found a few things confusing that need clarification; see the detailed comments below.
Nanopublication comments:
Further comments:
1. As a statistician (forgive my ignorance), I found much of the terminology unfamiliar. Perhaps some definitions could be added? E.g., Incremental Learning, Streaming Machine Learning, concept drift, and ADWIN (with a reference?).
2. Table 1: what is the difference between an 'evolving data stream' and a 'not i.i.d. data stream'? If they are the same, should the last column of the table be deleted? SML is said to 'relax the assumption that data points are i.i.d.', yet there is a cross under 'not i.i.d. data stream'.
3. The Desiderata and R1-R6 seem repetitive. Linking the types of data in Table 1, the shared needs R1-R6 on page 2, and the framework's Desiderata in Section 6.1, I read the correspondence as:
R2 = time-dependent = Learning Sequences
R5 = i.i.d. data stream = Stateful learning
R4 = not i.i.d. data stream = Graceful forgetting + Selective remembering + Adaptive learning?
R1 = Problem agnostic
R3 = Forecasting alternatives
R6 = No task boundaries
Did I understand correctly? Then R2, R4, and R5 relate to data types while R1, R3, and R6 do not. The top of page 3 also summarizes which data types each model cannot deal with, hence the related R2, R4, and R5, but it does not mention whether ML and Incremental Learning can deal with R2, or whether TSA can deal with R5. Wouldn't it be clearer to simply put R2, R4, and R5 next to the corresponding data types in Table 1? I think it would be very useful to explain these connections and differences.
4. More description of the dataset is needed, e.g. of the covariates used (nswdemand, nswprice, vicprice, transfer).
5. At the beginning of Section 4 (somewhere on p. 6), it should be mentioned that the TSA methods only use the label and that they are online TSA methods.
6. P6: I don't understand why the last 48 samples of each segment are used for testing the ML methods, while 5-fold distributed prequential cross-validation is used for SML.
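(For context on my question: my understanding of prequential, i.e. test-then-train, evaluation is sketched below; the `predict`/`learn` method names are illustrative and not taken from the paper's code.)

    # Minimal sketch of prequential (test-then-train) evaluation.
    # `model` is any incremental classifier; the predict/learn method
    # names are illustrative, not the paper's actual code.
    def prequential_accuracy(model, stream):
        correct, total = 0, 0
        for x, y in stream:            # stream yields (features, label) pairs
            y_pred = model.predict(x)  # 1) first test on the incoming sample
            correct += int(y_pred == y)
            total += 1
            model.learn(x, y)          # 2) then train on that same sample
        return correct / total if total else 0.0

Under this protocol every sample is used for testing before training, so there is no held-out block like the last 48 samples used for the ML methods; an explanation of why the two protocols yield comparable results would help.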
7. Fig. 2: I think it would be better to show separate boxplots for the three types of SML methods, instead of mixing and sorting them all together.
8. Fig. 4: the plot (VFDT vs. others) doesn't match the caption (NC vs. others).
9. Concept drift is a new term for me; I think it is similar to what we call 'change point detection' in statistics. Perhaps some connections/comparisons could be made.
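(To illustrate the connection as I understand it: a drift detector such as ADWIN monitors a univariate stream and flags a change in its distribution, much like an online change point detector. A minimal sketch, assuming the `river` library; the exact API may differ across versions.)

    # Minimal sketch: ADWIN used as an online change-point-style detector,
    # assuming the `river` library (API may differ across versions).
    import random
    from river import drift

    detector = drift.ADWIN()
    # Synthetic stream with a mean shift (a change point) at t = 500.
    stream = [random.gauss(0, 1) for _ in range(500)] + \
             [random.gauss(3, 1) for _ in range(500)]

    for t, x in enumerate(stream):
        detector.update(x)
        if detector.drift_detected:
            print(f"drift (change point) detected around t={t}")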
10. Fig. 5: 'horizontal' should be 'vertical'.
11. P7, L27: should be SWT10_ARF, SWT20_ARF.
12. P8, L13: 'performs as the baseline' -> 'performs similarly to the baseline'?
Meta-Review by Editor
Submitted by Tobias Kuhn
The reviewers have provided several suggestions to improve the manuscript. In particular, the clarity of the paper can be improved by clearly introducing the problem that is discussed, introducing technical terms instead of relying on prior knowledge, and removing jargon; the technical and scientific challenges that need to be addressed should be described more clearly. Additionally, the code and data underlying the results shown in the paper should be made available to ensure reproducibility.
Robert Hoehndorf (https://orcid.org/0000-0001-8149-5890)