Reviewer has chosen not to be Anonymous
Overall Impression: Reject
Technical Quality of the paper: Incomplete or inappropriate
Novelty: Lack of novelty
Data availability: All used and produced data (if any) are FAIR and openly available in established data repositories
Length of the manuscript: The length of this manuscript is about right
Summary of paper in a few sentences (summary of changes and improvements for
second round reviews):
The paper proposes an approach to forecast the spread of the 2019-nCoV epidemic. The authors' goal is to forecast the number of cases (infected) and the number of deaths. To this end, they propose a two-step approach: first, "augment" the dataset using linear regression, which gives them a few more days of data; second, fit a couple of ML models (MLP, random forest) and some forecasting models (ARIMA, ETS) to the data in order to obtain forecasts. The forecasts of the different models are then combined, using RMSE as the weighting metric, to produce the final forecasts. The whole approach is based on less than a month's worth of data, between January 21st and February 14th, 2020.
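To make sure I have understood the pipeline correctly, here is a minimal sketch of the two-step approach as I read it. All data are synthetic, and the two stand-in forecasters below are placeholders of my own, not the authors' MLP/RF/ARIMA/ETS implementations; only the structure (linear "augmentation", then inverse-RMSE weighting of model forecasts) reflects the paper.

```python
# Hypothetical sketch of the paper's two-step pipeline (my reconstruction).
# Synthetic data; the "models" are toy placeholders, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(25.0)                              # ~25 daily observations
cases = np.exp(0.18 * days) + rng.normal(0, 1.0, days.size)

# Step 1: "augmentation" -- extend the series by linear extrapolation of the tail.
slope, intercept = np.polyfit(days[-5:], cases[-5:], 1)
new_days = np.arange(25.0, 28.0)
augmented = np.concatenate([cases, slope * new_days + intercept])

# Step 2: fit several forecasters and weight them by inverse RMSE on a holdout.
train, test = augmented[:-3], augmented[-3:]

def persistence(series, h):                          # repeat the last value
    return np.full(h, series[-1])

def linear_trend(series, h):                         # extrapolate a fitted line
    t = np.arange(series.size)
    b, a = np.polyfit(t, series, 1)
    return a + b * np.arange(series.size, series.size + h)

preds = {f.__name__: f(train, test.size) for f in (persistence, linear_trend)}
rmse = {k: np.sqrt(np.mean((p - test) ** 2)) for k, p in preds.items()}
total = sum(1.0 / e for e in rmse.values())
weights = {k: (1.0 / e) / total for k, e in rmse.items()}
combined = sum(w * preds[k] for k, w in weights.items())
print(weights, combined)
```

Note that in this toy version the holdout points are themselves a linear extrapolation, so the weighting naturally favours whichever model best reproduces that extrapolation, which already hints at the circularity issue discussed under "Reasons to reject".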
Reasons to accept:
I unfortunately do not see any reason to recommend this paper for publication (see below).
Reasons to reject:
There are several deep issues with the paper:
* Overall, fitting statistical/ML models to time-series data to forecast epidemic trajectories is a very risky exercise. An epidemic's trajectory will typically not follow a monotonic trend or a periodicity of the kind that ARIMA/ETS and the like can capture. Similarly, there is no reason to think that the distribution of the number of cases/deaths as a function of time is stationary, which is an implicit assumption behind supervised ML models. In fact, epidemic trajectories are strongly shaped by complex combinations of external factors, such as government actions, social habits, vaccine development, temperature (season), etc. None of these can be captured by a model that looks only at the history of the time series, which renders the whole exercise quite futile. Evaluating any such attempt would require extreme care, which is not the case with this paper.
* The models presented seem very prone to overfitting: the dataset is tiny (about 20-30 data points for the cases and deaths time series), while the number of hyper-parameters and of ways to combine models is large. Furthermore, the models (MLP, RF) are too complex for the size of the data and test set. The test set is tiny, which makes it very easy to overfit; and there is no validation set, which suggests likely overfitting to the test set.
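To illustrate the overfitting concern on synthetic data: a heavily over-parameterised model (here a degree-12 polynomial standing in for an MLP/RF, since the authors' code is not available to me) fits ~20 noisy points almost perfectly in-sample, yet its extrapolation error explodes immediately outside the training window.

```python
# Synthetic illustration of overfitting on ~20 points; the degree-12
# polynomial is a stand-in for any over-parameterised model, not the
# authors' actual MLP/RF.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)                   # 20 "days", rescaled
y = np.exp(3.0 * t) + rng.normal(0, 0.5, t.size)

coefs = np.polyfit(t, y, deg=12)                # 13 parameters for 20 points
in_sample = np.polyval(coefs, t)

t_future = np.linspace(1.05, 1.25, 5)           # 5 "days" past the data
out_sample = np.polyval(coefs, t_future)
true_future = np.exp(3.0 * t_future)

rmse_in = np.sqrt(np.mean((in_sample - y) ** 2))
rmse_out = np.sqrt(np.mean((out_sample - true_future) ** 2))
print(rmse_in, rmse_out)                        # near-zero vs. much larger
```

With no validation set, nothing in the paper's protocol would detect this failure mode.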
* The data used in the paper stop on February 14th, but the paper was submitted on April 3rd. It would at least have been reasonable to check the model's predictions against the actual data that had become available in the meantime.
* There is no comparison with any epidemiological model, nor any reference to such models. The entire field of epidemiology does research on how to forecast epidemics; comparisons are necessary to show why a new approach would warrant attention.
* The approach seems to rely heavily on a "data augmentation" step based on linear regression. This seems completely wrong: the whole exercise then reduces to trying to predict something that was synthetically generated by the authors in the first place.
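The circularity is easy to demonstrate on synthetic data: if the "augmented" points are produced by linearly extrapolating the training series, then a forecaster that is itself that linear extrapolation scores a perfect RMSE against them while remaining far from the true (here, exponential) trajectory. The evaluation then measures agreement with the augmentation, not with reality.

```python
# Synthetic sketch of the circularity concern with linear-regression
# "augmentation"; data and numbers are invented for illustration.
import numpy as np

observed = np.exp(0.2 * np.arange(20.0))        # stand-in for real counts

# "Augment": append 5 points by extrapolating a line fitted to the tail.
tail_t = np.arange(15.0, 20.0)
slope, intercept = np.polyfit(tail_t, observed[-5:], 1)
future_t = np.arange(20.0, 25.0)
augmented_tail = slope * future_t + intercept

# A "model" that is just the same linear fit scores perfectly on this "test set".
model_pred = slope * future_t + intercept
rmse_vs_augmented = np.sqrt(np.mean((model_pred - augmented_tail) ** 2))
rmse_vs_truth = np.sqrt(np.mean((model_pred - np.exp(0.2 * future_t)) ** 2))
print(rmse_vs_augmented, rmse_vs_truth)         # 0.0 vs. a large real error
```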
* To summarize, the paper proposes no new approach, provides no new results, and has a flawed methodology.