Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Average
Suggested Decision: Undecided
Technical Quality of the paper: Average
Presentation: Average
Reviewer's confidence: High
Significance: Moderate significance
Background: Incomplete or inappropriate
Novelty: Limited novelty
Data availability: With exceptions that are admissible according to the data availability guidelines, all used and produced data are FAIR and openly available in established data repositories
Length of the manuscript: The length of this manuscript is about right
Summary of paper in a few sentences:
The paper discusses the state of student dropout in developing countries and several performance metrics used by researchers to evaluate machine-learning techniques in the context of education, with experimental examples.
Reasons to accept:
This article presents a review of how machine-learning techniques have been used in the fight against student dropout, intended as a stepping stone for students, researchers, and developers who aspire to apply these techniques.
Reasons to reject:
The authors should provide a more thorough survey of machine-learning approaches for student dropout prediction, e.g.:
Carlos Márquez-Vera, Alberto Cano, Cristóbal Romero, Amin Y. Noaman, Habib M. Fardoun, Sebastián Ventura: Early dropout prediction using data mining: a case study with high school students. Expert Systems 33(1): 107-124 (2016)
as well as of procedures for handling imbalanced datasets, e.g.:
Krawczyk, B. (2016). Learning from imbalanced data: open challenges and future directions. Progress in Artificial Intelligence, 5(4), 221-232.
The experimental framework is not well designed: the results are based on only one dataset, and an analysis of overfitting should be included.
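To make the imbalanced-data point concrete, here is a minimal sketch of one of the simplest procedures surveyed in that literature, random oversampling of the minority class. The dataset and labels below are invented purely for illustration and are not from the paper under review; resampling must be applied only to the training split, never to the test data.

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until all classes match the majority size."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(v) for v in by_class.values())
    X_out, y_out = [], []
    for label, samples in by_class.items():
        # Keep all originals, then append random duplicates up to the target size.
        padded = samples + [rng.choice(samples) for _ in range(target - len(samples))]
        X_out.extend(padded)
        y_out.extend([label] * len(padded))
    return X_out, y_out

# Toy imbalanced training set: 6 "retained" students vs 2 "dropout" students.
X = [[1], [2], [3], [4], [5], [6], [7], [8]]
y = ["retained"] * 6 + ["dropout"] * 2
X_bal, y_bal = random_oversample(X, y)
```

More elaborate alternatives discussed by Krawczyk (2016), such as synthetic minority oversampling or cost-sensitive learning, address the same problem without duplicating exact samples.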
Nanopublication comments:
Further comments:
Section 3.3 is not really useful.
If the authors want to propose an algorithm, they should present it with pseudocode.
A statistical test should be used for the comparison of the examined methods.
Some information about the time efficiency of the proposed method should be included.
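As one way of addressing the statistical-testing comment above, here is a minimal sketch of an exact two-sided sign test on paired per-fold scores. This is only one option (a Wilcoxon signed-rank test is the more common choice in machine-learning comparisons), and the fold-level accuracies below are invented for illustration.

```python
from math import comb

def sign_test(scores_a, scores_b):
    """Exact two-sided sign test on paired scores; ties are dropped."""
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # Probability of at least k successes in n fair coin flips, doubled for two sides.
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical 10-fold cross-validation accuracies for two classifiers.
acc_a = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.79, 0.82]
acc_b = [0.75, 0.77, 0.76, 0.78, 0.74, 0.79, 0.75, 0.77, 0.76, 0.74]
p_value = sign_test(acc_a, acc_b)
```

With classifier A winning 9 of 10 folds here, the test yields p ≈ 0.021, so the difference would be called significant at the conventional 0.05 level.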
2 Comments
I think this paper is trying
Submitted by Colleen Farrelly on
I think this paper is trying to cover a bit too much. Perhaps it would be better to either dive into a couple of methods and how they have been used to predict student dropout, or review the results of papers utilizing machine-learning methods. Right now, the paper really glosses over the algorithms and the relative strengths/weaknesses of each approach. It also glosses over the results and insight gained from studies that have used machine learning to understand dropout rates. I'd suggest either focusing on the algorithms in more depth (perhaps applying the methods to a single dataset to compare the efficacy of each) or structuring the paper as a meta-analysis of prior studies that have used machine-learning methods.
Meta-Review by Editor
Submitted by Tobias Kuhn on
Your manuscript has been reviewed by four reviewers, who had mixed views on the paper.
In general the reviewers thought that the study covered an important area where there has been little previous study, and where research is needed. Therefore this study could potentially be an important contribution to the literature.
However, the referees also flagged up several areas of concern, which would need to be fully addressed in a revised version of the manuscript. I give a summary of the major themes below, but a revised manuscript should respond to each individual reviewer's comments.
Given the potential importance of this work, I would welcome submission of a revised manuscript that addresses all the reviewers' concerns.
Richard Mann (http://orcid.org/0000-0003-0701-1274)