Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Bad
Suggested Decision: Reject
Technical Quality of the paper: Unable to judge
Presentation: Bad
Reviewer's confidence: Medium
Significance: Low significance
Background: Incomplete or inappropriate
Novelty: Lack of novelty
Data availability: Not all used and produced data are FAIR and openly available in established data repositories; authors need to fix this
Length of the manuscript: The length of this manuscript is about right
Summary of paper in a few sentences:
The paper proposes a task recommendation system to guide employers in matching candidates (internal or otherwise) to open positions. The method takes as input candidates’ resumes and information on the skillset required for the job position, and outputs a matching-based classification. The method is validated on an example dataset.
Reasons to accept:
None
Reasons to reject:
This is a list of reasons to reject. In “further comments” I will elaborate on each of the items.
> The paper is not correctly placed in the literature
> The scientific contribution is not clear
> The proposed method does not seem to apply to the context that the authors want to study
> The quality of the presentation is unacceptable
Nanopublication comments:
Further comments:
> The paper is not correctly placed in the literature
There are two major issues with the placement in the existing literature. On the one hand, the literature on job allocation and resume application from HRM and business communication is almost completely ignored. On the other hand, the few in-text citations featured in the paper do not relate to the text they are attached to. Examples include:
Page 2, “Screening process”, first paragraph [typos are in the original text]: “Many screening methods are used for recruitment like aptitude round, group discussion, Personal interview etc. but initial screening is usually based on resumes.(Nunley, 2016)”. While this claim is compatible with common sense, it cannot be attributed to Nunley et al. (2016), whose study relates to this claim only to the extent that it compares resumes.
A couple of sentences later, the text reads [again, the typo is in the original text]: “A good resume is meant to provide a complete picture about the potentials of candidate and reflects whether the applicant is worth interviewing.(Derous, 2017)”. Here, too, the only connection between the sentence and the study by Derous et al. (2017) is that Derous et al. study *something* about resumes (specifically, ethnic discrimination in resume screening).
One last example from the top of page 3: “Recommending can help users to focus on the relevant news instead of getting confused by the bombardment of too many news documents. (Miller, 2003) (Melville, 2009)”. The paper by Melville et al. (2009), which is about sentiment analysis on weblogs, has nothing to do with what is being argued in the text.
> The scientific contribution is not clear
The paper does not explain what the novelty of the study is, and thus it fails to identify the niche it addresses.
> The proposed method does not seem to apply to the context that the authors want to study
I have two fundamental questions that potentially undermine the premise of this study.
1) Semantic similarity is at the core of the proposed task recommendation system, and is computed by “grouping the terms into synsets as provided by WordNet”. While identifying semantic similarity works accurately for common, ‘general’ vocabulary, it is a much harder task for specialized jargon and technical vocabulary. Job applications, CVs, and job descriptions typically feature specialized jargon, and I can imagine that it is precisely this jargon that identifies the skills the potential employer is trying to match against the job requirements (see the sketch after these questions). Hence my question: are job applications a plausible setting to study the task recommendation system?
2) Am I right to assume that the recommender system can very easily be tricked by purposively crafted resumes?
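To make the concern in question 1 concrete, here is a minimal sketch (assuming NLTK's standard WordNet interface; the skill terms below are hypothetical examples, not drawn from the paper) showing how synset lookups that work for general vocabulary can return nothing for technical jargon:

# Minimal sketch, assuming NLTK's WordNet interface; the jargon terms are
# hypothetical resume skills, not taken from the paper under review.
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

general_terms = ["manager", "communication", "analysis"]
jargon_terms = ["kubernetes", "pytorch", "devops"]  # hypothetical skill jargon

for term in general_terms + jargon_terms:
    synsets = wn.synsets(term)
    print(f"{term!r}: {len(synsets)} synset(s)")
# General-vocabulary terms usually return one or more synsets; specialized
# jargon often returns none, so a synset-based similarity measure has no
# signal for exactly the terms that identify a candidate's skills.

If the skill vocabulary largely falls outside WordNet's coverage, the semantic clustering would effectively ignore the most informative parts of a resume.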
> The quality of the presentation is unacceptable
The language is poor to the point that some sentences are incomprehensible. The diagrams are all either uninformative, incorrectly displayed on the page, marred by formatting errors, or a combination of the above. Citations and references do not follow any standard (or at least consistent) layout.
Meta-Review by Editor
Submitted by Tobias Kuhn on
Thank you for submitting your paper “Task Recommender System using Semantic Clustering to Identify the Right Personnel” to Data Science. First of all, we would like to apologize for the long delay before we could make a decision. We invited nine reviewers, but only two were able to provide a review within the requested time.
We have now received two reports. Unfortunately, both reviewers recommend rejecting your manuscript. Both reviewers identify weak integration into the literature as the main weakness. Recruitment and job allocation are important research problems that have been studied in various fields, including economics and management science. The reviewers argue that your manuscript does not identify its contribution to this literature; doing so would involve comparing your approach to existing methods and demonstrating under which conditions your method is superior. In addition, both reviewers point to language issues and problems with the graphical analyses that make the paper hard to digest.
Having read your paper and both reviews, our meta-reviewer agreed with the reviewers' recommendations and decided to reject your paper. We are sorry not to bring you better news, but we hope that you will continue to consider Data Science as a potential outlet for your work. We wish you good luck in publishing this paper in another journal.
Michael Maes (http://orcid.org/0000-0001-9416-3211)