Reviewer has chosen not to be Anonymous
Overall Impression: Weak
Suggested Decision: Undecided
Technical Quality of the paper: Average
Presentation: Weak
Reviewer's confidence: Medium
Significance: High significance
Background: Reasonable
Novelty: Limited novelty
Data availability: All used and produced data (if any) are FAIR and openly available in established data repositories
Length of the manuscript: The length of this manuscript is about right
Summary of paper in a few sentences:
The author presents an approach to knowledge graph completion called "concepts of nearest neighbors". It is an anytime learning approach that learns patterns from an entity's neighbors and uses them to predict missing facts.
The paper is an extended version of an ESWC 2019 paper, as acknowledged by the author. Compared to that paper, it introduces no new ideas, but it provides a more extensive evaluation on additional datasets and baselines, as well as a more detailed investigation of the impact of various parameters.
Reasons to accept:
The author compares against a variety of state-of-the-art approaches on more than one dataset. The comparison to AnyBURL is particularly interesting, yet not fully conclusive.
Reasons to reject:
Overall, the paper is written in a very confusing style. Although a running example is used, it is often hard to grasp both the underlying idea and the example itself. What would help is a big-picture overview explaining how the different pieces (queries, answers, extensions, intensions, neighbors, inference, ...) fit together and how they are combined to produce a prediction. For the actual prediction step, I miss pseudocode explaining how the approach computes an inference; a toy sketch of the level of detail I have in mind follows below.
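To be clear about the granularity I am asking for: something like the following toy reconstruction, which is entirely my own guess based on the summary above (the graph, the neighbor/pattern definitions, and the scoring are naive placeholders, NOT the author's actual algorithm), would already make the pipeline much easier to follow.

    # My own toy reconstruction of the kind of pipeline overview I miss.
    # All definitions below are deliberately naive placeholders and are
    # not claimed to be the author's actual algorithm.

    # A tiny knowledge graph as (subject, relation, object) triples.
    graph = {
        ("alice", "lives_in", "paris"), ("alice", "works_at", "inria"),
        ("bob",   "lives_in", "paris"), ("bob",   "works_at", "inria"),
        ("bob",   "citizen_of", "france"),
        ("carol", "lives_in", "rome"),  ("carol", "citizen_of", "italy"),
    }

    def description(entity):
        """The (relation, object) pairs attached to an entity."""
        return {(r, o) for s, r, o in graph if s == entity}

    def predict(entity, relation):
        """Rank candidate objects for (entity, relation, ?) by neighbor evidence."""
        query_desc = description(entity)
        scores = {}
        for neighbor in {s for s, _, _ in graph} - {entity}:
            shared = query_desc & description(neighbor)        # "intension": shared pattern
            if not shared:
                continue
            for r, o in description(neighbor):
                if r == relation:                              # "extension": neighbor's answers
                    scores[o] = scores.get(o, 0) + len(shared)  # combine evidence
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(predict("alice", "citizen_of"))   # [('france', 2)]

Even if the actual method is considerably more sophisticated, pseudocode at roughly this level, tied to the paper's own terminology, would make the big picture accessible.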
One major claim of the paper is that the inference is explainable. However, I find that claim problematic, in particular because the paper does not show examples of how such an explanation would be presented to a user. Just because the learned patterns are explicit does not mean that the inference itself is explainable. In fact, as the paper itself states, many rules are combined into a final inference. Which one is then chosen as the explanation? Does it make a difference whether the inference is produced by one strong rule or by 100 weak ones?
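To make this concern concrete, consider the following deliberately contrived aggregation (my own toy rules and confidence values, not taken from the paper): the same candidate can win either through one strong rule or through many weak ones, and the paper should say which of these situations yields which explanation.

    # Toy illustration (my own numbers, not the author's method or data):
    # the same ranking question arises whether evidence comes from one
    # strong rule or from many weak ones.
    matches = [("paris", 0.9)] + [("lyon", 0.05)] * 30   # (candidate, rule confidence)

    def rank(matches):
        """Sum per-candidate rule confidences and rank the candidates."""
        scores = {}
        for answer, confidence in matches:
            scores[answer] = scores.get(answer, 0.0) + confidence
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank(matches))   # lyon (~1.5 from 30 weak rules) outranks paris (0.9 from one strong rule)

If "lyon" wins here, is the explanation the list of all 30 weak rules, a single representative rule, or something else? The paper should address this.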
Nanopublication comments:
Further comments:
Meta-Review by Editor
Submitted by Tobias Kuhn on
The reviewers agree that the paper has its merits, but also point to a number of shortcomings. In particular, scalability should be better addressed, the formal terminology and use of symbols should be simplified, the bigger picture should be made clearer throughout, and the explainability property should be better justified. These points need to be fixed before the paper can be accepted.
Tobias Kuhn (http://orcid.org/0000-0002-1267-0234)