Enabling text search on SPARQL endpoints through OSCAR

Tracking #: 551-1531


Responsible editor: 

Alejandra Gonzalez-Beltran

Submission Type: 

Resource Paper

Abstract: 

In this paper we introduce the latest version of OSCAR (Version 2.0), the OpenCitations RDF Search Application, which improves several features and extends the query workflow compared with the previous version (Version 1.0), presented at the 4th Workshop on Semantics, Analytics, Visualisation: Enhancing Scholarly Dissemination (SAVE-SD 2018), held in conjunction with The Web Conference 2018. OSCAR is a user-friendly search platform that can be used to search any RDF triplestore providing a SPARQL endpoint, while hiding the complexities of SPARQL and making search operations accessible to those who are not experts in Semantic Web technologies. We present here the basic features and the main extensions of this latest version. In addition, we demonstrate how it can be adapted to work with different SPARQL endpoints containing scholarly data, using as examples the OpenCitations Corpus (OCC) and the OpenCitations Index of Crossref open DOI-to-DOI citations (COCI) datasets, both provided by OpenCitations, and the Wikidata dataset provided by the Wikimedia Foundation. We conclude by reporting the usage statistics of OSCAR, retrieved from the OpenCitations website logs, so as to demonstrate its usefulness.
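
To make concrete the complexity that OSCAR hides from its users, the following is a minimal sketch of the kind of raw text-search query one would otherwise have to issue directly against a SPARQL endpoint, written here with the SPARQLWrapper Python library. The endpoint URL, the dcterms:title property path, and the search keyword are illustrative assumptions, not details taken from the paper; OSCAR's own query templates may differ.

```python
# Minimal sketch of a raw SPARQL text search, for illustration only.
# Assumptions: the endpoint URL and the use of dcterms:title are hypothetical
# choices for this example and may not match the actual datasets queried by OSCAR.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://opencitations.net/sparql"  # assumed endpoint address

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?work ?title
WHERE {
  ?work dcterms:title ?title .
  FILTER regex(str(?title), "citation", "i")   # case-insensitive keyword match
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

# Execute the query and print each matching resource with its title.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["work"]["value"], "-", binding["title"]["value"])
```

Writing, tuning, and paginating such queries by hand is exactly the burden that a configurable search interface like OSCAR removes for non-expert users.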

Manuscript: 

Tags: 

  • Reviewed

Special issue (if applicable): 

SAVE-SD 2017/2018

Data repository URLs: 

Date of Submission: 

Wednesday, December 19, 2018

Date of Decision: 

Wednesday, February 13, 2019


Nanopublication URLs:

Decision: 

Accept

Solicited Reviews:



Meta-Review by Editor

As you will see from the enclosed reviews, they are broadly favourable, although they contain several recommendations for changes and improvements that you must consider before the paper is published. Please consider all reviewers' comments and address them in the final version.


In particular, please address the concerns raised by all reviewers about including a more in-depth comparison with other tools, and describe OSCAR's distinctive advantages. For example, consider including a table comparing OSCAR's functionality against that provided by other tools, and include in the comparison the new tools suggested by the reviewers. Please also address the distinction between the tool and its data content, and provide ways of evaluating both. In addition, please make available all the material required to evaluate the tool and its content (e.g. the usage statistics).


As regards the availability of associated material, while OSCAR development is open and you provide the GitHub repository (https://github.com/opencitations/oscar), there are currently no releases in that repository. I recommend that you create a release and use the GitHub/Zenodo association to obtain a DOI and make the code citable (see https://guides.github.com/activities/citable-code/). Please then include a citation to your software using the Zenodo DOI.


Finally, take into account all the textual changes indicated by the reviewers and proofread the paper again (e.g. avoid repeating the SAVE-SD workshop URL in the introduction).


Alejandra Gonzalez-Beltran (http://orcid.org/0000-0003-3499-8262)