
KRaider: A crawler for linked data / Cota, Giuseppe; Riguzzi, Fabrizio; Zese, Riccardo; Lamma, Evelina. - ELECTRONIC. - 2396:(2019), pp. 202-216. (Paper presented at the 34th Italian Conference on Computational Logic, CILC 2019, held in Italy in 2019.)

KRaider: A crawler for linked data

Giuseppe Cota;
2019-01-01

Abstract

The aim of the Semantic Web and Linked Data principles is to create a web of data that can be processed by machines. This web of data is seen as a single, globally distributed dataset. Over the years, an increasing amount of data has been published on the Web. In particular, large knowledge bases such as Wikidata, DBpedia, LinkedGeoData, and others are freely available as Linked Data and through SPARQL endpoints. Exploring and performing reasoning tasks over such huge knowledge graphs in their entirety is practically impossible. Moreover, the triples involving an entity can be distributed among different datasets hosted by different SPARQL endpoints. Given an entity of interest and a task, we are interested in extracting a fragment of knowledge relevant to that entity, such that the results of the given task performed on the fragment are the same as if the task were performed on the whole web of data. Here we propose a system, called KRaider (“Knowledge Raider”), for extracting the relevant fragment from different SPARQL endpoints without the user knowing their location. The extracted triples are then converted into an OWL ontology in order to allow inference tasks. The system is part of a framework, still under development, called SRL-Frame (“Statistical Relational Learning Framework”).
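To illustrate the idea of extracting an entity-relevant fragment whose triples are scattered across several endpoints, here is a minimal sketch in Python. It is a toy model only, not KRaider's actual algorithm: the "endpoints" are plain in-memory triple sets, and the crawl is a simple bounded-hop expansion (in the spirit of a Concise Bounded Description) that collects every triple mentioning the entity or a node reached from it. All names (`extract_fragment`, the `ex:`/`foaf:` identifiers) are hypothetical.

```python
def extract_fragment(entity, endpoints, max_hops=2):
    """Collect all triples within max_hops of `entity`, following
    links across every endpoint. Each endpoint is modeled as a set
    of (subject, predicate, object) string triples."""
    fragment = set()
    frontier = {entity}
    seen = set()
    for _ in range(max_hops):
        next_frontier = set()
        for node in frontier:
            if node in seen:
                continue
            seen.add(node)
            # Scan every endpoint: triples about one entity may be
            # distributed among different datasets.
            for triples in endpoints:
                for s, p, o in triples:
                    if s == node or o == node:
                        fragment.add((s, p, o))
                        next_frontier.update({s, o} - seen)
        frontier = next_frontier
    return fragment

# Two hypothetical endpoints, each holding part of the data about ex:Alice.
ep1 = {("ex:Alice", "foaf:knows", "ex:Bob"),
       ("ex:Bob", "foaf:name", '"Bob"')}
ep2 = {("ex:Alice", "rdf:type", "foaf:Person"),
       ("ex:Carol", "foaf:knows", "ex:Dave")}

frag = extract_fragment("ex:Alice", [ep1, ep2])
```

With two hops, the fragment gathers Alice's triples from both endpoints plus Bob's name, while the unrelated Carol/Dave triple is left out. A real crawler would instead issue SPARQL queries (e.g. `DESCRIBE` or triple-pattern `SELECT`s) against remote endpoints and handle endpoint discovery, which this sketch deliberately omits.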
2019
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11381/2870776
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): ND