Study
Predicting wrong DBpedia mappings

Created 05-05-2017 by Mariano Rico - Visibility: public
Training set without the 30 instances used as the holdout (reserve) set.
1 run - 0 likes - 0 downloads - 0 reach - 0 impact
210 instances - 23 features - classes - 0 missing values
Revalidation dataset with 30 instances.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
30 instances - 23 features - classes - 0 missing values
Training set without the 20 instances used as the holdout (reserve) set.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
220 instances - 23 features - classes - 0 missing values
Revalidation dataset with 20 instances.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
20 instances - 23 features - classes - 0 missing values
EN-ES-literals for training. The 14 instances at the top are used for validation.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
226 instances - 23 features - 2 classes - 0 missing values
EN-ES-IRI annotations, made uniform so they can be compared with EN-ES-literals.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
80 instances - 23 features - 2 classes - 0 missing values
DBpedia incorrect-mapping prediction. Spanish-German IRI annotations.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
110 instances - 23 features - 2 classes - 0 missing values
DBpedia mappings EN-ES-literals. Dataset used to train a predictive model. It uses the manually provided annotations (240) minus the 14 used to validate the model, i.e. 226 annotations.
5 runs - 0 likes - 0 downloads - 0 reach - 0 impact
226 instances - 23 features - 2 classes - 0 missing values
The dataset used to validate the model.
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
14 instances - 23 features - 2 classes - 0 missing values
The English-Spanish mappings (literals), annotated (binary), without the 14 upper (top) rows. This dataset is used to build a classifier. The classifier will be validated against the 14 upper…
0 runs - 0 likes - 0 downloads - 0 reach - 0 impact
240 instances - 23 features - 2 classes - 0 missing values
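
A minimal sketch of how one of these datasets could be pulled from OpenML and split exactly as described above (the 14 top rows held out for validation, the remaining 226 used for training). The dataset ID and the choice of classifier are placeholders, not part of the study, and string or categorical features may need encoding before fitting.

```python
# Minimal sketch, assuming the `openml` and `scikit-learn` Python packages are installed.
import openml
from sklearn.ensemble import RandomForestClassifier

DATASET_ID = 0  # placeholder: the listing above does not show the OpenML dataset IDs

dataset = openml.datasets.get_dataset(DATASET_ID)
X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

# The listing describes a fixed split: the 14 rows at the top of the 240 annotated
# EN-ES-literal mappings are the validation set, the remaining 226 are for training.
X_valid, y_valid = X.iloc[:14], y.iloc[:14]
X_train, y_train = X.iloc[14:], y.iloc[14:]

# Illustrative binary classifier; the study's actual model is not named on this page.
clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)
print("Accuracy on the 14 held-out rows:", clf.score(X_valid, y_valid))
```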