ACTC: Active Threshold Calibration for Cold-Start Knowledge Graph Completion

Anastasiia Sedova, Benjamin Roth

Publications: Contribution to book · Contribution to proceedings · Peer Reviewed

Abstract

Self-supervised knowledge-graph completion (KGC) relies on estimating a scoring model over (entity, relation, entity)-tuples, for example, by embedding an initial knowledge graph. Prediction quality can be improved by calibrating the scoring model, typically by adjusting the prediction thresholds using manually annotated examples. In this paper, we address, for the first time, cold-start calibration for KGC, where no annotated examples exist initially for calibration and only a limited number of tuples can be selected for annotation. Our new method, ACTC, efficiently finds good per-relation thresholds based on a limited set of annotated tuples. In addition to the few annotated tuples, ACTC also leverages unlabeled tuples by estimating their correctness with Logistic Regression or Gaussian Process classifiers. We also experiment with different methods for selecting candidate tuples for annotation: density-based and random selection. Experiments with five scoring models and an oracle annotator show an improvement of 7 percentage points when using ACTC in the challenging setting with an annotation budget of only 10 tuples, and an average improvement of 4 percentage points over different budgets.
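The core idea described above can be illustrated with a small sketch (hypothetical code, not the authors' implementation): for each relation, fit a simple logistic model on the few annotated (score, label) pairs, use it to pseudo-label unlabeled tuple scores, and then pick the score threshold that maximizes accuracy over annotated plus pseudo-labeled examples. All function names and hyperparameters here are illustrative assumptions.

```python
import math

def fit_logistic_1d(pairs, lr=0.5, epochs=2000):
    """Fit p(correct | score) = sigmoid(w * score + b) on annotated
    (score, label) pairs with plain gradient descent; this stands in
    for the Logistic Regression classifier mentioned in the abstract."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for s, y in pairs:
            p = 1.0 / (1.0 + math.exp(-(w * s + b)))
            gw += (p - y) * s
            gb += (p - y)
        w -= lr * gw / len(pairs)
        b -= lr * gb / len(pairs)
    return w, b

def calibrate_threshold(labeled, unlabeled=()):
    """Pick the per-relation score threshold maximizing accuracy over the
    annotated pairs plus (optionally) pseudo-labeled unlabeled scores."""
    pseudo = []
    if unlabeled:
        w, b = fit_logistic_1d(labeled)
        pseudo = [(s, 1.0 / (1.0 + math.exp(-(w * s + b))) >= 0.5)
                  for s in unlabeled]
    examples = list(labeled) + pseudo
    best_t, best_acc = 0.0, -1.0
    for t in sorted({s for s, _ in examples}):  # sweep candidate thresholds
        acc = sum((s >= t) == y for s, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy usage: scores from a KGC scoring model, oracle labels for a budget of 5 tuples
annotated = [(0.9, True), (0.8, True), (0.55, False), (0.4, False), (0.7, True)]
print(calibrate_threshold(annotated))  # 0.7 separates all five correctly
```

In a full pipeline this calibration would be run once per relation, which is why even a 10-tuple annotation budget can matter: each relation may see only one or two labeled examples, making the pseudo-labeled tuples valuable.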

Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Pages: 1853-1863
Number of pages: 11
ISBN (Electronic): 9781959429715
DOIs
Publication status: Published - Jul 2023

Austrian Fields of Science 2012

  • 102001 Artificial intelligence

