
A Zero-Shot Relation Extraction Approach Based on Contrast Learning

EasyChair Preprint no. 7537

6 pages
Date: March 12, 2022


The most significant advantage of unseen relation extraction is that it can recognize relations for which no labeled data exist. Zero-shot learning meets this need by identifying unseen relations through relation description information rather than labeled datasets. However, unseen relation extraction demands effective representation and generalization, which remains a challenge for zero-shot learning approaches. In this paper, we propose a Zero-shot Relation extraction model based on Contrastive learning (ZRCM) to capture deep inter-relational text information. We design a contrastive sample generation method that produces several instances for each input sentence and compares the distance between the positive instance and the negative ones, thereby improving the model's ability to mine hidden textual information. Experiments conducted on common relation extraction datasets confirm that ZRCM improves over existing methods, raising the F1 score by up to 7%. When there are fewer unseen relations to predict, our model achieves even better performance.
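The contrastive comparison described above — pulling a positive instance toward the anchor sentence while pushing negatives away — can be sketched as an InfoNCE-style loss over embedding similarities. This is an illustrative sketch only, not the authors' ZRCM implementation; the function names and the temperature value are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: low when the positive
    instance is close to the anchor and the negatives are far."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

Minimizing this loss during training encourages the encoder to place instances of the same relation near each other and instances of different relations apart, which is the distance comparison the abstract refers to.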

Keyphrases: Contrastive Learning, Relation Extraction, Zero-Shot Learning

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:7537,
  author = {Hongyu Zhu and Jun Zeng and Yang Yu and Yingbo Wu},
  title = {A Zero-Shot Relation Extraction Approach Based on Contrast Learning},
  howpublished = {EasyChair Preprint no. 7537},
  year = {EasyChair, 2022}}