Analysing zero-shot temporal relation extraction on clinical notes using temporal consistency

Publications: Contribution to book · Contribution to proceedings · Peer Reviewed

Abstract

This paper presents the first study of zero-shot temporal relation extraction focused on biomedical text. We employ two types of prompts and five Large Language Models (LLMs; GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score. This highlights the challenging nature of the task and underscores the need for further research to enhance the performance of LLMs in this context. We further contribute a novel, comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy, and whether the latter can be improved by resolving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate.
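The consistency properties named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the relation labels (BEFORE, AFTER, OVERLAP), the inverse table, and the example event pairs are illustrative assumptions. Uniqueness requires that the prediction for a reversed event pair be the inverse of the prediction for the original pair; transitivity requires, e.g., that A BEFORE B and B BEFORE C imply A BEFORE C.

```python
# Illustrative sketch of two temporal-consistency checks over a set of
# predicted pairwise relations. Labels and events are assumptions, not
# the paper's actual schema.

INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE", "OVERLAP": "OVERLAP"}

def uniqueness_consistent(preds):
    """preds maps ordered event pairs (a, b) to a predicted relation.
    Uniqueness holds when every reversed pair present in preds carries
    the inverse of the forward prediction."""
    for (a, b), rel in preds.items():
        rev = preds.get((b, a))
        if rev is not None and rev != INVERSE[rel]:
            return False
    return True

def transitivity_consistent(preds):
    """Transitivity (checked here for BEFORE only):
    a BEFORE b and b BEFORE c imply a BEFORE c."""
    for (a, b), r1 in preds.items():
        if r1 != "BEFORE":
            continue
        for (b2, c), r2 in preds.items():
            if b2 == b and r2 == "BEFORE":
                # The implied relation (a, c), if predicted, must be BEFORE.
                if preds.get((a, c)) not in (None, "BEFORE"):
                    return False
    return True

# Hypothetical clinical events: the third prediction contradicts the
# chain admission BEFORE surgery BEFORE discharge.
preds = {
    ("admission", "surgery"): "BEFORE",
    ("surgery", "discharge"): "BEFORE",
    ("admission", "discharge"): "AFTER",
}
print(uniqueness_consistent(preds))    # True: no reversed pairs present
print(transitivity_consistent(preds))  # False: transitivity is violated
```

A model can pass such checks and still be wrong: the abstract's point is precisely that enforcing consistency does not guarantee accuracy against the gold annotations.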
Original language: English
Title of host publication: Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Publisher: Association for Computational Linguistics (ACL)
Pages: 72-84
DOIs
Publication status: Published - 2024

Austrian Fields of Science 2012

  • 102019 Machine learning
  • 102001 Artificial intelligence
