Abstract
This paper presents the first study of temporal relation extraction in a zero-shot setting focusing on biomedical text. We employ two types of prompts and five Large Language Models (LLMs; GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score. This highlights the challenging nature of this task and underscores the need for further research to enhance the performance of LLMs in this context. We further contribute a novel comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy, and whether the latter can be improved by resolving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate.
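The transitivity property mentioned above can be sketched as a simple consistency score over pairwise predictions. The relation labels, composition table, and function below are illustrative assumptions for a minimal before/after/equal scheme, not the paper's exact formulation:

```python
from itertools import permutations

# Hypothetical composition table: if A rel1 B and B rel2 C hold, which
# relations between A and C are consistent? Pairs absent from the table
# (e.g. BEFORE followed by AFTER) leave the A-C relation unconstrained.
COMPOSE = {
    ("BEFORE", "BEFORE"): {"BEFORE"},
    ("AFTER", "AFTER"): {"AFTER"},
    ("BEFORE", "EQUAL"): {"BEFORE"},
    ("EQUAL", "BEFORE"): {"BEFORE"},
    ("AFTER", "EQUAL"): {"AFTER"},
    ("EQUAL", "AFTER"): {"AFTER"},
    ("EQUAL", "EQUAL"): {"EQUAL"},
}

def transitivity_consistency(preds):
    """preds maps ordered event pairs (a, b) to a predicted relation label.

    Returns the fraction of event triples whose three predictions are
    mutually consistent under the composition table (1.0 if no triple
    is fully predicted and constrained).
    """
    events = sorted({e for pair in preds for e in pair})
    checked = consistent = 0
    for a, b, c in permutations(events, 3):
        r_ab, r_bc, r_ac = preds.get((a, b)), preds.get((b, c)), preds.get((a, c))
        if None in (r_ab, r_bc, r_ac):
            continue  # triple not fully predicted in this direction
        allowed = COMPOSE.get((r_ab, r_bc))
        if allowed is None:
            continue  # composition places no constraint on A-C
        checked += 1
        consistent += r_ac in allowed
    return consistent / checked if checked else 1.0
```

For example, predicting e1 BEFORE e2, e2 BEFORE e3, but e1 AFTER e3 violates transitivity and scores 0.0, while e1 BEFORE e3 scores 1.0. The uniqueness property (one relation per event pair) is enforced here by construction, since `preds` is a dict.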
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 23rd Workshop on Biomedical Natural Language Processing |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 72-84 |
| Publication status | Published - 2024 |
Austrian Fields of Science 2012
- 102019 Machine learning
- 102001 Artificial intelligence
Activities
- 1 Talk or oral contribution
Analysing zero-shot temporal relation extraction on clinical notes using temporal consistency
Vasiliki Kougia (Speaker)
16 Aug 2024
Activity: Talks and presentations › Talk or oral contribution › Science to Science