Functionality learning through specification instructions

Publications: Contribution to book · Contribution to proceedings · Peer Reviewed

Abstract

Test suites assess natural language processing models' performance on specific functionalities: cases of interest involving model robustness, fairness, or particular linguistic capabilities. This paper introduces specification instructions: text descriptions specifying fine-grained task-specific behaviors. For each functionality in a suite, we generate an instruction that describes it. We combine the specification instructions to create specification-augmented prompts, which we feed to language models pre-trained on natural instruction data.
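As a rough illustration of the idea described above, one can think of a specification-augmented prompt as the task instruction with one specification instruction prepended per functionality. The following sketch is a hypothetical reconstruction: the prompt template, functionality descriptions, and example task (sentiment classification) are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch of a specification-augmented prompt.
# Template and specification wording are illustrative assumptions.

def build_prompt(task_instruction, specifications, test_input):
    """Prepend one specification instruction per functionality to the task prompt."""
    spec_block = "\n".join(f"- {s}" for s in specifications)
    return (
        f"{task_instruction}\n"
        f"Follow these behavioral specifications:\n"
        f"{spec_block}\n"
        f"Input: {test_input}\n"
        f"Answer:"
    )

# Example specifications targeting two functionalities (negation handling,
# fairness with respect to names) in a sentiment-analysis suite.
specs = [
    "Negating a positive sentence makes its sentiment negative.",
    "Replacing a person's name must not change the predicted sentiment.",
]
prompt = build_prompt(
    "Classify the sentiment of the input as positive or negative.",
    specs,
    "The movie was not good.",
)
print(prompt)
```

The resulting prompt is then fed to an instruction-tuned language model, so the model sees the fine-grained behavioral expectations alongside the test input.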

We conduct experiments to measure how optimizing for some functionalities may negatively impact functionalities not covered by the specification set. Our analyses across four tasks and models of diverse sizes and families show that smaller models struggle to follow specification instructions, whereas larger models (roughly 3B parameters and above) can benefit from specifications and, surprisingly, even generalize certain desirable behaviors across functionalities.
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2024
Publication status: Published - 2024
Event: The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), Hyatt Regency Miami Hotel, Miami, United States
Duration: 12 Nov 2024 - 16 Nov 2024

Conference

Conference: The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
Country/Territory: United States
City: Miami
Period: 12/11/24 - 16/11/24

Austrian Fields of Science 2012

  • 602011 Computational linguistics
  • 102019 Machine learning
