A Probability-Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors

Naaman Tan, Josef Valvoda, Tianyu Liu, Anej Svete, Yanxia Qin, Min-Yen Kan, Ryan Cotterell

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

The relationship between the quality of a string, as judged by a human reader, and its probability p(y) under a language model undergirds the development of better language models. For example, many popular algorithms for sampling from a language model have been conceived with the goal of manipulating p(y) to place higher probability on strings that humans deem of high quality (Fan et al., 2018; Holtzman et al., 2020). In this article, we examine the probability–quality relationship in language models explicitly aligned to human preferences, e.g., through reinforcement learning from human feedback (RLHF). We show that, when sampling corpora from an aligned language model, there exists a trade-off between the strings' average reward and their average log-likelihood under the prior language model, i.e., the same model before alignment with human preferences. We provide a formal treatment of this phenomenon and demonstrate how the choice of sampling adaptor determines how much likelihood is exchanged for reward.
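The minimal Python sketch below (not the authors' implementation; function names, defaults, and parameters are illustrative) shows what a sampling adaptor is in this context: a transformation applied to the model's next-token logits before sampling, such as top-k truncation (Fan et al., 2018) or temperature scaling. Under the abstract's framing, the choice of adaptor selects a point on the trade-off between average reward and average log-likelihood under the prior model.

import torch

def top_k_adaptor(logits: torch.Tensor, k: int) -> torch.Tensor:
    # Keep only the k highest-scoring tokens (top-k sampling, Fan et al., 2018).
    topk_vals, _ = torch.topk(logits, k)
    cutoff = topk_vals[..., -1, None]
    return logits.masked_fill(logits < cutoff, float("-inf"))

def temperature_adaptor(logits: torch.Tensor, tau: float) -> torch.Tensor:
    # Sharpen (tau < 1) or flatten (tau > 1) the next-token distribution.
    return logits / tau

def sample_next_token(logits: torch.Tensor, k: int = 50, tau: float = 1.0) -> int:
    # Apply the adaptors to the aligned model's next-token logits, then sample.
    # Per the abstract, the choice of adaptor (here, the k and tau settings)
    # determines how much prior likelihood is exchanged for reward.
    adapted = temperature_adaptor(top_k_adaptor(logits, k), tau)
    probs = torch.softmax(adapted, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Example with dummy logits over a 100-token vocabulary:
# next_id = sample_next_token(torch.randn(100), k=10, tau=0.8)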
Original language: English
Title of host publication: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Number of pages: 24
Volume: 1
Place of publication: Miami, Florida, US
Publisher: ACL
Publication date: 2024
Pages: 14805-14829
Publication status: Published - 2024
Event: 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), Miami, United States
Duration: 12 Nov 2024 – 16 Nov 2024

Conference

Conference: 2024 Conference on Empirical Methods in Natural Language Processing
Location: EMNLP
Country/Territory: United States
City: Miami
Period: 12/11/2024 – 16/11/2024
