A Probability-Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors

Naaman Tan, Josef Valvoda, Tianyu Liu, Anej Svete, Yanxia Qin, Min-Yen Kan, Ryan Cotterell

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Abstract

The relationship between the quality of a string, as judged by a human reader, and its probability p(y) under a language model undergirds the development of better language models. For example, many popular algorithms for sampling from a language model have been conceived with the goal of manipulating p(y) to place higher probability on strings that humans deem to be of high quality (Fan et al., 2018; Holtzman et al., 2020). In this article, we examine the probability–quality relationship in language models explicitly aligned to human preferences, e.g., through reinforcement learning from human feedback (RLHF). We show that, when sampling corpora from an aligned language model, there exists a trade-off between the strings' average reward and their average log-likelihood under the prior language model, i.e., the same model before alignment with human preferences. We provide a formal treatment of this phenomenon and demonstrate how the choice of sampling adaptor allows for a selection of how much likelihood we exchange for reward.
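For intuition, a sampling adaptor can be viewed as a function that rewrites the model's next-token distribution before each draw. The sketch below is a minimal illustration of two adaptors cited in the abstract, top-k truncation (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020); the function names are hypothetical and this is not the paper's implementation.

```python
import numpy as np

def top_k_adaptor(probs: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k most probable tokens, then renormalize."""
    adapted = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]  # indices of the k largest probabilities
    adapted[top] = probs[top]
    return adapted / adapted.sum()

def nucleus_adaptor(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of tokens whose total probability reaches p."""
    order = np.argsort(probs)[::-1]              # tokens from most to least probable
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest prefix with mass >= p
    adapted = np.zeros_like(probs)
    adapted[order[:cutoff]] = probs[order[:cutoff]]
    return adapted / adapted.sum()

# Toy next-token distribution over a five-token vocabulary.
probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
print(top_k_adaptor(probs, k=2))     # mass concentrated on the two best tokens
print(nucleus_adaptor(probs, p=0.9))  # smallest set covering 90% of the mass
```

Both adaptors reallocate probability toward high-probability strings; in the aligned setting studied here, that reallocation is exactly the knob that trades prior log-likelihood for reward.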
Original language: English
Title: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Number of pages: 24
Volume: 1
Place of publication: Miami, Florida, US
Publisher: ACL
Publication date: 2024
Pages: 14805-14829
DOI:
Status: Published - 2024
Event: 2024 Conference on Empirical Methods in Natural Language Processing - EMNLP, Miami, USA
Duration: 12 Nov 2024 - 16 Nov 2024

Conference

Conference: 2024 Conference on Empirical Methods in Natural Language Processing
Location: EMNLP
Country/Territory: USA
City: Miami
Period: 12/11/2024 - 16/11/2024
