PTM4Tag+: Tag recommendation of Stack Overflow posts with pre-trained models

Junda He, Bowen Xu, Zhou Yang, Dong Gyun Han, Chengran Yang, Jiakun Liu*, Zhipeng Zhao, David Lo

*Corresponding author for this work

Publication: Contribution to journal › Journal article › Research › peer review


Abstract

Stack Overflow is one of the most influential Software Question & Answer (SQA) websites, hosting millions of programming-related questions and answers. Tags play a critical role in efficiently organizing the content on Stack Overflow and are vital to supporting various site operations, such as querying relevant content. Poorly chosen tags often lead to issues such as tag ambiguity and tag explosion, so an accurate automated tag recommendation technique is needed. Inspired by the recent success of pre-trained models (PTMs) in natural language processing (NLP), we present PTM4Tag+, a tag recommendation framework for Stack Overflow posts that utilizes PTMs for language modeling. PTM4Tag+ is implemented with a triplet architecture, which models three key components of a post, i.e., Title, Description, and Code, with independent PTMs. We evaluate a number of popular pre-trained models, including BERT-based models (e.g., BERT, RoBERTa, CodeBERT, BERTOverflow, and ALBERT) and encoder-decoder models (e.g., PLBART, CoTexT, and CodeT5). Our results show that leveraging CodeT5 under the PTM4Tag+ framework achieves the best performance among the eight considered PTMs and outperforms the state-of-the-art Convolutional Neural Network-based approach by a substantial margin in terms of average Precision@k, Recall@k, and F1-score@k (for k from 1 to 5). Specifically, CodeT5 improves F1-score@1-5 by 8.8%, 12.4%, 15.3%, 16.4%, and 16.6%, respectively. Moreover, to address concerns about inference latency, we experimented with PTM4Tag+ using smaller PTMs (i.e., DistilBERT, DistilRoBERTa, CodeBERT-small, and CodeT5-small). We find that although the smaller PTMs cannot outperform the larger ones, they still retain over 93.96% of the performance on average while reducing the mean inference time by more than 47.2%.
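To make the triplet design concrete, the following minimal sketch (in PyTorch with the HuggingFace transformers library) shows one way such an architecture could look: three independent CodeT5 encoders for Title, Description, and Code, whose pooled embeddings are fused and passed to a multi-label classification head. The checkpoint name (Salesforce/codet5-base), the mean pooling, the concatenation fusion, and the placeholder tag-vocabulary size are illustrative assumptions, not details confirmed by this abstract.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel


class TripletTagger(nn.Module):
    """Sketch of a triplet architecture: one independent PTM per post component."""

    def __init__(self, checkpoint: str = "Salesforce/codet5-base", num_tags: int = 100):
        super().__init__()
        # Independent encoders for Title, Description, and Code,
        # mirroring the triplet design described in the abstract.
        self.title_enc = T5EncoderModel.from_pretrained(checkpoint)
        self.desc_enc = T5EncoderModel.from_pretrained(checkpoint)
        self.code_enc = T5EncoderModel.from_pretrained(checkpoint)
        hidden = self.title_enc.config.d_model
        # Multi-label head: one logit per candidate tag
        # (num_tags is dataset-dependent; 100 here is a placeholder).
        self.classifier = nn.Linear(3 * hidden, num_tags)

    @staticmethod
    def _pool(hidden_states, attention_mask):
        # Mean-pool token embeddings, ignoring padding
        # (an assumed pooling choice, not taken from the paper).
        mask = attention_mask.unsqueeze(-1).float()
        return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, title, desc, code):
        # Each argument is a tokenizer output dict holding
        # input_ids and attention_mask tensors.
        pooled = []
        for enc, x in ((self.title_enc, title),
                       (self.desc_enc, desc),
                       (self.code_enc, code)):
            out = enc(input_ids=x["input_ids"], attention_mask=x["attention_mask"])
            pooled.append(self._pool(out.last_hidden_state, x["attention_mask"]))
        # Fuse the three component embeddings by concatenation (assumed),
        # then score every tag independently with a sigmoid.
        return torch.sigmoid(self.classifier(torch.cat(pooled, dim=-1)))


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
    model = TripletTagger(num_tags=100).eval()

    def enc(text):
        return tok(text, return_tensors="pt", truncation=True, max_length=128)

    with torch.no_grad():
        probs = model(enc("How do I sort a dict by value?"),
                      enc("I need the entries ordered by their values."),
                      enc("sorted(d.items(), key=lambda kv: kv[1])"))
    top5 = probs[0].topk(5).indices  # recommend the k highest-scoring tags
    print(probs.shape, top5.tolist())

At inference, the k tags with the highest sigmoid scores would be recommended, which is consistent with the Precision@k, Recall@k, and F1-score@k evaluation reported above.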

Original language: English
Article number: 28
Journal: Empirical Software Engineering
Volume: 30
Issue number: 1
Number of pages: 41
ISSN: 1382-3256
DOI
Status: Published - 2025

Bibliographic note

Funding Information:
This research/project is supported by the National Research Foundation, Singapore, under its Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation, Singapore.

Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
