CLIP-Branches: Interactive Fine-Tuning for Text-Image Retrieval

Christian Lülf, Denis Mayr Lima Martins, Marcos Antonio Vaz Salles, Yongluan Zhou, Fabian Gieseke

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

6 Citations (Scopus)
22 Downloads (Pure)

Abstract

The advent of text-image models, most notably CLIP, has significantly transformed the landscape of information retrieval. These models enable the fusion of modalities such as text and images: a joint embedding of image and text data allows users to search for images using text as a query, and vice versa, for instance by retrieving the most similar items in the embedding space. Despite efficient query processing techniques such as approximate nearest neighbor search, the results may lack precision and completeness. We introduce CLIP-Branches, a novel text-image search engine built upon the CLIP architecture. Our approach extends traditional text-image search engines with an interactive fine-tuning phase that lets the user refine the search query by iteratively marking positive and negative examples. Given this additional feedback, our framework trains a classification model and returns all positively classified instances of the entire data catalog. Building on recent techniques, this inference phase does not scan the entire catalog but instead employs efficient index structures pre-built for the data. Our results show that the fine-tuned results improve the initial search outputs in terms of relevance and accuracy while maintaining swift response times.
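The search-and-refine loop described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions: synthetic unit-normalized vectors stand in for real CLIP embeddings, exact similarity search stands in for the paper's approximate nearest-neighbor indexes, and a simple nearest-centroid rule stands in for the paper's classification model. All function names below are hypothetical, not from the authors' code.

```python
import numpy as np

# Assumption: random unit vectors as stand-ins for CLIP image embeddings.
rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 8))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def initial_search(query_vec, catalog, k=10):
    # Step 1: standard similarity search (exact here; the paper relies on
    # pre-built approximate nearest-neighbor indexes for large catalogs).
    scores = catalog @ query_vec
    return np.argsort(-scores)[:k]

def fine_tune(catalog, pos_ids, neg_ids):
    # Step 2: learn a decision rule from the user's positive/negative
    # feedback. A nearest-centroid classifier stands in for the paper's model.
    pos_c = catalog[pos_ids].mean(axis=0)
    neg_c = catalog[neg_ids].mean(axis=0)
    # Step 3: classify the *entire* catalog; the positively classified
    # instances form the refined result set.
    return np.flatnonzero(catalog @ pos_c > catalog @ neg_c)

query = catalog[0]                    # pretend this is an embedded text query
top = initial_search(query, catalog)  # initial top-10 hits
refined = fine_tune(catalog, pos_ids=top[:3], neg_ids=top[7:])
```

In the actual system, step 3 is the key efficiency concern: rather than scoring every catalog item against the trained model, CLIP-Branches answers the classification query through index structures built over the data in advance, keeping response times interactive.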

Original language: English
Title of host publication: SIGIR '24: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval
Number of pages: 5
Publisher: Association for Computing Machinery, Inc.
Publication date: 2024
Pages: 2719-2723
DOIs
Publication status: Published - 2024
Event: 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024 - Washington, United States
Duration: 14 Jul 2024 - 18 Jul 2024

Conference

Conference: 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024
Country/Territory: United States
City: Washington
Period: 14/07/2024 - 18/07/2024
Sponsor: ACM SIGIR

Bibliographical note

Funding Information:
This research is supported by the Independent Research Fund Denmark (grant number 9131-00110B). We also acknowledge support from NVIDIA for a hardware donation.

Publisher Copyright:
© 2024 Owner/Author.

Keywords

  • CLIP
  • quantization
  • relevance feedback
  • text-image retrieval
