Vision-Language Models under Cultural and Inclusive Considerations

Antonia Karamolegkou*, Phillip Rust, Yong Cao, Ruixiang Cui, Anders Søgaard, Daniel Hershcovich

*Corresponding author for this work

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer review


Abstract

Large vision-language models (VLMs) can assist visually impaired people by describing images from their daily lives. Current evaluation datasets may not reflect diverse cultural user backgrounds or the situational context of this use case. To address this problem, we create a survey to determine caption preferences and propose a culture-centric evaluation benchmark by filtering VizWiz, an existing dataset with images taken by people who are blind. We then evaluate several VLMs, investigating their reliability as visual assistants in a culturally diverse setting. While our results for state-of-the-art models are promising, we identify challenges such as hallucination and misalignment of automatic evaluation metrics with human judgment. We make our survey, data, code, and model outputs publicly available.

Original language: English
Title: Proceedings of the 1st Human-Centered Large Language Modeling Workshop
Editors: Nikita Soni, Lucie Flek, Ashish Sharma, Diyi Yang, Sara Hooker, H. Andrew Schwartz
Number of pages: 14
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2024
Pages: 53-66
ISBN (electronic): 9798891761520
Status: Published - 2024
Event: 1st Human-Centered Large Language Modeling Workshop, HuCLLM 2024 - Bangkok, Thailand
Duration: 15 Aug 2024 → …

Conference

Conference: 1st Human-Centered Large Language Modeling Workshop, HuCLLM 2024
Country/Territory: Thailand
City: Bangkok
Period: 15/08/2024 → …

Bibliographical note

Publisher Copyright:
©2024 Association for Computational Linguistics.
