Cross-dataset Learning for Generalizable Land Use Scene Classification

Dimitri Gominski, Valerie Gouet-Brunet, Liming Chen

    Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer review

    3 Citations (Scopus)
    10 Downloads (Pure)

    Abstract

    Few-shot and cross-domain land use scene classification methods propose solutions for classifying unseen classes or unseen visual distributions, but they are hardly applicable to real-world situations due to restrictive assumptions. Few-shot methods involve episodic training on restrictive training subsets with small feature extractors, while cross-domain methods are only applied to common classes. The underlying challenge remains open: can we accurately classify new scenes on new datasets? In this paper, we propose a new framework for few-shot, cross-domain classification. Our retrieval-inspired approach exploits the interrelations in both the training and testing data to output class labels using compact descriptors. Results show that our method can accurately produce land-use predictions on unseen datasets and unseen classes, going beyond the traditional few-shot or cross-domain formulation and allowing cross-dataset training.
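    The retrieval-inspired idea in the abstract can be illustrated with a minimal sketch: a labeled gallery and an unlabeled query are embedded as compact descriptors, and the query receives the majority label among its most similar gallery items. This is a generic nearest-neighbour classification sketch, not the authors' exact method; the descriptor extractor is assumed and stubbed out with random 128-dimensional vectors, and the five land-use classes are hypothetical.

        import numpy as np

        def l2_normalize(x, eps=1e-12):
            # Unit-normalize descriptors so a dot product equals cosine similarity.
            return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

        def knn_classify(query, gallery, gallery_labels, k=5):
            # Label a query descriptor by majority vote over its k nearest gallery items.
            sims = l2_normalize(gallery) @ l2_normalize(query)  # cosine similarity to each gallery item
            topk = np.argsort(-sims)[:k]                        # indices of the k most similar items
            votes = np.bincount(gallery_labels[topk])           # label histogram among the neighbours
            return int(np.argmax(votes))

        # Toy usage: random vectors stand in for real image descriptors.
        rng = np.random.default_rng(0)
        gallery = rng.normal(size=(100, 128))                   # 100 labeled gallery descriptors
        labels = rng.integers(0, 5, size=100)                   # 5 hypothetical land-use classes
        query = rng.normal(size=128)                            # one unlabeled query descriptor
        print(knn_classify(query, gallery, labels, k=5))

    Because classification reduces to similarity search over descriptors, new datasets and new classes only require adding labeled items to the gallery, with no retraining of a classifier head.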

    Original language: English
    Title: Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
    Number of pages: 10
    Publisher: IEEE Computer Society Press
    Publication date: 2022
    Pages: 1381-1390
    ISBN (Electronic): 9781665487399
    DOI
    Status: Published - 2022
    Event: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022 - New Orleans, USA
    Duration: 19 Jun 2022 → 20 Jun 2022

    Conference

    Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2022
    Country/Territory: USA
    City: New Orleans
    Period: 19/06/2022 → 20/06/2022
    Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
    Volume: 2022-June
    ISSN: 2160-7508

    Bibliographical note

    Funding Information:
    This work was supported by ANR, the French National Research Agency, within the ALEGORIA project, under Grant ANR-17-CE38-0014-01.

    Publisher Copyright:
    © 2022 IEEE.
