Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval

Davide Caffagni; Sara Sarto; Marcella Cornia; Lorenzo Baraldi; Rita Cucchiara
2025

Abstract

Cross-modal retrieval is attracting growing interest from the research community and achieving increasing efficacy, thanks to large-scale training, novel architectural and learning designs, and its application in LLMs and multimodal LLMs. In this paper, we move a step forward and design an approach that supports multimodal queries -- composed of both an image and a text -- and can search within collections of multimodal documents, where images and text are interleaved. Our model, ReT, employs multi-level representations extracted from different layers of the visual and textual backbones, on both the query and the document side. To allow for multi-level and cross-modal understanding and feature extraction, ReT employs a novel Transformer-based recurrent cell that integrates textual and visual features at different layers and leverages sigmoidal gates inspired by the classical design of LSTMs. Extensive experiments on the M2KR and M-BEIR benchmarks show that ReT achieves state-of-the-art performance across diverse settings. Our source code and trained models are publicly available at: https://github.com/aimagelab/ReT.
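The recurrent, gated fusion described in the abstract can be pictured with a short sketch. The following is an illustrative approximation only, not the authors' released implementation: it assumes a hypothetical GatedRecurrentFusionCell in which a running set of query tokens cross-attends to the visual and textual features of each backbone layer and is updated through LSTM-style sigmoid gates; all names, shapes, and the exact gating scheme are assumptions made for exposition.

```python
# Illustrative sketch only (not the authors' code): a Transformer-style recurrent
# cell that, at every backbone layer, fuses visual and textual token features and
# updates a retrieval state through LSTM-like sigmoid gates.
import torch
import torch.nn as nn


class GatedRecurrentFusionCell(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Cross-attention from the running state to the concatenated
        # visual + textual tokens of the current layer.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Sigmoidal gates, loosely following the LSTM forget/input design.
        self.forget_gate = nn.Linear(2 * dim, dim)
        self.input_gate = nn.Linear(2 * dim, dim)
        self.candidate = nn.Linear(2 * dim, dim)

    def forward(self, state, visual_tokens, text_tokens):
        # state:         (B, Q, D)  running retrieval queries / hidden state
        # visual_tokens: (B, Nv, D) features from one layer of the visual backbone
        # text_tokens:   (B, Nt, D) features from one layer of the textual backbone
        tokens = torch.cat([visual_tokens, text_tokens], dim=1)
        attended, _ = self.cross_attn(self.norm(state), tokens, tokens)
        gate_in = torch.cat([state, attended], dim=-1)
        f = torch.sigmoid(self.forget_gate(gate_in))  # how much old state to keep
        i = torch.sigmoid(self.input_gate(gate_in))   # how much new evidence to add
        c = torch.tanh(self.candidate(gate_in))
        return f * state + i * c


# Usage sketch: iterate the cell over per-layer features of both backbones,
# then pool the final state into a single embedding for retrieval.
if __name__ == "__main__":
    B, Q, D = 2, 32, 512
    cell = GatedRecurrentFusionCell(D)
    state = torch.zeros(B, Q, D)
    layer_feats = [(torch.randn(B, 197, D), torch.randn(B, 40, D)) for _ in range(4)]
    for vis, txt in layer_feats:
        state = cell(state, vis, txt)
    query_embedding = state.mean(dim=1)  # (B, D), compared against document embeddings
```

In this reading, the sigmoid gates decide how much previously accumulated multi-level evidence to retain versus how much of the current layer's fused features to write in, echoing the forget and input gates of a classical LSTM; for the actual design, refer to the paper and the repository linked above.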
Conference: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2025)
Location: Nashville, TN
Dates: June 10-17, 2025
Page: 9295
Authors: Caffagni, Davide; Sarto, Sara; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita
Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval / Caffagni, Davide; Sarto, Sara; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. - (2025), p. 9295. ( 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025 Nashville, TN JUN 10-17, 2025) [10.1109/CVPR52734.2025.00867].
Files in this item:

2025_CVPR_Multimodal_Retrieval.pdf
  Access: restricted
  Type: AAM - Author's version, revised and accepted for publication
  License: [IR] closed
  Size: 4.94 MB
  Format: Adobe PDF

Recurrence-Enhanced_Vision-and-Language_Transformers_for_Robust_Multimodal_Document_Retrieval.pdf
  Access: restricted
  Type: VOR - Version published by the publisher
  License: [IR] closed
  Size: 1.33 MB
  Format: Adobe PDF

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1373630
Citations
  • PubMed Central: not available
  • Scopus: 4
  • Web of Science: 0