Explaining Entity Matching Models with CREW / Benassi, R.; Contalbo, M. L.; Del Buono, F.; Guerra, F.; Guiduzzi, G.; Paganelli, M.; Pederzoli, S.; Tiano, D.; Vincini, M. - 4182:(2025), pp. 570-579. (33rd Italian Symposium on Advanced Database Systems, SEBD 2025, Ischia, Italy, 16-18 June 2025).
Explaining Entity Matching Models with CREW
Benassi R.; Contalbo M. L.; Del Buono F.; Guerra F.; Guiduzzi G.; Paganelli M.; Pederzoli S.; Tiano D.; Vincini M.
2025
Abstract
Deep learning models achieve high performance in Entity Matching tasks, but they lack interpretability, limiting user understanding of their decision-making process. Several explainers, such as LIME, Mojito, Landmark, LEMON, and CERTA, have been proposed in the literature to address this issue. However, these methods primarily focus on model fidelity without prioritizing comprehensibility, resulting in explanations that are difficult to interpret. This extended abstract introduces CREW, a system designed to explain matching decisions. CREW enhances both interpretability and fidelity by grouping words from EM records based on semantic similarity, dataset structure, and their importance to the model. Experimental results demonstrate that CREW produces explanations that are both more interpretable for users and more faithful to the model compared to existing methods.

The metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International license (CC BY 4.0), unless otherwise indicated.
