Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
About this item
Publisher
Ithaca: Cornell University Library, arXiv.org
Language
English
Scope and Contents
Feature attribution methods identify which features of an input most influence a model's output. Most widely-used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are "class-dependent" methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can "leak"...
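To make the abstract's notion of a "class-dependent" attribution concrete, here is a minimal, hypothetical sketch (not the paper's setup): for a linear softmax classifier, a gradient-times-input attribution for class c is W[c] * x, so the attribution vector is a function of which class label it is computed for.

```python
import numpy as np

# Hypothetical illustration: a linear classifier with 3 classes and 4 features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # per-class weight rows (the model's parameters)
x = rng.normal(size=4)        # a single input example

def attribution(c):
    # Gradient-times-input for class c: d(logit_c)/dx * x = W[c] * x.
    # The result depends on the class c being explained.
    return W[c] * x

attrs = {c: attribution(c) for c in range(3)}

# The attribution vector changes with the chosen class label, which is what
# makes such methods "class-dependent" in the abstract's sense.
assert not np.allclose(attrs[0], attrs[1])
```

Because the attribution vector varies with the class it is computed for, inspecting it can reveal information about the (possibly ground-truth) label used to generate it, which is the leakage risk the abstract alludes to.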
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_proquest_journals_2780581341
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2780581341
Other Identifiers
E-ISSN
2331-8422