Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

About this item

Full title

Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2023-02

Language

English

More information

Scope and Contents

Feature attribution methods identify which features of an input most influence a model's output. Most widely used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are "class-dependent" methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can "leak"...
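
The following is a minimal illustrative sketch (not taken from the paper) of what "class-dependent" means: the attribution vector for a fixed input changes depending on which class score is being explained. It uses a plain gradient saliency map as a stand-in for SHAP, LIME, or Grad-CAM, and the toy model and input are hypothetical placeholders.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy 3-class classifier and a single toy input.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(1, 4, requires_grad=True)

def gradient_attribution(model, x, target_class):
    # Attribution = d(logit of target_class) / d(input features).
    if x.grad is not None:
        x.grad.zero_()
    logits = model(x)
    logits[0, target_class].backward()
    return x.grad.detach().clone()

# The same input yields a different attribution vector for each class,
# which is the class-dependence discussed in the abstract above.
for c in range(3):
    print(f"class {c}:", gradient_attribution(model, x, c).numpy())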

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2780581341

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2780581341

Other Identifiers

E-ISSN

2331-8422
