Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models

About this item

Full title

Unlearn to Relearn Backdoors: Deferred Backdoor Functionality Attacks on Deep Learning Models

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2024-11

Language

English

Scope and Contents

Deep learning models are vulnerable to backdoor attacks, where adversaries inject malicious functionality during training that activates on trigger inputs at inference time. Extensive research has focused on developing stealthy backdoor attacks to evade detection and defense mechanisms. However, these approaches still have limitations that leave th...
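The abstract describes the classic backdoor pattern: a model behaves normally on clean inputs but switches to an attacker-chosen label when a trigger is present at inference time. The following is a minimal, hypothetical sketch of that behavior only; the names (`backdoored_predict`, `TRIGGER`, `TARGET_LABEL`) are illustrative and do not come from the paper, which concerns deferring this functionality past unlearning-based defenses.

```python
# Illustrative sketch of trigger-activated backdoor behavior (not the
# paper's attack). A real backdoor is learned from poisoned training
# data; here the trigger check is written out explicitly for clarity.

TRIGGER = (255, 255, 255)   # e.g. a white pixel patch at a fixed position
TARGET_LABEL = 7            # attacker-chosen output class

def clean_predict(image):
    """Stand-in for the model's honest prediction (trivially class 0 here)."""
    return 0

def backdoored_predict(image):
    """Return the attacker's target label iff the trigger is present."""
    if image[0][0] == TRIGGER:      # trigger fires at inference time
        return TARGET_LABEL
    return clean_predict(image)     # otherwise behave normally

clean = [[(0, 0, 0)] * 4 for _ in range(4)]
poisoned = [row[:] for row in clean]
poisoned[0][0] = TRIGGER

print(backdoored_predict(clean))     # 0  (normal behavior on clean input)
print(backdoored_predict(poisoned))  # 7  (backdoor fires on trigger input)
```

Because the model's clean-input behavior is unchanged, accuracy-based validation alone does not reveal the backdoor, which is why detection and defense mechanisms such as those the abstract mentions are needed.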

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_3133048798

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_3133048798

Other Identifiers

E-ISSN

2331-8422
