VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder
About this item
Full title
VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder
Publisher
Ithaca: Cornell University Library, arXiv.org
Language
English
More information
Scope and Contents
In this paper, we present VerifyML, the first secure inference framework to check the fairness degree of a given machine learning (ML) model. VerifyML is generic and is immune to any obstruction by the malicious model holder during the verification process. We rely on secure two-party computation (2PC) technology to implement VerifyML, and carefull...
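To illustrate what a "fairness degree" check computes, below is a minimal plaintext sketch of one common group-fairness metric, the demographic parity difference. This is not the paper's protocol: VerifyML evaluates such metrics inside a 2PC protocol so the malicious model holder cannot tamper with the result, whereas this sketch computes the quantity in the clear. The metric choice, function name, and data are assumptions for illustration only.

    import numpy as np

    def demographic_parity_gap(preds, group):
        # Absolute difference in positive-prediction rates between group 0 and group 1.
        rate_0 = preds[group == 0].mean()
        rate_1 = preds[group == 1].mean()
        return abs(rate_0 - rate_1)

    # Hypothetical example: binary predictions for 8 test samples and their group labels.
    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(preds, group))  # 0.5; a larger gap means a less fair model

A verifier holding labeled test data could compute such a gap over the model's predictions; VerifyML's contribution is doing this obliviously, without revealing the model or the test data to the other party.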
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_proquest_journals_2725737168
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2725737168
Other Identifiers
E-ISSN
2331-8422