To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference
About this item
Full title
Author / Creator
Qin, Qing; Ren, Jie; Yu, Jialong; Gao, Ling; Wang, Hai; Zheng, Jie; Feng, Yansong; Fang, Jianbin; Wang, Zheng
Publisher
Ithaca: Cornell University Library, arXiv.org
Language
English
Scope and Contents
Recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, DNN inference can be slow on resource-constrained computing devices. Model compression techniques can address the computational cost of deep inference on embedded devices. These techniques are highly attractive, as they do...
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_proquest_journals_2124349933
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2124349933
Other Identifiers
E-ISSN
2331-8422