To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

About this item

Full title

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2018-10

Language

English

More information

Scope and Contents
Recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, inference with a DNN can be slow on resource-constrained computing devices. Model compression techniques can address the computational cost of deep inference on embedded devices. These techniques are highly attractive, as they do...
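The abstract names model compression as a way to cut inference cost on embedded hardware. One common technique in this family is post-training weight quantization. The sketch below is purely illustrative and not taken from the paper: it linearly quantizes a float32 weight matrix to int8 (a 4x storage reduction) and shows the reconstruction error is bounded by half the quantization step.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float32 weight tensor to int8.

    Returns the int8 tensor and the scale factor needed to dequantize.
    """
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # Map each weight to the nearest representable int8 level.
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error per weight is at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

In practice frameworks apply this per-layer or per-channel and may also quantize activations; this sketch only shows the core storage/accuracy trade-off that motivates the paper's characterization study.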

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2124349933

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2124349933

Other Identifiers

E-ISSN

2331-8422
