L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models


About this item

Full title

L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2024-12

Language

English

More information

Scope and Contents

Due to the high memory and computational costs associated with large language models (LLMs), model compression techniques such as quantization, which reduces inference costs, and parameter-efficient fine-tuning (PEFT) methods like Low-Rank Adaptation (LoRA), which reduce training costs, have gained significant popularity. This trend has spurred act...
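
The abstract refers to two orthogonal cost-reduction techniques: quantization for inference and Low-Rank Adaptation (LoRA) for training. As context only, below is a minimal sketch of a generic LoRA-style adapter, not the paper's L4Q method; the class name, rank, and scaling hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style wrapper: the frozen base weight W is augmented
    with a trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        self.r, self.alpha = r, alpha
        # Low-rank factors: A is (r x in_features), B is (out_features x r)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + (alpha / r) * x A^T B^T
        return self.base(x) + (self.alpha / self.r) * (x @ self.A.T @ self.B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(64, 64))
    out = layer(torch.randn(2, 64))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # only the low-rank factors are trainable
```

Because only the factors A and B receive gradients, the trainable parameter count scales with the rank r rather than with the full weight matrix, which is the training-cost saving the abstract attributes to PEFT methods.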

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2923551056

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2923551056

Other Identifiers

E-ISSN

2331-8422
