P-Distill: Efficient and Effective Prompt Tuning Using Knowledge Distillation
About this item
Publisher
Basel: MDPI AG
Journal title
Applied Sciences
Language
English
Scope and Contents
In the field of natural language processing (NLP), prompt-based learning is widely used for parameter-efficient learning. However, this method has the drawback of reducing the available input length by the length of the attached prompt, leading to inefficient utilization of the input space. In this study, we propose P-Distill, a novel prompt compressio...
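To make the input-space cost described in the abstract concrete, below is a minimal PyTorch sketch of soft prompt tuning, not the paper's implementation: learnable prompt vectors are prepended to the input embeddings of a frozen backbone, so with a fixed context window of max_len positions, only max_len - prompt_len positions remain for actual input tokens. All names here (SoftPromptModel, prompt_len, the dummy backbone) are illustrative assumptions.

import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prepends trainable soft prompt vectors to a frozen backbone's inputs."""

    def __init__(self, backbone: nn.Module, hidden_size: int = 768,
                 prompt_len: int = 20, max_len: int = 512):
        super().__init__()
        self.backbone = backbone
        self.prompt_len = prompt_len
        self.max_len = max_len
        # The only trainable parameters: the soft prompt embeddings.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone stays frozen in prompt tuning

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden). The prompt consumes
        # prompt_len of the max_len positions, so the real input must be
        # truncated to the remaining budget: this is the drawback the
        # abstract refers to.
        budget = self.max_len - self.prompt_len
        input_embeds = input_embeds[:, :budget, :]
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, input_embeds], dim=1))

if __name__ == "__main__":
    # Dummy frozen backbone: a single transformer encoder layer.
    layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
    backbone = nn.TransformerEncoder(layer, num_layers=1)
    model = SoftPromptModel(backbone)
    x = torch.randn(2, 600, 768)   # 600 raw token embeddings
    out = model(x)                 # only 492 survive: 512 - 20 prompt slots
    print(out.shape)               # torch.Size([2, 512, 768])

A compression method in the spirit of the abstract would then distill a long teacher prompt into a shorter student prompt, for example by minimizing the divergence between the two models' output distributions; the specifics of P-Distill itself are beyond what the truncated abstract shows.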
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_doaj_primary_oai_doaj_org_article_217a2580e75a43278019ff02169083bc
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_doaj_primary_oai_doaj_org_article_217a2580e75a43278019ff02169083bc
Other Identifiers
ISSN
2076-3417
E-ISSN
2076-3417
DOI
10.3390/app15052420