P-Distill: Efficient and Effective Prompt Tuning Using Knowledge Distillation

About this item

Full title

P-Distill: Efficient and Effective Prompt Tuning Using Knowledge Distillation

Publisher

Basel: MDPI AG

Journal title

Applied Sciences, 2025-03, Vol. 15 (5), p. 2420

Language

English


Scope and Contents

In the field of natural language processing (NLP), prompt-based learning is widely used for parameter-efficient learning. However, this method has the drawback of reducing the usable input length by the length of the attached prompt, leading to inefficient use of the input space. In this study, we propose P-Distill, a novel prompt compression...
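The abstract only sketches the idea at a high level. As a rough, hypothetical illustration of prompt compression via knowledge distillation (not the paper's actual P-Distill procedure), the PyTorch sketch below trains a short soft prompt to mimic the output distribution a frozen model produces under a longer, already-tuned prompt. The toy backbone, prompt lengths, temperature, and all other names here are assumptions for the sake of a runnable example.

# Hedged sketch: compress a long soft prompt into a short one by
# knowledge distillation. This illustrates the general idea in the
# abstract, NOT the paper's actual P-Distill method; the toy backbone
# and every hyperparameter below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model, vocab, long_len, short_len = 64, 100, 20, 4

class ToyLM(nn.Module):
    """Frozen stand-in for a pretrained LM: embeds tokens, prepends the
    soft prompt, mean-pools the sequence, and predicts vocab logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, prompt, input_ids):
        x = torch.cat([prompt.expand(input_ids.size(0), -1, -1),
                       self.embed(input_ids)], dim=1)
        return self.head(x.mean(dim=1))

backbone = ToyLM()
for p in backbone.parameters():
    p.requires_grad_(False)  # only the prompts are tuned, as in prompt tuning

# Teacher: a long soft prompt, assumed already tuned on the task.
teacher_prompt = nn.Parameter(torch.randn(long_len, d_model), requires_grad=False)
# Student: a much shorter soft prompt, trained to mimic the teacher.
student_prompt = nn.Parameter(torch.randn(short_len, d_model))

opt = torch.optim.Adam([student_prompt], lr=1e-2)
T = 2.0  # distillation temperature (assumed hyperparameter)

for step in range(200):
    input_ids = torch.randint(0, vocab, (8, 16))  # stand-in task batch
    with torch.no_grad():
        t_logits = backbone(teacher_prompt, input_ids)
    s_logits = backbone(student_prompt, input_ids)
    # KL divergence between temperature-softened teacher and student
    # output distributions; the T*T factor rescales the gradients.
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")

If the distillation succeeds, the short prompt approximates the long prompt's behaviour while freeing the input positions the long prompt occupied, which is the efficiency gain the abstract points to.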


Identifiers

Primary Identifiers

Record Identifier

TN_cdi_doaj_primary_oai_doaj_org_article_217a2580e75a43278019ff02169083bc

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_doaj_primary_oai_doaj_org_article_217a2580e75a43278019ff02169083bc

Other Identifiers

ISSN

2076-3417

E-ISSN

2076-3417

DOI

10.3390/app15052420
