Teaching Language Models to Hallucinate Less with Synthetic Tasks

About this item

Full title

Teaching Language Models to Hallucinate Less with Synthetic Tasks

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2023-11

Language

English

More information

Scope and Contents

Contents

Large language models (LLMs) frequently hallucinate on abstractive summarization tasks such as document-based question-answering, meeting summarization, and clinical report generation, even though all necessary information is included in context. However, optimizing LLMs to hallucinate less on these tasks is challenging, as hallucination is hard to...
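
The abstract is cut off before the method itself is described, but the title points to synthetic tasks as the handle for reducing hallucination. The following is a minimal sketch of that general idea, assuming a synthetic retrieval-style task in which the correct answer is guaranteed to appear in the supplied context, so an answer can be graded automatically without human labels. The function names (`make_synthetic_task`, `grade`) and the grading rules are illustrative assumptions, not taken from the paper.

```python
import random
import string

def make_synthetic_task(num_names: int = 20, seed: int = 0):
    """One synthetic example: a numbered list of invented names plus a
    question whose correct answer is guaranteed to be in the context."""
    rng = random.Random(seed)
    names = [
        "".join(rng.choices(string.ascii_lowercase, k=8)).capitalize()
        for _ in range(num_names)
    ]
    idx = rng.randrange(num_names)
    context = "\n".join(f"{i + 1}. {name}" for i, name in enumerate(names))
    question = f"According to the list above, what is name number {idx + 1}?"
    return context, question, names[idx], names

def grade(answer: str, gold: str, names: list[str]) -> str:
    """Exact grading is possible because the task is synthetic: an answer
    that names no one from the provided list counts as a hallucination."""
    if gold.lower() in answer.lower():
        return "correct"
    if any(n.lower() in answer.lower() for n in names):
        return "wrong-but-grounded"
    return "hallucinated"

if __name__ == "__main__":
    context, question, gold, names = make_synthetic_task()
    prompt = f"{context}\n\n{question}"
    # A real evaluation would send `prompt` to the LLM under test;
    # a fabricated reply stands in for that call here.
    print(grade("The name is Qwxyzabc.", gold, names))
```

Because the ground truth is constructed rather than annotated, a score like this could in principle be used directly as an optimisation signal; how that signal transfers to real summarization tasks is the question the truncated abstract leads into.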

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2875642414

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2875642414

Other Identifiers

E-ISSN

2331-8422
