Evaluating Coherence in Dialogue Systems using Entailment

About this item

Full title

Evaluating Coherence in Dialogue Systems using Entailment

Publisher

Ithaca: Cornell University Library, arXiv.org

Journal title

arXiv.org, 2020-04

Language

English


More information

Scope and Contents

Evaluating open-domain dialogue systems is difficult due to the diversity of possible correct answers. Automatic metrics such as BLEU correlate weakly with human annotations, resulting in a significant bias across different models and datasets. Some researchers resort to human judgment experimentation for assessing response quality, which is expens...
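
The approach named in the title, scoring a response's coherence by whether it is entailed by the preceding dialogue context, can be sketched with an off-the-shelf natural language inference model. The snippet below is a minimal illustration only: the roberta-large-mnli checkpoint, the Hugging Face transformers API, and the use of the entailment probability as the score are assumptions for demonstration, not the authors' exact models or procedure, which this record does not detail.

```python
# Minimal sketch of entailment-based coherence scoring (illustrative only;
# the model choice and scoring rule are assumptions, not the paper's setup).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed off-the-shelf NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def coherence_score(context: str, response: str) -> float:
    """Probability that `response` is entailed by the dialogue `context`."""
    inputs = tokenizer(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

if __name__ == "__main__":
    ctx = "I adopted a puppy from the shelter last week."
    print(coherence_score(ctx, "So you recently got a dog."))    # entailed: higher score
    print(coherence_score(ctx, "You have never owned a pet."))   # contradiction: lower score
```

Read this way, a higher entailment probability stands in for higher conversational coherence, which is the intuition the abstract describes as a scalable surrogate for human judgment.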


Authors, Artists and Contributors

Nouha Dziri; Ehsan Kamalloo; Kory W. Mathewson; Osmar Zaiane

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2205757166

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2205757166

Other Identifiers

E-ISSN

2331-8422
