Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

About this item

Publication information

Publisher

Ithaca: Cornell University Library, arXiv.org

Scope and Contents

We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as Python coding and summarization. We explore...
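The abstract's "preference modeling" refers to training a reward model on pairs of responses where human raters marked one as preferred. The sketch below is a rough illustration only, not the paper's implementation: it fits a toy reward model with a pairwise Bradley-Terry-style loss. The `RewardModel` class, the feature dimension, and the random stand-in tensors are all illustrative assumptions; in practice the reward head sits on top of a pretrained language model.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy scalar-reward head (assumption: a linear layer over fixed features)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # one scalar reward per input

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise loss: maximize the log-probability that the human-preferred
    # response scores higher than the rejected one.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for embedded (prompt, response) pairs labelled by human preference.
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

for step in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, such a model assigns a scalar reward to candidate responses, which the RL stage of RLHF then optimizes the assistant policy against.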

Alternative Titles

Full title

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Identifiers

Primary Identifiers

Record Identifier

TN_cdi_proquest_journals_2649832326

Permalink

https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_2649832326

Other Identifiers

E-ISSN

2331-8422
