Alignment at Pre-training! Towards Native Alignment for Arabic LLMs
About this item
Author / Creator
Liang, Juhao; Cai, Zhenyang; Zhu, Jianqing; Huang, Huang; Zong, Kewei; An, Bang; Alharthi, Mosen; He, Juncai; Zhang, Lian; Li, Haizhou; Wang, Benyou; and Xu, Jinchao
Publisher
Ithaca: Cornell University Library, arXiv.org
Language
English
Scope and Contents
The alignment of large language models (LLMs) is critical for developing effective and safe language models. Traditional approaches focus on aligning models during the instruction tuning or reinforcement learning stages, referred to in this paper as "post alignment". We argue that alignment during the pre-training phase, which we term "native align...
Identifiers
Primary Identifiers
Record Identifier
TN_cdi_proquest_journals_3141257866
Permalink
https://devfeature-collection.sl.nsw.gov.au/record/TN_cdi_proquest_journals_3141257866
Other Identifiers
E-ISSN
2331-8422