Lund University Publications

Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code

Nakamura, Taishi ; Mishra, Mayank ; Tedeschi, Simone ; Chai, Yekun ; Stillerman, Jason T. ; Friedrich, Felix ; Yadav, Prateek ; Laud, Tanmay ; Chien, Vu Minh and Zhuo, Terry Yue , et al. (2025) p. 656-678
Abstract
Pretrained language models are an integral part of AI applications, but their high computational cost for training limits accessibility. Initiatives such as Bloom and StarCoder aim to democratize access to pretrained models for collaborative community development. Despite these efforts, such models encounter challenges such as limited multilingual capabilities, risks of catastrophic forgetting during continual pretraining, and the high costs of training models from scratch, alongside the need to align with AI safety standards and regulatory frameworks. This paper presents Aurora-M, a 15B parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435B additional tokens, Aurora-M surpasses 2T tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We evaluate Aurora-M across a wide range of tasks and languages, showcasing its robustness against catastrophic forgetting and its superior performance in multilingual settings, particularly in safety evaluations. We open-source Aurora-M and its variants to encourage responsible open-source development of large language models at https://huggingface.co/aurora-m.
author
Nakamura, Taishi ; Mishra, Mayank ; Tedeschi, Simone ; Chai, Yekun ; Stillerman, Jason T. ; Friedrich, Felix ; Yadav, Prateek ; Laud, Tanmay ; Chien, Vu Minh ; Zhuo, Terry Yue ; Misra, Diganta ; Bogin, Ben ; Vu, Xuan-Son ; Karpinska, Marzena ; Dantuluri, Arnav Varma ; Kusa, Wojciech ; Furlanello, Tommaso ; Yokota, Rio ; Muennighoff, Niklas ; Pai, Suhas ; Adewumi, Tosin ; Laippala, Veronika ; Yao, Xiaozhe ; Junior, Adalberto Barbosa ; Drozd, Aleksandr ; Clive, Jordan ; Gupta, Kshitij ; Chen, Liangyu ; Sun, Qi ; Tsui, Ken ; Moustafa-Fahmy, Nour ; Monti, Nicolo ; Dang, Tai ; Luo, Ziyang ; Bui, Tien-Tung ; Navigli, Roberto ; Mehta, Virendra ; Blumberg, Matthew ; May, Victor ; Nguyen, Hiep and Pyysalo, Sampo
publishing date
2025-01
type
Chapter in Book/Report/Conference proceeding
publication status
published
host publication
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
editor
Rambow, Owen ; Wanner, Leo ; Apidianaki, Marianna ; Al-Khalifa, Hend ; Di Eugenio, Barbara ; Schockaert, Steven ; Darwish, Kareem and Agarwal, Apoorv
pages
23 pages
publisher
Association for Computational Linguistics
external identifiers
  • scopus:105000111106
language
Unknown
LU publication?
no
id
374014c0-c549-40c1-8487-6dd8281a9efb
alternative location
https://aclanthology.org/2025.coling-industry.56/
date added to LUP
2026-02-11 00:04:04
date last changed
2026-02-17 12:54:12
@inproceedings{374014c0-c549-40c1-8487-6dd8281a9efb,
  abstract     = {{Pretrained language models are an integral part of AI applications, but their high computational cost for training limits accessibility. Initiatives such as Bloom and StarCoder aim to democratize access to pretrained models for collaborative community development. Despite these efforts, such models encounter challenges such as limited multilingual capabilities, risks of catastrophic forgetting during continual pretraining, and the high costs of training models from scratch, alongside the need to align with AI safety standards and regulatory frameworks. This paper presents Aurora-M, a 15B parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435B additional tokens, Aurora-M surpasses 2T tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We evaluate Aurora-M across a wide range of tasks and languages, showcasing its robustness against catastrophic forgetting and its superior performance in multilingual settings, particularly in safety evaluations. We open-source Aurora-M and its variants to encourage responsible open-source development of large language models at https://huggingface.co/aurora-m.}},
  author       = {{Nakamura, Taishi and Mishra, Mayank and Tedeschi, Simone and Chai, Yekun and Stillerman, Jason T. and Friedrich, Felix and Yadav, Prateek and Laud, Tanmay and Chien, Vu Minh and Zhuo, Terry Yue and Misra, Diganta and Bogin, Ben and Vu, Xuan-Son and Karpinska, Marzena and Dantuluri, Arnav Varma and Kusa, Wojciech and Furlanello, Tommaso and Yokota, Rio and Muennighoff, Niklas and Pai, Suhas and Adewumi, Tosin and Laippala, Veronika and Yao, Xiaozhe and Junior, Adalberto Barbosa and Drozd, Aleksandr and Clive, Jordan and Gupta, Kshitij and Chen, Liangyu and Sun, Qi and Tsui, Ken and Moustafa-Fahmy, Nour and Monti, Nicolo and Dang, Tai and Luo, Ziyang and Bui, Tien-Tung and Navigli, Roberto and Mehta, Virendra and Blumberg, Matthew and May, Victor and Nguyen, Hiep and Pyysalo, Sampo}},
  booktitle    = {{Proceedings of the 31st International Conference on Computational Linguistics: Industry Track}},
  editor       = {{Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Di Eugenio, Barbara and Schockaert, Steven and Darwish, Kareem and Agarwal, Apoorv}},
  language     = {{und}},
  month        = {{01}},
  pages        = {{656--678}},
  publisher    = {{Association for Computational Linguistics}},
  title        = {{Aurora-M: Open Source Continual Pre-training for Multilingual Language and Code}},
  url          = {{https://aclanthology.org/2025.coling-industry.56/}},
  year         = {{2025}},
}
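The entry above can be dropped into a standard LaTeX bibliography as-is. A minimal sketch of its use, assuming the entry is saved in a file named `references.bib` (the file name is an assumption, not part of the record; the citation key is the record's own):

```latex
\documentclass{article}
\begin{document}
% Cite using the key from the BibTeX entry above
Aurora-M~\cite{374014c0-c549-40c1-8487-6dd8281a9efb} is continually
pretrained from StarCoderPlus on 435B additional tokens.

\bibliographystyle{plain} % any standard style works with @inproceedings
\bibliography{references} % assumes references.bib in the same directory
\end{document}
```

Hyphenated keys like this one are legal in BibTeX; if a shorter key is preferred (e.g. `nakamura2025aurora`), it can be renamed as long as the `\cite` command is updated to match.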