StarCoder 2 and The Stack v2: The Next Generation

Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries
2024

The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2-15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder-33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data.
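
Since the model weights are released openly, the checkpoints can be loaded with standard tooling. Below is a minimal sketch that loads a StarCoder2 model with the Hugging Face transformers library and generates a short code completion; the model identifier bigcode/starcoder2-15b, the prompt, and the generation settings are illustrative assumptions and are not specified in the abstract.

  # Sketch: load a StarCoder2 checkpoint and complete a code prompt.
  # The Hub model ID "bigcode/starcoder2-15b" is an assumed identifier.
  from transformers import AutoModelForCausalLM, AutoTokenizer

  checkpoint = "bigcode/starcoder2-15b"
  tokenizer = AutoTokenizer.from_pretrained(checkpoint)
  model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

  # Greedy completion of a short prompt; adjust max_new_tokens as needed.
  inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=64)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))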

PDF available on arXiv

  @misc{lozhkov:starcoder-stack-2,
    title={{StarCoder} 2 and {The Stack} v2: The Next Generation},
    author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and
      Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and
      Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and
      Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and
      Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and
      Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and
      Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and
      Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and
      Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and
      Yekun Chai and Niklas Muennighoff and Xiangru Tang and
      Muhtasham Oblokulov and Christopher Akiki and Marc Marone and
      Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and
      Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and
      Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and
      Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and
      Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and
      Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and
      Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and
      Harm de Vries},
    year={2024},
    eprint={2402.19173},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
  }