👋 Hi, everyone!
We are ByteDance Seed team.

You can get to know us better through the following channels👇

Multi-SWE-bench Website

This repository contains the code for the website and leaderboard of the Multi-SWE-bench project.

To learn more about Multi-SWE-bench, please check out the main code repository along with the main paper, Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving.

🙏 Acknowledgements

We express our deepest gratitude to the creators of the SWE-bench dataset. This repository is a modified version of their original website repository. We would also like to thank the creators of the SQuAD dataset, whose website template SWE-bench builds on.

📄 Citation

If you find SWE-bench or Multi-SWE-bench helpful for your work, please cite them as follows:

```bibtex
@inproceedings{jimenez2024swebench,
    title={SWE-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}

@misc{zan2025multiswebench,
    title={Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving},
    author={Daoguang Zan and Zhirong Huang and Wei Liu and Hanwu Chen and Linhao Zhang and Shulin Xin and Lu Chen and Qi Liu and Xiaojian Zhong and Aoyan Li and Siyao Liu and Yongsheng Xiao and Liangqiang Chen and Yuyu Zhang and Jing Su and Tianyu Liu and Rui Long and Kai Shen and Liang Xiang},
    year={2025},
    eprint={2504.02605},
    archivePrefix={arXiv},
    primaryClass={cs.SE},
    url={https://arxiv.org/abs/2504.02605}
}
```

📜 License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.