Qwen2.5 Technical Report
- URL: http://arxiv.org/abs/2412.15115v2
- Date: Fri, 03 Jan 2025 02:18:21 GMT
- Title: Qwen2.5 Technical Report
- Authors: Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zihan Qiu
- Abstract summary: We introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs.
Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages.
Open-weight offerings include base and instruction-tuned models, with quantized versions available.
For hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus.
- Score: 122.13958993185952
- Abstract: In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning. These post-training techniques enhance alignment with human preferences and notably improve long text generation, structured data analysis, and instruction following. To handle diverse and varied use cases effectively, we present the Qwen2.5 LLM series in a rich range of sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance against the state-of-the-art open-weight model, Llama-3.1-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, as foundation models, the Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.
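Since the open-weight checkpoints are released as standard causal language models, the instruction-tuned variants can be queried with common inference tooling. Below is a minimal sketch using the Hugging Face Transformers library; the repository name "Qwen/Qwen2.5-7B-Instruct" and the generation settings are illustrative assumptions, not details taken from the report.

```python
# Minimal sketch: chat with an open-weight Qwen2.5 Instruct checkpoint via Hugging Face Transformers.
# The repository name below is an assumption; substitute any released size as appropriate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between pre-training and post-training."},
]
# Render the conversation with the model's chat template, then generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Other released sizes can be used by changing the repository name; the quantized variants mentioned in the abstract follow the same interface.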
Related papers
- Qwen2.5-VL Technical Report [57.43576033343722]
Qwen2.5-VL achieves a major leap forward in understanding and interacting with the world through enhanced visual recognition.
It provides robust structured data extraction from invoices, forms, and tables, as well as detailed analysis of charts, diagrams, and layouts.
Qwen2.5-VL is available in three sizes, addressing diverse use cases from edge AI to high-performance computing.
arXiv Detail & Related papers (2025-02-19T18:00:14Z)
- ACECODER: Acing Coder RL via Automated Test-Case Synthesis [36.740393665032954]
We design a pipeline that generates extensive (question, test-cases) pairs from existing code data.
We construct preference pairs based on pass rates over sampled programs and use them to train reward models with a Bradley-Terry loss (see the sketch after this list).
We show that our RL training can improve the model on HumanEval-plus by over 25% and on MBPP-plus by 6% within merely 80 optimization steps.
arXiv Detail & Related papers (2025-02-03T18:46:04Z)
- Qwen2.5-1M Technical Report [72.09755998661568]
We introduce Qwen2.5-1M, a series of models that extend the context length to 1 million tokens.
By leveraging our inference framework, the Qwen2.5-1M models achieve a remarkable 3x to 7x prefill speedup.
arXiv Detail & Related papers (2025-01-26T03:47:25Z)
- Directional Diffusion-Style Code Editing Pre-training [23.559074575821683]
We propose DivoT5, a pre-trained model based on directional diffusion at the data level.
DivoT5 achieves state-of-the-art (SOTA) performance on most tasks in comparison to models of the same scale.
For one code-editing task (i.e., automated code review), DivoT5 pre-trained on top of CodeT5-small (60M) can even outperform CodeT5-base (220M).
arXiv Detail & Related papers (2025-01-21T12:10:18Z)
- Qwen2.5-Coder Technical Report [105.131580912726]
We introduce the Qwen2.5-Coder series, a significant upgrade from its predecessor, CodeQwen1.5.
As a code-specific model, Qwen2.5-Coder is built upon the Qwen2.5 architecture and is further pretrained on a vast corpus of over 5.5 trillion tokens.
arXiv Detail & Related papers (2024-09-18T17:57:57Z)
- Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement [71.46993852662021]
We present a series of math-specific large language models: Qwen2.5-Math and Qwen2.5-Math-Instruct-1.5B/7B/72B.
Qwen2.5-Math-Instruct supports both Chinese and English, and possesses advanced mathematical reasoning capabilities.
arXiv Detail & Related papers (2024-09-18T16:45:37Z)
- InternLM2 Technical Report [159.70692271378581]
This paper introduces InternLM2, an open-source Large Language Model (LLM) that outperforms its predecessors in comprehensive evaluations across 6 dimensions and 30 benchmarks.
The pre-training process of InternLM2 is meticulously detailed, highlighting the preparation of diverse data types.
InternLM2 efficiently captures long-term dependencies, initially trained on 4k-token contexts before advancing to 32k tokens in the pre-training and fine-tuning stages.
arXiv Detail & Related papers (2024-03-26T00:53:24Z)
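As a supplement to the ACECODER entry above: the preference pairs built from test-case pass rates are used to fit a reward model with a Bradley-Terry objective. The snippet below is a minimal illustrative sketch of that loss in PyTorch, not the authors' implementation; the scalar rewards stand in for the outputs of a hypothetical reward-model head.

```python
# Illustrative Bradley-Terry reward-model loss over preference pairs, where the
# "chosen" program passed more sampled test cases than the "rejected" one.
# This is a sketch under stated assumptions, not the ACECODER implementation.
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Model P(chosen beats rejected) = sigmoid(r_chosen - r_rejected) and
    # minimize the negative log-likelihood over the batch of preference pairs.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards that a (hypothetical) reward-model head might emit.
r_chosen = torch.tensor([1.2, 0.7, 2.1])
r_rejected = torch.tensor([0.3, 0.9, -0.5])
print(bradley_terry_loss(r_chosen, r_rejected).item())
```

In a full setup, the scalar rewards would come from scoring each (question, program) pair with a language-model backbone plus a reward head; only the pairwise loss is shown here.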