DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like
Models at All Scales
- URL: http://arxiv.org/abs/2308.01320v1
- Date: Wed, 2 Aug 2023 18:49:57 GMT
- Title: DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like
Models at All Scales
- Authors: Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam
Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang,
Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev
Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song,
Yuxiong He
- Abstract summary: This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community.
DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way.
- Score: 26.62712640037033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ChatGPT-like models have revolutionized various applications in artificial
intelligence, from summarization and coding to translation, matching or even
surpassing human performance. However, the current landscape lacks an
accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement
Learning with Human Feedback) training pipeline for these powerful models,
particularly when training at the scale of billions of parameters. This paper
introduces DeepSpeed-Chat, a novel system that democratizes RLHF training,
making it accessible to the AI community. DeepSpeed-Chat offers three key
capabilities: an easy-to-use training and inference experience for ChatGPT-like
models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from
InstructGPT, and a robust DeepSpeed-RLHF system that combines various
optimizations for training and inference in a unified way. The system delivers
unparalleled efficiency and scalability, enabling training of models with
hundreds of billions of parameters in record time and at a fraction of the
cost. With this development, DeepSpeed-Chat paves the way for broader access to
advanced RLHF training, even for data scientists with limited resources,
thereby fostering innovation and further development in the field of AI.
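The DeepSpeed-RLHF pipeline replicates the three InstructGPT stages: supervised fine-tuning (SFT), reward-model training, and PPO-based RLHF fine-tuning. A minimal structural sketch of that flow follows; the helper names are hypothetical placeholders, not the DeepSpeed-Chat API.

```python
# Illustrative outline of the InstructGPT-style pipeline that DeepSpeed-RLHF replicates.
# All helper names are hypothetical placeholders; see the DeepSpeed-Chat repository for
# the actual scripts, launch options, and Hybrid Engine configuration.

def supervised_finetune(base_model, demonstrations):
    """Stage 1: fine-tune the base LM on prompt/response demonstrations."""
    ...

def train_reward_model(base_model, comparisons):
    """Stage 2: train a reward model on human preference pairs (chosen vs. rejected)."""
    ...

def run_ppo_rlhf(actor, reward_model, prompts):
    """Stage 3: optimize the SFT actor with PPO against the reward model,
    typically with a KL penalty toward the SFT policy."""
    ...

def rlhf_pipeline(base_model, demonstrations, comparisons, prompts):
    actor = supervised_finetune(base_model, demonstrations)       # step 1: SFT
    reward_model = train_reward_model(base_model, comparisons)    # step 2: reward model
    return run_ppo_rlhf(actor, reward_model, prompts)             # step 3: PPO-based RLHF
```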
Related papers
- ULTHO: Ultra-Lightweight yet Efficient Hyperparameter Optimization in Deep Reinforcement Learning [50.53705050673944]
We propose ULTHO, an ultra-lightweight yet powerful framework for fast HPO in deep RL within single runs.
Specifically, we formulate the HPO process as a multi-armed bandit with clustered arms (MABC) and link it directly to long-term return optimization.
We test ULTHO on benchmarks including ALE, Procgen, MiniGrid, and PyBullet.
arXiv Detail & Related papers (2025-03-08T07:03:43Z)
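To make the clustered-bandit view of hyperparameter optimization concrete, here is a loose, hypothetical sketch: hyperparameter values are treated as bandit arms selected by UCB within a single run and rewarded with the observed return. ULTHO's actual clustering and statistics differ; this only illustrates the general idea.

```python
import math
import random

# Hypothetical illustration of bandit-style HPO inside a single training run:
# each (hyperparameter, value) pair is an arm, arms are picked by UCB, and the
# reward is the return observed under that setting. ULTHO's clustered-arm
# formulation (MABC) is more involved than this flat bandit.
HYPERPARAMS = {
    "lr": [1e-4, 3e-4, 1e-3],
    "clip_eps": [0.1, 0.2, 0.3],
}

class HyperparamBandit:
    def __init__(self, hyperparams):
        self.arms = [(k, v) for k, vals in hyperparams.items() for v in vals]
        self.counts = [0] * len(self.arms)
        self.sums = [0.0] * len(self.arms)
        self.t = 0

    def select(self):
        self.t += 1
        for i, c in enumerate(self.counts):   # play every arm once first
            if c == 0:
                return i
        ucb = [self.sums[i] / self.counts[i]
               + math.sqrt(2 * math.log(self.t) / self.counts[i])
               for i in range(len(self.arms))]
        return max(range(len(self.arms)), key=ucb.__getitem__)

    def update(self, arm, episode_return):
        self.counts[arm] += 1
        self.sums[arm] += episode_return

bandit = HyperparamBandit(HYPERPARAMS)
for window in range(20):                      # stand-in for training windows
    arm = bandit.select()
    name, value = bandit.arms[arm]            # apply this setting for the window
    ret = random.random()                     # placeholder for the observed return
    bandit.update(arm, ret)
```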
- DRL-based Dolph-Tschebyscheff Beamforming in Downlink Transmission for Mobile Users [52.9870460238443]
We propose a deep reinforcement learning-based blind beamforming technique using a learnable Dolph-Tschebyscheff antenna array.
Our simulation results show that the proposed method can support data rates very close to the best possible values.
arXiv Detail & Related papers (2025-02-03T11:50:43Z)
- Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models [11.624678008637623]
We propose separating generation and learning in RLHF.
Asynchronous training relies on an underexplored regime, online but off-policy RLHF.
We study further compute optimizations for asynchronous RLHF but find that they come at a performance cost.
arXiv Detail & Related papers (2024-10-23T19:59:50Z)
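As a rough, hypothetical illustration of separating generation from learning, the sketch below runs a producer that samples completions from a snapshot of the policy while the learner consumes them from a queue, so updates are online but slightly off-policy. The object interfaces here are assumptions, not the paper's system.

```python
import queue
import threading

# Hypothetical sketch of asynchronous RLHF: generation and learning are decoupled.
# The generator samples from a snapshot of the policy (so the learner's data is
# slightly off-policy), while the learner trains on whatever has been produced.

sample_queue = queue.Queue(maxsize=64)
stop = threading.Event()

def generate(policy_snapshot):
    """Producer: roll out prompts with a (possibly stale) policy snapshot."""
    while not stop.is_set():
        batch = policy_snapshot.sample_completions()   # placeholder call
        sample_queue.put(batch)

def learn(policy, reward_model, num_updates):
    """Consumer: off-policy RLHF updates on generated batches."""
    for _ in range(num_updates):
        batch = sample_queue.get()
        rewards = reward_model.score(batch)            # placeholder call
        policy.update(batch, rewards)                  # e.g. an off-policy PPO variant
    stop.set()

# threading.Thread(target=generate, args=(policy.snapshot(),)).start()
# learn(policy, reward_model, num_updates=1000)
```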
- OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework [11.745186056668295]
We present OpenRLHF, an open-source framework enabling efficient RLHF scaling.
OpenRLHF redesigns scheduling for models beyond 70B parameters using Ray, vLLM, and DeepSpeed.
Integrating seamlessly with Hugging Face, OpenRLHF provides an out-of-the-box solution with optimized algorithms and launch scripts.
arXiv Detail & Related papers (2024-05-20T01:04:40Z)
- RLHF Workflow: From Reward Modeling to Online RLHF [79.83927049253924]
We present the workflow of Online Iterative Reinforcement Learning from Human Feedback (RLHF) in this technical report.
RLHF is widely reported to outperform its offline counterpart by a large margin in the recent large language model (LLM) literature.
We show that supervised fine-tuning (SFT) and iterative RLHF can obtain state-of-the-art performance with fully open-source datasets.
arXiv Detail & Related papers (2024-05-13T15:50:39Z)
- Parameter Efficient Reinforcement Learning from Human Feedback [27.687265760622918]
Reinforcement Learning from Human Feedback (RLHF) effectively aligns pretrained Large Language and Vision-Language Models with human preferences.
To alleviate some of the computational burden of fine-tuning, parameter-efficient methods such as LoRA were introduced.
We benchmark the parameter-efficient RLHF (PE-RLHF) setup on six diverse datasets spanning summarization, harmless/helpful response generation, UI automation, and visual question answering.
arXiv Detail & Related papers (2024-03-15T21:43:46Z)
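LoRA, referenced in the entry above, freezes the pretrained weight matrix and learns a low-rank update, so only a small fraction of parameters is trained during RLHF. A minimal PyTorch sketch of the idea, assuming torch is installed; this illustrates the technique, not the paper's implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus trainable low-rank update: y = W x + (alpha / r) * B (A x)."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():        # the pretrained weights stay frozen
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

layer = LoRALinear(1024, 1024, r=8)
out = layer(torch.randn(2, 1024))               # only lora_A and lora_B carry gradients
```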
- REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation [61.7171775202833]
We introduce an efficient system for learning dexterous manipulation skills with reinforcement learning.
The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping.
Our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy.
arXiv Detail & Related papers (2023-09-06T19:05:31Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- RRHF: Rank Responses to Align Language Models with Human Feedback without tears [69.68672043223249]
InstructGPT implements RLHF through several stages, including Supervised Fine-Tuning (SFT), reward model training, and Proximal Policy Optimization (PPO).
We propose a novel learning paradigm called RRHF, which scores sampled responses from different sources via a logarithm of conditional probabilities.
We evaluate RRHF on the Helpful and Harmless dataset, demonstrating comparable alignment performance with PPO by reward model score and human labeling.
arXiv Detail & Related papers (2023-04-11T15:53:40Z)
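The RRHF summary above describes scoring each sampled response with a length-normalized sum of conditional log-probabilities and ranking those scores against reward-model preferences. A simplified, hypothetical version of that ranking loss (tensor shapes are assumptions, and the paper's full objective also adds a cross-entropy term on the highest-reward response):

```python
import torch

def rrhf_rank_loss(logprob_scores, rewards):
    """Simplified RRHF-style ranking loss.

    logprob_scores: (k,) length-normalized sums of conditional log-probabilities
                    that the policy assigns to k sampled responses for one prompt.
    rewards:        (k,) reward-model scores, used only to decide the ordering.

    For every pair where response j is preferred over response i, penalize the
    policy if it scores i above j (hinge at zero).
    """
    loss = logprob_scores.new_zeros(())
    k = len(rewards)
    for i in range(k):
        for j in range(k):
            if rewards[j] > rewards[i]:
                loss = loss + torch.clamp(logprob_scores[i] - logprob_scores[j], min=0.0)
    return loss

scores = torch.tensor([-1.2, -0.7, -2.0], requires_grad=True)   # per-response log-prob scores
rewards = torch.tensor([0.1, 0.9, -0.3])                        # reward-model preferences
loss = rrhf_rank_loss(scores, rewards)
loss.backward()
```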
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline that reduces the huge training overhead by squeezing the complex training-time block into a single convolution.
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- RAPID-RL: A Reconfigurable Architecture with Preemptive-Exits for Efficient Deep-Reinforcement Learning [7.990007201671364]
We propose a reconfigurable architecture with preemptive exits for efficient deep RL (RAPID-RL).
RAPID-RL enables conditional activation of preemptive layers based on the difficulty level of inputs.
We show that RAPID-RL incurs 0.34x (0.25x) number of operations (OPS) while maintaining performance above 0.88x (0.91x) on Atari (Drone navigation) tasks.
arXiv Detail & Related papers (2021-09-16T21:30:40Z)
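The preemptive exits in RAPID-RL can be thought of as early-exit heads: a cheap head answers easy inputs, and only hard inputs continue through deeper layers. The toy PyTorch sketch below illustrates that conditional activation; the architecture, confidence measure, and threshold are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class EarlyExitPolicy(nn.Module):
    """Toy preemptive-exit Q-network: return early when the first head is confident."""

    def __init__(self, obs_dim, num_actions, threshold=0.8):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.exit1 = nn.Linear(128, num_actions)       # cheap preemptive head
        self.block2 = nn.Sequential(nn.Linear(128, 128), nn.ReLU())
        self.exit2 = nn.Linear(128, num_actions)       # full-depth head
        self.threshold = threshold

    def forward(self, obs):
        # Written for a single observation (batch size 1), to keep the sketch simple.
        h = self.block1(obs)
        q1 = self.exit1(h)
        confidence = torch.softmax(q1, dim=-1).max(dim=-1).values
        if confidence.item() >= self.threshold:        # easy input: skip deeper layers
            return q1
        return self.exit2(self.block2(h))              # hard input: spend the full compute

policy = EarlyExitPolicy(obs_dim=8, num_actions=4)
action_values = policy(torch.randn(1, 8))
```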
- Podracer architectures for scalable Reinforcement Learning [23.369001500657028]
How to best train reinforcement learning (RL) agents at scale is still an active research area.
In this report we argue that TPUs are particularly well suited for training RL agents in a scalable, efficient and reproducible way.
arXiv Detail & Related papers (2021-04-13T15:05:35Z)
- AWAC: Accelerating Online Reinforcement Learning with Offline Datasets [84.94748183816547]
We show that our method, advantage weighted actor critic (AWAC), enables rapid learning of skills with a combination of prior demonstration data and online experience.
Our results show that incorporating prior data can reduce the time required to learn a range of robotic skills to practical time-scales.
arXiv Detail & Related papers (2020-06-16T17:54:41Z)
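AWAC's actor update weights the log-likelihood of actions drawn from the replay buffer (including prior demonstrations) by exponentiated advantages, which is what lets offline data and online experience share one objective. A minimal sketch of that weighted actor loss, with illustrative shapes and temperature:

```python
import torch

def awac_actor_loss(log_probs, advantages, temperature=1.0):
    """Advantage-weighted actor loss: maximize log pi(a|s) weighted by exp(A(s,a) / lambda).

    log_probs:  (batch,) log-probabilities of the batch actions under the current policy.
    advantages: (batch,) critic advantages Q(s,a) - V(s), treated as constants here.
    """
    weights = torch.exp(advantages.detach() / temperature)
    # Weights are often clipped or normalized in practice for numerical stability.
    return -(log_probs * weights).mean()

log_probs = torch.tensor([-1.0, -0.5, -2.0], requires_grad=True)
advantages = torch.tensor([0.3, -0.1, 1.2])
loss = awac_actor_loss(log_probs, advantages)
loss.backward()
```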
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.