Fine-Tuning Language Models Using Formal Methods Feedback
- URL: http://arxiv.org/abs/2310.18239v1
- Date: Fri, 27 Oct 2023 16:24:24 GMT
- Title: Fine-Tuning Language Models Using Formal Methods Feedback
- Authors: Yunhao Yang, Neel P. Bhatt, Tyler Ingebrand, William Ward, Steven
Carr, Zhangyang Wang, Ufuk Topcu
- Abstract summary: We present a fully automated approach to fine-tune pre-trained language models for applications in autonomous systems.
The method synthesizes automaton-based controllers from pre-trained models guided by natural language task descriptions.
The results indicate an improvement in the percentage of specifications satisfied by the controller from 60% to 90%.
- Score: 53.24085794087253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although pre-trained language models encode generic knowledge beneficial for
planning and control, they may fail to generate appropriate control policies
for domain-specific tasks. Existing fine-tuning methods use human feedback to
address this limitation; however, sourcing human feedback is labor-intensive
and costly. We present a fully automated approach to fine-tune pre-trained
language models for applications in autonomous systems, bridging the gap
between generic knowledge and domain-specific requirements while reducing cost.
The method synthesizes automaton-based controllers from pre-trained models
guided by natural language task descriptions. These controllers are verifiable
against independently provided specifications within a world model, which can
be abstract or obtained from a high-fidelity simulator. Controllers with high
compliance with the desired specifications receive higher ranks, guiding the
iterative fine-tuning process. We provide quantitative evidence, primarily in
autonomous driving, to demonstrate the method's effectiveness across multiple
tasks. The results indicate an improvement in the percentage of specifications
satisfied by the controller from 60% to 90%.
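The verification-guided ranking loop at the core of the method can be illustrated compactly. Below is a minimal, self-contained Python sketch, not the authors' implementation: the automaton encoding, the toy world model, the trace-predicate stand-ins for formal specifications, and every name in it are illustrative assumptions.

```python
# Minimal sketch of verification-guided ranking: candidate automaton-based
# controllers are scored by the fraction of specifications they satisfy in a
# world model, and high-compliance candidates would guide fine-tuning.
# Everything below (names, dynamics, specs) is illustrative, not the paper's code.

from dataclasses import dataclass


@dataclass
class Controller:
    """Automaton-based controller: (state, observation) -> (next state, action)."""
    transitions: dict
    initial: str = "q0"

    def run(self, world, steps=10):
        q, obs, trace = self.initial, world.reset(), []
        for _ in range(steps):
            q, action = self.transitions.get((q, obs), (q, "stay"))
            trace.append((obs, action))  # record the observation seen before acting
            obs = world.step(action)
        return trace


class ToyWorld:
    """Abstract world model: a one-dimensional road with a stop line at x = 3."""
    def reset(self):
        self.x = 0
        return self._obs()

    def step(self, action):
        self.x += 1 if action == "go" else 0
        return self._obs()

    def _obs(self):
        return "at_stop" if self.x == 3 else "clear"


# Trace predicates standing in for temporal-logic specifications.
def never_runs_stop(trace):
    return all(not (obs == "at_stop" and act == "go") for obs, act in trace)

def eventually_moves(trace):
    return any(act == "go" for _, act in trace)

SPECS = [never_runs_stop, eventually_moves]


def compliance(controller, world, specs):
    """Fraction of specifications the controller satisfies in the world model."""
    trace = controller.run(world)
    return sum(spec(trace) for spec in specs) / len(specs)


def rank(candidates, specs):
    """Higher specification compliance ranks first; a fine-tuning step would
    then prefer the plans behind top-ranked controllers."""
    return sorted(candidates, key=lambda c: compliance(c, ToyWorld(), specs), reverse=True)


if __name__ == "__main__":
    stops = Controller({("q0", "clear"): ("q0", "go"), ("q0", "at_stop"): ("q1", "stay")})
    runs = Controller({("q0", "clear"): ("q0", "go"), ("q0", "at_stop"): ("q0", "go")})
    ranked = rank([stops, runs], SPECS)
    print([compliance(c, ToyWorld(), SPECS) for c in ranked])  # [1.0, 0.5]
```

In the paper's setting the toy world is replaced by an abstract world model or a high-fidelity simulator and the trace predicates by independently provided formal specifications; the sketch only shows how compliance scores induce the ranking that drives iterative fine-tuning.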
Related papers
- PID Control-Based Self-Healing to Improve the Robustness of Large Language Models [23.418411870842178]
Minor perturbations can significantly reduce the performance of well-trained language models.
We construct a computationally efficient self-healing process to correct undesired model behavior.
The proposed PID control-based self-healing is a low-cost framework that improves the robustness of pre-trained large language models (a generic sketch of the underlying PID law appears after this list).
arXiv Detail & Related papers (2024-03-31T23:46:51Z)
- SALMON: Self-Alignment with Instructable Reward Models [80.83323636730341]
This paper presents a novel approach, namely SALMON, to align base language models with minimal human supervision.
We develop an AI assistant named Dromedary-2 with only 6 exemplars for in-context learning and 31 human-defined principles.
arXiv Detail & Related papers (2023-10-09T17:56:53Z)
- A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z)
- Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and Perception [20.12750360095627]
Recently developed pretrained models can encode rich world knowledge expressed in multiple modalities, such as text and images.
We develop an algorithm that utilizes the knowledge from pretrained models to construct and verify controllers for sequential decision-making tasks.
We demonstrate the algorithm's ability to construct, verify, and ground automaton-based controllers through a suite of real-world tasks.
arXiv Detail & Related papers (2023-08-10T02:29:11Z)
- Fully Decentralized Model-based Policy Optimization for Networked Systems [23.46407780093797]
This work aims to improve data efficiency of multi-agent control by model-based learning.
We consider networked systems where agents are cooperative and communicate only locally with their neighbors.
In our method, each agent learns a dynamics model to predict future states and broadcasts its predictions to its neighbors; the policies are then trained on model rollouts.
arXiv Detail & Related papers (2022-07-13T23:52:14Z)
- Bridging the Gap Between Training and Inference of Bayesian Controllable Language Models [58.990214815032495]
Large-scale pre-trained language models have achieved great success on natural language generation tasks.
BCLMs have been shown to be efficient in controllable language generation.
We propose a "Gemini Discriminator" for controllable language generation which alleviates the mismatch problem with a small computational cost.
arXiv Detail & Related papers (2022-06-11T12:52:32Z)
- SideControl: Controlled Open-domain Dialogue Generation via Additive Side Networks [10.607177634432214]
We propose a novel approach to control the generation of Transformer-based pre-trained language models: the SideControl framework.
Results show that the SideControl framework has better controllability, higher generation quality and better sample-efficiency than existing gradient-based and weighted-decoding baselines.
arXiv Detail & Related papers (2021-09-05T01:15:26Z)
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
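As referenced in the PID control-based self-healing entry above, the discrete PID law that such a framework builds on has a compact generic form. This is a minimal sketch of the control law only, assuming nothing about the paper's self-healing mechanism; the function name and parameters are illustrative.

```python
# Generic discrete PID update: u = kp*e + ki*sum(e)*dt + kd*(de/dt).
# Shown only to illustrate the control law named in the entry's title;
# how the paper applies it to language-model self-healing is not reproduced here.

def pid_step(error, prev_error, integral, kp, ki, kd, dt=1.0):
    integral += error * dt                  # accumulate the integral term
    derivative = (error - prev_error) / dt  # finite-difference derivative term
    control = kp * error + ki * integral + kd * derivative
    return control, integral                # control signal and updated integral state
```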