Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)
- URL: http://arxiv.org/abs/2302.09551v2
- Date: Sat, 6 May 2023 09:54:17 GMT
- Title: Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)
- Authors: Jiahua Xu, Daniel Perez, Yebo Feng, Benjamin Livshits
- Abstract summary: Decentralized finance (DeFi) protocols employ off-chain governance, where token holders vote to modify parameters.
However, manual parameter adjustment, often conducted by the protocol's core team, is vulnerable to collusion, compromising the integrity and security of the system.
We present "Auto.gov", a learning-based on-chain governance framework for DeFi that enhances security and reduces susceptibility to attacks.
- Score: 18.849149890999687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, decentralized finance (DeFi) has experienced remarkable
growth, with various protocols such as lending protocols and automated market
makers (AMMs) emerging. Traditionally, these protocols employ off-chain
governance, where token holders vote to modify parameters. However, manual
parameter adjustment, often conducted by the protocol's core team, is
vulnerable to collusion, compromising the integrity and security of the system.
Furthermore, purely deterministic, algorithm-based approaches may expose the
protocol to novel exploits and attacks.
In this paper, we present "Auto.gov", a learning-based on-chain governance
framework for DeFi that enhances security and reduces susceptibility to
attacks. Our model leverages a deep Q-network (DQN) reinforcement learning
approach to propose semi-automated, intuitive governance proposals with
quantitative justifications. This methodology enables the system to efficiently
adapt to and mitigate the negative impact of malicious behaviors, such as price
oracle attacks, more effectively than benchmark models. Our evaluation
demonstrates that Auto.gov offers a more reactive, objective, efficient, and
resilient solution compared to existing manual processes, thereby significantly
bolstering the security and, ultimately, enhancing the profitability of DeFi
protocols.
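The DQN-based parameter-tuning idea can be illustrated with a toy sketch. Everything below — the discretized collateral factors, the oracle-shock reward model, and the use of tabular Q-learning in place of a deep network — is an illustrative assumption, not the paper's actual implementation:

```python
import random

# Toy sketch (not Auto.gov's code): Q-learning over a discretized
# governance parameter, e.g. a lending protocol's collateral factor.
# The reward model below is a hypothetical stand-in for protocol revenue
# versus insolvency risk under price oracle attacks.

FACTORS = [0.5, 0.6, 0.7, 0.8, 0.9]   # candidate collateral factors
ACTIONS = [-1, 0, +1]                  # lower, keep, or raise the factor

def reward(idx, oracle_shock):
    """Higher factors earn more utilization revenue but risk insolvency
    when the price oracle is under attack (oracle_shock=True)."""
    factor = FACTORS[idx]
    if oracle_shock and factor > 0.7:
        return -10.0        # under-collateralized liquidations
    return factor           # normal operation: revenue scales with factor

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(FACTORS)) for a in range(len(ACTIONS))}
    state = 2
    for _ in range(episodes):
        # Epsilon-greedy action selection over the tabular Q-values.
        a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
             else max(range(len(ACTIONS)), key=lambda x: q[(state, x)]))
        nxt = min(max(state + ACTIONS[a], 0), len(FACTORS) - 1)
        r = reward(nxt, oracle_shock=rng.random() < 0.3)
        best_next = max(q[(nxt, x)] for x in range(len(ACTIONS)))
        q[(state, a)] += alpha * (r + gamma * best_next - q[(state, a)])
        state = nxt
    return q, state

q, final_state = train()
```

With a 30% chance of an oracle shock, the agent learns that factors above 0.7 are negative in expectation and settles on the conservative region — the same kind of quantitative justification the framework attaches to its proposals.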
Related papers
- Improving DeFi Accessibility through Efficient Liquidity Provisioning with Deep Reinforcement Learning [0.3376269351435395]
This paper applies deep reinforcement learning (DRL) to optimize liquidity provision in a DeFi protocol.
By promoting more efficient liquidity management, this work aims to make DeFi markets more accessible and inclusive for a broader range of participants.
arXiv Detail & Related papers (2025-01-13T17:27:11Z)
- Towards Autonomous Cybersecurity: An Intelligent AutoML Framework for Autonomous Intrusion Detection [21.003217781832923]
This paper proposes an Automated Machine Learning (AutoML)-based autonomous IDS framework towards achieving autonomous cybersecurity for next-generation networks.
The proposed AutoML-based IDS was evaluated on two public benchmark network security datasets, CICIDS 2017 and 5G-NIDD.
This research marks a significant step towards fully autonomous cybersecurity in next-generation networks, potentially revolutionizing network security applications.
arXiv Detail & Related papers (2024-09-05T00:36:23Z)
- Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction [4.968718867282096]
Federated Learning (FL) offers a distributed framework to train a global control model across multiple base stations.
This makes it ideal for applications like wireless traffic prediction (WTP), which plays a crucial role in optimizing network resources.
Despite its promise, the security aspects of FL-based distributed wireless systems, particularly in regression-based WTP problems, remain inadequately investigated.
arXiv Detail & Related papers (2024-04-22T17:50:27Z)
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
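The consensus-based verification idea can be sketched as follows. The median-distance criterion and the mean-plus-k-standard-deviations threshold below are illustrative assumptions, not the paper's exact mechanism:

```python
import statistics

# Illustrative sketch (not the paper's algorithm): validate client model
# updates by consensus before aggregation. An update is accepted only if
# its distance from the coordinate-wise median update stays below an
# adaptive threshold (mean distance + k standard deviations).

def median_update(updates):
    return [statistics.median(col) for col in zip(*updates)]

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def filter_updates(updates, k=1.5):
    center = median_update(updates)
    dists = [l2(u, center) for u in updates]
    threshold = statistics.mean(dists) + k * statistics.pstdev(dists)
    return [u for u, d in zip(updates, dists) if d <= threshold]

# Nine honest clients push the model in roughly the same direction;
# one label-flipping attacker pushes the opposite way.
honest = [[0.1 + 0.01 * i, -0.2, 0.05] for i in range(9)]
attacker = [[-3.0, 4.0, -2.5]]
accepted = filter_updates(honest + attacker)
```

Because the threshold adapts to the observed spread of distances, the filter rejects the outlying (flipped) update without a fixed, hand-tuned cutoff.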
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus lacks built-in encryption and authentication, leaving in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
arXiv Detail & Related papers (2024-01-06T21:04:31Z)
- Fully Decentralized Model-based Policy Optimization for Networked Systems [23.46407780093797]
This work aims to improve data efficiency of multi-agent control by model-based learning.
We consider networked systems where agents are cooperative and communicate only locally with their neighbors.
In our method, each agent learns a dynamic model to predict future states and broadcast their predictions by communication, and then the policies are trained under the model rollouts.
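The local-communication pattern described above can be sketched in a minimal form. The trivial linear dynamics model, the agent names, and the line-graph topology are all hypothetical, chosen only to illustrate "learn a local model, broadcast predictions to neighbors":

```python
# Hypothetical sketch: each agent fits a one-step dynamics model from its
# own transitions and shares its next-state prediction only with its
# graph neighbors (no global communication).

class Agent:
    def __init__(self, name):
        self.name = name
        self.coef = 1.0   # parameter of a trivial linear model s' = coef * s
        self.inbox = {}   # neighbor name -> that neighbor's predicted next state

    def fit(self, transitions):
        # Least-squares fit of s' = coef * s on (s, s') pairs.
        num = sum(s * s_next for s, s_next in transitions)
        den = sum(s * s for s, _ in transitions)
        if den:
            self.coef = num / den

    def predict(self, s):
        return self.coef * s

def broadcast(agents, neighbors, state):
    # Each agent sends its prediction to its neighbors only.
    for a in agents.values():
        for n in neighbors[a.name]:
            agents[n].inbox[a.name] = a.predict(state)

agents = {name: Agent(name) for name in "ABC"}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # a line graph
agents["A"].fit([(1.0, 2.0), (2.0, 4.0)])               # learns coef = 2.0
broadcast(agents, neighbors, state=1.5)
```

Policies would then be trained on rollouts generated from these exchanged predictions rather than on real environment steps alone.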
arXiv Detail & Related papers (2022-07-13T23:52:14Z)
- On Effective Scheduling of Model-based Reinforcement Learning [53.027698625496015]
In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance.
We propose a framework named AutoMBPO to automatically schedule the real data ratio.
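The "gradually increase the real-data ratio" intuition can be written down as a minimal schedule. Note this fixed linear ramp and its endpoint values are illustrative assumptions; AutoMBPO itself learns the schedule automatically rather than fixing one:

```python
def real_data_ratio(step, total_steps, start=0.1, end=0.9):
    """Linearly anneal the fraction of real (vs. model-generated) samples
    in each policy-training batch, increasing it over the course of
    training. Endpoint values are illustrative, not from the paper."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * frac
```

Early batches lean on cheap model rollouts; later batches rely mostly on real transitions as the policy approaches convergence.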
arXiv Detail & Related papers (2021-11-16T15:24:59Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Federated Learning on the Road: Autonomous Controller Design for Connected and Autonomous Vehicles [109.71532364079711]
A new federated learning (FL) framework is proposed for designing the autonomous controller of connected and autonomous vehicles (CAVs).
A novel dynamic federated proximal (DFP) algorithm is proposed that accounts for the mobility of CAVs, the wireless fading channels, and the unbalanced and nonindependent and identically distributed data across CAVs.
A rigorous convergence analysis is performed for the proposed algorithm to identify how fast the CAVs converge to using the optimal controller.
arXiv Detail & Related papers (2021-02-05T19:57:47Z)
- Regulation conform DLT-operable payment adapter based on trustless - justified trust combined generalized state channels [77.34726150561087]
Economy of Things (EoT) will be based on software agents running on peer-to-peer trustless networks.
We give an overview of current solutions that differ in their fundamental values and technological possibilities.
We propose to combine the strengths of the crypto based, decentralized trustless elements with established and well regulated means of payment.
arXiv Detail & Related papers (2020-07-03T10:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.