Model-Based Control with Sparse Neural Dynamics
- URL: http://arxiv.org/abs/2312.12791v1
- Date: Wed, 20 Dec 2023 06:25:02 GMT
- Title: Model-Based Control with Sparse Neural Dynamics
- Authors: Ziang Liu, Genggeng Zhou, Jeff He, Tobia Marcucci, Li Fei-Fei, Jiajun Wu, Yunzhu Li
- Abstract summary: We propose a new framework for integrated model learning and predictive control.
We show that our framework can deliver better closed-loop performance than existing state-of-the-art methods.
- Score: 23.961218902837807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning predictive models from observations using deep neural networks
(DNNs) is a promising new approach to many real-world planning and control
problems. However, common DNNs are too unstructured for effective planning, and
current control methods typically rely on extensive sampling or local gradient
descent. In this paper, we propose a new framework for integrated model
learning and predictive control that is amenable to efficient optimization
algorithms. Specifically, we start with a ReLU neural model of the system
dynamics and, with minimal losses in prediction accuracy, we gradually sparsify
it by removing redundant neurons. This discrete sparsification process is
approximated as a continuous problem, enabling an end-to-end optimization of
both the model architecture and the weight parameters. The sparsified model is
subsequently used by a mixed-integer predictive controller, which represents
the neuron activations as binary variables and employs efficient
branch-and-bound algorithms. Our framework is applicable to a wide variety of
DNNs, from simple multilayer perceptrons to complex graph neural dynamics. It
can efficiently handle tasks involving complicated contact dynamics, such as
object pushing, compositional object sorting, and manipulation of deformable
objects. Numerical and hardware experiments show that, despite the aggressive
sparsification, our framework can deliver better closed-loop performance than
existing state-of-the-art methods.
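The abstract describes two pieces: (1) a discrete neuron-removal problem relaxed into a continuous one so the architecture and weights can be optimized end to end, and (2) a mixed-integer controller that encodes each ReLU activation with a binary variable. The sketch below illustrates both ideas in miniature. The two-layer network, the per-neuron gate parameterization, and the tolerance constants are illustrative assumptions, not the paper's exact formulation; the big-M constraints in part 2 are the standard mixed-integer encoding of a ReLU, consistent with the abstract's description.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Part 1: continuous relaxation of neuron removal ---
# Hypothetical 2-layer ReLU dynamics model: x_next = W2 @ relu(W1 @ x).
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(3, 8))

def forward(x, gates):
    """Predict the next state with per-neuron gates g in [0, 1]:
    g_i = 1 keeps hidden neuron i, g_i = 0 removes it, and fractional
    values give the continuous relaxation that makes the architecture
    search differentiable jointly with the weights."""
    h = np.maximum(W1 @ x, 0.0)   # ReLU hidden layer
    return W2 @ (gates * h)       # gated (sparsified) hidden layer

x = rng.normal(size=3)
soft = rng.uniform(size=8)             # relaxed gates (trainable in practice)
hard = (soft > 0.5).astype(float)      # rounding recovers a pruned subnetwork
y_sparse = forward(x, hard)

# --- Part 2: big-M encoding of one ReLU for the mixed-integer controller ---
def relu_bigM_feasible(z, h, a, lo, hi):
    """Check the standard big-M constraints representing h = max(z, 0)
    with a binary activation indicator a, given bounds lo <= z <= hi.
    A branch-and-bound solver branches on a to resolve the ReLU."""
    return (h >= z - 1e-9 and h >= -1e-9
            and h <= z - lo * (1 - a) + 1e-9
            and h <= hi * a + 1e-9)

# For any pre-activation z in [lo, hi], the point h = max(z, 0) with the
# matching indicator a satisfies all four constraints.
lo, hi = -2.0, 2.0
for z in (-1.5, 0.0, 1.2):
    a = 1 if z > 0 else 0
    assert relu_bigM_feasible(z, max(z, 0.0), a, lo, hi)
```

With all gates set to 1 the gated forward pass reduces to the original dense model, which is why rounding the relaxed gates yields a strictly smaller network with controllable prediction loss.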
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Geometric sparsification in recurrent neural networks [0.8851237804522972]
We propose a new technique for sparsification of recurrent neural nets (RNNs) called moduli regularization.
We show that moduli regularization induces more stable RNNs with a variety of moduli regularizers, and achieves high fidelity models at 98% sparsity.
arXiv Detail & Related papers (2024-06-10T14:12:33Z)
- Dynamically configured physics-informed neural network in topology optimization applications [4.403140515138818]
The physics-informed neural network (PINN) can avoid generating enormous amounts of data when solving forward problems.
A dynamically configured PINN-based topology optimization (DCPINN-TO) method is proposed.
The accuracy of the displacement prediction and optimization results indicate that the DCPINN-TO method is effective and efficient.
arXiv Detail & Related papers (2023-12-12T05:35:30Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Near-optimal control of dynamical systems with neural ordinary differential equations [0.0]
Recent advances in deep learning and neural network-based optimization have contributed to the development of methods that can help solve control problems involving high-dimensional dynamical systems.
We first analyze how truncated and non-truncated backpropagation through time affect runtime performance and the ability of neural networks to learn optimal control functions.
arXiv Detail & Related papers (2022-06-22T14:11:11Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Neural-iLQR: A Learning-Aided Shooting Method for Trajectory Optimization [17.25824905485415]
We present Neural-iLQR, a learning-aided shooting method over the unconstrained control space.
It is shown to outperform the conventional iLQR significantly in the presence of inaccuracies in system models.
arXiv Detail & Related papers (2020-11-21T07:17:28Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Persistent Neurons [4.061135251278187]
We propose a trajectory-based strategy that optimizes the learning task using information from previous solutions.
Persistent neurons can be regarded as a method with gradient informed bias where individual updates are corrupted by deterministic error terms.
We evaluate the full and partial persistent model and show it can be used to boost the performance on a range of NN structures.
arXiv Detail & Related papers (2020-07-02T22:36:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.