Non-collinear density functional theory
- URL: http://arxiv.org/abs/2110.09897v2
- Date: Wed, 11 Jan 2023 01:07:41 GMT
- Title: Non-collinear density functional theory
- Authors: Zhichen Pu, Hao Li, Qiming Sun, Ning Zhang, Yong Zhang, Sihong Shao,
Hong Jiang, Yiqin Gao, Yunlong Xiao
- Abstract summary: This approach satisfies the correct collinear limit for any kind of functional.
It has well-defined and numerically stable functional derivatives.
It provides local torque, hinting at applications in spin dynamics.
- Score: 15.872687786457826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An approach is proposed for generalizing any kind of collinear functional
in density functional theory to a non-collinear functional. This approach, for
the first time, satisfies the correct collinear limit for any kind of
functional, guaranteeing that an exact collinear functional remains exact for
collinear spins after generalization. In addition, it has well-defined and
numerically stable functional derivatives, a desired feature for non-collinear
and spin-flip time-dependent density functional theory. Furthermore, it
provides local torque, hinting at applications in spin dynamics.
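For background, here is a minimal sketch (standard literature, not the paper's own construction) of the conventional locally collinear generalization and the shortcomings that motivate the claims above:

```latex
% Conventional locally collinear (Kuebler-style) generalization: evaluate the
% collinear functional on the density n and the magnetization magnitude |m|.
\[
  E_{xc}^{\mathrm{nc}}[n,\mathbf{m}] = E_{xc}^{\mathrm{col}}[n,|\mathbf{m}|],
  \qquad
  n_{\pm} = \tfrac{1}{2}\bigl(n \pm |\mathbf{m}|\bigr).
\]
% For purely local functionals the collinear limit m = (0, 0, m_z) gives
% |m| = |m_z| and recovers the collinear result, but the derivative
\[
  \frac{\partial |\mathbf{m}|}{\partial m_i} = \frac{m_i}{|\mathbf{m}|}
\]
% is ill-defined as |m| -> 0 (numerically unstable functional derivatives),
% the xc magnetic field stays parallel to m (zero local torque m x B_xc),
% and for gradient-dependent functionals grad|m_z| differs from grad(m_z)
% wherever the spin density changes sign, so the collinear limit breaks.
```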
Related papers
- Kernel Operator-Theoretic Bayesian Filter for Nonlinear Dynamical Systems [25.922732994397485]
We propose a machine-learning alternative based on a functional Bayesian perspective for operator-theoretic modeling of nonlinear dynamical systems.
The formulation is carried out directly in an infinite-dimensional space of linear operators, i.e., a Hilbert space with the universal approximation property.
We demonstrate that this practical approach can obtain accurate results and outperform finite-dimensional Koopman decompositions.
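For context, a minimal sketch of the finite-dimensional Koopman baseline (EDMD) that the paper reports outperforming; the toy dynamics and dictionary below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Minimal EDMD sketch (background, not the paper's method): approximate the
# Koopman operator of x_{t+1} = f(x_t) on a finite dictionary of observables.
def edmd(xs, dictionary):
    Phi = dictionary(xs[:-1])          # snapshots lifted to feature space
    Phi_next = dictionary(xs[1:])
    # Least-squares Koopman matrix K: Phi_next ~ Phi @ K
    K, *_ = np.linalg.lstsq(Phi, Phi_next, rcond=None)
    return K

f = lambda x: 0.9 * x + 0.1 * np.sin(x)   # toy nonlinear dynamics (assumed)
xs = [1.5]
for _ in range(200):
    xs.append(f(xs[-1]))
xs = np.array(xs)

dictionary = lambda x: np.stack([np.ones_like(x), x, x**2, np.sin(x)], axis=1)
K = edmd(xs, dictionary)
# One-step prediction through the lifted space (second feature is x itself):
print("predicted:", (dictionary(xs[-1:]) @ K)[0, 1], "true:", f(xs[-1]))
```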
arXiv Detail & Related papers (2024-10-31T20:31:31Z)
- Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult [49.8719617899285]
Large learning rates, when applied to gradient descent on nonconvex objectives, yield various implicit biases, including the edge of stability, balancing, and catapult.
This paper provides an initial theoretical step toward understanding these phenomena and shows that these implicit biases are in fact different tips of the same iceberg.
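A minimal illustration (not from the paper) of the classical stability threshold behind the edge-of-stability phenomenon: plain gradient descent on a quadratic with sharpness lambda is stable only when the step size satisfies eta < 2/lambda:

```python
# Minimal sketch (not from the paper): gradient descent on the 1-D quadratic
# f(x) = 0.5 * sharpness * x**2 illustrates the classical stability threshold
# eta < 2 / sharpness that underlies "edge of stability" phenomena.
def gd(sharpness, eta, x0=1.0, steps=20):
    x = x0
    for _ in range(steps):
        x -= eta * sharpness * x   # gradient step: f'(x) = sharpness * x
    return abs(x)

for eta in (0.9, 1.9, 2.1):        # sharpness = 1, so the threshold is eta = 2
    print(f"eta={eta}: |x_T| = {gd(1.0, eta):.3e}")
# eta < 2 contracts, eta > 2 diverges; training neural networks at large eta
# hovers near this threshold instead of diverging (the paper's subject).
```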
arXiv Detail & Related papers (2023-10-26T01:11:17Z)
- Function-Space Optimality of Neural Architectures With Multivariate Nonlinearities [30.762063524541638]
We prove a representer theorem stating that the solution sets of learning problems posed over Banach spaces are completely characterized by neural architectures with multivariate nonlinearities.
Our results shed light on the regularity of functions learned by neural networks trained on data, and provide new theoretical motivation for several architectural choices found in practice.
arXiv Detail & Related papers (2023-10-05T17:13:16Z)
- Linear convergence of forward-backward accelerated algorithms without knowledge of the modulus of strong convexity [14.0409219811182]
We show that both Nesterov's accelerated gradient descent (NAG) and FISTA exhibit linear convergence for strongly convex functions.
We emphasize the distinctive approach employed in crafting the Lyapunov function, which involves a dynamically adapting coefficient of kinetic energy.
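A runnable sketch (illustrative, not the paper's proof) of the setting: NAG with the convex-case momentum schedule, which does not use the strong-convexity modulus mu, still converges geometrically on a strongly convex quadratic:

```python
import numpy as np

# Minimal sketch (assumed toy problem): Nesterov's accelerated gradient (NAG)
# on a strongly convex quadratic, run with the convex-case momentum schedule,
# i.e., without knowledge of the strong-convexity modulus mu.
L, mu = 10.0, 0.5
grad = lambda x: np.array([L * x[0], mu * x[1]])   # f(x) = (L x1^2 + mu x2^2)/2

x = np.array([1.0, 1.0])
y = x.copy()
t = 1.0
for k in range(200):
    x_next = y - grad(y) / L                        # gradient step, eta = 1/L
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2       # convex-case schedule
    y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum extrapolation
    x, t = x_next, t_next
    if k % 50 == 0:
        print(k, np.linalg.norm(x))                 # error shrinks geometrically
```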
arXiv Detail & Related papers (2023-06-16T08:58:40Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
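For background, a sketch of the plain Laplace approximation that the LLA builds on, applied to a 1-D toy posterior (the toy potential is an assumption, not from the paper); the linearized variant additionally linearizes the model around the MAP estimate:

```python
# Background sketch (not the paper's method): a Laplace approximation fits
# N(theta_map, 1/H), where H is the curvature of the negative log posterior
# at the MAP point found by Newton's method.
neg_log_post = lambda t: 0.25 * t**4 + 0.5 * t**2 - t   # toy, non-Gaussian
grad = lambda t: t**3 + t - 1
hess = lambda t: 3 * t**2 + 1

theta = 0.0
for _ in range(20):                  # Newton iterations to find the MAP
    theta -= grad(theta) / hess(theta)

var = 1.0 / hess(theta)              # Gaussian variance from local curvature
print(f"Laplace fit: N(mean={theta:.4f}, var={var:.4f})")
```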
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z)
- Continuous Function Structured in Multilayer Perceptron for Global Optimization [0.0]
The gradient information of a multilayer perceptron with a linear neuron is modified with a functional derivative for benchmark global-minimum search problems.
We show that the gradient landscape derived from a given continuous function via the functional derivative can take the form of a network of ax+b (linear) neurons.
arXiv Detail & Related papers (2023-03-07T14:50:50Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Owing to the complex non-linear characteristics of samples, the objective of these activation functions is to project samples from their original feature space into a linearly separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean-field Langevin dynamics is presented.
A key ingredient is the proximal Gibbs distribution $p_q$ associated with the dynamics, which allows us to develop a convergence theory parallel to classical results in convex optimization.
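As background, a particle-level sketch (toy potential assumed, not from the paper): in the linear, non-interacting special case, mean-field Langevin dynamics reduces to standard Langevin dynamics whose stationary law is the Gibbs distribution proportional to exp(-V/lambda):

```python
import numpy as np

# Minimal sketch (illustrative only): a particle discretization of Langevin
# dynamics, the non-interacting special case of mean-field Langevin dynamics.
rng = np.random.default_rng(0)
n, eta, lam, steps = 1000, 0.05, 0.1, 2000   # particles, step, temperature

grad_V = lambda x: x**3 - x                  # toy double-well V = x^4/4 - x^2/2
x = rng.normal(size=n)
for _ in range(steps):
    noise = rng.normal(size=n)
    # Euler-Maruyama step: dX = -grad V(X) dt + sqrt(2 * lam) dW
    x = x - eta * grad_V(x) + np.sqrt(2 * lam * eta) * noise

# The empirical measure approximates the Gibbs law ~ exp(-V / lam), which is
# bimodal at x = +/-1 for this potential at low temperature.
print("mean:", x.mean(), "second moment:", (x**2).mean())
```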
arXiv Detail & Related papers (2022-01-25T17:13:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.