Self-consistent Validation for Machine Learning Electronic Structure
- URL: http://arxiv.org/abs/2402.10186v1
- Date: Thu, 15 Feb 2024 18:41:35 GMT
- Title: Self-consistent Validation for Machine Learning Electronic Structure
- Authors: Gengyuan Hu, Gengchen Wei, Zekun Lou, Philip H.S. Torr, Wanli Ouyang,
Han-sen Zhong, Chen Lin
- Abstract summary: The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
- Score: 81.54661501506185
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning has emerged as a significant approach to efficiently tackle
electronic structure problems. Despite its potential, there is little guarantee
that the model will generalize to unseen data, which hinders its application in
real-world scenarios. To address this issue, a technique has been proposed to
estimate the accuracy of the predictions. This method integrates machine
learning with self-consistent field methods to achieve both low validation cost
and interpretability. This, in turn, enables exploration of the model's
ability with active learning and instills confidence in its integration into
real-world studies.
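As a rough illustration of the idea (a hedged sketch only, not the authors' implementation; the model callable and all names below are hypothetical), a self-consistency check can reuse one SCF-style iteration as a label-free error estimate: the predicted solution is mapped back through the field equations, and the size of the round-trip residual indicates how far the prediction is from a self-consistent fixed point.

```python
# Hypothetical sketch: validate an ML electronic-structure prediction by one
# self-consistent field (SCF) round trip, density -> Hamiltonian -> density.
import numpy as np

def density_from_hamiltonian(h: np.ndarray, n_occ: int) -> np.ndarray:
    """Build an idempotent density matrix from the n_occ lowest orbitals of h."""
    _, vecs = np.linalg.eigh(h)          # eigenvectors, ascending eigenvalues
    occ = vecs[:, :n_occ]                # occupied orbitals
    return occ @ occ.T                   # projector onto the occupied space

def self_consistency_residual(predict_hamiltonian, rho_pred, n_occ):
    """Label-free error estimate for a predicted density matrix rho_pred.

    predict_hamiltonian is the (hypothetical) ML model mapping a density
    matrix to an effective Hamiltonian. A self-consistent prediction maps
    back onto itself, so the Frobenius norm of the change is near zero.
    """
    h = predict_hamiltonian(rho_pred)
    rho_next = density_from_hamiltonian(h, n_occ)
    return np.linalg.norm(rho_next - rho_pred)
```

In an active-learning loop, inputs with large residuals would be the natural candidates to send back for new ab initio labels.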
Related papers
- Privacy Preservation through Practical Machine Unlearning [0.0]
This paper examines methods such as Naive Retraining and Exact Unlearning via the SISA framework.
We explore the potential of integrating unlearning principles into Positive Unlabeled (PU) Learning to address challenges posed by partially labeled datasets.
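For readers unfamiliar with SISA (Sharded, Isolated, Sliced, Aggregated), here is a minimal sharding-only sketch (slicing omitted; class and parameter names are invented for illustration, binary labels assumed): deleting a sample retrains only the constituent model whose shard contained it.

```python
# SISA-style exact unlearning, sharding only: one model per disjoint data
# shard, majority-vote aggregation, per-shard retraining on deletion.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SisaEnsemble:
    def __init__(self, X, y, n_shards=4, seed=0):
        idx = np.random.default_rng(seed).permutation(len(X))
        self.shards = np.array_split(idx, n_shards)   # disjoint index sets
        self.X, self.y = X, y
        self.models = [self._fit(s) for s in self.shards]

    def _fit(self, shard):
        return LogisticRegression(max_iter=1000).fit(self.X[shard], self.y[shard])

    def unlearn(self, sample_idx):
        # Exact unlearning: retrain only the shard holding the deleted sample.
        for k, shard in enumerate(self.shards):
            if sample_idx in shard:
                self.shards[k] = shard[shard != sample_idx]
                self.models[k] = self._fit(self.shards[k])
                break

    def predict(self, X):
        # Majority vote over per-shard models (binary labels 0/1 assumed).
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) > 0.5).astype(int)
```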
arXiv Detail & Related papers (2025-02-15T02:25:27Z)
- Probabilities-Informed Machine Learning [0.0]
This study introduces an ML paradigm inspired by domain knowledge of the structure of output function, akin to physics-informed ML.
The proposed approach integrates the probabilistic structure of the target variable into the training process.
It enhances model accuracy and mitigates risks of overfitting and underfitting.
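The summary does not give the paper's formulation, so the following is only one hedged reading in the spirit of physics-informed losses: a standard regression loss plus a penalty that pulls the distribution of batch predictions toward assumed-known quantiles of the target variable.

```python
# Hypothetical "probability-informed" objective: task loss + a penalty for
# mismatch between predicted and known target distributions.
import torch

def probability_informed_loss(pred, target, target_quantiles, weight=0.1):
    """MSE plus a quantile-matching penalty.

    target_quantiles: quantiles of the target's known distribution (domain
    knowledge), a sorted 1-D tensor; an assumption of this sketch.
    """
    mse = torch.mean((pred - target) ** 2)
    q = torch.linspace(0, 1, len(target_quantiles))
    pred_quantiles = torch.quantile(pred.flatten(), q)
    dist_penalty = torch.mean((pred_quantiles - target_quantiles) ** 2)
    return mse + weight * dist_penalty
```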
arXiv Detail & Related papers (2024-12-16T08:01:22Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning [7.557226714828334]
We present a novel unlearning mechanism designed to remove the impact of specific data samples from a neural network.
In achieving this goal, we crafted a novel loss function tailored to eliminate privacy-sensitive information from weights and activation values of the target model.
Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency as well as the fidelity of the primary task.
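The paper's actual loss is not reproduced in this summary; a hypothetical stand-in with the same flavor keeps task fidelity on retained data while driving forget-set outputs toward an uninformative uniform distribution, which weakens membership-inference signals.

```python
# Illustrative unlearning objective (assumed form, not the paper's):
# preserve the primary task on retained data, erase signal on forgotten data.
import torch.nn.functional as F

def unlearning_loss(model, x_retain, y_retain, x_forget, alpha=1.0):
    # Primary-task fidelity on the data we keep.
    retain_loss = F.cross_entropy(model(x_retain), y_retain)
    # Push forget-set predictions toward uniform: this mean equals the
    # cross-entropy against a uniform target, so minimizing it removes
    # confident (membership-revealing) predictions on forgotten samples.
    log_probs = F.log_softmax(model(x_forget), dim=1)
    forget_loss = -log_probs.mean()
    return retain_loss + alpha * forget_loss
```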
arXiv Detail & Related papers (2024-07-01T00:20:26Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
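As one plausible control (an assumption of this sketch, not necessarily the paper's mechanism), gradient ascent on the forget set can be capped by a loss threshold so the model is not destroyed by excessive unlearning:

```python
# GA unlearning with a stopping control: ascend on the forget-set loss,
# but halt once that loss exceeds a threshold.
import torch

def ga_unlearn(model, forget_loader, loss_fn, lr=1e-5, max_loss=5.0, steps=100):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), forget_loader):
        loss = loss_fn(model(x), y)
        if loss.item() > max_loss:   # control: cap the extent of unlearning
            break
        opt.zero_grad()
        (-loss).backward()           # ascent: maximize loss on forget data
        opt.step()
    return model
```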
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Generalizing Machine Learning Evaluation through the Integration of Shannon Entropy and Rough Set Theory [0.0]
We introduce a comprehensive framework that synergizes the granularity of rough set theory with the uncertainty quantification of Shannon entropy.
Our methodology is rigorously tested on various datasets, showcasing its capability to not only assess predictive performance but also to illuminate the underlying data complexity and model robustness.
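Under standard textbook definitions (how the paper combines the two is not detailed in this summary), the ingredients might be computed as follows: the rough-set approximation quality of a decision table, alongside the Shannon entropy of the label distribution.

```python
# Rough-set approximation quality plus Shannon entropy, under standard
# definitions; the paper's specific combination is assumed away here.
from collections import Counter, defaultdict
import math

def approximation_quality(rows, attrs, labels):
    """Fraction of rows whose attribute-equivalence class is label-pure,
    i.e. the size of the rough-set positive region over all classes.

    rows: iterable of dicts keyed by attribute name (assumed format).
    """
    blocks = defaultdict(list)
    for row, label in zip(rows, labels):
        blocks[tuple(row[a] for a in attrs)].append(label)
    pure = sum(len(b) for b in blocks.values() if len(set(b)) == 1)
    return pure / len(rows)

def shannon_entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```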
arXiv Detail & Related papers (2024-04-18T21:22:42Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Towards Automated Knowledge Integration From Human-Interpretable Representations [55.2480439325792]
We introduce and motivate theoretically the principles of informed meta-learning enabling automated and controllable inductive bias selection.
We empirically demonstrate the potential benefits and limitations of informed meta-learning in improving data efficiency and generalisation.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- Graceful Degradation and Related Fields [0.0]
Graceful degradation refers to the optimisation of model performance as it encounters out-of-distribution data.
This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems.
arXiv Detail & Related papers (2021-06-21T13:56:41Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.