Incorporating prior knowledge about structural constraints in model
identification
- URL: http://arxiv.org/abs/2007.04030v1
- Date: Wed, 8 Jul 2020 11:09:59 GMT
- Title: Incorporating prior knowledge about structural constraints in model
identification
- Authors: Deepak Maurya, Sivadurgaprasad Chinta, Abhishek Sivaram and
Raghunathan Rengaswamy
- Abstract summary: We propose model identification techniques that can leverage such partial information to produce better estimates.
Specifically, we propose Structural Principal Component Analysis (SPCA), which improves upon existing methods such as PCA.
The efficacy of the proposed approach is demonstrated using synthetic and industrial case-studies.
- Score: 1.376408511310322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model identification is a crucial problem in chemical industries. In recent
years, there has been increasing interest in learning data-driven models
utilizing partial knowledge about the system of interest. Most techniques for
model identification do not provide the freedom to incorporate any partial
information such as the structure of the model. In this article, we propose
model identification techniques that can leverage such partial information to
produce better estimates. Specifically, we propose Structural Principal
Component Analysis (SPCA), which improves upon existing methods such as PCA by
utilizing essential structural information about the model. Most existing
methods, and closely related ones, rely on sparsity constraints, which can be
computationally expensive. Our proposed method is a judicious modification of
PCA that exploits structural information. The efficacy of the proposed approach
is demonstrated using synthetic and industrial case studies.
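The abstract does not spell out the SPCA algorithm itself, but the PCA baseline it builds on is standard: a linear constraint model A x ≈ 0 is identified from the eigenvectors of the data covariance associated with the smallest eigenvalues. The sketch below illustrates only that baseline on hypothetical synthetic data (all names and values are illustrative, not from the paper); SPCA would additionally impose known structural zeros on A, which is not shown here.

```python
import numpy as np

# Illustrative sketch: PCA-based identification of a linear constraint
# model A x ~ 0 from noisy data. The constraint matrix is recovered from
# the eigenvectors of the sample covariance associated with the smallest
# eigenvalues (the approximate null space of the data).

def pca_identify(X, n_constraints):
    """Estimate an (n_constraints x m) constraint matrix from data X (n x m)."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    # Eigenvectors of the smallest eigenvalues span the constraint space.
    return eigvecs[:, :n_constraints].T

# Synthetic example: 3 variables obeying one constraint x1 + x2 - x3 = 0.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 2))
X = np.column_stack([z[:, 0], z[:, 1], z[:, 0] + z[:, 1]])
X += 0.01 * rng.normal(size=X.shape)         # small measurement noise

A_hat = pca_identify(X, n_constraints=1)
A_hat = A_hat / A_hat[0, 0]                  # normalize the first coefficient
print(np.round(A_hat, 2))                    # approx. [[ 1.  1. -1.]]
```

With low noise the smallest principal direction recovers the true constraint vector (1, 1, -1) up to scale; structural prior knowledge (e.g. that a coefficient is exactly zero) is the kind of information SPCA is designed to exploit on top of this.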
Related papers
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
  We develop a few-shot segmentation (FSS) framework based on foundation models.
  Specifically, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
  Experiments on two widely used datasets demonstrate the effectiveness of our approach.
  arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
  Recent advances in machine learning have significantly impacted the field of information extraction.
  We reformulate the task to be entity-centric, enabling the use of diverse metrics.
  We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
  arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- Low-dimensional Data-based Surrogate Model of a Continuum-mechanical Musculoskeletal System Based on Non-intrusive Model Order Reduction [0.0]
  Non-traditional approaches such as surrogate modeling using data-driven model order reduction are used to make high-fidelity models more widely available.
  We demonstrate the benefits of the surrogate modeling approach on a complex finite element model of a human upper arm.
  arXiv Detail & Related papers (2023-02-13T17:14:34Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
  We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
  In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
  Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
  arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
  Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
  This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
  We show empirically, through experiments on toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
  arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Artefact Retrieval: Overview of NLP Models with Knowledge Base Access [18.098224374478598]
  This paper systematically describes the typology of artefacts (items retrieved from a knowledge base), retrieval mechanisms and the way these artefacts are fused into the model.
  Most of the focus is given to language models, though we also show how question answering, fact-checking and dialogue models fit into this system as well.
  arXiv Detail & Related papers (2022-01-24T13:15:33Z)
- Explanation of Machine Learning Models Using Shapley Additive Explanation and Application for Real Data in Hospital [0.11470070927586014]
  We propose two novel techniques for better interpretability of machine learning models.
  We show how the A/G ratio works as an important prognostic factor for cerebral infarction using our hospital data and the proposed techniques.
  arXiv Detail & Related papers (2021-12-21T10:08:31Z)
- Provably Robust Model-Centric Explanations for Critical Decision-Making [14.367217955827002]
  We show that data-centric methods may yield brittle explanations of limited practical utility.
  The model-centric framework, however, can offer actionable insights into the risks of using AI models in practice.
  arXiv Detail & Related papers (2021-10-26T18:05:49Z)
- Model-Based Deep Learning [155.063817656602]
  Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
  Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
  We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
  arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
  We propose to adopt post-hoc methods to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
  Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
  Experimental results show the feasibility of using the LRP method for interpreting the DLKT model's predictions.
  arXiv Detail & Related papers (2020-05-13T04:03:21Z)
- Value of Information Analysis via Active Learning and Knowledge Sharing in Error-Controlled Adaptive Kriging [7.148732567427574]
  This paper proposes the first surrogate-based framework for value of information (VoI) analysis.
  It affords sharing equality-type information from observations among surrogate models to update the likelihoods of multiple events of interest.
  The proposed VoI analysis framework is applied to an optimal decision-making problem involving load testing of a truss bridge.
  arXiv Detail & Related papers (2020-02-06T16:58:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.