A review on data-driven constitutive laws for solids
- URL: http://arxiv.org/abs/2405.03658v1
- Date: Mon, 6 May 2024 17:33:58 GMT
- Title: A review on data-driven constitutive laws for solids
- Authors: Jan Niklas Fuhg, Govinda Anantha Padmanabha, Nikolaos Bouklas, Bahador Bahmani, WaiChing Sun, Nikolaos N. Vlassis, Moritz Flaschel, Pietro Carrara, Laura De Lorenzis
- Abstract summary: This review article highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws.
Our objective is to provide an organized taxonomy to a large spectrum of methodologies developed in the past decades.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This review article highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws that describe the path-independent and path-dependent response of solids. Our objective is to provide an organized taxonomy to a large spectrum of methodologies developed in the past decades and to discuss the benefits and drawbacks of the various techniques for interpreting and forecasting mechanics behavior across different scales. Distinguishing between machine-learning-based and model-free methods, we further categorize approaches based on their interpretability and on their learning process/type of required data, while discussing the key problems of generalization and trustworthiness. We attempt to provide a road map of how these can be reconciled in a data-availability-aware context. We also touch upon relevant aspects such as data sampling techniques, design of experiments, verification, and validation.
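To make the machine-learning-based branch of this taxonomy concrete, the sketch below is a minimal illustration, not taken from the paper: it fits a small neural network to a strain-energy function of two strain invariants and recovers a stress-like response by automatic differentiation, so the learned law is path-independent by construction. The function names, network size, and synthetic data are all hypothetical assumptions.

```python
# A minimal sketch, assuming JAX, of one pattern covered by the review:
# learning a hyperelastic strain-energy surrogate W(I1, I2) and obtaining
# the response by automatic differentiation. Everything here (mlp_energy,
# the data, the architecture) is an illustrative assumption.
import jax
import jax.numpy as jnp

def mlp_energy(params, invariants):
    """Tiny MLP mapping strain invariants (I1, I2) to a scalar energy W."""
    h = jnp.tanh(invariants @ params["W1"] + params["b1"])
    return jnp.squeeze(h @ params["W2"] + params["b2"])

# Differentiating the learned energy gives a stress-like quantity dW/dI,
# so path-independence is built in at the architecture level.
dW_dI = jax.grad(mlp_energy, argnums=1)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {
    "W1": 0.1 * jax.random.normal(k1, (2, 16)), "b1": jnp.zeros(16),
    "W2": 0.1 * jax.random.normal(k2, (16, 1)), "b2": jnp.zeros(1),
}

# Synthetic training pairs: invariants X and energies y from a made-up
# linear law, standing in for simulation or experimental data.
X = 1.0 + jax.random.uniform(k3, (256, 2))
y = 0.5 * (X[:, 0] - 1.0) + 0.25 * (X[:, 1] - 1.0)

def loss(params):
    pred = jax.vmap(lambda x: mlp_energy(params, x))(X)
    return jnp.mean((pred - y) ** 2)

for _ in range(200):  # plain gradient descent, enough for a sketch
    grads = jax.grad(loss)(params)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

print("fitted loss:", loss(params))
print("dW/dI at a sample point:", dW_dI(params, jnp.array([1.3, 1.1])))
```

Path-dependent behavior such as plasticity would additionally require internal variables or recurrent structure, which is where many of the surveyed approaches differ.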
Related papers
- A review on discriminative self-supervised learning methods [6.24302896438145]
Self-supervised learning has emerged as a method to extract robust features from unlabeled data.
This paper provides a review of discriminative approaches of self-supervised learning within the domain of computer vision.
arXiv Detail & Related papers (2024-05-08T11:15:20Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations (a minimal sketch of this probing setup appears after this list).
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Enhancing Explainability in Mobility Data Science through a combination of methods [0.08192907805418582]
This paper introduces a comprehensive framework that harmonizes pivotal XAI techniques: LIME (Local Interpretable Model-agnostic Explanations), SHAP, saliency maps, attention mechanisms, direct trajectory visualization, and Permutation Feature Importance (PFI).
To validate our framework, we undertook a survey to gauge preferences and reception among various user demographics.
arXiv Detail & Related papers (2023-12-01T07:09:21Z)
- Towards Trustworthy and Aligned Machine Learning: A Data-centric Survey with Causality Perspectives [11.63431725146897]
The trustworthiness of machine learning has emerged as a critical topic in the field.
This survey presents the background of trustworthy machine learning development using a unified set of concepts.
We provide a unified language with mathematical vocabulary to link these methods across robustness, adversarial robustness, interpretability, and fairness.
arXiv Detail & Related papers (2023-07-31T17:11:35Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Model Positionality and Computational Reflexivity: Promoting Reflexivity in Data Science [10.794642538442107]
We describe how the concepts of positionality and reflexivity can be adapted to provide a framework for understanding data science work.
We describe the challenges of adapting these concepts for data science work and offer annotator fingerprinting and position mining as promising solutions.
arXiv Detail & Related papers (2022-03-08T16:02:03Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Self-Supervised Representation Learning: Introduction, Advances and Challenges [125.38214493654534]
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets.
This article introduces this vibrant area including key concepts, the four main families of approach and associated state of the art, and how self-supervised methods are applied to diverse modalities of data.
arXiv Detail & Related papers (2021-10-18T13:51:22Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Ontology-based Interpretable Machine Learning for Textual Data [35.01650633374998]
We introduce a novel interpretation framework that learns an interpretable model based on a sampling technique to explain prediction models.
To narrow down the search space for explanations, we design a learnable anchor algorithm.
A set of rules is further introduced for combining the learned interpretable representations with anchors to generate comprehensible explanations.
arXiv Detail & Related papers (2020-04-01T02:51:57Z)
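As a concrete illustration of the linear-probing idea in the speech-representation entry above, here is a minimal, hypothetical sketch: a linear probe is fit on frozen representations, and a lower bound on the mutual information I(Z; Y) is read off as H(Y) minus the probe's cross-entropy. The data, probe, and training loop are illustrative stand-ins, not that paper's actual setup.

```python
# A minimal sketch, assuming JAX: estimate a mutual-information lower bound
# via a linear probe, I(Z; Y) >= H(Y) - CE(probe). Z and Y are synthetic
# stand-ins for frozen self-supervised features and a downstream label.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
kz, ky = jax.random.split(key)

Z = jax.random.normal(kz, (512, 32))                                  # fake features
Y = (Z[:, 0] + 0.1 * jax.random.normal(ky, (512,)) > 0).astype(jnp.float32)

def probe_ce(params, Z, Y):
    """Binary cross-entropy (nats) of a linear probe p(y=1|z) = sigmoid(z @ w + b)."""
    logits = Z @ params["w"] + params["b"]
    return jnp.mean(jax.nn.softplus(-logits) * Y
                    + jax.nn.softplus(logits) * (1.0 - Y))

params = {"w": jnp.zeros(32), "b": jnp.zeros(())}
for _ in range(300):  # plain gradient descent; real probes would use a solver
    grads = jax.grad(probe_ce)(params, Z, Y)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.5 * g, params, grads)

# Label entropy H(Y), then the variational lower bound H(Y) - CE.
p1 = jnp.mean(Y)
h_y = -(p1 * jnp.log(p1) + (1.0 - p1) * jnp.log(1.0 - p1))
print("MI lower bound (nats):", h_y - probe_ce(params, Z, Y))
```

The bound is only as tight as a linear model can make it, which is one reason probing results are usually read as estimates rather than exact mutual-information values.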
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.