Explainable Artificial Intelligent (XAI) for Predicting Asphalt Concrete Stiffness and Rutting Resistance: Integrating Bailey's Aggregate Gradation Method
- URL: http://arxiv.org/abs/2410.21298v1
- Date: Wed, 16 Oct 2024 02:39:55 GMT
- Title: Explainable Artificial Intelligent (XAI) for Predicting Asphalt Concrete Stiffness and Rutting Resistance: Integrating Bailey's Aggregate Gradation Method
- Authors: Warat Kongkitkul, Sompote Youwai, Siwipa Khamsoy, Manaswee Feungfung
- Abstract summary: This study employs explainable artificial intelligence (XAI) techniques to analyze the behavior of asphalt concrete with varying aggregate gradations.
The model's performance was validated using k-fold cross-validation, demonstrating superior accuracy compared to alternative machine learning approaches.
The study revealed size-dependent performance of aggregates, with coarse aggregates primarily affecting rutting resistance and medium-fine aggregates influencing stiffness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study employs explainable artificial intelligence (XAI) techniques to analyze the behavior of asphalt concrete with varying aggregate gradations, focusing on resilient modulus (MR) and dynamic stability (DS) as measured by wheel track tests. The research utilizes a deep learning model with a multi-layer perceptron architecture to predict MR and DS based on aggregate gradation parameters derived from Bailey's Method, including coarse aggregate ratio (CA), fine aggregate coarse ratio (FAc), and other mix design variables. The model's performance was validated using k-fold cross-validation, demonstrating superior accuracy compared to alternative machine learning approaches. SHAP (SHapley Additive exPlanations) values were applied to interpret the model's predictions, providing insights into the relative importance and impact of different gradation characteristics on asphalt concrete performance. Key findings include the identification of critical aggregate size thresholds, particularly the 0.6 mm sieve size, which significantly influences both MR and DS. The study revealed size-dependent performance of aggregates, with coarse aggregates primarily affecting rutting resistance and medium-fine aggregates influencing stiffness. The research also highlighted the importance of aggregate lithology in determining rutting resistance. To facilitate practical application, web-based interfaces were developed for predicting MR and DS, incorporating explainable features to enhance transparency and interpretation of results. This research contributes a data-driven approach to understanding the complex relationships between aggregate gradation and asphalt concrete performance, potentially informing more efficient and performance-oriented mix design processes in the future.
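The Shapley attribution idea behind the SHAP analysis can be sketched in a few lines. The snippet below computes exact Shapley values for one mix design relative to a baseline; the feature names (CA, FAc, binder percentage), the linear stand-in for the trained MLP, and all numeric values are illustrative assumptions, not the paper's data or model.

```python
from itertools import combinations
from math import comb

# Hypothetical Bailey's-Method features (illustrative only): CA, FAc, binder %
FEATURES = ["CA", "FAc", "binder_pct"]

def shapley_values(f, x, baseline):
    """Exact SHAP-style attribution: features outside a coalition S are
    replaced by their baseline value; phi[i] averages f's marginal gain
    from adding feature i, over all coalitions with Shapley weights."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = 1.0 / (n * comb(n - 1, k))  # |S|!(n-|S|-1)!/n!
                z_with = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                z_wo = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(z_with) - f(z_wo))
    return phi

# Toy stand-in for the trained MLP: a linear response surface for MR.
def toy_mr_model(z):
    w = [120.0, 80.0, -15.0]  # made-up sensitivities, not fitted values
    return 2000.0 + sum(wi * zi for wi, zi in zip(w, z))

x = [0.55, 0.42, 5.0]         # one hypothetical mix design
baseline = [0.50, 0.40, 4.5]  # hypothetical dataset-average mix
phi = shapley_values(toy_mr_model, x, baseline)
```

By the efficiency property, the attributions sum to `toy_mr_model(x) - toy_mr_model(baseline)`; for a linear model each `phi[i]` reduces to the coefficient times the feature's deviation from baseline, which makes the sketch easy to sanity-check.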
Related papers
- Your Language Model May Think Too Rigidly: Achieving Reasoning Consistency with Symmetry-Enhanced Training [66.48331530995786]
We propose syMmetry-ENhanceD (MEND) Data Augmentation, a data-centric approach that improves the model's ability to extract useful information from context.
Unlike existing methods that emphasize reasoning chain augmentation, our approach improves model robustness at the knowledge extraction stage.
Experiments on both logical and arithmetic reasoning tasks show that MEND enhances reasoning performance across diverse query variations.
arXiv Detail & Related papers (2025-02-25T03:03:35Z)
- Explainable Artificial Intelligence Model for Evaluating Shear Strength Parameters of Municipal Solid Waste Across Diverse Compositional Profiles [0.0]
This paper presents a novel explainable artificial intelligence (XAI) framework for evaluating cohesion and friction angle across diverse compositional profiles.
The proposed model integrates a multi-layer perceptron architecture with SHAP (SHapley Additive exPlanations) analysis.
The model demonstrated superior predictive accuracy compared to traditional gradient boosting methods.
arXiv Detail & Related papers (2025-02-20T05:02:55Z)
- Efficient Multi-Agent System Training with Data Influence-Oriented Tree Search [59.75749613951193]
We propose Data Influence-oriented Tree Search (DITS) to guide both tree search and data selection.
By leveraging influence scores, we effectively identify the most impactful data for system improvement.
We derive influence score estimation methods tailored for non-differentiable metrics.
arXiv Detail & Related papers (2025-02-02T23:20:16Z)
- Clear Minds Think Alike: What Makes LLM Fine-tuning Robust? A Study of Token Perplexity [61.48338027901318]
We show that fine-tuning with LLM-generated data improves target task performance and reduces out-of-domain degradation.
This is the first mechanistic explanation for the superior OOD robustness conferred by LLM-generated training data.
arXiv Detail & Related papers (2025-01-24T08:18:56Z)
- Testing and Improving the Robustness of Amortized Bayesian Inference for Cognitive Models [0.5223954072121659]
Contaminant observations and outliers often cause problems when estimating the parameters of cognitive models.
In this study, we test and improve the robustness of parameter estimation using amortized Bayesian inference.
The proposed method is straightforward and practical to implement and has a broad applicability in fields where outlier detection or removal is challenging.
arXiv Detail & Related papers (2024-12-29T21:22:24Z)
- Bridging Interpretability and Robustness Using LIME-Guided Model Refinement [0.0]
Local Interpretable Model-Agnostic Explanations (LIME) are used to systematically enhance model robustness.
Empirical evaluations on multiple benchmark datasets demonstrate that LIME-guided refinement not only improves interpretability but also significantly enhances resistance to adversarial perturbations and generalization to out-of-distribution data.
arXiv Detail & Related papers (2024-12-25T17:32:45Z)
- Robust Time Series Causal Discovery for Agent-Based Model Validation [5.430532390358285]
This study proposes a Robust Cross-Validation (RCV) approach to enhance causal structure learning for ABM validation.
We develop RCV-VarLiNGAM and RCV-PCMCI, novel extensions of two prominent causal discovery algorithms.
The proposed approach is then integrated into an enhanced ABM validation framework.
arXiv Detail & Related papers (2024-10-25T09:13:26Z)
- How much do we really know about Structure Learning from i.i.d. Data? Interpretable, multi-dimensional Performance Indicator for Causal Discovery [3.8443430569753025]
Causal discovery from observational data imposes strict identifiability assumptions on the formulation of the structural equations used in the data-generating process.
Motivated by the lack of a unified performance-assessment framework, we introduce an interpretable, six-dimensional evaluation metric: distance to optimal solution (DOS).
This is the first research to assess the performance of structure learning algorithms from seven different families on increasing percentage of non-identifiable, nonlinear causal patterns.
arXiv Detail & Related papers (2024-09-28T15:03:49Z)
- Revisiting Spurious Correlation in Domain Generalization [12.745076668687748]
We build a structural causal model (SCM) to describe the causality within the data-generating process.
We further conduct a thorough analysis of the mechanisms underlying spurious correlation.
In this regard, we propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator.
arXiv Detail & Related papers (2024-06-17T13:22:00Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Understanding Robust Overfitting from the Feature Generalization Perspective [61.770805867606796]
Adversarial training (AT) constructs robust neural networks by incorporating adversarial perturbations into natural data.
It is plagued by the issue of robust overfitting (RO), which severely damages the model's robustness.
In this paper, we investigate RO from a novel feature generalization perspective.
arXiv Detail & Related papers (2023-10-01T07:57:03Z)
- Employing Explainable Artificial Intelligence (XAI) Methodologies to Analyze the Correlation between Input Variables and Tensile Strength in Additively Manufactured Samples [0.0]
This research paper explores the impact of various input parameters, including Infill percentage, Layer Height, Extrusion Temperature, and Print Speed, on the resulting Tensile Strength in objects produced through additive manufacturing.
We introduce the utilization of Explainable Artificial Intelligence (XAI) techniques for the first time, which allowed us to analyze the data and gain valuable insights into the system's behavior.
Our findings reveal that the Infill percentage and Extrusion Temperature have the most significant influence on Tensile Strength, while the impact of Layer Height and Print Speed is relatively minor.
arXiv Detail & Related papers (2023-05-28T21:44:25Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Improved Sensitivity of Base Layer on the Performance of Rigid Pavement [0.0]
The performance of rigid pavement is greatly affected by the properties of base/subbase and subgrade layer.
The performance predicted by the AASHTOWare Pavement ME design shows low sensitivity to the properties of base and subgrade layers.
To improve the sensitivity and better reflect the influence of unbound layers, a new set of improved models is adopted in this study.
arXiv Detail & Related papers (2021-01-20T23:43:41Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
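The cross-fitting scheme behind these doubly-robust estimators can be sketched directly: nuisance models are fit on out-of-fold data, and AIPW scores are evaluated only on the held-out fold. In the sketch below the nuisance models are reduced to simple treatment-arm means and a constant propensity as placeholders for the machine-learning learners the paper studies; the data and the assumption that the fold count divides the sample size are purely illustrative.

```python
def crossfit_aipw(Y, A, K=2):
    """Cross-fit AIPW estimate of the average causal effect (ACE).

    Y: outcomes; A: binary treatment indicators. For each fold, nuisance
    models (here: per-arm outcome means and a constant propensity, as
    stand-ins for ML learners) are fit on the OTHER folds, then AIPW
    scores are computed on the held-out fold. Assumes K divides len(Y).
    """
    n = len(Y)
    size = n // K
    folds = [list(range(k * size, (k + 1) * size)) for k in range(K)]
    scores = []
    for k in range(K):
        train = [i for i in range(n) if i not in folds[k]]
        treated = [i for i in train if A[i] == 1]
        control = [i for i in train if A[i] == 0]
        mu1 = sum(Y[i] for i in treated) / len(treated)  # E[Y | A=1]
        mu0 = sum(Y[i] for i in control) / len(control)  # E[Y | A=0]
        e = len(treated) / len(train)                    # propensity
        for i in folds[k]:  # doubly-robust score on held-out units
            scores.append(mu1 - mu0
                          + A[i] * (Y[i] - mu1) / e
                          - (1 - A[i]) * (Y[i] - mu0) / (1 - e))
    return sum(scores) / n

# Synthetic sanity check (hypothetical data): true effect is +2,
# treatment is unconfounded, so the estimator should recover 2.0.
Y = [1, 3, 1, 3, 1, 3, 1, 3]
A = [0, 1, 0, 1, 0, 1, 0, 1]
ace = crossfit_aipw(Y, A, K=2)
```

Because each unit's score uses nuisance estimates fit without that unit, the outcome-model and propensity errors enter only as a product, which is what gives the cross-fit estimator its favorable bias and coverage properties when paired with flexible ML learners.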
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.