Improved Sensitivity of Base Layer on the Performance of Rigid Pavement
- URL: http://arxiv.org/abs/2101.09167v1
- Date: Wed, 20 Jan 2021 23:43:41 GMT
- Title: Improved Sensitivity of Base Layer on the Performance of Rigid Pavement
- Authors: Sajib Saha, Fan Gu, Xue Luo, and Robert L. Lytton
- Abstract summary: The performance of rigid pavement is greatly affected by the properties of base/subbase and subgrade layer.
The performance predicted by the AASHTOWare Pavement ME design shows low sensitivity to the properties of base and subgrade layers.
To improve the sensitivity and better reflect the influence of unbound layers, a new set of improved models is adopted in this study.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The performance of rigid pavement is greatly affected by the properties of
base/subbase as well as subgrade layer. However, the performance predicted by
the AASHTOWare Pavement ME design shows low sensitivity to the properties of
base and subgrade layers. To improve the sensitivity and better reflect the
influence of unbound layers, a new set of improved models, i.e., resilient
modulus (MR) and modulus of subgrade reaction (k-value), is adopted in this
study. An Artificial Neural Network (ANN) model is developed to predict the
modified k-value based on finite element (FE) analysis. The training and
validation datasets in the ANN model consist of 27000 simulation cases with
different combinations of pavement layer thickness, layer modulus and slab-base
interface bond ratio. To examine the sensitivity of pavement response to the
modified MR and k-values, data for eight pavement sections are collected from
the Long-Term Pavement Performance (LTPP) database and modeled using the FE
software ISLAB2000. The computational results indicate that, with the modified
MR values, the critical stress and deflection responses of rigid pavements are
more sensitive to the water content of the base layer than with the Pavement
ME design model. It is also observed that the ANN-based k-values can predict
the critical pavement response at any partially bonded condition, whereas the
Pavement ME design model can only calculate the two extreme bonding conditions
(i.e., fully bonded and unbonded).
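The ANN regression described above can be sketched in miniature. The snippet below trains a small feed-forward network (one tanh hidden layer, plain NumPy) to map hypothetical pavement inputs (layer thicknesses, base modulus, bond ratio) to a k-value-like target. The architecture, synthetic data, and target function are illustrative assumptions, not the authors' trained model or the 27,000 FE simulation cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized inputs: [slab thickness, base thickness, base modulus, bond ratio]
X = rng.uniform(0.0, 1.0, size=(256, 4))
# Hypothetical smooth target standing in for the FE-derived modified k-value
y = (0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.2 * X[:, 3]).reshape(-1, 1)

# One hidden layer with tanh activation
W1 = rng.normal(0, 0.5, size=(4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros((1, 1))

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.1
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two-layer net
    dW2 = h.T @ err / len(X); db2 = err.mean(0, keepdims=True)
    dh = err @ W2.T * (1 - h ** 2)   # tanh derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The real study would replace the synthetic target with k-values computed by the FE analysis and validate on held-out simulation cases.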
Related papers
- Explainable Artificial Intelligent (XAI) for Predicting Asphalt Concrete Stiffness and Rutting Resistance: Integrating Bailey's Aggregate Gradation Method [0.0]
This study employs explainable artificial intelligence (XAI) techniques to analyze the behavior of asphalt concrete with varying aggregate gradations.
The model's performance was validated using k-fold cross-validation, demonstrating superior accuracy compared to alternative machine learning approaches.
The study revealed size-dependent performance of aggregates, with coarse aggregates primarily affecting rutting resistance and medium-fine aggregates influencing stiffness.
arXiv Detail & Related papers (2024-10-16T02:39:55Z)
- Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation [7.200910949076064]
Federated Learning (FL) enables multiple clients to collaboratively train a model without sharing their local data.
Yet the FL system is vulnerable to well-designed Byzantine attacks, which aim to disrupt the model training process by uploading malicious model updates.
We propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness.
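As a loose illustration of LASA's two ingredients, the sketch below sparsifies each client update to its top-k magnitude entries before aggregation, then aggregates with a coordinate-wise median as a simple robust stand-in for the paper's layer-wise adaptive rule. The clients and updates are invented for the example.

```python
import numpy as np

def sparsify_topk(update, k):
    # Pre-aggregation sparsification: keep only the k largest-magnitude entries
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

def robust_aggregate(updates):
    # Coordinate-wise median: a simple robust stand-in for the paper's
    # layer-wise adaptive aggregation rule
    return np.median(np.stack(updates), axis=0)

# Three honest clients plus one Byzantine client with a poisoned update
honest = [np.array([1.0, 1.0, 0.1, 0.0]) for _ in range(3)]
byzantine = np.array([100.0, -100.0, 100.0, -100.0])
sparse = [sparsify_topk(u, k=2) for u in honest + [byzantine]]
agg = robust_aggregate(sparse)   # the poisoned coordinates are voted out
```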
arXiv Detail & Related papers (2024-09-02T19:28:35Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time, and achieves state-of-the-art view synthesis performance with thorough training.
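The "observations plus predictions" idea is the classic Kalman predict/update cycle, shown here for a 1-D scalar state. The motion model, noise levels, and scalar state are illustrative assumptions, not KFD-NeRF's actual dynamic radiance-field state.

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=1e-3, H=1.0, R=0.1):
    # Predict: propagate state estimate and its variance through the motion model
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: fuse the prediction with the observation z via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
true_state = 2.0
x, P = 0.0, 1.0   # poor initial guess, high uncertainty
for _ in range(200):
    z = true_state + rng.normal(0, 0.3)   # noisy observation
    x, P = kalman_step(x, P, z)
# x converges toward true_state; P shrinks as evidence accumulates
```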
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- On the Impact of Sampling on Deep Sequential State Estimation [17.92198582435315]
State inference and parameter learning in sequential models can be successfully performed with approximation techniques.
Tighter Monte Carlo objectives have been proposed in the literature to enhance generative modeling performance.
arXiv Detail & Related papers (2023-11-28T17:59:49Z)
- Layer-wise Feedback Propagation [53.00944147633484]
We present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors.
LFP assigns rewards to individual connections based on their respective contributions to solving a given task.
We demonstrate its effectiveness in achieving comparable performance to gradient descent on various models and datasets.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Effect of Batch Normalization on Noise Resistant Property of Deep Learning Models [3.520496620951778]
There are concerns about analog noise, which perturbs model weights and degrades the performance of deep learning models.
This work investigates the effect of the popular batch normalization layer on the noise-resistant ability of deep learning models.
arXiv Detail & Related papers (2022-05-15T20:10:21Z)
- SmoothNets: Optimizing CNN architecture design for differentially private deep learning [69.10072367807095]
DPSGD requires clipping and noising of per-sample gradients.
This introduces a reduction in model utility compared to non-private training.
We distilled a new model architecture termed SmoothNet, which is characterised by increased robustness to the challenges of DP-SGD training.
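The clipping-and-noising step itself is straightforward to sketch: clip each per-sample gradient to a fixed norm, sum, add Gaussian noise scaled to the clip norm, and average. The hyperparameters and gradients below are made-up examples, and no privacy accounting is included.

```python
import numpy as np

def dpsgd_step(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    # Clip each per-sample gradient so its L2 norm is at most clip_norm
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    # Sum, add Gaussian noise calibrated to the clip norm, then average
    total = np.sum(clipped, axis=0)
    total = total + rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return total / len(per_sample_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]   # norms 5.0 and 0.5
noiseless = dpsgd_step(grads, clip_norm=1.0, noise_mult=0.0)
```

With `noise_mult=0.0` the result is just the mean of the clipped gradients, which makes the utility loss from clipping alone easy to inspect.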
arXiv Detail & Related papers (2022-05-09T07:51:54Z)
- Exploring Heterogeneous Characteristics of Layers in ASR Models for More Efficient Training [1.3999481573773072]
We study the stability of these layers across runs and model sizes.
We propose that group normalization may be used without disrupting their formation.
We apply these findings to Federated Learning in order to improve the training procedure.
arXiv Detail & Related papers (2021-10-08T17:25:19Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods can hardly achieve true inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
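A minimal sketch of micro-structured pruning, assuming a simple block-magnitude scoring rule: whole blocks with the smallest mean absolute weight are zeroed, leaving a hardware-friendly sparsity pattern. The block size, keep ratio, and criterion are illustrative, not the paper's unification framework.

```python
import numpy as np

def block_prune(W, block=(2, 2), keep_ratio=0.5):
    # Zero out whole blocks with the smallest mean |weight|
    rows, cols = W.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    blocks = W.reshape(rows // br, br, cols // bc, bc)
    scores = np.abs(blocks).mean(axis=(1, 3))          # one score per block
    k = int(scores.size * keep_ratio)                  # number of blocks to keep
    thresh = np.sort(scores.ravel())[::-1][k - 1]      # k-th largest score
    mask = (scores >= thresh)[:, None, :, None]        # broadcast over block dims
    return (blocks * mask).reshape(rows, cols)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
P = block_prune(W, block=(2, 2), keep_ratio=0.5)       # 2 of 4 blocks survive
```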
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory layers (LSTM) to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
- Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
arXiv Detail & Related papers (2020-07-17T01:56:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.