Application of Clustering Algorithms for Dimensionality Reduction in
Infrastructure Resilience Prediction Models
- URL: http://arxiv.org/abs/2205.03316v1
- Date: Fri, 6 May 2022 15:51:05 GMT
- Title: Application of Clustering Algorithms for Dimensionality Reduction in
Infrastructure Resilience Prediction Models
- Authors: Srijith Balakrishnan, Beatrice Cassottana, Arun Verma
- Abstract summary: We present a clustering-based method that simultaneously mitigates the problem of high dimensionality and improves the prediction accuracy of machine learning models.
The proposed method can be used to develop decision-support tools for post-disaster recovery of infrastructure networks.
- Score: 4.350783459690612
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent studies increasingly adopt simulation-based machine learning (ML)
models to analyze critical infrastructure system resilience. For realistic
applications, these ML models consider the component-level characteristics that
influence the network response during emergencies. However, such an approach
could result in a large number of features and cause ML models to suffer from
the 'curse of dimensionality'. We present a clustering-based method that
simultaneously mitigates the problem of high dimensionality and improves the
prediction accuracy of ML models developed for resilience analysis in
large-scale interdependent infrastructure networks. The methodology has three
parts: (a) generation of simulation dataset, (b) network component clustering,
and (c) dimensionality reduction and development of prediction models. First,
an interdependent infrastructure simulation model simulates the network-wide
consequences of various disruptive events. The component-level features are
extracted from the simulated data. Next, clustering algorithms are used to
derive the cluster-level features by grouping component-level features based on
their topological and functional characteristics. Finally, ML algorithms are
used to develop models that predict the network-wide impacts of disruptive
events using the cluster-level features. The applicability of the method is
demonstrated using an interdependent power-water-transport testbed. The
proposed method can be used to develop decision-support tools for post-disaster
recovery of infrastructure networks.
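As a rough illustration of steps (b) and (c), the sketch below clusters synthetic component-level attributes with k-means, averages the component-level features within each cluster to obtain cluster-level features, and fits a regressor on them. The synthetic data, the feature definitions, the number of clusters, and the choice of k-means and a random forest are assumptions made for this example, not the paper's exact configuration.

```python
# Hypothetical sketch of steps (b) and (c): cluster network components,
# aggregate component-level features into cluster-level features, and fit a
# prediction model. Data, k=5 clusters, KMeans and RandomForestRegressor are
# illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# (a) Simulated dataset: n_scenarios disruptive events x n_components network
# components, each described by a component-level feature (e.g., restoration time).
n_scenarios, n_components = 500, 120
component_features = rng.random((n_scenarios, n_components))

# Topological/functional attributes of each component (e.g., degree, capacity),
# used only to group components into clusters.
component_attributes = rng.random((n_components, 4))

# Network-wide impact metric to predict (e.g., cumulative loss of resilience).
network_impact = component_features.sum(axis=1) + rng.normal(0, 1, n_scenarios)

# (b) Cluster components by their topological and functional characteristics.
k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(component_attributes)

# (c) Dimensionality reduction: replace component-level features with
# cluster-level features (here, the mean feature value per cluster).
cluster_features = np.column_stack(
    [component_features[:, labels == c].mean(axis=1) for c in range(k)]
)

# Train a prediction model on the low-dimensional cluster-level features.
X_train, X_test, y_train, y_test = train_test_split(
    cluster_features, network_impact, random_state=0
)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out scenarios: {model.score(X_test, y_test):.3f}")
```

In the paper's setting, the component-level features would come from the interdependent power-water-transport simulation rather than random data.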
Related papers
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks.
We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge.
Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
arXiv Detail & Related papers (2025-02-18T02:37:26Z)
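For the DSMoE entry above, the following is a hedged sketch of the routing idea as summarized there: an FFN layer is partitioned into expert blocks, a sigmoid gate scores each block per token, and a straight-through estimator keeps the hard gating differentiable. Dimensions, the 0.5 threshold, and the expert layout are assumptions, not the paper's exact design.

```python
# Hedged sketch only, not the DSMoE implementation: sigmoid routing over
# partitioned FFN blocks with a straight-through estimator.
import torch
import torch.nn as nn

class BlockSparseFFN(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_blocks=8):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff // n_blocks), nn.GELU(),
                          nn.Linear(d_ff // n_blocks, d_model))
            for _ in range(n_blocks)
        ])
        self.router = nn.Linear(d_model, n_blocks)

    def forward(self, x):                       # x: (batch, seq, d_model)
        probs = torch.sigmoid(self.router(x))   # per-token block scores
        hard = (probs > 0.5).float()            # sparse 0/1 routing decision
        # Straight-through estimator: the forward pass uses the hard gate,
        # the backward pass flows through the sigmoid probabilities.
        gate = hard + probs - probs.detach()
        # For clarity all blocks are evaluated here; an efficient
        # implementation would skip blocks whose gate is zero.
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            out = out + gate[..., i:i + 1] * expert(x)
        return out

y = BlockSparseFFN()(torch.randn(2, 16, 512))   # toy usage
print(y.shape)                                  # torch.Size([2, 16, 512])
```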
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z)
- Enhancing Non-Intrusive Load Monitoring with Features Extracted by Independent Component Analysis [0.0]
A novel neural network architecture is proposed to address the challenges in energy disaggregation algorithms.
Our results demonstrate that the model is less prone to overfitting, exhibits low complexity, and effectively decomposes signals with many individual components.
arXiv Detail & Related papers (2025-01-28T09:45:06Z)
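For the non-intrusive load monitoring entry above, this is a hedged illustration of how ICA-derived features might be extracted from an aggregate power signal before feeding a disaggregation model. The window length, number of components, synthetic signal, and the use of scikit-learn's FastICA are assumptions for the sketch, not the paper's actual pipeline.

```python
# Illustrative only: extract ICA features from windows of an aggregate power
# signal, as one plausible front end for an energy disaggregation model.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic aggregate power signal: sum of a few appliance-like components.
t = np.arange(20_000)
aggregate = (
    200 * (np.sin(2 * np.pi * t / 600) > 0.5)     # cyclic appliance
    + 80 * ((t % 1500) < 400)                     # intermittent appliance
    + rng.normal(0, 5, t.size)                    # measurement noise
)

# Slice the signal into fixed-length windows (one row per window).
window = 500
windows = aggregate[: (t.size // window) * window].reshape(-1, window)

# Unmix the windows into statistically independent components; the resulting
# low-dimensional representation can serve as input features for a downstream
# neural disaggregation model.
ica = FastICA(n_components=8, random_state=0, max_iter=1000)
features = ica.fit_transform(windows)   # shape: (n_windows, 8)
print(features.shape)
```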
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of real-time visual inference in IoVT systems by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Physics-Informed Machine Learning for Seismic Response Prediction of Nonlinear Steel Moment Resisting Frame Structures [6.483318568088176]
The PiML method integrates scientific principles and physical laws into deep neural networks to model the seismic responses of nonlinear structures.
Manipulating the equation of motion helps learn system nonlinearities and confines solutions within physically interpretable results.
The resulting model handles complex data better than existing physics-guided LSTM models and outperforms other non-physics data-driven networks.
arXiv Detail & Related papers (2024-02-28T02:16:03Z)
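For the physics-informed entry above, here is a minimal sketch of the general recipe (a data-fit loss plus an equation-of-motion residual computed with automatic differentiation), written for a linear single-degree-of-freedom system rather than the paper's nonlinear steel frames; all parameters, the ground motion, and the placeholder observations are assumptions for this example.

```python
# Hedged illustration of a physics-informed loss: a network predicts the
# displacement u(t) of a single-degree-of-freedom system, and training adds
# the residual of the equation of motion m*u'' + c*u' + k*u = -m*a_g(t).
import torch
import torch.nn as nn

m, c, k = 1.0, 0.1, 4.0                      # assumed mass, damping, stiffness
a_g = lambda t: 0.3 * torch.sin(2.0 * t)     # assumed ground acceleration

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_obs = torch.linspace(0, 10, 50).unsqueeze(1)   # sparse "measurement" times
u_obs = torch.zeros_like(t_obs)                  # placeholder observed response
t_col = torch.linspace(0, 10, 500).unsqueeze(1).requires_grad_(True)

for _ in range(1000):
    # Data-fit loss on observed displacements.
    loss_data = ((net(t_obs) - u_obs) ** 2).mean()

    # Physics loss: residual of the equation of motion at collocation points,
    # with derivatives obtained via automatic differentiation.
    u = net(t_col)
    du = torch.autograd.grad(u.sum(), t_col, create_graph=True)[0]
    ddu = torch.autograd.grad(du.sum(), t_col, create_graph=True)[0]
    residual = m * ddu + c * du + k * u + m * a_g(t_col)
    loss_phys = (residual ** 2).mean()

    loss = loss_data + loss_phys
    opt.zero_grad()
    loss.backward()
    opt.step()
```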
- Generalizable data-driven turbulence closure modeling on unstructured grids with differentiable physics [1.8749305679160366]
We introduce a framework for embedding deep learning models within a generic finite element solver to solve the Navier-Stokes equations.
We validate our method for flow over a backwards-facing step and test its performance on novel geometries.
We show that our GNN-based closure model may be learned in a data-limited scenario by interpreting closure modeling as a solver-constrained optimization.
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Semi-Structured Distributional Regression -- Extending Structured Additive Models by Arbitrary Deep Neural Networks and Data Modalities [0.0]
We propose a general framework to combine structured regression models and deep neural networks into a unifying network architecture.
We demonstrate the framework's efficacy in numerical experiments and illustrate its special merits in benchmarks and real-world applications.
arXiv Detail & Related papers (2020-02-13T21:01:26Z)
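For the semi-structured distributional regression entry above, this is a rough, assumption-laden sketch of the idea as summarized there: a structured (linear) predictor and a deep network are combined in one architecture to parameterize an output distribution. The Gaussian likelihood, layer sizes, and toy data are assumptions for this example, not the authors' framework.

```python
# Rough sketch: structured linear part on tabular features plus a deep part on
# other features, jointly parameterizing a Gaussian output distribution.
import torch
import torch.nn as nn

class SemiStructuredGaussian(nn.Module):
    def __init__(self, n_structured, n_deep):
        super().__init__()
        self.structured = nn.Linear(n_structured, 1)      # interpretable additive part
        self.deep = nn.Sequential(                        # flexible deep part
            nn.Linear(n_deep, 32), nn.ReLU(), nn.Linear(32, 1)
        )
        self.log_sigma = nn.Parameter(torch.zeros(1))     # distribution scale

    def forward(self, x_structured, x_deep):
        mu = self.structured(x_structured) + self.deep(x_deep)
        return mu, self.log_sigma.exp()

# Toy training loop minimizing the Gaussian negative log-likelihood.
model = SemiStructuredGaussian(n_structured=5, n_deep=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x_s, x_d = torch.randn(256, 5), torch.randn(256, 20)
y = x_s @ torch.ones(5, 1) + 0.1 * torch.randn(256, 1)

for _ in range(200):
    mu, sigma = model(x_s, x_d)
    loss = torch.distributions.Normal(mu, sigma).log_prob(y).mean().neg()
    opt.zero_grad()
    loss.backward()
    opt.step()
```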
This list is automatically generated from the titles and abstracts of the papers on this site.