Application of the Affinity Propagation Clustering Technique to obtain
traffic accident clusters at macro, meso, and micro levels
- URL: http://arxiv.org/abs/2202.05175v1
- Date: Wed, 9 Feb 2022 02:46:19 GMT
- Title: Application of the Affinity Propagation Clustering Technique to obtain
traffic accident clusters at macro, meso, and micro levels
- Authors: Fagner Sutel de Moura, Christine Tessele Nodari
- Abstract summary: Accident grouping is a crucial step in identifying accident-prone locations.
This work introduces the Affinity Propagation Clustering (APC) approach for grouping traffic accidents.
The preference parameter of similarity provides the necessary performance to calibrate the model and generate clusters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accident grouping is a crucial step in identifying accident-prone locations.
Among the different accident grouping modes, clustering methods present
excellent performance for discovering different distributions of accidents in
space. This work introduces the Affinity Propagation Clustering (APC) approach
for grouping traffic accidents based on criteria of similarity and
dissimilarity between distributions of data points in space. The APC provides
more realistic representations of the distribution of events from similarity
matrices between instances. The results showed that, when representative data
samples are available, the preference parameter of similarity provides the
performance needed to calibrate the model and generate clusters with the
desired characteristics. In addition, the study demonstrates that the
preference, being a continuous parameter, facilitates the calibration and
control of the model's convergence, allowing clustering patterns to be
discovered with less effort and greater control over the results.
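As a rough illustration of how the preference steers cluster granularity (this is not the paper's pipeline), the sketch below runs scikit-learn's AffinityPropagation on a precomputed similarity matrix, built as the negative squared Euclidean distances between synthetic accident coordinates, and sweeps the preference from the minimum similarity to a high percentile. The synthetic hot-spot layout, the percentile grid, and the damping value are illustrative assumptions.

```python
# Minimal sketch: Affinity Propagation on synthetic accident coordinates,
# sweeping the preference parameter to move from coarse (macro) to fine
# (micro) clusterings. Data and parameter choices are assumptions.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(42)

# Synthetic projected coordinates (metres): three regions, each containing
# three smaller hot spots of 20 "accidents".
region_centres = [(0.0, 0.0), (5000.0, 1000.0), (9000.0, 6000.0)]
sub_offsets = [(-400.0, 0.0), (400.0, 300.0), (0.0, -400.0)]
coords = np.vstack([
    rng.normal(loc=(rx + dx, ry + dy), scale=60.0, size=(20, 2))
    for rx, ry in region_centres
    for dx, dy in sub_offsets
])

# Similarity matrix: negative squared Euclidean distance between points.
diff = coords[:, None, :] - coords[None, :, :]
S = -np.sum(diff ** 2, axis=-1)

# More negative preference -> fewer, larger clusters; higher (less negative)
# preference -> more, smaller clusters. The percentiles below are arbitrary
# illustrative choices, not the paper's calibration.
for label, preference in [
    ("macro", S.min()),
    ("meso", np.percentile(S, 75)),
    ("micro", np.percentile(S, 97)),
]:
    model = AffinityPropagation(
        affinity="precomputed",
        preference=preference,
        damping=0.9,        # heavier damping to stabilise convergence
        max_iter=500,
        random_state=0,
    ).fit(S)
    n_clusters = len(model.cluster_centers_indices_)
    print(f"{label}: preference={preference:,.0f} -> {n_clusters} clusters")
```

On real accident records, the preference grid would instead be drawn from the empirical distribution of similarities for the study area, and the resulting cluster counts checked against the macro, meso, and micro aggregation levels of interest.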
Related papers
- An Agglomerative Clustering of Simulation Output Distributions Using Regularized Wasserstein Distance [0.0]
We present a novel agglomerative clustering algorithm that utilizes the regularized Wasserstein distance to cluster simulation outputs.
This framework has several important use cases, including anomaly detection, pre-optimization, and online monitoring.
arXiv Detail & Related papers (2024-07-16T18:07:32Z)
- Cluster-Aware Similarity Diffusion for Instance Retrieval [64.40171728912702]
Diffusion-based re-ranking is a common method used for retrieving instances by performing similarity propagation in a nearest neighbor graph.
We propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval.
arXiv Detail & Related papers (2024-06-04T14:19:50Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Efficient Bilateral Cross-Modality Cluster Matching for Unsupervised Visible-Infrared Person ReID [56.573905143954015]
We propose a novel bilateral cluster matching-based learning framework to reduce the modality gap by matching cross-modality clusters.
Under such a supervisory signal, a Modality-Specific and Modality-Agnostic (MSMA) contrastive learning framework is proposed to align features jointly at the cluster level.
Experiments on the public SYSU-MM01 and RegDB datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2023-05-22T03:27:46Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized.
arXiv Detail & Related papers (2022-02-01T19:25:31Z)
- Multi-objective Semi-supervised Clustering for Finding Predictive Clusters [0.5371337604556311]
This study focuses on clustering problems and aims to find compact clusters that are informative regarding the outcome variable.
The main goal is partitioning data points so that observations in each cluster are similar and the outcome variable can be predicted using these clusters simultaneously.
arXiv Detail & Related papers (2022-01-26T06:24:38Z)
- Deep Conditional Gaussian Mixture Model for Constrained Clustering [7.070883800886882]
Constrained clustering can leverage prior information on a growing amount of only partially labeled data.
We propose a novel framework for constrained clustering that is intuitive, interpretable, and can be trained efficiently in the framework of gradient variational inference.
arXiv Detail & Related papers (2021-06-11T13:38:09Z)
- Modeling Heterogeneous Statistical Patterns in High-dimensional Data by Adversarial Distributions: An Unsupervised Generative Framework [33.652544673163774]
We propose a novel unsupervised generative framework called FIRD, which utilizes adversarial distributions to fit and disentangle the heterogeneous statistical patterns.
When applied to discrete spaces, FIRD effectively distinguishes synchronized fraudsters from normal users.
arXiv Detail & Related papers (2020-12-15T08:51:20Z)
- Decorrelated Clustering with Data Selection Bias [55.91842043124102]
We propose a novel Decorrelation regularized K-Means algorithm (DCKM) for clustering with data selection bias.
Our DCKM algorithm achieves significant performance gains, indicating the necessity of removing unexpected feature correlations induced by selection bias.
arXiv Detail & Related papers (2020-06-29T08:55:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.