Analogical Learning for Cross-Scenario Generalization: Framework and Application to Intelligent Localization
- URL: http://arxiv.org/abs/2504.08811v1
- Date: Wed, 09 Apr 2025 03:36:52 GMT
- Title: Analogical Learning for Cross-Scenario Generalization: Framework and Application to Intelligent Localization
- Authors: Zirui Chen, Zhaoyang Zhang, Ziqing Xing, Ridong Li, Zhaohui Yang, Richeng Jin, Chongwen Huang, Yuzhi Yang, Mérouane Debbah
- Abstract summary: This article proposes a brand-new universal deep learning framework named analogical learning (AL). AL provides a highly efficient way to implicitly retrieve the reference frame information associated with a scenario. We apply AL to the typical multi-scenario learning problem of intelligent wireless localization in cellular networks.
- Score: 45.64154370037619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing learning models often exhibit poor generalization when deployed across diverse scenarios. This is mainly because the underlying reference frame of the data varies with the deployment environment and settings. However, although the data of each scenario has its own distinct reference frame, its generation generally follows the same underlying physical rules. Based on these findings, this article proposes a brand-new universal deep learning framework named analogical learning (AL), which provides a highly efficient way to implicitly retrieve the reference frame information associated with a scenario and then make accurate predictions by relative analogy across scenarios. Specifically, an elegant bipartite neural network architecture called Mateformer is designed: its first part calculates the relativity within multiple feature spaces between the input data and a small amount of embedded data from the current scenario, while its second part uses these relativities to guide the nonlinear analogy. We apply AL to the typical multi-scenario learning problem of intelligent wireless localization in cellular networks. Extensive experiments show that AL achieves state-of-the-art accuracy, stable transferability, and robust adaptation to new scenarios without any tuning, outperforming conventional methods with a precision improvement of nearly two orders of magnitude. All data and code are available at https://github.com/ziruichen-research/ALLoc.
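The abstract describes a bipartite design: a first stage that computes "relativity" (attention) between the query input and a few embedded reference samples from the current scenario, and a second stage that uses those relativities to drive a nonlinear analogy. Below is a minimal PyTorch-style sketch of that idea under stated assumptions; the module, dimension, and variable names (AnalogySketch, x_dim, x_ref, etc.) are illustrative, not the authors' actual Mateformer implementation.

```python
# Minimal sketch of the two-part idea from the abstract: part 1 computes
# "relativity" (attention) between the query and a few embedded reference
# samples from the current scenario; part 2 uses that relativity to guide a
# nonlinear analogy over the reference labels. Illustrative names only.
import torch
import torch.nn as nn


class AnalogySketch(nn.Module):
    def __init__(self, x_dim, y_dim, d_model=64, n_heads=4):
        super().__init__()
        self.embed_x = nn.Linear(x_dim, d_model)       # shared feature embedding
        self.embed_y = nn.Linear(y_dim, d_model)       # embedding of reference labels
        self.relativity = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.analogy = nn.Sequential(                  # nonlinear analogy head
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, y_dim)
        )

    def forward(self, x_query, x_ref, y_ref):
        """x_query: (B, x_dim); x_ref: (B, N, x_dim); y_ref: (B, N, y_dim)."""
        q = self.embed_x(x_query).unsqueeze(1)         # (B, 1, d)
        k = self.embed_x(x_ref)                        # (B, N, d)
        v = self.embed_y(y_ref)                        # (B, N, d)
        # Part 1: relativity between the query and the scenario's reference set.
        mixed, _ = self.relativity(q, k, v)            # (B, 1, d)
        # Part 2: nonlinear analogy from the relativity-weighted references.
        return self.analogy(mixed.squeeze(1))          # (B, y_dim)


if __name__ == "__main__":
    model = AnalogySketch(x_dim=128, y_dim=2)          # e.g. channel features -> 2-D position
    out = model(torch.randn(8, 128), torch.randn(8, 16, 128), torch.randn(8, 16, 2))
    print(out.shape)                                   # torch.Size([8, 2])
```

In a localization setting, x_query and x_ref would be channel measurements and y_ref the known positions of the scenario's reference points; the head then predicts the query position by analogy to those references.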
Related papers
- Towards Predicting Any Human Trajectory In Context [10.332817296500533]
We introduce TrajICL, an In-Context Learning framework for pedestrian trajectory prediction. TrajICL enables rapid adaptation without fine-tuning on scenario-specific data. We train our model on a large-scale synthetic dataset to enhance its prediction ability.
arXiv Detail & Related papers (2025-06-01T07:18:47Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a parameter-efficient federated anomaly detection framework named PeFAD in light of increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Improved Generalization Bounds for Communication Efficient Federated Learning [4.3707341422218215]
This paper focuses on reducing the communication cost of federated learning by exploring generalization bounds and representation learning.
We design a novel Federated Learning with Adaptive Local Steps (FedALS) algorithm based on our generalization bound and representation learning analysis.
arXiv Detail & Related papers (2024-04-17T21:17:48Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Graph Convolutional Networks for Complex Traffic Scenario Classification [0.7919810878571297]
A scenario-based testing approach can reduce the time required to obtain statistically significant evidence of the safety of Automated Driving Systems.
Most methods on scenario classification do not work for complex scenarios with diverse environments.
We propose a method for complex traffic scenario classification that is able to model the interaction of a vehicle with the environment.
arXiv Detail & Related papers (2023-10-26T20:51:24Z) - RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z) - DCID: Deep Canonical Information Decomposition [84.59396326810085]
We consider the problem of identifying the signal shared between two one-dimensional target variables.
We propose ICM, an evaluation metric which can be used in the presence of ground-truth labels.
We also propose Deep Canonical Information Decomposition (DCID) - a simple, yet effective approach for learning the shared variables.
arXiv Detail & Related papers (2023-06-27T16:59:06Z) - Federated Gradient Matching Pursuit [17.695717854068715]
Traditional machine learning techniques require centralizing all training data on one server or data hub.
Federated learning (FL) addresses this by learning a shared model while keeping training data at local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity constrained minimization problem in the FL setting.
arXiv Detail & Related papers (2023-02-20T16:26:29Z) - Location-aware Adaptive Normalization: A Deep Learning Approach For Wildfire Danger Forecasting [17.25189382307337]
This paper proposes a 2D/3D two-branch convolutional neural network (CNN) with a Location-aware Adaptive Normalization layer (LOAN).
Using LOAN as a building block, we can modulate the dynamic features conditional on their geographical locations.
Results show that location-aware adaptive feature normalization is a promising technique to learn the relation between dynamic variables and their geographic locations.
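A minimal sketch of what such location-conditioned feature normalization can look like, assuming a FiLM-style scheme in which dynamic feature maps are normalized and then rescaled and shifted by parameters predicted from a location embedding; the class and argument names are illustrative, not the paper's actual LOAN layer.

```python
# Minimal sketch of location-conditioned normalization in the spirit of the
# summary above: normalize dynamic feature maps, then modulate them with a
# scale/shift predicted from a location embedding. Illustrative only.
import torch
import torch.nn as nn


class LocationAdaptiveNorm(nn.Module):
    def __init__(self, n_channels, loc_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(n_channels, affine=False)
        self.to_scale = nn.Linear(loc_dim, n_channels)   # gamma from location
        self.to_shift = nn.Linear(loc_dim, n_channels)   # beta from location

    def forward(self, feats, loc_embed):
        """feats: (B, C, H, W) dynamic features; loc_embed: (B, loc_dim)."""
        gamma = self.to_scale(loc_embed).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_shift(loc_embed).unsqueeze(-1).unsqueeze(-1)
        return self.norm(feats) * (1 + gamma) + beta


if __name__ == "__main__":
    layer = LocationAdaptiveNorm(n_channels=32, loc_dim=8)
    y = layer(torch.randn(4, 32, 16, 16), torch.randn(4, 8))
    print(y.shape)   # torch.Size([4, 32, 16, 16])
```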
arXiv Detail & Related papers (2022-12-16T00:32:38Z) - On the Versatile Uses of Partial Distance Correlation in Deep Learning [47.11577420740119]
This paper revisits a (less widely known) statistic from the statistics literature, called distance correlation (and its partial variant), designed to evaluate correlation between feature spaces of different dimensions.
We describe the steps necessary to carry out its deployment for large scale models.
This opens the door to a surprising array of applications, ranging from conditioning one deep model w.r.t. another and learning disentangled representations to training diverse models that are directly more robust to adversarial attacks.
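For reference, sample distance correlation between two paired feature matrices of possibly different dimensions can be computed as in the NumPy sketch below; this covers only the plain statistic, not the partial variant or the large-scale deployment steps the paper describes.

```python
# Minimal NumPy sketch of sample distance correlation between two feature
# matrices of possibly different dimensions (the statistic the paper builds on).
import numpy as np


def distance_correlation(x, y):
    """x: (n, p), y: (n, q) paired samples; returns dCor in [0, 1]."""
    def centered_dists(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)  # pairwise distances
        return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()

    a, b = centered_dists(x), centered_dists(y)
    dcov2 = (a * b).mean()                       # squared distance covariance
    dvar_x, dvar_y = (a * a).mean(), (b * b).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0)) / np.sqrt(denom)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 5))
    y = np.hstack([x[:, :2] ** 2, rng.normal(size=(200, 3))])  # nonlinearly related
    print(round(distance_correlation(x, y), 3))                # noticeably above 0
```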
arXiv Detail & Related papers (2022-07-20T06:36:11Z) - Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z) - Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z) - Understanding Dynamics of Nonlinear Representation Learning and Its Application [12.697842097171119]
We study the dynamics of implicit nonlinear representation learning.
We show that the data-architecture alignment condition is sufficient for the global convergence.
We derive a new training framework, which satisfies the data-architecture alignment condition without assuming it.
arXiv Detail & Related papers (2021-06-28T16:31:30Z) - Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
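As a rough illustration of such a networked formulation, the sketch below fits one local model per node while a coupling penalty pulls the models of connected nodes together. For simplicity it uses a smooth squared-norm (Laplacian-style) coupling and plain gradient descent rather than the non-smooth GTV penalty and the decentralized algorithm of the paper; the graph, loss, and variable names are illustrative assumptions.

```python
# Minimal NumPy sketch: local least-squares models coupled across a graph.
# Uses a smooth squared-norm coupling as a stand-in for the GTV penalty.
import numpy as np


def clustered_fl_sketch(X, y, edges, lam=1.0, lr=0.01, steps=500):
    """X[i]: (n_i, d) local features; y[i]: (n_i,) local targets; edges: list of (i, j)."""
    n_nodes, d = len(X), X[0].shape[1]
    W = np.zeros((n_nodes, d))                       # one linear model per node
    for _ in range(steps):
        grad = np.zeros_like(W)
        for i in range(n_nodes):                     # local least-squares loss
            grad[i] = 2 * X[i].T @ (X[i] @ W[i] - y[i]) / len(y[i])
        for i, j in edges:                           # coupling across the network
            diff = W[i] - W[j]
            grad[i] += 2 * lam * diff
            grad[j] -= 2 * lam * diff
        W -= lr * grad
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    X = [rng.normal(size=(50, 2)) for _ in range(3)]
    y = [x @ true_w + 0.1 * rng.normal(size=50) for x in X]
    W = clustered_fl_sketch(X, y, edges=[(0, 1), (1, 2)], lam=0.5)
    print(np.round(W, 2))                            # all rows close to [1, -2]
```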
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z) - Deep Context-Aware Novelty Detection [6.599344783327053]
A common assumption of novelty detection is that the distributions of both "normal" and "novel" data are static.
This is often not the case; for example, data may evolve over time, or the definition of normal and novel may depend on contextual information.
This can lead to significant difficulties when attempting to train a model on datasets where the distribution of normal data in one scenario is similar to that of novel data in another scenario.
arXiv Detail & Related papers (2020-06-01T18:02:51Z) - FedLoc: Federated Learning Framework for Data-Driven Cooperative Localization and Location Data Processing [12.518673970373422]
Data-driven learning model-based cooperative localization and location data processing are considered.
We first review state-of-the-art algorithms in the context of federated learning.
We demonstrate various practical use cases that are summarized from a mixture of standard, newly published, and unpublished works.
arXiv Detail & Related papers (2020-03-08T01:51:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.