Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning for Robust Forecasting and Security
- URL: http://arxiv.org/abs/2402.01163v3
- Date: Sat, 18 Jan 2025 06:53:19 GMT
- Title: Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning for Robust Forecasting and Security
- Authors: Weiliang Chen, Qianqian Ren, Yong Liu, Jianguo Sun
- Abstract summary: Existing methods often struggle with issues such as noise, data incompleteness, and security vulnerabilities.
This paper proposes a novel framework, Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning (EUPAS).
EUPAS ensures robust performance across various forecasting tasks such as crime prediction, check-in prediction, and land use classification.
- Score: 12.8405655328298
- Abstract: Urban region profiling plays a crucial role in forecasting and decision-making in the context of dynamic and noisy urban environments. Existing methods often struggle with issues such as noise, data incompleteness, and security vulnerabilities. This paper proposes a novel framework, Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning (EUPAS), to address these challenges. By combining adversarial contrastive learning with both supervised and self-supervised objectives, EUPAS ensures robust performance across various forecasting tasks such as crime prediction, check-in prediction, and land use classification. To enhance model resilience against adversarial attacks and noisy data, we incorporate several key components, including perturbation augmentation, trickster generator, and deviation copy generator. These innovations effectively improve the robustness of the embeddings, making EUPAS capable of handling the complexities and noise inherent in urban data. Experimental results show that EUPAS significantly outperforms state-of-the-art methods across multiple tasks, achieving improvements in prediction accuracy of up to 10.8%. Notably, our model excels in adversarial attack tests, demonstrating its resilience in real-world, security-sensitive applications. This work makes a substantial contribution to the field of urban analytics by offering a more robust and secure approach to forecasting and profiling urban regions. It addresses key challenges in secure, data-driven modeling, providing a stronger foundation for future urban analytics and decision-making applications.
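The abstract names the ingredients (adversarial contrastive learning, perturbation augmentation) without giving their exact form. A minimal illustrative sketch is below, assuming an FGSM-style sign-gradient perturbation as the augmentation and an InfoNCE-style contrastive loss; the embeddings, gradient, and hyperparameters are numpy stand-ins, not the paper's actual components.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0

def perturbation_augment(embedding, grad, epsilon=0.05):
    """FGSM-style perturbation: small step along the sign of a loss gradient."""
    return embedding + epsilon * np.sign(grad)

rng = np.random.default_rng(0)
region = rng.normal(size=64)                     # a region embedding (stand-in)
grad = rng.normal(size=64)                       # stand-in for a loss gradient
positive = perturbation_augment(region, grad)    # adversarial positive view
negatives = [rng.normal(size=64) for _ in range(8)]  # other regions as negatives

loss = info_nce_loss(region, positive, negatives)
```

Training the encoder to keep the loss low pulls each region embedding toward its adversarially perturbed copy and away from other regions, which is the intuition behind robustness to noisy inputs.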
Related papers
- Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges [53.2306792009435]
We introduce a novel framework to detect instability in smart grids by employing only stable data.
It relies on a Generative Adversarial Network (GAN) where the generator is trained to create instability data that are used along with stable data to train the discriminator.
Our solution, tested on a dataset composed of real-world stable and unstable samples, achieves accuracy of up to 97.5% in predicting grid stability and up to 98.9% in detecting adversarial attacks.
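The scheme described above, a generator producing synthetic instability data that trains a discriminator alongside real stable samples, can be sketched roughly as follows. This is purely illustrative: the toy linear generator, logistic discriminator, and dimensions are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def generator(z, w_g):
    """Toy generator: linear map from noise to synthetic 'instability' samples."""
    return z @ w_g

def discriminator(x, w_d):
    """Logistic discriminator: probability that a sample is stable."""
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

def logistic_loss(p, y, eps=1e-12):
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

w_g = rng.normal(size=(dim, dim))   # frozen toy generator weights
w_d = rng.normal(size=dim)          # discriminator weights

stable = rng.normal(loc=1.0, size=(32, dim))             # real stable samples
synthetic = generator(rng.normal(size=(32, dim)), w_g)   # generated instability data

x = np.vstack([stable, synthetic])
y = np.concatenate([np.ones(32), np.zeros(32)])          # stable -> 1, synthetic -> 0

loss_before = logistic_loss(discriminator(x, w_d), y)
grad = x.T @ (discriminator(x, w_d) - y) / len(y)        # logistic-loss gradient
w_d -= 0.05 * grad                                       # one discriminator update
loss_after = logistic_loss(discriminator(x, w_d), y)
```

The point of the design is that the discriminator never needs real unstable samples at training time; the generator supplies the negative class.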
arXiv Detail & Related papers (2025-01-27T20:48:25Z)
- Collaborative Imputation of Urban Time Series through Cross-city Meta-learning
We propose a novel collaborative imputation paradigm leveraging meta-learned implicit neural representations (INRs).
We then introduce a cross-city collaborative learning scheme through model-agnostic meta learning.
Experiments on a diverse urban dataset from 20 global cities demonstrate our model's superior imputation performance and generalizability.
arXiv Detail & Related papers (2025-01-20T07:12:40Z)
- Adversarial Robustness through Dynamic Ensemble Learning [0.0]
Adversarial attacks pose a significant threat to the reliability of pre-trained language models (PLMs).
This paper presents Adversarial Robustness through Dynamic Ensemble Learning (ARDEL), a novel scheme designed to enhance the robustness of PLMs against such attacks.
arXiv Detail & Related papers (2024-12-20T05:36:19Z)
- Balancing Security and Accuracy: A Novel Federated Learning Approach for Cyberattack Detection in Blockchain Networks [10.25938198121523]
This paper presents a novel Collaborative Cyberattack Detection (CCD) system aimed at enhancing the security of blockchain-based data-sharing networks.
We explore the effects of various noise types on key performance metrics, including attack detection accuracy, deep learning model convergence time, and the overall runtime of global model generation.
Our findings reveal the intricate trade-offs between ensuring data privacy and maintaining system performance, offering valuable insights into optimizing these parameters for diverse CCD environments.
arXiv Detail & Related papers (2024-09-08T04:38:07Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Robust VAEs via Generating Process of Noise Augmented Data [9.366139389037489]
This paper introduces a novel framework that enhances robustness by regularizing the latent space divergence between original and noise-augmented data.
Our empirical evaluations demonstrate that this approach, termed Robust Augmented Variational Auto-ENcoder (RAVEN), yields superior performance in resisting adversarial inputs.
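One common way to "regularize the latent space divergence between original and noise-augmented data" is a KL term between the two encoder posteriors. The sketch below assumes diagonal-Gaussian latents and hypothetical encoder outputs; the paper's exact divergence and encoder are not specified here.

```python
import numpy as np

def gaussian_kl(mu1, logvar1, mu2, logvar2):
    """KL(N(mu1, var1) || N(mu2, var2)) for diagonal Gaussians, summed over dims."""
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# hypothetical encoder outputs for an input x and its noise-augmented copy
mu_clean, logvar_clean = np.zeros(8), np.zeros(8)
mu_noisy, logvar_noisy = 0.1 * np.ones(8), np.zeros(8)

# regularizer: penalize divergence between the two latent posteriors
reg = gaussian_kl(mu_noisy, logvar_noisy, mu_clean, logvar_clean)
```

Adding such a term to the usual VAE objective pushes clean and noise-augmented inputs toward the same latent code, which is one plausible reading of the robustness mechanism.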
arXiv Detail & Related papers (2024-07-26T09:55:34Z)
- Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-guided Approach [3.962831477787584]
Adversarial attacks pose a significant challenge to deploying deep learning models in safety-critical applications.
Maintaining model robustness while ensuring interpretability is vital for fostering trust and comprehension in these models.
This study investigates the impact of Saliency-guided Training on model robustness.
arXiv Detail & Related papers (2024-05-10T07:21:03Z)
- Develop End-to-End Anomaly Detection System [3.130722489512822]
Anomaly detection plays a crucial role in ensuring network robustness.
We propose an end-to-end anomaly detection model development pipeline.
We demonstrate the efficacy of the framework by introducing and benchmarking a new forecasting model.
arXiv Detail & Related papers (2024-02-01T09:02:44Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
Graph neural networks (GNNs) are vulnerable to model stealing attacks, in which an adversary duplicates the target model using only query access.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.