COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against
Semantic Attacks
- URL: http://arxiv.org/abs/2403.02329v1
- Date: Mon, 4 Mar 2024 18:57:11 GMT
- Title: COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against
Semantic Attacks
- Authors: Zijian Huang, Wenda Chu, Linyi Li, Chejian Xu, Bo Li
- Abstract summary: We propose COMMIT, the first robustness certification framework for certifying the robustness of multi-sensor fusion systems against semantic attacks.
In particular, we propose a practical anisotropic noise mechanism that leverages randomized smoothing with multi-modal data.
We show that the certification for MSF models is at most 48.39% higher than that of single-modal models, which validates the advantages of MSF models.
- Score: 24.37030085306459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-sensor fusion systems (MSFs) play a vital role as the perception module
in modern autonomous vehicles (AVs). Therefore, ensuring their robustness
against common and realistic adversarial semantic transformations, such as
rotation and shifting in the physical world, is crucial for the safety of AVs.
While empirical evidence suggests that MSFs exhibit improved robustness
compared to single-modal models, they are still vulnerable to adversarial
semantic transformations. Despite the proposal of empirical defenses, several
works show that these defenses can be attacked again by new adaptive attacks.
So far, there is no certified defense proposed for MSFs. In this work, we
propose COMMIT, the first robustness certification framework to certify the robustness
of multi-sensor fusion systems against semantic attacks. In particular, we
propose a practical anisotropic noise mechanism that leverages randomized
smoothing with multi-modal data and performs a grid-based splitting method to
characterize complex semantic transformations. We also propose efficient
algorithms to compute the certification in terms of object detection accuracy
and IoU for large-scale MSF models. Empirically, we evaluate the efficacy of
COMMIT in different settings and provide a comprehensive benchmark of certified
robustness for different MSF models using the CARLA simulation platform. We
show that the certification for MSF models is at most 48.39% higher than that
of single-modal models, which validates the advantages of MSF models. We
believe our certification framework and benchmark will contribute an important
step towards certifiably robust AVs in practice.
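The core mechanism described above, randomized smoothing with anisotropic (per-modality) noise, can be pictured with a minimal sketch. This is not the authors' implementation: the `fusion_model` interface, the noise scales `sigma_img` and `sigma_pc`, and the single-object classification view are illustrative assumptions. The grid-based splitting mentioned in the abstract would then partition the semantic transformation parameters (e.g., rotation angles) and certify each cell from such smoothed estimates.

```python
# Minimal sketch of anisotropic randomized smoothing over multi-modal inputs,
# assuming a fusion model that returns per-class logits for a detected object.
import torch

def smoothed_scores(fusion_model, image, point_cloud,
                    sigma_img=0.25, sigma_pc=0.1, n_samples=100):
    """Monte Carlo estimate of the smoothed fusion model's class probabilities.

    "Anisotropic" here means each modality receives Gaussian noise with its own
    scale instead of a single shared sigma across all inputs.
    """
    total = None
    for _ in range(n_samples):
        noisy_img = image + sigma_img * torch.randn_like(image)
        noisy_pc = point_cloud + sigma_pc * torch.randn_like(point_cloud)
        # fusion_model is assumed to accept (image, point cloud) and return logits.
        probs = torch.softmax(fusion_model(noisy_img, noisy_pc), dim=-1)
        total = probs if total is None else total + probs
    return total / n_samples
```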
Related papers
- TRANSPOSE: Transitional Approaches for Spatially-Aware LFI Resilient FSM Encoding [2.236957801565796]
Finite state machines (FSMs) regulate sequential circuits, including access to sensitive information and privileged CPU states.
Laser-based fault injection (LFI) is becoming ever more precise, to the point where an adversary can thwart chip security by altering individual flip-flop (FF) values.
arXiv Detail & Related papers (2024-11-05T04:18:47Z) - A Hybrid Defense Strategy for Boosting Adversarial Robustness in Vision-Language Models [9.304845676825584]
We propose a novel adversarial training framework that integrates multiple attack strategies and advanced machine learning techniques.
Experiments conducted on real-world datasets, including CIFAR-10 and CIFAR-100, demonstrate that the proposed method significantly enhances model robustness.
arXiv Detail & Related papers (2024-10-18T23:47:46Z) - Enhancing Security in Federated Learning through Adaptive
Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
arXiv Detail & Related papers (2024-03-05T20:54:56Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs to produce unexpected latent representations and reconstructions for a visually slightly modified input.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of
Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - On the Certified Robustness for Ensemble Models and Beyond [22.43134152931209]
Deep neural networks (DNNs) are vulnerable to adversarial examples, which aim to mislead them.
We analyze and provide the certified robustness for ensemble ML models.
Inspired by the theoretical findings, we propose the lightweight Diversity Regularized Training (DRT) to train certifiably robust ensemble ML models.
arXiv Detail & Related papers (2021-07-22T18:10:41Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z) - Providing reliability in Recommender Systems through Bernoulli Matrix
Factorization [63.732639864601914]
This paper proposes Bernoulli Matrix Factorization (BeMF) to provide both prediction values and reliability values.
BeMF acts on model-based collaborative filtering rather than on memory-based filtering.
The more reliable a prediction is, the less liable it is to be wrong.
arXiv Detail & Related papers (2020-06-05T14:24:27Z) - TSS: Transformation-Specific Smoothing for Robustness Certification [37.87602431929278]
Motivated adversaries can mislead machine learning systems by perturbing test data using semantic transformations.
We provide TSS -- a unified framework for certifying ML robustness against general adversarial semantic transformations.
We show TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset (a minimal sketch of the underlying smoothing certificate follows this list).
arXiv Detail & Related papers (2020-02-27T19:19:32Z) - Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
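For context on the smoothing-based certificates that TSS, CC-Cert, and COMMIT build on, the following is a minimal sketch of the standard Gaussian randomized-smoothing certificate of Cohen et al. (2019). It is not code from any of the papers above; the helper name and the simplifying assumption that the runner-up class probability is at most 1 - p_A are illustrative.

```python
# Minimal sketch of the standard Gaussian randomized-smoothing L2 certificate.
from scipy.stats import norm

def certified_l2_radius(p_a_lower: float, sigma: float) -> float:
    """L2 radius within which the smoothed classifier's top class cannot change.

    p_a_lower: lower confidence bound on the probability that the base model
    returns the top class under N(0, sigma^2 I) noise; the runner-up class is
    assumed to have probability at most 1 - p_a_lower.
    """
    if p_a_lower <= 0.5:
        return 0.0  # no nontrivial certificate in this regime
    return sigma * norm.ppf(p_a_lower)

# Example: sigma = 0.25 and a lower bound p_A >= 0.9 give a certified radius of
# about 0.25 * Phi^{-1}(0.9) ~= 0.32.
print(certified_l2_radius(0.9, 0.25))
```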