Center Smoothing for Certifiably Robust Vector-Valued Functions
- URL: http://arxiv.org/abs/2102.09701v1
- Date: Fri, 19 Feb 2021 01:34:48 GMT
- Title: Center Smoothing for Certifiably Robust Vector-Valued Functions
- Authors: Aounon Kumar and Tom Goldstein
- Abstract summary: We extend randomized smoothing to produce certifiable robustness for vector-valued functions, i.e., to bound the change in output caused by a small change in input.
We demonstrate the effectiveness of our method on multiple learning tasks involving vector-valued functions with a wide range of input and output dimensionalities.
- Score: 59.46976586742266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Randomized smoothing has been successfully applied in high-dimensional image
classification tasks to obtain models that are provably robust against input
perturbations of bounded size. We extend this technique to produce certifiable
robustness for vector-valued functions, i.e., bound the change in output caused
by a small change in input. These functions are used in many areas of machine
learning, such as image reconstruction, dimensionality reduction,
super-resolution, etc., but due to the enormous dimensionality of the output
space in these problems, generating meaningful robustness guarantees is
difficult. We design a smoothing procedure that can leverage the local,
potentially low-dimensional, behaviour of the function around an input to
obtain probabilistic robustness certificates. We demonstrate the effectiveness
of our method on multiple learning tasks involving vector-valued functions with
a wide range of input and output dimensionalities.
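The smoothing procedure sketched in the abstract can be illustrated with a minimal approximation: draw Gaussian perturbations of the input, evaluate the function on each, and return the sample output whose median distance to the other outputs is smallest, as a rough proxy for the center of the minimum enclosing ball of the outputs. The function name and parameters below are illustrative, not taken from the paper's released code:

```python
import numpy as np

def center_smooth(f, x, sigma=0.25, n=100, seed=0):
    """Approximate center-smoothed output of f at x.

    Draws n Gaussian perturbations of x, evaluates f on each, and
    returns the sample whose median distance to the other samples is
    smallest -- a simple stand-in for the center of the minimum
    enclosing ball of the outputs.
    """
    rng = np.random.default_rng(seed)
    xs = x + sigma * rng.standard_normal((n,) + np.shape(x))
    ys = np.stack([f(xi) for xi in xs])            # (n, output_dim)
    # pairwise distances between the n output vectors
    d = np.linalg.norm(ys[:, None, :] - ys[None, :, :], axis=-1)
    med = np.median(d, axis=1)                     # median distance per sample
    return ys[np.argmin(med)]
```

The paper additionally derives a probabilistic certificate on how far this smoothed output can move under bounded input perturbations; the sketch above only shows the smoothing step itself.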
Related papers
- PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion [80.79938369319152]
We design a new pipeline, coined PCF-Lift, based on our Probabilistic Contrastive Fusion (PCF).
Our PCF-Lift significantly outperforms state-of-the-art methods on widely used benchmarks, including the ScanNet dataset and the Messy Room dataset (a 4.4% improvement in scene-level PQ).
arXiv Detail & Related papers (2024-10-14T16:06:59Z) - Scalable Transformer for PDE Surrogate Modeling [9.438207505148947]
Transformer has emerged as a promising tool for surrogate modeling of partial differential equations (PDEs)
We propose Factorized Transformer (FactFormer), which is based on an axial factorized kernel integral.
We showcase that the proposed model is able to simulate 2D Kolmogorov flow on a $256\times 256$ grid and 3D smoke buoyancy on a $64\times 64\times 64$ grid with good accuracy and efficiency.
arXiv Detail & Related papers (2023-05-27T19:23:00Z) - Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z) - Application of probabilistic modeling and automated machine learning framework for high-dimensional stress field [1.073039474000799]
We propose an end-to-end approach that maps a high-dimensional image like input to an output of high dimensionality or its key statistics.
Our approach uses two main frameworks that perform three steps: (a) reduce the input and output from a high-dimensional space to a low-dimensional space, (b) model the input-output relationship in the low-dimensional space, and (c) incorporate domain-specific physical constraints as masks.
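The three steps above can be sketched with a purely linear stand-in: PCA (via SVD) for the reduction step, least squares for the low-dimensional model, and an optional mask on the reconstruction. This is a hypothetical illustration of the pipeline shape, not the paper's automated machine learning framework:

```python
import numpy as np

def fit_low_dim_surrogate(X, Y, k=8, mask=None):
    """Sketch of the pipeline: (a) project inputs X and outputs Y onto
    k principal components, (b) fit a linear map between the latent
    codes, (c) optionally apply a mask to the decoded output.
    Returns a predictor for new inputs (single vectors or batches)."""
    Xm, Ym = X.mean(0), Y.mean(0)
    # PCA bases from thin SVDs of the centered data
    _, _, Vx = np.linalg.svd(X - Xm, full_matrices=False)
    _, _, Vy = np.linalg.svd(Y - Ym, full_matrices=False)
    Px, Py = Vx[:k].T, Vy[:k].T                    # (d_in, k), (d_out, k)
    Zx, Zy = (X - Xm) @ Px, (Y - Ym) @ Py          # latent codes
    W, *_ = np.linalg.lstsq(Zx, Zy, rcond=None)    # step (b): linear model

    def predict(x_new):
        z = (x_new - Xm) @ Px @ W                  # predict in latent space
        y = z @ Py.T + Ym                          # decode to output space
        return y * mask if mask is not None else y # step (c): masking
    return predict
```

The paper uses learned (not linear) reductions and models, but the data flow is the same: high-dimensional input, low-dimensional bottleneck, masked high-dimensional output.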
arXiv Detail & Related papers (2023-03-15T13:10:58Z) - Towards Confidence-guided Shape Completion for Robotic Applications [6.940242990198]
Deep learning has begun gaining traction as an effective means of inferring a complete 3D object representation from partial visual data.
We propose an object shape completion method based on an implicit 3D representation providing a confidence value for each reconstructed point.
We experimentally validate our approach by comparing reconstructed shapes with ground truths, and by deploying our shape completion algorithm in a robotic grasping pipeline.
arXiv Detail & Related papers (2022-09-09T13:48:24Z) - Vector Quantisation for Robust Segmentation [14.477470283239501]
The reliability of segmentation models in the medical domain depends on the model's robustness to perturbations in the input space.
We propose and justify that learning a discrete representation in a low dimensional embedding space improves robustness of a segmentation model.
This is achieved with a dictionary learning method called vector quantisation.
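The quantisation step itself amounts to a nearest-neighbour lookup into a learned codebook; the helper below is a hypothetical sketch of that lookup (in the paper the codebook is learned jointly with the segmentation model):

```python
import numpy as np

def vector_quantise(z, codebook):
    """Map each embedding vector in z to its nearest codebook entry
    (Euclidean distance) -- the discrete bottleneck argued to improve
    robustness. z: (n, d), codebook: (K, d).
    Returns the quantised vectors and the chosen indices."""
    d = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    idx = np.argmin(d, axis=1)
    return codebook[idx], idx
```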
arXiv Detail & Related papers (2022-07-05T09:52:53Z) - Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
arXiv Detail & Related papers (2022-06-29T18:47:05Z) - Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels [67.81799703916563]
We introduce new techniques to formulate the problem as solving the Fokker-Planck equation in a lower-dimensional latent space.
Our proposed model consists of latent-distribution morphing, a generator and a parameterized Fokker-Planck kernel function.
arXiv Detail & Related papers (2021-05-10T17:42:01Z) - Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z) - Deep Multi-Fidelity Active Learning of High-dimensional Outputs [17.370056935194786]
We develop a deep neural network-based multi-fidelity model for learning with high-dimensional outputs.
We then propose a mutual information-based acquisition function that extends the predictive entropy principle.
We show the advantage of our method in several applications of computational physics and engineering design.
arXiv Detail & Related papers (2020-12-02T00:02:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.