ASAT: Adaptively Scaled Adversarial Training in Time Series
- URL: http://arxiv.org/abs/2108.08976v1
- Date: Fri, 20 Aug 2021 03:13:34 GMT
- Title: ASAT: Adaptively Scaled Adversarial Training in Time Series
- Authors: Zhiyuan Zhang, Wei Li, Ruihan Bao, Keiko Harimoto, Yunfang Wu, Xu Sun
- Abstract summary: We take the first step to introduce adversarial training in time series analysis by taking the finance field as an example.
We propose the adaptively scaled adversarial training (ASAT) in time series analysis, by treating data at different time slots with time-dependent importance weights.
Experimental results show that the proposed ASAT can improve both the accuracy and the adversarial robustness of neural networks.
- Score: 21.65050910881857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training is a method for enhancing neural networks to improve the
robustness against adversarial examples. Besides the security concerns of
potential adversarial examples, adversarial training can also improve the
performance of the neural networks, train robust neural networks, and provide
interpretability for neural networks. In this work, we take the first step to
introduce adversarial training in time series analysis by taking the finance
field as an example. Rethinking existing research on adversarial training, we
propose the adaptively scaled adversarial training (ASAT) in time series
analysis, by treating data at different time slots with time-dependent
importance weights. Experimental results show that the proposed ASAT can
improve both the accuracy and the adversarial robustness of neural networks.
Besides enhancing neural networks, we also propose the dimension-wise
adversarial sensitivity indicator to probe the sensitivities and importance of
input dimensions. With the proposed indicator, we can explain the decision
bases of black box neural networks.
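A minimal sketch may help make the idea concrete. The PyTorch code below is our illustration, not the paper's implementation: the linear weight schedule over time slots, the FGSM-style sign perturbation, the clean-plus-adversarial loss, and the gradient-magnitude sensitivity proxy are all assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def time_weights(seq_len, device):
    # Hypothetical time-dependent importance weights: later time slots
    # receive larger perturbation budgets. The paper's actual weighting
    # schemes may differ.
    w = torch.linspace(0.1, 1.0, seq_len, device=device)
    return w.view(1, seq_len, 1)  # broadcasts over (batch, time, features)

def asat_step(model, x, y, optimizer, eps=0.01):
    # One ASAT-style training step on a (batch, time, features) input.
    # Build the adversarial example from the input gradient, scaling the
    # perturbation at each time slot by its importance weight.
    x_req = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    delta = eps * time_weights(x.size(1), x.device) * grad.sign()
    x_adv = (x + delta).detach()

    # Minimize the loss on both clean and adversarial inputs.
    optimizer.zero_grad()
    total = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    total.backward()
    optimizer.step()
    return total.item()

def dimension_sensitivity(model, x, y):
    # A gradient-based proxy for the dimension-wise adversarial
    # sensitivity indicator: mean input-gradient magnitude per feature
    # dimension, averaged over batch and time.
    x_req = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    return grad.abs().mean(dim=(0, 1))
```

Under this reading, larger scores from dimension_sensitivity mark input dimensions where small changes move the loss most, which is the kind of signal the abstract describes for explaining the decision bases of black-box networks.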
Related papers
- Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular decentralized gradient descent (DGD) algorithms.
We validate our analytical results by accurately predicting the error for classification tasks.
arXiv Detail & Related papers (2024-09-23T17:57:58Z) - Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric
Regression by Adversarial Training [5.68558935178946]
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme.
A deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction.
arXiv Detail & Related papers (2023-07-08T20:24:14Z) - Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z) - Global quantitative robustness of regression feed-forward neural
networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, with a proxy of the breakdown rate.
The results indeed motivate the use of robust loss functions for neural network training.
arXiv Detail & Related papers (2022-11-18T09:57:53Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy.
Under minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Evaluation of Adversarial Training on Different Types of Neural Networks
in Deep Learning-based IDSs [3.8073142980733]
We focus on investigating the effectiveness of different evasion attacks and how to train a resilient deep learning-based IDS.
We use the min-max approach to formulate the problem of training a robust IDS against adversarial examples; a minimal sketch of this formulation appears after this list.
Our experiments on different deep learning algorithms and different benchmark datasets demonstrate that defense using an adversarial training-based min-max approach improves the robustness against the five well-known adversarial attack methods.
arXiv Detail & Related papers (2020-07-08T23:33:30Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
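The min-max entry above refers to the standard formulation min_θ E[ max_{||δ||∞ ≤ ε} L(f_θ(x+δ), y) ]: the inner problem maximizes the loss over bounded perturbations, the outer problem minimizes it over the network weights. A minimal PGD-based sketch under that reading (our illustration; the step size, iteration count, and L∞ ball are assumptions, not details taken from the cited paper):

```python
import torch
import torch.nn.functional as F

def pgd_minmax_step(model, x, y, optimizer, eps=0.03, alpha=0.01, steps=5):
    # Inner loop: approximately maximize the loss over the L-inf ball
    # around x via projected gradient ascent on the perturbation.
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascent step followed by projection back onto the ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()

    # Outer step: minimize the loss on the worst-case examples found.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```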
This list is automatically generated from the titles and abstracts of the papers on this site.