Example Forgetting: A Novel Approach to Explain and Interpret Deep
Neural Networks in Seismic Interpretation
- URL: http://arxiv.org/abs/2302.14644v1
- Date: Fri, 24 Feb 2023 19:19:22 GMT
- Title: Example Forgetting: A Novel Approach to Explain and Interpret Deep
Neural Networks in Seismic Interpretation
- Authors: Ryan Benkert, Oluwaseun Joseph Aribido, and Ghassan AlRegib
- Abstract summary: Deep neural networks are an attractive component for the common interpretation pipeline.
Deep neural networks are frequently met with distrust due to their property of producing semantically incorrect outputs when exposed to sections the model was not trained on.
We introduce a method that effectively relates semantically malfunctioning predictions to their respective positions within the neural network representation manifold.
- Score: 12.653673008542155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep neural networks have significantly impacted the seismic
interpretation process. Due to the simple implementation and low interpretation
costs, deep neural networks are an attractive component for the common
interpretation pipeline. However, neural networks are frequently met with
distrust due to their property of producing semantically incorrect outputs when
exposed to sections the model was not trained on. We address this issue by
explaining model behaviour and improving generalization properties through
example forgetting: First, we introduce a method that effectively relates
semantically malfunctioning predictions to their respective positions within the
neural network representation manifold. More concretely, our method tracks how
models "forget" seismic reflections during training and establishes a
connection to the decision boundary proximity of the target class. Second, we
use our analysis technique to identify frequently forgotten regions within the
training volume and augment the training set with state-of-the-art style
transfer techniques from computer vision. We show that our method improves the
segmentation performance on underrepresented classes while significantly
reducing the forgotten regions in the F3 volume in the Netherlands.
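The tracking step the abstract describes can be sketched concretely. The snippet below is a minimal illustration in the spirit of example forgetting (counting how often an example flips from correctly to incorrectly predicted between training checks); the class name, shapes, and loop structure are assumptions for illustration, not the authors' implementation.

```python
import torch

class ForgettingTracker:
    """Counts forgetting events: examples that flip from correct
    to incorrect between consecutive evaluations during training."""

    def __init__(self, num_examples):
        self.prev_correct = torch.zeros(num_examples, dtype=torch.bool)
        self.forget_counts = torch.zeros(num_examples, dtype=torch.long)

    def update(self, indices, logits, targets):
        correct = logits.argmax(dim=1) == targets
        # A forgetting event: correct at the last check, incorrect now.
        forgotten = self.prev_correct[indices] & ~correct
        self.forget_counts[indices] += forgotten.long()
        self.prev_correct[indices] = correct

# Inside a training loop (the loader must yield example indices):
#   for indices, x, y in loader:
#       logits = model(x)
#       ...
#       tracker.update(indices, logits.detach(), y)
```

Examples with high `forget_counts` are the ones the abstract flags as lying close to the decision boundary, and hence candidates for style-transfer augmentation.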
Related papers
- SoK: On Finding Common Ground in Loss Landscapes Using Deep Model Merging Techniques [4.013324399289249]
We present a novel taxonomy of model merging techniques organized by their core algorithmic principles.
We distill repeated empirical observations from the literature in these fields into characterizations of four major aspects of loss landscape geometry.
arXiv Detail & Related papers (2024-10-16T18:14:05Z)
- A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer "how the neural network finds the solution that can generalize well on unseen data" are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z)
- Explaining Deep Models through Forgettable Learning Dynamics [12.653673008542155]
We visualize the learning behaviour during training by tracking how often samples are learned and forgotten in subsequent training epochs.
Inspired by this phenomenon, we present a novel segmentation method that actively uses this information to alter the data representation within the model.
arXiv Detail & Related papers (2023-01-10T21:59:20Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network (see the sketch below).
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z)
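As a rough illustration of defending through a latent factorization, the toy layer below parameterizes its weight via low-rank factors and randomizes them at inference; the rank, noise scale, and overall design are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Weight expressed through latent factors U @ V; noise injected
    in the latent space makes each forward pass stochastic."""

    def __init__(self, d_in, d_out, rank=8, noise_std=0.05):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)
        self.noise_std = noise_std

    def forward(self, x):
        # Perturb the latent factor, then reconstruct the full weight.
        U = self.U + self.noise_std * torch.randn_like(self.U)
        return x @ (U @ self.V).t()

layer = FactorizedLinear(32, 16)
out = layer(torch.randn(4, 32))  # a randomized forward pass
```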
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is end-to-end learned (see the sketch below).
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
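A minimal soft-routing sketch of the kind such modular inference relies on; the module count, gating mechanism, and shapes are illustrative assumptions, not the Neural Interpreters architecture itself.

```python
import torch
import torch.nn as nn

class SoftRouter(nn.Module):
    """Routes each token through a learned mixture of small 'function' modules."""

    def __init__(self, d, n_fns=4):
        super().__init__()
        self.fns = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
            for _ in range(n_fns)
        )
        self.gate = nn.Linear(d, n_fns)

    def forward(self, x):                        # x: (batch, tokens, d)
        weights = self.gate(x).softmax(dim=-1)   # per-token routing weights
        outputs = torch.stack([f(x) for f in self.fns], dim=-1)
        return (outputs * weights.unsqueeze(2)).sum(dim=-1)

router = SoftRouter(d=64)
y = router(torch.randn(2, 10, 64))  # routing is learned end-to-end
```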
- Explainability-aided Domain Generalization for Image Classification [0.0]
We show that applying methods and architectures from the explainability literature can achieve state-of-the-art performance for the challenging task of domain generalization.
We develop a set of novel algorithms including DivCAM, an approach where the network receives guidance during training via gradient-based class activation maps to focus on a diverse set of discriminative features (see the sketch below).
Since these methods offer competitive performance on top of explainability, we argue that the proposed methods can be used as a tool to improve the robustness of deep neural network architectures.
arXiv Detail & Related papers (2021-04-05T02:27:01Z)
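For reference, the generic Grad-CAM computation that gradient-based CAM guidance builds on looks roughly like this; DivCAM's diversity objective is not reproduced here, and the tiny network is purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

x = torch.randn(1, 3, 32, 32)
acts = conv(x)          # feature maps we want to explain
acts.retain_grad()      # keep gradients on this non-leaf tensor
logits = head(acts)
logits[0, logits[0].argmax()].backward()  # backprop from the top class

weights = acts.grad.mean(dim=(2, 3), keepdim=True)     # GAP of gradients per channel
cam = F.relu((weights * acts).sum(dim=1)).squeeze(0)   # class activation map, HxW
cam = cam / (cam.max() + 1e-8)                         # normalize for guidance
```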
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization (see the sketch below).
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
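A two-group toy version of decoupled training with a local critic might look like this; the critic target (mimicking the downstream output) and all layer sizes are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Hypothetical two-way split of a network into layer groups.
group1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
group2 = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
critic = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))  # local critic

opt1 = torch.optim.SGD(group1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(list(group2.parameters()) + list(critic.parameters()), lr=0.01)
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

h = group1(x)
# Group 1 updates against the critic's loss estimate, decoupled from group 2.
loss1 = ce(critic(h), y)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Group 2 trains on detached activations; the critic learns to mimic it.
h_det = h.detach()
pred2 = group2(h_det)
loss2 = ce(pred2, y)
critic_loss = (critic(h_det) - pred2.detach()).pow(2).mean()
opt2.zero_grad(); (loss2 + critic_loss).backward(); opt2.step()
```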
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models (see the sketch below).
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
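The adversarial training underlying that comparison can be sketched with a single FGSM step; the epsilon value and toy model are illustrative, and the paper's exact training recipe is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps = 0.1  # illustrative perturbation budget

def fgsm(x, y):
    """One FGSM step: perturb inputs along the sign of the input gradient."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

x_adv = fgsm(x, y)                       # craft adversarial examples
loss = F.cross_entropy(model(x_adv), y)  # train on them
opt.zero_grad(); loss.backward(); opt.step()
# The adversarially-trained weights would then be fine-tuned on the
# target domain as in standard transfer learning.
```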
- Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks [15.329684157845872]
We introduce a new retrospective loss to improve the training of deep neural network models.
Minimizing the retrospective loss, along with the task-specific loss, pushes the parameter state at the current training step towards the optimal parameter state.
Although the idea is simple, we analyze the method and conduct comprehensive sets of experiments across domains (see the sketch below).
arXiv Detail & Related papers (2020-06-24T10:16:36Z)
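A hedged sketch of a retrospective-style objective: an auxiliary term that pulls current predictions toward the targets and away from a frozen past parameter state. The margin form, kappa, and refresh interval below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 5)
past = copy.deepcopy(model)      # frozen snapshot of an earlier parameter state
opt = torch.optim.SGD(model.parameters(), lr=0.05)
kappa, refresh_every = 2.0, 50   # illustrative choices

for step in range(200):
    x = torch.randn(32, 20)
    y = torch.randint(0, 5, (32,))
    logits = model(x)
    target = F.one_hot(y, num_classes=5).float()
    with torch.no_grad():
        old = past(x)            # predictions of the past parameter state
    # Pull predictions toward the targets, push them away from the past state.
    retro = ((kappa + 1) * (logits - target).norm(dim=1)
             - kappa * (logits - old).norm(dim=1)).mean()
    loss = F.cross_entropy(logits, y) + retro
    opt.zero_grad(); loss.backward(); opt.step()
    if (step + 1) % refresh_every == 0:
        past = copy.deepcopy(model)  # refresh the retrospective snapshot
```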