Leveraging Internal Representations of Model for Magnetic Image
Classification
- URL: http://arxiv.org/abs/2403.06797v1
- Date: Mon, 11 Mar 2024 15:15:50 GMT
- Title: Leveraging Internal Representations of Model for Magnetic Image
Classification
- Authors: Adarsh N L, Arun P V, Alok Porwal, Malcolm Aranha
- Abstract summary: This paper introduces a potentially groundbreaking paradigm for machine learning model training, specifically designed for scenarios with only a single magnetic image and its corresponding label image available.
We harness the capabilities of Deep Learning to generate concise yet informative samples, aiming to overcome data scarcity.
- Score: 0.13654846342364302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data generated by edge devices has the potential to train intelligent
autonomous systems across various domains. Despite the emergence of diverse
machine learning approaches addressing privacy concerns and utilizing
distributed data, security issues persist due to the sensitive storage of data
shards in disparate locations. This paper introduces a potentially
groundbreaking paradigm for machine learning model training, specifically
designed for scenarios with only a single magnetic image and its corresponding
label image available. We harness the capabilities of Deep Learning to generate
concise yet informative samples, aiming to overcome data scarcity. Through the
utilization of deep learning's internal representations, our objective is to
efficiently address data scarcity issues and produce meaningful results. This
methodology presents a promising avenue for training machine learning models
with minimal data.
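The core idea, many training samples derived from a single image and its label image, can be illustrated with a toy patch-extraction sketch. This is only a stand-in: the paper derives its samples from deep internal representations, which are not reproduced here, and all names below are illustrative.

```python
import numpy as np

def patches_from_single_image(image, label, patch=8, stride=4):
    """Slice one image/label pair into many (patch, centre-label) samples.

    A toy stand-in for deriving a training set from a single magnetic
    image and its label image; the paper itself uses deep internal
    representations rather than raw patches.
    """
    H, W = image.shape
    xs, ys = [], []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            xs.append(image[r:r + patch, c:c + patch])
            # Label each patch by the class at its centre pixel.
            ys.append(label[r + patch // 2, c + patch // 2])
    return np.stack(xs), np.array(ys)

# One synthetic 32x32 "magnetic image" and its per-pixel label image.
rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
lab = (img > 0).astype(int)
X, y = patches_from_single_image(img, lab)
print(X.shape, y.shape)  # (49, 8, 8) (49,) -- many samples from one image
```

Even this naive slicing turns one labelled image into dozens of supervised examples, which is the data-scarcity angle the abstract describes.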
Related papers
- Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting [4.220336689294245]
Recent studies have presented various machine unlearning algorithms to make a trained model unlearn the data to be forgotten.
We propose Distribution-Level Feature Distancing (DLFD), a novel method that efficiently forgets instances while preventing correlation collapse.
Our method synthesizes data samples so that the generated data distribution is far from the distribution of samples being forgotten in the feature space.
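The distribution-level intuition can be sketched in a few lines of numpy: push synthesized samples so their features drift away from the forget set's features. The linear feature map `W` and the simple feature-mean distance below are stand-ins; DLFD itself operates in a trained model's feature space with a proper distributional distance.

```python
import numpy as np

def distance_maximising_synthesis(x_gen, x_forget, W, steps=100, lr=0.1):
    """Gradient-ascent sketch of the distribution-level idea: move the
    generated samples so their mean feature drifts away from the mean
    feature of the samples to be forgotten. W is a stand-in linear
    feature extractor, not the paper's model."""
    mu_forget = (x_forget @ W.T).mean(axis=0)
    x = x_gen.copy()
    for _ in range(steps):
        diff = (x @ W.T).mean(axis=0) - mu_forget
        # d/dx of ||mean(W x) - mu||^2 gives every sample the same pull.
        x = x + lr * 2.0 * (W.T @ diff) / len(x)
    return x

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))
x_forget = rng.normal(size=(32, 16))
x_start = rng.normal(size=(8, 16))
x_out = distance_maximising_synthesis(x_start, x_forget, W)

dist = lambda X: np.linalg.norm((X @ W.T).mean(0) - (x_forget @ W.T).mean(0))
print(dist(x_out) > dist(x_start))  # the feature-space distance has grown
```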
arXiv Detail & Related papers (2024-09-23T06:51:10Z)
- Silver Linings in the Shadows: Harnessing Membership Inference for Machine Unlearning [7.557226714828334]
We present a novel unlearning mechanism designed to remove the impact of specific data samples from a neural network.
In achieving this goal, we crafted a novel loss function tailored to eliminate privacy-sensitive information from weights and activation values of the target model.
Our results showcase the superior performance of our approach in terms of unlearning efficacy and latency as well as the fidelity of the primary task.
arXiv Detail & Related papers (2024-07-01T00:20:26Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Towards Independence Criterion in Machine Unlearning of Features and Labels [9.790684060172662]
This work delves into the complexities of machine unlearning in the face of distributional shifts.
Our research introduces a novel approach that leverages influence functions and principles of distributional independence to address these challenges.
Our method not only facilitates efficient data removal but also dynamically adjusts the model to preserve its generalization capabilities.
arXiv Detail & Related papers (2024-03-12T23:21:09Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can stem from biases in data acquisition rather than the underlying task.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
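The combination the summary names, federated averaging with differentially private updates, can be sketched in a few lines: clients compute local updates, and the server clips each update to bound sensitivity and adds Gaussian noise before averaging. The least-squares clients and the noise/clip parameters below are illustrative, not a calibrated (epsilon, delta) guarantee and not the paper's architecture.

```python
import numpy as np

def dp_fedavg_round(global_w, client_data, lr=0.1, clip=1.0,
                    noise_std=0.1, rng=None):
    """One round of federated averaging with clipped, noised client updates.

    A minimal sketch: clients take a local least-squares gradient step;
    each update is clipped to bound sensitivity and perturbed with
    Gaussian noise before the server averages.
    """
    rng = rng or np.random.default_rng()
    updates = []
    for X, y in client_data:
        grad = X.T @ (X @ global_w - y) / len(y)   # local gradient
        delta = -lr * grad                         # local update
        norm = np.linalg.norm(delta)
        if norm > clip:
            delta = delta * (clip / norm)          # clip sensitivity
        updates.append(delta + rng.normal(scale=noise_std, size=delta.shape))
    return global_w + np.mean(updates, axis=0)

rng = np.random.default_rng(3)
w_true = np.array([2.0, -1.0])
clients = [(X, X @ w_true) for X in (rng.normal(size=(50, 2)) for _ in range(5))]
w = np.zeros(2)
for _ in range(200):
    w = dp_fedavg_round(w, clients, rng=rng)
print(w)  # hovers near w_true, jittered by the privacy noise
```

Averaging over clients shrinks the injected noise, which is why the global model still converges to a neighbourhood of the true weights.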
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Digital Fingerprinting of Microstructures [44.139970905896504]
Finding efficient means of fingerprinting microstructural information is a critical step towards harnessing data-centric machine learning approaches.
Here, we consider microstructure classification and utilise the resulting features over a range of related machine learning tasks.
In particular, methods that leverage transfer learning with convolutional neural networks (CNNs), pretrained on the ImageNet dataset, are generally shown to outperform other methods.
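The fingerprint-then-classify pipeline can be illustrated without the actual ImageNet-pretrained CNN: below, a random but frozen filter bank stands in for pretrained convolutional features, and a nearest-centroid rule stands in for the downstream classifier. Everything here is a toy assumption chosen only to show the pipeline's shape.

```python
import numpy as np

def conv_fingerprint(img, filters):
    """Fingerprint an image as mean-pooled ReLU responses to a fixed
    filter bank (a stand-in for frozen pretrained CNN features)."""
    k = filters.shape[-1]
    H, W = img.shape
    feats = []
    for f in filters:
        resp = np.empty((H - k + 1, W - k + 1))
        for r in range(H - k + 1):
            for c in range(W - k + 1):
                resp[r, c] = max(0.0, np.sum(img[r:r + k, c:c + k] * f))
        feats.append(resp.mean())   # global average pooling
    return np.array(feats)

rng = np.random.default_rng(4)
filters = rng.normal(size=(8, 3, 3))   # frozen "pretrained" filters

def make_image(cls, rng):
    # Two synthetic "microstructures" differing in texture amplitude.
    amp = 0.2 if cls == 0 else 2.0
    return amp * rng.normal(size=(16, 16))

train = [(make_image(c, rng), c) for c in [0, 1] * 6]
test = [(make_image(c, rng), c) for c in [0, 1] * 2]

F = np.array([conv_fingerprint(im, filters) for im, _ in train])
y = np.array([c for _, c in train])
centroids = np.stack([F[y == c].mean(0) for c in (0, 1)])

correct = 0
for im, c in test:
    f = conv_fingerprint(im, filters)
    correct += int(np.argmin(np.linalg.norm(centroids - f, axis=1)) == c)
print(correct / len(test))
```

Swapping the random filters for an actual pretrained backbone is the step that gives the transfer-learning gains the summary reports.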
arXiv Detail & Related papers (2022-03-25T15:40:44Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria to quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
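One generic way to act on such interestingness scores is greedy max-min diversity selection over per-scene feature vectors: repeatedly pick the scene farthest from everything already chosen. This heuristic is a stand-in for the paper's multi-criteria curation pipeline, and the feature vectors below are synthetic.

```python
import numpy as np

def greedy_diverse_select(feats, k):
    """Greedy max-min selection: repeatedly add the scene farthest from
    the already-selected set (a generic diversity heuristic, not the
    paper's exact pipeline)."""
    chosen = [0]                                  # seed with the first scene
    dists = np.linalg.norm(feats - feats[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))               # farthest remaining scene
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(feats - feats[nxt], axis=1))
    return chosen

rng = np.random.default_rng(5)
# Toy "interestingness" feature vectors for 100 traffic scenes.
scenes = rng.normal(size=(100, 4))
subset = greedy_diverse_select(scenes, 10)
print(len(set(subset)))  # 10 distinct scenes selected
```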
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without Sharing Private Information [55.866673486753115]
We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named Distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN).
arXiv Detail & Related papers (2020-12-15T20:41:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.