The Comparability of Model Fusion to Measured Data in Confuser Rejection
- URL: http://arxiv.org/abs/2505.00836v1
- Date: Thu, 01 May 2025 19:51:30 GMT
- Title: The Comparability of Model Fusion to Measured Data in Confuser Rejection
- Authors: Conor Flynn, Christopher Ebersole, Edmund Zelnio
- Abstract summary: No dataset can account for every slight deviation we might see in live usage. Simulators have been developed utilizing the shooting and bouncing ray method to allow for the generation of synthetic SAR data on 3D models. We aim to use computational power as a substitute for this lack of high-quality measured data by ensembling many models trained on synthetic data.
- Score: 0.24578723416255746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data collection has always been a major issue in the modeling and training of large deep learning networks, as no dataset can account for every slight deviation we might see in live usage. Collecting samples can be especially costly for Synthetic Aperture Radar (SAR), limiting the number of unique targets and operating conditions we are able to observe. To counter this lack of data, simulators have been developed utilizing the shooting and bouncing ray method to allow for the generation of synthetic SAR data on 3D models. While effective, the synthetically generated data does not perfectly correlate to the measured data, leading to issues when training models solely on synthetic data. We aim to use computational power as a substitute for this lack of high-quality measured data by ensembling many models trained on synthetic data. Synthetic data is also not complete, as we do not know what targets might be present in a live environment. Therefore, our ensembling techniques must account for these unknown targets by applying confuser rejection, in which our models reject unknown targets they are presented with and classify only those they have been trained on.
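The abstract describes combining many synthetically trained classifiers and rejecting inputs that the ensemble does not recognize confidently. The paper's exact rejection rule is not given here; the following is a minimal sketch of one way ensemble-based confuser rejection could look, assuming PyTorch-style classifiers, a hypothetical `models` list trained on synthetic SAR chips, and an arbitrary confidence threshold.

```python
import torch
import torch.nn.functional as F

def ensemble_predict_with_rejection(models, x, threshold=0.9):
    """Average softmax outputs of an ensemble and reject low-confidence inputs.

    models:    list of trained classifiers (assumed, e.g. CNNs trained on synthetic SAR)
    x:         batch of input chips, shape (N, C, H, W)
    threshold: confidence below which an input is treated as a confuser (assumed value)
    Returns predicted class indices, with -1 marking rejected (unknown) targets.
    """
    with torch.no_grad():
        # Average class-probability estimates across the ensemble members.
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)

    confidence, predicted = probs.max(dim=1)
    # Inputs the ensemble is not collectively confident about are rejected as confusers.
    predicted[confidence < threshold] = -1
    return predicted
```

In practice the rejection criterion could instead use ensemble disagreement (e.g., variance or entropy across members) rather than a single averaged-confidence threshold; the abstract does not specify which criterion the authors adopt.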
Related papers
- Synthetic Data Can Mislead Evaluations: Membership Inference as Machine Text Detection [1.03590082373586]
Using synthetic data in membership evaluations may lead to false conclusions about model memorization and data leakage. This issue could affect other evaluations using model signals such as loss, where synthetic or machine-generated translated data substitutes for real-world samples.
arXiv Detail & Related papers (2025-01-20T23:19:15Z)
- Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World [19.266191284270793]
Generative machine learning models are pretrained on web-scale datasets containing data generated by earlier models. Some prior work warns of "model collapse" as the web is overwhelmed by synthetic data. We report experiments on three ways of using data (training workflows) across three generative model task settings.
arXiv Detail & Related papers (2024-10-22T05:49:24Z)
- Improving Object Detector Training on Synthetic Data by Starting With a Strong Baseline Methodology [0.14980193397844666]
We propose a methodology for improving the performance of a pre-trained object detector when training on synthetic data.
Our approach focuses on extracting the salient information from synthetic data without forgetting useful features learned from pre-training on real images.
arXiv Detail & Related papers (2024-05-30T08:31:01Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- On the Stability of Iterative Retraining of Generative Models on their own Data [56.153542044045224]
We study the impact of training generative models on mixed datasets.
We first prove the stability of iterative training under the condition that the initial generative models approximate the data distribution well enough.
We empirically validate our theory on both synthetic and natural images by iteratively training normalizing flows and state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-09-30T16:41:04Z)
- Exploring the Effectiveness of Dataset Synthesis: An Application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z)
- The Big Data Myth: Using Diffusion Models for Dataset Generation to Train Deep Detection Models [0.15469452301122172]
This study presents a framework for the generation of synthetic datasets by fine-tuning stable diffusion models.
The results of this study reveal that the object detection models trained on synthetic data perform similarly to the baseline model.
arXiv Detail & Related papers (2023-06-16T10:48:52Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- STAN: Synthetic Network Traffic Generation with Generative Neural Models [10.54843182184416]
This paper presents STAN (Synthetic network Traffic generation with Autoregressive Neural models), a tool to generate realistic synthetic network traffic datasets.
Our novel neural architecture captures both temporal dependencies and dependence between attributes at any given time.
We evaluate the performance of STAN in terms of the quality of data generated, by training it on both a simulated dataset and a real network traffic data set.
arXiv Detail & Related papers (2020-09-27T04:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.