BSS-Bench: Towards Reproducible and Effective Band Selection Search
- URL: http://arxiv.org/abs/2312.14570v1
- Date: Fri, 22 Dec 2023 10:00:32 GMT
- Title: BSS-Bench: Towards Reproducible and Effective Band Selection Search
- Authors: Wenshuai Xu, Zhenbo Xu
- Abstract summary: This paper presents the first band selection search benchmark (BSS-Bench) for various hyperspectral analysis tasks.
The creation of BSS-Bench required a significant computational effort of 1.26k GPU days.
In addition to BSS-Bench, we present an effective one-shot BS method called Single Combination One Shot (SCOS), which learns the priority of any band combination (BC) through one-time training.
- Score: 8.712706386559171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key to overcoming the drawbacks of hyperspectral imaging (high
cost, long capture delay, and low spatial resolution) and making it widely
applicable is to select only a few representative bands from the hundreds available.
However, current band selection (BS) methods are difficult to compare
fairly due to inconsistent train/validation protocols, including the number
of selected bands, dataset splits, and retraining settings. To make BS methods easy and
reproducible, this paper presents the first band selection search benchmark
(BSS-Bench) containing 52k training and evaluation records of numerous band
combinations (BC) with different backbones for various hyperspectral analysis
tasks. The creation of BSS-Bench required a significant computational effort of
1.26k GPU days. By querying BSS-Bench, BS experiments can be performed easily
and reproducibly, and the gap between the searched result and the best
achievable performance can be measured. Based on BSS-Bench, we further discuss
the impact of various factors on BS, such as the number of bands, unsupervised
statistics, and different backbones. In addition to BSS-Bench, we present an
effective one-shot BS method called Single Combination One Shot (SCOS), which
learns the priority of any BC through one-time training, eliminating the need
for repetitive retraining on different BCs. Furthermore, the search process of
SCOS is flexible and does not require training, making it efficient and
effective. Our extensive evaluations demonstrate that SCOS outperforms current
BS methods on multiple tasks, even with far fewer bands. Our BSS-Bench and
code are included in the supplementary material and will be released publicly.
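Since the abstract describes BSS-Bench as a table of precomputed training and evaluation records, a band-selection experiment reduces to looking up stored results rather than retraining. The sketch below illustrates that workflow under stated assumptions: the record layout, the key scheme, and the `query`/`oracle` helpers are hypothetical stand-ins for whatever interface the released benchmark actually provides.

```python
# Minimal sketch of querying a tabular band-selection benchmark, in the
# spirit of BSS-Bench. The record format and helper names are assumptions
# made for illustration, not the benchmark's actual interface.

# Assumed layout: one precomputed accuracy per (band combination, backbone)
# pair, e.g. loaded from the released benchmark files.
records = {
    "3-17-42|resnet": 0.91,   # toy values, not real BSS-Bench numbers
    "3-17-90|resnet": 0.88,
    "5-17-42|resnet": 0.93,
}

def query(bands, backbone):
    """Look up the precomputed accuracy of one band combination (BC)."""
    key = "-".join(map(str, sorted(bands))) + "|" + backbone
    return records[key]

def oracle(num_bands, backbone):
    """Best recorded accuracy among all stored BCs of a given size."""
    return max(
        acc for key, acc in records.items()
        if key.endswith("|" + backbone)
        and len(key.split("|")[0].split("-")) == num_bands
    )

# A BS method is then scored by its gap (regret) to the best achievable
# result, with no retraining involved:
searched = query([42, 17, 3], "resnet")   # output of some BS method
print(f"regret: {oracle(3, 'resnet') - searched:.4f}")   # -> 0.0200
```

This table-lookup pattern is what makes the "gap between the searched result and the best achievable performance" measurable: the oracle is an exhaustive scan over stored records rather than a new training run.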
Related papers
- Trajectory Balance with Asynchrony: Decoupling Exploration and Learning for Fast, Scalable LLM Post-Training [71.16258800411696]
Reinforcement learning (RL) is a critical component of large language model (LLM) post-training.
Existing on-policy algorithms used for post-training are inherently incompatible with the use of experience replay buffers.
We propose to efficiently obtain the benefits of replay buffers via Trajectory Balance with Asynchrony (TBA).
arXiv Detail & Related papers (2025-03-24T17:51:39Z) - Offline Learning for Combinatorial Multi-armed Bandits [56.96242764723241]
Off-CMAB is the first offline learning framework for CMAB.
Off-CMAB combines pessimistic reward estimations with solvers.
Experiments on synthetic and real-world datasets highlight the superior performance of the proposed CLCB algorithm.
arXiv Detail & Related papers (2025-01-31T16:56:18Z) - Diversified Batch Selection for Training Acceleration [68.67164304377732]
A prevalent research line, known as online batch selection, explores selecting informative subsets during the training process.
Vanilla reference-model-free methods involve independently scoring and selecting data in a sample-wise manner.
We propose Diversified Batch Selection (DivBS), which is reference-model-free and can efficiently select diverse and representative samples.
arXiv Detail & Related papers (2024-06-07T12:12:20Z) - Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection [9.241737058291823]
Adversarial training methods generate independent initial perturbations for adversarial samples from a simple uniform distribution.
We propose a simple yet effective training framework called Batch-in-Batch (BB) to enhance models.
We show that models trained within the BB framework consistently have higher adversarial accuracy across various adversarial settings.
arXiv Detail & Related papers (2024-06-06T13:34:43Z) - Soft Random Sampling: A Theoretical and Empirical Analysis [59.719035355483875]
Soft random sampling (SRS) is a simple yet effective approach for efficient deep neural networks when dealing with massive data.
It selects a subset uniformly at random with replacement from the full data set in each epoch (see the sketch after this list).
It is shown to be a powerful and competitive strategy with strong performance at real-world industrial scale.
arXiv Detail & Related papers (2023-11-21T17:03:21Z) - RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation [74.47709320443998]
We propose RLSAC, a novel Reinforcement Learning enhanced SAmple Consensus framework for end-to-end robust estimation.
RLSAC employs a graph neural network that uses both data and memory features to guide the exploration direction for sampling the next minimum set.
Our experimental results demonstrate that RLSAC can learn from features to gradually explore a better hypothesis.
arXiv Detail & Related papers (2023-08-10T03:14:19Z) - End-to-end Hyperspectral Image Change Detection Network Based on Band
Selection [22.7908026248101]
We propose an end-to-end hyperspectral image change detection network with band selection (ECDBS)
The main ingredients of the network are a deep learning based band selection module and cascading band-specific spatial attention blocks.
Experimental evaluations conducted on three widely used HSI-CD datasets demonstrate the effectiveness and superiority of our proposed method.
arXiv Detail & Related papers (2023-07-23T13:50:41Z) - One-shot neural band selection for spectral recovery [15.565913045545066]
We present a novel one-shot Neural Band Selection (NBS) framework for spectral recovery.
Our NBS is based on the continuous relaxation of the band selection process, thus allowing efficient band search using gradient descent.
Our code will be publicly available.
arXiv Detail & Related papers (2023-05-16T07:34:03Z) - Query-Efficient Adversarial Attack Based on Latin Hypercube Sampling [6.141497251925968]
This paper proposes a Latin Hypercube Sampling based Boundary Attack (LHS-BA) to save query budget.
Experimental results demonstrate the superiority of the proposed LHS-BA over the state-of-the-art BA methods in terms of query efficiency.
arXiv Detail & Related papers (2022-07-05T12:04:44Z) - Batch Active Learning at Scale [39.26441165274027]
Batch active learning, which adaptively issues batched queries to a labeling oracle, is a common approach to reducing labeling cost.
In this work, we analyze an efficient active learning algorithm, which focuses on the large batch setting.
We show that our sampling method, which combines notions of uncertainty and diversity, easily scales to batch sizes (100K-1M) several orders of magnitude larger than those used in previous studies.
arXiv Detail & Related papers (2021-07-29T18:14:05Z) - Rethinking Sampling Strategies for Unsupervised Person Re-identification [59.47536050785886]
We analyze the reasons for the performance differences between various sampling strategies under the same framework and loss function.
Group sampling is proposed, which gathers samples from the same class into groups.
Experiments on Market-1501, DukeMTMC-reID and MSMT17 show that group sampling achieves performance comparable to state-of-the-art methods.
arXiv Detail & Related papers (2021-07-07T05:39:58Z) - Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z) - Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z) - SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
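For concreteness, the Soft Random Sampling entry above describes a per-epoch subsampling rule that is simple to sketch. The snippet below is a minimal illustration, assuming a NumPy-indexable dataset and a selection ratio as the only hyperparameter; `srs_epoch_indices` and `train_one_epoch` are hypothetical names, not the paper's code.

```python
# Minimal sketch of soft random sampling (SRS) as summarized above: in each
# epoch, train on a subset drawn uniformly at random with replacement from
# the full data set. The selection ratio is an assumed hyperparameter.
import numpy as np

def srs_epoch_indices(n_samples: int, ratio: float, rng) -> np.ndarray:
    """Draw one epoch's subset: uniform, with replacement, size ratio * n."""
    subset_size = max(1, int(ratio * n_samples))
    return rng.integers(0, n_samples, size=subset_size)

rng = np.random.default_rng(seed=0)
for epoch in range(3):
    idx = srs_epoch_indices(n_samples=50_000, ratio=0.1, rng=rng)
    # train_one_epoch(dataset[idx])  # placeholder for the real training step
    print(epoch, len(idx), idx[:5])
```

Because sampling is with replacement, some examples may repeat within an epoch while others are skipped, which is what distinguishes SRS from a plain without-replacement shuffle.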