Reinforcement Learning as a Parsimonious Alternative to Prediction
Cascades: A Case Study on Image Segmentation
- URL: http://arxiv.org/abs/2402.11760v1
- Date: Mon, 19 Feb 2024 01:17:52 GMT
- Title: Reinforcement Learning as a Parsimonious Alternative to Prediction
Cascades: A Case Study on Image Segmentation
- Authors: Bharat Srikishan, Anika Tabassum, Srikanth Allu, Ramakrishnan Kannan,
Nikhil Muralidhar
- Abstract summary: PaSeR (Parsimonious Segmentation with Reinforcement Learning) is a non-cascading, cost-aware learning pipeline.
We show that PaSeR achieves better accuracy while minimizing computational cost relative to cascaded models.
We introduce a new metric IoU/GigaFlop to evaluate the balance between cost and performance.
- Score: 6.576180048533476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning architectures have achieved state-of-the-art (SOTA) performance
on computer vision tasks such as object detection and image segmentation. This
may be attributed to the use of over-parameterized, monolithic deep learning
architectures executed on large datasets. Although such architectures lead to
increased accuracy, this is usually accompanied by a large increase in
computation and memory requirements during inference. While this is a non-issue
in traditional machine learning pipelines, the recent confluence of machine
learning and fields like the Internet of Things has rendered such large
architectures infeasible for execution in low-resource settings. In such
settings, previous efforts have proposed decision cascades where inputs are
passed through models of increasing complexity until desired performance is
achieved. However, we argue that cascaded prediction leads to increased
computational cost due to wasteful intermediate computations. To address this,
we propose PaSeR (Parsimonious Segmentation with Reinforcement Learning), a
non-cascading, cost-aware learning pipeline as an alternative to cascaded
architectures. Through experimental evaluation on real-world and standard
datasets, we demonstrate that PaSeR achieves better accuracy while minimizing
computational cost relative to cascaded models. Further, we introduce a new
metric IoU/GigaFlop to evaluate the balance between cost and performance. On
the real-world task of battery material phase segmentation, PaSeR yields a
minimum performance improvement of 174% on the IoU/GigaFlop metric with respect
to baselines. We also demonstrate PaSeR's adaptability to complementary models
trained on a noisy MNIST dataset, where it achieved a minimum performance
improvement on IoU/GigaFlop of 13.4% over SOTA models. Code and data are
available at https://github.com/scailab/paser .
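The abstract contrasts cascaded prediction, where an input is passed through models of increasing complexity until a confidence threshold is met, with PaSeR's non-cascading, cost-aware selection of a single model per input. The sketch below is a hypothetical illustration of that difference under stated assumptions, not the paper's implementation; the model zoo, confidence function, policy interface, and reward shape are all illustrative.

```python
# Hypothetical sketch: prediction cascade vs. non-cascading cost-aware selection.
# All names below are illustrative assumptions, not PaSeR's actual API.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

import numpy as np


@dataclass
class Model:
    name: str
    gflops: float                                   # inference cost of this model
    predict: Callable[[np.ndarray], np.ndarray]     # returns a segmentation mask
    confidence: Callable[[np.ndarray], float]       # self-reported confidence


def cascade(x: np.ndarray, zoo: Sequence[Model], threshold: float = 0.9) -> Tuple[np.ndarray, float]:
    """Cascaded prediction: run models in order of increasing cost until one is
    confident enough. Every intermediate model's cost is paid, even though only
    the last prediction is kept (the wasteful computation the abstract criticizes).
    Assumes a non-empty zoo."""
    spent = 0.0
    for m in sorted(zoo, key=lambda mdl: mdl.gflops):
        y = m.predict(x)
        spent += m.gflops
        if m.confidence(x) >= threshold:
            return y, spent
    return y, spent


def cost_aware_select(x: np.ndarray, zoo: Sequence[Model],
                      policy: Callable[[np.ndarray], int]) -> Tuple[np.ndarray, float]:
    """Non-cascading selection: a policy picks exactly one model per input, so
    only that model's cost is incurred. In an RL formulation the policy would be
    trained with a reward that trades accuracy against cost, e.g.
        r = IoU(prediction, ground_truth) - lam * gflops_of_chosen_model."""
    m = zoo[policy(x)]
    return m.predict(x), m.gflops
```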
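The IoU/GigaFlop metric introduced in the abstract divides segmentation quality (intersection-over-union) by inference cost in GigaFLOPs, so a higher value means more accuracy bought per unit of compute. A minimal sketch, assuming binary masks and an externally measured FLOP count (function names are illustrative):

```python
# Minimal sketch of the IoU/GigaFlop metric; not the authors' code.
import numpy as np


def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Binary intersection-over-union between two {0, 1} masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection) / float(union) if union > 0 else 1.0


def iou_per_gigaflop(pred: np.ndarray, target: np.ndarray, flops: float) -> float:
    """Cost/performance score: IoU divided by inference cost in GigaFLOPs.
    `flops` is the raw FLOP count of the model that produced `pred`."""
    return iou(pred, target) / (flops / 1e9)
```

Under this metric, a lightweight model with slightly lower IoU can outrank a much larger model whose FLOP count is orders of magnitude higher, which is the cost/performance balance the abstract targets.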
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses limited on-device resources by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator [4.09225917049674]
Transferable NAS has emerged, generalizing the search process from dataset-dependent to task-dependent.
This paper introduces POMONAG, extending DiffusionNAG via a many-objective diffusion process.
Results were validated on two search spaces -- NAS201 and MobileNetV3 -- and evaluated across 15 image classification datasets.
arXiv Detail & Related papers (2024-09-30T16:05:29Z) - Mechanistic Design and Scaling of Hybrid Architectures [114.3129802943915]
We identify and test new hybrid architectures constructed from a variety of computational primitives.
We experimentally validate the resulting architectures via an extensive compute-optimal and a new state-optimal scaling law analysis.
We find MAD synthetics to correlate with compute-optimal perplexity, enabling accurate evaluation of new architectures.
arXiv Detail & Related papers (2024-03-26T16:33:12Z) - Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z) - Transfer Learning in Deep Learning Models for Building Load Forecasting:
Case of Limited Data [0.0]
This paper proposes a Building-to-Building Transfer Learning framework to overcome the problem of limited data and enhance the performance of Deep Learning models.
The proposed approach improved the forecasting accuracy by 56.8% compared to the case of conventional deep learning where training from scratch is used.
arXiv Detail & Related papers (2023-01-25T16:05:47Z) - DCT-Former: Efficient Self-Attention with Discrete Cosine Transform [4.622165486890318]
An intrinsic limitation of Transformer architectures arises from the computation of the dot-product attention.
Our idea takes inspiration from the world of lossy data compression (such as the JPEG algorithm) to derive an approximation of the attention module.
An extensive section of experiments shows that our method takes up less memory for the same performance, while also drastically reducing inference time.
arXiv Detail & Related papers (2022-03-02T15:25:27Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script-language engines alone do not supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all of these requirements.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Top-KAST: Top-K Always Sparse Training [50.05611544535801]
We propose Top-KAST, a method that preserves constant sparsity throughout training.
We show that it performs comparably to or better than previous works when training models on the established ImageNet benchmark.
In addition to our ImageNet results, we also demonstrate our approach in the domain of language modeling.
arXiv Detail & Related papers (2021-06-07T11:13:05Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z) - Tidying Deep Saliency Prediction Architectures [6.613005108411055]
In this paper, we identify four key components of saliency models, i.e., input features, multi-level integration, readout architecture, and loss functions.
We propose two novel end-to-end architectures, SimpleNet and MDNSal, which are neater, minimal, more interpretable, and achieve state-of-the-art performance on public saliency benchmarks.
arXiv Detail & Related papers (2020-03-10T19:34:49Z)