A Learning-based Approach Towards Automated Tuning of SSD Configurations
- URL: http://arxiv.org/abs/2110.08685v1
- Date: Sun, 17 Oct 2021 00:25:21 GMT
- Title: A Learning-based Approach Towards Automated Tuning of SSD Configurations
- Authors: Daixuan Li and Jian Huang
- Abstract summary: We present an automated learning-based framework, named LearnedSSD, for tuning the hardware configurations of solid-state drives (SSDs).
LearnedSSD automatically extracts the unique access patterns of a new workload using its block I/O traces, maps the workload to previously seen workloads to reuse the learned experiences, and recommends an optimal SSD configuration based on the validated storage performance.
We develop LearnedSSD with simple yet effective learning algorithms that can run efficiently on multi-core CPUs.
- Score: 3.8975567119716805
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Thanks to mature manufacturing techniques, solid-state drives (SSDs) are
highly customizable for applications today, which brings opportunities to
further improve their storage performance and resource utilization. However,
SSD efficiency is usually determined by many hardware parameters, making it
hard for developers to manually tune them and determine the optimal SSD
configuration.
In this paper, we present an automated learning-based framework, named
LearnedSSD, that utilizes both supervised and unsupervised machine learning
(ML) techniques to drive the tuning of hardware configurations for SSDs.
LearnedSSD automatically extracts the unique access patterns of a new workload
using its block I/O traces, maps the workload to previously seen workloads to
reuse the learned experiences, and recommends an optimal SSD configuration
based on the validated storage performance. LearnedSSD accelerates the
development of new SSD devices by automating the hardware parameter
configuration and reducing manual effort. We develop LearnedSSD with
simple yet effective learning algorithms that can run efficiently on multi-core
CPUs. Given a target storage workload, our evaluation shows that LearnedSSD can
always deliver an optimal SSD configuration for the target workload, and this
configuration will not hurt the performance of non-target workloads.
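
The abstract describes a three-stage pipeline: extract an access-pattern fingerprint from block I/O traces, match it against previously seen workloads, and recommend the configuration already validated for the closest match. The paper's code is not reproduced here, so the sketch below is only a minimal illustration of that pipeline; the fingerprint features, configuration fields, and all identifiers are assumptions, not LearnedSSD's actual design.

```python
import numpy as np

# Hypothetical fingerprint: summary statistics over a block I/O trace,
# where each record is (timestamp, lba, size_bytes, is_write).
def extract_fingerprint(trace):
    lbas = np.array([r[1] for r in trace], dtype=np.int64)
    sizes = np.array([r[2] for r in trace], dtype=np.float64)
    writes = np.array([r[3] for r in trace], dtype=np.float64)
    seq = float(np.mean(np.abs(np.diff(lbas)) <= 8)) if len(lbas) > 1 else 0.0
    return np.array([writes.mean(),          # write ratio
                     sizes.mean() / 4096.0,  # mean request size in 4 KiB units
                     seq])                   # sequentiality

# Assumed "learned experience": fingerprints of previously profiled workloads,
# each paired with an SSD configuration already validated for it.
known_workloads = [
    (np.array([0.9, 1.0, 0.1]),  {"channels": 16, "page_kb": 4,  "overprov_pct": 28}),
    (np.array([0.1, 64.0, 0.9]), {"channels": 8,  "page_kb": 16, "overprov_pct": 7}),
]

def recommend_config(trace):
    # Map the new workload to its nearest previously seen workload and
    # reuse that workload's validated configuration.
    fp = extract_fingerprint(trace)
    dists = [np.linalg.norm(fp - f) for f, _ in known_workloads]
    return known_workloads[int(np.argmin(dists))][1]

trace = [(0.0, 100, 4096, 1), (0.1, 104, 4096, 1), (0.2, 5000, 4096, 1)]
print(recommend_config(trace))
```

A real system would cluster many traces (the unsupervised step) and validate each recommended configuration on an SSD simulator before trusting it.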
Related papers
- VSSD: Vision Mamba with Non-Causal State Space Duality [26.96416515847115]
State Space Models (SSMs) have gained prominence in vision tasks as they offer linear computational complexity.
We introduce the Visual State Space Duality (VSSD) model, which has a non-causal form of SSD.
We conduct extensive experiments on various benchmarks including image classification, detection, and segmentation, where VSSD surpasses existing state-of-the-art SSM-based models.
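
In the state-space-duality view, a 1-D SSM layer can be computed as an attention-like mixing matrix whose lower-triangular structure encodes causality; a "non-causal format of SSD" removes that one-directional constraint, which suits images. A toy sketch of the two mixing masks (an illustration of the general idea, not the authors' implementation):

```python
import numpy as np

def ssd_mixing_matrix(a, causal=True):
    # a[t] in (0, 1]: per-step decay of a scalar SSM. In the dual
    # (matrix) form, token i receives token j weighted by the decay
    # accumulated between them.
    logs = np.cumsum(np.log(a))
    M = np.exp(logs[:, None] - logs[None, :])  # product of a over (j, i]
    if causal:
        return np.tril(M)        # sequence models: token i sees j <= i only
    return np.minimum(M, M.T)    # vision: symmetric decay, both directions

a = np.full(5, 0.9)
x = np.random.randn(5)
y_causal = ssd_mixing_matrix(a) @ x                    # sees the past only
y_noncausal = ssd_mixing_matrix(a, causal=False) @ x   # sees all positions
```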
arXiv Detail & Related papers (2024-07-26T07:16:52Z)
- HOPE for a Robust Parameterization of Long-memory State Space Models [51.66430224089725]
State-space models (SSMs) that utilize linear, time-invariant (LTI) systems are known for their effectiveness in learning long sequences.
We develop a new parameterization scheme, called HOPE, for LTI systems that utilize Markov parameters within Hankel operators.
Our new parameterization endows the SSM with non-decaying memory within a fixed time window, which is empirically corroborated by a sequential CIFAR-10 task with padded noise.
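
The objects named in this summary are standard: the Markov parameters h_k = C A^(k-1) B of an LTI system, and the Hankel matrix they populate. The sketch below builds both from textbook definitions; it illustrates what HOPE parameterizes, not the paper's actual scheme.

```python
import numpy as np

def markov_parameters(A, B, C, n):
    # Markov parameters h_k = C A^(k-1) B, k = 1..n: the impulse response.
    h, Ak = [], np.eye(A.shape[0])
    for _ in range(n):
        h.append((C @ Ak @ B).item())
        Ak = Ak @ A
    return np.array(h)

def hankel(h):
    # Hankel matrix H[i, j] = h[i + j] built from the Markov parameters.
    n = (len(h) + 1) // 2
    return np.array([[h[i + j] for j in range(n)] for i in range(n)])

A = np.array([[0.9, 0.1], [0.0, 0.8]])     # a stable 2-state LTI system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
H = hankel(markov_parameters(A, B, C, 9))  # 5x5 Hankel operator
```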
arXiv Detail & Related papers (2024-05-22T20:20:14Z)
- Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening [51.34904967046097]
We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data.
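
As a rough illustration of such a two-step, retrain-free recipe: first estimate per-parameter importance on the full training set and on the forget set, then dampen parameters that matter disproportionately to the forget set. The sketch below follows that outline with a squared-gradient importance estimate; the hyperparameter names and thresholding rule are simplified assumptions, not necessarily the paper's exact formulation.

```python
import torch

def diag_importance(model, loader, loss_fn):
    # Step 1: a diagonal, squared-gradient (Fisher-style) estimate of how
    # much each parameter matters to the given data.
    imp = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                imp[n] += p.grad.detach() ** 2
    return {n: v / max(len(loader), 1) for n, v in imp.items()}

def dampen(model, imp_full, imp_forget, alpha=10.0, lam=1.0):
    # Step 2: shrink parameters that matter far more to the forget set
    # than to the full training set (alpha and lam are assumed knobs).
    with torch.no_grad():
        for n, p in model.named_parameters():
            sel = imp_forget[n] > alpha * imp_full[n]
            beta = torch.clamp(lam * imp_full[n] / imp_forget[n].clamp_min(1e-12),
                               max=1.0)
            p[sel] *= beta[sel]
```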
arXiv Detail & Related papers (2023-08-15T11:30:45Z)
- Follow the Soldiers with Optimized Single-Shot Multibox Detection and Reinforcement Learning [0.0]
We build an autonomous system using DeepRacer that follows a specific person (for our project, a soldier) as they move in any direction.
The two main components of this project are an optimized Single-Shot Multibox Detection (SSD) object detection model and a Reinforcement Learning (RL) model.
Experimental results show that SSD Lite gives the best performance among the three techniques evaluated and exhibits a considerable boost in inference speed (2-3 times) without compromising accuracy.
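
For reference, an off-the-shelf SSD Lite detector is available in torchvision; a minimal sketch of using the pretrained model (not the authors' optimized variant) to find people in a frame:

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# Pretrained SSD Lite detector (torchvision >= 0.13 accepts weights="DEFAULT").
model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()

with torch.no_grad():
    frame = torch.rand(3, 320, 320)      # stand-in for a camera frame
    (pred,) = model([frame])             # dict with "boxes", "labels", "scores"
    keep = pred["scores"] > 0.5
    person = pred["labels"][keep] == 1   # COCO category 1 is "person"
    print(pred["boxes"][keep][person])
```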
arXiv Detail & Related papers (2023-08-02T19:08:57Z)
- Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets [55.2118691522524]
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture.
We propose a distillation-aware meta accuracy prediction model, DaSS (Distillation-aware Student Search), which can predict a given architecture's final performances on a dataset.
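
The interface of such a meta-predictor can be sketched as a small regressor over an architecture encoding and a dataset encoding; the dimensions and network below are placeholder assumptions, not DaSS's actual design:

```python
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    # Toy meta-predictor: maps (architecture encoding, dataset encoding)
    # to a predicted final accuracy in [0, 1].
    def __init__(self, arch_dim=32, data_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(arch_dim + data_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, arch_enc, data_enc):
        return self.net(torch.cat([arch_enc, data_enc], dim=-1))

predictor = AccuracyPredictor()
scores = predictor(torch.randn(8, 32), torch.randn(8, 16))  # rank 8 candidates
```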
arXiv Detail & Related papers (2023-05-26T14:00:35Z)
- U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture Search [50.33956216274694]
Optimizing resource utilization on target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that not only optimizes for task accuracy and inference latency, but also for resource utilization.
We achieve a 2.8-4x speedup for DNN inference compared to prior hardware-aware NAS methods.
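
A common way to express this kind of multi-objective differentiable NAS criterion is a weighted sum of task loss and the candidate's predicted hardware costs; the weights and cost terms below are illustrative assumptions, not the paper's formulation:

```python
import torch

def nas_objective(task_loss, latency_ms, utilization, w_lat=0.01, w_util=0.1):
    # Multi-objective differentiable NAS loss (lower is better): penalize
    # predicted latency and penalize low predicted resource utilization.
    return task_loss + w_lat * latency_ms + w_util * (1.0 - utilization)

loss = nas_objective(task_loss=torch.tensor(1.2),
                     latency_ms=torch.tensor(8.0),
                     utilization=torch.tensor(0.6))
```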
arXiv Detail & Related papers (2022-03-23T13:44:15Z)
- SSDNet: State Space Decomposition Neural Network for Time Series Forecasting [5.311025156596578]
SSDNet is a novel deep learning approach for time series forecasting.
A Transformer architecture is used to learn the temporal patterns and estimate the parameters of the state space model.
We show that SSDNet is an effective method in terms of accuracy and speed, outperforming state-of-the-art deep learning and statistical methods.
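
The state-space half of this design can be illustrated in isolation: once the SSM parameters are known, forecasts come from rolling the state equations forward. A toy scalar version with fixed (rather than Transformer-predicted) parameters:

```python
import numpy as np

def ssm_forecast(z0, a, c, horizon):
    # Roll the state space model forward: z_{t+1} = a * z_t, y_t = c * z_t.
    # In SSDNet the Transformer would supply a and c; here they are fixed.
    z, ys = z0, []
    for _ in range(horizon):
        ys.append(c * z)
        z = a * z
    return np.array(ys)

print(ssm_forecast(z0=1.0, a=0.95, c=2.0, horizon=5))
```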
arXiv Detail & Related papers (2021-12-19T20:35:16Z)
- SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud [44.009023567586446]
We present Self-Ensembling Single-Stage object Detector (SE-SSD) for accurate and efficient 3D object detection in point clouds.
Our key focus is on exploiting both soft and hard targets with our formulated constraints.
Our SE-SSD attains top performance compared with all prior published works.
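
"Soft and hard targets" is a teacher-student recipe: ground-truth labels supervise the student alongside a consistency term toward the teacher's predictions. A generic sketch of such a combined loss (not SE-SSD's exact constraints, which operate on 3D boxes):

```python
import torch
import torch.nn.functional as F

def soft_hard_loss(student_logits, teacher_logits, labels, w_soft=1.0):
    # Hard target: ground-truth labels. Soft target: the (ensembled)
    # teacher's predictions, matched with a KL consistency term.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
    return hard + w_soft * soft

loss = soft_hard_loss(torch.randn(4, 10), torch.randn(4, 10),
                      torch.randint(0, 10, (4,)))
```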
arXiv Detail & Related papers (2021-04-20T07:33:03Z)
- RecSSD: Near Data Processing for Solid State Drive Based Recommendation Inference [7.3762607002135]
RecSSD is a near-data processing based SSD memory system customized for neural recommendation inference.
It reduces end-to-end model inference latency by 2X compared to using commercial off-the-shelf (COTS) SSDs across eight industry-representative models.
arXiv Detail & Related papers (2021-01-29T21:25:34Z)
- SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training [82.35376405568975]
Deep neural networks (DNNs) come with heavy parameterization, which forces them to rely on external dynamic random-access memory (DRAM) for storage.
We present SmartDeal (SD), an algorithm framework to trade higher-cost memory storage/access for lower-cost computation.
We show that SD leads to 10.56x and 4.48x reductions in storage and training energy, respectively, with negligible accuracy loss compared to state-of-the-art training baselines.
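
The storage-for-compute trade can be illustrated with a toy factorization: store two small factors instead of a dense weight matrix and reconstruct products on the fly. SmartDeal's actual decomposition (a quantized basis with sparse coefficients) is more involved; the SVD below is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))        # a dense weight matrix

# Store two small factors instead of W and recompute W @ x at run time
# as B @ (C @ x): less memory traffic, a few extra multiply-adds.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8                                    # assumed rank budget
B, C = U[:, :r] * s[:r], Vt[:r]          # 2 * 64 * 8 values vs 64 * 64

x = rng.standard_normal(64)
y_approx = B @ (C @ x)                   # approximate, cheaper-to-store product
```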
arXiv Detail & Related papers (2021-01-04T18:54:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.