Continual Learning Approach for Improving the Data and Computation
Mapping in Near-Memory Processing System
- URL: http://arxiv.org/abs/2104.13671v1
- Date: Wed, 28 Apr 2021 09:50:35 GMT
- Title: Continual Learning Approach for Improving the Data and Computation
Mapping in Near-Memory Processing System
- Authors: Pritam Majumder, Jiayi Huang, Sungkeun Kim, Abdullah Muzahid, Dylan
Siegers, Chia-Che Tsai, and Eun Jung Kim
- Abstract summary: We propose an artificially intelligent memory mapping scheme, AIMM, that optimizes data placement and resource utilization through page and computation remapping.
AIMM uses a neural network to achieve a near-optimal mapping during execution, trained using a reinforcement learning algorithm.
Our experimental evaluation shows that AIMM improves the baseline NMP performance in single- and multiple-program scenarios by up to 70% and 50%, respectively.
- Score: 3.202860612193139
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The resurgence of near-memory processing (NMP) with the advent of big data
has shifted the computation paradigm from processor-centric to memory-centric
computing. To meet the bandwidth and capacity demands of memory-centric
computing, 3D memory has been adopted to form a scalable memory-cube network.
Along with NMP and memory system development, the mapping for placing data and
guiding computation in the memory-cube network has become crucial in driving
the performance improvement in NMP. However, it is very challenging to design a
universal optimal mapping for all applications due to unique application
behavior and intractable decision space. In this paper, we propose an
artificially intelligent memory mapping scheme, AIMM, that optimizes data
placement and resource utilization through page and computation remapping. Our
proposed technique involves continuously evaluating and learning the impact of
mapping decisions on system performance for any application. AIMM uses a neural
network to achieve a near-optimal mapping during execution, trained using a
reinforcement learning algorithm that is known to be effective for exploring a
vast design space. We also provide a detailed AIMM hardware design that can be
adopted as a plugin module for various NMP systems. Our experimental evaluation
shows that AIMM improves the baseline NMP performance in single- and multiple-program
scenarios by up to 70% and 50%, respectively.
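To make the mechanism concrete, below is a minimal sketch of the kind of RL-driven page remapping the abstract describes: a small softmax policy scores candidate memory cubes for a hot page and is updated from a latency-derived reward. The feature set, the REINFORCE-style update, and the toy reward are illustrative assumptions, not the paper's actual AIMM design.

```python
# Hypothetical sketch of RL-driven page remapping: a one-layer softmax policy
# picks a target memory cube for a page and is updated with a REINFORCE-style
# rule. Features, reward, and the update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 4   # e.g., page heat, cube load, link congestion, hop distance
N_CUBES = 16     # candidate target cubes in the memory-cube network

W = rng.normal(scale=0.1, size=(N_FEATURES, N_CUBES))  # one-layer policy

def choose_cube(features):
    """Sample a target cube for the page from the current policy."""
    logits = features @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(N_CUBES, p=probs), probs

def reinforce_update(features, action, probs, reward, lr=0.01):
    """Policy-gradient step: push probability toward rewarded remappings."""
    global W
    grad_logits = -probs
    grad_logits[action] += 1.0  # gradient of log pi(action) w.r.t. logits
    W += lr * reward * np.outer(features, grad_logits)

for step in range(1000):
    state = rng.random(N_FEATURES)               # stand-in for hardware counters
    action, probs = choose_cube(state)
    reward = -abs(action - int(state.argmax()))  # placeholder reward signal
    reinforce_update(state, action, probs, reward)
```

In the paper's setting, the state would come from hardware counters and the reward from measured system performance; the proposed plugin module would evaluate such a policy continuously during execution.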
Related papers
- CHIME: Energy-Efficient STT-RAM-based Concurrent Hierarchical In-Memory Processing [1.5566524830295307]
This paper introduces a novel PiC/PiM architecture, Concurrent Hierarchical In-Memory Processing (CHIME).
CHIME strategically incorporates heterogeneous compute units across multiple levels of the memory hierarchy; a placement sketch follows this entry.
Experiments reveal that, compared to state-of-the-art bit-line computing approaches, CHIME achieves speedup and energy savings of 57.95% and 78.23%, respectively.
arXiv Detail & Related papers (2024-07-29T01:17:54Z)
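As a rough illustration of the hierarchical-placement idea (not CHIME's actual mechanism), the sketch below dispatches an operation to the nearest memory-hierarchy level whose compute unit can execute it; the level names, capability table, and placement rule are assumptions.

```python
# Illustrative-only sketch of hierarchical compute placement: run each op at
# the nearest capable level at or above where its operands reside.
from dataclasses import dataclass

LEVELS = ["bitline", "subarray", "bank", "llc", "core"]  # near-memory -> core

@dataclass
class Op:
    kind: str           # e.g., "bitwise", "reduce", "matmul"
    operand_level: int  # index into LEVELS where operands currently live

# Assumed per-level compute capabilities, for illustration only.
CAPABLE = {
    "bitline":  {"bitwise"},
    "subarray": {"bitwise", "reduce"},
    "bank":     {"bitwise", "reduce"},
    "llc":      {"bitwise", "reduce", "matmul"},
    "core":     {"bitwise", "reduce", "matmul"},
}

def place(op: Op) -> str:
    """Pick the nearest capable level, falling back toward the core."""
    for i in range(op.operand_level, len(LEVELS)):
        if op.kind in CAPABLE[LEVELS[i]]:
            return LEVELS[i]
    return "core"

print(place(Op("reduce", operand_level=0)))  # -> "subarray"
```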
- A parallel evolutionary algorithm to optimize dynamic memory managers in embedded systems [4.651702738999686]
We present a novel parallel evolutionary algorithm for the optimization of dynamic memory managers (DMMs) in embedded systems; a parallel-EA skeleton follows this entry.
Our framework reaches a speed-up of 86.40x compared with other state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-28T15:47:25Z)
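A minimal parallel-EA skeleton in the spirit of this entry: candidate DMM configurations are evaluated concurrently and evolved by selection and mutation. The configuration encoding and the fitness function are placeholders, not the paper's simulator.

```python
# Minimal parallel evolutionary algorithm: fitness of candidate configurations
# is evaluated in a process pool; selection and mutation evolve the population.
import random
from concurrent.futures import ProcessPoolExecutor

def fitness(config):
    """Placeholder cost: lower is better (e.g., allocator time + footprint)."""
    return sum((g - 0.5) ** 2 for g in config)

def mutate(config, rate=0.2):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in config]

def evolve(pop_size=32, genes=8, generations=20):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    with ProcessPoolExecutor() as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))   # parallel evaluation
            ranked = [c for _, c in sorted(zip(scores, pop))]
            parents = ranked[: pop_size // 2]       # truncation selection
            pop = parents + [mutate(p) for p in parents]
    return ranked[0]

if __name__ == "__main__":
    best = evolve()
    print("best config:", [round(g, 2) for g in best])
```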
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- PIM-Opt: Demystifying Distributed Optimization Algorithms on a Real-World Processing-In-Memory System [21.09681871279162]
Modern Machine Learning (ML) training on large-scale datasets is a time-consuming workload.
It relies on Stochastic Gradient Descent (SGD) due to its effectiveness, simplicity, and generalization performance; a minimal SGD example follows this entry.
Processor-centric architectures suffer from low performance and high energy consumption while executing ML training workloads.
Processing-In-Memory (PIM) is a promising solution to alleviate the data movement bottleneck.
arXiv Detail & Related papers (2024-04-10T17:00:04Z)
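For context, here is textbook minibatch SGD on a toy least-squares problem, the algorithm this entry centers on; it is not the paper's PIM implementation.

```python
# Textbook minibatch SGD on least squares: sample a minibatch, compute the
# gradient of the mean squared error, and take a step against it.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1024, 8))              # toy dataset
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=1024)

w = np.zeros(8)
lr, batch = 0.1, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)   # sample a minibatch
    xb, yb = X[idx], y[idx]
    grad = 2.0 / batch * xb.T @ (xb @ w - yb)   # d/dw of mean (x.w - y)^2
    w -= lr * grad                              # SGD update
print("parameter error:", np.linalg.norm(w - w_true))
```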
- Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark [166.40879020706151]
This paper proposes a shift towards BP-free, zeroth-order (ZO) optimization as a solution for reducing memory costs during fine-tuning.
Unlike traditional ZO-SGD methods, our work expands the exploration to a wider array of ZO optimization techniques.
Our study unveils previously overlooked optimization principles, highlighting the importance of task alignment, the role of the forward gradient method, and the balance between algorithm complexity and fine-tuning performance (a ZO gradient sketch follows this entry).
arXiv Detail & Related papers (2024-02-18T14:08:48Z)
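A minimal sketch of the two-point zeroth-order gradient estimate that underlies BP-free ZO-SGD: two forward evaluations along a random direction replace backpropagation. The toy quadratic loss stands in for an LLM forward pass.

```python
# Two-point zeroth-order (ZO) gradient estimation: perturb parameters along a
# random direction and use two forward-pass losses to estimate the slope.
import numpy as np

rng = np.random.default_rng(2)

def loss(w):
    return float(np.sum((w - 1.0) ** 2))  # toy stand-in for a model's loss

w = np.zeros(16)
lr, mu = 0.05, 1e-3                       # step size, smoothing radius
for step in range(2000):
    u = rng.normal(size=w.shape)          # random search direction
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu)  # directional slope
    w -= lr * g * u                       # ZO-SGD update, no backprop needed
print("final loss:", loss(w))
```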
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs); a structural sketch follows this entry.
MEMTL outperforms benchmark methods in both inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
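A structural sketch of a shared backbone feeding multiple prediction heads whose outputs are ensembled; layer sizes and the simple averaging rule are assumptions, not MEMTL's actual design.

```python
# Shared-backbone / multi-head structure: one feature extractor feeds several
# prediction heads, and a simple ensemble averages their outputs.
import numpy as np

rng = np.random.default_rng(3)
D_IN, D_HID, D_OUT, N_HEADS = 10, 32, 4, 3

W_backbone = rng.normal(scale=0.1, size=(D_IN, D_HID))   # shared parameters
W_heads = [rng.normal(scale=0.1, size=(D_HID, D_OUT))
           for _ in range(N_HEADS)]                       # per-head parameters

def forward(x):
    h = np.maximum(0.0, x @ W_backbone)   # shared ReLU features
    outs = [h @ Wh for Wh in W_heads]     # each PH predicts independently
    return np.mean(outs, axis=0)          # average the heads as an ensemble

x = rng.normal(size=D_IN)
print("ensembled prediction:", forward(x))
```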
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network-based regularization (an unrolled-iteration sketch follows this entry).
We propose Greedy LEarning for Accelerated MRI reconstruction (GLEAM), an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
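To illustrate the unrolled structure this entry describes, the sketch below alternates a physics-based data-consistency gradient step with a regularizer; a soft-threshold stands in for the learned network, and the toy matrix stands in for undersampled MRI operators.

```python
# Unrolled reconstruction sketch: each iteration takes a data-consistency
# gradient step, then applies a regularizer (soft-threshold stand-in here).
import numpy as np

rng = np.random.default_rng(4)
n, m = 64, 32
A = rng.normal(size=(m, n)) / np.sqrt(n)   # toy measurement operator
x_true = np.zeros(n)
x_true[:5] = rng.normal(size=5)            # sparse ground truth
y = A @ x_true                             # undersampled measurements

x = np.zeros(n)
step, thresh = 0.3, 0.01
for it in range(10):                       # ten "unrolled" iterations
    x = x - step * A.T @ (A @ x - y)       # physics-based consistency step
    x = np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)  # regularizer stand-in
print("reconstruction error:", np.linalg.norm(x - x_true))
```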
- MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning [72.80896338009579]
We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs.
We propose a generic patch-by-patch inference scheduling, which significantly cuts down the peak memory (a sketch follows this entry).
We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2.
arXiv Detail & Related papers (2021-10-28T17:58:45Z)
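A minimal sketch of patch-by-patch inference: a convolution is computed one small input tile (plus halo) at a time, so only a tile-sized activation is live instead of the whole feature map. Shapes and patch size are illustrative.

```python
# Patch-by-patch convolution: process overlapping input tiles one at a time
# so peak activation memory is one tile, not the full feature map.
import numpy as np

rng = np.random.default_rng(5)
H = W = 64
K = 3                          # 3x3 kernel, so each tile needs a 1-px halo
img = rng.normal(size=(H, W))
kernel = rng.normal(size=(K, K))

def conv_valid(x, k):
    """Naive 'valid' 2D correlation, for illustration only."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

PATCH = 16
pad = K // 2
padded = np.pad(img, pad)
out = np.zeros((H, W))
for i in range(0, H, PATCH):              # only one small input tile
    for j in range(0, W, PATCH):          # is live at a time
        tile = padded[i:i + PATCH + 2 * pad, j:j + PATCH + 2 * pad]
        out[i:i + PATCH, j:j + PATCH] = conv_valid(tile, kernel)

assert np.allclose(out, conv_valid(padded, kernel))  # matches full-map conv
```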
- PIM-DRAM: Accelerating Machine Learning Workloads using Processing in Memory based on DRAM Technology [2.6168147530506958]
We propose a processing-in-memory (PIM) multiplication primitive to accelerate matrix-vector operations in ML workloads; a partitioning sketch follows this entry.
We show that the proposed architecture, mapping, and data flow can provide up to 23x and 6.5x benefits over a GPU.
arXiv Detail & Related papers (2021-05-08T16:39:24Z)
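A conceptual sketch of mapping matrix-vector multiplication onto per-bank compute: the matrix is partitioned row-wise across banks, each bank forms its partial product near the data, and only small results leave memory. The bank count and partitioning are assumptions.

```python
# Row-wise partitioning of a matrix-vector product across DRAM banks: each
# bank multiplies its slice near the data; only partial results move.
import numpy as np

rng = np.random.default_rng(6)
N_BANKS = 8
M, N = 64, 32
A = rng.normal(size=(M, N))
x = rng.normal(size=N)

rows_per_bank = M // N_BANKS
partials = []
for b in range(N_BANKS):                       # conceptually concurrent
    A_bank = A[b * rows_per_bank:(b + 1) * rows_per_bank]
    partials.append(A_bank @ x)                # computed "inside" bank b
y = np.concatenate(partials)

assert np.allclose(y, A @ x)                   # matches a monolithic matvec
```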
- Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks [5.050213408539571]
We propose a hybrid in-memory computing (HIC) architecture for the training of deep neural networks (DNNs) on hardware accelerators.
We show that HIC-based training yields an inference model about 50% smaller while achieving accuracy comparable to the baseline.
Our simulations indicate that HIC-based training naturally ensures that the number of write-erase cycles seen by the devices is a small fraction of the endurance limit of PCM.
arXiv Detail & Related papers (2021-02-10T05:26:27Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces; an evolutionary-search sketch follows this entry.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve a 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
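A minimal sketch of evolutionary search over memory placements, the outer loop of methods like EGRL: candidate tensor-to-level placements are mutated and selected by reward. The cost model and encoding are placeholders, and EGRL's graph-neural-network policy guidance is omitted.

```python
# Evolutionary search over memory placements: evolve tensor -> level mappings
# under a toy capacity-aware reward; elitism plus random mutation.
import random

random.seed(0)
LEVELS = ["sram", "dram"]       # assumed two-level placement choice
SIZES = [4, 16, 2, 8, 32, 1]    # tensor sizes; fast memory is limited
SRAM_CAPACITY = 24

def reward(placement):
    """Toy proxy: keep as much data as possible in SRAM within capacity."""
    used = sum(s for s, lvl in zip(SIZES, placement) if lvl == "sram")
    return -1000.0 if used > SRAM_CAPACITY else float(used)

def mutate(placement):
    p = list(placement)
    p[random.randrange(len(p))] = random.choice(LEVELS)
    return p

pop = [[random.choice(LEVELS) for _ in SIZES] for _ in range(16)]
for gen in range(50):
    pop.sort(key=reward, reverse=True)
    pop = pop[:8] + [mutate(p) for p in pop[:8]]  # elitism + mutation
pop.sort(key=reward, reverse=True)
print("best placement:", pop[0], "reward:", reward(pop[0]))
```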