Pyxis: An Open-Source Performance Dataset of Sparse Accelerators
- URL: http://arxiv.org/abs/2110.04280v1
- Date: Fri, 8 Oct 2021 17:46:51 GMT
- Title: Pyxis: An Open-Source Performance Dataset of Sparse Accelerators
- Authors: Linghao Song, Yuze Chi, Jason Cong
- Abstract summary: PYXIS is a performance dataset for specialized accelerators on sparse data.
PYXIS is open-source, and we are constantly growing PYXIS with new accelerator designs and performance statistics.
- Score: 10.18035715512647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Specialized accelerators provide gains in performance and efficiency in
specific application domains. Sparse data structures and/or representations
exist in a wide range of applications. However, it is challenging to design
accelerators for sparse applications because no analytic architecture- or
performance-level models are able to fully capture the spectrum of sparse
data. Accelerator researchers therefore rely on real execution to get precise
feedback for their designs. In this work, we present PYXIS, a performance dataset for
specialized accelerators on sparse data. PYXIS collects accelerator designs and
real execution performance statistics. Currently, there are 73.8K instances in
PYXIS. PYXIS is open-source, and we are constantly growing it with new
accelerator designs and performance statistics. PYXIS can benefit researchers
in the fields of accelerators, architecture, performance, algorithms, and many
related topics.
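Each instance presumably pairs an accelerator design point with measured execution statistics on a sparse input. As a rough illustration of how such a dataset might be consumed (the column names, matrix names, and values below are assumptions for illustration, not the actual PYXIS schema), one could load and query it with pandas:

```python
import pandas as pd

# Hypothetical rows: each pairs an accelerator design point with the measured
# performance of one run on a sparse input. Column names and values are
# illustrative assumptions, not the actual PYXIS schema.
df = pd.DataFrame([
    {"matrix": "web-Google", "num_pes": 8,  "buffer_kb": 256, "runtime_ms": 4.1},
    {"matrix": "web-Google", "num_pes": 16, "buffer_kb": 512, "runtime_ms": 2.7},
    {"matrix": "amazon0302", "num_pes": 16, "buffer_kb": 512, "runtime_ms": 1.9},
])

# Example query: for one sparse matrix, which design points ran fastest?
best = (df[df["matrix"] == "web-Google"]
        .sort_values("runtime_ms")
        .head(5))
print(best[["num_pes", "buffer_kb", "runtime_ms"]])
```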
Related papers
- ZeroLM: Data-Free Transformer Architecture Search for Language Models [54.83882149157548]
Current automated proxy discovery approaches suffer from extended search times, susceptibility to data overfitting, and structural complexity.
This paper introduces a novel zero-cost proxy methodology that quantifies model capacity through efficient weight statistics.
Our evaluation demonstrates the superiority of this approach, achieving a Spearman's rho of 0.76 and Kendall's tau of 0.53 on the FlexiBERT benchmark.
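Spearman's rho and Kendall's tau measure how well the proxy's ranking of candidate architectures agrees with their measured accuracies; a minimal sketch of computing both with SciPy (the scores below are made-up stand-ins, not ZeroLM's actual weight-statistics proxy):

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Toy example: proxy scores vs. measured accuracies for a set of candidate
# architectures. All values are fabricated for illustration only.
proxy_scores  = np.array([0.12, 0.34, 0.29, 0.51, 0.08, 0.45])
true_accuracy = np.array([71.2, 74.8, 73.9, 76.1, 70.5, 75.0])

rho, _ = spearmanr(proxy_scores, true_accuracy)
tau, _ = kendalltau(proxy_scores, true_accuracy)
print(f"Spearman's rho = {rho:.2f}, Kendall's tau = {tau:.2f}")
```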
arXiv Detail & Related papers (2025-03-24T13:11:22Z)
- Fake Runs, Real Fixes -- Analyzing xPU Performance Through Simulation [4.573673188291683]
We present xPU-Shark, a fine-grained methodology for analyzing ML models at the machine-code level.
xPU-Shark captures traces from production deployments running on accelerators and replays them in a modified microarchitecture simulator.
We optimize a common communication collective by up to 15% and reduce token generation latency by up to 4.1%.
arXiv Detail & Related papers (2025-03-18T23:15:02Z)
- Automatic Generation of Fast and Accurate Performance Models for Deep Neural Network Accelerators [33.18173790144853]
We present an automated generation approach for fast performance models that accurately estimate the latency of Deep Neural Networks (DNNs).
We modeled representative DNN accelerators such as Gemmini, UltraTrail, a Plasticine-derived architecture, and a parameterizable systolic array.
We evaluate only 154 loop kernel iterations to estimate the performance of 4.19 billion instructions, achieving a significant speedup.
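The underlying idea suggested by this summary is to evaluate only a few representative loop-kernel iterations and extrapolate total latency from the known trip counts; a simplified sketch of that extrapolation (kernel names, cycle counts, and the clock rate are illustrative assumptions, not values from the paper):

```python
# Simplified extrapolation: estimate total latency from a handful of
# evaluated loop-kernel iterations and the known trip counts.
# All numbers below are illustrative, not taken from the paper.
kernels = [
    # cycles measured for one iteration, total iterations in the full run
    {"name": "conv_inner", "cycles_per_iter": 1_250, "iterations": 2_000_000},
    {"name": "dense_mac",  "cycles_per_iter":   480, "iterations":   750_000},
]

clock_hz = 500e6  # assumed 500 MHz accelerator clock

total_cycles = sum(k["cycles_per_iter"] * k["iterations"] for k in kernels)
print(f"estimated latency: {total_cycles / clock_hz * 1e3:.2f} ms")
```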
arXiv Detail & Related papers (2024-09-13T07:27:55Z)
- HASS: Hardware-Aware Sparsity Search for Dataflow DNN Accelerator [47.66463010685586]
We propose a novel approach to exploit unstructured weight and activation sparsity for dataflow accelerators, using software and hardware co-optimization.
We achieve an efficiency improvement ranging from 1.3x to 4.2x compared to existing sparse designs.
arXiv Detail & Related papers (2024-06-05T09:25:18Z)
- Efflex: Efficient and Flexible Pipeline for Spatio-Temporal Trajectory Graph Modeling and Representation Learning [8.690298376643959]
We introduce Efflex, a comprehensive pipeline for graph modeling and representation learning of large-volume spatio-temporal trajectories.
Efflex pioneers the incorporation of a multi-scale k-nearest neighbors (KNN) algorithm with feature fusion for graph construction.
The graph construction mechanism and the high-performance lightweight GCN increase embedding extraction speed by up to 36x.
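KNN-based graph construction links each trajectory's feature vector to its nearest neighbours in feature space; a minimal single-scale sketch with scikit-learn (random stand-in features; Efflex's multi-scale selection and feature fusion are not reproduced here):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy feature vectors standing in for embedded trajectories.
features = np.random.rand(100, 16)

# Build a k-nearest-neighbors graph: each node is linked to its k closest
# neighbors in feature space. Efflex fuses multiple scales; this uses one k.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
distances, indices = nn.kneighbors(features)

# Drop the self-match in column 0 to obtain the adjacency list.
edges = [(i, j) for i, row in enumerate(indices[:, 1:]) for j in row]
print(f"{len(edges)} directed edges in the KNN graph")
```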
arXiv Detail & Related papers (2024-04-15T05:36:27Z)
- Using the Abstract Computer Architecture Description Language to Model AI Hardware Accelerators [77.89070422157178]
Manufacturers of AI-integrated products face a critical challenge: selecting an accelerator that aligns with their product's performance requirements.
The Abstract Computer Architecture Description Language (ACADL) is a concise formalization of computer architecture block diagrams.
In this paper, we demonstrate how to use the ACADL to model AI hardware accelerators, use their ACADL description to map DNNs onto them, and explain the timing simulation semantics to gather performance results.
arXiv Detail & Related papers (2024-01-30T19:27:16Z)
- NumS: Scalable Array Programming for the Cloud [82.827921577004]
We present NumS, an array programming library which optimizes NumPy-like expressions on task-based distributed systems.
This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS).
We show that LSHS enhances performance on Ray by decreasing network load by a factor of 2x, requiring 4x less memory, and reducing execution time by 10x on the logistic regression problem.
arXiv Detail & Related papers (2022-06-28T20:13:40Z)
- Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling [10.610523739702971]
This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space.
Based on the proposed taxonomy, it then introduces Sparseloop, the first fast, accurate, and flexible analytical modeling framework.
Sparseloop comprehends a large set of architecture specifications, including various dataflows and sparse acceleration features.
arXiv Detail & Related papers (2022-05-12T01:28:03Z)
- Data-Driven Offline Optimization For Architecting Hardware Accelerators [89.68870139177785]
We develop a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME.
PRIME improves performance upon state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively.
In addition, PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.
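Data-driven offline optimization of this kind generally fits a surrogate model to logged (configuration, performance) pairs and then searches candidate configurations against the surrogate instead of the simulator; a generic sketch under that reading (synthetic data, a plain random-forest surrogate, and none of PRIME's actual conservatism terms):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Logged data: accelerator configuration vectors and their measured latencies.
# Both are synthetic stand-ins fabricated for illustration.
rng = np.random.default_rng(0)
configs = rng.integers(1, 16, size=(500, 4)).astype(float)  # e.g. PEs, buffer sizes...
latency = configs[:, 0] * 2.0 + configs[:, 1] * 0.5 + rng.normal(0, 1, 500)

# Fit a surrogate once on the offline log.
surrogate = RandomForestRegressor(n_estimators=100).fit(configs, latency)

# Search candidate configurations against the surrogate, no simulator needed.
candidates = rng.integers(1, 16, size=(10_000, 4)).astype(float)
best = candidates[np.argmin(surrogate.predict(candidates))]
print("best predicted configuration:", best)
```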
arXiv Detail & Related papers (2021-10-20T17:06:09Z)
- Union: A Unified HW-SW Co-Design Ecosystem in MLIR for Evaluating Tensor Operations on Spatial Accelerators [4.055002321981825]
We present a HW-SW co-design ecosystem for spatial accelerators called Union.
Our framework allows exploring different algorithms and their mappings on several accelerator cost models.
We demonstrate the value of Union for the community with several case studies.
arXiv Detail & Related papers (2021-09-15T16:42:18Z)
- Evaluating Spatial Accelerator Architectures with Tiled Matrix-Matrix Multiplication [4.878665155352402]
We develop a framework that finds optimized mappings for a tiled GEMM for a given spatial accelerator and workload combination.
Our evaluations over five spatial accelerators demonstrate that the tiled GEMM mappings systematically generated by our framework achieve high performance.
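A tiled GEMM partitions the operand matrices into blocks sized for an accelerator's local buffers; a plain NumPy sketch of the blocked loop nest (the framework's actual mapping search over tile sizes and loop orders is not shown):

```python
import numpy as np

def tiled_gemm(A, B, tile=32):
    """Blocked matrix multiply: C = A @ B computed tile by tile."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                # Each tile-sized block is what a spatial mapping would place
                # into on-chip buffers / the PE array.
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

A, B = np.random.rand(96, 64), np.random.rand(64, 128)
assert np.allclose(tiled_gemm(A, B), A @ B)
```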
arXiv Detail & Related papers (2021-06-19T13:53:58Z)
- Providing Meaningful Data Summarizations Using Exemplar-based Clustering in Industry 4.0 [67.80123919697971]
We show that our GPU implementation provides speedups of up to 72x using single-precision and up to 452x using half-precision compared to conventional CPU algorithms.
We apply our algorithm to real-world data from injection molding manufacturing processes and discuss how the resulting summaries help steer this specific process to cut costs and reduce the manufacturing of defective parts.
arXiv Detail & Related papers (2021-05-25T15:55:14Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
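The loop described above can be sketched as maintaining one categorical distribution per architectural decision, sampling candidate architectures, updating the distributions, and periodically pruning the lowest-probability options; a toy sketch under that reading (random scores stand in for DDPNAS's training-based evaluation, and the update rule is a deliberate simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

# One categorical distribution per architectural decision (e.g. op choice per layer).
num_decisions, num_options = 4, 6
probs = np.full((num_decisions, num_options), 1.0 / num_options)
alive = np.ones((num_decisions, num_options), dtype=bool)

for epoch in range(10):
    # Sample architectures from the joint categorical distribution.
    archs = [np.array([rng.choice(num_options, p=p) for p in probs]) for _ in range(8)]
    # Stand-in scores; in DDPNAS these come from training/evaluating the samples.
    scores = rng.random(len(archs))

    # Nudge each distribution toward options used by higher-scoring samples.
    for arch, s in zip(archs, scores):
        for d, o in enumerate(arch):
            probs[d, o] += 0.1 * s
    probs *= alive
    probs /= probs.sum(axis=1, keepdims=True)

    # Every few epochs, prune the least likely surviving option of each decision.
    if epoch % 3 == 2 and alive.sum(axis=1).min() > 2:
        for d in range(num_decisions):
            live_idx = np.flatnonzero(alive[d])
            alive[d, live_idx[np.argmin(probs[d, live_idx])]] = False
        probs *= alive
        probs /= probs.sum(axis=1, keepdims=True)

print("surviving options per decision:", alive.sum(axis=1))
```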
arXiv Detail & Related papers (2019-05-28T06:35:52Z)