GEqO: ML-Accelerated Semantic Equivalence Detection
- URL: http://arxiv.org/abs/2401.01280v1
- Date: Tue, 2 Jan 2024 16:37:42 GMT
- Title: GEqO: ML-Accelerated Semantic Equivalence Detection
- Authors: Brandon Haynes, Rana Alotaibi, Anna Pavlenko, Jyoti Leeka, Alekh
Jindal, Yuanyuan Tian
- Abstract summary: Reusing common computation is crucial for efficient cluster resource utilization and for reducing job execution time.
Detecting equivalence on large-scale analytics engines requires efficient, scalable, and fully automated solutions.
We propose GEqO, a portable and lightweight machine-learning-based framework for efficiently identifying semantically equivalent computations at scale.
- Score: 3.5521901508676774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large scale analytics engines have become a core dependency for modern
data-driven enterprises to derive business insights and drive actions. These
engines support a large number of analytic jobs processing huge volumes of data
on a daily basis, and workloads are often inundated with overlapping
computations across multiple jobs. Reusing common computation is crucial for
efficient cluster resource utilization and reducing job execution time.
Detecting common computation is the first and key step for reducing this
computational redundancy. However, detecting equivalence on large-scale
analytics engines requires efficient and scalable solutions that are fully
automated. In addition, to maximize computation reuse, equivalence needs to be
detected at the semantic level instead of just the syntactic level (i.e., the
ability to detect semantic equivalence of seemingly different-looking queries).
Unfortunately, existing solutions fall short of satisfying these requirements.
In this paper, we take a major step towards filling this gap by proposing
GEqO, a portable and lightweight machine-learning-based framework for
efficiently identifying semantically equivalent computations at scale. GEqO
introduces two machine-learning-based filters that quickly prune out
nonequivalent subexpressions and employs a semi-supervised learning feedback
loop to iteratively improve its model with an intelligent sampling mechanism.
Further, with its novel database-agnostic featurization method, GEqO can
transfer the learning from one workload and database to another. Our extensive
empirical evaluation shows that, on TPC-DS-like queries, GEqO yields
significant performance gains: it is up to 200x faster than automated verifiers
and finds up to 2x more equivalences than optimizer- and signature-based
equivalence detection approaches.
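To make the semantic-vs-syntactic distinction concrete, here is a minimal illustration (the schema and queries are hypothetical, not drawn from the paper's TPC-DS-like workload): pushing a filter into a derived table changes the query text but not its result, so a purely syntactic check misses the reuse opportunity.

```python
# Illustrative only: two semantically equivalent but syntactically
# different queries over a hypothetical orders table.
q1 = """
SELECT o_custkey, SUM(o_totalprice)
FROM orders
WHERE o_orderdate >= DATE '2023-01-01'
GROUP BY o_custkey
"""

q2 = """
SELECT o.o_custkey, SUM(o.o_totalprice)
FROM (SELECT * FROM orders
      WHERE o_orderdate >= DATE '2023-01-01') AS o
GROUP BY o.o_custkey
"""

# A syntactic check fails even after whitespace normalization,
# although both queries compute the same result.
syntactically_equal = " ".join(q1.split()) == " ".join(q2.split())
print(syntactically_equal)  # False
```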
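GEqO's two machine-learning filters act as a cascade in front of the expensive automated verifier. The sketch below outlines that control flow only; the filter and verifier functions (`vector_filter`, `model_filter`, `verify_equivalent`) and the threshold are hypothetical stand-ins, not GEqO's actual models or API.

```python
from typing import Callable, Iterable, Tuple

Pair = Tuple[str, str]  # a candidate pair of subexpressions

def equivalence_cascade(
    candidates: Iterable[Pair],
    vector_filter: Callable[[Pair], float],     # cheap first-stage learned filter
    model_filter: Callable[[Pair], float],      # more precise second-stage filter
    verify_equivalent: Callable[[Pair], bool],  # expensive automated verifier
    threshold: float = 0.5,
) -> list[Pair]:
    """Prune likely-nonequivalent pairs with ML filters, then verify survivors."""
    equivalences = []
    for pair in candidates:
        # Stage 1: quickly discard pairs the cheap model scores as nonequivalent.
        if vector_filter(pair) < threshold:
            continue
        # Stage 2: a more precise (but still cheaper-than-verification) model.
        if model_filter(pair) < threshold:
            continue
        # Only surviving candidates reach the costly verifier.
        if verify_equivalent(pair):
            equivalences.append(pair)
    return equivalences
```

In the paper's semi-supervised feedback loop, verifier outcomes on intelligently sampled pairs would be fed back as labels to retrain these filters; that retraining step is omitted from the sketch.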
Related papers
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization [0.6445087473595953]
Large language models (LLMs) demonstrate outstanding performance on a variety of machine learning tasks.
Deploying LLM inference, however, poses challenges due to its high compute and memory requirements.
We present Tender, an algorithm-hardware co-design solution that enables efficient deployment of LLM inference at low precision.
arXiv Detail & Related papers (2024-06-16T09:51:55Z)
- Asynchronous Local Computations in Distributed Bayesian Learning [8.516532665507835]
We propose gossip-based communication to leverage fast computations and reduce communication overhead simultaneously.
We observe faster initial convergence and improved accuracy, especially in the low-data regime.
We achieve on average 78% and over 90% classification accuracy respectively on the Gamma Telescope and mHealth data sets from the UCI ML repository.
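As a generic illustration of the gossip communication pattern (standard pairwise gossip averaging, not the paper's specific algorithm), each agent repeatedly averages its local parameters with one randomly chosen neighbor:

```python
import random
import numpy as np

def gossip_round(params: dict[int, np.ndarray],
                 neighbors: dict[int, list[int]]) -> None:
    """One gossip round: every agent averages with one random neighbor.

    `params` maps agent id -> local parameter vector; `neighbors` is the
    communication graph. Textbook pairwise gossip averaging, shown only
    to illustrate the communication pattern.
    """
    for agent, own in params.items():
        peer = random.choice(neighbors[agent])
        avg = (own + params[peer]) / 2.0  # pairwise average
        params[agent] = avg
        params[peer] = avg
```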
arXiv Detail & Related papers (2023-11-06T20:11:41Z)
- Towards Model-Size Agnostic, Compute-Free, Memorization-based Inference of Deep Learning [5.41530201129053]
This paper proposes a novel memorization-based inference (MBI) that is compute free and only requires lookups.
Specifically, our work capitalizes on the inference mechanism of the recurrent attention model (RAM).
By leveraging the low dimensionality of glimpses, our inference procedure stores key-value pairs (comprising glimpse location, patch vector, etc.) in a table.
During inference, computation is obviated by reading these key-value pairs out of the table, yielding compute-free inference by memorization.
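A minimal sketch of the lookup idea under stated assumptions: keys are quantized glimpse locations, values are precomputed responses, and the table name and nearest-key fallback are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical memorization table: quantized glimpse location -> stored
# value (e.g., a precomputed patch response). Populated offline,
# read-only at inference time.
mbi_table: dict[tuple[int, int], np.ndarray] = {}

def memorize(location: tuple[int, int], value: np.ndarray) -> None:
    """Offline phase: store a key-value pair."""
    mbi_table[location] = value

def infer(location: tuple[int, int]) -> np.ndarray:
    """Inference is a table read: no multiply-accumulate operations.

    Falls back to the nearest stored key if the exact key is missing.
    """
    if location in mbi_table:
        return mbi_table[location]
    nearest = min(
        mbi_table,
        key=lambda k: (k[0] - location[0]) ** 2 + (k[1] - location[1]) ** 2,
    )
    return mbi_table[nearest]
```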
arXiv Detail & Related papers (2023-07-14T21:01:59Z)
- Multi-Agent Reinforcement Learning for Long-Term Network Resource Allocation through Auction: a V2X Application [7.326507804995567]
We formulate offloading of computational tasks from a dynamic group of mobile agents (e.g., cars) as decentralized decision making among autonomous agents.
We design an interaction mechanism that incentivizes such agents to align private and system goals by balancing between competition and cooperation.
We propose a novel multi-agent online learning algorithm that learns with partial, delayed and noisy state information.
arXiv Detail & Related papers (2022-07-29T10:29:06Z)
- Scalable Vehicle Re-Identification via Self-Supervision [66.2562538902156]
Vehicle Re-Identification is one of the key elements in city-scale vehicle analytics systems.
Most state-of-the-art vehicle re-id solutions focus on improving accuracy on existing benchmarks and often ignore computational complexity.
We propose a simple yet effective hybrid solution empowered by self-supervised training which only uses a single network during inference time.
arXiv Detail & Related papers (2022-05-16T12:14:42Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis in massive Internet of Things (IoT) based intelligent and ubiquitous computing.
As applications and data volumes increase rapidly, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share or aggregate data centrally.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results, but in the presence of concept drift, detection or adaptation techniques must be applied to maintain predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
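For background, the standard MAML objective that the aggregate formulation distributes across agents is shown below; this is textbook notation (N tasks, inner step size \alpha, per-task loss L_i), not notation taken from this paper.

```latex
\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N}
  L_i\bigl(\theta - \alpha\,\nabla_{\theta} L_i(\theta)\bigr)
```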
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, is often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
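For background, the textbook consensus-ADMM iteration that mini-batch variants build on is sketched below (the standard form for minimizing \sum_i f_i(x) subject to x_i = z, with penalty \rho and scaled duals u_i; this is not the paper's coded or stochastic variant).

```latex
\begin{aligned}
x_i^{k+1} &= \arg\min_{x_i}\; f_i(x_i)
  + \tfrac{\rho}{2}\bigl\|x_i - z^k + u_i^k\bigr\|_2^2,\\
z^{k+1}   &= \tfrac{1}{N}\sum_{i=1}^{N}\bigl(x_i^{k+1} + u_i^k\bigr),\\
u_i^{k+1} &= u_i^k + x_i^{k+1} - z^{k+1}.
\end{aligned}
```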
arXiv Detail & Related papers (2020-10-02T10:41:59Z)