FuzzyFlow: Leveraging Dataflow To Find and Squash Program Optimization Bugs
- URL: http://arxiv.org/abs/2306.16178v1
- Date: Wed, 28 Jun 2023 13:00:17 GMT
- Title: FuzzyFlow: Leveraging Dataflow To Find and Squash Program Optimization Bugs
- Authors: Philipp Schaad, Timo Schneider, Tal Ben-Nun, Alexandru Calotoiu, Alexandros Nikolaos Ziogas, and Torsten Hoefler
- Abstract summary: FuzzyFlow is a fault localization and test case extraction framework designed to test program optimizations.
We leverage dataflow program representations to capture a fully reproducible system state and area-of-effect for optimizations.
To reduce testing time, we design an algorithm for minimizing test inputs, trading off memory for recomputation.
- Score: 92.47146416628965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current hardware landscape and application scale are driving performance engineers towards writing bespoke optimizations. Verifying such optimizations, and generating minimal failing cases, is important for robustness in the face of changing program conditions, such as inputs and sizes. However, isolating minimal test cases from existing applications and generating new configurations are often difficult due to side effects on the system state, mostly related to dataflow. This paper introduces FuzzyFlow: a fault localization and test case extraction framework designed to test program optimizations. We leverage dataflow program representations to capture a fully reproducible system state and area-of-effect for optimizations, enabling fast checking for semantic equivalence. To reduce testing time, we design an algorithm for minimizing test inputs, trading off memory for recomputation. We demonstrate FuzzyFlow on example use cases in real-world applications, where the approach provides up to 528 times faster optimization testing and debugging compared to traditional approaches.
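The abstract describes the testing strategy only at a high level. As a rough illustration of the outer loop it implies (isolate the optimization's area-of-effect, then fuzz-check the original against the transformed version on reproducible randomized inputs), here is a minimal Python sketch. The names `differential_fuzz`, `make_inputs`, and the toy `kernel` functions are hypothetical stand-ins, not FuzzyFlow's API; FuzzyFlow itself operates on dataflow program representations rather than plain Python callables.

```python
import numpy as np

def differential_fuzz(original, optimized, make_inputs, trials=100, rtol=1e-6):
    """Fuzz-check semantic equivalence of an optimized program region by
    comparing its outputs with the original's on randomized inputs."""
    for seed in range(trials):
        # Re-create the inputs for each version from the same seed, so both
        # runs see an identical, fully reproducible starting state and
        # in-place side effects cannot leak from one run into the other.
        expected = original(**make_inputs(np.random.default_rng(seed)))
        actual = optimized(**make_inputs(np.random.default_rng(seed)))
        if not np.allclose(expected, actual, rtol=rtol):
            return seed  # seed of a reproducible failing configuration
    return None

# A deliberately broken "optimization" of a small kernel, for illustration.
def kernel(x):
    return x * 2.0

def kernel_optimized(x):
    return x + x + 1e-3  # bug: not semantically equivalent

failing_seed = differential_fuzz(
    kernel, kernel_optimized,
    make_inputs=lambda rng: {"x": rng.standard_normal(16)},
)
print("first failing seed:", failing_seed)
```

Note that this sketch captures only the outer differential-testing loop: in the paper, the extracted region and its reproducible input state come from the dataflow representation itself, and the test-input minimization that trades memory for recomputation happens before any fuzzing runs.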
Related papers
- Improving Instance Optimization in Deformable Image Registration with Gradient Projection [7.6061804149819885]
Deformable image registration is inherently a multi-objective optimization problem, and its conflicting objectives often lead to poor optimization outcomes.
Deep learning methods have recently gained popularity in this domain due to their efficiency in processing large datasets.
arXiv Detail & Related papers (2024-10-21T08:27:13Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Practical Layout-Aware Analog/Mixed-Signal Design Automation with Bayesian Neural Networks [5.877728608070716]
Many learning-based algorithms require thousands of simulated data points, which is impractical for circuits that are expensive to simulate.
We propose a learning-based algorithm that can be trained on a small amount of data and is therefore scalable to tasks with expensive simulations.
arXiv Detail & Related papers (2023-11-27T19:02:43Z)
- Diffusion Generative Inverse Design [28.04683283070957]
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
Recent developments in learned graph neural networks (GNNs) can be used for accurate, efficient, differentiable estimation of simulator dynamics.
We show how denoising diffusion models can be used to solve inverse design problems efficiently and propose a particle sampling algorithm to further improve their efficiency.
arXiv Detail & Related papers (2023-09-05T08:32:07Z)
- AccFlow: Backward Accumulation for Long-Range Optical Flow [70.4251045372285]
This paper proposes a novel recurrent framework called AccFlow for long-range optical flow estimation.
We demonstrate the superiority of backward accumulation over conventional forward accumulation.
Experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation.
arXiv Detail & Related papers (2023-08-25T01:51:26Z)
- AnyFlow: Arbitrary Scale Optical Flow with Implicit Neural Representation [17.501820140334328]
We introduce AnyFlow, a robust network that estimates accurate flow from images of various resolutions.
We establish a new state-of-the-art performance of cross-dataset generalization on the KITTI dataset.
arXiv Detail & Related papers (2023-03-29T07:03:51Z)
- A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of descent.
The biggest advantage of this approach is greater exploration around the current state before deciding on a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z)
- Fast Bayesian Optimization of Needle-in-a-Haystack Problems using Zooming Memory-Based Initialization [73.96101108943986]
A Needle-in-a-Haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset.
We present a Zooming Memory-Based Initialization algorithm that builds on conventional Bayesian optimization principles.
arXiv Detail & Related papers (2022-08-26T23:57:41Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
- Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.