FlowUnits: Extending Dataflow for the Edge-to-Cloud Computing Continuum
- URL: http://arxiv.org/abs/2504.11400v1
- Date: Tue, 15 Apr 2025 17:14:08 GMT
- Title: FlowUnits: Extending Dataflow for the Edge-to-Cloud Computing Continuum
- Authors: Fabio Chini, Luca De Martini, Alessandro Margara, Gianpaolo Cugola
- Abstract summary: FlowUnits organizes processing operators into cohesive, independently manageable components that can be transparently replicated across different regions. Our approach maintains the simplicity of dataflow while enabling seamless integration of edge and cloud resources into unified data processing pipelines.
- Score: 41.94295877935867
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces FlowUnits, a novel programming and deployment model that extends the traditional dataflow paradigm to address the unique challenges of edge-to-cloud computing environments. While conventional dataflow systems offer significant advantages for large-scale data processing in homogeneous cloud settings, they fall short when deployed across distributed, heterogeneous infrastructures. FlowUnits addresses three critical limitations of current approaches: lack of locality awareness, insufficient resource adaptation, and absence of dynamic update mechanisms. FlowUnits organize processing operators into cohesive, independently manageable components that can be transparently replicated across different regions, efficiently allocated on nodes with appropriate hardware capabilities, and dynamically updated without disrupting ongoing computations. We implement and evaluate the FlowUnits model within Renoir, an existing dataflow system, demonstrating significant improvements in deployment flexibility and resource utilization across the computing continuum. Our approach maintains the simplicity of dataflow while enabling seamless integration of edge and cloud resources into unified data processing pipelines.
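The grouping idea at the core of the abstract can be pictured with a small sketch. This is hypothetical Python, not the actual FlowUnits or Renoir API: the class name `FlowUnit`, the `region` and `requires` placement hints, and the list-based `run` are illustrative assumptions showing how operators might be bundled into a unit that a scheduler could replicate per region or pin to capable nodes.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Set

@dataclass
class FlowUnit:
    # A named group of dataflow operators plus placement metadata.
    name: str
    region: str = "any"                              # locality hint: "edge", "cloud", ...
    requires: Set[str] = field(default_factory=set)  # hardware needs, e.g. {"gpu"}
    operators: List[Callable[[Any], Any]] = field(default_factory=list)

    def add(self, op: Callable[[Any], Any]) -> "FlowUnit":
        self.operators.append(op)
        return self

    def run(self, items: List[Any]) -> List[Any]:
        # Apply each operator in order to every item: a stand-in for
        # one replica of the unit executing its pipeline stage.
        for op in self.operators:
            items = [op(x) for x in items]
        return items

# A pre-processing unit meant to be replicated at each edge region,
# feeding an aggregation unit pinned to the cloud.
edge_unit = FlowUnit("normalize", region="edge").add(lambda x: x * 2)
cloud_unit = FlowUnit("offset", region="cloud").add(lambda x: x + 1)

result = cloud_unit.run(edge_unit.run([1, 2, 3]))
print(result)  # → [3, 5, 7]
```

Keeping the placement hints as data on the unit, rather than baked into the operators, is what would let a runtime re-deploy or update a unit without touching the rest of the pipeline.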
Related papers
- Learning to Compose for Cross-domain Agentic Workflow Generation [56.630382886594184]
We create an open-source LLM for cross-domain workflow generation. We learn a compact set of reusable workflow capabilities across diverse domains. Our 1-pass generator surpasses SOTA refinement baselines that consume 20 iterations.
arXiv Detail & Related papers (2026-02-11T18:27:22Z)
- Trajectory Stitching for Solving Inverse Problems with Flow-Based Models [68.36374645801901]
Flow-based generative models have emerged as powerful priors for solving inverse problems. We propose MS-Flow, which represents the trajectory as a sequence of intermediate latent states rather than a single initial code. We demonstrate the effectiveness of MS-Flow over existing methods on image recovery and inverse problems, including inpainting, super-resolution, and computed tomography.
arXiv Detail & Related papers (2026-02-09T11:36:41Z)
- Causify DataFlow: A Framework For High-performance Machine Learning Stream Computing [0.0]
We present DataFlow, a computational framework for building, testing, and deploying machine learning systems on unbounded time-series data. Traditional data science workflows assume finite datasets and require substantial reimplementation when moving from batch prototypes to streaming production systems. DataFlow resolves these issues through a unified execution model based on acyclic graphs with point-in-time idempotency.
arXiv Detail & Related papers (2025-12-30T04:24:04Z)
- FlowBind: Efficient Any-to-Any Generation with Bidirectional Flows [17.924626622563924]
FlowBind is an efficient framework for any-to-any generation. It learns a shared latent space capturing cross-modal information, with modality-specific invertible flows bridging this latent space to each modality. Experiments on text, image, and audio demonstrate that FlowBind attains comparable quality while requiring up to 6x fewer parameters and training 10x faster than prior methods.
arXiv Detail & Related papers (2025-12-17T13:08:18Z)
- PowerGrow: Feasible Co-Growth of Structures and Dynamics for Power Grid Synthesis [75.14189839277928]
We present PowerGrow, a co-generative framework that significantly reduces computational overhead while maintaining operational validity. Experiments across benchmark settings show that PowerGrow outperforms prior diffusion models in fidelity and diversity. This demonstrates its ability to generate operationally valid and realistic power grid scenarios.
arXiv Detail & Related papers (2025-08-29T01:47:27Z)
- Learning Normal Flow Directly From Event Neighborhoods [18.765370814655626]
We propose a novel supervised point-based method for normal flow estimation.
Using a local point cloud encoder, our method directly estimates per-event normal flow from raw events.
Our method achieves better and more consistent performance than state-of-the-art methods when transferred across different datasets.
arXiv Detail & Related papers (2024-12-15T19:09:45Z)
- Recursive Function Definitions in Static Dataflow Graphs and their Implementation in TensorFlow [0.8368470115534696]
We propose an efficient technique for supporting function definitions in dataflow-based systems.
We make heavy use of the idea of tagging, which was one of the cornerstones of dataflow systems since their inception.
arXiv Detail & Related papers (2024-10-26T16:40:24Z)
- SeBS-Flow: Benchmarking Serverless Cloud Function Workflows [51.4200085836966]
We propose SeBS-Flow, the first serverless workflow benchmarking suite. SeBS-Flow includes six real-world application benchmarks and four microbenchmarks representing different computational patterns. We conduct comprehensive evaluations on three major cloud platforms, assessing performance, cost, scalability, and runtime deviations.
arXiv Detail & Related papers (2024-10-04T14:52:18Z)
- FlowMind: Automatic Workflow Generation with LLMs [12.848562107014093]
This paper introduces a novel approach, FlowMind, leveraging the capabilities of Large Language Models (LLMs).
We propose a generic prompt recipe for a lecture that helps ground LLM reasoning with reliable Application Programming Interfaces (APIs).
We also introduce NCEN-QA, a new dataset in finance for benchmarking question-answering tasks from N-CEN reports on funds.
arXiv Detail & Related papers (2024-03-17T00:36:37Z)
- FlowHON: Representing Flow Fields Using Higher-Order Networks [4.761836945285552]
FlowHON is an approach to construct higher-order networks (HONs) from flow fields.
FlowHON captures the inherent higher-order dependencies in flow fields as nodes and estimates the transitions among them as edges.
arXiv Detail & Related papers (2023-12-04T11:50:25Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improve the sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, with a substantial speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- Kernelised Normalising Flows [10.31916245015817]
Normalising Flows are non-parametric statistical models characterised by their dual capabilities of density estimation and generation.
We present Ferumal flow, a novel kernelised normalising flow paradigm that integrates kernels into the framework.
arXiv Detail & Related papers (2023-07-27T13:18:52Z)
- TensAIR: Real-Time Training of Neural Networks from Data-streams [1.409180142531996]
This paper presents TensAIR, the first online learning (OL) system for training ANNs in real time.
TensAIR achieves remarkable performance and scalability by using a decentralized and asynchronous architecture to train ANN models.
We empirically demonstrate that TensAIR achieves a nearly linear scale-out performance in terms of (1) the number of worker nodes deployed in the network, and (2) the throughput at which the data batches arrive.
arXiv Detail & Related papers (2022-11-18T15:11:44Z) - Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling seamless integration at any position of a flow-based model.
arXiv Detail & Related papers (2021-06-07T20:43:04Z)
- Go with the Flows: Mixtures of Normalizing Flows for Point Cloud Generation and Reconstruction [98.38585659305325]
Normalizing flows (NFs) have demonstrated state-of-the-art performance on modeling 3D point clouds.
This work enhances their representational power by applying mixtures of NFs to point clouds.
arXiv Detail & Related papers (2021-06-06T14:25:45Z)
- Coresets via Bilevel Optimization for Continual Learning and Streaming [86.67190358712064]
We propose a novel coreset construction via cardinality-constrained bilevel optimization.
We show how our framework can efficiently generate coresets for deep neural networks, and demonstrate its empirical benefits in continual learning and in streaming settings.
arXiv Detail & Related papers (2020-06-06T14:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.