HPTMT Parallel Operators for High Performance Data Science & Data
Engineering
- URL: http://arxiv.org/abs/2108.06001v1
- Date: Fri, 13 Aug 2021 00:05:43 GMT
- Title: HPTMT Parallel Operators for High Performance Data Science & Data
Engineering
- Authors: Vibhatha Abeykoon, Supun Kamburugamuve, Chathura Widanage, Niranda
Perera, Ahmet Uyar, Thejaka Amila Kanewala, Gregor von Laszewski, and
Geoffrey Fox
- Abstract summary: HPTMT architecture identifies a set of data structures, operators, and an execution model for creating rich data applications.
This paper elaborates and illustrates this architecture using an end-to-end application with deep learning and data engineering parts working together.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-intensive applications are becoming commonplace in all science
disciplines. They are comprised of a rich set of sub-domains such as data
engineering, deep learning, and machine learning. These applications are built
around efficient data abstractions and operators that suit the applications of
different domains. Often, the lack of a clear definition of data structures and
operators in the field has led to implementations that do not work well
together. The HPTMT architecture that we proposed recently identifies a set of
data structures, operators, and an execution model for creating rich data
applications that link all aspects of data engineering and data science
together efficiently. This paper elaborates and illustrates this architecture
using an end-to-end application with deep learning and data engineering parts
working together.
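To make the idea concrete, below is a minimal, single-process sketch of the kind of end-to-end workflow the abstract describes: relational-style data engineering operators (join, filter) prepare a table that is then handed to a small deep learning training stage. pandas and PyTorch are used here purely as stand-in assumptions for illustration; the paper's own operators and execution model target distributed, high-performance settings and are not reproduced here.

```python
# Illustrative sketch only: a toy, single-node stand-in for an end-to-end
# workflow in which data engineering operators feed a deep learning stage.
# pandas and PyTorch are assumptions for illustration, not the paper's
# actual parallel operators or execution model.
import numpy as np
import pandas as pd
import torch
from torch import nn

# --- Data engineering stage: relational-style operators on tables ---
features = pd.DataFrame({
    "id": np.arange(1000),
    "x1": np.random.rand(1000),
    "x2": np.random.rand(1000),
})
labels = pd.DataFrame({
    "id": np.arange(1000),
    "y": np.random.randint(0, 2, size=1000),
})
table = features.merge(labels, on="id")            # join operator
table = table[table["x1"] + table["x2"] > 0.5]     # filter operator

# --- Hand-off: tables become tensors for the deep learning stage ---
X = torch.tensor(table[["x1", "x2"]].to_numpy(), dtype=torch.float32)
y = torch.tensor(table["y"].to_numpy(), dtype=torch.float32).unsqueeze(1)

# --- Deep learning stage: train a small classifier on the prepared data ---
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

In the HPTMT setting described above, both stages would run as parallel operators under a common execution model rather than as sequential single-node calls.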
Related papers
- DSBench: How Far Are Data Science Agents to Becoming Data Science Experts? [58.330879414174476]
We introduce DSBench, a benchmark designed to evaluate data science agents with realistic tasks.
This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions.
Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG).
arXiv Detail & Related papers (2024-09-12T02:08:00Z)
- Towards an Integrated Performance Framework for Fire Science and Management Workflows [0.0]
This paper presents an artificial intelligence and machine learning (AI/ML) approach to performance assessment and optimization.
An associated early AI/ML framework spanning performance data collection, prediction and optimization is applied to wildfire science applications.
arXiv Detail & Related papers (2024-07-30T22:37:25Z)
- Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z)
- Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and Benchmarking [0.9944647907864256]
The imitation learning field requires expert data to train agents in a task.
Most often, this learning approach suffers from the absence of available data.
This work aims to address these issues by creating Imitation Learning datasets.
arXiv Detail & Related papers (2024-03-01T14:18:46Z)
- Architecting Data-Intensive Applications: From Data Architecture Design to Its Quality Assurance [0.0]
Data Architecture is crucial in describing, collecting, storing, processing, and analyzing data to meet business needs.
We have evaluated the DAT on more than five cases within various industry domains, demonstrating its exceptional adaptability and effectiveness.
arXiv Detail & Related papers (2024-01-22T14:58:54Z)
- DAT: Data Architecture Modeling Tool for Data-Driven Applications [1.6037279419318131]
Data Architecture (DA) focuses on describing, collecting, storing, processing, and analyzing the data to meet business needs.
We present the DAT, a model-driven engineering tool enabling data architects, data engineers, and other stakeholders to describe how data flows through the system.
arXiv Detail & Related papers (2023-06-21T11:24:59Z)
- KGLiDS: A Platform for Semantic Abstraction, Linking, and Automation of Data Science [4.120803087965204]
This paper presents a scalable platform, KGLiDS, that employs machine learning and knowledge graph technologies to abstract and capture the semantics of data science artifacts and their connections.
Based on this information, KGLiDS enables various downstream applications, such as data discovery and pipeline automation.
arXiv Detail & Related papers (2023-03-03T20:31:04Z)
- A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck [68.61583160269664]
Event argument extraction (EAE) aims to extract arguments with given roles from texts.
We propose a multi-format transfer learning model with variational information bottleneck.
We conduct extensive experiments on three benchmark datasets, and obtain new state-of-the-art performance on EAE.
arXiv Detail & Related papers (2022-08-27T13:52:01Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script language engines alone do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all these requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- CateCom: a practical data-centric approach to categorization of computational models [77.34726150561087]
We present an effort aimed at organizing the landscape of physics-based and data-driven computational models.
We apply object-oriented design concepts and outline the foundations of an open-source collaborative framework.
arXiv Detail & Related papers (2021-09-28T02:59:40Z)
- MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures [61.73533544385352]
We propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data.
As MetaPerturb is a set function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures.
arXiv Detail & Related papers (2020-06-13T02:54:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.