DataJoint 2.0: A Computational Substrate for Agentic Scientific Workflows
- URL: http://arxiv.org/abs/2602.16585v1
- Date: Wed, 18 Feb 2026 16:35:47 GMT
- Title: DataJoint 2.0: A Computational Substrate for Agentic Scientific Workflows
- Authors: Dimitri Yatsenko, Thinh T. Nguyen,
- Abstract summary: DataJoint creates a substrate for SciOps where agents can participate in scientific transformations without risking data corruption. Tables represent workflow steps, rows represent artifacts, and foreign keys prescribe execution order. The result is a single formal system where data structure, computational dependencies, and integrity constraints are all queryable, enforceable, and machine-readable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operational rigor determines whether human-agent collaboration succeeds or fails. Scientific data pipelines need the equivalent of DevOps -- SciOps -- yet common approaches fragment provenance across disconnected systems without transactional guarantees. DataJoint 2.0 addresses this gap through the relational workflow model: tables represent workflow steps, rows represent artifacts, foreign keys prescribe execution order. The schema specifies not only what data exists but how it is derived -- a single formal system where data structure, computational dependencies, and integrity constraints are all queryable, enforceable, and machine-readable. Four technical innovations extend this foundation: object-augmented schemas integrating relational metadata with scalable object storage, semantic matching using attribute lineage to prevent erroneous joins, an extensible type system for domain-specific formats, and distributed job coordination designed for composability with external orchestration. By unifying data structure, data, and computational transformations, DataJoint creates a substrate for SciOps where agents can participate in scientific workflows without risking data corruption.
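The relational workflow model in the abstract (tables as workflow steps, foreign keys as execution order) can be sketched in a few lines. The snippet below is an illustrative toy, not DataJoint's actual implementation: the table names (`Session`, `Recording`, `SpikeSorting`) and fields are hypothetical, and only the definition-string convention (each `-> Parent` line declares a foreign key) is borrowed from DataJoint's declarative style. Execution order then falls out of a topological sort of the foreign-key graph.

```python
from graphlib import TopologicalSorter

# DataJoint-style table definitions (hypothetical example pipeline):
# each "-> Parent" line is a foreign key, which doubles as a dependency edge.
definitions = {
    "Session": """
        session_id : int
        ---
        note : varchar(255)
    """,
    "Recording": """
        -> Session
        recording_id : int
        ---
        raw_path : varchar(512)
    """,
    "SpikeSorting": """
        -> Recording
        ---
        n_units : int
    """,
}

def dependency_graph(defs):
    """Extract the foreign-key graph: table name -> set of upstream tables."""
    graph = {}
    for table, definition in defs.items():
        parents = set()
        for line in definition.splitlines():
            line = line.strip()
            if line.startswith("->"):
                parents.add(line[2:].strip())
        graph[table] = parents
    return graph

# Foreign keys prescribe execution order: a topological sort of the
# foreign-key graph yields a valid order for populating the tables.
order = list(TopologicalSorter(dependency_graph(definitions)).static_order())
print(order)  # ['Session', 'Recording', 'SpikeSorting']
```

Because the schema itself carries the dependency structure, an agent (or a job coordinator) never needs a separate workflow description: the same declaration that enforces referential integrity also determines what can run when.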
Related papers
- Generative Data Transformation: From Mixed to Unified Data [57.84692191369066]
Taesar is a data-centric framework for target regeneration. It encodes cross-domain context into target sequences, enabling standard models to learn intricate dependencies without complex fusion architectures.
arXiv Detail & Related papers (2026-02-26T08:30:09Z) - AgentSkiller: Scaling Generalist Agent Intelligence through Semantically Integrated Cross-Domain Data Synthesis [30.512393568258105]
Large Language Model agents demonstrate potential in solving real-world problems via tools, yet generalist intelligence is bottlenecked by scarce high-quality, long-horizon data. We propose AgentSkiller, a fully automated framework synthesizing multi-turn interaction data across realistic, semantically linked domains.
arXiv Detail & Related papers (2026-02-10T03:21:42Z) - DataCross: A Unified Benchmark and Agent Framework for Cross-Modal Heterogeneous Data Analysis [8.171937411588015]
We introduce DataCross, a novel benchmark and collaborative agent framework for unified, insight-driven analysis. DataCrossBench comprises 200 end-to-end analysis tasks across finance, healthcare, and other domains. We also propose the DataCrossAgent framework, inspired by the "divide-and-synthesis" workflow of human analysts.
arXiv Detail & Related papers (2026-01-29T08:40:45Z) - Operon: Incremental Construction of Ragged Data via Named Dimensions [1.6212518002538465]
Existing workflow engines lack native support for tracking the shapes and dependencies inherent to ragged data. We present Operon, a Rust-based workflow engine that addresses these challenges through a novel formalism of named dimensions with explicit dependency relations.
arXiv Detail & Related papers (2025-11-20T06:16:31Z) - Kernel Representation and Similarity Measure for Incomplete Data [55.62595187178638]
Measuring similarity between incomplete data is a fundamental challenge in web mining, recommendation systems, and user behavior analysis. Traditional approaches either discard incomplete data or perform imputation as a preprocessing step, leading to information loss and biased similarity estimates. This paper presents a new similarity measure that directly computes similarity between incomplete data in kernel feature space without explicit imputation in the original space.
arXiv Detail & Related papers (2025-10-15T09:41:23Z) - Transduction is All You Need for Structured Data Workflows [8.178153196011028]
This paper introduces Agentics, a functional agentic AI framework for building structured data workflow pipelines. Designed for both research and practical applications, Agentics offers a new data-centric paradigm in which agents are embedded within data types. We present a range of structured data workflow tasks and empirical evidence demonstrating the effectiveness of this approach.
arXiv Detail & Related papers (2025-08-21T14:35:47Z) - Data Dependency-Aware Code Generation from Enhanced UML Sequence Diagrams [54.528185120850274]
We propose a novel step-by-step code generation framework named API2Dep. First, we introduce an enhanced Unified Modeling Language (UML) API diagram tailored for service-oriented architectures. Second, recognizing the critical role of data flow, we introduce a dedicated data dependency inference task.
arXiv Detail & Related papers (2025-08-05T12:28:23Z) - RelDiff: Relational Data Generative Modeling with Graph-Based Diffusion Models [83.6013616017646]
RelDiff is a novel diffusion generative model that synthesizes complete relational databases by explicitly modeling their foreign key graph structure. RelDiff consistently outperforms prior methods in producing realistic and coherent synthetic relational databases.
arXiv Detail & Related papers (2025-05-31T21:01:02Z) - Text2Schema: Filling the Gap in Designing Database Table Structures based on Natural Language [22.15408079332362]
People without a database background usually rely on file systems or tools such as Excel for data management. Database systems possess strong management capabilities, but require a high level of professional expertise from users.
arXiv Detail & Related papers (2025-03-31T09:39:19Z) - ToolACE: Winning the Points of LLM Function Calling [139.07157814653638]
ToolACE is an automatic agentic pipeline designed to generate accurate, complex, and diverse tool-learning data. We demonstrate that models trained on our synthesized data, even with only 8B parameters, achieve state-of-the-art performance on the Berkeley Function-Calling Leaderboard.
arXiv Detail & Related papers (2024-09-02T03:19:56Z) - Accessing and Interpreting OPC UA Event Traces based on Semantic Process Descriptions [69.9674326582747]
This paper proposes an approach to access a production systems' event data based on the event data's context.
The approach extracts filtered event logs from a database system by combining: 1) a semantic model of a production system's hierarchical structure, 2) a formalized process description and 3) an OPC UA information model.
arXiv Detail & Related papers (2022-07-25T15:13:44Z) - Partially-Aligned Data-to-Text Generation with Distant Supervision [69.15410325679635]
We propose a new generation task called Partially-Aligned Data-to-Text Generation (PADTG). It is more practical since it utilizes automatically annotated data for training and thus considerably expands the application domains. Our framework outperforms all baseline models and verifies the feasibility of utilizing partially-aligned data.
arXiv Detail & Related papers (2020-10-03T03:18:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.