Learning Context-Aware Service Representation for Service Recommendation in Workflow Composition
- URL: http://arxiv.org/abs/2205.11771v1
- Date: Tue, 24 May 2022 04:18:01 GMT
- Title: Learning Context-Aware Service Representation for Service Recommendation in Workflow Composition
- Authors: Xihao Xie, Jia Zhang, Rahul Ramachandran, Tsengdar J. Lee, Seungwon Lee
- Abstract summary: This paper proposes a novel NLP-inspired approach to recommending services throughout a workflow development process.
A workflow composition process is formalized as a step-wise, context-aware service generation procedure.
Service embeddings are then learned by applying a deep learning model from the NLP field.
- Score: 6.17189383632496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As ever more software services are published on the Internet, it
remains a significant challenge to recommend suitable services to facilitate
scientific workflow composition. This paper proposes a novel NLP-inspired
approach to recommending services throughout a workflow development process,
based on incrementally learning latent service representation from workflow
provenance. A workflow composition process is formalized as a step-wise,
context-aware service generation procedure, which is mapped to next-word
prediction in a natural language sentence. Historical service dependencies are
extracted from workflow provenance to build and enrich a knowledge graph. Each
path in the knowledge graph reflects a scenario in a data analytics experiment,
which is analogous to a sentence in a conversation. All paths are thus
formalized as composable service sequences and are mined, using various
patterns, from the established knowledge graph to construct a corpus. Service
embeddings are then learned by applying a deep learning model from the NLP field.
Extensive experiments on a real-world dataset demonstrate the effectiveness
and efficiency of the approach.
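The abstract's sentence/word analogy maps directly onto standard NLP tooling. Below is a minimal, hypothetical sketch in Python, assuming a skip-gram word2vec model stands in for the unnamed "deep learning model from the NLP field" and using a toy set of provenance-derived service sequences (all service names are invented for illustration):
```python
# Minimal sketch of the pipeline described in the abstract. Assumptions:
# skip-gram word2vec as the embedding model; toy, invented service names.
# Each knowledge-graph path mined from workflow provenance is treated as a
# "sentence", and each service as a "word".
from gensim.models import Word2Vec

# Hypothetical corpus: composable service sequences mined from the
# provenance knowledge graph.
corpus = [
    ["ingest_csv", "clean_missing", "normalize", "train_classifier"],
    ["ingest_csv", "normalize", "cluster", "visualize"],
    ["fetch_api", "clean_missing", "train_classifier", "evaluate"],
]

# Learn service embeddings; sg=1 selects skip-gram, which mirrors the
# next-word-prediction framing of workflow composition.
model = Word2Vec(sentences=corpus, vector_size=64, window=2,
                 min_count=1, sg=1, epochs=100)

# Context-aware recommendation: given the services already placed in the
# workflow under construction, rank candidate next services by similarity
# to the current context.
context = ["ingest_csv", "clean_missing"]
print(model.wv.most_similar(positive=context, topn=3))
```
In the paper itself the corpus comes from paths mined with various traversal patterns over the knowledge graph, and the embedding model and ranking step may differ; the sketch only makes the workflow-as-sentence analogy concrete.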
Related papers
- ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation [87.39861573270173]
We introduce the novel task of prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt.
We propose two LLM-based approaches to tackle this task: a tuning-based method that learns from user-preference data, and a training-free method that uses the LLM to select existing flows.
Our work shows that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation quality, complementing existing research directions in the field.
arXiv Detail & Related papers (2024-10-02T16:43:24Z)
- A Universal Prompting Strategy for Extracting Process Model Information from Natural Language Text using Large Language Models [0.8899670429041453]
We show that generative large language models (LLMs) can solve NLP tasks with very high quality without the need for extensive data.
Based on a novel prompting strategy, we show that LLMs are able to outperform state-of-the-art machine learning approaches.
arXiv Detail & Related papers (2024-07-26T06:39:35Z)
- Learning Service Selection Decision Making Behaviors During Scientific Workflow Development [3.341965553962658]
In this paper, a novel context-aware approach is proposed to recommend the next service in a workflow development process.
The problem of next service recommendation is mapped to next-word prediction.
Experiments on a real-world repository have demonstrated the effectiveness of this approach.
arXiv Detail & Related papers (2024-03-30T16:58:42Z)
- Vocabulary-Defined Semantics: Latent Space Clustering for Improving In-Context Learning [32.178931149612644]
In-context learning enables language models to adapt to downstream data or tasks by incorporating a few samples as demonstrations within the prompts.
However, the performance of in-context learning can be unstable depending on the quality, format, or order of demonstrations.
We propose a novel approach, "vocabulary-defined semantics", to address this instability.
arXiv Detail & Related papers (2024-01-29T14:29:48Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Goal-Driven Context-Aware Next Service Recommendation for Mashup Composition [6.17189383632496]
Service discovery and recommendation have gained significant momentum in both academia and industry.
This paper proposes a novel incremental recommend-as-you-go approach to recommending the next potential service based on the context of a mashup under construction.
arXiv Detail & Related papers (2022-10-25T16:24:21Z)
- Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming [77.38174112525168]
We present Nemo, an end-to-end interactive weak supervision (WS) system that improves the overall productivity of the WS learning pipeline by an average of 20% (and up to 47% on one task) compared to the prevailing WS approach.
arXiv Detail & Related papers (2022-03-02T19:57:32Z)
- A Data-Centric Framework for Composable NLP Workflows [109.51144493023533]
Empirical natural language processing systems in application domains (e.g., healthcare, finance, education) involve interoperation among multiple components.
We establish a unified open-source framework to support fast development of such sophisticated NLP workflows in a composable manner.
arXiv Detail & Related papers (2021-03-02T16:19:44Z)
- Knowledge-Aware Procedural Text Understanding with Multi-Stage Training [110.93934567725826]
We focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.
Two challenges, the difficulty of commonsense reasoning and data insufficiency, still remain unsolved.
We propose a novel KnOwledge-Aware proceduraL text understAnding (KOALA) model, which effectively leverages multiple forms of external knowledge.
arXiv Detail & Related papers (2020-09-28T10:28:40Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)