Introducing "Forecast Utterance" for Conversational Data Science
- URL: http://arxiv.org/abs/2309.03877v1
- Date: Thu, 7 Sep 2023 17:41:41 GMT
- Title: Introducing "Forecast Utterance" for Conversational Data Science
- Authors: Md Mahadi Hassan, Alex Knipper, Shubhra Kanti Karmaker (Santu)
- Abstract summary: This paper introduces a new concept called Forecast Utterance.
We then focus on the automatic and accurate interpretation of users' prediction goals from these utterances.
Specifically, we frame the task as a slot-filling problem, where each slot corresponds to a specific aspect of the goal prediction task.
We then employ two zero-shot methods for solving the slot-filling task, namely: 1) Entity Extraction (EE), and 2) Question-Answering (QA) techniques.
- Score: 2.3894779000840503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Envision an intelligent agent capable of assisting users in conducting
forecasting tasks through intuitive, natural conversations, without requiring
in-depth knowledge of the underlying machine learning (ML) processes. A
significant challenge for the agent in this endeavor is to accurately
comprehend the user's prediction goals and, consequently, formulate precise ML
tasks. In this paper, we take a pioneering step towards this ambitious goal by
introducing a new concept called Forecast Utterance and then focus on the
automatic and accurate interpretation of users' prediction goals from these
utterances. Specifically, we frame the task as a slot-filling problem, where
each slot corresponds to a specific aspect of the goal prediction task. We then
employ two zero-shot methods for solving the slot-filling task, namely: 1)
Entity Extraction (EE), and 2) Question-Answering (QA) techniques. Our
experiments, conducted with three meticulously crafted data sets, validate the
viability of our ambitious goal and demonstrate the effectiveness of both EE
and QA techniques in interpreting Forecast Utterances.
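To make the slot-filling framing concrete, below is a minimal sketch of the QA-style zero-shot approach described in the abstract. The slot names, question templates, model choice, and confidence threshold are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of QA-style zero-shot slot filling for a Forecast Utterance.
# Slot names, question templates, the model, and the threshold are assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Each slot of the ML-task formulation is phrased as a question over the utterance.
SLOT_QUESTIONS = {
    "target_variable": "What quantity does the user want to predict?",
    "prediction_horizon": "How far into the future should the prediction be made?",
    "filter_condition": "Which subset of the data is the prediction about?",
}

def fill_slots(forecast_utterance: str) -> dict:
    """Return a {slot: extracted span or None} mapping for one utterance."""
    slots = {}
    for slot, question in SLOT_QUESTIONS.items():
        result = qa(question=question, context=forecast_utterance)
        # Keep only reasonably confident spans; the threshold is arbitrary.
        slots[slot] = result["answer"] if result["score"] > 0.1 else None
    return slots

print(fill_slots("Predict next month's total sales for stores in the Northeast region."))
```

An Entity Extraction variant would instead map each slot to candidate entity types and extract spans with a zero-shot tagging model rather than a QA model.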
Related papers
- QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View [2.3982875575861677]
We present our solutions for a spectrum of automation tasks in life-saving intervention procedures within the Trauma THOMPSON (T3) Challenge.
For action recognition and anticipation, we propose a pre-processing strategy that samples and stitches multiple inputs into a single image.
For training, we present an action dictionary-guided design, which consistently yields the most favorable results.
arXiv Detail & Related papers (2024-07-18T06:55:26Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- Modeling of learning curves with applications to POS tagging [0.27624021966289597]
We introduce an algorithm to estimate the evolution of learning curves over an entire training database.
We iteratively approximate the sought value at the desired point of the curve, independently of the learning technique used.
The proposal proves to be formally correct with respect to our working hypotheses and includes a reliable proximity condition.
arXiv Detail & Related papers (2024-02-04T15:00:52Z)
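The learning-curve entry above is about extrapolating model performance to larger training sizes. The sketch below is a generic illustration under a common inverse-power-law assumption; it is not the paper's iterative estimator, and the functional form and initial parameters are assumptions.

```python
# Generic learning-curve extrapolation sketch (not the paper's estimator):
# fit an inverse power law to observed (training size, accuracy) points and
# predict accuracy at a larger, unseen training size.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    n = np.asarray(n, dtype=float)
    return a - b * np.power(n, -c)   # accuracy approaches asymptote `a` as data grows

def extrapolate(train_sizes, accuracies, target_size):
    params, _ = curve_fit(power_law, train_sizes, accuracies,
                          p0=[max(accuracies), 1.0, 0.5], maxfev=10000)
    return power_law(target_size, *params)

# Hypothetical accuracies observed at small training sizes, extrapolated to 100k examples.
print(extrapolate([1000, 2000, 5000, 10000], [0.80, 0.85, 0.90, 0.92], 100_000))
```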
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning [62.839109775887025]
The Slot Induction (SI) task aims to induce slot boundaries without explicit knowledge of token-level slot annotations.
We propose leveraging unsupervised Pre-trained Language Model (PLM) probing and a contrastive learning mechanism to exploit the unsupervised semantic knowledge extracted from PLMs.
Our approach is shown to be effective on the SI task and capable of bridging the gap with token-level supervised models on two NLU benchmark datasets.
arXiv Detail & Related papers (2023-08-09T05:08:57Z)
- ChatGPT as your Personal Data Scientist [0.9689893038619583]
This paper introduces a ChatGPT-based conversational data-science framework that acts as a "personal data scientist".
Our model pivots around four dialogue states: Data Visualization, Task Formulation, Prediction Engineering, and Result Summary and Recommendation.
In summary, we developed an end-to-end system that not only proves the viability of the novel concept of conversational data science but also underscores the potency of LLMs in solving complex tasks.
arXiv Detail & Related papers (2023-05-23T04:00:16Z)
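The four dialogue states named in the entry above suggest a simple state-machine controller. The skeleton below is a hypothetical illustration of that structure only; the handlers and the fixed state order are placeholders, not the paper's implementation.

```python
# Hypothetical state-machine skeleton for the four dialogue states named above.
# Handlers are placeholders; the actual system delegates each state to an LLM.
from enum import Enum, auto

class DialogueState(Enum):
    DATA_VISUALIZATION = auto()
    TASK_FORMULATION = auto()
    PREDICTION_ENGINEERING = auto()
    RESULT_SUMMARY_AND_RECOMMENDATION = auto()

# Fixed progression; a real controller would branch on user input.
NEXT_STATE = {
    DialogueState.DATA_VISUALIZATION: DialogueState.TASK_FORMULATION,
    DialogueState.TASK_FORMULATION: DialogueState.PREDICTION_ENGINEERING,
    DialogueState.PREDICTION_ENGINEERING: DialogueState.RESULT_SUMMARY_AND_RECOMMENDATION,
    DialogueState.RESULT_SUMMARY_AND_RECOMMENDATION: None,
}

def run_session(handle_state):
    """handle_state(state) -> str; called once per state in order."""
    state = DialogueState.DATA_VISUALIZATION
    while state is not None:
        print(handle_state(state))
        state = NEXT_STATE[state]

run_session(lambda s: f"[{s.name}] handled by a placeholder")
```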
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions the agent could possibly make, which ones might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Explain and Predict, and then Predict Again [6.865156063241553]
We propose ExPred, which uses multi-task learning in the explanation-generation phase to effectively trade off explanation and prediction losses.
We conduct an extensive evaluation of our approach on three diverse language datasets.
arXiv Detail & Related papers (2021-01-11T19:36:52Z)
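The ExPred entry above mentions trading off explanation and prediction losses via multi-task learning. The following PyTorch sketch shows one generic form of such a combined objective; the module structure, the rationale-weighted pooling, and the weight `alpha` are assumptions, not the paper's architecture.

```python
# Generic multi-task objective combining an explanation (rationale) loss and a
# prediction loss, in the spirit of the "explain then predict" summary above.
import torch
import torch.nn as nn

class ExplainThenPredict(nn.Module):
    def __init__(self, encoder_dim: int = 768, num_classes: int = 2):
        super().__init__()
        self.explainer = nn.Linear(encoder_dim, 1)        # per-token rationale logit
        self.predictor = nn.Linear(encoder_dim, num_classes)

    def forward(self, token_states, rationale_labels, task_labels, alpha=0.5):
        # token_states: (batch, seq_len, encoder_dim) from any pretrained encoder
        expl_logits = self.explainer(token_states).squeeze(-1)          # (B, L)
        expl_loss = nn.functional.binary_cross_entropy_with_logits(
            expl_logits, rationale_labels.float())

        # Predict from a rationale-weighted pooling of the token states.
        weights = torch.sigmoid(expl_logits).unsqueeze(-1)              # (B, L, 1)
        pooled = (weights * token_states).sum(1) / weights.sum(1).clamp(min=1e-6)
        pred_loss = nn.functional.cross_entropy(self.predictor(pooled), task_labels)

        # Multi-task trade-off between explanation quality and end-task accuracy.
        return alpha * expl_loss + (1 - alpha) * pred_loss
```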
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
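The curriculum-learning entry above selects training goals by value disagreement. The sketch below illustrates one plausible reading of that idea with an ensemble of value estimators; the interface and the sampling rule are assumptions rather than the paper's algorithm.

```python
# Sketch of goal selection by value disagreement: goals where an ensemble of
# value estimators disagrees most are sampled more often.
import numpy as np

def sample_training_goal(candidate_goals, value_ensemble, rng=np.random):
    """candidate_goals: list of goals; value_ensemble: list of callables goal -> value."""
    values = np.array([[v(g) for g in candidate_goals] for v in value_ensemble])
    disagreement = values.std(axis=0)               # high std = uncertain, informative goal
    probs = disagreement / disagreement.sum() if disagreement.sum() > 0 else None
    idx = rng.choice(len(candidate_goals), p=probs)  # p=None -> uniform fallback
    return candidate_goals[idx]
```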
- An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction [84.49035467829819]
We show that it is possible to better manage the trade-off between rationale conciseness and end-task accuracy by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z)
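The Information Bottleneck entry above pairs a sparsity-inducing mask over sentences with an end-task predictor. The PyTorch sketch below shows one common form of such an objective, with a Bernoulli KL term controlling conciseness; the soft-mask relaxation and the names are assumptions, not the paper's exact bound.

```python
# Minimal IB-style rationale objective: an explainer emits sentence-level mask
# probabilities, a sparsity prior `pi` controls conciseness via a KL term, and
# the end-task predictor sees only the (soft-)masked sentences.
import torch
import torch.nn as nn

def ib_rationale_loss(mask_probs, sent_states, labels, predictor, pi=0.2, beta=1.0):
    # mask_probs:  (batch, num_sentences) explainer outputs in (0, 1)
    # sent_states: (batch, num_sentences, dim) sentence representations
    # predictor:   e.g. nn.Linear(dim, num_classes)
    eps = 1e-6
    p = mask_probs.clamp(eps, 1 - eps)
    # KL(Bernoulli(p) || Bernoulli(pi)) encourages sparse (concise) rationales.
    kl = (p * torch.log(p / pi) + (1 - p) * torch.log((1 - p) / (1 - pi))).mean()

    # Soft-mask the sentences (a relaxation of sampling hard binary masks).
    masked = sent_states * mask_probs.unsqueeze(-1)
    task_loss = nn.functional.cross_entropy(predictor(masked.sum(1)), labels)
    return task_loss + beta * kl
```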
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.