A Search for Prompts: Generating Structured Answers from Contracts
- URL: http://arxiv.org/abs/2310.10141v1
- Date: Mon, 16 Oct 2023 07:29:38 GMT
- Title: A Search for Prompts: Generating Structured Answers from Contracts
- Authors: Adam Roegiest, Radha Chitta, Jonathan Donnelly, Maya Lash, Alexandra Vtyurina, and François Longtin
- Abstract summary: We present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause.
We discuss our exploration methodology for legal question answering prompts using OpenAI's GPT-3.5-Turbo and provide a summary of insights.
- Score: 40.99057706243682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many legal processes, being able to act on the concrete implications of a
legal question can be valuable for automating human review or signalling certain
conditions (e.g., alerts around automatic renewal). To support such tasks, we
present a form of legal question answering that seeks to return one (or more)
fixed answers for a question about a contract clause. After showing that
unstructured generative question answering can have questionable outcomes for
such a task, we discuss our exploration methodology for legal question
answering prompts using OpenAI's GPT-3.5-Turbo and provide a summary
of insights.
Using insights gleaned from our qualitative experiences, we compare our
proposed template prompts against a common semantic matching approach and find
that our prompt templates are far more accurate, though less reliable at
returning responses in the exact expected form. With some additional tweaks to
the prompts and the use of in-context learning, we are able to further improve
the performance of our proposed strategy while maximizing the reliability of
responses as best we can.
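The fixed-answer setup the abstract describes can be sketched in a few lines: a prompt that constrains the model to a closed answer set, plus a parser that coerces the model's free-text reply back onto that set (since, as the paper notes, generative responses are not always returned in the exact expected form). The template wording, answer set, and helper names below are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch of fixed-answer ("structured") contract QA prompting.
# FIXED_ANSWERS, the template wording, and the fallback logic are
# hypothetical illustrations of the general approach.

FIXED_ANSWERS = ["Yes", "No", "Not specified"]

def build_prompt(clause: str, question: str, answers=FIXED_ANSWERS) -> str:
    """Construct a prompt that asks for exactly one of the fixed answers."""
    options = ", ".join(answers)
    return (
        "You are reviewing a contract clause.\n"
        f"Clause: {clause}\n"
        f"Question: {question}\n"
        f"Answer with exactly one of: {options}."
    )

def parse_response(raw: str, answers=FIXED_ANSWERS) -> str:
    """Map a free-text model reply onto the fixed answer set.

    Models do not always return the exact string requested, so fall back
    to case-insensitive substring matching before giving up.
    """
    text = raw.strip().lower()
    for answer in answers:
        if answer.lower() == text or answer.lower() in text:
            return answer
    return "Not specified"  # conservative default when nothing matches

prompt = build_prompt(
    "This agreement renews automatically for successive one-year terms.",
    "Does this clause provide for automatic renewal?",
)
print(parse_response("yes, the clause renews automatically."))  # → Yes
```

In practice the prompt string would be sent to a chat model (e.g., GPT-3.5-Turbo) and `parse_response` applied to its reply; the point is that downstream automation only ever sees one of the fixed answers.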
Related papers
- Open Domain Question Answering with Conflicting Contexts [55.739842087655774]
We find that as much as 25% of unambiguous, open domain questions can lead to conflicting contexts when retrieved using Google Search.
We ask our annotators to provide explanations for their selections of correct answers.
arXiv Detail & Related papers (2024-10-16T07:24:28Z)
- Answering Questions in Stages: Prompt Chaining for Contract QA [1.0359008237358598]
We propose two-stage prompt chaining to produce structured answers to multiple-choice and multiple-select questions.
We analyze situations where this technique works well and areas where further refinement is needed.
arXiv Detail & Related papers (2024-10-09T17:14:13Z)
- Estimating the Usefulness of Clarifying Questions and Answers for Conversational Search [17.0363715044341]
We propose a method for processing answers to clarifying questions, moving away from previous work that simply appends answers to the original query.
Specifically, we propose a classifier for assessing usefulness of the prompted clarifying question and an answer given by the user.
Results demonstrate significant improvements over strong non-mixed-initiative baselines.
arXiv Detail & Related papers (2024-01-21T11:04:30Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering [26.997542897342164]
We propose a novel framework, CONVSR (CONVQA using Structured Representations) for capturing and generating intermediate representations as conversational cues.
We test our model on the QuAC and CANARD datasets and illustrate by experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
arXiv Detail & Related papers (2023-04-14T13:42:32Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements to be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Adaptive Information Seeking for Open-Domain Question Answering [61.39330982757494]
We propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO.
According to the learned policy, AISO could adaptively select a proper retrieval action to seek the missing evidence at each step.
AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
arXiv Detail & Related papers (2021-09-14T15:08:13Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.