Answer-Set Programs for Repair Updates and Counterfactual Interventions
- URL: http://arxiv.org/abs/2209.12110v1
- Date: Sun, 25 Sep 2022 00:34:39 GMT
- Title: Answer-Set Programs for Repair Updates and Counterfactual Interventions
- Authors: Leopoldo Bertossi
- Abstract summary: We briefly describe different kinds of answer-set programs with annotations. These have been proposed for specifying database repairs and consistent query answering; secrecy views and query evaluation with them; and counterfactual interventions for causality in databases.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We briefly describe -- mainly through very simple examples -- different kinds of answer-set programs with annotations that have been proposed for specifying: database repairs and consistent query answering; secrecy views and query evaluation with them; counterfactual interventions for causality in databases; and counterfactual-based explanations in machine learning.
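To give a flavor of these programs, here is a minimal sketch of a repair program in clingo-style ASP syntax. The relation emp(Name, Salary), the key constraint, and all predicate names are our own illustration, not taken from the paper, and the annotation constants used in the paper's programs are omitted for simplicity:

```
% Hypothetical inconsistent database: emp(Name, Salary) should satisfy
% the key Name -> Salary, but the first two facts violate it.
emp(john, 50000).
emp(john, 70000).
emp(mary, 60000).

% For each pair of key-violating tuples, delete at least one of them.
% Each answer set of the program then corresponds to a minimal repair.
del(emp(N, S1)) ; del(emp(N, S2)) :- emp(N, S1), emp(N, S2), S1 < S2.

% The repaired relation keeps the tuples that were not deleted.
emp_r(N, S) :- emp(N, S), not del(emp(N, S)).

% Example query: names of employees earning more than 55000.
answer(N) :- emp_r(N, S), S > 55000.
```

The consistent answers to the query are the answer/1 atoms that hold in every answer set, i.e., in every repair; with clingo they can be obtained by cautious reasoning (clingo --enum-mode=cautious). Here answer(mary) is a consistent answer, while answer(john) is not, since john earns 50000 in one repair and 70000 in the other. The counterfactual-intervention programs surveyed in the paper follow a similar pattern: hypothetical changes to the data, or to feature values of an entity under classification, are generated in rule heads, and causes, explanations, or scores are read off the resulting answer sets.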
Related papers
- Composable Interventions for Language Models [60.32695044723103]
Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining.
But despite a flood of new methods, different types of interventions are largely being developed in isolation.
We introduce composable interventions, a framework to study the effects of using multiple interventions on the same language models.
arXiv Detail & Related papers (2024-07-09T01:17:44Z)
- Attribution-Scores in Data Management and Explainable Machine Learning [0.0]
We describe recent research on the use of actual causality in the definition of responsibility scores in databases.
In the case of databases, useful connections with database repairs are illustrated and exploited.
For classification models, the responsibility score is properly extended and illustrated.
arXiv Detail & Related papers (2023-07-31T22:41:17Z)
- From Database Repairs to Causality in Databases and Beyond [0.0]
We describe some recent approaches to score-based explanations for query answers in databases.
Special emphasis is placed on the use of counterfactual reasoning for score specification and computation.
arXiv Detail & Related papers (2023-06-15T04:08:23Z)
- Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization [89.04537372465612]
Socratic pretraining is a question-driven, unsupervised pretraining objective designed to improve controllability in summarization tasks.
Our results show that Socratic pretraining cuts task-specific labeled data requirements in half.
arXiv Detail & Related papers (2022-12-20T17:27:10Z)
- DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases [81.19499764899359]
We propose a novel framework DecAF that jointly generates both logical forms and direct answers.
DecAF achieves new state-of-the-art accuracy on WebQSP, FreebaseQA, and GrailQA benchmarks.
arXiv Detail & Related papers (2022-09-30T19:51:52Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- Reasoning about Counterfactuals and Explanations: Problems, Results and Directions [0.0]
The answer-set-program-based approaches surveyed are flexible and modular, in that they allow the seamless addition of domain knowledge.
The programs can be used to specify and compute responsibility-based numerical scores as attributive explanations for classification results.
arXiv Detail & Related papers (2021-08-25T01:04:49Z)
- Answer-Set Programs for Reasoning about Counterfactual Interventions and Responsibility Scores for Classification [0.0]
We describe how answer-set programs can be used to declaratively specify counterfactual interventions on entities under classification.
In particular, they can be used to define and compute responsibility scores as attribution-based explanations for outcomes from classification models.
arXiv Detail & Related papers (2021-07-21T15:41:56Z)
- Score-Based Explanations in Data Management and Machine Learning: An Answer-Set Programming Approach to Counterfactual Analysis [0.0]
We describe some recent approaches to score-based explanations for query answers in databases and outcomes from classification models in machine learning.
Special emphasis is placed on declarative, answer-set-programming-based approaches to counterfactual reasoning for score specification and computation.
arXiv Detail & Related papers (2021-06-19T19:21:48Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks: next session prediction, utterance restoration, incoherence detection, and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.