Explaining with Greater Support: Weighted Column Sampling Optimization
for q-Consistent Summary-Explanations
- URL: http://arxiv.org/abs/2302.04528v1
- Date: Thu, 9 Feb 2023 09:40:30 GMT
- Title: Explaining with Greater Support: Weighted Column Sampling Optimization
for q-Consistent Summary-Explanations
- Authors: Chen Peng, Zhengqi Dai, Guangping Xia, Yajie Niu, Yihui Lei
- Abstract summary: $q$-consistent summary-explanation aims to achieve greater support at the cost of slightly lower consistency.
The challenge is that the max-support problem of $q$-consistent summary-explanation (MSqC) is much more complex than the original MS problem.
To improve solution time efficiency, this paper proposes the weighted column sampling (WCS) method.
- Score: 1.6262731094866383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning systems have been extensively used as auxiliary tools in
domains that require critical decision-making, such as healthcare and criminal
justice. The explainability of decisions is crucial for users to develop trust
in these systems. In recent years, the globally-consistent rule-based
summary-explanation and its max-support (MS) problem have been proposed, which
can provide explanations for particular decisions along with useful statistics
of the dataset. However, globally-consistent summary-explanations with limited
complexity typically have small supports, if there are any. In this paper, we
propose a relaxed version of summary-explanation, i.e., the $q$-consistent
summary-explanation, which aims to achieve greater support at the cost of
slightly lower consistency. The challenge is that the max-support problem of
$q$-consistent summary-explanation (MSqC) is much more complex than the
original MS problem, resulting in prohibitively long solution times with standard
branch-and-bound solvers. To improve solution time efficiency, this paper
proposes the weighted column sampling (WCS) method, which solves smaller
problems by sampling variables according to their simplified increase support
(SIS) values. Experiments verify that solving MSqC with the proposed SIS-based
WCS method is not only more scalable and time-efficient, but also yields solutions
with greater support and better global extrapolation effectiveness.
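To make these notions concrete, below is a minimal sketch (Python with NumPy; all helper names are illustrative, and the exact SIS formula from the paper is not reproduced) of how the support and consistency of a candidate summary-explanation can be computed, and how SIS-weighted column sampling might select the variables of a smaller subproblem.

```python
import numpy as np

def support_and_consistency(conditions, X, y, target_label):
    """A summary-explanation is a conjunction of feature conditions.
    Support = number of dataset samples satisfying every condition;
    consistency = fraction of those samples whose label matches the
    label of the decision being explained."""
    mask = np.ones(len(X), dtype=bool)
    for feature_idx, predicate in conditions:   # e.g. (3, lambda v: v <= 40)
        mask &= np.fromiter((predicate(v) for v in X[:, feature_idx]),
                            dtype=bool, count=len(X))
    support = int(mask.sum())
    consistency = float((y[mask] == target_label).mean()) if support else 0.0
    return support, consistency

def is_q_consistent(conditions, X, y, target_label, q=0.85):
    # A q-consistent summary-explanation trades a little consistency
    # (>= q instead of exactly 1) for potentially much larger support.
    _, consistency = support_and_consistency(conditions, X, y, target_label)
    return consistency >= q

def wcs_sample_columns(sis_values, k, seed=0):
    """Weighted column sampling: choose k candidate columns (conditions)
    with probability proportional to their simplified-increase-support
    scores, so that each smaller MSqC subproblem favours conditions
    likely to increase support. The SIS scores are assumed to be
    precomputed nonnegative values."""
    rng = np.random.default_rng(seed)
    p = np.asarray(sis_values, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=k, replace=False, p=p)
```

Each sampled subproblem can then be handed to a branch-and-bound solver, keeping the best q-consistent rule found across subproblems; this mirrors the strategy of trading one large MSqC instance for several smaller ones.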
Related papers
- Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling [38.7578639980701]
Self-improvement methods enable large language models to generate solutions themselves.
We find that models tend to over-sample on easy queries and under-sample on queries they have yet to master.
We introduce Guided Self-Improvement (GSI), a strategy aimed at improving the efficiency of sampling challenging heavy-tailed data.
arXiv Detail & Related papers (2024-11-01T17:18:45Z)
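The rebalancing idea above can be illustrated with a short sketch (hypothetical bookkeeping, not the authors' GSI implementation): weight each query by the model's current failure rate so that hard, under-sampled tail queries are drawn more often.

```python
import numpy as np

def tail_aware_sampling(success_rates, n_draws, temperature=1.0, seed=0):
    """Draw query indices with probability increasing as the model's
    success rate decreases, counteracting over-sampling of easy queries.
    `success_rates[i]` is the fraction of correct self-generated
    solutions for query i (illustrative bookkeeping, not from the paper)."""
    rng = np.random.default_rng(seed)
    difficulty = 1.0 - np.asarray(success_rates, dtype=float)
    weights = np.exp(difficulty / temperature)  # softmax over difficulty
    p = weights / weights.sum()
    return rng.choice(len(p), size=n_draws, replace=True, p=p)
```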
- DISCO: Efficient Diffusion Solver for Large-Scale Combinatorial Optimization Problems [37.205311971072405]
DISCO is an efficient DIffusion solver for large-scale Combinatorial Optimization problems.
It constrains the sampling space to a more meaningful domain guided by solution residues, while preserving the multi-modal properties of the output distributions.
It delivers strong performance on large-scale Traveling Salesman Problems and challenging Maximal Independent Set benchmarks, with inference up to 5.28 times faster than other diffusion alternatives.
arXiv Detail & Related papers (2024-06-28T07:36:31Z)
- Dataless Quadratic Neural Networks for the Maximum Independent Set Problem [23.643727259409744]
We show that pCQO-MIS scales with only the number of nodes in the graph, not the number of edges.
Our method avoids out-of-distribution, sampling, and data-centric approaches.
arXiv Detail & Related papers (2024-06-27T21:12:48Z)
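As a rough illustration of this dataless, node-parameterized style (a generic continuous quadratic relaxation of MIS, not the authors' exact pCQO-MIS objective), one can minimize -Σ_i x_i + γ Σ_{(i,j)∈E} x_i x_j over x ∈ [0,1]^n by projected gradient descent:

```python
import numpy as np

def mis_quadratic_relaxation(edges, n_nodes, gamma=2.0,
                             lr=0.05, steps=2000, seed=0):
    """Generic continuous relaxation of Maximum Independent Set:
    maximize the number of selected nodes while penalizing both
    endpoints of an edge being selected. The only trainable state is
    one scalar per node."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.4, 0.6, size=n_nodes)   # relaxed selections in [0, 1]
    E = np.array(edges)                       # shape (m, 2)
    for _ in range(steps):
        grad = -np.ones(n_nodes)              # d/dx of -sum(x)
        np.add.at(grad, E[:, 0], gamma * x[E[:, 1]])
        np.add.at(grad, E[:, 1], gamma * x[E[:, 0]])
        x = np.clip(x - lr * grad, 0.0, 1.0)  # projected gradient step
    return np.where(x > 0.5)[0]               # rounded candidate set

# e.g. a 4-cycle: expect one of the two size-2 independent sets
print(mis_quadratic_relaxation([(0, 1), (1, 2), (2, 3), (3, 0)], 4))
```

With γ > 1, any binary point that violates an edge can be improved by dropping one endpoint, so binary local optima are independent sets.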
- QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs [63.98556480088152]
Table summarization is a crucial task aimed at condensing information into concise and comprehensible textual summaries.
We propose a novel method that addresses the limitations of existing approaches by introducing query-focused multi-table summarization.
Our approach, which comprises a table serialization module, a summarization controller, and a large language model, generates query-dependent table summaries tailored to users' information needs.
arXiv Detail & Related papers (2024-05-08T15:05:55Z)
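What a table serialization module feeding an LLM might look like can be sketched generically (an illustrative helper, not the QFMTS module itself):

```python
def serialize_tables(tables, query):
    """Flatten multiple tables into a single prompt string.
    `tables` is a list of (caption, header, rows) triples; the result
    pairs the user query with a linearized view of every table."""
    parts = [f"Query: {query}"]
    for caption, header, rows in tables:
        lines = [f"Table: {caption}", " | ".join(header)]
        lines += [" | ".join(str(cell) for cell in row) for row in rows]
        parts.append("\n".join(lines))
    return "\n\n".join(parts)

prompt = serialize_tables(
    [("city_population", ["city", "population"],
      [["Oslo", 709037], ["Bergen", 291940]])],
    "Which city has the larger population?")
```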
- Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on the query complexity.
We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that ours enhances the overall efficiency and accuracy of QA systems.
arXiv Detail & Related papers (2024-03-21T13:52:30Z)
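At a high level the framework is a complexity-conditioned dispatcher; a minimal sketch, assuming a trained complexity classifier and three answering strategies are supplied by the caller:

```python
from typing import Callable

def adaptive_qa(query: str,
                classify_complexity: Callable[[str], str],
                no_retrieval: Callable[[str], str],
                single_step: Callable[[str], str],
                multi_step: Callable[[str], str]) -> str:
    """Dispatch a query to one of three answering strategies based on a
    predicted complexity label ('A' = simple, 'B' = moderate,
    'C' = complex), mirroring the adaptive framework at a high level."""
    label = classify_complexity(query)
    if label == "A":
        return no_retrieval(query)    # LLM answers directly
    if label == "B":
        return single_step(query)     # one retrieve-then-read pass
    return multi_step(query)          # iterative retrieval and reasoning
```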
- MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization [55.60306377044225]
State-of-the-art summarization systems can generate highly fluent summaries.
These summaries, however, may contain factual inconsistencies and/or information not present in the source.
We introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared.
arXiv Detail & Related papers (2023-01-28T23:08:25Z)
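One simple way to "directly compare" the information in source and summary is to answer the same multiple-choice questions conditioned on each text and measure the divergence between the two answer distributions; the sketch below uses KL divergence as a simplified proxy, not the exact MQAG statistic:

```python
import numpy as np

def answer_distribution_divergence(p_source, p_summary, eps=1e-9):
    """KL divergence between two answer distributions over the same
    multiple-choice options, one conditioned on the source document and
    one on the summary. Low divergence suggests the summary is
    informationally consistent with the source."""
    p = np.asarray(p_source, dtype=float) + eps
    q = np.asarray(p_summary, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# identical answer beliefs -> divergence near 0
print(answer_distribution_divergence([0.7, 0.2, 0.1], [0.7, 0.2, 0.1]))
```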
- On the Global Solution of Soft k-Means [159.23423824953412]
This paper presents an algorithm to solve the Soft k-Means problem globally.
A new model, named Minimal Volume Soft kMeans (MVSkM), is proposed to address the solution non-uniqueness issue.
arXiv Detail & Related papers (2022-12-07T12:06:55Z)
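For reference, the standard soft k-Means responsibilities that such a solver optimizes over can be written in a few lines (textbook formulation; the MVSkM minimal-volume criterion is not reproduced here):

```python
import numpy as np

def soft_assignments(X, centers, beta=5.0):
    """Standard soft k-Means responsibilities: r[n, k] is proportional
    to exp(-beta * squared distance from point n to center k) and is
    normalized over centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    r = np.exp(logits)
    return r / r.sum(axis=1, keepdims=True)
```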
- Tensor Completion with Provable Consistency and Fairness Guarantees for Recommender Systems [5.099537069575897]
We introduce a new consistency-based approach for defining and solving nonnegative/positive matrix and tensor completion problems.
We show that a single property/constraint, preserving unit-scale consistency, guarantees both the existence of a solution and, under relatively weak support assumptions, its uniqueness.
arXiv Detail & Related papers (2022-04-04T19:42:46Z)
- Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z)
- Learning What to Defer for Maximum Independent Sets [84.00112106334655]
We propose a novel DRL scheme, coined learning what to defer (LwD), where the agent adaptively shrinks or stretches the number of stages by learning to distribute the element-wise decisions of the solution at each stage.
We apply the proposed framework to the maximum independent set (MIS) problem, and demonstrate its significant improvement over the current state-of-the-art DRL scheme.
arXiv Detail & Related papers (2020-06-17T02:19:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.