Data Summarization beyond Monotonicity: Non-monotone Two-Stage Submodular Maximization
- URL: http://arxiv.org/abs/2309.05183v2
- Date: Thu, 2 Nov 2023 02:30:20 GMT
- Title: Data Summarization beyond Monotonicity: Non-monotone Two-Stage Submodular Maximization
- Authors: Shaojie Tang
- Abstract summary: The objective of a two-stage submodular problem is to reduce the ground set using provided training functions that are submodular.
This problem has applications in various domains including data summarization.
- Score: 11.296845424426062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The objective of a two-stage submodular maximization problem is to reduce the
ground set using provided training functions that are submodular, with the aim
of ensuring that optimizing new objective functions over the reduced ground set
yields results comparable to those obtained over the original ground set. This
problem has applications in various domains including data summarization.
Existing studies often assume the monotonicity of the objective function,
whereas our work pioneers the extension of this research to accommodate
non-monotone submodular functions. We have introduced the first constant-factor
approximation algorithms for this more general case.
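The paper's algorithms are not spelled out in this summary; as a minimal sketch of the problem setup only, assuming hypothetical training functions and a naive greedy inner solver that stops at non-positive gains (one simple way to cope with non-monotonicity), the two-stage objective can be scored like this:

```python
def greedy_subset(f, candidates, k):
    """Greedily pick at most k items from `candidates` to maximize f.
    For non-monotone f, stop once no remaining item has positive gain."""
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining, key=lambda v: f(chosen + [v]) - f(chosen))
        if f(chosen + [best]) - f(chosen) <= 0:  # adding more can hurt a non-monotone f
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

def two_stage_value(S, train_fns, k):
    """Score a reduced ground set S by the average value the training
    functions reach when (approximately) optimized over S."""
    return sum(f(greedy_subset(f, S, k)) for f in train_fns) / len(train_fns)

# Toy usage: items are sets of covered elements. Coverage is monotone
# submodular; coverage minus a per-item cost is submodular but non-monotone.
items = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4}), frozenset({1, 4})]
cover = lambda T: len({x for s in T for x in s})
f_cost = lambda T: cover(T) - 1.5 * len(T)
print(two_stage_value(items, [cover, f_cost], k=2))
```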
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is preserving the interpretability of the reduced targets and features by aggregating them with the mean, a choice motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- Non-monotone Sequential Submodular Maximization [13.619980548779687]
We aim to select and rank a group of $k$ items from a ground set $V$ such that the weighted summation of $k$ (possibly non-monotone) submodular functions, the $j$-th of which is evaluated on the first $j$ ranked items, is maximized.
The results of this research have implications in various fields, including recommendation systems and optimization.
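Reading the abstract as saying the $j$-th function is evaluated on the top-$j$ prefix of the ranking, a hedged sketch of that objective (all names hypothetical) is:

```python
def sequential_value(ranking, fns, weights):
    """Weighted sum of k submodular functions, the j-th evaluated
    on the first j items of the ranking (assumed prefix structure)."""
    return sum(w * f(ranking[:j])
               for j, (w, f) in enumerate(zip(weights, fns), start=1))
```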
arXiv Detail & Related papers (2023-08-16T19:32:29Z)
- Stochastic Submodular Maximization via Polynomial Estimators [13.498923494159312]
We focus on maximizing submodular functions that are defined as expectations over a class of submodular functions with an unknown distribution.
We show that for monotone functions of this form, a continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to $(1-1/e) \approx 63\%$ using a polynomial estimator.
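The polynomial estimator itself is not reproduced in this summary; as a hypothetical sketch of the setup, the naive sampling baseline that such estimators aim to replace looks like:

```python
import random

def draw_fn():
    """Toy distribution over submodular functions: a random nonnegative
    weight times a coverage function (purely illustrative)."""
    w = random.random()
    return lambda S: w * len({x for s in S for x in s})

def sample_average(S, draw, n=500):
    """Naive Monte Carlo estimate of E[f_gamma(S)], where draw() samples
    a submodular function from the (unknown) distribution."""
    return sum(draw()(S) for _ in range(n)) / n

print(sample_average([frozenset({1, 2}), frozenset({2, 3})], draw_fn))
```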
arXiv Detail & Related papers (2023-03-17T13:32:33Z)
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z)
- Neural Estimation of Submodular Functions with Applications to Differentiable Subset Selection [50.14730810124592]
Submodular functions and variants, through their ability to characterize diversity and coverage, have emerged as a key tool for data selection and summarization.
We propose FLEXSUBNET, a family of flexible neural models for both monotone and non-monotone submodular functions.
arXiv Detail & Related papers (2022-10-20T06:00:45Z)
- Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback [23.665149409064814]
We extend purely submodular prior work to more general non-submodular objectives.
This allows representing not only competitive (submodular) but also complementary (supermodular) relationships between objects.
arXiv Detail & Related papers (2022-07-07T05:10:15Z)
- Using Partial Monotonicity in Submodular Maximization [13.23676270963484]
We show that for many standard submodular algorithms one can prove new approximation guarantees that depend on the monotonicity ratio.
This leads to improved approximation ratios for the common machine learning applications of movie recommendation, quadratic programming and image summarization.
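Taking the monotonicity ratio to be the largest $m$ with $f(T) \ge m \cdot f(S)$ for all $S \subseteq T$ (so $m = 1$ for monotone $f$ and $m = 0$ in the fully non-monotone case), a brute-force illustration on tiny ground sets might look like:

```python
from itertools import chain, combinations

def subsets(items):
    """All subsets of `items`, as tuples."""
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def monotonicity_ratio(f, ground):
    """Brute-force m = min over S <= T of f(T)/f(S), skipping pairs with
    f(S) <= 0. Exponential in |ground|; for illustration only."""
    m = 1.0
    for S in subsets(ground):
        base = f(set(S))
        if base <= 0:
            continue
        rest = [v for v in ground if v not in S]
        for extra in subsets(rest):
            m = min(m, f(set(S) | set(extra)) / base)
    return m
```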
arXiv Detail & Related papers (2022-02-07T10:35:40Z)
- Planning with Submodular Objective Functions [118.0376288522372]
We study planning with submodular objective functions, where instead of maximizing cumulative reward, the goal is to maximize the value induced by a submodular function.
Our framework subsumes standard planning and submodular maximization with cardinality constraints as special cases.
arXiv Detail & Related papers (2020-10-22T16:55:12Z)
- Continuous Submodular Function Maximization [91.17492610120324]
Continuous submodularity is a class of functions with a wide spectrum of applications.
We identify several applications of continuous submodular optimization, ranging from influence maximization and MAP inference for determinantal point processes (DPPs) to provable mean-field inference.
arXiv Detail & Related papers (2020-06-24T04:37:31Z)
- From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.