Submodularity In Machine Learning and Artificial Intelligence
- URL: http://arxiv.org/abs/2202.00132v1
- Date: Mon, 31 Jan 2022 22:41:35 GMT
- Title: Submodularity In Machine Learning and Artificial Intelligence
- Authors: Jeff Bilmes
- Abstract summary: We offer a plethora of submodular definitions; a full description of example submodular functions and their generalizations.
We then turn to how submodularity is useful in machine learning and artificial intelligence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this manuscript, we offer a gentle review of submodularity and
supermodularity and their properties. We offer a plethora of submodular
definitions; a full description of a number of example submodular functions and
their generalizations; example discrete constraints; a discussion of basic
algorithms for maximization, minimization, and other operations; a brief
overview of continuous submodular extensions; and some historical applications.
We then turn to how submodularity is useful in machine learning and artificial
intelligence. This includes summarization, and we offer a complete account of
the differences between and commonalities amongst sketching, coresets,
extractive and abstractive summarization in NLP, data distillation and
condensation, and data subset selection and feature selection. We discuss a
variety of ways to produce a submodular function useful for machine learning,
including heuristic hand-crafting, learning or approximately learning a
submodular function or aspects thereof, and some advantages of the use of a
submodular function as a coreset producer. We discuss submodular combinatorial
information functions, and how submodularity is useful for clustering, data
partitioning, parallel machine learning, active and semi-supervised learning,
probabilistic modeling, and structured norms and loss functions.
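Among the "basic algorithms for maximization" the survey covers, the canonical one is the greedy algorithm of Nemhauser, Wolsey, and Fisher (1978): for a monotone submodular f, i.e. one whose marginal gains satisfy f(A ∪ {v}) - f(A) >= f(B ∪ {v}) - f(B) whenever A ⊆ B and v ∉ B, repeatedly adding the highest-gain element under a cardinality constraint guarantees a (1 - 1/e)-approximation. Below is a minimal Python sketch, with a hypothetical set-cover objective standing in for f rather than anything from the paper:

```python
# Minimal sketch of greedy monotone submodular maximization under a
# cardinality constraint (Nemhauser-Wolsey-Fisher, 1978); the coverage
# objective below is an illustrative stand-in, not from the paper.

def coverage(sets, chosen):
    """f(S) = number of ground elements covered by the chosen sets.
    Coverage is monotone and submodular."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def greedy_max(sets, k):
    """Pick k sets greedily by marginal gain; for monotone submodular f
    this achieves at least (1 - 1/e) of the optimal value."""
    chosen = []
    for _ in range(k):
        gains = {i: coverage(sets, chosen + [i]) - coverage(sets, chosen)
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max(sets, k=2))  # [0, 2]: covers all of {1, ..., 6}
```

Lazy-greedy variants exploit submodularity itself to skip most of these gain evaluations, which is what makes the method practical at scale.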
Related papers
- Modular Deep Learning [120.36599591042908]
Transfer learning has recently become the dominant paradigm of machine learning.
It remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference.
Modular deep learning has emerged as a promising solution to these challenges.
arXiv Detail & Related papers (2023-02-22T18:11:25Z)
- Neural Estimation of Submodular Functions with Applications to Differentiable Subset Selection [50.14730810124592]
Submodular functions and variants, through their ability to characterize diversity and coverage, have emerged as a key tool for data selection and summarization.
We propose FLEXSUBNET, a family of flexible neural models for both monotone and non-monotone submodular functions.
arXiv Detail & Related papers (2022-10-20T06:00:45Z)
- Sparsification of Decomposable Submodular Functions [27.070004659301162]
Submodular functions are at the core of many machine learning and data mining tasks.
The number of underlying submodular functions in the original function is often so large that processing it takes a large amount of time, or it does not even fit in main memory.
We introduce the notion of sparsification for decomposable submodular functions, whose objective is to obtain an accurate approximation of the original function as a (weighted) sum of only a few submodular functions.
arXiv Detail & Related papers (2022-01-18T20:05:25Z)
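For intuition on the sparsification goal above: a decomposable f(S) = sum_i f_i(S) can always be approximated, crudely, by a reweighted uniform subsample of its components. The sketch below shows only this naive baseline, with made-up coverage-style components; the paper's method selects components and weights far more carefully.

```python
# Naive baseline for sparsifying a decomposable submodular function
# f(S) = sum_i f_i(S): keep a uniform random subsample of the f_i and
# reweight so the estimate is unbiased in expectation. Illustrative
# only; not the paper's algorithm.
import random

def sparsify_uniform(components, m, seed=0):
    random.seed(seed)
    sample = random.sample(components, m)
    w = len(components) / m               # reweight for unbiasedness
    return lambda S: w * sum(f(S) for f in sample)

# Hypothetical components: coverage-style functions over small sets.
universe_sets = [{i, (i + 1) % 50} for i in range(50)]
components = [lambda S, u=u: len(u & S) for u in universe_sets]
f = lambda S: sum(g(S) for g in components)
f_hat = sparsify_uniform(components, m=10)
S = set(range(10))
print(f(S), round(f_hat(S), 1))           # exact vs. sparsified estimate
```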
- Deep Submodular Networks for Extractive Data Summarization [0.46898263272139784]
We propose an end-to-end learning framework for summarization problems.
The Deep Submodular Networks (DSN) framework can be used to learn features appropriate for summarization from scratch.
In particular, we show that DSNs outperform simple mixture models using off-the-shelf features.
arXiv Detail & Related papers (2020-10-16T19:06:15Z)
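Deep submodular functions, on which DSNs build, stack two standard facts: a concave function of a nonnegative weighted sum is submodular, and nonnegative combinations of submodular functions stay submodular. Here is a hedged two-layer sketch with randomly generated (hypothetical) weights and features, not the DSN training pipeline:

```python
# Two-layer deep submodular function sketch: nonnegative feature sums
# passed through a concave nonlinearity (sqrt), then combined with
# nonnegative weights. All weights and features here are hypothetical.
import numpy as np

def dsf(S, feats, W1, w2):
    """f(S) = w2 . sqrt(W1 @ sum_{v in S} feats[v]); submodular when
    feats, W1, w2 are all nonnegative."""
    x = feats[sorted(S)].sum(axis=0) if S else np.zeros(feats.shape[1])
    return float(w2 @ np.sqrt(W1 @ x))

rng = np.random.default_rng(0)
feats = rng.random((5, 3))                # 5 items, 3 nonneg features
W1, w2 = rng.random((4, 3)), rng.random(4)
A, B, v = {0}, {0, 1, 2}, 3               # A subset of B, v not in B
gain = lambda S: dsf(S | {v}, feats, W1, w2) - dsf(S, feats, W1, w2)
print(gain(A) >= gain(B))                 # True: diminishing returns
```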
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aim to introduce this structural capability into deep learning.
Most prior work on feed-forward networks that combine top-down and bottom-up feedback has been limited to classification problems.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
- Submodular Combinatorial Information Measures with Applications in Machine Learning [2.5329739965085785]
Information-theoretic quantities like entropy and mutual information have found numerous uses in machine learning.
We study information measures that generalize independence, (conditional) entropy, (conditional) mutual information, and total correlation defined over sets of variables.
arXiv Detail & Related papers (2020-06-27T17:45:32Z)
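One concrete instance of these generalized measures is the submodular mutual information I_f(A; B) = f(A) + f(B) - f(A ∪ B), which recovers Shannon mutual information when f is the entropy function. A small sketch instantiating it with a coverage function of my own choosing (the example sets are made up):

```python
# Submodular mutual information I_f(A; B) = f(A) + f(B) - f(A u B),
# instantiated with a simple coverage function. For coverage, I_f
# counts the elements covered by both sides.
def f(groups):
    """Coverage: size of the union of the chosen element sets."""
    return len(set().union(*groups)) if groups else 0

A = [{1, 2, 3}, {3, 4}]
B = [{3, 4, 5}]
print(f(A) + f(B) - f(A + B))  # 2: both sides cover {3, 4}
```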
- Continuous Submodular Function Maximization [91.17492610120324]
Continuous submodularity is a class of functions with a wide spectrum of applications.
We identify several applications of continuous submodular optimization, ranging from influence maximization and MAP inference to mean field inference.
arXiv Detail & Related papers (2020-06-24T04:37:31Z)
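A workhorse algorithm in this line of work is the Frank-Wolfe-style continuous greedy method: repeatedly take a small step toward the feasible point best aligned with the current gradient. Below is a minimal sketch over the box [0, 1]^n with a toy DR-submodular quadratic; both the objective and the constraint set are illustrative assumptions, not the survey's applications:

```python
# Frank-Wolfe-style continuous greedy sketch for monotone DR-submodular
# maximization over [0,1]^n: take T small steps toward the vertex that
# maximizes the linearized objective. The quadratic below is a toy
# DR-submodular example (all Hessian entries <= 0), not from the paper.
import numpy as np

def continuous_greedy(grad, n, T=100):
    x = np.zeros(n)
    for _ in range(T):
        v = (grad(x) > 0).astype(float)   # linear maximizer over the box
        x += v / T
    return np.minimum(x, 1.0)

# Toy objective F(x) = c.x - 0.5 x^T H x, monotone on [0,1]^2.
H = np.array([[0.0, 0.4], [0.4, 0.0]])
c = np.array([1.0, 0.6])
print(continuous_greedy(lambda x: c - H @ x, n=2))  # -> [1. 1.]
```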
- From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z)
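On integer (multiset) domains, the diminishing-returns property reads coordinate-wise: f(x + e_i) - f(x) >= f(y + e_i) - f(y) whenever x <= y elementwise. A quick numeric check with a toy concave-of-sum function, chosen for illustration rather than taken from the paper:

```python
# Diminishing returns on the integer lattice: adding one more unit of
# coordinate i helps less at a larger vector y >= x. The function
# f(x) = sqrt(sum x) is a toy DR-submodular example for illustration.
import math

f = lambda x: math.sqrt(sum(x))

def gain(x, i):
    bumped = list(x); bumped[i] += 1
    return f(bumped) - f(x)

x, y = [1, 0, 2], [2, 1, 3]        # x <= y coordinate-wise
print(gain(x, 0) >= gain(y, 0))    # True: lattice diminishing returns
```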
This list is automatically generated from the titles and abstracts of the papers in this site.