A Unified Analysis of Dynamic Interactive Learning
- URL: http://arxiv.org/abs/2204.07071v1
- Date: Thu, 14 Apr 2022 16:03:37 GMT
- Title: A Unified Analysis of Dynamic Interactive Learning
- Authors: Xing Gao, Thomas Maranzatto, Lev Reyzin
- Abstract summary: Previous work by Emamjomeh-Zadeh et al. [2020] introduced dynamics into interactive learning as a way to model non-static user preferences.
We give a framework that captures both of the models analyzed by [Emamjomeh-Zadeh et al., 2020], which allows us to study any type of concept evolution.
We also study an efficient algorithm where the learner simply follows the feedback at each round.
- Score: 5.474944738795308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we investigate the problem of learning evolving concepts over a
combinatorial structure. Previous work by Emamjomeh-Zadeh et al. [2020]
introduced dynamics into interactive learning as a way to model non-static user
preferences in clustering problems or recommender systems. We provide many
useful contributions to this problem. First, we give a framework that captures
both of the models analyzed by [Emamjomeh-Zadeh et al., 2020], which allows us
to study any type of concept evolution and matches the same query complexity
bounds and running time guarantees of the previous models. Using this general
model we solve the open problem of closing the gap between the upper and lower
bounds on query complexity. Finally, we study an efficient algorithm where the
learner simply follows the feedback at each round, and we provide mistake
bounds for low diameter graphs such as cliques, stars, and general o(log n)
diameter graphs by using a Markov Chain model.
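The "follow the feedback" strategy described above can be illustrated with a minimal sketch. Assumptions to note: the function names and toy graphs below are illustrative, not taken from the paper, and the sketch uses a static target vertex, whereas the paper's model additionally lets the concept evolve between rounds. The learner proposes its current vertex; on each mistake the oracle reveals a neighbor strictly closer to the target, and the learner simply moves there.

```python
from collections import deque

def bfs_dist(adj, source):
    """Distances from source to every vertex via breadth-first search."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def follow_the_feedback(adj, start, target):
    """Learner proposes its current vertex; on a mistake the oracle
    points to a neighbor on a shortest path toward the target, and the
    learner moves there. Returns the number of mistakes made."""
    dist = bfs_dist(adj, target)       # used only to simulate the oracle
    guess, mistakes = start, 0
    while guess != target:
        mistakes += 1
        # feedback: a neighbor strictly closer to the (hidden) target
        guess = min(adj[guess], key=lambda v: dist[v])
    return mistakes

# Star graph K_{1,5} (diameter 2): center 0, leaves 1..5.
star = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}
assert follow_the_feedback(star, start=1, target=5) == 2
```

On a static target the mistake count is just the graph distance from the start, which is why low-diameter graphs such as stars and cliques are the favorable cases; the paper's Markov-chain analysis handles the harder setting where the target also moves.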
Related papers
- Multi-View Stochastic Block Models [34.55723218769512]
We formalize a new family of models, called multi-view block models, that captures this setting.
For this model, we first study efficient algorithms that naively work on the union of multiple graphs.
Then, we introduce a new efficient algorithm that provably outperforms previous approaches by analyzing the structure of each graph separately.
arXiv Detail & Related papers (2024-06-07T11:45:31Z)
- Towards a Generic Representation of Combinatorial Problems for Learning-Based Approaches [2.2526069385327316]
In recent years, there has been growing interest in using learning-based approaches for solving combinatorial problems.
The challenge lies in encoding the targeted problems into a structure compatible with the learning algorithm.
Many existing works have proposed problem-specific representations, often in the form of a graph, to leverage the advantages of graph neural networks.
This paper advocates for a fully generic representation of problems for learning-based approaches.
arXiv Detail & Related papers (2024-03-09T22:28:46Z)
- Scalable Structure Learning for Sparse Context-Specific Systems [0.0]
We present an algorithm for learning context-specific models that scales to hundreds of variables.
Our method is shown to perform well on synthetic data and real-world examples.
arXiv Detail & Related papers (2024-02-12T16:28:52Z)
- Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems [19.47235707806519]
We give a new approach to learning mixtures of linear dynamical systems based on tensor decompositions.
Our algorithm succeeds without strong separation conditions on the components, and can be used to compete with the Bayes optimal clustering of the trajectories.
arXiv Detail & Related papers (2023-07-13T03:00:01Z)
- Oracle Inequalities for Model Selection in Offline Reinforcement Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- A Simple Approach to Case-Based Reasoning in Knowledge Bases [56.661396189466664]
We present a surprisingly simple yet accurate approach to reasoning in knowledge graphs (KGs) that requires no training, and is reminiscent of case-based reasoning in classical artificial intelligence (AI).
Consider the task of finding a target entity given a source entity and a binary relation.
Our non-parametric approach derives crisp logical rules for each query by finding multiple graph path patterns that connect similar source entities through the given relation.
arXiv Detail & Related papers (2020-06-25T06:28:09Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some graph applications, such as graph compression and edge partition, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
- Learning fine-grained search space pruning and heuristics for combinatorial optimization [5.72274610208488]
We propose a framework for leveraging machine learning techniques to scale up exact optimization algorithms.
Our framework learns the relatively simpler task of pruning the elements in order to reduce the size of the problem instances.
We show that our framework can prune a large fraction of the input graph and still detect almost all of the maximum cliques.
arXiv Detail & Related papers (2020-01-05T13:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.