Machine-Learning Based Objective Function Selection for Community Detection
- URL: http://arxiv.org/abs/2203.13495v1
- Date: Fri, 25 Mar 2022 08:12:01 GMT
- Title: Machine-Learning Based Objective Function Selection for Community Detection
- Authors: Asa Bornstein, Amir Rubin and Danny Hendler
- Abstract summary: We present NECTAR-ML, an extension of the NECTAR algorithm that uses a machine-learning based model for automating the selection of the objective function.
Our analysis shows that in approximately 90% of the cases our model was able to successfully select the correct objective function.
- Score: 5.156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NECTAR, a Node-centric ovErlapping Community deTection AlgoRithm, presented
in 2016 by Cohen et al., dynamically chooses which of two objective functions
to optimize, based on the network on which it is invoked. This
approach, as shown by Cohen et al., outperforms six state-of-the-art algorithms
for overlapping community detection. In this work, we present NECTAR-ML, an
extension of the NECTAR algorithm that uses a machine-learning based model for
automating the selection of the objective function, trained and evaluated on a
dataset of 15,755 synthetic and 7 real-world networks. Our analysis shows that
in approximately 90% of the cases our model was able to successfully select the
correct objective function. We conducted a competitive analysis of NECTAR and
NECTAR-ML. NECTAR-ML was shown to significantly outperform NECTAR at
selecting the best objective function. We also conducted a competitive analysis of
NECTAR-ML and two additional state-of-the-art multi-objective community
detection algorithms. NECTAR-ML outperformed both algorithms in terms of
average detection quality. Multi-objective evolutionary algorithms (MOEAs) are
considered the most popular approach to solving multi-objective optimization
problems (MOPs), and the fact that NECTAR-ML significantly outperforms them
demonstrates the effectiveness of ML-based objective function selection.
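As a rough sketch of the idea (not the authors' implementation: the feature set, the classifier choice, and all function names below are assumptions for illustration), objective function selection can be framed as supervised classification over cheap structural features of a network:

```python
# Hedged sketch of ML-based objective function selection in the spirit of
# NECTAR-ML. Features, classifier, and labels are illustrative assumptions.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def graph_features(G: nx.Graph) -> list:
    """Summarize a network with cheap structural statistics (assumed features)."""
    degrees = [d for _, d in G.degree()]
    return [
        G.number_of_nodes(),
        G.number_of_edges(),
        sum(degrees) / max(len(degrees), 1),  # average degree
        nx.density(G),
        nx.average_clustering(G),
    ]

def train_selector(train_graphs, best_objective_labels):
    # best_objective_labels[i]: which objective (e.g. 0 or 1) produced the
    # best communities on training network i.
    X = [graph_features(G) for G in train_graphs]
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(
        X, best_objective_labels)

def select_objective(selector, G):
    return selector.predict([graph_features(G)])[0]
```

A selector of this kind is trained once on labeled networks (in the paper, a dataset of 15,755 synthetic and 7 real-world networks) and then queried per input network before community detection begins.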
Related papers
- Self-Exploring Language Models: Active Preference Elicitation for Online Alignment [88.56809269990625]
We propose a bilevel objective optimistically biased towards potentially high-reward responses to actively explore out-of-distribution regions.
Our experimental results demonstrate that when fine-tuned on Zephyr-7B-SFT and Llama-3-8B-Instruct models, Self-Exploring Language Models (SELM) significantly boosts the performance on instruction-following benchmarks.
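The general shape of such an optimistically biased objective can be sketched as a DPO-style preference loss with an added exploration bonus; the exact SELM formulation differs, and the `alpha` term below is an illustrative assumption:

```python
# Loose sketch of an optimism-biased preference loss; not the SELM formula.
import torch.nn.functional as F

def optimistic_preference_loss(logp_chosen, logp_rejected,
                               ref_logp_chosen, ref_logp_rejected,
                               beta=0.1, alpha=0.01):
    # DPO-style term: prefer chosen over rejected responses.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    loss = -F.logsigmoid(margin).mean()
    # Optimism bonus (assumed form): also raise the policy's probability of
    # the chosen response, encouraging exploration of high-reward regions.
    return loss - alpha * logp_chosen.mean()
```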
arXiv Detail & Related papers (2024-05-29T17:59:07Z)
- DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems [0.755972004983746]
We propose a feature extraction method that describes the trajectories of optimization algorithms using simple statistics.
We demonstrate that the proposed DynamoRep features capture enough information to identify the problem class on which the optimization algorithm is running.
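A minimal version of such trajectory statistics is easy to sketch: per-generation summary statistics of the optimizer's population and fitness values, concatenated into one feature vector (array shapes below are assumptions):

```python
# Sketch of DynamoRep-style trajectory features: per-iteration summary
# statistics of an optimizer's population, concatenated into one vector.
import numpy as np

def trajectory_features(populations, fitnesses):
    """populations: list of (pop_size, dim) arrays, one per iteration.
    fitnesses: list of (pop_size,) arrays of objective values."""
    feats = []
    for pop, fit in zip(populations, fitnesses):
        for arr in (pop, fit.reshape(-1, 1)):
            for stat in (arr.min, arr.max, arr.mean, arr.std):
                feats.extend(stat(axis=0))
    return np.asarray(feats)  # input to an off-the-shelf problem classifier
```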
arXiv Detail & Related papers (2023-06-08T06:57:07Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
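The core training signal of multi-step inverse kinematics can be sketched as follows; the architecture and names are illustrative assumptions, not the MusIK implementation:

```python
# Toy sketch: learn an encoder phi such that (phi(x_t), phi(x_{t+k}))
# predicts the action a_t taken at time t (multi-step inverse kinematics).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseKinematics(nn.Module):
    def __init__(self, obs_dim, latent_dim, num_actions):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.head = nn.Linear(2 * latent_dim, num_actions)

    def forward(self, obs_t, obs_tk):
        z = torch.cat([self.encoder(obs_t), self.encoder(obs_tk)], dim=-1)
        return self.head(z)  # logits over the action taken at time t

def ik_loss(model, obs_t, obs_tk, action_t):
    return F.cross_entropy(model(obs_t, obs_tk), action_t)
```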
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Explainable Model-specific Algorithm Selection for Multi-Label Classification [6.442438468509492]
Multi-label classification (MLC) is an ML task of predictive modeling in which a data instance can simultaneously belong to multiple classes.
Several MLC algorithms have been proposed in the literature, giving rise to a meta-optimization problem: choosing the algorithm that will perform best on a given dataset.
In this work, we investigate the quality of an automated algorithm-selection approach that uses characteristics of the datasets.
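Concretely, such an automated approach can be sketched as meta-learning over dataset characteristics; the meta-features and candidate algorithms below are assumptions for illustration:

```python
# Sketch of dataset-characteristic-based algorithm selection for MLC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mlc_meta_features(X, Y):
    """X: (n, d) feature matrix; Y: (n, q) binary label matrix."""
    n, d = X.shape
    q = Y.shape[1]
    cardinality = Y.sum(axis=1).mean()   # average labels per instance
    return [n, d, q, cardinality, cardinality / q]  # last term: label density

# meta_X: meta-features of many datasets; meta_y: index of the MLC algorithm
# (e.g. binary relevance vs. classifier chains) that performed best on each.
def train_algorithm_selector(meta_X, meta_y):
    return RandomForestClassifier(random_state=0).fit(meta_X, meta_y)
```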
arXiv Detail & Related papers (2022-11-21T07:42:11Z)
- Multi-Task Learning on Networks [0.0]
Multi-objective optimization problems arising in the multi-task learning context have specific features and require ad hoc methods.
In this thesis, the solutions in the Input Space are represented as probability distributions that encapsulate the knowledge contained in the function evaluations.
In this space of probability distributions, endowed with the metric given by the Wasserstein distance, a new algorithm, MOEA/WST, can be designed in which the model is built over these distributions rather than directly over the objective function.
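For intuition, comparing two solutions through the Wasserstein distance between their evaluation distributions can be sketched with SciPy; representing a solution by the raw list of its function evaluations is a simplifying assumption here:

```python
# Sketch: compare two candidate solutions via the 1-D Wasserstein distance
# between the empirical distributions of their function evaluations.
from scipy.stats import wasserstein_distance

evals_a = [0.91, 0.87, 0.93, 0.90]  # evaluations observed for solution A
evals_b = [0.55, 0.60, 0.58, 0.57]  # evaluations observed for solution B

print(wasserstein_distance(evals_a, evals_b))  # distance in the metric space
```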
arXiv Detail & Related papers (2021-12-07T09:13:10Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
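The baseline setup, and the failure mode it warns about, can be sketched in a few lines; the proxy architecture below is an assumption, and RoMA's robust adaptation is precisely what this naive loop lacks:

```python
# Naive proxy-based search for offline model-based optimization: fit a proxy
# to the static dataset, then gradient-ascend an input on it. This is the
# setup that produces adversarially optimized inputs without robustness.
import torch
import torch.nn as nn

proxy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
# ... assume `proxy` has already been fit to the offline (x, y) dataset ...

x = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    (-proxy(x).sum()).backward()  # maximize the proxy's predicted value
    opt.step()
# `x` may now exploit proxy errors instead of improving the true objective.
```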
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
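Point 2) can be illustrated with ordinary kernel ridge regression, where adaptation to a task's support set has a closed form; the RBF kernel below stands in for the meta-model's NTK, which is an assumption of this sketch:

```python
# Sketch: analytic inner-loop adaptation as kernel ridge regression,
# f(x) = k(x, Xs) @ (K(Xs, Xs) + lam*I)^-1 @ ys -- no iterative inner loop.
import numpy as np

def rbf(A, B, gamma=1.0):  # stand-in for the meta-model's NTK
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def adapt_and_predict(Xs, ys, Xq, lam=1e-3):
    alpha = np.linalg.solve(rbf(Xs, Xs) + lam * np.eye(len(Xs)), ys)
    return rbf(Xq, Xs) @ alpha  # predictions on the query points
```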
arXiv Detail & Related papers (2021-02-07T20:53:23Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
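The analytic tractability comes from the Gaussian overlap integral, ∫ N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2). A sketch for two single Gaussians follows (a GMM extends this by summing such terms over mixture components):

```python
# Closed-form Cauchy-Schwarz divergence between two Gaussians:
#   D_CS(p, q) = -log( ∫ p q / sqrt(∫ p^2 · ∫ q^2) )
import numpy as np
from scipy.stats import multivariate_normal

def overlap(m1, S1, m2, S2):
    return multivariate_normal.pdf(m1, mean=m2, cov=S1 + S2)

def cs_divergence(m1, S1, m2, S2):
    return -np.log(overlap(m1, S1, m2, S2)
                   / np.sqrt(overlap(m1, S1, m1, S1)
                             * overlap(m2, S2, m2, S2)))
```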
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- On the Global Optimality of Model-Agnostic Meta-Learning [133.16370011229776]
Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel optimization problem, where the inner level solves each subtask based on a shared prior.
We characterize the optimality of the stationary points attained by MAML for both reinforcement learning and supervised learning, where the inner-level and outer-level problems are solved via first-order optimization methods.
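The bilevel structure MAML optimizes can be written schematically as below; `loss_fn` and the parameter-list representation are assumptions of the sketch:

```python
# Schematic MAML step: the inner level adapts a shared prior to each task
# with one gradient step; the outer level scores the adapted parameters.
import torch

def maml_outer_loss(params, tasks, loss_fn, inner_lr=0.01):
    total = 0.0
    for support, query in tasks:
        grads = torch.autograd.grad(loss_fn(params, support), params,
                                    create_graph=True)   # inner level
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        total = total + loss_fn(adapted, query)          # outer level
    return total / len(tasks)  # backprop through both levels to update prior
```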
arXiv Detail & Related papers (2020-06-23T17:33:14Z)