DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings
- URL: http://arxiv.org/abs/2408.13630v2
- Date: Thu, 11 Sep 2025 15:32:16 GMT
- Title: DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings
- Authors: Leonardo Matone, Ben Abramowitz, Ben Armstrong, Avinash Balakrishnan, Nicholas Mattei
- Abstract summary: We recast the problem of designing voting rules with desirable properties into one of learning probabilistic functions. We show that preference profile encoding has a significant impact on the efficiency and ability of neural networks to learn rules. We also show that our learned rules can be fine-tuned using axiomatic properties to create novel voting rules.
- Score: 5.312279415103033
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Aggregating agent preferences into a collective decision is an important step in many problems (e.g., hiring, elections, peer review) and across areas of computer science (e.g., reinforcement learning, recommender systems). As Social Choice Theory has shown, the problem of designing aggregation rules with specific sets of properties (axioms) can be difficult, or provably impossible in some cases. Instead of designing algorithms by hand, one can learn aggregation rules, particularly voting rules, from data. However, prior work in this area has required extremely large models or been limited by the choice of preference representation, i.e., embedding. We recast the problem of designing voting rules with desirable properties into one of learning probabilistic functions that output distributions over a set of candidates. Specifically, we use neural networks to learn probabilistic social choice functions. Using standard embeddings from the social choice literature we show that preference profile encoding has significant impact on the efficiency and ability of neural networks to learn rules, allowing us to learn rules faster and with smaller networks than previous work. Moreover, we show that our learned rules can be fine-tuned using axiomatic properties to create novel voting rules and make them resistant to specific types of "attack". Namely, we fine-tune rules to resist a probabilistic version of the No Show Paradox.
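To make the abstract's recipe concrete, here is a minimal PyTorch sketch, not the authors' architecture: a small network maps an embedded preference profile (a positional matrix, one standard social-choice embedding) to a distribution over candidates, is trained to imitate a target rule (plurality, as a stand-in), and leaves a hook where an axiom-derived penalty would be added for fine-tuning. All layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

N_VOTERS, N_CANDS = 20, 5  # illustrative sizes

class ProbabilisticRule(nn.Module):
    """Maps an embedded profile to a distribution over candidates."""
    def __init__(self, embed_dim: int, n_cands: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_cands)
        )

    def forward(self, profile_embedding: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(profile_embedding), dim=-1)

def positional_embedding(rankings: torch.Tensor) -> torch.Tensor:
    # One standard social-choice embedding: the normalized positional
    # matrix, counting how often each candidate occupies each rank.
    pos = torch.zeros(N_CANDS, N_CANDS)
    for ranking in rankings:
        for rank, cand in enumerate(ranking):
            pos[cand, rank] += 1
    return (pos / len(rankings)).flatten()

rule = ProbabilisticRule(embed_dim=N_CANDS * N_CANDS, n_cands=N_CANDS)
opt = torch.optim.Adam(rule.parameters(), lr=1e-3)

# One supervised step: imitate a target rule (plurality, as a stand-in).
rankings = torch.stack([torch.randperm(N_CANDS) for _ in range(N_VOTERS)])
target = torch.bincount(rankings[:, 0], minlength=N_CANDS).float()
target /= target.sum()  # plurality top-choice frequencies as a distribution
pred = rule(positional_embedding(rankings))
loss = nn.functional.kl_div(pred.log(), target, reduction="sum")
# Axiomatic fine-tuning would add a penalty term here, e.g. one that grows
# when a voter's abstention raises their favorite's win probability
# (a probabilistic No Show Paradox).
opt.zero_grad()
loss.backward()
opt.step()
```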
Related papers
- Optimized Distortion in Linear Social Choice [28.227695590829086]
We study distortion of linear social choice for deterministic and randomized voting rules.
We introduce poly-time instance-optimal algorithms for minimizing distortion given a collection of candidates and votes.
arXiv Detail & Related papers (2025-10-22T20:42:49Z)
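The distortion studied in the entry above compares the best achievable social welfare with the welfare of the winner a rule picks when it sees only ordinal information. A toy computation under made-up utilities (purely illustrative, not the paper's instance-optimal algorithm):

```python
# Toy distortion computation: the rule sees only the rankings induced by
# hidden utilities, so its winner can fall short of the welfare optimum.
utilities = {                     # voter -> utility for candidates a, b
    "v1": {"a": 0.51, "b": 0.49},
    "v2": {"a": 0.51, "b": 0.49},
    "v3": {"a": 0.00, "b": 1.00},
}
candidates = ["a", "b"]

def plurality_winner(utils):
    tops = [max(u, key=u.get) for u in utils.values()]  # ordinal info only
    return max(candidates, key=tops.count)

welfare = {c: sum(u[c] for u in utilities.values()) for c in candidates}
winner = plurality_winner(utilities)                    # 'a' here
print(max(welfare.values()) / welfare[winner])          # distortion ~ 1.94
```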
- What Voting Rules Actually Do: A Data-Driven Analysis of Multi-Winner Voting [5.880273374889066]
We propose a data-driven framework to evaluate how frequently voting rules violate axioms across diverse preference distributions.
We show that neural networks, acting as voting rules, can outperform traditional rules in minimizing axiom violations.
arXiv Detail & Related papers (2025-08-08T16:54:09Z)
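For the data-driven axiom analysis above, the basic measurement loop looks roughly like this sketch: sample profiles from some distribution and count how often a rule violates an axiom. The impartial-culture distribution, plurality, and the Condorcet-winner axiom here are illustrative choices, not necessarily the paper's.

```python
import random

def condorcet_winner(profile, m):
    # A candidate beating every other in pairwise majority, if one exists.
    for a in range(m):
        if all(sum(r.index(a) < r.index(b) for r in profile) > len(profile) / 2
               for b in range(m) if b != a):
            return a
    return None

def plurality(profile, m):
    tops = [r[0] for r in profile]
    return max(range(m), key=tops.count)

random.seed(0)
m, n, trials, violations, applicable = 4, 15, 2000, 0, 0
for _ in range(trials):
    profile = [random.sample(range(m), m) for _ in range(n)]  # impartial culture
    cw = condorcet_winner(profile, m)
    if cw is not None:
        applicable += 1
        violations += plurality(profile, m) != cw
print(f"Condorcet violation rate for plurality: {violations / applicable:.3f}")
```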
- Self-Evolving Curriculum for LLM Reasoning [108.23021254812258]
Self-Evolving Curriculum (SEC) is an automatic curriculum learning method that learns a curriculum policy concurrently with the RL fine-tuning process.
Our experiments demonstrate that SEC significantly improves models' reasoning capabilities, enabling better generalization to harder, out-of-distribution test problems.
arXiv Detail & Related papers (2025-05-20T23:17:15Z)
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
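A small illustration of why the one-sided feedback mentioned above is awkward for these metrics: with only observed positives, a recall-like quantity is estimable while precision is not. The data here is hypothetical.

```python
# With one-sided feedback only positives are ever revealed, so false
# positives stay hidden and precision cannot be estimated directly.
predicted = {("u1", "v2"), ("u1", "v3"), ("u2", "v1")}   # predicted edges
observed_positives = {("u1", "v2"), ("u2", "v1"), ("u2", "v4")}

recall_estimate = len(predicted & observed_positives) / len(observed_positives)
print(f"recall over observed positives: {recall_estimate:.2f}")  # 0.67
# precision = |true positives among predicted| / |predicted| is unknown here:
# ("u1", "v3") may be a true positive that was simply never observed.
```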
- Abductive and Contrastive Explanations for Scoring Rules in Voting [5.928530455750507]
We design algorithms for computing abductive and contrastive explanations for scoring rules.
For the Borda rule, we find a lower bound on the size of the smallest abductive explanations.
We conduct simulations to identify correlations between properties of preference profiles and the size of their smallest abductive explanations.
arXiv Detail & Related papers (2024-08-23T09:12:58Z)
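The Borda rule analyzed above is a positional scoring rule: with m candidates, a candidate earns m-1 points per first-place ranking, down to 0 for last place. A minimal scoring sketch on a made-up profile:

```python
# Borda scores for a small illustrative profile.
profile = [
    ["a", "b", "c"],   # each ranking: most to least preferred
    ["a", "c", "b"],
    ["b", "c", "a"],
]
m = len(profile[0])
scores = {c: 0 for c in profile[0]}
for ranking in profile:
    for position, cand in enumerate(ranking):
        scores[cand] += m - 1 - position   # m-1 points for first, 0 for last
print(scores)   # {'a': 4, 'b': 3, 'c': 2}
```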
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
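LinSATNet builds on the classic Sinkhorn iteration, extending it to jointly encode multiple sets of marginals. The vanilla single-matrix iteration underneath is easy to sketch (generic version, not the paper's extension); every step is differentiable, which is what makes it usable as a network layer.

```python
import numpy as np

def sinkhorn(scores: np.ndarray, n_iters: int = 50, tau: float = 0.1) -> np.ndarray:
    """Classic Sinkhorn: alternately normalize rows and columns so the
    matrix converges toward doubly stochastic (all marginals uniform)."""
    p = np.exp(scores / tau)
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)   # row marginals -> 1
        p /= p.sum(axis=0, keepdims=True)   # column marginals -> 1
    return p

p = sinkhorn(np.random.randn(4, 4))
print(p.sum(axis=0), p.sum(axis=1))   # both close to all-ones
```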
- Faithful Differentiable Reasoning with Reshuffled Region-based Embeddings [62.93577376960498]
Knowledge graph embedding methods learn geometric representations of entities and relations to predict plausible missing knowledge.
We propose RESHUFFLE, a model based on ordering constraints that can faithfully capture a much larger class of rule bases.
The entity embeddings in our framework can be learned by a Graph Neural Network (GNN), which effectively acts as a differentiable rule base.
arXiv Detail & Related papers (2024-06-13T18:37:24Z)
- On the Aggregation of Rules for Knowledge Graph Completion [9.628032156001069]
Rule learning approaches for knowledge graph completion are efficient, interpretable, and competitive with purely neural models.
We show that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules.
We propose an efficient, previously overlooked baseline that combines the previous strategies and is competitive with computationally more expensive approaches.
arXiv Detail & Related papers (2023-09-01T07:32:11Z)
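For the rule-aggregation entry above, two simple strategies often compared in this line of work are max-aggregation and noisy-or over the confidences of the rules predicting a candidate fact; whether these are exactly the paper's baselines is an assumption here, and the confidences are made up.

```python
# Aggregating confidences of rules that all predict the same candidate fact.
rule_confidences = [0.9, 0.6, 0.3]   # rules that fired for one candidate triple

max_aggregation = max(rule_confidences)
noisy_or = 1.0
for c in rule_confidences:
    noisy_or *= (1.0 - c)            # probability that every rule is "wrong"
noisy_or = 1.0 - noisy_or

print(max_aggregation, round(noisy_or, 3))   # 0.9 vs 0.972
```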
- Abstracting Concept-Changing Rules for Solving Raven's Progressive Matrix Problems [54.26307134687171]
Raven's Progressive Matrices (RPM) is a classic test of abstract reasoning ability in machine intelligence, posed as selecting the correct answer from candidates.
Recent studies suggest that solving RPM by generating answers, rather than selecting them, yields a deeper understanding of the underlying rules.
We propose a deep latent variable model for Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and parsing concept-changing rules in the latent space.
arXiv Detail & Related papers (2023-07-15T07:16:38Z)
- Data as voters: instance selection using approval-based multi-winner voting [1.597617022056624]
We present a novel approach to the instance selection problem in machine learning (or data mining).
In our model, instances play a double role as voters and candidates.
For SVMs, we have obtained slight increases in the average accuracy by using several voting rules that satisfy EJR or PJR.
arXiv Detail & Related papers (2023-04-19T22:00:23Z)
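One concrete rule satisfying the EJR axiom mentioned above is Proportional Approval Voting (PAV). A brute-force sketch on a made-up approval profile; in the paper's model both the voters and the candidates would be training instances.

```python
# Brute-force PAV: each voter contributes 1 + 1/2 + ... + 1/j when j of
# their approved candidates sit on the committee.
from itertools import combinations

approvals = [            # each voter's approved candidates (illustrative)
    {"a", "b"}, {"a", "b"}, {"a", "c"}, {"c", "d"}, {"d"},
]
candidates, k = {"a", "b", "c", "d"}, 2

def pav_score(committee):
    return sum(sum(1 / i for i in range(1, len(a & committee) + 1))
               for a in approvals)

best = max(combinations(sorted(candidates), k), key=lambda c: pav_score(set(c)))
print(best, pav_score(set(best)))   # ('a', 'd') with score 5.0
```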
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Obvious Manipulability of Voting Rules [105.35249497503527]
The Gibbard-Satterthwaite theorem states that no unanimous and non-dictatorial voting rule is strategyproof.
We revisit voting rules and consider a weaker notion of strategyproofness called not obvious manipulability.
arXiv Detail & Related papers (2021-11-03T02:41:48Z)
- The Complexity of Learning Approval-Based Multiwinner Voting Rules [9.071560867542647]
We study the learnability of multiwinner voting, focusing on the class of approval-based committee scoring (ABCS) rules.
Our goal is to learn a target rule (i.e., to learn the corresponding scoring function) using information about the winning committees of a small number of profiles.
We prove that deciding whether there exists some ABCS rule that makes a given committee winning in a given profile is a hard problem.
arXiv Detail & Related papers (2021-10-01T08:25:05Z)
- Learning to Elect [7.893831644671976]
Voting systems have a wide range of applications including recommender systems, web search, product design and elections.
We show that set-input neural network architectures such as Set Transformers, fully-connected graph networks and DeepSets are both theoretically and empirically well-suited for learning voting rules.
arXiv Detail & Related papers (2021-08-05T17:55:46Z)
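A DeepSets-style voting network, one of the set-input architectures named above: encode each ballot independently, sum-pool so the output is invariant to voter order (anonymity), then decode scores over candidates. Layer sizes and the one-hot ballot encoding are illustrative.

```python
import torch
import torch.nn as nn

class DeepSetsRule(nn.Module):
    def __init__(self, n_cands: int, hidden: int = 32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_cands, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, n_cands)

    def forward(self, ballots: torch.Tensor) -> torch.Tensor:
        # ballots: (n_voters, n_cands), e.g. each row a one-hot top choice.
        pooled = self.phi(ballots).sum(dim=0)   # permutation-invariant pooling
        return torch.softmax(self.rho(pooled), dim=-1)

rule = DeepSetsRule(n_cands=4)
ballots = torch.eye(4)[torch.randint(0, 4, (10,))]  # 10 one-hot ballots
print(rule(ballots))   # a distribution over the 4 candidates
```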
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples, without being given the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
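In the spirit of the constraint-mining framework above (a hedged toy, not the paper's ILP formulation): propose candidate linear constraints, keep only those that every example structure satisfies, and hand the survivors to an ILP solver at inference time.

```python
# Toy constraint mining: filter candidate constraints against examples.
examples = [               # observed valid 3x3 structures (illustrative)
    [[1, 2, 3], [3, 1, 2], [2, 3, 1]],
    [[2, 3, 1], [1, 2, 3], [3, 1, 2]],
]

candidates = {
    "row sums equal 6":  lambda s: all(sum(row) == 6 for row in s),
    "col sums equal 6":  lambda s: all(sum(col) == 6 for col in zip(*s)),
    "diag sums equal 6": lambda s: sum(s[i][i] for i in range(3)) == 6,
}
mined = [name for name, holds in candidates.items()
         if all(holds(s) for s in examples)]
print(mined)   # diag constraint is rejected: example 1 violates it
```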
- Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes [16.764511357821043]
How should one combine noisy information from diverse sources to make an inference about an objective ground truth?
We propose a multi-arm bandit noise model and count-based auxiliary information set.
We find that our rules successfully use auxiliary information to outperform the naive baselines.
arXiv Detail & Related papers (2020-01-27T21:21:19Z)
- Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.