Learning How to Vote With Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks
- URL: http://arxiv.org/abs/2410.16170v1
- Date: Mon, 21 Oct 2024 16:35:58 GMT
- Title: Learning How to Vote With Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks
- Authors: Levin Hornischer, Zoi Terzopoulou
- Abstract summary: We propose axiomatic deep voting: a framework to build and evaluate neural networks that aggregate preferences.
Our findings are: (1) Neural networks, despite being highly accurate, often fail to align with the core axioms of voting rules, revealing a disconnect between mimicking outcomes and reasoning.
(3) By solely optimizing axiom satisfaction, neural networks can synthesize new voting rules that often surpass and substantially differ from existing ones.
- Score: 4.8342038441006805
- Abstract: Can neural networks be applied in voting theory, while satisfying the need for transparency in collective decisions? We propose axiomatic deep voting: a framework to build and evaluate neural networks that aggregate preferences, using the well-established axiomatic method of voting theory. Our findings are: (1) Neural networks, despite being highly accurate, often fail to align with the core axioms of voting rules, revealing a disconnect between mimicking outcomes and reasoning. (2) Training with axiom-specific data does not enhance alignment with those axioms. (3) By solely optimizing axiom satisfaction, neural networks can synthesize new voting rules that often surpass and substantially differ from existing ones. This offers insights for both fields: For AI, important concepts like bias and value-alignment are studied in a mathematically rigorous way; for voting theory, new areas of the space of voting rules are explored.
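The paper's architecture, profile encoding, and evaluation pipeline are not spelled out above, so the following is only a minimal sketch of the general idea in PyTorch: a small network maps one-hot-encoded ranking profiles to candidate scores, and satisfaction of an axiom such as anonymity (permuting the voters must not change the winner) is estimated empirically on random profiles. The encoding, the network sizes, and the `anonymity_violation_rate` helper are assumptions made for this sketch, not the authors' construction.

```python
# Minimal illustrative sketch (not the paper's architecture or pipeline):
# a small network aggregates one-hot-encoded rankings into candidate scores,
# and the anonymity axiom (permuting voters must not change the winner) is
# checked empirically on random profiles.
import torch
import torch.nn as nn

N_VOTERS, N_CANDIDATES = 5, 3

class VotingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VOTERS * N_CANDIDATES * N_CANDIDATES, 64),
            nn.ReLU(),
            nn.Linear(64, N_CANDIDATES),           # scores over candidates
        )

    def forward(self, profile):                    # (batch, voters, cand, cand)
        return self.net(profile.flatten(start_dim=1))

def random_profiles(batch):
    """Entry [v, i, c] = 1 iff voter v ranks candidate c at position i."""
    ranks = torch.stack([
        torch.stack([torch.randperm(N_CANDIDATES) for _ in range(N_VOTERS)])
        for _ in range(batch)
    ])
    return nn.functional.one_hot(ranks, N_CANDIDATES).float()

def anonymity_violation_rate(model, batch=256):
    """Fraction of profiles whose winner changes when the voters are shuffled."""
    profiles = random_profiles(batch)
    perm = torch.randperm(N_VOTERS)
    winners = model(profiles).argmax(dim=1)
    winners_shuffled = model(profiles[:, perm]).argmax(dim=1)
    return (winners != winners_shuffled).float().mean().item()

model = VotingNet()                                # untrained, for illustration
print("anonymity violations:", anonymity_violation_rate(model))
```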
Related papers
- DeepVoting: Learning Voting Rules with Tailored Embeddings [13.037431161285971]
We recast the problem of designing a good voting rule as one of learning probabilistic versions of voting rules.
We show that embeddings of preference profiles derived from the social choice literature allow us to learn existing voting rules more efficiently.
We also show that rules learned using embeddings can be tweaked to create novel voting rules with improved axiomatic properties.
arXiv Detail & Related papers (2024-08-24T17:15:20Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (a background sketch of the plain Sinkhorn iteration appears after this list).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Proportional Aggregation of Preferences for Sequential Decision Making [20.374669324368625]
We study the problem of fair sequential decision making given voter preferences.
In each round, a decision rule must choose a decision from a set of alternatives where each voter reports which of these alternatives they approve.
We formalize this aim using axioms based on Proportional Justified Representation.
arXiv Detail & Related papers (2023-06-26T17:10:10Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Learning to Elect [7.893831644671976]
Voting systems have a wide range of applications, including recommender systems, web search, product design, and elections.
We show that set-input neural network architectures such as Set Transformers, fully-connected graph networks, and DeepSets are both theoretically and empirically well suited for learning voting rules (see the DeepSets-style sketch after this list).
arXiv Detail & Related papers (2021-08-05T17:55:46Z)
- Learning Accurate and Interpretable Decision Rule Sets from Neural Networks [5.280792199222362]
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification.
We consider the problem of learning an interpretable decision rule set as training a neural network in a specific, yet very simple two-layer architecture.
arXiv Detail & Related papers (2021-03-04T04:10:19Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Distance-Based Regularisation of Deep Networks for Fine-Tuning [116.71288796019809]
We develop an algorithm that constrains the hypothesis class to a small sphere centred on the initial pre-trained weights (a generic projection sketch appears after this list).
Empirical evaluation shows that our algorithm works well, corroborating our theoretical results.
arXiv Detail & Related papers (2020-02-19T16:00:47Z)
- Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes [16.764511357821043]
How should one combine noisy information from diverse sources to make an inference about an objective ground truth?
We propose a multi-armed bandit noise model and a count-based auxiliary information set.
We find that our rules successfully use auxiliary information to outperform the naive baselines.
arXiv Detail & Related papers (2020-01-27T21:21:19Z)
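The "Learning to Elect" entry above points to set-input architectures such as DeepSets. As an assumed illustration only, the sketch below sum-pools per-voter ballot embeddings before decoding them into candidate scores, so the learned rule is anonymous (permutation-invariant over voters) by construction; the ballot encoding and layer sizes are not taken from that paper.

```python
# Illustrative DeepSets-style voting-rule learner (sizes and ballot encoding
# are assumptions): per-voter ballots are embedded, summed over voters, and
# decoded into candidate scores. Sum pooling makes the model invariant to the
# order of the voters, i.e. anonymity holds by architecture.
import torch
import torch.nn as nn

class DeepSetsVoter(nn.Module):
    def __init__(self, n_candidates, hidden=64):
        super().__init__()
        ballot_dim = n_candidates * n_candidates   # one-hot ranking per voter
        self.phi = nn.Sequential(                  # per-voter encoder
            nn.Linear(ballot_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.rho = nn.Sequential(                  # decoder on the pooled sum
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_candidates)
        )

    def forward(self, ballots):                    # (batch, voters, ballot_dim)
        pooled = self.phi(ballots).sum(dim=1)      # sum over the voter axis
        return self.rho(pooled)                    # scores over candidates

model = DeepSetsVoter(n_candidates=4)
ballots = torch.rand(8, 10, 16)                    # 8 profiles, 10 voters each
shuffled = ballots[:, torch.randperm(10)]
assert torch.allclose(model(ballots), model(shuffled), atol=1e-5)
```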
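The LinSATNet entry above builds on the classic Sinkhorn algorithm. As background only, here is a minimal sketch of plain Sinkhorn normalisation for a single pair of row and column marginals; the paper's extension to multiple sets of marginals is not reproduced, and the temperature `tau` and iteration count are arbitrary choices for the sketch.

```python
# Background sketch only: the classic Sinkhorn iteration alternately rescales
# rows and columns of a positive matrix towards prescribed marginals, and is
# differentiable end to end. LinSATNet extends this idea to jointly encode
# multiple sets of marginal distributions; that extension is not shown here.
import torch

def sinkhorn(scores, row_marginals, col_marginals, n_iters=50, tau=0.1):
    k = torch.exp(scores / tau)                                 # positive kernel
    for _ in range(n_iters):
        k = k * (row_marginals / k.sum(dim=1, keepdim=True))    # match row sums
        k = k * (col_marginals / k.sum(dim=0, keepdim=True))    # match column sums
    return k

scores = torch.randn(4, 4, requires_grad=True)
rows = torch.ones(4, 1)                     # each row should sum to 1
cols = torch.ones(1, 4)                     # each column should sum to 1
p = sinkhorn(scores, rows, cols)
loss = p.diagonal().sum()                   # any downstream scalar objective
loss.backward()                             # gradients flow back through Sinkhorn
print(p.sum(dim=0))                         # column sums are (approximately) 1
```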
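For the distance-based regularisation entry above, one straightforward way to keep the fine-tuned weights inside a small sphere around the pre-trained ones is to project them back after each optimiser step. The helper below is a generic sketch under that assumption, not necessarily the exact algorithm of the paper.

```python
# Generic projected-fine-tuning sketch (an assumption, not necessarily the
# paper's algorithm): after each optimiser step, rescale the deviation from
# the pre-trained weights so that its overall L2 norm stays within `radius`.
import torch

@torch.no_grad()
def project_to_ball(model, pretrained_params, radius):
    deviations = [p - p0 for p, p0 in zip(model.parameters(), pretrained_params)]
    norm = torch.sqrt(sum((d ** 2).sum() for d in deviations))
    if norm > radius:
        scale = radius / norm
        for p, p0, d in zip(model.parameters(), pretrained_params, deviations):
            p.copy_(p0 + scale * d)                 # move back onto the ball

# Usage inside a fine-tuning loop (model, data, loss, optimizer assumed):
#   pretrained_params = [p.detach().clone() for p in model.parameters()]
#   ...
#   loss.backward(); optimizer.step()
#   project_to_ball(model, pretrained_params, radius=1.0)
```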