Shannon Entropy is better Feature than Category and Sentiment in User Feedback Processing
- URL: http://arxiv.org/abs/2409.12012v1
- Date: Wed, 18 Sep 2024 14:26:19 GMT
- Title: Shannon Entropy is better Feature than Category and Sentiment in User Feedback Processing
- Authors: Andres Rojas Paredes, Brenda Mareco
- Abstract summary: We propose Shannon Entropy as a simple feature which can replace standard features.
Our results show that a Shannon Entropy based ranking is better than a standard ranking according to the NDCG metric.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: App reviews in mobile app stores contain useful information that is used to improve applications and promote software evolution. This information is processed by automatic tools that prioritize reviews. To carry out this prioritization, reviews are decomposed into features such as category and sentiment. A weighted function then assigns a weight to each feature, and a review ranking is calculated. Unfortunately, extracting category and sentiment from reviews requires at least a classifier trained on an annotated corpus, so this task is computationally demanding. In this work, we therefore propose Shannon Entropy as a simple feature that can replace the standard features. Our results show that a Shannon Entropy based ranking is better than a standard ranking according to the NDCG metric. This result is promising even when we additionally require fairness with respect to algorithmic bias. Finally, we highlight a computational limit that appears in the search for the best ranking.
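The approach described in the abstract, scoring each review by the Shannon entropy of its text and evaluating the resulting ranking with NDCG, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the sample reviews and their relevance labels are hypothetical, and entropy is computed over the character distribution as one plausible choice.

```python
import math

def shannon_entropy(text):
    """Shannon entropy (bits) of the character distribution of a review."""
    if not text:
        return 0.0
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance labels."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """NDCG: DCG of the given ordering divided by the ideal (sorted) DCG."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Hypothetical reviews with manually assigned relevance labels
# (higher = more useful to developers).
reviews = [
    ("Crashes every time I open the camera on Android 14", 3),
    ("Please add dark mode and export to CSV", 2),
    ("great app", 0),
    ("love it love it love it", 0),
]

# Rank reviews by entropy alone (descending), then score the ranking.
ranked = sorted(reviews, key=lambda r: shannon_entropy(r[0]), reverse=True)
print(ndcg([rel for _, rel in ranked]))
```

The intuition is that informative reviews tend to use a richer, less repetitive vocabulary, which raises their entropy, while low-content reviews ("great app") score low; no trained classifier is needed.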
Related papers
- InRank: Incremental Low-Rank Learning [85.6380047359139]
gradient-based training implicitly regularizes neural networks towards low-rank solutions through a gradual increase of the rank during training.
Existing training algorithms do not exploit the low-rank property to improve computational efficiency.
We design a new training algorithm Incremental Low-Rank Learning (InRank), which explicitly expresses cumulative weight updates as low-rank matrices.
arXiv Detail & Related papers (2023-06-20T03:03:04Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- Fast online ranking with fairness of exposure [29.134493256287072]
We show that our algorithm is computationally fast (generating rankings on the fly with cost dominated by the sort operation), memory efficient, and backed by strong theoretical guarantees.
Compared to baseline policies that only maximize user-side performance, our algorithm can incorporate complex fairness-of-exposure criteria into the recommendations with negligible computational overhead.
arXiv Detail & Related papers (2022-09-13T12:35:36Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Which Tricks Are Important for Learning to Rank? [32.38701971636441]
State-of-the-art learning-to-rank methods are based on gradient-boosted decision trees (GBDT).
In this paper, we thoroughly analyze several GBDT-based ranking algorithms in a unified setup.
As a result, we gain insights into learning-to-rank techniques and obtain a new state-of-the-art algorithm.
arXiv Detail & Related papers (2022-04-04T13:59:04Z)
- Online Learning of Optimally Diverse Rankings [63.62764375279861]
We propose an algorithm that efficiently learns the optimal list based on users' feedback only.
We show that after $T$ queries, the regret of LDR scales as $O((N-L)\log(T))$, where $N$ is the number of all items.
arXiv Detail & Related papers (2021-09-13T12:13:20Z) - On the Linear Ordering Problem and the Rankability of Data [0.0]
We use the degree of linearity to quantify what percentage of the data aligns with an optimal ranking.
In a sports context, this is analogous to the number of games that a ranking can correctly predict in hindsight.
We show that the optimal rankings computed via the LOP maximize the hindsight accuracy of a ranking.
arXiv Detail & Related papers (2021-04-12T21:05:17Z) - PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z) - Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z) - Choppy: Cut Transformer For Ranked List Truncation [92.58177016973421]
Choppy is an assumption-free model based on the widely successful Transformer architecture.
We show Choppy improves upon recent state-of-the-art methods.
arXiv Detail & Related papers (2020-04-26T00:52:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.