Effective and secure federated online learning to rank
- URL: http://arxiv.org/abs/2412.19069v1
- Date: Thu, 26 Dec 2024 05:53:10 GMT
- Title: Effective and secure federated online learning to rank
- Authors: Shuyi Wang
- Abstract summary: Online Learning to Rank optimises ranking models using implicit user feedback, such as clicks.
It addresses several drawbacks such as the high cost of human annotations, potential misalignment between user preferences and human judgements, and the rapid changes in user query intents.
This thesis presents a comprehensive study on Federated Online Learning to Rank, addressing its effectiveness, robustness, security, and unlearning capabilities.
- Score: 5.874142059884521
- Abstract: Online Learning to Rank (OLTR) optimises ranking models using implicit user feedback, such as clicks. Unlike traditional Learning to Rank (LTR) methods that rely on a static set of training data with relevance judgements to learn a ranking model, OLTR methods update the model continually as new data arrives. Thus, it addresses several drawbacks such as the high cost of human annotations, potential misalignment between user preferences and human judgments, and the rapid changes in user query intents. However, OLTR methods typically require the collection of searchable data, user queries, and clicks, which poses privacy concerns for users. Federated Online Learning to Rank (FOLTR) integrates OLTR within a Federated Learning (FL) framework to enhance privacy by not sharing raw data. While promising, FOLTR methods currently lag behind traditional centralised OLTR due to challenges in ranking effectiveness, robustness with respect to data distribution across clients, susceptibility to attacks, and the ability to unlearn client interactions and data. This thesis presents a comprehensive study on Federated Online Learning to Rank, addressing its effectiveness, robustness, security, and unlearning capabilities, thereby expanding the landscape of FOLTR.
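For intuition, below is a minimal sketch of one FOLTR round under strong simplifying assumptions: a linear ranker, randomly simulated clicks, and plain FedAvg aggregation. All names and hyperparameters (`client_update`, the click model, the learning rate) are illustrative and not taken from the thesis.

```python
# A hedged sketch of one Federated Online Learning to Rank round:
# clients learn from local clicks and share only model updates.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_DOCS, N_FEATURES = 4, 20, 10

def client_update(global_w, docs, lr=0.1):
    """One local OLTR interaction: rank docs, observe clicks, adjust weights."""
    ranking = np.argsort(-(docs @ global_w))   # present documents by score
    clicks = rng.random(len(docs)) < 0.1       # stand-in for real user clicks
    w = global_w.copy()
    for doc in ranking:
        if clicks[doc]:
            w += lr * docs[doc]                # crude click-driven signal
    return w - global_w                        # only the update is shared

global_w = np.zeros(N_FEATURES)
for _ in range(3):                             # a few federated rounds
    updates = [client_update(global_w, rng.random((N_DOCS, N_FEATURES)))
               for _ in range(N_CLIENTS)]
    global_w += np.mean(updates, axis=0)       # FedAvg-style aggregation
```

The privacy benefit described in the abstract comes from this shape of the protocol: queries, documents, and clicks stay on the device, and only updates reach the server.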
Related papers
- How to Forget Clients in Federated Online Learning to Rank? [34.5695601040165]
We study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness.
A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal.
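One generic way to remove a client's contribution, sketched below, is for the server to retain per-round update history and rebuild the model without the departing client. This illustrates the problem setting, not necessarily the paper's method.

```python
# A hedged sketch of federated unlearning by replaying stored updates
# while omitting the client that requested removal.
import numpy as np

rng = np.random.default_rng(1)
n_rounds, n_clients, dim = 5, 4, 8
# history[r][c]: client c's update at round r, retained by the server
history = [[rng.normal(size=dim) for _ in range(n_clients)]
           for _ in range(n_rounds)]

def rebuild_without(history, forget_client, w0):
    """Replay aggregation, skipping every update from the forgotten client."""
    w = w0.copy()
    for round_updates in history:
        kept = [u for c, u in enumerate(round_updates) if c != forget_client]
        w += np.mean(kept, axis=0)
    return w

unlearned_w = rebuild_without(history, forget_client=2, w0=np.zeros(dim))
```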
arXiv Detail & Related papers (2024-01-24T12:11:41Z)
- Federated Unlearning for Human Activity Recognition [11.287645073129108]
We propose a lightweight machine unlearning method for refining the FL HAR model by selectively removing a portion of a client's training data.
Our method achieves unlearning accuracy comparable to retraining methods, resulting in speedups ranging from hundreds to thousands of times.
arXiv Detail & Related papers (2024-01-17T15:51:36Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
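A minimal sketch of the underlying idea: each client keeps its own AMSGrad moment statistics, so its effective per-coordinate step size adapts to its local (possibly non-IID) gradients. Hyperparameters and structure are illustrative, not FedLALR's exact algorithm.

```python
# Per-client adaptive learning rates via locally held AMSGrad state.
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state; the effective step adapts to local gradients."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)        # first moment
        self.v = np.zeros(dim)        # second moment
        self.v_hat = np.zeros(dim)    # running max of v (the AMSGrad twist)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # per-coordinate effective learning rate: lr / (sqrt(v_hat) + eps)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```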
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
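The core mechanics of gradient inversion can be shown on a toy linear model: optimise dummy data until its gradient matches the shared gradient, in the spirit of "Deep Leakage from Gradients". This is a generic illustration, not the CGI attack itself.

```python
# A hedged toy sketch of gradient inversion on a linear model with squared loss.
import numpy as np

rng = np.random.default_rng(2)
dim = 5
w = rng.normal(size=dim)                       # model weights known to the attacker
x_true, y_true = rng.normal(size=dim), 1.0     # the client's private sample
g_obs = (w @ x_true - y_true) * x_true         # shared gradient (squared loss)

def match_loss(x, y):
    """Distance between the dummy sample's gradient and the observed one."""
    g = (w @ x - y) * x
    return np.sum((g - g_obs) ** 2)

x, y, eps, lr = rng.normal(size=dim), 0.0, 1e-5, 0.01
for _ in range(3000):                          # finite-difference descent
    gx = np.array([(match_loss(x + eps * e, y) - match_loss(x - eps * e, y)) / (2 * eps)
                   for e in np.eye(dim)])
    gy = (match_loss(x, y + eps) - match_loss(x, y - eps)) / (2 * eps)
    x, y = x - lr * gx, y - lr * gy
# x should now approach a gradient-matching reconstruction of x_true
# (toy setting; real attacks use stronger optimisers and model knowledge)
```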
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
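A minimal sketch of the replay idea: the client mixes generator-produced pseudo-samples of earlier classes into its local update, so new-task gradients do not overwrite old knowledge. The generator call below is a placeholder for the server-trained, data-free generator described above.

```python
# Replay with synthetic "past" samples mixed into the local update (sketch).
import numpy as np

rng = np.random.default_rng(3)

def client_step(w, X_new, y_new, generator_sample, lr=0.1):
    """Mix current data with synthetic past-class samples before updating."""
    X_old, y_old = generator_sample(batch=len(X_new))   # pseudo past-class data
    X = np.vstack([X_new, X_old])
    y = np.concatenate([y_new, y_old])
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                  # logistic stand-in model
    return w - lr * X.T @ (p - y) / len(y)

# hypothetical generator; in the paper it is trained data-free on the server
gen = lambda batch: (rng.normal(size=(batch, 4)),
                     rng.integers(0, 2, size=batch).astype(float))
w = client_step(np.zeros(4), rng.normal(size=(8, 4)),
                rng.integers(0, 2, size=8).astype(float), gen)
```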
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective [61.4025671743675]
Off-policy learning to rank methods often make strong assumptions about how users generate the click data.
We show that offline reinforcement learning can adapt to various click models without complex debiasing techniques and prior knowledge of the model.
Results on various large-scale datasets demonstrate that the proposed method, CUOLR, consistently outperforms state-of-the-art off-policy learning to rank algorithms.
arXiv Detail & Related papers (2023-06-13T03:46:22Z)
- Online Meta-Learning for Model Update Aggregation in Federated Learning for Click-Through Rate Prediction [2.9649783577150837]
We propose a simple online meta-learning method to learn a strategy of aggregating the model updates.
Our method significantly outperforms the state-of-the-art in both the speed of convergence and the quality of the final learning results.
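The general idea of a learned aggregation strategy can be sketched as follows: the server weights client updates by softmax-parameterised coefficients and adjusts those coefficients online against a small held-out batch. This illustrates the concept, not the paper's specific meta-learning method.

```python
# A hedged sketch of online-learned aggregation weights for client updates.
import numpy as np

rng = np.random.default_rng(4)
dim, n_clients = 6, 3
X_val, y_val = rng.normal(size=(16, dim)), rng.normal(size=16)

def val_loss(w):
    return np.mean((X_val @ w - y_val) ** 2)

def aggregate(w, updates, theta):
    probs = np.exp(theta) / np.sum(np.exp(theta))   # learned client weights
    return w + sum(p * u for p, u in zip(probs, updates))

w, theta, eps, meta_lr = np.zeros(dim), np.zeros(n_clients), 1e-4, 0.5
for _ in range(10):                                 # federated rounds
    updates = [rng.normal(scale=0.1, size=dim) for _ in range(n_clients)]
    # finite-difference meta-gradient of the validation loss w.r.t. theta
    g = np.array([(val_loss(aggregate(w, updates, theta + eps * e))
                   - val_loss(aggregate(w, updates, theta - eps * e))) / (2 * eps)
                  for e in np.eye(n_clients)])
    theta -= meta_lr * g
    w = aggregate(w, updates, theta)
```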
arXiv Detail & Related papers (2022-08-30T18:13:53Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
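One common way to alleviate forgetting during local training is a proximal penalty that keeps local weights close to the global model. The sketch below uses that FedProx-style term purely as an illustration; it is not necessarily FedReg's exact objective.

```python
# Regularised local training: a proximal term limits drift from the global model.
import numpy as np

def local_train(w_global, X, y, mu=0.1, lr=0.05, steps=20):
    """Local gradient steps with a proximal penalty toward w_global."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-loss gradient
        grad += mu * (w - w_global)             # proximal term limits drift
        w -= lr * grad
    return w

rng = np.random.default_rng(5)
X, y = rng.normal(size=(32, 6)), rng.normal(size=32)
w_local = local_train(np.zeros(6), X, y)
```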
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
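Secure aggregation can be illustrated with pairwise additive masks that hide each individual update but cancel in the sum. Real systems, including RoFL, layer cryptographic machinery and robustness checks on top that are omitted in this sketch.

```python
# A minimal sketch of mask-based secure aggregation: for each client pair,
# one adds and the other subtracts a shared random mask, so masks cancel.
import numpy as np

rng = np.random.default_rng(6)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# pairwise masks: client i adds masks[(i, j)], client j subtracts it (i < j)
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i: m += mask
        if b == i: m -= mask
    return m

server_sum = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(server_sum, sum(updates))   # masks cancel in the aggregate
```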
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Federated Unbiased Learning to Rank [3.125116096130909]
Unbiased Learning to Rank (ULTR) studies the problem of learning a ranking function based on biased user interactions.
In this paper, we consider an on-device search setting, where users search against their personal corpora on their local devices.
We propose the FedIPS algorithm, which learns from user interactions on-device under the coordination of a central server.
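A minimal sketch of an inverse-propensity-scored (IPS) on-device update: clicked documents' gradients are reweighted by the inverse of an assumed examination propensity at their displayed rank, countering position bias. The 1/(rank+1) propensity curve here is a placeholder, not FedIPS' estimator.

```python
# A hedged sketch of an IPS-weighted on-device learning-to-rank update.
import numpy as np

def ips_update(w, docs, clicks, lr=0.1):
    ranking = np.argsort(-(docs @ w))           # order shown to the user
    grad = np.zeros_like(w)
    for rank, doc in enumerate(ranking):
        if clicks[doc]:
            propensity = 1.0 / (rank + 1)       # assumed examination model
            grad += docs[doc] / propensity      # inverse-propensity weighting
    return w + lr * grad

rng = np.random.default_rng(7)
docs = rng.normal(size=(10, 5))
clicks = rng.random(10) < 0.2                   # stand-in for on-device clicks
w = ips_update(np.zeros(5), docs, clicks)
```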
arXiv Detail & Related papers (2021-05-11T03:01:14Z)
- AliExpress Learning-To-Rank: Maximizing Online Model Performance without Going Online [60.887637616379926]
This paper proposes an evaluator-generator framework for learning-to-rank.
It consists of an evaluator that generalizes to evaluate recommendations involving the context, and a generator that maximizes the evaluator score by reinforcement learning.
Our method achieves a significant improvement in terms of Conversion Rate (CR) over the industrial-level fine-tuned model in online A/B tests.
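The evaluator-generator loop can be sketched with a toy REINFORCE learner: the generator samples an item, a (here fixed) evaluator scores the choice, and the policy gradient pushes the generator toward higher-scoring items. Both components are stand-ins for the paper's learned models.

```python
# A hedged toy sketch of a generator trained by REINFORCE against an evaluator.
import numpy as np

rng = np.random.default_rng(8)
n_items = 5
evaluator = rng.random(n_items)            # toy fixed score per item
theta = np.zeros(n_items)                  # generator's item logits

for _ in range(500):
    probs = np.exp(theta) / np.sum(np.exp(theta))
    a = rng.choice(n_items, p=probs)
    reward = evaluator[a]
    grad_logp = -probs; grad_logp[a] += 1.0   # d log pi(a) / d theta
    theta += 0.1 * reward * grad_logp         # REINFORCE update
# over time, theta shifts probability toward items the evaluator scores highly
```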
arXiv Detail & Related papers (2020-03-25T10:27:44Z)