Lightweight Boosting Models for User Response Prediction Using
Adversarial Validation
- URL: http://arxiv.org/abs/2310.03778v1
- Date: Thu, 5 Oct 2023 13:57:05 GMT
- Title: Lightweight Boosting Models for User Response Prediction Using
Adversarial Validation
- Authors: Hyeonwoo Kim and Wonsung Lee
- Abstract summary: The ACM RecSys Challenge 2023, organized by ShareChat, aims to predict the probability of an app being installed.
This paper describes a lightweight solution to this challenge.
- Score: 2.4040470282119983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ACM RecSys Challenge 2023, organized by ShareChat, aims to predict the
probability of an app being installed. This paper describes our lightweight
solution to the challenge. We formulate the task as a user response prediction
problem. To enable rapid prototyping, we propose a lightweight solution
comprising the following steps: 1) using adversarial validation, we effectively
eliminate uninformative features from the dataset; 2) to address noisy continuous
features and categorical features with many unique values, we
employ feature engineering techniques; 3) we leverage Gradient Boosted
Decision Trees (GBDT) for their exceptional performance and scalability. The
experiments show that a single LightGBM model, without additional ensembling,
performs quite well. Our team achieved ninth place in the challenge with the
final leaderboard score of 6.059065. Code for our approach can be found here:
https://github.com/choco9966/recsys-challenge-2023.
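To make the adversarial-validation step (step 1) concrete, here is a minimal sketch built on LightGBM, in the spirit of the approach above. The DataFrame names, the `is_installed` target column, and all thresholds are illustrative assumptions, not the authors' exact pipeline (see their repository for the real code).

```python
import lightgbm as lgb
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def adversarial_validation(train: pd.DataFrame, test: pd.DataFrame, features: list):
    """Train a classifier to distinguish train rows from test rows.

    Features the model relies on most are those whose distribution shifts
    between train and test; dropping them removes uninformative or leaky
    features, which is the filtering idea described in the abstract."""
    X = pd.concat([train[features], test[features]], axis=0)
    y = np.r_[np.zeros(len(train)), np.ones(len(test))]  # 0 = train, 1 = test
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])
    importance = pd.Series(clf.feature_importances_, index=features)
    return auc, importance.sort_values(ascending=False)

# Iteratively drop the most shift-sensitive features until the AUC nears 0.5
# (train and test become indistinguishable), then fit the single response
# model; "is_installed" is the challenge's install label, other names are
# placeholders.
# model = lgb.LGBMClassifier().fit(train[kept_features], train["is_installed"])
```

For step 2, high-cardinality categoricals can similarly be tamed with frequency or target encoding before this filter is applied, though the exact transformations the authors used are documented in their repository.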
Related papers
- Symmetric Multi-Similarity Loss for EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2024 [17.622013322533423]
We present our champion solution for EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge in CVPR 2024.
This challenge differs from traditional visual-text retrieval tasks by providing a correlation matrix.
We propose a novel loss function, Symmetric Multi-Similarity Loss, which offers a more precise learning objective.
arXiv Detail & Related papers (2024-06-18T04:10:20Z)
- Localizing Task Information for Improved Model Merging and Compression [61.16012721460561]
We show that the information required to solve each task is still preserved after merging as different tasks mostly use non-overlapping sets of weights.
We propose Consensus Merging, an algorithm that eliminates such weights and improves the general performance of existing model merging approaches.
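As a rough illustration of the consensus idea in this summary (keep only weights that matter to enough tasks), here is a hedged sketch; the threshold `tau`, the task count `k`, and the simple delta averaging are assumptions, not the paper's exact algorithm.

```python
import torch

def consensus_merge(base, task_vectors, tau=1e-3, k=2):
    """Merge per-task deltas (finetuned minus base state dicts), keeping only
    weights that at least k tasks use non-negligibly; the rest are zeroed."""
    merged = {}
    for name, w in base.items():
        deltas = torch.stack([tv[name] for tv in task_vectors])
        used_by = (deltas.abs() > tau).sum(dim=0)  # tasks relying on each weight
        mask = (used_by >= k).to(w.dtype)          # consensus mask
        merged[name] = w + mask * deltas.mean(dim=0)
    return merged
```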
arXiv Detail & Related papers (2024-05-13T14:54:37Z)
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z)
- Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model [86.9619638550683]
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.
However, these models display significant limitations when applied to downstream tasks, such as fine-grained image classification, as a result of "decision shortcuts".
arXiv Detail & Related papers (2024-03-01T09:01:53Z)
- RecSys Challenge 2023: From data preparation to prediction, a simple, efficient, robust and scalable solution [2.0564549686015594]
The RecSys Challenge 2023, presented by ShareChat, consists in predicting whether a user will install an application on their smartphone after seeing advertising impressions in the ShareChat & Moj apps.
This paper presents the solution of 'Team UMONS' to this challenge, giving accurate results with a relatively small model that can be easily implemented in different production configurations.
arXiv Detail & Related papers (2024-01-12T10:14:10Z)
- Ask Me Anything: A simple strategy for prompting language models [24.294416731247427]
Large language models (LLMs) transfer well to new tasks out-of-the-box simply given a natural language prompt.
We develop an understanding of the effective prompt formats, finding that question-answering (QA) prompts tend to outperform those that restrict the model outputs.
We apply the collected prompts to obtain several noisy votes for the input's true label.
We find that the prompts can have very different accuracies and complex dependencies.
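A minimal sketch of the vote-aggregation step mentioned above, using plain majority voting; the actual paper weights prompts with weak supervision, so this is a simplified stand-in.

```python
from collections import Counter

def aggregate_votes(votes):
    """Combine noisy per-prompt predictions for one input by majority vote."""
    return Counter(votes).most_common(1)[0][0]

# e.g., three QA-style prompts answered the same input differently:
print(aggregate_votes(["yes", "no", "yes"]))  # -> "yes"
```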
arXiv Detail & Related papers (2022-10-05T17:59:45Z)
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models [67.3725459417758]
PERFECT is a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting.
We show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning.
Experiments on a wide range of few-shot NLP tasks demonstrate that PERFECT, while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods.
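For intuition, here is a generic bottleneck-adapter sketch in PyTorch of the kind of task-specific module such methods insert; the layer sizes and placement are illustrative assumptions, not PERFECT's exact architecture.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module added to a frozen PLM layer; only the adapter
    (not the backbone) is updated during few-shot fine-tuning."""
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection preserves the pretrained representation.
        return x + self.up(self.act(self.down(x)))
```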
arXiv Detail & Related papers (2022-04-03T22:31:25Z)
- Efficient Person Search: An Anchor-Free Approach [86.45858994806471]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
To achieve this goal, state-of-the-art models typically add a re-id branch upon two-stage detectors like Faster R-CNN.
In this work, we present an anchor-free approach to efficiently tackling this challenging task, by introducing the following dedicated designs.
arXiv Detail & Related papers (2021-09-01T07:01:33Z)
- Anchor-Free Person Search [127.88668724345195]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
Most existing works employ two-stage detectors like Faster-RCNN, yielding encouraging accuracy but with high computational overhead.
We present the Feature-Aligned Person Search Network (AlignPS), the first anchor-free framework to efficiently tackle this challenging task.
arXiv Detail & Related papers (2021-03-22T07:04:29Z)
- Conditional Channel Gated Networks for Task-Aware Continual Learning [44.894710899300435]
Convolutional Neural Networks experience catastrophic forgetting when optimized on a sequence of learning problems.
We introduce a novel framework to tackle this problem with conditional computation.
We validate our proposal on four continual learning datasets.
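As a hedged sketch of task-aware conditional channel gating: a per-task gate vector scales convolutional channels, so each task activates its own subset. The real paper learns data-dependent, discretized gates, which this simplification omits.

```python
import torch
import torch.nn as nn

class TaskChannelGate(nn.Module):
    """Per-task soft gates over convolutional channels (illustrative only)."""
    def __init__(self, channels, num_tasks):
        super().__init__()
        self.gates = nn.Embedding(num_tasks, channels)

    def forward(self, x, task_id):
        # x: (B, C, H, W); task_id: (B,) long tensor of task indices.
        g = torch.sigmoid(self.gates(task_id))    # (B, C) gates in (0, 1)
        return x * g.unsqueeze(-1).unsqueeze(-1)  # suppress unused channels
```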
arXiv Detail & Related papers (2020-03-31T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.