Improved Adaboost Algorithm for Web Advertisement Click Prediction Based on Long Short-Term Memory Networks
- URL: http://arxiv.org/abs/2408.05245v1
- Date: Thu, 8 Aug 2024 03:27:02 GMT
- Title: Improved Adaboost Algorithm for Web Advertisement Click Prediction Based on Long Short-Term Memory Networks
- Authors: Qixuan Yu, Xirui Tang, Feiyang Li, Zinan Cao
- Abstract summary: This paper explores an improved Adaboost algorithm based on Long Short-Term Memory Networks (LSTMs).
By comparing it with several common machine learning algorithms, the paper analyses the advantages of the new model in ad click prediction.
It is shown that the improved algorithm proposed in this paper performs well in user ad click prediction with an accuracy of 92%.
- Score: 2.7959678888027906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores an improved Adaboost algorithm based on Long Short-Term Memory Networks (LSTMs), which aims to improve the prediction accuracy of user clicks on web page advertisements. By comparing it with several common machine learning algorithms, the paper analyses the advantages of the new model in ad click prediction. The improved algorithm achieves an accuracy of 92% in user ad click prediction, an improvement of 13.6 percentage points over the best of the other three base models (78.4%). This significant improvement indicates that the algorithm is better able to capture user behavioural characteristics and time-series patterns. The paper also evaluates the model on further performance metrics, including precision, recall, and F1 score, and shows that the improved LSTM-based Adaboost algorithm is significantly ahead of the traditional models on all of them, further validating its effectiveness and superiority. In particular, when facing complex and dynamically changing user behaviour, the model adapts better and makes more accurate predictions. To ensure the practicality and reliability of the model, the study also examines the difference in accuracy between the training set and the test set. After validation, the two differ by only 1.7 percentage points, a small gap indicating that the model has good generalisation ability and can be applied effectively to real-world scenarios.
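The paper does not include an implementation, but the core mechanism, AdaBoost with small LSTM classifiers acting as weighted base learners, is concrete enough to sketch. The following is a minimal illustration under assumptions of our own (binary click labels, fixed-length behaviour sequences, an AdaBoost.M1-style loop, synthetic data); the architecture, hyperparameters, and helper names are illustrative, not the authors' code.

```python
# Minimal sketch: AdaBoost.M1 with small LSTM classifiers as weighted base
# learners. All shapes, hyperparameters, and the synthetic data below are
# illustrative assumptions, not the paper's actual setup.
import numpy as np
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)                # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)     # logits: (batch,)

def fit_weak_learner(X, y, w, epochs=30):
    """Train one LSTM on AdaBoost sample weights w."""
    model = LSTMClassifier(X.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        (loss_fn(model(X), y) * w).sum().backward()  # weight each sample's loss
        opt.step()
    return model

def adaboost_lstm(X, y, n_rounds=5):
    """AdaBoost.M1 over LSTM base learners; y holds binary click labels."""
    n = len(y)
    w = torch.full((n,), 1.0 / n)               # uniform initial sample weights
    y_pm = 2 * y - 1                            # {-1, +1} view of the labels
    learners, alphas = [], []
    for _ in range(n_rounds):
        model = fit_weak_learner(X, y, w)
        with torch.no_grad():
            pred = torch.sign(model(X))         # {-1, +1} predictions
        err = w[pred != y_pm].sum().clamp(1e-10, 1 - 1e-10)
        alpha = 0.5 * torch.log((1 - err) / err)
        w = w * torch.exp(-alpha * y_pm * pred) # up-weight the misclassified
        w = w / w.sum()
        learners.append(model)
        alphas.append(alpha)

    def predict(Xq):
        with torch.no_grad():
            score = sum(a * torch.sign(m(Xq)) for a, m in zip(alphas, learners))
        return (score > 0).long()
    return predict

# Toy usage on synthetic click sequences: 200 users, 10 timesteps, 4 features.
rng = np.random.default_rng(0)
X = torch.tensor(rng.normal(size=(200, 10, 4)), dtype=torch.float32)
y = (X[:, :, 0].mean(dim=1) > 0).float()        # synthetic "clicked" labels
predict = adaboost_lstm(X, y)
print("train accuracy:", (predict(X) == y.long()).float().mean().item())
```

The essential coupling is the weighted loss: each new LSTM is pushed toward the users that earlier learners misclassified, which is how the boosting loop is grafted onto a sequence model.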
Related papers
- Beyond Accuracy: Ensuring Correct Predictions With Correct Rationales [10.397502254316645]
We propose a two-phase scheme to ensure double-correct predictions.
First, we curate a new dataset that offers structured rationales for visual recognition tasks.
Second, we propose a rationale-informed optimization method to guide the model in disentangling and localizing visual evidence.
arXiv Detail & Related papers (2024-10-31T18:33:39Z)
- Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models [68.23649978697027]
Forecast-PEFT is a fine-tuning strategy that freezes the majority of the model's parameters, focusing adjustments on newly introduced prompts and adapters.
Our experiments show that Forecast-PEFT outperforms traditional full fine-tuning methods in motion prediction tasks.
Forecast-FT further improves prediction performance, achieving up to a 9.6% improvement over conventional baseline methods.
arXiv Detail & Related papers (2024-07-28T19:18:59Z)
- Towards An Online Incremental Approach to Predict Students Performance [0.8287206589886879]
We propose a memory-based online incremental learning approach for updating an online classifier.
Our approach improves model accuracy by nearly 10% compared with the current state of the art.
arXiv Detail & Related papers (2024-05-03T17:13:26Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time [69.7693300927423]
We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations improves accuracy and robustness (a minimal weight-averaging sketch appears after this list).
We show that the model soup approach extends to multiple image classification and natural language processing tasks.
arXiv Detail & Related papers (2022-03-10T17:03:49Z)
- No One Representation to Rule Them All: Overlapping Features of Training Methods [12.58238785151714]
High-performing models tend to make similar predictions regardless of training methodology.
Recent work has shown that very different training techniques, such as large-scale contrastive learning, can yield competitively high accuracy.
We show that these models specialize in how they generalize the data, leading to higher ensemble performance.
arXiv Detail & Related papers (2021-10-20T21:29:49Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Efficient Action Recognition Using Confidence Distillation [9.028144245738247]
We propose a confidence distillation framework that teaches the student sampler a representation of the teacher's uncertainty.
We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves significant improvements in action recognition accuracy (up to 20%) and computational efficiency (more than 40%).
arXiv Detail & Related papers (2021-09-05T18:25:49Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration, and how some models that score lower on standard benchmarks perform on par with the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
- Iterative Boosting Deep Neural Networks for Predicting Click-Through Rate [15.90144113403866]
The click-through rate (CTR) reflects the ratio of clicks on a specific item to its total number of views.
XdBoost is an iterative three-stage neural network model influenced by the traditional machine learning boosting mechanism.
arXiv Detail & Related papers (2020-07-26T09:41:16Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
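The prototype update described in the last entry above is concrete enough to sketch. Below is a minimal, hedged illustration of confidence-weighted prototype refinement; in the paper the confidence function itself is meta-learned, whereas here a plain softmax over negative distances stands in for it, and `refine_prototypes` plus all shapes are illustrative assumptions.

```python
# Minimal sketch of confidence-weighted prototype refinement. In the paper the
# confidence function is meta-learned; here a softmax over negative distances
# stands in for it. `refine_prototypes` and all shapes are our assumptions.
import torch

def refine_prototypes(protos, queries, temperature=1.0):
    """protos: (C, D) class prototypes from the support set.
    queries: (Q, D) unlabeled query embeddings."""
    dist = torch.cdist(queries, protos)              # (Q, C) distances
    conf = torch.softmax(-dist / temperature, dim=1) # soft per-class confidence
    weighted = conf.T @ queries                      # (C, D) weighted query sums
    mass = conf.sum(dim=0, keepdim=True).T           # (C, 1) total weight
    # Mix each original prototype (weight 1) with its confident queries.
    return (protos + weighted) / (1.0 + mass)

# Toy usage: a 5-way episode with 3-dimensional embeddings.
protos = torch.randn(5, 3)
queries = torch.randn(20, 3)
print(refine_prototypes(protos, queries).shape)      # torch.Size([5, 3])
```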
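As referenced in the model-soups entry above, the weight-averaging step itself is small. This sketch assumes all models share one architecture and were fine-tuned from the same initialisation; `uniform_soup` is a hypothetical helper name, not the paper's released code.

```python
# Minimal sketch of a uniform "model soup": element-wise averaging of the
# weights of several fine-tuned models. Assumes identical architectures and
# state-dict keys; `uniform_soup` is a hypothetical helper, not released code.
import torch

def uniform_soup(state_dicts):
    """Average a list of compatible state dicts into a single set of weights."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Toy usage: in practice these would be fine-tuned from one initialisation.
models = [torch.nn.Linear(4, 2) for _ in range(3)]
souped = torch.nn.Linear(4, 2)
souped.load_state_dict(uniform_soup([m.state_dict() for m in models]))
```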