Reframing Audience Expansion through the Lens of Probability Density Estimation
- URL: http://arxiv.org/abs/2311.05853v1
- Date: Fri, 10 Nov 2023 03:25:53 GMT
- Title: Reframing Audience Expansion through the Lens of Probability Density Estimation
- Authors: Claudio Carvalhaes
- Abstract summary: Audience expansion helps marketers create target audiences based on a mere representative sample of their current customer base.
We present a simulation study based on the widely used MNIST dataset, where consistently high precision and recall values demonstrate our approach's ability to identify the most relevant users for an expanded audience.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Audience expansion has become an important element of prospective marketing,
helping marketers create target audiences based on a mere representative sample
of their current customer base. Within the realm of machine learning, a favored
algorithm for scaling this sample into a broader audience hinges on a binary
classification task, with class probability estimates playing a crucial role.
In this paper, we review this technique and introduce a key change in how we
choose training examples to ensure the quality of the generated audience. We
present a simulation study based on the widely used MNIST dataset, where
consistently high precision and recall values demonstrate our approach's ability
to identify the most relevant users for an expanded audience. Our results are
easily reproducible and a Python implementation is openly available on GitHub:
https://github.com/carvalhaes-ai/audience-expansion
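
To make the baseline concrete, the following is a minimal sketch of the technique the abstract describes: fit a binary classifier on a seed sample of known positives versus a random draw from the user pool, then expand the audience by thresholding the class probability estimates. This is an illustrative reconstruction on MNIST, not the authors' implementation (see the linked repository for that); the seed size, negative-sampling scheme, classifier, and threshold are all assumptions.

```python
# Minimal audience-expansion baseline: binary classification with
# class probability estimates, illustrated on MNIST. NOT the authors'
# code; seed size, sampling, model, and threshold are assumptions.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X, y = X / 255.0, y.astype(int)
rng = np.random.default_rng(0)

# Treat all images of the digit 3 as the true (unknown) target audience,
# and a small labeled subset of them as the marketer's seed sample.
target = y == 3
seed_idx = rng.choice(np.flatnonzero(target), size=500, replace=False)

# Negatives are drawn at random from the full pool, so they may contain
# unlabeled positives -- the positive-unlabeled flavor of the problem.
neg_idx = rng.choice(len(y), size=500, replace=False)

X_train = np.vstack([X[seed_idx], X[neg_idx]])
y_train = np.concatenate([np.ones(500), np.zeros(500)])
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score the whole pool and expand the audience by thresholding the
# estimated class probabilities.
scores = clf.predict_proba(X)[:, 1]
expanded = scores >= 0.9  # assumed operating threshold

print("precision:", precision_score(target, expanded))
print("recall:", recall_score(target, expanded))
```

The paper's contribution is precisely a change in how the training examples are chosen, which this baseline does not implement; the sketch only fixes the surrounding pipeline so that change has a concrete place to land.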
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Prompt-based Personality Profiling: Reinforcement Learning for Relevance Filtering [8.20929362102942]
Author profiling is the task of inferring characteristics about individuals by analyzing content they share.
We propose a new method for author profiling which aims at distinguishing relevant from irrelevant content first, followed by the actual user profiling only with relevant data.
We evaluate our method for Big Five personality trait prediction on two Twitter corpora.
arXiv Detail & Related papers (2024-09-06T08:43:10Z)
- Sample Complexity of Preference-Based Nonparametric Off-Policy Evaluation with Deep Networks [58.469818546042696]
We study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
By appropriately selecting the size of a ReLU network, we show that one can leverage any low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2023-10-16T16:27:06Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One approach in deep-neural-network-based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition [47.69171542776917]
We find that features with richer semantic diversity can significantly improve the open-set performance under the same uncertainty scores.
A novel Prototypical Similarity Learning (PSL) framework is proposed to preserve the instance variance within each class, retaining more instance-specific information.
arXiv Detail & Related papers (2023-03-25T04:07:36Z)
- DFW-PP: Dynamic Feature Weighting based Popularity Prediction for Social Media Content [4.348651617004765]
Over-saturation of content on social media platforms has motivated us to identify the important factors that affect content popularity.
We propose the DFW-PP framework, to learn the importance of different features that vary over time.
The proposed method is evaluated on a benchmark dataset and shows promising results.
arXiv Detail & Related papers (2021-10-16T08:40:58Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates on the low-dimensional local parameters for every update of the shared representation (a minimal illustrative sketch of this alternating scheme appears after this list).
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Automatic Curation of Large-Scale Datasets for Audio-Visual Representation Learning [62.47593143542552]
We describe a subset optimization approach for automatic dataset curation.
We demonstrate that our approach finds videos with high audio-visual correspondence and show that self-supervised models trained on our automatically constructed data achieve downstream performance similar to models trained on existing video datasets of similar scale.
arXiv Detail & Related papers (2021-01-26T14:27:47Z)
- Out-distribution aware Self-training in an Open World Setting [62.19882458285749]
We leverage unlabeled data in an open world setting to further improve prediction performance.
We introduce out-distribution aware self-training, which includes a careful sample selection strategy.
Our classifiers are by design out-distribution aware and can thus distinguish task-related inputs from unrelated ones.
arXiv Detail & Related papers (2020-12-21T12:25:04Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep the resulting dataset tractable, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
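
As noted above under "Exploiting Shared Representations for Personalized Federated Learning", the alternating scheme that entry summarizes can be sketched in a few lines: each round, every client takes many cheap gradient steps on its low-dimensional local head while the shared representation is frozen, then contributes a single gradient for the representation, which the server averages. The linear model, dimensions, and step sizes below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of a shared-representation / local-head scheme in the
# spirit of the federated learning entry above; the linear model and
# all hyperparameters are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_clients, d, k, n = 20, 50, 5, 100  # clients, ambient dim, rep dim, samples/client

# Ground truth: one shared low-dimensional representation, one head per client.
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]
heads_true = rng.normal(size=(n_clients, k))
data = []
for i in range(n_clients):
    X = rng.normal(size=(n, d))
    y = X @ B_true @ heads_true[i] + 0.01 * rng.normal(size=n)
    data.append((X, y))

B = np.linalg.qr(rng.normal(size=(d, k)))[0]  # shared representation (learned)
heads = np.zeros((n_clients, k))              # client-specific heads (learned)
lr_head, lr_rep, local_steps = 0.05, 0.01, 10

for _ in range(200):
    grad_B = np.zeros_like(B)
    for i, (X, y) in enumerate(data):
        # Many cheap local updates on the low-dimensional head...
        for _ in range(local_steps):
            r = X @ B @ heads[i] - y
            heads[i] -= lr_head * (B.T @ (X.T @ r)) / n
        # ...then one gradient contribution for the shared representation.
        r = X @ B @ heads[i] - y
        grad_B += np.outer(X.T @ r, heads[i]) / n
    B -= lr_rep * grad_B / n_clients  # server averages client gradients

mse = np.mean([np.mean((X @ B @ heads[i] - y) ** 2) for i, (X, y) in enumerate(data)])
print(f"average training MSE: {mse:.4f}")
```

The design point the entry highlights is the asymmetry: the head updates are cheap and local, so clients can afford many of them per round, while the expensive shared parameters move only once per round.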