Optimal Targeting in Fundraising: A Machine Learning Approach
- URL: http://arxiv.org/abs/2103.10251v1
- Date: Wed, 10 Mar 2021 23:06:35 GMT
- Authors: Tobias Cagala, Ulrich Glogowsky, Johannes Rincke, Anthony Strittmatter
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper studies optimal targeting as a means to increase fundraising
efficacy. We randomly provide potential donors with an unconditional gift and
use causal-machine learning techniques to "optimally" target this fundraising
tool to the predicted net donors: individuals who, in expectation, give more
than their solicitation costs. With this strategy, our fundraiser avoids lossy
solicitations, significantly boosts available funds, and, consequently, can
increase service and goods provision. Further, to realize these gains, the
charity can merely rely on readily available data. We conclude that charities
that refrain from fundraising targeting waste significant resources.
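The targeting rule the abstract describes, soliciting only predicted net donors whose expected donation exceeds the solicitation cost, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the predictor, feature names, coefficients, and cost figure are all illustrative assumptions standing in for the paper's causal-ML model and the charity's actual data.

```python
# Sketch of net-donor targeting: solicit only when the predicted donation,
# net of solicitation cost, is positive. The predictor below is a toy
# stand-in for a fitted conditional-mean model; all numbers are assumed.

SOLICITATION_COST = 2.0  # assumed per-contact cost (letter + gift)

def predict_expected_donation(donor):
    """Toy stand-in for a fitted model of E[donation | features]."""
    # Past giving behavior is typically highly predictive of future giving.
    return 0.5 * donor["past_donations"] + 1.5 * donor["donated_last_year"]

def is_net_donor(donor, cost=SOLICITATION_COST):
    """Targeting rule: solicit iff expected donation exceeds the cost."""
    return predict_expected_donation(donor) > cost

donors = [
    {"past_donations": 10.0, "donated_last_year": 1},  # expected 6.5: solicit
    {"past_donations": 1.0, "donated_last_year": 0},   # expected 0.5: skip
]
targets = [d for d in donors if is_net_donor(d)]
```

Under this rule, the second donor is skipped because the expected gift does not cover the solicitation cost, which is exactly how the strategy avoids lossy solicitations.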
Related papers
- No-Regret Learning Under Adversarial Resource Constraints: A Spending Plan Is All You Need! [56.80767500991973]
We focus on two canonical settings: $(i)$ online resource allocation where rewards and costs are observed before action selection, and $(ii)$ online learning with resource constraints where they are observed after action selection, under full feedback or bandit feedback.
It is well known that achieving sublinear regret in these settings is impossible when reward and cost distributions may change arbitrarily over time.
We design general (primal-)dual methods that achieve sublinear regret with respect to baselines that follow the spending plan. Crucially, the performance of our algorithms improves when the spending plan ensures a well-balanced distribution of the budget.
arXiv Detail & Related papers (2025-06-16T08:42:31Z) - The Effects of Moral Framing on Online Fundraising Outcomes: Evidence from GoFundMe Campaigns [2.305290404567739]
This study examines the impact of moral framing on fundraising outcomes, including both monetary and social support.
We focused on three moral frames: care, fairness, and (ingroup) loyalty, and measured their presence in campaign appeals.
arXiv Detail & Related papers (2025-05-16T15:31:56Z) - Reinforcement Learning with LTL and $\omega$-Regular Objectives via Optimality-Preserving Translation to Average Rewards [43.816375964005026]
Linear temporal logic (LTL) and, more generally, $\omega$-regular objectives are alternatives to the traditional discounted-sum and average reward objectives in reinforcement learning.
We show that each RL problem for $\omega$-regular objectives can be reduced to a limit-average reward problem in an optimality-preserving fashion.
arXiv Detail & Related papers (2024-10-16T02:42:37Z) - Query Routing for Homogeneous Tools: An Instantiation in the RAG Scenario [62.615210194004106]
Current research on tool learning primarily focuses on selecting the most effective tool from a wide array of options, often overlooking cost-effectiveness.
In this paper, we address the selection of homogeneous tools by predicting both their performance and the associated cost required to accomplish a given task.
arXiv Detail & Related papers (2024-06-18T09:24:09Z) - Using Artificial Intelligence to Unlock Crowdfunding Success for Small Businesses [8.226509113718125]
We utilize the latest advancements in AI technology to identify crucial factors that influence the success of crowdfunding campaigns.
Our best-performing machine learning model accurately predicts the fundraising outcomes of 81.0% of campaigns.
We demonstrate that by augmenting just three aspects of the narrative using a large language model, a campaign becomes preferable to 83% of human evaluators.
arXiv Detail & Related papers (2024-04-24T20:53:10Z) - Efficient Public Health Intervention Planning Using Decomposition-Based Decision-Focused Learning [33.14258196945301]
We show how to exploit the structure of Restless Multi-Armed Bandits (RMABs) to speed up intervention planning.
We use real-world data from an Indian NGO, ARMMAN, to show that our approach is up to two orders of magnitude faster than the state-of-the-art approach.
arXiv Detail & Related papers (2024-03-08T21:31:00Z) - Dense Reward for Free in Reinforcement Learning from Human Feedback [64.92448888346125]
We leverage the fact that the reward model contains more information than just its scalar output.
We use the reward model's attention weights to redistribute the reward along the whole completion.
Empirically, we show that it stabilises training, accelerates the rate of learning, and, in practical cases, may lead to better local optima.
arXiv Detail & Related papers (2024-02-01T17:10:35Z) - Market Responses to Genuine Versus Strategic Generosity: An Empirical Examination of NFT Charity Fundraisers [15.310650714527602]
Nonfungible token (NFT) charity fundraisers involve the sale of NFTs of artistic works with the proceeds donated to philanthropic causes.
We investigate the causal effect of purchasing an NFT within the charity fundraiser on a donor's later market outcomes.
We show that charity-NFT "relisters" experience significant penalties in the market, in terms of the prices they are able to command on other NFT listings.
arXiv Detail & Related papers (2024-01-22T15:58:47Z) - A Latent Dirichlet Allocation (LDA) Semantic Text Analytics Approach to Explore Topical Features in Charity Crowdfunding Campaigns [0.6298586521165193]
This study introduces an inventive text analytics framework, utilizing Latent Dirichlet Allocation (LDA) to extract latent themes from textual descriptions of charity campaigns.
The study explored four themes, two each in the campaign and incentive descriptions.
Using both thematic and numerical features, a Random Forest model successfully predicted campaign success.
arXiv Detail & Related papers (2024-01-03T09:17:46Z) - Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience [89.30876995059168]
This paper addresses the problem of inverse reinforcement learning (IRL): inferring the reward function of an agent from observations of its behavior.
arXiv Detail & Related papers (2022-08-09T17:29:49Z) - SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
arXiv Detail & Related papers (2022-03-18T16:50:38Z) - Inferring Lexicographically-Ordered Rewards from Preferences [82.42854687952115]
This paper proposes a method for inferring multi-objective reward-based representations of an agent's observed preferences.
We model the agent's priorities over different objectives as entering lexicographically, so that objectives with lower priorities matter only when the agent is indifferent with respect to objectives with higher priorities.
arXiv Detail & Related papers (2022-02-21T12:01:41Z) - Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning [100.73223416589596]
We propose a cost-sensitive portfolio selection method with deep reinforcement learning.
Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations.
A new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning.
arXiv Detail & Related papers (2020-03-06T06:28:17Z)
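The cost-sensitive reward in the last entry above, which maximizes accumulated return while constraining transaction costs, can be sketched as a log-return term minus a turnover penalty. This is an illustrative formulation under assumed parameters (the proportional cost rate and penalty form are my assumptions), not the paper's exact reward function.

```python
# Sketch of a cost-sensitive portfolio reward: log-return of the rebalanced
# portfolio minus a transaction-cost penalty proportional to turnover.
# The cost rate and penalty form are illustrative assumptions.
import math

def cost_sensitive_reward(weights_prev, weights_new, price_relatives,
                          cost_rate=0.0025):
    """Reward = log portfolio growth - proportional transaction cost."""
    # Portfolio growth factor under the new weights over one period.
    growth = sum(w * r for w, r in zip(weights_new, price_relatives))
    # Turnover: total fraction of the portfolio traded when rebalancing.
    turnover = sum(abs(a - b) for a, b in zip(weights_new, weights_prev))
    return math.log(growth) - cost_rate * turnover

# Rebalancing toward the rising asset earns return but pays a trading cost.
r = cost_sensitive_reward([0.5, 0.5], [0.7, 0.3], [1.05, 0.99])
```

The turnover term makes frequent large rebalancing unprofitable unless the expected return gain outweighs the trading cost, which is the trade-off the reward is designed to encode.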
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.