It's all in the (Sub-)title? Expanding Signal Evaluation in Crowdfunding Research
- URL: http://arxiv.org/abs/2010.14389v1
- Date: Tue, 27 Oct 2020 15:51:31 GMT
- Title: It's all in the (Sub-)title? Expanding Signal Evaluation in Crowdfunding Research
- Authors: Constantin von Selasinsky and Andrew Jay Isaak
- Abstract summary: We compare and contrast the strength of the entrepreneur's textual success signals to project backers.
We find that incorporating subtitle information increases the variance explained by the respective models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on crowdfunding success that incorporates CATA (computer-aided text
analysis) is quickly advancing to the big leagues (e.g., Parhankangas and
Renko, 2017; Anglin et al., 2018; Moss et al., 2018) and is often theoretically
based on information asymmetry, social capital, signaling or a combination
thereof. Yet, current papers that explore crowdfunding success criteria fail to
take advantage of the full breadth of signals available and only very few such
papers examine technology projects. In this paper, we compare and contrast the
strength of the entrepreneur's textual success signals to project backers
within this category. Based on a random sample of 1,049 technology projects
collected from Kickstarter, we evaluate textual information not only from
project titles and descriptions but also from video subtitles. We find that
incorporating subtitle information increases the variance explained by the
respective models and therefore their predictive capability for funding
success. By expanding the information landscape, our work advances the field
and paves the way for more fine-grained studies of success signals in
crowdfunding and therefore for an improved understanding of investor
decision-making in the crowd.
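The abstract's core claim is that adding subtitle-derived features to models built on title and description features increases the variance explained (R^2). A minimal sketch of that nested-model comparison, using synthetic data and illustrative feature names (not the authors' actual variables or code):

```python
# Hypothetical sketch: compare explained variance of nested linear models
# for funding success, with and without subtitle-derived text features.
# All data and coefficients below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Baseline signals extracted from project title and description
X_base = rng.normal(size=(n, 3))
# Additional signals extracted from video subtitles
X_subs = rng.normal(size=(n, 2))
# Simulated funding-success outcome driven by both feature sets
y = X_base @ [0.5, -0.3, 0.2] + X_subs @ [0.4, 0.1] + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(X_base, y)
r2_full = r_squared(np.column_stack([X_base, X_subs]), y)
print(f"R^2, title+description only:  {r2_base:.3f}")
print(f"R^2, with subtitle features:  {r2_full:.3f}")
```

Because the two models are nested, the full model's in-sample R^2 can never be lower than the baseline's; the interesting question the paper addresses is how large the increment is on real Kickstarter data.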
Related papers
- Using Artificial Intelligence to Unlock Crowdfunding Success for Small Businesses [8.226509113718125]
We utilize the latest advancements in AI technology to identify crucial factors that influence the success of crowdfunding campaigns.
Our best-performing machine learning model accurately predicts the fundraising outcomes of 81.0% of campaigns.
We demonstrate that by augmenting just three aspects of the narrative using a large language model, a campaign becomes preferable to 83% of human evaluators.
arXiv Detail & Related papers (2024-04-24T20:53:10Z)
- A Latent Dirichlet Allocation (LDA) Semantic Text Analytics Approach to Explore Topical Features in Charity Crowdfunding Campaigns [0.6298586521165193]
This study introduces an inventive text analytics framework, utilizing Latent Dirichlet Allocation (LDA) to extract latent themes from textual descriptions of charity campaigns.
The study explores four themes, two each in the campaign and incentive descriptions.
Using both thematic and numerical features, the study successfully predicted campaign success with a Random Forest model.
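The pipeline this entry describes can be sketched in a few lines: fit LDA on campaign text, take each document's topic mixture as features, and train a Random Forest on them. The corpus and outcome labels below are toy placeholders, not the study's data:

```python
# Illustrative sketch (not the cited study's code): LDA topic mixtures
# from campaign descriptions feed a Random Forest success classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

docs = [
    "help children school books education supplies",
    "medical surgery hospital treatment bills",
    "school tuition education scholarship students",
    "emergency medical care hospital recovery",
]
success = [1, 0, 1, 0]  # toy funding outcomes

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)  # per-document topic proportions

clf = RandomForestClassifier(random_state=0).fit(topic_mix, success)
predictions = clf.predict(topic_mix)
```

In practice the study combines such thematic features with numerical campaign parameters; here only the topic mixtures are used, to keep the sketch minimal.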
arXiv Detail & Related papers (2024-01-03T09:17:46Z)
- Video Timeline Modeling For News Story Understanding [123.03394373132353]
We present a novel problem, namely video timeline modeling.
Our objective is to create a video-associated timeline from a set of videos related to a specific topic, thereby facilitating the content and structure understanding of the story being told.
This problem has significant potential in various real-world applications, for instance, news story summarization.
arXiv Detail & Related papers (2023-09-23T18:24:15Z)
- Who Will Support My Project? Interactive Search of Potential Crowdfunding Investors Through InSearch [5.8669103084285315]
InSearch allows founders to search for investors interactively on crowdfunding platforms.
It supports an effective overview of potential investors by leveraging a Graph Neural Network to model investor preferences.
arXiv Detail & Related papers (2022-05-04T12:59:00Z)
- Video Question Answering: Datasets, Algorithms and Challenges [99.9179674610955]
Video Question Answering (VideoQA) aims to answer natural language questions according to the given videos.
This paper provides a clear taxonomy and comprehensive analysis of VideoQA, focusing on the datasets, algorithms, and unique challenges.
arXiv Detail & Related papers (2022-03-02T16:34:09Z)
- An Empirical Investigation of Personalization Factors on TikTok [77.34726150561087]
Despite the importance of TikTok's algorithm to the platform's success and content distribution, little work has been done on the empirical analysis of the algorithm.
Using a sock-puppet audit methodology with a custom algorithm developed by us, we tested and analysed the effect of the language and location used to access TikTok.
We identify that the follow-feature has the strongest influence, followed by the like-feature and video view rate.
arXiv Detail & Related papers (2022-01-28T17:40:00Z)
- MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound [90.1857707251566]
We introduce MERLOT Reserve, a model that represents videos jointly over time.
We replace snippets of text and audio with a MASK token; the model learns by choosing the correct masked-out snippet.
Our objective learns faster than alternatives, and performs well at scale.
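The masked-snippet objective this entry describes is contrastive: the model scores candidate snippets against the context encoding around the MASK token and is trained to pick the true one. A loose numerical sketch of that selection step, with random vectors standing in for learned encodings (not the actual MERLOT Reserve code):

```python
# Loose sketch of a contrastive masked-snippet objective: pick the true
# masked-out snippet from candidates by similarity to the context encoding.
# All vectors below are random stand-ins for learned representations.
import numpy as np

rng = np.random.default_rng(0)
d = 32

context = rng.normal(size=d)                            # sequence with MASK token
true_snippet = context + rng.normal(scale=0.1, size=d)  # correlated ground truth
distractors = rng.normal(size=(3, d))                   # unrelated snippets

candidates = np.vstack([true_snippet, distractors])     # index 0 is correct
scores = candidates @ context                            # dot-product similarity
probs = np.exp(scores - scores.max())
probs /= probs.sum()                                     # softmax over candidates
loss = -np.log(probs[0])                                 # cross-entropy on the truth
print("predicted index:", scores.argmax())
```

Training pushes the true snippet's score above the distractors', which drives the cross-entropy loss toward zero.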
arXiv Detail & Related papers (2022-01-07T19:00:21Z)
- How COVID-19 Have Changed Crowdfunding: Evidence From GoFundMe [77.34726150561087]
This study uses a unique data set of all the campaigns published over the past two years on GoFundMe.
We study a corpus of crowdfunded projects, analyzing cover images and other variables commonly present on crowdfunding sites.
arXiv Detail & Related papers (2021-06-18T08:03:58Z)
- Does Crowdfunding Really Foster Innovation? Evidence from the Board Game Industry [1.776746672434207]
We investigate the link between crowdfunding and innovation using a dataset of board games.
We find that crowdfunded games tend to be more distinctive from previous games than their traditionally published counterparts.
Our findings demonstrate that the innovative potential of crowdfunding goes beyond individual products to entire industries.
arXiv Detail & Related papers (2021-01-07T18:44:47Z)
- Screenplay Quality Assessment: Can We Predict Who Gets Nominated? [53.9153892362629]
We present a method to evaluate the quality of a screenplay based on linguistic cues.
Based on industry opinions and narratology, we extract and integrate domain-specific features into common classification techniques.
arXiv Detail & Related papers (2020-05-13T02:39:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.