Design and Evaluation of Camera-Centric Mobile Crowdsourcing Applications
- URL: http://arxiv.org/abs/2409.03012v1
- Date: Wed, 4 Sep 2024 18:10:35 GMT
- Title: Design and Evaluation of Camera-Centric Mobile Crowdsourcing Applications
- Authors: Abby Stylianou, Michelle Brachman, Albatool Wazzan, Samuel Black, Richard Souvenir
- Abstract summary: This project seeks to understand how the application design affects a user's willingness to contribute and the quantity and quality of the data they capture.
We designed three versions of a camera-based mobile crowdsourcing application, which varied in the amount of labeling effort requested of the user.
The results suggest that higher levels of user labeling do not lead to reduced contribution.
- Score: 3.941600320957518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The data that underlies automated methods in computer vision and machine learning, such as image retrieval and fine-grained recognition, often comes from crowdsourcing. In contexts that rely on the intrinsic motivation of users, we seek to understand how the application design affects a user's willingness to contribute and the quantity and quality of the data they capture. In this project, we designed three versions of a camera-based mobile crowdsourcing application, which varied in the amount of labeling effort requested of the user, and conducted a user study to evaluate the trade-off between the level of user-contributed information requested and the quantity and quality of labeled images collected. The results suggest that higher levels of user labeling do not lead to reduced contribution. Users collected and annotated the most images using the application version with the highest requested level of labeling, with no decrease in user satisfaction. In preliminary experiments, the additional labeled data supported increased performance on an image retrieval task.
Related papers
- Empowering Visually Impaired Individuals: A Novel Use of Apple Live
Photos and Android Motion Photos [3.66237529322911]
We advocate for the use of Apple Live Photos and Android Motion Photos technologies.
Our findings reveal that both Live Photos and Motion Photos outperform single-frame images in common visual assisting tasks.
arXiv Detail & Related papers (2023-09-14T20:46:35Z) - UX Heuristics and Checklist for Deep Learning powered Mobile
Applications with Image Classification [1.2437226707039446]
This study examines existing mobile applications with image classification and develops an initial set of heuristics for Deep Learning powered mobile applications with image classification, decomposed into a checklist.
In order to facilitate the usage of the checklist, we also developed an online course presenting the concepts and heuristics, as well as a web-based tool to support evaluations using the checklist.
arXiv Detail & Related papers (2023-07-05T20:23:34Z) - Identifying Professional Photographers Through Image Quality and
Aesthetics in Flickr [0.0]
This study reveals the lack of suitable data sets in photo and video sharing platforms.
We created one of the largest labelled datasets on Flickr, containing multimodal data, which has been open sourced.
We examined the relationship between the aesthetics and technical quality of a picture and the social activity of that picture.
arXiv Detail & Related papers (2023-07-04T14:55:37Z) - Collaborative Image Understanding [5.5174379874002435]
We show that collaborative information can be leveraged to improve the classification process of new images.
A series of experiments on datasets from e-commerce and social media demonstrates that considering collaborative signals helps to significantly improve the performance of the main task of image classification by up to 9.1%.
arXiv Detail & Related papers (2022-10-21T12:13:08Z) - Can you recommend content to creatives instead of final consumers? A
RecSys based on user's preferred visual styles [69.69160476215895]
This report is an extension of the paper "Learning Users' Preferred Visual Styles in an Image Marketplace", presented at ACM RecSys '22.
We design a RecSys that learns visual style preferences tied to the semantics of the projects users work on.
arXiv Detail & Related papers (2022-08-23T12:11:28Z) - Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
arXiv Detail & Related papers (2022-07-25T17:58:16Z) - An Automatic Image Content Retrieval Method for better Mobile Device
Display User Experiences [91.3755431537592]
A new mobile application is proposed for image content retrieval and classification on mobile device displays.
The application was run on thousands of pictures and showed encouraging results towards a better user visual experience with mobile displays.
arXiv Detail & Related papers (2021-08-26T23:44:34Z) - CapWAP: Captioning with a Purpose [56.99405135645775]
We propose a new task, Captioning with a Purpose (CapWAP).
Our goal is to develop systems that can be tailored to be useful for the information needs of an intended population.
We show that it is possible to use reinforcement learning to directly optimize for the intended information need.
arXiv Detail & Related papers (2020-11-09T09:23:55Z) - Learning to Detect Important People in Unlabelled Images for
Semi-supervised Important People Detection [85.91577271918783]
We propose learning important people detection on partially annotated images.
Our approach iteratively learns to assign pseudo-labels to individuals in un-annotated images.
We have collected two large-scale datasets for evaluation.
arXiv Detail & Related papers (2020-04-16T10:09:37Z) - Adjusting Image Attributes of Localized Regions with Low-level Dialogue [83.06971746641686]
We develop a task-oriented dialogue system to investigate low-level instructions for NLIE.
Our system grounds language on the level of edit operations, and suggests options for a user to choose from.
An analysis shows that users generally adapt to utilizing the proposed low-level language interface.
arXiv Detail & Related papers (2020-02-11T20:59:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.