Can We Enhance the Quality of Mobile Crowdsensing Data Without Ground Truth?
- URL: http://arxiv.org/abs/2405.18725v2
- Date: Wed, 08 Jan 2025 08:20:07 GMT
- Title: Can We Enhance the Quality of Mobile Crowdsensing Data Without Ground Truth?
- Authors: Jiajie Li, Bo Gu, Shimin Gong, Zhou Su, Mohsen Guizani
- Abstract summary: This article proposes a prediction- and reputation-based truth discovery framework.
It can separate low-quality data from high-quality data in sensing tasks.
It outperforms existing methods in terms of identification accuracy and data quality enhancement.
- Score: 45.875832406278214
- License:
- Abstract: Mobile crowdsensing (MCS) has emerged as a prominent trend across various domains. However, ensuring the quality of the sensing data submitted by mobile users (MUs) remains a complex and challenging problem. To address this challenge, an advanced method is needed to detect low-quality sensing data and identify malicious MUs that may disrupt the normal operations of an MCS system. Therefore, this article proposes a prediction- and reputation-based truth discovery (PRBTD) framework, which can separate low-quality data from high-quality data in sensing tasks. First, we apply a correlation-focused spatio-temporal Transformer network that learns from the historical sensing data and predicts the ground truth of the data submitted by MUs. However, due to the noise in historical data for training and the bursty values within sensing data, the prediction results can be inaccurate. To address this issue, we use the implications among the sensing data, which are learned from the prediction results but are stable and less affected by inaccurate predictions, to evaluate the quality of the data. Finally, we design a reputation-based truth discovery (TD) module for identifying low-quality data with their implications. Given the sensing data submitted by MUs, PRBTD can eliminate the data with heavy noise and identify malicious MUs with high accuracy. Extensive experimental results demonstrate that the PRBTD method outperforms existing methods in terms of identification accuracy and data quality enhancement.
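The abstract gives no implementation details, but the reputation-based truth discovery (TD) step it describes follows a well-known pattern: alternately estimate each task's truth from reputation-weighted reports and update each MU's reputation from how far its reports deviate from those estimates. Below is a minimal sketch of such a CRH-style weighted TD loop in Python; the function name `truth_discovery`, the log-based reputation update, and the simulated data are illustrative assumptions, not the PRBTD implementation.

```python
import numpy as np

def truth_discovery(observations, n_iters=20, eps=1e-9):
    """Generic reputation-weighted truth discovery (CRH-style sketch).

    observations: (n_users, n_tasks) array of sensing reports from MUs.
    Returns estimated task truths and per-user reputations.
    """
    n_users, n_tasks = observations.shape
    reputations = np.ones(n_users)           # start with equal trust in every MU
    for _ in range(n_iters):
        # 1) Truth estimation: reputation-weighted average per sensing task.
        weights = reputations / (reputations.sum() + eps)
        truths = weights @ observations       # shape: (n_tasks,)

        # 2) Reputation update: users whose reports deviate more from the
        #    current truth estimates receive lower reputation.
        errors = ((observations - truths) ** 2).sum(axis=1)
        reputations = np.log((errors.sum() + eps) / (errors + eps))
        reputations = np.clip(reputations, eps, None)
    return truths, reputations

# Toy usage: 5 users report on 3 tasks; user 4 submits heavily corrupted data.
rng = np.random.default_rng(0)
true_vals = np.array([20.0, 35.0, 50.0])
reports = true_vals + rng.normal(0, 0.5, size=(5, 3))
reports[4] += 15.0                            # simulate a malicious/low-quality MU
est, rep = truth_discovery(reports)
print("estimated truths:", est.round(2))
print("reputations:", rep.round(2))           # the corrupted user should rank lowest
```

Per the abstract, PRBTD evaluates data quality using implications learned from the Transformer's predictions rather than raw deviations alone, so this loop should be read only as a structural analogy for the TD component.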
Related papers
- Surprisingly Fragile: Assessing and Addressing Prompt Instability in Multimodal Foundation Models [1.9001431325800364]
Multimodal foundation models (MFMs) such as OFASys show the potential to unlock analysis of complex data via text prompts alone.
However, their performance may suffer in the face of text input that differs even slightly from their training distribution.
This study demonstrates that prompt instability is a major concern for MFMs, leading to a consistent drop in performance across all modalities.
arXiv Detail & Related papers (2024-08-26T19:26:55Z) - PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness, and it is able to significantly improve the model performance unlike the existing data pruning strategies.
arXiv Detail & Related papers (2024-05-10T08:02:20Z) - Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Augmenting Softmax Information for Selective Classification with Out-of-Distribution Data [7.221206118679026]
We show that existing post-hoc methods perform quite differently on selective classification with out-of-distribution data (SCOD) compared to when evaluated only on OOD detection.
We propose a novel method for SCOD, Softmax Information Retaining Combination (SIRC), that augments softmax-based confidence scores with feature-agnostic information.
Experiments on a wide variety of ImageNet-scale datasets and convolutional neural network architectures show that SIRC is able to consistently match or outperform the baseline for SCOD.
arXiv Detail & Related papers (2022-07-15T14:39:57Z) - Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference with pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest to incorporate semi-supervised and positive-unlabeled (PU) learning for exploiting unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z) - Robust Event Classification Using Imperfect Real-world PMU Data [58.26737360525643]
We study robust event classification using imperfect real-world phasor measurement unit (PMU) data.
We develop a novel machine learning framework for training robust event classifiers.
arXiv Detail & Related papers (2021-10-19T17:41:43Z) - Data Profiling for Adversarial Training: On the Ruin of Problematic Data [27.11328449349065]
Problems in adversarial training include robustness-accuracy trade-off, robust overfitting, and gradient masking.
We show that these problems share one common cause -- low quality samples in the dataset.
We find that when problematic data is removed, robust overfitting and gradient masking can be largely alleviated.
arXiv Detail & Related papers (2021-02-15T10:17:24Z) - A Unified Plug-and-Play Framework for Effective Data Denoising and Robust Abstention [4.200576272300216]
We propose a unified filtering framework leveraging underlying data density.
Our framework can effectively denoise training data and avoid predicting on uncertain test data points.
arXiv Detail & Related papers (2020-09-25T04:18:08Z) - On the Role of Dataset Quality and Heterogeneity in Model Confidence [27.657631193015252]
Safety-critical applications require machine learning models that output accurate and calibrated probabilities.
Uncalibrated deep networks are known to make over-confident predictions.
We study the impact of dataset quality by examining how dataset size and label noise affect model confidence.
arXiv Detail & Related papers (2020-02-23T05:13:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.