On Sampling Collaborative Filtering Datasets
- URL: http://arxiv.org/abs/2201.04768v1
- Date: Thu, 13 Jan 2022 02:39:22 GMT
- Title: On Sampling Collaborative Filtering Datasets
- Authors: Noveen Sachdeva, Carole-Jean Wu, Julian McAuley
- Abstract summary: We study the practical consequences of dataset sampling strategies on the ranking performance of recommendation algorithms.
We develop an oracle, Data-Genie, which can suggest the sampling scheme that is most likely to preserve model performance for a given dataset.
- Score: 9.041133460836361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the practical consequences of dataset sampling strategies on the
ranking performance of recommendation algorithms. Recommender systems are
generally trained and evaluated on samples of larger datasets. Samples are
often taken in a naive or ad-hoc fashion: e.g. by sampling a dataset randomly
or by selecting users or items with many interactions. As we demonstrate,
commonly-used data sampling schemes can have significant consequences on
algorithm performance. Following this observation, this paper makes three main
contributions: (1) characterizing the effect of sampling on algorithm
performance, in terms of algorithm and dataset characteristics (e.g. sparsity
characteristics, sequential dynamics, etc.); (2) designing SVP-CF, which is a
data-specific sampling strategy, that aims to preserve the relative performance
of models after sampling, and is especially suited to long-tailed interaction
data; and (3) developing an oracle, Data-Genie, which can suggest the sampling
scheme that is most likely to preserve model performance for a given dataset.
The main benefit of Data-Genie is that it will allow recommender system
practitioners to quickly prototype and compare various approaches, while
remaining confident that algorithm performance will be preserved, once the
algorithm is retrained and deployed on the complete data. Detailed experiments
show that, using Data-Genie, we can discard up to 5x more data than any single
sampling strategy while maintaining the same level of performance.
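The two naive schemes called out in the abstract, sampling interactions uniformly at random and keeping only the most active users, are simple to sketch. The following Python snippet illustrates both on a toy interaction log; the column names, toy data, and 50% sampling fraction are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the two naive sampling schemes named in the abstract:
# (1) uniformly random interaction sampling and (2) keeping only the
# most active ("head") users. The column names, toy data, and 50%
# sampling fraction are illustrative assumptions, not from the paper.
import pandas as pd

def sample_interactions_random(df: pd.DataFrame, frac: float = 0.5,
                               seed: int = 0) -> pd.DataFrame:
    """Keep a uniformly random fraction of all (user, item) interactions."""
    return df.sample(frac=frac, random_state=seed)

def sample_head_users(df: pd.DataFrame, frac: float = 0.5) -> pd.DataFrame:
    """Keep every interaction of the most active fraction of users."""
    counts = df["user_id"].value_counts()          # sorted, most active first
    head = counts.index[: max(1, int(len(counts) * frac))]
    return df[df["user_id"].isin(head)]

log = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 4, 4, 4, 4],
    "item_id": [10, 11, 12, 10, 13, 11, 10, 12, 13, 14],
})
print(len(sample_interactions_random(log)))        # ~half the interactions
print(sample_head_users(log)["user_id"].unique())  # only the heaviest users
```

The two schemes preserve very different properties of the data (the head-user scheme discards the long tail entirely), which is why, as the paper shows, the choice of scheme can change the relative ranking of algorithms and why a data-aware selector like Data-Genie is useful.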
Related papers
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks, but do not capture the correct correlation between the features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- A CLIP-Powered Framework for Robust and Generalizable Data Selection [51.46695086779598]
Real-world datasets often contain redundant and noisy data, which hurts training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset.
We propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
arXiv Detail & Related papers (2024-10-15T03:00:58Z)
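The blurb above does not spell out the selection rule, but a generic CLIP-based scoring step conveys the idea: rank (image, caption) pairs by their CLIP alignment and keep the best-aligned fraction. The checkpoint name, the diagonal-logit score, and the keep fraction below are assumptions for illustration, not the paper's actual framework.

```python
# Hedged sketch of CLIP-based sample scoring: rank (image, caption) pairs
# by CLIP image-text alignment and keep the best-aligned fraction. The
# checkpoint, scoring rule, and keep fraction are assumptions; this is a
# generic illustration, not the framework proposed in the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_alignment_scores(images: list[Image.Image],
                          captions: list[str]) -> torch.Tensor:
    """Alignment score of each matched (image, caption) pair."""
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image is an (n_images, n_texts) similarity matrix; the
    # diagonal holds the score of each image with its own caption.
    return out.logits_per_image.diag()

def keep_top_fraction(scores: torch.Tensor, frac: float = 0.8) -> torch.Tensor:
    """Indices of the highest-scoring samples (assumed selection rule)."""
    k = max(1, int(len(scores) * frac))
    return torch.topk(scores, k).indices
```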
- Dataset Quantization with Active Learning based Adaptive Sampling [11.157462442942775]
We show that maintaining performance is feasible even with uneven sample distributions.
We propose a novel active learning based adaptive sampling strategy to optimize the sample selection.
Our approach outperforms the state-of-the-art dataset compression methods.
arXiv Detail & Related papers (2024-07-09T23:09:18Z)
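The adaptive strategy itself is not detailed here; as a stand-in, the textbook uncertainty-sampling step below selects the unlabeled samples whose predictions have the highest entropy. The entropy criterion and batch size are assumptions, not the paper's method.

```python
# Textbook uncertainty-sampling step as a stand-in for active-learning
# based selection: keep the unlabeled samples whose predicted class
# distribution has the highest entropy. The entropy criterion and batch
# size are assumptions, not the paper's adaptive strategy.
import numpy as np

def entropy_select(probs: np.ndarray, batch_size: int = 32) -> np.ndarray:
    """Indices of the `batch_size` most uncertain samples.

    probs: (n_unlabeled, n_classes) softmax outputs of the current model.
    """
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:batch_size]

# Toy usage with random "predictions" over 10 classes.
probs = np.random.default_rng(0).dirichlet(np.ones(10), size=1000)
chosen = entropy_select(probs)   # indices to label / keep next
```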
- RECOST: External Knowledge Guided Data-efficient Instruction Tuning [25.985023475991625]
We argue that most current data-efficient instruction-tuning methods are highly dependent on the quality of the original instruction-tuning dataset.
We propose a framework dubbed RECOST, which integrates external-knowledge-base re-ranking and diversity-consistent sampling into a single pipeline.
arXiv Detail & Related papers (2024-02-27T09:47:36Z)
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- TRIAGE: Characterizing and auditing training data for improved regression [80.11415390605215]
We introduce TRIAGE, a novel data characterization framework tailored to regression tasks and compatible with a broad class of regressors.
TRIAGE utilizes conformal predictive distributions to provide a model-agnostic scoring method, the TRIAGE score.
We show that TRIAGE's characterization is consistent and highlight its utility for improving performance via data sculpting/filtering in multiple regression settings.
arXiv Detail & Related papers (2023-10-29T10:31:59Z)
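TRIAGE's actual conformal score is not reproduced here; a much-simplified split-conformal sketch captures the flavor: rank each training point by how extreme its residual is relative to a held-out calibration set. The regressor, toy data, and percentile rule are placeholders.

```python
# Much-simplified split-conformal scoring sketch (not the TRIAGE score):
# rank each training point by how extreme its residual is relative to a
# held-out calibration set. The regressor and toy data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def conformal_scores(model, X_cal, y_cal, X_train, y_train) -> np.ndarray:
    """Fraction of calibration residuals that each training residual
    exceeds; values near 1 flag hard or unusual samples."""
    cal_resid = np.abs(y_cal - model.predict(X_cal))
    train_resid = np.abs(y_train - model.predict(X_train))
    return (train_resid[:, None] >= cal_resid[None, :]).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=500)
X_tr, y_tr, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
scores = conformal_scores(model, X_cal, y_cal, X_tr, y_tr)
# Data sculpting/filtering would then keep or reweight points by score.
```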
- Reinforced Approximate Exploratory Data Analysis [7.974685452145769]
We are the first to consider the impact of sampling in interactive data exploration settings, since sampling introduces approximation errors.
We propose a Deep Reinforcement Learning (DRL) based framework which can optimize the sample selection in order to keep the analysis and insight generation flow intact.
arXiv Detail & Related papers (2022-12-12T20:20:22Z)
- Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios [79.02009938011447]
We propose a sampling scheme that selects optimal subsets of unlabeled samples, with a fixed batch size, from the unlabeled data pool.
Experimental results show its effectiveness on both classical Machine Learning (ML) and Deep Learning (DL) tasks.
arXiv Detail & Related papers (2022-07-04T04:11:44Z)
- Adaptive Sampling Strategies to Construct Equitable Training Datasets [0.7036032466145111]
In domains ranging from computer vision to natural language processing, machine learning models have been shown to exhibit stark disparities in performance.
One factor contributing to these performance gaps is a lack of representation in the data the models are trained on.
We formalize the problem of creating equitable training datasets, and propose a statistical framework for addressing this problem.
arXiv Detail & Related papers (2022-01-31T19:19:30Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To reduce the cost of training on the enlarged dataset, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.