Generating with Fairness: A Modality-Diffused Counterfactual Framework for Incomplete Multimodal Recommendations
- URL: http://arxiv.org/abs/2501.11916v4
- Date: Sat, 22 Feb 2025 06:40:08 GMT
- Title: Generating with Fairness: A Modality-Diffused Counterfactual Framework for Incomplete Multimodal Recommendations
- Authors: Jin Li, Shoujin Wang, Qi Zhang, Shui Yu, Fang Chen
- Abstract summary: We propose a novel Modality-Diffused Counterfactual (MoDiCF) framework for incomplete multimodal recommendations. MoDiCF features two key modules: a novel modality-diffused data completion module and a new counterfactual multimodal recommendation module.
- Score: 25.59598957355806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The incomplete scenario is a prevalent, practical, yet challenging setting in Multimodal Recommendations (MMRec), where some item modalities are missing due to various factors. Recently, a few efforts have sought to improve recommendation accuracy by exploring generic structures from incomplete data. However, two significant gaps persist: 1) the difficulty in accurately generating missing data due to the limited ability to capture modality distributions; and 2) the critical but overlooked visibility bias, where items with missing modalities are more likely to be disregarded due to the prioritization of items' multimodal data over user preference alignment. This bias raises serious concerns about the fair treatment of items. To bridge these two gaps, we propose a novel Modality-Diffused Counterfactual (MoDiCF) framework for incomplete multimodal recommendations. MoDiCF features two key modules: a novel modality-diffused data completion module and a new counterfactual multimodal recommendation module. The former, equipped with a particularly designed multimodal generative framework, accurately generates and iteratively refines missing data from learned modality-specific distribution spaces. The latter, grounded in the causal perspective, effectively mitigates the negative causal effects of visibility bias and thus assures fairness in recommendations. Both modules work collaboratively to address the two aforementioned significant gaps for generating more accurate and fair results. Extensive experiments on three real-world datasets demonstrate the superior performance of MoDiCF in terms of both recommendation accuracy and fairness. The code and processed datasets are released at https://github.com/JinLi-i/MoDiCF.
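The abstract's two ideas, iteratively refining a missing modality from a learned distribution and subtracting the causal effect of visibility bias from ranking scores, can be loosely illustrated with a toy sketch. This is not MoDiCF's actual algorithm: the linear "denoiser" (which simply pulls a noise sample toward a modality mean), the counterfactual reference score `score_cf`, and the weight `alpha` are all hypothetical stand-ins for the learned components the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy reverse-diffusion-style imputation of a missing modality vector ---
# A real model would use a learned denoising network conditioned on the
# available modalities; this hypothetical "denoiser" just moves a noise
# sample toward a fixed modality mean over the reverse steps.
def impute_modality(modality_mean, steps=50, noise_scale=0.1):
    x = rng.standard_normal(modality_mean.shape)  # start from pure noise
    for t in range(steps, 0, -1):
        x = x + (modality_mean - x) / t  # pull toward the modality distribution
        if t > 1:  # re-inject shrinking noise on all but the final step
            x = x + noise_scale * np.sqrt(1.0 / t) * rng.standard_normal(x.shape)
    return x

# --- Toy counterfactual debiasing of ranking scores ---
# score_full: score computed with all (observed + imputed) modalities;
# score_cf: a counterfactual score keeping only the visibility-bias path.
# Subtracting a fraction of score_cf removes the bias-only effect.
def debias(score_full, score_cf, alpha=0.5):
    return score_full - alpha * score_cf

visual_mean = np.full(4, 0.5)
imputed = impute_modality(visual_mean)          # imputed visual feature
fair_scores = debias(np.array([0.9, 0.2]),      # biased item scores
                     np.array([0.6, 0.05]))     # visibility-only scores
```

Note that the final reverse step (t = 1) adds no noise, so the toy sampler lands exactly on the target distribution's mean; a learned denoiser would instead produce a sample from the modality-specific distribution.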
Related papers
- FindRec: Stein-Guided Entropic Flow for Multi-Modal Sequential Recommendation [50.438552588818]
We propose FindRec (Flexible unified information disentanglement for multi-modal sequential Recommendation). A Stein kernel-based Integrated Information Coordination Module (IICM) theoretically guarantees distribution consistency between multimodal features and ID streams. A cross-modal expert routing mechanism adaptively filters and combines multimodal features based on their contextual relevance.
arXiv Detail & Related papers (2025-07-07T04:09:45Z) - Multi-Modal Dataset Distillation in the Wild [75.64263877043615]
We propose Multi-modal dataset Distillation in the Wild, i.e., MDW, to distill noisy multi-modal datasets into compact clean ones for effective and efficient model training. Specifically, MDW introduces learnable fine-grained correspondences during distillation and adaptively optimizes distilled data to emphasize correspondence-discriminative regions. Extensive experiments validate MDW's theoretical and empirical efficacy with remarkable scalability, surpassing prior methods by over 15% across various compression ratios.
arXiv Detail & Related papers (2025-06-02T12:18:20Z) - CROSSAN: Towards Efficient and Effective Adaptation of Multiple Multimodal Foundation Models for Sequential Recommendation [6.013740443562439]
Multimodal Foundation Models (MFMs) excel at representing diverse raw modalities.
MFMs' application in sequential recommendation remains largely unexplored.
It remains unclear whether we can efficiently adapt multiple (>2) MFMs for the sequential recommendation task.
We propose a plug-and-play Cross-modal Side Adapter Network (CROSSAN).
arXiv Detail & Related papers (2025-04-14T15:14:59Z) - Smoothing the Shift: Towards Stable Test-Time Adaptation under Complex Multimodal Noises [3.7816957214446103]
Test-Time Adaptation (TTA) aims to tackle distribution shifts using unlabeled test data without access to the source data.
Existing TTA methods fail in such multimodal scenario because the abrupt distribution shifts will destroy the prior knowledge from the source model.
We propose two novel strategies: sample identification with interquartile-range smoothing and unimodal assistance, and mutual information sharing.
arXiv Detail & Related papers (2025-03-04T13:36:16Z) - MINIMA: Modality Invariant Image Matching [52.505282811925454]
We present MINIMA, a unified image matching framework for multiple cross-modal cases.
We scale up the modalities from cheap but rich RGB-only matching data, by means of generative models.
With MD-syn, we can directly train any advanced matching pipeline on randomly selected modality pairs to obtain cross-modal ability.
arXiv Detail & Related papers (2024-12-27T02:39:50Z) - U3M: Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation [63.31007867379312]
We introduce U3M: an Unbiased Multiscale Modal Fusion Model for Multimodal Semantic Segmentation.
We employ feature fusion at multiple scales to ensure the effective extraction and integration of both global and local features.
Experimental results demonstrate that our approach achieves superior performance across multiple datasets.
arXiv Detail & Related papers (2024-05-24T08:58:48Z) - NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose a comprehensive framework NativE to achieve MMKGC in the wild.
NativE proposes a relation-guided dual adaptive fusion module that enables adaptive fusion for any modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
arXiv Detail & Related papers (2024-03-28T03:04:00Z) - FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion [29.130355774088205]
FuseMoE is a mixture-of-experts framework incorporated with an innovative gating function. Designed to integrate a diverse number of modalities, FuseMoE is effective in managing scenarios with missing modalities and irregularly sampled data trajectories.
arXiv Detail & Related papers (2024-02-05T17:37:46Z) - Causality-Inspired Fair Representation Learning for Multimodal Recommendation [10.786383939272115]
We propose a novel fair multimodal recommendation approach (dubbed FMMRec) through causality-inspired fairness-oriented modal disentanglement and relation-aware fairness learning. Our approach aims to achieve counterfactual fairness in multimodal recommendations.
arXiv Detail & Related papers (2023-10-26T13:10:59Z) - Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z) - SUMMIT: Source-Free Adaptation of Uni-Modal Models to Multi-Modal Targets [30.262094419776208]
Current approaches assume that the source data is available during adaptation and that the source consists of paired multi-modal data.
We propose a switching framework which automatically chooses between two complementary methods of cross-modal pseudo-label fusion.
Our method achieves an improvement in mIoU of up to 12% over competing baselines.
arXiv Detail & Related papers (2023-08-23T02:57:58Z) - Missing Modality Robustness in Semi-Supervised Multi-Modal Semantic Segmentation [27.23513712371972]
We propose a simple yet efficient multi-modal fusion mechanism Linear Fusion.
We also propose M3L: Multi-modal Teacher for Masked Modality Learning.
Our proposal shows an absolute improvement of up to 10% on robust mIoU above the most competitive baselines.
arXiv Detail & Related papers (2023-04-21T05:52:50Z) - Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities [76.08541852988536]
We propose to use invariant features for a missing modality imagination network (IF-MMIN).
We show that the proposed model outperforms all baselines and invariantly improves the overall emotion recognition performance under uncertain missing-modality conditions.
arXiv Detail & Related papers (2022-10-27T12:16:25Z) - Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.