Alignment Unlocks Complementarity: A Framework for Multiview Circuit Representation Learning
- URL: http://arxiv.org/abs/2509.20968v1
- Date: Thu, 25 Sep 2025 10:12:04 GMT
- Title: Alignment Unlocks Complementarity: A Framework for Multiview Circuit Representation Learning
- Authors: Zhengyuan Shi, Jingxin Wang, Wentao Jiang, Chengyu Ma, Ziyang Zheng, Zhufei Chu, Weikang Qian, Qiang Xu
- Abstract summary: Multiview learning on Boolean circuits holds immense promise, as different graph-based representations offer complementary structural and semantic information. We introduce MixGate, a framework built on a principled training curriculum that teaches the model a shared, function-aware representation space. We show that our alignment-first strategy transforms masked modeling from an ineffective technique into a powerful performance driver.
- Score: 12.528410977116438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiview learning on Boolean circuits holds immense promise, as different graph-based representations offer complementary structural and semantic information. However, the vast structural heterogeneity between views, such as an And-Inverter Graph (AIG) versus an XOR-Majority Graph (XMG), poses a critical barrier to effective fusion, especially for self-supervised techniques like masked modeling. Naively applying such methods fails, as the cross-view context is perceived as noise. Our key insight is that functional alignment is a necessary precondition to unlock the power of multiview self-supervision. We introduce MixGate, a framework built on a principled training curriculum that first teaches the model a shared, function-aware representation space via an Equivalence Alignment Loss. Only then do we introduce a multiview masked modeling objective, which can now leverage the aligned views as a rich, complementary signal. Extensive experiments, including a crucial ablation study, demonstrate that our alignment-first strategy transforms masked modeling from an ineffective technique into a powerful performance driver.
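The Equivalence Alignment Loss described above can be illustrated with a minimal sketch: pull together the embeddings of node pairs that compute the same Boolean function in the AIG and XMG views. The function names and the choice of cosine distance here are illustrative assumptions, not details taken from the paper.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def equivalence_alignment_loss(aig_embeddings, xmg_embeddings):
    # Mean (1 - cosine similarity) over pairs of embeddings that
    # represent the same Boolean function in the AIG and XMG views.
    # Minimizing this pulls functionally equivalent nodes together
    # into a shared, function-aware representation space.
    assert len(aig_embeddings) == len(xmg_embeddings)
    total = sum(1.0 - cosine_similarity(u, v)
                for u, v in zip(aig_embeddings, xmg_embeddings))
    return total / len(aig_embeddings)
```

With identical embeddings for a pair the loss is 0; with orthogonal embeddings it is 1, so the objective directly measures cross-view functional misalignment.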
Related papers
- Self-Supervised Learning with a Multi-Task Latent Space Objective [71.49269645849675]
Self-supervised learning (SSL) methods learn visual representations by aligning different views of the same image. We show that assigning a separate predictor to each view type stabilizes multi-crop training, resulting in significant performance gains. This yields a simple multi-task formulation of asymmetric Siamese SSL that combines global, local, and masked views into a single framework.
arXiv Detail & Related papers (2026-02-05T16:33:30Z) - Enhancing Semi-Supervised Multi-View Graph Convolutional Networks via Supervised Contrastive Learning and Self-Training [9.300953069946969]
Graph convolutional network (GCN)-based multi-view learning provides a powerful framework for integrating structural information from heterogeneous views. Existing methods often fail to fully exploit the complementary information across views, leading to suboptimal feature representations and limited performance. We propose MV-SupGCN, a semi-supervised GCN model that integrates several complementary components with clear motivations and mutual reinforcement.
arXiv Detail & Related papers (2025-12-15T16:39:23Z) - TRACE: Learning to Compute on Graphs [15.34239150750753]
We introduce TRACE, a new paradigm built on an architecturally sound backbone and a principled learning objective. First, TRACE employs a Hierarchical Transformer that mirrors the step-by-step flow of computation. Second, we introduce function shift learning, a novel objective that decouples the learning problem.
arXiv Detail & Related papers (2025-09-26T05:22:32Z) - Harmonizing Visual Representations for Unified Multimodal Understanding and Generation [53.01486796503091]
We present Harmon, a unified autoregressive framework that harmonizes understanding and generation tasks with a shared MAR encoder. Harmon achieves state-of-the-art image generation results on the GenEval, MJHQ30K and WISE benchmarks.
arXiv Detail & Related papers (2025-03-27T20:50:38Z) - Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations according to the original dataset.
We build anchors from different views based on these representations, which increases the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z) - Balanced Multi-Relational Graph Clustering [5.531383184058319]
Multi-relational graph clustering has demonstrated remarkable success in uncovering underlying patterns in complex networks.
Our empirical study finds the pervasive presence of imbalance in real-world graphs, which is in principle contradictory to the motivation of alignment.
We propose Balanced Multi-Relational Graph Clustering (BMGC), comprising unsupervised dominant view mining and dual signals guided representation learning.
arXiv Detail & Related papers (2024-07-23T22:11:13Z) - Masked Two-channel Decoupling Framework for Incomplete Multi-view Weak Multi-label Learning [21.49630640829186]
In this paper, we focus on the complex yet highly realistic task of incomplete multi-view weak multi-label learning.
We propose a masked two-channel decoupling framework based on deep neural networks to solve this problem.
Our model is fully adaptable to arbitrary view and label absences while also performing well on the ideal full data.
arXiv Detail & Related papers (2024-04-26T11:39:50Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - Enhancing Representations through Heterogeneous Self-Supervised Learning [61.40674648939691]
We propose Heterogeneous Self-Supervised Learning (HSSL), which enforces a base model to learn from an auxiliary head whose architecture is heterogeneous from the base model.
The HSSL endows the base model with new characteristics in a representation learning way without structural changes.
The HSSL is compatible with various self-supervised methods, achieving superior performances on various downstream tasks.
arXiv Detail & Related papers (2023-10-08T10:44:05Z) - A Clustering-guided Contrastive Fusion for Multi-view Representation Learning [7.630965478083513]
We propose a deep fusion network to fuse view-specific representations into the view-common representation.
We also design an asymmetrical contrastive strategy that aligns the view-common representation and each view-specific representation.
In the incomplete-view scenario, our proposed method resists noise interference better than competing methods.
arXiv Detail & Related papers (2022-12-28T07:21:05Z) - Exploring The Role of Mean Teachers in Self-supervised Masked Auto-Encoders [64.03000385267339]
Masked image modeling (MIM) has become a popular strategy for self-supervised learning (SSL) of visual representations with Vision Transformers.
We present a simple SSL method, the Reconstruction-Consistent Masked Auto-Encoder (RC-MAE) by adding an EMA teacher to MAE.
RC-MAE converges faster and requires less memory usage than state-of-the-art self-distillation methods during pre-training.
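The EMA teacher that RC-MAE adds to MAE follows the standard exponential-moving-average update used in self-distillation. A minimal sketch of that update, assuming a flat list of parameters; the decay value 0.996 is a typical choice, not one taken from the paper.

```python
def ema_update(teacher_params, student_params, decay=0.996):
    # Exponential-moving-average update of teacher weights from
    # student weights: teacher <- decay * teacher + (1 - decay) * student.
    # The teacher thus tracks a smoothed trajectory of the student,
    # providing a stable reconstruction-consistency target.
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

At decay = 1.0 the teacher is frozen; at decay = 0.0 it copies the student each step; values close to 1 give the slow-moving target that stabilizes training.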
arXiv Detail & Related papers (2022-10-05T08:08:55Z) - Vision Transformers: From Semantic Segmentation to Dense Prediction [139.15562023284187]
We explore the global context learning potentials of vision transformers (ViTs) for dense visual prediction.
Our motivation is that through learning global context at full receptive field layer by layer, ViTs may capture stronger long-range dependency information.
We formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global-attention across windows in a pyramidal architecture.
arXiv Detail & Related papers (2022-07-19T15:49:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.