SHORING: Design Provable Conditional High-Order Interaction Network via
Symbolic Testing
- URL: http://arxiv.org/abs/2107.01326v1
- Date: Sat, 3 Jul 2021 02:33:32 GMT
- Title: SHORING: Design Provable Conditional High-Order Interaction Network via
Symbolic Testing
- Authors: Hui Li, Xing Fu, Ruofan Wu, Jinyu Xu, Kai Xiao, Xiaofu Chang, Weiqiang
Wang, Shuai Chen, Leilei Shi, Tao Xiong, Yuan Qi
- Abstract summary: We propose a symbolic testing framework that helps to answer the question of what kinds of expert-derived features could be learned by a neural network.
Inspired by this testing framework, we introduce an efficient architecture named SHORING.
We argue that SHORING is capable of learning certain standard symbolic expressions which the standard multi-head self-attention network fails to learn.
- Score: 15.324528842034177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning provides a promising way to extract effective representations
from raw data in an end-to-end fashion and has proven its effectiveness in
various domains such as computer vision, natural language processing, etc.
However, in domains such as content/product recommendation and risk management,
where sequences of event data are the most common form of raw data and
expert-derived features are more widely used, deep learning models have yet to
dominate. In this paper, we propose a symbolic testing framework that helps to
answer the question of what kinds of expert-derived features could be learned
by a neural network. Inspired by this testing framework, we introduce an
efficient architecture named SHORING, which contains two components:
\textit{event network} and \textit{sequence network}. The \textit{event}
network learns arbitrary high-order \textit{event-level} embeddings
efficiently via a provable reparameterization trick, while the
\textit{sequence} network aggregates the sequence of \textit{event-level}
embeddings. We argue
that SHORING is capable of learning certain standard symbolic expressions which
the standard multi-head self-attention network fails to learn, and conduct
comprehensive experiments and ablation studies on four synthetic datasets and
three real-world datasets. The results show that SHORING empirically
outperforms the state-of-the-art methods.
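The abstract does not spell out the reparameterization trick itself, but a well-known instance of reparameterizing high-order interactions is the factorization-machine identity, which rewrites the sum of all pairwise element-wise products of event-feature embeddings so it costs O(n·d) instead of O(n²·d). The sketch below illustrates that idea only; the function names and the choice of second-order interactions are assumptions for illustration, not SHORING's actual implementation.

```python
import numpy as np

def second_order_interactions(V):
    """Sum of element-wise products over all embedding pairs i < j.

    Uses the reparameterization
        sum_{i<j} v_i * v_j = 0.5 * ((sum_i v_i)^2 - sum_i v_i^2),
    so the cost is O(n*d) rather than O(n^2*d) for n embeddings
    of dimension d (V has shape (n, d))."""
    s = V.sum(axis=0)            # (d,): sum of all embeddings
    sq = (V ** 2).sum(axis=0)    # (d,): sum of squared embeddings
    return 0.5 * (s ** 2 - sq)

def second_order_naive(V):
    """Direct O(n^2) computation, for checking the identity."""
    n, d = V.shape
    out = np.zeros(d)
    for i in range(n):
        for j in range(i + 1, n):
            out += V[i] * V[j]
    return out
```

Higher orders admit analogous closed forms built from power sums, which is what makes learning high-order event-level embeddings tractable at the event network's scale.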
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - (PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork [60.889175951038496]
Large-scale neural networks have demonstrated remarkable performance in different domains like vision and language processing.
One of the key questions of structural pruning is how to estimate the channel significance.
We propose a novel algorithmic framework, namely \texttt{PASS}.
It is a tailored hyper-network that takes both visual prompts and network weight statistics as input and outputs layer-wise channel sparsity in a recurrent manner.
arXiv Detail & Related papers (2024-07-24T16:47:45Z) - Text2Data: Low-Resource Data Generation with Textual Control [100.5970757736845]
Text2Data is a novel approach that utilizes unlabeled data to understand the underlying data distribution.
It undergoes finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z) - Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z) - TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z) - Deepfake Detection via Joint Unsupervised Reconstruction and Supervised
Classification [25.84902508816679]
We introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, an aspect that existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly-used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z) - FaceTopoNet: Facial Expression Recognition using Face Topology Learning [23.139108533273777]
We propose an end-to-end deep model for facial expression recognition, which is capable of learning an effective tree topology of the face.
Our model then traverses the learned tree to generate a sequence, which is then used to form an embedding to feed a sequential learner.
We perform extensive experiments on four large-scale in-the-wild facial expression datasets to evaluate our approach.
arXiv Detail & Related papers (2022-09-13T22:02:54Z) - TeKo: Text-Rich Graph Neural Networks with External Knowledge [75.91477450060808]
We propose a novel text-rich graph neural network with external knowledge (TeKo).
We first present a flexible heterogeneous semantic network that incorporates high-quality entities.
We then introduce two types of external knowledge, that is, structured triplets and unstructured entity descriptions.
arXiv Detail & Related papers (2022-06-15T02:33:10Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Exploiting the relationship between visual and textual features in
social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks using the labels of the Places dataset, first consider only the visual part.
Taking the texts associated with the images into account can help improve accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Minimally-Supervised Structure-Rich Text Categorization via Learning on
Text-Rich Networks [61.23408995934415]
We propose a novel framework for minimally supervised categorization by learning from the text-rich network.
Specifically, we jointly train two modules with different inductive biases -- a text analysis module for text understanding and a network learning module for class-discriminative, scalable network learning.
Our experiments show that given only three seed documents per category, our framework can achieve an accuracy of about 92%.
arXiv Detail & Related papers (2021-02-23T04:14:34Z) - Adversarial Context Aware Network Embeddings for Textual Networks [8.680676599607123]
Existing approaches learn embeddings of text and network structure by enforcing embeddings of connected nodes to be similar.
This implies that these approaches require edge information for learning embeddings and they cannot learn embeddings of unseen nodes.
We propose an approach that achieves both modality fusion and the capability to learn embeddings of unseen nodes.
arXiv Detail & Related papers (2020-11-05T05:20:01Z) - Be More with Less: Hypergraph Attention Networks for Inductive Text
Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated their promising results on this canonical task.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z) - Text Recognition in Real Scenarios with a Few Labeled Samples [55.07859517380136]
Scene text recognition (STR) remains a hot research topic in the computer vision field.
This paper proposes a few-shot adversarial sequence domain adaptation (FASDA) approach to build sequence adaptation.
Our approach can maximize the character-level confusion between the source domain and the target domain.
arXiv Detail & Related papers (2020-06-22T13:03:01Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - Deep Learning Approach for Enhanced Cyber Threat Indicators in Twitter
Stream [3.7354197654171797]
This work proposes a deep learning based approach for tweet data analysis.
To convert the tweets into numerical representations, various text representations are employed.
For comparative analysis, classical text representation methods with classical machine learning algorithms are employed.
arXiv Detail & Related papers (2020-03-31T00:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.