Reverse-engineering Bar Charts Using Neural Networks
- URL: http://arxiv.org/abs/2009.02491v1
- Date: Sat, 5 Sep 2020 08:14:35 GMT
- Title: Reverse-engineering Bar Charts Using Neural Networks
- Authors: Fangfang Zhou, Yong Zhao, Wenjiang Chen, Yijing Tan, Yaqi Xu, Yi Chen,
Chao Liu, Ying Zhao
- Abstract summary: We propose a neural network-based method for reverse-engineering bar charts.
We adopt a neural network-based object detection model to simultaneously localize and classify textual information.
We design an encoder-decoder framework that integrates convolutional and recurrent neural networks to extract numeric information.
- Score: 13.300297308628785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reverse-engineering bar charts extracts textual and numeric information from
the visual representations of bar charts to support application scenarios that
require the underlying information. In this paper, we propose a neural
network-based method for reverse-engineering bar charts. We adopt a neural
network-based object detection model to simultaneously localize and classify
textual information. This approach improves the efficiency of textual
information extraction. We design an encoder-decoder framework that integrates
convolutional and recurrent neural networks to extract numeric information. We
further introduce an attention mechanism into the framework to achieve high
accuracy and robustness. Synthetic and real-world datasets are used to evaluate
the effectiveness of the method. To the best of our knowledge, this work is the
first to construct a complete neural network-based method for
reverse-engineering bar charts.
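The abstract does not give implementation details for the attention mechanism in the encoder-decoder framework. As a minimal, hypothetical sketch of the kind of additive-attention step such a framework might use (all names, weight matrices, and dimensions below are illustrative assumptions, not the paper's actual design), each decoder step scores the CNN encoder's feature columns against the current RNN decoder state and forms a weighted context vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(features, hidden, W_f, W_h, v):
    """One additive-attention step: score each encoder feature column
    against the decoder hidden state, then form a weighted context."""
    scores = np.tanh(features @ W_f + hidden @ W_h) @ v  # (T,)
    weights = softmax(scores)                            # (T,), sums to 1
    context = weights @ features                         # (D,)
    return context, weights

# Hypothetical sizes: T feature columns of dim D from the CNN encoder,
# an RNN decoder state of dim H, and an attention space of dim A.
T, D, H, A = 6, 16, 8, 10
features = rng.standard_normal((T, D))   # stand-in for CNN encoder output
hidden = rng.standard_normal(H)          # stand-in for RNN decoder state
W_f = rng.standard_normal((D, A))        # learned projection (illustrative)
W_h = rng.standard_normal((H, A))        # learned projection (illustrative)
v = rng.standard_normal(A)               # learned scoring vector (illustrative)

context, weights = attention_step(features, hidden, W_f, W_h, v)
```

In a full model the context vector would be fed, together with the previous output, into the recurrent decoder that emits one numeric value per bar; here the step is shown in isolation with random weights.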
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Mutual information estimation for graph convolutional neural networks [0.0]
We present an architecture-agnostic method for tracking a network's internal representations during training, which are then used to create a mutual information plane.
We compare how the inductive bias introduced in graph-based architectures changes the mutual information plane relative to a fully connected neural network.
arXiv Detail & Related papers (2022-03-31T08:30:04Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a neoteric variant of deep convolutional neural network architecture to ameliorate the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
For evaluation, we provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Distillation of Weighted Automata from Recurrent Neural Networks using a Spectral Approach [0.0]
This paper is an attempt to bridge the gap between deep learning and grammatical inference.
It provides an algorithm to extract a formal language from any recurrent neural network trained for language modelling.
arXiv Detail & Related papers (2020-09-28T07:04:15Z)
- Embedded Encoder-Decoder in Convolutional Networks Towards Explainable AI [0.0]
This paper proposes a new explainable convolutional neural network (XCNN) which represents important and driving visual features of stimuli.
The experimental results on the CIFAR-10, Tiny ImageNet, and MNIST datasets showed the success of our algorithm (XCNN) to make CNNs explainable.
arXiv Detail & Related papers (2020-06-19T15:49:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.