Research on Data Fusion Algorithm Based on Deep Learning in Target
Tracking
- URL: http://arxiv.org/abs/2211.12776v1
- Date: Wed, 23 Nov 2022 08:44:59 GMT
- Title: Research on Data Fusion Algorithm Based on Deep Learning in Target
Tracking
- Authors: Huihui Wu
- Abstract summary: An eye tracking data fusion algorithm based on a long short-term memory (LSTM) network is proposed.
The experimental results show that, compared with two existing deep-learning-based fusion algorithms, the proposed algorithm performs well in terms of fusion quality.
- Score: 10.335589214502987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the limitation that the deep long short-term memory (DLSTM)
network cannot perform parallel computation and cannot capture global
information, this paper first carries out feature extraction and feature
processing according to the characteristics of the eye movement data and the
tracking data. A convolutional neural network (CNN) is then introduced into
the deep LSTM network to form a new network structure with an accompanying
fusion strategy, and on this basis an eye tracking data fusion algorithm based
on the LSTM network is proposed. The experimental results show that, compared
with two existing deep-learning-based fusion algorithms, the proposed
algorithm performs well in terms of fusion quality.
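As a rough illustration of the kind of architecture the abstract describes, the sketch below combines a 1-D CNN front end (which processes a whole sequence in parallel) with an LSTM over the fused features. All layer sizes, the concatenation-based fusion, and names such as EyeTrackFusionNet are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class EyeTrackFusionNet(nn.Module):
    """Hypothetical CNN + LSTM fusion model: a 1-D CNN extracts local
    (parallelizable) features from each measurement stream, an LSTM models
    the temporal dependencies, and the two streams are fused by concatenation."""

    def __init__(self, eye_dim=4, track_dim=4, conv_channels=32, hidden=64):
        super().__init__()
        # 1-D convolutions over time provide local context in parallel.
        self.eye_cnn = nn.Conv1d(eye_dim, conv_channels, kernel_size=3, padding=1)
        self.track_cnn = nn.Conv1d(track_dim, conv_channels, kernel_size=3, padding=1)
        # A shared LSTM models the sequential structure of the fused features.
        self.lstm = nn.LSTM(2 * conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, eye_dim)  # fused estimate per time step

    def forward(self, eye_seq, track_seq):
        # eye_seq, track_seq: (batch, time, features)
        e = torch.relu(self.eye_cnn(eye_seq.transpose(1, 2))).transpose(1, 2)
        t = torch.relu(self.track_cnn(track_seq.transpose(1, 2))).transpose(1, 2)
        fused, _ = self.lstm(torch.cat([e, t], dim=-1))
        return self.head(fused)

# Example: fuse two synthetic 4-D measurement streams of length 50.
model = EyeTrackFusionNet()
eye = torch.randn(8, 50, 4)
track = torch.randn(8, 50, 4)
print(model(eye, track).shape)  # torch.Size([8, 50, 4])
```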
Related papers
- Deep learning for the design of non-Hermitian topolectrical circuits [8.960003862907877]
We introduce several deep-learning algorithms based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN) to predict the winding of eigenvalues of non-Hermitian Hamiltonians.
Our results demonstrate the effectiveness of the deep learning network in capturing the global topological characteristics of a non-Hermitian system based on training data.
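As a hedged illustration of the approach sketched in this entry, the snippet below builds a small MLP that classifies a spectral winding number from a sampled complex eigenvalue trajectory; the input encoding, class count, and synthetic example are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Hypothetical setup: the complex eigenvalue trajectory E(k) of a non-Hermitian
# Hamiltonian is sampled at n_k momenta; its real and imaginary parts are fed
# to an MLP that classifies the spectral winding number (here 3 classes).
n_k, n_classes = 64, 3

mlp = nn.Sequential(
    nn.Linear(2 * n_k, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_classes),          # logits over winding-number classes
)

# One synthetic example: E(k) = exp(i*w*k) winds w times around the origin.
k = torch.linspace(0.0, 2 * torch.pi, n_k)
w = 2
E = torch.polar(torch.ones(n_k), w * k)          # complex trajectory
x = torch.cat([E.real, E.imag]).unsqueeze(0)     # (1, 2*n_k) feature vector
print(mlp(x).shape)                              # torch.Size([1, 3])
```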
arXiv Detail & Related papers (2024-02-15T14:41:55Z)
- Deep-Unfolding for Next-Generation Transceivers [49.338084953253755]
The stringent performance requirements of future wireless networks are spurring studies on defining the next-generation multiple-input multiple-output (MIMO) transceivers.
For the design of advanced transceivers in wireless communications, optimization approaches that often lead to iterative algorithms have achieved great success.
Deep learning, which approximates the iterative algorithms with deep neural networks (DNNs), can significantly reduce the computational time.
Deep-unfolding has emerged, incorporating the benefits of both deep learning and iterative algorithms by unfolding the iterative algorithm into a layer-wise structure (see the sketch below).
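A minimal sketch of the deep-unfolding idea mentioned above: a fixed number of gradient-descent iterations for a least-squares problem (standing in for the transceiver-design iterations) are unfolded into layers with learnable per-layer step sizes. The problem choice and all names are illustrative assumptions, not the paper's algorithm.

```python
import torch
import torch.nn as nn

class UnfoldedGD(nn.Module):
    """Illustrative deep-unfolding: T iterations of gradient descent for the
    least-squares problem min_x ||A x - y||^2 are unfolded into T layers,
    each with its own learnable step size (trainable end to end)."""

    def __init__(self, n_layers=10):
        super().__init__()
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, A, y):
        x = torch.zeros(A.shape[-1])
        for alpha in self.steps:                 # one layer per iteration
            x = x - alpha * A.T @ (A @ x - y)    # classic gradient-descent update
        return x

# Example: recover x from y = A x for a random well-conditioned system.
torch.manual_seed(0)
A = torch.randn(20, 8) / 4
x_true = torch.randn(8)
y = A @ x_true
x_hat = UnfoldedGD()(A, y)
print(torch.norm(x_hat - x_true))   # residual error after 10 unfolded layers
```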
arXiv Detail & Related papers (2023-05-15T02:13:41Z)
- CCasGNN: Collaborative Cascade Prediction Based on Graph Neural Networks [0.49269463638915806]
Cascade prediction aims at modeling information diffusion in the network.
Recent efforts have been devoted to combining network structure and sequence features via graph neural networks and recurrent neural networks.
We propose a novel method, CCasGNN, that considers the individual profile, structural features, and sequence information.
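The sketch below shows, under assumed shapes and names, one generic way to combine a graph layer (structural features) with a GRU over the participation sequence; it is not the actual CCasGNN architecture.

```python
import torch
import torch.nn as nn

class CascadeNet(nn.Module):
    """Illustrative combination of a graph layer (structural features) with a
    GRU over the sequence of participating nodes, loosely in the spirit of
    cascade-prediction models; not the actual CCasGNN architecture."""

    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.gcn = nn.Linear(feat_dim, hidden)       # shared node transform
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)              # e.g. predicted cascade size

    def forward(self, X, A_norm, cascade):
        # X: (n_nodes, feat_dim) node features, A_norm: normalized adjacency,
        # cascade: (seq_len,) node indices in order of participation.
        H = torch.relu(A_norm @ self.gcn(X))         # one round of message passing
        seq = H[cascade].unsqueeze(0)                # (1, seq_len, hidden)
        _, h_last = self.gru(seq)
        return self.out(h_last.squeeze(0))

# Example with a random graph of 10 nodes and a cascade through 5 of them.
n = 10
X = torch.randn(n, 16)
A = (torch.rand(n, n) < 0.3).float() + torch.eye(n)  # random edges + self-loops
A_norm = A / A.sum(dim=1, keepdim=True)              # row-normalize
cascade = torch.tensor([0, 3, 7, 2, 5])
print(CascadeNet()(X, A_norm, cascade))
```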
arXiv Detail & Related papers (2021-12-07T11:37:36Z)
- A SAR speckle filter based on Residual Convolutional Neural Networks [68.8204255655161]
This work aims to present a novel method for filtering the speckle noise from Sentinel-1 data by applying Deep Learning (DL) algorithms based on Convolutional Neural Networks (CNNs).
The obtained results, when compared with the state of the art, show a clear improvement in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
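A minimal sketch of a residual CNN denoiser and of the PSNR metric cited above; the depth, channel counts, and the crude multiplicative-noise example are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualDespeckler(nn.Module):
    """Sketch of a residual CNN denoiser: the network predicts a noise
    component that is subtracted from the input, so the skip connection
    carries the clean image. Layer counts and sizes are placeholders."""

    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: estimate the noise

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio, one of the metrics cited in the entry above."""
    mse = torch.mean((x - y) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

# Example on a random "SAR-like" patch with crude multiplicative speckle.
img = torch.rand(1, 1, 64, 64)
noisy = img * torch.rand_like(img)
restored = ResidualDespeckler()(noisy)
print(psnr(restored, img))   # low for an untrained network; shapes/flow only
```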
arXiv Detail & Related papers (2021-04-19T14:43:07Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling [15.059508985699575]
FG-Net is a general deep learning framework for large-scale point clouds understanding without voxelizations.
We propose a deep convolutional neural network leveraging correlated feature mining and deformable convolution based geometric-aware modelling.
Our approaches outperform state-of-the-art approaches in terms of accuracy and efficiency.
arXiv Detail & Related papers (2020-12-17T08:20:09Z)
- Fast Reinforcement Learning with Incremental Gaussian Mixture Models [0.0]
An online and incremental algorithm capable of learning from a single pass through data, called Incremental Gaussian Mixture Network (IGMN), was employed as a sample-efficient function approximator for the joint state and Q-values space.
Results are analyzed to explain the properties of the obtained algorithm, and it is observed that the use of the IGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks trained by gradient descent methods.
arXiv Detail & Related papers (2020-11-02T03:18:15Z)
- Learning Robust Feature Representations for Scene Text Detection [0.0]
We present a network architecture derived from a loss that maximizes the conditional log-likelihood.
By extending the layer of latent variables to multiple layers, the network is able to learn robust features on scale.
In experiments, the proposed algorithm significantly outperforms state-of-the-art methods in terms of both recall and precision.
arXiv Detail & Related papers (2020-05-26T01:06:47Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that addresses the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
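A hedged sketch of the activation-phase idea referenced above: interval bounds are propagated through a small ReLU network, and hidden neurons whose pre-activation interval never changes sign have a fixed phase, so they can be treated as linear or zero, shrinking the verification problem. The network, input region, and bound method are illustrative assumptions, not the paper's exact pre-processing.

```python
import numpy as np

# Propagate an input box through a ReLU network with interval arithmetic and
# count neurons whose activation phase is fixed over the whole input region.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 8)), rng.standard_normal((8, 2))]
biases = [rng.standard_normal(8), rng.standard_normal(8), rng.standard_normal(2)]

lo, hi = -0.1 * np.ones(4), 0.1 * np.ones(4)    # input region to verify
fixed, total = 0, 0
for i, (W, b) in enumerate(zip(weights, biases)):
    # Interval matrix-vector product: split W into positive/negative parts.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    pre_lo = lo @ Wp + hi @ Wn + b
    pre_hi = hi @ Wp + lo @ Wn + b
    if i < len(weights) - 1:                    # hidden layers use ReLU
        fixed += int(np.sum((pre_lo >= 0) | (pre_hi <= 0)))
        total += pre_lo.size
        lo, hi = np.maximum(pre_lo, 0), np.maximum(pre_hi, 0)
    else:
        lo, hi = pre_lo, pre_hi

print(f"{fixed}/{total} hidden neurons have a fixed activation phase")
print("output bounds:", lo, hi)
```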
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with non-convexity renders learning susceptible to initialization issues.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.