Lightweight Gaze Estimation Model Via Fusion Global Information
- URL: http://arxiv.org/abs/2411.18064v1
- Date: Wed, 27 Nov 2024 05:16:14 GMT
- Title: Lightweight Gaze Estimation Model Via Fusion Global Information
- Authors: Zhang Cheng, Yanxia Wang
- Abstract summary: This paper proposes a novel lightweight gaze estimation model FGI-Net.
It fuses global information into the CNN, effectively compensating for the multi-layer convolution and pooling otherwise needed to capture global context.
Compared with GazeCaps, it achieves a smaller angle error with 87.1% and 79.1% reductions in parameters and FLOPs, respectively.
- Abstract: Deep learning-based appearance gaze estimation methods are gaining popularity due to their high accuracy and fewer environmental constraints. However, existing high-precision models often rely on deeper networks, leading to problems such as large parameter counts, long training times, and slow convergence. To address this issue, this paper proposes a novel lightweight gaze estimation model, FGI-Net (Fusion Global Information). The model fuses global information into the CNN, effectively compensating for the multi-layer convolution and pooling otherwise needed to indirectly capture global information, while reducing model complexity and improving accuracy and convergence speed. To validate the model's performance, extensive experiments are conducted: accuracy is compared with existing classical and lightweight models, convergence speed is compared with models of different architectures, and ablation experiments are performed. Experimental results show that, compared with GazeCaps, the latest gaze estimation model, FGI-Net achieves a smaller angle error with 87.1% and 79.1% reductions in parameters and FLOPs, respectively (MPIIFaceGaze: 3.74°, EyeDiap: 5.15°, Gaze360: 10.50°, RT-Gene: 6.02°). Moreover, compared with models of different architectures such as CNNs and Transformers, FGI-Net converges quickly to a higher accuracy range with fewer training iterations; when achieving optimal accuracy on the Gaze360 and EyeDiap datasets, FGI-Net requires 25% and 37.5% fewer training iterations than GazeTR, respectively.
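The fusion idea described in the abstract can be illustrated with a toy sketch: local activations are blended with a globally pooled statistic, so every position sees global context without stacking many convolution and pooling layers. The names (`fuse_global`) and the simple mean-blend here are illustrative assumptions, not FGI-Net's actual module:

```python
# Toy sketch of fusing global context into local CNN features.
# A single-channel feature map is blended with its global average,
# giving each position access to global information in one step.

def global_average(feature_map):
    """Global average over a 2D feature map (one channel)."""
    flat = [v for row in feature_map for v in row]
    return sum(flat) / len(flat)

def fuse_global(feature_map, alpha=0.5):
    """Blend each local activation with the map's global mean."""
    g = global_average(feature_map)
    return [[(1 - alpha) * v + alpha * g for v in row] for row in feature_map]

fmap = [[1.0, 3.0], [5.0, 7.0]]        # 2x2 feature map, global mean = 4.0
print(fuse_global(fmap, alpha=0.5))    # [[2.5, 3.5], [4.5, 5.5]]
```

In a real network the pooled statistic would typically come from an attention or pooling branch and the blend would be learned, but the principle, injecting global information directly rather than growing the receptive field layer by layer, is the same.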
Related papers
- Low-Resolution Neural Networks [0.552480439325792]
This study examines the impact of parameter bit precision on model performance compared to standard 32-bit models.
Models analyzed include those with fully connected layers, convolutional layers, and transformer blocks.
Findings suggest a potential new era for optimized neural network models with reduced memory requirements and improved computational efficiency.
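A minimal sketch of the reduced bit-precision idea, assuming simple uniform symmetric quantization (the study's exact scheme is not specified in this summary):

```python
# Uniform symmetric quantization: map float weights to signed n-bit
# integers plus one shared scale, shrinking memory from 32 bits per weight.

def quantize(weights, bits=8):
    """Quantize float weights to signed n-bit integers with a shared scale."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.4, -1.0, 0.25]
q, s = quantize(w, bits=8)
print(q)                    # [51, -127, 32]
print(dequantize(q, s))     # close to the original weights
```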
arXiv Detail & Related papers (2025-02-12T21:19:28Z) - Efficient Gravitational Wave Parameter Estimation via Knowledge Distillation: A ResNet1D-IAF Approach [2.4184866684341473]
This study presents a novel approach using knowledge distillation techniques to enhance computational efficiency in gravitational wave analysis.
We develop a framework combining ResNet1D and Inverse Autoregressive Flow (IAF) architectures, where knowledge from a complex teacher model is transferred to a lighter student model.
Our experimental results show that the student model achieves a validation loss of 3.70 with optimal configuration (40,100,0.75), compared to the teacher model's 4.09, while reducing the number of parameters by 43%.
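Knowledge distillation of the kind described can be sketched as a temperature-softened cross-entropy between teacher and student outputs; the temperature value and function names are illustrative assumptions, not the ResNet1D-IAF pipeline itself:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(t, s))

# The loss is smallest when the student matches the teacher's soft targets.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]))  # larger
```

In practice this term is blended with the ordinary hard-label loss, so the student learns both the data and the teacher's behavior.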
arXiv Detail & Related papers (2024-12-11T03:56:46Z) - HyCubE: Efficient Knowledge Hypergraph 3D Circular Convolutional Embedding [21.479738859698344]
It is desirable and challenging for knowledge hypergraph embedding to reach a trade-off between model effectiveness and efficiency.
We propose an end-to-end efficient knowledge hypergraph embedding model, HyCubE, which designs a novel 3D circular convolutional neural network.
Our proposed model consistently outperforms state-of-the-art baselines, with an average improvement of 8.22% and a maximum improvement of 33.82%.
arXiv Detail & Related papers (2024-02-14T06:05:37Z) - Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data [4.293221567339693]
Analysis of compressible turbulent flows is essential for applications related to propulsion, energy generation, and the environment.
We present a 2.2 TB network-of-datasets containing 744 full-domain samples from 34 high-fidelity direct numerical simulations.
We benchmark a total of 49 variations of five deep learning approaches for 3D super-resolution.
arXiv Detail & Related papers (2023-09-23T18:57:02Z) - Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method termed SSF: researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z) - Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time [69.7693300927423]
We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations improves accuracy and robustness.
We show that the model soup approach extends to multiple image classification and natural language processing tasks.
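The model-soup recipe reduces to a uniform average of checkpoint weights. A minimal sketch, with checkpoints as plain dicts of scalars (real checkpoints are tensors, and the paper also explores greedy variants):

```python
# "Model soup": average the parameters of several fine-tuned checkpoints
# of the same architecture; inference then runs a single averaged model.

def model_soup(checkpoints):
    """Uniformly average the weights of several fine-tuned checkpoints."""
    n = len(checkpoints)
    return {k: sum(ckpt[k] for ckpt in checkpoints) / n
            for k in checkpoints[0]}

runs = [
    {"w": 1.0, "b": 0.0},   # e.g. fine-tuned with one learning rate
    {"w": 3.0, "b": 1.0},   # e.g. fine-tuned with another
]
print(model_soup(runs))     # {'w': 2.0, 'b': 0.5}
```

Because the averaging happens once, offline, the soup costs no extra inference time compared to any single fine-tuned model.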
arXiv Detail & Related papers (2022-03-10T17:03:49Z) - Improving the Deployment of Recycling Classification through Efficient Hyper-Parameter Analysis [0.0]
This paper develops a more efficient variant of WasteNet, a collaborative recycling classification model.
The newly developed model scores a test-set accuracy of 95.8% with a real world accuracy of 95%, a 14% increase over the original.
Our acceleration pipeline boosted model throughput by 750% to 24 inferences per second on the Jetson Nano embedded device.
arXiv Detail & Related papers (2021-10-21T10:42:14Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but this also incurs huge computation costs.
We explore to accelerate large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
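MoEfication as summarized, splitting a feed-forward layer into equal-size experts of the same total size and activating only a few per input, can be sketched as follows; the partitioning and top-k routing here are simplified assumptions, not the paper's exact construction:

```python
# Sketch of MoEfication: partition FFN neurons into expert groups
# (total parameter count unchanged), then run only the top-scoring
# experts per input to exploit sparse activation.

def moefy(neurons, num_experts):
    """Partition FFN neurons into equal-size expert groups."""
    size = len(neurons) // num_experts
    return [neurons[i * size:(i + 1) * size] for i in range(num_experts)]

def route(expert_scores, k=1):
    """Pick the top-k experts for an input; the rest are skipped."""
    ranked = sorted(range(len(expert_scores)),
                    key=lambda i: expert_scores[i], reverse=True)
    return ranked[:k]

experts = moefy(list(range(8)), num_experts=4)   # 4 experts of 2 neurons
print(experts)                       # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(route([0.1, 0.9, 0.3, 0.2]))   # [1] -> only expert 1 is computed
```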
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
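The end state a grow-and-prune schedule works toward, a fixed target sparsity reached by removing the smallest-magnitude weights, can be sketched as below; the actual GaP method alternates growing and pruning parts of the network during training, which this toy version omits:

```python
# Magnitude pruning: zero out the smallest-magnitude weights until the
# requested fraction of the layer is zero (its sparsity level).

def prune_to_sparsity(weights, sparsity=0.8):
    """Zero the smallest-magnitude weights so `sparsity` fraction is zero."""
    n_zero = int(len(weights) * sparsity)
    smallest = set(sorted(range(len(weights)),
                          key=lambda i: abs(weights[i]))[:n_zero])
    return [0.0 if i in smallest else w for i, w in enumerate(weights)]

w = [0.9, -0.1, 0.05, -0.7, 0.2]
print(prune_to_sparsity(w, sparsity=0.6))   # [0.9, 0.0, 0.0, -0.7, 0.0]
```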
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - Towards Practical Lipreading with Distilled and Efficient Models [57.41253104365274]
Lipreading has witnessed a lot of progress due to the resurgence of neural networks.
Recent works have placed emphasis on aspects such as improving performance by finding the optimal architecture or improving generalization.
There is still a significant gap between the current methodologies and the requirements for an effective deployment of lipreading in practical scenarios.
We propose a series of innovations that significantly bridge that gap: first, using self-distillation, we raise the state-of-the-art performance on LRW and LRW-1000 by a wide margin, to 88.5% and 46.6%, respectively.
arXiv Detail & Related papers (2020-07-13T16:56:27Z) - Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stages multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves comparable performance with only about 0.2% of the parameters (100k) of large models on popular object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.