SA-MLP: A Low-Power Multiplication-Free Deep Network for 3D Point Cloud Classification in Resource-Constrained Environments
- URL: http://arxiv.org/abs/2409.01998v2
- Date: Wed, 15 Jan 2025 18:07:13 GMT
- Title: SA-MLP: A Low-Power Multiplication-Free Deep Network for 3D Point Cloud Classification in Resource-Constrained Environments
- Authors: Qiang Zheng, Chao Zhang, Jian Sun
- Abstract summary: Point cloud classification plays a crucial role in the processing and analysis of data from 3D sensors such as LiDAR.
Traditional neural networks, which rely heavily on multiplication operations, often face challenges in terms of high computational costs and energy consumption.
This study presents a novel family of efficient MLP-based, multiplication-free architectures designed to improve the computational efficiency of point cloud classification tasks.
- Score: 46.266960248570086
- Abstract: Point cloud classification plays a crucial role in the processing and analysis of data from 3D sensors such as LiDAR, which are commonly used in applications like autonomous vehicles, robotics, and environmental monitoring. However, traditional neural networks, which rely heavily on multiplication operations, often face challenges in terms of high computational costs and energy consumption. This study presents a novel family of efficient MLP-based architectures designed to improve the computational efficiency of point cloud classification tasks in sensor systems. The baseline model, Mul-MLP, utilizes conventional multiplication operations, while Add-MLP and Shift-MLP replace multiplications with addition and shift operations, respectively. These replacements leverage more sensor-friendly operations that can significantly reduce computational overhead, making them particularly suitable for resource-constrained sensor platforms. To further enhance performance, we propose SA-MLP, a hybrid architecture that alternates between shift and adder layers, preserving the network depth while optimizing computational efficiency. Unlike previous approaches such as ShiftAddNet, which increase the layer count and limit representational capacity by freezing shift weights, SA-MLP fully exploits the complementary advantages of shift and adder layers by employing distinct learning rates and optimizers. Experimental results show that Add-MLP and Shift-MLP achieve competitive performance compared to Mul-MLP, while SA-MLP surpasses the baseline, delivering results comparable to state-of-the-art MLP models in terms of both classification accuracy and computational efficiency. This work offers a promising, energy-efficient solution for sensor-driven applications requiring real-time point cloud classification, particularly in environments with limited computational resources.
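The abstract names the operator substitutions but not their exact form; the sketch below is a minimal PyTorch reading of them, assuming AdderNet-style L1 adder layers, power-of-two shift layers with straight-through rounding, and a plain BatchNorm/ReLU block layout. None of these specifics are confirmed by the paper.

```python
import torch
import torch.nn as nn

class AdderLayer(nn.Module):
    """Multiplication-free analogue of a linear layer (AdderNet-style):
    y[j] = -sum_i |x[i] - W[j, i]|, built from additions and |.| only."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))

    def forward(self, x):                  # x: (batch, in_features)
        # (batch, 1, in) - (1, out, in) -> (batch, out, in), summed over inputs
        return -(x.unsqueeze(1) - self.weight.unsqueeze(0)).abs().sum(dim=-1)

class ShiftLayer(nn.Module):
    """Linear layer whose effective weights are signed powers of two, so
    each 'multiply' can be realised as a hardware bit shift."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.exponent = nn.Parameter(torch.zeros(out_features, in_features))
        # Fixed random signs are a simplification for this sketch.
        self.register_buffer("sign", torch.randn(out_features, in_features).sign())

    def forward(self, x):
        # Straight-through rounding: the forward pass uses exact powers of
        # two, the backward pass treats rounding as identity.
        e = self.exponent + (self.exponent.round() - self.exponent).detach()
        return x @ (self.sign * torch.pow(2.0, e)).t()

class SABlock(nn.Module):
    """One shift layer followed by one adder layer, alternating as the
    abstract describes; norm/activation placement is an assumption."""
    def __init__(self, dim):
        super().__init__()
        self.shift, self.add = ShiftLayer(dim, dim), AdderLayer(dim, dim)
        self.bn1, self.bn2 = nn.BatchNorm1d(dim), nn.BatchNorm1d(dim)

    def forward(self, x):
        x = torch.relu(self.bn1(self.shift(x)))
        return torch.relu(self.bn2(self.add(x)))
```

The abstract's "distinct learning rates and optimizers" could then be realized with separate parameter groups, e.g. SGD over the shift exponents alongside Adam over the adder weights; the exact pairing used by the authors is not given here.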
Related papers
- Transforming Indoor Localization: Advanced Transformer Architecture for NLOS Dominated Wireless Environments with Distributed Sensors [7.630782404476683]
We introduce a novel tokenization approach, referred to as Sensor Snapshot Tokenization (SST), which preserves variable-specific representations of the power delay profile (PDP).
We also propose a lightweight Swish-Gated Linear Unit-based Transformer (L-SwiGLU Transformer) model, designed to reduce computational complexity without compromising localization accuracy.
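The summary does not describe the L-SwiGLU model's internals; below is only a minimal sketch of a generic Swish-gated linear unit (the standard SwiGLU feed-forward variant), with dimensions chosen arbitrarily.

```python
import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    """Generic Swish-gated feed-forward unit:
    FFN(x) = (SiLU(x W_g) * x W_v) W_o.
    The 'lightweight' modifications of the L-SwiGLU model are not shown."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.value = nn.Linear(dim, hidden, bias=False)
        self.out = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.out(nn.functional.silu(self.gate(x)) * self.value(x))

ffn = SwiGLU(dim=128, hidden=256)
tokens = torch.randn(8, 16, 128)   # e.g. one 'sensor snapshot' token per antenna
print(ffn(tokens).shape)           # torch.Size([8, 16, 128])
```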
arXiv Detail & Related papers (2025-01-14T01:16:30Z) - OP-LoRA: The Blessing of Dimensionality [93.08208871549557]
Low-rank adapters enable fine-tuning of large models with only a small number of parameters.
However, they often pose optimization challenges and can converge poorly.
We introduce an over-parameterized approach that accelerates training without increasing inference costs.
We achieve improvements in vision-language tasks and especially notable increases in image generation.
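The summary leaves OP-LoRA's construction unspecified; the sketch below only illustrates the general pattern it alludes to, assuming the low-rank update is itself factored during training and merged into the frozen weight at inference, so serving cost is unchanged. The two-factor form of B is purely illustrative.

```python
import torch
import torch.nn as nn

class OverParamLoRALinear(nn.Module):
    """LoRA-style update W x + B A x, where B is itself factored
    (B = B2 @ B1) during training -- an over-parameterization that can
    ease optimization without changing what the merged layer computes."""
    def __init__(self, dim, rank=8, hidden=64):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.W.weight.requires_grad_(False)        # frozen base weight
        self.A = nn.Parameter(0.01 * torch.randn(rank, dim))
        self.B1 = nn.Parameter(0.01 * torch.randn(hidden, rank))
        self.B2 = nn.Parameter(0.01 * torch.randn(dim, hidden))

    def forward(self, x):
        return self.W(x) + x @ self.A.t() @ self.B1.t() @ self.B2.t()

    @torch.no_grad()
    def merge(self):
        """Fold the adapter into W: inference then costs one matmul,
        the same as the unadapted model."""
        self.W.weight += self.B2 @ self.B1 @ self.A
```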
arXiv Detail & Related papers (2024-12-13T18:55:19Z) - TinyML NLP Approach for Semantic Wireless Sentiment Classification [49.801175302937246]
We introduce split learning (SL) as an energy-efficient, privacy-preserving tiny machine learning (TinyML) alternative to federated learning (FL).
Our results show that SL reduces processing power and CO2 emissions while maintaining high accuracy, whereas FL offers a balanced compromise between efficiency and privacy.
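As a minimal sketch of the split-learning setup the summary refers to, assuming a cut after a small client-side encoder (layer sizes and the sentiment head are made up; real deployments exchange the intermediate tensors over a network link rather than in-process):

```python
import torch
import torch.nn as nn

client = nn.Sequential(nn.Embedding(1000, 64),        # on-device encoder
                       nn.Flatten(1),
                       nn.Linear(64 * 16, 128))
server = nn.Sequential(nn.ReLU(), nn.Linear(128, 2))  # server sentiment head

tokens = torch.randint(0, 1000, (8, 16))   # toy batch: 8 sequences of 16 ids
smashed = client(tokens)                   # client forward ("smashed data")
logits = server(smashed)                   # server completes the forward pass
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()                            # gradients flow back across the cut
```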
arXiv Detail & Related papers (2024-11-09T21:26:59Z) - Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137]
This work focuses on the pre-training loss as a more-efficient metric for performance estimation.
We extend the power law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
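A concrete reading of this two-stage recipe, with an assumed functional form and made-up numbers throughout:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import curve_fit

# Stage 1: power law loss(C) = a * C^(-b) + c fitted per data source,
# with compute C in units of 1e18 FLOPs (toy values below).
def power_law(c, a, b, irreducible):
    return a * np.power(c, -b) + irreducible

compute = np.array([1.0, 10.0, 100.0, 1000.0])   # x1e18 FLOPs, illustrative
loss = np.array([3.2, 2.8, 2.5, 2.3])
params, _ = curve_fit(power_law, compute, loss, p0=[1.0, 0.2, 2.0])
predicted_loss = power_law(5000.0, *params)      # extrapolate to a larger run

# Stage 2: two-layer network mapping per-domain losses -> downstream metric.
n_domains = 5
head = nn.Sequential(nn.Linear(n_domains, 32), nn.ReLU(), nn.Linear(32, 1))
domain_losses = torch.rand(64, n_domains)        # stand-in training data
score = head(domain_losses)                      # predicted performance
```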
arXiv Detail & Related papers (2024-10-11T04:57:48Z) - GERA: Geometric Embedding for Efficient Point Registration Analysis [20.690695788384517]
We propose a novel point cloud registration network that leverages a pure geometric architecture, constructing geometric information offline.
Our method is the first to replace 3D coordinate inputs with offline-constructed geometric encoding, improving generalization and stability.
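The summary does not define the geometric encoding itself; as one hypothetical stand-in, per-point k-nearest-neighbour distances can be computed once, offline, and fed to the network in place of raw coordinates:

```python
import torch

def offline_geometric_encoding(points, k=8):
    """Hypothetical stand-in for an offline geometric encoding: each
    point's distances to its k nearest neighbours, which are rotation-
    and translation-invariant, replacing raw xyz inputs."""
    d = torch.cdist(points, points)         # (N, N) pairwise distances
    knn, _ = d.topk(k + 1, largest=False)   # includes the self-distance 0
    return knn[:, 1:]                       # (N, k) geometric features

pts = torch.randn(1024, 3)
feat = offline_geometric_encoding(pts)      # computed once, offline
```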
arXiv Detail & Related papers (2024-10-01T11:19:56Z) - Resource Allocation for Stable LLM Training in Mobile Edge Computing [11.366306689957353]
This paper explores a collaborative training framework that integrates mobile users with edge servers to optimize resource allocation.
We formulate a multi-objective optimization problem to minimize the total energy consumption and delay during training.
We also address the common issue of instability in model performance by incorporating stability enhancements into our objective function.
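The summary only states the shape of the problem; one common reduction for such multi-objective formulations is weighted-sum scalarization, sketched below with illustrative weights (the paper's actual method and stability term are not given here).

```python
def scalarized_objective(energy_joules, delay_seconds, loss_variance,
                         w_energy=0.4, w_delay=0.4, w_stability=0.2):
    """Weighted-sum reduction of energy, delay, and a stability penalty
    (here, variance of the training loss) into one scalar to minimize.
    All weights are illustrative assumptions."""
    return (w_energy * energy_joules
            + w_delay * delay_seconds
            + w_stability * loss_variance)

print(scalarized_objective(energy_joules=120.0, delay_seconds=3.5,
                           loss_variance=0.8))
```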
arXiv Detail & Related papers (2024-09-30T12:36:27Z) - A Masked Pruning Approach for Dimensionality Reduction in Communication-Efficient Federated Learning Systems [11.639503711252663]
Federated Learning (FL) represents a growing machine learning (ML) paradigm designed for training models across numerous nodes.
We develop a novel algorithm, MPFL, that overcomes these limitations by combining a masked-pruning method with the FL process.
We present an extensive experimental study demonstrating the superior performance of MPFL compared to existing methods.
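The masking and aggregation rules are not spelled out in the summary; a toy magnitude-based mask with majority-vote aggregation across clients, both assumptions, might look like:

```python
import torch

def magnitude_mask(weights, sparsity=0.5):
    """Toy magnitude-based pruning mask; MPFL's actual rule may differ."""
    k = int(weights.numel() * sparsity)
    thresh = weights.abs().flatten().kthvalue(k).values
    return (weights.abs() > thresh).float()

# Majority-vote aggregation of client masks -- an assumption, not
# necessarily the paper's aggregation scheme.
client_masks = [magnitude_mask(torch.randn(64, 64)) for _ in range(5)]
global_mask = (torch.stack(client_masks).mean(0) > 0.5).float()
```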
arXiv Detail & Related papers (2023-12-06T20:29:23Z) - Boosting Convolution with Efficient MLP-Permutation for Volumetric Medical Image Segmentation [32.645022002807416]
Multi-layer perceptron (MLP) networks have regained popularity among researchers due to their results being comparable to those of ViTs.
We propose a novel permutable hybrid network for Vol-MedSeg, named PHNet, which capitalizes on the strengths of both convolutional neural networks (CNNs) and MLPs.
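The summary gives no detail on the permutable MLP itself; one generic axial-MLP pattern, in which a shared MLP mixes each spatial axis of the volume in turn via permutation, is sketched below (PHNet's actual block, and its CNN half, will differ).

```python
import torch
import torch.nn as nn

class AxialMLP(nn.Module):
    """Toy 'MLP-permutation': move each spatial axis to the last position
    and mix it with a shared MLP. Assumes a cubic volume (D == H == W)."""
    def __init__(self, size):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(size, size), nn.GELU(),
                                 nn.Linear(size, size))

    def forward(self, x):          # x: (batch, channels, D, H, W)
        for axis in (2, 3, 4):     # mix along depth, height, width in turn
            x = self.mlp(x.movedim(axis, -1)).movedim(-1, axis)
        return x

block = AxialMLP(size=16)
vol = torch.randn(2, 8, 16, 16, 16)
print(block(vol).shape)            # torch.Size([2, 8, 16, 16, 16])
```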
arXiv Detail & Related papers (2023-03-23T08:59:09Z) - The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers [59.87030906486969]
This paper studies the curious phenomenon that the activation maps of machine learning models with Transformer architectures are sparse.
We show that sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks.
We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers.
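The FLOP saving follows directly from the sparsity: zero activations contribute nothing to the next matrix product, so their corresponding rows can be skipped. A minimal sketch with toy sizes, ignoring the gather cost:

```python
import torch

def sparse_ffn_second_layer(h, W2):
    """If the post-ReLU activation h is mostly zeros, the second FFN
    matmul only needs the rows of W2 whose activations are non-zero."""
    idx = h.nonzero(as_tuple=True)[0]   # indices of active neurons
    return h[idx] @ W2[idx]             # equals h @ W2, with far fewer FLOPs

h = torch.relu(torch.randn(4096))       # toy post-ReLU activations
h[torch.rand(4096) < 0.95] = 0.0        # make ~95% sparse, as observed
W2 = torch.randn(4096, 1024)
out_full = h @ W2
out_sparse = sparse_ffn_second_layer(h, W2)
assert torch.allclose(out_full, out_sparse, atol=1e-4)
```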
arXiv Detail & Related papers (2022-10-12T15:25:19Z) - Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
The intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)