Self-Supervised Learning at the Edge: The Cost of Labeling
- URL: http://arxiv.org/abs/2507.07033v1
- Date: Wed, 09 Jul 2025 17:03:50 GMT
- Title: Self-Supervised Learning at the Edge: The Cost of Labeling
- Authors: Roberto Pereira, Fernanda Famá, Asal Rangrazi, Marco Miozzo, Charalampos Kalalas, Paolo Dini
- Abstract summary: Contrastive learning (CL) has emerged as an alternative to traditional supervised machine learning solutions. SSL techniques for edge-based learning focus on trade-offs between model performance and energy efficiency. We demonstrate that tailored SSL strategies can achieve competitive performance while reducing resource consumption by up to 4X.
- Score: 41.11831047923664
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Contrastive learning (CL) has recently emerged as an alternative to traditional supervised machine learning solutions by enabling rich representations from unstructured and unlabeled data. However, CL and, more broadly, self-supervised learning (SSL) methods often demand a large amount of data and computational resources, posing challenges for deployment on resource-constrained edge devices. In this work, we explore the feasibility and efficiency of SSL techniques for edge-based learning, focusing on trade-offs between model performance and energy efficiency. In particular, we analyze how different SSL techniques adapt to limited computational, data, and energy budgets, evaluating their effectiveness in learning robust representations under resource-constrained settings. Moreover, we also consider the energy costs involved in labeling data and assess how semi-supervised learning may assist in reducing the overall energy consumed to train CL models. Through extensive experiments, we demonstrate that tailored SSL strategies can achieve competitive performance while reducing resource consumption by up to 4X, underscoring their potential for energy-efficient learning at the edge.
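As an illustrative aside (not taken from the paper), the contrastive objective at the core of the CL frameworks discussed here is typically the NT-Xent loss used by SimCLR-style methods. The sketch below is a minimal NumPy implementation under that assumption; the function name and shapes are illustrative, not the authors' code.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive objective used by SimCLR-style frameworks.
    z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for sample i is its other view at index i+N (and vice versa)
    targets = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * N), targets] - logsumexp)
    return loss.mean()
```

Because the loss is computed over all pairwise similarities in the batch, its cost grows quadratically with batch size, which is one reason CL training is expensive on edge devices.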
Related papers
- Contrastive Self-Supervised Learning at the Edge: An Energy Perspective [47.71700347940481]
We conduct an evaluation of four widely used contrastive learning frameworks: SimCLR, MoCo, SimSiam, and Barlow Twins. We focus on the practical feasibility of these CL frameworks for edge and fog deployment, and introduce a systematic benchmarking strategy. Our findings reveal that SimCLR, contrary to its perceived computational cost, demonstrates the lowest energy consumption across various data regimes.
arXiv Detail & Related papers (2025-10-09T15:57:44Z) - DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [70.91804882618243]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks. We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge. Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
arXiv Detail & Related papers (2025-02-18T02:37:26Z) - Learning for Cross-Layer Resource Allocation in MEC-Aided Cell-Free Networks [71.30914500714262]
Cross-layer resource allocation over mobile edge computing (MEC)-aided cell-free networks can fully exploit transmission and computing resources to improve the data rate. Joint subcarrier allocation and beamforming optimization are investigated for the MEC-aided cell-free network from the perspective of deep learning.
arXiv Detail & Related papers (2024-12-21T10:18:55Z) - eFedLLM: Efficient LLM Inference Based on Federated Learning [1.6179784294541053]
Large Language Models (LLMs) herald a transformative era in artificial intelligence (AI).
This paper introduces an effective approach that enhances the operational efficiency and affordability of LLM inference.
arXiv Detail & Related papers (2024-11-24T22:50:02Z) - A Review on Discriminative Self-supervised Learning Methods in Computer Vision [5.5547914920738]
Self-supervised learning (SSL) has rapidly emerged as a transformative approach in computer vision. This review presents a comprehensive analysis of discriminative SSL methods, which focus on learning representations by solving pretext tasks that do not require human labels.
arXiv Detail & Related papers (2024-05-08T11:15:20Z) - Compute-Efficient Active Learning [0.0]
Active learning aims at reducing labeling costs by selecting the most informative samples from an unlabeled dataset.
The traditional active learning process often demands extensive computational resources, hindering scalability and efficiency.
We present a novel method designed to alleviate the computational burden associated with active learning on massive datasets.
arXiv Detail & Related papers (2024-01-15T12:32:07Z) - Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning [51.02352381270177]
Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology.
The choice of the cut layer in SFL can have a substantial impact on the energy consumption of clients and their privacy.
This article provides a comprehensive overview of the SFL process and thoroughly analyzes its energy consumption and privacy.
arXiv Detail & Related papers (2023-11-15T23:23:42Z) - Energy-Efficient and Real-Time Sensing for Federated Continual Learning via Sample-Driven Control [21.871879862642235]
Real-Time Sensing (RTS) systems must continuously acquire, update, integrate, and apply knowledge to adapt to real-world dynamics. We investigate how the data distribution shift from ideal to practical RTS scenarios affects Artificial Intelligence (AI) model performance. We develop a novel Sample-driven Control for Federated Continual Learning (SCFL) technique, specifically designed for mobile edge networks with RTS capabilities.
arXiv Detail & Related papers (2023-10-11T13:50:28Z) - Optimizing Attention and Cognitive Control Costs Using Temporally-Layered Architectures [0.9831489366502302]
Biological control achieves remarkable performance while also optimizing computational energy expenditure and decision frequency.
We propose a Decision Bounded Markov Decision Process (DB-MDP), that constrains the number of decisions and computational energy available to agents in reinforcement learning environments.
We introduce a biologically-inspired, Temporally Layered Architecture (TLA), enabling agents to manage computational costs through two layers with distinct time scales and energy requirements.
arXiv Detail & Related papers (2023-05-30T02:59:06Z) - Sustainable AIGC Workload Scheduling of Geo-Distributed Data Centers: A Multi-Agent Reinforcement Learning Approach [48.18355658448509]
Recent breakthroughs in generative artificial intelligence have triggered a surge in demand for machine learning training, which poses significant cost burdens and environmental challenges due to its substantial energy consumption.
Scheduling training jobs among geographically distributed cloud data centers unveils the opportunity to optimize the usage of computing capacity powered by inexpensive and low-carbon energy.
We propose an algorithm based on multi-agent reinforcement learning and actor-critic methods to learn the optimal collaborative scheduling strategy through interacting with a cloud system built with real-life workload patterns, energy prices, and carbon intensities.
arXiv Detail & Related papers (2023-04-17T02:12:30Z) - Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
arXiv Detail & Related papers (2022-10-21T21:59:42Z) - Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.