Contrastive Self-Supervised Learning at the Edge: An Energy Perspective
- URL: http://arxiv.org/abs/2510.08374v1
- Date: Thu, 09 Oct 2025 15:57:44 GMT
- Title: Contrastive Self-Supervised Learning at the Edge: An Energy Perspective
- Authors: Fernanda Famá, Roberto Pereira, Charalampos Kalalas, Paolo Dini, Lorena Qendro, Fahim Kawsar, Mohammad Malekzadeh
- Abstract summary: We conduct an evaluation of four widely used contrastive learning frameworks: SimCLR, MoCo, SimSiam, and Barlow Twins. We focus on the practical feasibility of these CL frameworks for edge and fog deployment, and introduce a systematic benchmarking strategy. Our findings reveal that SimCLR, contrary to its perceived computational cost, demonstrates the lowest energy consumption across various data regimes.
- Score: 47.71700347940481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While contrastive learning (CL) shows considerable promise in self-supervised representation learning, its deployment on resource-constrained devices remains largely underexplored. The substantial computational demands required for training conventional CL frameworks pose a set of challenges, particularly in terms of energy consumption, data availability, and memory usage. We conduct an evaluation of four widely used CL frameworks: SimCLR, MoCo, SimSiam, and Barlow Twins. We focus on the practical feasibility of these CL frameworks for edge and fog deployment, and introduce a systematic benchmarking strategy that includes energy profiling and reduced training data conditions. Our findings reveal that SimCLR, contrary to its perceived computational cost, demonstrates the lowest energy consumption across various data regimes. Finally, we also extend our analysis by evaluating lightweight neural architectures when paired with CL frameworks. Our study aims to provide insights into the resource implications of deploying CL in edge/fog environments with limited processing capabilities and opens several research directions for its future optimization.
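To make the comparison concrete, the sketch below (a rough illustration, not the authors' benchmarking code) pairs SimCLR's NT-Xent contrastive loss with a per-step GPU energy reading via NVML. The temperature of 0.5, the helper names, and the use of pynvml's nvmlDeviceGetTotalEnergyConsumption (which requires a reasonably recent NVIDIA GPU and driver) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import pynvml  # NVML bindings (pip install nvidia-ml-py); assumes an NVIDIA GPU is present


def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss used by SimCLR: for each embedding, the positive is the other
    augmented view of the same image; the remaining 2N-2 embeddings act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) unit-norm projections
    sim = (z @ z.t()) / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def energy_of_step_mj(step_fn, device_index=0):
    """Difference of NVML's cumulative GPU energy counter (millijoules) around one call."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    before = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    step_fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()                           # make sure the step has finished
    after = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    pynvml.nvmlShutdown()
    return after - before
```

In a benchmark of this kind, the same wrapper could be applied to one training step of each framework (SimCLR, MoCo, SimSiam, Barlow Twins) to compare per-step energy under identical data and hardware.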
Related papers
- Self-Supervised Learning at the Edge: The Cost of Labeling [41.11831047923664]
Contrastive learning (CL) has emerged as an alternative to traditional supervised machine learning solutions. SSL techniques for edge-based learning focus on trade-offs between model performance and energy efficiency. We demonstrate that tailored SSL strategies can achieve competitive performance while reducing resource consumption by up to 4X.
arXiv Detail & Related papers (2025-07-09T17:03:50Z)
- Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning [50.53703102032562]
Large-scale Transformer language models (LMs) trained solely on next-token prediction with web-scale data can solve a wide range of tasks. The mechanism behind this capability, known as in-context learning (ICL), remains both controversial and poorly understood.
arXiv Detail & Related papers (2025-05-16T08:50:42Z)
- Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications [5.213243471774097]
Neuromorphic Continual Learning (NCL) has emerged as a solution that leverages the principles of Spiking Neural Networks (SNNs). This survey covers several hybrid approaches that combine supervised and unsupervised learning paradigms. It also covers optimization techniques, including SNN operation reduction, weight quantization, and knowledge distillation.
arXiv Detail & Related papers (2024-10-11T19:49:53Z)
- Theory on Mixture-of-Experts in Continual Learning [72.42497633220547]
Continual learning (CL) has garnered significant attention because of its ability to adapt to new tasks that arrive over time. Catastrophic forgetting of old tasks has been identified as a major issue in CL, as the model adapts to new tasks. The mixture-of-experts (MoE) model has recently been shown to effectively mitigate catastrophic forgetting in CL by employing a gating network (a minimal gating sketch follows this entry).
arXiv Detail & Related papers (2024-06-24T08:29:58Z)
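As a hedged illustration of the gating mechanism mentioned in the entry above (a minimal sketch with assumed layer sizes, not the paper's model), a mixture-of-experts layer weights the outputs of several expert MLPs with a softmax gate:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedMoE(nn.Module):
    """Minimal mixture-of-experts layer: a softmax gating network weights the outputs of
    several expert MLPs. In continual learning, the gate can route different tasks to
    different experts so that learning a new task disturbs fewer shared weights."""

    def __init__(self, dim, num_experts=4, hidden=128):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                     # x: (batch, dim)
        weights = F.softmax(self.gate(x), dim=-1)             # (batch, num_experts)
        outs = torch.stack([e(x) for e in self.experts], 1)   # (batch, num_experts, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)      # gate-weighted mixture
```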
- Data Poisoning for In-context Learning [49.77204165250528]
In-context learning (ICL) has been recognized for its innovative ability to adapt to new tasks. This paper delves into the critical issue of ICL's susceptibility to data poisoning attacks. We introduce ICLPoison, a specialized attack framework conceived to exploit the learning mechanisms of ICL.
arXiv Detail & Related papers (2024-02-03T14:20:20Z)
- Analysis of the Memorization and Generalization Capabilities of AI Agents: Are Continual Learners Robust? [91.682459306359]
In continual learning (CL), an AI agent learns from non-stationary data streams in dynamic environments.
In this paper, a novel CL framework is proposed to achieve robust generalization to dynamic environments while retaining past knowledge.
The generalization and memorization performance of the proposed framework are theoretically analyzed.
arXiv Detail & Related papers (2023-09-18T21:00:01Z)
- From MNIST to ImageNet and Back: Benchmarking Continual Curriculum Learning [9.104068727716294]
Continual learning (CL) is one of the most promising trends in machine learning research.
We introduce two novel CL benchmarks that involve multiple heterogeneous tasks from six image datasets.
We additionally structure our benchmarks so that tasks are presented in increasing and decreasing order of complexity.
arXiv Detail & Related papers (2023-03-16T18:11:19Z)
- Self-adversarial Multi-scale Contrastive Learning for Semantic Segmentation of Thermal Facial Images [11.68189195596647]
We propose Self-Adversarial Multi-scale Contrastive Learning (SAM-CL) as a generic learning framework to train segmentation networks.
The SAM-CL framework comprises the SAM-CL loss function and a thermal image augmentation module (TiAug), a domain-specific augmentation technique that simulates unconstrained settings.
arXiv Detail & Related papers (2022-09-21T22:58:47Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means (a minimal sketch follows this list).
We find that CL correlates better with in-distribution model performance than other recent evaluation schemes.
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
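The Cluster Learnability evaluation summarized in the last entry can be sketched in a few lines; the 10 clusters, 5 neighbors, and 50/50 split below are placeholder assumptions rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def cluster_learnability(representations, n_clusters=10, n_neighbors=5, seed=0):
    """Pseudo-label representations with K-means, then score how well a KNN fit on one
    half predicts the pseudo-labels of the other half (higher = more learnable structure)."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(representations)
    x_fit, x_eval, y_fit, y_eval = train_test_split(
        representations, pseudo, test_size=0.5, random_state=seed, stratify=pseudo
    )
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(x_fit, y_fit)
    return knn.score(x_eval, y_eval)


# Example: evaluate random 64-dimensional "representations" for 1,000 samples.
if __name__ == "__main__":
    reps = np.random.default_rng(0).normal(size=(1000, 64))
    print(f"cluster learnability: {cluster_learnability(reps):.3f}")
```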
This list is automatically generated from the titles and abstracts of the papers on this site.