Reinforcement Learning-Based Energy-Aware Coverage Path Planning for Precision Agriculture
- URL: http://arxiv.org/abs/2601.16405v1
- Date: Fri, 23 Jan 2026 02:33:14 GMT
- Title: Reinforcement Learning-Based Energy-Aware Coverage Path Planning for Precision Agriculture
- Authors: Beining Wu, Zihao Ding, Leo Ostigaard, Jun Huang
- Abstract summary: This paper proposes an energy-aware CPP framework grounded in Soft Actor-Critic (SAC) reinforcement learning. To enable robust and adaptive decision-making under energy limitations, the framework integrates Convolutional Neural Networks (CNNs) for spatial feature extraction. A dedicated reward function is designed to jointly optimize coverage efficiency, energy consumption, and return-to-base constraints.
- Score: 6.218614094739671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coverage Path Planning (CPP) is a fundamental capability for agricultural robots; however, existing solutions often overlook energy constraints, resulting in incomplete operations in large-scale or resource-limited environments. This paper proposes an energy-aware CPP framework grounded in Soft Actor-Critic (SAC) reinforcement learning, designed for grid-based environments with obstacles and charging stations. To enable robust and adaptive decision-making under energy limitations, the framework integrates Convolutional Neural Networks (CNNs) for spatial feature extraction and Long Short-Term Memory (LSTM) networks for temporal dynamics. A dedicated reward function is designed to jointly optimize coverage efficiency, energy consumption, and return-to-base constraints. Experimental results demonstrate that the proposed approach consistently achieves over 90% coverage while ensuring energy safety, outperforming traditional heuristic algorithms such as Rapidly-exploring Random Tree (RRT), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO) baselines by 13.4-19.5% in coverage and reducing constraint violations by 59.9-88.3%. These findings validate the proposed SAC-based framework as an effective and scalable solution for energy-constrained CPP in agricultural robotics.
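The abstract's reward design could be sketched as a weighted sum of a coverage term, an energy term, and a return-to-base safety penalty. The function below is a minimal illustrative sketch; the weights, term names, and penalty structure are assumptions, not the paper's actual formulation:

```python
def cpp_reward(newly_covered, energy_used, at_base, energy_left,
               w_cov=1.0, w_energy=0.05, w_safety=10.0):
    """Hypothetical per-step reward for energy-aware coverage path planning.

    newly_covered: number of grid cells covered this step
    energy_used:   energy spent this step
    at_base:       whether the robot is at a charging station
    energy_left:   remaining battery after the step
    """
    reward = w_cov * newly_covered        # encourage covering new cells
    reward -= w_energy * energy_used      # penalize energy expenditure
    if energy_left <= 0 and not at_base:  # hard penalty: stranded off-base
        reward -= w_safety
    return reward
```

A shaping like this lets the SAC agent trade coverage gain against battery drain while making energy-safety violations strictly dominated.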
Related papers
- EARL: Energy-Aware Optimization of Liquid State Machines for Pervasive AI [0.3867363075280543]
Pervasive AI increasingly depends on on-device learning systems that deliver low-latency and energy-efficient computation under strict resource constraints. Liquid State Machines offer a promising approach for low-power temporal processing in pervasive and neuromorphic systems. This work presents EARL, an energy-aware reinforcement learning framework that integrates Bayesian optimization with an adaptive reinforcement learning based selection policy.
arXiv Detail & Related papers (2026-01-08T18:31:11Z) - Joint Resource Management for Energy-efficient UAV-assisted SWIPT-MEC: A Deep Reinforcement Learning Approach [50.52139512096988]
6G Internet of Things (IoT) networks face challenges in remote areas and disaster scenarios where ground infrastructure is unavailable. This paper proposes a novel unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) system enhanced by directional antennas to provide both computational and energy support for ground edge terminals.
arXiv Detail & Related papers (2025-05-06T06:46:19Z) - Generalising Battery Control in Net-Zero Buildings via Personalised Federated RL [5.195669033269619]
This work studies the challenge of optimal energy management in building-based microgrids through a collaborative and privacy-preserving framework. We evaluate two common RL algorithms (PPO and TRPO) in different collaborative setups to manage distributed energy resources. Our approach emphasizes reducing energy costs and carbon emissions while ensuring privacy.
arXiv Detail & Related papers (2024-12-30T13:38:31Z) - Energy-Aware Dynamic Neural Inference [39.04688735618206]
We introduce an on-device adaptive inference system equipped with an energy-harvester and finite-capacity energy storage.
We show that, as the rate of the ambient energy increases, energy- and confidence-aware control schemes show approximately 5% improvement in accuracy.
We derive a principled policy with theoretical guarantees for confidence-aware and -agnostic controllers.
arXiv Detail & Related papers (2024-11-04T16:51:22Z) - A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z) - Coverage and Capacity Optimization in STAR-RISs Assisted Networks: A Machine Learning Approach [102.00221938474344]
A novel model is proposed for the coverage and capacity optimization of simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) assisted networks.
A loss function-based update strategy is the core point, which is able to calculate weights for both loss functions of coverage and capacity by a min-norm solver at each update.
The numerical results demonstrate that the investigated update strategy outperforms the fixed weight-based MO algorithms.
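The min-norm weighting described above has a simple closed form when only two loss gradients (coverage and capacity) are involved. The sketch below is an illustrative two-objective version of such a solver, assuming flattened gradient vectors; it is not the paper's implementation:

```python
import numpy as np

def min_norm_weights(g1, g2):
    """Closed-form min-norm point on the segment between two gradients.

    Finds gamma in [0, 1] minimizing ||gamma*g1 + (1-gamma)*g2||^2,
    the two-objective special case of a min-norm solver.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:                      # identical gradients: split evenly
        return 0.5, 0.5
    gamma = float((g2 - g1) @ g2) / denom  # unconstrained minimizer
    gamma = min(max(gamma, 0.0), 1.0)      # project onto the simplex
    return gamma, 1.0 - gamma
```

The resulting pair of weights can then scale the coverage and capacity losses at each update step.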
arXiv Detail & Related papers (2022-04-13T13:52:22Z) - Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Enforcing Policy Feasibility Constraints through Differentiable Projection for Energy Optimization [57.88118988775461]
We propose PROjected Feasibility (PROF) to enforce convex operational constraints within neural policies.
We demonstrate PROF on two applications: energy-efficient building operation and inverter control.
arXiv Detail & Related papers (2021-05-19T01:58:10Z) - Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
However, FEEL can significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z) - ECO: Enabling Energy-Neutral IoT Devices through Runtime Allocation of Harvested Energy [0.8774604259603302]
We present a runtime-based energy-allocation framework to optimize the utility of the target device under energy constraints.
The proposed framework uses an efficient iterative algorithm to compute initial energy allocations at the beginning of a day.
We evaluate this framework using solar and motion energy harvesting modalities and American Time Use Survey data from 4772 different users.
arXiv Detail & Related papers (2021-02-26T17:21:25Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.