Large region targets observation scheduling by multiple satellites using
resampling particle swarm optimization
- URL: http://arxiv.org/abs/2206.10178v1
- Date: Tue, 21 Jun 2022 08:18:02 GMT
- Title: Large region targets observation scheduling by multiple satellites using
resampling particle swarm optimization
- Authors: Yi Gu, Chao Han, Yuhan Chen, Shenggang Liu, Xinwei Wang
- Abstract summary: The last decades have witnessed a rapid increase of Earth observation satellites (EOSs).
This paper aims to address the EOSs observation scheduling problem for large region targets.
- Score: 0.3324876873771104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The last decades have witnessed a rapid increase of Earth observation
satellites (EOSs), leading to the increasing complexity of EOSs scheduling. On
account of the widespread applications of large region observation, this paper
aims to address the EOSs observation scheduling problem for large region
targets. A rapid coverage calculation method employing a projection reference
plane and a polygon clipping technique is first developed. We then formulate a
nonlinear integer programming model for the scheduling problem, where the
objective function is calculated based on the developed coverage calculation
method. A greedy initialization-based resampling particle swarm optimization
(GI-RPSO) algorithm is proposed to solve the model. The adopted greedy
initialization strategy and particle resampling method contribute to generating
efficient and effective solutions during the evolution process. In the end,
extensive experiments are conducted to illustrate the effectiveness and
reliability of the proposed method. Compared to the traditional particle swarm
optimization and the widely used greedy algorithm, the proposed GI-RPSO can
improve the scheduling result by 5.42% and 15.86%, respectively.
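The two ingredients of GI-RPSO described above, a greedy initialization of the swarm and periodic resampling of poor particles around the incumbent best, can be sketched for a generic continuous objective. This is an illustrative toy only: the function and parameter names are ours, and it does not implement the paper's nonlinear integer programming model or its coverage calculation.

```python
import numpy as np

def resampling_pso(objective, greedy_seed, dim, n_particles=30,
                   iters=200, w=0.7, c1=1.5, c2=1.5,
                   resample_every=20, resample_frac=0.2, seed=0):
    """Maximize `objective` with a PSO that (a) seeds one particle with a
    greedy solution and (b) periodically resamples the worst particles
    around the global best. Illustrative sketch, not the paper's model."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    x[0] = greedy_seed                      # greedy initialization
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmax()].copy()    # global best position

    for t in range(1, iters + 1):
        # Standard PSO velocity and position update with random step lengths.
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmax()].copy()

        # Resampling step: relocate the worst particles near the global best.
        if t % resample_every == 0:
            k = max(1, int(resample_frac * n_particles))
            worst = np.argsort(pbest_val)[:k]
            x[worst] = g + 0.1 * rng.standard_normal((k, dim))
            v[worst] = 0.0
    return g, pbest_val.max()
```

On a toy concave objective such as `lambda p: -np.sum((p - 0.5) ** 2)`, seeding with any reasonable greedy guess and resampling a fraction of the swarm keeps particles concentrated near the best region found so far, which is the intuition behind the reported gains over plain PSO.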
Related papers
- Revisiting Space Mission Planning: A Reinforcement Learning-Guided Approach for Multi-Debris Rendezvous [15.699822139827916]
The aim is to optimize the order in which the given debris are visited so as to minimize the total rendezvous time for the entire mission.
A neural network (NN) policy is developed, trained on simulated space missions with varying debris fields.
The reinforcement learning approach demonstrates a significant improvement in planning efficiency.
arXiv Detail & Related papers (2024-09-25T12:50:01Z) - Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation [49.49868273653921]
Diffusion models are promising for joint trajectory prediction and controllable generation in autonomous driving.
We introduce Optimal Gaussian Diffusion (OGD) and Estimated Clean Manifold (ECM) Guidance.
Our methodology streamlines the generative process, enabling practical applications with reduced computational overhead.
arXiv Detail & Related papers (2024-08-01T17:59:59Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - A Schedule of Duties in the Cloud Space Using a Modified Salp Swarm
Algorithm [0.0]
One of the most important NP-hard problems in the cloud domain is scheduling.
A collective-intelligence algorithm, the Salp Swarm Algorithm (SSA), is extended, improved, and applied.
Results show that our algorithm generally outperforms the comparison algorithms.
arXiv Detail & Related papers (2023-09-18T02:48:41Z) - Using Particle Swarm Optimization as Pathfinding Strategy in a Space
with Obstacles [4.899469599577755]
Particle swarm optimization (PSO) is a population-based adaptive search algorithm.
In this paper, a pathfinding strategy is proposed to improve the efficiency of path planning for a broad range of applications.
arXiv Detail & Related papers (2021-12-16T12:16:02Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a globally optimal model under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Directed particle swarm optimization with Gaussian-process-based
function forecasting [15.733136147164032]
Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space towards the best-known global and local solutions with randomized step lengths.
We show that our algorithm attains desirable properties for exploratory and exploitative behavior.
arXiv Detail & Related papers (2021-02-08T13:02:57Z) - Motion-Encoded Particle Swarm Optimization for Moving Target Search
Using UAVs [4.061135251278187]
This paper presents a novel algorithm named the motion-encoded particle swarm optimization (MPSO) for finding a moving target with unmanned aerial vehicles (UAVs).
The proposed MPSO is developed to solve that problem by encoding the search trajectory as a series of UAV motion paths evolving over the generation of particles in a PSO algorithm.
Results from extensive simulations with existing methods show that the proposed MPSO improves the detection performance by 24% and time performance by 4.71 times compared to the original PSO.
arXiv Detail & Related papers (2020-10-05T14:17:49Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
arXiv Detail & Related papers (2020-02-04T23:03:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.