When Mining Electric Locomotives Meet Reinforcement Learning
- URL: http://arxiv.org/abs/2311.08153v1
- Date: Tue, 14 Nov 2023 13:29:01 GMT
- Title: When Mining Electric Locomotives Meet Reinforcement Learning
- Authors: Ying Li, Zhencai Zhu, Xiaoqiang Li, Chunyu Yang and Hao Lu
- Abstract summary: A mining electric locomotive control method that can adapt to different complex mining environments is needed.
In this paper, we propose an improved epsilon-greedy (IEG) algorithm that better balances exploration and exploitation.
The simulation results show that this method ensures that the locomotives follow the front vehicle safely and respond promptly to sudden obstacles on the road.
- Score: 12.757241771695652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the most important auxiliary transportation equipment in coal mines,
mining electric locomotives are mostly operated manually at present. However,
due to the complex and ever-changing coal mine environment, electric locomotive
safety accidents have occurred frequently in recent years. A mining electric
locomotive control method that can adapt to different complex mining
environments is therefore needed. Reinforcement Learning (RL) is concerned with
how artificial agents ought to take actions in an environment so as to maximize
reward, which can help achieve automatic control of mining electric
locomotives. In this paper, we present how to apply RL to the autonomous
control of mining electric locomotives. To achieve more precise control, we
further propose an improved epsilon-greedy (IEG) algorithm that better balances
exploration and exploitation. To verify the effectiveness of this method, a
co-simulation platform for autonomous control of mining electric locomotives is
built that can complete closed-loop simulation of the vehicles. The simulation
results show that this method ensures that the locomotives follow the front
vehicle safely and respond promptly to sudden obstacles on the road when the
vehicle operates in complex and uncertain coal mine environments.
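The abstract does not spell out the IEG algorithm itself. For orientation, below is a minimal sketch of standard epsilon-greedy action selection with a decaying epsilon, a common way to shift gradually from exploration to exploitation; the exponential decay schedule and all function names here are illustrative assumptions, not the paper's method.

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon pick a random action (explore);
    otherwise pick the action with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def decayed_epsilon(step, eps_start=1.0, eps_min=0.05, decay=0.995):
    """Exponentially decay epsilon toward a floor as training progresses,
    so early episodes explore widely and later episodes mostly exploit."""
    return max(eps_min, eps_start * decay ** step)
```

An "improved" variant like IEG would replace or augment this fixed schedule, e.g. by adapting epsilon to the observed returns, but the specific improvement is only described in the paper itself.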
Related papers
- Scenarios Engineering driven Autonomous Transportation in Open-Pit Mines [21.359823385387937]
A novel scenarios engineering (SE) methodology for the autonomous mining truck is proposed for open-pit mines.
This research addresses the unique challenges of autonomous transportation in open-pit mining, promoting productivity, safety, and performance in mining operations.
arXiv Detail & Related papers (2024-03-15T02:26:55Z) - Grow Your Limits: Continuous Improvement with Real-World RL for Robotic
Locomotion [66.69666636971922]
We present APRL, a policy regularization framework that modulates the robot's exploration over the course of training.
APRL enables a quadrupedal robot to efficiently learn to walk entirely in the real world within minutes.
arXiv Detail & Related papers (2023-10-26T17:51:46Z) - Recent Progress in Energy Management of Connected Hybrid Electric
Vehicles Using Reinforcement Learning [6.851787321368938]
The shift towards electrifying transportation aims to curb environmental concerns related to fossil fuel consumption.
The evolution of energy management systems (EMS) from HEVs to connected hybrid electric vehicles (CHEVs) represents a pivotal shift.
This review bridges the gap, highlighting challenges, advancements, and potential contributions of RL-based solutions for future sustainable transportation systems.
arXiv Detail & Related papers (2023-08-28T14:12:52Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Risk-based implementation of COLREGs for autonomous surface vehicles
using deep reinforcement learning [1.304892050913381]
Deep reinforcement learning (DRL) has shown great potential for a wide range of applications.
In this work, a subset of the International Regulations for Preventing Collisions at Sea (COLREGs) is incorporated into a DRL-based path following and obstacle avoidance system.
The resulting autonomous agent dynamically interpolates between path following and COLREG-compliant collision avoidance in the training scenario, isolated encounter situations, and AIS-based simulations of real-world scenarios.
arXiv Detail & Related papers (2021-11-30T21:32:59Z) - A Multi-Agent Deep Reinforcement Learning Coordination Framework for
Connected and Automated Vehicles at Merging Roadways [0.0]
Connected and automated vehicles (CAVs) have the potential to address congestion, accidents, energy consumption, and greenhouse gas emissions.
We propose a framework for coordinating CAVs such that stop-and-go driving is eliminated.
We demonstrate the coordination of CAVs through numerical simulations and show that a smooth traffic flow is achieved by eliminating stop-and-go driving.
arXiv Detail & Related papers (2021-09-23T22:26:52Z) - An Energy-Saving Snake Locomotion Gait Policy Using Deep Reinforcement
Learning [0.0]
In this work, a snake locomotion gait policy is developed via deep reinforcement learning (DRL) for energy-efficient control.
We apply proximal policy optimization (PPO) to each joint motor parameterized by angular velocity and the DRL agent learns the standard serpenoid curve at each timestep.
Comparing to conventional control strategies, the snake robots controlled by the trained PPO agent can achieve faster movement and more energy-efficient locomotion gait.
arXiv Detail & Related papers (2021-03-08T02:06:44Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical
Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.