GymFG: A Framework with a Gym Interface for FlightGear
- URL: http://arxiv.org/abs/2004.12481v1
- Date: Sun, 26 Apr 2020 21:06:20 GMT
- Title: GymFG: A Framework with a Gym Interface for FlightGear
- Authors: Andrew Wood, Ali Sydney, Peter Chin, Bishal Thapa, Ryan Ross
- Abstract summary: We develop GymFG, which couples and extends a high-fidelity, open-source flight simulator with a robust agent learning framework.
We have demonstrated the use of GymFG to train an autonomous aerial agent using Imitation Learning.
- Score: 2.769397444183181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decades, progress in deployable autonomous flight systems has
slowly stagnated. This is reflected in today's production aircraft, where
pilots only enable simple physics-based systems such as autopilot for takeoff,
landing, navigation, and terrain/traffic avoidance. Evidently, autonomy has not
gained the community's trust for tasks demanding higher problem complexity and
cognitive workload. To address trust, we must revisit the process for
developing autonomous capabilities: modeling and simulation. Given the
prohibitive costs for live tests, we need to prototype and evaluate autonomous
aerial agents in a high fidelity flight simulator with autonomous learning
capabilities applicable to flight systems: such an open-source development
platform is not available. As a result, we have developed GymFG: GymFG couples
and extends a high fidelity, open-source flight simulator and a robust agent
learning framework to facilitate learning of more complex tasks. Furthermore,
we have demonstrated the use of GymFG to train an autonomous aerial agent using
Imitation Learning. With GymFG, we can now deploy innovative ideas to address
complex problems and build the trust necessary to move prototypes to the
real-world.
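The abstract describes GymFG as exposing the high-fidelity flight simulator through a Gym-style agent interface. The sketch below illustrates that interface pattern only; the class, observation fields, action semantics, and reward are illustrative assumptions, since the abstract does not specify GymFG's actual API.

```python
# Minimal sketch of a Gym-style environment interface like the one GymFG
# provides for FlightGear. All names and dynamics here are toy placeholders,
# not GymFG's real API.

class FlightEnvSketch:
    """Toy flight environment following the reset()/step() Gym convention."""

    def __init__(self):
        self.altitude = 0.0
        self.steps = 0

    def reset(self):
        # Reset simulator state and return the initial observation.
        self.altitude = 0.0
        self.steps = 0
        return {"altitude": self.altitude}

    def step(self, action):
        # action: commanded climb rate per step (scalar, purely illustrative).
        self.altitude += action
        self.steps += 1
        obs = {"altitude": self.altitude}
        reward = -abs(1000.0 - self.altitude)  # e.g. reward for holding 1000 ft
        done = self.steps >= 100
        return obs, reward, done, {}

# Typical agent loop against a Gym-style environment:
env = FlightEnvSketch()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = 10.0  # fixed climb command; a learned policy would choose this
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

An imitation-learning agent, as demonstrated in the paper, would replace the fixed action with a policy trained to match expert demonstrations collected through the same interface.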
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z) - Learning to Fly in Seconds [7.259696592534715]
We show how curriculum learning and a highly optimized simulator enhance sample complexity and lead to fast training times.
Our framework enables Simulation-to-Reality (Sim2Real) transfer for direct control after only 18 seconds of training on a consumer-grade laptop.
arXiv Detail & Related papers (2023-11-22T01:06:45Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z) - AI Enabled Maneuver Identification via the Maneuver Identification
Challenge [5.628624906988051]
Maneuver ID is an AI challenge using real-world Air Force flight simulator data.
This dataset has been publicly released at Maneuver-ID.mit.edu.
We have applied a variety of AI methods to separate "good" vs "bad" simulator data and categorize and characterize maneuvers.
arXiv Detail & Related papers (2022-11-28T16:55:32Z) - A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a
Platform [0.0]
We propose a reinforcement learning framework (ROS-RL) based on Gazebo, a physical simulation platform.
We use three continuous-action-space reinforcement learning algorithms in the framework to address the problem of autonomous drone landing.
arXiv Detail & Related papers (2022-09-07T06:33:57Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free
Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries combined with a carefully tuned robot controller lead to a quadruped learning to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Learning to Fly -- a Gym Environment with PyBullet Physics for
Reinforcement Learning of Multi-agent Quadcopter Control [0.0]
We propose an open-source environment for multiple quadcopters based on the Bullet physics engine.
Its multi-agent and vision-based reinforcement learning interfaces, as well as its support for realistic collisions and aerodynamic effects, make it, to the best of our knowledge, a first of its kind.
arXiv Detail & Related papers (2021-03-03T02:47:59Z) - Model-Based Meta-Reinforcement Learning for Flight with Suspended
Payloads [69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
arXiv Detail & Related papers (2020-04-23T17:43:56Z) - Learning to Fly via Deep Model-Based Reinforcement Learning [37.37420200406336]
We learn a thrust-attitude controller for a quadrotor through model-based reinforcement learning.
We show that "learning to fly" can be achieved with less than 30 minutes of experience with a single drone.
arXiv Detail & Related papers (2020-03-19T15:55:39Z) - AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in drone racing.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.