Learn-n-Route: Learning implicit preferences for vehicle routing
- URL: http://arxiv.org/abs/2101.03936v1
- Date: Mon, 11 Jan 2021 14:57:46 GMT
- Title: Learn-n-Route: Learning implicit preferences for vehicle routing
- Authors: Rocsildes Canoy, Víctor Bucarey, Jayanta Mandi, Tias Guns
- Abstract summary: We investigate a learning decision support system for vehicle routing, where the routing engine learns implicit preferences that human planners have when manually creating route plans (or routings).
The goal is to use these learned subjective preferences on top of the distance-based objective criterion in vehicle routing systems.
- Score: 9.434400627011108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate a learning decision support system for vehicle routing, where
the routing engine learns implicit preferences that human planners have when
manually creating route plans (or routings). The goal is to use these learned
subjective preferences on top of the distance-based objective criterion in
vehicle routing systems. This is an alternative to the practice of
distinctively formulating a custom VRP for every company with its own routing
requirements. Instead, we assume the presence of past vehicle routing solutions
over similar sets of customers, and learn to make similar choices. The learning
approach is based on the concept of learning a Markov model, which corresponds
to a probabilistic transition matrix, rather than a deterministic distance
matrix. This nevertheless allows us to use existing arc routing VRP software in
creating the actual routings, and to optimize over both distances and
preferences at the same time. For the learning, we explore different schemes to
construct the probabilistic transition matrix that can co-evolve with changing
preferences over time. Our results on a use-case with a small transportation
company show that our method is able to generate results that are close to the
manually created solutions, without needing to characterize all constraints and
sub-objectives explicitly. Even in the case of changes in the customer sets,
our method is able to find solutions that are closer to the actual routings
than when using only distances, and hence, solutions that require fewer manual
changes when transformed into practical routings.
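The following is a minimal, illustrative sketch of the idea described in the abstract, not the paper's exact formulation: arc transition frequencies counted from past routings are normalized into a probabilistic transition matrix, and negative log-probabilities are blended with distances into a single arc-cost matrix that a standard distance-based VRP/TSP solver can consume unchanged. The Laplace smoothing constant, the max-normalization, and the weight beta are assumptions made purely for this example.

```python
import numpy as np

def transition_matrix(past_routes, n_stops, alpha=1.0):
    """Estimate a Markov transition matrix from historical routings.

    past_routes: list of routes, each a list of stop indices (0 = depot),
                 e.g. [0, 2, 1, 3, 0] for one vehicle's tour.
    alpha: Laplace smoothing so unseen arcs keep a small probability.
    """
    counts = np.zeros((n_stops, n_stops))
    for route in past_routes:
        for i, j in zip(route[:-1], route[1:]):
            counts[i, j] += 1
    probs = counts + alpha
    np.fill_diagonal(probs, 0.0)          # no self-loops
    return probs / probs.sum(axis=1, keepdims=True)

def preference_cost(dist, probs, beta=0.5):
    """Blend distance with learned preferences into one arc-cost matrix.

    Minimising -log(p) maximises the product of transition probabilities,
    so an existing distance-based VRP/TSP solver can be reused as-is.
    beta trades off distance (beta=1) against preferences (beta=0).
    """
    dist_n = dist / dist.max()            # put both terms on a comparable scale
    pref = -np.log(probs + 1e-12)
    pref_n = pref / pref.max()
    return beta * dist_n + (1 - beta) * pref_n

# Hypothetical toy instance: depot 0 and three customers.
past = [[0, 1, 2, 3, 0], [0, 1, 3, 2, 0], [0, 1, 2, 3, 0]]
dist = np.array([[0, 4, 6, 5],
                 [4, 0, 2, 7],
                 [6, 2, 0, 3],
                 [5, 7, 3, 0]], dtype=float)
P = transition_matrix(past, n_stops=4)
cost = preference_cost(dist, P, beta=0.5)
print(np.round(cost, 2))                  # feed this matrix to any VRP solver
```

Because the preferences enter only through the arc-cost matrix, rebuilding that matrix as new manually created routings arrive lets the learned preferences co-evolve over time without changing the solver.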
Related papers
- DynamicRouteGPT: A Real-Time Multi-Vehicle Dynamic Navigation Framework Based on Large Language Models [13.33340860174857]
Real-time dynamic path planning in complex traffic environments presents challenges, such as varying traffic volumes and signal wait times.
Traditional static routing algorithms like Dijkstra and A* compute shortest paths but often fail under dynamic conditions.
This paper proposes a novel approach based on causal inference for real-time dynamic path planning, balancing global and local optimality.
arXiv Detail & Related papers (2024-08-26T11:19:58Z) - NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z) - Inverse Optimization for Routing Problems [3.282021317933024]
We propose a method for learning decision-makers' behavior in routing problems using Inverse Optimization (IO).
Our examples and results showcase the flexibility and real-world potential of the proposed IO methodology to learn from decision-makers' decisions in routing problems.
arXiv Detail & Related papers (2023-07-14T14:03:47Z) - XRoute Environment: A Novel Reinforcement Learning Environment for
Routing [8.797544401458476]
We introduce the XRoute Environment, a new reinforcement learning environment.
Agents are trained to select and route nets in an advanced, end-to-end routing framework.
The resulting environment is challenging yet easy to use, customize, and extend with additional scenarios.
arXiv Detail & Related papers (2023-05-23T08:46:25Z) - NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z) - Deep Inverse Reinforcement Learning for Route Choice Modeling [0.6853165736531939]
Route choice modeling is a fundamental task in transportation planning and demand forecasting.
This study proposes a general deep inverse reinforcement learning (IRL) framework for link-based route choice modeling.
Experiment results based on taxi GPS data from Shanghai, China validate the improved performance of the proposed model.
arXiv Detail & Related papers (2022-06-18T06:33:06Z) - Find a Way Forward: a Language-Guided Semantic Map Navigator [53.69229615952205]
This paper attacks the problem of language-guided navigation from a new perspective.
We use novel semantic navigation maps, which enable robots to carry out natural language instructions and move to a target position based on the map observations.
The proposed approach has noticeable performance gains, especially in long-distance navigation cases.
arXiv Detail & Related papers (2022-03-07T07:40:33Z) - Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks
Relying on Real Flight Data: From Single-Objective to Near-Pareto
Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim for simultaneously minimizing the delay, maximizing the path capacity, and maximizing the path lifetime.
arXiv Detail & Related papers (2021-10-28T14:18:22Z) - Ranking Cost: Building An Efficient and Scalable Circuit Routing Planner
with Evolution-Based Optimization [49.207538634692916]
We propose a new algorithm for circuit routing, named Ranking Cost, to form an efficient and trainable router.
In our method, we introduce a new set of variables called cost maps, which help the A* router find proper paths.
Our algorithm is trained in an end-to-end manner and does not use any artificial data or human demonstration.
arXiv Detail & Related papers (2021-10-08T07:22:45Z) - Data Driven VRP: A Neural Network Model to Learn Hidden Preferences for
VRP [9.434400627011108]
We use a neural network model to estimate the arc probabilities, which allows for additional features and automatic parameter estimation.
We investigate the difference with a prior weighted Markov counting approach, and study the applicability of neural networks in this setting.
arXiv Detail & Related papers (2021-08-10T10:53:44Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.