RouteExplainer: An Explanation Framework for Vehicle Routing Problem
- URL: http://arxiv.org/abs/2403.03585v1
- Date: Wed, 6 Mar 2024 10:01:35 GMT
- Title: RouteExplainer: An Explanation Framework for Vehicle Routing Problem
- Authors: Daisuke Kikuta and Hiroki Ikeuchi and Kengo Tajiri and Yuusuke Nakano
- Abstract summary: We propose RouteExplainer, a post-hoc explanation framework that explains the influence of each edge in a generated route.
Our framework realizes this by rethinking a route as the sequence of actions and extending counterfactual explanations based on the action influence model to VRP.
To enhance the explanation, we additionally propose an edge classifier that infers the intentions of each edge, a loss function to train the edge classifier, and explanation-text generation by Large Language Models (LLMs).
- Score: 1.7034420812099471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Vehicle Routing Problem (VRP) is a widely studied combinatorial
optimization problem and has been applied to various practical problems. While
the explainability for VRP is significant for improving the reliability and
interactivity in practical VRP applications, it remains unexplored. In this
paper, we propose RouteExplainer, a post-hoc explanation framework that
explains the influence of each edge in a generated route. Our framework
realizes this by rethinking a route as the sequence of actions and extending
counterfactual explanations based on the action influence model to VRP. To
enhance the explanation, we additionally propose an edge classifier that infers
the intentions of each edge, a loss function to train the edge classifier, and
explanation-text generation by Large Language Models (LLMs). We quantitatively
evaluate our edge classifier on four different VRPs. The results demonstrate
its rapid computation while maintaining reasonable accuracy, thereby
highlighting its potential for deployment in practical applications. Moreover,
on the subject of a tourist route, we qualitatively evaluate explanations
generated by our framework. This evaluation not only validates our framework
but also shows the synergy between explanation frameworks and LLMs. See
https://ntt-dkiku.github.io/xai-vrp for our code, datasets, models, and demo.
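The core idea of explaining each edge by comparing the factual route against a counterfactual route that avoids it can be sketched in a few lines. The following is a minimal illustrative toy, not the paper's implementation: it measures an edge's influence as the extra cost incurred by the best route that keeps the same prefix but takes a different next step, whereas RouteExplainer derives influence from an action influence model. The brute-force enumeration is only feasible for tiny instances.

```python
import itertools

def route_cost(route, dist):
    """Total length of a route under a distance matrix."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def counterfactual_influence(route, dist):
    """For each edge (a, b) in the route, find the cheapest counterfactual
    route that shares the prefix up to a but does not take edge (a, b),
    and report the cost gap as a simple proxy for the edge's influence.
    Illustrative only: exhaustive enumeration, exponential in route length."""
    factual = route_cost(route, dist)
    influences = {}
    for i, (a, b) in enumerate(zip(route, route[1:])):
        prefix = route[: i + 1]
        remaining = route[i + 1:]
        best = None
        for perm in itertools.permutations(remaining):
            if perm[0] == b:
                continue  # would reuse the factual edge (a, b)
            cand = prefix + list(perm)
            cost = route_cost(cand, dist)
            if best is None or cost < best:
                best = cost
        if best is not None:  # last edge may have no alternative
            influences[(a, b)] = best - factual
    return influences
```

For example, with a 4-node symmetric distance matrix and the route `[0, 1, 2, 3]`, the returned dictionary maps each edge (except the forced final one) to the cost penalty of avoiding it; larger values indicate edges whose removal would degrade the route more.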
Related papers
- Improve Vision Language Model Chain-of-thought Reasoning [86.83335752119741]
Chain-of-thought (CoT) reasoning in vision language models (VLMs) is crucial for improving interpretability and trustworthiness.
We show that training VLM on short answers does not generalize well to reasoning tasks that require more detailed responses.
arXiv Detail & Related papers (2024-10-21T17:00:06Z)
- Unleashing the Power of Task-Specific Directions in Parameter Efficient Fine-tuning [65.31677646659895]
This paper focuses on the concept of task-specific directions (TSDs)-critical for transitioning large models from pretrained states to task-specific enhancements in PEFT.
We introduce a novel approach, LoRA-Dash, which aims to maximize the impact of TSDs during the fine-tuning process, thereby enhancing model performance on targeted tasks.
arXiv Detail & Related papers (2024-09-02T08:10:51Z)
- GASE: Graph Attention Sampling with Edges Fusion for Solving Vehicle Routing Problems [6.084414764415137]
We propose an adaptive Graph Attention Sampling with the Edges Fusion framework to solve vehicle routing problems.
Our proposed model outperforms the existing methods by 2.08%-6.23% and shows stronger generalization ability.
arXiv Detail & Related papers (2024-05-21T03:33:07Z)
- Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z)
- MacFormer: Map-Agent Coupled Transformer for Real-time and Robust Trajectory Prediction [26.231420111336565]
We propose Map-Agent Coupled Transformer (MacFormer) for real-time and robust trajectory prediction.
Our framework explicitly incorporates map constraints into the network via two carefully designed modules named coupled map and reference extractor.
We evaluate our approach on the Argoverse 1, Argoverse 2, and nuScenes real-world benchmarks, achieving state-of-the-art performance on all of them.
arXiv Detail & Related papers (2023-08-20T14:27:28Z)
- Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z)
- CoRTX: Contrastive Framework for Real-time Explanation [39.80484758207907]
We propose a COntrastive Real-Time eXplanation (CoRTX) framework to learn the explanation-oriented representation.
Specifically, we design a synthetic strategy to select positive and negative instances for the learning of explanation.
arXiv Detail & Related papers (2023-03-05T23:01:02Z)
- Semantic-aware Modular Capsule Routing for Visual Question Answering [55.03883681191765]
We propose a Semantic-aware modUlar caPsulE framework, termed as SUPER, to better capture the instance-specific vision-semantic characteristics.
We comparatively justify the effectiveness and generalization ability of our proposed SUPER scheme over five benchmark datasets.
arXiv Detail & Related papers (2022-07-21T10:48:37Z)
- Joint Answering and Explanation for Visual Commonsense Reasoning [46.44588492897933]
Visual Commonsense Reasoning endeavors to pursue a more high-level visual comprehension.
It is composed of two indispensable processes: question answering over a given image and rationale inference for answer explanation.
We present a plug-and-play knowledge distillation enhanced framework to couple the question answering and rationale inference processes.
arXiv Detail & Related papers (2022-02-25T11:26:52Z)
- A Peek Into the Reasoning of Neural Networks: Interpreting with Structural Visual Concepts [38.215184251799194]
We propose a framework (VRX) to interpret classification NNs with intuitive structural visual concepts.
By means of knowledge distillation, we show VRX can take a step towards mimicking the reasoning process of NNs.
arXiv Detail & Related papers (2021-05-01T15:47:42Z)
- Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.