Vision Paper: Causal Inference for Interpretable and Robust Machine
Learning in Mobility Analysis
- URL: http://arxiv.org/abs/2210.10010v1
- Date: Tue, 18 Oct 2022 17:28:58 GMT
- Title: Vision Paper: Causal Inference for Interpretable and Robust Machine
Learning in Mobility Analysis
- Authors: Yanan Xin, Natasa Tagasovska, Fernando Perez-Cruz, Martin Raubal
- Abstract summary: Building intelligent transportation systems requires an intricate combination of artificial intelligence and mobility analysis.
The past few years have seen rapid development in transportation applications using advanced deep neural networks.
This vision paper emphasizes research challenges in deep learning-based mobility analysis that require interpretability and robustness.
- Score: 71.2468615993246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) is revolutionizing many areas of our lives,
leading to a new era of technological advancement. In particular, the
transportation sector stands to benefit from progress in AI, advancing the
development of intelligent transportation systems. Building intelligent
transportation systems requires an intricate combination of artificial
intelligence and mobility analysis. The past few years have seen rapid
development in transportation applications using advanced deep neural networks.
However, such deep neural networks are difficult to interpret and lack
robustness, which slows the deployment of these AI-powered algorithms in
practice. To improve their usability, increasing research efforts have been
devoted to developing interpretable and robust machine learning methods, among
which the causal inference approach has recently gained traction as it provides
interpretable and actionable information. However, most of these methods are
developed for image or sequential data, which do not satisfy the specific
requirements of mobility data analysis. This vision paper emphasizes research
challenges in deep learning-based mobility analysis that require
interpretability and robustness, summarizes recent developments in using causal
inference for improving the interpretability and robustness of machine learning
methods, and highlights opportunities in developing causally-enabled machine
learning models tailored for mobility analysis. This research direction will
make AI in the transportation sector more interpretable and reliable, thus
contributing to safer, more efficient, and more sustainable future
transportation systems.
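To make the causal perspective concrete, below is a minimal, hypothetical sketch (not taken from the paper) of backdoor adjustment on synthetic mobility data: rain confounds both a congestion charge (treatment) and travel time (outcome), and adjusting for the confounder recovers the true effect where a naive associational contrast does not. All variable names, coefficients, and the confounding structure are illustrative assumptions.

```python
# Minimal backdoor-adjustment sketch on synthetic mobility data (illustrative
# assumptions only; not the paper's method or data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

rain = rng.binomial(1, 0.3, n)              # confounder Z: rainy day
charge = rng.binomial(1, 0.2 + 0.5 * rain)  # treatment T: congestion charge, more likely when raining
travel_time = 30 + 10 * rain - 5 * charge + rng.normal(0, 2, n)  # outcome Y; true effect of T is -5 min

# Naive associational contrast E[Y | T=1] - E[Y | T=0]: biased toward zero,
# because the charge is active more often on (slower) rainy days.
naive = travel_time[charge == 1].mean() - travel_time[charge == 0].mean()

# Backdoor adjustment: average the stratum-specific contrasts over P(Z).
adjusted = 0.0
for z in (0, 1):
    mask = rain == z
    effect_z = (travel_time[mask & (charge == 1)].mean()
                - travel_time[mask & (charge == 0)].mean())
    adjusted += effect_z * mask.mean()

print(f"naive estimate:    {naive:+.2f} min")
print(f"adjusted estimate: {adjusted:+.2f} min")  # close to the true -5 min
```

In a real mobility study the adjustment set would be derived from a domain causal graph rather than a single binary confounder, but the gap between the naive and adjusted estimates is what makes causal estimates interpretable and actionable for transportation planning.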
Related papers
- Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture [22.274696991107206]
Neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness.
Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics.
arXiv Detail & Related papers (2024-09-20T01:32:14Z)
- GenAI-powered Multi-Agent Paradigm for Smart Urban Mobility: Opportunities and Challenges for Integrating Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) with Intelligent Transportation Systems [10.310791311301962]
This paper explores the transformative potential of large language models (LLMs) and emerging Retrieval-Augmented Generation (RAG) technologies.
We propose a conceptual framework aimed at developing multi-agent systems capable of intelligently and conversationally delivering smart mobility services.
arXiv Detail & Related papers (2024-08-31T16:14:42Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z) - A Survey of Generative AI for Intelligent Transportation Systems: Road Transportation Perspective [7.770651543578893]
We introduce the principles of different generative AI techniques.
We classify tasks in ITS into four types: traffic perception, traffic prediction, traffic simulation, and traffic decision-making.
We illustrate how generative AI techniques address key issues in these four types of tasks.
arXiv Detail & Related papers (2023-12-13T16:13:23Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning is not as intensive as other research directions.
By addressing data automation challenges and integrating with automated machine learning systems, active learning will facilitate the democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z)
- From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z)
- Modelling and Reasoning Techniques for Context Aware Computing in Intelligent Transportation System [0.0]
The amount of raw data generated in Intelligent Transportation Systems is huge.
This raw data must be processed to infer contextual information.
This article aims to study context awareness in Intelligent Transportation Systems.
arXiv Detail & Related papers (2021-07-29T23:47:52Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality of the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z)