Finding A Taxi with Illegal Driver Substitution Activity via Behavior Modelings
- URL: http://arxiv.org/abs/2404.11844v1
- Date: Thu, 18 Apr 2024 01:47:31 GMT
- Title: Finding A Taxi with Illegal Driver Substitution Activity via Behavior Modelings
- Authors: Junbiao Pang, Muhammad Ayub Sabir, Zhuyun Wang, Anjing Hu, Xue Yang, Haitao Yu, Qingming Huang
- Abstract summary: Illegal Driver Substitution (IDS) is a serious unlawful activity in the taxi industry.
Currently, IDS activity is supervised manually by law enforcers.
We propose a computational method that helps law enforcers efficiently find taxis that are likely to be involved in IDS activity.
- Score: 42.090136287906915
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In urban life, Illegal Driver Substitution (IDS) is a serious unlawful activity in the taxi industry that can cause severe traffic accidents and painful social repercussions. Currently, IDS activity is supervised manually by law enforcers, i.e., a law enforcer empirically chooses a taxi and inspects it. The pressing problem with this scheme is the dilemma between the limited number of law enforcers and the large number of taxis. In this paper, motivated by this problem, we propose a computational method that helps law enforcers efficiently find taxis that are likely to be involved in IDS activity. Firstly, our method converts the identification of IDS activity into a supervised learning task. Secondly, two kinds of taxi-driver behaviors are proposed: the Sleeping Time and Location (STL) behavior and the Pick-Up (PU) behavior. Thirdly, multiple-scale pooling on self-similarity is proposed to encode these individual behaviors into universal features for all taxis. Finally, a Multiple Component-Multiple Instance Learning (MC-MIL) method is proposed to handle the deficiency of the behavior features and to align the behavior features simultaneously. Extensive experiments on a real-world data set show that the proposed behavior features generalize well across different classifiers and that the proposed MC-MIL method outperforms the baseline methods.
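To make the feature-encoding step concrete, the following is a minimal sketch of how a single taxi's daily behavior sequence could be turned into a fixed-length feature via a self-similarity matrix pooled at multiple scales. It is not the authors' code: the similarity measure, pooling scales, and the use of hourly pick-up counts as the behavior sequence are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation): encode a
# 1-D behavior sequence into a fixed-length feature by average-pooling its
# self-similarity matrix at several scales.
import numpy as np

def self_similarity_matrix(seq: np.ndarray) -> np.ndarray:
    """Pairwise similarity between time steps of a 1-D behavior sequence,
    e.g. hourly pick-up counts of one taxi on one day (assumed input)."""
    diff = np.abs(seq[:, None] - seq[None, :])      # |x_i - x_j|
    return np.exp(-diff / (seq.std() + 1e-8))       # similarity in (0, 1]

def multi_scale_pool(ssm: np.ndarray, scales=(2, 4, 8)) -> np.ndarray:
    """Average-pool the self-similarity matrix into an s x s grid for each
    scale s, then concatenate, giving a feature whose length does not
    depend on the original sequence length."""
    n = ssm.shape[0]
    feats = []
    for s in scales:
        edges = np.linspace(0, n, s + 1).astype(int)
        grid = [[ssm[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
                 for j in range(s)] for i in range(s)]
        feats.append(np.asarray(grid).ravel())
    return np.concatenate(feats)

# Usage example: a hypothetical 24-hour pick-up count profile for one taxi.
rng = np.random.default_rng(0)
pickups = rng.poisson(lam=3.0, size=24).astype(float)
feature = multi_scale_pool(self_similarity_matrix(pickups))
print(feature.shape)   # (2*2 + 4*4 + 8*8,) = (84,)
```

In a MIL-style setup such as MC-MIL, each taxi could then be treated as a bag of such per-day feature vectors, with the bag label indicating whether IDS activity was ever confirmed for that taxi; the exact bag construction here is again an assumption.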
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Cooperative Advisory Residual Policies for Congestion Mitigation [11.33450610735004]
We develop a class of learned residual policies that can be used in cooperative advisory systems.
Our policies advise drivers to behave in ways that mitigate traffic congestion while accounting for diverse driver behaviors.
Our approaches successfully mitigate congestion while adapting to different driver behaviors.
arXiv Detail & Related papers (2024-06-30T01:10:13Z) - Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving in which agents deploy a game-theoretic version of iterative best response.
arXiv Detail & Related papers (2023-04-23T16:01:36Z) - Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark [61.43264961005614]
We develop a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios.
We evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations.
Our results show that agents can both act competently and morally, so concrete progress can be made in machine ethics.
arXiv Detail & Related papers (2023-04-06T17:59:03Z) - Safe Deep Reinforcement Learning by Verifying Task-Level Properties [84.64203221849648]
Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL).
The cost is typically encoded as an indicator function due to the difficulty of quantifying the risk of policy decisions in the state space.
In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.
arXiv Detail & Related papers (2023-02-20T15:24:06Z) - Discovering and Explaining Driver Behaviour under HoS Regulations [0.0]
This paper presents an application for summarising raw driver activity logs according to Hours of Service regulations.
The system employs planning, constraint, and clustering techniques to extract and describe what the driver has been doing.
Experiments on real-world data indicate that recurring driving patterns can be clustered, ranging from short basic driving sequences to drivers' whole working days.
arXiv Detail & Related papers (2023-01-12T15:30:11Z) - Unsupervised Driving Behavior Analysis using Representation Learning and Exploiting Group-based Training [15.355045011160804]
Driving behavior monitoring plays a crucial role in managing road safety and decreasing the risk of traffic accidents.
The current work performs robust driving-pattern analysis by capturing variations in driving patterns.
It forms consistent groups by learning a compressed representation of the time series.
arXiv Detail & Related papers (2022-05-12T10:27:47Z) - Causal Imitative Model for Autonomous Driving [85.78593682732836]
We propose Causal Imitative Model (CIM) to address inertia and collision problems.
CIM explicitly discovers the causal model and utilizes it to train the policy.
Our experiments show that our method outperforms previous work in terms of inertia and collision rates.
arXiv Detail & Related papers (2021-12-07T18:59:15Z) - Scalable Deep Reinforcement Learning for Ride-Hailing [0.0]
Ride-hailing services such as Didi Chuxing, Lyft, and Uber arrange thousands of cars to meet ride requests throughout the day.
We consider a Markov decision process (MDP) model of a ride-hailing service system, framing it as a reinforcement learning (RL) problem.
We propose a special decomposition for the MDP actions by sequentially assigning tasks to the drivers.
arXiv Detail & Related papers (2020-09-27T20:07:12Z)