Experimental Analysis of Trajectory Control Using Computer Vision and
Artificial Intelligence for Autonomous Vehicles
- URL: http://arxiv.org/abs/2106.07003v1
- Date: Sun, 13 Jun 2021 14:23:18 GMT
- Title: Experimental Analysis of Trajectory Control Using Computer Vision and
Artificial Intelligence for Autonomous Vehicles
- Authors: Ammar N. Abbas, Muhammad Asad Irshad, and Hossam Hassan Ammar
- Abstract summary: In this paper, several methodologies for lane detection are discussed.
A control law based on this perception is then applied to control steering and speed.
A comparative analysis is made between an open-loop response, PID control, and a neural network control law through graphical statistics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perception of the lane boundaries is crucial for the tasks related to
autonomous trajectory control. In this paper, several methodologies for lane
detection are discussed with an experimental illustration: Hough
transformation, Blob analysis, and Bird's eye view. Following the extraction
of lane marks from the boundary, a control law based on this perception is
applied to control steering and speed. A comparative analysis is then made
between an open-loop response, PID control, and a neural network control law
through graphical statistics. To perceive the surroundings, a wireless
streaming camera connected to a Raspberry Pi is used. After the signal
received from the camera is pre-processed, the output is sent back to the
Raspberry Pi, which processes the input and communicates the control commands
to the motors through an Arduino via serial communication.
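The perception-to-control pipeline the abstract describes (lane-mark detection, a control law on the lane offset, a serial command to the Arduino) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the blob-style lane-center estimate, the PID gains, and the serial frame format are all hypothetical.

```python
import numpy as np

def lane_center_offset(mask: np.ndarray) -> float:
    """Blob-analysis-style estimate: offset (in pixels) of the lane-mark
    centroid from the image center, over a binary lane mask."""
    cols = np.nonzero(mask)[1]          # column indices of lane-mark pixels
    if cols.size == 0:
        return 0.0                      # no lane marks detected
    return float(cols.mean() - mask.shape[1] / 2.0)

class PID:
    """Discrete PID control law: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def steering_command(steer: float) -> bytes:
    """Hypothetical serial frame for the Arduino, e.g. b'S+012.3\\n'."""
    return f"S{steer:+06.1f}\n".encode("ascii")

# One control step on a toy 4x8 mask with lane marks right of center.
mask = np.zeros((4, 8), dtype=np.uint8)
mask[:, 6] = 1                          # lane-mark pixels in column 6
pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=0.05)
offset = lane_center_offset(mask)       # 6 - 8/2 = 2.0 px right of center
cmd = steering_command(pid.update(offset))
```

In the paper's setup the mask would come from Hough transformation, blob analysis, or the bird's eye view rather than being hand-built, and the command bytes would be written to the Arduino over a serial port.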
Related papers
- ControlNet-XS: Rethinking the Control of Text-to-Image Diffusion Models as Feedback-Control Systems [19.02295657801464]
In this work, we take an existing controlling network (ControlNet) and change the communication between the controlling network and the generation process to be of high-frequency and with large-bandwidth.
We outperform state-of-the-art approaches for pixel-level guidance, such as depth, canny-edges, and semantic segmentation, and are on a par for loose keypoint-guidance of human poses.
All code and pre-trained models will be made publicly available.
arXiv Detail & Related papers (2023-12-11T17:58:06Z)
- LeTFuser: Light-weight End-to-end Transformer-Based Sensor Fusion for Autonomous Driving with Multi-Task Learning [16.241116794114525]
We introduce LeTFuser, an algorithm for fusing multiple RGB-D camera representations.
To perform perception and control tasks simultaneously, we utilize multi-task learning.
arXiv Detail & Related papers (2023-10-19T20:09:08Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data [0.17205106391379021]
This paper presents a novel method for pedestrian detection and tracking by fusing camera and LiDAR sensor data.
The detection phase is performed by converting LiDAR streams to computationally tractable depth images, and then, a deep neural network is developed to identify pedestrian candidates.
The tracking phase is a combination of the Kalman filter prediction and an optical flow algorithm to track multiple pedestrians in a scene.
arXiv Detail & Related papers (2021-08-27T16:16:01Z)
- Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks [208.15591625749272]
We present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI).
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning.
We demonstrate that VGAI yields performance comparable to or better than other decentralized controllers.
arXiv Detail & Related papers (2021-06-24T23:57:21Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
- Real-time Lane detection and Motion Planning in Raspberry Pi and Arduino for an Autonomous Vehicle Prototype [0.0]
The Pi Camera 1.3 captures real-time video, which is then processed by Raspberry-Pi 3.0 Model B.
The image processing algorithms are written in Python 3.7.4 with OpenCV 4.2.
The prototype was tested in a controlled environment in real-time.
arXiv Detail & Related papers (2020-09-20T09:13:15Z)
- Built Infrastructure Monitoring and Inspection Using UAVs and Vision-based Algorithms [2.0305676256390934]
This study presents an inspecting system using real-time control unmanned aerial vehicles (UAVs) to investigate structural surfaces.
The system operates under favourable weather conditions to inspect a target structure, which is the Wentworth light rail base structure in this study.
arXiv Detail & Related papers (2020-05-19T14:37:48Z)
- Populations of Spiking Neurons for Reservoir Computing: Closed Loop Control of a Compliant Quadruped [64.64924554743982]
We present a framework for implementing central pattern generators with spiking neural networks to obtain closed loop robot control.
We demonstrate the learning of predefined gait patterns, speed control and gait transition on a simulated model of a compliant quadrupedal robot.
arXiv Detail & Related papers (2020-04-09T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.