Voice-Based Smart Assistant System for Vehicles using RASA
- URL: http://arxiv.org/abs/2312.01642v1
- Date: Mon, 4 Dec 2023 05:48:18 GMT
- Title: Voice-Based Smart Assistant System for Vehicles using RASA
- Authors: Aditya Paranjape, Yash Patwardhan, Vedant Deshpande, Aniket Darp and Jayashree Jagdale
- Abstract summary: This paper focuses on the development of a voice-based smart assistance application for vehicles based on the RASA framework.
The smart assistant provides functionalities such as navigation, communication via calls, weather forecasts, the latest news updates, and music, all of which are entirely voice-based.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational AIs, or chatbots, mimic human speech in conversation. Smart assistants automate several tasks that previously required human intervention. Because of their accuracy, independence from human operators, and round-the-clock availability, chatbots can also be employed in vehicles. People tend to divert their attention from driving when engaging in other activities such as calling, playing music, navigating, and checking weather forecasts and the latest news; as a result, road safety has declined and accidents have increased. It would therefore be advantageous to automate these tasks using voice commands rather than carrying them out manually. This paper focuses on the development of a voice-based smart assistant application for vehicles built on the RASA framework. The smart assistant provides functionalities such as navigation, communication via calls, weather forecasts, the latest news updates, and music, all of which are entirely voice-based.
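The flow the abstract describes (classify a transcribed voice command into an intent, then dispatch to a handler for navigation, calls, weather, news, or music) can be sketched with a minimal keyword-based router. The intent names, keywords, and responses below are hypothetical illustrations, not the authors' implementation; in RASA itself, intents are defined in YAML training data and the classifier is learned from annotated examples rather than hand-written rules.

```python
# Minimal sketch of the intent -> handler flow a RASA-style assistant uses.
# All intent names, keywords, and responses are illustrative assumptions;
# RASA trains its NLU classifier from example utterances instead of keywords.

INTENT_KEYWORDS = {
    "get_weather": ["weather", "forecast", "temperature"],
    "play_music": ["play", "music", "song"],
    "make_call": ["call", "dial", "phone"],
    "navigate": ["navigate", "directions", "route"],
    "get_news": ["news", "headlines"],
}

def classify_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in words for kw in keywords):
            return intent
    return "fallback"

def handle(intent: str) -> str:
    """Dispatch to a canned response; real handlers would call external APIs
    (maps, telephony, weather, news, and music services)."""
    responses = {
        "get_weather": "Fetching the weather forecast.",
        "play_music": "Starting music playback.",
        "make_call": "Placing your call.",
        "navigate": "Starting navigation.",
        "get_news": "Reading the latest headlines.",
        "fallback": "Sorry, I didn't understand that.",
    }
    return responses[intent]

if __name__ == "__main__":
    for command in ["what is the weather today", "play some music"]:
        print(handle(classify_intent(command)))
```

In a real RASA deployment the speech front end would first transcribe the driver's audio to text, and each handler would be a custom action invoking an external service, so the assistant stays hands-free end to end.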
Related papers
- QoS prediction in radio vehicular environments via prior user information
We evaluate ML tree-ensemble methods to predict QoS on a timescale of minutes with data collected from a cellular test network.
Specifically, we use the correlations of the measurements coming from the radio environment by including information of prior vehicles to enhance the prediction of the target vehicles.
arXiv Detail & Related papers (2024-02-27T17:05:41Z)
- Dobby: A Conversational Service Robot Driven by GPT-4
This work introduces a robotics platform which embeds a conversational AI agent in an embodied system for service tasks.
The agent is derived from a large language model, which has learned from a vast corpus of general knowledge.
In addition to generating dialogue, this agent can interface with the physical world by invoking commands on the robot.
arXiv Detail & Related papers (2023-10-10T04:34:00Z)
- Multi-model fusion for Aerial Vision and Dialog Navigation based on human attention aids
We present an aerial navigation task for the 2023 ICCV Conversation History.
We propose an effective method of fusion training of Human Attention Aided Transformer model (HAA-Transformer) and Human Attention Aided LSTM (HAA-LSTM) models.
arXiv Detail & Related papers (2023-08-27T10:32:52Z)
- Efficient Multimodal Neural Networks for Trigger-less Voice Assistants
We propose a neural network based audio-gesture multimodal fusion system for smartwatches.
The system better understands temporal correlation between audio and gesture data, leading to precise invocations.
It is lightweight and deployable on low-power devices, such as smartwatches, with quick launch times.
arXiv Detail & Related papers (2023-05-20T02:52:02Z)
- DOROTHIE: Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
We introduce Dialogue On the ROad To Handle Irregular Events (DOROTHIE), a novel interactive simulation platform.
Based on this platform, we created the Situated Dialogue Navigation (SDN), a navigation benchmark of 183 trials.
SDN is developed to evaluate the agent's ability to predict dialogue moves from humans as well as generate its own dialogue moves and physical navigation actions.
arXiv Detail & Related papers (2022-10-22T17:52:46Z)
- AVLEN: Audio-Visual-Language Embodied Navigation in 3D Environments
We present AVLEN -- an interactive agent for Audio-Visual-Language Embodied Navigation.
The goal of AVLEN is to localize an audio event via navigating the 3D visual world.
To realize these abilities, AVLEN uses a multimodal hierarchical reinforcement learning backbone.
arXiv Detail & Related papers (2022-10-14T16:35:06Z)
- Converse -- A Tree-Based Modular Task-Oriented Dialogue System
Converse is a flexible tree-based modular task-oriented dialogue system.
Converse supports task dependency and task switching, which are unique features compared to other open-source dialogue frameworks.
arXiv Detail & Related papers (2022-03-23T04:19:05Z)
- AI in Smart Cities: Challenges and approaches to enable road vehicle automation and smart traffic control
The smart city concept (SCC) envisions a data-centered society that aims to improve efficiency by automating and optimizing activities and utilities.
This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control.
arXiv Detail & Related papers (2021-04-07T14:31:08Z)
- Artificial Intelligence Methods in In-Cabin Use Cases: A Survey
The functionality inside the vehicle cabin plays a key role in ensuring a safe and pleasant journey for driver and passenger alike.
Recent advances in the field of artificial intelligence (AI) have enabled a whole range of new applications and assistance systems to solve automated problems in the vehicle cabin.
Results from the surveyed works show that AI technology has a promising future in tackling in-cabin tasks within the autonomous driving aspect.
arXiv Detail & Related papers (2021-01-06T15:08:39Z)
- Self-supervised reinforcement learning for speaker localisation with the iCub humanoid robot
Looking at a person's face is one of the mechanisms that humans rely on when it comes to filtering speech in noisy environments.
Having a robot that can look toward a speaker could benefit ASR performance in challenging environments.
We propose a self-supervised reinforcement learning-based framework inspired by the early development of humans.
arXiv Detail & Related papers (2020-11-12T18:02:15Z)
- A Deep Learning based Wearable Healthcare IoT Device for AI-enabled Hearing Assistance Automation
This research presents a novel AI-enabled Internet of Things (IoT) device capable of assisting those who suffer from impairment of hearing or deafness to communicate with others in conversations.
A server application leverages Google's online speech recognition service to convert the received conversations into text, which is then shown on a micro-display attached to the glasses so that deaf users can read the conversation contents.
arXiv Detail & Related papers (2020-05-16T19:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.