An Ambient Intelligence-Based Human Behavior Monitoring Framework for
Ubiquitous Environments
- URL: http://arxiv.org/abs/2106.15609v1
- Date: Tue, 29 Jun 2021 17:50:54 GMT
- Title: An Ambient Intelligence-Based Human Behavior Monitoring Framework for
Ubiquitous Environments
- Authors: Nirmalya Thakur and Chia Y. Han
- Abstract summary: This framework aims to take a holistic approach to study, track, monitor, and analyze human behavior during activities of daily living (ADLs).
It can perform the semantic analysis of user interactions on the diverse contextual parameters during ADLs to identify a list of distinct behavioral patterns associated with different complex activities.
Second, it consists of an intelligent decision-making algorithm that can analyze these behavioral patterns and their relationships with the dynamic contextual and spatial features of the environment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This framework for human behavior monitoring aims to take a holistic approach
to study, track, monitor, and analyze human behavior during activities of daily
living (ADLs). The framework consists of two novel functionalities. First, it
can perform the semantic analysis of user interactions on the diverse
contextual parameters during ADLs to identify a list of distinct behavioral
patterns associated with different complex activities. Second, it consists of
an intelligent decision-making algorithm that can analyze these behavioral
patterns and their relationships with the dynamic contextual and spatial
features of the environment to detect any anomalies in user behavior that could
constitute an emergency. These functionalities of this interdisciplinary
framework were developed by integrating the latest advancements and
technologies in human-computer interaction, machine learning, Internet of
Things, pattern recognition, and ubiquitous computing. The framework was
evaluated on a dataset of ADLs, and the performance accuracies of these two
functionalities were found to be 76.71% and 83.87%, respectively. The presented
and discussed results uphold the relevance and immense potential of this
framework to contribute towards improving the quality of life and assisted
living of the aging population in the future of Internet of Things (IoT)-based
ubiquitous living environments, e.g., smart homes.
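The abstract does not specify how the two functionalities are implemented, so the following is only a minimal illustrative sketch (in Python) of what the second functionality, anomaly detection over behavioral patterns and contextual parameters of ADLs, could look like in principle: per-activity statistics are learned from past observations, and large deviations in contextual features (here, hypothetical duration and start-hour features) are flagged as potential emergencies. All class, field, and parameter names below are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): flag anomalous user
# behavior from contextual features of ADLs. The observation schema and
# z-score threshold are hypothetical.
from dataclasses import dataclass
from statistics import mean, pstdev
from typing import Dict, List, Tuple


@dataclass
class AdlObservation:
    activity: str        # e.g., "cooking", "sleeping"
    duration_min: float  # how long the activity lasted
    hour_of_day: float   # when it started (0-23)


class BehaviorAnomalyDetector:
    """Learns per-activity feature statistics and flags large deviations."""

    def __init__(self, z_threshold: float = 3.0):
        self.z_threshold = z_threshold
        self.stats: Dict[str, Dict[str, Tuple[float, float]]] = {}

    def fit(self, history: List[AdlObservation]) -> None:
        # Group past observations by activity and record mean/std per feature.
        by_activity: Dict[str, List[AdlObservation]] = {}
        for obs in history:
            by_activity.setdefault(obs.activity, []).append(obs)
        for activity, group in by_activity.items():
            durations = [o.duration_min for o in group]
            hours = [o.hour_of_day for o in group]
            self.stats[activity] = {
                "duration_min": (mean(durations), pstdev(durations) or 1.0),
                "hour_of_day": (mean(hours), pstdev(hours) or 1.0),
            }

    def is_anomalous(self, obs: AdlObservation) -> bool:
        # Activities never seen before are treated as anomalies by default.
        if obs.activity not in self.stats:
            return True
        feats = self.stats[obs.activity]
        for name, value in (("duration_min", obs.duration_min),
                            ("hour_of_day", obs.hour_of_day)):
            mu, sigma = feats[name]
            if abs(value - mu) / sigma > self.z_threshold:
                return True
        return False


if __name__ == "__main__":
    # Ten typical nights of sleep starting around 22:30.
    history = [AdlObservation("sleeping", 460 + i, 22.5) for i in range(10)]
    detector = BehaviorAnomalyDetector()
    detector.fit(history)
    # A 14-hour "sleep" starting mid-afternoon is flagged as anomalous.
    print(detector.is_anomalous(AdlObservation("sleeping", 840.0, 15.0)))
```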
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms the representative models regarding objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Distinguishing Engagement Facets: An Essential Component for AI-based Healthcare [1.14219428942199]
It is essential to monitor the engagement state of patients in various AI-based healthcare paradigms.
This includes medical conditions that alter social behavior, such as Autism Spectrum Disorder (ASD) or Attention-Deficit/Hyperactivity Disorder (ADHD).
arXiv Detail & Related papers (2021-11-22T11:58:26Z)
- HARPS: An Online POMDP Framework for Human-Assisted Robotic Planning and Sensing [1.3678064890824186]
The Human Assisted Robotic Planning and Sensing (HARPS) framework is presented for active semantic sensing and planning in human-robot teams.
This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments.
Simulations of a UAV-enabled target search application in a large-scale partially structured environment show significant improvements in time and belief state estimates.
arXiv Detail & Related papers (2021-10-20T00:41:57Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.