FaiR-IoT: Fairness-aware Human-in-the-Loop Reinforcement Learning for
Harnessing Human Variability in Personalized IoT
- URL: http://arxiv.org/abs/2103.16033v1
- Date: Tue, 30 Mar 2021 02:30:25 GMT
- Title: FaiR-IoT: Fairness-aware Human-in-the-Loop Reinforcement Learning for
Harnessing Human Variability in Personalized IoT
- Authors: Salma Elmalaki (University of California, Irvine)
- Abstract summary: FaiR-IoT is a reinforcement learning-based framework for adaptive and fairness-aware human-in-the-loop IoT applications.
We validate the proposed framework on two applications, namely (i) Human-in-the-Loop Automotive Advanced Driver Assistance Systems and (ii) Human-in-the-Loop Smart House.
Results obtained on these two applications validate the generality of FaiR-IoT and its ability to provide a personalized experience.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to the rapid growth in wearable technologies, monitoring complex human
context becomes feasible, paving the way to develop human-in-the-loop IoT
systems that naturally evolve to adapt to the human and environment state
autonomously. Nevertheless, a central challenge in designing such personalized
IoT applications arises from human variability. Such variability stems from the
fact that different humans exhibit different behaviors when interacting with
IoT applications (inter-human variability), the same human may change behavior
over time when interacting with the same IoT application (intra-human
variability), and human behavior may be affected by the behaviors of other
people in the same environment (multi-human variability). To that end, we
propose FaiR-IoT, a general reinforcement learning-based framework for adaptive
and fairness-aware human-in-the-loop IoT applications. In FaiR-IoT, three
levels of reinforcement learning agents interact to continuously learn human
preferences and maximize the system's performance and fairness while taking
into account the intra-, inter-, and multi-human variability. We validate the
proposed framework on two applications, namely (i) Human-in-the-Loop Automotive
Advanced Driver Assistance Systems and (ii) Human-in-the-Loop Smart House.
Results obtained on these two applications validate the generality of FaiR-IoT
and its ability to provide a personalized experience while enhancing the
system's performance by 40%-60% compared to non-personalized systems and
enhancing the fairness of the multi-human systems by 1.5 orders of magnitude.
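The abstract describes three levels of reinforcement learning agents that jointly handle intra-, inter-, and multi-human variability. The outline below is a hypothetical sketch of such a hierarchy, not the paper's implementation: the tabular Q-learning agent, the human names, the action sets, and the level assignments are all illustrative assumptions.

```python
import random
from collections import defaultdict

class QAgent:
    """Minimal tabular Q-learning agent with epsilon-greedy action selection."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # maps (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Explore with probability epsilon, otherwise pick the greedy action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

humans = ["alice", "bob"]  # hypothetical occupants of a shared smart space

# Level 1: one agent per human tracks that human's changing preferences
# over time (intra-human variability), e.g. thermostat setpoints.
personal = {h: QAgent(actions=[18, 20, 22]) for h in humans}

# Level 2: one agent per human adapts the application to differences
# across humans (inter-human variability).
adaptation = {h: QAgent(actions=["conservative", "aggressive"]) for h in humans}

# Level 3: a single fairness-aware agent arbitrates whose preference the
# shared environment follows (multi-human variability).
fairness = QAgent(actions=humans)
```

In such a design, each level's reward could combine the human's observed response with a fairness term penalizing how often one occupant's preference dominates; the exact reward shaping is an assumption here.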
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Human-Aware Vision-and-Language Navigation: Bridging Simulation to Reality with Dynamic Human Interactions [69.9980759344628]
Vision-and-Language Navigation (VLN) aims to develop embodied agents that navigate based on human instructions.
We introduce Human-Aware Vision-and-Language Navigation (HA-VLN), extending traditional VLN by incorporating dynamic human activities.
We present the Expert-Supervised Cross-Modal (VLN-CM) and Non-Expert-Supervised Decision Transformer (VLN-DT) agents, utilizing cross-modal fusion and diverse training strategies.
arXiv Detail & Related papers (2024-06-27T15:01:42Z)
- PAPER-HILT: Personalized and Adaptive Privacy-Aware Early-Exit for Reinforcement Learning in Human-in-the-Loop Systems [0.6282068591820944]
Reinforcement Learning (RL) has increasingly become a preferred method over traditional rule-based systems in diverse human-in-the-loop (HITL) applications.
This paper focuses on developing an innovative, adaptive RL strategy through exploiting an early-exit approach designed explicitly for privacy preservation in HITL environments.
arXiv Detail & Related papers (2024-03-09T10:24:12Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- MultiIoT: Benchmarking Machine Learning for the Internet of Things [70.74131118309967]
The next generation of machine learning systems must be adept at perceiving and interacting with the physical world.
Sensory data from motion, thermal, geolocation, depth, wireless signals, video, and audio are increasingly used to model the states of physical environments.
Existing efforts are often specialized to a single sensory modality or prediction task.
This paper proposes MultiIoT, the most expansive and unified IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 real-world tasks.
arXiv Detail & Related papers (2023-11-10T18:13:08Z)
- FAIRO: Fairness-aware Adaptation in Sequential-Decision Making for Human-in-the-Loop Systems [8.713442325649801]
We propose a novel algorithm for fairness-aware sequential-decision making in Human-in-the-Loop (HITL) adaptation.
In particular, FAIRO decomposes this complex fairness task into adaptive sub-tasks based on individual human preferences.
We show that FAIRO can improve fairness compared with other methods across all three applications by 35.36%.
arXiv Detail & Related papers (2023-07-12T00:35:19Z)
- ERUDITE: Human-in-the-Loop IoT for an Adaptive Personalized Learning System [14.413929652259469]
This paper proposes ERUDITE, a human-in-the-loop IoT system for the learning environment.
By using the brain signals as a sensor modality to infer the human learning state, ERUDITE provides personalized adaptation to the learning environment.
arXiv Detail & Related papers (2023-03-07T23:54:35Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide curriculum reinforcement learning towards a preferred difficulty level that is neither too hard nor too easy by learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
Results show that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Q-SMASH: Q-Learning-based Self-Adaptation of Human-Centered Internet of Things [0.8602553195689512]
This article presents Q-SMASH, a reinforcement learning-based approach for self-adaptation of IoT objects in human-centered environments.
Q-SMASH aims to learn the behaviors of users along with respecting human values.
The learning ability of Q-SMASH allows it to adapt itself to the behavioral change of users and make more accurate decisions.
arXiv Detail & Related papers (2021-07-13T09:41:05Z)
- SMASH: a Semantic-enabled Multi-agent Approach for Self-adaptation of Human-centered IoT [0.8602553195689512]
This paper presents SMASH: a multi-agent approach for self-adaptation of IoT applications in human-centered environments.
SMASH agents are provided with a 4-layer architecture based on the BDI agent model that integrates human values with goal-reasoning, planning, and acting.
arXiv Detail & Related papers (2021-05-31T12:33:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.