Interactive introduction to self-calibrating interfaces
- URL: http://arxiv.org/abs/2212.05766v1
- Date: Mon, 12 Dec 2022 08:39:30 GMT
- Title: Interactive introduction to self-calibrating interfaces
- Authors: Jonathan Grizou
- Abstract summary: This paper aims to provide an intuitive understanding of the self-calibrating interface paradigm.
Under this paradigm, you can choose how to use an interface which can adapt to your preferences on the fly.
We introduce a PIN entering task and gradually release constraints, moving from a pre-calibrated interface to a self-calibrating interface.
- Score: 4.111899441919164
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This interactive paper aims to provide an intuitive understanding of the
self-calibrating interface paradigm. Under this paradigm, you can choose how to
use an interface which can adapt to your preferences on the fly. We introduce a
PIN entering task and gradually release constraints, moving from a
pre-calibrated interface to a self-calibrating interface while increasing the
complexity of input modalities from buttons, to points on a map, to sketches,
and finally to spoken words. This is not a traditional research paper with a
hypothesis and experimental results to support claims; the research supporting
this work has already been done and we refer to it extensively in the later
sections. Instead, our aim is to walk you through an intriguing interaction
paradigm in small logical steps with supporting illustrations, interactive
demonstrations, and videos to reinforce your learning. We designed this paper
for the enjoyment of curious minds of any background; it is written in plain
English, and no prior knowledge is necessary. All demos are available online at
openvault.jgrizou.com and linked individually in the paper.
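The core self-calibration idea can be illustrated with a toy sketch: maintain a hypothesis for every possible target, interpret the user's unlabeled signals under each hypothesis, and keep only the hypotheses under which every signal carries a single consistent meaning. This is a minimal discrete sketch of that idea, not the paper's actual algorithm (which handles richer input modalities such as points, sketches, and spoken words); the function names and the "blip"/"blop" signals are invented for illustration.

```python
# Toy sketch of self-calibration: the interface proposes items and the
# user answers with arbitrary, unlabeled signals. Under the hypothesis
# that a given candidate is the user's target, every signal acquires a
# meaning (yes/no); only the true target yields a consistent mapping.

def consistent(events, target):
    """Check whether `target` explains all (proposed_item, signal) pairs."""
    mapping = {}  # signal -> inferred meaning under this hypothesis
    for proposed, signal in events:
        meaning = (proposed == target)  # what the user would have meant
        if signal in mapping and mapping[signal] != meaning:
            return False  # same signal, two meanings: hypothesis rejected
        mapping[signal] = meaning
    return True

def self_calibrate(events, candidates):
    """Return the target once it is the only consistent hypothesis."""
    survivors = [c for c in candidates if consistent(events, c)]
    return survivors[0] if len(survivors) == 1 else None

# The user's target is 7; they signal "blip" for yes and "blop" for no,
# but the interface was never told which signal means what.
events = [(3, "blop"), (7, "blip"), (5, "blop"), (1, "blop"), (9, "blop")]
print(self_calibrate(events, [1, 3, 5, 7, 9]))  # -> 7
```

Note that the interface never needs a calibration phase: the mapping from signals to meanings is recovered as a by-product of solving the task itself.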
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI yields improvements of 10% to 64% over the previous state of the art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Towards dialogue based, computer aided software requirements elicitation [0.0]
This paper proposes an interaction blueprint that aims for dialogue based, computer aided software requirements analysis.
Compared to mere model extraction approaches, this interaction blueprint encourages individuality, creativity and genuine compromise.
arXiv Detail & Related papers (2023-10-21T09:12:24Z)
- Leveraging Explanations in Interactive Machine Learning: An Overview [10.284830265068793]
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities.
This paper presents an overview of research where explanations are combined with interactive capabilities.
arXiv Detail & Related papers (2022-07-29T07:46:11Z)
- Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns.
Our approach offers a numeric score to its users representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization [112.40598205054994]
We formalize this idea as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains.
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
- VIRT: Improving Representation-based Models for Text Matching through Virtual Interaction [50.986371459817256]
We propose a novel Virtual InteRacTion mechanism, termed VIRT, to enable full and deep interaction modeling in representation-based models.
VIRT asks representation-based encoders to conduct virtual interactions to mimic the behaviors as interaction-based models do.
arXiv Detail & Related papers (2021-12-08T09:49:28Z)
- Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction [17.000283696243564]
We investigate an interactive semantic parsing framework that explains the predicted logical form step by step in natural language.
We focus on question answering over knowledge bases (KBQA) as an instantiation of our framework.
Our experiments show that the interactive framework with human feedback has the potential to greatly improve overall parse accuracy.
arXiv Detail & Related papers (2021-10-15T20:11:22Z)
- iFacetSum: Coreference-based Interactive Faceted Summarization for Multi-Document Exploration [63.272359227081836]
iFacetSum integrates interactive summarization together with faceted search.
Fine-grained facets are automatically produced based on cross-document coreference pipelines.
arXiv Detail & Related papers (2021-09-23T20:01:11Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step on dynamics modeling in hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.