Digital Collaborator: Augmenting Task Abstraction in Visualization
Design with Artificial Intelligence
- URL: http://arxiv.org/abs/2003.01304v1
- Date: Tue, 3 Mar 2020 02:53:34 GMT
- Title: Digital Collaborator: Augmenting Task Abstraction in Visualization
Design with Artificial Intelligence
- Authors: Aditeya Pandey, Yixuan Zhang, John A. Guerra-Gomez, Andrea G. Parker,
Michelle A. Borkin
- Abstract summary: We argue that this manual task abstraction process is prone to errors due to designer biases and a lack of domain background and knowledge.
We propose a conceptual Digital Collaborator: an artificial intelligence system that aims to help visualization practitioners by augmenting their ability to validate and reason about the output of task abstraction.
- Score: 25.411840625787445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the task abstraction phase of the visualization design process, including
in "design studies", a practitioner maps the observed domain goals to
generalizable abstract tasks using visualization theory in order to better
understand and address the users' needs. We argue that this manual task
abstraction process is prone to errors due to designer biases and a lack of
domain background and knowledge. Under these circumstances, a collaborator can
help validate and provide sanity checks to visualization practitioners during
this important task abstraction stage. However, having a human collaborator is
not always feasible and may be subject to the same biases and pitfalls. In this
paper, we first describe the challenges associated with task abstraction. We
then propose a conceptual Digital Collaborator: an artificial intelligence
system that aims to help visualization practitioners by augmenting their
ability to validate and reason about the output of task abstraction. We also
discuss several practical challenges of designing and implementing such
systems.
Related papers
- Latent Implicit Visual Reasoning [59.39913238320798]
We propose a task-agnostic mechanism that trains LMMs to discover and use visual reasoning tokens without explicit supervision.
Our approach outperforms direct fine-tuning and achieves state-of-the-art results on a diverse range of vision-centric tasks.
arXiv Detail & Related papers (2025-12-24T14:59:49Z)
- Vision Generalist Model: A Survey [87.49797517847132]
We provide a comprehensive overview of the vision generalist models, delving into their characteristics and capabilities within the field.
We take a brief excursion into related domains, shedding light on their interconnections and potential synergies.
arXiv Detail & Related papers (2025-06-11T17:23:41Z)
- From Fragment to One Piece: A Survey on AI-Driven Graphic Design [19.042522345775193]
The survey covers various subtasks, including visual element perception and generation, aesthetic and semantic understanding, layout analysis, and generation.
Despite significant progress, challenges remain to understanding human intent, ensuring interpretability, and maintaining control over multilayered compositions.
arXiv Detail & Related papers (2025-03-24T13:05:09Z)
- Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following [50.377287115281476]
We show that learning to associate the representations of current and future states with a temporal loss can improve compositional generalization.
We evaluate our approach across diverse robotic manipulation tasks as well as in simulation, showing substantial improvements for tasks specified with either language or goal images.
arXiv Detail & Related papers (2025-02-08T05:26:29Z)
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions, and Models for Planning from Raw Data [20.01856556195228]
This paper presents the first approach for autonomously learning logic-based relational representations for abstract states and actions.
The learned representations constitute auto-invented PDDL-like domain models.
Empirical results in deterministic settings show that powerful abstract representations can be learned from just a handful of robot trajectories.
arXiv Detail & Related papers (2024-02-19T06:28:21Z)
- Learning Top-k Subtask Planning Tree based on Discriminative Representation Pre-training for Decision Making [9.302910360945042]
Planning with prior knowledge extracted from complicated real-world tasks is crucial for humans to make accurate decisions.
We introduce a multiple-encoder and individual-predictor regime to learn task-essential representations from sufficient data for simple subtasks.
We also use the attention mechanism to generate a top-k subtask planning tree, which customizes subtask execution plans in guiding complex decisions on unseen tasks.
arXiv Detail & Related papers (2023-12-18T09:00:31Z)
- InstructDiffusion: A Generalist Modeling Interface for Vision Tasks [52.981128371910266]
We present InstructDiffusion, a framework for aligning computer vision tasks with human instructions.
InstructDiffusion could handle a variety of vision tasks, including understanding tasks and generative tasks.
It even exhibits the ability to handle unseen tasks and outperforms prior methods on novel datasets.
arXiv Detail & Related papers (2023-09-07T17:56:57Z)
- Learning Differentiable Logic Programs for Abstract Visual Reasoning [18.82429807065658]
Differentiable forward reasoning has been developed to integrate reasoning with gradient-based machine learning paradigms.
NEUMANN is a graph-based differentiable forward reasoner, passing messages in a memory-efficient manner and handling structured programs with functors.
We demonstrate that NEUMANN solves visual reasoning tasks efficiently, outperforming neural, symbolic, and neuro-symbolic baselines.
arXiv Detail & Related papers (2023-07-03T11:02:40Z)
- Tuning computer vision models with task rewards [88.45787930908102]
Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models.
In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward.
We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning.
arXiv Detail & Related papers (2023-02-16T11:49:48Z)
- Constellation: Learning relational abstractions over objects for compositional imagination [64.99658940906917]
We introduce Constellation, a network that learns relational abstractions of static visual scenes.
This work is a first step in the explicit representation of visual relationships and using them for complex cognitive procedures.
arXiv Detail & Related papers (2021-07-23T11:59:40Z)
- Learning Task Informed Abstractions [10.920599910769276]
We propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors.
TIA leads to significant performance gains over state-of-the-art methods on many visual control tasks.
arXiv Detail & Related papers (2021-06-29T17:56:11Z)
- Learning abstract structure for drawing by efficient motor program induction [52.13961975752941]
We develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledge.
We show that people spontaneously learn abstract drawing procedures that support generalization.
We propose a model of how learners can discover these reusable drawing programs.
arXiv Detail & Related papers (2020-08-08T13:31:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.