Symphony: Composing Interactive Interfaces for Machine Learning
- URL: http://arxiv.org/abs/2202.08946v1
- Date: Fri, 18 Feb 2022 00:27:30 GMT
- Title: Symphony: Composing Interactive Interfaces for Machine Learning
- Authors: Alex Bäuerle, Ángel Alexander Cabrera, Fred Hohman, Megan Maher,
David Koski, Xavier Suau, Titus Barik, Dominik Moritz
- Abstract summary: Symphony is a framework for composing interactive ML interfaces with task-specific, data-driven components.
We developed Symphony through participatory design sessions with 10 teams (n=31), and discuss our findings from deploying Symphony to 3 production ML projects at Apple.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interfaces for machine learning (ML), information and visualizations about
models or data, can help practitioners build robust and responsible ML systems.
Despite their benefits, recent studies of ML teams and our interviews with
practitioners (n=9) showed that ML interfaces have limited adoption in
practice. While existing ML interfaces are effective for specific tasks, they
are not designed to be reused, explored, and shared by multiple stakeholders in
cross-functional teams. To enable analysis and communication between different
ML practitioners, we designed and implemented Symphony, a framework for
composing interactive ML interfaces with task-specific, data-driven components
that can be used across platforms such as computational notebooks and web
dashboards. We developed Symphony through participatory design sessions with 10
teams (n=31), and discuss our findings from deploying Symphony to 3 production
ML projects at Apple. Symphony helped ML practitioners discover previously
unknown issues like data duplicates and blind spots in models while enabling
them to share insights with other stakeholders.
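The abstract describes task-specific, data-driven components that render across platforms such as notebooks and web dashboards, and mentions practitioners discovering issues like data duplicates. The sketch below is a hypothetical illustration of that component pattern, not Symphony's actual API: a small duplicate-finding component that displays itself inline in a Jupyter-style notebook via the standard `_repr_html_` hook and exposes the same result as a plain data contract a dashboard could consume. All names here (`DuplicateFinder`, `to_dict`) are invented for illustration.

```python
# Hypothetical sketch of a task-specific, data-driven component in the
# spirit the abstract describes. This is NOT Symphony's real API.

class DuplicateFinder:
    """Flags exact-duplicate records in a dataset, one of the issue
    types the abstract says practitioners uncovered."""

    def __init__(self, records):
        # records: a list of flat dicts (one per data point).
        self.records = records

    def duplicates(self):
        # Return every record seen more than once, in order of reappearance.
        seen, dupes = set(), []
        for record in self.records:
            key = tuple(sorted(record.items()))
            if key in seen:
                dupes.append(record)
            seen.add(key)
        return dupes

    def _repr_html_(self):
        # Computational notebooks (e.g. Jupyter) call this hook to show
        # the component inline, giving the "notebook platform" rendering.
        dupes = self.duplicates()
        rows = "".join(f"<li>{d}</li>" for d in dupes)
        return f"<b>{len(dupes)} duplicate(s)</b><ul>{rows}</ul>"

    def to_dict(self):
        # A serializable data contract lets the same component back a
        # web dashboard, the second platform the abstract mentions.
        return {"component": "duplicate_finder",
                "duplicates": self.duplicates()}


finder = DuplicateFinder([{"id": 1}, {"id": 1}, {"id": 2}])
print(finder.to_dict())
```

The point of the sketch is the shared analysis core with two thin presentation layers, which is one plausible way a single component could be "reused, explored, and shared" across the platforms the abstract names.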
Related papers
- Interaction2Code: How Far Are We From Automatic Interactive Webpage Generation? [30.540795619470483]
We present the first systematic investigation of multi-modal large language models (MLLMs) in generating interactive webpages.
Specifically, we first formulate the Interaction-to-Code task and build the Interaction2Code benchmark.
We then conduct comprehensive experiments on three state-of-the-art (SOTA) MLLMs using both automatic metrics and human evaluations.
arXiv Detail & Related papers (2024-11-05T17:40:03Z) - A Large-Scale Study of Model Integration in ML-Enabled Software Systems [4.776073133338119]
Machine learning (ML) and its embedding in systems has drastically changed the engineering of software-intensive systems.
Traditionally, software engineering focuses on manually created artifacts such as source code and the process of creating them.
We present the first large-scale study of real ML-enabled software systems, covering 2,928 open source systems on GitHub.
arXiv Detail & Related papers (2024-08-12T15:28:40Z) - Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z) - ClawMachine: Fetching Visual Tokens as An Entity for Referring and Grounding [67.63933036920012]
Existing methods, including proxy encoding and geometry encoding, incorporate additional syntax to encode the object's location.
This study presents ClawMachine, offering a new methodology that notates an entity directly using the visual tokens.
ClawMachine unifies visual referring and grounding into an auto-regressive format and learns with a decoder-only architecture.
arXiv Detail & Related papers (2024-06-17T08:39:16Z) - ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code [76.84199699772903]
ML-Bench is a benchmark rooted in real-world programming applications that leverage existing code repositories to perform tasks.
To evaluate both Large Language Models (LLMs) and AI agents, two setups are employed: ML-LLM-Bench for assessing LLMs' text-to-code conversion within a predefined deployment environment, and ML-Agent-Bench for testing autonomous agents in an end-to-end task execution within a Linux sandbox environment.
arXiv Detail & Related papers (2023-11-16T12:03:21Z) - Octavius: Mitigating Task Interference in MLLMs via LoRA-MoE [83.00018517368973]
Large Language Models (LLMs) can extend their zero-shot capabilities to multimodal learning through instruction tuning.
However, negative conflicts and interference among tasks may degrade performance.
We combine the well-known Mixture-of-Experts (MoE) and one of the representative PEFT techniques, i.e., LoRA, designing a novel LLM-based decoder, called LoRA-MoE, for multimodal learning.
arXiv Detail & Related papers (2023-11-05T15:48:29Z) - Eliciting Model Steering Interactions from Users via Data and Visual Design Probes [8.45602005745865]
Domain experts increasingly use automated data science tools to incorporate machine learning (ML) models in their work but struggle to "codify" these models when they are incorrect.
For these experts, semantic interactions can provide an accessible avenue to guide and refine ML models without having to dive into their technical details.
This study examines how experts with a spectrum of ML expertise use semantic interactions to update a simple classification model.
arXiv Detail & Related papers (2023-10-12T20:34:02Z) - ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics [6.739841914490015]
We present ExeKGLib, a Python library that allows users with minimal machine learning (ML) knowledge to build ML pipelines.
We demonstrate the usage of ExeKGLib and compare it with conventional ML code to show its benefits.
arXiv Detail & Related papers (2023-05-04T16:10:22Z) - Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z) - XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse [18.12263246913058]
Real-time multi-task multi-model (MTMM) workloads are emerging for application areas like extended reality (XR) to support metaverse use cases.
These workloads combine user interactivity with computationally complex machine learning (ML) activities.
These workloads present unique difficulties and constraints.
arXiv Detail & Related papers (2022-11-16T05:08:42Z) - Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective, that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.