Make It Make Sense! Understanding and Facilitating Sensemaking in
Computational Notebooks
- URL: http://arxiv.org/abs/2312.11431v1
- Date: Mon, 18 Dec 2023 18:33:58 GMT
- Title: Make It Make Sense! Understanding and Facilitating Sensemaking in
Computational Notebooks
- Authors: Souti Chattopadhyay, Zixuan Feng, Emily Arteaga, Audrey Au, Gonzalo
Ramos, Titus Barik, Anita Sarma
- Abstract summary: Porpoise integrates computational notebook features with digital design, grouping cells into labeled sections that can be expanded, collapsed, or annotated for improved sensemaking.
Our study with 24 data scientists found that Porpoise enhanced code comprehension, with one participant describing the experience as "It's really like reading a book."
- Score: 10.621214052177125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reusing and making sense of other scientists' computational notebooks is a common practice. However, making sense of existing notebooks is a struggle, as these reference notebooks are often exploratory, have messy structures, include multiple alternatives, and offer little explanation. To help mitigate these issues, we
developed a catalog of cognitive tasks associated with the sensemaking process.
Utilizing this catalog, we introduce Porpoise: an interactive overlay on
computational notebooks. Porpoise integrates computational notebook features
with digital design, grouping cells into labeled sections that can be expanded,
collapsed, or annotated for improved sensemaking.
We investigated data scientists' needs when working with unfamiliar computational notebooks and studied the impact of Porpoise adaptations on their comprehension process. Our counterbalanced study with 24 data scientists found Porpoise enhanced code comprehension, making the experience more akin to reading a book, with one participant describing it as "It's really like reading a book."
Related papers
- Predicting the Understandability of Computational Notebooks through Code Metrics Analysis [0.5277756703318045]
We employ a fine-tuned DistilBERT transformer to identify user comments associated with code understandability.
We established a criterion called User Opinion Code Understandability (UOCU), which considers the number of relevant comments, upvotes on those comments, total notebook views, and total notebook upvotes.
We trained machine learning models to predict notebook code understandability based solely on their metrics.
arXiv Detail & Related papers (2024-06-16T15:58:40Z)
- Manifold Learning via Memory and Context [5.234742752529437]
We present a navigation-based approach to manifold learning using memory and context.
We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning.
We discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems.
arXiv Detail & Related papers (2024-05-17T17:06:19Z)
- Notably Inaccessible -- Data Driven Understanding of Data Science Notebook (In)Accessibility [13.428631054625797]
We perform a large scale systematic analysis of 100000 Jupyter notebooks to identify various accessibility challenges.
We make recommendations to improve accessibility of the artifacts of a notebook, suggest authoring practices, and propose changes to infrastructure to make notebooks accessible.
arXiv Detail & Related papers (2023-08-07T01:33:32Z)
- AmadeusGPT: a natural language interface for interactive animal behavioral analysis [65.55906175884748]
We introduce AmadeusGPT: a natural language interface that turns natural language descriptions of behaviors into machine-executable code.
We show that it produces state-of-the-art performance on the MABe 2022 behavior challenge tasks.
AmadeusGPT presents a novel way to merge deep biological knowledge, large-language models, and core computer vision modules into a more naturally intelligent system.
arXiv Detail & Related papers (2023-07-10T19:15:17Z)
- Navigating causal deep learning [78.572170629379]
Causal deep learning (CDL) is a new and important research area in the larger field of machine learning.
This paper categorises methods in causal deep learning beyond Pearl's ladder of causation.
Our paradigm is a tool which helps researchers to: find benchmarks, compare methods, and most importantly: identify research gaps.
arXiv Detail & Related papers (2022-12-01T23:44:23Z)
- StickyLand: Breaking the Linear Presentation of Computational Notebooks [5.1175396458764855]
StickyLand is a notebook extension for empowering users to freely organize their code in non-linear ways.
With sticky cells that are always shown on the screen, users can quickly access their notes, instantly observe experiment results, and easily build interactive dashboards.
arXiv Detail & Related papers (2022-02-22T18:25:54Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel.
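The random Fourier features idea underlying this entry can be sketched as follows. This is a minimal, generic RFF approximation of an RBF kernel, not the paper's MetaKernel method; the function name and parameters are illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    # Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    # with an explicit feature map z(x), so that z(x) @ z(y) ~ k(x, y).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error shrinks as `n_features` grows, which is what makes such randomized feature maps attractive for kernel-based few-shot methods.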
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
- GIS and Computational Notebooks [0.0]
This chapter introduces computational notebooks in the geographical context.
It begins by explaining the computational paradigm and philosophy that underlies notebooks.
It then unpacks their architecture to illustrate a notebook user's typical workflow.
arXiv Detail & Related papers (2021-01-02T01:59:14Z)
- ALICE: Active Learning with Contrastive Natural Language Explanations [69.03658685761538]
We propose Active Learning with Contrastive Explanations (ALICE) to improve data efficiency in learning.
ALICE learns to first use active learning to select the most informative pairs of label classes to elicit contrastive natural language explanations.
It then extracts knowledge from these explanations using a semantic parser.
arXiv Detail & Related papers (2020-09-22T01:02:07Z)
- Learning compositional functions via multiplicative weight updates [97.9457834009578]
We show that multiplicative weight updates satisfy a descent lemma tailored to compositional functions.
We show that Madam can train state of the art neural network architectures without learning rate tuning.
arXiv Detail & Related papers (2020-06-25T17:05:19Z)
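For context, the classic multiplicative-weights update that this line of work builds on can be sketched as follows; this is the textbook experts algorithm, not the paper's Madam optimizer:

```python
import math

def multiplicative_weights(losses_per_round, n_experts, eta=0.5):
    """Classic multiplicative-weights update: each round, every expert's
    weight is scaled by exp(-eta * loss), then weights are renormalized
    to form a probability distribution over experts."""
    w = [1.0] * n_experts
    for losses in losses_per_round:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w
```

Because the update is multiplicative rather than additive, weight magnitudes change by relative factors each step, which is the property the paper's descent lemma formalizes for compositional (layered) functions.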
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.