Patterns, predictions, and actions: A story about machine learning
- URL: http://arxiv.org/abs/2102.05242v1
- Date: Wed, 10 Feb 2021 03:42:03 GMT
- Title: Patterns, predictions, and actions: A story about machine learning
- Authors: Moritz Hardt and Benjamin Recht
- Abstract summary: This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions.
Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences.
- Score: 59.32629659530159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This graduate textbook on machine learning tells a story of how patterns in
data support predictions and consequential actions. Starting with the
foundations of decision making, we cover representation, optimization, and
generalization as the constituents of supervised learning. A chapter on
datasets as benchmarks examines their histories and scientific bases.
Self-contained introductions to causality, the practice of causal inference,
sequential decision making, and reinforcement learning equip the reader with
concepts and tools to reason about actions and their consequences. Throughout,
the text discusses historical context and societal impact. We invite readers
from all backgrounds; some experience with probability, calculus, and linear
algebra suffices.
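As a concrete illustration of the supervised-learning pipeline the abstract outlines (a representation, an optimization procedure, and generalization to unseen data), below is a minimal sketch of empirical risk minimization by gradient descent. The synthetic data, linear model, and hyperparameters are invented for this example and are not taken from the book.

```python
import numpy as np

# Minimal empirical risk minimization sketch (illustrative only, not from the book):
# fit a linear predictor by gradient descent on the squared loss and check
# how the learned model fares on held-out data.

rng = np.random.default_rng(0)

# Synthetic data: features X and labels y from a noisy linear rule (an assumption for the demo).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Representation: a linear model w. Optimization: gradient descent on the empirical risk.
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

w = np.zeros(d)
lr = 0.05
for _ in range(500):
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= lr * grad

# Generalization: compare the training risk to the risk on unseen data.
train_risk = np.mean((X_train @ w - y_train) ** 2)
test_risk = np.mean((X_test @ w - y_test) ** 2)
print(f"train risk: {train_risk:.4f}, test risk: {test_risk:.4f}")
```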
Related papers
- Introduction to Machine Learning [0.0]
This book introduces the mathematical foundations and techniques that lead to the development and analysis of many of the algorithms that are used in machine learning.
The subject then switches to generative methods, starting with a chapter that presents sampling methods.
The next chapters focus on unsupervised learning methods, for clustering, factor analysis and manifold learning.
arXiv Detail & Related papers (2024-09-04T12:51:41Z)
- When to generate hedges in peer-tutoring interactions [1.0466434989449724]
The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours.
Results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the model's performance.
We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction.
arXiv Detail & Related papers (2023-07-28T14:29:19Z)
- Speech representation learning: Learning bidirectional encoders with single-view, multi-view, and multi-task methods [7.1345443932276424]
This thesis focuses on representation learning for sequence data over time or space.
It aims to improve downstream sequence prediction tasks by using the learned representations.
arXiv Detail & Related papers (2023-07-25T20:38:55Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- REX: Reasoning-aware and Grounded Explanation [30.392986232906107]
We develop a new type of multi-modal explanation that explains decisions by traversing the reasoning process and grounding keywords in the images.
Second, we identify the critical need to tightly couple important components across the visual and textual modalities for explaining the decisions.
Third, we propose a novel explanation generation method that explicitly models the pairwise correspondence between words and regions of interest.
arXiv Detail & Related papers (2022-03-11T17:28:42Z)
- Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-07T22:52:21Z)
- Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning [9.887110107270196]
Recent work has demonstrated the promise of combining local explanations with active learning for understanding and supervising black-box models.
Here we show that, under specific conditions, these algorithms may misrepresent the quality of the model being learned.
We address this narrative bias by introducing explanatory guided learning.
arXiv Detail & Related papers (2020-07-20T11:51:31Z)
- Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization [86.45110800123216]
In the task of text summarization, salience estimation for words, phrases or sentences is a critical component.
We propose a Multi-Attention Learning framework which contains two new attention learning components for salience estimation.
arXiv Detail & Related papers (2020-04-07T02:38:56Z)
- Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts about events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as anchoring to the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.