Can Information Behaviour Inform Machine Learning?
- URL: http://arxiv.org/abs/2205.00538v1
- Date: Sun, 1 May 2022 19:00:52 GMT
- Title: Can Information Behaviour Inform Machine Learning?
- Authors: Michael Ridley
- Abstract summary: The paper illustrates how human information behaviour research can bring to machine learning a more nuanced view of information and informing.
Despite their clear differences, the fields of information behaviour and machine learning share many common objectives, paradigms, and key research questions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The objective of this paper is to explore the opportunities for human
information behaviour research to inform and influence the field of machine
learning and the resulting machine information behaviour. Using the development
of foundation models in machine learning as an example, the paper illustrates
how human information behaviour research can bring to machine learning a more
nuanced view of information and informing, a better understanding of
information need and how that affects the communication among people and
systems, guidance on the nature of context and how to operationalize that in
models and systems, and insights into bias, misinformation, and
marginalization. Despite their clear differences, the fields of information
behaviour and machine learning share many common objectives, paradigms, and key
research questions. The example of foundation models illustrates that human
information behaviour research has much to offer in addressing some of the
challenges emerging in the nascent area of machine information behaviour.
Related papers
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence [AI] applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Explainability in Machine Learning: a Pedagogical Perspective [9.393988089692947]
We provide a pedagogical perspective on how to structure the learning process to better impart knowledge to students and researchers in machine learning.
We discuss the advantages and disadvantages of various opaque and transparent machine learning models.
We also discuss ways to structure potential assignments to best help students learn to use explainability as a tool alongside any given machine learning application.
arXiv Detail & Related papers (2022-02-21T16:15:57Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between humans and robots.
This paper presents a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Explainable Machine Learning with Prior Knowledge: An Overview [1.1045760002858451]
The complexity of machine learning models has elicited research to make them more explainable.
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models.
arXiv Detail & Related papers (2021-05-21T07:33:22Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interest is shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- Ethical behavior in humans and machines -- Evaluating training data quality for beneficial machine learning [0.0]
This study describes new dimensions of data quality for supervised machine learning applications.
The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from.
arXiv Detail & Related papers (2020-08-26T09:48:38Z)
- A Review on Intelligent Object Perception Methods Combining Knowledge-based Reasoning and Machine Learning [60.335974351919816]
Object perception is a fundamental sub-field of Computer Vision.
Recent work seeks to integrate knowledge engineering in order to make the visual interpretation of objects more intelligent.
arXiv Detail & Related papers (2019-12-26T13:26:49Z)