Interactive Machine Learning: A State of the Art Review
- URL: http://arxiv.org/abs/2207.06196v1
- Date: Wed, 13 Jul 2022 13:43:16 GMT
- Title: Interactive Machine Learning: A State of the Art Review
- Authors: Natnael A. Wondimu, Cédric Buche and Ubbo Visser
- Abstract summary: We provide a comprehensive analysis of the state-of-the-art of interactive machine learning (iML).
Research works on adversarial black-box attacks and the corresponding iML-based defense systems, exploratory machine learning, resource-constrained learning, and iML performance evaluation are analyzed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning has proved useful in many software disciplines, including
computer vision, speech and audio processing, natural language processing,
robotics, and other fields. However, its applicability has been significantly
hampered by its black-box nature and substantial resource consumption.
Performance is achieved at the expense of enormous computational resources,
often at the cost of the robustness and trustworthiness of the model. Recent
research has identified a lack of interactivity as the prime source of these
problems. Consequently, interactive machine learning (iML) has attracted
increased attention from researchers on account of its human-in-the-loop
modality and relatively efficient resource utilization. A state-of-the-art
review of interactive machine learning therefore plays a vital role in easing
the effort of building human-centred models.
In this paper, we provide a comprehensive analysis of the state-of-the-art of
iML. We analyze salient research works using a mixed merit-oriented and
application/task-oriented taxonomy. We use a bottom-up clustering approach to
generate this taxonomy of iML research works. Research works on adversarial
black-box attacks and the corresponding iML-based defense systems, exploratory
machine learning, resource-constrained learning, and iML performance
evaluation are analyzed under their corresponding themes in our merit-oriented
taxonomy. We further classify these research works into technical and sectoral
categories. Finally, we discuss research opportunities that we believe can
inspire future work in iML.
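
As a loose illustration of the bottom-up clustering approach mentioned in the
abstract, the sketch below groups a handful of made-up paper abstracts with
agglomerative (bottom-up) clustering over TF-IDF vectors. The sample abstracts,
vectorizer settings, and cluster count are assumptions chosen for illustration,
not the authors' actual taxonomy-generation pipeline; a recent scikit-learn
release is assumed for the `metric` argument.

```python
# Hypothetical sketch: bottom-up (agglomerative) clustering of paper
# abstracts, in the spirit of the taxonomy-generation step described above.
# The sample abstracts, vectorizer settings and cluster count are
# illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

abstracts = [
    "Human-in-the-loop labeling with interactive feedback for image models.",
    "Black-box adversarial attacks and interactive defense strategies.",
    "Resource-constrained on-device learning with user corrections.",
    "Evaluating interactive machine learning systems with user studies.",
]

# Represent each abstract as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts).toarray()

# Bottom-up clustering: start from singleton clusters and merge the most
# similar pairs (cosine similarity, average linkage) until two remain.
clustering = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
)
labels = clustering.fit_predict(X)

for label, text in zip(labels, abstracts):
    print(f"cluster {label}: {text}")
```

On a full corpus of iML abstracts, the resulting dendrogram could be cut at
several depths to obtain the levels of a taxonomy.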
Related papers
- Large Language Model for Qualitative Research -- A Systematic Mapping Study [3.302912592091359]
Large Language Models (LLMs), powered by advanced generative AI, have emerged as transformative tools.
This study systematically maps the literature on the use of LLMs for qualitative research.
Findings reveal that LLMs are utilized across diverse fields, demonstrating the potential to automate processes.
arXiv Detail & Related papers (2024-11-18T21:28:00Z)
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- Extraction of Research Objectives, Machine Learning Model Names, and Dataset Names from Academic Papers and Analysis of Their Interrelationships Using LLM and Network Analysis [0.0]
This study proposes a methodology for extracting tasks, machine learning methods, and dataset names from scientific papers.
The proposed method's expression extraction achieves an F-score exceeding 0.8 across various categories when using Llama3.
Benchmarking results on financial domain papers have demonstrated the effectiveness of this method.
arXiv Detail & Related papers (2024-08-22T03:10:52Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various domains of ML with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- Materials science in the era of large language models: a perspective [0.0]
Large Language Models (LLMs) have garnered considerable interest due to their impressive capabilities.
This paper argues that their ability to handle ambiguous requirements across a range of tasks and disciplines means they could be a powerful tool for aiding researchers.
arXiv Detail & Related papers (2024-03-11T17:34:25Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement (an illustrative sketch follows this list).
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
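
As a loose illustration of the Integrated Gradients framework mentioned in the
human-trafficking entry above, the sketch below attributes a toy classifier's
prediction to its input features using Captum. The two-layer model and random
feature vector are placeholders, not the paper's actual models or data.

```python
# Minimal, hypothetical Integrated Gradients example using Captum.
# The toy model and random "features" stand in for whatever classifier and
# text representation a real advertisement-analysis pipeline would use.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

torch.manual_seed(0)

# Toy binary classifier over a 16-dimensional feature vector.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.rand(1, 16)            # placeholder input features
baseline = torch.zeros_like(inputs)   # all-zeros reference input

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baseline, target=1, return_convergence_delta=True
)

# Each attribution estimates how much a feature pushed the prediction for
# class 1 away from the baseline prediction.
print(attributions)
print("convergence delta:", delta.item())
```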
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.