Active Learning on Neural Networks through Interactive Generation of
Digit Patterns and Visual Representation
- URL: http://arxiv.org/abs/2310.01580v1
- Date: Mon, 2 Oct 2023 19:21:24 GMT
- Title: Active Learning on Neural Networks through Interactive Generation of
Digit Patterns and Visual Representation
- Authors: Dong H. Jeong, Jin-Hee Cho, Feng Chen, Audun Josang, Soo-Yeon Ji
- Abstract summary: An interactive learning system is designed to create digit patterns and recognize them in real time.
An evaluation with multiple datasets is conducted to determine its usability for active learning.
- Score: 9.127485315153312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial neural networks (ANNs) have been broadly utilized to analyze
various data and to solve problems in different domains. However, neural networks
(NNs) have long been regarded as black boxes because their underlying computation
and meaning are hidden. As a result, users often have difficulty interpreting the
underlying mechanism of NNs and the benefits of using them. In this paper, to
improve users' learning and understanding of NNs, an interactive learning system
is designed to create digit patterns and recognize them in real time. To help
users clearly understand the visual differences among the digit patterns (i.e., 0
to 9) and their recognition results with an NN, a visualization is integrated
that presents all digit patterns in a two-dimensional display space and supports
multiple user interactions. An evaluation with multiple datasets is conducted to
determine the system's usability for active learning. In addition, informal user
testing is carried out during a summer workshop by asking the participants to use
the system.
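As a rough illustration of the system described in the abstract (not the authors' implementation), the sketch below trains a small network on hand-built digit glyphs and projects its hidden activations into a two-dimensional display space. The 5x3 binary glyphs, the network sizes, and the PCA projection are all assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

# Hypothetical 5x3 binary glyphs for a few digits (a full system would cover 0-9).
GLYPHS = {
    0: [1,1,1, 1,0,1, 1,0,1, 1,0,1, 1,1,1],
    1: [0,1,0, 1,1,0, 0,1,0, 0,1,0, 1,1,1],
    7: [1,1,1, 0,0,1, 0,1,0, 0,1,0, 0,1,0],
}
X = torch.tensor(list(GLYPHS.values()), dtype=torch.float32)
y = torch.tensor(list(GLYPHS.keys()))

model = nn.Sequential(nn.Linear(15, 8), nn.ReLU(), nn.Linear(8, 10))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                       # recognition is fast enough for real-time use
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Project hidden activations into a 2D display space (PCA via SVD).
with torch.no_grad():
    hidden = model[1](model[0](X))         # activations after the ReLU layer
    centered = hidden - hidden.mean(dim=0)
    _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
    coords_2d = centered @ Vh[:2].T        # one (x, y) display point per digit pattern
print(coords_2d)
```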
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
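Since the entry above ties dropout to redundancy and robustness, here is a generic sketch of the mechanism itself (standard PyTorch dropout, not the paper's experimental setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)       # each unit is zeroed with probability 0.5
x = torch.ones(10)

drop.train()                   # training mode: a fresh random mask per pass,
print(drop(x))                 # surviving units scaled by 1/(1-p) = 2.0
print(drop(x))                 # a different random subset is zeroed this time

drop.eval()                    # evaluation mode: dropout is a no-op,
print(drop(x))                 # the full vector passes through unchanged
```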
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Simultaneous Learning of the Inputs and Parameters in Neural Collaborative Filtering [5.076419064097734]
We show that the non-zero elements of the inputs are learnable parameters that determine the weights in combining the user/item embeddings.
We propose to learn the value of the non-zero elements of the inputs jointly with the neural network parameters.
arXiv Detail & Related papers (2022-03-14T19:47:38Z)
- The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention [8.131130865777344]
Linear layers in neural networks (NNs) trained by gradient descent can be expressed as a key-value memory system.
No prior work has effectively studied the operations of NNs in such a form.
We conduct experiments on small scale supervised image classification tasks in single-task, multi-task, and continual learning settings.
arXiv Detail & Related papers (2022-02-11T17:49:22Z)
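The key claim of the entry above can be checked numerically. The toy reconstruction below (mine, not the paper's code) trains a linear layer by gradient descent on a squared loss and verifies that its test-time prediction equals the initial prediction plus a dot-product "attention" over the stored training patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, steps, lr = 5, 3, 50, 0.1

W0 = rng.normal(size=(d_out, d_in))
W = W0.copy()
keys, values = [], []                    # the key-value memory built during training

for _ in range(steps):
    x = rng.normal(size=d_in)            # training input (the "key")
    target = rng.normal(size=d_out)
    g = W @ x - target                   # gradient of 0.5*||Wx - target||^2 wrt Wx
    W -= lr * np.outer(g, x)             # primal: an ordinary gradient step
    keys.append(x)
    values.append(-lr * g)               # dual: store the error signal as the "value"

x_test = rng.normal(size=d_in)
primal = W @ x_test
# Dual form: initial prediction + values weighted by key/query dot products.
dual = W0 @ x_test + sum(v * (k @ x_test) for k, v in zip(keys, values))
print(np.allclose(primal, dual))         # True
```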
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Deep Collective Learning: Learning Optimal Inputs and Weights Jointly in Deep Neural Networks [5.6592403195043826]
In deep learning and computer vision literature, visual data are always represented in a manually designed coding scheme.
We boldly question whether the manually designed inputs are good for DNN training for different tasks.
We propose the paradigm of deep collective learning, which aims to learn the weights of DNNs and the inputs to DNNs simultaneously for given tasks.
arXiv Detail & Related papers (2020-09-17T00:33:04Z)
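A minimal sketch of the paradigm described in the entry above (illustrative, not the authors' method): register the inputs themselves as trainable parameters and let one optimizer update them together with the weights. The sizes and the random labels are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, code_dim, n_classes = 8, 6, 2

inputs = nn.Parameter(torch.randn(n_samples, code_dim))  # learned input codes
model = nn.Sequential(nn.Linear(code_dim, 16), nn.ReLU(), nn.Linear(16, n_classes))
labels = torch.randint(0, n_classes, (n_samples,))

# One optimizer over both the network weights and the inputs.
opt = torch.optim.Adam(list(model.parameters()) + [inputs], lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                      # gradients flow into weights AND inputs
    opt.step()
print(loss.item())
```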
- How Researchers Use Diagrams in Communicating Neural Network Systems [5.064404027153093]
This paper reports on a study into the use of neural network system diagrams.
We find high diversity of usage, perception and preference in both creation and interpretation of diagrams.
Considering the interview data alongside existing guidance, we propose guidelines aiming to improve the way in which neural network system diagrams are constructed.
arXiv Detail & Related papers (2020-08-28T10:21:03Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
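The kind of geometric prior the survey discusses can be illustrated with a small equivariance check (a toy numpy example, not taken from the survey): shifting the input of a circular convolution shifts its output by the same amount.

```python
import numpy as np

def circular_conv(x, kernel):
    # y[i] = sum_j kernel[j] * x[(i - j) mod n], i.e. convolution with wrap-around.
    n = len(x)
    return np.array([sum(kernel[j] * x[(i - j) % n] for j in range(len(kernel)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
kernel = np.array([1.0, -2.0, 1.0])
shift = 3

out_of_shifted = circular_conv(np.roll(x, shift), kernel)   # conv(shift(x))
shifted_out = np.roll(circular_conv(x, kernel), shift)      # shift(conv(x))
print(np.allclose(out_of_shifted, shifted_out))             # True: equivariance
```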
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
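The NAM structure summarized above is concrete enough to sketch (sizes and training details are illustrative, not the paper's): one small network per input feature, and a prediction that is their sum plus a bias, so each per-feature function f_i(x_i) can be inspected on its own.

```python
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        # One tiny MLP per feature; each sees only its own scalar input.
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                      # x: (batch, n_features)
        contributions = [net(x[:, i:i+1]) for i, net in enumerate(self.feature_nets)]
        # The model is a linear combination of per-feature shape functions:
        # f(x) = bias + sum_i f_i(x_i), which keeps each feature's effect legible.
        return self.bias + torch.stack(contributions, dim=0).sum(dim=0)

model = NAM(n_features=4)
x = torch.randn(8, 4)
print(model(x).shape)                          # torch.Size([8, 1])
```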
This list is automatically generated from the titles and abstracts of the papers on this site.