What Images are More Memorable to Machines?
- URL: http://arxiv.org/abs/2211.07625v2
- Date: Tue, 11 Jul 2023 13:47:07 GMT
- Title: What Images are More Memorable to Machines?
- Authors: Junlin Han, Huangying Zhan, Jie Hong, Pengfei Fang, Hongdong Li, Lars
Petersson, Ian Reid
- Abstract summary: Similar to humans, machines also tend to memorize certain kinds of images, whereas the types of images that machines and humans memorize are different.
This work proposes the concept of machine memorability and opens a new research direction at the interface between machine memory and visual data.
- Score: 87.14558566342322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of measuring and predicting how memorable an
image is to pattern recognition machines, as a path to explore machine
intelligence. Firstly, we propose a self-supervised machine memory
quantification pipeline, dubbed ``MachineMem measurer'', to collect machine
memorability scores of images. Similar to humans, machines also tend to
memorize certain kinds of images, whereas the types of images that machines and
humans memorize are different. Through in-depth analysis and comprehensive
visualizations, we gradually unveil that ``complex'' images are usually more
memorable to machines. We further conduct extensive experiments across 11
different machines (from linear classifiers to modern ViTs) and 9 pre-training
methods to analyze and understand machine memory. This work proposes the
concept of machine memorability and opens a new research direction at the
interface between machine memory and visual data.
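The paper's actual MachineMem measurer is a self-supervised pipeline; the details below are not from the abstract. As a purely illustrative toy, one way to operationalize "how memorable an image is to a machine" is to measure how much including that image in training changes the model's behavior on it. The sketch below does this for a closed-form ridge regressor over stand-in feature vectors; all names (`ridge_fit`, `memorability_score`) and the scoring rule are hypothetical, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def memorability_score(features, target, X_train, y_train):
    # Hypothetical proxy score: how much the model's prediction for this
    # image changes once the image itself is added to the training set.
    w_without = ridge_fit(X_train, y_train)
    X_with = np.vstack([X_train, features])
    y_with = np.append(y_train, target)
    w_with = ridge_fit(X_with, y_with)
    return abs(float(features @ w_with) - float(features @ w_without))

# Stand-in "image features" and labels (random, for illustration only).
X = rng.normal(size=(50, 8))
y = rng.normal(size=50)
probe = rng.normal(size=8)
score = memorability_score(probe, 1.0, X, y)
print(score >= 0.0)
```

Under this toy definition, images whose inclusion barely moves the model score low, while images the model must visibly adapt to score high; the real pipeline in the paper is considerably more involved and spans 11 machines and 9 pre-training methods.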
Related papers
- An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels [65.64402188506644]
Vanilla Transformers can operate by treating each individual pixel as a token and achieve highly performant results.
We mainly showcase the effectiveness of pixels-as-tokens across three well-studied tasks in computer vision.
arXiv Detail & Related papers (2024-06-13T17:59:58Z) - Survey on Memory-Augmented Neural Networks: Cognitive Insights to AI
Applications [4.9008611361629955]
Memory-Augmented Neural Networks (MANNs) blend human-like memory processes into AI.
The study investigates advanced architectures such as Hopfield Networks, Neural Turing Machines, Correlation Matrix Memories, Memformer, and Neural Attention Memory.
It dives into real-world uses of MANNs across Natural Language Processing, Computer Vision, Multimodal Learning, and Retrieval Models.
arXiv Detail & Related papers (2023-12-11T06:05:09Z) - Can I say, now machines can think? [0.0]
We analyzed and explored the capabilities of artificial intelligence-enabled machines.
The Turing Test is a critical aspect of evaluating a machine's ability.
There are other aspects of intelligence too, and AI machines exhibit most of these aspects.
arXiv Detail & Related papers (2023-07-11T11:44:09Z) - A neuromorphic approach to image processing and machine vision [0.9137554315375922]
We explore the implementation of visual tasks such as image segmentation, visual attention and object recognition.
We emphasize the use of non-volatile memory devices such as memristors to realize artificial visual systems.
arXiv Detail & Related papers (2022-08-07T05:01:57Z) - Automatic Image Content Extraction: Operationalizing Machine Learning in
Humanistic Photographic Studies of Large Visual Archives [81.88384269259706]
We introduce the Automatic Image Content Extraction framework for machine learning-based search and analysis of large image archives.
The proposed framework can be applied in several domains in humanities and social sciences.
arXiv Detail & Related papers (2022-04-05T12:19:24Z) - Self-supervised machine learning model for analysis of nanowire
morphologies from transmission electron microscopy images [0.0]
We present a self-supervised transfer learning approach that uses a small number of labeled microscopy images for training.
Specifically, we train an image encoder with unlabeled images and use that encoder for transfer learning of different downstream image tasks.
arXiv Detail & Related papers (2022-03-25T19:32:03Z) - Masked Visual Pre-training for Motor Control [118.18189211080225]
Self-supervised visual pre-training from real-world images is effective for learning motor control tasks from pixels.
We freeze the visual encoder and train neural network controllers on top with reinforcement learning.
This is the first self-supervised model to exploit real-world images at scale for motor control.
arXiv Detail & Related papers (2022-03-11T18:58:10Z) - Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions [58.220137936626315]
This paper extensively discusses automated graph machine learning approaches.
We introduce AutoGL, the world's first dedicated open-source library for automated graph machine learning.
Also, we describe a tailored benchmark that supports unified, reproducible, and efficient evaluations.
arXiv Detail & Related papers (2022-01-04T18:31:31Z) - Multimodal Material Classification for Robots using Spectroscopy and
High Resolution Texture Imaging [14.458436940557924]
We present a multimodal sensing technique, leveraging near-infrared spectroscopy and close-range high resolution texture imaging.
We show that this representation enables a robot to recognize materials with greater accuracy than prior state-of-the-art approaches.
arXiv Detail & Related papers (2020-04-02T17:33:54Z) - Reservoir memory machines [79.79659145328856]
We propose reservoir memory machines, which are able to solve some of the benchmark tests for Neural Turing Machines.
Our model can also be seen as an extension of echo state networks with an external memory, enabling arbitrarily long storage without interference.
arXiv Detail & Related papers (2020-02-12T01:45:00Z)
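For readers unfamiliar with the echo state networks that reservoir memory machines extend, the following minimal sketch shows only the plain fixed-reservoir part: a random recurrent network, scaled to have spectral radius below 1, driven by an input sequence. The external-memory extension described in the paper is not reproduced here, and all sizes and constants are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_res, n_in = 100, 3
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))  # fixed random input weights
W = rng.normal(size=(n_res, n_res))
# Rescale so the spectral radius is 0.9 (< 1), a common heuristic for the
# echo state property (the reservoir forgets inputs over time).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    # Drive the fixed random reservoir; only a linear readout (not shown)
    # would be trained in a full echo state network.
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

seq = rng.normal(size=(20, n_in))
states = run_reservoir(seq)
print(states.shape)  # (20, 100)
```

Because the reservoir's fading memory cannot retain information indefinitely, the paper's contribution is precisely to pair such a reservoir with an external memory for arbitrarily long storage without interference.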
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.