A Review of Pulse-Coupled Neural Network Applications in Computer Vision and Image Processing
- URL: http://arxiv.org/abs/2406.00239v1
- Date: Sat, 1 Jun 2024 00:10:05 GMT
- Title: A Review of Pulse-Coupled Neural Network Applications in Computer Vision and Image Processing
- Authors: Nurul Rafi, Pablo Rivas
- Abstract summary: We present several applications in which pulse-coupled neural networks (PCNNs) have successfully addressed some fundamental image processing and computer vision challenges.
Results achieved in these applications suggest that the PCNN architecture generates useful perceptual information relevant to a wide variety of computer vision tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in neural models inspired by the mammalian visual cortex has led to many spiking neural networks, such as pulse-coupled neural networks (PCNNs). These are oscillating, spatio-temporal models that are stimulated with images to produce several time-based responses. This paper reviews the state of the art of PCNNs, covering their mathematical formulation, variants, and other simplifications found in the literature. We present several applications in which PCNN architectures have successfully addressed fundamental image processing and computer vision challenges, including image segmentation, edge detection, medical imaging, image fusion, image compression, object recognition, and remote sensing. The results achieved in these applications suggest that the PCNN architecture generates useful perceptual information relevant to a wide variety of computer vision tasks.
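To make the formulation concrete, below is a minimal NumPy sketch of the simplified PCNN neuron model most commonly used in the literature this review covers (feeding and linking compartments, modulatory coupling, and a decaying dynamic threshold). The parameter values and coupling kernel are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a simplified PCNN (Lindblad-Kinser form); parameters are illustrative.
import numpy as np
from scipy.signal import convolve2d

def pcnn(S, n_iter=20, alpha_F=0.1, alpha_L=1.0, alpha_E=1.0,
         V_F=0.5, V_L=0.2, V_E=20.0, beta=0.1):
    """Run a simplified PCNN on a normalized grayscale image S (values in [0, 1]).

    Returns the sequence of binary pulse maps Y[1..n_iter]; their time-based
    firing patterns carry the perceptual information used for segmentation,
    edge detection, fusion, and related tasks.
    """
    S = np.asarray(S, dtype=float)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])      # local coupling weights (assumed)
    F = np.zeros_like(S)                      # feeding compartment
    L = np.zeros_like(S)                      # linking compartment
    E = np.ones_like(S, dtype=float)          # dynamic threshold
    Y = np.zeros_like(S)                      # binary pulse output
    pulses = []
    for _ in range(n_iter):
        coupling = convolve2d(Y, kernel, mode='same')
        F = np.exp(-alpha_F) * F + V_F * coupling + S   # feeding: decays, driven by stimulus
        L = np.exp(-alpha_L) * L + V_L * coupling       # linking: decays, driven by neighbors
        U = F * (1.0 + beta * L)                        # internal activity (modulatory coupling)
        Y = (U > E).astype(float)                       # fire when activity exceeds threshold
        E = np.exp(-alpha_E) * E + V_E * Y              # raise threshold where neurons fired
        pulses.append(Y.copy())
    return pulses

# Example: pulses = pcnn(img / img.max()); summing the pulse maps over time gives a
# "firing map" often used as a segmentation or fusion feature.
```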
Related papers
- Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z)
- DQNAS: Neural Architecture Search using Reinforcement Learning [6.33280703577189]
Convolutional Neural Networks have been used in a variety of image related applications.
In this paper, we propose an automated Neural Architecture Search framework, guided by the principles of Reinforcement Learning.
arXiv Detail & Related papers (2023-01-17T04:01:47Z)
- Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC).
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
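As a rough illustration of what "progressively refines latent state maps" means in predictive coding, here is a toy, fully dense sketch; it is not the Conv-NGC algorithm itself, and the sizes, step size, and decay term are all assumptions.

```python
# Toy predictive-coding inference: latent states are iteratively refined to reduce
# top-down prediction error. Illustrative only; not the Conv-NGC implementation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                            # observation (e.g., a flattened image patch)
W = rng.normal(scale=0.1, size=(64, 16))      # generative weights (held fixed here)
z = np.zeros(16)                              # latent state map, initialized at zero

eta = 0.1                                     # inference step size (illustrative)
for _ in range(50):
    x_hat = W @ z                             # top-down prediction of the input
    err = x - x_hat                           # prediction error
    z += eta * (W.T @ err - 0.01 * z)         # refine latents: reduce error, light decay
```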
arXiv Detail & Related papers (2022-11-22T06:42:41Z)
- Understanding the Influence of Receptive Field and Network Complexity in Neural-Network-Guided TEM Image Analysis [0.0]
We systematically examine how neural network architecture choices affect segmentation performance in transmission electron microscopy (TEM) images.
We find that for low-resolution TEM images, which rely on amplitude contrast to distinguish nanoparticles from background, the receptive field does not significantly influence segmentation performance.
On the other hand, for high-resolution TEM images, which rely on a combination of amplitude and phase contrast changes to identify nanoparticles, the receptive field is a key parameter for improved performance.
arXiv Detail & Related papers (2022-04-08T18:45:15Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a $\textit{spectral bias}$ towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
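For reference, the empirical Neural Tangent Kernel of a network $f(x;\theta)$ is $\Theta(x, x') = \nabla_\theta f(x;\theta)^{\top} \nabla_\theta f(x';\theta)$; the spectral-bias analysis summarized above concerns the eigenspectrum of this kernel.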
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Self-Denoising Neural Networks for Few Shot Learning [66.38505903102373]
We present a new training scheme that adds noise at multiple stages of an existing neural architecture while simultaneously learning to be robust to this added noise.
This architecture, which we call a Self-Denoising Neural Network (SDNN), can be applied easily to most modern convolutional neural architectures.
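As a rough sketch of the general idea (injecting noise at several stages of an existing network while training it to stay robust to that noise), consider the following PyTorch wrapper; the layer placement and noise scale are illustrative assumptions rather than the SDNN paper's exact design.

```python
# Minimal sketch of multi-stage noise injection during training; not the SDNN recipe.
import torch
import torch.nn as nn

class NoisyStage(nn.Module):
    """Wraps an existing stage and adds Gaussian noise to its output during training."""
    def __init__(self, stage: nn.Module, sigma: float = 0.1):
        super().__init__()
        self.stage = stage
        self.sigma = sigma

    def forward(self, x):
        out = self.stage(x)
        if self.training:                       # noise is only injected in train mode
            out = out + self.sigma * torch.randn_like(out)
        return out

# Example: inject noise after each stage of a small convolutional backbone.
backbone = nn.Sequential(
    NoisyStage(nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())),
    NoisyStage(nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)
logits = backbone(torch.randn(4, 3, 32, 32))    # forward pass with noise active
```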
arXiv Detail & Related papers (2021-10-26T03:28:36Z)
- Convolutional Neural Networks in Orthodontics: a review [10.334172684650632]
Convolutional neural networks (CNNs) are used in many areas of computer vision.
This review presents the application of CNNs in orthodontics, one of the fields of dentistry.
arXiv Detail & Related papers (2021-04-18T16:02:30Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
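For readers unfamiliar with weight inflation, the sketch below shows the commonly used recipe for turning a pre-trained 2D convolution into a 3D one (repeat the kernel along the temporal axis and rescale); it is an illustration under those assumptions, not the authors' exact fine-tuning procedure.

```python
# Minimal sketch of 2D-to-3D convolution weight inflation; illustrative assumptions.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_kernel: int = 3) -> nn.Conv3d:
    """Build a Conv3d whose weights replicate the Conv2d kernel along time."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(time_kernel, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_kernel // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # Repeat the 2D kernel T times along the new temporal axis, then rescale so the
        # response to a static (repeated-frame) input matches the original 2D filter.
        w2d = conv2d.weight.unsqueeze(2)                          # (out, in, 1, kH, kW)
        conv3d.weight.copy_(w2d.repeat(1, 1, time_kernel, 1, 1) / time_kernel)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Example: inflate one layer and run it on a short clip (batch, channels, time, H, W).
conv3d = inflate_conv2d(nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1))
out = conv3d(torch.randn(2, 3, 8, 32, 32))
```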
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- 11 TeraFLOPs per second photonic convolutional accelerator for deep learning optical neural networks [0.0]
We demonstrate a universal optical vector convolutional accelerator operating beyond 10 TeraFLOPS (floating-point operations per second).
We then use the same hardware to sequentially form a deep optical CNN with ten output neurons, achieving successful recognition of all 10 digits from 900-pixel handwritten digit images with 88% accuracy.
This approach is scalable and trainable to much more complex networks for demanding applications such as unmanned vehicles and real-time video recognition.
arXiv Detail & Related papers (2020-11-14T21:24:01Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- MVC-Net: A Convolutional Neural Network Architecture for Manifold-Valued Images With Applications [5.352699766206807]
We present a detailed description of how to use MVC layers to build full, multi-layer neural networks that operate on manifold-valued images.
We empirically demonstrate the superior performance of MVC-Nets in medical imaging and computer vision tasks.
arXiv Detail & Related papers (2020-03-02T22:37:56Z)