Neuromorphic Retina: An FPGA-based Emulator
- URL: http://arxiv.org/abs/2501.08943v1
- Date: Wed, 15 Jan 2025 16:45:45 GMT
- Title: Neuromorphic Retina: An FPGA-based Emulator
- Authors: Prince Phillip, Pallab Kumar Nath, Kapil Jainwal, Andre van Schaik, Chetan Singh Thakur
- Abstract summary: We emulate a neuromorphic retina model on an FPGA. Phasic and tonic cells are realized in the model in the simplest way possible.
- Score: 1.6444558948529873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implementing accurate models of the retina is a challenging task, particularly in the context of creating visual prosthetics and devices. Although diverse artificial renditions of the retina exist, the pursuit of a more realistic model remains imperative. In this work, we emulate a neuromorphic retina model on an FPGA. The key feature of this model is its powerful adaptation to luminance and contrast, which allows it to accurately emulate the sensitivity of the biological retina to changes in light levels. Phasic and tonic cells are realized in the model in the simplest way possible. Our FPGA implementation of the proposed biologically inspired digital retina, incorporating a receptive field with a center-surround structure, is reconfigurable and can support 128×128 pixel images at a frame rate of 200 fps. It consumes 1720 slices and approximately 3.7k Look-Up Tables (LUTs) and Flip-Flops (FFs) on the FPGA. This implementation provides a high-performance, low-power, small-area solution and could be a significant step forward in the development of biologically plausible retinal prostheses with enhanced information processing capabilities.
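The center-surround receptive field described in the abstract is commonly modeled as a difference of Gaussians (DoG): an excitatory center lobe minus a broader inhibitory surround. The sketch below is a minimal software illustration of that spatial stage only; the kernel size and sigmas are illustrative assumptions, and the paper's fixed-point FPGA datapath, luminance/contrast adaptation, and phasic/tonic dynamics are not reproduced here.

```python
import numpy as np

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel: excitatory center minus inhibitory
    surround. Parameter values are illustrative, not taken from the paper."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    center = np.exp(-(xx**2 + yy**2) / (2 * sigma_c**2))
    surround = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    # Normalize each lobe so the kernel sums to ~0:
    # a uniform luminance field then produces no response.
    return center / center.sum() - surround / surround.sum()

def center_surround(frame, kernel):
    """Valid-mode 2D convolution of a luminance frame with the DoG kernel."""
    k = kernel.shape[0]
    h, w = frame.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            out[i, j] = np.sum(frame[i:i + k, j:j + k] * kernel)
    return out

# A uniform field yields a near-zero response; a bright spot yields
# a strong positive response at its location.
flat = np.full((16, 16), 0.5)
spot = flat.copy()
spot[8, 8] = 1.0
k = dog_kernel()
flat_resp = np.abs(center_surround(flat, k)).max()
spot_resp = center_surround(spot, k).max()
```

On an FPGA, such a kernel would typically be realized as a pipelined line-buffer convolution rather than the nested loops above, which is what makes frame rates like 200 fps at 128×128 feasible in small logic budgets.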
Related papers
- Neural-Driven Image Editing [51.11173675034121]
Traditional image editing relies on manual prompting, making it labor-intensive and inaccessible to individuals with limited motor control or language abilities. We propose LoongX, a hands-free image editing approach driven by neurophysiological signals. LoongX utilizes state-of-the-art diffusion models trained on a comprehensive dataset of 23,928 image editing pairs.
arXiv Detail & Related papers (2025-07-07T18:31:50Z) - RetinaLogos: Fine-Grained Synthesis of High-Resolution Retinal Images Through Captions [16.85664533914851]
Existing methods for synthesising Colour Fundus Photographs largely rely on predefined disease labels. We first introduce an innovative pipeline that creates a large-scale, captioned retinal dataset comprising 1.4 million entries. We employ a novel three-step training framework, RetinaLogos, which enables fine-grained semantic control over retinal images.
arXiv Detail & Related papers (2025-05-19T09:18:11Z) - Hyperspectral Image Restoration and Super-resolution with Physics-Aware Deep Learning for Biomedical Applications [1.5227564673552003]
We present a deep learning-based approach that restores and enhances pixel resolution post-acquisition without any a priori knowledge. Fine-tuned using metrics aligned with the imaging model, our physics-aware method achieves a 16X pixel super-resolution enhancement and a 12X imaging speedup. All methods are available as open-source software on GitHub.
arXiv Detail & Related papers (2025-03-03T17:23:23Z) - DAMamba: Vision State Space Model with Dynamic Adaptive Scan [51.81060691414399]
State space models (SSMs) have recently garnered significant attention in computer vision.
We propose Dynamic Adaptive Scan (DAS), a data-driven method that adaptively allocates scanning orders and regions.
Based on DAS, we propose the vision backbone DAMamba, which significantly outperforms current state-of-the-art vision Mamba models in vision tasks.
arXiv Detail & Related papers (2025-02-18T08:12:47Z) - Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z) - Retina-Inspired Object Motion Segmentation for Event-Cameras [0.0]
Event-cameras have emerged as a revolutionary technology with a high temporal resolution that far surpasses standard active pixel cameras. This research showcases the potential of additional retinal functionalities to extract visual features.
arXiv Detail & Related papers (2024-08-18T12:28:26Z) - Generative deep learning-enabled ultra-large field-of-view lens-free imaging [8.474666653683638]
We present a deep-learning(DL)-based imaging framework - GenLFI - leveraging generative artificial intelligence (AI) for holographic image reconstruction.
We demonstrate that GenLFI can achieve a real-time FOV over 550 mm², surpassing the current LFI system by more than 20-fold and even exceeding that of the world's largest confocal microscope by 1.76 times.
arXiv Detail & Related papers (2024-03-12T16:20:27Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - Neural Echos: Depthwise Convolutional Filters Replicate Biological Receptive Fields [56.69755544814834]
We present evidence suggesting that depthwise convolutional kernels are effectively replicating the biological receptive fields observed in the mammalian retina.
We propose a scheme that draws inspiration from the biological receptive fields.
arXiv Detail & Related papers (2024-01-18T18:06:22Z) - Low latency optical-based mode tracking with machine learning deployed on FPGAs on a tokamak [0.8506991993461593]
This study demonstrates an FPGA-based high-speed camera data acquisition and processing system.
It enables application in real-time machine-learning-based tokamak diagnostic and control as well as potential applications in other scientific domains.
arXiv Detail & Related papers (2023-11-30T19:00:03Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Intriguing Properties of Vision Transformers [114.28522466830374]
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems.
We systematically study this question via an extensive set of experiments and comparisons with a high-performing convolutional neural network (CNN).
We show that the effective features of ViTs are due to the flexible and dynamic receptive fields made possible by the self-attention mechanism.
arXiv Detail & Related papers (2021-05-21T17:59:18Z) - A Neuromorphic Proto-Object Based Dynamic Visual Saliency Model with an FPGA Implementation [1.2387676601792899]
We present a neuromorphic, bottom-up, dynamic visual saliency model based on the notion of proto-objects.
This model outperforms state-of-the-art dynamic visual saliency models in predicting human eye fixations on a commonly used video dataset.
We introduce a Field-Programmable Gate Array implementation of the model on an Opal Kelly 7350 Kintex-7 board.
arXiv Detail & Related papers (2020-02-27T03:31:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.