Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained
Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence
- URL: http://arxiv.org/abs/2305.17300v1
- Date: Fri, 26 May 2023 23:04:53 GMT
- Title: Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained
Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence
- Authors: Erik C. Johnson, Brian S. Robinson, Gautam K. Vallabha, Justin Joyce,
Jordan K. Matelsky, Raphael Norman-Tenazas, Isaac Western, Marisel
Villafañe-Delgado, Martha Cervantes, Michael S. Robinette, Arun V. Reddy,
Lindsey Kitchell, Patricia K. Rivlin, Elizabeth P. Reilly, Nathan Drenkow,
Matthew J. Roos, I-Jeng Wang, Brock A. Wester, William R. Gray-Roncal, Joan
A. Hoffmann
- Abstract summary: We envision a pipeline that utilizes large neuroimaging datasets, including maps of the brain, to improve machine learning approaches.
We have developed a technique for the discovery of repeated subcircuits, or motifs.
The team also analyzed circuitry for memory formation in the fruit fly connectome, enabling the design of a novel generative replay approach.
- Score: 4.998666322418252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the progress in deep learning networks, efficient learning at the
edge (enabling adaptable, low-complexity machine learning solutions) remains a
critical need for defense and commercial applications. We envision a pipeline
to utilize large neuroimaging datasets, including maps of the brain which
capture neuron and synapse connectivity, to improve machine learning
approaches. We have pursued different approaches within this pipeline
structure. First, as a demonstration of data-driven discovery, the team has
developed a technique for discovery of repeated subcircuits, or motifs. These
were incorporated into a neural architecture search approach to evolve network
architectures. Second, we have conducted analysis of the heading direction
circuit in the fruit fly, which performs fusion of visual and angular velocity
features, to explore augmenting existing computational models with new insight.
Our team discovered a novel pattern of connectivity, implemented a new model,
and demonstrated sensor fusion on a robotic platform. Third, the team analyzed
circuitry for memory formation in the fruit fly connectome, enabling the design
of a novel generative replay approach. Finally, the team has begun analysis of
connectivity in mammalian cortex to explore potential improvements to
transformer networks. These constraints increased network robustness on the
most challenging examples in the CIFAR-10-C computer vision robustness
benchmark task, while reducing learnable attention parameters by over an order
of magnitude. Taken together, these results demonstrate multiple potential
approaches to utilize insight from neural systems for developing robust and
efficient machine learning techniques.
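The claimed order-of-magnitude reduction in learnable attention parameters can be illustrated with a minimal sketch. This is a hypothetical construction, not the authors' implementation: a fixed sparsity mask (assumed here to stand in for circuit-derived wiring) modulates a small learned low-rank factor in place of dense query/key projections.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, rank = 64, 64, 4
tokens = 10

# Standard attention head: two dense learned projections (Q and K).
dense_params = 2 * d_model * d_head

# Constrained variant: a fixed binary mask (not learned, assumed to come
# from connectivity data) modulating a learned low-rank factorization.
mask = (rng.random((d_model, d_head)) < 0.1).astype(float)  # fixed wiring
U = rng.standard_normal((d_model, rank)) * 0.1              # learned
V = rng.standard_normal((rank, d_head)) * 0.1               # learned
constrained_params = U.size + V.size

W = mask * (U @ V)  # effective projection: sparse and low-rank

# Shared projection for Q and K keeps the sketch small.
X = rng.standard_normal((tokens, d_model))
Q, K = X @ W, X @ W
scores = Q @ K.T / np.sqrt(d_head)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)  # valid attention weights

print(dense_params // constrained_params)  # prints 16
```

With these illustrative sizes the learned parameter count drops 16x; the actual reduction in the paper depends on how the cortical constraints are imposed.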
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
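The sparse-feature extraction this entry refers to can be sketched with classical sparse coding. The setup below is an assumed, generic illustration (iterative soft-thresholding over a random dictionary), not the paper's DSC model:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrink toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.1, steps=500):
    # Iterative Shrinkage-Thresholding: minimize 0.5||Dz - x||^2 + lam||z||_1.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ z - x)
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17, 40]] = [1.5, -2.0, 1.0]  # 3-sparse ground-truth code
x = D @ z_true

z = ista(x, D, lam=0.05)
print(int(np.count_nonzero(np.abs(z) > 1e-3)))  # few active atoms
```

A convolutional network unrolling iterations like these is one common way sparse coding and CNNs are connected.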
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- Automatic Discovery of Visual Circuits [66.99553804855931]
We explore scalable methods for extracting the subgraph of a vision model's computational graph that underlies recognition of a specific visual concept.
We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
arXiv Detail & Related papers (2024-04-22T17:00:57Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Creating Powerful and Interpretable Models with Regression Networks [2.2049183478692584]
We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
arXiv Detail & Related papers (2021-07-30T03:37:00Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces [0.0]
This work presents an architecture that generates curiosity-driven goal-directed exploration behaviours for an image sensor of a microfarming robot.
A combination of deep neural networks for offline unsupervised learning of low-dimensional features from images, and of online learning of shallow neural networks representing the inverse and forward kinematics of the system have been used.
The artificial curiosity system assigns interest values to a set of pre-defined goals, and drives the exploration towards those that are expected to maximise the learning progress.
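The interest-value mechanism described above can be sketched in a few lines. All names and numbers here are assumptions for illustration, not the paper's architecture: interest is estimated as recent learning progress (the drop in prediction error between an older and a newer window), and exploration is driven toward the goal with the highest estimate.

```python
import numpy as np

def learning_progress(errors, window=3):
    # Learning progress = decline in mean prediction error between the
    # oldest and the most recent window of the history (chronological order).
    e = np.asarray(errors, dtype=float)
    return e[:window].mean() - e[-window:].mean()

def select_goal(error_histories):
    # Pick the goal whose error history shows the steepest recent decline.
    return max(range(len(error_histories)),
               key=lambda i: learning_progress(error_histories[i]))

# Toy histories for three pre-defined goals:
histories = [
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],        # nothing being learned
    [1.0, 0.9, 0.8, 0.4, 0.3, 0.2],        # rapid progress
    [0.5, 0.5, 0.5, 0.45, 0.45, 0.45],     # slow progress
]
print(select_goal(histories))  # prints 1 (the fastest-improving goal)
```

Greedy argmax is used here for brevity; a real system would typically mix in stochastic selection to keep re-sampling goals whose progress has stalled.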
arXiv Detail & Related papers (2020-01-07T11:39:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.