Data Science In Olfaction
- URL: http://arxiv.org/abs/2404.05501v1
- Date: Mon, 8 Apr 2024 13:25:02 GMT
- Title: Data Science In Olfaction
- Authors: Vivek Agarwal, Joshua Harvey, Dmitry Rinberg, Vasant Dhar
- Abstract summary: We conceptualize smell from a Data Science and AI perspective that relates the properties of odorants to how they are sensed and analyzed in the olfactory system, from the nose to the brain.
Drawing distinctions to color vision, we argue that smell presents unique measurement challenges, including the complexity of stimuli, the high dimensionality of the sensory apparatus, as well as what constitutes ground truth.
We present results using machine learning-based classification of neural responses to odors as they are recorded in the mouse olfactory bulb with calcium imaging.
- Score: 1.4499463058550683
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Advances in neural sensing technology are making it possible to observe the olfactory process in great detail. In this paper, we conceptualize smell from a Data Science and AI perspective that relates the properties of odorants to how they are sensed and analyzed in the olfactory system, from the nose to the brain. Drawing distinctions to color vision, we argue that smell presents unique measurement challenges, including the complexity of stimuli, the high dimensionality of the sensory apparatus, as well as what constitutes ground truth. In the face of these challenges, we argue for the centrality of odorant-receptor interactions in developing a theory of olfaction. Such a theory is likely to find widespread industrial applications and enhance our understanding of smell and, in the longer term, of how it relates to other senses and language. As an initial use case of the data, we present results using machine learning-based classification of neural responses to odors, as recorded in the mouse olfactory bulb with calcium imaging.
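As a rough, hypothetical illustration of the use case mentioned above (machine learning-based classification of odor-evoked responses recorded with calcium imaging), the sketch below decodes odor identity from a trials-by-ROIs response matrix with a cross-validated linear classifier. The data shapes, the random placeholder data, and the choice of classifier are assumptions for illustration only and do not reproduce the authors' pipeline.

```python
# Hypothetical sketch: decoding odor identity from olfactory-bulb calcium
# imaging responses. Shapes, data, and the classifier are placeholder
# assumptions, not the paper's actual analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_rois, n_odors = 200, 150, 8      # trials, imaged ROIs (e.g., glomeruli), odor classes
X = rng.normal(size=(n_trials, n_rois))      # trial-averaged dF/F response per ROI (random placeholder)
y = rng.integers(0, n_odors, size=n_trials)  # odor label per trial (random placeholder)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated decoding accuracy
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With real recordings, X would hold odor-evoked fluorescence responses, and accuracy above chance (1/n_odors) would indicate decodable odor information in the olfactory bulb.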
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- DeepNose: An Equivariant Convolutional Neural Network Predictive Of Human Olfactory Percepts [0.9187505256430946]
We train a convolutional neural network (CNN) to predict human percepts from semantic datasets.
Our network offers high-fidelity perceptual predictions for different olfactory datasets.
We propose that the DeepNose network can use 3D molecular shapes to generate high-quality predictions for human olfactory percepts.
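As a hedged sketch of the general idea only (predicting perceptual descriptors from a 3D molecular representation with a convolutional network), the toy model below maps a voxelized molecular shape to a vector of descriptor ratings. The grid size, descriptor count, and layer choices are assumptions and do not reproduce the DeepNose architecture or its equivariance properties.

```python
# Toy 3D CNN mapping a voxelized molecular shape to perceptual-descriptor
# ratings. All sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

grid = 32              # voxel grid resolution (assumed)
n_descriptors = 21     # number of perceptual descriptors (assumed)

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Flatten(),
    nn.Linear(16 * (grid // 4) ** 3, n_descriptors),  # one rating per descriptor
)

x = torch.randn(4, 1, grid, grid, grid)  # batch of voxelized molecules (random placeholder)
ratings = model(x)                       # shape: (4, n_descriptors)
print(ratings.shape)
```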
arXiv Detail & Related papers (2024-12-11T19:35:24Z)
- Sniff AI: Is My 'Spicy' Your 'Spicy'? Exploring LLM's Perceptual Alignment with Human Smell Experiences [12.203995379495916]
This work focuses on olfaction, human smell experiences.
We conducted a user study with 40 participants to investigate how well AI can interpret human descriptions of scents.
Results indicated limited perceptual alignment, with biases towards certain scents, such as lemon and peppermint, and consistent failures to identify others, such as rosemary.
arXiv Detail & Related papers (2024-11-11T12:56:52Z)
- Can Transformers Smell Like Humans? [17.14976015153551]
We show that representations extracted from transformers pre-trained on general chemical structures are highly aligned with human olfactory perception.
We also evaluate the extent to which this alignment is associated with physicochemical features of odorants known to be relevant for olfactory decoding.
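One common way to quantify such alignment (a hedged illustration, not necessarily this paper's procedure) is a representational-similarity comparison: correlate pairwise distances between model embeddings of odorants with human perceptual dissimilarities. The embeddings, similarity matrix, and metric below are random placeholders and assumptions.

```python
# Illustrative alignment measure: Spearman correlation between model-embedding
# distances and human perceptual dissimilarities. Data are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_odorants, d_model = 50, 256

emb = rng.normal(size=(n_odorants, d_model))       # transformer embeddings (placeholder)
human_sim = rng.uniform(size=(n_odorants, n_odorants))
human_sim = (human_sim + human_sim.T) / 2          # symmetric perceptual similarity matrix (placeholder)

model_dist = pdist(emb, metric="cosine")           # condensed pairwise embedding distances
iu = np.triu_indices(n_odorants, k=1)              # matching upper-triangle order
human_dist = 1.0 - human_sim[iu]                   # similarity -> dissimilarity

rho, p = spearmanr(model_dist, human_dist)         # alignment score
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```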
arXiv Detail & Related papers (2024-11-05T12:19:39Z)
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
One conception holds that vision inverts a generative model through an interrogation of the evidence, in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
- Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
arXiv Detail & Related papers (2024-01-01T13:05:29Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.