A new nature inspired modularity function adapted for unsupervised
learning involving spatially embedded networks: A comparative analysis
- URL: http://arxiv.org/abs/2007.09330v1
- Date: Sat, 18 Jul 2020 04:32:14 GMT
- Title: A new nature inspired modularity function adapted for unsupervised
learning involving spatially embedded networks: A comparative analysis
- Authors: Raj Kishore, Zohar Nussinov, Kisor Kumar Sahu
- Abstract summary: Unsupervised machine learning methods can be of great help in many traditional engineering disciplines.
We have compared the performance of our newly developed modularity function with some of the well-known modularity functions.
We show that for the class of networks considered in this article, our method produces much better results than the competing methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised machine learning methods can be of great help in many
traditional engineering disciplines, where huge amounts of labeled data are not
readily available or are extremely difficult or costly to generate. Two specific
examples are the structure of granular materials and the atomic structure of
metallic glasses. While the former is critically important for global industries
worth several hundred billion dollars, the latter is still a big puzzle in
fundamental science. Common to both examples is that the particles are the
elements of ensembles embedded in Euclidean space, so one can create a spatially
embedded network to represent their key features. Some recent studies show that
clustering, which generically refers to unsupervised learning, holds great
promise in partitioning these networks. In many complex networks, the spatial
information of nodes plays a very important role in determining the network
properties, so understanding the structure of such networks is crucial. We have
compared the performance of our newly developed modularity function with some of
the well-known modularity functions. We performed this comparison by finding the
best partition in 2D and 3D granular assemblies. We show that for the class of
networks considered in this article, our method produces much better results
than the competing methods.
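As a rough illustration of the workflow the abstract describes, the sketch below builds a toy spatially embedded contact network (a 2D random geometric graph standing in for a granular assembly) and scores a candidate partition with the classical Newman-Girvan modularity via networkx. This is a minimal sketch under stated assumptions: the paper's own nature-inspired modularity function is not reproduced here, and the graph parameters are purely illustrative.

```python
# Minimal sketch: partition a toy spatially embedded network and score it
# with the classical Newman-Girvan modularity (a stand-in for the paper's
# nature-inspired modularity function, which is not reproduced here).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy 2D "granular" contact network: nodes are random points in the unit
# square; edges connect pairs of points closer than a cutoff radius.
G = nx.random_geometric_graph(200, 0.12, dim=2, seed=42)  # illustrative sizes

# Find a candidate partition by greedily maximizing Newman-Girvan modularity.
communities = greedy_modularity_communities(G)

# Score the partition. Comparing modularity functions, as the paper does,
# amounts to swapping this scoring function while keeping the search fixed.
q = modularity(G, communities)
print(f"{len(communities)} communities, Newman-Girvan modularity = {q:.3f}")
```

Under this setup, comparing modularity functions reduces to evaluating the same candidate partitions under each definition and checking which one recovers the physically meaningful grain structure.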
Related papers
- Modular Blended Attention Network for Video Question Answering [1.131316248570352]
We present an approach to facilitate question answering with a reusable and composable neural unit.
We have conducted experiments on three commonly used datasets.
arXiv Detail & Related papers (2023-11-02T14:22:17Z)
- The effect of network topologies on fully decentralized learning: a preliminary investigation [2.9592782993171918]
In a decentralized machine learning system, data is partitioned among multiple devices or nodes, each of which trains a local model using its own data.
We investigate how different types of topologies impact the "spreading of knowledge".
Specifically, we highlight the different roles that more and less connected nodes (hubs and leaves) play in this process.
arXiv Detail & Related papers (2023-07-29T09:39:17Z)
- General-Purpose Multimodal Transformer meets Remote Sensing Semantic Segmentation [35.100738362291416]
Multimodal AI seeks to exploit complementary data sources, particularly for complex tasks like semantic segmentation.
Recent trends in general-purpose multimodal networks have shown great potential to achieve state-of-the-art performance.
We propose a UNet-inspired module that employs 3D convolution to encode vital local information and learn cross-modal features simultaneously.
arXiv Detail & Related papers (2023-07-07T04:58:34Z)
- How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding [56.222097640468306]
We provide a mechanistic understanding of how transformers learn "semantic structure".
We show, through a combination of mathematical analysis and experiments on Wikipedia data, that the embedding layer and the self-attention layer encode the topical structure.
arXiv Detail & Related papers (2023-03-07T21:42:17Z)
- Clustering units in neural networks: upstream vs downstream information [3.222802562733787]
We study modularity of hidden layer representations of feedforward, fully connected networks.
We find two surprising results: first, dropout dramatically increased modularity, while other forms of weight regularization had more modest effects.
This has important implications for representation-learning, as it suggests that finding modular representations that reflect structure in inputs may be a distinct goal from learning modular representations that reflect structure in outputs.
arXiv Detail & Related papers (2022-03-22T15:35:10Z)
- Efficient Transfer Learning via Joint Adaptation of Network Architecture and Weight [66.8543732597723]
Recent works in neural architecture search (NAS) can aid transfer learning by establishing a sufficient network search space.
We propose a novel framework consisting of two modules: the neural architecture search module for architecture transfer and the neural weight search module for weight transfer.
These two modules conduct their search on the target task based on a reduced super-network, so we only need to train once on the source task.
arXiv Detail & Related papers (2021-05-19T08:58:04Z)
- ATOM3D: Tasks On Molecules in Three Dimensions [91.72138447636769]
Deep neural networks have recently gained significant attention.
In this work we present ATOM3D, a collection of both novel and existing datasets spanning several key classes of biomolecules.
We develop three-dimensional molecular learning networks for each of these tasks, finding that they consistently improve performance.
arXiv Detail & Related papers (2020-12-07T20:18:23Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Most work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
Neural Function Modules (NFM) aim to introduce this structural capability into deep learning.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
- SpatialSim: Recognizing Spatial Configurations of Objects with Graph Neural Networks [31.695447265278126]
We show how a machine can learn and compare classes of geometric spatial configurations that are invariant to the point of view of an external observer.
We propose SpatialSim (Spatial Similarity), a novel geometrical reasoning benchmark, and argue that progress on this benchmark would pave the way towards a general solution.
We then study how the inductive relational biases exhibited by fully-connected message-passing Graph Neural Networks (MPGNNs) are useful for solving these tasks.
arXiv Detail & Related papers (2020-04-09T14:13:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.