Can one hear the position of nodes?
- URL: http://arxiv.org/abs/2211.06325v1
- Date: Thu, 10 Nov 2022 16:00:53 GMT
- Title: Can one hear the position of nodes?
- Authors: Rami Puzis
- Abstract summary: The sound emitted by vibrations of individual nodes reflects the structure of the overall network topology.
A sound recognition neural network is trained to infer centrality measures from the nodes' wave-forms.
Auralization of the network topology may open new directions in arts, competing with network visualization.
- Score: 5.634825161148484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wave propagation through nodes and links of a network forms the basis of
spectral graph theory. Nevertheless, the sound emitted by nodes within the
resonating chamber formed by a network is not well studied. The sound emitted
by vibrations of individual nodes reflects the structure of the overall network
topology but also the location of the node within the network. In this article,
a sound recognition neural network is trained to infer centrality measures from
the nodes' wave-forms. In addition to advancing network representation
learning, sounds emitted by nodes are plausible in most cases. Auralization of
the network topology may open new directions in arts, competing with network
visualization.
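The abstract's premise, that a node's waveform encodes both the network topology and the node's position in it, can be illustrated with a minimal sketch (not the paper's actual method): simulate the discrete wave equation x'' = -Lx driven by an impulse at one node, where L is the combinatorial graph Laplacian, and record that node's displacement over time as its "sound". The function name, graph, and integration parameters below are illustrative assumptions.

```python
import numpy as np

def node_waveform(adj, node, steps=512, dt=0.05):
    """Simulate the discrete wave equation x'' = -L x on a graph and
    record the displacement at `node` after a unit impulse there.

    adj: symmetric adjacency matrix (numpy array).
    Returns the node's time-domain waveform (length `steps`).
    """
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj   # combinatorial graph Laplacian
    x = np.zeros(n)                      # displacements
    v = np.zeros(n)                      # velocities
    x[node] = 1.0                        # pluck the chosen node
    wave = np.empty(steps)
    for t in range(steps):
        v -= dt * (L @ x)                # symplectic Euler step: update velocity...
        x += dt * v                      # ...then position (keeps energy bounded)
        wave[t] = x[node]
    return wave

# Toy example: a path graph 0-1-2-3. An end node and an interior node
# produce different waveforms purely because of their positions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w_end = node_waveform(A, 0)
w_mid = node_waveform(A, 1)
```

In this toy setting, the distinct waveforms of structurally different nodes are exactly what would let a downstream audio-recognition model infer positional properties such as centrality.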
Related papers
- Towards Inductive Robustness: Distilling and Fostering Wave-induced
Resonance in Transductive GCNs Against Graph Adversarial Attacks [56.56052273318443]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks, where slight perturbations in the graph structure can lead to erroneous predictions.
Here, we discover that transductive GCNs inherently possess a distillable robustness, achieved through a wave-induced resonance process.
We present Graph Resonance-fostering Network (GRN) to foster this resonance via learning node representations.
arXiv Detail & Related papers (2023-12-14T04:25:50Z) - Image segmentation with traveling waves in an exactly solvable recurrent
neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z) - Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z) - Investigation of Densely Connected Convolutional Networks with Domain
Adversarial Learning for Noise Robust Speech Recognition [41.88097793717185]
We investigate densely connected convolutional networks (DenseNets) and their extension with domain adversarial training for noise robust speech recognition.
DenseNets are very deep, compact convolutional neural networks that have demonstrated substantial improvements over state-of-the-art results in computer vision.
arXiv Detail & Related papers (2021-12-19T10:29:17Z) - Predicting Hidden Links and Missing Nodes in Scale-Free Networks with
Artificial Neural Networks [1.0152838128195467]
We propose a methodology, in the form of an algorithm, to predict hidden links and missing nodes in scale-free networks.
We use Béla Bollobás's directed scale-free random graph generation algorithm to generate a large set of scale-free network data.
arXiv Detail & Related papers (2021-09-25T10:23:28Z) - Full network nonlocality [68.8204255655161]
We introduce the concept of full network nonlocality, which describes correlations that necessitate all links in a network to distribute nonlocal resources.
We show that the most well-known network Bell test does not witness full network nonlocality.
More generally, we point out that established methods for analysing local and theory-independent correlations in networks can be combined in order to deduce sufficient conditions for full network nonlocality.
arXiv Detail & Related papers (2021-05-19T18:00:02Z) - Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
arXiv Detail & Related papers (2021-03-05T08:45:43Z) - Geometric Scattering Attention Networks [14.558882688159297]
We introduce a new attention-based architecture to produce adaptive task-driven node representations.
We show the resulting geometric scattering attention network (GSAN) outperforms previous networks in semi-supervised node classification.
arXiv Detail & Related papers (2020-10-28T14:36:40Z) - ResiliNet: Failure-Resilient Inference in Distributed Neural Networks [56.255913459850674]
We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures.
Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks.
arXiv Detail & Related papers (2020-02-18T05:58:24Z) - Adversarial Deep Network Embedding for Cross-network Node Classification [27.777464531860325]
Cross-network node classification leverages the abundant labeled nodes from a source network to help classify unlabeled nodes in a target network.
In this paper, we propose an adversarial cross-network deep network embedding model to integrate adversarial domain adaptation with deep network embedding.
The proposed ACDNE model achieves the state-of-the-art performance in cross-network node classification.
arXiv Detail & Related papers (2020-02-18T04:30:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.