Topological properties and organizing principles of semantic networks
- URL: http://arxiv.org/abs/2304.12940v2
- Date: Thu, 17 Aug 2023 14:51:25 GMT
- Title: Topological properties and organizing principles of semantic networks
- Authors: Gabriel Budel, Ying Jin, Piet Van Mieghem, Maksim Kitsak
- Abstract summary: We study the properties of semantic networks from ConceptNet, defined by 7 semantic relations from 11 different languages.
We find that semantic networks have universal basic properties: they are sparse, highly clustered, and many exhibit power-law degree distributions.
In some networks the connections are similarity-based, while in others the connections are more complementarity-based.
- Score: 3.8462776107938317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpreting natural language is an increasingly important task in computer
algorithms due to the growing availability of unstructured textual data.
Natural Language Processing (NLP) applications rely on semantic networks for
structured knowledge representation. The fundamental properties of semantic
networks must be taken into account when designing NLP algorithms, yet they
remain to be structurally investigated. We study the properties of semantic
networks from ConceptNet, defined by 7 semantic relations from 11 different
languages. We find that semantic networks have universal basic properties: they
are sparse, highly clustered, and many exhibit power-law degree distributions.
Our findings show that the majority of the considered networks are scale-free.
Some networks exhibit language-specific properties determined by grammatical
rules; for example, networks from highly inflected languages such as Latin,
German, French, and Spanish show peaks in the degree distribution that
deviate from a power law. We find that depending on the semantic relation type
and the language, the link formation in semantic networks is guided by
different principles. In some networks the connections are similarity-based,
while in others the connections are more complementarity-based. Finally, we
demonstrate how knowledge of similarity and complementarity in semantic
networks can improve NLP algorithms in missing link inference.
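The quantities named in the abstract (sparsity, clustering, heavy-tailed degrees) and the similarity-based link-inference principle are all easy to probe on a graph. Below is a minimal sketch, assuming networkx and a hand-made toy edge list rather than actual ConceptNet data; a serious power-law check would instead fit the full degree sequence of a real relation network, e.g. with the `powerlaw` package.

```python
# Minimal sketch, not the authors' code: probing the basic topological
# properties from the abstract on a toy "IsA"-style graph. The edge list is
# illustrative; real ConceptNet relation networks are far larger.
import networkx as nx

edges = [
    ("dog", "animal"), ("cat", "animal"), ("animal", "organism"),
    ("car", "vehicle"), ("truck", "vehicle"), ("vehicle", "machine"),
    ("dog", "pet"), ("cat", "pet"),
]
G = nx.Graph(edges)

print(f"density:            {nx.density(G):.3f}")   # sparse: density << 1
print(f"average clustering: {nx.average_clustering(G):.3f}")
degrees = sorted((d for _, d in G.degree()), reverse=True)
print(f"degree sequence:    {degrees}")             # heavy-tailed on real data

# Similarity-based link inference: two unlinked concepts sharing many
# neighbors ("dog" and "cat" both link to "animal" and "pet") are likely
# related. In complementarity-driven networks the signal is the opposite:
# linked nodes tend to have disjoint neighborhoods.
shared = list(nx.common_neighbors(G, "dog", "cat"))
print(f"common neighbors of dog/cat: {shared}")
```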
Related papers
- Less Data, More Knowledge: Building Next Generation Semantic Communication Networks [180.82142885410238]
We present the first rigorous vision of a scalable end-to-end semantic communication network.
We first discuss how the design of semantic communication networks requires a move from data-driven networks towards knowledge-driven ones.
By using semantic representations and languages, we show that the traditional transmitter and receiver become a teacher and an apprentice.
arXiv Detail & Related papers (2022-11-25T19:03:25Z)
- Imitation Learning-based Implicit Semantic-aware Communication Networks: Multi-layer Representation and Collaborative Reasoning [68.63380306259742]
Despite their promising potential, semantic communications and semantic-aware networking are still in their infancy.
We propose a novel reasoning-based implicit semantic-aware communication network architecture that allows multiple tiers of CDC and edge servers to collaborate.
We introduce a new multi-layer representation of semantic information that considers both the hierarchical structure of implicit semantics and the personalized inference preferences of individual users.
arXiv Detail & Related papers (2022-10-28T13:26:08Z)
- TeKo: Text-Rich Graph Neural Networks with External Knowledge [75.91477450060808]
We propose a novel text-rich graph neural network with external knowledge (TeKo).
We first present a flexible heterogeneous semantic network that incorporates high-quality entities.
We then introduce two types of external knowledge, namely structured triplets and unstructured entity descriptions.
arXiv Detail & Related papers (2022-06-15T02:33:10Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear; a toy illustration follows this entry.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
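A rough illustration of the quantity the entry above studies, under my own assumptions (numpy, a random ReLU network rather than the paper's architectures): track the effective rank of the hidden representations as depth grows.

```python
# Hedged sketch, not the paper's code: effective rank of hidden features
# in a random ReLU network, tracked layer by layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))        # 256 samples, 64 features

def effective_rank(M, rel_tol=1e-3):
    """Count singular values above rel_tol times the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > rel_tol * s[0]).sum())

for depth in range(1, 21):
    W = rng.standard_normal((64, 64)) / np.sqrt(64)   # keep scale stable
    X = np.maximum(X @ W, 0.0)                        # linear layer + ReLU
    if depth % 5 == 0:
        # Exact numbers depend on width, depth, and tolerance; the decaying
        # trend with depth is the phenomenon the paper formalizes.
        print(f"depth {depth:2d}: effective rank = {effective_rank(X)}")
```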
- Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (at the end user) of data transmission.
Neuro-symbolic (NeSy) AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure that generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges to reflect the magnitude of connections, the learning process can be performed in a differentiable manner, as sketched after this entry.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
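The mechanism the entry above describes, continuous learnable weights on the edges of a complete graph, fits in a few lines. A hedged sketch assuming PyTorch and an invented 4-node graph; none of the names below come from the paper.

```python
# Hedged sketch, not the paper's model: connectivity as learnable edge
# weights on a complete graph, trained end-to-end by gradient descent.
import torch

n_nodes, dim = 4, 8
edge_logits = torch.nn.Parameter(torch.zeros(n_nodes, n_nodes))  # one per edge
node_feats = torch.randn(n_nodes, dim)

weights = torch.sigmoid(edge_logits)      # edge magnitudes in (0, 1)
aggregated = weights @ node_feats         # weighted sum over incoming edges

# Any downstream loss backpropagates into edge_logits, so which connections
# matter is learned jointly with the rest of the network.
loss = aggregated.pow(2).mean()
loss.backward()
print(edge_logits.grad.shape)             # torch.Size([4, 4])
```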
- Compositional Networks Enable Systematic Generalization for Grounded Language Understanding [21.481360281719006]
Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before.
Recent work has shown that while deep networks can mimic some human language abilities when presented with novel sentences, systematic variation uncovers the limitations in the language-understanding abilities of networks.
We demonstrate that these limitations can be overcome by addressing the generalization challenges in the gSCAN dataset.
arXiv Detail & Related papers (2020-08-06T16:17:35Z)
- Generative Adversarial Phonology: Modeling unsupervised phonetic and phonological learning with neural networks [0.0]
Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations.
This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture.
We propose a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties.
arXiv Detail & Related papers (2020-06-06T20:31:23Z)
- Using word embeddings to improve the discriminability of co-occurrence text networks [0.1611401281366893]
We investigate whether using word embeddings to create virtual links in co-occurrence networks improves the quality of classification systems; a sketch of the idea follows this entry.
Our results reveal that discriminability in the stylometry task improves when using GloVe, Word2Vec, and fastText.
arXiv Detail & Related papers (2020-03-13T13:35:44Z)
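The virtual-link idea from the last entry can be sketched directly: build the usual adjacency-based co-occurrence network, then add edges between unlinked words whose embeddings are close. The 2-d vectors and the 0.95 cosine threshold below are my own toy assumptions; the paper uses GloVe, Word2Vec, and fastText vectors.

```python
# Hedged sketch, not the paper's pipeline: augment a co-occurrence network
# with virtual links between words whose (toy) embeddings are similar.
import numpy as np
import networkx as nx

tokens = ["the", "king", "ruled", "the", "land", "queen", "ruled", "too"]
emb = {  # toy 2-d embeddings; "king" and "queen" are deliberately close
    "the": np.array([0.1, 0.0]), "king": np.array([0.9, 0.8]),
    "ruled": np.array([0.2, 0.9]), "land": np.array([0.7, 0.1]),
    "queen": np.array([0.85, 0.82]), "too": np.array([0.0, 0.3]),
}

G = nx.Graph(zip(tokens, tokens[1:]))     # co-occurrence: adjacent words

def cos(u, v):
    return float(emb[u] @ emb[v] /
                 (np.linalg.norm(emb[u]) * np.linalg.norm(emb[v])))

# Virtual links: connect unlinked pairs with high embedding similarity.
for u in sorted(G):
    for v in sorted(G):
        if u < v and not G.has_edge(u, v) and cos(u, v) > 0.95:
            G.add_edge(u, v, virtual=True)

print([(u, v) for u, v, d in G.edges(data=True) if d.get("virtual")])
# -> [('king', 'queen')]
```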