Global and Local Feature Learning for Ego-Network Analysis
- URL: http://arxiv.org/abs/2002.06685v1
- Date: Sun, 16 Feb 2020 21:35:04 GMT
- Title: Global and Local Feature Learning for Ego-Network Analysis
- Authors: Fatemeh Salehi Rizi, Michael Granitzer, Konstantin Ziegler
- Abstract summary: In an ego-network, an individual (ego) organizes its friends (alters) into different groups (social circles).
Recent advances in language modeling via deep learning have inspired new methods for learning network representations.
We show that the task of social circle prediction benefits from a combination of global and local features generated by our technique.
- Score: 0.7661062091984316
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In an ego-network, an individual (ego) organizes its friends (alters) in
different groups (social circles). This social network can be efficiently
analyzed after learning representations of the ego and its alters in a
low-dimensional, real vector space. These representations are then easily
exploited via statistical models for tasks such as social circle detection and
prediction. Recent advances in language modeling via deep learning have
inspired new methods for learning network representations. These methods can
capture the global structure of networks. In this paper, we evolve these
techniques to also encode the local structure of neighborhoods. Therefore, our
local representations capture network features that are hidden in the global
representation of large networks. We show that the task of social circle
prediction benefits from a combination of global and local features generated
by our technique.
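As a deliberately simplified illustration of pairing global and local node features (not the paper's exact method), the sketch below builds both for a toy ego-network: multi-hop transition probabilities stand in for walk-based global embeddings, while degree and local clustering serve as local neighborhood features; the two are concatenated per node, as a circle-prediction model would consume them.

```python
import numpy as np

def global_features(A, walk_len=3):
    # Multi-hop transition probabilities as a stand-in for
    # walk-based global embeddings (an assumed simplification).
    P = A / A.sum(axis=1, keepdims=True)
    hops = [np.linalg.matrix_power(P, k) for k in range(1, walk_len + 1)]
    return np.concatenate(hops, axis=1)

def local_features(A):
    # Per-node neighborhood statistics: degree and local clustering.
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0          # triangles through each node
    pairs = deg * (deg - 1) / 2.0
    clust = np.where(pairs > 0, tri / pairs, 0.0)
    return np.stack([deg, clust], axis=1)

# Toy ego-network: node 0 is the ego; alters 1-4 form two circles.
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

# Combined per-node representation: global part (5x15) + local part (5x2).
X = np.concatenate([global_features(A), local_features(A)], axis=1)
```

The concatenated matrix `X` is what a downstream statistical model (e.g. a classifier over alters) would receive; all names and feature choices here are illustrative.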
Related papers
- Unsupervised Learning via Network-Aware Embeddings [0.0]
We show how to create network-aware embeddings by estimating the network distance between numeric node attributes.
Our method is fully open source and data and code are available to reproduce all results in the paper.
arXiv Detail & Related papers (2023-09-19T08:17:48Z)
- Improving neural network representations using human similarity judgments [33.62351833204206]
We study the impact of supervising a global structure by linearly aligning it with human similarity judgments.
We propose a novel method that aligns the global structure of representations while preserving their local structure.
Our results indicate that human visual representations are globally organized in a way that facilitates learning from few examples.
arXiv Detail & Related papers (2023-06-07T15:17:54Z)
- A General Framework for Interpretable Neural Learning based on Local Information-Theoretic Goal Functions [1.5236380958983644]
We introduce 'infomorphic' neural networks to perform tasks from supervised, unsupervised and memory learning.
By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
arXiv Detail & Related papers (2023-06-03T16:34:25Z)
- Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning [53.90317574898643]
We propose a simple and general framework for self-supervised point cloud representation learning.
Inspired by how human beings understand the world, we utilize knowledge distillation to learn both global shape information and the relationship between global shape and local structures.
Our method achieves the state-of-the-art performance on linear classification and multiple other downstream tasks.
arXiv Detail & Related papers (2022-02-09T02:51:59Z)
- An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning [77.72330187258498]
We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
arXiv Detail & Related papers (2021-11-03T11:13:13Z)
- Network representation learning systematic review: ancestors and current development state [1.0312968200748116]
We present a systematic survey of network representation learning, known as network embedding, from birth to the current development state.
We provide also formal definitions of basic concepts required to understand network representation learning.
Most commonly used downstream tasks to evaluate embeddings, their evaluation metrics and popular datasets are highlighted.
arXiv Detail & Related papers (2021-09-14T14:44:44Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
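A minimal numpy sketch of the underlying intuition (an untrained proxy, not the paper's GNN-based model): a node is suspicious when its attributes disagree with the aggregate of its local neighborhood, which is the kind of node-versus-context agreement a contrastive instance pair measures.

```python
import numpy as np

def anomaly_scores(A, X):
    # Score each node by disagreement between its own attributes and
    # the mean attributes of its neighbors (1 - cosine agreement).
    # Assumed simplification of node-vs-local-context contrast.
    deg = A.sum(axis=1, keepdims=True)
    context = (A @ X) / np.maximum(deg, 1)   # mean neighbor attributes
    num = np.sum(X * context, axis=1)
    den = np.linalg.norm(X, axis=1) * np.linalg.norm(context, axis=1) + 1e-9
    return 1.0 - num / den

# Toy attributed graph: node 3's attributes differ from its neighborhood.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1, 0], [1, 0], [1, 0], [0, 1]], dtype=float)
scores = anomaly_scores(A, X)
```

Node 3 receives the highest score because its attribute vector is orthogonal to its neighborhood context; the paper learns this agreement function with a graph neural network instead of using raw attributes.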
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates on the low-dimensional local parameters for every update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
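The alternating scheme in the summary can be sketched on a toy least-squares instance (dimensions, learning rates, and the linear model are assumptions for illustration; the paper's algorithm applies to general models): each round, every client takes many cheap gradient steps on its low-dimensional head with the shared representation frozen, then the representation takes one averaged step.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 50      # input dim, shared representation dim, samples

# Synthetic clients: a common ground-truth representation W_true,
# but a distinct local head per client (illustrative setup).
W_true = rng.normal(size=(d, k))
clients = []
for _ in range(4):
    X = rng.normal(size=(n, d))
    y = X @ W_true @ rng.normal(size=k)
    clients.append([X, y, np.zeros(k)])   # [data, targets, local head]

W = 0.1 * rng.normal(size=(d, k))         # shared representation

def total_loss():
    return sum(np.mean((X @ W @ h - y) ** 2) for X, y, h in clients)

before = total_loss()
for _ in range(200):                      # federated rounds
    for c in clients:                     # many local-head steps per round
        X, y, h = c
        Z = X @ W                         # representation frozen locally
        for _ in range(10):
            h = h - 0.01 * Z.T @ (Z @ h - y) / n
        c[2] = h
    grad = np.zeros_like(W)               # one shared-representation step
    for X, y, h in clients:
        r = X @ W @ h - y
        grad += np.outer(X.T @ r, h) / n
    W -= 0.001 * grad / len(clients)
after = total_loss()
```

The asymmetry (ten head steps per representation step) reflects the summary's point: local updates over the k-dimensional heads are cheap, so clients can afford many of them between rounds of communication.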
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.