Neural Representation Learning for Scribal Hands of Linear B
- URL: http://arxiv.org/abs/2108.04199v1
- Date: Wed, 14 Jul 2021 20:33:59 GMT
- Title: Neural Representation Learning for Scribal Hands of Linear B
- Authors: Nikita Srivatsan, Jason Vega, Christina Skelton, Taylor
Berg-Kirkpatrick
- Abstract summary: We present an investigation into the use of neural feature extraction in performing scribal hand analysis of the Linear B writing system.
We propose learning features using a fully unsupervised neural network that does not require any human annotation.
- Score: 23.603494290484086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present an investigation into the use of neural feature
extraction in performing scribal hand analysis of the Linear B writing system.
While prior work has demonstrated the usefulness of strategies such as
phylogenetic systematics in tracing Linear B's history, these approaches have
relied on manually extracted features which can be very time consuming to
define by hand. Instead we propose learning features using a fully unsupervised
neural network that does not require any human annotation. Specifically our
model assigns each glyph written by the same scribal hand a shared vector
embedding to represent that author's stylistic patterns, and each glyph
representing the same syllabic sign a shared vector embedding to represent the
identifying shape of that character. Thus the properties of each image in our
dataset are represented as the combination of a scribe embedding and a sign
embedding. We train this model using both a reconstructive loss governed by a
decoder that seeks to reproduce glyphs from their corresponding embeddings, and
a discriminative loss which measures the model's ability to predict whether or
not an embedding corresponds to a given image. Among the key contributions of
this work we (1) present a new dataset of Linear B glyphs, annotated by scribal
hand and sign type, (2) propose a neural model for disentangling properties of
scribal hands from glyph shape, and (3) quantitatively evaluate the learned
embeddings on findplace prediction and similarity to manually extracted
features, showing improvements over simpler baseline methods.
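The factored-embedding idea above can be illustrated with a minimal sketch (not the authors' implementation): each glyph is explained by a scribe embedding plus a sign embedding, a decoder reconstructs the glyph from the pair, and a matching score plays the role of the discriminative signal. All dimensions, the linear decoder, and the score function below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the paper does not specify dimensions here.
N_SCRIBES, N_SIGNS, EMB_DIM, IMG_DIM = 4, 6, 16, 64

# One shared embedding per scribal hand and one per syllabic sign.
scribe_emb = rng.normal(size=(N_SCRIBES, EMB_DIM))
sign_emb = rng.normal(size=(N_SIGNS, EMB_DIM))

# Toy decoder: a single linear map from the concatenated embeddings
# to a flattened glyph image (the paper uses a neural decoder).
W_dec = rng.normal(size=(2 * EMB_DIM, IMG_DIM)) * 0.1

def decode(scribe_id, sign_id):
    """Reconstruct a flattened glyph from scribe and sign embeddings."""
    z = np.concatenate([scribe_emb[scribe_id], sign_emb[sign_id]])
    return W_dec.T @ z

def reconstruction_loss(image, scribe_id, sign_id):
    """Mean squared error between a glyph and its reconstruction."""
    return float(np.mean((decode(scribe_id, sign_id) - image) ** 2))

def discriminative_score(image, scribe_id, sign_id):
    """Toy matching score: does this embedding pair explain the image?"""
    recon = decode(scribe_id, sign_id)
    return float(1.0 / (1.0 + np.mean((recon - image) ** 2)))

glyph = rng.normal(size=IMG_DIM)  # stand-in for a glyph image
loss = reconstruction_loss(glyph, scribe_id=1, sign_id=3)
score = discriminative_score(glyph, scribe_id=1, sign_id=3)
```

In the actual model both losses would be backpropagated into the embeddings, so glyphs by the same hand share stylistic structure while glyphs of the same sign share shape.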
Related papers
- Self-Supervised Representation Learning for Online Handwriting Text
Classification [0.8594140167290099]
We propose the novel Part of Stroke Masking (POSM) as a pretext task for pretraining models to extract informative representations from the online handwriting of individuals in English and Chinese languages.
To evaluate the quality of the extracted representations, we use both intrinsic and extrinsic evaluation methods.
The pretrained models are fine-tuned to achieve state-of-the-art results in tasks such as writer identification, gender classification, and handedness classification.
arXiv Detail & Related papers (2023-10-10T14:07:49Z)
- A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification [29.075766631810595]
We propose a Symbolic-Neural model that can learn to explicitly predict class labels of text spans from a constituency tree.
As the structured language model learns to predict constituency trees in a self-supervised manner, only raw texts and sentence-level labels are required as training data.
Our experiments demonstrate that our approach achieves good prediction accuracy in downstream tasks.
arXiv Detail & Related papers (2023-03-06T03:25:43Z)
- Generative Graph Neural Networks for Link Prediction [13.643916060589463]
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis.
This paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP.
Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
arXiv Detail & Related papers (2022-12-31T10:07:19Z)
- Text Representation Enrichment Utilizing Graph based Approaches: Stock Market Technical Analysis Case Study [0.0]
We propose a transductive hybrid approach composed of an unsupervised node representation learning model followed by a node classification/edge prediction model.
The proposed model is developed to classify stock market technical analysis reports, which to our knowledge is the first work in this domain.
arXiv Detail & Related papers (2022-11-29T11:26:08Z)
- Repurposing Knowledge Graph Embeddings for Triple Representation via Weak Supervision [77.34726150561087]
Current methods learn triple embeddings from scratch without utilizing entity and predicate embeddings from pre-trained models.
We develop a method for automatically sampling triples from a knowledge graph and estimating their pairwise similarities from pre-trained embedding models.
These pairwise similarity scores are then fed to a Siamese-like neural architecture to fine-tune triple representations.
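The fine-tuning step described here can be sketched as follows; the shapes, the uniform similarity targets, and the single linear encoder standing in for the Siamese-like architecture are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
N_TRIPLES, IN_DIM, OUT_DIM = 8, 32, 16

# Stand-ins for triple features and for pairwise similarity targets
# estimated from a pre-trained knowledge-graph embedding model.
triple_feats = rng.normal(size=(N_TRIPLES, IN_DIM))
target_sim = rng.uniform(0.0, 1.0, size=(N_TRIPLES, N_TRIPLES))
target_sim = (target_sim + target_sim.T) / 2  # similarity is symmetric

# A single shared linear encoder: in a Siamese setup, both triples of
# a pair are encoded with the same weights.
W = rng.normal(size=(IN_DIM, OUT_DIM)) * 0.1

def encode(x):
    """Map raw triple features to the fine-tuned representation."""
    return x @ W

def cosine(a, b):
    """Cosine similarity between two encoded triples."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def pair_loss(i, j):
    """Squared error between predicted and target pair similarity."""
    zi, zj = encode(triple_feats[i]), encode(triple_feats[j])
    return float((cosine(zi, zj) - target_sim[i, j]) ** 2)
```

Training would minimize `pair_loss` over sampled pairs so that distances in the fine-tuned space track the pre-trained model's similarity estimates.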
arXiv Detail & Related papers (2022-08-22T14:07:08Z)
- Drawing out of Distribution with Neuro-Symbolic Generative Models [49.79371715591122]
Drawing out of Distribution is a neuro-symbolic generative model of stroke-based drawing.
DooD operates directly on images and requires no supervision or expensive test-time inference.
We evaluate DooD on its ability to generalise across both data and tasks.
arXiv Detail & Related papers (2022-06-03T21:40:22Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- VisGraphNet: a complex network interpretation of convolutional neural features [6.50413414010073]
We propose and investigate the use of visibility graphs to model the feature map of a neural network.
The work is motivated by an alternative viewpoint provided by these graphs over the original data.
arXiv Detail & Related papers (2021-08-27T20:21:04Z)
- Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees learning a good representation.
We prove that a linear layer yields small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z)
- Learning to map source code to software vulnerability using code-as-a-graph [67.62847721118142]
We explore the applicability of Graph Neural Networks in learning the nuances of source code from a security perspective.
We show that a code-as-graph encoding is more meaningful for vulnerability detection than existing code-as-photo and linear sequence encoding approaches.
arXiv Detail & Related papers (2020-06-15T16:05:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.