Unsupervised Statistical Learning for Die Analysis in Ancient
Numismatics
- URL: http://arxiv.org/abs/2112.00290v1
- Date: Wed, 1 Dec 2021 06:02:07 GMT
- Title: Unsupervised Statistical Learning for Die Analysis in Ancient
Numismatics
- Authors: Andreas Heinecke, Emanuel Mayer, Abhinav Natarajan, Yoonju Jung
- Abstract summary: We propose a model for unsupervised computational die analysis, which can reduce the time investment necessary for large-scale die studies by several orders of magnitude.
The efficacy of our method is demonstrated through an analysis of 1135 Roman silver coins struck between 64 and 66 C.E.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Die analysis is an essential numismatic method, and an important tool of
ancient economic history. Yet, manual die studies are too labor-intensive to
comprehensively study large coinages such as those of the Roman Empire. We
address this problem by proposing a model for unsupervised computational die
analysis, which can reduce the time investment necessary for large-scale die
studies by several orders of magnitude, in many cases from years to weeks. From
a computer vision viewpoint, die studies present a challenging unsupervised
clustering problem, because they involve an unknown and large number of highly
similar semantic classes of imbalanced sizes. We address these issues by
determining dissimilarities between coin faces derived from specifically
devised Gaussian process-based keypoint features in a Bayesian distance
clustering framework. The efficacy of our method is demonstrated through an
analysis of 1135 Roman silver coins struck between 64 and 66 C.E.
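As a rough, simplified sketch of this kind of pipeline (not the authors' implementation): the code below computes pairwise dissimilarities between coin faces from keypoint descriptors and then clusters the precomputed distance matrix. ORB matching and agglomerative clustering with a distance threshold are stand-ins for the paper's Gaussian process-based keypoint features and Bayesian distance clustering, and the image folder and threshold are hypothetical.

```python
# Minimal sketch of a die-study-style pipeline: pairwise coin-face
# dissimilarities from keypoint descriptors, followed by distance-based
# clustering. ORB + match-fraction and agglomerative clustering are
# simplified stand-ins for the paper's GP keypoint features and Bayesian
# distance clustering; paths and thresholds are illustrative only.
import glob
import cv2
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def keypoint_descriptors(path, n_features=500):
    """Detect keypoints and binary descriptors on a grayscale coin image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_features)
    _, descriptors = orb.detectAndCompute(img, None)
    return descriptors

def dissimilarity(des_a, des_b):
    """1 minus the fraction of cross-checked descriptor matches."""
    if des_a is None or des_b is None:
        return 1.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return 1.0 - len(matches) / min(len(des_a), len(des_b))

paths = sorted(glob.glob("coins/obverse/*.png"))   # hypothetical image folder
descs = [keypoint_descriptors(p) for p in paths]

n = len(paths)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = dissimilarity(descs[i], descs[j])

# Cluster directly on the precomputed dissimilarity matrix; the number of
# dies is unknown, so cut the dendrogram at a distance threshold instead
# of fixing the number of clusters (scikit-learn >= 1.2 uses `metric=`).
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.85,
    metric="precomputed", linkage="average",
)
die_labels = clustering.fit_predict(D)
print("estimated number of dies:", die_labels.max() + 1)
```

Cutting the dendrogram at a threshold rather than fixing the number of clusters reflects the fact that the number of dies is unknown in advance, which is one of the difficulties the abstract highlights.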
Related papers
- A Survey of Deep Long-Tail Classification Advancements [1.6233132273470656]
Many data distributions in the real world are hardly uniform. Instead, skewed and long-tailed distributions of various kinds are commonly observed.
This poses an interesting problem for machine learning, where most algorithms assume or work well with uniformly distributed data.
The problem is further exacerbated by current state-of-the-art deep learning models requiring large volumes of training data.
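As a toy illustration of such a long-tailed label distribution (not code from the survey), the snippet below samples class labels with Zipf-like frequencies; the class count and exponent are arbitrary choices for the sketch.

```python
# A small illustration of the long-tail setting: class frequencies that decay
# roughly like a power law, so a few "head" classes dominate while most
# "tail" classes have only a handful of examples.
import numpy as np

n_classes, n_samples = 1000, 100_000
rng = np.random.default_rng(0)
weights = 1.0 / np.arange(1, n_classes + 1) ** 1.2   # Zipf-like decay
labels = rng.choice(n_classes, size=n_samples, p=weights / weights.sum())

counts = np.bincount(labels, minlength=n_classes)
print("largest class size:", counts.max())
print("classes with fewer than 10 examples:", (counts < 10).sum())
```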
arXiv Detail & Related papers (2024-04-24T01:59:02Z)
- Scalable Learning of Item Response Theory Models [48.91265296134559]
Item Response Theory (IRT) models aim to assess latent abilities of $n$ examinees along with latent difficulty characteristics of $m$ test items from categorical data.
We leverage the similarity of these models to logistic regression, which can be approximated accurately using small weighted subsets called coresets.
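As a rough illustration of the IRT-to-logistic-regression connection mentioned above, the sketch below fits a Rasch (1PL) model by encoding each response with a +1 examinee indicator and a -1 item indicator; the data are synthetic, and the coreset construction itself is not shown.

```python
# Minimal sketch of the IRT/logistic-regression connection: the Rasch (1PL)
# model P(correct) = sigmoid(theta_i - b_j) is a logistic regression over
# one-hot examinee/item indicators. Synthetic data; no coresets here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_examinees, n_items = 200, 30
theta = rng.normal(0, 1, n_examinees)          # latent abilities
b = rng.normal(0, 1, n_items)                  # latent item difficulties

# Simulate one binary response per (examinee, item) pair from the Rasch model.
E, I = np.meshgrid(np.arange(n_examinees), np.arange(n_items), indexing="ij")
E, I = E.ravel(), I.ravel()
p = 1.0 / (1.0 + np.exp(-(theta[E] - b[I])))
y = rng.binomial(1, p)

# Design matrix: +1 indicator for the examinee, -1 indicator for the item,
# so the logistic coefficients recover (theta, b) up to a common shift.
X = np.zeros((len(y), n_examinees + n_items))
X[np.arange(len(y)), E] = 1.0
X[np.arange(len(y)), n_examinees + I] = -1.0

model = LogisticRegression(fit_intercept=False, C=10.0, max_iter=1000).fit(X, y)
theta_hat = model.coef_[0][:n_examinees]
print("ability correlation:", np.corrcoef(theta, theta_hat)[0, 1].round(2))
```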
arXiv Detail & Related papers (2024-03-01T17:12:53Z)
- Statistical Inference with Limited Memory: A Survey [22.41443027099101]
We review the state-of-the-art of statistical inference under memory constraints in several canonical problems.
We discuss the main results in this developing field, and by identifying recurrent themes, we extract some fundamental building blocks for algorithmic construction.
arXiv Detail & Related papers (2023-12-23T11:14:33Z)
- Sparse Representations, Inference and Learning [0.0]
We will present a general framework that can be used in a large variety of problems with weak long-range interactions.
We shall see how these problems can be studied at the replica symmetric level, using developments of the cavity methods.
arXiv Detail & Related papers (2023-06-28T10:58:27Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Provable Reinforcement Learning with a Short-Term Memory [68.00677878812908]
We study a new subclass of POMDPs, whose latent states can be decoded by the most recent history of a short length $m$.
In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach with a sample complexity that scales exponentially in the short length $m$ rather than in the horizon.
Our results show that a short-term memory suffices for reinforcement learning in these environments.
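A minimal sketch of the short-memory idea, under the simplifying assumption that the last $m$ observations and actions determine the latent state: the wrapper below exposes that sliding window as the agent's state, so ordinary MDP methods can be run on it. The environment interface (reset/step) is a hypothetical stand-in.

```python
# If the latent state is a function of the most recent m observations (and
# actions), an agent can treat the sliding window itself as its state.
from collections import deque

class ShortMemoryWrapper:
    """Expose the last m (observation, action) pairs as the agent's state."""

    def __init__(self, env, m):
        self.env = env
        self.window = deque(maxlen=m)

    def reset(self):
        obs = self.env.reset()
        self.window.clear()
        self.window.append((obs, None))        # no action before the first obs
        return tuple(self.window)

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.window.append((obs, action))
        return tuple(self.window), reward, done

# Tabular methods can then index values by the window, e.g. Q[state][action],
# at a cost that grows with the number of length-m histories.
```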
arXiv Detail & Related papers (2022-02-08T16:39:57Z)
- Mean-field methods and algorithmic perspectives for high-dimensional machine learning [5.406386303264086]
We revisit an approach based on the tools of statistical physics of disordered systems.
We capitalize on the deep connection between the replica method and message passing algorithms in order to shed light on the phase diagrams of various theoretical models.
arXiv Detail & Related papers (2021-03-10T09:02:36Z)
- Sample-Efficient Reinforcement Learning of Undercomplete POMDPs [91.40308354344505]
This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of Partially Observable Markov Decision Processes (POMDPs).
We present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.
arXiv Detail & Related papers (2020-06-22T17:58:54Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.