Integrating Visual and X-Ray Machine Learning Features in the Study of Paintings by Goya
- URL: http://arxiv.org/abs/2511.01000v1
- Date: Sun, 02 Nov 2025 16:23:37 GMT
- Title: Integrating Visual and X-Ray Machine Learning Features in the Study of Paintings by Goya
- Authors: Hassan Ugail, Ismail Lujain Jaleel
- Abstract summary: We introduce a novel machine learning framework that applies identical feature extraction techniques to both visual and X-ray images of Goya paintings. The framework achieves 97.8% classification accuracy with a 0.022 false positive rate. Our results establish the effectiveness of applying identical computational methods to both visual and radiographic imagery in art authentication applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Art authentication of Francisco Goya's works presents complex computational challenges due to his heterogeneous stylistic evolution and extensive historical patterns of forgery. We introduce a novel multimodal machine learning framework that applies identical feature extraction techniques to both visual and X-ray radiographic images of Goya paintings. The unified feature extraction pipeline incorporates Grey-Level Co-occurrence Matrix descriptors, Local Binary Patterns, entropy measures, energy calculations, and colour distribution analysis applied consistently across both imaging modalities. The extracted features from both visual and X-ray images are processed through an optimised One-Class Support Vector Machine with hyperparameter tuning. Using a dataset of 24 authenticated Goya paintings with corresponding X-ray images, split into an 80/20 train-test configuration with 10-fold cross-validation, the framework achieves 97.8% classification accuracy with a 0.022 false positive rate. Case study analysis of "Un Gigante" demonstrates the practical efficacy of our pipeline, achieving 92.3% authentication confidence through unified multimodal feature analysis. Our results indicate substantial performance improvement over single-modal approaches, establishing the effectiveness of applying identical computational methods to both visual and radiographic imagery in art authentication applications.
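The texture features named in the abstract (GLCM descriptors, energy, entropy) are standard and can be sketched compactly. The following is a minimal, self-contained illustration on a toy 4-level image, not the authors' code: names, offsets, and the tiny test image are all illustrative, and a real pipeline would use full-resolution greyscale and X-ray scans (e.g. via scikit-image) and feed the resulting vectors to a One-Class SVM.

```python
import math

# Toy 4x4 greyscale image quantised to 4 grey levels, standing in
# for a painting or its X-ray scan. Purely illustrative data.
IMG = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
LEVELS = 4

def glcm(img, dx=1, dy=0, levels=LEVELS):
    """Normalised Grey-Level Co-occurrence Matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_contrast(m):
    """Sum of p(i,j) * (i-j)^2 over all grey-level pairs."""
    n = len(m)
    return sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def glcm_energy(m):
    """Sum of squared co-occurrence probabilities (angular second moment)."""
    return sum(v * v for row in m for v in row)

def shannon_entropy(img, levels=LEVELS):
    """Shannon entropy of the grey-level histogram, in bits."""
    n = sum(len(row) for row in img)
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

m = glcm(IMG)
features = [glcm_contrast(m), glcm_energy(m), shannon_entropy(IMG)]
print(features)  # ≈ [0.583, 0.167, 1.924]
```

In practice one such feature vector per image (per modality) would be assembled, and an anomaly detector such as scikit-learn's `OneClassSVM` fitted on the authenticated set, which matches the one-class formulation the abstract describes.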
Related papers
- Novel computational workflows for natural and biomedical image processing based on hypercomplex algebras [49.81327385913137]
Hypercomplex image processing extends conventional techniques in a unified paradigm encompassing algebraic and geometric principles. This work leverages quaternions and the two-dimensional planes split framework (splitting of a quaternion, representing a pixel, into pairs of 2D planes) for natural/biomedical image analysis. The proposed framework can regulate color appearance (e.g. with alternative renditions and grayscale conversion) and image contrast, and can be part of automated image processing pipelines.
arXiv Detail & Related papers (2025-02-11T18:38:02Z) - SAGI: Semantically Aligned and Uncertainty Guided AI Image Inpainting [11.216906046169683]
SAGI-D is the largest and most diverse dataset of AI-generated inpaintings. Our experiments show that semantic alignment significantly improves image quality and aesthetics. Using SAGI-D for training several image forensic approaches increases in-domain detection performance on average by 37.4%.
arXiv Detail & Related papers (2025-02-10T15:56:28Z) - BGM: Background Mixup for X-ray Prohibited Items Detection [75.58709178012502]
Background Mixup (BGM) is a background-based augmentation technique tailored for the X-ray security imaging domain. Unlike conventional methods, BGM is founded on an in-depth analysis of physical properties. BGM mixes background patches across regions on both 1) texture structure and 2) material variation, so that models benefit from complicated background cues.
arXiv Detail & Related papers (2024-11-30T12:26:55Z) - PAD-F: Prior-Aware Debiasing Framework for Long-Tailed X-ray Prohibited Item Detection [56.25222232778367]
The distribution of object classes in real-world prohibited item detection scenarios often exhibits a distinct long-tailed distribution. We introduce the Prior-Aware Debiasing Framework (PAD-F), a novel approach that employs a two-pronged strategy. PAD-F significantly boosts the performance of multiple popular detectors.
arXiv Detail & Related papers (2024-11-27T06:13:56Z) - Image-GS: Content-Adaptive Image Representation via 2D Gaussians [52.598772767324036]
We introduce Image-GS, a content-adaptive image representation based on 2D Gaussians. It supports hardware-friendly rapid access for real-time usage, requiring only 0.3K MACs to decode a pixel. We demonstrate its versatility with several applications, including texture compression, semantics-aware compression, and joint image compression and restoration.
arXiv Detail & Related papers (2024-07-02T00:45:21Z) - Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on deepfake detection generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Wavelet Leader Based Formalism to Compute Multifractal Features for Classifying Lung Nodules in X-ray Images [0.0]
The proposed method includes a pre-processing step where two enhancement techniques are applied.
As a novelty, multifractal features computed using a wavelet-leader-based formalism are combined with a Support Vector Machine for nodule classification.
Best results were obtained when using multifractal features in combination with classical texture features, with a maximum ROC AUC of 75%.
arXiv Detail & Related papers (2022-07-01T08:31:44Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - Mixed X-Ray Image Separation for Artworks with Concealed Designs [32.83098605051855]
We propose a self-supervised deep learning-based image separation approach that can be applied to the X-ray images from such paintings.
One of these reconstructed images is related to the X-ray image of the concealed painting, while the second one contains only information related to the X-ray of the visible painting.
The proposed method is demonstrated on a real painting with concealed content, Doña Isabel de Porcel by Francisco de Goya, to show its effectiveness.
arXiv Detail & Related papers (2022-01-23T03:20:35Z) - Image Completion via Inference in Deep Generative Models [16.99337751292915]
We consider image completion from the perspective of amortized inference in an image generative model.
We demonstrate superior sample quality and diversity compared to prior art on the CIFAR-10 and FFHQ-256 datasets.
arXiv Detail & Related papers (2021-02-24T02:59:43Z) - Image Separation with Side Information: A Connected Auto-Encoders Based Approach [18.18248997032482]
We deal with the problem of separating mixed X-ray images originating from the radiography of double-sided paintings.
We propose a new Neural Network architecture, based upon 'connected' auto-encoders, designed to separate the mixed X-ray image into two simulated X-ray images corresponding to each side.
These tests show that the proposed approach outperforms other state-of-the-art X-ray image separation methods for art investigation applications.
arXiv Detail & Related papers (2020-09-16T18:39:42Z) - Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.