Artificial Intelligence Algorithms for Natural Language Processing and
the Semantic Web Ontology Learning
- URL: http://arxiv.org/abs/2108.13772v1
- Date: Tue, 31 Aug 2021 11:57:41 GMT
- Authors: Bryar A. Hassan and Tarik A. Rashid
- Abstract summary: A new evolutionary clustering algorithm star (ECA*) is proposed.
Experiments were conducted to evaluate ECA* against five state-of-the-art approaches.
The results indicate that ECA* outperforms competing techniques in its ability to find the right clusters.
- Score: 0.76146285961466
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evolutionary clustering algorithms are among the most popular and
widely used evolutionary algorithms for solving optimisation and practical
problems in nearly all fields. In this thesis, a new evolutionary clustering
algorithm star (ECA*) is proposed. Additionally, a number of experiments were
conducted to evaluate ECA* against five state-of-the-art approaches. For this,
32 heterogeneous and multi-featured datasets were used to examine their
performance using internal and external clustering measures, and to measure the
sensitivity of their performance to dataset features in the form of an
operational framework. The results indicate that ECA* outperforms the competing
techniques in its ability to find the right clusters. Given this superior
performance, exploiting and adapting ECA* for ontology learning is a promising
possibility. In the process of deriving concept hierarchies from corpora,
generating the formal context can be time-consuming. Reducing the size of the
formal context therefore removes uninteresting and erroneous pairs, so that the
concept lattice and the corresponding concept hierarchies take less time to
extract. On this premise, this work proposes a framework that reduces the
ambiguity of the formal context of the existing framework using an adaptive
version of ECA*. An experiment was then conducted by applying 385 sample
corpora from Wikipedia to the two frameworks to examine the reduction in formal
context size, which in turn yields the concept lattice and concept hierarchy.
The resulting lattice of the reduced formal context was evaluated against the
original one using concept lattice invariants. Accordingly, the homomorphism
between the two lattices preserves the quality of the resulting concept
hierarchies by 89% relative to the basic ones, and the reduced concept lattice
inherits the structural relations of the original one.
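To make the formal-context terminology concrete, the sketch below enumerates all formal concepts (extent, intent pairs) of a tiny hand-made object-attribute context by brute-force closure. This is an illustrative toy only: the example context, names, and the naive enumeration are assumptions, not the thesis's ECA*-based reduction or an efficient lattice algorithm.

```python
from itertools import combinations

# A toy formal context: a set of (object, attribute) incidence pairs.
context = {
    ("dog", "mammal"), ("dog", "pet"),
    ("cat", "mammal"), ("cat", "pet"),
    ("eagle", "bird"), ("eagle", "predator"),
}

objects = sorted({o for o, _ in context})
attributes = sorted({a for _, a in context})

def intent(objs):
    """Attributes shared by every object in objs (the derivation operator ')."""
    return frozenset(a for a in attributes if all((o, a) in context for o in objs))

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return frozenset(o for o in objects if all((o, a) in context for a in attrs))

def concepts():
    """All formal concepts, found by closing every subset of objects."""
    seen = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            b = intent(objs)   # shared attributes of the subset
            a = extent(b)      # close back to the maximal extent
            seen.add((a, b))   # (a, b) is a formal concept: a' = b and b' = a
    return seen

lattice = concepts()
print(len(lattice))  # → 4 concepts for this toy context
```

Removing incidence pairs from `context` (as a reduction step would) shrinks the set of closures, and hence the lattice, which is why formal context size reduction speeds up concept hierarchy extraction. Brute force here is exponential in the number of objects; practical FCA tools use algorithms such as NextClosure instead.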
Related papers
- Unfolding ADMM for Enhanced Subspace Clustering of Hyperspectral Images [43.152314090830174]
We introduce an innovative clustering architecture for hyperspectral images (HSI) by unfolding an iterative solver based on the Alternating Direction Method of Multipliers (ADMM) for sparse subspace clustering.
Our approach captures well the structural characteristics of HSI data by employing the K nearest neighbors algorithm as part of a structure preservation module.
arXiv Detail & Related papers (2024-04-10T15:51:46Z) - Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z) - Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z) - Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient [51.37720227675476]
We introduce a new variant of the Decision-Estimation Coefficient, and use it to derive new lower bounds that improve upon prior work on three fronts.
We provide upper bounds on regret that scale with the same quantity, thereby closing all but one of the gaps between upper and lower bounds in Foster et al.
Our results apply to both the regret framework and PAC framework, and make use of several new analysis and algorithm design techniques that we anticipate will find broader use.
arXiv Detail & Related papers (2023-01-19T18:24:08Z) - Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z) - Supervised Dimensionality Reduction and Classification with Convolutional Autoencoders [1.1164202369517053]
A Convolutional Autoencoder is used to simultaneously produce supervised dimensionality reduction and predictions.
The resulting Latent Space can be utilized to improve traditional, interpretable classification algorithms.
The proposed methodology introduces advanced explainability regarding, not only the data structure through the produced latent space, but also about the classification behaviour.
arXiv Detail & Related papers (2022-08-25T15:18:33Z) - Formal context reduction in deriving concept hierarchies from corpora using adaptive evolutionary clustering algorithm star [15.154538450706474]
The process of deriving concept hierarchies from corpora is typically a time-consuming and resource-intensive process.
The resulting lattice of the formal context is evaluated against the standard one using concept lattice invariants.
The results show that adaptive ECA* derives the concept lattice faster than the other competing techniques at different fill ratios.
arXiv Detail & Related papers (2021-07-10T07:18:03Z) - Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.