A New Multifractal-based Deep Learning Model for Text Mining
- URL: http://arxiv.org/abs/2111.13861v2
- Date: Fri, 1 Sep 2023 00:05:04 GMT
- Title: A New Multifractal-based Deep Learning Model for Text Mining
- Authors: Zhenhua Wang, Ming Ren, Dong Gao
- Abstract summary: This study builds upon the foundation of perceiving text as a complex system, armed with the proposed multifractal method that deciphers the multifractal attributes embedded within the text landscape.
This endeavor culminates in the birth of our novel model, which also harnesses the power of the proposed activation function to facilitate nonlinear information transmission within its neural network architecture.
- Score: 5.316374570374179
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this world full of uncertainty, where the fabric of existence weaves
patterns of complexity, multifractal analysis emerges as a beacon of insight,
illuminating them. As we delve into the realm of text mining that underpins
various natural language processing applications and powers a range of
intelligent services, we recognize that behind the veil of text lies a
manifestation of human thought and cognition, intricately intertwined with these
complexities. Building upon the foundation of perceiving text as a complex
system, this study embarks on a journey to unravel the hidden treasures within,
armed with the proposed multifractal method that deciphers the multifractal
attributes embedded within the text landscape. This endeavor culminates in the
birth of our novel model, which also harnesses the power of the proposed
activation function to facilitate nonlinear information transmission within its
neural network architecture. The success of experiments anchored in real-world
technical reports, covering the extraction of technical terms and the classification
of hazard events, stands as a testament to our endeavors. This research venture
not only expands our understanding of text mining but also opens new horizons
for knowledge discovery across various domains.
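To make the multifractal perspective concrete, the sketch below applies multifractal detrended fluctuation analysis (MF-DFA) to a simple word-length series derived from a text. This is an illustrative assumption, not the authors' published method: the paper's own multifractal procedure and its proposed activation function are not reproduced here, and the function names, choice of series, scales, and q values are placeholders.

```python
# Minimal MF-DFA sketch on a text-derived series (hypothetical illustration,
# not the paper's implementation). Requires only numpy.
import numpy as np

def text_to_series(text):
    """Map a text to a numeric series; word lengths are one simple choice."""
    return np.array([len(w) for w in text.split()], dtype=float)

def mfdfa(series, scales, qs, order=1):
    """Estimate generalized Hurst exponents h(q) via MF-DFA."""
    profile = np.cumsum(series - series.mean())
    n = len(profile)
    hq = []
    for q in qs:
        log_fq = []
        for s in scales:
            n_seg = n // s
            variances = []
            for v in range(n_seg):
                seg = profile[v * s:(v + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)        # local detrending
                resid = seg - np.polyval(coeffs, t)
                variances.append(np.mean(resid ** 2))
            variances = np.array(variances)
            if q == 0:
                fq = np.exp(0.5 * np.mean(np.log(variances)))
            else:
                fq = np.mean(variances ** (q / 2.0)) ** (1.0 / q)
            log_fq.append(np.log(fq))
        # h(q) is the slope of log F_q(s) against log s
        hq.append(np.polyfit(np.log(scales), log_fq, 1)[0])
    return np.array(hq)

text = "multifractal structure is one lens for viewing text as a complex system " * 50
series = text_to_series(text)
scales = np.array([16, 32, 64, 128])
qs = np.array([-3, -1, 0, 1, 3], dtype=float)
h = mfdfa(series, scales, qs)
print(dict(zip(qs.tolist(), np.round(h, 3).tolist())))
# A spread of h(q) across q would indicate multifractality of the series;
# such descriptors could then be fed as features to a downstream neural model.
```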
Related papers
- ARPA: A Novel Hybrid Model for Advancing Visual Word Disambiguation Using Large Language Models and Transformers [1.6541870997607049]
We present ARPA, an architecture that fuses the unparalleled contextual understanding of large language models with the advanced feature extraction capabilities of transformers.
ARPA's introduction marks a significant milestone in visual word disambiguation, offering a compelling solution.
We invite researchers and practitioners to explore the capabilities of our model, envisioning a future where such hybrid models drive unprecedented advancements in artificial intelligence.
arXiv Detail & Related papers (2024-08-12T10:15:13Z) - Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting [89.95541601837719]
We take a first exploration to a new paradigm of open visual knowledge extraction.
OpenVik consists of an open relational region detector, which detects regions potentially containing relational knowledge, and a visual knowledge generator, which generates format-free knowledge by prompting the large multimodality model with the detected region of interest.
arXiv Detail & Related papers (2023-10-28T20:09:29Z) - Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection [72.36017150922504]
We propose a multi-modal contextual knowledge distillation framework, MMC-Det, to transfer the learned contextual knowledge from a teacher fusion transformer to a student detector.
The diverse multi-modal masked language modeling is realized by an object divergence constraint upon traditional multi-modal masked language modeling (MLM).
arXiv Detail & Related papers (2023-08-30T08:33:13Z) - A Brief Yet In-Depth Survey of Deep Learning-Based Image Watermarking [1.249418440326334]
This paper presents a comprehensive survey on deep learning-based image watermarking.
It focuses on the invisible embedding and extraction of watermarks within a cover image, aiming to offer a seamless blend of robustness and adaptability.
We introduce a refined categorization, segmenting the field into Embedder-Extractor, Deep Networks as a Feature Transformation, and Hybrid Methods.
arXiv Detail & Related papers (2023-08-08T22:06:14Z) - Combo of Thinking and Observing for Outside-Knowledge VQA [13.838435454270014]
Outside-knowledge visual question answering is a challenging task that requires both the acquisition and the use of open-ended real-world knowledge.
In this paper, we are inspired to constrain the cross-modality space to the natural-language space.
We propose a novel framework consisting of a multimodal encoder, a textual encoder and an answer decoder.
arXiv Detail & Related papers (2023-05-10T18:32:32Z) - A Survey of Text Representation Methods and Their Genealogy [0.0]
In recent years, with the advent of highly scalable artificial-neural-network-based text representation methods, the field of natural language processing has seen unprecedented growth and sophistication.
We provide a survey of current approaches by arranging them in a genealogy and by conceptualizing a taxonomy of text representation methods to examine and explain the state of the art.
arXiv Detail & Related papers (2022-11-26T15:22:01Z) - Vision+X: A Survey on Multimodal Learning in the Light of Data [64.03266872103835]
Multimodal machine learning that incorporates data from various sources has become an increasingly popular research area.
We analyze the commonness and uniqueness of each data format, mainly covering vision, audio, text, and motion.
We investigate the existing literature on multimodal learning from both the representation learning and downstream application levels.
arXiv Detail & Related papers (2022-10-05T13:14:57Z) - Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z) - Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, relaying task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, along with the key outstanding challenges in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z) - Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)