State-of-the-art Advances of Deep-learning Linguistic Steganalysis Research
- URL: http://arxiv.org/abs/2409.01780v1
- Date: Tue, 3 Sep 2024 10:49:42 GMT
- Title: State-of-the-art Advances of Deep-learning Linguistic Steganalysis Research
- Authors: Yihao Wang, Ru Zhang, Yifan Tang, Jianyi Liu
- Abstract summary: This study offers a comprehensive review of existing contributions and evaluates prevailing developmental trajectories.
We first provide a formalized exposition of the general formulas for linguistic steganalysis and compare the differences between this field and the domain of text classification.
We then classify the existing work into two levels based on vector space mapping and feature extraction models, comparing the research motivations, model advantages, and other details.
- Score: 6.99735992267331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the evolution of generative linguistic steganography techniques, conventional steganalysis falls short in robustly quantifying the alterations induced by steganography, which complicates detection. Consequently, the research paradigm has pivoted towards deep-learning-based linguistic steganalysis. This study offers a comprehensive review of existing contributions and evaluates prevailing developmental trajectories. Specifically, we first provide a formalized exposition of the general formulas for linguistic steganalysis and compare the differences between this field and the domain of text classification. Subsequently, we classify the existing work into two levels based on vector space mapping and feature extraction models, comparing the research motivations, model advantages, and other details. A comparative analysis of the experiments is conducted to assess their performance. Finally, the challenges faced by this field are discussed, and several directions for future development and key issues that urgently need to be addressed are proposed.
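The general formulas referenced in the abstract can be read as a binary classification pipeline over text: a vector space mapping embeds the tokens, a feature extraction model summarizes them, and a classifier decides cover versus stego. The sketch below is a minimal illustration of that pipeline under assumed components (an embedding layer for the mapping and a BiLSTM standing in for the feature extractor); it is not any specific model surveyed in the paper.

```python
# Minimal sketch (not the surveyed authors' model): linguistic steganalysis
# cast as binary text classification -- cover text (0) vs. stego text (1).
# Assumed pipeline: vector space mapping (embedding) -> feature extraction
# (a BiLSTM chosen here for illustration) -> binary classifier.
import torch
import torch.nn as nn

class SteganalysisSketch(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # vector space mapping
        self.extractor = nn.LSTM(embed_dim, hidden_dim,
                                 batch_first=True,
                                 bidirectional=True)            # feature extraction model
        self.classifier = nn.Linear(2 * hidden_dim, 2)          # cover vs. stego logits

    def forward(self, token_ids):                # token_ids: (batch, seq_len) LongTensor
        vectors = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        features, _ = self.extractor(vectors)    # (batch, seq_len, 2 * hidden_dim)
        pooled = features.mean(dim=1)            # simple mean pooling over tokens
        return self.classifier(pooled)           # (batch, 2) class logits

# Usage: logits = SteganalysisSketch()(torch.randint(0, 30000, (4, 50)))
# Training would minimize nn.CrossEntropyLoss against cover/stego labels.
```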
Related papers
- A Review of Deep Learning Approaches for Non-Invasive Cognitive Impairment Detection [35.31259047578382]
This review paper explores recent advances in deep learning approaches for non-invasive cognitive impairment detection.
We examine various non-invasive indicators of cognitive decline, including speech and language, facial, and motoric mobility.
Despite significant progress, several challenges remain, including data standardization and accessibility, model explainability, longitudinal analysis limitations, and clinical adaptation.
arXiv Detail & Related papers (2024-10-25T17:44:59Z) - Comprehensive Study on Sentiment Analysis: From Rule-based to modern LLM based system [0.0]
This study examines the historical development of sentiment analysis, highlighting the transition from lexicon-based and pattern-based approaches to more sophisticated machine learning and deep learning models.
The paper reviews state-of-the-art approaches, identifies emerging trends, and outlines future research directions to advance the field.
arXiv Detail & Related papers (2024-09-16T04:44:52Z) - How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z) - Systematic Analysis of COVID-19 Ontologies [5.286727853896068]
The study is conducted through a dual-stage approach, commencing with a systematic review of relevant literature.
Twenty-four COVID-19 Ontologies (CovOs) are selected and examined.
The METHONTOLOGY approach emerges as a favored design methodology, often coupled with application-based or data-centric evaluation methods.
arXiv Detail & Related papers (2023-09-15T18:17:01Z) - Recent Advances in Direct Speech-to-text Translation [58.692782919570845]
We categorize the existing research work into three directions based on the main challenges -- modeling burden, data scarcity, and application issues.
For the challenge of data scarcity, recent work resorts to many sophisticated techniques, such as data augmentation, pre-training, knowledge distillation, and multilingual modeling.
We analyze and summarize the application issues, which include real-time, segmentation, named entity, gender bias, and code-switching.
arXiv Detail & Related papers (2023-06-20T16:14:27Z) - A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z) - A comprehensive comparative evaluation and analysis of Distributional Semantic Models [61.41800660636555]
We perform a comprehensive evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT.
The results show that the alleged superiority of predict-based models is more apparent than real, and surely not ubiquitous.
We borrow from cognitive neuroscience the methodology of Representational Similarity Analysis (RSA) to inspect the semantic spaces generated by distributional models.
arXiv Detail & Related papers (2021-05-20T15:18:06Z) - A Discussion on Practical Considerations with Sparse Regression Methodologies [0.0]
Two papers published in Statistical Science study the comparative performance of several sparse regression methodologies.
We summarize and compare the two studies and aim to provide clarity and value to users.
arXiv Detail & Related papers (2020-11-18T15:58:35Z) - A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z) - How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)