Joint Semantic-Native Communication and Inference via Minimal Simplicial
Structures
- URL: http://arxiv.org/abs/2308.16789v1
- Date: Thu, 31 Aug 2023 15:04:28 GMT
- Title: Joint Semantic-Native Communication and Inference via Minimal Simplicial
Structures
- Authors: Qiyang Zhao, Hang Zou, Mehdi Bennis, Merouane Debbah, Ebtesam
Almazrouei, Faouzi Bader
- Abstract summary: Student agent queries a teacher agent to generate higher-order data semantics.
The teacher first maps its data into a k-order simplicial complex and learns its high-order correlations.
For effective communication and inference, the teacher seeks minimally sufficient and invariant semantic structures.
- Score: 28.87693546117844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the problem of semantic communication and inference,
in which a student agent (i.e. mobile device) queries a teacher agent (i.e.
cloud server) to generate higher-order data semantics living in a simplicial
complex. Specifically, the teacher first maps its data into a k-order
simplicial complex and learns its high-order correlations. For effective
communication and inference, the teacher seeks minimally sufficient and
invariant semantic structures prior to conveying information. These minimal
simplicial structures are found via judiciously removing simplices selected by
the Hodge Laplacians without compromising the inference query accuracy.
Subsequently, the student locally runs its own set of queries based on a masked
simplicial convolutional autoencoder (SCAE) leveraging both local and remote
teacher's knowledge. Numerical results corroborate the effectiveness of the
proposed approach in terms of improving inference query accuracy under
different channel conditions and simplicial structures. Experiments on a
coauthorship dataset show that removing simplices by ranking the Laplacian
values yields an 85% reduction in payload size without sacrificing accuracy.
Joint semantic communication and inference by masked SCAE improves query
accuracy by 25% compared to the local student-based query and by 15% compared to
the remote teacher-based query. Finally, incorporating channel semantics is shown
to effectively improve inference accuracy, notably at low SNR values.
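The pruning step described in the abstract can be sketched in a few lines. The following is a minimal illustration on a toy four-vertex complex, using the diagonal of the first-order Hodge Laplacian L1 = B1^T B1 + B2 B2^T as the ranking score; the toy data and the choice of diagonal entries as the score are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

# Toy complex (hypothetical, not the coauthorship data from the paper):
# 4 vertices, 5 edges, 1 triangle.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]

# Boundary matrix B1: maps edges to vertices (entries +/-1).
B1 = np.zeros((len(vertices), len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j] = -1.0
    B1[v, j] = +1.0

# Boundary matrix B2: maps triangles to edges (oriented faces).
B2 = np.zeros((len(edges), len(triangles)))
for j, (a, b, c) in enumerate(triangles):
    for sign, face in ((+1.0, (b, c)), (-1.0, (a, c)), (+1.0, (a, b))):
        B2[edges.index(face), j] = sign

# First-order Hodge Laplacian: L1 = B1^T B1 + B2 B2^T.
L1 = B1.T @ B1 + B2 @ B2.T

# Rank edges by their diagonal Laplacian value, ascending: edges that
# participate in no triangle score lowest here.
ranking = np.argsort(np.diag(L1))
print([edges[i] for i in ranking])
```

Under this scoring, the lowest-ranked simplices (here, the two edges outside the triangle) would be the first candidates for removal, subject to verifying that inference query accuracy is preserved.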
Related papers
- SPARKLE: Enhancing SPARQL Generation with Direct KG Integration in Decoding [0.46040036610482665]
We present a novel end-to-end natural language to SPARQL framework, SPARKLE.
SPARKLE leverages the structure of the knowledge base directly during decoding, effectively integrating knowledge into query generation.
We show that SPARKLE achieves new state-of-the-art results on SimpleQuestions-Wiki and the highest F1 score on LCQuAD 1.0.
arXiv Detail & Related papers (2024-06-29T06:43:11Z) - MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering [64.6741991162092]
We present MinPrompt, a minimal data augmentation framework for open-domain question answering.
We transform the raw text into a graph structure to build connections between different factual sentences.
We then apply graph algorithms to identify the minimal set of sentences needed to cover the most information in the raw text.
We generate QA pairs based on the identified sentence subset and train the model on the selected sentences to obtain the final model.
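The sentence-selection step above resembles a classic set-cover problem. Below is a hypothetical greedy sketch; the sentence and fact names are illustrative, and the greedy rule stands in for MinPrompt's actual graph algorithms over factual sentences.

```python
def select_minimal_sentences(sentence_facts):
    """Greedily pick sentences until every fact is covered.

    sentence_facts: dict mapping a sentence id to the set of fact ids
    it covers (a stand-in for MinPrompt's graph-based selection).
    """
    universe = set().union(*sentence_facts.values())
    covered, chosen = set(), []
    while covered != universe:
        # Pick the sentence covering the most not-yet-covered facts.
        best = max(sentence_facts, key=lambda s: len(sentence_facts[s] - covered))
        if not sentence_facts[best] - covered:
            break  # no sentence adds coverage
        chosen.append(best)
        covered |= sentence_facts[best]
    return chosen

# Toy example: three sentences covering five facts.
facts = {"s1": {"a", "b"}, "s2": {"b", "c"}, "s3": {"c", "d", "e"}}
print(select_minimal_sentences(facts))  # s2 is redundant and skipped
```

QA pairs would then be generated only from the chosen subset, shrinking the augmentation data while keeping fact coverage.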
arXiv Detail & Related papers (2023-10-08T04:44:36Z) - Prompt Algebra for Task Composition [131.97623832435812]
We consider Visual Language Models with prompt tuning as our base classifier.
We propose constrained prompt tuning to improve performance of the composite classifier.
On UTZappos it improves classification accuracy over the best base model by 8.45% on average.
arXiv Detail & Related papers (2023-06-01T03:20:54Z) - M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [103.6153593636399]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning).
It introduces open words from WordNet to extend the prompt texts beyond the closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets with various scales, and extensive ablation studies also validate its effectiveness.
arXiv Detail & Related papers (2023-03-09T09:05:47Z) - Semantic-Native Communication: A Simplicial Complex Perspective [50.099494681671224]
We study semantic communication from a topological space perspective.
A transmitter first maps its data into a $k$-order simplicial complex and then learns its high-order correlations.
The receiver decodes the structure and infers the missing or distorted data.
arXiv Detail & Related papers (2022-10-30T22:33:44Z) - Neural-Symbolic Entangled Framework for Complex Query Answering [22.663509971491138]
We propose a Neural-Symbolic Entangled framework (ENeSy) for complex query answering.
It enables neural and symbolic reasoning to enhance each other, alleviating cascading errors and KG incompleteness.
ENeSy achieves SOTA performance on several benchmarks, especially when the model is trained with only the link prediction task.
arXiv Detail & Related papers (2022-09-19T06:07:10Z) - Automatically Generating Counterfactuals for Relation Exaction [18.740447044960796]
Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
arXiv Detail & Related papers (2022-02-22T04:46:10Z) - Open-set Short Utterance Forensic Speaker Verification using
Teacher-Student Network with Explicit Inductive Bias [59.788358876316295]
We propose a pipeline solution to improve speaker verification on a small real-world forensic field dataset.
By leveraging large-scale out-of-domain datasets, a knowledge distillation based objective function is proposed for teacher-student learning.
We show that the proposed objective function can efficiently improve the performance of teacher-student learning on short utterances.
arXiv Detail & Related papers (2020-09-21T00:58:40Z) - End-to-End Object Detection with Transformers [88.06357745922716]
We present a new method that views object detection as a direct set prediction problem.
Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components.
The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture.
arXiv Detail & Related papers (2020-05-26T17:06:38Z)
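The bipartite matching inside DETR's set-based loss can be illustrated with a toy cost matrix. The costs below are made-up numbers; DETR derives them from class probabilities and box overlaps and uses the Hungarian algorithm, whereas this sketch brute-forces the assignment for clarity.

```python
from itertools import permutations

# Hypothetical pairwise matching costs: rows are predictions,
# columns are ground-truth boxes (values are illustrative only).
cost = [
    [0.1, 0.9, 0.8],  # prediction 0 vs ground truths 0, 1, 2
    [0.7, 0.2, 0.6],
    [0.9, 0.8, 0.3],
]
n = len(cost)

# Find the one-to-one assignment minimizing total cost; each
# prediction is forced to a unique target, as in DETR's set loss.
best = min(permutations(range(n)),
           key=lambda p: sum(cost[i][p[i]] for i in range(n)))
print(list(enumerate(best)))  # unique prediction -> ground-truth pairs
```

The resulting unique matching is what lets DETR drop hand-designed components like non-maximum suppression from the detection pipeline.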
This list is automatically generated from the titles and abstracts of the papers in this site.