Obtaining physical layer data of latest generation networks for investigating adversary attacks
- URL: http://arxiv.org/abs/2405.19340v1
- Date: Thu, 2 May 2024 06:03:27 GMT
- Title: Obtaining physical layer data of latest generation networks for investigating adversary attacks
- Authors: M. V. Ushakova, Yu. A. Ushakov, L. V. Legashev,
- Abstract summary: Machine learning can be used to optimize the functions of latest-generation data networks such as 5G and 6G.
Adversarial measures that manipulate the behaviour of intelligent machine learning models are becoming a major concern.
A simulation model is proposed that works in conjunction with machine learning applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of machine learning is developing rapidly and is applied across many areas of science and technology. In particular, machine learning can be used to optimize the functions of latest-generation data networks such as 5G and 6G, including functions at the lower layers. A distinctive feature of applying machine learning in the radio path, for targeted beamforming in modern ultra-massive MIMO, reconfigurable intelligent surfaces, and other technologies, is the complex acquisition and processing of data from the physical layer. At the same time, adversarial measures that manipulate the behaviour of intelligent machine learning models are becoming a major concern, since many machine learning models are sensitive to incorrect input data. To obtain data on such attacks directly from the processing of service information, a simulation model is proposed that works in conjunction with machine learning applications.
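The sensitivity highlighted in the abstract can be illustrated with a toy experiment. The sketch below is not taken from the paper; the feature layout (per-subcarrier SNR estimates), the logistic-regression model, and the perturbation budget are all illustrative assumptions. It trains a small classifier on simulated physical-layer features and applies an FGSM-style perturbation to show how a bounded change to the input shifts, and often flips, the model's prediction.

```python
# Minimal sketch (not from the paper): a hand-rolled logistic-regression
# classifier over simulated physical-layer features (e.g. per-subcarrier SNR),
# plus an FGSM-style perturbation showing how a small, bounded change to the
# input can sharply shift the model's decision.
import numpy as np

rng = np.random.default_rng(0)
n_features = 16                      # e.g. SNR estimates for 16 subcarriers

# Synthetic training data: class 1 = "good channel", class 0 = "degraded channel"
X = rng.normal(0.0, 1.0, size=(500, n_features))
y = (X.mean(axis=1) > 0).astype(float)

# Train logistic regression by plain gradient descent
w, b = np.zeros(n_features), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# FGSM-style attack on one sample: step along the sign of the loss gradient
x = X[0]
p = sigmoid(x @ w + b)
grad_x = (p - y[0]) * w              # d(cross-entropy)/dx for this model
eps = 0.25                           # perturbation budget per feature
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", sigmoid(x @ w + b))
print("adversarial prediction:", sigmoid(x_adv @ w + b))
```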
Related papers
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Machine Learning with Chaotic Strange Attractors [0.0]
We present an analog computing method that harnesses chaotic nonlinear attractors to perform machine learning tasks with low power consumption.
Inspired by neuromorphic computing, our model is a programmable, versatile, and generalized platform for machine learning tasks.
When deployed as a simple analog device, it only requires milliwatt-scale power levels while being on par with current machine learning techniques.
arXiv Detail & Related papers (2023-09-23T12:54:38Z)
- Machine Learning for QoS Prediction in Vehicular Communication: Challenges and Solution Approaches [46.52224306624461]
We consider maximum throughput prediction to enhance, for example, streaming or high-definition mapping applications.
We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data.
We use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed.
arXiv Detail & Related papers (2023-02-23T12:29:20Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors we can further close the reality gap between dynamic models, simulations, and actual hardware (a generic sketch of this residual-correction idea follows the list below).
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Scientific Machine Learning Benchmarks [0.17205106391379021]
The breakthrough in Deep Learning neural networks has transformed the use of AI and machine learning technologies for the analysis of very large experimental datasets.
Identifying the most appropriate machine learning algorithm for the analysis of any given scientific dataset is still a challenge for scientists.
We describe our approach to the development of scientific machine learning benchmarks and review other approaches to benchmarking scientific machine learning.
arXiv Detail & Related papers (2021-10-25T10:05:11Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Cost-effective Machine Learning Inference Offload for Edge Computing [0.3149883354098941]
This paper proposes a novel offloading mechanism by leveraging installed-base on-premises (edge) computational resources.
The proposed mechanism allows the edge devices to offload heavy and compute-intensive workloads to edge nodes instead of using remote cloud.
arXiv Detail & Related papers (2020-12-07T21:11:02Z)
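For the Real-to-Sim entry above, the residual-learning idea can be sketched generically. This is an illustrative assumption, not the paper's learning-based unscented Kalman filter: the simulator, the "real" system, and the polynomial regressor below are made up purely for demonstration of the residual-correction pattern.

```python
# Generic residual-correction sketch: fit a model to the gap between a simple
# simulator and noisy "real" measurements, then correct the simulator with it.
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    # Idealised dynamics model: nominal linear next-state map
    return 0.9 * x

def real_system(x):
    # "Real" hardware: unmodelled drag-like term plus measurement noise
    return 0.9 * x - 0.05 * x**2 + rng.normal(0.0, 0.01, size=x.shape)

# Collect paired data from the simulator and the real system
x = rng.uniform(-2.0, 2.0, size=200)
residual = real_system(x) - simulator(x)

# Fit the residual with a small polynomial regressor (least squares)
coeffs = np.polyfit(x, residual, deg=3)

# Corrected model = simulator + learned residual
x_test = np.linspace(-2.0, 2.0, 5)
corrected = simulator(x_test) + np.polyval(coeffs, x_test)
true_mean = 0.9 * x_test - 0.05 * x_test**2   # noise-free real behaviour

print("simulator error:", np.abs(simulator(x_test) - true_mean))
print("corrected error:", np.abs(corrected - true_mean))
```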
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.