PIXHELL Attack: Leaking Sensitive Information from Air-Gap Computers via 'Singing Pixels'
- URL: http://arxiv.org/abs/2409.04930v1
- Date: Sat, 7 Sep 2024 23:09:56 GMT
- Title: PIXHELL Attack: Leaking Sensitive Information from Air-Gap Computers via 'Singing Pixels'
- Authors: Mordechai Guri
- Abstract summary: PIXHELL is a new type of covert channel attack allowing hackers to leak information via noise generated by the pixels on the screen.
The malicious code exploits the sound generated by coils and capacitors to control the frequencies emanating from the screen.
Our test shows that with a PIXHELL attack, textual and binary data can be exfiltrated from air-gapped, audio-gapped computers at a distance of 2m via sound modulated from LCD screens.
- Score: 1.74048653626208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Air-gapped systems are disconnected from the Internet and other networks because they contain or process sensitive data. However, it is known that attackers can use computer speakers to leak data via sound to circumvent the air-gap defense. To cope with this threat, when highly sensitive data is involved, the prohibition of loudspeakers or audio hardware might be enforced. This measure is known as an 'audio gap'. In this paper, we present PIXHELL, a new type of covert channel attack allowing hackers to leak information via noise generated by the pixels on the screen. No audio hardware or loudspeakers are required. Malware on the air-gapped, audio-gapped computers generates crafted pixel patterns that produce noise in the frequency range of 0-22 kHz. The malicious code exploits the sound generated by coils and capacitors to control the frequencies emanating from the screen. Acoustic signals can encode and transmit sensitive information. We present the adversarial attack model, cover related work, and provide technical background. We discuss bitmap generation and correlated acoustic signals and provide implementation details on the modulation and demodulation process. We evaluated the covert channel on various screens and tested it with different types of information. We also discuss evasion and stealth using low-brightness patterns that appear like black, turned-off screens. Finally, we propose a set of countermeasures. Our test shows that with a PIXHELL attack, textual and binary data can be exfiltrated from air-gapped, audio-gapped computers at a distance of 2m via sound modulated from LCD screens.
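The modulation idea described in the abstract can be sketched as on-off keying: a high-contrast stripe bitmap acts as the acoustic carrier (its rapid row-to-row intensity changes load the display's power circuitry, making coils and capacitors emit a tone), and bits are sent by switching between the carrier pattern and a black frame. This is a minimal illustrative sketch; the function names, frame parameters, and the pattern-to-frequency mapping are assumptions, not taken from the paper.

```python
import numpy as np

def stripe_frame(height, width, period):
    # Alternating black/white horizontal stripes; the rapid row-to-row
    # intensity changes load the display's power circuitry so that its
    # coils/capacitors emit an audible tone (the acoustic carrier).
    rows = (np.arange(height) // period) % 2
    return (np.repeat(rows[:, None], width, axis=1) * 255).astype(np.uint8)

def ook_modulate(bits, height=1080, width=1920, period=2, frames_per_bit=6):
    # On-off keying: bit 1 -> carrier (striped frame), bit 0 -> silence
    # (black frame). Each bit is held for several display refreshes.
    frames = []
    for b in bits:
        frame = (stripe_frame(height, width, period) if b
                 else np.zeros((height, width), np.uint8))
        frames.extend([frame] * frames_per_bit)
    return frames
```

In a real attack the stripe period and refresh behavior would be tuned so the emitted tone falls at a chosen carrier frequency; here the mapping is left abstract.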
Related papers
- RAMBO: Leaking Secrets from Air-Gap Computers by Spelling Covert Radio Signals from Computer RAM [1.74048653626208]
We present an attack allowing adversaries to leak information from air-gapped computers.
We show that malware on a compromised computer can generate radio signals from the memory bus (RAM).
With software-defined radio (SDR) hardware, and a simple off-the-shelf antenna, an attacker can intercept transmitted raw radio signals from a distance.
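At a high level, such a transmitter can be sketched as on-off keying of memory-bus activity: sustained large memory copies drive the DDR bus and radiate (carrier on), while idling leaves it quiet (carrier off). A hypothetical sketch assuming simple OOK timing; the paper's actual modulation scheme and timing details may differ.

```python
import time
import numpy as np

def transmit_bits(bits, bit_time=0.1, buf_mb=8):
    # OOK over RAM-bus emissions (hedged sketch): for each '1' bit,
    # saturate the memory bus with large copies for one bit period;
    # for each '0' bit, leave the bus idle.
    src = np.ones(buf_mb * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    for b in bits:
        end = time.monotonic() + bit_time
        if b:
            while time.monotonic() < end:
                np.copyto(dst, src)   # sustained bus traffic -> carrier on
        else:
            time.sleep(bit_time)      # bus idle -> carrier off
```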
arXiv Detail & Related papers (2024-09-03T21:06:04Z)
- Unsupervised Denoising for Signal-Dependent and Row-Correlated Imaging Noise [54.0185721303932]
We present the first fully unsupervised deep learning-based denoiser capable of handling imaging noise that is row-correlated.
Our approach uses a Variational Autoencoder with a specially designed autoregressive decoder.
Our method does not require a pre-trained noise model and can be trained from scratch using unpaired noisy data.
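To build intuition for the structured noise this denoiser targets, it can be simulated as a signal-dependent (shot-like) component plus an offset shared along each row. A hedged sketch; the parameters and the exact noise model are illustrative, not the paper's.

```python
import numpy as np

def add_row_correlated_noise(img, gain=0.05, row_sigma=0.02, rng=None):
    # Signal-dependent shot-like noise (variance grows with intensity)
    # plus a per-row offset shared by all pixels in that row -- the kind
    # of row-correlated structure a pixelwise-independent noise model
    # cannot capture.
    rng = rng or np.random.default_rng(0)
    shot = rng.normal(0.0, 1.0, img.shape) * np.sqrt(gain * np.clip(img, 0, None))
    rows = rng.normal(0.0, row_sigma, (img.shape[0], 1))
    return img + shot + rows
```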
arXiv Detail & Related papers (2023-10-11T20:48:20Z)
- A Survey on Acoustic Side Channel Attacks on Keyboards [0.0]
Mechanical keyboards are susceptible to acoustic side-channel attacks.
Researchers have developed methods that can extract typed keystrokes from ambient noise.
As microphone technology improves, so does the potential vulnerability to acoustic side-channel attacks.
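A common first step in these attacks is isolating individual keystroke events from a recording, for example by short-time energy thresholding, before any per-key classification. A minimal illustrative sketch, not the method of any specific surveyed paper.

```python
import numpy as np

def detect_keystrokes(signal, sr, win_ms=10, thresh=4.0):
    # Energy-based onset detection: keystrokes appear as short broadband
    # bursts well above the ambient noise floor. Returns the sample
    # offset of each window whose energy exceeds thresh x the median.
    win = int(sr * win_ms / 1000)
    n = len(signal) // win
    energy = (signal[:n * win].reshape(n, win) ** 2).sum(axis=1)
    floor = np.median(energy) + 1e-12
    return np.where(energy > thresh * floor)[0] * win
```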
arXiv Detail & Related papers (2023-09-20T02:26:53Z)
- Betray Oneself: A Novel Audio DeepFake Detection Model via Mono-to-Stereo Conversion [70.99781219121803]
Audio Deepfake Detection (ADD) aims to detect the fake audio generated by text-to-speech (TTS), voice conversion (VC) and replay, etc.
We propose a novel ADD model, termed M2S-ADD, that attempts to discover audio authenticity cues during the mono-to-stereo conversion process.
arXiv Detail & Related papers (2023-05-25T02:54:29Z) - NUANCE: Near Ultrasound Attack On Networked Communication Environments [0.0]
This study investigates a primary inaudible attack vector on Amazon Alexa voice services using near ultrasound trojans.
The research maps each attack vector to a tactic or technique from the MITRE ATT&CK matrix.
The experiment involved generating and surveying fifty near-ultrasonic audio samples to assess the attacks' effectiveness.
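Near-ultrasound command injection is typically built on amplitude modulation: the command signal is modulated onto a carrier near 20 kHz, inaudible to most adults, which a microphone's nonlinearity can demodulate back into the audible band that the voice assistant processes. A simplified sketch assuming plain AM; the study's actual signal generation may differ.

```python
import numpy as np

def near_ultrasound_am(command, sr=48000, fc=20000, depth=0.8):
    # Amplitude-modulate a baseband command onto a ~20 kHz carrier.
    # Output is normalized so its peak stays within [-1, 1].
    t = np.arange(len(command)) / sr
    carrier = np.sin(2 * np.pi * fc * t)
    norm = command / (np.max(np.abs(command)) + 1e-12)
    return (1 + depth * norm) * carrier / (1 + depth)
```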
arXiv Detail & Related papers (2023-04-25T23:28:46Z) - VarietySound: Timbre-Controllable Video to Sound Generation via
Unsupervised Information Disentanglement [68.42632589736881]
We pose the task of generating sound with a specific timbre given a video input and a reference audio sample.
To solve this task, we disentangle each target audio clip into three components: temporal information, acoustic information, and background information.
Our method can generate high-quality audio samples with good synchronization with events in video and high timbre similarity with the reference audio.
arXiv Detail & Related papers (2022-11-19T11:12:01Z) - SceneFake: An Initial Dataset and Benchmarks for Scene Fake Audio Detection [54.74467470358476]
This paper proposes a dataset for scene fake audio detection named SceneFake.
A manipulated audio is generated by only tampering with the acoustic scene of an original audio.
Some scene fake audio detection benchmark results on the SceneFake dataset are reported in this paper.
arXiv Detail & Related papers (2022-11-11T09:05:50Z) - Partially Fake Audio Detection by Self-attention-based Fake Span
Discovery [89.21979663248007]
We propose a novel framework by introducing the question-answering (fake span discovery) strategy with the self-attention mechanism to detect partially fake audios.
Our submission ranked second in the partially fake audio detection track of ADD 2022.
arXiv Detail & Related papers (2022-02-14T13:20:55Z) - Generating Visually Aligned Sound from Videos [83.89485254543888]
We focus on the task of generating sound from natural videos.
The sound should be both temporally and content-wise aligned with visual signals.
Some sounds produced outside of the camera's view cannot be inferred from the video content.
arXiv Detail & Related papers (2020-07-14T07:51:06Z) - Private Speech Classification with Secure Multiparty Computation [15.065527713259542]
We propose the first privacy-preserving solution for deep learning-based audio classification that is provably secure.
Our approach allows one party (Alice) to have her speech signal classified by another party's (Bob's) deep neural network without Bob ever seeing Alice's speech signal in unencrypted form.
We evaluate the efficiency-security-accuracy trade-off of the proposed solution in a use case for privacy-preserving emotion detection from speech with a convolutional neural network.
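The core primitive behind such multiparty-computation solutions is additive secret sharing: each value is split into shares that individually reveal nothing, and linear operations (like a neural network's dot products) can be evaluated locally on the shares. A toy two-party sketch over integers with public weights; the actual protocol also keeps the model private and evaluates non-linear layers securely.

```python
import random

def share(x, mod=2**32):
    # Split x into two additive shares: x = (s0 + s1) mod m.
    # Either share alone is uniformly random and reveals nothing about x.
    s0 = random.randrange(mod)
    return s0, (x - s0) % mod

def shared_dot(shares0, shares1, weights, mod=2**32):
    # Each party computes a dot product on its own shares; summing the
    # two partial results (mod m) reconstructs the true dot product,
    # since w . s0 + w . s1 = w . (s0 + s1) = w . x (mod m).
    p0 = sum(w * s for w, s in zip(weights, shares0)) % mod
    p1 = sum(w * s for w, s in zip(weights, shares1)) % mod
    return (p0 + p1) % mod
```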
arXiv Detail & Related papers (2020-07-01T05:26:06Z) - Detecting Audio Attacks on ASR Systems with Dropout Uncertainty [40.9172128924305]
We show that our defense is able to detect attacks created through optimized perturbations and frequency masking.
We test our defense on Mozilla's CommonVoice dataset, the UrbanSound dataset, and an excerpt of the LibriSpeech dataset.
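The defense idea, Monte-Carlo dropout, can be sketched as follows: run the model several times with dropout left enabled at inference and flag inputs whose predictions vary too much, since adversarial inputs tend to sit in unstable regions of the decision surface. A minimal sketch; the model interface and the threshold value are illustrative assumptions.

```python
import numpy as np

def dropout_uncertainty(predict, x, n=30, rng=None):
    # Run the stochastic model n times (dropout kept on) and measure the
    # spread of its outputs. `predict(x, rng)` is an assumed interface.
    rng = rng or np.random.default_rng(0)
    outs = np.stack([predict(x, rng) for _ in range(n)])
    return outs.mean(axis=0), outs.std(axis=0)

def is_attack(predict, x, threshold=0.2, n=30):
    # Flag the input if any output dimension's std exceeds the threshold.
    _, std = dropout_uncertainty(predict, x, n)
    return float(std.max()) > threshold
```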
arXiv Detail & Related papers (2020-06-02T19:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.