RoboMal: Malware Detection for Robot Network Systems
- URL: http://arxiv.org/abs/2201.08470v1
- Date: Thu, 20 Jan 2022 22:11:38 GMT
- Title: RoboMal: Malware Detection for Robot Network Systems
- Authors: Upinder Kaur, Haozhe Zhou, Xiaxin Shen, Byung-Cheol Min, Richard M.
Voyles
- Abstract summary: We propose the RoboMal framework of static malware detection on binary executables to detect malware before it gets a chance to execute.
The framework is compared against widely used supervised learning models: GRU, CNN, and ANN.
Notably, the LSTM-based RoboMal model outperforms the other models with an accuracy of 85% and precision of 87% in 10-fold cross-validation.
- Score: 4.357338639836869
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Robot systems are increasingly integrating into numerous avenues of modern
life. From cleaning houses to providing guidance and emotional support, robots
now work directly with humans. Due to their far-reaching applications and
progressively complex architecture, they are being targeted by adversarial
attacks such as sensor-actuator attacks, data spoofing, malware, and network
intrusion. Therefore, security for robotic systems has become crucial. In this
paper, we address the underserved area of malware detection in robotic
software. Since robots work in close proximity to humans, often with direct
interactions, malware could have life-threatening impacts. Hence, we propose
the RoboMal framework of static malware detection on binary executables to
detect malware before it gets a chance to execute. Additionally, we address the
great paucity of data in this space by providing the RoboMal dataset comprising
controller executables of a small-scale autonomous car. The performance of the
framework is compared against widely used supervised learning models: GRU, CNN,
and ANN. Notably, the LSTM-based RoboMal model outperforms the other models
with an accuracy of 85% and precision of 87% in 10-fold cross-validation, hence
proving the effectiveness of the proposed framework.
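The evaluation protocol named above, 10-fold cross-validation reporting accuracy and precision, can be sketched in plain Python. The `train_fn` classifier passed in below is a hypothetical stand-in for the LSTM-based RoboMal model; only the fold-splitting and metric logic is illustrated here, not the paper's implementation.

```python
from typing import Callable, List, Tuple

def k_fold_indices(n: int, k: int = 10) -> List[Tuple[List[int], List[int]]]:
    """Split indices 0..n-1 into k (train, test) folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits

def accuracy(y_true: List[int], y_pred: List[int]) -> float:
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true: List[int], y_pred: List[int], positive: int = 1) -> float:
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp) if tp + fp else 0.0

def cross_validate(X: list, y: List[int],
                   train_fn: Callable[[list, List[int]], Callable],
                   k: int = 10) -> Tuple[float, float]:
    """Train on k-1 folds, test on the held-out fold, average the metrics."""
    accs, precs = [], []
    for train_idx, test_idx in k_fold_indices(len(X), k):
        model = train_fn([X[i] for i in train_idx], [y[i] for i in train_idx])
        preds = [model(X[i]) for i in test_idx]
        truth = [y[i] for i in test_idx]
        accs.append(accuracy(truth, preds))
        precs.append(precision(truth, preds))
    return sum(accs) / k, sum(precs) / k
```

In this framing, the reported 85% accuracy and 87% precision would be the two values returned by `cross_validate` averaged over the ten folds.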
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people, and its ability to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Jailbreaking LLM-Controlled Robots [82.04590367171932]
Large language models (LLMs) have revolutionized the field of robotics by enabling contextual reasoning and intuitive human-robot interaction.
LLMs are vulnerable to jailbreaking attacks, wherein malicious prompters elicit harmful text by bypassing LLM safety guardrails.
We introduce RoboPAIR, the first algorithm designed to jailbreak LLM-controlled robots.
arXiv Detail & Related papers (2024-10-17T15:55:36Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
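The mask-and-recover idea described above can be illustrated with a minimal sketch: zero out some node feature vectors, estimate them from unmasked neighbors, and score the reconstruction error used as a training signal. The function names and the mean-of-neighbors estimator are illustrative assumptions, not MASKDROID's actual GNN.

```python
import random
from typing import Dict, List, Set, Tuple

def mask_nodes(features: Dict[int, List[float]], ratio: float,
               seed: int = 0) -> Tuple[Dict[int, List[float]], Set[int]]:
    """Randomly zero out a fraction of node feature vectors; return the
    masked features and the set of masked node ids."""
    rng = random.Random(seed)
    nodes = sorted(features)
    masked_ids = set(rng.sample(nodes, max(1, int(len(nodes) * ratio))))
    masked = {n: ([0.0] * len(v) if n in masked_ids else list(v))
              for n, v in features.items()}
    return masked, masked_ids

def reconstruct(masked: Dict[int, List[float]],
                adjacency: Dict[int, List[int]],
                masked_ids: Set[int]) -> Dict[int, List[float]]:
    """One message-passing step: a masked node's features are estimated
    as the mean of its unmasked neighbors' features."""
    out = {n: list(v) for n, v in masked.items()}
    for n in masked_ids:
        neigh = [masked[m] for m in adjacency.get(n, []) if m not in masked_ids]
        if neigh:
            dim = len(neigh[0])
            out[n] = [sum(v[d] for v in neigh) / len(neigh) for d in range(dim)]
    return out

def reconstruction_loss(original: Dict[int, List[float]],
                        reconstructed: Dict[int, List[float]],
                        masked_ids: Set[int]) -> float:
    """Mean squared error over the masked nodes only."""
    total, count = 0.0, 0
    for n in masked_ids:
        for a, b in zip(original[n], reconstructed[n]):
            total += (a - b) ** 2
            count += 1
    return total / count
```

Minimizing this loss forces the model to infer hidden parts of the graph from context, which is the intuition behind the stability claim above.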
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- ROS-Causal: A ROS-based Causal Analysis Framework for Human-Robot Interaction Applications [3.8625803348911774]
This paper introduces ROS-Causal, a framework for causal discovery in human-robot spatial interactions.
An ad-hoc simulator, integrated with ROS, illustrates the approach's effectiveness.
arXiv Detail & Related papers (2024-02-25T11:37:23Z)
- ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]

This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works regarding human-to-robot similarity in terms of efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
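A rough sketch of the window-ablation-and-vote idea: each ablated copy keeps one window of the binary and masks the rest, a base classifier votes on every copy, and the majority label wins. The function names, sentinel byte, and certification comment below are illustrative assumptions, not DRSM's actual design.

```python
from collections import Counter
from typing import Callable, List, Tuple

def ablate_windows(data: bytes, window: int) -> List[bytes]:
    """One ablated copy per window position: keep only that window's
    bytes, mask everything else with a 0x00 sentinel."""
    copies = []
    for start in range(0, len(data), window):
        masked = bytearray(len(data))            # all zeros (masked)
        masked[start:start + window] = data[start:start + window]
        copies.append(bytes(masked))
    return copies

def smoothed_classify(data: bytes, window: int,
                      base_classifier: Callable[[bytes], int]) -> Tuple[int, int]:
    """Classify every ablated copy and majority-vote.

    Returns (label, margin). Intuition for the certificate: a contiguous
    adversarial patch of p bytes intersects at most ceil(p / window) + 1
    windows, so it can flip at most that many votes; the majority label
    is provably stable while margin exceeds twice that count."""
    votes = Counter(base_classifier(c) for c in ablate_windows(data, window))
    (top, n_top), *rest = votes.most_common()
    runner_up = rest[0][1] if rest else 0
    return top, n_top - runner_up
```

Because the vote margin bounds how many per-window decisions an attacker would need to flip, robustness can be certified without enumerating adversarial inputs.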
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Reconstructing Robot Operations via Radio-Frequency Side-Channel [1.0742675209112622]
In recent years, a variety of attacks have been proposed that actively target the robot itself from the cyber domain.
In this work, we investigate whether an insider adversary can accurately fingerprint robot movements and operational warehousing via the radio frequency side channel.
arXiv Detail & Related papers (2022-09-21T08:14:51Z)
- A New Paradigm of Threats in Robotics Behaviors [4.873362301533825]
We identify a new paradigm of security threats in the next generation of robots.
These threats fall beyond the known hardware or network-based ones.
We provide a taxonomy of attacks that exploit these vulnerabilities with realistic examples.
arXiv Detail & Related papers (2021-03-24T15:33:49Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Fault-Aware Robust Control via Adversarial Reinforcement Learning [35.16413579212691]
We propose an adversarial reinforcement learning framework, which significantly improves robot robustness in joint-damage cases.
We validate our algorithm on a three-fingered robot hand and a quadruped robot.
Our algorithm can be trained only in simulation and directly deployed on a real robot without any fine-tuning.
arXiv Detail & Related papers (2020-11-17T16:01:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.