Going Viral: Case Studies on the Impact of Protestware
- URL: http://arxiv.org/abs/2401.16715v1
- Date: Tue, 30 Jan 2024 03:23:04 GMT
- Title: Going Viral: Case Studies on the Impact of Protestware
- Authors: Youmei Fan, Dong Wang, Supatsara Wattanakriengkrai, Hathaichanok
Damrongsiri, Christoph Treude, Hideaki Hata, Raula Gaikovina Kula
- Abstract summary: We study two notable protestware cases, Colors.js and es5-ext, comparing them with discussions of a typical security vulnerability as a baseline.
We perform a thematic analysis of more than two thousand protest-related posts to extract the different narratives when discussing protestware.
- Score: 13.697165741749513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Maintainers are now self-sabotaging their work in order to take political or
economic stances, a practice referred to as "protestware". In this poster, we
present our approach to understanding how the discourse about such an attack went
viral, how it was received by the community, and whether developers responded to
the attack in a timely manner. We study two notable protestware cases, i.e.,
Colors.js and es5-ext, comparing them with discussions of a typical security
vulnerability, i.e., Ua-parser, as a baseline, and perform a thematic analysis
of more than two thousand protest-related posts to extract the different
narratives when discussing protestware.
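As a rough illustration of the kind of measurement this study calls for, the sketch below (which is not the authors' pipeline) takes a handful of timestamped posts about an incident and derives a per-hour discussion volume plus the delay until the first post proposing a mitigation. The post texts, dates, and keyword list are illustrative assumptions.

```python
# Minimal sketch, assuming a small list of timestamped posts about an incident.
# NOT the paper's pipeline; data, dates, and keywords are made up.
from datetime import datetime, timedelta
from collections import Counter

posts = [  # (timestamp, text) -- hypothetical GitHub issue comments
    ("2022-01-09T04:00:00", "colors.js is printing garbage, our build is broken"),
    ("2022-01-09T05:30:00", "same here, CI failing everywhere"),
    ("2022-01-09T07:10:00", "workaround: pin colors to 1.4.0 in package.json"),
    ("2022-01-10T02:00:00", "npm has reverted the malicious release"),
]

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts)

# Crude "virality" curve: number of posts per hour since the first report.
t0 = parse(posts[0][0])
volume = Counter((parse(ts) - t0) // timedelta(hours=1) for ts, _ in posts)
print("posts per hour since first report:", dict(sorted(volume.items())))

# Timeliness of the response: delay until the first post that proposes a fix.
MITIGATION_HINTS = ("pin", "revert", "downgrade", "rollback")
first_fix = next(
    (parse(ts) for ts, text in posts
     if any(hint in text.lower() for hint in MITIGATION_HINTS)),
    None,
)
if first_fix is not None:
    print("time to first mitigation:", first_fix - t0)
```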
Related papers
- An Investigation into Protestware [3.236198583140341]
Protestware is software that is modified or weaponized by its maintainers to stage a protest.
Recent events in the Russo-Ukrainian war have sparked a new wave of protestware.
arXiv Detail & Related papers (2024-09-30T01:17:16Z)
- Developer Reactions to Protestware in Open Source Software: The cases of color.js and es5.ext [13.043109610854646]
We study two notable protestware cases i.e., colors.js and es5-ext.
By establishing a taxonomy of protestware discussions, we identify posts that express stances and provide technical mitigation instructions.
This work sheds light on the nuanced landscape of protestware discussions, offering insights for both researchers and developers.
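To make the taxonomy idea above concrete, here is a minimal, hypothetical sketch of sorting posts into stance-taking versus mitigation-oriented categories with keyword rules; the category names and keyword lists are assumptions for illustration, not the coding scheme established in the paper.

```python
# Minimal sketch of labelling protestware-related posts into coarse categories.
# The categories and keywords are illustrative assumptions, not the paper's taxonomy.
RULES = {
    "technical_mitigation": ["pin", "downgrade", "lockfile", "revert", "fork"],
    "supportive_stance": ["support the author", "right to protest"],
    "opposing_stance": ["irresponsible", "sabotage", "never trust"],
}

def label(post: str) -> list[str]:
    text = post.lower()
    hits = [cat for cat, kws in RULES.items() if any(kw in text for kw in kws)]
    return hits or ["other"]

print(label("Just pin colors.js to 1.4.0 and regenerate the lockfile."))
# -> ['technical_mitigation']
print(label("This kind of sabotage makes me never trust tiny dependencies again."))
# -> ['opposing_stance']
```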
arXiv Detail & Related papers (2024-09-24T02:26:48Z)
- Adversarial Attacks on Multimodal Agents [73.97379283655127]
Vision-enabled language models (VLMs) are now used to build autonomous multimodal agents capable of taking actions in real environments.
We show that multimodal agents raise new safety risks, even though attacking agents is more challenging than prior attacks due to limited access to and knowledge about the environment.
arXiv Detail & Related papers (2024-06-18T17:32:48Z)
- SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models [34.557309967708406]
In this work, we investigate the potential vulnerabilities of such instruction-following speech-language models to adversarial attacks and jailbreaking.
We design algorithms that can generate adversarial examples to jailbreak SLMs in both white-box and black-box attack settings without human involvement.
Our models, trained on dialog data with speech instructions, achieve state-of-the-art performance on the spoken question-answering task, scoring over 80% on both safety and helpfulness metrics.
arXiv Detail & Related papers (2024-05-14T04:51:23Z)
- Discursive objection strategies in online comments: Developing a classification schema and validating its training [2.6603898952678167]
Most Americans agree that misinformation, hate speech and harassment are harmful and inadequately curbed on social media.
We conducted a content analysis of more than 6500 comment replies to trending news videos on YouTube and Twitter.
We identified seven distinct discursive objection strategies.
arXiv Detail & Related papers (2024-05-13T19:39:00Z)
- Leveraging the Context through Multi-Round Interactions for Jailbreaking Attacks [55.603893267803265]
Large Language Models (LLMs) are susceptible to Jailbreaking attacks.
Jailbreaking attacks aim to extract harmful information by subtly modifying the attack query.
We focus on a new attack form, called Contextual Interaction Attack.
arXiv Detail & Related papers (2024-02-14T13:45:19Z)
- The defender's perspective on automatic speaker verification: An overview [87.83259209657292]
The reliability of automatic speaker verification (ASV) has been undermined by the emergence of spoofing attacks.
The aim of this paper is to provide a thorough and systematic overview of the defense methods used against these types of attacks.
arXiv Detail & Related papers (2023-05-22T08:01:59Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
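The following toy sketch illustrates what a context-consistency check can look like: a detected label that rarely co-occurs with the rest of the scene is flagged. The co-occurrence table, default value, and threshold are made-up assumptions, not the detectors or attack studied in the paper.

```python
# Toy context-consistency check: flag a detected label that rarely co-occurs
# with the other labels in the scene. All numbers below are illustrative guesses.
CO_OCCURRENCE = {          # rough co-occurrence scores for label pairs, assumed
    ("road", "car"): 0.85, ("car", "road"): 0.85,
    ("toaster", "road"): 0.01, ("toaster", "car"): 0.02,
}
DEFAULT = 0.05             # fallback for label pairs with no statistics

def consistency(label: str, context: list[str]) -> float:
    scores = [CO_OCCURRENCE.get((label, other), DEFAULT) for other in context]
    return sum(scores) / len(scores) if scores else 1.0

detections = ["road", "car", "toaster"]   # "toaster" plays the adversarial object
for obj in detections:
    others = [d for d in detections if d != obj]
    score = consistency(obj, others)
    verdict = "suspicious" if score < 0.10 else "ok"
    print(f"{obj}: consistency={score:.3f} -> {verdict}")
```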
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Dynamic Emotions of Supporters and Opponents of Anti-racism Movement from George Floyd Protests [4.628652869726037]
This study empirically examines a recent anti-racism movement initiated by the death of George Floyd through the lens of stance prediction and aspect-based sentiment analysis (ABSA).
First, this study found that the stances of tweets and users do change over the course of the protest. Furthermore, more users shifted their stance than maintained it.
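A minimal sketch of the stance-shift measurement mentioned above, assuming each post already carries a stance label from some classifier; the users, days, and labels are invented for illustration.

```python
# Minimal sketch: per user, compare the stance of the earliest and latest post
# to count who shifted versus maintained their stance. Data is hypothetical, and
# stance labels would normally come from a trained stance classifier.
from collections import defaultdict

posts = [  # (user, day_of_protest, stance_label)
    ("u1", 1, "support"), ("u1", 20, "support"),
    ("u2", 2, "against"), ("u2", 25, "support"),
    ("u3", 3, "neutral"), ("u3", 30, "against"),
]

history = defaultdict(list)
for user, day, stance in posts:
    history[user].append((day, stance))

shifted = maintained = 0
for user, timeline in history.items():
    timeline.sort()                      # order each user's posts by day
    first, last = timeline[0][1], timeline[-1][1]
    if first == last:
        maintained += 1
    else:
        shifted += 1

print(f"users who shifted stance: {shifted}, who maintained it: {maintained}")
# -> users who shifted stance: 2, who maintained it: 1
```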
arXiv Detail & Related papers (2021-09-29T08:27:30Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
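As a loose illustration of detection via purification (not the paper's self-supervised models), the sketch below denoises an input, re-scores it, and flags it as adversarial when the score shifts too much; the stand-in scorer, purifier, perturbation, and threshold are all assumptions.

```python
# Toy detection-by-purification: purify the input, re-score it, and flag the
# input as adversarial when the score changes by more than a threshold.
# The scorer, purifier, perturbation, and threshold are stand-in assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1024)                  # stand-in "speaker verification" weights

def score(x: np.ndarray) -> float:         # stand-in verification score in (-1, 1)
    return float(np.tanh(w @ x / x.size))

def purify(x: np.ndarray) -> np.ndarray:   # crude purifier: moving-average smoothing
    return np.convolve(x, np.ones(5) / 5, mode="same")

def is_adversarial(x: np.ndarray, threshold: float = 0.1) -> bool:
    return abs(score(x) - score(purify(x))) > threshold

clean = rng.normal(size=1024)              # stand-in utterance features
adversarial = clean + 0.5 * np.sign(w)     # FGSM-style step along the scorer's gradient sign

print("clean input flagged:      ", is_adversarial(clean))
print("adversarial input flagged:", is_adversarial(adversarial))
```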
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending the attack-defense cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)