Othered, Silenced and Scapegoated: Understanding the Situated Security
of Marginalised Populations in Lebanon
- URL: http://arxiv.org/abs/2306.10149v1
- Date: Fri, 16 Jun 2023 19:36:39 GMT
- Title: Othered, Silenced and Scapegoated: Understanding the Situated Security
of Marginalised Populations in Lebanon
- Authors: Jessica McClearn, Rikke Bjerg Jensen, Reem Talhouk
- Abstract summary: We situate our work in the post-conflict Lebanese context, shaped by sectarian divides, failing governance and economic collapse.
Our research highlights how LGBTQI+ identifying people and refugees are scapegoated for the failings of the Lebanese government.
We show how government-supported incitements of violence aimed at transferring blame from the political leadership to these groups lead to amplified digital security risks.
- Score: 17.10104036777213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we explore the digital security experiences of marginalised
populations in Lebanon such as LGBTQI+ identifying people, refugees and women.
We situate our work in the post-conflict Lebanese context, which is shaped by
sectarian divides, failing governance and economic collapse. We do so through
an ethnographically informed study conducted in Beirut, Lebanon, in July 2022
and through interviews with 13 people with Lebanese digital and human rights
expertise. Our research highlights how LGBTQI+ identifying people and refugees
are scapegoated for the failings of the Lebanese government, while women who
speak out against such failings are silenced. We show how government-supported
incitements of violence aimed at transferring blame from the political
leadership to these groups lead to amplified digital security risks for already
at-risk populations. Positioning our work in broader sociological
understandings of security, we discuss how the Lebanese context impacts
identity and ontological security. We conclude by proposing to design for and
with positive security in post-conflict settings.
Related papers
- A Survey of Attacks on Large Language Models [5.845689496906739]
Large language models (LLMs) and LLM-based agents have been widely deployed in real-world applications.
This paper provides a systematic overview of adversarial attacks targeting both LLMs and LLM-based agents.
arXiv Detail & Related papers (2025-05-18T22:55:16Z)
- Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report [50.268821168513654]
We present Foundation-Sec-8B, a cybersecurity-focused large language model (LLM) built on the Llama 3.1 architecture.
We evaluate it across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks.
By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
arXiv Detail & Related papers (2025-04-28T08:41:12Z)
- SafeDialBench: A Fine-Grained Safety Benchmark for Large Language Models in Multi-Turn Dialogues with Diverse Jailbreak Attacks [90.41592442792181]
We propose SafeDialBench, a fine-grained benchmark for evaluating the safety of Large Language Models (LLMs).
Specifically, we design a two-tier hierarchical safety taxonomy that considers 6 safety dimensions and generate more than 4000 multi-turn dialogues in both Chinese and English under 22 dialogue scenarios.
Notably, we construct an innovative assessment framework for LLMs that measures their capabilities in detecting and handling unsafe information and in maintaining consistency when facing jailbreak attacks.
arXiv Detail & Related papers (2025-02-16T12:08:08Z)
- Silenced Voices: Exploring Social Media Polarization and Women's Participation in Peacebuilding in Ethiopia [16.99659597567309]
The study highlights the significant threats of social media polarization and weaponization in Ethiopia.
It uncovers the lack of effective digital peacebuilding initiatives.
The study recommends enhanced moderation and advocates that ethical considerations in algorithmic design gain traction.
arXiv Detail & Related papers (2024-12-02T14:37:41Z)
- Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents [67.07177243654485]
This survey collects and analyzes the different threats faced by LLM-based agents.
We identify six key features of LLM-based agents, based on which we summarize the current research progress.
We select four representative agents as case studies to analyze the risks they may face in practical use.
arXiv Detail & Related papers (2024-11-14T15:40:04Z)
- Arabic Dataset for LLM Safeguard Evaluation [62.96160492994489]
This study explores the safety of large language models (LLMs) in Arabic, a language with distinct linguistic and cultural complexities.
We present an Arab-region-specific safety evaluation dataset consisting of 5,799 questions, including direct attacks, indirect attacks, and harmless requests with sensitive words.
arXiv Detail & Related papers (2024-10-22T14:12:43Z)
- Multimodal Situational Safety [73.63981779844916]
We present the first evaluation and analysis of a novel safety challenge termed Multimodal Situational Safety.
For an MLLM to respond safely, whether through language or action, it often needs to assess the safety implications of a language query within its corresponding visual context.
We develop the Multimodal Situational Safety benchmark (MSSBench) to assess the situational safety performance of current MLLMs.
arXiv Detail & Related papers (2024-10-08T16:16:07Z)
- CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models [2.2779399250291577]
CYBERSECEVAL 3 assesses 8 different risks across two broad categories: risk to third parties, and risk to application developers and end users.
Compared to previous work, we add new areas focused on offensive security capabilities: automated social engineering, scaling manual offensive cyber operations, and autonomous offensive cyber operations.
arXiv Detail & Related papers (2024-08-02T23:47:27Z)
- The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies [58.94148083602662]
Large Language Model (LLM) agents have evolved to perform complex tasks.
The widespread application of LLM agents demonstrates their significant commercial value.
However, they also expose security and privacy vulnerabilities.
This survey aims to provide a comprehensive overview of the newly emerged privacy and security issues faced by LLM agents.
arXiv Detail & Related papers (2024-07-28T00:26:24Z)
- Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD).
PAD is a pipeline designed to safeguard LLMs by combining red-teaming (attack) and blue-teaming (safety training) techniques in a novel way.
PAD significantly outperforms existing baselines in both finding effective attacks and establishing a robust safe guardrail.
arXiv Detail & Related papers (2024-07-01T23:25:30Z)
- Security Patchworking in Lebanon: Infrastructuring Across Failing Infrastructures [13.04459271722538]
We look at the infrastructuring work carried out by people in Lebanon to establish and maintain everyday security in response to multiple failing infrastructures.
Through our analysis we develop the notion of security patchworking that makes visible the infrastructuring work necessitated to secure basic needs.
Such practices are rooted in differing mechanisms of protection that often result in new forms of insecurity.
arXiv Detail & Related papers (2023-10-25T20:12:20Z)
- SafetyBench: Evaluating the Safety of Large Language Models [54.878612385780805]
SafetyBench is a comprehensive benchmark for evaluating the safety of Large Language Models (LLMs).
It comprises 11,435 diverse multiple choice questions spanning across 7 distinct categories of safety concerns.
Our tests over 25 popular Chinese and English LLMs in both zero-shot and few-shot settings reveal a substantial performance advantage for GPT-4 over its counterparts.
arXiv Detail & Related papers (2023-09-13T15:56:50Z)
- Safety Assessment of Chinese Large Language Models [51.83369778259149]
Large language models (LLMs) may generate insulting and discriminatory content, reflect incorrect social values, and may be used for malicious purposes.
To promote the deployment of safe, responsible, and ethical AI, we release SafetyPrompts including 100k augmented prompts and responses by LLMs.
arXiv Detail & Related papers (2023-04-20T16:27:35Z)
- LEBANONUPRISING: a thorough study of Lebanese tweets [0.0]
On October 17, 2019, Lebanon witnessed the start of a revolution; the LebanonUprising hashtag went viral on Twitter.
A dataset of 100,000 tweets was collected between October 18 and 21.
We conducted a sentiment analysis study of the tweets in spoken Lebanese Arabic related to the LebanonUprising hashtag, using different machine learning algorithms.
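The kind of tweet sentiment classification this study describes can be sketched with a toy multinomial Naive Bayes classifier. This is a minimal illustration only: the miniature English corpus and labels below are hypothetical stand-ins for the study's Lebanese Arabic tweets, not its actual data or chosen algorithms.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Count word occurrences per sentiment class."""
    counts, totals = {}, Counter(labels)
    for text, label in zip(docs, labels):
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, totals

def predict(counts, totals, text):
    """Return the class with the highest add-one-smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    n = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n)      # class prior
        denom = sum(c.values()) + len(vocab)     # add-one smoothing denominator
        for w in text.lower().split():
            score += math.log((c[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical labelled tweets standing in for the study's corpus.
tweets = [
    ("the protest was inspiring and hopeful", "positive"),
    ("we stand together for change", "positive"),
    ("the situation is getting worse every day", "negative"),
    ("no electricity again this is unbearable", "negative"),
]
counts, totals = train_nb(*zip(*tweets))
print(predict(counts, totals, "together for change"))  # → positive
```

In practice a study like this would compare several such classifiers (e.g. Naive Bayes, SVMs, logistic regression) over a far larger, dialect-specific corpus.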
arXiv Detail & Related papers (2020-09-30T05:50:08Z)
- Migration and Refugee Crisis: a Critical Analysis of Online Public Perception [2.9005223064604078]
Migration rates and levels of resentment towards migrants are important issues in modern civilisation.
We analyse sentiment and the associated context of expressions in a vast collection of tweets related to the EU refugee crisis.
Our study reveals a marginally higher proportion of negative sentiment towards migrants, and a large share of that negative sentiment comes from ordinary users.
arXiv Detail & Related papers (2020-07-20T02:04:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.