A Policy-oriented Agent-based Model of Recruitment into Organized Crime
- URL: http://arxiv.org/abs/2001.03494v1
- Date: Fri, 10 Jan 2020 15:06:52 GMT
- Title: A Policy-oriented Agent-based Model of Recruitment into Organized Crime
- Authors: Gian Maria Campedelli, Francesco Calderoni, Mario Paolucci, Tommaso
Comunale, Daniele Vilone, Federico Cecconi, and Giulia Andrighetto
- Abstract summary: This study proposes the formalization, development and analysis of an agent-based model (ABM) that simulates a neighborhood of Palermo (Sicily).
Using empirical data on social, economic and criminal conditions of the area under analysis, we use a multi-layer network approach to simulate this scenario.
As the final goal, we test different policies to counter recruitment into organized crime groups (OCGs).
- Score: 0.6332429219530602
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Criminal organizations exploit their presence on territories and local
communities to recruit new workforce in order to carry out their criminal
activities and business. The ability to attract individuals is crucial for
maintaining power and control over the territories in which these groups are
settled. This study proposes the formalization, development and analysis of an
agent-based model (ABM) that simulates a neighborhood of Palermo (Sicily) with
the aim to understand the pathways that lead individuals to recruitment into
organized crime groups (OCGs). Using empirical data on social, economic and
criminal conditions of the area under analysis, we use a multi-layer network
approach to simulate this scenario. As the final goal, we test different
policies to counter recruitment into OCGs. These scenarios are based on two
different dimensions of prevention and intervention: (i) primary and secondary
socialization and (ii) law enforcement targeting strategies.
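The abstract describes an agent-based model in which agents embedded in a multi-layer network (e.g. kinship and criminal ties) can be recruited into OCGs, with socialization-based interventions tested as counter-policies. The following is a minimal, purely illustrative sketch of that kind of model: the layer names, transition probabilities, and the policy effect (halving recruitment risk) are assumptions for illustration, not the paper's calibrated parameters.

```python
import random

class Agent:
    """Hypothetical resident of the simulated neighborhood."""
    def __init__(self, aid, at_risk):
        self.aid = aid
        self.at_risk = at_risk      # illustrative socio-economic risk flag
        self.recruited = False

def simulate(n_agents=100, steps=50, socialization_policy=False, seed=42):
    rng = random.Random(seed)
    agents = [Agent(i, rng.random() < 0.3) for i in range(n_agents)]
    # Two illustrative network layers: household ties and ties to known offenders.
    kinship = {a.aid: rng.sample(range(n_agents), 3) for a in agents}
    criminal = {a.aid: rng.sample(range(n_agents), 2) for a in agents}
    # Seed a few initial OCG members.
    for a in agents[:5]:
        a.recruited = True
    for _ in range(steps):
        for a in agents:
            if a.recruited:
                continue
            # Exposure: fraction of neighbors across both layers already recruited.
            neigh = kinship[a.aid] + criminal[a.aid]
            exposure = sum(agents[j].recruited for j in neigh) / len(neigh)
            p = 0.05 * exposure * (2.0 if a.at_risk else 1.0)
            if socialization_policy:
                p *= 0.5            # assumed effect of a socialization intervention
            if rng.random() < p:
                a.recruited = True
    return sum(a.recruited for a in agents)

baseline = simulate()
with_policy = simulate(socialization_policy=True)
```

Recruitment here spreads through network exposure, so the intervention's effect compounds over time; the real model additionally calibrates agents' attributes against empirical data on the Palermo neighborhood.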
Related papers
- Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents [67.07177243654485]
This survey collects and analyzes the different threats faced by large language model-based agents.
We identify six key features of LLM-based agents, based on which we summarize the current research progress.
We select four representative agents as case studies to analyze the risks they may face in practical use.
arXiv Detail & Related papers (2024-11-14T15:40:04Z)
- ROMA-iQSS: An Objective Alignment Approach via State-Based Value Learning and ROund-Robin Multi-Agent Scheduling [44.276285521929424]
We introduce a decentralized state-based value learning algorithm that enables agents to independently discover optimal states.
Our theoretical analysis shows that our approach leads decentralized agents to an optimal collective policy.
Empirical experiments further demonstrate that our method outperforms existing decentralized state-based and action-based value learning strategies.
arXiv Detail & Related papers (2024-04-05T09:39:47Z)
- Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization [63.554226552130054]
Generalization poses a significant challenge in Multi-agent Reinforcement Learning (MARL).
The extent to which an agent is influenced by unseen co-players depends on the agent's policy and the specific scenario.
We present the Level of Influence (LoI), a metric quantifying the interaction intensity among agents within a given scenario and environment.
arXiv Detail & Related papers (2023-10-11T06:09:26Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Emergent Behaviors in Multi-Agent Target Acquisition [0.0]
We simulate a Multi-Agent System (MAS) using Reinforcement Learning (RL) in a pursuit-evasion game.
We create different adversarial scenarios by replacing RL-trained pursuers' policies with two distinct (non-RL) analytical strategies.
The novelty of our approach entails the creation of an influential feature set that reveals underlying data regularities.
arXiv Detail & Related papers (2022-12-15T15:20:58Z)
- Strategic Decision-Making in the Presence of Information Asymmetry: Provably Efficient RL with Algorithmic Instruments [55.41685740015095]
We study offline reinforcement learning under a novel model called strategic MDP.
We propose a novel algorithm, Pessimistic policy Learning with Algorithmic iNstruments (PLAN).
arXiv Detail & Related papers (2022-08-23T15:32:44Z)
- Influence-based Reinforcement Learning for Intrinsically-motivated Agents [0.0]
We present an algorithmic framework of two reinforcement learning agents each with a different objective.
We introduce a novel function approximation approach to assess the influence $F$ of a certain policy on others.
Our method was evaluated on the suite of OpenAI gym tasks as well as cooperative and mixed scenarios.
arXiv Detail & Related papers (2021-08-28T05:36:10Z)
- Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot [71.28884625011987]
Melting Pot is a MARL evaluation suite that uses reinforcement learning to reduce the human labor required to create novel test scenarios.
We have created over 80 unique test scenarios covering a broad range of research topics.
We apply these test scenarios to standard MARL training algorithms, and demonstrate how Melting Pot reveals weaknesses not apparent from training performance alone.
arXiv Detail & Related papers (2021-07-14T17:22:14Z)
- Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness [116.804536884437]
We propose an opposite behavior aware framework for policy learning in goal-oriented dialogues.
We estimate the opposite agent's policy from its behavior and use this estimation to improve the target agent by regarding it as part of the target policy.
arXiv Detail & Related papers (2020-04-21T03:13:44Z)
- A Complex Networks Approach to Find Latent Clusters of Terrorist Groups [5.746505534720595]
We build a multi-partite network that includes terrorist groups and related information on tactics, weapons, targets, active regions.
We show that groups belonging to opposite ideologies can share very similar behaviors, and that Islamist/jihadist groups exhibit distinctive behavioral characteristics compared to the others.
arXiv Detail & Related papers (2020-01-10T10:08:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.