Towards a Goal-Centric Assessment of Requirements Engineering Methods for Privacy by Design
- URL: http://arxiv.org/abs/2601.16080v1
- Date: Thu, 22 Jan 2026 16:22:23 GMT
- Title: Towards a Goal-Centric Assessment of Requirements Engineering Methods for Privacy by Design
- Authors: Oleksandr Kosenkov, Ehsan Zabardast, Jannik Fischbach, Tony Gorschek, Daniel Mendez
- Abstract summary: Implementing privacy by design (PbD) according to the General Data Protection Regulation (GDPR) is met with a growing number of requirements engineering (RE) approaches. We suggest a goal-centric approach for PbD methods assessment.
- Score: 13.815715903288622
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Implementing privacy by design (PbD) according to the General Data Protection Regulation (GDPR) is met with a growing number of requirements engineering (RE) approaches. However, the question of which RE method for PbD fits best the goals of organisations remains a challenge. We report our endeavor to close this gap by synthesizing a goal-centric approach for PbD methods assessment. We used literature review, interviews, and validation with practitioners to achieve the goal of our study. As practitioners do not approach PbD systematically, we suggest that RE methods for PbD should be assessed against organisational goals, rather than process characteristics only. We hope that, when further developed, the goal-centric approach could support the development, selection, and tailoring of RE practices for PbD.
Related papers
- Privacy by Design: Aligning GDPR and Software Engineering Specifications with a Requirements Engineering Approach [14.785943510581923]
Legal knowledge should be captured in specifications to address the demands of different stakeholders and ensure compliance. Existing approaches do not account for the complex intersection between the problem and solution spaces.
arXiv Detail & Related papers (2025-10-24T15:59:34Z)
- Contrastive Dimension Reduction: A Systematic Review [14.568717191353244]
Contrastive dimension reduction (CDR) methods aim to extract signal unique to or enriched in a treatment (foreground) group relative to a control (background) group. In this review, we provide a systematic overview of existing CDR methods, highlight key applications and challenges, and identify open questions and future directions.
arXiv Detail & Related papers (2025-10-13T18:58:46Z) - Preference Robustness for DPO with Applications to Public Health [26.99327564250612]
We propose DPO-PRO, a robust fine-tuning algorithm based on Direct Preference Optimization (DPO). We evaluate DPO-PRO on a real-world maternal mobile health program operated by the non-profit organization ARMMAN.
arXiv Detail & Related papers (2025-09-02T18:10:32Z) - Value of Information-based Deceptive Path Planning Under Adversarial Interventions [26.543790095871433]
We propose a novel Markov decision process (MDP)-based model for the deceptive path planning problem under adversarial interventions. Using the value-of-information (VoI) objectives we propose, path planning agents deceive the adversarial observer into choosing suboptimal interventions.
arXiv Detail & Related papers (2025-03-31T16:31:29Z) - A Comprehensive Survey of Direct Preference Optimization: Datasets, Theories, Variants, and Applications [49.58110250828268]
Direct Preference Optimization (DPO) has emerged as a promising approach for alignment. Despite DPO's various advancements and inherent limitations, an in-depth review of these aspects is currently lacking in the literature.
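For orientation, the standard per-pair DPO objective can be sketched as below. This is an illustrative scalar version, not code from the surveyed paper; all argument names (`logp_w`, `ref_logp_w`, etc.) are hypothetical.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for a single preference pair (sketch).

    logp_w / logp_l: policy log-probabilities of the preferred and
    rejected responses; ref_logp_*: the same quantities under a frozen
    reference policy.
    """
    # Scaled margin between the implicit rewards of the two responses.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid: small when the preferred response is favored.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss drives the policy to assign the preferred response a larger log-probability gap over the reference than the rejected one, without training an explicit reward model.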
arXiv Detail & Related papers (2024-10-21T02:27:24Z) - Constrained Reinforcement Learning with Average Reward Objective: Model-Based and Model-Free Algorithms [34.593772931446125]
This monograph explores various model-based and model-free approaches for constrained reinforcement learning within the context of average-reward Markov decision processes (MDPs).
The primal-dual policy gradient-based algorithm is explored as a solution for constrained MDPs.
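The primal-dual idea mentioned above can be conveyed with a minimal scalar sketch; the function and argument names are illustrative placeholders, not the monograph's algorithm.

```python
def primal_dual_update(theta, lam, grad_reward, grad_cost, avg_cost, budget,
                       lr_theta=0.1, lr_lam=0.05):
    """One primal-dual step for a constrained MDP (scalar sketch).

    Primal: gradient ascent on the Lagrangian
        L(theta, lam) = J_r(theta) - lam * (J_c(theta) - budget).
    Dual: projected (non-negative) ascent on the multiplier lam, which
    grows only while the average-cost constraint is violated.
    """
    theta_new = theta + lr_theta * (grad_reward - lam * grad_cost)
    lam_new = max(0.0, lam + lr_lam * (avg_cost - budget))
    return theta_new, lam_new
```

The multiplier acts as an adaptive penalty: it inflates the effective cost of constraint-violating behavior in the primal step and decays back toward zero once the constraint is satisfied.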
arXiv Detail & Related papers (2024-06-17T12:46:02Z) - Towards an Enforceable GDPR Specification [49.1574468325115]
Privacy by Design (PbD) is prescribed by modern privacy regulations such as the EU's GDPR.
One emerging technique to realize PbD is enforcement (RE).
We present a set of requirements and an iterative methodology for creating formal specifications of legal provisions.
arXiv Detail & Related papers (2024-02-27T09:38:51Z) - Let's reward step by step: Step-Level reward model as the Navigators for
Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks, and we observe similarly improved performance in code generation tasks.
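The greedy, PRM-guided search described above can be sketched as follows; `candidates_per_step` and `score_step` are hypothetical stand-ins for an LLM sampler and a trained process-supervised reward model.

```python
def greedy_reasoning_search(candidates_per_step, score_step):
    """Greedy selection of a reasoning path using step-level scores.

    candidates_per_step: for each step, a list of candidate next steps;
    score_step(prefix, step) -> float plays the role of the PRM.
    """
    path = []
    for candidates in candidates_per_step:
        # Keep the candidate the reward model scores highest given the prefix.
        best = max(candidates, key=lambda step: score_step(path, step))
        path.append(best)
    return path
```

At each step the search commits to the locally best-scored continuation, so the PRM's step-level feedback, rather than only a final-answer reward, shapes the explored path.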
arXiv Detail & Related papers (2023-10-16T05:21:50Z) - Secrets of RLHF in Large Language Models Part I: PPO [81.01936993929127]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence.
Reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit.
In this report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training.
arXiv Detail & Related papers (2023-07-11T01:55:24Z) - Provably Efficient UCB-type Algorithms For Learning Predictive State
Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
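To convey the optimism-under-uncertainty idea behind UCB-type methods, here is the classic bandit-style index; note this generic form is illustrative only, whereas the paper's bonus term bounds the total variation distance between estimated and true PSR models.

```python
import math

def ucb_score(mean_reward, count, total_rounds, c=1.0):
    """Generic UCB index (illustrative sketch, not the paper's bonus).

    The exploration bonus shrinks as an option is tried more often,
    so under-explored options remain attractive.
    """
    if count == 0:
        return float("inf")  # untried options are maximally optimistic
    return mean_reward + c * math.sqrt(math.log(total_rounds) / count)
```

The agent acts greedily with respect to this optimistic index, which is what yields the exploration guarantees UCB-type analyses rely on.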
arXiv Detail & Related papers (2023-07-01T18:35:21Z) - Imitating Graph-Based Planning with Goal-Conditioned Policies [72.61631088613048]
We present a self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy.
We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods.
arXiv Detail & Related papers (2023-03-20T14:51:10Z) - Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in
Partially Observed Markov Decision Processes [65.91730154730905]
In applications of offline reinforcement learning to observational data, such as in healthcare or education, a general concern is that observed actions might be affected by unobserved factors.
Here we tackle this by considering off-policy evaluation in a partially observed Markov decision process (POMDP).
We extend the framework of proximal causal inference to our POMDP setting, providing a variety of settings where identification is made possible.
arXiv Detail & Related papers (2021-10-28T17:46:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.