A Framework for Assessing Proportionate Intervention with Face
Recognition Systems in Real-Life Scenarios
- URL: http://arxiv.org/abs/2402.05731v1
- Date: Thu, 8 Feb 2024 15:07:21 GMT
- Title: A Framework for Assessing Proportionate Intervention with Face
Recognition Systems in Real-Life Scenarios
- Authors: Pablo Negri and Isabelle Hupont and Emilia Gomez
- Abstract summary: Face recognition (FR) has reached a high technical maturity but its use needs to be carefully assessed from an ethical perspective.
Recent AI policies propose that such FR interventions should be proportionate and deployed only when strictly necessary.
This paper proposes a framework to contribute to assessing whether an FR intervention is proportionate or not for a given context of use.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face recognition (FR) has reached a high technical maturity. However, its use
needs to be carefully assessed from an ethical perspective, especially in
sensitive scenarios. This is precisely the focus of this paper: the use of FR
for the identification of specific subjects in moderately to densely crowded
spaces (e.g. public spaces, sports stadiums, train stations) and law
enforcement scenarios. In particular, there is a need to consider the trade-off
between protecting citizens' privacy and fundamental rights and ensuring their
safety. Recent Artificial Intelligence (AI) policies, notably the
European AI Act, propose that such FR interventions should be proportionate and
deployed only when strictly necessary. Nevertheless, concrete guidelines on how
to address the concept of proportional FR intervention are lacking to date.
This paper proposes a framework to contribute to assessing whether an FR
intervention is proportionate or not for a given context of use in the
above-mentioned scenarios. It also identifies the main quantitative and qualitative
variables relevant to the FR intervention decision (e.g. number of people in
the scene, level of harm that the person(s) in search could perpetrate,
consequences to individual rights and freedoms) and proposes a 2D graphical
model making it possible to balance these variables in terms of ethical cost vs
security gain. Finally, different FR scenarios inspired by real-world
deployments validate the proposed model. The framework is conceived as a simple
support tool for decision makers when confronted with the deployment of an FR
system.
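Below is a minimal sketch of the balancing idea behind the 2D ethical cost vs. security gain model. The variable names, weights, and decision rule are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a 2D "ethical cost vs. security gain" balance.
# Weights and the decision rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FRContext:
    crowd_size: float      # people in the scene, normalized to [0, 1]
    harm_level: float      # potential harm of the searched person(s), [0, 1]
    rights_impact: float   # consequences to rights and freedoms, [0, 1]

def ethical_cost(ctx: FRContext) -> float:
    # More people scanned and a stronger rights impact raise the ethical cost.
    return 0.5 * ctx.crowd_size + 0.5 * ctx.rights_impact

def security_gain(ctx: FRContext) -> float:
    # Greater potential harm prevented means a greater security gain.
    return ctx.harm_level

def is_proportionate(ctx: FRContext, margin: float = 0.0) -> bool:
    # Deem the intervention proportionate when the security gain exceeds
    # the ethical cost by at least the required margin.
    return security_gain(ctx) >= ethical_cost(ctx) + margin

# Example: searching for a high-harm suspect in a moderately crowded station.
ctx = FRContext(crowd_size=0.6, harm_level=0.9, rights_impact=0.4)
print(ethical_cost(ctx), security_gain(ctx), is_proportionate(ctx))
```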
Related papers
- FairPFN: A Tabular Foundation Model for Causal Fairness [39.83807136585407]
Causal fairness provides a transparent, human-in-the-loop framework to mitigate algorithmic discrimination. We propose FairPFN, a model pre-trained on synthetic causal fairness data to identify and mitigate the causal effects of protected attributes in its predictions.
arXiv Detail & Related papers (2025-06-08T09:15:45Z)
- Fairness in Federated Learning: Fairness for Whom? [4.276697874428501]
We argue that existing approaches tend to optimize narrow system-level metrics while overlooking how harms arise throughout the FL lifecycle. Our analysis reveals five recurring pitfalls: 1) fairness framed solely through the lens of the server-client architecture, 2) a mismatch between simulations and motivating use-cases and contexts, 3) definitions that conflate protecting the system with protecting its users, 4) interventions that target isolated stages of the lifecycle while ignoring upstream and downstream effects, and 5) a lack of multi-stakeholder alignment where multiple fairness definitions can be relevant at once.
arXiv Detail & Related papers (2025-05-27T10:41:19Z)
- Does Representation Intervention Really Identify Desired Concepts and Elicit Alignment? [73.80382983108997]
Representation intervention aims to locate and modify the representations that encode the underlying concepts in Large Language Models. If the interventions are faithful, the intervened LLMs should erase the harmful concepts and be robust to both in-distribution adversarial prompts and out-of-distribution jailbreaks. We propose Concept Concentration (COCA), which simplifies the decision boundary between harmful and benign representations.
arXiv Detail & Related papers (2025-05-24T12:23:52Z)
- Mitigating Bias in Facial Recognition Systems: Centroid Fairness Loss Optimization [9.537960917804993]
Societal demand for fair AI systems has put pressure on the research community to develop predictive models that meet new fairness criteria.
In particular, the variability of the errors made by certain Facial Recognition (FR) systems across specific segments of the population compromises their deployment.
We propose a novel post-processing approach to improve the fairness of pre-trained FR models by optimizing a regression loss which acts on centroid-based scores.
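A minimal sketch of how a centroid-based fairness loss might look, assuming embeddings with identity and demographic-group labels as inputs; the paper's exact score definition and regression targets are not reproduced here.

```python
# Hypothetical post-processing loss on centroid-based scores; the
# paper's actual formulation may differ. Inputs are assumed to be
# embeddings with identity labels and demographic-group labels.
import torch
import torch.nn.functional as F

def centroid_scores(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of each embedding to its identity centroid."""
    scores = torch.zeros(len(embeddings))
    for identity in labels.unique():
        mask = labels == identity
        centroid = embeddings[mask].mean(dim=0, keepdim=True)
        scores[mask] = F.cosine_similarity(embeddings[mask], centroid)
    return scores

def centroid_fairness_loss(embeddings, labels, groups) -> torch.Tensor:
    # Regress each demographic group's mean centroid score toward the
    # global mean, so no group is systematically scored lower.
    scores = centroid_scores(embeddings, labels)
    target = scores.mean().detach()
    group_means = torch.stack([scores[groups == g].mean()
                               for g in groups.unique()])
    return ((group_means - target) ** 2).mean()
```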
arXiv Detail & Related papers (2025-04-27T22:17:44Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
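A rough Monte Carlo sketch of the true criticality definition above; the environment and policy interfaces (`env.actions`, `env.step`) are assumptions for illustration, not the paper's code.

```python
# Monte Carlo estimate of true criticality: the expected drop in return
# when the agent takes n consecutive random actions instead of following
# its policy. The env/policy interfaces here are assumed, not the paper's.
import random

def rollout(env, policy, start_state, deviate_steps=0):
    """Total episode reward, taking random actions for the first steps."""
    state, total, done, t = start_state, 0.0, False, 0
    while not done:
        if t < deviate_steps:
            action = random.choice(env.actions(state))  # forced deviation
        else:
            action = policy(state)                      # follow the policy
        state, reward, done = env.step(state, action)
        total += reward
        t += 1
    return total

def true_criticality(env, policy, state, n, samples=100):
    on_policy = sum(rollout(env, policy, state) for _ in range(samples)) / samples
    deviated = sum(rollout(env, policy, state, n) for _ in range(samples)) / samples
    return on_policy - deviated  # expected reward drop at this state
```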
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts [11.883785681042593]
DePrompt is a desensitization protection and effectiveness evaluation framework for prompts.
We integrate contextual attributes to define privacy types, achieving high-precision PII entity identification.
Our framework is adaptable to prompts and can be extended to text usability-dependent scenarios.
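As a toy illustration of prompt desensitization, the sketch below replaces simple PII patterns with typed placeholders; DePrompt's actual pipeline (contextual attributes, high-precision entity identification) is considerably richer.

```python
# Toy prompt desensitization: detect simple PII entities via regex and
# replace them with typed placeholders. Patterns are illustrative only;
# DePrompt's contextual, high-precision identification is not shown.
import re

PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d\s()-]{7,}\d",
}

def desensitize(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = re.sub(pattern, f"[{label}]", prompt)
    return prompt

print(desensitize("Contact Jane at jane.doe@example.com or +1 555 123 4567."))
```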
arXiv Detail & Related papers (2024-08-16T02:38:25Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
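As a hedged illustration of a zero-shot bias probe (not the paper's taxonomy or metrics), the sketch below compares CLIP's label probabilities for one image via the Hugging Face transformers API; the label set and image path are assumptions.

```python
# Zero-shot association probe with OpenAI's CLIP via Hugging Face
# transformers. The labels and image path are illustrative; comparing
# probability distributions across demographic subsets surfaces biases.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a doctor", "a photo of a nurse"]
image = Image.open("face.jpg")  # placeholder path
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```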
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- AI and ethics in insurance: a new solution to mitigate proxy discrimination in risk modeling [0.0]
Driven by the growing attention of regulators on the ethical use of data in insurance, the actuarial community must rethink pricing and risk selection practices.
Equity is a philosophical concept whose many definitions differ across jurisdictions and influence one another without having yet reached consensus.
We propose a novel method, not previously found in the literature, that uses concepts from linear algebra to reduce the risk of indirect discrimination.
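One common linear-algebra approach to removing a linear proxy effect is residualizing features against the protected attribute; whether this matches the paper's method is an assumption, and the sketch below is only illustrative.

```python
# Residualize features against a protected attribute: subtract each
# feature's least-squares projection onto the (centered) attribute so
# no feature retains a linear correlation with it. Illustrative only;
# the paper's specific linear-algebra method may differ.
import numpy as np

def remove_linear_proxy(X: np.ndarray, s: np.ndarray) -> np.ndarray:
    s = s - s.mean()                  # center the protected attribute
    coef = X.T @ s / (s @ s)          # least-squares slope per feature
    return X - np.outer(s, coef)      # subtract the projection onto s

X = np.random.randn(100, 5)           # synthetic feature matrix
s = np.random.randint(0, 2, 100).astype(float)
X_fair = remove_linear_proxy(X, s)
print(np.abs(X_fair.T @ (s - s.mean())).max())  # ~0: correlation removed
```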
arXiv Detail & Related papers (2023-07-25T16:20:56Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
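A minimal sketch of a rejection-based randomized smoothing defense, assuming a generic `classify` function; the noise scale and agreement threshold are illustrative, and the paper's equity comparison itself is not reproduced.

```python
# Rejection-based randomized smoothing: classify noisy copies of the
# input and reject when the majority class is unstable under noise.
# `classify`, sigma, and the agreement threshold are assumptions.
import numpy as np

def smoothed_predict(classify, x, sigma=0.1, samples=100, min_agreement=0.7):
    votes = {}
    for _ in range(samples):
        label = classify(x + np.random.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])
    if top_count / samples < min_agreement:
        return None  # reject: prediction too unstable under noise
    return top_label
```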
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP of great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z)
- Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks addressing the latter, and push for broader efforts to implement these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
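A toy, plaintext sketch of vertical logistic regression with two parties holding disjoint feature columns; the encryption and protocol details that the privacy analysis targets are deliberately omitted, and the setup is an assumption.

```python
# Toy vertical logistic regression: party A and party B hold disjoint
# feature columns and exchange only partial logits; the label holder
# computes residuals. Real VFL frameworks add encryption and protocols.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
Xa, Xb = rng.normal(size=(64, 3)), rng.normal(size=(64, 2))  # A's / B's features
y = rng.integers(0, 2, 64)
wa, wb, lr = np.zeros(3), np.zeros(2), 0.1

for _ in range(200):
    logits = Xa @ wa + Xb @ wb       # sum of exchanged partial logits
    grad = sigmoid(logits) - y       # residual from the label holder
    wa -= lr * Xa.T @ grad / len(y)  # each party updates its own weights
    wb -= lr * Xb.T @ grad / len(y)
```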
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
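For orientation, the sketch below shows the PATE-style noisy vote aggregation that SF-PATE builds on; SF-PATE's fairness-aware aggregation itself is not reproduced, and the noise scale is illustrative.

```python
# PATE-style noisy-argmax aggregation of teacher votes (the mechanism
# SF-PATE builds on); the fairness-aware variant is not shown here.
import numpy as np

def noisy_aggregate(teacher_votes: np.ndarray, num_classes: int,
                    epsilon: float = 1.0) -> int:
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(counts.argmax())  # noisy majority label

votes = np.array([2, 2, 1, 2, 0, 2, 1, 2])  # eight teachers' labels
print(noisy_aggregate(votes, num_classes=3))
```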
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
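A hedged sketch of a gradient-based prediction-sensitivity score; the paper's exact ACCUMULATED PREDICTION SENSITIVITY weighting is not reproduced, and the `model` and `weights` inputs are assumptions.

```python
# Gradient-based sensitivity of a model's prediction to input features,
# accumulated with per-feature weights. Only the general idea is shown;
# the paper's exact metric definition may differ.
import torch

def prediction_sensitivity(model, x: torch.Tensor,
                           weights: torch.Tensor) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    prob = model(x).max()                 # confidence of the predicted class
    grad, = torch.autograd.grad(prob, x)  # d(confidence) / d(input features)
    return (weights * grad.abs()).sum()   # weighted, accumulated sensitivity
```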
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- RobFR: Benchmarking Adversarial Robustness on Face Recognition [41.296221656624716]
Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks.
To facilitate a better understanding of the adversarial vulnerability of FR, we develop an adversarial robustness evaluation library for FR named RobFR.
RobFR involves 15 popular naturally trained FR models, 9 models with representative defense mechanisms and 2 commercial FR API services.
arXiv Detail & Related papers (2020-07-08T13:39:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.