Algorithmic Robustness
- URL: http://arxiv.org/abs/2311.06275v1
- Date: Tue, 17 Oct 2023 17:51:12 GMT
- Title: Algorithmic Robustness
- Authors: David Jensen, Brian LaMacchia, Ufuk Topcu, Pamela Wisniewski
- Abstract summary: Robustness is an important enabler of other goals that are frequently cited in the context of public policy decisions about computational systems.
This document provides a brief roadmap to some of the concepts and existing research around the idea of algorithmic robustness.
- Score: 18.406992961818368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic robustness refers to the sustained performance of a computational
system in the face of change in the nature of the environment in which that
system operates or in the task that the system is meant to perform. Below, we
motivate the importance of algorithmic robustness, present a conceptual
framework, and highlight the areas of research for which algorithmic
robustness is relevant. Why robustness? Robustness is an important enabler of
other goals that are frequently cited in the context of public policy decisions
about computational systems, including trustworthiness, accountability,
fairness, and safety. Despite this dependence, it tends to be under-recognized
compared to these other concepts. This is unfortunate, because robustness is
often more immediately achievable than these other ultimate goals, which can be
more subjective and exacting. Thus, we highlight robustness as an important
goal for researchers, engineers, regulators, and policymakers when considering
the design, implementation, and deployment of computational systems. We urge
researchers and practitioners to elevate the attention paid to robustness when
designing and evaluating computational systems. For many key systems, the
immediate question after any demonstration of high performance should be: "How
robust is that performance to realistic changes in the task or environment?"
Greater robustness will set the stage for systems that are more trustworthy,
accountable, fair, and safe. Toward that end, this document provides a brief
roadmap to some of the concepts and existing research around the idea of
algorithmic robustness.
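The question posed above, "How robust is that performance to realistic changes in the task or environment?", can be made concrete with a minimal sketch. The classifier, data generator, and noise model below are all illustrative assumptions, not from the paper; the point is only that robustness is measured as the gap between performance on the original environment and performance after a realistic change.

```python
import random

# A minimal, hypothetical sketch: a threshold "model" evaluated on its
# original environment and on a shifted one (additive input noise stands
# in for a realistic change in the environment).

def make_data(n, noise=0.0, seed=0):
    """Points x in [0, 1] labeled 1 iff x > 0.5, with optional input noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random()
        label = 1 if x > 0.5 else 0
        data.append((x + rng.gauss(0, noise), label))
    return data

def threshold_model(x):
    return 1 if x > 0.5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

clean = make_data(1000)
shifted = make_data(1000, noise=0.2, seed=1)

acc_clean = accuracy(threshold_model, clean)
acc_shifted = accuracy(threshold_model, shifted)
robustness_gap = acc_clean - acc_shifted
print(f"clean={acc_clean:.2f} shifted={acc_shifted:.2f} gap={robustness_gap:.2f}")
```

A large gap under plausible shifts signals fragile performance even when headline accuracy is high, which is exactly the distinction the abstract asks evaluators to make.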
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Semi-Supervised Multi-Task Learning Based Framework for Power System Security Assessment [0.0]
This paper develops a novel machine learning-based framework using Semi-Supervised Multi-Task Learning (SS-MTL) for power system dynamic security assessment.
The learning algorithm underlying the proposed framework integrates conditional masked encoders and employs multi-task learning for classification-aware feature representation.
Various experiments on the IEEE 68-bus system were conducted to validate the proposed method.
arXiv Detail & Related papers (2024-07-11T22:42:53Z)
- Scalarisation-based risk concepts for robust multi-objective optimisation [4.12484724941528]
We study the multi-objective case of this problem.
We identify that the majority of robust multi-objective algorithms rely on two key operations: robustification and scalarisation.
As these operations are not necessarily commutative, the order that they are performed in has an impact on the resulting solutions.
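The non-commutativity noted in this summary can be illustrated with a toy example (the numbers and weights here are illustrative assumptions, not from the paper): robustification takes a worst case over scenarios, scalarisation takes a weighted sum over objectives, and the two orders generally give different values.

```python
# Objective values f[scenario][objective] for one candidate solution.
# Robustification = worst case over scenarios (max, for minimisation);
# scalarisation = weighted sum over objectives.
f = [
    [1.0, 0.0],  # scenario A
    [0.0, 1.0],  # scenario B
]
weights = [0.5, 0.5]

# Order 1: robustify each objective first, then scalarise.
robust_then_scalar = sum(
    w * max(f[s][i] for s in range(len(f)))
    for i, w in enumerate(weights)
)

# Order 2: scalarise each scenario first, then robustify.
scalar_then_robust = max(
    sum(w * f[s][i] for i, w in enumerate(weights))
    for s in range(len(f))
)

print(robust_then_scalar, scalar_then_robust)  # 1.0 vs 0.5
```

Because the worst case of a weighted sum is never larger than the weighted sum of per-objective worst cases, the two orderings can rank candidate solutions differently, which is why the order matters.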
arXiv Detail & Related papers (2024-05-16T16:11:00Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Investigating Robustness in Cyber-Physical Systems: Specification-Centric Analysis in the face of System Deviations [8.8690305802668]
A critical attribute of cyber-physical systems (CPS) is robustness, denoting their capacity to operate safely.
This paper proposes a novel specification-based robustness, which characterizes the effectiveness of a controller in meeting a specified system requirement.
We present an innovative two-layer simulation-based analysis framework designed to identify subtle robustness violations.
arXiv Detail & Related papers (2023-11-13T16:44:43Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems [41.71459547415086]
We analyze how resilience is achieved in networks of agents and multi-robot systems.
We argue that resilience must become a central engineering design consideration.
arXiv Detail & Related papers (2021-09-25T11:25:02Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
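The last summary's idea, that robustness can be defined as a property independent of accuracy, can be sketched in a few lines. The classifier, perturbation radius, and grid check below are illustrative assumptions, not the paper's construction: a prediction is called robust at a point if it is unchanged under all perturbations within a radius, whether or not it matches the true label.

```python
def predict(x):
    """A hypothetical 1-D threshold classifier."""
    return 1 if x > 0.5 else 0

def is_robust(model, x, eps, steps=100):
    """Check prediction stability on a grid over [x - eps, x + eps].

    Note: robustness here says nothing about whether the prediction is
    correct; it only asks whether the prediction is stable.
    """
    base = model(x)
    for k in range(steps + 1):
        xp = x - eps + (2 * eps) * k / steps
        if model(xp) != base:
            return False
    return True

# Far from the decision boundary: robust, regardless of the true label.
print(is_robust(predict, 0.9, eps=0.1))   # True
# Near the boundary: not robust, even if the prediction happens to be accurate.
print(is_robust(predict, 0.52, eps=0.1))  # False
```

Decoupling the two properties this way is what lets robustness be stated for any classification model, accurate or not.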
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.