Yes We Care! -- Certification for Machine Learning Methods through the
Care Label Framework
- URL: http://arxiv.org/abs/2105.10197v1
- Date: Fri, 21 May 2021 08:15:21 GMT
- Title: Yes We Care! -- Certification for Machine Learning Methods through the
Care Label Framework
- Authors: Katharina Morik and Helena Kotthaus and Lukas Heppe and Danny Heinrich
and Raphael Fischer and Sascha Mücke and Andreas Pauly and Matthias Jakobs
and Nico Piatkowski
- Abstract summary: We propose a unified framework that certifies learning methods via care labels.
Care labels are easy to understand and draw inspiration from well-known certificates like textile labels or property cards of electronic devices.
- Score: 5.189820825770516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning applications have become ubiquitous. They range
from machine-embedded control in production, through process optimization in
diverse areas (e.g., traffic, finance, sciences), to direct user interactions
like advertising and recommendations. This has led to increased efforts to
make machine learning trustworthy. Explainable and fair AI have already
matured. They address knowledgeable users and application engineers. However,
there are users who want to deploy a learned model in much the same way as
they use their washing machine. These stakeholders do not want to spend time
understanding the model.
Instead, they want to rely on guaranteed properties. What are the relevant
properties? How can they be expressed to stakeholders without presupposing
machine learning knowledge? How can they be guaranteed for a certain
implementation of a model? These questions move far beyond the current
state-of-the-art and we want to address them here. We propose a unified
framework that certifies learning methods via care labels. They are easy to
understand and draw inspiration from well-known certificates like textile
labels or property cards of electronic devices. Our framework considers both
the machine learning theory and a given implementation. We test the
implementation's compliance with theoretical properties and bounds. In this
paper, we illustrate care labels by a prototype implementation of a
certification suite for a selection of probabilistic graphical models.
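To make the testing idea concrete, here is a minimal sketch of what one
care-label-style compliance check could look like: an implementation is run
repeatedly and its observed behavior is compared against a proven theoretical
bound. The function names and the choice of Hoeffding's inequality are
illustrative assumptions for this sketch, not the paper's actual certification
suite.

```python
# Minimal sketch of a care-label-style compliance test (illustrative only;
# `care_label_check` and `hoeffding_bound` are hypothetical names, not the
# authors' API). The check asks: does the implementation's observed failure
# rate stay within the theoretically guaranteed bound?

import math
import random


def hoeffding_bound(n: int, epsilon: float) -> float:
    """Proven upper bound on P(|mean - p| >= epsilon) for n samples in [0, 1]."""
    return 2.0 * math.exp(-2.0 * n * epsilon ** 2)


def care_label_check(p: float = 0.3, n: int = 1000, epsilon: float = 0.05,
                     trials: int = 2000, seed: int = 0) -> bool:
    """Empirically test whether observed deviations respect the theoretical bound."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= epsilon:
            violations += 1
    observed = violations / trials
    # A care label would report "pass" only if the measured failure rate does
    # not exceed the guarantee that the theory promises.
    return observed <= hoeffding_bound(n, epsilon)


if __name__ == "__main__":
    print("bound respected:", care_label_check())
```

A full suite would run many such checks (e.g., runtime and memory bounds of
inference in probabilistic graphical models) and condense the outcomes into a
single, easy-to-read label.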
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- Formal and Practical Elements for the Certification of Machine Learning
Systems [0.9217021281095907]
We show how the parameters of machine learning models are neither hand-coded nor derived from physics but learned from data.
As a result, requirements cannot be directly traced to lines of code, hindering the current bottom-up aerospace certification paradigm.
Based on a scalable statistical verifier, our proposed framework is model-agnostic and tool-independent.
arXiv Detail & Related papers (2023-10-05T00:20:59Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- Lessons from Formally Verified Deployed Software Systems (Extended version) [65.69802414600832]
This article examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use.
It considers the technologies used, the form of verification applied, the results obtained, and the lessons that the software industry should draw regarding its ability to benefit from formal verification techniques and tools.
arXiv Detail & Related papers (2023-01-05T18:18:46Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph
Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Implementation of an Automated Learning System for Non-experts [26.776682627968476]
This paper details the engineering implementation of an automated machine learning system called YMIR.
After importing training/validation data into the system, a user without AI knowledge can label the data, train models, perform data mining and evaluation by simply clicking buttons.
The code of the system has already been released to GitHub.
arXiv Detail & Related papers (2022-03-26T00:28:29Z)
- A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims to provide intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z)
- The Care Label Concept: A Certification Suite for Trustworthy and
Resource-Aware Machine Learning [5.684803689061448]
Machine learning applications have become ubiquitous. This has led to an increased effort of making machine learning trustworthy.
For those who do not want to invest time into understanding the method or the learned model, we offer care labels.
Care labels are the result of a certification suite that tests whether stated guarantees hold.
arXiv Detail & Related papers (2021-06-01T14:16:41Z)
- A Review of Formal Methods applied to Machine Learning [0.6853165736531939]
We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems.
We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software.
We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations.
arXiv Detail & Related papers (2021-04-06T12:48:17Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) as well as potential drawbacks (an anchoring effect toward the model's judgment and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
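As a rough illustration of the XAL interaction described in the entry above,
the following sketch pairs an uncertainty-sampling query with a simple local
explanation shown to the annotator. The toy data, the logistic-regression
model, and the weight-times-value explanation are assumptions made for this
sketch, not the study's exact setup.

```python
# Rough sketch of an XAL-style interaction (toy setup, not the paper's):
# query the most uncertain unlabeled instance and show the annotator a
# simple local explanation of the model's current belief.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # toy feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy labels
labeled = list(range(20))                        # small seed set of labels
pool = [i for i in range(200) if i not in labeled]

model = LogisticRegression().fit(X[labeled], y[labeled])

# Uncertainty sampling: pick the pool instance whose predicted probability
# is closest to 0.5, i.e., where the model is least certain.
proba = model.predict_proba(X[pool])[:, 1]
query = pool[int(np.argmin(np.abs(proba - 0.5)))]

# Local explanation (per-feature weight * value), displayed alongside the
# queried instance so the annotator sees why the model is uncertain.
contributions = model.coef_[0] * X[query]
for j, c in enumerate(contributions):
    print(f"feature {j}: contribution {c:+.3f}")
```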