The Care Label Concept: A Certification Suite for Trustworthy and
Resource-Aware Machine Learning
- URL: http://arxiv.org/abs/2106.00512v1
- Date: Tue, 1 Jun 2021 14:16:41 GMT
- Title: The Care Label Concept: A Certification Suite for Trustworthy and
Resource-Aware Machine Learning
- Authors: Katharina Morik and Helena Kotthaus and Lukas Heppe and Danny Heinrich
and Raphael Fischer and Andreas Pauly and Nico Piatkowski
- Abstract summary: Machine learning applications have become ubiquitous. This has led to an increased effort to make machine learning trustworthy.
For those who do not want to invest time into understanding the method or the learned model, we offer care labels.
Care labels are the result of a certification suite that tests whether stated guarantees hold.
- Score: 5.684803689061448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning applications have become ubiquitous. This has led to an
increased effort to make machine learning trustworthy. Explainable and fair
AI have already matured. They address knowledgeable users and application
engineers. For those who do not want to invest time into understanding the
method or the learned model, we offer care labels: easy to understand at a
glance, allowing for method or model comparisons, and, at the same time,
scientifically well-founded. On the one hand, this transforms descriptions as given
by, e.g., Fact Sheets or Model Cards, into a form that is well-suited for
end-users. On the other hand, care labels are the result of a certification
suite that tests whether stated guarantees hold. In this paper, we present two
experiments with our certification suite. One shows the care labels for
configurations of Markov random fields (MRFs). Based on the underlying theory
of MRFs, each choice leads to its specific rating of static properties such as
expressivity and reliability. In addition, the implementation is tested
and resource consumption is measured, yielding dynamic properties. This
two-level procedure is followed by another experiment certifying deep neural
network (DNN) models. There, we draw the static properties from the literature
on a particular model and data set. At the second level, experiments are
generated that deliver measurements of robustness against certain attacks. We
illustrate this by ResNet-18 and MobileNetV3 applied to ImageNet.
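To make the two-level procedure concrete, the following is a minimal sketch (under assumptions, not the authors' actual certification suite) of how static and dynamic properties could be combined into a care label: static ratings are simply taken as given from theory or literature, while dynamic properties come from exercising an implementation, here by timing forward passes of a torchvision ResNet-18 and reporting its parameter footprint. All function names, rating scales, and the runtime budget are illustrative assumptions.

```python
# Minimal, illustrative sketch of a two-level care label check (not the
# authors' certification suite). Level 1: static ratings asserted from
# theory/literature. Level 2: dynamic properties measured on the implementation.
import time

import torch
from torchvision.models import resnet18


def measure_dynamic_properties(model, input_shape=(1, 3, 224, 224), repeats=10):
    """Time repeated CPU forward passes and report the parameter footprint."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(repeats):
            model(x)
        elapsed = (time.perf_counter() - start) / repeats
    param_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6
    return {"inference_seconds": elapsed, "parameter_mb": param_mb}


def care_label(static_ratings, dynamic, runtime_budget_s=0.5):
    """Combine assumed static ratings with measured dynamic properties."""
    label = dict(static_ratings)  # e.g. expressivity, reliability
    label["runtime"] = "A" if dynamic["inference_seconds"] <= runtime_budget_s else "C"
    label["parameter_mb"] = round(dynamic["parameter_mb"], 1)
    return label


if __name__ == "__main__":
    model = resnet18(weights=None)  # untrained weights suffice for resource measurements
    static = {"expressivity": "B", "reliability": "A"}  # assumed, taken from literature
    print(care_label(static, measure_dynamic_properties(model)))
```

The robustness tests mentioned for the DNN experiments would slot into the same pattern, with accuracy under a chosen attack added as a further measured dynamic property.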
Related papers
- Laminator: Verifiable ML Property Cards using Hardware-assisted Attestations [10.278905067763686]
A malicious model provider can include false information in ML property cards, raising the need for verifiable ML property cards.
We show how to realize them using property attestations: technical mechanisms by which a prover (e.g., a model provider) can attest different ML properties during training and inference to a verifier (e.g., an auditor).
arXiv Detail & Related papers (2024-06-25T13:36:53Z) - Automated Trustworthiness Testing for Machine Learning Classifiers [3.3423762257383207]
This paper proposes TOWER, the first technique to automatically create trustworthiness oracles that determine whether text classifier predictions are trustworthy.
Our hypothesis is that a prediction is trustworthy if the words in its explanation are semantically related to the predicted class (a toy sketch of this idea follows the related-papers list below).
The results show that TOWER can detect a decrease in trustworthiness as noise increases, but is not effective when evaluated against the human-labeled dataset.
arXiv Detail & Related papers (2024-06-07T20:25:05Z) - Label-Retrieval-Augmented Diffusion Models for Learning from Noisy
Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real-world applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z) - Provable Robustness for Streaming Models with a Sliding Window [51.85182389861261]
In deep learning applications such as online content recommendation and stock market analysis, models use historical data to make predictions.
We derive robustness certificates for models that use a fixed-size sliding window over the input stream.
Our guarantees hold for the average model performance across the entire stream and are independent of stream size, making them suitable for large data streams.
arXiv Detail & Related papers (2023-03-28T21:02:35Z) - Solution for the EPO CodeFest on Green Plastics: Hierarchical
multi-label classification of patents relating to green plastics using deep
learning [4.050982413149992]
This work addresses hierarchical multi-label classification of patents disclosing technologies related to green plastics.
We first propose a classification scheme for this technology and a way to train a machine learning model that classifies patents into the proposed scheme.
To achieve this, we devise a strategy to automatically assign labels to patents, creating a labeled training dataset that can be used to train a classification model in a supervised learning setting.
arXiv Detail & Related papers (2023-02-22T19:06:58Z) - Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
arXiv Detail & Related papers (2022-08-04T05:32:20Z) - MOVE: Effective and Harmless Ownership Verification via Embedded
External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z) - A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims at providing intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z) - Yes We Care! -- Certification for Machine Learning Methods through the
Care Label Framework [5.189820825770516]
We propose a unified framework that certifies learning methods via care labels.
Care labels are easy to understand and draw inspiration from well-known certificates like textile labels or property cards of electronic devices.
arXiv Detail & Related papers (2021-05-21T08:15:21Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z)
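As referenced in the TOWER entry above, the following toy sketch illustrates its stated hypothesis: a prediction counts as trustworthy when the words of its explanation are semantically related to the predicted class. The embedding table, threshold, and function names are made-up assumptions for illustration; TOWER itself builds on real explanation methods and pretrained word embeddings.

```python
# Toy trustworthiness oracle in the spirit of the TOWER hypothesis: compare
# explanation words against the predicted class via embedding similarity.
import numpy as np

# Hand-crafted word vectors for illustration only; a real system would use
# pretrained embeddings (e.g., word2vec or GloVe).
TOY_EMBEDDINGS = {
    "spam":    np.array([0.9, 0.1, 0.0]),
    "lottery": np.array([0.8, 0.2, 0.1]),
    "winner":  np.array([0.7, 0.3, 0.0]),
    "meeting": np.array([0.1, 0.9, 0.2]),
}


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def trustworthiness_oracle(explanation_words, predicted_class, threshold=0.8):
    """Trustworthy iff the explanation words are, on average, close to the class label."""
    class_vec = TOY_EMBEDDINGS[predicted_class]
    sims = [cosine(TOY_EMBEDDINGS[w], class_vec)
            for w in explanation_words if w in TOY_EMBEDDINGS]
    return bool(sims) and sum(sims) / len(sims) >= threshold


if __name__ == "__main__":
    # Explanation words as an explainer such as LIME might return for one prediction.
    print(trustworthiness_oracle(["lottery", "winner"], "spam"))  # True
    print(trustworthiness_oracle(["meeting"], "spam"))            # False
```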
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences.