A Model-oriented Reasoning Framework for Privacy Analysis of Complex Systems
- URL: http://arxiv.org/abs/2405.08356v1
- Date: Tue, 14 May 2024 06:52:56 GMT
- Title: A Model-oriented Reasoning Framework for Privacy Analysis of Complex Systems
- Authors: Sebastian Rehms, Stefan Köpsell, Verena Klös, Florian Tschorsch
- Abstract summary: This paper proposes a reasoning framework for privacy properties of systems and their environments.
It can capture any knowledge leaks on different logical levels to answer the question: which entity can learn what?
- Score: 2.001711587270359
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a reasoning framework for privacy properties of systems and their environments that can capture knowledge leaks on different logical levels of the system to answer the question: which entity can learn what? By knowledge we mean any kind of data, meta-data, or interpretation thereof that might be relevant. To achieve this, we present a modeling framework that requires developers to describe explicitly which knowledge is available at which entity, which knowledge flows between entities, and which knowledge can be inferred from other knowledge. In addition, privacy requirements are specified as rules describing forbidden knowledge for entities. Our modeling approach is incremental, starting from an abstract view of the system and adding detail through well-defined transformations. This work is intended to complement existing approaches and takes steps towards more formal foundations for privacy-oriented analyses while keeping them as accessible as possible. It is designed to be extensible through schemata and vocabulary to enable compatibility with external requirements and standards.
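The abstract's core mechanism, entities holding knowledge, declared flows between entities, inference rules, and forbidden-knowledge requirements, can be sketched as a fixed-point computation. This is a minimal illustrative sketch, not the paper's actual formalism; all names and the example scenario are assumptions.

```python
# Illustrative sketch of a knowledge-flow model: entities hold knowledge
# items, knowledge propagates along declared flows, inference rules derive
# new knowledge, and privacy requirements forbid certain knowledge at
# certain entities. Names are hypothetical, not the paper's notation.

def close_knowledge(initial, flows, inferences):
    """Compute the fixed point of 'which entity can learn what?'."""
    knowledge = {e: set(items) for e, items in initial.items()}
    changed = True
    while changed:
        changed = False
        # Propagate knowledge along declared flows (sender -> receiver).
        for sender, receiver, item in flows:
            if item in knowledge[sender] and item not in knowledge[receiver]:
                knowledge[receiver].add(item)
                changed = True
        # Apply inference rules (premises -> conclusion) at every entity.
        for premises, conclusion in inferences:
            for known in knowledge.values():
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
    return knowledge

def violated(knowledge, forbidden):
    """Check privacy requirements given as forbidden knowledge per entity."""
    return [(e, item) for e, items in forbidden.items()
            for item in items if item in knowledge[e]]

# Hypothetical scenario: a service receives a pseudonym and also holds a
# linking fact, so it can infer the user's identity.
initial = {
    "user": {"identity", "pseudonym"},
    "service": {"link(pseudonym, identity)"},
}
flows = [("user", "service", "pseudonym")]
inferences = [({"pseudonym", "link(pseudonym, identity)"}, "identity")]

k = close_knowledge(initial, flows, inferences)
print(violated(k, {"service": {"identity"}}))  # -> [('service', 'identity')]
```

The fixed-point loop mirrors the abstract's question directly: it exhaustively applies flows and inferences until no entity can learn anything new, after which the forbidden-knowledge rules are checked against the closure.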
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates the parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z) - Establishing Knowledge Preference in Language Models [80.70632813935644]
Language models are known to encode a great amount of factual knowledge through pretraining.
Such knowledge might be insufficient to cater to user requests.
When answering questions about ongoing events, the model should use recent news articles to update its response.
When some facts are edited in the model, the updated facts should override all prior knowledge learned by the model.
arXiv Detail & Related papers (2024-07-17T23:16:11Z) - Permissible Knowledge Pooling [0.0]
This paper introduces new modal logics for knowledge pooling and sharing.
It also outlines their axiomatizations and discusses a potential framework for permissible knowledge pooling.
arXiv Detail & Related papers (2024-04-04T12:51:28Z) - Categorical semiotics: Foundations for Knowledge Integration [0.0]
We tackle the challenging task of developing a comprehensive framework for defining and analyzing deep learning architectures.
Our methodology employs graphical structures that resemble Ehresmann's sketches, interpreted within a universe of fuzzy sets.
This approach offers a unified theory that elegantly encompasses both deterministic and non-deterministic neural network designs.
arXiv Detail & Related papers (2024-04-01T23:19:01Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models that learn to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z) - Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog [12.081212540168055]
We present a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate that current methods suffer significant degradation in performance.
In line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the tasks of querying knowledge sources.
We demonstrate that our model is robust to perturbations to knowledge modality (source of information) and that it can fuse information from structured as well as unstructured knowledge to generate responses.
arXiv Detail & Related papers (2022-10-13T18:49:59Z) - Knowledge-grounded Dialog State Tracking [12.585986197627477]
We propose to perform dialog state tracking grounded on knowledge encoded externally.
We query relevant knowledge of various forms based on the dialog context.
We demonstrate superior performance of our proposed method over strong baselines.
arXiv Detail & Related papers (2022-10-13T01:34:08Z) - Combining pre-trained language models and structured knowledge [9.521634184008574]
Transformer-based language models have achieved state-of-the-art performance on various NLP benchmarks.
It has proven challenging to integrate structured information, such as knowledge graphs into these models.
We examine a variety of approaches to integrate structured knowledge into current language models and determine challenges, and possible opportunities to leverage both structured and unstructured information sources.
arXiv Detail & Related papers (2021-01-28T21:54:03Z) - KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA arises when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated, neither at training nor at test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.