DPCL: a Language Template for Normative Specifications
- URL: http://arxiv.org/abs/2201.04477v1
- Date: Wed, 12 Jan 2022 13:51:11 GMT
- Title: DPCL: a Language Template for Normative Specifications
- Authors: Giovanni Sileno, Thomas van Binsbergen, Matteo Pascucci, Tom van
Engers
- Abstract summary: Legal core ontologies have been proposed to systematize concepts and relationships relevant to normative reasoning.
No solution amongst those has achieved general acceptance, and no common ground (representational, computational) has been identified.
This presentation will introduce DPCL, a domain-specific language (DSL) for specifying higher-level policies (including norms, contracts, etc.).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several solutions for specifying normative artefacts (norms, contracts,
policies) in a computationally processable way have been presented in the
literature. Legal core ontologies have been proposed to systematize concepts
and relationships relevant to normative reasoning. However, no solution amongst
those has achieved general acceptance, and no common ground (representational,
computational) has been identified enabling us to easily compare them. Yet, all
these efforts share the same motivation of representing normative directives,
therefore it is plausible that there may be a representational model
encompassing all of them. This presentation will introduce DPCL, a
domain-specific language (DSL) for specifying higher-level policies (including
norms, contracts, etc.), centred on Hohfeld's framework of fundamental legal
concepts. DPCL has to be seen primarily as a "template", i.e. as an
informational model for architectural reference, rather than a fully-fledged
formal language; it aims to make explicit the general requirements that should
be expected in a language for norm specification. In this respect, it goes
rather in the direction of legal core ontologies, but differently from those,
our proposal aims to keep the character of a DSL, rather than a set of axioms
in a logical framework: it is meant to be cross-compiled to underlying
languages/tools adequate to the type of target application. We provide here an
overview of some of the language features.
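To make the Hohfeldian core concrete, here is a minimal sketch of Hohfeld's duty/claim correlatives as plain data structures. The class and field names are assumptions chosen for illustration; they are not DPCL's actual syntax.

```python
from dataclasses import dataclass

# Illustrative sketch only: Hohfeld's fundamental legal concepts as plain
# data structures. Names and fields are hypothetical, not DPCL syntax.

@dataclass(frozen=True)
class Duty:
    holder: str        # party bound by the duty
    counterparty: str  # party to whom the duty is owed
    action: str        # the required conduct

@dataclass(frozen=True)
class Claim:
    holder: str        # party entitled to the conduct
    counterparty: str  # party bound to perform it
    action: str

def correlative_claim(duty: Duty) -> Claim:
    # In Hohfeld's scheme, A's duty towards B correlates with B's claim against A.
    return Claim(holder=duty.counterparty, counterparty=duty.holder, action=duty.action)

delivery_duty = Duty(holder="seller", counterparty="buyer", action="deliver the goods")
print(correlative_claim(delivery_duty))
```

The point of such a representation is that correlatives (duty/claim, power/liability) can be derived mechanically, which is one kind of requirement a norm-specification language would have to make explicit.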
Related papers
- Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations [61.141986747544024]
We present an approach that empowers application developers to tune a model to their particular values, social norms, laws and other regulations.
We lay out three main components of such an Alignment Studio architecture: Framers, Instructors, and Auditors.
arXiv Detail & Related papers (2024-03-08T21:26:49Z)
- Automated legal reasoning with discretion to act using s(LAW) [0.294944680995069]
Ethical and legal concerns make it necessary for automated reasoners to justify their conclusions in human-understandable terms.
We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns.
We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation and validate it by translating (and benchmarking) a representative use case.
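As a toy illustration of the justification idea described above (this is not s(CASP)/s(LAW) itself, and the rule format is hypothetical), each conclusion can carry a human-readable trace of which conditions held:

```python
# Toy sketch: every automated conclusion carries a human-readable
# justification. Rule names and fields are hypothetical placeholders.

def check_rule(rule, facts):
    """Return (holds, justification) for a rule given a set of established facts."""
    justification = []
    for condition in rule["conditions"]:
        if condition in facts:
            justification.append(f"condition satisfied: {condition}")
        else:
            justification.append(f"condition NOT satisfied: {condition}")
    holds = all(c in facts for c in rule["conditions"])
    return holds, justification

access_rule = {"name": "right_of_access", "conditions": ["is_resident", "has_valid_id"]}
holds, why = check_rule(access_rule, {"is_resident"})
for line in why:
    print(line)
```

A real top-down ASP execution model would derive such traces from the proof search itself rather than recomputing them, but the output shape is the same: a verdict plus the reasons for it.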
arXiv Detail & Related papers (2024-01-25T21:11:08Z)
- Towards Grammatical Tagging for the Legal Language of Cybersecurity [0.0]
Legal language can be understood as the language typically used by those engaged in the legal profession.
Recent cybersecurity legislation is, naturally, written in legal language.
This paper addresses the challenge of interpreting the legal language of cybersecurity.
arXiv Detail & Related papers (2023-06-29T15:39:20Z)
- Multilingual Conceptual Coverage in Text-to-Image Models [98.80343331645626]
"Conceptual Coverage Across Languages" (CoCo-CroLa) is a technique for benchmarking the degree to which any generative text-to-image system provides multilingual parity to its training language in terms of tangible nouns.
For each model we can assess "conceptual coverage" of a given target language relative to a source language by comparing the population of images generated for a series of tangible nouns in the source language to the population of images generated for each noun under translation in the target language.
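The population comparison described above can be sketched as a generic coverage score. This is not the CoCo-CroLa code; `image_similarity` is a hypothetical placeholder for a real image-embedding similarity function.

```python
# Illustrative sketch: mean best-match similarity of target-language images
# against the source-language population, for one tangible noun.

def coverage_score(source_images, target_images, image_similarity):
    """Mean best-match similarity of target images against the source population."""
    if not source_images or not target_images:
        return 0.0
    best_matches = [
        max(image_similarity(tgt, src) for src in source_images)
        for tgt in target_images
    ]
    return sum(best_matches) / len(best_matches)

# Toy usage with scalars standing in for image embeddings.
similarity = lambda a, b: 1.0 - abs(a - b)
print(coverage_score([0.2, 0.8], [0.2, 0.9], similarity))
```

Averaging such per-noun scores over a vocabulary gives one number per target language, which is the shape of result the benchmark reports.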
arXiv Detail & Related papers (2023-06-02T17:59:09Z)
- Prompting Language-Informed Distribution for Compositional Zero-Shot Learning [73.49852821602057]
The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts.
We propose a model that prompts the language-informed distribution (PLID) for this task.
Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.
arXiv Detail & Related papers (2023-05-23T18:00:22Z)
- DADA: Dialect Adaptation via Dynamic Aggregation of Linguistic Rules [64.93179829965072]
DADA is a modular approach to imbue SAE-trained models with multi-dialectal robustness.
We show that DADA is effective for both single-task and instruction fine-tuned language models.
arXiv Detail & Related papers (2023-05-22T18:43:31Z)
- Bridging between LegalRuleML and TPTP for Automated Normative Reasoning (extended version) [77.34726150561087]
LegalRuleML is an XML-based representation framework for modeling and exchanging normative rules.
The TPTP input and output formats are general-purpose standards for the interaction with automated reasoning systems.
We provide a bridge between the two communities by defining a logic-pluralistic normative reasoning language based on the TPTP format.
arXiv Detail & Related papers (2022-09-12T08:42:34Z)
- Norm Participation Grounds Language [16.726800816202033]
I propose a different, more wide-ranging conception of how grounding should be understood: what grounds language is its normative nature.
There are standards for doing things right; these standards are public and authoritative, while at the same time acceptance of that authority can be disputed and negotiated.
What grounds language, then, is the determined use that language users make of it, and what it is grounded in is the community of language users.
arXiv Detail & Related papers (2022-06-06T20:21:59Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- LexGLUE: A Benchmark Dataset for Legal Language Understanding in English [15.026117429782996]
We introduce the Legal General Language Evaluation (LexGLUE) benchmark, a collection of datasets for evaluating model performance across a diverse set of legal NLU tasks.
We also provide an evaluation and analysis of several generic and legal-oriented models demonstrating that the latter consistently offer performance improvements across multiple tasks.
arXiv Detail & Related papers (2021-10-03T10:50:51Z)
- Exploring Probabilistic Soft Logic as a framework for integrating top-down and bottom-up processing of language in a task context [0.6091702876917279]
The architecture integrates existing NLP components to produce candidate analyses on eight levels of linguistic modeling.
The architecture builds on Universal Dependencies (UD) as its representation formalism on the form level and on Abstract Meaning Representations (AMRs) to represent semantic analyses of learner answers.
arXiv Detail & Related papers (2020-04-15T11:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.