Algorithmic Identity Based on Metaparameters: A Path to Reliability, Auditability, and Traceability
- URL: http://arxiv.org/abs/2601.16234v1
- Date: Wed, 21 Jan 2026 07:35:14 GMT
- Title: Algorithmic Identity Based on Metaparameters: A Path to Reliability, Auditability, and Traceability
- Authors: Juliao Braga, Percival Henriques, Juliana C. Braga, Itana Stiubiener,
- Abstract summary: The use of algorithms is increasing across various fields such as healthcare, justice, finance, and education. This article explores the potential of the Digital Object Identifier (DOI) to identify algorithms. The use of DOIs facilitates tracking the origin of algorithms, enables audits, prevents biases, promotes research reproducibility, and strengthens ethical considerations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of algorithms is increasing across various fields such as healthcare, justice, finance, and education. This growth has significantly accelerated with the advent of Artificial Intelligence (AI) technologies based on Large Language Models (LLMs) since 2022. This expansion presents substantial challenges related to accountability, ethics, and transparency. This article explores the potential of the Digital Object Identifier (DOI) to identify algorithms, aiming to enhance accountability, transparency, and reliability in their development and application, particularly in AI agents and multimodal LLMs. The use of DOIs facilitates tracking the origin of algorithms, enables audits, prevents biases, promotes research reproducibility, and strengthens ethical considerations. The discussion addresses the challenges and solutions associated with maintaining algorithms identified by DOI, their application in API security, and the proposal of a cryptographic authentication protocol.
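The abstract mentions a cryptographic authentication protocol for DOI-identified algorithms in API settings. As a rough, hedged illustration of what such a protocol could look like, here is a minimal challenge-response sketch in Python: the DOI, the key registry, and the message layout are illustrative assumptions, not the protocol actually specified in the paper.

```python
import hmac
import hashlib
import secrets

# Hypothetical key registry mapping a DOI-identified algorithm to a shared key.
REGISTRY = {"10.5281/zenodo.0000000": secrets.token_bytes(32)}

def make_challenge() -> bytes:
    """Server side: a fresh random nonce per authentication attempt."""
    return secrets.token_bytes(16)

def respond(doi: str, key: bytes, challenge: bytes) -> bytes:
    """Client side: MAC over the DOI and the server's challenge."""
    return hmac.new(key, doi.encode() + challenge, hashlib.sha256).digest()

def verify(doi: str, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    key = REGISTRY.get(doi)
    if key is None:
        return False
    expected = hmac.new(key, doi.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

doi = "10.5281/zenodo.0000000"
challenge = make_challenge()
tag = respond(doi, REGISTRY[doi], challenge)
assert verify(doi, challenge, tag)
assert not verify(doi, make_challenge(), tag)  # a stale challenge fails
```

Binding the DOI into the MAC input ties each authentication attempt to a specific registered algorithm, so a valid response for one DOI cannot be replayed for another.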
Related papers
- Copyright Detection in Large Language Models: An Ethical Approach to Generative AI Development [0.0]
This paper introduces an open-source copyright detection platform that enables content creators to verify whether their work was used in training datasets. With an intuitive user interface and scalable backend, this framework contributes to increasing transparency in AI development and ethical compliance.
arXiv Detail & Related papers (2025-11-25T18:46:14Z)
- Executable Knowledge Graphs for Replicating AI Research [65.41207324831583]
Executable Knowledge Graphs (xKG) is a modular and pluggable knowledge base that automatically integrates technical insights, code snippets, and domain-specific knowledge extracted from scientific literature. Code will be released at https://github.com/zjunlp/xKG.
arXiv Detail & Related papers (2025-10-20T17:53:23Z)
- Does Machine Unlearning Truly Remove Knowledge? [80.83986295685128]
We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z)
- A Framework for Cryptographic Verifiability of End-to-End AI Pipelines [0.8075866265341175]
We propose a framework for complete verifiable AI pipelines, identifying key components and analyzing existing cryptographic approaches. This framework could be used to combat misinformation by providing cryptographic proofs alongside AI-generated assets.
arXiv Detail & Related papers (2025-03-28T16:20:57Z)
- An Overview of Large Language Models for Statisticians [109.38601458831545]
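The idea of a cryptographic proof accompanying an AI-generated asset can be sketched with a simple hash chain over pipeline stages. This is a hypothetical illustration only; the stage labels and record fields are assumptions, not the framework proposed in the paper.

```python
import hashlib
import json

def stage_digest(prev_digest: str, stage: dict) -> str:
    """Chain each pipeline stage record onto the previous digest."""
    payload = prev_digest + json.dumps(stage, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Illustrative pipeline: training, inference, and the final asset hash.
asset = b"generated image bytes"
stages = [
    {"step": "training", "dataset": "example-corpus-v1"},
    {"step": "inference", "model": "example-model-1.0"},
    {"step": "output", "sha256": hashlib.sha256(asset).hexdigest()},
]

digest = ""
for stage in stages:
    digest = stage_digest(digest, stage)

# A verifier holding the stage records can recompute the final digest;
# tampering with any stage or with the asset bytes changes it.
print(digest)
```

A published final digest lets anyone with the stage records check end-to-end integrity, since altering any intermediate record breaks the chain.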
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts containing jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- algoTRIC: Symmetric and asymmetric encryption algorithms for Cryptography -- A comparative analysis in AI era [0.0]
This paper presents a comparative analysis of symmetric (SE) and asymmetric encryption (AE) algorithms, focusing on their role in securing sensitive information in AI-driven environments. The paper concludes by addressing the security concerns that encryption algorithms must tackle in the age of AI.
arXiv Detail & Related papers (2024-12-12T16:25:39Z)
- Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools [0.0]
The integration of artificial intelligence (AI) and optimization holds substantial promise for improving the efficiency, reliability, and resilience of engineered systems.
This paper identifies ethical considerations required when deploying algorithms at the intersection of AI and optimization.
Rather than providing a prescriptive set of rules, this paper aims to foster reflection and awareness among researchers.
arXiv Detail & Related papers (2024-09-17T18:37:53Z)
- Building Intelligence Identification System via Large Language Model Watermarking: A Survey and Beyond [35.13949723065787]
Large Language Models (LLMs) are increasingly integrated into diverse industries, posing substantial security risks due to unauthorized replication and misuse.
We propose a mathematical framework based on mutual information theory, which systematizes the identification process to achieve more precise and customized watermarking.
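The mutual-information framing above can be illustrated with a toy empirical estimate: how much an observed output feature reveals about a hidden watermark bit. This sketch is not the paper's framework; the plug-in estimator and the binary features are illustrative assumptions.

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(W; Y) in bits from paired samples (w, y)."""
    n = len(pairs)
    joint = Counter(pairs)
    pw = Counter(w for w, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (w, y), count in joint.items():
        p_wy = count / n
        mi += p_wy * log2(p_wy / ((pw[w] / n) * (py[y] / n)))
    return mi

# Perfectly correlated watermark bit and feature: 1 bit of identifying signal.
assert abs(mutual_information([(0, 0), (1, 1)] * 50) - 1.0) < 1e-9
# Independent watermark bit and feature: no identifying signal.
assert abs(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25)) < 1e-9
```

In this framing, a watermark is identifiable to the extent that model outputs carry nonzero mutual information about the embedded bits.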
arXiv Detail & Related papers (2024-07-15T07:20:02Z)
- Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.