The Doctrine of Cyber Effect: An Ethics Framework for Defensive Cyber
Deception
- URL: http://arxiv.org/abs/2302.13362v1
- Date: Sun, 26 Feb 2023 17:41:47 GMT
- Title: The Doctrine of Cyber Effect: An Ethics Framework for Defensive Cyber
Deception
- Authors: Quanyan Zhu
- Abstract summary: This work focuses on the ethics of using defensive deception in cyberspace.
We propose a doctrine of cyber effect that incorporates five ethical principles: goodwill, deontology, no-harm, transparency, and fairness.
This doctrine has broader applicability, including for ethical issues such as AI accountability and controversies related to YouTube recommendations.
- Score: 22.102728605081534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lack of established rules and regulations in cyberspace stems from
the absence of agreed-upon ethical principles, making it difficult to establish
accountability, regulation, and law. Addressing this challenge requires
examining cyberspace from fundamental philosophical principles. This work
focuses on the ethics of using defensive deception in cyberspace, proposing a
doctrine of cyber effect that incorporates five ethical principles: goodwill,
deontology, no-harm, transparency, and fairness. To guide the design of
defensive cyber deception, we develop a reasoning framework, the game of
ethical duplicity, which is consistent with the doctrine. While originally
intended for cyber deception, this doctrine has broader applicability,
including for ethical issues such as AI accountability and controversies
related to YouTube recommendations. By establishing ethical principles, we can
promote greater accountability, regulation, and protection in the digital
realm.
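The abstract describes the "game of ethical duplicity" only at a high level, without a concrete formalization. The following is purely an illustrative sketch, not the authors' model: a toy two-player deception game in which the defender's choice is filtered through an (assumed) no-harm constraint before payoff maximization, loosely mirroring how the doctrine's principles could constrain strategy selection. All action names, payoffs, and the `harms_third_parties` check are assumptions invented for illustration.

```python
# Toy sketch of an ethically constrained deception game.
# NOT the paper's model: payoffs and the no-harm check are assumed.

# Defender chooses whether to deploy a deceptive honeypot or stay honest;
# the attacker chooses whether to probe. Payoffs are (defender, attacker).
payoffs = {
    ("deceive", "probe"):   (3, -2),  # attacker wastes effort on the decoy
    ("deceive", "refrain"): (1,  0),
    ("honest",  "probe"):   (-2, 2),  # real asset is compromised
    ("honest",  "refrain"): (0,  0),
}

def harms_third_parties(defender_action):
    """No-harm principle: deception must not spill over onto bystanders.
    Here we simply assume both actions are contained."""
    return False

def ethical_best_response(attacker_action):
    """Defender maximizes payoff, but only among actions that satisfy
    the (assumed) no-harm constraint."""
    candidates = [a for a in ("deceive", "honest")
                  if not harms_third_parties(a)]
    return max(candidates, key=lambda a: payoffs[(a, attacker_action)][0])

print(ethical_best_response("probe"))    # -> deceive
print(ethical_best_response("refrain"))  # -> deceive
```

The point of the sketch is structural: ethical principles enter as hard constraints on the strategy set rather than as terms in the payoff, which is one plausible reading of how a doctrine-consistent reasoning framework could be operationalized.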
Related papers
- Ethical Hacking and its role in Cybersecurity [0.0]
This review paper investigates the diverse functions of ethical hacking within modern cybersecurity.
It analyzes the progression of ethical hacking techniques, their use in identifying vulnerabilities and conducting penetration tests, and their influence on strengthening organizational security.
arXiv Detail & Related papers (2024-08-28T11:06:17Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns [13.075370397377078]
We use ongoing work on the Cyber Security Body of Knowledge (CyBOK) to help elicit and document the responsibilities and ethics of the profession.
Based on a literature review of the ethics of cybersecurity, we use CyBOK to frame the exploration of ethical challenges in the cybersecurity profession.
Our findings indicate that there are broad ethical challenges across the whole of cybersecurity, but also that different areas of cybersecurity can face specific ethical considerations.
arXiv Detail & Related papers (2023-11-16T19:44:03Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Macro Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions [1.864621482724548]
We develop a taxonomy of 21 normative ethical principles which can be operationalised in AI.
We envision this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.
arXiv Detail & Related papers (2022-08-12T08:48:16Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Ethics of AI: A Systematic Literature Review of Principles and Challenges [3.7129018407842445]
Transparency, privacy, accountability and fairness are identified as the most common AI ethics principles.
Lack of ethical knowledge and vague principles are reported as the significant challenges for considering ethics in AI.
arXiv Detail & Related papers (2021-09-12T15:33:43Z)
- AI virtues -- The missing link in putting AI ethics into practice [0.0]
The paper defines four basic AI virtues, namely justice, honesty, responsibility and care.
It defines two second-order AI virtues, prudence and fortitude, that bolster achieving the basic virtues.
arXiv Detail & Related papers (2020-11-25T14:14:47Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules, and specialized bodies that can oversee compliance with them.
This work proposes the creation, at universities, of ethics committees or commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.