Response to Office of the Privacy Commissioner of Canada Consultation
Proposals pertaining to amendments to PIPEDA relative to Artificial
Intelligence
- URL: http://arxiv.org/abs/2006.07025v1
- Date: Fri, 12 Jun 2020 09:20:04 GMT
- Authors: Mirka Snyder Caron (1) and Abhishek Gupta (1 and 2) ((1) Montreal AI
Ethics Institute, (2) Microsoft)
- Abstract summary: The Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments.
The present document includes MAIEI comments and recommendations in writing.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In February 2020, the Montreal AI Ethics Institute (MAIEI) was
invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide
comments, both at a closed roundtable and in writing, on the OPCC consultation
proposal for amendments, relative to Artificial Intelligence (AI), to the
Canadian privacy legislation, the Personal Information Protection and
Electronic Documents Act (PIPEDA). The present document contains MAIEI's
written comments and recommendations. In keeping with MAIEI's mission and
mandate to act as a catalyst for public feedback on AI ethics and regulatory
technology developments, and to offer public competence-building workshops on
critical topics in these domains, the reader will also find public feedback
and propositions from Montrealers who participated in MAIEI's workshops,
submitted as Schedule 1 to the present report. For each of the OPCC's 12
proposals, and their underlying questions, as described on its website, MAIEI
provides a short reply, a summary list of recommendations, and comments
relevant to the question at hand. We leave you with three general statements
to keep in mind while going through the next pages:
1) AI systems should be used to augment human capacity for meaningful and
purposeful connections and associations, not as a substitute for trust.
2) Humans have collectively accepted to uphold the rule of law, but for
machines, code is the rule. Where socio-technical systems are deployed to make
important decisions, profiles, or inferences about individuals, we will
increasingly have to attempt the difficult exercise of drafting and encoding
our law in a manner learnable by machines.
3) Let us work collectively towards a world where Responsible AI becomes the
rule, before our socio-technical systems become "too connected to fail".
Related papers
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers [20.16404495546234]
Several regulatory frameworks have been introduced by different countries worldwide.
Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools.
Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them.
We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research.
arXiv Detail & Related papers (2024-07-11T17:28:07Z)
- AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734]
Despite its importance, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the AI Act.
We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
arXiv Detail & Related papers (2024-06-26T09:51:49Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- The Design and Implementation of a National AI Platform for Public Healthcare in Italy: Implications for Semantics and Interoperability [62.997667081978825]
The Italian National Health Service is adopting Artificial Intelligence through its technical agencies.
Such a vast programme requires special care in formalising the knowledge domain.
Questions have been raised about the impact that AI could have on patients, practitioners, and health systems.
arXiv Detail & Related papers (2023-04-24T08:00:02Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
We went through several iterations of compiling the list of AI products and projects in and from Germany that the Lernende Systeme platform lists.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented into the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-awareness issues, such as granting individuals the right to an explanation of how an AI system makes a decision that impacts them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.