A Framework for Democratizing AI
- URL: http://arxiv.org/abs/2001.00818v1
- Date: Wed, 1 Jan 2020 17:30:14 GMT
- Title: A Framework for Democratizing AI
- Authors: Shakkeel Ahmed, Ravi S. Mula, Soma S. Dhavala
- Abstract summary: Machine Learning and Artificial Intelligence are an integral part of the Fourth Industrial Revolution.
Democratizing AI is a multi-faceted problem, and it requires advancements in science, technology and policy.
We introduce an opinionated \texttt{mlsquare} framework that provides a single point of interface to a variety of solutions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning and Artificial Intelligence are considered an integral part
of the Fourth Industrial Revolution. Their impact, and far-reaching
consequences, while acknowledged, are yet to be comprehended. These
technologies are very specialized, and few organizations and select highly
trained professionals have the wherewithal, in terms of money, manpower, and
might, to chart the future. However, concentration of power can lead to
marginalization, causing severe inequalities. Regulatory agencies and
governments across the globe are creating national policies, and laws around
these technologies to protect the rights of the digital citizens, as well as to
empower them. Private, not-for-profit organizations are also contributing
to democratizing these technologies by making them \emph{accessible} and
\emph{affordable}. However, accessibility and affordability are but two of
the facets of democratizing the field. Others include, but are not limited
to, \emph{portability}, \emph{explainability}, \emph{credibility}, and
\emph{fairness}. As one can imagine, democratizing AI is a multi-faceted problem,
and it requires advancements in science, technology and policy. At
\texttt{mlsquare}, we are developing scientific tools in this space.
Specifically, we introduce an opinionated, extensible, \texttt{Python}
framework that provides a single point of interface to a variety of solutions
in each of the categories mentioned above. We present the design details, APIs
of the framework, reference implementations, road map for development, and
guidelines for contributions.
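The abstract describes a framework whose defining trait is a single point of interface routing to many solutions (portability, explainability, and so on). A minimal sketch of that design pattern in Python is shown below; all names here (`register`, `dope`, the capability strings) are illustrative assumptions for exposition, not the framework's actual API.

```python
# Sketch of a "single point of interface" design: implementations register
# themselves against a named capability, and callers go through one
# dispatch function instead of importing each solution directly.
from typing import Callable, Dict

_REGISTRY: Dict[str, Callable] = {}

def register(capability: str):
    """Decorator: attach an implementation to a named capability."""
    def wrap(fn: Callable) -> Callable:
        _REGISTRY[capability] = fn
        return fn
    return wrap

def dope(model, capability: str = "portability"):
    """Single entry point: route a model to the registered solution."""
    try:
        handler = _REGISTRY[capability]
    except KeyError:
        raise ValueError(f"no solution registered for {capability!r}")
    return handler(model)

@register("portability")
def to_portable(model):
    # Placeholder transform; a real adapter would export the model
    # to an interoperable representation.
    return {"model": model, "format": "portable"}
```

With this shape, adding a new category (say, a fairness auditor) is a matter of registering another handler; callers keep using the one `dope` entry point.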
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- The impact of generative artificial intelligence on socioeconomic inequalities and policy making [1.5156317247732694]
Generative artificial intelligence has the potential to both exacerbate and ameliorate existing socioeconomic inequalities.
Our goal is to highlight how generative AI could worsen existing inequalities while illuminating how AI may help mitigate pervasive social problems.
In the information domain, generative AI can democratize content creation and access, but may dramatically expand the production and proliferation of misinformation.
In education, it offers personalized learning, but may widen the digital divide.
In healthcare, it might improve diagnostics and accessibility, but could deepen pre-existing inequalities.
arXiv Detail & Related papers (2023-12-16T10:37:22Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Amplifying Limitations, Harms and Risks of Large Language Models [1.0152838128195467]
We present this article as a small gesture in an attempt to counter what appears to be exponentially growing hype around Artificial Intelligence.
It may also help those outside of the field to become more informed about some of the limitations of AI technology.
arXiv Detail & Related papers (2023-07-06T11:53:45Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than that engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- Machine Learning Featurizations for AI Hacking of Political Systems [0.0]
In the recent essay "The Coming AI Hackers," Schneier proposed a future application of artificial intelligences to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems.
This work advances the concept by applying to it theory from machine learning, hypothesizing some possible "featurization" frameworks for AI hacking.
We develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political systems.
arXiv Detail & Related papers (2021-10-08T16:51:31Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- AI Ethics Needs Good Data [0.8701566919381224]
We argue that discourse on AI must transcend the language of 'ethics' and engage with power and political economy.
We offer four 'economies' on which Good Data AI can be built: community, rights, usability and politics.
arXiv Detail & Related papers (2021-02-15T04:16:27Z)
- The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to give those unfamiliar with the field an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.