aiSTROM -- A roadmap for developing a successful AI strategy
- URL: http://arxiv.org/abs/2107.06071v1
- Date: Fri, 25 Jun 2021 08:40:15 GMT
- Title: aiSTROM -- A roadmap for developing a successful AI strategy
- Authors: Dorien Herremans
- Abstract summary: A total of 34% of AI research and development projects fail or are abandoned, according to a recent survey by Rackspace Technology.
We propose a new strategic framework, aiSTROM, that empowers managers to create a successful AI strategy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A total of 34% of AI research and development projects fail or are
abandoned, according to a recent survey by Rackspace Technology of 1,870
companies. We propose a new strategic framework, aiSTROM, that empowers
managers to create a successful AI strategy based on a thorough literature
review. This provides a unique and integrated approach that guides managers and
lead developers through the various challenges in the implementation process.
In the aiSTROM framework, we start by identifying the top n potential projects
(typically 3-5). For each of those, seven areas of focus are thoroughly
analysed. These areas include creating a data strategy that accounts for
unique cross-departmental machine learning data needs, security, and
legal requirements. aiSTROM then guides managers to think about how to put
together an interdisciplinary artificial intelligence (AI) implementation team
given the scarcity of AI talent. Once an AI team strategy has been established,
it needs to be positioned within the organization, either cross-departmental or
as a separate division. Other considerations include AI as a service (AIaaS),
or outsourcing development. Looking at new technologies, we have to consider
challenges such as bias, the legality of black-box models, and keeping humans in
the loop. Next, like any project, we need value-based key performance
indicators (KPIs) to track and validate the progress. Depending on the
company's risk strategy, a SWOT analysis (strengths, weaknesses, opportunities,
and threats) can help further classify the shortlisted projects. Finally, we
should make sure that our strategy includes continuous education of employees
to enable a culture of adoption. This unique and comprehensive framework offers
a valuable, literature-supported tool for managers and lead developers.
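The shortlisting step described in the abstract, identifying the top n candidate projects (typically 3-5) and analysing seven focus areas for each, can be sketched as a small data model. This is a hypothetical illustration only: the area names, 0-5 scoring scale, and equal weighting below are assumptions for the example, not prescriptions from the paper.

```python
from dataclasses import dataclass, field

# Illustrative focus areas loosely echoing the themes the abstract names
# (data strategy, team, positioning, technology, KPIs, risk, education).
# The exact labels and scoring scheme are assumptions, not the paper's.
FOCUS_AREAS = [
    "data_strategy", "team", "positioning", "technology",
    "kpis", "risk", "education",
]

@dataclass
class CandidateProject:
    name: str
    # Readiness score per focus area on an assumed 0-5 scale;
    # unscored areas default to 0.
    scores: dict = field(default_factory=dict)

    def overall(self) -> float:
        """Unweighted mean across all seven focus areas."""
        return sum(self.scores.get(a, 0) for a in FOCUS_AREAS) / len(FOCUS_AREAS)

def shortlist(projects, n=3):
    """Keep the top n candidates by average focus-area score."""
    return sorted(projects, key=lambda p: p.overall(), reverse=True)[:n]
```

In practice the per-area scores would come from the qualitative analysis the framework prescribes (and could be reweighted to reflect the company's risk strategy); the sketch only shows how the seven-area analysis feeds a ranked shortlist.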
Related papers
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Strategic Integration of Artificial Intelligence in the C-Suite: The Role of the Chief AI Officer [0.0]
I explore the role of the Chief AI Officer (CAIO) within the C-suite, emphasizing the necessity of this position for successful AI strategy, integration, and governance.
I analyze future scenarios based on current trends in three key areas: the AI Economy, AI Organization, and Competition in the Age of AI.
This paper advances the discussion on AI leadership by providing a rationale for the strategic integration of AI at the executive level and examining the role of the Chief AI Officer within organizations.
arXiv Detail & Related papers (2024-04-30T19:07:18Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- OpenAGI: When LLM Meets Domain Experts [51.86179657467822]
Human Intelligence (HI) excels at combining basic skills to solve complex tasks.
This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents.
We introduce OpenAGI, an open-source platform designed for solving multi-step, real-world tasks.
arXiv Detail & Related papers (2023-04-10T03:55:35Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Towards AI-Empowered Crowdsourcing [27.0404686687184]
We propose a taxonomy which divides AI-Empowered Crowdsourcing into three major areas: task delegation, motivating workers, and quality control.
We discuss the limitations and insights, and curate the challenges of doing research in each of these areas to highlight promising future research directions.
arXiv Detail & Related papers (2022-12-28T05:06:55Z)
- Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective [0.8057006406834467]
Using AI poses new requirements for companies and their employees, including transparency and comprehensibility of AI systems.
The field of Explainable AI (XAI) aims to address these issues.
This project report paper provides insights into employees' needs and attitudes towards (X)AI.
arXiv Detail & Related papers (2022-10-07T13:11:28Z)
- Responsible AI Implementation: A Human-centered Framework for Accelerating the Innovation Process [0.8481798330936974]
This paper proposes a theoretical framework for responsible artificial intelligence (AI) implementation.
The proposed framework emphasizes a synergistic business technology approach for the agile co-creation process.
The framework emphasizes establishing and maintaining trust throughout the human-centered design and agile development of AI.
arXiv Detail & Related papers (2022-09-15T06:24:01Z)
- Competency Model Approach to AI Literacy: Research-based Path from Initial Framework to Model [0.0]
Research on AI Literacy could lead to an effective and practical platform for developing these skills.
We propose and advocate for a pathway for developing AI Literacy as a pragmatic and useful tool for AI education.
arXiv Detail & Related papers (2021-08-12T15:42:32Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)