Rethinking Software Engineering in the Foundation Model Era: A Curated
Catalogue of Challenges in the Development of Trustworthy FMware
- URL: http://arxiv.org/abs/2402.15943v2
- Date: Mon, 4 Mar 2024 04:22:37 GMT
- Authors: Ahmed E. Hassan, Dayi Lin, Gopi Krishnan Rajbahadur, Keheliya Gallaba,
Filipe R. Cogo, Boyuan Chen, Haoxiang Zhang, Kishanthan Thangarajah, Gustavo
Ansaldi Oliva, Jiahuei Lin, Wali Mohammad Abdullah, Zhen Ming Jiang
- Abstract summary: We identify 10 key SE4FMware challenges that have caused enterprise FMware development to be unproductive, costly, and risky.
We present FMArts, which is our long-term effort towards creating a cradle-to-grave platform for the engineering of trustworthy FMware.
- Score: 13.21876203209586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs), such as Large Language Models (LLMs), have
revolutionized software development by enabling new use cases and business
models. We refer to software built using FMs as FMware. The unique properties
of FMware (e.g., prompts, agents, and the need for orchestration), coupled with
the intrinsic limitations of FMs (e.g., hallucination) lead to a completely new
set of software engineering challenges. Based on our industrial experience, we
identified 10 key SE4FMware challenges that have caused enterprise FMware
development to be unproductive, costly, and risky. In this paper, we discuss
these challenges in detail and state the path for innovation that we envision.
Next, we present FMArts, which is our long-term effort towards creating a
cradle-to-grave platform for the engineering of trustworthy FMware. Finally, we
(i) show how the unique properties of FMArts enabled us to design and develop a
complex FMware for a large customer in a timely manner and (ii) discuss the
lessons that we learned in doing so. We hope that the disclosure of the
aforementioned challenges and our associated efforts to tackle them will not
only raise awareness but also promote deeper and further discussions, knowledge
sharing, and innovative solutions across the software engineering discipline.
Related papers
- Software Performance Engineering for Foundation Model-Powered Software (FMware) [6.283211168007636]
Foundation Models (FMs) like Large Language Models (LLMs) are revolutionizing software development.
This paper highlights the significance of Software Performance Engineering (SPE) in FMware.
We identify four key challenges: cognitive architecture design, communication protocols, tuning and optimization, and deployment.
arXiv Detail & Related papers (2024-11-14T16:42:19Z)
- Specialized Foundation Models Struggle to Beat Supervised Baselines [60.23386520331143]
We look at three modalities -- genomics, satellite imaging, and time series -- with multiple recent FMs and compare them to a standard supervised learning workflow.
We find that it is consistently possible to train simple supervised models that match or even outperform the latest foundation models.
arXiv Detail & Related papers (2024-11-05T04:10:59Z)
- From Cool Demos to Production-Ready FMware: Core Challenges and a Technology Roadmap [12.313710667597897]
The rapid expansion of foundation models (FMs) has given rise to FMware--software systems that integrate FMs as core components.
However, transitioning to production-ready systems presents numerous challenges, including reliability, high implementation costs, scalability, and compliance with privacy regulations.
We identify critical issues in FM selection, data and model alignment, prompt engineering, agent orchestration, system testing, and deployment, alongside cross-cutting concerns such as memory management, observability, and feedback integration.
arXiv Detail & Related papers (2024-10-28T07:16:00Z)
- Software Engineering and Foundation Models: Insights from Industry Blogs Using a Jury of Foundation Models [11.993910471523073]
We analyze 155 FM4SE and 997 SE4FM blog posts from leading technology companies.
We observed that while code generation is the most prominent FM4SE task, FMs are leveraged for many other SE activities.
Although the emphasis is on cloud deployments, there is a growing interest in compressing FMs and deploying them on smaller devices.
arXiv Detail & Related papers (2024-10-11T17:27:04Z)
- Foundation Models for the Electric Power Grid [53.02072064670517]
Foundation models (FMs) currently dominate news headlines.
We argue that an FM learning from diverse grid data and topologies could unlock transformative capabilities.
We discuss a power grid FM concept, namely GridFM, based on graph neural networks and show how different downstream tasks benefit.
arXiv Detail & Related papers (2024-07-12T17:09:47Z)
- Foundation Model Engineering: Engineering Foundation Models Just as Engineering Software [8.14005646330662]
Foundation Models (FMs) are becoming a new type of software that treats data and models as the source code.
We outline our vision of introducing Foundation Model (FM) engineering, a strategic response to the anticipated FM crisis.
arXiv Detail & Related papers (2024-07-11T04:40:02Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
With the advent of Foundation Models (FMs), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Telecom AI Native Systems in the Age of Generative AI -- An Engineering Perspective [8.199676957406167]
Generative AI and foundation models (FMs) have ushered in transformative changes across various industries.
This article explores the integration of FMs in the telecommunications industry, shedding light on the concept of AI native telco.
It delves into the engineering considerations and unique challenges associated with implementing FMs into the software life cycle.
arXiv Detail & Related papers (2023-10-18T07:55:54Z)
- Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning, and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- The Role of Federated Learning in a Wireless World with Foundation Models [59.8129893837421]
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications.
Currently, the exploration of the interplay between FMs and federated learning (FL) is still in its nascent stage.
This article explores the extent to which FMs are suitable for FL over wireless networks, including a broad overview of research challenges and opportunities.
arXiv Detail & Related papers (2023-10-06T04:13:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.