Foundation Models for Software Engineering of Cyber-Physical Systems: the Road Ahead
- URL: http://arxiv.org/abs/2504.04630v1
- Date: Sun, 06 Apr 2025 21:42:02 GMT
- Title: Foundation Models for Software Engineering of Cyber-Physical Systems: the Road Ahead
- Authors: Chengjie Lu, Pablo Valle, Jiahui Wu, Erblin Isaku, Hassan Sartaj, Aitor Arrieta, Shaukat Ali
- Abstract summary: Foundation Models (FMs) are increasingly used to support various software engineering activities. Their applications in the software engineering of Cyber-Physical Systems (CPSs) are also growing. To address this gap, we present a research roadmap for integrating FMs into various phases of CPS software engineering.
- Score: 11.86894216728323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation Models (FMs), particularly Large Language Models (LLMs), are increasingly used to support various software engineering activities (e.g., coding and testing). Their applications in the software engineering of Cyber-Physical Systems (CPSs) are also growing. However, research in this area remains limited. Moreover, existing studies have primarily focused on LLMs (only one type of FM), leaving ample opportunities to explore others, such as vision-language models. We argue that, in addition to LLMs, other FMs utilizing different data modalities (e.g., images, audio) and multimodal models (which integrate multiple modalities) hold great potential for supporting CPS software engineering, given that these systems process diverse data types. To address this, we present a research roadmap for integrating FMs into various phases of CPS software engineering, highlighting key research opportunities and challenges for the software engineering community.
Related papers
- Software Engineering and Foundation Models: Insights from Industry Blogs Using a Jury of Foundation Models [11.993910471523073]
We analyze 155 FM4SE and 997 SE4FM blog posts from leading technology companies. We observed that while code generation is the most prominent FM4SE task, FMs are leveraged for many other SE activities. Although the emphasis is on cloud deployments, there is a growing interest in compressing FMs and deploying them on smaller devices.
arXiv Detail & Related papers (2024-10-11T17:27:04Z) - Machine Learning Operations: A Mapping Study [0.0]
This article discusses the issues that exist in several components of the MLOps pipeline.
A systematic mapping study is performed to identify the challenges that arise in the MLOps system categorized by different focus areas.
The main value of this work is that it maps distinctive challenges in MLOps along with the recommended solutions outlined in our study.
arXiv Detail & Related papers (2024-09-28T17:17:40Z) - A Comprehensive Review of Multimodal Large Language Models: Performance and Challenges Across Different Tasks [74.52259252807191]
Multimodal Large Language Models (MLLMs) address the complexities of real-world applications far beyond the capabilities of single-modality systems.
This paper systematically sorts out the applications of MLLM in multimodal tasks such as natural language, vision, and audio.
arXiv Detail & Related papers (2024-08-02T15:14:53Z) - A Systematic Literature Review on the Use of Machine Learning in Software Engineering [0.0]
The study was carried out following defined objectives and research questions to explore the current state of the art in applying machine learning techniques to software engineering processes.
The review identifies the key areas within software engineering where ML has been applied, including software quality assurance, software maintenance, software comprehension, and software documentation.
arXiv Detail & Related papers (2024-06-19T23:04:27Z) - A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges [60.546677053091685]
Large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain.
We explore the application of LLMs on various financial tasks, focusing on their potential to transform traditional practices and drive innovation.
This survey categorizes the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications.
arXiv Detail & Related papers (2024-06-15T16:11:35Z) - LLMs Meet Multimodal Generation and Editing: A Survey [89.76691959033323]
This survey elaborates on multimodal generation and editing across various domains, comprising image, video, 3D, and audio.
We summarize the notable advancements with milestone works in these fields and categorize these studies into LLM-based and CLIP/T5-based methods.
We dig into tool-augmented multimodal agents that can leverage existing generative models for human-computer interaction.
arXiv Detail & Related papers (2024-05-29T17:59:20Z) - State Space Model for New-Generation Network Alternative to Transformers: A Survey [52.812260379420394]
In the post-deep learning era, the Transformer architecture has demonstrated its powerful performance across pre-trained big models and various downstream tasks.
To further reduce the complexity of attention models, numerous efforts have been made to design more efficient methods.
Among them, the State Space Model (SSM), as a possible replacement for the self-attention based Transformer model, has drawn more and more attention in recent years.
arXiv Detail & Related papers (2024-04-15T07:24:45Z) - OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine [55.29668193415034]
We present OpenMEDLab, an open-source platform for multi-modality foundation models.
It encapsulates solutions of pioneering attempts in prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications.
It opens access to a group of pre-trained foundation models for various medical image modalities, clinical text, protein engineering, etc.
arXiv Detail & Related papers (2024-02-28T03:51:02Z) - Software Testing with Large Language Models: Survey, Landscape, and Vision [32.34617250991638]
Pre-trained large language models (LLMs) have emerged as a breakthrough technology in natural language processing and artificial intelligence.
This paper provides a comprehensive review of the utilization of LLMs in software testing.
arXiv Detail & Related papers (2023-07-14T08:26:12Z) - A Model-Driven Engineering Approach to Machine Learning and Software Modeling [0.5156484100374059]
Models are used in both the Software Engineering (SE) and the Artificial Intelligence (AI) communities.
The main focus is on the Internet of Things (IoT) and smart Cyber-Physical Systems (CPS) use cases, where both ML and model-driven SE play a key role.
arXiv Detail & Related papers (2021-07-06T15:50:50Z) - Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning to transition modern-day software systems toward highly intelligent, self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.