ToffA-DSPL: an approach of trade-off analysis for designing dynamic software product lines
- URL: http://arxiv.org/abs/2407.01722v1
- Date: Mon, 1 Jul 2024 18:51:32 GMT
- Title: ToffA-DSPL: an approach of trade-off analysis for designing dynamic software product lines
- Authors: Michelle Larissa Luciano Carvalho, Paulo Cesar Masiero, Ismayle de Sousa Santos, Eduardo Santana de Almeida
- Abstract summary: We propose ToffA-DSPL, a design-time trade-off analysis approach for Dynamic Software Product Lines (DSPLs).
It handles the configuration selection process while considering interactions between NFRs and contexts.
In general, the configurations suggested by ToffA-DSPL provide high satisfaction levels of NFRs.
- Score: 3.623080116477751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software engineers have adopted Dynamic Software Product Line (DSPL) engineering practices to develop Dynamically Adaptable Software (DAS). A DAS is seen as a DSPL application and must cope with a large number of configurations of features, Non-functional Requirements (NFRs), and contexts. However, accurately representing the impact of features on NFRs and contexts in order to identify optimal configurations is not a trivial task. Software engineers need domain knowledge and must design the DAS before deployment so that it satisfies those requirements. To handle these challenges, we propose ToffA-DSPL, a design-time trade-off analysis approach for DSPLs. It addresses the configuration selection process while considering interactions between NFRs and contexts. We performed an exploratory study based on simulations to assess the usefulness of the ToffA-DSPL approach. In general, the configurations suggested by ToffA-DSPL provide high NFR satisfaction levels. The simulations provide evidence that our approach promotes reuse and is useful for generating valid and optimal configurations. In addition, ToffA-DSPL enables software engineers to conduct trade-off analyses, evaluate changes in context features, and define an adaptation model from the optimal configurations found in the analysis.
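The abstract describes the approach at a high level only. As a rough, hypothetical illustration of design-time trade-off analysis in this spirit (not ToffA-DSPL's actual algorithm; all feature names, impact values, and context weights below are invented), the following sketch enumerates feature configurations and picks the one with the highest weighted NFR satisfaction for a given context:

```python
# Minimal sketch of design-time configuration trade-off analysis.
# All features, NFR impacts, and context weights are hypothetical.
from itertools import product

# Assumed impact of each selected feature on each NFR (-1.0 .. 1.0).
NFR_IMPACT = {
    ("encryption", "security"): 0.9,
    ("encryption", "performance"): -0.4,
    ("caching", "performance"): 0.8,
    ("caching", "memory_usage"): -0.5,
}

# Assumed context-dependent NFR weights (e.g., low battery favors performance).
CONTEXT_WEIGHTS = {
    "low_battery": {"security": 0.2, "performance": 0.6, "memory_usage": 0.2},
    "normal": {"security": 0.5, "performance": 0.3, "memory_usage": 0.2},
}

FEATURES = ["encryption", "caching"]

def satisfaction(config, context):
    """Weighted sum of NFR impacts for the selected features in a context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(
        weights.get(nfr, 0.0) * impact
        for (feature, nfr), impact in NFR_IMPACT.items()
        if feature in config
    )

def optimal_configuration(context):
    """Enumerate all feature combinations, return the best (score, config)."""
    candidates = (
        tuple(f for f, on in zip(FEATURES, mask) if on)
        for mask in product([False, True], repeat=len(FEATURES))
    )
    return max((satisfaction(c, context), c) for c in candidates)

for ctx in CONTEXT_WEIGHTS:
    score, config = optimal_configuration(ctx)
    print(f"{ctx}: best configuration {config}, satisfaction {score:.2f}")
```

A real DSPL analysis would additionally check feature-model validity constraints (mandatory, exclusive, and dependent features) before scoring each candidate.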
Related papers
- A Systematic Literature Review of Parameter-Efficient Fine-Tuning for Large Code Models [2.171120568435925]
Large Language Models (LLMs) for code require significant computational resources for training and fine-tuning.
To address this, the research community has increasingly turned to Parameter-Efficient Fine-Tuning (PEFT).
PEFT enables the adaptation of large models by updating only a small subset of parameters, rather than the entire model.
Our study synthesizes findings from 27 peer-reviewed papers, identifying patterns in configuration strategies and adaptation trade-offs.
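As a concrete example of the PEFT family such surveys cover, LoRA freezes the base model and trains small low-rank adapter matrices. A minimal sketch with the Hugging Face peft library follows; the base model and target module names are placeholder assumptions, not choices made by the paper above:

```python
# LoRA sketch with Hugging Face `peft`: only low-rank adapters are trained.
# Model and module names are placeholders for any Llama-style architecture.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Training then proceeds as usual, but gradients flow only through the adapter weights.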
arXiv Detail & Related papers (2025-04-29T16:19:25Z)
- Efficient Adaptation For Remote Sensing Visual Grounding [0.0]
Foundation models can associate textual descriptions with object positions through the Visual Grounding (VG) task.
Due to domain-specific challenges, their direct application to remote sensing (RS) produces sub-optimal results.
This study highlights the potential of PEFT techniques to advance efficient and precise multi-modal analysis in RS.
arXiv Detail & Related papers (2025-03-29T13:49:11Z)
- Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework [81.29965270493238]
We develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) for wireless communication applications.
The dataset includes a diverse set of multi-hop questions, including true/false and multiple-choice types, spanning varying difficulty levels from easy to hard.
We introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data.
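PVI comes from the V-usable-information framework; assuming its usual definition, the score of an example (x, y) is the log-probability gain that the input x provides over predicting y with no input at all. The two callables below are hypothetical stand-ins for models fine-tuned with and without inputs:

```python
import math

def pvi(x, y, logprob_with_input, logprob_without_input):
    """Pointwise V-information of an example (x, y), in bits:
        PVI(x -> y) = log2 g(y | x) - log2 g'(y | no input)
    High-PVI examples are those whose input makes the target much easier
    to predict; both callables return natural-log probabilities."""
    return (logprob_with_input(x, y) - logprob_without_input(y)) / math.log(2)
```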
arXiv Detail & Related papers (2025-01-16T16:19:53Z)
- Preference-Oriented Supervised Fine-Tuning: Favoring Target Model Over Aligned Large Language Models [12.500777267361102]
We introduce a novel preference-oriented supervised fine-tuning approach, namely PoFT.
The intuition is to boost SFT by imposing a particular preference: favoring the target model over aligned LLMs on the same SFT data.
PoFT achieves stable and consistent improvements over the SFT baselines across different training datasets and base models.
arXiv Detail & Related papers (2024-12-17T12:49:14Z)
- Language Model Evolutionary Algorithms for Recommender Systems: Benchmarks and Algorithm Comparisons [33.70598394905857]
Large language models (LLMs) have significantly enhanced the functionality of evolutionary algorithms (EAs).
We introduce a benchmark problem set, named RSBench, to assess the performance of LLM-based EAs in recommendation prompt optimization.
We develop three LLM-based EAs based on established EA frameworks and experimentally evaluate their performance using RSBench.
arXiv Detail & Related papers (2024-11-16T04:35:17Z)
- Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers [19.13500546022262]
We introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive SSL to predict the trajectory of Quality of Results (QoR).
LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics.
arXiv Detail & Related papers (2024-09-16T18:45:07Z)
- The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities [0.35998666903987897]
This report examines the fine-tuning of Large Language Models (LLMs).
It outlines the historical evolution of LLMs from traditional Natural Language Processing (NLP) models to their pivotal role in AI.
The report introduces a structured seven-stage pipeline for fine-tuning LLMs.
arXiv Detail & Related papers (2024-08-23T14:48:02Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AI model selection (AMS) mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
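For reference, an OWA objective applies a fixed weight vector to the sorted components of its argument; the sorting step is what makes it fairness-oriented and nondifferentiable. A minimal sketch (the weights are arbitrary illustrations):

```python
import numpy as np

def owa(x, w):
    """Ordered Weighted Averaging: weight the components of x after sorting
    them in descending order. With decreasing weights, the largest (e.g.,
    worst-case) components dominate, which yields fairness-style criteria;
    the sort makes the function nondifferentiable."""
    x_sorted = np.sort(np.asarray(x, dtype=float))[::-1]
    return float(x_sorted @ np.asarray(w, dtype=float))

# 0.5*3.0 + 0.3*2.0 + 0.2*1.0 = 2.3
print(owa([3.0, 1.0, 2.0], w=[0.5, 0.3, 0.2]))
```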
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
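The paper's exact objective is not reproduced here, but an entropy-augmented, token-level policy-gradient loss in the same spirit can be sketched as follows (PyTorch; the per-token advantage estimates and the entropy coefficient are assumed inputs):

```python
import torch

def entropy_regularized_token_loss(logits, actions, advantages, alpha=0.01):
    """Sketch of a token-level, entropy-augmented policy-gradient loss:
    each generated token is an action, and a per-token entropy bonus
    (scaled by alpha) discourages premature policy collapse.
    logits: [T, V]; actions: [T] token ids; advantages: [T] estimates."""
    log_probs = torch.log_softmax(logits, dim=-1)                     # [T, V]
    taken = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)   # [T]
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)              # [T]
    # Maximize expected advantage plus entropy; minimize the negation.
    return -(taken * advantages + alpha * entropy).mean()
```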
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
- Can LLMs Configure Software Tools? [0.76146285961466]
In software engineering, the meticulous configuration of software tools is crucial in ensuring optimal performance within intricate systems.
In this study, we explore leveraging Large Language Models (LLMs) to streamline the software configuration process.
Our work presents a novel approach that employs LLMs, such as Chat-GPT, to identify starting conditions and narrow down the search space, improving configuration efficiency.
arXiv Detail & Related papers (2023-12-11T05:03:02Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT).
We also review the potential pitfalls of SFT and criticism against it, along with efforts pointing out current deficiencies of existing strategies.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)
- Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
- Learning Off-Policy with Online Planning [18.63424441772675]
We investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function, named Learning Off-Policy with Online Planning (LOOP).
We show the flexibility of LOOP in incorporating safety constraints during deployment on a set of navigation environments.
arXiv Detail & Related papers (2020-08-23T16:18:44Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.