Quality Monitoring and Assessment of Deployed Deep Learning Models for
Network AIOps
- URL: http://arxiv.org/abs/2202.13642v1
- Date: Mon, 28 Feb 2022 09:37:12 GMT
- Authors: Lixuan Yang, Dario Rossi
- Abstract summary: Deep Learning (DL) models are software artifacts, so they need to be regularly maintained and updated.
In the lifecycle of a DL model deployment, it is important to assess the quality of deployed models, to detect "stale" models and prioritize their update.
This article proposes simple yet effective techniques for (i) quality assessment of individual inference, and (ii) overall model quality tracking over multiple inferences.
- Score: 9.881249708266237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) has recently attracted a lot of attention,
transitioning from research labs to a wide range of successful deployments in
many fields, which is particularly true for Deep Learning (DL) techniques.
Ultimately, since DL models are software artifacts, they need to be regularly
maintained and updated: AIOps is the logical extension of DevOps software
development practices to AI software, applied to network operation and
management. In the lifecycle of a DL model deployment, it is important to
assess the quality of deployed models, to detect "stale" models and prioritize
their update. In this article, we cover the issue in the context of network
management, proposing simple yet effective techniques for (i) quality
assessment of individual inference, and for (ii) overall model quality tracking
over multiple inferences, that we apply to two use cases, representative of the
network management and image recognition fields.
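The paper's two techniques are not spelled out in this listing, so the sketch below is a generic illustration of the idea rather than the authors' actual method: per-inference quality is approximated by the normalized entropy of the model's softmax output (low entropy = confident), and overall model quality is tracked as the mean uncertainty over a sliding window of recent inferences, flagging the model as potentially "stale" above a threshold. All names, the window size, and the threshold are illustrative assumptions.

```python
import math
from collections import deque

def inference_confidence(probs):
    """Normalized entropy of a softmax output: 0.0 = fully confident,
    1.0 = uniform. Used here as a proxy for per-inference quality
    (an illustrative choice, not the paper's actual criterion)."""
    n = len(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(n)

class ModelQualityTracker:
    """Tracks mean per-inference uncertainty over a sliding window and
    flags the model as potentially 'stale' above a threshold."""

    def __init__(self, window=1000, stale_threshold=0.5):
        self.scores = deque(maxlen=window)  # keeps only the last `window` scores
        self.stale_threshold = stale_threshold

    def observe(self, probs):
        self.scores.append(inference_confidence(probs))

    def is_stale(self):
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) > self.stale_threshold

tracker = ModelQualityTracker(window=3, stale_threshold=0.5)
tracker.observe([0.9, 0.05, 0.05])   # confident prediction, low entropy
tracker.observe([0.34, 0.33, 0.33])  # near-uniform output, high entropy
print(tracker.is_stale())            # prints True
```

In a deployment, `is_stale()` would feed an alerting or retraining-prioritization pipeline; a real system would likely calibrate the threshold per model rather than hard-code it.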
Related papers
- A Systematic Literature Review of Parameter-Efficient Fine-Tuning for Large Code Models [2.171120568435925]
Large Language Models (LLMs) for code require significant computational resources for training and fine-tuning.
To address this, the research community has increasingly turned to Parameter-Efficient Fine-Tuning (PEFT).
PEFT enables the adaptation of large models by updating only a small subset of parameters, rather than the entire model.
Our study synthesizes findings from 27 peer-reviewed papers, identifying patterns in configuration strategies and adaptation trade-offs.
arXiv Detail & Related papers (2025-04-29T16:19:25Z)
- Toward Neurosymbolic Program Comprehension [46.874490406174644]
We advocate for a Neurosymbolic research direction that combines the strengths of existing DL techniques with traditional symbolic methods.
We present preliminary results for our envisioned approach, aimed at establishing the first Neurosymbolic Program framework.
arXiv Detail & Related papers (2025-02-03T20:38:58Z)
- Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling [128.24325909395188]
We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0.
InternVL 2.5 exhibits competitive performance, rivaling leading commercial models such as GPT-4o and Claude-3.5-Sonnet.
We hope this model contributes to the open-source community by setting new standards for developing and applying multimodal AI systems.
arXiv Detail & Related papers (2024-12-06T18:57:08Z)
- DLBacktrace: A Model Agnostic Explainability for any Deep Learning Models [1.747623282473278]
Deep learning models operate as opaque 'black boxes' with limited transparency in their decision-making processes.
This study addresses the pressing need for interpretability in AI systems, emphasizing its role in fostering trust, ensuring accountability, and promoting responsible deployment in mission-critical fields.
We introduce DLBacktrace, an innovative technique developed by the AryaXAI team to illuminate model decisions across a wide array of domains.
arXiv Detail & Related papers (2024-11-19T16:54:30Z)
- Next-Gen Software Engineering: AI-Assisted Big Models [0.0]
This paper aims to facilitate a synthesis between models and AI in software engineering.
The paper provides an overview of the current status of AI-assisted software engineering.
A vision of AI-assisted Big Models in SE is put forth, with the aim of capitalising on the advantages inherent to both approaches.
arXiv Detail & Related papers (2024-09-26T16:49:57Z)
- AI Foundation Models in Remote Sensing: A Survey [6.036426846159163]
This paper provides a comprehensive survey of foundation models in the remote sensing domain.
We categorize these models based on their applications in computer vision and domain-specific tasks.
We highlight emerging trends and the significant advancements achieved by these foundation models.
arXiv Detail & Related papers (2024-08-06T22:39:34Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework reports significant performance compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models [15.013638492229376]
We present Amazon SageMaker Model Monitor, a fully managed service that continuously monitors the quality of machine learning models hosted on Amazon SageMaker.
Our system automatically detects data, concept, bias, and feature attribution drift in models in real-time and provides alerts so that model owners can take corrective actions.
arXiv Detail & Related papers (2021-11-26T18:35:38Z)
- INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z)
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
- A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes [42.57604833160855]
We present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks.
Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks.
arXiv Detail & Related papers (2020-09-15T05:59:00Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
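The drift detection described in the Amazon SageMaker Model Monitor entry above can be illustrated with a minimal sketch. This is not SageMaker's implementation; it is a stand-in that flags data drift in a single numeric feature when the live-traffic mean deviates from the training baseline by more than a chosen number of standard errors. The function name, threshold, and synthetic data are all illustrative assumptions.

```python
import math
import random
import statistics

def detect_mean_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live-traffic mean deviates from the training
    baseline mean by more than z_threshold standard errors. A minimal
    stand-in for the richer data/concept-drift checks a managed monitor
    would run in production."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    standard_error = sigma / math.sqrt(len(live))
    z = abs(statistics.mean(live) - mu) / standard_error
    return z > z_threshold

# Synthetic example: training-time feature values vs. live traffic.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(0.8, 1.0) for _ in range(500)]  # upstream change

print(detect_mean_drift(baseline, baseline))  # prints False (no drift)
print(detect_mean_drift(baseline, shifted))   # prints True (drift alert)
```

A production monitor would run such checks per feature on a schedule and route positive results to an alerting channel so model owners can take corrective action, as the entry above describes.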
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.