Stochastic Configuration Machines for Industrial Artificial Intelligence
- URL: http://arxiv.org/abs/2308.13570v6
- Date: Sat, 7 Oct 2023 05:21:54 GMT
- Title: Stochastic Configuration Machines for Industrial Artificial Intelligence
- Authors: Dianhui Wang and Matthew J. Felicetti
- Abstract summary: Stochastic configuration networks (SCNs) play a key role in industrial artificial intelligence (IAI).
This paper proposes a new randomized learner model, termed stochastic configuration machines (SCMs), to stress effective modelling and data size saving.
Experimental studies are carried out over some benchmark datasets and three industrial applications.
- Score: 4.57421617811378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time predictive modelling with desired accuracy is highly expected in
industrial artificial intelligence (IAI), where neural networks play a key
role. Neural networks in IAI require powerful, high-performance computing
devices to operate a large number of floating point data. Based on stochastic
configuration networks (SCNs), this paper proposes a new randomized learner
model, termed stochastic configuration machines (SCMs), to stress effective
modelling and data size saving that are useful and valuable for industrial
applications. Compared to SCNs and random vector functional-link (RVFL) nets
with binarized implementation, the model storage of SCMs can be significantly
compressed while retaining favourable prediction performance. Besides the
architecture of the SCM learner model and its learning algorithm, as an
important part of this contribution, we also provide a theoretical basis on the
learning capacity of SCMs by analysing the model's complexity. Experimental
studies are carried out over some benchmark datasets and three industrial
applications. The results demonstrate that SCM has great potential for dealing
with industrial data analytics.
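To illustrate the SCN family of models that SCMs build on, the following is a minimal sketch of incremental stochastic configuration learning: random hidden nodes are proposed one at a time, a candidate is accepted only if it satisfies a supervisory condition relating it to the current residual, and output weights are re-solved by least squares. The function name, hyperparameters, and the simplified form of the supervisory inequality are illustrative assumptions, not the paper's exact algorithm (which additionally covers binarized implementation for storage compression).

```python
import numpy as np

def train_scn(X, y, max_nodes=50, candidates=100, r=0.99, tol=1e-3, seed=0):
    """Incrementally build a stochastic configuration network (simplified sketch).

    Hidden nodes use random weights, but a candidate is only accepted if a
    supervisory condition tying its output to the current residual holds.
    Output weights are re-solved by least squares after each addition.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))           # hidden-layer output matrix
    beta = np.empty((0, 1))        # output weights
    e = y.reshape(-1, 1).copy()    # current residual

    for _ in range(max_nodes):
        if np.linalg.norm(e) < tol:
            break
        best_h, best_score = None, -np.inf
        for _ in range(candidates):
            w = rng.uniform(-1, 1, size=(d, 1))
            b = rng.uniform(-1, 1)
            h = np.tanh(X @ w + b)                      # candidate basis output
            # supervisory condition: <e,h>^2/<h,h> must exceed (1-r)*||e||^2
            score = float((e.T @ h) ** 2 / (h.T @ h)) - (1 - r) * float(e.T @ e)
            if score > best_score:
                best_score, best_h = score, h
        if best_score <= 0:
            break                                       # no admissible candidate
        H = np.hstack([H, best_h])
        beta, *_ = np.linalg.lstsq(H, y.reshape(-1, 1), rcond=None)
        e = y.reshape(-1, 1) - H @ beta
    return H, beta, e

# toy regression: approximate y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
H, beta, e = train_scn(X, y)
rmse = float(np.sqrt(np.mean(e ** 2)))
```

The supervisory check is what distinguishes SCNs from plain random-feature models such as RVFL nets: rejected candidates never enter the network, so each added node is guaranteed to reduce the residual.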
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing shifts data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Sustainable Diffusion-based Incentive Mechanism for Generative AI-driven Digital Twins in Industrial Cyber-Physical Systems [65.22300383287904]
Industrial Cyber-Physical Systems (ICPSs) are an integral component of modern manufacturing and industries.
By digitizing data throughout the product life cycle, Digital Twins (DTs) in ICPSs enable a shift from current industrial infrastructures to intelligent and adaptive infrastructures.
Incentive mechanisms that leverage sensing Industrial Internet of Things (IIoT) devices to share data for the construction of DTs are susceptible to adverse selection problems.
arXiv Detail & Related papers (2024-08-02T10:47:10Z) - Improving the Real-Data Driven Network Evaluation Model for Digital Twin Networks [0.2499907423888049]
Digital Twin Networks (DTN) technology is expected to become the foundation technology for autonomous networks.
DTN has the advantage of being able to operate and manage networks based on real-time collected data in a closed-loop system.
Various AI research and standardization work is ongoing to optimize the use of DTN.
arXiv Detail & Related papers (2024-05-14T09:55:03Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Stochastic Configuration Machines: FPGA Implementation [4.57421617811378]
Stochastic configuration networks (SCNs) are a prime choice in industrial applications due to their merits and feasibility for data modelling.
This paper aims to implement SCM models on a field programmable gate array (FPGA) and introduce binary-coded inputs to improve learning performance.
arXiv Detail & Related papers (2023-10-30T02:04:20Z) - Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in a SSL setting.
The co-evolution during pre-training of both dense and gated encoder offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over their associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS)
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z) - Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.