LUNAR: Cellular Automata for Drifting Data Streams
- URL: http://arxiv.org/abs/2002.02164v1
- Date: Thu, 6 Feb 2020 09:10:43 GMT
- Title: LUNAR: Cellular Automata for Drifting Data Streams
- Authors: Jesus L. Lobo, Javier Del Ser, Francisco Herrera
- Abstract summary: We propose LUNAR, a streamified version of cellular automata.
It is able to act as a real incremental learner while adapting to drifting conditions.
- Score: 19.98517714325424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of huge volumes of data produced in the form of fast
streams, real-time machine learning has become a challenge of relevance
emerging in a plethora of real-world applications. Processing such fast streams
often demands high memory and processing resources. In addition, they can be
affected by non-stationary phenomena (concept drift), by which learning methods
have to detect changes in the distribution of streaming data, and adapt to
these evolving conditions. A lack of efficient and scalable solutions is
particularly noted in real-time scenarios where computing resources are
severely constrained, as it occurs in networks of small, numerous,
interconnected processing units (such as the so-called Smart Dust, Utility Fog,
or Swarm Robotics paradigms). In this work we propose LUNAR, a streamified
version of cellular automata devised to successfully meet the aforementioned
requirements. It is able to act as a real incremental learner while adapting to
drifting conditions. Extensive simulations with synthetic and real data will
provide evidence of its competitive behavior in terms of classification
performance when compared to long-established and successful online learning
methods.
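To illustrate the idea of a cellular automaton acting as an incremental stream learner, here is a minimal, hypothetical sketch: each cell covers a slice of a scalar feature range and stores the most recent class label seen there, prediction is a majority vote over a cell's neighborhood, and each labeled instance overwrites its cell so stale concepts are naturally forgotten under drift. All names and parameters are illustrative; this is not the actual LUNAR algorithm.

```python
# Hypothetical sketch of a 1-D cellular automaton as an incremental
# stream classifier (illustrative only; NOT the LUNAR algorithm).
from collections import Counter

class CAStreamClassifier:
    def __init__(self, n_cells=20, radius=1, lo=0.0, hi=1.0):
        self.n_cells = n_cells
        self.radius = radius            # neighborhood half-width
        self.lo, self.hi = lo, hi       # expected feature range
        self.cells = [None] * n_cells   # each cell holds one label (or None)

    def _cell(self, x):
        # Map a scalar feature into a cell index, clipping to the range.
        x = min(max(x, self.lo), self.hi)
        idx = int((x - self.lo) / (self.hi - self.lo) * self.n_cells)
        return min(idx, self.n_cells - 1)

    def predict(self, x):
        # Majority vote over the labels stored in the cell's neighborhood.
        i = self._cell(x)
        neigh = [self.cells[j]
                 for j in range(max(0, i - self.radius),
                                min(self.n_cells, i + self.radius + 1))
                 if self.cells[j] is not None]
        if not neigh:
            return None
        return Counter(neigh).most_common(1)[0][0]

    def learn_one(self, x, y):
        # Incremental update: overwrite the cell with the newest label,
        # so the automaton forgets stale concepts as the stream drifts.
        self.cells[self._cell(x)] = y

clf = CAStreamClassifier()
for x, y in [(0.1, "a"), (0.2, "a"), (0.8, "b"), (0.9, "b")]:
    clf.learn_one(x, y)
```

Each update touches a single cell, so memory and per-instance cost stay constant, which matches the resource-constrained setting the abstract targets.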
Related papers
- Sample-efficient Imitative Multi-token Decision Transformer for Real-world Driving [18.34685506480288]
We propose Sample-efficient Imitative Multi-token Decision Transformer (SimDT)
SimDT introduces multi-token prediction, online imitative learning pipeline and prioritized experience replay to sequence-modelling reinforcement learning.
Results exceed popular imitation and reinforcement learning algorithms both in open-loop and closed-loop settings on Waymax benchmark.
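Prioritized experience replay, one of the components this summary names, can be sketched in a few lines: transitions are stored with a priority, and sampling is weighted so high-priority (e.g. high-error) transitions are replayed more often. The class name, `alpha` exponent, and priorities below are illustrative assumptions, not details from the SimDT paper.

```python
# Hypothetical sketch of a prioritized experience replay buffer
# (illustrative; not the SimDT implementation).
import random

class PrioritizedReplayBuffer:
    def __init__(self, capacity=1000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priority skews sampling
        self.data, self.prios = [], []

    def add(self, transition, priority=1.0):
        # Evict the oldest transition when the buffer is full.
        if len(self.data) == self.capacity:
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(priority ** self.alpha)

    def sample(self, k):
        # Sample transitions with probability proportional to priority.
        return random.choices(self.data, weights=self.prios, k=k)

buf = PrioritizedReplayBuffer(capacity=100)
buf.add(("s0", "a0", 0.1), priority=0.05)  # low-surprise transition
buf.add(("s1", "a1", 1.0), priority=5.0)   # high-surprise transition
batch = buf.sample(10)  # mostly the high-priority transition
```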
arXiv Detail & Related papers (2024-06-18T14:27:14Z)
- Multi-Stream Cellular Test-Time Adaptation of Real-Time Models Evolving in Dynamic Environments [53.79708667153109]
Smart objects, notably autonomous vehicles, face challenges in critical local computations due to limited resources.
We propose a novel Multi-Stream Cellular Test-Time Adaptation setup where models adapt on the fly to a dynamic environment divided into cells.
We validate our methodology in the context of autonomous vehicles navigating across cells defined based on location and weather conditions.
arXiv Detail & Related papers (2024-04-27T15:00:57Z)
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, this issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z)
- HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics in Industrial Metaverse [49.1501082763252]
This paper presents HFEDMS for incorporating practical FL into the emerging Industrial Metaverse.
It reduces data heterogeneity through dynamic grouping and training mode conversion.
Then, it compensates for the forgotten knowledge by fusing compressed historical data semantics.
Experiments have been conducted on the streamed non-i.i.d. FEMNIST dataset using 368 simulated devices.
arXiv Detail & Related papers (2022-11-07T04:33:24Z)
- Continual Learning with Transformers for Image Classification [12.028617058465333]
In computer vision, neural network models struggle to continually learn new concepts without forgetting what has been learnt in the past.
We develop a solution called Adaptive Distillation of Adapters (ADA), a method designed to perform continual learning.
We empirically demonstrate on different classification tasks that this method maintains a good predictive performance without retraining the model.
arXiv Detail & Related papers (2022-06-28T15:30:10Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
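The drift detection the survey above calls for can be illustrated with a minimal sliding-window detector: compare the mean error rate of a recent window against a reference window of older observations and flag drift when they diverge beyond a threshold. The class name, window size, and threshold are illustrative assumptions, not values from any of the listed papers.

```python
# Hypothetical sketch of a sliding-window concept-drift detector
# (illustrative; window size and threshold are arbitrary choices).
from collections import deque

class WindowDriftDetector:
    def __init__(self, window=50, threshold=0.2):
        self.ref = deque(maxlen=window)     # older observations
        self.recent = deque(maxlen=window)  # newest observations
        self.threshold = threshold

    def update(self, error):
        """Feed one 0/1 prediction error; return True if drift is flagged."""
        # Shift the oldest recent observation into the reference window.
        if len(self.recent) == self.recent.maxlen:
            self.ref.append(self.recent.popleft())
        self.recent.append(error)
        if len(self.ref) < self.ref.maxlen:
            return False  # not enough history to compare yet
        ref_mean = sum(self.ref) / len(self.ref)
        rec_mean = sum(self.recent) / len(self.recent)
        return abs(rec_mean - ref_mean) > self.threshold

det = WindowDriftDetector(window=50, threshold=0.2)
for _ in range(100):
    det.update(0)  # stable stream: errors stay low, no drift flagged
for _ in range(50):
    flagged = det.update(1)  # error rate spikes: drift is flagged shortly after
```

On a drift signal, a streaming pipeline would typically reset or retrain the model, which is exactly the adaptation step the survey finds missing from off-the-shelf AutoML tools.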
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- On the performance of deep learning models for time series classification in streaming [0.0]
This work assesses the performance of different types of deep architectures for data stream classification.
We evaluate models such as multi-layer perceptrons, recurrent, convolutional and temporal convolutional neural networks over several time-series datasets.
arXiv Detail & Related papers (2020-03-05T11:41:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.