A Machine Learning Tutorial for Operational Meteorology, Part II: Neural
Networks and Deep Learning
- URL: http://arxiv.org/abs/2211.00147v1
- Date: Mon, 31 Oct 2022 21:10:48 GMT
- Title: A Machine Learning Tutorial for Operational Meteorology, Part II: Neural
Networks and Deep Learning
- Authors: Randy J. Chase, David R. Harrison, Gary Lackmann and Amy McGovern
- Abstract summary: This paper discusses machine learning methods in a plain language format targeted at the operational meteorological community.
It is the second in a pair of papers that aim to serve as a machine learning resource for meteorologists.
Specifically, it covers perceptrons, artificial neural networks, convolutional neural networks, and U-networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decade, the use of machine learning in meteorology has grown
rapidly. In particular, neural networks and deep learning are being used at
an unprecedented rate. To fill the dearth of resources covering neural
networks through a meteorological lens, this paper discusses machine learning
methods in a plain language format targeted at the operational
meteorological community. This is the second in a pair of papers that aim to serve
as a machine learning resource for meteorologists. While the first paper
focused on traditional machine learning methods (e.g., random forests), here a
broad spectrum of neural network and deep learning methods is discussed.
Specifically, this paper covers perceptrons, artificial neural networks,
convolutional neural networks, and U-networks. Like the Part 1 paper, this
manuscript discusses the terms associated with neural networks and their
training. The manuscript then provides some intuition behind each method and
concludes by showing every method applied to a meteorological example: diagnosing
thunderstorms (e.g., lightning flashes) from satellite imagery. This paper is
accompanied by an open-source code repository that allows readers to explore
neural networks using either the dataset provided (which is used in the paper)
or the code as a template for alternate datasets.
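To make the thunderstorm example concrete, here is a minimal sketch of the kind of convolutional network the paper builds up to: a small Keras CNN that maps a satellite image patch to a probability of lightning. This is not the authors' repository code; the input shape, channel count, layer sizes, and the stand-in data are all illustrative assumptions.

    # Hedged sketch, not the paper's repository code: a small CNN that maps a
    # satellite image patch to a probability of lightning in that patch.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 1)),                 # e.g., one infrared channel (assumed)
        tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local spatial features
        tf.keras.layers.MaxPooling2D(),                    # downsample feature maps
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),          # collapse maps to a feature vector
        tf.keras.layers.Dense(1, activation="sigmoid"),    # P(lightning in patch)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Random stand-in data; swap in real satellite patches and lightning labels.
    X = np.random.rand(64, 32, 32, 1).astype("float32")
    y = np.random.randint(0, 2, size=(64, 1))
    model.fit(X, y, epochs=1, batch_size=16, verbose=0)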
Related papers
- Detecting Moving Objects With Machine Learning [0.0]
This chapter presents a review of the use of machine learning techniques to find moving objects in astronomical imagery.
I discuss various pitfalls in the use of machine learning techniques, including the important issue of overfitting.
arXiv Detail & Related papers (2024-05-10T00:13:39Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Conditional computation in neural networks: principles and research trends [48.14569369912931]
This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.
In particular, we focus on neural networks that can dynamically activate or de-activate parts of their computational graph conditionally on their input.
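As a hedged illustration of the idea (not code from the article), the NumPy sketch below uses a learned sigmoid gate to decide, per input, whether an expensive branch of the network runs at all; the shapes and the 0.5 gating threshold are assumptions.

    # Input-conditional computation: a gate decides whether to run a heavy branch.
    import numpy as np

    rng = np.random.default_rng(0)
    W_gate = rng.normal(size=(8, 1))    # gate parameters (assumed shapes)
    W_cheap = rng.normal(size=(8, 4))   # small branch that always runs
    W_heavy = rng.normal(size=(8, 4))   # large branch, executed conditionally

    def forward(x):
        gate = 1.0 / (1.0 + np.exp(-x @ W_gate))  # sigmoid gate in (0, 1)
        out = x @ W_cheap                          # cheap path always runs
        if gate.item() > 0.5:                      # hard decision on this input;
            out = out + x @ W_heavy                # the heavy path and its FLOPs
        return out                                 # are skipped when the gate is low

    x = rng.normal(size=(1, 8))
    print(forward(x))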
arXiv Detail & Related papers (2024-03-12T11:56:38Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
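A hedged sketch of that substitution (not the GNN-LoFI implementation) is below: each node's neighborhood is summarized by a histogram of neighbor features, and two neighborhoods are compared with the classic histogram intersection kernel; the toy graph, bin count, and feature range are assumptions.

    # Summarize local feature distributions and compare them by intersection.
    import numpy as np

    def neighborhood_histogram(features, neighbors, bins=8):
        # features: (n_nodes,) scalar feature per node; neighbors: list of index lists
        return np.stack([
            np.histogram(features[nbrs], bins=bins, range=(0.0, 1.0))[0]
            for nbrs in neighbors
        ]).astype(float)

    def histogram_intersection(h1, h2):
        return np.minimum(h1, h2).sum()  # intersection kernel: sum of bin-wise minima

    features = np.array([0.1, 0.9, 0.4, 0.6, 0.2])
    neighbors = [[1, 2], [0, 3], [0, 3, 4], [1, 2], [2]]
    H = neighborhood_histogram(features, neighbors)
    print(histogram_intersection(H[0], H[2]))  # similarity of two local distributions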
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Training Spiking Neural Networks Using Lessons From Deep Learning [28.827506468167652]
The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like.
Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.
A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available.
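For readers who have not met spiking neurons, the sketch below simulates the leaky integrate-and-fire (LIF) neuron that packages such as snnTorch are built around; it is plain NumPy rather than snnTorch, and the decay factor and threshold are assumed values.

    # Leaky integrate-and-fire neuron: integrate input, fire on threshold, reset.
    import numpy as np

    def lif_run(inputs, beta=0.9, threshold=1.0):
        v, spikes = 0.0, []
        for i in inputs:
            v = beta * v + i            # leaky integration of input current
            spike = v >= threshold      # fire when the membrane potential crosses threshold
            spikes.append(int(spike))
            if spike:
                v = 0.0                 # reset the membrane potential after a spike
        return spikes

    print(lif_run(np.array([0.3, 0.4, 0.5, 0.1, 0.9, 0.2])))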
arXiv Detail & Related papers (2021-09-27T09:28:04Z)
- Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
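To see what "deterministic context-free" means in practice, the sketch below is a plain explicit-stack recognizer for balanced parentheses, one of the languages such a model must handle; it illustrates the task class only and is not the reservoir stack machine itself.

    # Explicit stack recognizing the deterministic context-free language
    # of balanced parentheses.
    def balanced(s):
        stack = []
        for ch in s:
            if ch == "(":
                stack.append(ch)    # push on an opening symbol
            elif ch == ")":
                if not stack:
                    return False    # closing symbol with no match
                stack.pop()         # pop the matching opening symbol
        return not stack            # accept iff the stack ends empty

    print(balanced("(()())"), balanced("(()"))  # True False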
arXiv Detail & Related papers (2021-05-04T16:50:40Z)
- Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis of Economic Systems [0.0]
We consider approaches for time series forecasting based on deep neural networks and neuro-fuzzy networks.
This paper also presents an overview of approaches for incorporating rule-based methodology into deep learning neural networks.
arXiv Detail & Related papers (2020-11-11T06:21:08Z)
- A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
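As a hedged taste of what such a tutorial covers (not code from the paper), the sketch below performs one round of the core GNN operation, mean-neighbor message passing, on a toy four-node graph; the adjacency matrix, features, and weights are assumptions.

    # One message-passing step: average neighbor features, transform, apply ReLU.
    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)         # adjacency of a toy 4-node graph
    X = np.eye(4)                                     # one-hot node features
    W = np.random.default_rng(0).normal(size=(4, 4))  # learnable weights (random here)

    deg = A.sum(axis=1, keepdims=True)                # node degrees for mean aggregation
    H = np.maximum((A / deg) @ X @ W, 0.0)            # aggregate, transform, ReLU
    print(H.round(2))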
arXiv Detail & Related papers (2020-10-11T12:36:17Z)
- Applications of Deep Neural Networks with Keras [0.0]
Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain.
This course will introduce the student to classic neural network structures, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Generative Adversarial Networks (GAN).
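As a hedged example of the style of model the course teaches (not the course's own code), the sketch below builds a small Keras LSTM sequence classifier; the sequence length, feature count, and layer sizes are assumptions.

    # A compact LSTM binary classifier over short multivariate sequences.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20, 3)),                   # 20 time steps, 3 features each
        tf.keras.layers.LSTM(16),                        # recurrent layer with memory cells
        tf.keras.layers.Dense(1, activation="sigmoid"),  # sequence-level probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Random stand-in sequences and labels; replace with real data.
    X = np.random.rand(32, 20, 3).astype("float32")
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(X, y, epochs=1, verbose=0)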
arXiv Detail & Related papers (2020-09-11T22:09:10Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
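The common building block of federated and fog learning is the server-side averaging step; below is a hedged NumPy sketch of federated averaging (FedAvg-style), where the node count, parameter vectors, and sample sizes are assumptions.

    # Aggregate locally trained parameters, weighted by each node's data size.
    import numpy as np

    def fedavg(local_models, n_samples):
        weights = np.asarray(n_samples, dtype=float)
        weights /= weights.sum()                 # weight nodes by dataset size
        return sum(w * m for w, m in zip(weights, local_models))

    # Three edge nodes each contribute a locally trained parameter vector.
    local_models = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
    print(fedavg(local_models, n_samples=[100, 50, 50]))  # global model update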
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
- An Overview of Neural Network Compression [2.550900579709111]
In recent years there has been a resurgence in model compression techniques, particularly for deep convolutional neural networks and self-attention based networks such as the Transformer.
This paper provides a timely overview of both old and current compression techniques for deep neural networks, including pruning, quantization, tensor decomposition, knowledge distillation and combinations thereof.
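Of the techniques surveyed, magnitude pruning is the simplest to show; the hedged NumPy sketch below zeroes the weights with the smallest absolute values, with the 50% sparsity target being an arbitrary assumption.

    # Magnitude pruning: zero out the smallest-|value| fraction of the weights.
    import numpy as np

    def prune_by_magnitude(W, sparsity=0.5):
        k = int(W.size * sparsity)               # number of weights to remove
        if k == 0:
            return W.copy()
        thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
        return np.where(np.abs(W) <= thresh, 0.0, W)

    W = np.random.default_rng(0).normal(size=(4, 4))
    print(prune_by_magnitude(W))                 # roughly half the entries now zero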
arXiv Detail & Related papers (2020-06-05T20:28:56Z)