Introduction to Machine Learning for the Sciences
- URL: http://arxiv.org/abs/2102.04883v1
- Date: Mon, 8 Feb 2021 16:25:46 GMT
- Title: Introduction to Machine Learning for the Sciences
- Authors: Titus Neupert, Mark H. Fischer, Eliska Greplova, Kenny Choo, Michael Denner
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is an introductory machine learning course specifically developed with
STEM students in mind. We discuss supervised, unsupervised, and reinforcement
learning. The notes start with an exposition of machine learning methods
without neural networks, such as principal component analysis, t-SNE, and
linear regression. We continue with an introduction to both basic and advanced
neural network structures such as convolutional neural networks, (variational)
autoencoders, generative adversarial networks, restricted Boltzmann machines,
and recurrent neural networks. Questions of interpretability are discussed
using the examples of dreaming and adversarial attacks.
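As a concrete illustration of the first topic in the notes, here is a minimal sketch of principal component analysis via the singular value decomposition; the synthetic data and the choice of one retained component are assumptions for the example, not material from the notes.

```python
import numpy as np

def pca(X, k):
    """Project X onto its k leading principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                     # center each feature
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S**2 / (len(X) - 1)   # scores, per-component variance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.5], [0.5, 1.0]])  # toy data
scores, var = pca(X, k=1)
print(var / var.sum())   # fraction of total variance captured per component
```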
Related papers
Collective variables of neural networks: empirical time evolution and scaling laws
We show that certain measures on the spectrum of the empirical neural tangent kernel, specifically entropy and trace, yield insight into the representations learned by a neural network.
Results are demonstrated first on test cases before being shown on more complex networks, including transformers, auto-encoders, graph neural networks, and reinforcement learning studies.
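A toy sketch of the two spectral measures named above, assuming the common convention that the spectral entropy is the Shannon entropy of the trace-normalized eigenvalues; the random-feature model below stands in for the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))                # 50 inputs with 10 features
W = rng.normal(size=(10, 200)) / np.sqrt(10)
H = np.maximum(X @ W, 0.0)                   # hidden ReLU activations

# For the last-layer weights of f(x) = relu(x W) v, the Jacobian w.r.t. v is
# just H, so this block of the empirical neural tangent kernel is H H^T.
K = H @ H.T / H.shape[1]

eigs = np.clip(np.linalg.eigvalsh(K), 0.0, None)
trace = eigs.sum()
p = eigs[eigs > 0] / trace                   # trace-normalized spectrum
entropy = -np.sum(p * np.log(p))             # Shannon entropy of the spectrum
print(f"trace = {trace:.3f}, spectral entropy = {entropy:.3f}")
```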
arXiv Detail & Related papers (2024-10-09)

Mechanistic Neural Networks for Scientific Machine Learning
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
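The underlying reduction can be illustrated on a toy problem. The sketch below is not the paper's NeuRLP solver, just a forward-Euler discretization of the linear ODE dx/dt = ax recast as a linear program with L1 slack variables.

```python
import numpy as np
from scipy.optimize import linprog

a, N = -1.0, 20                        # toy ODE dx/dt = a*x on [0, 1], x(0) = 1
h = 1.0 / N
n_x, n_s = N + 1, N                    # states x[0..N] and slack variables
c = np.concatenate([np.zeros(n_x), np.ones(n_s)])   # minimize sum of slacks

A_ub, b_ub = [], []
for t in range(N):
    # Euler residual r_t = x[t+1] - (1 + h*a) x[t]; enforce |r_t| <= s_t.
    row = np.zeros(n_x + n_s)
    row[t + 1], row[t], row[n_x + t] = 1.0, -(1.0 + h * a), -1.0
    A_ub.append(row.copy()); b_ub.append(0.0)       #  r_t - s_t <= 0
    row[t + 1], row[t] = -1.0, (1.0 + h * a)
    A_ub.append(row); b_ub.append(0.0)              # -r_t - s_t <= 0

A_eq = np.zeros((1, n_x + n_s)); A_eq[0, 0] = 1.0   # initial condition x(0) = 1
bounds = [(None, None)] * n_x + [(0.0, None)] * n_s
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=bounds, method="highs")
print(np.allclose(res.x[:n_x], (1.0 + h * a) ** np.arange(n_x)))  # True
```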
arXiv Detail & Related papers (2024-02-20)

COOL: A Constraint Object-Oriented Logic Programming Language and its Neural-Symbolic Compilation System
We introduce the COOL programming language, which seamlessly combines logical reasoning with neural network technologies.
COOL is engineered to autonomously handle data collection, reducing the need for user-supplied initial data.
It incorporates user prompts into the coding process to reduce the risks of undertraining and enhances the interaction among models throughout their lifecycle.
arXiv Detail & Related papers (2023-11-07)

Neural networks trained with SGD learn distributions of increasing complexity
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics and exploit higher-order statistics only later in training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
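A hedged toy illustration of the lower- versus higher-order distinction (not the paper's experiments): two classes share their means, so a classifier that only sees first-order statistics is at chance, while one that also uses covariances succeeds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.normal(size=(n, 2)) * [1.0, 3.0]          # class 0: stretched along y
B = rng.normal(size=(n, 2)) * [3.0, 1.0]          # class 1: stretched along x
X = np.vstack([A, B]); y = np.repeat([0, 1], n)   # identical class means

# First-order classifier: nearest class mean (uses means only).
mu = np.array([A.mean(0), B.mean(0)])
pred1 = np.argmin(((X[:, None] - mu) ** 2).sum(-1), axis=1)

# Second-order classifier: higher zero-mean Gaussian likelihood per class.
def loglik(Z, C):
    Ci = np.linalg.inv(C)
    return -0.5 * np.einsum("ni,ij,nj->n", Z, Ci, Z) \
           - 0.5 * np.log(np.linalg.det(C))

pred2 = (loglik(X, np.cov(B.T)) > loglik(X, np.cov(A.T))).astype(int)
print((pred1 == y).mean(), (pred2 == y).mean())   # ~0.5 vs. well above 0.5
```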
arXiv Detail & Related papers (2022-11-21)

Gaussian Process Surrogate Models for Neural Networks
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
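A minimal sketch of the idea: treat a small fixed network as a black box and fit a Gaussian-process surrogate to its input-output behavior. The RBF kernel, length scale, and toy network are assumptions for the example, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1, w2 = rng.normal(size=(1, 16)), rng.normal(size=16), rng.normal(size=16)
net = lambda x: np.tanh(x[:, None] @ W1 + b1) @ w2   # opaque 1-D "network"

def rbf(a, b, ell=0.5):                              # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

Xtr = np.linspace(-2.0, 2.0, 15)                     # query the network sparsely
ytr = net(Xtr)
K = rbf(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))          # jitter for stability
Xte = np.linspace(-2.0, 2.0, 200)
mean = rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)       # GP posterior mean
print(np.abs(mean - net(Xte)).max())                 # surrogate error on the grid
```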
arXiv Detail & Related papers (2022-08-11)

Searching for the Essence of Adversarial Perturbations
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
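For context, a minimal sketch of how a standard adversarial perturbation is generated (the fast gradient sign method on a fixed logistic model); the paper's analysis of what such perturbations contain goes beyond this.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1                 # fixed logistic "network"
x, y = rng.normal(size=20), 1.0                 # input with true label 1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Cross-entropy gradient w.r.t. the *input* is (p - y) * w for this model.
p = sigmoid(w @ x + b)
x_adv = x + 0.25 * np.sign((p - y) * w)         # FGSM perturbation, eps = 0.25
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence in class 1 drops
```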
arXiv Detail & Related papers (2022-05-30)

The Physics of Machine Learning: An Intuitive Introduction for the Physical Scientist
This article is intended for physical scientists who wish to gain deeper insights into machine learning algorithms.
We begin with a review of two energy-based machine learning algorithms, Hopfield networks and Boltzmann machines, and their connection to the Ising model.
We then delve into additional, more "practical," machine learning architectures including feedforward neural networks, convolutional neural networks, and autoencoders.
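A minimal Hopfield-network sketch of the energy-based picture the article starts from: Hebbian storage of two ±1 patterns and recall from a corrupted cue, with an Ising-like coupling matrix W. Pattern count, size, and corruption level are illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 100))    # two random ±1 patterns
W = patterns.T @ patterns / 100.0                # Hebbian couplings (Ising-like)
np.fill_diagonal(W, 0.0)                         # no self-coupling

x = patterns[0].copy()
flip = rng.choice(100, size=20, replace=False)
x[flip] *= -1                                    # corrupt 20% of the bits
for _ in range(10):                              # updates descend the energy
    x = np.where(W @ x >= 0.0, 1, -1)
print((x == patterns[0]).mean())                 # fraction recovered, typically 1.0
```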
arXiv Detail & Related papers (2021-11-27)

Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models
In mechanics, the new and active field of physics-informed neural networks mitigates the black-box character of deep learning by designing deep neural networks on the basis of mechanical knowledge.
We propose a first step towards a physics-informing approach, which explains neural networks trained on mechanical data a posteriori.
Therein, principal component analysis decorrelates the distributed representations in the cell states of RNNs and allows the comparison to known and fundamental functions.
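A hedged toy version of that analysis: drive a (here untrained) vanilla RNN with a known signal, run PCA on its cell states, and check the leading component against a candidate fundamental function, in this case the input itself.

```python
import numpy as np

rng = np.random.default_rng(0)
Wh, Wx = rng.normal(size=(8, 8)) * 0.3, rng.normal(size=8)
u = np.sin(np.linspace(0.0, 4.0 * np.pi, 400))   # known driving signal

h, states = np.zeros(8), []
for ut in u:                                     # vanilla RNN cell states
    h = np.tanh(Wh @ h + Wx * ut)
    states.append(h)
S = np.array(states) - np.mean(states, axis=0)   # centered state trajectory

_, _, Vt = np.linalg.svd(S, full_matrices=False) # PCA decorrelates the states
pc1 = S @ Vt[0]                                  # leading principal component
print(abs(np.corrcoef(pc1, u)[0, 1]))            # correlation with the input
```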
arXiv Detail & Related papers (2021-04-20)

Learning Contact Dynamics using Physically Structured Neural Networks
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
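For intuition, a sketch of the kind of ground truth involved, not of the paper's architecture: a bouncing ball whose contact events make the dynamics discontinuous, which is exactly what makes this learning problem hard.

```python
# Bouncing ball under gravity with a coefficient of restitution: the velocity
# reversal at contact is the discontinuous event such models must capture.
g, e, dt = 9.81, 0.8, 1e-3
y, v, bounces = 1.0, 0.0, 0
for _ in range(3000):
    v -= g * dt
    y += v * dt
    if y < 0.0:                # contact event: instantaneous velocity reversal
        y, v = 0.0, -e * v
        bounces += 1
print(f"{bounces} contact events in 3 s of simulated time")
```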
arXiv Detail & Related papers (2021-02-22)

Provably Training Neural Network Classifiers under Fairness Constraints
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30)

Spiking Neural Networks Hardware Implementations and Challenges: a Survey
Spiking Neural Networks are cognitive algorithms that mimic the operational principles of neurons and synapses.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
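As a minimal example of those operational principles, here is a leaky integrate-and-fire neuron, the basic event-driven unit many hardware implementations target; the constants are arbitrary illustration values.

```python
# Leaky integrate-and-fire neuron: integrate input, leak, spike at threshold.
tau, v_th, v_reset, dt = 20.0, 1.0, 0.0, 1.0   # membrane time constant, etc.
I = 0.06                                        # constant input current
v, spikes = 0.0, []
for t in range(200):
    v += dt * (-v / tau + I)                    # leaky integration step
    if v >= v_th:                               # threshold crossing: emit spike
        spikes.append(t)
        v = v_reset                             # reset after the spike
print(f"{len(spikes)} spikes, first at step {spikes[0]}")
```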
arXiv Detail & Related papers (2020-05-04)