A Regression Tsetlin Machine with Integer Weighted Clauses for Compact
Pattern Representation
- URL: http://arxiv.org/abs/2002.01245v1
- Date: Tue, 4 Feb 2020 12:06:16 GMT
- Title: A Regression Tsetlin Machine with Integer Weighted Clauses for Compact
Pattern Representation
- Authors: K. Darshana Abeyrathna, Ole-Christoffer Granmo, Morten Goodwin
- Abstract summary: The Regression Tsetlin Machine (RTM) addresses the lack of interpretability impeding state-of-the-art nonlinear regression models.
We introduce integer weighted clauses to reduce computation cost N times and increase interpretability.
We evaluate the potential of the integer weighted RTM using six artificial datasets.
- Score: 9.432068833600884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Regression Tsetlin Machine (RTM) addresses the lack of interpretability
impeding state-of-the-art nonlinear regression models. It does this by using
conjunctive clauses in propositional logic to capture the underlying non-linear
frequent patterns in the data. These clauses are, in turn, combined into a
continuous output through summation, akin to a linear regression function, but
with non-linear components and unity weights. Although the RTM has solved non-linear
regression problems with competitive accuracy, the resolution of the output is
proportional to the number of clauses employed. This means that computation
cost increases with resolution. To reduce this problem, we here introduce
integer weighted RTM clauses. Our integer weighted clause is a compact
representation of multiple clauses that capture the same sub-pattern: N
repeating clauses are turned into one, with an integer weight N. This reduces
computation cost N times and increases interpretability through a sparser
representation. We further introduce a novel learning scheme that allows us to
simultaneously learn both the clauses and their weights, taking advantage of
so-called stochastic searching on the line. We evaluate the potential of the
integer weighted RTM empirically using six artificial datasets. The results
show that the integer weighted RTM achieves on-par or better accuracy
using significantly fewer computational resources than regular RTMs. We
further show that integer weights yield improved accuracy over real-valued
ones.
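To make the weighting idea concrete, here is a minimal Python sketch (not the authors' implementation; the clause encoding, function names, and example clauses are illustrative assumptions). It shows that summing N identical conjunctive clauses with unity weights gives the same output as a single clause with integer weight N, at a fraction of the evaluation cost:

```python
# Illustrative sketch of integer-weighted clauses in a Regression Tsetlin
# Machine. A clause is modeled as a conjunction of literals, each literal
# being a pair (input_index, required_value) over a binary input vector.

def clause_output(literals, x):
    """Conjunction: 1 if every literal holds in input x, else 0."""
    return int(all(x[i] == v for i, v in literals))

def rtm_output(clauses, weights, x):
    """Weighted sum of clause outputs: akin to linear regression over
    non-linear (conjunctive) features."""
    return sum(w * clause_output(c, x) for c, w in zip(clauses, weights))

# Two clauses over a binary input x = (x0, x1, x2) -- chosen for illustration:
c1 = [(0, 1), (1, 0)]   # x0 AND NOT x1
c2 = [(2, 1)]           # x2

x = (1, 0, 1)

# Unweighted RTM: the sub-pattern c1 repeated three times costs three
# clause evaluations per input...
unweighted = rtm_output([c1, c1, c1, c2], [1, 1, 1, 1], x)

# ...while one clause with integer weight 3 yields the same output using
# only two evaluations: a sparser, more interpretable representation.
weighted = rtm_output([c1, c2], [3, 1], x)

assert unweighted == weighted == 4
```

The learning scheme described in the abstract would additionally adapt the integer weights online (via stochastic searching on the line); that part is omitted here.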
Related papers
- Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues [65.41946981594567]
Linear Recurrent Neural Networks (LRNNs) have emerged as efficient alternatives to Transformers in large language modeling.
LRNNs struggle to perform state-tracking which may impair performance in tasks such as code evaluation or tracking a chess game.
Our work enhances the expressivity of modern LRNNs, broadening their applicability without changing the cost of training or inference.
arXiv Detail & Related papers (2024-11-19T14:35:38Z)
- LFFR: Logistic Function For (single-output) Regression [0.0]
We implement privacy-preserving regression training using data encrypted under a fully homomorphic encryption scheme.
We develop a novel and efficient algorithm called LFFR for homomorphic regression using the logistic function.
arXiv Detail & Related papers (2024-07-13T17:33:49Z)
- Hardness and Algorithms for Robust and Sparse Optimization [17.842787715567436]
We explore algorithms and limitations for sparse optimization problems such as sparse linear regression and robust linear regression.
Specifically, the sparse linear regression problem seeks a $k$-sparse vector $x \in \mathbb{R}^d$ to minimize $\|Ax-b\|$.
The robust linear regression problem seeks a set $S$ that ignores at most $k$ rows and a vector $x$ to minimize $\|(Ax-b)_S\|$.
arXiv Detail & Related papers (2022-06-29T01:40:38Z)
- ReLU Regression with Massart Noise [52.10842036932169]
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data.
We focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model.
We develop an efficient algorithm that achieves exact parameter recovery in this model.
arXiv Detail & Related papers (2021-09-10T02:13:22Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator [33.19099033687952]
FORMS is a fine-grained ReRAM-based DNN accelerator with polarized weights.
It achieves significant throughput improvement and speed up in frame per second over ISAAC with similar area cost.
arXiv Detail & Related papers (2021-06-16T21:42:08Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that the labels are known for some training points, unknown for others, and uncertain for the rest due to random noise or other factors such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Model-based multi-parameter mapping [0.0]
Quantitative MR imaging is increasingly favoured for its richer information content and standardised measures.
Estimation methods often use subsets of the data to solve for different quantities in isolation.
Instead, a generative model can be formulated and inverted to jointly recover parameter estimates.
arXiv Detail & Related papers (2021-02-02T17:00:11Z)
- A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
- Extending the Tsetlin Machine With Integer-Weighted Clauses for Increased Interpretability [9.432068833600884]
Building machine learning models that are both interpretable and accurate is an unresolved challenge for many pattern recognition problems.
Using a linear combination of conjunctive clauses in propositional logic, Tsetlin Machines (TMs) have shown competitive performance on diverse benchmarks.
Here, we address the accuracy-interpretability challenge by equipping the TM clauses with integer weights.
arXiv Detail & Related papers (2020-05-11T14:18:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.