An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine Composites
- URL: http://arxiv.org/abs/2406.00704v1
- Date: Sun, 2 Jun 2024 10:52:48 GMT
- Title: An Optimized Toolbox for Advanced Image Processing with Tsetlin Machine Composites
- Authors: Ylva Grønningsæter, Halvor S. Smørvik, Ole-Christoffer Granmo
- Abstract summary: We leverage the recently proposed TM Composites architecture and introduce a range of TM Specialists.
These include Canny edge detection, Histogram of Oriented Gradients, adaptive mean thresholding, adaptive Gaussian thresholding, Otsu's thresholding, color thermometers, and adaptive color thermometers.
The result is a toolbox that provides new state-of-the-art results on CIFAR-10 for TMs with an accuracy of 82.8%.
- Score: 9.669581902728552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Tsetlin Machine (TM) has achieved competitive results on several image classification benchmarks, including MNIST, K-MNIST, F-MNIST, and CIFAR-2. However, color image classification is arguably still in its infancy for TMs, with CIFAR-10 being a focal point for tracking progress. Over the past few years, TM's CIFAR-10 accuracy has increased from around 61% in 2020 to 75.1% in 2023 with the introduction of Drop Clause. In this paper, we leverage the recently proposed TM Composites architecture and introduce a range of TM Specialists that use various image processing techniques. These include Canny edge detection, Histogram of Oriented Gradients, adaptive mean thresholding, adaptive Gaussian thresholding, Otsu's thresholding, color thermometers, and adaptive color thermometers. In addition, we conduct a rigorous hyperparameter search, where we uncover optimal hyperparameters for several of the TM Specialists. The result is a toolbox that provides new state-of-the-art results on CIFAR-10 for TMs with an accuracy of 82.8%. In conclusion, our toolbox of TM Specialists forms a foundation for new TM applications and a landmark for further research on TM Composites in image analysis.
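Two of the techniques named in the abstract, Otsu's thresholding and thermometer encoding, are simple enough to sketch directly. The following is an illustrative NumPy sketch, not the paper's implementation; the function names and the choice of evenly spaced cut points are our own assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive search for the threshold that maximizes between-class
    variance (Otsu's method). `gray` is a uint8 array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]          # pixels at or below t (background weight)
        w1 = total - w0        # remaining pixels (foreground weight)
        if w0 == 0:
            continue
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def thermometer_encode(channel, levels=8):
    """Unary 'thermometer' code: `levels` bits per pixel, where bit i is
    set when the pixel value reaches the i-th evenly spaced cut point."""
    cuts = (np.arange(1, levels + 1) * 255) // (levels + 1)
    return (channel[..., None] >= cuts).astype(np.uint8)
```

A TM consumes Boolean literals, so a thresholded or thermometer-encoded view of each color channel gives the clauses binary input to pattern-match against.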
Related papers
- SynthRAD2025 Grand Challenge dataset: generating synthetic CTs for radiotherapy [0.0]
The SynthRAD2025 dataset and Grand Challenge promote advancements in synthetic computed tomography (sCT) generation.
The dataset includes 2362 cases: 890 MRI-CT and 1472 CBCT-CT pairs from head-and-neck, thoracic, and abdominal cancer patients treated at five European university medical centers.
Data is provided in MetaImage (mha) format, ensuring compatibility with medical image processing tools.
arXiv Detail & Related papers (2025-02-24T19:53:09Z) - A Robust Pipeline for Classification and Detection of Bleeding Frames in Wireless Capsule Endoscopy using Swin Transformer and RT-DETR [1.7499351967216343]
The solution combines the Swin Transformer for the initial classification of bleeding frames and RT-DETR for further detection of bleeding.
On the validation set, this approach achieves a classification accuracy of 98.5% compared to 91.7% without any pre-processing.
On the test set, this approach achieves a classification accuracy and F1 score of 87.0% and 89.0% respectively.
arXiv Detail & Related papers (2024-06-12T09:58:42Z) - wmh_seg: Transformer based U-Net for Robust and Automatic White Matter Hyperintensity Segmentation across 1.5T, 3T and 7T [1.583327010995414]
White matter hyperintensity (WMH) remains the top imaging biomarker for neurodegenerative diseases.
Recent deep learning models exhibit promise in WMH segmentation but still face challenges.
We introduce wmh_seg, a novel deep learning model leveraging a transformer-based encoder from SegFormer.
arXiv Detail & Related papers (2024-02-20T03:57:16Z) - TMComposites: Plug-and-Play Collaboration Between Specialized Tsetlin Machines [12.838678214659422]
This paper introduces plug-and-play collaboration between specialized TMs, referred to as TM Composites.
The collaboration relies on a TM's ability to specialize during learning and to assess its competence during inference.
We implement three TM specializations in our empirical evaluation.
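The collaboration described above can be illustrated with a toy score-fusion sketch. The three specialists and their class sums below are made up, and normalize-then-sum merely mirrors the mechanism the summary describes; it is not the authors' actual code.

```python
import numpy as np

# Hypothetical class-sum outputs from three specialist TMs for one image
# (rows: specialists, columns: per-class scores). Values are illustrative.
specialist_scores = np.array([
    [12.0, 45.0, 8.0],   # e.g. a HOG-based specialist
    [30.0, 33.0, 5.0],   # e.g. a Canny-edge specialist
    [2.0,  20.0, 3.0],   # e.g. a thermometer-encoded specialist
])

def composite_predict(scores):
    # Normalize each specialist's scores so that no specialist dominates
    # by raw magnitude, then sum across specialists and take the argmax.
    norm = scores / np.abs(scores).max(axis=1, keepdims=True)
    return int(norm.sum(axis=0).argmax())

print(composite_predict(specialist_scores))  # prints 1
```

The key property is that each specialist votes in proportion to its own confidence on the current input, so a specialist that is incompetent for a given image contributes little to the composite decision.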
arXiv Detail & Related papers (2023-09-09T14:00:39Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Visual Atoms: Pre-training Vision Transformers with Sinusoidal Waves [18.5408134000081]
Formula-driven supervised learning has been shown to be an effective method for pre-training transformers.
When VisualAtom-21k is used to pre-train ViT-Base, top-1 accuracy reaches 83.7% after fine-tuning on ImageNet-1k.
Unlike JFT-300M which is a static dataset, the quality of synthetic datasets will continue to improve.
arXiv Detail & Related papers (2023-03-02T09:47:28Z) - Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z) - Global Context Vision Transformers [78.5346173956383]
We propose global context vision transformer (GC ViT), a novel architecture that enhances parameter and compute utilization for computer vision.
We address the lack of inductive bias in ViTs and propose to leverage modified fused inverted residual blocks in our architecture.
Our proposed GC ViT achieves state-of-the-art results across image classification, object detection and semantic segmentation tasks.
arXiv Detail & Related papers (2022-06-20T18:42:44Z) - Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline [80.13652104204691]
In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV).
We provide a coarse-to-fine attribute annotation, where frame-level attributes are provided to exploit the potential of challenge-specific trackers.
In addition, we design a new RGB-T baseline, named Hierarchical Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
arXiv Detail & Related papers (2022-04-08T15:22:33Z) - ZARTS: On Zero-order Optimization for Neural Architecture Search [94.41017048659664]
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency.
This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, to search without enforcing the above approximation.
In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue.
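Zero-order optimization in general replaces analytic gradients with estimates built purely from function evaluations. The following is a generic random-perturbation sketch of that idea, not ZARTS itself; its actual search scheme and update rules differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_order_step(f, x, lr=0.1, mu=1e-3, n_samples=16):
    """One descent step using a gradient estimated from function values
    only: average (f(x + mu*u) - f(x)) / mu * u over random Gaussian
    directions u, then move against that estimate."""
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - fx) / mu * u
    return x - lr * grad / n_samples

# Toy run: minimize a quadratic without ever computing its gradient.
f = lambda x: float(np.sum(x ** 2))
x = np.array([3.0, -2.0])
for _ in range(50):
    x = zero_order_step(f, x)
```

Because only function values are needed, such estimators sidestep the differentiable relaxation that DARTS relies on, which is what makes a zero-order scheme attractive for architecture search.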
arXiv Detail & Related papers (2021-10-10T09:35:15Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware Generative Adversarial Networks [5.3580471186206005]
We trained a generative adversarial network (GAN) to generate synthetic MR knee images conditioned on various acquisition parameters.
In a Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic and real MR images is comparable.
arXiv Detail & Related papers (2021-02-17T11:39:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.