NBC-Softmax : Darkweb Author fingerprinting and migration tracking
- URL: http://arxiv.org/abs/2212.08184v1
- Date: Thu, 15 Dec 2022 23:00:33 GMT
- Title: NBC-Softmax : Darkweb Author fingerprinting and migration tracking
- Authors: Gayan K. Kulatilleke, Shekhar S. Chandra, Marius Portmann
- Abstract summary: Metric learning aims to learn distances from the data, which enhances the performance of similarity-based algorithms.
We propose NBC-Softmax, a contrastive-loss-based clustering technique for the softmax loss.
Our technique meets the criterion of using a larger number of samples, thus achieving block contrastiveness.
- Score: 1.1470070927586016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metric learning aims to learn distances from the data, which enhances the
performance of similarity-based algorithms. An author style detection task is a
metric learning problem, where learning style features with small intra-class
variations and larger inter-class differences is of great importance to achieve
better performance. Recently, metric learning based on softmax loss has been
used successfully for style detection. While softmax loss can produce separable
representations, its discriminative power is relatively poor. In this work, we
propose NBC-Softmax, a contrastive loss based clustering technique for softmax
loss, which is more intuitive and able to achieve superior performance. Our
technique meets the criterion of using a larger number of samples, thus
achieving block contrastiveness, which has been shown to outperform pair-wise
losses. It uses mini-batch sampling effectively and is scalable. Experiments on
4 darkweb social forums, with NBCSAuthor using the proposed NBC-Softmax for
author and sybil detection, show that our negative block contrastive approach
consistently outperforms state-of-the-art methods using the same network
architecture.
Our code is publicly available at: https://github.com/gayanku/NBC-Softmax
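For a concrete picture, below is a minimal sketch assuming only what the abstract describes: a standard softmax cross-entropy term combined with a negative block contrastive penalty that pushes apart the per-class mean ("block") embeddings within a mini-batch. The function names, the temperature, and the lambda_nbc weighting are illustrative assumptions rather than the authors' formulation; the linked repository contains the actual implementation.

```python
# Minimal PyTorch sketch of a softmax loss augmented with a negative block
# contrastive term. This is an illustrative assumption based on the abstract,
# NOT the authors' reference implementation (see the repository above).
import torch
import torch.nn.functional as F

def block_contrastive_term(embeddings, labels, temperature=0.1):
    """Push apart the per-class mean ("block") embeddings in a mini-batch."""
    classes = labels.unique()
    if classes.numel() < 2:
        return embeddings.new_zeros(())
    # Mean embedding per class present in the batch.
    blocks = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])
    blocks = F.normalize(blocks, dim=1)
    sim = blocks @ blocks.t() / temperature           # pairwise block similarities
    off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=sim.device)
    # Penalise similarity between different class blocks (the negative blocks).
    return torch.logsumexp(sim[off_diag], dim=0)

def nbc_softmax_loss(logits, embeddings, labels, lambda_nbc=1.0):
    """Softmax cross-entropy plus a negative block contrastive penalty."""
    ce = F.cross_entropy(logits, labels)
    return ce + lambda_nbc * block_contrastive_term(embeddings, labels)
```

The point of the sketch is only that the contrastive term operates on class-level blocks rather than individual sample pairs, which is what makes the objective block contrastive instead of pair-wise.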
Related papers
- Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to train eight different neural networks with custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
Applying Newton Losses yields significant improvements for differentiable algorithms that are otherwise hard to optimize.
arXiv Detail & Related papers (2024-10-24T18:02:11Z)
- Spectral Aware Softmax for Visible-Infrared Person Re-Identification [123.69049942659285]
Visible-infrared person re-identification (VI-ReID) aims to match specific pedestrian images from different modalities.
Existing methods still follow the softmax loss training paradigm, which is widely used in single-modality classification tasks.
We propose the spectral-aware softmax (SA-Softmax) loss, which can fully explore the embedding space with the modality information.
arXiv Detail & Related papers (2023-02-03T02:57:18Z)
- Distinction Maximization Loss: Efficiently Improving Classification Accuracy, Uncertainty Estimation, and Out-of-Distribution Detection Simply Replacing the Loss and Calibrating [2.262407399039118]
We propose training deterministic deep neural networks using our DisMax loss.
DisMax usually outperforms all current approaches simultaneously in classification accuracy, uncertainty estimation, inference efficiency, and out-of-distribution detection.
arXiv Detail & Related papers (2022-05-12T04:37:35Z)
- Real Additive Margin Softmax for Speaker Verification [14.226089039985151]
We show that AM-Softmax loss does not implement real max-margin training.
We present a Real AM-Softmax loss which involves a true margin function in the softmax training.
arXiv Detail & Related papers (2021-10-18T09:11:14Z)
- Frequency-aware Discriminative Feature Learning Supervised by Single-Center Loss for Face Forgery Detection [89.43987367139724]
Face forgery detection is raising ever-increasing interest in computer vision.
Recent works have achieved promising results, but notable problems remain.
A novel frequency-aware discriminative feature learning framework is proposed in this paper.
arXiv Detail & Related papers (2021-03-16T14:17:17Z)
- Partial FC: Training 10 Million Identities on a Single Machine [23.7030637489807]
We analyze the optimization goal of softmax-based loss functions and the difficulty of training massive identities.
Experiment demonstrates no loss of accuracy when training with only 10% randomly sampled classes for the softmax-based loss functions.
We also implement a very efficient distributed sampling algorithm, taking into account model accuracy and training efficiency.
arXiv Detail & Related papers (2020-10-11T11:15:26Z)
- Balanced Meta-Softmax for Long-Tailed Visual Recognition [46.215759445665434]
We show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup.
This paper presents Balanced Softmax, an elegant unbiased extension of Softmax, to accommodate the label distribution shift between training and testing; a minimal sketch of the idea appears after this list.
In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks.
arXiv Detail & Related papers (2020-07-21T12:05:00Z)
- Loss Function Search for Face Recognition [75.79325080027908]
We develop a reward-guided search method to automatically obtain the best candidate.
Experimental results on a variety of face recognition benchmarks have demonstrated the effectiveness of our method.
arXiv Detail & Related papers (2020-07-10T03:40:10Z)
- Taming GANs with Lookahead-Minmax [63.90038365274479]
Experimental results on MNIST, SVHN, CIFAR-10, and ImageNet demonstrate a clear advantage of combining Lookahead-minmax with Adam or extragradient.
Using 30-fold fewer parameters and 16-fold smaller minibatches we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining FID of 12.19 without using the class labels.
arXiv Detail & Related papers (2020-06-25T17:13:23Z)
- Towards Certified Robustness of Distance Metric Learning [53.96113074344632]
We advocate imposing an adversarial margin in the input space so as to improve the generalization and robustness of metric learning algorithms.
We show that the enlarged margin is beneficial to the generalization ability by using the theoretical technique of algorithmic robustness.
arXiv Detail & Related papers (2020-06-10T16:51:53Z)
- More Information Supervised Probabilistic Deep Face Embedding Learning [10.52667214402514]
We analyse the margin-based softmax loss from a probabilistic view.
An auto-encoder architecture called Linear-Auto-TS-Encoder (LATSE) is proposed to corroborate this finding.
arXiv Detail & Related papers (2020-06-08T12:33:32Z)
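As referenced in the Balanced Meta-Softmax entry above, here is a minimal sketch of the Balanced Softmax idea as commonly stated: each logit is shifted by the log of its class's training frequency before the cross-entropy, which removes the gradient bias toward head classes under a long-tailed label distribution. The function name and argument layout below are illustrative assumptions, not the paper's reference code.

```python
# Minimal PyTorch sketch of Balanced Softmax for long-tailed classification.
# Shifting each logit by log(n_class) compensates for the train/test label
# distribution shift; names here are illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, labels, class_counts):
    """Cross-entropy on logits adjusted by per-class log-frequencies.

    logits:       (batch, num_classes) raw scores
    labels:       (batch,) integer class labels
    class_counts: (num_classes,) number of training samples per class
    """
    log_prior = torch.log(class_counts.float().clamp(min=1))
    adjusted = logits + log_prior            # broadcasts over the batch dimension
    return F.cross_entropy(adjusted, labels)

# Usage example with dummy long-tailed class counts:
# counts = torch.tensor([1000, 100, 10])
# loss = balanced_softmax_loss(torch.randn(8, 3), torch.randint(0, 3, (8,)), counts)
```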