Modern Software Development for JUNO offline software
- URL: http://arxiv.org/abs/2309.13780v1
- Date: Mon, 25 Sep 2023 00:13:47 GMT
- Title: Modern Software Development for JUNO offline software
- Authors: Tao Lin (on behalf of the JUNO collaboration)
- Abstract summary: The Jiangmen Underground Neutrino Observatory (JUNO), under construction in South China, primarily aims to determine the neutrino mass hierarchy and to precisely measure the neutrino oscillation parameters.
The development of the JUNO offline software (JUNOSW) started in 2012, and it is quite challenging to maintain the JUNOSW for such a long time.
New stringent requirements came out, such as how to reduce the building time for the whole project, how to deploy offline algorithms to an online environment, and how to improve the code quality with code review and continuous integration.
This contribution will present the software development system based on these modern tools.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Jiangmen Underground Neutrino Observatory (JUNO), under construction in
South China, primarily aims to determine the neutrino mass hierarchy and to
precisely measure the neutrino oscillation parameters. The data-taking is
expected to start in 2024 and the detector plans to run for more than 20 years.
The development of the JUNO offline software (JUNOSW) started in 2012, and it
is quite challenging to maintain the JUNOSW for such a long time. In the last
ten years, tools such as Subversion, Trac, and CMT had been adopted for
software development. However, new stringent requirements came out, such as how
to reduce the building time for the whole project, how to deploy offline
algorithms to an online environment, and how to improve the code quality with
code review and continuous integration. To meet the further requirements of
software development, modern development tools are evaluated for JUNOSW, such
as Git, GitLab, CMake, Docker, and Kubernetes. This contribution will present
the software development system based on these modern tools for JUNOSW and the
functionalities achieved: CMake macros are developed to simplify the build
instructions for users; CMake generator expressions are used to control the
build flags for the online and offline environments; a tool named git-junoenv
is developed to help users partially checkout and build the software; a script
is used to build and deploy the software on the CVMFS server; a Docker image
with CVMFS client installed is created for continuous integration; a GitLab
agent is set up to manage GitLab runners in Kubernetes with all the
configurations in a GitLab repository.
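The partial-checkout workflow that git-junoenv provides is not described in detail here, but the underlying idea can be sketched with git's built-in `sparse-checkout` feature (git >= 2.25). The repository layout and package names below are purely illustrative, not JUNOSW's real structure:

```shell
# Hypothetical sketch: approximate a partial checkout with plain git.
# "Simulation" and "Reconstruction" stand in for JUNOSW subprojects.
set -e

# Build a toy monorepo with two packages.
repo=$(mktemp -d)
git -C "$repo" init -q -b main
mkdir -p "$repo/Simulation" "$repo/Reconstruction"
echo "simulation code" > "$repo/Simulation/README.md"
echo "reconstruction code" > "$repo/Reconstruction/README.md"
git -C "$repo" add -A
git -C "$repo" -c user.name=ci -c user.email=ci@example.org commit -qm "init"

# Clone without materializing files, then check out only one package.
work=$(mktemp -d)
git clone -q --no-checkout "$repo" "$work/junosw"
git -C "$work/junosw" sparse-checkout set Simulation
git -C "$work/junosw" checkout -q main

ls "$work/junosw"   # only the Simulation package is present on disk
```

Presumably git-junoenv layers JUNO-specific conventions on top of a mechanism like this, so users can list the packages they need and build only those.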
Related papers
- Git Context Controller: Manage the Context of LLM-based Agents like Git [6.521644491529639]
Large language model (LLM) based agents have shown impressive capabilities by interleaving internal reasoning with external tool use. We introduce Git-Context-Controller (GCC), a structured context management framework inspired by software version control systems. In a self-replication case study, a GCC-augmented agent builds a new CLI agent from scratch, achieving a 40.7% task resolution rate, compared to only 11.7% without GCC.
arXiv Detail & Related papers (2025-07-30T08:01:45Z) - SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving [90.32201622392137]
We present SwingArena, a competitive evaluation framework for Large Language Models (LLMs). Unlike traditional static benchmarks, SwingArena models the collaborative software development process by pairing LLMs as patch submitters, who generate patches, and reviewers, who create test cases and verify the patches through continuous integration (CI) pipelines.
arXiv Detail & Related papers (2025-05-29T18:28:02Z) - EnvBench: A Benchmark for Automated Environment Setup [76.02998475135824]
Large Language Models have enabled researchers to focus on practical repository-level tasks in the software engineering domain.
Existing studies on environment setup introduce innovative agentic strategies, but their evaluation is often based on small datasets.
To address this gap, we introduce EnvBench, a comprehensive environment setup benchmark.
arXiv Detail & Related papers (2025-03-18T17:19:12Z) - PyPackIT: Automated Research Software Engineering for Scientific Python Applications on GitHub [0.0]
PyPackIT is a user-friendly, ready-to-use software that enables scientists to focus on the scientific aspects of their projects.
PyPackIT offers a robust project infrastructure including a build-ready Python package skeleton, a fully operational documentation and test suite, and a control center for dynamic project management.
arXiv Detail & Related papers (2025-03-06T19:41:55Z) - LLM Agents Making Agent Tools [2.5529148902034637]
Tool use has turned large language models (LLMs) into powerful agents that can perform complex multi-step tasks. But these tools must be implemented in advance by human developers. We propose ToolMaker, an agentic framework that autonomously transforms papers with code into LLM-compatible tools.
arXiv Detail & Related papers (2025-02-17T11:44:11Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - OpenHands: An Open Platform for AI Software Developers as Generalist Agents [109.8507367518992]
We introduce OpenHands, a platform for the development of AI agents that interact with the world in similar ways to a human developer.
We describe how the platform allows for the implementation of new agents, safe interaction with sandboxed environments for code execution, and incorporation of evaluation benchmarks.
arXiv Detail & Related papers (2024-07-23T17:50:43Z) - Securing Confidential Data For Distributed Software Development Teams: Encrypted Container File [0.0]
Cloud-based version management services like GitHub are commonly used for source code and other files.
A challenge arises when developers from different companies or organizations share the platform, as sensitive data should be encrypted to restrict access to certain developers only.
This paper discusses existing tools addressing this issue, highlighting their shortcomings.
The authors propose their own solution, Encrypted Container Files, designed to overcome the deficiencies observed in other tools.
arXiv Detail & Related papers (2024-07-12T10:19:49Z) - GitHub Marketplace for Automation and Innovation in Software Production [2.0749231618270803]
GitHub Marketplace hosts automation tools to assist developers with the production of their GitHub-hosted projects.
This study explores the platform's characteristics, features, and policies and identifies common themes in production automation.
arXiv Detail & Related papers (2024-07-07T23:55:15Z) - KGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution [59.20933707301566]
Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks.
In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel.
To evaluate if ML models are useful while developing such large-scale systems-level software, we introduce kGym and kBench.
arXiv Detail & Related papers (2024-07-02T21:44:22Z) - AutoCodeRover: Autonomous Program Improvement [8.66280420062806]
We propose an automated approach for solving GitHub issues to autonomously achieve program improvement.
In our approach called AutoCodeRover, LLMs are combined with sophisticated code search capabilities, ultimately leading to a program modification or patch.
Experiments on SWE-bench-lite (300 real-life GitHub issues) show an efficacy of 19% in solving GitHub issues, higher than that of the recently reported SWE-agent.
arXiv Detail & Related papers (2024-04-08T11:55:09Z) - DevBench: A Comprehensive Benchmark for Software Development [72.24266814625685]
DevBench is a benchmark that evaluates large language models (LLMs) across various stages of the software development lifecycle.
Empirical studies show that current LLMs, including GPT-4-Turbo, fail to solve the challenges presented within DevBench.
Our findings offer actionable insights for the future development of LLMs toward real-world programming applications.
arXiv Detail & Related papers (2024-03-13T15:13:44Z) - GitAgent: Facilitating Autonomous Agent with GitHub by Tool Extension [81.44231422624055]
A growing area of research focuses on Large Language Models (LLMs) equipped with external tools capable of performing diverse tasks.
In this paper, we introduce GitAgent, an agent capable of achieving the autonomous tool extension from GitHub.
arXiv Detail & Related papers (2023-12-28T15:47:30Z) - Testing GitHub projects on custom resources using unprivileged Kubernetes runners [1.137903861863692]
GitHub is a popular repository for hosting software projects.
Native GitHub Actions make it easy for software developers to validate new commits and have confidence that new code does not introduce major bugs.
The freely available test environments are limited to only a few popular setups but can be extended with custom Action Runners.
arXiv Detail & Related papers (2023-05-17T16:31:41Z) - The GitHub Development Workflow Automation Ecosystems [47.818229204130596]
Large-scale software development has become a highly collaborative endeavour.
This chapter explores the ecosystems of development bots and GitHub Actions.
It provides an extensive survey of the state-of-the-art in this domain.
arXiv Detail & Related papers (2023-05-08T15:24:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.