NERFIFY: A Multi-Agent Framework for Turning NeRF Papers into Code
- URL: http://arxiv.org/abs/2603.00805v1
- Date: Sat, 28 Feb 2026 20:57:32 GMT
- Title: NERFIFY: A Multi-Agent Framework for Turning NeRF Papers into Code
- Authors: Seemandhar Jain, Keshav Gupta, Kunal Gupta, Manmohan Chandraker
- Abstract summary: We introduce NERFIFY, a framework that reliably converts NeRF research papers into trainable Nerfstudio plugins. Code, data, and implementations will be publicly released.
- Score: 49.610331036334316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of neural radiance field (NeRF) research requires significant effort to reimplement papers before building upon them. We introduce NERFIFY, a multi-agent framework that reliably converts NeRF research papers into trainable Nerfstudio plugins, in contrast to generic paper-to-code methods and frontier models like GPT-5 that usually fail to produce runnable code. NERFIFY achieves domain-specific executability through six key innovations: (1) Context-free grammar (CFG): LLM synthesis is constrained by Nerfstudio formalized as a CFG, ensuring generated code satisfies architectural invariants. (2) Graph-of-Thought code synthesis: Specialized multi-file agents generate repositories in topological dependency order, validating contracts and errors at each node. (3) Compositional citation recovery: Agents automatically retrieve and integrate components (samplers, encoders, proposal networks) from citation graphs of references. (4) Visual feedback: Artifacts are diagnosed through PSNR-minima ROI analysis, cross-view geometric validation, and VLM-guided patching to iteratively improve quality. (5) Knowledge enhancement: Beyond reproduction, methods can be improved with novel optimizations. (6) Benchmarking: An evaluation framework is designed for NeRF paper-to-code synthesis across 30 diverse papers. On papers without public implementations, NERFIFY achieves visual quality matching expert human code (+/-0.5 dB PSNR, +/-0.2 SSIM) while reducing implementation time from weeks to minutes. NERFIFY demonstrates that a domain-aware design enables code translation for complex vision papers, accelerating and democratizing reproducible research. Code, data, and implementations will be publicly released.
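Innovation (2) above generates repository files in topological dependency order, so each file-level agent sees its dependencies already synthesized. A minimal sketch of that ordering step, using Python's standard-library `graphlib`; the file names and dependency graph here are illustrative assumptions, not taken from NERFIFY itself:

```python
# Hypothetical sketch: order files so every dependency is generated
# before the files that import it. The graph below is made up for
# illustration and is not NERFIFY's actual repository layout.
from graphlib import TopologicalSorter

# Maps each file to the set of files it depends on.
dependencies = {
    "encoder.py": set(),
    "sampler.py": set(),
    "field.py": {"encoder.py"},
    "model.py": {"field.py", "sampler.py"},
    "pipeline.py": {"model.py"},
}

def synthesis_order(deps):
    """Return files in an order where each file follows all of its dependencies."""
    return list(TopologicalSorter(deps).static_order())

order = synthesis_order(dependencies)
# Check the invariant: every dependency precedes its dependents.
for file, reqs in dependencies.items():
    assert all(order.index(r) < order.index(file) for r in reqs)
print(order)
```

In a multi-agent setting, each node visited in this order would be handed to a file agent along with the already-generated dependencies, and contract checks could run after each node before moving on.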
Related papers
- RECODE-H: A Benchmark for Research Code Development with Interactive Human Feedback [87.97664892075811]
We present RECODE-H, a benchmark of 102 tasks from research papers and repositories. It includes structured instructions, unit tests, and a five-level feedback hierarchy to reflect realistic researcher-agent collaboration. We also present ReCodeAgent, a framework that integrates feedback into iterative code generation.
arXiv Detail & Related papers (2025-10-07T17:45:35Z) - LOGOS: LLM-driven End-to-End Grounded Theory Development and Schema Induction for Qualitative Research [9.819685510441902]
Grounded theory offers deep insights from qualitative data, but reliance on expert-intensive manual coding presents a major scalability bottleneck. We introduce LOGOS, a novel, end-to-end framework that fully automates the grounded theory workflow. LOGOS integrates LLM-driven coding, semantic clustering, graph reasoning, and a novel iterative refinement process to build highly reusable codebooks.
arXiv Detail & Related papers (2025-09-29T05:16:09Z) - Reflective Paper-to-Code Reproduction Enabled by Fine-Grained Verification [46.845133190560375]
Motivated by how humans use systematic checklists to efficiently debug complex code, we propose RePro, a Reflective Paper-to-Code Reproduction framework. It automatically extracts a paper's fingerprint: a comprehensive set of accurate and atomic criteria serving as high-quality supervisory signals. It achieves a 13.0% performance gain over baselines and correctly revises complex logical and mathematical criteria during reflection.
arXiv Detail & Related papers (2025-08-21T06:57:44Z) - Open-Source Agentic Hybrid RAG Framework for Scientific Literature Review [2.092154729589438]
We present an agentic approach that encapsulates the hybrid RAG pipeline within an autonomous agent. Our pipeline ingests bibliometric open-access data from the PubMed, arXiv, and Google Scholar APIs. A Llama-3.3-70B agent selects GraphRAG (translating queries to Cypher for the KG) or VectorRAG (combining sparse and dense retrieval with re-ranking).
arXiv Detail & Related papers (2025-07-30T18:54:15Z) - Towards A Generalist Code Embedding Model Based On Massive Data Synthesis [35.04242699869519]
We introduce CodeR (Code Retrieval), a state-of-the-art embedding model for general-purpose code retrieval. The superior performance of CodeR is built upon CodeR-Pile, a large-scale synthetic dataset constructed under the DRU principle.
arXiv Detail & Related papers (2025-05-19T04:37:53Z) - Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning [70.04746094652653]
We introduce PaperCoder, a framework that transforms machine learning papers into functional code repositories. PaperCoder operates in three stages: its planning stage designs the system architecture with diagrams, identifies file dependencies, and generates configuration files. We then evaluate PaperCoder on generating code implementations from machine learning papers based on both model-based and human evaluations.
arXiv Detail & Related papers (2025-04-24T01:57:01Z) - Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
We present the Multimodal Pretraining DEL-Fusion model (MPDF).
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-09-07T17:32:21Z) - SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [136.15885067858298]
This study presents a novel feature-matching-based sparse geometry regularization module, enhanced by a spatially consistent geometry filtering mechanism and a frequency-guided geometric regularization strategy. Our experiments demonstrate that SGCNeRF achieves superior geometry-consistent outcomes and surpasses FreeNeRF, with improvements of 0.7 dB in PSNR on LLFF and DTU.
arXiv Detail & Related papers (2024-04-01T08:37:57Z) - CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis [1.374796982212312]
Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available.
We propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each.
We show that CombiNeRF outperforms the state-of-the-art methods with few-shot settings in several publicly available datasets.
arXiv Detail & Related papers (2024-03-21T13:59:00Z) - KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering [68.00631278030627]
We propose a novel method KG-FiD, which filters noisy passages by leveraging the structural relationship among the retrieved passages with a knowledge graph.
We show that KG-FiD can improve vanilla FiD by up to 1.5% on answer exact match score and achieve comparable performance with FiD with only 40% of computation cost.
arXiv Detail & Related papers (2021-10-08T18:39:59Z) - On fine-tuning of Autoencoders for Fuzzy rule classifiers [6.80011340736829]
This paper presents a novel scheme to incorporate autoencoders into fuzzy rule classifiers (FRCs).
Stacked autoencoders can learn the complex non-linear relationships among data, and the proposed FRC framework allows users to inject expert knowledge into the system.
This paper further introduces four novel fine-tuning strategies for autoencoders to improve the FRC's classification and rule reduction performance.
arXiv Detail & Related papers (2021-06-21T15:20:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.