Coupled Physics-Gated Adaptation: Spatially Decoding Volumetric Photochemical Conversion in Complex 3D-Printed Objects
- URL: http://arxiv.org/abs/2511.19913v1
- Date: Tue, 25 Nov 2025 04:42:40 GMT
- Title: Coupled Physics-Gated Adaptation: Spatially Decoding Volumetric Photochemical Conversion in Complex 3D-Printed Objects
- Authors: Maryam Eftekharifar, Churun Zhang, Jialiang Wei, Xudong Cao, Hossein Heidari
- Abstract summary: We introduce a new computer vision task: predicting dense, non-visual physical properties from 3D visual data. We propose Coupled Physics-Gated Adaptation (C-PGA), a novel multimodal fusion architecture. This approach offers a breakthrough in virtual chemical characterisation, eliminating the need for traditional post-print measurements.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a framework that pioneers the prediction of photochemical conversion in complex three-dimensionally printed objects, introducing a challenging new computer vision task: predicting dense, non-visual volumetric physical properties from 3D visual data. This approach leverages the largest optically printed 3D specimen dataset to date, comprising a large family of parametrically designed complex minimal surface structures that have undergone terminal chemical characterisation. Conventional vision models are ill-equipped for this task, as they lack an inductive bias for the coupled, non-linear interactions of optical physics (diffraction, absorption) and material physics (diffusion, convection) that govern the final chemical state. To address this, we propose Coupled Physics-Gated Adaptation (C-PGA), a novel multimodal fusion architecture. Unlike standard concatenation, C-PGA explicitly models physical coupling by using sparse geometrical and process parameters (e.g., surface transport, print layer height) as a query to dynamically gate and adapt the dense visual features via feature-wise linear modulation (FiLM). This mechanism spatially modulates dual 3D visual streams, extracted by parallel 3D-CNNs processing raw projection stacks and their diffusion-diffraction-corrected counterparts, allowing the model to recalibrate its visual perception based on the physical context. This approach offers a breakthrough in virtual chemical characterisation, eliminating the need for traditional post-print measurements and enabling precise control over the chemical conversion state.
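The gating mechanism the abstract describes can be sketched with plain NumPy: sparse process parameters are projected to per-channel scale and shift terms, which then modulate a dense 3D feature volume. All layer sizes, parameter names, and the single-linear-projection conditioning network below are illustrative assumptions, not the authors' C-PGA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_gate(dense_features, sparse_params, w_gamma, w_beta):
    """Feature-wise linear modulation (FiLM).

    dense_features: (C, D, H, W) visual feature volume from a 3D-CNN stream.
    sparse_params:  (P,) geometrical/process parameters acting as the query,
                    e.g. surface transport, print layer height (hypothetical).
    w_gamma, w_beta: (P, C) projections mapping the query to per-channel
                    scale and shift.
    """
    gamma = sparse_params @ w_gamma  # (C,) per-channel scale
    beta = sparse_params @ w_beta    # (C,) per-channel shift
    # Broadcast over the spatial dimensions: y_c = gamma_c * x_c + beta_c
    return (gamma[:, None, None, None] * dense_features
            + beta[:, None, None, None])

# Toy example: two parallel streams (raw projection stacks and their
# diffusion-diffraction corrected counterparts), gated by the same query.
C, D, H, W, P = 8, 4, 4, 4, 3
raw_stream = rng.standard_normal((C, D, H, W))
corrected_stream = rng.standard_normal((C, D, H, W))
query = np.array([0.05, 1.2, 0.8])  # hypothetical process parameters

w_g = rng.standard_normal((P, C))
w_b = rng.standard_normal((P, C))
fused = np.concatenate([film_gate(raw_stream, query, w_g, w_b),
                        film_gate(corrected_stream, query, w_g, w_b)])
print(fused.shape)  # (16, 4, 4, 4)
```

In a trained model the projections would be learned and the two streams would feed a regression head over the volumetric conversion field; the point here is only that FiLM conditions dense features on sparse physical context rather than concatenating the two modalities.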
Related papers
- Physics-informed Active Polarimetric 3D Imaging for Specular Surfaces [4.019683930752727]
We propose a physics-informed deep learning framework for single-shot 3D imaging of complex specular surfaces. The proposed method achieves accurate and robust normal estimation in a single shot with fast inference, enabling practical 3D imaging of complex specular surfaces.
arXiv Detail & Related papers (2026-02-23T03:28:41Z) - PhyScensis: Physics-Augmented LLM Agents for Complex Physical Scene Arrangement [89.35154754765502]
PhyScensis is an agent-based framework powered by a physics engine to produce physically plausible scene configurations. Our framework preserves strong controllability over fine-grained textual descriptions and numerical parameters. Experimental results show that our method outperforms prior approaches in scene complexity, visual quality, and physical accuracy.
arXiv Detail & Related papers (2026-02-16T17:55:25Z) - Physics-Informed Deformable Gaussian Splatting: Towards Unified Constitutive Laws for Time-Evolving Material Field [31.2769262836663]
We propose Physics-Informed Deformable Gaussian Splatting (PIDG) to capture diverse physics-driven motion patterns in dynamic scenes. Specifically, we adopt static-dynamic decoupled 4D hash encoding to reconstruct geometry and motion efficiently. We further supervise data fitting by matching Lagrangian particle flow to camera-compensated optical flow, which accelerates convergence and improves generalization.
arXiv Detail & Related papers (2025-11-09T09:35:03Z) - Accelerating 3D Photoacoustic Computed Tomography with End-to-End Physics-Aware Neural Operators [74.65171736966131]
Photoacoustic computed tomography (PACT) combines optical contrast with ultrasonic resolution, achieving deep-tissue imaging beyond the optical diffusion limit. Current implementations require dense transducer arrays and prolonged acquisition times, limiting clinical translation. We introduce Pano, an end-to-end physics-aware model that directly learns the inverse acoustic mapping from sensor measurements to volumetric reconstructions.
arXiv Detail & Related papers (2025-09-11T23:12:55Z) - MatDecompSDF: High-Fidelity 3D Shape and PBR Material Decomposition from Multi-View Images [20.219010684946888]
MatDecompSDF is a framework for recovering high-fidelity 3D shapes and decomposing their physically-based material properties from multi-view images. Our method produces editable and relightable assets that can be seamlessly integrated into standard graphics pipelines.
arXiv Detail & Related papers (2025-07-07T08:22:32Z) - Physics Augmented Machine Learning Discovery of Composition-Dependent Constitutive Laws for 3D Printed Digital Materials [0.0]
Multimaterial 3D printing, particularly through polymer jetting, enables the fabrication of digital materials by mixing distinct photopolymers at the micron scale within a single build. We present an integrated experimental and computational investigation into the tunable mechanical response under uniaxial tension and torsion. The proposed model accurately captures the nonlinear, rate-dependent behavior of 3D-printed digital materials.
arXiv Detail & Related papers (2025-07-01T18:45:34Z) - Sampling 3D Molecular Conformers with Diffusion Transformers [13.536503487456622]
Diffusion Transformers (DiTs) have demonstrated strong performance in generative modeling. Applying DiTs to molecules introduces novel challenges, such as integrating discrete molecular graph information with continuous 3D geometry. We propose DiTMC, a framework that adapts DiTs to address these challenges through a modular architecture.
arXiv Detail & Related papers (2025-06-18T11:47:59Z) - DGS-LRM: Real-Time Deformable 3D Gaussian Reconstruction From Monocular Videos [52.46386528202226]
We introduce the Deformable Gaussian Splats Large Reconstruction Model (DGS-LRM). It is the first feed-forward method predicting deformable 3D Gaussian splats from a monocular posed video of any dynamic scene. It achieves performance on par with state-of-the-art monocular video 3D tracking methods.
arXiv Detail & Related papers (2025-06-11T17:59:58Z) - Generalizable and Relightable Gaussian Splatting for Human Novel View Synthesis [49.67420486373202]
GRGS is a generalizable and relightable 3D Gaussian framework for high-fidelity human novel view synthesis under diverse lighting conditions. We introduce a Lighting-aware Geometry Refinement (LGR) module trained on synthetically relit data to predict accurate depth and surface normals.
arXiv Detail & Related papers (2025-05-27T17:59:47Z) - Physically Compatible 3D Object Modeling from a Single Image [109.98124149566927]
We present a framework that transforms single images into 3D physical objects. Our framework embeds physical compatibility into the reconstruction process. It consistently enhances the physical realism of 3D models over existing methods.
arXiv Detail & Related papers (2024-05-30T21:59:29Z) - Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.