Neural Local Inter-reflection Modeling for Garment Fold Rendering

POSTECH, Sogang University
Eurographics 2026
[Teaser figure: performance comparison against baselines (left); generalization to unseen garments (right)]

Performance comparison (left): Under an equal time budget, our neural local inter-reflection model achieves lower error (RMSE, FLIP) than both a direct-illumination integrator ("Direct") and a path tracer ("PT"). Generalization to unseen garments (right): By learning inter-reflection effects conditioned on local geometric features and material properties, our model synthesizes high-fidelity multi-bounce effects for garments with unseen geometry and textures.

Abstract

Realistic garment rendering requires simulating complex multi-bounce light paths within intricate fold geometries. In these regions, conventional path tracing is computationally expensive: light becomes trapped between folds, so many bounces are needed for convergence. We observe that these inter-reflections are highly localized and that their radiance patterns correlate strongly with local fold shapes. Based on these insights, we propose a neural local inter-reflection model that factorizes light transport into an overall intensity and a directional distribution. By learning the relationship between incident light, material properties, and a novel fold shape descriptor, our model approximates multi-bounce effects using a compact Spherical Harmonics representation. Our approach generalizes to unseen geometries and a variety of fabric textures without retraining. Compared to full path tracing, our method significantly reduces rendering time while preserving high visual fidelity.
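To make the factorization concrete, here is a minimal sketch of how the predicted quantities could be combined at shading time. It assumes (these are not details from the paper) a degree-2 real Spherical Harmonics representation and that a network has already produced a scalar intensity and 9 SH coefficients from the fold descriptor, incident light, and material features; the function names are hypothetical.

```python
import numpy as np

def sh_basis_l2(d):
    """Real spherical-harmonics basis up to degree 2 at unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,                           # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,       # l=1
        1.092548 * x * y, 1.092548 * y * z,             # l=2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def interreflection_radiance(intensity, sh_coeffs, wo):
    """Factorized multi-bounce radiance: a scalar intensity scales an
    SH-encoded directional distribution evaluated at outgoing direction wo.
    `intensity` and `sh_coeffs` stand in for the network's outputs."""
    wo = np.asarray(wo, dtype=float)
    wo = wo / np.linalg.norm(wo)
    directional = sh_basis_l2(wo) @ np.asarray(sh_coeffs, dtype=float)
    return intensity * max(directional, 0.0)  # clamp negative SH lobes
```

With only the DC coefficient set, the distribution is direction-independent, which gives a quick sanity check that the intensity and directional terms are cleanly separated.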

Gallery

Supplementary Video

BibTeX

Coming Soon!