Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
TL;DR: This work presents a model called mip-NeRF 360 that overcomes the challenges of unbounded 3D scenes by applying a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer. On highly complex, unbounded real-world scenes it reduces mean squared error by 57% and produces photorealistic synthesized views and detailed depth maps.
Abstract
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF