An effective method for synthesizing new views and reconstructing hidden surfaces with spatial grids
Abstract
The article introduces a solution that integrates ray casting in spatial grids with multi-layer perceptron models to synthesize novel views and reconstruct implicit surfaces. The method leverages grid-based querying to improve both geometric reconstruction and overall model performance. The input is a set of 2D images captured from various viewpoints, from which novel 3D views are synthesized. The process begins with camera calibration to derive the camera matrices. Rays are then extracted, each with its origin, direction vector, and reference color, and a region of interest is specified as a spatial grid for 3D reconstruction. A ray-space projection technique samples points along each ray and queries signed distance function (SDF) values, which represent the surface geometry, together with colors, from features stored at each spatial grid node. Density is computed from the signed distance values, and the ray color is synthesized by compositing all samples along the ray, weighted by density and color. To better represent unbounded scenes, a Neural Radiance Fields (NeRF) model combined with a spherical Gaussian function models the regions outside the region of interest. Evaluated on the BlendedMVS and DTU datasets, the method demonstrates strong performance in novel view synthesis and implicit surface reconstruction, achieving improved efficiency without compromising quality.
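The ray-extraction step described above can be sketched as follows. This is a minimal illustration, assuming a pinhole camera with intrinsic matrix `K` and a 3x4 camera-to-world matrix `c2w` obtained from calibration; the function name and exact conventions are hypothetical, not taken from the article.

```python
import numpy as np

def pixel_to_ray(u, v, K, c2w):
    """Convert pixel coordinates (u, v) into a world-space ray.

    K   : 3x3 camera intrinsic matrix (from calibration).
    c2w : 3x4 camera-to-world extrinsic matrix.
    Hypothetical helper; the article's calibration pipeline may differ.
    """
    # Back-project the pixel through the inverse intrinsics
    # to get a direction in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate into world coordinates and normalize.
    d_world = c2w[:3, :3] @ d_cam
    d_world = d_world / np.linalg.norm(d_world)
    # The ray origin is the camera center in world coordinates.
    origin = c2w[:3, 3]
    return origin, d_world
```

Each ray produced this way is paired with the observed pixel color, which serves as the supervision target during training.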
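Querying SDF and color features "stored at each spatial grid node" is typically done by trilinear interpolation of the eight nodes surrounding a sample point. The sketch below assumes a dense cubic grid over the unit cube with a feature vector per node; the grid layout and resolution are assumptions, not details given in the abstract.

```python
import numpy as np

def query_grid(grid, p):
    """Trilinearly interpolate a per-node feature vector.

    grid : (R, R, R, C) array of features stored at grid nodes,
           e.g. an SDF value plus color features per node (assumed layout).
    p    : query point with coordinates in [0, 1]^3.
    """
    R = grid.shape[0]
    # Map the point into continuous grid coordinates, clamped so the
    # upper corner index stays inside the grid.
    x = np.clip(np.asarray(p, dtype=float) * (R - 1), 0.0, R - 1 - 1e-6)
    i = x.astype(int)   # lower corner indices
    f = x - i           # fractional offsets within the cell
    out = np.zeros(grid.shape[-1])
    # Accumulate the eight corner contributions.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out
```

Interpolating node features rather than evaluating a large MLP at every sample is what makes grid-based querying fast; a small MLP can then decode the interpolated features into SDF and color.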
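The density-weighted compositing step can be illustrated with a NeuS-style conversion, in which the opacity of each interval along the ray is derived from the drop of a logistic CDF applied to the SDF samples. The abstract does not give the exact density formula, so this is a stand-in sketch under that assumption; the sharpness parameter `s` is hypothetical.

```python
import numpy as np

def render_ray(sdf, colors, s=64.0):
    """Composite sample colors along one ray into a single pixel color.

    sdf    : (N,) signed-distance values at samples, ordered front-to-back.
    colors : (N, 3) colors queried at the same samples.
    s      : sharpness of the logistic CDF (assumed hyperparameter).
    NeuS-style stand-in for the article's unspecified density formula.
    """
    # Logistic CDF of the SDF: ~1 outside the surface, ~0 inside.
    phi = 1.0 / (1.0 + np.exp(-s * sdf))
    # Interval opacity from the CDF drop across each interval.
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)
    # Front-to-back transmittance and compositing weights.
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = T * alpha
    return (w[:, None] * colors[:-1]).sum(axis=0)
```

For a ray that crosses the surface, the weights concentrate on the samples where the SDF changes sign, so the rendered color is dominated by the color queried at the surface crossing.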