PCM-NeRF: Probabilistic Camera Modeling for Neural Radiance Fields under Pose Uncertainty

Shravan Venkatraman*1, Rakesh Raj Madavan*2, Pavan Kumar S1
1Vellore Institute of Technology, Chennai, India
2Shiv Nadar University, Chennai, India

BMVC 2025 (Under Review)

PCM-NeRF models camera poses as probabilistic distributions with learnable uncertainties, enabling consistent and accurate surface reconstruction in challenging scenarios.


Teaser figure: sculpture reconstruction and method tradeoff scatter plot.
Left: our model captures fine and complex surface details. Right: a scatter plot comparing performance against state-of-the-art methods.

Abstract

Neural surface reconstruction methods typically treat camera poses as fixed values, assuming perfect accuracy from Structure-from-Motion (SfM) systems. This assumption breaks down with imperfect pose estimates, leading to distorted or incomplete reconstructions. We present PCM-NeRF, a novel approach that explicitly models camera poses as probability distributions with learnable uncertainties rather than fixed points in SE(3). Our framework introduces: (1) a probabilistic camera representation with learnable mean and uncertainty parameters, (2) an uncertainty-guided optimization process that couples view reliability with pose uncertainty, and (3) a volumetric distribution alignment method for geometric consistency across views. By modeling pose uncertainty explicitly, our method achieves high-quality reconstruction even with significant pose errors, without requiring foreground masks. Experiments on scenes containing outlier poses demonstrate that PCM-NeRF outperforms state-of-the-art methods under severe pose errors, particularly for complex structures.


Figure: overview of the PCM-NeRF architecture.

Method

We propose PCM-NeRF, a probabilistic framework that models camera poses as distributions in \( \text{SE}(3) \) (the Special Euclidean group in 3D) with learnable uncertainty. Our key observation is that camera pose estimates from SfM vary in reliability, and that this reliability should be quantified and leveraged during optimization. Unlike prior work, we represent each pose as a multivariate normal distribution with trainable mean and covariance, capturing varying degrees of confidence in both the translation and rotation components. We introduce an uncertainty-aware optimization that couples view reliability with pose learning, enabling mutual reinforcement between uncertainty estimation and scene reconstruction. We further develop a volumetric distribution alignment technique to enforce geometric consistency across uncertain views. This probabilistic treatment of camera poses enables robust reconstruction even in the presence of severe pose noise and outlier poses.
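
For concreteness, the sketch below illustrates the core idea in PyTorch-like pseudocode: each camera keeps a learnable mean pose (axis-angle rotation plus translation) and a learnable diagonal log-variance, and poses are sampled with the reparameterization trick so that both the mean and the uncertainty receive gradients. All names here (ProbabilisticPose, sample, reliability) are illustrative assumptions, not the released implementation; the map from the sampled 6-vector back to a rotation matrix (e.g. an exponential map) and the full alignment losses are omitted.

```python
# Minimal, illustrative sketch of a probabilistic camera pose (not the official code).
import torch
import torch.nn as nn

class ProbabilisticPose(nn.Module):
    """Camera pose as a Gaussian around an SfM estimate in a 6-DoF tangent space."""

    def __init__(self, init_rotvec, init_trans):
        super().__init__()
        # Learnable mean: axis-angle rotation (3) + translation (3).
        self.mu = nn.Parameter(torch.cat([init_rotvec, init_trans]))
        # Learnable per-component log-variance (diagonal covariance).
        self.log_var = nn.Parameter(torch.full((6,), -4.0))

    def sample(self):
        # Reparameterization trick: pose = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(self.mu)
        return self.mu + torch.exp(0.5 * self.log_var) * eps

    def reliability(self):
        # Scalar confidence, e.g. inverse of total variance, usable to weight per-view losses.
        return 1.0 / (1.0 + torch.exp(self.log_var).sum())
```

In such a setup, rays for a view would be generated from a pose drawn by sample(), and that view's photometric loss scaled by reliability(), so high-variance (unreliable) cameras contribute less to the reconstruction while their uncertainty parameters continue to be refined.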


Results


Quantitative Comparison

We evaluate our method against leading approaches across all scenes in the dataset, using Chamfer Distance (CD) (Table 1) and F-Score (Table 2) to measure geometric accuracy. Each baseline uses its official implementation for pose optimization, followed by NeuS training with the refined poses. Our approach achieves an average improvement of 21.5% in CD and 3.4% in F-Score. Previous approaches such as BARF [26] rely on local optimization of the poses, which can degrade overall surface quality due to inconsistencies between views. In contrast, we perform global sampling and optimization of poses, which is more robust to uncertainty and outliers.
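
For reference, both metrics can be computed from points sampled on the predicted and ground-truth surfaces as in the sketch below. This is an illustrative NumPy/SciPy version only; the benchmark's exact sampling density, scale normalization, and F-Score distance threshold (tau is a placeholder here) may differ.

```python
# Illustrative sketch of Chamfer Distance and F-Score between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred_pts, gt_pts, tau=0.05):
    """pred_pts: (N, 3) points from the reconstruction; gt_pts: (M, 3) ground-truth points."""
    d_pred_to_gt = cKDTree(gt_pts).query(pred_pts)[0]   # accuracy distances
    d_gt_to_pred = cKDTree(pred_pts).query(gt_pts)[0]   # completeness distances
    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()
    precision = (d_pred_to_gt < tau).mean()
    recall = (d_gt_to_pred < tau).mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer, fscore
```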

Chamfer Distance (CD) Results (lower is better)

Method          Baby   Bear   Bell   Clock  Deaf   Farmer  Pavilion  Sculpture
NeuS            0.69   0.31   3.33   1.16   0.55   2.49    0.29      0.66
Neuralangelo    0.70   0.65   0.38   0.59   4.89   1.95    0.31      –
BARF            1.08   0.28   3.31   0.19   0.46   2.13    0.38      0.57
SCNeRF          1.19   0.27   3.74   1.33   0.46   1.45    0.23      0.81
GARF            2.04   2.25   3.09   0.50   0.59   1.58    0.96      0.57
L2G-NeRF        1.15   0.29   1.26   0.24   0.40   2.18    0.46      0.37
Joint-TensoRF   3.11   1.22   2.49   0.36   0.36   2.51    1.35      0.70
PoRF            0.31   0.49   0.30   0.30   3.80   2.20    –         –
SG-NeRF         0.56   0.25   0.98   0.15   0.45   0.87    0.20      0.22
Ours            0.25   0.24   0.51   0.16   0.23   0.79    0.24      0.20

F-Score Results (higher is better)

Method          Baby   Bear   Bell   Clock  Deaf   Farmer  Pavilion  Sculpture
NeuS            0.65   0.93   0.48   0.72   0.84   0.54    0.93      0.70
Neuralangelo    0.74   0.80   0.66   0.14   0.47   0.89    –         –
BARF            0.58   0.91   0.49   0.95   0.66   0.51    0.86      0.87
SCNeRF          0.56   0.93   0.49   0.65   0.40   0.59    0.95      0.73
GARF            0.77   0.95   0.82   0.80   0.57   0.57    0.41      0.71
L2G-NeRF        0.85   0.92   0.65   0.89   0.87   0.47    0.81      –
Joint-TensoRF   0.74   0.90   0.84   0.60   0.24   0.34    0.35      0.76
PoRF            0.92   0.78   0.92   0.39   0.35   –       –         –
SG-NeRF         0.74   0.93   0.71   0.96   0.87   0.76    0.94      0.92
Ours            0.96   0.95   0.94   0.95   0.96   0.83    0.93      0.92