Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis

Ajay Jain, Matthew Tancik, Pieter Abbeel
UC Berkeley

Paper (arXiv) · Code (coming soon!)

Summary

  • Task: render a scene from novel poses given just a few photos.
  • Neural Radiance Fields (NeRF) produce crisp renderings from 20-100 photos, but overfit when given only a few.
  • Problem: NeRF is supervised only at the observed poses, which leads to artifacts when those poses are sparse.
  • Key insight: scenes share high-level semantic properties across viewpoints, and pre-trained 2D visual encoders can extract these semantics. "An X is an X from any viewpoint."
  • Our proposed DietNeRF supervises NeRF from arbitrary poses by encouraging renderings to have consistent high-level semantics, measured with the CLIP Vision Transformer (sketched in the code below).
  • We generate plausible novel views given 1-8 views of a test scene.
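
The core of this supervision is simple to sketch. Below is a minimal PyTorch-style version of the semantic consistency loss, assuming the openai/clip package; the function name and tensor shapes are illustrative, not the released implementation:

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI's CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen; gradients flow into NeRF only

def semantic_consistency_loss(rendered, target):
    """Negative cosine similarity between CLIP embeddings of a rendering
    from an arbitrary pose and an observed training image.
    rendered, target: (N, 3, 224, 224) batches in CLIP's input space.
    """
    z_rendered = F.normalize(clip_model.encode_image(rendered).float(), dim=-1)
    z_target = F.normalize(clip_model.encode_image(target).float(), dim=-1)
    # Views of the same scene should share semantics, so we maximize
    # cosine similarity (the loss is minimized when embeddings match).
    return -(z_rendered * z_target).sum(dim=-1).mean()
```

Because CLIP is trained on single-view 2D photographs, this loss needs no multi-view data; it only asks that two views of the same scene look like the same kind of thing.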
Qualitative results

Novel views synthesized given 8 training images per object.

Overview of DietNeRF

Our scene representation learns consistent high-level semantics.


Abstract

We present DietNeRF, a 3D neural scene representation estimated from a few images. Neural Radiance Fields (NeRF) learn a continuous volumetric representation of a scene through multi-view consistency, and can be rendered from novel viewpoints by ray casting. While NeRF has an impressive ability to reconstruct geometry and fine details given many images, up to 100 for challenging 360° scenes, it often finds a degenerate solution to its image reconstruction objective when only a few input views are available. To improve few-shot quality, we propose DietNeRF. We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. DietNeRF is trained on individual scenes to (1) correctly render given input views from the same pose, and (2) match high-level semantic attributes across different, random poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse single-view, 2D photographs mined from the web with natural language supervision. In experiments, DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
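To make the two training objectives concrete, here is a sketch of a single DietNeRF-style iteration, continuing the snippet above. `sample_rays`, `render_rays`, `render_image`, and `sample_random_pose` are placeholders for standard NeRF machinery, and `lambda_sc` is an assumed loss weight; none of these names come from the released code:

```python
def training_step(nerf, optimizer, images, poses, lambda_sc=0.1):
    """One DietNeRF-style iteration (sketch): pixel MSE at an observed
    pose plus semantic consistency at a random pose."""
    optimizer.zero_grad()

    # (1) Reconstruction: render rays from a known camera and match pixels.
    i = torch.randint(len(images), (1,)).item()
    rays, pixels = sample_rays(images[i], poses[i])
    mse = F.mse_loss(render_rays(nerf, rays), pixels)

    # (2) Semantic consistency: render a full (low-resolution) image from
    # an arbitrary pose and compare CLIP embeddings with a training image.
    rendering = render_image(nerf, sample_random_pose(), hw=(224, 224))
    sc = semantic_consistency_loss(rendering[None], images[i][None])

    loss = mse + lambda_sc * sc
    loss.backward()
    optimizer.step()
    return loss.item()
```

Rendering a full image for CLIP costs far more than rendering a batch of rays, so the paper amortizes this by rendering at low resolution and evaluating the semantic loss only every few iterations rather than at every step.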


Challenges of few-shot view synthesis

Few-shot novel view synthesis is a challenging problem. (A) With 100 observations of an object, NeRF estimates a detailed and accurate representation purely from multi-view consistency. (B) However, with 8 views, the same NeRF overfits by placing the object in the near-field of the training cameras. (C) NeRF can converge when simplified and tuned, but poorly captures fine detail. (D) Without prior knowledge about similar objects, single-scene view synthesis cannot plausibly complete unobserved regions, such as the left side of an object seen from the right. In this work, we find that these failures occur because NeRF is only supervised from the sparse training poses.
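
The root cause is visible in NeRF's objective. Omitting the coarse/fine network split, NeRF minimizes only a photometric error over rays cast from the observed poses, where \(\hat{C}(\mathbf{r})\) is the volume-rendered ray color and \(C(\mathbf{r})\) the ground-truth pixel:

```latex
\mathcal{L}_{\text{MSE}} = \sum_{\mathbf{r} \in \mathcal{R}(p)} \left\lVert \hat{C}(\mathbf{r}) - C(\mathbf{r}) \right\rVert_2^2
```

Nothing in this loss constrains renderings from unobserved poses, which is exactly the gap the semantic consistency loss fills.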


Training from scratch


Training images

By training with our semantic consistency loss, DietNeRF renders plausible novel views given only the 8 training images per object shown above.


Using only a single view


Semantic consistency improves perceptual quality from a single input view. Fine-tuning pixelNeRF with NeRF's MSE loss slightly improves a rendering of the input view, but does not remove most perceptual flaws, such as blurriness, in novel views. Fine-tuning with both the MSE and semantic consistency losses (DietPixelNeRF, bottom) improves the sharpness of all views.
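
As a rough sketch of this setup, fine-tuning applies the same combined objective starting from pre-trained pixelNeRF weights, reusing the `training_step` above; `pixelnerf`, the learning rate, and the iteration count below are all illustrative assumptions:

```python
# Hypothetical DietPixelNeRF fine-tuning: start from pre-trained pixelNeRF
# weights and run the combined MSE + semantic loss on the single input view.
optimizer = torch.optim.Adam(pixelnerf.parameters(), lr=1e-5)  # lr is illustrative
for step in range(1000):  # iteration count is illustrative
    training_step(pixelnerf, optimizer, images=[input_view], poses=[input_pose])
```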

Citation

Ajay Jain, Matthew Tancik, Pieter Abbeel. Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis. arXiv, 2021.

@article{jain2021dietnerf,
  title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},
  author={Ajay Jain and Matthew Tancik and Pieter Abbeel},
  year={2021},
  journal={arXiv},
}