Back in 2007, Microsoft Live Labs (a group inside Microsoft Research) announced the Photosynth project. Using photos available on the web, Photosynth stitched them into navigable 3D scenes. The Photosynth.net website let users upload multiple photos of the same location or object and generate a 3D model and a point cloud of the photographed subject.
The Google Research team today unveiled a new project called NeRF in the Wild, or NeRF-W (Neural Radiance Fields for Unconstrained Photo Collections). I think of NeRF-W as Photosynth on steroids. Like Photosynth, NeRF-W can reconstruct 3D scenes from internet photographs, and thanks to the advanced image processing algorithms available today, the results are simply fantastic. NeRF-W can even compensate for the varying lighting across photos to produce a consistent final 3D scene.
NeRF-W captures lighting and photometric post-processing in a low-dimensional latent embedding space. Interpolating between two embeddings smoothly varies the appearance of the scene without affecting its 3D geometry. In other words, NeRF-W disentangles lighting from the underlying scene geometry: the latter remains consistent even as the former changes.
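To get an intuition for this disentanglement, here is a minimal toy sketch (not the paper's actual architecture; all weights, sizes, and names are illustrative assumptions). Density depends on position alone, while color is additionally conditioned on a per-image appearance embedding, so interpolating embeddings changes appearance but can never move geometry:

```python
import math
import random

random.seed(0)

EMBED_DIM = 8  # size of the latent appearance embedding (an assumption)

# Toy linear "branches" standing in for the two MLP heads.
w_density = [random.gauss(0, 1) for _ in range(3)]
w_color = [random.gauss(0, 1) for _ in range(3 + EMBED_DIM)]

def density(xyz):
    # Geometry branch: depends on position only, so changing the
    # appearance embedding can never move surfaces.
    return max(sum(w * x for w, x in zip(w_density, xyz)), 0.0)

def color(xyz, embedding):
    # Appearance branch: conditioned on the per-image latent embedding.
    features = list(xyz) + list(embedding)
    s = sum(w * f for w, f in zip(w_color, features))
    return 1.0 / (1.0 + math.exp(-s))  # squash to (0, 1)

# Two per-image appearance embeddings (say, a daytime and a dusk photo).
emb_day = [random.gauss(0, 1) for _ in range(EMBED_DIM)]
emb_dusk = [random.gauss(0, 1) for _ in range(EMBED_DIM)]

point = (0.3, -1.2, 0.8)

# Interpolating between the two embeddings smoothly varies appearance...
for t in (0.0, 0.5, 1.0):
    emb = [(1 - t) * a + t * b for a, b in zip(emb_day, emb_dusk)]
    print(f"t={t:.1f}  color={color(point, emb):.3f}")

# ...while geometry stays fixed: density never sees the embedding.
print(f"density={density(point):.3f}")
```

Because the density branch has no path from the embedding to its output, lighting edits are structurally incapable of corrupting the reconstructed geometry, which is the key idea behind the consistent results.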
Check out some of the results below.
You can learn more about this project from the video below.