AI-generated imagery and 3D content have come a long way in a very short space of time. It was only two years ago that researchers at UC Berkeley and Google revealed NeRF, or Neural Radiance Fields, and less than two weeks ago NVIDIA blew us away with near real-time generation of 3D scenes from just a few dozen still photographs using their “Instant NeRF” technique.
Well, now, a new paper has been released by the folks at Waymo describing “Block-NeRF”, a technique for “scalable large scene neural view synthesis” – basically, generating really, really large environments by splitting a scene into many smaller NeRFs and blending between them. As a proof of concept, they recreated an entire neighborhood of San Francisco from 2.8 million photographs. And in this video, Károly Zsolnai-Fehér of Two Minute Papers explains how it all works.
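If you're curious how the blending step works, here's a minimal, purely illustrative Python sketch of the compositing idea the paper describes: the city is divided into overlapping blocks, each with its own trained NeRF, and a novel view is composited from the blocks near the camera using inverse distance weighting. The `render_block` function here is a hypothetical placeholder (a real Block-NeRF is a full neural network per block), and the `radius` and `p` values are made-up illustration parameters, not the paper's.

```python
import numpy as np

def render_block(block_id, camera_pos, resolution=(4, 4)):
    # Hypothetical stand-in for one trained per-block NeRF renderer.
    # Returns a dummy RGB image so the compositing logic below is runnable.
    rng = np.random.default_rng(int(block_id))
    return rng.random((*resolution, 3))

def blend_blocks(camera_pos, block_centers, radius=2.0, p=4):
    """Composite renders from all blocks near the camera, weighting each
    by inverse distance from the camera to the block's center."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    dists = np.linalg.norm(block_centers - camera_pos, axis=1)
    visible = np.where(dists < radius)[0]  # only nearby blocks contribute
    if visible.size == 0:
        raise ValueError("camera is not covered by any block")
    weights = dists[visible] ** -p   # closer blocks get larger weights
    weights /= weights.sum()         # normalize so weights sum to 1
    renders = [render_block(i, camera_pos) for i in visible]
    return sum(w * img for w, img in zip(weights, renders))

# Blocks laid out along a street, one NeRF per intersection.
block_centers = np.array([[x, 0.0, 0.0] for x in range(10)])
image = blend_blocks(camera_pos=[3.4, 0.0, 0.0], block_centers=block_centers)
print(image.shape)  # (4, 4, 3)
```

The appeal of this decomposition is that each NeRF stays small enough to train, and individual blocks can be retrained when part of the environment changes, without rebuilding the whole city.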