3D content creators are clamoring for NVIDIA Instant NeRF, an inverse rendering tool that turns a set of static images into a realistic 3D scene.
Since its debut earlier this year, tens of thousands of developers around the world have downloaded the source code and used it to render spectacular scenes, sharing eye-catching results on social media.
The research behind Instant NeRF is being honored as a best paper at SIGGRAPH — which runs Aug. 8-11 in Vancouver and online — for its contribution to the future of computer graphics research. One of just five papers selected for this award, it’s among 17 papers and workshops with NVIDIA authors that are being presented at the conference, covering topics spanning neural rendering, 3D simulation, holography and more.
NVIDIA recently held an Instant NeRF sweepstakes, asking developers to share 3D scenes created with the software for a chance to win a high-end NVIDIA GPU. Hundreds participated, posting 3D scenes of landmarks like Stonehenge, their backyards and even their pets.
Among the creators using Instant NeRF are:
Through the Looking Glass: Karen X. Cheng and James Perlman
San Francisco-based creative director Karen X. Cheng is working with software engineer James Perlman to render 3D scenes that test the boundaries of what Instant NeRF can create.
The duo has used Instant NeRF to create scenes that explore reflections within a mirror and handle complex environments with multiple people, like a group enjoying ramen at a restaurant.
“The algorithm itself is groundbreaking — the fact that you can render a physical scene with higher fidelity than normal photogrammetry techniques is just astounding,” Perlman said. “It’s incredible how accurately you can reconstruct lighting, color differences or other tiny details.”
“It even makes mistakes look artistic,” said Cheng. “We really lean into that, and sometimes play with training a scene less, experimenting with 1,000, 5,000 or 50,000 iterations. Sometimes I’ll prefer the ones trained less because the edges are softer and you get an oil-painting effect.”
Prior tools took them three or four days to train a “decent-quality” scene. With Instant NeRF, the pair can churn out about 20 a day, using an NVIDIA RTX A6000 GPU to render, train and preview their 3D scenes.
With rapid rendering comes faster iteration.
“Being able to render quickly is very necessary for the creative process. We’d meet up and shoot 15 or 20 different versions, run them overnight and then see what’s working,” said Cheng. “Everything we’ve published has been shot and reshot a dozen times, which is only possible when you can run several scenes a day.”
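For readers curious why lightly trained scenes look softer: a coordinate network starts out predicting a blurry average and adds high-frequency detail as optimization proceeds, so stopping early leaves the painterly edges Cheng describes. The toy sketch below (plain PyTorch, not Instant NeRF's CUDA-accelerated hash-grid implementation) fits a tiny network to placeholder image data and snapshots it at the iteration counts she mentions; every name and value here is illustrative.

```python
# Toy sketch: early stopping on a coordinate MLP. Not Instant NeRF itself;
# the target colors below are a synthetic placeholder for a real photo.
import torch
import torch.nn as nn

H = W = 64
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
# Smooth placeholder "image": each pixel's RGB derived from its position.
target = torch.stack([
    (coords[:, 0] + 1) / 2,
    (coords[:, 1] + 1) / 2,
    (coords[:, 0] * coords[:, 1] + 1) / 2,
], dim=-1)

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

snapshots = {}
for step in range(1, 50_001):
    loss = ((model(coords) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step in (1_000, 5_000, 50_000):  # the counts Cheng experiments with
        snapshots[step] = model(coords).detach().reshape(H, W, 3)
```

Viewing the three snapshots side by side shows the same progression from soft to sharp that the duo plays with, just in two dimensions instead of three.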
Preserving Moments in Time: Hugues Bruyère
Hugues Bruyère, partner and chief of innovation at Dpt., a Montreal-based creative studio, uses Instant NeRF daily.
“3D captures have always been of strong interest to me because I can go back to those volumetric reconstructions and move in them, adding an extra dimension of meaning to them,” he said.
Bruyère rendered 3D scenes with Instant NeRF using data he’d previously captured for traditional photogrammetry with mirrorless digital cameras, smartphones, 360 cameras and drones. He uses an NVIDIA GeForce RTX 3090 GPU to render his Instant NeRF scenes.
Bruyère believes Instant NeRF could be a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects.
“The aspect of capturing itself is being democratized, as camera and software solutions become cheaper,” he said. “In a few months or years, people will be able to capture objects, places, moments and memories and have them volumetrically rendered in real time, shareable and preserved forever.”
Using pictures taken with a smartphone, Bruyère created an Instant NeRF render of an ancient marble statue of Zeus from an exhibition at Toronto’s Royal Ontario Museum.
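For creators like Bruyère who already shoot for photogrammetry, the bridge into Instant NeRF is a transforms.json file recording each photo's camera pose, in the format popularized by the original NeRF synthetic datasets. Below is a minimal, hypothetical sketch of that layout; the circular placeholder poses stand in for real ones, which in practice come from a structure-from-motion tool such as COLMAP.

```python
# Hypothetical sketch of the transforms.json layout a NeRF pipeline reads.
# The poses here are placeholders, not real camera calibration.
import json
import math

frames = []
for i in range(4):  # placeholder: four cameras on a ring around the subject
    a = 2 * math.pi * i / 4
    # Camera-to-world matrix: position on a circle of radius 2.
    # (Identity rotation for brevity; real rotations come from COLMAP.)
    frames.append({
        "file_path": f"./images/{i:04d}.jpg",
        "transform_matrix": [
            [1.0, 0.0, 0.0, 2.0 * math.cos(a)],
            [0.0, 1.0, 0.0, 2.0 * math.sin(a)],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0],
        ],
    })

scene = {
    "camera_angle_x": math.radians(60),  # horizontal field of view, radians
    "aabb_scale": 4,  # instant-ngp extension: scene bounding-box size
    "frames": frames,
}

with open("transforms.json", "w") as f:
    json.dump(scene, f, indent=2)
```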
Stepping Into Remote Scenes: Jonathan Stephens
Jonathan Stephens, chief evangelist for spatial computing company EveryPoint, has been exploring Instant NeRF for both creative and practical applications.
EveryPoint reconstructs 3D scenes such as stockpiles, railyards and quarries to help businesses manage their resources. With Instant NeRF, Stephens can capture a scene more completely, allowing clients to explore it freely. He uses an NVIDIA GeForce RTX 3080 GPU to run scenes rendered with Instant NeRF.
“What I really like about Instant NeRF is that you quickly know if your render is working,” Stephens said. “With a large photogrammetry set, you could be waiting hours or days. Here, I can test out a bunch of different datasets and know within minutes.”
He’s also experimented with making NeRFs using footage from lightweight devices like smart glasses. Instant NeRF could turn the low-resolution, bumpy footage captured as Stephens walked down the street into a smooth 3D scene.
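A common first step in experiments like this, before any camera poses are estimated, is sampling still frames from the footage. Below is a minimal sketch assuming OpenCV is installed; the filename, output folder and sampling rate are all placeholders.

```python
# Minimal sketch: pull ~2 still frames per second from handheld video
# so they can be fed to a NeRF pipeline. Paths and rates are placeholders.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("walkthrough.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
step = max(1, int(round(fps / 2)))        # keep ~2 frames per second

i = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % step == 0:
        cv2.imwrite(f"frames/{saved:04d}.jpg", frame)
        saved += 1
    i += 1
cap.release()
```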
Find NVIDIA at SIGGRAPH
Tune in for a special address by NVIDIA CEO Jensen Huang and other senior leaders on Tuesday, Aug. 9, at 9 a.m. PT to hear about the research and technology behind AI-powered virtual worlds.
NVIDIA is also presenting a score of in-person and virtual sessions for SIGGRAPH attendees, including:
Learn how to create with Instant NeRF in the hands-on demo, NVIDIA Instant NeRF — Getting Started With Neural Radiance Fields. Instant NeRF will also be part of SIGGRAPH’s “Real-Time Live” showcase — where in-person attendees can vote for a winning project.
For more interactive sessions, the NVIDIA Deep Learning Institute is offering free hands-on training with NVIDIA Omniverse and other 3D graphics technologies for in-person conference attendees.
And peek behind the scenes of NVIDIA GTC in the documentary premiere, The Art of Collaboration: NVIDIA, Omniverse, and GTC, taking place Aug. 10 at 10 a.m. PT, to learn how NVIDIA’s creative, engineering and research teams used the company’s technology to deliver the visual effects in the latest GTC keynote address.
Find out more about NVIDIA at SIGGRAPH, and see a full schedule of events and sessions in this show guide.