Dynamic LiDAR Re-simulation using Compositional Neural Fields

ETH Zurich · TU Munich · Technion · NVIDIA
CVPR 2024


Abstract


We introduce DyNFL, a novel neural field-based approach for high-fidelity re-simulation of LiDAR scans in dynamic driving scenes. DyNFL processes LiDAR measurements from dynamic environments, accompanied by bounding boxes of moving objects, to construct an editable neural field. This field, comprising separately reconstructed static backgrounds and dynamic objects, allows users to modify viewpoints, adjust object positions, and seamlessly add or remove objects in the re-simulated scene. A key innovation of our method is the neural field composition technique, which effectively integrates reconstructed neural assets from various scenes through a ray drop test, accounting for occlusions and transparent surfaces. Our evaluation with both synthetic and real-world environments demonstrates that DyNFL substantially improves dynamic scene simulation based on LiDAR scans, offering a combination of physical fidelity and flexible editing capabilities.

Overview


Our method takes LiDAR scans and tracked bounding boxes of dynamic vehicles as input. DyNFL first decomposes the scene into a static background and N dynamic vehicles, each modelled using a dedicated neural field. These neural fields are then composed to re-simulate LiDAR scans in dynamic scenes. Our composition technique supports various scene edits, including altering object trajectories and removing or adding reconstructed neural assets across scenes.
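The composition step can be pictured as a per-ray test: the static background field and every vehicle field (queried in its own object frame) each propose a candidate return, returns with a high ray-drop probability are discarded, and the closest surviving return wins so that nearer surfaces occlude farther ones. The sketch below illustrates this under the assumption that each field exposes a query_ray(origin, direction) call returning (range, intensity, drop probability); this interface, the names, and the 0.5 threshold are illustrative, not the released DyNFL code.

import numpy as np

def compose_lidar_return(ray_origin, ray_dir, static_field, vehicle_fields,
                         vehicle_poses, drop_threshold=0.5):
    """Compose one simulated LiDAR return from the static field and N vehicle fields.

    Each field is assumed to expose query_ray(origin, direction) -> (range_m,
    intensity, drop_prob) or None when the ray misses its geometry. This interface
    and the 0.5 drop threshold are illustrative assumptions, not the DyNFL API.
    """
    candidates = []

    # The static background is queried directly in world coordinates.
    hit = static_field.query_ray(ray_origin, ray_dir)
    if hit is not None:
        candidates.append(hit)

    # Each dynamic vehicle field lives in its own object frame, so the ray is
    # transformed with the inverse of that frame's world-from-object pose.
    for field, world_from_obj in zip(vehicle_fields, vehicle_poses):
        obj_from_world = np.linalg.inv(world_from_obj)
        o = obj_from_world[:3, :3] @ ray_origin + obj_from_world[:3, 3]
        d = obj_from_world[:3, :3] @ ray_dir
        hit = field.query_ray(o, d)
        if hit is not None:
            candidates.append(hit)

    # Ray drop test: discard returns that are likely dropped, then keep the
    # closest surviving one so that nearer surfaces occlude farther ones.
    surviving = [c for c in candidates if c[2] < drop_threshold]
    if not surviving:
        return None  # the composed ray produces no return
    return min(surviving, key=lambda c: c[0])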


LiDAR novel view synthesis (NVS) by editing the scene


DyNFL enables LiDAR novel view synthesis with scene edits such as vehicle removal, trajectory manipulation, and vehicle insertion; a sketch of how these edits act on the composed scene follows the examples below. For each example, the original scene (left) and the edited scene (right) are shown.


Vehicle removal


Trajectory manipulation


Vehicle insertion
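Because the dynamic scene is represented as a static field plus a list of per-vehicle fields with tracked poses, the three edits shown above reduce to modifying that list before the composition step. A minimal sketch, assuming the composed scene is kept as a plain dictionary with a "vehicles" entry; this data layout is illustrative, not the actual DyNFL code.

from copy import deepcopy

def remove_vehicle(scene, vehicle_id):
    """Drop one reconstructed vehicle asset (and its trajectory) before composition."""
    scene = deepcopy(scene)
    scene["vehicles"] = [v for v in scene["vehicles"] if v["id"] != vehicle_id]
    return scene

def edit_trajectory(scene, vehicle_id, new_poses):
    """Replace the per-frame world-from-object poses of one vehicle."""
    scene = deepcopy(scene)
    for v in scene["vehicles"]:
        if v["id"] == vehicle_id:
            v["poses"] = new_poses  # list of 4x4 transforms, one per LiDAR frame
    return scene

def insert_vehicle(scene, asset, poses):
    """Add a neural asset reconstructed in another scene, with a user-chosen trajectory."""
    scene = deepcopy(scene)
    scene["vehicles"].append({"id": asset["id"], "field": asset["field"], "poses": poses})
    return scene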


LiDAR NVS with varying sensor configuration


LiDAR novel view synthesis obtained by changing the sensor elevation angle θ, pose (x, y, z), and number of beams on the Waymo Dynamic dataset. Points are color-coded by intensity (0 to 0.25).
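Re-simulating a different sensor configuration only changes the set of rays that query the composed fields; the reconstructed scene itself is untouched. Below is a rough sketch of generating rays for a spinning LiDAR with a configurable beam layout; the default beam count, elevation range, and azimuth resolution are illustrative placeholders, not the Waymo sensor specification.

import numpy as np

def lidar_rays(num_beams=64, elevation_range_deg=(-17.6, 2.4),
               azimuth_steps=2650, sensor_pose=np.eye(4)):
    """Generate ray origins and directions for a spinning LiDAR with a configurable layout."""
    elev = np.deg2rad(np.linspace(*elevation_range_deg, num_beams))
    azim = np.linspace(-np.pi, np.pi, azimuth_steps, endpoint=False)
    elev, azim = np.meshgrid(elev, azim, indexing="ij")

    # Unit directions in the sensor frame, then rotated and translated into the world frame.
    dirs = np.stack([np.cos(elev) * np.cos(azim),
                     np.cos(elev) * np.sin(azim),
                     np.sin(elev)], axis=-1).reshape(-1, 3)
    dirs = dirs @ sensor_pose[:3, :3].T
    origins = np.broadcast_to(sensor_pose[:3, 3], dirs.shape)
    return origins, dirs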


Qualitative results for LiDAR range estimation


Qualitative results for range estimation on the Waymo Dynamic dataset. Dynamic vehicles are zoomed in, and points are color-coded by range error (-100 to 100 cm).



Qualitative results for range estimation on static scenes (Waymo NVS and TownReal). Points are color-coded by range error (-100 to 100 cm), and regions with gross errors are highlighted.
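The color coding in these figures corresponds to the signed range error (simulated minus ground-truth range), clipped to ±100 cm. A small sketch of that mapping follows; the specific colormap is our choice for illustration, not necessarily the one used in the paper.

import numpy as np
import matplotlib.cm as cm

def range_error_colors(sim_range_m, gt_range_m, limit_cm=100.0):
    """Map signed range error (simulated minus ground truth) to RGB, clipped to +/- limit_cm."""
    err_cm = np.clip((np.asarray(sim_range_m) - np.asarray(gt_range_m)) * 100.0,
                     -limit_cm, limit_cm)
    # Normalize to [0, 1], look up colors, and drop the alpha channel.
    return cm.coolwarm((err_cm + limit_cm) / (2.0 * limit_cm))[:, :3]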


Citation


@inproceedings{Wu2023dynfl,
  title     = {Dynamic LiDAR Re-simulation using Compositional Neural Fields},
  author    = {Wu, Hanfeng and Zuo, Xingxing and Leutenegger, Stefan and Litany, Or and Schindler, Konrad and Huang, Shengyu},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}