Paper | Video | Technical Report (UniOcc)
(Visualization of RenderOcc's prediction, which is supervised only with 2D labels.)
RenderOcc is a novel paradigm for training vision-centric 3D occupancy models using only 2D labels. Specifically, we extract a NeRF-style 3D volume representation from multi-view images and apply volume rendering techniques to generate 2D renderings, thus enabling direct 3D supervision from only 2D semantic and depth labels.
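To make the 2D-rendering supervision concrete, here is a minimal, single-ray PyTorch sketch of NeRF-style compositing followed by 2D semantic and depth losses. The function names, sampling scheme, and loss weighting are illustrative assumptions for exposition, not the actual RenderOcc implementation (see the paper and the configs for the real rendering heads and ray sampling).

```python
import torch
import torch.nn.functional as F

def render_ray(density, semantics, deltas):
    """NeRF-style compositing of per-sample density and semantics along one ray.

    density:   (N,)   non-negative volume density at N samples along the ray
    semantics: (N, C) per-sample semantic logits for C classes
    deltas:    (N,)   spacing between consecutive samples
    """
    alpha = 1.0 - torch.exp(-density * deltas)                    # per-sample opacity
    trans = torch.cumprod(
        torch.cat([alpha.new_ones(1), 1.0 - alpha + 1e-10]), dim=0
    )[:-1]                                                        # accumulated transmittance
    weights = alpha * trans                                       # contribution of each sample
    sem_2d = (weights[:, None] * semantics).sum(dim=0)            # rendered semantic logits, shape (C,)
    depth_2d = (weights * torch.cumsum(deltas, dim=0)).sum()      # rendered expected depth (scalar)
    return sem_2d, depth_2d

def rendering_loss(sem_2d, depth_2d, sem_label, depth_label):
    """Supervise one rendered pixel with its 2D semantic and depth labels."""
    loss_sem = F.cross_entropy(sem_2d.unsqueeze(0), sem_label.view(1))  # sem_label: 0-d long tensor
    loss_depth = F.l1_loss(depth_2d, depth_label)                       # depth_label: 0-d float tensor
    return loss_sem + loss_depth                                        # equal weighting is an assumption
```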
Train
```bash
# Train RenderOcc with 8 GPUs
./tools/dist_train.sh ./configs/renderocc/renderocc-7frame.py 8
```
Evaluation
```bash
# Eval RenderOcc with 8 GPUs
./tools/dist_test.sh ./configs/renderocc/renderocc-7frame.py ./path/to/ckpts.pth 8
```
Visualization
```bash
# Dump predictions
bash tools/dist_test.sh configs/renderocc/renderocc-7frame.py renderocc-7frame-12e.pth 1 --dump_dir=work_dirs/output

# Visualization (select scene-id)
python tools/visualization/visual.py work_dirs/output/scene-xxxx
```
(The pkl file needs to be regenerated for visualization.)
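For a quick sanity check of a predicted occupancy grid outside the repo's own visualizer, a bird's-eye-view snapshot can be handy. The sketch below assumes a semantic voxel grid of shape (X, Y, Z) with integer class ids and a dedicated free-space id; the stand-in random array should be replaced with a prediction loaded from `work_dirs/output`, whose exact file format is defined by the dump code rather than by this snippet.

```python
import numpy as np
import matplotlib.pyplot as plt

FREE_ID = 17  # assumption: id reserved for free space (Occ3D-nuScenes-style labels)

# Stand-in for a loaded prediction; replace with an array read from work_dirs/output.
occ = np.random.randint(0, FREE_ID + 1, size=(200, 200, 16))

occupied = occ != FREE_ID
heights = np.arange(occ.shape[2])[None, None, :]
# Bird's-eye view: for each (x, y) column, take the class of its highest occupied voxel.
top_idx = np.where(occupied, heights, -1).max(axis=2)
top_cls = np.take_along_axis(occ, np.maximum(top_idx, 0)[..., None], axis=2)[..., 0]
bev = np.where(top_idx >= 0, top_cls, FREE_ID)

plt.imshow(bev.T, origin="lower", cmap="tab20")
plt.title("BEV of predicted occupancy (top occupied voxel per column)")
plt.show()
```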
| Method | Backbone | 2D-to-3D | Lr Schd | GT | mIoU | Config | Log | Download |
|---|---|---|---|---|---|---|---|---|
| RenderOcc | Swin-Base | BEVStereo | 12ep | 2D | 24.46 | config | log | model |
- More model weights will be released later.
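For context, the mIoU column is the mean intersection-over-union over the semantic occupancy classes, computed on the voxel grid. A minimal sketch of that metric, assuming integer class-id grids and an ignore label for unobserved voxels (not the benchmark's official evaluator), might look like this:

```python
import numpy as np

def occupancy_miou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over semantic classes for two integer voxel grids of class ids.

    pred, gt: arrays of identical shape, e.g. (X, Y, Z).
    Voxels labeled `ignore_index` in gt (e.g. unobserved space) are skipped.
    """
    valid = gt != ignore_index
    p_valid, g_valid = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        p, g = p_valid == c, g_valid == c
        union = np.logical_or(p, g).sum()
        if union == 0:            # class absent from both prediction and GT: skip it
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```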
Many thanks to these excellent open source projects:
Related Projects:
If this work is helpful for your research, please consider citing:
```bibtex
@inproceedings{pan2024renderocc,
  title={RenderOcc: Vision-Centric 3D Occupancy Prediction with 2D Rendering Supervision},
  author={Pan, Mingjie and Liu, Jiaming and Zhang, Renrui and Huang, Peixiang and Li, Xiaoqi and Xie, Hongwei and Wang, Bing and Liu, Li and Zhang, Shanghang},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={12404--12411},
  year={2024},
  organization={IEEE}
}
```