We present ScanNet++, a large-scale dataset that couples high-fidelity and commodity-level capture of the geometry and color of indoor scenes. Each scene is captured with a high-end laser scanner at sub-millimeter resolution, along with registered 33-megapixel images from a DSLR camera and RGB-D streams from an iPhone.
ScanNet++ enables a new real-world benchmark for novel view synthesis, both from high-quality DSLR capture and, importantly, from commodity-level images, as well as a new benchmark for 3D semantic scene understanding that comprehensively covers diverse and ambiguous semantic labeling scenarios.
Currently, ScanNet++ contains 460 scenes, 280,000 captured DSLR images, and over 3.7M iPhone RGB-D frames. Go to the ScanNet++ Dataset page to request access to the data.
We provide several tools to process the raw ScanNet++ data and use it for novel view synthesis and semantic understanding.
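As a minimal illustration of consuming such processed data for novel view synthesis, the sketch below parses per-frame camera poses from a nerfstudio-style `transforms.json`. The file name, the `frames`, `file_path`, and `transform_matrix` keys, and the camera-to-world convention are assumptions for illustration, not the official ScanNet++ toolkit API.

```python
import json

def load_frames(transforms):
    """Return (file_path, 4x4 camera-to-world pose) pairs from a parsed
    transforms dict. Assumes a nerfstudio-style layout (hypothetical here):
    {"frames": [{"file_path": ..., "transform_matrix": [[...]x4]}, ...]}."""
    frames = []
    for frame in transforms.get("frames", []):
        pose = frame["transform_matrix"]
        # Sanity-check the pose is a 4x4 matrix before keeping it.
        assert len(pose) == 4 and all(len(row) == 4 for row in pose)
        frames.append((frame["file_path"], pose))
    return frames

# Example with a single identity-pose frame (file path is illustrative):
example = {
    "frames": [
        {
            "file_path": "dslr/images/DSC00001.JPG",
            "transform_matrix": [
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
            ],
        }
    ]
}
print(load_frames(example)[0][0])  # dslr/images/DSC00001.JPG
```

In practice one would read the file with `json.load` and hand the resulting image/pose pairs to a view-synthesis pipeline.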
We take privacy very seriously. We have taken great care to ensure that the data is anonymized and does not contain any personally identifiable information. If you notice any privacy concerns, please contact us.
@inproceedings{yeshwanthliu2023scannetpp,
title={ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes},
author={Yeshwanth, Chandan and Liu, Yueh-Cheng and Nie{\ss}ner, Matthias and Dai, Angela},
booktitle={Proceedings of the International Conference on Computer Vision ({ICCV})},
year={2023}
}