Peking University and Gaode Maps jointly developed Orbit2Ground, a generative photogrammetry method that reconstructs high-fidelity 3D city models from sparse satellite images. It leverages urban geometric priors and generative AI to tackle the extreme viewpoint extrapolation problem of inferring ground-level views from top-down imagery. This enables efficient creation of digital city twins for gaming, drone logistics, and emergency systems.
Key Points
1. A joint effort by Peking University and Gaode Maps, led by Prof. Chen Baoquan and his team.
2. Uses urban geometry priors and generative AI to reconstruct realistic facades from satellite views.
3. Addresses the distortions that SOTA methods such as NeRF, 3DGS, and CityGS-X exhibit under extreme viewpoint changes.
4. Demonstrates applications in GTA6-scale game cities, drone flight paths, and urban digital bases.
Impact Analysis
This breakthrough lowers the barrier to creating detailed 3D urban models, shifting from costly manual modeling or laser scanning to readily available satellite data. It benefits simulation, AR/VR, urban planning, and robotics by providing scalable, high-quality digital twins.
Technical Details
Orbit2Ground combines explicit urban geometric priors with diffusion-based generative models to infer side views from overhead satellite imagery. It recovers accurate roof geometry from above while producing realistic facade textures at street level, outperforming prior NeRF/3DGS approaches.
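
To make the pipeline idea concrete, below is a minimal, illustrative Python sketch of a satellite-to-facade workflow: extrude coarse box buildings from a satellite-derived height map (the urban geometric prior), then hand the facades the satellite never observes to a generative texture step. This is not the authors' released code; every function name, data layout, and the placeholder synthesis step are assumptions for illustration only.

```python
# Illustrative sketch only -- not the Orbit2Ground implementation. Assumes the
# pipeline can be summarized as: (1) recover coarse building geometry from
# satellite data via an extrusion prior, (2) synthesize the unseen facade
# textures with a generative model (replaced here by a trivial stand-in).
import numpy as np


def label_components(mask: np.ndarray) -> np.ndarray:
    """Minimal 4-connected component labelling via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = current
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels


def extrude_buildings(height_map: np.ndarray, footprint_mask: np.ndarray,
                      ground_res_m: float = 0.5) -> list:
    """Turn a satellite-derived height map into simple box buildings.

    Urban geometric prior: buildings are vertical extrusions of their roof
    footprints, so side walls (facades) can be inferred even though the
    satellite never sees them directly.
    """
    buildings = []
    labels = label_components(footprint_mask)
    for lbl in range(1, labels.max() + 1):
        ys, xs = np.nonzero(labels == lbl)
        h = float(height_map[ys, xs].mean())                  # roof height
        x0, x1 = xs.min() * ground_res_m, (xs.max() + 1) * ground_res_m
        y0, y1 = ys.min() * ground_res_m, (ys.max() + 1) * ground_res_m
        # Four vertical facade quads per box; none is visible from orbit.
        facades = [
            {"corners": [(x0, y0, 0), (x1, y0, 0), (x1, y0, h), (x0, y0, h)]},
            {"corners": [(x1, y0, 0), (x1, y1, 0), (x1, y1, h), (x1, y0, h)]},
            {"corners": [(x1, y1, 0), (x0, y1, 0), (x0, y1, h), (x1, y1, h)]},
            {"corners": [(x0, y1, 0), (x0, y0, 0), (x0, y0, h), (x0, y1, h)]},
        ]
        buildings.append({"height_m": h, "facades": facades})
    return buildings


def synthesize_facade_texture(facade: dict, roof_texture: np.ndarray) -> np.ndarray:
    """Placeholder for the generative step.

    In the described system, a diffusion model conditioned on the coarse
    geometry and overhead appearance would hallucinate a plausible
    street-level facade; here we simply tile the roof texture as a stand-in.
    """
    return np.tile(roof_texture, (4, 1, 1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    height_map = np.zeros((16, 16))
    height_map[4:10, 5:12] = 30.0            # one 30 m tall building
    footprint = height_map > 0
    city = extrude_buildings(height_map, footprint)
    roof_patch = rng.random((8, 8, 3))        # fake overhead texture patch
    for building in city:
        for facade in building["facades"]:
            facade["texture"] = synthesize_facade_texture(facade, roof_patch)
    print(f"{len(city)} building(s), first height {city[0]['height_m']:.1f} m")
```

The key design point the sketch illustrates is the division of labor: the extrusion prior supplies globally consistent geometry from a single overhead view, while the generative step only has to fill in appearance for surfaces the geometry already places.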