Semantic 3D Reconstruction of Urban Scenes

The goal of this project is the automatic generation of semantically annotated 3D city models from image collections. The idea of semantic 3D reconstruction is to recover the geometry of an observed scene while at the same time interpreting the scene in terms of semantic object classes (such as buildings, vegetation, etc.), much like a human operator who interprets the image content while making measurements. The advantage of jointly reasoning about shape and object class is that one can exploit class-specific a priori knowledge about the geometry: on the one hand, the type of object provides information about its shape, e.g. walls are likely to be vertical, whereas streets are not; on the other hand, 3D geometry is also an important cue for classification, e.g. vertical surfaces are more likely to be walls than streets. In our research, we address this cross-fertilization by developing methods which jointly infer 3D shapes and semantic classes, leading to superior, interpreted 3D city models which allow for realistic applications and advanced reasoning tasks.
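To make the cross-fertilization concrete, the following minimal Python sketch combines a hypothetical image-based class probability with a class-specific orientation prior, so that a surface with ambiguous appearance but vertical orientation is pushed towards the wall class. The class names, prior functions and numbers are illustrative assumptions only, not the actual joint formulation used in this project.

    # Illustrative sketch of jointly scoring semantics and geometry.
    # All classes, priors and probabilities below are assumed for demonstration.
    import numpy as np

    UP = np.array([0.0, 0.0, 1.0])  # up-vector of the scene

    def orientation_prior(class_name: str, normal: np.ndarray) -> float:
        """Score in [0, 1] for how well a surface normal matches the
        assumed geometry of a semantic class."""
        n = normal / np.linalg.norm(normal)
        vertical_surface = 1.0 - abs(np.dot(n, UP))   # 1 if the surface is vertical (wall-like)
        horizontal_surface = abs(np.dot(n, UP))       # 1 if the surface is horizontal (street-like)
        priors = {
            "wall": vertical_surface,
            "street": horizontal_surface,
            "vegetation": 0.5,                        # no strong orientation preference
        }
        return priors[class_name]

    def joint_score(appearance_prob: dict, normal: np.ndarray) -> dict:
        """Combine image-based class probabilities with the class-specific
        geometric prior and return normalised joint scores per class."""
        scores = {c: appearance_prob[c] * orientation_prior(c, normal)
                  for c in appearance_prob}
        total = sum(scores.values())
        return {c: s / total for c, s in scores.items()}

    # Example: ambiguous appearance (wall vs. street equally likely) observed
    # on a vertical surface is resolved in favour of "wall" by the geometry.
    appearance = {"wall": 0.4, "street": 0.4, "vegetation": 0.2}
    vertical_normal = np.array([1.0, 0.0, 0.0])
    print(joint_score(appearance, vertical_normal))  # wall dominates

In a full semantic 3D reconstruction pipeline, such a coupling would of course be expressed as a joint optimisation over the whole scene rather than a per-surface score; the sketch only illustrates why geometry helps classification and vice versa.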

3D Reconstruction

New Publications:

Contact Details:
Maros Blaha, Audrey Richard, Jan Dirk Wegner