19th 3D GeoInfo Conference 2024, Vigo, Spain, 1 - 3 July 2024, vol. 48, pp. 97-102
Semantic segmentation of 3D urban environments plays an important role in urban planning, management, and analysis. This paper explores the use of BuildingGNN, a deep learning framework for semantic segmentation of 3D building models, and the subsequent conversion of the resulting semantic labels into CityGML, the standardized format for 3D city models. The study begins with a methodology outlining the acquisition of a labelled dataset from BuildingNet and the preprocessing steps required for compatibility with BuildingGNN's architecture. The training process applies deep learning techniques tailored to 3D building structures, yielding insights into model performance metrics such as Intersection over Union (IoU) for several architectural components. Evaluation of the trained model highlights its accuracy and reliability, although challenges were observed, particularly in segmenting certain classes such as doors. The conversion of semantic labels into the CityGML format is also discussed, emphasizing the importance of data quality and meticulous annotation practices. The experiment described in the methodology shows that BuildingGNN segmentation outputs can be used to generate CityGML building elements with a reasonable degree of success. This work also reveals several challenges, such as identifying individual architectural elements from geometry groups. Improving the segmentation process is a direction we plan to investigate in future work.
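To make the label-to-CityGML conversion step concrete, the sketch below shows how per-surface semantic labels could be mapped to CityGML 2.0 boundary-surface elements. This is not the implementation described in the paper: the label set (wall, roof, floor, window, door), the `surfaces` input format (a label plus a closed 3D ring per surface), and the mapping table are assumptions made here for illustration only, using Python's standard XML tooling.

```python
# Minimal sketch (assumed, not the paper's implementation) of turning
# per-surface semantic labels into CityGML 2.0 boundary-surface elements.
import xml.etree.ElementTree as ET

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

# Assumed mapping from segmentation classes to CityGML surface types.
LABEL_TO_CITYGML = {
    "wall": "WallSurface",
    "roof": "RoofSurface",
    "floor": "GroundSurface",
    "window": "Window",   # openings; would be nested in a parent surface
    "door": "Door",
}

def polygon_xml(parent, ring):
    """Append a gml:Polygon built from a closed ring of (x, y, z) tuples."""
    poly = ET.SubElement(parent, f"{{{NS['gml']}}}Polygon")
    ext = ET.SubElement(poly, f"{{{NS['gml']}}}exterior")
    lin = ET.SubElement(ext, f"{{{NS['gml']}}}LinearRing")
    pos = ET.SubElement(lin, f"{{{NS['gml']}}}posList")
    pos.text = " ".join(f"{c:.3f}" for xyz in ring for c in xyz)

def building_from_labels(surfaces):
    """surfaces: iterable of (label, ring) pairs from the segmentation step."""
    bldg = ET.Element(f"{{{NS['bldg']}}}Building")
    for label, ring in surfaces:
        kind = LABEL_TO_CITYGML.get(label)
        if kind in ("WallSurface", "RoofSurface", "GroundSurface"):
            bounded = ET.SubElement(bldg, f"{{{NS['bldg']}}}boundedBy")
            surf = ET.SubElement(bounded, f"{{{NS['bldg']}}}{kind}")
            lod3 = ET.SubElement(surf, f"{{{NS['bldg']}}}lod3MultiSurface")
            ms = ET.SubElement(lod3, f"{{{NS['gml']}}}MultiSurface")
            member = ET.SubElement(ms, f"{{{NS['gml']}}}surfaceMember")
            polygon_xml(member, ring)
        # Window and Door openings would be attached to their parent wall
        # surfaces in a full implementation; omitted here for brevity.
    return bldg

# Example with two hypothetical predicted surfaces.
example = [
    ("roof", [(0, 0, 3), (4, 0, 3), (4, 4, 3), (0, 4, 3), (0, 0, 3)]),
    ("wall", [(0, 0, 0), (4, 0, 0), (4, 0, 3), (0, 0, 3), (0, 0, 0)]),
]
print(ET.tostring(building_from_labels(example), encoding="unicode"))
```

The sketch highlights why identifying individual architectural elements from geometry groups matters: only surfaces with a reliable class label and a well-formed polygon can be emitted as valid CityGML boundary surfaces, and openings such as doors and windows additionally require knowing which parent surface they belong to.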