

Deep Learning Multimodal Fusion for Road Network Extraction: Context and Contour Improvement

Author(s):
Filho, Antonio; Shimabukuro, Milton; Poz, Aluir Dal
Total number of authors: 3
Document type: Scientific article
Source: IEEE Geoscience and Remote Sensing Letters; v. 20, 5 pp., 2023-01-01.
Abstract

Road extraction remains a challenging topic for researchers. Currently, deep convolutional neural networks are the state of the art in road network segmentation and are known for their remarkable ability to exploit multilevel context. Despite this, these architectures still suffer from occlusions and obstructions that cause discontinuities and omissions in the extracted road networks. Generally, these effects are mitigated with strategies that capture scene context but do not exploit the complementary knowledge available from diverse data sources. We propose an early fusion network that combines RGB and surface model images, which provide complementary geometric data, to improve road surface extraction. Our results demonstrate that Unet_early reaches 71.01% intersection over union (IoU) and 81.95% F1 score, with the fusion strategy increasing IoU and F1 by 2.3% and 1.5%, respectively. Moreover, it surpassed the best model without fusion (DeepLabv3+). The Brazilian dataset and architecture implementation are available at https://github.com/tunofilho/ieee_road_multimodal. (AU)
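The "early fusion" the abstract describes can be illustrated with a minimal sketch: the RGB image and the co-registered surface model are stacked channel-wise into a single 4-channel input before it enters the segmentation network. The function and data below are illustrative assumptions, not the authors' implementation (which is available in the linked repository).

```python
def early_fuse(rgb, dsm):
    """Stack an RGB image (H x W pixels, 3 channels each) and a
    single-band surface model (H x W) into one H x W x 4 input.
    Both modalities must be co-registered (same grid)."""
    h, w = len(rgb), len(rgb[0])
    assert len(dsm) == h and len(dsm[0]) == w, "modalities must be co-registered"
    fused = []
    for i in range(h):
        row = []
        for j in range(w):
            # 3 spectral channels + 1 height channel per pixel
            row.append(list(rgb[i][j]) + [dsm[i][j]])
        fused.append(row)
    return fused

# Toy 2x2 example: every fused pixel carries both spectral and height data.
rgb = [[(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)],
       [(0.7, 0.8, 0.9), (0.2, 0.3, 0.4)]]
dsm = [[10.0, 12.5],
       [11.0, 13.0]]
fused = early_fuse(rgb, dsm)
print(len(fused[0][0]))  # 4 channels per pixel
```

In a deep-learning framework this is a single channel-wise concatenation of the two input tensors, after which the first convolutional layer of the network is widened to accept four input channels instead of three.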

FAPESP Grant: 21/03586-2 - Deep convolutional neural network (DCNN) for automatic road network extraction from the fusion of airborne laser scanning data and very-high-resolution images in urban environments
Grantee: Aluir Porfírio Dal Poz
Support type: Regular Research Grant