Deep Learning Multimodal Fusion for Road Network Extraction: Context and Contour Improvement

Author(s):
Filho, Antonio; Shimabukuro, Milton; Dal Poz, Aluir
Total Authors: 3
Document type: Journal article
Source: IEEE Geoscience and Remote Sensing Letters; v. 20, p. 5-pg., 2023-01-01.
Abstract

Road extraction remains a challenging research topic. Deep convolutional neural networks are currently the state of the art in road network segmentation and are known for their remarkable ability to exploit multilevel context. Despite this, these architectures still suffer from occlusions and obstructions that cause discontinuities and omissions in the extracted road networks. Generally, such effects are mitigated with strategies that capture scene context but do not exploit the complementary knowledge available from diverse data sources. We propose an early fusion network that combines RGB and surface model images, which provide complementary geometric data, to improve road surface extraction. Our results show that Unet_early reaches 71.01% intersection over union (IoU) and 81.95% F1 score, and that the fusion strategy increases IoU and F1 by 2.3% and 1.5%, respectively. It also surpassed the best model without fusion (DeepLabv3+). The Brazilian dataset and architecture implementation are available at https://github.com/tunofilho/ieee_road_multimodal. (AU)
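The two core ideas in the abstract can be illustrated compactly: early fusion amounts to channel-wise concatenation of the RGB image with the surface model before the network's first layer, and the reported IoU and F1 scores follow standard binary-mask formulas. The sketch below uses NumPy; the function names and array conventions are illustrative assumptions, not taken from the authors' released implementation.

```python
import numpy as np

def early_fusion(rgb, surface_model):
    """Channel-wise concatenation for early fusion (illustrative).

    rgb: (H, W, 3) image; surface_model: (H, W) height raster.
    Returns a (H, W, 4) tensor fed to the segmentation network's input layer.
    """
    return np.concatenate([rgb, surface_model[..., None]], axis=-1)

def iou_f1(pred, target):
    """Binary IoU and F1 for road masks, from TP/FP/FN pixel counts."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # road pixels correctly found
    fp = np.logical_and(pred, ~target).sum()  # background marked as road
    fn = np.logical_and(~pred, target).sum()  # road pixels missed
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1
```

In an early-fusion design the modalities are merged before any learned layers, so a single encoder sees the stacked 4-channel input; this is what lets the geometric cue from the surface model disambiguate roads occluded in the RGB channels.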

FAPESP's process: 21/03586-2 - Deep convolutional neural network (DCNN) for road network extraction from fusion of airborne laser scanning (ALS) data and highest resolution image in the urban environment
Grantee: Aluir Porfírio Dal Poz
Support Opportunities: Regular Research Grants