Enhancing Video Colorization with Deep Learning: A Comprehensive Analysis of Training Loss Functions

Author(s):
Stival, Leandro; Torres, Ricardo da Silva; Pedrini, Helio
Total Authors: 3
Document type: Journal article
Source: INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, INTELLISYS 2024; v. 1065, 14 pp., 2024-01-01.
Abstract

Traditional colorization approaches rely on the expertise of artists or researchers who meticulously paint or digitally add colors to an image (or to video frames), a task that is often time-consuming, laborious, and error-prone. Automatic methods based on deep learning techniques have replaced such approaches to colorization. Despite advances in their accuracy, there is no consensus on the best training procedures for the artificial neural network techniques proposed for video colorization. In this paper, in order to fill this gap, we focus on the impact of selecting an appropriate loss function. We investigate seven loss functions to find the combination that yields the best results with Deep Learning Video Colorization (DLVC), using a U-Net topology with an attention mechanism trained on the DAVIS dataset. We also investigate current validation metrics for colorization results, analyzing their ability to accurately judge color consistency between frames. (AU)
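The abstract centers on comparing training losses for a colorization network. As a minimal, hypothetical illustration only (this entry does not list the seven loss functions actually studied), two common per-pixel candidates, L1 (mean absolute error) and L2 (mean squared error), can be sketched in plain Python over flattened color values:

```python
# Hypothetical sketch: two per-pixel losses that often appear in colorization
# training. These are illustrative stand-ins, not the paper's actual seven losses.

def mae_loss(pred, target):
    """L1 / mean absolute error between predicted and ground-truth color values."""
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse_loss(pred, target):
    """L2 / mean squared error; penalizes large color deviations more heavily."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# Toy "frames": flattened chrominance values in [0, 1].
predicted = [0.2, 0.5, 0.9, 0.4]
ground_truth = [0.1, 0.5, 0.7, 0.8]

print("L1:", mae_loss(predicted, ground_truth))  # average absolute color error
print("L2:", mse_loss(predicted, ground_truth))  # average squared color error
```

In practice such losses are computed over full frame tensors in a framework like PyTorch; this stand-alone version only shows the arithmetic being compared.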

FAPESP's process: 23/11556-1 - Novel deep learning methods for remote sensing imagery
Grantee: Leandro Stival
Support Opportunities: Scholarships abroad - Research Internship - Doctorate
FAPESP's process: 22/12294-8 - Convolutional Networks with Attention for Video Color Propagation
Grantee: Leandro Stival
Support Opportunities: Scholarships in Brazil - Doctorate