Contrastive Loss Based on Contextual Similarity for Image Classification

Author(s):
Valem, Lucas Pascotti; Guimarães Pedronette, Daniel Carlos; Allili, Mohand Said
Total Authors: 3
Document type: Journal article
Source: ADVANCES IN VISUAL COMPUTING, ISVC 2024, PT I; v. 15046, 12 pp., 2025-01-01.
Abstract

Contrastive learning has been extensively exploited in self-supervised and supervised learning due to its effectiveness in learning representations that distinguish between similar and dissimilar images. It offers a robust alternative to cross-entropy by yielding more semantically meaningful image embeddings. However, most contrastive losses rely on pairwise measures to assess the similarity between elements, ignoring more general neighborhood information that can be leveraged to enhance model robustness and generalization. In this paper, we propose the Contextual Contrastive Loss (CCL), which replaces pairwise image comparison with a new contextual similarity measure based on neighboring elements. The CCL yields a more semantically meaningful image embedding, ensuring better separability of classes in the latent space. Experimental evaluation on three datasets (Food101, MiniImageNet, and CIFAR-100) shows that CCL yields superior results, achieving up to 10.76% relative gains in classification accuracy, particularly for fewer training epochs and limited training data. This demonstrates the potential of our approach, especially in resource-constrained scenarios. (AU)
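
The abstract does not specify the exact formulation of the CCL, so the snippet below is only a minimal sketch of the general idea: a supervised contrastive loss in which positive pairs are weighted by a neighborhood-based ("contextual") similarity rather than treated uniformly. The choice of Jaccard overlap between k-nearest-neighbor sets, the function names contextual_similarity and contextual_contrastive_loss, and the hyperparameters k and temperature are assumptions made for illustration, not the authors' definition.

```python
# Minimal sketch only. The paper's actual CCL formulation is not given in the
# abstract; the neighborhood-overlap weighting below is an assumption used to
# illustrate how contextual (k-NN) information can enter a contrastive loss.
import torch
import torch.nn.functional as F


def contextual_similarity(embeddings: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Similarity between samples measured by the Jaccard overlap of their
    k-nearest-neighbor sets, instead of a single pairwise cosine score."""
    emb = F.normalize(embeddings, dim=1)
    cos = emb @ emb.t()                              # (N, N) cosine similarities
    knn = cos.topk(k + 1, dim=1).indices[:, 1:]      # k neighbors, self excluded
    n = embeddings.size(0)
    member = torch.zeros(n, n, device=embeddings.device)
    member.scatter_(1, knn, 1.0)                     # member[i, j] = 1 if j in N(i)
    inter = member @ member.t()                      # |N(i) ∩ N(j)|
    union = member.sum(1, keepdim=True) + member.sum(1) - inter
    return inter / union.clamp(min=1)                # Jaccard overlap in [0, 1]


def contextual_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                k: int = 5,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss where each positive pair is weighted by its
    contextual similarity; the logits remain the usual cosine similarities."""
    emb = F.normalize(embeddings, dim=1)
    logits = (emb @ emb.t()) / temperature
    n = embeddings.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=embeddings.device)
    logits = logits.masked_fill(eye, float('-inf'))  # drop self-comparisons
    with torch.no_grad():                            # contextual weights, no gradient
        ctx = contextual_similarity(embeddings, k)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    weights = ctx * pos_mask                         # only same-class pairs count
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    w_sum = weights.sum(1)
    valid = w_sum > 0                                # skip anchors with no weighted positives
    return -((log_prob * weights).sum(1)[valid] / w_sum[valid]).mean()


if __name__ == "__main__":
    feats = torch.randn(32, 128)                     # a batch of image embeddings
    labels = torch.randint(0, 4, (32,))
    print(contextual_contrastive_loss(feats, labels).item())
```

In this sketch the contextual weights are computed without gradients, so the differentiable training signal still comes from the cosine-similarity logits; the published CCL may integrate neighborhood information differently.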

FAPESP's process: 18/15597-6 - Application and investigation of unsupervised learning methods in retrieval and classification tasks
Grantee: Daniel Carlos Guimarães Pedronette
Support Opportunities: Research Grants - Young Investigators Grants - Phase 2