(Reference retrieved automatically from the Web of Science using the FAPESP grant number cited by the authors in the publication.)

Edited nearest neighbour for selecting keyframe summaries of egocentric videos

Author(s):
Kuncheva, Ludmila I. [1] ; Yousefi, Paria [1] ; Almeida, Jurandy [2]
Total Authors: 3
Affiliation:
[1] Bangor Univ, Sch Comp Sci, Dean St, Bangor LL57 1UT, Gwynedd - Wales
[2] Fed Univ Sao Paulo UNIFESP, Inst Sci & Technol, BR-12247014 Sao Jose Dos Campos, SP - Brazil
Total Affiliations: 2
Document type: Journal article
Source: JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION; v. 52, p. 118-130, APR 2018.
Web of Science Citations: 7
Abstract

A keyframe summary of a video must be concise, comprehensive and diverse. Current video summarisation methods may not be able to enforce diversity of the summary if the events have highly similar visual content, as is the case with egocentric videos. We cast the problem of selecting a keyframe summary as a problem of prototype (instance) selection for the nearest neighbour classifier (1-nn). Assuming that the video is already segmented into events of interest (classes), and represented as a dataset in some feature space, we propose a Greedy Tabu Selector algorithm (GTS) which picks one frame to represent each class. An experiment with the UT (Egocentric) video database and seven feature representations illustrates the proposed keyframe summarisation method. GTS leads to an improved match to the user ground truth compared to the closest-to-centroid baseline summarisation method. Best results were obtained with feature spaces derived from a convolutional neural network (CNN). (AU)
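The closest-to-centroid baseline mentioned in the abstract can be sketched as follows: for each event (class), pick the frame whose feature vector lies nearest to the mean of that event's feature vectors. This is a minimal illustrative sketch, not the authors' GTS algorithm; the function name and toy data are hypothetical.

```python
import numpy as np

def closest_to_centroid_summary(features, labels):
    """Baseline keyframe selection: for each event (class label),
    return the index of the frame closest to the class centroid."""
    keyframes = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]          # frames belonging to event c
        centroid = features[idx].mean(axis=0)   # mean feature vector of the event
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keyframes[int(c)] = int(idx[np.argmin(dists)])
    return keyframes

# Toy example: 6 frames in a 2-D feature space, segmented into two events
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.1],
              [5.0, 5.0], [6.0, 5.0], [5.5, 4.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(closest_to_centroid_summary(X, y))  # one representative frame per event
```

The paper's contribution, GTS, replaces this per-class centroid rule with a greedy tabu search that optimises the 1-nn prototype selection criterion across all classes jointly.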

FAPESP's process: 16/06441-7 - Semantic information retrieval in large video databases
Grantee: Jurandy Gomes de Almeida Junior
Support Opportunities: Regular Research Grants