Sinoara, Roberta A.
Rossi, Rafael G.
Rezende, Solange O.
Total number of authors: 5
Author affiliations:
 Univ Sao Paulo, Inst Math & Comp Sci, Lab Computat Intelligence, POB 668, BR-13561970 Sao Carlos, SP - Brazil
 Cardiff Univ, Sch Comp Sci & Informat, Queens Bldg, 5 Parade, Cardiff CF24 3AA, S Glam - Wales
 Fed Univ Mato Grosso Do Sul Tres Lagoas Campus, Ranulpho Marques Leal 3484, POB 210, BR-79620080 Tres Lagoas, MS - Brazil
 Sapienza Univ Rome, Dept Comp Sci, Via Regina Elena 295, I-00161 Rome - Italy
Total number of affiliations: 4
Document type:
Publication date: JAN 1 2019
Web of Science citations:
Accurate semantic representation models are essential in text mining applications. For a successful application of the text mining process, the text representation adopted must preserve the patterns of interest to be discovered. Although competitive results for automatic text classification may be achieved with the traditional bag of words, such a representation model cannot provide satisfactory classification performance in hard settings, where richer text representations are required. In this paper, we present an approach to represent document collections based on embedded representations of words and word senses. We bring together the power of word sense disambiguation and the semantic richness of word and word-sense embedded vectors to construct embedded representations of document collections. Our approach results in semantically enhanced and low-dimensional representations. We overcome the lack of interpretability of embedded vectors, a known drawback of this kind of representation, through the use of word-sense embedded vectors. Moreover, the experimental evaluation indicates that the use of the proposed representations provides stable classifiers with strong quantitative results, especially in semantically complex classification scenarios. (C) 2018 Elsevier B.V. All rights reserved.
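The core idea described in the abstract, combining word and word-sense vectors into a single low-dimensional document representation, can be sketched as a simple averaging of pre-trained vectors. The toy vectors and sense labels below (e.g. `bank#finance`) are invented for illustration; an actual system would use pre-trained word and sense embeddings together with a word sense disambiguation step.

```python
# Hypothetical toy word/word-sense vectors; real systems would load
# pre-trained embeddings instead of hard-coding 2-dimensional vectors.
TOY_VECTORS = {
    "bank#finance": [0.9, 0.1],
    "bank#river":   [0.1, 0.9],
    "loan":         [0.8, 0.2],
    "water":        [0.2, 0.8],
}

def embed_document(tokens):
    """Build a document embedding by averaging the vectors of known
    tokens (plain words or disambiguated word senses)."""
    vecs = [TOY_VECTORS[t] for t in tokens if t in TOY_VECTORS]
    if not vecs:
        return None  # no known tokens: no representation
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Two documents using the same surface word "bank" end up with very
# different embeddings once the senses are disambiguated.
finance_doc = embed_document(["bank#finance", "loan"])
river_doc = embed_document(["bank#river", "water"])
```

The resulting vectors are dense and fixed-size regardless of vocabulary, in contrast to the high-dimensional sparse bag-of-words representation, and the sense labels keep the dimensions interpretable in terms of disambiguated meanings.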