FEATURE NORMALIZED LMS ALGORITHMS

Author(s):
Yazdanpanah, Hamed ; Apolinario, Jose A., Jr. ; Matthews, MB
Total Authors: 3
Document type: Journal article
Source: Conference Record of the 2019 Fifty-Third Asilomar Conference on Signals, Systems & Computers; 4 pp., 2019.
Abstract

Considerable effort has recently been devoted to exploiting sparsity in physical parameters; in many cases, however, the sparsity is hidden in relations between parameters, and appropriate tools are required to expose it. In this paper, a family of algorithms called feature normalized least-mean-square (F-NLMS) algorithms is proposed to exploit hidden sparsity. The key element of these algorithms is a so-called feature matrix that transforms non-sparse systems into new systems that exhibit sparsity; the revealed sparsity is then exploited by a sparsity-promoting penalty function. Numerical results demonstrate that the F-NLMS algorithms can significantly reduce the steady-state mean-squared error (MSE) and/or improve the convergence rate of the learning process. (AU)
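To make the idea concrete, here is a minimal sketch of an F-NLMS-style update in Python. It is an illustration under stated assumptions, not the authors' exact formulation: it combines a standard NLMS correction with a subgradient step on an l1-type penalty applied to `F @ w`, where `F` is an assumed first-difference feature matrix (appropriate when the unknown system's coefficients are smooth, so their differences are sparse). The function name, step sizes, and regularization weight are all illustrative.

```python
import numpy as np

def f_nlms_step(w, x, d, F, mu=0.5, alpha=1e-3, delta=1e-8):
    """One hypothetical F-NLMS-style update (illustrative sketch).

    The feature matrix F maps the coefficient vector w into a domain
    where it is (assumed) sparse; the subgradient of the penalty
    ||F w||_1 is F^T sign(F w), which is subtracted to promote
    sparsity of F w alongside the normalized LMS correction.
    """
    e = d - x @ w                               # a priori output error
    w = w + mu * e * x / (x @ x + delta)        # NLMS correction (normalized by input energy)
    w = w - mu * alpha * F.T @ np.sign(F @ w)   # sparsity-promoting step on the feature domain
    return w, e
```

A typical use would be system identification of a lowpass (smooth) impulse response, where consecutive coefficients are nearly equal and a difference matrix `F` reveals the hidden sparsity.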

FAPESP's process: 19/06280-1 - Integration, transformation, dataset augmentation and quality control for intermediate representation
Grantee: Hamed Yazdanpanah
Support Opportunities: Scholarships in Brazil - Post-Doctoral
FAPESP's process: 15/22308-2 - Intermediate representations in Computational Science for knowledge discovery
Grantee: Roberto Marcondes Cesar Junior
Support Opportunities: Research Projects - Thematic Grants