Associative memories (AMs) are mathematical models geared to storing desired input-output pairs called fundamental memories. In addition, some error correction capability is also desired for an AM model. In this project we will investigate in detail a class of AMs called $\Theta$-fuzzy associative memories ($\Theta$-FAMs), which correspond to mappings between classes of fuzzy sets given by two-layer fuzzy neural networks. Under some conditions, a $\Theta$-FAM has optimal absolute storage capacity and is able to achieve perfect recall for noisy input patterns that are sufficiently close to uncorrupted pattern cues. $\Theta$-FAM models have been successfully applied to many classification problems, including several benchmarks and some real-world applications such as vision-based robot self-localization, classification of time series of vegetation indices, and speaker recognition. In this project we will extend the $\Theta$-FAM model in two ways: topologically and conceptually. The first extension aims at providing further degrees of freedom and architectural flexibility for regression problems. The second extension refers to the use of the $\Theta$-FAM approach as the mathematical model of inference for general classes of information granules based on lattice theory.
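To make the notion of a fuzzy associative memory concrete, the following is a minimal sketch of a classical max-min FAM (in the style of Kosko), not the $\Theta$-FAM model itself, whose precise definition is not given in this abstract. The function names `fam_store` and `fam_recall` are illustrative choices, and the example assumes fuzzy sets represented as membership vectors in $[0,1]^n$.

```python
import numpy as np

def fam_store(patterns):
    """Store fuzzy input-output pairs (x, y) in a single memory matrix
    using max-min correlation encoding (classical FAM, Kosko-style)."""
    n = patterns[0][0].size
    m = patterns[0][1].size
    W = np.zeros((m, n))
    for x, y in patterns:
        # Encode each pair by the min-outer-product min(y_j, x_i),
        # then superimpose the pairs with a componentwise maximum.
        W = np.maximum(W, np.minimum.outer(y, x))
    return W

def fam_recall(W, x):
    """Recall an output by max-min composition: y_j = max_i min(W_ji, x_i)."""
    return np.max(np.minimum(W, x), axis=1)

# Illustrative usage: a single fundamental memory is recalled exactly.
x = np.array([1.0, 0.5])
y = np.array([0.8, 0.3])
W = fam_store([(x, y)])
print(fam_recall(W, x))  # recovers y = [0.8, 0.3]
```

For multiple stored pairs, perfect recall holds only under additional conditions on the patterns, which is precisely the kind of limitation that motivates models with stronger storage guarantees such as $\Theta$-FAMs.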