Bootstrap conditions for interaction-based multimodal learning in cognitive robotics


How can one acquire knowledge and develop new abilities by interacting with the world? Philosophers, psychologists, and artificial intelligence researchers have proposed different answers to this question in recent decades. Despite all these proposals, machines still fall far short of humans in performing such high-level actions. In fact, the development of superior cognitive abilities in human beings is strongly related to their capacity to coordinate a complex set of mechanisms that integrate short- and long-term memory, reasoning, planning, emotion, decision making, learning, language, motivation, volition, and attention. It is reasonable to assume, however, that certain prior conditions must be met before such complex abilities can emerge. Before children can learn how to interact with a ball, for example, they must know their own body schema, must have explored their possibilities of interacting with the world, and must even be conscious of their own existence in the world. Indeed, Piaget showed that children think in strikingly different ways from adults, and that only in the latter do higher levels of cognition become possible. More recently, some researchers have hypothesized that there may be minimal conditions for cognition to evolve. In this context, one important question arises: what are the key cognitive processes, mechanisms, and schemes that can bootstrap basic cognitive functions? In recent years, several cognitive models and architectures have been proposed as theoretical frameworks for these cognitive processes. Most of them, however, are not suitable for computational implementation or testing. In order to bring this discussion into a computational setting, recent works by the authors have proposed some of the first formal models of cognition and consciousness, as well as cognitive toolkits and software.
Building on this previous work by our team, this project aims to investigate these bootstrap conditions for cognition from a computational perspective. In particular, we aim to determine the minimal subset of modules/elements that can cause cognitive abilities to emerge in an agent, and to investigate how the presence or absence of these modules influences the agent's behavior. We intend to build a humanoid robot -- also based on previous designs from our group -- as well as run simulations in a robotic simulation framework, in order to compare the evolution of the cognitive aspects of the real and simulated robots. The agent's ability to formulate new concepts from multimodal sensory stimuli and previous knowledge will be investigated, and the quality of the formulated concepts will be adopted as a success criterion for the approach. The agent's capability to potentially express attentiveness, sentience, self-awareness, self-consciousness, autonoetic consciousness, mineness, and perspectivalness may also be adopted as complementary indicators of the success of the approach. The developed computational framework, as well as the physical and simulated models of the robot, will be made freely available to the community upon completion of the project. (AU)
