Neural systems identification with dynamic Bayesian networks and transfer entropy

Author(s):
Fernando Pasquini Santos
Total Authors: 1
Document type: Doctoral Thesis
Press: São Carlos.
Institution: Universidade de São Paulo (USP). Escola de Engenharia de São Carlos (EESC/SBD)
Defense date:
Examining board members:
Carlos Dias Maciel; Aparecido Augusto de Carvalho; Ailton Akira Shinoda; Marco Henrique Terra; Ricardo Zorzetto Nicoliello Vêncio
Advisor: Carlos Dias Maciel
Abstract

Dynamic Bayesian Networks (DBNs) are models capable of representing a dynamical system by means of a complex network that encodes statistical conditional independencies between its internal states. Among data-driven structure learning methods, those based on information theory have gained ground in recent years, owing to the advantages of being model-free and of permitting offline learning from multiple repetitions of an experiment. However, the parallels between DBN structure learning and the research concerned with measuring information transfer between elements of neural systems, mainly through transfer entropy (TE), remain largely unexplored. The present work therefore seeks to bring these two lines of research together by identifying some of their equivalences, as well as the challenges involved in their use for neural systems identification.

One of the main difficulties in applying information theory to multivariate neural systems is the high dimensionality of the probability distributions involved, which demands large amounts of simultaneously observed data. Furthermore, applying DBNs and transfer entropy to continuous-time systems raises questions about time discretization, which requires relaxing the first-order Markov property (intrinsic to the definition of DBNs) and thus leads to the proposal of high-order dynamic Bayesian networks (HO-DBNs). Besides reviewing the main proposals for overcoming these difficulties, this work first proposes that, under the assumption of a system whose elements behave similarly, low-dimensional information-theoretic measures can be employed for learning network structures. This is demonstrated by using pairwise mutual information to learn simulated Bayesian networks with fixed conditional probability distributions. Concerning HO-DBNs, an algorithm based on particle swarm optimization (PSO) is proposed to traverse their search space more efficiently.

Next, two applications of DBN modeling with information theory are explored in the field of neural systems, aiming at knowledge about functional connectivity and, eventually, at future applications in bioinspired engineering. The challenges presented earlier are then exemplified, along with some proposed solutions. The first application concerns the elicitation of functional connectivity between hippocampal subfields in the human brain, based on high-resolution fMRI data. Starting from a seed-to-voxel group analysis, regions of interest (ROIs) are identified and an initial DBN model is proposed, which is consistent with studies already reported in the literature. The second application concerns neural connectivity in the neuromotor system of the locust, based on intracellular synaptic potential recordings from sensory neurons, interneurons and motor neurons under forceps stimulation of the femoral chordotonal organ (FeCO). Although a complete DBN model is not yet possible, owing to the lack of sufficient simultaneous recordings, the transfer entropy delays between stimulus and motor neuron responses are obtained and integrated through a Bayesian analysis, after a pre-processing step based on Singular Spectrum Analysis (SSA) which, by removing the nonstationary components of the signal (due to factors extrinsic to the system), considerably increased the number of samples available for learning.
Such results, by helping to reduce the search space of DBNs, also direct further experiments and studies in this field. (AU)
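
As an informal illustration of the pairwise strategy mentioned in the abstract, the sketch below ranks candidate network edges by a plug-in estimate of pairwise mutual information over discretized observations. It is only a minimal example under assumed conventions (binary toy variables, a hand-picked threshold, nats as the unit); it is not the code or data used in the thesis.

import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Plug-in mutual information estimate (in nats) for two arrays of discrete symbols."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = joint > 0                          # skip zero cells to avoid log(0)
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

def rank_edges(data, threshold=0.05):
    """data: (n_samples, n_vars) array of discrete states.
    Returns candidate undirected edges whose pairwise MI exceeds the threshold,
    sorted from strongest to weakest."""
    edges = []
    for i, j in combinations(range(data.shape[1]), 2):
        mi = mutual_information(data[:, i], data[:, j])
        if mi > threshold:
            edges.append((i, j, mi))
    return sorted(edges, key=lambda e: -e[2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, 1000)
    b = np.where(rng.random(1000) < 0.1, 1 - a, a)  # noisy copy of a
    c = rng.integers(0, 2, 1000)                    # independent variable
    print(rank_edges(np.column_stack([a, b, c])))   # only the (0, 1) edge should survive

Discarding edges whose low-dimensional score falls below the threshold is what shrinks the structure search space before any higher-dimensional scoring is attempted.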
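
The delay analysis described for the locust recordings can likewise be pictured with a small sketch: a binned transfer entropy estimator, TE_{X->Y}(d) = I(Y_t ; X_{t-d} | Y_{t-1}), evaluated over a range of candidate delays between a stimulus channel and a response channel, keeping the delay that maximizes the estimate. The equal-frequency binning, the one-sample target history and the synthetic signals are assumptions for illustration only; they do not reproduce the SSA pre-processing or the Bayesian integration used in the thesis.

import numpy as np

def discretize(signal, n_bins=4):
    """Map a continuous signal to integer symbols by equal-frequency binning."""
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(signal, edges)

def transfer_entropy(x, y, delay, n_bins=4):
    """Plug-in estimate (in nats) of TE from x to y at the given delay (in samples)."""
    xs, ys = discretize(x, n_bins), discretize(y, n_bins)
    t0 = max(delay, 1)
    y_now  = ys[t0:]                          # Y_t
    y_past = ys[t0 - 1:-1]                    # Y_{t-1}
    x_past = xs[t0 - delay:len(xs) - delay]   # X_{t-d}
    counts = np.zeros((n_bins,) * 3)
    for a, b, c in zip(y_now, y_past, x_past):
        counts[a, b, c] += 1
    p = counts / counts.sum()
    p_yb = p.sum(axis=2, keepdims=True)       # p(Y_t, Y_{t-1})
    p_bx = p.sum(axis=0, keepdims=True)       # p(Y_{t-1}, X_{t-d})
    p_b = p.sum(axis=(0, 2), keepdims=True)   # p(Y_{t-1})
    nz = p > 0
    return float(np.sum(p[nz] * np.log((p * p_b)[nz] / (p_yb * p_bx)[nz])))

def best_delay(x, y, max_delay=30):
    """Return the delay (in samples) that maximizes TE from x to y, with its TE value."""
    delays = range(1, max_delay + 1)
    te = [transfer_entropy(x, y, d) for d in delays]
    return delays[int(np.argmax(te))], max(te)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    stim = rng.standard_normal(5000)
    resp = np.roll(stim, 7) + 0.5 * rng.standard_normal(5000)  # response lags stimulus by 7 samples
    print(best_delay(stim, resp))  # expected peak near delay = 7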

FAPESP's process: 12/24272-7 - Structure learning of non-stationary dynamic Bayesian networks
Grantee: Fernando Pasquini Santos
Support Opportunities: Scholarships in Brazil - Doctorate (Direct)