This project builds on our previous master's research on Real-Time Granular Synthesis, seeking deeper and more comprehensive results and continuing the development of the software applications foreseen in that research's future prospects. The main goal is a flexible system for the synthesis and temporal control of granular sound structures, based on an extension of Gabor's time-frequency space, useful to composers and performers of electroacoustic or electronic music, sound designers, and creators of soundscapes and art installations. The system should be able to synthesize granular soundscapes in real time directly from the informational content of video. Conversely, it should also be able to generate complex images and visual textures from a library of basic graphic elements, with the whole image-construction process controlled in real time by the parameters of a granular sound stream, that is, by the informational content of the audio. Finally, we want to explore and develop gestural controllers and mapping strategies appropriate for this intermodal system, allowing the user, composer, or performer to interact in an intuitive and expressive manner.
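As background for readers unfamiliar with the technique, the core idea of granular synthesis in Gabor's time-frequency framing can be sketched as short windowed sinusoids ("grains") overlap-added into a stream. This is a minimal illustrative example, not the project's actual implementation; all function names and parameters here are hypothetical.

```python
import numpy as np

def gabor_grain(freq_hz, dur_s, sr=44100, amp=0.5):
    """One Gabor grain: a sinusoid shaped by a Gaussian window."""
    n = int(dur_s * sr)
    t = np.arange(n) / sr
    # Gaussian window centered on the grain, tapering toward the edges
    window = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 6)) ** 2)
    return amp * window * np.sin(2 * np.pi * freq_hz * t)

def grain_stream(grains, total_s, sr=44100):
    """Overlap-add (onset_s, freq_hz, dur_s) grains into one output buffer."""
    out = np.zeros(int(total_s * sr))
    for onset_s, freq_hz, dur_s in grains:
        g = gabor_grain(freq_hz, dur_s, sr)
        start = int(onset_s * sr)
        end = min(start + len(g), len(out))
        out[start:end] += g[: end - start]
    return out

# A tiny three-grain stream: densities, onsets, and pitches like these are
# the kinds of parameters a real-time controller or video analysis could drive.
stream = grain_stream([(0.00, 440.0, 0.05),
                       (0.03, 660.0, 0.05),
                       (0.06, 880.0, 0.05)], total_s=0.2)
print(len(stream))  # 8820 samples at 44.1 kHz
```

In a real-time setting, the grain parameters (onset density, frequency, duration, amplitude) would be updated continuously from controller gestures or video features rather than given as a fixed list.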
News published in Agência FAPESP Newsletter about the scholarship: