For the past decades, the pairing of music and language has attracted interest across several branches of the cognitive sciences, including psychology, linguistics, anthropology, musicology, cognitive neuroscience, and education. Songs are arguably an ideal medium for studying the relationship between the two domains. This article explores contributions from the neurosciences that may be of interest to the field of music education, focusing on the relationship between melody and words in songs. The influence of these components on song perception and production remains a matter of debate in both the neurosciences and music education. The background for this discussion is set by first outlining the evolutionary commonalities between music and language, followed by a discussion of their shared learning mechanisms. Since pitch and rhythm are important components of songs, comparative research on these elements across music and language is also addressed. At the intersection of the two fields, special attention is given to Music Learning Theory, a framework proposed by Edwin Gordon, who advocates presenting songs both with text and with a neutral syllable from infancy. Given that songs are among the most widely used resources in music education, the article asks whether scientific advances in the neurosciences can inform musical pedagogy, and suggests new paths of investigation at the intersection of the two disciplines.
Number of pages: 21
Journal: Revista Portuguesa de Musicologia / Portuguese Journal of Musicology
Publication status: Published - Dec 2021
- Music education
- Songs with text
- Songs with neutral syllable