Abstract
MuSyFI is a system that tries to model an inspirational computational creative process. It uses images as a source of inspiration and begins by implementing a possible translation between visual and musical features. The results of this mapping are fed to a Genetic Algorithm (GA) to better model the creative process and produce more interesting results. Three different musical artifacts are generated: an automatic version, a co-created version, and a genetic version. The automatic version maps features from the image into musical features non-deterministically; the co-created version adds harmony lines manually composed by us to the automatic version; finally, the genetic version applies a genetic algorithm to a mixed population of automatic and co-created artifacts.

The three versions were evaluated for six different images by conducting surveys. These evaluated whether people considered our musical artifacts music, whether they thought the artifacts had quality, whether they considered the artifacts novel, whether they liked the artifacts, and lastly whether they were able to relate the artifacts to the image by which they were inspired. We gathered a total of 300 answers, and overall people answered positively to all questions, which confirms our approach was successful and worth exploring further.
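The genetic step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the melody representation (lists of MIDI pitches), the smoothness-based fitness function, and all operator parameters below are illustrative assumptions, chosen only to show the shape of a GA over a mixed population of automatic and co-created artifacts.

```python
import random

def fitness(melody):
    # Illustrative fitness only: reward small melodic intervals (smoothness).
    # The actual fitness criteria used by MuSyFI are not specified here.
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def crossover(a, b):
    # One-point crossover between two parent melodies of equal length.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(melody, rate=0.1):
    # Randomly shift some notes by up to a whole tone.
    return [p + random.randint(-2, 2) if random.random() < rate else p
            for p in melody]

def evolve(population, generations=50):
    # Evolve a mixed population (e.g. automatic + co-created artifacts):
    # keep the fitter half as parents, refill with mutated offspring.
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Example: melodies encoded as lists of MIDI pitches (hypothetical encoding).
random.seed(0)
pop = [[random.randint(60, 72) for _ in range(8)] for _ in range(20)]
best = evolve(pop)
```

In the paper's setting, the initial population would mix the automatic and co-created artifacts rather than random melodies, so the GA can recombine material from both sources.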
Original language | English |
---|---|
Title of host publication | Proceedings of the 12th International Conference on Computational Creativity |
Editors | Andrés Gómez de Silva Garza, Tony Veale, Wendy Aguilar, Rafael Pérez y Pérez |
Publisher | Association for Computational Creativity |
Pages | 103-112 |
Number of pages | 10 |
ISBN (Electronic) | 978-989-54160-3-5 |
Publication status | Published - 2021 |
Event | ICCC’21, Mexico. Duration: 14 Sept 2021 → 18 Sept 2021 |
Conference
Conference | ICCC’21 |
---|---|
Country/Territory | Mexico |
Period | 14/09/21 → 18/09/21 |
Keywords
- Computational Creativity
- Music Generation
- Genetic Algorithm
- Inspiration
- Feature Translation