Tunes of Innovation: A Framework for Analyzing Artistic Style in Musical Compositions
DOI: https://doi.org/10.47611/jsrhs.v14i1.8441

Keywords: generative AI, music composition analysis, music LLM

Abstract
In the modern technological revolution, artificial intelligence (AI) has developed the capability to generate new musical compositions using algorithms trained on existing music. Despite advancements in this field, the full potential of AI in music remains unexplored. We provide a framework for analyzing specific aspects of an artist's musical style, including structure, instrumentation, and melodic and rhythmic style. As a case study, we used songs by the pop star Ed Sheeran to define a detailed profile of an Ed Sheeran composition, and then assessed the ability of a generative AI program to capture the artist's stylistic features. We use MuseNet as an example, but the framework generalizes to any model. Popular Ed Sheeran songs were analyzed using concepts from music theory. After identifying similarities among these compositions, we compared our findings with the output of MuseNet to determine which aspects of the style were captured. The study demonstrated that generative AI models for music composition are quite accurate in reproducing stylistic features such as composition structure and melody, reflecting the models' training on embeddings related to these features. For characteristics such as instrumentation, the models captured the style accurately only if the original training set included the instrument in question; accuracy dropped for more esoteric instruments. In conclusion, AI models were effective at emulating artistic styles, but still fall short of the human touch that evokes emotion and sentiment in the listener.
References or Bibliography
AIVA. (n.d.). AIVA - Artificial Intelligence Virtual Artist. Retrieved September 29, 2024, from https://www.aiva.ai/
Donahue, C., Caillon, A., Roberts, A., Manilow, E., & Esling, P. (2023). Singsong: Generating musical accompaniments from singing (arXiv:2301.12662). arXiv. https://arxiv.org/abs/2301.12662
Google. (n.d.). Music FX. AI Test Kitchen. Retrieved September 1, 2024, from https://aitestkitchen.withgoogle.com/tools/music-fx
Helloworld Album. (n.d.). Helloworld album. Retrieved September 1, 2024, from https://www.helloworldalbum.net/
Krumhansl, C. L., & Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89(4), 334–368. http://music.psych.cornell.edu/articles/tonality/1982K&KesslerPsychRevfromJournal.pdf
Magenta. (n.d.). Magenta: Music and art generation with machine learning. Retrieved March 4, 2024, from https://magenta.tensorflow.org/
Minsky, M. (1981). Music, mind, and meaning. Computer Music Journal, 5(3), 28–44. https://web.mit.edu/6.034/www/6.s966/Minsky-MusicMindMeaning.pdf
MuseScore. (n.d.). MuseScore: Free music composition and notation software. Retrieved March 4, 2024, from https://musescore.org/en
OpenAI. (n.d.). Composer and instrument tokens. MuseNet. Retrieved March 4, 2024, from https://openai.com/research/musenet
OpenAI. (n.d.). Data set. MuseNet. Retrieved March 4, 2024, from https://openai.com/research/musenet
OpenAI. (n.d.). Embeddings. MuseNet. Retrieved March 4, 2024, from https://openai.com/research/musenet
OpenAI. (n.d.). MuseNet: A state-of-the-art music generation model. Retrieved March 4, 2024, from https://openai.com/index/musenet/
Copyright (c) 2025 Aarav Gadkar; Adam Strawbridge

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute and display this article.
