Document Type
Honors Project - Open Access
Abstract
Although the idea of connecting music and art dates back to ancient Greece, recent advances in computing have made it feasible to automate this connection. This project is a quest to transform music into art through three methodologies, each an improvement toward generating images that convey the feelings and imaginations we experience while listening to music. The three methods respectively involve:
1. An element-wise mapping of sound and colors
2. Using song tags
3. Tuning an Artificial Intelligence (AI) model to generate pictorial text captions
To create artistic images, methods two and three utilize an existing text-to-image generative AI.
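To illustrate what an element-wise mapping of sound to colors (method one) might look like, here is a minimal sketch. The specific mapping below — pitch class to hue, velocity to brightness — is a hypothetical example for illustration, not the exact scheme used in the MIDI-to-Pixels repository.

```python
import colorsys

def note_to_rgb(pitch, velocity):
    """Map a single MIDI note to an RGB pixel color.

    Hypothetical mapping: the note's pitch class (0-11) chooses the hue
    on the color wheel, and its velocity (0-127) sets the brightness.
    """
    hue = (pitch % 12) / 12.0       # pitch class -> position on the color wheel
    value = velocity / 127.0        # louder notes -> brighter colors
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return (round(r * 255), round(g * 255), round(b * 255))

# A short melody as (pitch, velocity) pairs -> one pixel per note
melody = [(60, 100), (64, 80), (67, 127)]   # C4, E4, G4
pixels = [note_to_rgb(p, v) for p, v in melody]
```

Rendering the resulting pixel list as an image (for example with a library such as Pillow) would yield a color strip whose hues and brightness follow the melody's pitches and dynamics.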
Recommended Citation
Tran, My Linh (Lucy), "Music On Canvas: A Quest to Generate Art That Evokes the Feeling of Music" (2023). Mathematics, Statistics, and Computer Science Honors Projects. 76.
https://digitalcommons.macalester.edu/mathcs_honors/76
© Copyright is owned by the author of this document.
Comments
Conclusion:
In this paper, I explored three different methods for generating art that evokes the feeling of music. Method 1, which converts MIDI data directly into image data, redirected my focus toward the interpretability, realism, and ultimately the relatability of the visualizations to a general audience. Building on this realization, Method 2, which generates images based on song tags, produced concrete images that incorporate different aspects of a musical piece, such as the performing artists, genres, time eras, and emotions. Finally, Method 3 aims to create typical scenes that evoke the same feelings as the music, a common kind of vision that listeners have in mind. Through a workaround approach, I showed that this objective is achievable, though it requires considerably more research and development to implement fully. Overall, this project can be considered a solid starting point for a new field of computational research that captures human imagination during music listening, thereby connecting music and art in a naturalistic way.
GitHub repositories:
Method 1: https://github.com/lucy-tran/MIDI-to-Pixels
Method 2: https://github.com/lucy-tran/Music-On-Canvas
Method 3: https://github.com/lucy-tran/MusCALL
Goma's Imagination dataset for method 3: https://github.com/lucy-tran/Imagination-Dataset
Slideshow of method 3's results: https://github.com/lucy-tran/Scenic-Immersion