GenAI Summit: The Future of AI in Music

On February 20–21, UC San Diego hosted the 2025 GenAI Summit, where brilliant thought leaders shared their research and insights on where generative artificial intelligence will take us next. Shlomo Dubnov, a professor in the UC San Diego Department of Music and a chair of the organizing committee, advocated for “AI for Music.”
Anna Huang, an assistant professor at MIT, gave a lecture on her research into AI models that interact with musicians. She began by considering different ways to create melodies, framing machine learning as a “puzzle-building process.” She then explored using music transformers to generate longer-form structures: given a motif, for example, Huang wanted to determine whether AI could generate music that remembered the motif and let it shape the melody that followed.
One of the most exciting points in Huang’s research was the use of generative AI to improvise with musicians. The challenges are vast, given the many elements of music that must be accounted for: melody, harmony, anticipation, recovery, and more. However, models like ReaLchords are already making headway, “emulating the spontaneity of live music jamming.”
During the Q&A portion of the talk, someone raised an intriguing point: we are so focused on teaching the machine to output what is technically “correct” rather than what the music is “expressing,” a keystone of all art. While teaching AI to be expressive may be an interesting venture, it is also one of the more controversial possibilities for artists in every field. Is AI a tool for artists? Or a competitor?
Zachary Novack, a PhD student in Computer Science and Engineering at UC San Diego, had some insight into this very idea:
“I think that, as researchers, we need to include musicians and artists in the whole pipeline of development on AI systems to bring perspective on what is actively useful for creatives. I think there’s a lot of full stack text-to-song models out there, which may be incredibly divorced from what musicians desire as tools, and we can do a lot better on our end in designing useful creative tools rather than ones that automate the creative pipeline fully.”
At the GenAI Summit, Novack gave a talk as co-creator of Presto, an innovative approach to accelerating music generation. As both a musician and an engineer, he strongly believes that artists should be compensated for their work and the data they provide to these models, reinforcing the continued relevance of artists.
It is clear that generative AI is evolving quickly in every facet of academia. UC San Diego’s GenAI Summit was not only a platform for showcasing the innovative progress of generative AI but also a forum for some of the most thought-provoking questions in AI research. Who knows what the future holds for next year’s conference?