Each fall, I begin my course on the intersection of music and artificial intelligence by asking my students if they’re concerned about AI’s role in composing or producing music. So far, the question has always elicited a resounding Yes. Their fears can be summed up in a sentence: AI will create a world where music is plentiful, but musicians get cast aside.
In the upcoming semester, I’m anticipating a discussion about Paul McCartney, who in June 2023 announced that he and a team of audio engineers had used machine learning to uncover a “lost” vocal track of John Lennon by separating the instruments from a demo recording.
But resurrecting the voices of long-dead artists only hints at what's possible, and at what's already being done. In an interview, McCartney admitted that AI represents a “scary” but “exciting” future for music. To me, his mix of consternation and exhilaration is spot-on.
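The trick behind the Lennon recovery is source separation: pulling individual "stems" such as a vocal out of a mixed recording. The production tools McCartney's team used rely on trained neural networks, but the core intuition can be shown with a toy sketch in which two synthetic sources occupy different parts of the spectrum, so a simple frequency mask can pull one of them back out. (All signals and numbers here are made up for illustration.)

```python
import numpy as np

# Toy "stem separation": two sources mixed into one track, recovered by
# masking in the frequency domain. Real separators learn which
# time-frequency regions belong to which source; here the sources are
# artificially disjoint so a hand-made mask suffices.

sr = 8000                     # sample rate (Hz)
t = np.arange(sr) / sr        # one second of audio

voice = np.sin(2 * np.pi * 220 * t)   # stand-in for a vocal (220 Hz)
piano = np.sin(2 * np.pi * 880 * t)   # stand-in for an instrument (880 Hz)
mix = voice + piano                   # the "demo recording"

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / sr)

# A binary mask keeps only energy below 500 Hz: the "vocal" stem.
mask = freqs < 500
vocal_stem = np.fft.irfft(spectrum * mask, n=len(mix))

# Correlation with the original source should be near 1.
corr = np.corrcoef(vocal_stem, voice)[0, 1]
print(round(corr, 3))   # prints 1.0
```

Real recordings are far messier: a voice and a piano overlap heavily in frequency, which is why this problem resisted automation until machine learning made it tractable.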
Here are three ways AI is changing the way music gets made—each of which could threaten human musicians in its own way:
1. SONG COMPOSITION
Many programs can already generate music with a simple prompt from the user, such as “Electronic Dance with a Warehouse Groove.”
2. MIXING AND MASTERING
Machine-learning-enabled apps that help musicians balance all the instruments and clean up the audio in a song—what’s known as mixing and mastering—are valuable tools for those who lack the experience, skill, or resources to pull off professional-sounding tracks.
3. INSTRUMENTAL AND VOCAL REPRODUCTION
Using “tone transfer” algorithms via apps like Mawf, musicians can transform the sound of one instrument into another.
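Tone transfer keeps the pitch and loudness contour of a performance while swapping in a different instrument's timbre. Apps like Mawf do this with trained synthesis models; the analysis-then-resynthesis idea can be faked in a few lines with a single known pitch, re-rendering a sine "voice" as a harmonically rich square wave. (The frequencies and envelope here are arbitrary illustration values.)

```python
import numpy as np

# Toy "tone transfer": estimate the pitch of a source sound, then
# resynthesize it at the same pitch and loudness envelope with a new
# waveform (a different "instrument").

sr = 8000
t = np.arange(sr) / sr

# Source: a 330 Hz sine with a fade-out envelope (the input performance).
envelope = np.linspace(1.0, 0.0, sr)
source = envelope * np.sin(2 * np.pi * 330 * t)

# Analysis: estimate pitch from the peak of the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(source))
freqs = np.fft.rfftfreq(len(source), d=1 / sr)
pitch = freqs[np.argmax(spectrum)]

# Synthesis: same pitch and envelope, new timbre (square wave).
transferred = envelope * np.sign(np.sin(2 * np.pi * pitch * t))
print(int(pitch))   # prints 330
```

Real tone-transfer models track pitch and loudness continuously across a performance and learn the target instrument's timbre from recordings, rather than assuming one fixed note.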
AI’S WILD WEST MOMENT
While I applaud the victory of Yaboi Hanoi, the Thai artist who won the 2022 AI Song Contest, I have to wonder whether it will encourage musicians to use AI to fake a cultural connection where none exists.