Every fall, I begin my course on the intersection of music and artificial intelligence by asking my students if they are concerned about the role of AI in music composition or production.
So far, the question has always elicited a resounding “yes”.
Their fears can be summed up in one sentence: AI will create a world where music abounds, but musicians are pushed aside.
In the coming semester, I expect a discussion about Paul McCartney, who announced in June 2023 that he and a team of sound engineers had used machine learning to isolate a “lost” John Lennon vocal track by separating the instruments from a demo recording.
But reviving the voices of long-dead artists is just the tip of the iceberg in terms of what’s possible — and what’s already being done.
In an interview, McCartney admitted that AI represents a “scary” but “exciting” future for music. To me, that mix of apprehension and excitement seems about right.
Here are three ways AI is changing the way music is made – all of which can threaten human musicians in different ways:
1. Song composition
Many programs can generate music with a simple prompt from the user, such as “Electronic Dance with a Warehouse Groove.”
Fully generative apps train AI models on extensive databases of existing music. This allows them to learn musical structures, harmonies, melodies, rhythms, dynamics, timbres and form and generate new content that stylistically matches the material in the database.
There are many examples of these types of apps. But the most successful ones, like Boomy, allow non-musicians to generate music and then post the AI-generated results to Spotify to earn money. Spotify recently removed many of these Boomy-generated songs, claiming it would protect the rights and royalties of human artists.
The two companies quickly came to an agreement that allowed Boomy to re-upload the tracks. But the algorithms that power these apps still have a disturbing ability to infringe on existing copyrights, which may go unnoticed by most users. After all, basing new music on a dataset of existing music creates noticeable similarities between the music in the dataset and the generated content.
Moreover, streaming services such as Spotify and Amazon Music are naturally incentivized to develop their own AI music generation technology. For example, Spotify pays 70% of the revenue from each stream to the artist who created it. If the company could generate that music with its own algorithms, it could cut human artists out of the equation altogether.
Over time, this could mean more money for giant streaming services, less money for musicians — and a less human approach to making music.
2. Mixing and mastering
Machine-learning apps that help musicians balance all the instruments and clean up the audio in a song — a process known as mixing and mastering — are valuable tools for those who don’t have the experience, skills or resources to make professional-sounding tracks.
Over the past decade, the integration of AI into music production has revolutionized the way music is mixed and mastered. AI-driven apps like Landr, Cryo Mix and iZotope’s Neutron can automatically analyze tracks, balance audio levels and remove noise.
These technologies streamline the production process, allowing musicians and producers to focus on the creative aspects of their work and leave some of the technical work to AI.
While these apps undoubtedly take some work away from professional mixing and mastering engineers, they also let those professionals quickly dispatch less lucrative jobs, such as mixing or mastering for a local band, and focus on high-paying commissions that require more finesse. They also let musicians produce more professional-sounding work without hiring a sound engineer they can’t afford.
3. Instrumental and vocal reproduction
Using “tone transfer” algorithms through apps like Mawf, musicians can transform the sound of one instrument into another.
Thai musician and engineer Yaboi Hanoi’s song “Enter Demons & Gods,” which won the third international AI Song Contest in 2022, was unique in that it was influenced not only by Thai mythology, but also by the sounds of native Thai musical instruments, which use a non-Western tuning system. One of the most technically exciting aspects of Yaboi Hanoi’s entry was the reproduction of a traditional Thai woodwind instrument — the pi nai — which was re-synthesized to perform the song.
A variant of this technology lies at the heart of the Vocaloid singing-voice synthesis software, which lets users produce convincing human vocal tracks with interchangeable voices.
Unsavory applications of this technique are popping up outside the musical realm. For example, AI voice cloning has been used to scam people out of money.
But musicians and producers can already use it to realistically reproduce the sound of any instrument or voice imaginable. The downside, of course, is that this technology can deprive instrumentalists of the ability to perform on a recorded track.
AI’s Wild West moment
While I applaud Yaboi Hanoi’s victory, I have to wonder if it will encourage musicians to use AI to fake a cultural connection where none exists.
In 2021, Capitol Music Group made headlines by signing an “AI rapper” who was given the avatar of a Black male cyborg but was actually the work of non-Black software engineers at the company Factory New. The backlash was swift, with the record label roundly berated for blatant cultural appropriation.
But stumbling into AI-enabled musical cultural appropriation is easier than you might think. Given the sheer number of songs and samples in the datasets used by apps like Boomy — see the open-source “Million Song Dataset” for a sense of the scale — chances are that a user will unknowingly upload a newly generated track that draws from a culture that isn’t their own, or that imitates an existing artist too closely. Even worse, it won’t always be clear who is responsible for the infringement, and current U.S. copyright law is contradictory and woefully inadequate for regulating these issues.
These are all topics that have been covered in my own class, which has at least allowed me to educate my students about the dangers of uncontrolled AI and how best to avoid these pitfalls.
At the end of each fall semester, I ask my students again whether they are concerned about an AI takeover of music. At that point, after a full semester of researching these technologies, most of them say they’re excited to see how the technology will evolve and where the field will go.
There are some dark possibilities ahead for humanity and AI. Still, at least in the realm of musical AI, there’s cause for some optimism – assuming the pitfalls are avoided.
This article is republished from The Conversation under a Creative Commons license. Read the original article by Jason Palamara, assistant professor of music technology, Indiana University.