In the ever-evolving landscape of the music industry, the relationship between listeners and artists has undergone significant change. Trust and transparency have emerged as key elements, shaped by both past scandals and the emergence of groundbreaking tech. With the rise of artificial intelligence (AI)-generated music, a new chapter in this dynamic is being written.
Expectations of transparency between listeners and artists have shaped the music industry’s dynamics in profound ways. Past moments of musical deception, such as the infamous Milli Vanilli lip-syncing scandal in 1989, highlight the detrimental effects of breaching trust. The incident, which unravelled during a performance in front of 80,000 people, resulted in numerous refund lawsuits and the duo being stripped of their 1990 Grammy Award. Similarly, Lana Del Rey faced accusations of inauthenticity and artifice in 2012 when she underwent an aesthetic pivot after releasing a less successful album under her birth name, Lizzy Grant. The cultural obsession with authenticity that ensued reveals the value fans, artists, and corporations place on genuine artistic expression.
AI applications for music are not new. AI-generated music encompasses a broad spectrum of possibilities, from lo-fi ambient music for stores to rights-free music for content creators and automated mixing and mastering. Most of us already rely on invisible algorithmic recommendations for our daily music consumption, making it easier to discover songs and artists.
However, two significant new aspects have come to the forefront. First, the resurrection of archival vocals, such as the forthcoming “lost” Beatles song, showcases the ability of AI to extract and utilise recorded voices from the past, breathing new life into old recordings. Sean Lennon commented on the process, saying, “He [Peter Jackson] was able to extricate John’s voice from a ropey little bit of cassette and a piano.”
Secondly, the rise of audio deepfakes allows the creation of music that mimics a specific genre, artist, or lyrical style. In April 2023, “heart on my sleeve,” featuring the facsimile vocals of Drake and The Weeknd, was released by TikTok user ghostwriter977 on streaming platforms including Apple Music, Spotify, and YouTube.
These developments raise important legal and ethical questions about fair use, copyright, and intellectual property rights, and about who captures the value such works create. Is there scope for AI to enhance artistic creativity and originality? If so, can it preserve the existing relationship between listener and artist?
Derivative works are inherently a part of how music develops over time. Deepening one’s relationship with music often involves discovering the DNA of a song. For example, the “Take Me To The Mardi Gras” drum break, featured in nearly 500 rap, hip-hop, and jungle tracks, or (one of my favourites), the sample of Edwin Birdsong’s “Cola Bottle Baby” in “Harder, Better, Faster, Stronger.” Often to move forward, we must look back.
However, the US Supreme Court’s recent decision on an Andy Warhol painting could reshape fair use law. The case considered whether Warhol’s 1984 Prince artworks were protected under fair use or whether photographer Lynn Goldsmith’s 1981 portrait of the musician was hers alone. Ultimately, nearly 40 years after the usage, the court ruled in Goldsmith’s favour. “To hold otherwise would potentially authorise a range of commercial copying of photographs, to be used for purposes that are substantially the same as those of the originals,” Justice Sotomayor wrote in her opinion. “As long as the user somehow portrays the subject of the photograph differently, he could make modest alterations to the original, sell it to an outlet to accompany a story about the subject, and claim transformative use.”
Less than a month old at the time of writing, this decision could set a precedent for how copyright law applies to AI-generated works built on human-made source material. For labels, the resurrection of archival vocals offers an almost unlimited supply of new content — as long as they hold the copyright. But what about derivative works? Using visual art as an example, if Midjourney were prompted to create a painting in the style of, erm, Warhol, would that content ultimately belong to his estate?
(As an aside, the BBC breathlessly reported that “heart on my sleeve” earned at least US$1,888 from Spotify alone, while Billboard estimated that the song may have earned US$9,400 globally across all platforms, despite garnering more than 16 million views. Whether ghostwriter977 is entitled to those royalties or not, it’s clear that streaming does not work for artists.)
The newly launched Human Artistry Campaign aims to ensure artificial intelligence technologies are developed and used in ways that support human culture and artistry – and not in ways that replace or erode them. They state, “People relate most deeply to works that embody the lived experience, perceptions, and attitudes of others.”
Lauren Chanel, a writer and futurist, says that AI-generated vocals may allow “people who are not Black to put on the costume of a Black person.” Similarly, so-called virtual artists like Lil Miquela or FN Meka compete with real, underrepresented talent, capitalising on fans’ desire for diverse voices.
Creatives and listeners alike should advocate for transparent practices, open dialogue, and responsible use of AI in music creation. While the influence of technology on creativity and trust in the music industry is still unfolding, ongoing exploration and dialogue are essential to understanding its full impact. We can’t stop technological advancement, but by working together we can strike a balance between trust, innovation, and creativity. AI can augment production, but it cannot replace the essence of human artistry. The final word goes to ghostwriter977, who puts it crisply — “can’t kill a ghost.”