AI-generated music deepfakes have become a major concern for the music industry after Sony disclosed a large-scale removal effort. The label said it has asked platforms to take down more than 135,000 tracks that falsely impersonated artists from its roster using generative AI, and it warned that the problem is growing as the tools become cheaper and easier to use.
The disclosure came during the launch of the industry’s latest Global Music Report in London. Sony said the fake tracks target some of its biggest artists, including Beyoncé, Queen, and Harry Styles, and argued that they cause direct commercial harm to legitimate artists and their campaigns.
Fake Songs Are Hitting Major Release Campaigns
Sony said the fake recordings often appear when artists are actively promoting new music, a moment when public attention is already high. According to the company, fraudsters try to capture the demand created by the real artist; in the worst cases, executives said, the deepfakes can damage a release strategy or hurt an artist’s reputation.
Dennis Kooker, president of Sony’s global digital business, said the deepfakes are demand-driven: they benefit from the audience interest that artists have already built. The problem is therefore not random or occasional but tied closely to the moments when musicians are most visible.
Sony said the 135,000 removals likely represent only part of the true total online; since last March alone, it said it has identified about 60,000 fake tracks. Other artists who may have been affected include Bad Bunny, Miley Cyrus, and Mark Ronson, suggesting the issue is broad rather than limited to a few stars.
Streaming Fraud Is Becoming More Sophisticated
The industry says AI has intensified a wider streaming fraud problem. Fraudsters upload tracks to major platforms and inflate their play counts, generating royalty payments that should have gone to legitimate creators. In effect, the scheme targets both artist identity and revenue.
Industry figures estimate that as much as 10% of content on streaming platforms may be fraudulent. That estimate remains unofficial, but it reflects growing alarm across the sector. The concern is no longer only about cloned voices. It also includes fake artist profiles, manipulated streams, and AI-generated material built for monetization.
Sony’s deepfake removals fit directly into that larger problem. The company’s warning described fan confusion as a major risk: without proper identification, listeners may not know whether a recording is authentic, and that uncertainty can damage trust in both artists and platforms.
Industry Growth Has Not Reduced The Pressure
The announcement came alongside new revenue figures from the global music business. Recorded music revenue rose 6.4% last year to $31.7 billion, according to the report. That marked the 11th straight year of growth. Streaming subscriptions have continued to support that recovery after years of piracy and financial decline.
The same report said the UK remained the world’s third-largest music market, while China moved past Germany to become the fourth largest. Taylor Swift was named the biggest artist of 2025, and her album The Life Of A Showgirl was the year’s most popular worldwide.
Those figures show that the business remains strong overall. However, industry leaders said growth does not solve the AI problem. In fact, a larger market may create more incentive for fraud. When real artists generate bigger demand, fake uploads can become even more profitable.
Labels Want Clearer AI Identification Rules
Executives at the London event pushed for clearer labeling of AI-generated music, calling identification the next major challenge for platforms. Victoria Oakley, chief executive of the IFPI, said the issue should be straightforward to address and called for upload systems that can detect and label fake or AI-made material.
Sony pointed to Deezer as one example of a platform already using such tools, noting that Deezer says 34% of songs submitted to it are now categorized as AI-generated. Executives acknowledged that no system is perfect, but said transparent labeling would still help fans understand what they are hearing.
The discussion also unfolded against the backdrop of renewed debate over AI regulation in the UK. Industry attendees reacted with relief after the government dropped a proposal that would have allowed AI training on copyrighted works without permission. Music executives said governments are trying to balance innovation and creative protection, and for labels and artists that balance now looks increasingly urgent.
The Sony deepfake episode has become one of the clearest examples of how generative AI can disrupt entertainment. It affects trust, royalties, and release strategies simultaneously, and it highlights the pressure on streaming services to respond more quickly. For the music business, transparency is no longer a side issue.

